Sample records for two-step optimization procedure

  1. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest descent is known as the simplest gradient method. Recently, many studies have been conducted to obtain a step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step size procedure. The development of the steepest descent method with respect to its step size procedure is discussed. To test the performance of each step size, we run a steepest descent procedure implemented as a C++ program. We apply it to an unconstrained optimization test problem with two variables and compare the numerical results of each step size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each problem case.
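
    As a hedged illustration of the procedure this abstract describes (reimplemented here in Python rather than the authors' C++; the Rosenbrock test function and the Armijo backtracking rule are our own choices), a minimal steepest-descent loop on a two-variable problem might look like:

    ```python
    # Sketch: steepest descent with a backtracking (Armijo) step size on a
    # two-variable unconstrained test problem. Illustrative only.
    import numpy as np

    def f(x):                      # Rosenbrock function, a standard test problem
        return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

    def grad_f(x):                 # analytic gradient of the Rosenbrock function
        return np.array([
            -2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
            200 * (x[1] - x[0]**2),
        ])

    def steepest_descent(x0, tol=1e-6, max_iter=50_000):
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            g = grad_f(x)
            if np.linalg.norm(g) < tol:
                break
            t = 1.0                # backtracking line search (Armijo condition)
            while f(x - t * g) > f(x) - 0.5 * t * g @ g:
                t *= 0.5
            x = x - t * g
        return x, k

    x_opt, iters = steepest_descent([-1.2, 1.0])
    print(f"minimum near {x_opt} after {iters} iterations")
    ```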

  2. Two-dimensional solid-phase extraction strategy for the selective enrichment of aminoglycosides in milk.

    PubMed

    Shen, Aijin; Wei, Jie; Yan, Jingyu; Jin, Gaowa; Ding, Junjie; Yang, Bingcheng; Guo, Zhimou; Zhang, Feifang; Liang, Xinmiao

    2017-03-01

    An orthogonal two-dimensional solid-phase extraction strategy was established for the selective enrichment of three aminoglycosides, spectinomycin, streptomycin, and dihydrostreptomycin, in milk. A reversed-phase liquid chromatography material (C18) and a weak cation-exchange material (TGA) were integrated in a single solid-phase extraction cartridge. The feasibility of a two-dimensional clean-up procedure comprising two-step adsorption, two-step rinsing, and two-step elution was systematically investigated. Owing to the orthogonality of the reversed-phase and weak cation-exchange procedures, the two-dimensional solid-phase extraction strategy minimizes interference from the hydrophobic matrix encountered in traditional reversed-phase solid-phase extraction. In addition, high ionic strength in the extracts could be effectively removed before the second, weak cation-exchange dimension of solid-phase extraction. Combined with liquid chromatography and tandem mass spectrometry, the optimized procedure was validated according to European Union Commission Decision 2002/657/EC. Good performance was achieved in terms of linearity, recovery, precision, decision limit, and detection capability in milk. Finally, the optimized two-dimensional clean-up procedure coupled with liquid chromatography and tandem mass spectrometry was successfully applied to the rapid monitoring of aminoglycoside residues in milk.

  3. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  4. Gaussian process regression for geometry optimization

    NASA Astrophysics Data System (ADS)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a twice-differentiable form of the Matérn kernel and the squared exponential kernel; the Matérn kernel performs much better. We give a detailed description of the optimization procedures, which include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.
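
    A minimal sketch of the underlying idea (a GPR surrogate with a Matérn 5/2 kernel steering a minimizer) is given below; the one-dimensional toy energy function, length scale, and grid search are our assumptions and stand in for the gradient-based, overshooting optimizer the paper describes:

    ```python
    # Sketch: GPR-guided minimization on a 1-D "potential energy surface".
    import numpy as np

    def matern52(a, b, ell=1.0):               # Matérn 5/2 covariance kernel
        r = np.abs(a[:, None] - b[None, :]) / ell
        s = np.sqrt(5.0) * r
        return (1 + s + s**2 / 3) * np.exp(-s)

    def energy(x):                  # toy PES standing in for an ab initio call
        return 0.1 * x**2 + np.sin(2 * x)

    X = np.array([-3.0, 0.5, 3.0])  # initial training geometries
    grid = np.linspace(-4, 4, 400)  # candidate geometries
    for _ in range(10):
        y = energy(X)
        K = matern52(X, X) + 1e-10 * np.eye(len(X))
        alpha = np.linalg.solve(K, y)
        mu = matern52(grid, X) @ alpha          # GPR posterior mean on the grid
        x_next = grid[np.argmin(mu)]            # step to the surrogate minimum
        if np.min(np.abs(X - x_next)) < 1e-6:
            break                               # surrogate minimum already sampled
        X = np.append(X, x_next)

    print(f"predicted minimum near x = {x_next:.3f}, E = {energy(x_next):.3f}")
    ```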

  5. Numerical modeling and optimization of the Iguassu gas centrifuge

    NASA Astrophysics Data System (ADS)

    Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.

    2017-07-01

    The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step, the hydrodynamic flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the diffusion of the binary mixture of isotopes is solved, after which the separative power of the gas centrifuge is calculated. In the last step, the time-consuming optimization of the GC is performed, providing the maximum of the separative power. The optimization is based on the BOBYQA method and exploits the results of numerical simulations of the hydrodynamics and diffusion of the isotope mixture. Fast convergence is achieved by using a direct solver for the hydrodynamic and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
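
    The outer optimization loop can be sketched as below; SciPy's Powell method is used as a derivative-free stand-in for BOBYQA (which SciPy does not ship), and the separative-power function is a placeholder for the hydrodynamics-plus-diffusion solver chain:

    ```python
    # Sketch: derivative-free outer optimization over centrifuge parameters.
    # Powell is a stand-in for BOBYQA; the objective is a toy placeholder.
    import numpy as np
    from scipy.optimize import minimize  # SciPy >= 1.5 for Powell with bounds

    def separative_power(params):
        # placeholder for: hydrodynamic solve -> diffusion solve -> delta-U
        feed, cut = params
        return feed * cut * (1 - cut) * np.exp(-0.1 * (feed - 3.0)**2)

    res = minimize(lambda p: -separative_power(p), x0=[2.0, 0.4],
                   method="Powell", bounds=[(0.5, 6.0), (0.1, 0.9)])
    print("optimal feed and cut:", np.round(res.x, 3), "delta-U:", -res.fun)
    ```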

  6. Hybrid method to estimate two-layered superficial tissue optical properties from simulated data of diffuse reflectance spectroscopy.

    PubMed

    Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin

    2018-04-20

    An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012)] and phantom [J. Biomed. Opt. 19, 077002 (2014)] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
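
    The two-step estimation can be sketched as follows; the single-parameter-pair toy forward model, parameter ranges, and noise level are invented for illustration and stand in for the paper's two-layered DRS model:

    ```python
    # Sketch: lookup-table initial guess followed by iterative least-squares
    # refinement. The forward model below is a stand-in, not the DRS model.
    import numpy as np
    from scipy.optimize import least_squares

    wavelengths = np.linspace(450, 650, 50)

    def reflectance(params):          # toy forward model R(lambda; mu_a, mu_s)
        mu_a, mu_s = params
        return mu_s / (mu_s + mu_a) * np.exp(-mu_a * wavelengths / 500.0)

    # Step 0: precompute a lookup table over a parameter grid.
    mu_a_grid = np.linspace(0.01, 1.0, 40)
    mu_s_grid = np.linspace(5.0, 50.0, 40)
    table = [((a, s), reflectance((a, s))) for a in mu_a_grid for s in mu_s_grid]

    measured = reflectance((0.37, 22.0)) + np.random.normal(0, 1e-3, wavelengths.size)

    # Step 1: initial estimation -- nearest lookup-table entry.
    init = min(table, key=lambda e: np.sum((e[1] - measured)**2))[0]

    # Step 2: iterative fitting starting from the lookup-table guess.
    fit = least_squares(lambda p: reflectance(p) - measured, x0=init,
                        bounds=([0.01, 5.0], [1.0, 50.0]))
    print("initial guess:", init, "refined:", fit.x)
    ```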

  7. Two-step optimization of pressure and recovery of reverse osmosis desalination process.

    PubMed

    Liang, Shuang; Liu, Cui; Song, Lianfa

    2009-05-01

    Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter to collectively reflect membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling were discussed.
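
    A hedged numeric illustration of the two-step procedure is shown below; the cost model (energy, membrane, and feed-side terms) and all prices are invented placeholders, not the paper's derived equations:

    ```python
    # Sketch: sequential optimization of net driving pressure P, then recovery r,
    # for an assumed specific-cost model of an RO desalination process.
    from scipy.optimize import minimize_scalar

    PI0 = 25.0                 # initial osmotic pressure of the feed (bar), assumed
    C_E, C_M = 0.08, 2.0       # electricity price and membrane price index, assumed
    C_PRE, C_RET = 0.05, 0.03  # pretreatment / retentate handling costs, assumed

    def cost(P, r):
        energy = C_E * P                               # pumping energy per unit permeate
        membrane = C_M / max(P - PI0 / (1 - r), 1e-6)  # flux falls as osmotic pressure rises
        feed = (C_PRE + (1 - r) * C_RET) / r           # feed-side costs per unit permeate
        return energy + membrane + feed

    # Step 1: optimal net driving pressure at a trial recovery.
    r0 = 0.5
    P_opt = minimize_scalar(lambda P: cost(P, r0), bounds=(PI0 + 1, 120),
                            method="bounded").x
    # Step 2: optimal recovery at the optimized pressure.
    r_opt = minimize_scalar(lambda r: cost(P_opt, r), bounds=(0.05, 0.9),
                            method="bounded").x
    print(f"P* = {P_opt:.1f} bar, r* = {r_opt:.2f}")
    ```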

  8. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using the two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool.
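
    A compact sketch of the two-step approach follows; the sine basis functions and synthetic target shape are stand-ins for the mode shapes and trim shape used in the study:

    ```python
    # Sketch: (1) least-squares fit of basis-function weights as the starting
    # point, (2) numerical refinement of a max-error objective.
    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(0.0, 1.0, 200)                  # span-wise stations
    basis = np.vstack([np.sin((k + 1) * np.pi * x) for k in range(6)]).T
    target = 0.8 * basis[:, 0] - 0.3 * basis[:, 2] + 0.01 * np.random.randn(x.size)

    # Step 1: least-squares surface fit gives the starting design variables.
    w0, *_ = np.linalg.lstsq(basis, target, rcond=None)

    # Step 2: tune the jig-shape with a numerical optimizer (max-error objective).
    res = minimize(lambda w: np.max(np.abs(basis @ w - target)), w0,
                   method="Nelder-Mead")
    print("LS start error:", np.max(np.abs(basis @ w0 - target)))
    print("refined error :", np.max(np.abs(basis @ res.x - target)))
    ```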

  9. Step 6: Does Not Routinely Employ Practices, Procedures Unsupported by Scientific Evidence

    PubMed Central

    Goer, Henci; Sagady Leslie, Mayri; Romano, Amy

    2007-01-01

    Step 6 of the Ten Steps of Mother-Friendly Care addresses two issues: 1) the routine use of interventions (shaving, enemas, intravenous drips, withholding food and fluids, early rupture of membranes, and continuous electronic fetal monitoring); and 2) the optimal rates of induction, episiotomy, cesareans, and vaginal births after cesarean. Rationales for compliance and systematic reviews are presented. PMID:18523680

  10. Automation and Optimization of Multipulse Laser Zona Drilling of Mouse Embryos During Embryo Biopsy.

    PubMed

    Wong, Christopher Yee; Mills, James K

    2017-03-01

    Laser zona drilling (LZD) is a required step in many embryonic surgical procedures, for example, assisted hatching and preimplantation genetic diagnosis. LZD involves the ablation of the zona pellucida (ZP) using a laser while minimizing potentially harmful thermal effects on critical internal cell structures. The objective was to develop a method for the automation and optimization of multipulse LZD, applied to cleavage-stage embryos. A two-stage optimization is used. The first stage uses computer vision algorithms to identify embryonic structures and determines the optimal ablation zone farthest away from critical structures such as blastomeres. The second stage combines a genetic algorithm with a previously reported thermal analysis of LZD to optimize the combination of laser pulse locations and pulse durations. The goal is to minimize the peak temperature experienced by the blastomeres while creating the desired opening in the ZP. A proof of concept of the proposed LZD automation and optimization method is demonstrated through experiments on mouse embryos, with positive results: adequately sized openings are created. Automation of LZD is feasible and is a viable step toward the automation of embryo biopsy procedures. LZD is a common but delicate procedure performed by human operators using subjective methods to gauge proper LZD technique. Automation of LZD removes human error and increases the success rate of LZD. Although the proposed methods are developed for cleavage-stage embryos, the same methods may be applied to most types of LZD procedures, embryos at different developmental stages, or nonembryonic cells.

  11. Optimal Signal Processing of Frequency-Stepped CW Radar Data

    NASA Technical Reports Server (NTRS)

    Ybarra, Gary A.; Wu, Shawkang M.; Bilbro, Griff L.; Ardalan, Sasan H.; Hearn, Chase P.; Neece, Robert T.

    1995-01-01

    An optimal signal processing algorithm is derived for estimating the time delay and amplitude of each scatterer reflection using a frequency-stepped CW system. The channel is assumed to be composed of abrupt changes in the reflection coefficient profile. The optimization technique is intended to maximize the target range resolution achievable from any set of frequency-stepped CW radar measurements made in such an environment. The algorithm is composed of an iterative two-step procedure. First, the amplitudes of the echoes are optimized by solving an overdetermined least squares set of equations. Then, a nonlinear objective function is scanned in an organized fashion to find its global minimum. The result is a set of echo strengths and time delay estimates. Although this paper addresses the specific problem of resolving the time delay between the first two echoes, the derivation is general in the number of echoes. Performance of the optimization approach is illustrated using measured data obtained from an HP-8510 network analyzer. It is demonstrated that the optimization approach offers a significant resolution enhancement over the standard processing approach that employs an IFFT. Degradation in the performance of the algorithm due to suboptimal model order selection and the effects of additive white Gaussian noise are addressed.
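
    The iterative two-step estimator can be sketched as below; the stepped-frequency signal model, noise level, and two-echo scenario are assumptions for illustration:

    ```python
    # Sketch: given trial time delays, echo amplitudes follow from an
    # overdetermined least-squares solve; delays are refined by scanning
    # the residual for its global minimum.
    import numpy as np

    freqs = np.linspace(2e9, 4e9, 64)               # stepped CW frequencies (Hz)
    true_tau = np.array([10e-9, 13e-9])
    true_amp = np.array([1.0, 0.6])

    def basis(taus):                                # columns exp(-j*2*pi*f*tau_k)
        taus = np.asarray(taus, dtype=float)
        return np.exp(-2j * np.pi * freqs[:, None] * taus[None, :])

    rng = np.random.default_rng(0)
    data = basis(true_tau) @ true_amp
    data = data + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

    def residual(taus):
        A = basis(taus)
        amps, *_ = np.linalg.lstsq(A, data, rcond=None)  # step 1: LS amplitudes
        return np.linalg.norm(data - A @ amps)

    # Step 2: organized scan of the delay pair for the global residual minimum.
    grid = np.linspace(8e-9, 16e-9, 161)
    best = min(((residual((t1, t2)), t1, t2)
                for i, t1 in enumerate(grid) for t2 in grid[i + 1:]),
               key=lambda b: b[0])
    print(f"estimated delays: {best[1] * 1e9:.2f} ns and {best[2] * 1e9:.2f} ns")
    ```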

  13. Potential of extended airbreathing operation of a two-stage launch vehicle by scramjet propulsion

    NASA Astrophysics Data System (ADS)

    Schoettle, U. M.; Hillesheimer, M.; Rahn, M.

    This paper examines the application of scramjet propulsion to extend the ramjet operation of an airbreathing two-stage launch vehicle designed for horizontal takeoff and landing. Performance comparisons are made for two alternative propulsion concepts. The mission performance predictions presented are obtained from a multistep optimization procedure employing both trajectory optimization and vehicle design steps to achieve maximum payload capability. The simulation results are shown to offer an attractive payload advantage for the scramjet variant over the ramjet-powered vehicle.

  14. A structural topological optimization method for multi-displacement constraints and any initial topology configuration

    NASA Astrophysics Data System (ADS)

    Rong, J. H.; Yi, J. H.

    2010-10-01

    In density-based topological design, one expects the final result to consist of elements that are either black (solid material) or white (void), without any grey areas. Moreover, one also expects that the optimal topology can be obtained by starting from any initial topology configuration. An improved structural topological optimization method for multi-displacement constraints is proposed in this paper. In the proposed method, the whole optimization process is divided into two optimization adjustment phases and a phase-transferring step. First, an optimization model is built to deal with the varied displacement limits, design space adjustments, and reasonable relations between the element stiffness matrix and mass and its element topology variable. Second, a procedure is proposed to solve the optimization problem formulated in the first optimization adjustment phase, starting with a small design space and advancing to a larger design space. The design space adjustments are automatic when the design domain needs expansion, and they do not affect the convergence of the proposed method. The topology obtained by the proposed procedure in the first optimization phase can approach the vicinity of the optimum topology. Then, a heuristic algorithm is given to improve the efficiency and make the designed structural topology black/white in both the phase-transferring step and the second optimization adjustment phase, in which the optimum topology is finally obtained. Two examples are presented to show that the topologies obtained by the proposed method have a very good 0/1 design distribution property, and that the computational efficiency is enhanced by reducing the number of elements in the structural finite element model during the two optimization adjustment phases. The examples also show that the method is robust and practicable.

  15. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    PubMed

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes have been proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using an RMLP with a suitable number of latent nodes to reduce the parameter dimension, often yields more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step algorithms can potentially incorporate different nonlinear differential equation models to reconstruct GRNs.
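
    Step two's parameter fitting uses particle swarm optimization; a self-contained PSO core is sketched below, with a toy quadratic loss standing in for the RNN simulation error:

    ```python
    # Sketch: minimal particle swarm optimization (PSO) of the kind used to fit
    # RNN/RMLP parameters; the objective here is a toy stand-in.
    import numpy as np

    def loss(w):                      # placeholder for RNN simulation error
        return np.sum((w - np.array([0.5, -1.0, 2.0]))**2)

    rng = np.random.default_rng(0)
    n_particles, dim, iters = 30, 3, 200
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions
    v = np.zeros_like(x)                            # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(loss, 1, x)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(loss, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]

    print("PSO solution:", gbest)
    ```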

  16. SU-F-J-66: Anatomy Deformation Based Comparison Between One-Step and Two-Step Optimization for Online ART

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Z; Yu, G; Qin, S

    Purpose: This study investigated how the quality of the adapted plan was affected by inter-fractional anatomy deformation when using one-step and two-step optimization for an online adaptive radiotherapy (ART) procedure. Methods: 10 lung carcinoma patients were chosen randomly to produce IMRT plans with one-step and two-step algorithms, respectively, and the prescribed dose was set to 60 Gy on the planning target volume (PTV) for all patients. To simulate inter-fractional target deformation, four specific cases were created by systematic anatomy variation, including a 0.5 cm superior target shift, 0.3 cm contraction, 0.3 cm expansion, and 45-degree rotation. Based on these four anatomy deformations, adapted, regenerated, and non-adapted plans were created to evaluate the quality of adaptation. Adapted plans were generated automatically by using the one-step and two-step algorithms, respectively, to optimize the original plans, and regenerated plans were created manually by experienced physicists. Non-adapted plans were produced by recalculating the dose distribution based on the corresponding original plans. The deviations among these three plans were statistically analyzed by paired t-test. Results: In the PTV superior shift case, adapted plans had significantly better PTV coverage with the two-step algorithm than with the one-step one, and there was a significant difference in V95 between adapted and non-adapted plans (p=0.0025). In the target contraction case, with almost the same PTV coverage, the total lung received a lower dose with the one-step algorithm than with the two-step algorithm (p=0.0143 and 0.0126 for V20 and Dmean, respectively). In the other two deformation cases, no significant differences were observed for either optimization algorithm. Conclusion: Under geometric deformation such as target contraction, with comparable PTV coverage, the one-step algorithm gave better OAR sparing than the two-step algorithm. Conversely, adaptation using the two-step algorithm had higher efficiency and accuracy when the target underwent positional displacement. We want to thank Dr. Lei Xing and Dr. Yong Yang of the Stanford University School of Medicine for this work. This work was jointly supported by NSFC (61471226), the Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), and the China Postdoctoral Science Foundation (2015T80739, 2014M551949).

  17. Time optimal control of a jet engine using a quasi-Hermite interpolation model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Comiskey, J. G.

    1979-01-01

    This work made preliminary efforts to generate nonlinear numerical models of a two-spool turbofan jet engine and to subject these models to a known method of generating global, nonlinear, time-optimal control laws. The models were derived numerically, directly from empirical data, as a first step in developing an automatic modelling procedure.

  18. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies, comprising noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions, combining different tests, initial settings, background noise types, and step size configurations. Seven normal-hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial for creating clear perceptual differences in the comparisons. The reliability also depends on the starting point, stop criterion, step size constraints, background noise, and algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence: when the step size is enlarged, the reliability improves but the convergence deteriorates.
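
    A minimal sketch of an adaptive multidirectional pattern search of this kind follows; the three-parameter quadratic "comfort" objective replaces the paired-comparison listening judgments, and the grow/shrink factors are our own choices:

    ```python
    # Sketch: compass-style pattern search whose step size grows after a
    # successful comparison and shrinks otherwise.
    import numpy as np

    def comfort(x):                      # toy objective (higher is better)
        return -np.sum((x - np.array([0.3, -0.2, 0.5]))**2)

    x = np.zeros(3)                      # noise reduction, enhancement, lift
    step, grow, shrink = 0.5, 1.5, 0.5
    for _ in range(100):
        if step < 1e-3:                  # stop criterion on the step size
            break
        # probe +/- step along each parameter axis (multidirectional pattern)
        candidates = [x + s * step * e for e in np.eye(3) for s in (+1, -1)]
        best = max(candidates, key=comfort)
        if comfort(best) > comfort(x):   # "preferred in paired comparison"
            x, step = best, step * grow  # accept the move and enlarge the step
        else:
            step *= shrink               # reject and refine
    print("optimized settings:", np.round(x, 3))
    ```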

  19. An optimized two-step derivatization method for analyzing diethylene glycol ozonation products using gas chromatography and mass spectrometry.

    PubMed

    Yu, Ran; Duan, Lei; Jiang, Jingkun; Hao, Jiming

    2017-03-01

    The ozonation of hydroxyl compounds (e.g., sugars and alcohols) gives a broad range of products such as alcohols, aldehydes, ketones, and carboxylic acids. This study developed and optimized a two-step derivatization procedure for analyzing polar aldehyde and carboxylic acid products from the ozonation of diethylene glycol (DEG) in a non-aqueous environment using gas chromatography-mass spectrometry. Experiments based on a Central Composite Design with response surface methodology were carried out to evaluate the effects of the derivatization variables and their interactions on the analysis. The most desirable derivatization conditions were reported: oximation performed at room temperature overnight with an O-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine-to-analyte molar ratio of 6, a silylation reaction temperature of 70°C, a reaction duration of 70 min, and an N,O-bis(trimethylsilyl)trifluoroacetamide volume of 12.5 μL. The applicability of this optimized procedure was verified by analyzing DEG ozonation products in an ultrafine condensation particle counter simulation system.

  20. Neural networks for vertical microcode compaction

    NASA Astrophysics Data System (ADS)

    Chu, Pong P.

    1992-09-01

    Neural networks provide an alternative way to solve complex optimization problems. Instead of performing a program of instructions sequentially as in a traditional computer, a neural network model explores many competing hypotheses simultaneously using its massively parallel net. This paper shows how to use the neural network approach to perform vertical microcode compaction for a microprogrammed control unit. The compaction procedure includes two basic steps. The first step determines the compatibility classes, and the second step selects a minimal subset to cover the control signals. Since the selection process is an NP-complete problem, finding an optimal solution is impractical. In this study, we employ a customized neural network to obtain the minimal subset. We first formalize the problem, then define an `energy function' and map it to a two-layer fully connected neural network. The modified network has two types of neurons and can always obtain a valid solution.

  1. Band Structure Simulations of the Photoinduced Changes in the MgB₂:Cr Films.

    PubMed

    Kityk, Iwan V; Fedorchuk, Anatolii O; Ozga, Katarzyna; AlZayed, Nasser S

    2015-04-02

    An approach to describing the photoinduced nonlinear optical effects in superconducting MgB₂:Cr₂O₃ nanocrystalline films is proposed. It includes molecular dynamics step-by-step optimization of the two separate crystalline phases. The nanointerface between the two phases plays the principal role in the photoinduced nonlinear optical properties. The first modified layers take the form of a slightly modified perfect crystalline structure. The next layer is added to the perfect crystalline structure, and the iteration procedure is repeated for each subsequent layer. The total energy is treated as the varied parameter. To avoid potential jumps at the borders, an additional derivative procedure was carried out.

  2. Two-speed phacoemulsification for soft cataracts using optimized parameters and procedure step toolbar with the CENTURION Vision System and Balanced Tip.

    PubMed

    Davison, James A

    2015-01-01

    To present a cause of posterior capsule aspiration and a technique using optimized parameters to prevent it from happening when operating soft cataracts. A prospective list of posterior capsule aspiration cases was kept over 4,062 consecutive cases operated with the Alcon CENTURION machine and Balanced Tip. Video analysis of one case of posterior capsule aspiration was accomplished. A surgical technique was developed using empirically derived machine parameters and customized setting-selection procedure step toolbar to reduce the pace of aspiration of soft nuclear quadrants in order to prevent capsule aspiration. Two cases out of 3,238 experienced posterior capsule aspiration before use of the soft quadrant technique. Video analysis showed an attractive vortex effect with capsule aspiration occurring in 1/5 of a second. A soft quadrant removal setting was empirically derived which had a slower pace and seemed more controlled with no capsule aspiration occurring in the subsequent 824 cases. The setting featured simultaneous linear control from zero to preset maximums for: aspiration flow, 20 mL/min; and vacuum, 400 mmHg, with the addition of torsional tip amplitude up to 20% after the fluidic maximums were achieved. A new setting selection procedure step toolbar was created to increase intraoperative flexibility by providing instantaneous shifting between the soft and normal settings. A technique incorporating a reduced pace for soft quadrant acquisition and aspiration can be accomplished through the use of a dedicated setting of integrated machine parameters. Toolbar placement of the procedure button next to the normal setting procedure button provides the opportunity to instantaneously alternate between the two settings. Simultaneous surgeon control over vacuum, aspiration flow, and torsional tip motion may make removal of soft nuclear quadrants more efficient and safer.

  3. A two-step method for developing a control rod program for boiling water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

    This paper reports on a two-step method for generating a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution during core depletion. In the new method, BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated in step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method gained cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.

  4. GENESUS: a two-step sequence design program for DNA nanostructure self-assembly.

    PubMed

    Tsutsumi, Takanobu; Asakawa, Takeshi; Kanegami, Akemi; Okada, Takao; Tahira, Tomoko; Hayashi, Kenshi

    2014-01-01

    DNA has been recognized as an ideal material for the bottom-up construction of nanometer-scale structures by self-assembly. The generation of sequences optimized for unique self-assembly (GENESUS) program reported here is a straightforward method for generating sets of strand sequences optimized for the self-assembly of arbitrarily designed DNA nanostructures by a generate-candidates-and-choose-the-best strategy. A scalable procedure for preparing single-stranded DNA with arbitrary sequences is also presented. Strands for the assembly of various structures were designed and successfully constructed, validating both the program and the procedure.

  5. A derived heuristics based multi-objective optimization procedure for micro-grid scheduling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Deb, Kalyanmoy; Fang, Yanjun

    2017-06-01

    With the availability of different types of power generators for use in an electric micro-grid system, scheduling their operation as the load demand changes with time becomes an important task. Besides satisfying load balance constraints and the generators' rated power, several other practicalities, such as the limited availability of grid power and restricted ramping of power output from generators, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics for such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the operation scheduling problem is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced-level knowledge bases offline from a series of prior demand-wise optimization runs, and the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid-connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.

  6. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When simulating the variogram over a wide range, an optimal fit cannot always be obtained automatically, but an interactive human-computer simulation method can be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were utilized to fit a one-step spherical model, a two-step spherical model, and a linear function model, and the available nearby samples were used in an ordinary kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between the estimated and measured values of the various theoretical models were computed, and the corresponding graphs are shown. The simulation based on the two-step spherical model was the best, and the one-step spherical model was better than the linear function model.
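
    An ordinary-kriging sketch with a spherical variogram is given below; the sample coordinates, counts, and variogram parameters are invented, and in practice the variogram parameters would come from the fitting step described above:

    ```python
    # Sketch: solve the ordinary-kriging system with a Lagrange multiplier
    # enforcing the unbiasedness (weights-sum-to-one) constraint.
    import numpy as np

    def spherical(h, nugget=0.1, sill=1.0, a=5.0):   # spherical variogram model
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h < a, g, sill)

    pts = np.array([[1.0, 1.0], [4.0, 2.0], [2.0, 5.0], [5.0, 5.0]])  # sample sites
    z = np.array([3.0, 5.0, 4.0, 6.0])                                # counts
    x0 = np.array([3.0, 3.0])                                         # target point

    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = spherical(d)
    np.fill_diagonal(A[:n, :n], 0.0)   # gamma(0) = 0 by definition
    A[n, n] = 0.0
    b = np.append(spherical(np.linalg.norm(pts - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)          # kriging weights plus Lagrange multiplier
    print("ordinary-kriging estimate at x0:", round(w[:n] @ z, 3))
    ```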

  7. Optimization and quality control of genome-wide Hi-C library preparation.

    PubMed

    Zhang, Xiang-Yuan; He, Chao; Ye, Bing-Yu; Xie, De-Jian; Shi, Ming-Lei; Zhang, Yan; Shen, Wen-Long; Li, Ping; Zhao, Zhi-Hu

    2017-09-20

    High-throughput chromosome conformation capture (Hi-C) is one of the key assays for genome-wide chromatin interaction studies. It is a time-consuming process that involves many steps and many different kinds of reagents, consumables, and equipment. At present, its reproducibility is unsatisfactory. By optimizing the key steps of the Hi-C experiment, such as crosslinking, pretreatment before digestion, inactivation of the restriction enzyme, and in situ ligation, we established a robust Hi-C procedure and prepared two biological replicates of Hi-C libraries from GM12878 cells. After preliminary quality control by Sanger sequencing, the two replicates were sequenced at high throughput. Bioinformatics analysis of the raw sequencing data revealed that the mappability and pair-mate rate of the raw data were around 90% and 72%, respectively. Additionally, after removal of self-circular ligations and dangling-end products, more than 96% of reads were valid pairs. Genome-wide interactome profiling shows clear topologically associated domains (TADs), consistent with previous reports. Further correlation analysis showed that the two biological replicates correlate strongly with each other in terms of both bin coverage and all bin pairs. All these results indicate that the optimized Hi-C procedure is robust and stable, which will be very helpful for wide application of the Hi-C assay.

  8. Procedure for minimizing the cost per watt of photovoltaic systems

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces the same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.

  9. Jig-Shape Optimization of a Low-Boom Supersonic Aircraft

    NASA Technical Reports Server (NTRS)

    Pak, Chan-Gi

    2018-01-01

    A simple approach for optimizing the jig-shape is proposed in this study. This simple approach is based on an unconstrained optimization problem and applied to a low-boom supersonic aircraft. In this study, the jig-shape optimization is performed using the two-step approach. First, starting design variables are computed using the least-squares surface fitting technique. Next, the jig-shape is further tuned using a numerical optimization procedure based on an in-house object-oriented optimization tool. During the numerical optimization procedure, a design jig-shape is determined by the baseline jig-shape and basis functions. A total of 12 symmetric mode shapes of the cruise-weight configuration, the rigid pitch shape, the rigid left and right stabilator rotation shapes, and a residual shape are selected as sixteen basis functions. After three optimization runs, the trim shape error distribution is improved, and the maximum trim shape error is reduced from 0.9844 inch for the starting configuration to 0.00367 inch by the end of the third optimization run.

  10. Systematic procedure for designing processes with multiple environmental objectives.

    PubMed

    Kim, Ki-Joo; Smith, Raymond L

    2005-04-01

    Evaluation of multiple objectives is very important in designing environmentally benign processes. It requires a systematic procedure for solving multiobjective decision-making problems due to the complex nature of the problems, the need for complex assessments, and the complicated analysis of multidimensional results. In this paper, a novel systematic procedure is presented for designing processes with multiple environmental objectives. This procedure has four steps: initialization, screening, evaluation, and visualization. The first two steps are used for systematic problem formulation based on mass and energy estimation and order of magnitude analysis. In the third step, an efficient parallel multiobjective steady-state genetic algorithm is applied to design environmentally benign and economically viable processes and to provide more accurate and uniform Pareto optimal solutions. In the last step a new visualization technique for illustrating multiple objectives and their design parameters on the same diagram is developed. Through these integrated steps the decision-maker can easily determine design alternatives with respect to his or her preferences. Most importantly, this technique is independent of the number of objectives and design parameters. As a case study, acetic acid recovery from aqueous waste mixtures is investigated by minimizing eight potential environmental impacts and maximizing total profit. After applying the systematic procedure, the most preferred design alternatives and their design parameters are easily identified.
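
    As a small illustration of the evaluation step's output, the following helper extracts the non-dominated (Pareto-optimal) designs from a set of objective vectors; the sample objective values are invented:

    ```python
    # Sketch: Pareto filtering of design alternatives, all objectives minimized
    # (e.g., environmental impacts and negative profit).
    import numpy as np

    def pareto_front(F):
        """Return indices of non-dominated rows of F."""
        idx = []
        for i, fi in enumerate(F):
            dominated = any(np.all(fj <= fi) and np.any(fj < fi) for fj in F)
            if not dominated:
                idx.append(i)
        return idx

    F = np.array([[2.0, 8.0], [3.0, 4.0], [5.0, 3.0], [4.0, 6.0], [7.0, 2.5]])
    print("Pareto-optimal designs:", pareto_front(F))
    ```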

  11. Torsional Ultrasound Sensor Optimization for Soft Tissue Characterization

    PubMed Central

    Melchor, Juan; Muñoz, Rafael; Rus, Guillermo

    2017-01-01

    Torsional mechanical waves have the capability to characterize the shear stiffness moduli of soft tissue. Under this hypothesis, a computational methodology is proposed to design and optimize a piezoelectric-based transmitter and receiver to generate and measure the response of torsional ultrasonic waves. The procedure is divided into two steps: (i) a finite element method (FEM) is developed to obtain the transmitted and received waveforms as well as the resonance frequency of a prior geometry, validated against a semi-analytical simplified model, and (ii) a probabilistic optimality criterion for the design, based on an inverse problem estimating the robust probability of detection (RPOD), is used to maximize the detection of the pathology, defined in terms of changes of shear stiffness. This study collects different design options in two separate models, for transmission and contact, respectively. The main contribution of this work is a framework establishing the forward, inverse, and optimization procedures for choosing an appropriate set of transducer parameters. This methodological framework may be generalizable to other applications. PMID:28617353

  12. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximation iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.

  14. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system is addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the second step. With the assumption that the variational parameters are small compared to unity, the resulting problem can be solved readily with relatively little computational effort.
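
    The two-step scheme can be illustrated as below; a closed-form Kabsch/SVD fit plays the role of the nominal solution and a small least-squares refinement plays the role of the variational step. The synthetic point sets and the small-angle initialization are our assumptions, not the paper's formulation:

    ```python
    # Sketch: rigid-body calibration via (1) Kabsch/SVD nominal fit and
    # (2) least-squares refinement of a rotation-vector parameterization.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    P = rng.uniform(-1, 1, (20, 3))                 # points in the robot frame
    th = 0.4                                        # "true" rotation about z
    R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                       [np.sin(th),  np.cos(th), 0.0],
                       [0.0, 0.0, 1.0]])
    t_true = np.array([0.2, -0.1, 0.3])
    Q = P @ R_true.T + t_true + 0.005 * rng.standard_normal(P.shape)

    # Step 1: nominal solution via the closed-form Kabsch (SVD) procedure.
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R0 = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # robot frame -> sensor frame
    t0 = Q.mean(0) - R0 @ P.mean(0)

    # Step 2: variational refinement around the nominal solution.
    def rotvec_to_R(r):                             # Rodrigues' rotation formula
        ang = np.linalg.norm(r)
        if ang < 1e-12:
            return np.eye(3)
        k = r / ang
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(ang) * K + (1 - np.cos(ang)) * K @ K

    def resid(x):
        return (P @ rotvec_to_R(x[:3]).T + x[3:] - Q).ravel()

    r0 = np.array([R0[2, 1] - R0[1, 2],             # small-angle rotation vector
                   R0[0, 2] - R0[2, 0],
                   R0[1, 0] - R0[0, 1]]) / 2.0
    sol = least_squares(resid, np.concatenate([r0, t0]))
    print("refined rotation angle:", round(np.linalg.norm(sol.x[:3]), 4))
    print("refined translation  :", np.round(sol.x[3:], 4))
    ```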

  15. Bayesian SEM for Specification Search Problems in Testing Factorial Invariance.

    PubMed

    Shi, Dexin; Song, Hairong; Liao, Xiaolan; Terry, Robert; Snyder, Lori A

    2017-01-01

    Specification search problems refer to two important but under-addressed issues in testing for factorial invariance: how to select proper reference indicators and how to locate specific non-invariant parameters. In this study, we propose a two-step procedure to solve these issues. Step 1 is to identify a proper reference indicator using the Bayesian structural equation modeling approach. An item is selected if it is associated with the highest likelihood of being invariant across groups. Step 2 is to locate specific non-invariant parameters, given that a proper reference indicator has already been selected in Step 1. A series of simulation analyses shows that the proposed method performs well under a variety of data conditions, with optimal performance observed under conditions of large magnitude of non-invariance, low proportion of non-invariance, and large sample sizes. We also provide an empirical example to demonstrate the specific procedures for implementing the proposed method in applied research. The importance and influence of the choice of informative priors with zero means and small variances are discussed. Extensions and limitations are also pointed out.

  16. Use of Binary Partition Tree and energy minimization for object-based classification of urban land cover

    NASA Astrophysics Data System (ADS)

    Li, Mengmeng; Bijker, Wietske; Stein, Alfred

    2015-04-01

    Two main challenges are faced when classifying urban land cover from very high resolution satellite images: obtaining an optimal image segmentation and distinguishing buildings from other man-made objects. For optimal segmentation, this work proposes a hierarchical representation of an image by means of a Binary Partition Tree (BPT) and an unsupervised evaluation of image segmentations by energy minimization. For building extraction, we apply fuzzy sets to create a fuzzy landscape of shadows, which in turn involves a two-step procedure. The first step is a preliminary image classification at a fine segmentation level to generate vegetation and shadow information. The second step models the directional relationship between building and shadow objects to extract building information at the optimal segmentation level. We conducted the experiments on two datasets of Pléiades images from Wuhan City, China. To demonstrate its performance, the proposed classification is compared at the optimal segmentation level with Maximum Likelihood Classification and Support Vector Machine classification. The results show that the proposed classification produced the highest overall accuracies and kappa coefficients, and the smallest over-classification and under-classification geometric errors. We conclude, first, that integrating BPT with energy minimization offers an effective means for image segmentation. Second, we conclude that the directional relationship between building and shadow objects, represented by a fuzzy landscape, is important for building extraction.

  17. Crystallization and preliminary X-ray analysis of membrane-bound pyrophosphatases.

    PubMed

    Kellosalo, Juho; Kajander, Tommi; Honkanen, Riina; Goldman, Adrian

    2013-02-01

    Membrane-bound pyrophosphatases (M-PPases) are enzymes that enhance the survival of plants, protozoans and prokaryotes under energy-constraining stress conditions. These proteins use pyrophosphate, a waste product of cellular metabolism, as an energy source for sodium or proton pumping. To study the structure and function of these enzymes, we have crystallized two membrane-bound pyrophosphatases recombinantly produced in Saccharomyces cerevisiae: the sodium-pumping enzyme of Thermotoga maritima (TmPPase) and the proton-pumping enzyme of Pyrobaculum aerophilum (PaPPase). Extensive crystal optimization has allowed us to grow crystals of TmPPase that diffract to a resolution of 2.6 Å. The decisive step in this optimization was in-column detergent exchange during the two-step purification procedure. Dodecyl maltoside was used for high-temperature solubilization of TmPPase and then exchanged to a series of different detergents. After extensive screening, the new detergent octyl glucose neopentyl glycol was found to be optimal for TmPPase but not for PaPPase.

  18. A Fast Procedure for Optimizing Thermal Protection Systems of Re-Entry Vehicles

    NASA Astrophysics Data System (ADS)

    Ferraiuolo, M.; Riccio, A.; Tescione, D.; Gigliotti, M.

    The aim of the present work is to introduce a fast procedure for optimizing thermal protection systems for re-entry vehicles subjected to high thermal loads. A simplified one-dimensional optimization process, performed in order to find the optimum design variables (lengths, sections, etc.), is the first step of the proposed design procedure. Simultaneously, the most suitable materials, able to sustain high temperatures while meeting the weight requirements, are selected and positioned within the design layout. In this stage of the design procedure, simplified (generalized plane strain) FEM models are used when boundary and geometrical conditions allow the reduction of the degrees of freedom. These simplified local FEM models can be useful because they are time-saving and very simple to build; they are essentially one-dimensional and can be used in optimization processes to determine the optimum configuration with regard to weight, temperature, and stresses. A triple-layer and a double-layer body, subjected to the same aero-thermal loads, have been optimized to minimize the overall weight. Full two- and three-dimensional analyses are performed to validate these simplified models. Thermal-structural analyses and optimizations are executed using the Ansys FEM code.

  19. Label-free offline versus online activity methods for nucleoside diphosphate kinase b using high performance liquid chromatography.

    PubMed

    Lima, Juliana Maria; Salmazo Vieira, Plínio; Cavalcante de Oliveira, Arthur Henrique; Cardoso, Carmen Lúcia

    2016-08-07

    Nucleoside diphosphate kinase b from Leishmania spp. (LmNDKb) has recently been described as a potential drug target for treating leishmaniasis. Screening of LmNDKb ligands therefore requires methodologies that mimic the conditions under which LmNDKb acts in biological systems. Here, we compare two label-free methodologies that could help screen LmNDKb ligands and measure NDKb activity: an offline LC-UV assay for soluble LmNDKb and an online two-dimensional LC-UV system based on LmNDKb immobilised on a silica capillary. The target enzyme was immobilised on the silica capillary via Schiff base formation (to give LmNDKb-ICER-Schiff) or affinity attachment (to give LmNDKb-ICER-His). Several aspects of the immobilised capillary enzyme reactors (ICERs) resulting from these procedures were compared, namely kinetic parameters, stability, and procedure steps. Both LmNDKb immobilisation routes minimised conformational changes and preserved the substrate binding sites. However, considering the number of steps involved in the immobilisation procedure, the cost of reagents, and the stability of the immobilised enzyme, immobilisation via Schiff base formation proved to be the optimal procedure.

  20. Optimal production of L-threo-3,4-dihydroxyphenylserine (L-threo-DOPS) on a large scale by a diastereoselectivity-enhanced variant of L-threonine aldolase expressed in Escherichia coli.

    PubMed

    Gwon, Hui-Jeong; Yoshioka, Hideki; Song, Nho-Eul; Kim, Jong-Hui; Song, Young-Ran; Jeong, Do-Youn; Baik, Sang-Ho

    2012-01-01

    This study examined efficient production and optimal separation procedures for pure L-threo-3,4-dihydroxyphenylserine (L-threo-DOPS) from a mixture of diastereomers synthesized by a whole-cell aldol condensation reaction using Escherichia coli JM109 harboring a diastereoselectivity-enhanced L-threonine aldolase. The addition of the reducing agent sodium sulfite was found to stimulate the production of L-threo-DOPS without affecting the diastereoselectivity ratio, especially at a 50 mM concentration. The optimal pH for diastereoselective synthesis was 6.5. The addition of Triton X-100 also strongly affected the synthesis yield, with the highest conversion yield at a 0.75% concentration; the diastereoselectivity of the L-threonine aldolase, however, was not affected. Lowering the temperature to 10°C did not significantly affect the diastereoselectivity or the synthesis rate. Under the optimized conditions, a mixture of L-threo-DOPS and L-erythro-DOPS was synthesized by the diastereoselectivity-enhanced L-threonine aldolase from E. coli in a continuous process for 100 hr, yielding an average of 4.0 mg/mL of L-threo-DOPS with 60% diastereomeric excess (de), and was subjected to two steps of ion-exchange chromatography. The optimum separation conditions for the resin and solvent were evaluated, and a two-step process with the ion-exchange resin Dowex 50W×8 and activated carbon, washed with 0.5 N acetic acid, was found sufficient to separate the L-threo-DOPS. Using two-step ion-exchange chromatography, L-threo-DOPS of up to 100% purity was obtained with a yield of 71%. The remaining substrates, glycine and 3,4-dihydroxybenzaldehyde, were recovered successfully with a yield of 71.2%. Our results indicate that this procedure is a potentially economical purification process for the synthesis and purification of L-threo-DOPS at the pharmaceutical level.

  1. A New Two-Step Approach for Hands-On Teaching of Gene Technology: Effects on Students' Activities During Experimentation in an Outreach Gene Technology Lab

    NASA Astrophysics Data System (ADS)

    Scharfenberg, Franz-Josef; Bogner, Franz X.

    2011-08-01

    Emphasis on improving higher-level biology education continues. A new two-step approach to the experimental phases within an outreach gene technology lab, derived from cognitive load theory, is presented. We compared our approach, using a quasi-experimental design, with the conventional one-step mode. The difference consisted of additional focused discussions combined with students writing down their ideas (step one) prior to starting any experimental procedure (step two). We monitored students' activities during the experimental phases by continuously videotaping 20 work groups within each approach (N = 131). Subsequent classification of students' activities yielded 10 categories (with good intra- and inter-observer reliability scores). Based on the students' individual time budgets, we evaluated students' roles during experimentation from their prevalent activities (by independently using two cluster analysis methods). Independently of the approach, two common clusters emerged, which we labeled 'all-rounders' and 'passive students', along with two clusters specific to each approach: 'observers' and 'high-experimenters' were identified only within the one-step approach, whereas under the two-step conditions 'managers' and 'scribes' were identified. Potential changes in group-leadership style during experimentation are discussed, and conclusions for optimizing science teaching are drawn.

  2. Solution of elliptic partial differential equations by fast Poisson solvers using a local relaxation factor. 2: Two-step method

    NASA Technical Reports Server (NTRS)

    Chang, S. C.

    1986-01-01

    A two-step semidirect procedure is developed to accelerate the one-step procedure described in NASA TP-2529. For a set of constant-coefficient model problems, the acceleration factor increases from 1 to 2 as the one-step procedure convergence rate decreases from +∞ to 0. It is also shown numerically that the two-step procedure can substantially accelerate the convergence of the numerical solution of many partial differential equations (PDEs) with variable coefficients.

  3. Redo Laparoscopic Gastric Bypass: One-Step or Two-Step Procedure?

    PubMed

    Theunissen, Caroline M J; Guelinckx, Nele; Maring, John K; Langenhoff, Barbara S

    2016-11-01

    The adjustable gastric band (AGB) is a bariatric procedure that used to be widely performed. However, AGB failure, meaning band-related complications or unsatisfactory weight loss resulting in revision surgery (redo operations), frequently occurs. Often this entails a conversion to a laparoscopic Roux-en-Y gastric bypass (LRYGB), which can be performed as a one-step or two-step (separate band removal) procedure. Data were collected from patients operated on from 2012 to 2014 in a single bariatric centre. We compared 107 redo LRYGB after AGB failure with 1020 primary LRYGB, and analysed the one-step vs. two-step redo procedures. All redo procedures were performed by experienced bariatric surgeons. No difference in major complication rate was seen between redo and primary LRYGB (2.8 vs. 2.3%, p = 0.73), and overall complication severity for redos was low (mainly Clavien-Dindo 1 or 2). Weight loss results were comparable for primary and redo procedures. The one-step and two-step redos were comparable regarding complication rates and readmissions. The median operating time was 136 min for the one-step redo LRYGB vs. 107.5 min for the two-step (p < 0.001), excluding the operating time of the separate AGB removal (mean 61 min, range 36-110). Removal of a failed AGB and LRYGB in a one-step procedure is safe when performed by experienced bariatric surgeons. However, when erosion or perforation of the AGB occurs, we advise caution and would perform the redo LRYGB as a two-step procedure. Weight loss at 1 year after redo LRYGB is comparable to that after primary LRYGB.

  4. The appropriateness of use of percutaneous transluminal coronary angioplasty in Spain.

    PubMed

    Aguilar, M D; Fitch, K; Lázaro, P; Bernstein, S J

    2001-05-01

    The rapid increase in the number of percutaneous transluminal coronary angioplasty (PTCA) procedures performed in Spain in recent years raises questions about how appropriately this procedure is being used. To examine this issue, we studied the appropriateness of use of PTCA in Spanish patients and the factors associated with inappropriate use. We applied criteria for the appropriate use of PTCA, developed by an expert panel of Spanish cardiologists and cardiovascular surgeons, to a random sample of 1913 patients undergoing PTCA in Spain in 1997. The patients were selected through a two-step sampling process, stratified by hospital type (public/private) and volume of procedures (low/medium/high). We examined the association between inappropriate use of PTCA and various clinical and sociodemographic factors. Overall, 46% of the PTCA procedures were appropriate, 31% were uncertain and 22% were inappropriate. Two factors contributing to inappropriate use were patients' receipt of less-than-optimal medical therapy and their failure to undergo stress testing. Institutional type and volume of procedures were not significantly related to inappropriate use. One of every five PTCA procedures in Spain is done for inappropriate reasons. Ensuring that patients receive optimal medical therapy and undergo stress testing when indicated could contribute to more appropriate use of PTCA.

  5. Modeling metabolic networks in C. glutamicum: a comparison of rate laws in combination with various parameter optimization strategies

    PubMed Central

    Dräger, Andreas; Kronfeld, Marcel; Ziller, Michael J; Supper, Jochen; Planatscher, Hannes; Magnus, Jørgen B; Oldiges, Marco; Kohlbacher, Oliver; Zell, Andreas

    2009-01-01

    Background To understand the dynamic behavior of cellular systems, mathematical modeling is often necessary and comprises three steps: (1) experimental measurement of participating molecules, (2) assignment of rate laws to each reaction, and (3) parameter calibration with respect to the measurements. In each of these steps the modeler is confronted with a plethora of alternative approaches, e.g., the selection of approximative rate laws in step two, as specific equations are often unknown, or the choice of an estimation procedure with its specific settings in step three. This overall process, with its numerous choices and the mutual influence between them, makes it hard to single out the best modeling approach for a given problem. Results We investigate the modeling process using multiple kinetic equations together with various parameter optimization methods for a well-characterized example network, the biosynthesis of valine and leucine in C. glutamicum. For this purpose, we derive seven dynamic models based on generalized mass action, Michaelis-Menten and convenience kinetics as well as the stochastic Langevin equation. In addition, we introduce two modeling approaches for feedback inhibition to the mass action kinetics. The parameters of each model are estimated using eight optimization strategies. To determine the most promising modeling approaches together with the best optimization algorithms, we carry out a two-step benchmark: (1) coarse-grained comparison of the algorithms on all models and (2) fine-grained tuning of the best optimization algorithms and models. To analyze the space of the best parameters found for each model, we apply clustering, variance, and correlation analysis. Conclusion A mixed model based on the convenience rate law and the Michaelis-Menten equation, in which all reactions are assumed to be reversible, is the most suitable deterministic modeling approach, followed by a reversible generalized mass action kinetics model. A Langevin model is advisable to take stochastic effects into account. To estimate the model parameters, three algorithms are particularly useful: for first attempts, the settings-free Tribes algorithm yields valuable results; particle swarm optimization and differential evolution provide significantly better results with appropriate settings. PMID:19144170
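
    As a concrete illustration of steps (2) and (3), the sketch below fits a Michaelis-Menten rate law to synthetic time-course data with differential evolution, one of the strategy classes benchmarked in the study. It is a minimal stand-in, not the paper's pipeline: the valine/leucine network is not reproduced, and the data and parameter names (vmax, km) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution

def mm_rate(t, s, vmax, km):
    # Irreversible single-substrate Michaelis-Menten kinetics: ds/dt = -vmax*s/(km+s)
    return -vmax * s / (km + s)

t_obs = np.linspace(0.0, 10.0, 25)
s_true = solve_ivp(mm_rate, (0.0, 10.0), [5.0], t_eval=t_obs, args=(1.2, 0.8)).y[0]
s_obs = s_true + np.random.default_rng(0).normal(0.0, 0.02, s_true.size)  # noisy "data"

def sse(theta):
    # Sum of squared residuals between simulated and observed time courses
    sim = solve_ivp(mm_rate, (0.0, 10.0), [5.0], t_eval=t_obs, args=tuple(theta)).y[0]
    return np.sum((sim - s_obs) ** 2)

fit = differential_evolution(sse, bounds=[(0.01, 10.0), (0.01, 10.0)], seed=1)
print("estimated (vmax, km):", fit.x)   # should land near the true (1.2, 0.8)
```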

  6. Dual cloud point extraction coupled with hydrodynamic-electrokinetic two-step injection followed by micellar electrokinetic chromatography for simultaneous determination of trace phenolic estrogens in water samples.

    PubMed

    Wen, Yingying; Li, Jinhua; Liu, Junshen; Lu, Wenhui; Ma, Jiping; Chen, Lingxin

    2013-07-01

    A dual cloud point extraction (dCPE) off-line enrichment procedure coupled with a hydrodynamic-electrokinetic two-step injection online enrichment technique was successfully developed for the simultaneous preconcentration of trace phenolic estrogens (hexestrol, dienestrol, and diethylstilbestrol) in water samples, followed by micellar electrokinetic chromatography (MEKC) analysis. Several parameters affecting the extraction and online injection conditions were optimized. Under optimal dCPE-two-step injection-MEKC conditions, detection limits of 7.9-8.9 ng/mL and good linearity in the range from 0.05 to 5 μg/mL with correlation coefficients R² ≥ 0.9990 were achieved. Satisfactory recoveries ranging from 83 to 108% were obtained with lake and tap water spiked at 0.1 and 0.5 μg/mL, respectively, with relative standard deviations (n = 6) of 1.3-3.1%. This method was demonstrated to be convenient, rapid, cost-effective, and environmentally benign, and could be used as an alternative to existing methods for analyzing trace residues of phenolic estrogens in water samples.

  7. Efficiency and Safety of a One-Step Procedure Combining Laparoscopic Cholecystectomy and Endoscopic Retrograde Cholangiopancreatography for Treatment of Cholecysto-Choledocholithiasis: A Randomized Controlled Trial.

    PubMed

    Liu, Zhiyi; Zhang, Luyao; Liu, Yanling; Gu, Yang; Sun, Tieliang

    2017-11-01

    We aimed to evaluate the efficiency and safety of a one-step procedure combining endoscopic retrograde cholangiopancreatography (ERCP) and laparoscopic cholecystectomy (LC) for the treatment of patients with cholecysto-choledocholithiasis. A prospective randomized study was performed on 63 consecutive cholecysto-choledocholithiasis patients between 2008 and 2011. The efficiency and safety of the one-step procedure were assessed by comparison with the two-step LC with ERCP + endoscopic sphincterotomy (EST). Outcomes including intraoperative features and postoperative features (length of stay and postoperative complications) were evaluated. The one- or two-step procedure of LC with ERCP + EST was successfully performed in all patients, and common bile duct stones were completely removed. Statistical analyses showed that length of stay and pulmonary infection rate were significantly lower in the test group than in the control group (P < 0.05), whereas no statistical difference in the other outcomes was found between the two groups (all P > 0.05). The one-step procedure of LC with ERCP + EST is thus superior to the two-step procedure for treating cholecysto-choledocholithiasis with respect to reduced hospital stay and fewer pulmonary infections, and may be the preferable option for these patients.

  8. Integrated multidisciplinary design optimization using discrete sensitivity analysis for geometrically complex aeroelastic configurations

    NASA Astrophysics Data System (ADS)

    Newman, James Charles, III

    1997-10-01

    The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional cases; at steady state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations. To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.

  9. High Temperature Advanced Structural Composites. Book 1: Executive Summary and Intermetallic Compounds

    DTIC Science & Technology

    1993-04-02

    Misiolek, W.Z. and German, R.M., "Economical Aspects of Experiment Design for Compaction of High Temperature Composites," Proceedings of the American...ten years, the computational capability should be available. For infiltrated matrix depositions, the research has shown that design fiber... designed for manufacturing, was not completed. However, even with present 2-D fabric composite preforms, a two-step deposition procedure, optimized for the

  10. Adaptive Bayes classifiers for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Raulston, H. S.; Pace, M. O.; Gonzalez, R. C.

    1975-01-01

    An algorithm is developed for a learning, adaptive, statistical pattern classifier for remotely sensed data. The estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest, and (2) a projection of the parameters in time and space. The results reported are for Gaussian data in which the mean vector of each class may vary with time or position after the classifier is trained.
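
    A minimal numerical sketch of this two-step scheme follows: step (1) is a stochastic-approximation update of a class mean from incoming samples, and step (2) is a crude linear projection of that mean forward in time, fitted to the recent history of estimates. The drifting-Gaussian data, the constant gain, and all window sizes are illustrative assumptions, not the paper's remote-sensing setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(t):
    # Non-stationary class: the true mean drifts linearly with time.
    return rng.normal(loc=np.array([1.0, -0.5]) + 0.01 * t, scale=0.3)

# Step 1: stochastic approximation of the (slowly moving) class mean.
mu, history = np.zeros(2), []
for t in range(1, 301):
    gain = 0.05                          # constant gain so the estimate can track drift
    mu = mu + gain * (sample(t) - mu)
    history.append(mu)

# Step 2: project the parameter forward in time with a least-squares trend
# fitted to the recent history of estimates.
recent = np.array(history[-100:])
steps = np.arange(recent.shape[0])
slope = np.polyfit(steps, recent, 1)[0]  # per-step drift of each mean component
mu_projected = mu + slope * 50           # projected estimate 50 steps ahead
print("current:", mu.round(2), "projected:", mu_projected.round(2))
```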

  11. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

    The aim of this study was to evaluate the effectiveness and efficiency, in inverse IMRT planning, of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding-window (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup and dose-volume constraints were applied for all optimization methods for five head-and-neck tumor patients. Two-step plans were produced by converting the ideal fluence, with or without a smoothing filter, into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, or 12, producing a directly deliverable sequence. Moreover, plans were generated both with and without a split beam. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP), which are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%) and NTID (4%) as well as lower NTCP values. Differences of about 15-20% in treatment delivery time were observed. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over two-step IMRT planning.

  12. Novel synthesis of [11C]GVG (Vigabatrin) for pharmacokinetic studies of addiction treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Y.S.; Studenov, A.R.; Zhang, Z.

    2001-06-10

    We report here a novel synthetic route to prepare the precursor and to efficiently label GVG with C-11. 5-Bromo-3-(carbobenzyloxy)amino-1-pentene was synthesized in five steps from homoserine lactone. This was used in a two-step radiosynthesis, displacement with [11C]cyanide followed by acid hydrolysis, to afford [11C]GVG with high radiochemical yields (> 35%, not optimized) and high specific activity (2-5 Ci/μmol). The [11C]cyanide trapping was achieved at −5 °C with a mixture of Kryptofix and K₂CO₃, without using the conventional aqueous trapping procedure [7]. At this temperature, the excess NH₃ from the target that might interfere with the synthesis is not trapped [8]. This procedure should be advantageous for any moisture-sensitive radiosynthetic steps, as was the case for our displacement reaction. When the conventional aqueous trapping procedure was used, any trace amount of water left, even after prolonged heating, resulted in either no reaction or extremely low yields for the displacement reaction. The entire synthetic procedure should be extendible to the labeling of the pharmacologically active S-form of GVG when using S-homoserine lactone.

  13. The PDB_REDO server for macromolecular structure model optimization.

    PubMed

    Joosten, Robbie P; Long, Fei; Murshudov, Garib N; Perrakis, Anastassis

    2014-07-01

    The refinement and validation of a crystallographic structure model is the last step before the coordinates and the associated data are submitted to the Protein Data Bank (PDB). The success of the refinement procedure is typically assessed by validating the models against geometrical criteria and the diffraction data, and is an important step in ensuring the quality of the PDB public archive [Read et al. (2011), Structure, 19, 1395-1412]. The PDB_REDO procedure aims for 'constructive validation', aspiring to consistent and optimal refinement parameterization and pro-active model rebuilding, not only correcting errors but striving for optimal interpretation of the electron density. A web server for PDB_REDO has been implemented, allowing thorough, consistent and fully automated optimization of the refinement procedure in REFMAC and partial model rebuilding. The goal of the web server is to help practicing crystallographers to improve their model prior to submission to the PDB. For this, additional steps were implemented in the PDB_REDO pipeline, both in the refinement procedure, e.g. testing of resolution limits and k-fold cross-validation for small test sets, and as new validation criteria, e.g. the density-fit metrics implemented in EDSTATS and ligand validation as implemented in YASARA. Innovative ways to present the refinement and validation results to the user are also described, which together with auto-generated Coot scripts can guide users to subsequent model inspection and improvement. It is demonstrated that using the server can lead to substantial improvement of structure models before they are submitted to the PDB.

  14. Adaptive statistical pattern classifiers for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Gonzalez, R. C.; Pace, M. O.; Raulston, H. S.

    1975-01-01

    A technique for the adaptive estimation of nonstationary statistics necessary for Bayesian classification is developed. The basic approach to the adaptive estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest and (2) a projection of the parameters in time or position. A divergence criterion is developed to monitor algorithm performance. Comparative results of adaptive and nonadaptive classifier tests are presented for simulated four dimensional spectral scan data.

  15. Texas two-step: a framework for optimal multi-input single-output deconvolution.

    PubMed

    Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G

    2007-11-01

    Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework, Texas Two-Step, to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
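
    The sketch below illustrates the two steps in the frequency domain under simple assumptions (circular convolution, white noise of equal variance per channel): step one collapses the observations into the matched-filter sufficient statistic, and step two deconvolves the combined SISO problem with a plain Tikhonov-regularized inverse rather than the paper's wavelet or curvelet estimators. The test signal, blurs, and regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.zeros(n); x[60], x[150] = 1.0, -0.7            # sparse test signal

d = np.minimum(np.arange(n), n - np.arange(n))        # circular distance from index 0
kernels = [np.exp(-0.5 * (d / w) ** 2) for w in (2.0, 5.0)]
kernels = [h / h.sum() for h in kernels]              # two known blurs

H = [np.fft.fft(h) for h in kernels]
Y = [Hk * np.fft.fft(x) + np.fft.fft(rng.normal(0.0, 0.01, n)) for Hk in H]

# Step 1: matched-filter combination of all channels, a sufficient statistic
# for x when each channel carries white noise of equal variance.
num = sum(np.conj(Hk) * Yk for Hk, Yk in zip(H, Y))
den = sum(np.abs(Hk) ** 2 for Hk in H)

# Step 2: SISO deconvolution of the combined problem (Tikhonov-regularized).
lam = 1e-3
x_hat = np.fft.ifft(num / (den + lam)).real
print("largest recovered spike at sample", int(np.argmax(np.abs(x_hat))))  # ~60
```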

  16. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space in each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure: first, the variable space shrinks in each step; second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most existing methods, is the core of the VISSA strategy. Compared with promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
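
    A compact sketch of the weighted-binary-matrix-sampling loop follows. It keeps only the core idea: draw sub-models with per-variable inclusion probabilities, retain the best fraction, and update the probabilities from the winners' inclusion frequencies. Cross-validated ordinary least squares (via scikit-learn) stands in for the paper's PLS models on NIR spectra, and all sizes and thresholds are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
y = X[:, :5] @ rng.normal(size=5) + 0.1 * rng.normal(size=100)  # 5 informative variables

weights = np.full(30, 0.5)                    # per-variable inclusion probabilities
for _ in range(10):
    pop = rng.random((200, 30)) < weights     # weighted binary sampling matrix
    pop |= ~pop.any(axis=1, keepdims=True)    # guard against empty sub-models
    err = np.array([-cross_val_score(LinearRegression(), X[:, m], y, cv=5,
                                     scoring="neg_mean_squared_error").mean()
                    for m in pop])
    elite = pop[np.argsort(err)[:20]]         # keep the best 10% of sub-models
    weights = elite.mean(axis=0)              # shrink the space: new probabilities
print("selected variables:", np.where(weights > 0.5)[0])
```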

  17. Optimal execution in high-frequency trading with Bayesian learning

    NASA Astrophysics Data System (ADS)

    Du, Bian; Zhu, Hongliang; Zhao, Jingdong

    2016-11-01

    We consider optimal trading strategies in which traders submit bid and ask quotes to maximize the expected quadratic utility of total terminal wealth in a limit order book. The trader's bid and ask quotes change with the Poisson arrival of market orders. Meanwhile, the trader may update his estimate of other traders' target sizes and directions by Bayesian learning. The solution of optimal execution in the limit order book is a two-step procedure. First, we model inactive trading with no limit orders in the market; the dealer simply holds dollars and shares of stock until the terminal time. Second, he calibrates his bid and ask quotes to the limit order book. The optimal solutions are given by dynamic programming and are in fact globally optimal. We also give numerical simulations of the value function and optimal quotes in the last part of the article.

  18. Synthesis of robust water-soluble ZnS:Mn/SiO2 core/shell nanoparticles

    NASA Astrophysics Data System (ADS)

    Sun, Jing; Zhuang, Jiaqi; Guan, Shaowei; Yang, Wensheng

    2008-04-01

    Water-soluble Mn-doped ZnS (ZnS:Mn) nanocrystals, synthesized using 3-mercaptopropionic acid (MPA) as the stabilizer, were homogeneously coated with a dense silica shell through a multi-step procedure. First, 3-mercaptopropyltriethoxysilane (MPS) was used to replace MPA on the particle surface to form a vitreophilic layer for further silica deposition under optimal experimental conditions. Then a two-step silica deposition was performed to form the final water-soluble ZnS:Mn/SiO2 core/shell nanoparticles. The as-prepared core/shell nanoparticles show little change in fluorescence intensity over a wide pH range.

  19. Spectral-Spatial Classification of Hyperspectral Images Using Hierarchical Optimization

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new spectral-spatial method for hyperspectral data classification is proposed. For a given hyperspectral image, probabilistic pixelwise classification is first applied. Then, a hierarchical step-wise optimization algorithm is performed by iteratively merging the pair of neighboring regions with the smallest Dissimilarity Criterion (DC) value and recomputing class labels for the new regions. The DC is computed by comparing the region mean vectors, class labels and number of pixels of the two regions under consideration. The algorithm converges when all pixels have been involved in the region-merging procedure. Experimental results are presented for two remote sensing hyperspectral images acquired by the AVIRIS and ROSIS sensors. The proposed approach improves classification accuracies and provides maps with more homogeneous regions compared to previously proposed classification techniques.
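
    The toy sketch below shows the mechanical core of such a merging loop: single-pixel regions on a small two-class image are repeatedly fused with the 4-neighbour partner of smallest dissimilarity, here a size-weighted distance between region means only, since the class-label and probability terms of the paper's DC are omitted. The image, stopping rule, and weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.where(np.add.outer(np.arange(8), np.arange(8)) < 8, 1.0, 3.0)
img += 0.05 * rng.normal(size=img.shape)            # two noisy "classes"

labels = np.arange(img.size).reshape(img.shape)     # one region per pixel

def neighbour_pairs(lab):
    # Pairs of distinct region labels that touch (4-connectivity).
    pairs = set(zip(lab[:, :-1].ravel(), lab[:, 1:].ravel()))
    pairs |= set(zip(lab[:-1, :].ravel(), lab[1:, :].ravel()))
    return {(i, j) for i, j in pairs if i != j}

while len(np.unique(labels)) > 2:                   # stop at the expected 2 regions
    values = {l: img[labels == l] for l in np.unique(labels)}

    def dissimilarity(pair):
        # Size-weighted squared distance between region means.
        p, q = values[pair[0]], values[pair[1]]
        return p.size * q.size / (p.size + q.size) * (p.mean() - q.mean()) ** 2

    i, j = min(neighbour_pairs(labels), key=dissimilarity)
    labels[labels == j] = i                         # merge the most similar pair
print("final number of regions:", len(np.unique(labels)))
```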

  1. Improved Cryopreservation of Human Umbilical Vein Endothelial Cells: A Systematic Approach

    NASA Astrophysics Data System (ADS)

    Sultani, A. Billal; Marquez-Curtis, Leah A.; Elliott, Janet A. W.; McGann, Locksley E.

    2016-10-01

    Cryopreservation of human umbilical vein endothelial cells (HUVECs) facilitated their commercial availability for use in vascular biology, tissue engineering and drug delivery research; however, the key variables in HUVEC cryopreservation have not been comprehensively studied. HUVECs are typically cryopreserved by cooling at 1 °C/min in the presence of 10% dimethyl sulfoxide (DMSO). We applied interrupted slow cooling (graded freezing) and interrupted rapid cooling with a hold time (two-step freezing) to identify where in the cooling process cryoinjury to HUVECs occurs. We found that linear cooling at 1 °C/min resulted in higher membrane integrities than linear cooling at 0.2 °C/min or nonlinear two-step freezing. DMSO addition procedures and compositions were also investigated. By combining hydroxyethyl starch with DMSO, HUVEC viability after cryopreservation was improved compared to measured viabilities of commercially available cryopreserved HUVECs and viabilities for HUVEC cryopreservation studies reported in the literature. Furthermore, HUVECs cryopreserved using our improved procedure showed high tube forming capability in a post-thaw angiogenesis assay, a standard indicator of endothelial cell function. As well as presenting superior cryopreservation procedures for HUVECs, the methods developed here can serve as a model to optimize the cryopreservation of other cells.

  2. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subject to three common operational constraints, related to the process variable, the manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability for an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes the operational constraints into account at the controller design stage and also provides useful insights into optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set, without recourse to complex optimization steps, are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
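
    For contrast with the paper's analytical route, the sketch below tunes PI gains for an unstable first-order process purely numerically, minimizing integrated squared error with penalty terms for the three constraint types. The plant numbers, constraint limits, and penalty weight are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np
from scipy.optimize import minimize

a, b, dt, T = 0.5, 1.0, 0.01, 10.0          # unstable pole at +0.5 rad/s
n_steps = int(T / dt)

def simulate(kp, ki):
    # Regulate an initial disturbance x(0) = 1 back to the origin.
    x, integ, u_prev = 1.0, 0.0, 0.0
    xs, us, dus = [], [], []
    for _ in range(n_steps):
        e = -x                              # setpoint is 0
        integ += e * dt
        u = kp * e + ki * integ
        xs.append(x)
        us.append(u)
        dus.append((u - u_prev) / dt)
        x += (a * x + b * u) * dt           # explicit Euler step of dx/dt = a*x + b*u
        x = float(np.clip(x, -1e6, 1e6))    # guard against divergence while exploring
        u_prev = u
    return np.array(xs), np.array(us), np.array(dus)

def cost(gains):
    x, u, du = simulate(*gains)
    violation = (np.maximum(np.abs(x) - 2.0, 0.0).sum()        # |x| <= 2
                 + np.maximum(np.abs(u) - 5.0, 0.0).sum()      # |u| <= 5
                 + np.maximum(np.abs(du) - 500.0, 0.0).sum())  # |du/dt| <= 500
    return np.sum(x ** 2) * dt + 100.0 * violation             # ISE plus penalties

res = minimize(cost, x0=[2.0, 1.0], method="Nelder-Mead")
print("tuned kp, ki:", res.x.round(3))
```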

  3. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. This procedure extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from irises showed that not all vector elements are equally relevant. Two characteristics determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the origin of that instability. This work separates the sources of instability into natural and encoding-induced, which allows each source to be investigated independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.
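
    A minimal sketch of the two steps, under the assumption that fragility can be proxied by filter-response magnitude, is given below: Gabor filtering of a normalized iris strip (via scikit-image), two-bit phase quantization, and masking of the weakest responses. The random "texture", the filter frequency, and the 25% threshold are illustrative, not the separately optimized thresholds of the paper.

```python
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(0)
strip = rng.random((32, 256))                 # stand-in for a normalized iris strip

real, imag = gabor(strip, frequency=0.15)     # step 1: complex Gabor filter response
code = np.stack([real > 0, imag > 0])         # step 2a: two-bit phase quantization

magnitude = np.hypot(real, imag)
stable = magnitude > np.quantile(magnitude, 0.25)  # step 2b: mask weakest 25% as fragile
print(f"usable bits: {2 * int(stable.sum())} of {code.size}")
```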

  4. Synthesis of nano-sized lithium cobalt oxide via a sol-gel method

    NASA Astrophysics Data System (ADS)

    Li, Guangfen; Zhang, Jing

    2012-07-01

    In this study, nano-structured LiCoO2 thin films were synthesized by coupling a sol-gel process with a spin-coating method, using polyacrylic acid (PAA) as the chelating agent. The optimized conditions for obtaining a better gel formulation and a subsequent homogeneous dense film were investigated by varying the calcination temperature, the molar mass of PAA, and the precursor molar ratios of PAA, lithium, and cobalt ions. The gel films were deposited on silicon substrate surfaces by a multi-step spin-coating process, either to increase the density of the gel film or to adjust the quantity of PAA in the film. The gel film was calcined by an optimized two-step heating procedure in order to obtain regular nano-structured LiCoO2 materials. Atomic force microscopy (AFM) and scanning electron microscopy (SEM) were used to analyze the crystallinity and the morphology of the films, respectively.

  5. Method for depleting BWRs using optimal control rod patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations.

  6. Reverse-Time Imaging Based on Full-Waveform Inverted Velocity Model for Nondestructive Testing of Heterogeneous Engineered Structures

    NASA Astrophysics Data System (ADS)

    Nguyen, L. T.; Modrak, R. T.; Saenger, E. H.; Tromp, J.

    2017-12-01

    Reverse-time migration (RTM) can reconstruct reflectors and scatterers by cross-correlating the source wavefield and the receiver wavefield, given a known velocity model of the background. In nondestructive testing, however, the engineered structure under inspection is often composed of layers of various materials, and the background material has been degraded non-uniformly by environmental or operational effects. On the other hand, ultrasonic waveform tomography based on the principles of full-waveform inversion (FWI) has succeeded in detecting anomalous features in engineered structures. But building a wave velocity model that captures small, high-contrast defects is difficult, because it requires computationally expensive high-frequency numerical wave simulations and an accurate understanding of large-scale background variations of the engineered structure. To reduce computational cost and improve the detection of small defects, a useful approach is to divide the waveform tomography procedure into two steps: first, a low-frequency model-building step aimed at recovering the background structure using FWI, and second, a high-frequency imaging step targeting defects using RTM. Through synthetic test cases, we show that the two-step procedure appears more promising in most cases than a single-step inversion. In particular, we find that the new workflow succeeds in the challenging scenario where the defect lies along a preexisting layer interface in a composite bridge deck, and in related experiments involving noisy data or inaccurate source parameters. The results reveal the potential of the new wavefield imaging method and encourage further developments in data processing, computational power, and the imaging workflow itself, so that the procedure can be applied efficiently to geometrically complex 3D solids and waveguides. Lastly, owing to the scale invariance of the elastic wave equation, this imaging procedure can be transferred to applications at regional scales as well.
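
    The zero-lag cross-correlation imaging condition at the heart of the RTM step, I(x) = Σ_t S(x, t) R(x, t), can be sketched as below. The wavefields here are synthetic travelling pulses arranged to coincide at the scatterer, rather than finite-difference solutions, so only the imaging condition itself is illustrated; the geometry and pulse width are arbitrary assumptions.

```python
import numpy as np

nx, nt, c = 200, 400, 1.0                      # grid points, time steps, velocity
x = np.arange(nx, dtype=float)                 # dx = dt = 1 for simplicity
t = np.arange(nt, dtype=float)
scatterer = 120                                # true reflector location (sample index)

pulse = lambda arg: np.exp(-0.5 * (arg / 4.0) ** 2)

# Source wavefield: a pulse travelling right from x = 0.
S = pulse(x[None, :] - c * t[:, None])
# Reverse-time-extrapolated receiver wavefield: the scattered arrival travels
# back toward the scatterer, so both fields coincide there at the scattering time.
R = pulse((x[None, :] - scatterer) + c * (t[:, None] - scatterer / c))

image = (S * R).sum(axis=0)                    # I(x) = sum_t S(x,t) * R(x,t)
print("imaged reflector near sample", int(np.argmax(image)))   # ~120
```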

  7. Improved methodologies for the preparation of highly substituted pyridines.

    PubMed

    Fernández Sainz, Yolanda; Raw, Steven A; Taylor, Richard J K

    2005-11-25

    Two separate strategies have been developed for the preparation of highly substituted pyridines from 1,2,4-triazines via the inverse-electron-demand Diels-Alder reaction: a microwave-promoted, solvent-free procedure and a tethered imine-enamine (TIE) approach. Both routes avoid the need for a discrete aromatization step and offer significant advantages over the classical methods, giving a wide variety of tri-, tetra-, and penta-substituted pyridines in high, optimized yields.

  8. Application of multivariate techniques in the optimization of a procedure for the determination of bioavailable concentrations of Se and As in estuarine sediments by ICP OES using a concomitant metals analyzer as a hydride generator.

    PubMed

    Lopes, Watson da Luz; Santelli, Ricardo Erthal; Oliveira, Eliane Padua; de Carvalho, Maria de Fátima Batista; Bezerra, Marcos Almeida

    2009-10-15

    A procedure has been developed for the determination of bioavailable concentrations of selenium and arsenic in estuarine sediments employing inductively coupled plasma optical emission spectrometry (ICP OES), using a concomitant metals analyzer device to perform hydride generation. The optimization of the hydride generation was done in two steps: a two-level factorial design for preliminary evaluation of the studied factors, and a Doehlert design to locate the optimal experimental conditions for analysis. Interferences of transition metal ions (Cd²⁺, Co²⁺, Cu²⁺, Fe³⁺ and Ni²⁺) with the selenium and arsenic signals were minimized by using higher hydrochloric acid concentrations. In this way, the procedure allowed the determination of selenium and arsenic in sediments with detection limits of 25 and 30 μg kg⁻¹, respectively, assuming a 50-fold sample dilution (0.5 g sample extracted to a 25 mL final volume). The precision, expressed as a relative standard deviation (% RSD, n = 10), was 0.2% for both selenium and arsenic in 200 μg L⁻¹ solutions, which corresponds to 10 μg g⁻¹ in sediment samples after acid extraction. Applying the proposed procedure, linear ranges of 0.08-10 and 0.10-10 μg g⁻¹ were obtained for selenium and arsenic, respectively. The developed procedure was validated by the analysis of two certified reference materials: industrial sludge (NIST 2782) and river sediment (NIST 8704). The results were in agreement with the certified values. The developed procedure was applied to evaluate the bioavailability of both elements in four sediment certified reference materials, for which there are no certified values for the bioavailable fractions, and also in estuarine sediment samples collected at several sites of Guanabara Bay, an impacted environment in Rio de Janeiro, Brazil.
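
    The first, screening step can be sketched as a two-level full factorial design with main effects estimated by contrasts, as below. The factor names, coded levels, and the simulated response are illustrative stand-ins for the actual hydride-generation experiments; the follow-up Doehlert design is not reproduced.

```python
import itertools
import numpy as np

factors = ["HCl (mol/L)", "NaBH4 (% m/v)", "sample flow (mL/min)"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 2^3 coded runs

def measure(run):
    # Stand-in for the ICP OES emission signal at each coded factor setting.
    hcl, nabh4, flow = run
    return 10.0 + 3.0 * hcl + 1.5 * nabh4 - 0.5 * flow + 0.8 * hcl * nabh4

y = np.array([measure(run) for run in design])

for name, col in zip(factors, design.T):
    effect = y[col == 1].mean() - y[col == -1].mean()   # main-effect contrast
    print(f"main effect of {name}: {effect:+.2f}")
```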

  9. Solution of elliptic PDEs by fast Poisson solvers using a local relaxation factor

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung

    1986-01-01

    A large class of two- and three-dimensional, nonseparable elliptic partial differential equations (PDEs) is solved by means of novel one-step (D'Yakanov-Gunn) and two-step (accelerated one-step) iterative procedures, using a local, discrete Fourier analysis. In addition to being easily implemented and applicable to a variety of boundary conditions, these procedures are computationally efficient on the basis of numerical comparisons with other established methods, which lack the present procedures' (1) insensitivity to grid cell size and aspect ratio, and (2) ease of convergence-rate estimation from the coefficients of the PDE being solved. The two-step procedure is numerically demonstrated to outperform the one-step procedure in the case of PDEs with variable coefficients.
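
    The flavor of the acceleration can be conveyed with a generic sketch: a constant-coefficient operator plays the role of the fast solver inside a one-step preconditioned iteration for a variable-coefficient 1-D problem, and the two-step variant adds a momentum term in the previous iterate. The coefficients and the momentum parameter below are illustrative and hand-tuned, not the paper's locally optimized relaxation factors.

```python
import numpy as np

n = 64
a = 1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n))              # variable coefficient in [1, 1.5]
A = np.diag(2 * a) - np.diag(a[:-1], 1) - np.diag(a[1:], -1)    # variable-coefficient operator
M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)            # constant-coefficient "fast solver"
f = np.ones(n)

def run(beta, iters=30):
    u = np.zeros(n)
    u_prev = u.copy()
    for _ in range(iters):
        corr = np.linalg.solve(M, f - A @ u)            # one application of the fast solver
        u, u_prev = u + corr + beta * (u - u_prev), u   # beta = 0 recovers the one-step scheme
    return np.linalg.norm(f - A @ u)

print("one-step residual:", run(beta=0.0))
print("two-step residual:", run(beta=0.05))             # beta hand-tuned for this spectrum
```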

  10. Instructional Efficiency of Changing Cognitive Load in an Out-of-School Laboratory

    NASA Astrophysics Data System (ADS)

    Scharfenberg, Franz-Josef; Bogner, Franz X.

    2010-04-01

    Our research objective focused on monitoring students' mental effort and cognitive achievement to unveil potential effects of an instructional change in an out-of-school laboratory offering gene technology modules. Altogether, 231 students (12th graders) attended our day-long hands-on module. Within a quasi-experimental design, a treatment group followed the newly developed two-step approach derived from cognitive load theory while a control group applied experimentation in a conventional one-step mode. The difference consisted of additional focused discussions combined with noting students' ideas (Step 1) prior to starting any experimental procedure (Step 2). We monitored mental effort (nine times during the teaching unit) and cognitive achievement (in a pre-post-design with follow-up test). The treatment demonstrated a change in instructional efficiency (by combining mental effort and cognitive achievement data), especially for intrinsically high-loaded students. Conclusions for optimizing individual cognitive load in science teaching were drawn.

  11. A new one-step procedure for pulmonary valve implantation of the melody valve: Simultaneous prestenting and valve implantation.

    PubMed

    Boudjemline, Younes

    2018-01-01

    To describe a new modification, the one-step procedure, that allows interventionists to prestent and implant a Melody valve simultaneously. Percutaneous pulmonary valve implantation (PPVI) is the standard of care for managing patients with a dysfunctional right ventricular outflow tract, and the approach is standardized. Patients undergoing PPVI using the one-step procedure were identified in our database. Procedural data and radiation exposure were compared to those in a matched group of patients who underwent PPVI using the conventional two-step procedure. Between January 2016 and January 2017, PPVI was performed in 27 patients (median age/range, 19.1/10-55 years) using the one-step procedure, involving manual crimping of one to three bare metal stents over the Melody valve. The stent and Melody valve were delivered successfully using the Ensemble delivery system. No complications occurred. All patients had excellent hemodynamic results (median/range post-PPVI right ventricular to pulmonary artery gradient, 9/0-20 mmHg). Valve function was excellent. Median procedural and fluoroscopic times were 56 and 10.2 min, respectively, which differed significantly from those of the two-step procedure group. Similarly, the dose area product (DAP) and radiation time were statistically lower in the one-step group than in the two-step group (P < 0.001 for all variables). After a median follow-up of 8 months (range, 3-14.7), no patient had undergone reintervention, and no device dysfunction was observed. The one-step procedure is a safe modification that allows interventionists to prestent and implant the Melody valve simultaneously. It significantly reduces procedural and fluoroscopic times as well as radiation exposure. © 2017 Wiley Periodicals, Inc.

  12. Technical pitfalls and tips for the valve-in-valve procedure

    PubMed Central

    2017-01-01

    Transcatheter aortic valve implantation (TAVI) has emerged as a viable treatment modality for patients with severe aortic valve stenosis and multiple co-morbidities. More recent indications include the use of transcatheter heart valves (THV) to treat degenerated bioprosthetic surgical heart valves (SHV), which are failing due to stenosis or regurgitation. Valve-in-valve (VIV) procedures in the aortic position have been performed with a variety of THV devices, although the balloon-expandable SAPIEN valve platform (Edwards Lifesciences Ltd, Irvine, CA, USA) and the self-expandable CoreValve platform (Medtronic Inc., MN, USA) have been used in the majority of patients. VIV treatment is appealing as it is less invasive than conventional surgery, but optimal patient selection is vital to avoid complications such as malposition, residual high gradients and coronary obstruction. To minimize the risk of complications, thorough procedural planning is critical. The first step is identification of the degenerated SHV, including its model, size, and fluoroscopic appearance. Although the label size and stent internal diameter (ID) are provided by the manufacturer, it is important to note the true ID. The true ID is the ID of a SHV after the leaflets are mounted, and it helps determine the optimal size of the THV. The second step is to determine the type and size of the THV. Although in the majority of cases this is determined by user preference, in certain situations one THV may be more suitable than another. As the procedure is performed under fluoroscopy, the third step is to become familiar with the fluoroscopic appearance of both the SHV and the THV. This helps to determine the landmarks for optimal positioning, which in turn determines the gradients and fixation. The fourth step is to assess the risk of coronary obstruction. This is performed with either aortic root angiography or ECG-gated computerised tomography (CT). Finally, the route of approach must be carefully planned. Once these aspects are addressed, the procedure can be performed efficiently with a low risk of complications. PMID:29062752

  13. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
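
    For a two-component normal mixture, the procedure described here can be sketched as the familiar EM-style fixed-point update generalized with a step size w, where w = 1 recovers the plain successive-approximations iteration and 0 < w < 2 is the range for which local convergence is established. The sketch below re-estimates only the component means (mixing weights and variances held fixed) on synthetic data; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 1.0, 400), rng.normal(3.0, 1.0, 600)])
pi, sigma = np.array([0.4, 0.6]), 1.0          # mixing weights and std held fixed

def em_mean_update(mu):
    # E-step: posterior responsibilities; M-step: responsibility-weighted means.
    dens = pi * np.exp(-0.5 * ((data[:, None] - mu) / sigma) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    return (resp * data[:, None]).sum(axis=0) / resp.sum(axis=0)

def iterate(w, mu0=(-1.0, 1.0), n_iter=50):
    mu = np.asarray(mu0, dtype=float)
    for _ in range(n_iter):
        mu = mu + w * (em_mean_update(mu) - mu)   # w = 1 is the plain iteration
    return mu

for w in (0.5, 1.0, 1.5):
    print(f"step size {w}: means -> {iterate(w).round(3)}")   # all near (-2, 3)
```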

  14. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    NASA Technical Reports Server (NTRS)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  15. An optimized immunohistochemistry protocol for detecting the guidance cue Netrin-1 in neural tissue.

    PubMed

    Salameh, Samer; Nouel, Dominique; Flores, Cecilia; Hoops, Daniel

    2018-01-01

    Netrin-1, an axon guidance protein, is difficult to detect using immunohistochemistry. We performed a multi-step, blinded, and controlled protocol optimization procedure to establish an efficient and effective fluorescent immunohistochemistry protocol for characterizing Netrin-1 expression. Coronal mouse brain sections were used to test numerous antigen retrieval methods and combinations thereof in order to optimize the stain quality of a commercially available Netrin-1 antibody. Stain quality was evaluated by experienced neuroanatomists for two criteria: signal intensity and signal-to-noise ratio. After five rounds of testing protocol variants, we established a modified immunohistochemistry protocol that produced a Netrin-1 signal with good signal intensity and a high signal-to-noise ratio. The key protocol modifications are as follows:
    • Use phosphate buffer (PB) as the blocking solution solvent.
    • Use 1% sodium dodecyl sulfate (SDS) treatment for antigen retrieval.
    The original protocol was optimized for use with the Netrin-1 antibody produced by Novus Biologicals. However, we subsequently further modified the protocol to work with the antibody produced by Abcam. The Abcam protocol uses PBS as the blocking solution solvent and adds a citrate buffer antigen retrieval step.

  16. Optimizing How We Teach Research Methods

    ERIC Educational Resources Information Center

    Cvancara, Kristen E.

    2017-01-01

    Courses: Research Methods (undergraduate or graduate level). Objective: The aim of this exercise is to optimize the ability for students to integrate an understanding of various methodologies across research paradigms within a 15-week semester, including a review of procedural steps and experiential learning activities to practice each method, a…

  17. Practical optimal flight control system design for helicopter aircraft. Volume 2: Software user's guide

    NASA Technical Reports Server (NTRS)

    Riedel, S. A.

    1979-01-01

    A method by which modern and classical control theory techniques may be integrated in a synergistic fashion and used in the design of practical flight control systems is presented. A general procedure is developed, and several illustrative examples are included. Emphasis is placed not only on the synthesis of the design, but on the assessment of the results as well. The first step is to establish the differences, distinguishing characteristics and connections between the modern and classical control theory approaches. Ultimately, this uncovers a relationship between bandwidth goals familiar in classical control and cost function weights in the equivalent optimal system. In order to obtain a practical optimal solution, it is also necessary to formulate the problem very carefully, and each choice of state, measurement and output variable must be judiciously considered. Once design goals are established and problem formulation completed, the control system is synthesized in a straightforward manner. Three steps are involved: filter-observer solution, regulator solution, and the combination of those two into the controller. Assessment of the controller permits an examination and expansion of the synthesis results.
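
    The bandwidth-weight relationship mentioned above can be made concrete with a small LQR sweep: as the state weight q grows, the closed-loop poles move outward and a bandwidth-like natural frequency rises. The double-integrator plant and the weight values below are illustrative, not the report's helicopter models.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])        # double-integrator plant
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

for q in (0.1, 1.0, 10.0, 100.0):
    Q = np.diag([q, 0.0])                     # penalize position error only
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)           # optimal state-feedback gain
    poles = np.linalg.eigvals(A - B @ K)
    wn = float(np.sqrt(np.prod(np.abs(poles))))   # natural frequency of the pole pair
    print(f"q = {q:6.1f}: K = {K.ravel().round(3)}, wn = {wn:.3f} rad/s")
```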

  18. Comparison of Methods for Demonstrating Passage of Time When Using Computer-Based Video Prompting

    ERIC Educational Resources Information Center

    Mechling, Linda C.; Bryant, Kathryn J.; Spencer, Galen P.; Ayres, Kevin M.

    2015-01-01

    Two different video-based procedures for presenting the passage of time (how long a step lasts) were examined. The two procedures were presented within the framework of video prompting to promote independent multi-step task completion across four young adults with moderate intellectual disability. The two procedures demonstrating passage of the…

  19. The partitioning of copper among selected phases of geologic media of two porphyry copper districts, Puerto Rico

    USGS Publications Warehouse

    Learned, R.E.; Chao, T.T.; Sanzolone, R.F.

    1981-01-01

    In experiments designed to determine the manner in which copper is partitioned among selected phases that constitute geologic media, we applied the five-step sequential extraction procedure of Chao and Theobald to the analysis of drill core, soils, and stream sediments of the Rio Vivi and Rio Tanama porphyry copper districts of Puerto Rico. The extraction procedure affords a convenient means of determining the trace-metal content of the following fractions: (1) Mn oxides and "reactive" Fe oxides; (2) "amorphous" Fe oxides; (3) "crystalline" Fe oxides; (4) sulfides and magnetite; and (5) silicates. An additional extraction between steps (1) and (2) was performed to determine organic-related copper in stream sediments. The experimental results indicate that the apportionment of copper among the phases constituting geologic media is a function of the geochemical environment. Distinctive partitioning patterns were derived from the analysis of drill core from each of three geochemical zones: (a) the supergene zone of oxidation; (b) the supergene zone of enrichment; and (c) the hypogene zone; and similarly, from the analysis of (d) soils on a weakly leached capping, (e) soils on a strongly leached capping, and (f) active stream sediment. The experimental results also show that geochemical contrasts (anomaly-to-background ratios) vary widely among the five fractions of each sampling medium investigated, and that at least one fraction of each medium provides substantially stronger contrast than does the bulk medium. Fraction (1) provides optimal contrast for stream sediments of the district; fraction (2) provides optimal contrast for soils on a weakly leached capping; fraction (3) provides optimal contrast for soils on a strongly leached capping. Selective extraction procedures appear to have important applications to the orientation and interpretive stages of geochemical exploration. Further investigation and testing of a similar nature are recommended. © 1981.

  20. Transformer miniaturization for transcutaneous current/voltage pulse applications.

    PubMed

    Kolen, P T

    1999-05-01

    A general procedure for the design of a miniaturized step-up transformer to be used in the context of surface-electrode-based current/voltage pulse generation is presented. It has been shown that the optimum secondary current pulse width is 4.5 tau, where tau is the time constant of the pulse-forming network associated with the transformer/electrode interaction. This criterion has been shown to produce the highest peak-to-average current ratio for the secondary current pulse. The design procedure allows for the calculation of the optimum turns ratio, primary turns, and secondary turns for a given electrode load/tissue and given magnetic core parameters. Two design examples for transformer optimization are presented.

  1. Topology optimization of a gas-turbine engine part

    NASA Astrophysics Data System (ADS)

    Faskhutdinov, R. N.; Dubrovskaya, A. S.; Dongauzer, K. A.; Maksimov, P. V.; Trufanov, N. A.

    2017-02-01

    One of the key goals of the aerospace industry is reduction of gas turbine engine weight. The solution of this task consists in designing gas turbine engine components with reduced weight that retain their functional capabilities. Topology optimization of the part geometry leads to an efficient weight reduction. A complex geometry can be achieved in a single operation with Selective Laser Melting technology; notably, the complexity of the structural features does not affect the product cost in this case. Let us consider a step-by-step procedure of topology optimization by the example of a gas turbine engine part.
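
    A topology optimization loop of this kind alternates a finite element solve with a density update. The Python sketch below shows only a generic optimality-criteria density update, with placeholder sensitivities standing in for FE-derived compliance gradients; it is not the tool chain used in the record.

      import numpy as np

      def oc_update(x, dc, dv, volfrac, move=0.2):
          """One optimality-criteria update of element densities x;
          dc: compliance sensitivities (<= 0), dv: volume sensitivities."""
          lo, hi = 0.0, 1e9
          while hi - lo > 1e-6:             # bisection on the Lagrange multiplier
              lam = 0.5 * (lo + hi)
              xn = x * np.sqrt(np.maximum(-dc, 0.0) / (lam * dv))
              xn = np.clip(xn, np.maximum(x - move, 1e-3), np.minimum(x + move, 1.0))
              if xn.mean() > volfrac:
                  lo = lam                  # too much material: raise multiplier
              else:
                  hi = lam
          return xn

      x = np.full(100, 0.5)                 # initial uniform density field
      dc = -np.linspace(2.0, 0.1, 100)      # placeholder FE sensitivities
      x = oc_update(x, dc, np.ones_like(x), volfrac=0.5)
      print(round(x.mean(), 3))             # the volume fraction is preserved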

  2. Cascade Optimization Strategy for Aircraft and Air-Breathing Propulsion System Concepts

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Lavelle, Thomas M.; Hopkins, Dale A.; Coroneos, Rula M.

    1996-01-01

    Design optimization for subsonic and supersonic aircraft and for air-breathing propulsion engine concepts has been accomplished by soft-coupling the Flight Optimization System (FLOPS) and the NASA Engine Performance Program analyzer (NEPP) to the NASA Lewis multidisciplinary optimization tool COMETBOARDS. Aircraft and engine design problems, with their associated constraints and design variables, were cast as nonlinear optimization problems with aircraft weight and engine thrust as the respective merit functions. Because of the diversity of constraint types and the overall distortion of the design space, the most reliable single optimization algorithm available in COMETBOARDS could not produce a satisfactory feasible optimum solution. Some of COMETBOARDS' unique features, which include a cascade strategy, variable and constraint formulations, and scaling devised especially for difficult multidisciplinary applications, successfully optimized the performance of both aircraft and engines. The cascade method has two principal steps: in the first, the solution initiates from a user-specified design and optimizer; in the second, the optimum design obtained in the first step, with some random perturbation, is used to begin the next specified optimizer. The second step is repeated for a specified sequence of optimizers or until a successful solution of the problem is achieved. A successful solution should satisfy the specified convergence criteria and have several active constraints but no violated constraints. The cascade strategy available in the combined COMETBOARDS, FLOPS, and NEPP design tool converges to the same global optimum solution even when it starts from different design points. This reliable and robust design tool eliminates manual intervention in the design of aircraft and of air-breathing propulsion engines and eases the cycle analysis procedures. The combined code is also much easier to use, which is an added benefit. This paper describes COMETBOARDS and its cascade strategy and illustrates the capability of the combined design tool through the optimization of a subsonic aircraft and a high-bypass-turbofan wave-rotor-topped engine.
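
    The two-step cascade itself is easy to emulate with general-purpose tools. A minimal Python sketch (SciPy optimizers on a stand-in merit function, not COMETBOARDS or the FLOPS/NEPP analyzers):

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)

      def merit(x):                           # stand-in for weight or thrust
          return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

      def cascade(f, x0, optimizers=("Nelder-Mead", "Powell", "BFGS"), scale=0.05):
          x = np.asarray(x0, float)
          for method in optimizers:           # each stage starts from the randomly
              x = minimize(f, x, method=method).x   # perturbed optimum of the last
              x = x + scale * rng.standard_normal(x.size)
          return minimize(f, x, method=optimizers[-1])  # final clean solve

      res = cascade(merit, [-1.2, 1.0])
      print(res.x, res.fun)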

  3. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through the iterated computation of solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
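
    Penalty-function coordination of decomposed subproblems can be illustrated on a toy: two one-dimensional subproblems coupled by a consistency constraint x = y, each solved in closed form under a quadratic penalty whose weight grows over iterations. This Python fragment is only a schematic analogue of the TA decomposition, which is far richer.

      # Subproblem objectives (x - 1)^2 and (y - 3)^2; coupling constraint x = y.
      # argmin_x (x - 1)^2 + rho/2 (x - y)^2 has the closed form (2 + rho*y)/(2 + rho).
      x, y, rho = 0.0, 0.0, 1.0
      for _ in range(30):
          x = (2 * 1.0 + rho * y) / (2 + rho)
          y = (2 * 3.0 + rho * x) / (2 + rho)
          rho *= 1.5                   # gradually enforce consistency
      print(round(x, 3), round(y, 3))  # both approach 2.0, the coupled optimum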

  4. Multi-objective optimization of solid waste flows: environmentally sustainable strategies for municipalities.

    PubMed

    Minciardi, Riccardo; Paolucci, Massimo; Robba, Michela; Sacile, Roberto

    2008-11-01

    An approach to sustainable municipal solid waste (MSW) management is presented, with the aim of supporting the decision on the optimal flows of solid waste sent to landfill, recycling and different types of treatment plants, whose sizes are also decision variables. This problem is modeled with a non-linear, multi-objective formulation. Specifically, four objectives to be minimized have been taken into account, which are related to economic costs, unrecycled waste, sanitary landfill disposal and environmental impact (incinerator emissions). An interactive reference point procedure has been developed to support decision making; such methods are considered appropriate for multi-objective decision problems in environmental applications. In addition, interactive methods are generally preferred by decision makers, as they can be directly involved in the various steps of the decision process. Some results deriving from the application of the proposed procedure are presented. The application of the procedure is exemplified by considering the interaction with two different decision makers who are assumed to be in charge of planning the MSW system in the municipality of Genova (Italy).
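
    Reference-point procedures minimize an achievement scalarizing function that measures how far the objectives are from the decision maker's aspiration levels. A schematic Python sketch with two invented objectives (not the Genova model, which has four objectives and plant-sizing variables):

      import numpy as np
      from scipy.optimize import minimize

      cost      = lambda x: 3.0 * x[0] + 1.0 * x[1]    # toy objectives over two
      emissions = lambda x: 0.2 * x[0] + 2.0 * x[1]    # waste flows

      def achievement(x, ref, w=np.array([1.0, 1.0])):
          f = np.array([cost(x), emissions(x)])
          # Chebyshev achievement function plus a small augmentation term
          return np.max(w * (f - ref)) + 1e-3 * np.sum(w * (f - ref))

      ref = np.array([40.0, 15.0])                     # aspiration point
      cons = {"type": "eq", "fun": lambda x: x.sum() - 20.0}  # all waste allocated
      res = minimize(achievement, x0=[10.0, 10.0], args=(ref,),
                     bounds=[(0, 20), (0, 20)], constraints=[cons])
      print(res.x, cost(res.x), emissions(res.x))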

  5. Two-step voltage dual electromembrane extraction: A new approach to simultaneous extraction of acidic and basic drugs.

    PubMed

    Asadi, Sakine; Nojavan, Saeed

    2016-06-07

    In the present work, acidic and basic drugs were simultaneously extracted by a novel, highly efficient method herein referred to as two-step voltage dual electromembrane extraction (TSV-DEME). After optimization of the effective parameters, such as the composition of the organic liquid membrane, the pH values of the donor and acceptor solutions, and the voltage and duration of each step, the figures of merit of the method were investigated in pure water, human plasma, wastewater, and breast milk samples. Simultaneous extraction of acidic and basic drugs was done by applying potentials of 150 V and 400 V for 6 min and 19 min as the first and second steps, respectively. The model compounds were extracted from 4 mL of sample solution (pH = 6) into 20 μL of each acceptor solution (32 mM NaOH for acidic drugs and 32 mM HCl for basic drugs). 1-Octanol was immobilized within the pores of a porous polypropylene hollow fiber as the supported liquid membrane (SLM) for acidic drugs, and 2-ethylhexanol as the SLM for basic drugs. The proposed TSV-DEME technique provided good linearity, with correlation coefficients ranging from 0.993 to 0.998 over a concentration range of 1-1000 ng mL(-1). The limits of detection of the drugs ranged from 0.3 to 1.5 ng mL(-1), while the corresponding repeatability ranged from 7.7 to 15.5% (n = 4). The proposed method was further compared to simple dual electromembrane extraction (DEME), indicating significantly higher recoveries for the TSV-DEME procedure (38.1-68%) compared to those of the simple DEME procedure (17.7-46%). Finally, the optimized TSV-DEME was applied to extract and quantify the model compounds in breast milk, wastewater, and plasma samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. On salesmen and tourists: Two-step optimization in deterministic foragers

    NASA Astrophysics Data System (ADS)

    Maya, Miguel; Miramontes, Octavio; Boyer, Denis

    2017-02-01

    We explore a two-step optimization problem in random environments, the so-called restaurant-coffee shop problem, where a walker aims at visiting the nearest and better restaurant in an area and then moving to the nearest and better coffee shop. This is an extension of the Tourist Problem, a one-step optimization dynamics that can be viewed as a deterministic walk in a random medium. A certain amount of heterogeneity in the values of the resources to be visited causes the emergence of power-law distributions for the steps performed by the walker, similar to a Lévy flight. The fluctuations of the step lengths tend to decrease as a consequence of multiple-step planning, thus reducing the foraging uncertainty. We find that the first and second steps of each planned movement play very different roles in heterogeneous environments. The two-step process improves the foraging efficiency only slightly compared to the one-step optimization, at a much higher computational cost. We discuss the implications of these findings for animal and human mobility, in particular in relation to the computational effort that informed agents should deploy to solve search problems.
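
    The gap between one-step and two-step planning is easy to reproduce numerically: a greedy walker picks the nearest restaurant and then the nearest coffee shop, while a planner minimizes the total length of the pair jointly. The Python sketch below uses plain nearest-distance rules and ignores the resource values, so it only illustrates the planning-depth effect, not the full model of the record.

      import numpy as np

      rng = np.random.default_rng(0)
      restaurants = rng.uniform(0, 1, (200, 2))
      coffeeshops = rng.uniform(0, 1, (200, 2))
      walker = np.array([0.5, 0.5])

      # One-step (greedy): nearest restaurant first, then nearest coffee shop
      dr = np.linalg.norm(restaurants - walker, axis=1)
      r = restaurants[dr.argmin()]
      greedy = dr.min() + np.linalg.norm(coffeeshops - r, axis=1).min()

      # Two-step: choose the restaurant/coffee-shop pair jointly
      leg1 = np.linalg.norm(restaurants - walker, axis=1)
      leg2 = np.linalg.norm(restaurants[:, None, :] - coffeeshops[None, :, :], axis=2)
      planned = (leg1[:, None] + leg2).min()

      print(f"greedy: {greedy:.3f}  planned: {planned:.3f}")  # planned <= greedy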

  7. Biologic considerations regarding the one and two step procedures in the management of patients with invasive carcinoma of the breast.

    PubMed

    Fisher, E R; Sass, R; Fisher, B

    1985-09-01

    Investigation of the biologic significance of delay between biopsy and mastectomy was performed in women with invasive carcinoma of the breast in protocol four of the NSABP. Since the period of delay was two weeks or less in approximately 75 per cent, no comment concerning the possible effects of longer periods can be made. Life table analyses failed to reveal any difference in ten year survival rates between patients undergoing radical mastectomy management by the one and two step procedures. Similarly, no difference in adjusted ten year survival rate was observed between women managed by the two step procedure who did or did not have residual tumor identified in the mastectomy specimen after the first step or biopsy. Importantly, the clinical or pathologic stages, sizes of tumor and histologic grades were similar in women managed by the one and two step procedures, minimizing selection bias. The material used also allowed for study of the possible causative role of biopsy of the breast in the development of sinus histiocytosis in regional axillary lymph nodes. No difference in the degree or types of this nodal reaction could be discerned in the lymph nodes of the mastectomy specimens obtained from patients who had undergone the one and two step procedures. This finding indicates that nodal sinus histiocytosis is indeed related to the neoplastic process, albeit in an undefined manner, rather than to the trauma of biopsy per se, as has been suggested. These results do not invalidate the use of the one step procedure in the management of patients with carcinoma of the breast. Indeed, it is highly likely that it will be commonly used now that breast-conserving operations appear to represent a viable alternative modality for the primary surgical treatment of carcinoma of the breast. Yet, it is apparent that the one step procedure will be performed for technical and practical rather than biologic reasons.

  8. Electrostatic design of protein-protein association rates.

    PubMed

    Schreiber, Gideon; Shaul, Yossi; Gottschalk, Kay E

    2006-01-01

    De novo design and redesign of proteins and protein complexes have made promising progress in recent years. Here, we give an overview of how to use available computer-based tools to design proteins to bind faster and tighter to their protein-complex partner through electrostatic optimization between the two proteins. Electrostatic optimization is possible because of the simple relation between the Debye-Hückel energy of interaction between a pair of proteins and their rate of association. This can be used for rapid, structure-based calculations of the electrostatic attraction between the two proteins in the complex. Using these principles, we developed two computer programs that predict the change in k(on), and as such the affinity, on introducing charged mutations. The two programs have a web interface that is available at www.weizmann.ac.il/home/bcges/PARE.html and http://bip.weizmann.ac.il/hypare. When mutations leading to charge optimization are introduced outside the physical binding site, the rate of dissociation is unchanged and therefore the change in k(on) parallels that of the affinity. This design method was evaluated on a number of different protein complexes, resulting in binding rates and affinities hundreds of fold faster and tighter compared to wild type. In this chapter, we demonstrate the procedure and go step by step over the methodology of using these programs for protein-association design. Finally, the way to easily implement the principle of electrostatic design for any protein complex of choice is shown.

  9. Determination of the clean-up efficiency of the solid-phase extraction of rosemary extracts: Application of full-factorial design in hyphenation with Gaussian peak fit function.

    PubMed

    Meischl, Florian; Kirchler, Christian Günter; Jäger, Michael Andreas; Huck, Christian Wolfgang; Rainer, Matthias

    2018-02-01

    We present a novel method for the quantitative determination of the clean-up efficiency to provide a calculated parameter for peak purity through iterative fitting in conjunction with design of experiments. Rosemary extracts were used and analyzed before and after solid-phase extraction using a self-fabricated mixed-mode sorbent based on poly(N-vinylimidazole/ethylene glycol dimethacrylate). Optimization was performed by variation of washing steps using a full three-level factorial design and response surface methodology. Separation efficiency of rosmarinic acid from interfering compounds was calculated using an iterative fit of Gaussian-like signals and quantifications were performed by the separate integration of the two interfering peak areas. Results and recoveries were analyzed using Design-Expert® software and revealed significant differences between the washing steps. Optimized parameters were considered and used for all further experiments. Furthermore, the solid-phase extraction procedure was tested and compared with commercial available sorbents. In contrast to generic protocols of the manufacturers, the optimized procedure showed excellent recoveries and clean-up rates for the polymer with ion exchange properties. Finally, rosemary extracts from different manufacturing areas and application types were studied to verify the developed method for its applicability. The cleaned-up extracts were analyzed by liquid chromatography with tandem mass spectrometry for detailed compound evaluation to exclude any interference from coeluting molecules. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
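
    The core of the peak-purity calculation, fitting two overlapping Gaussian-like signals and integrating their areas separately, can be sketched in a few lines of Python on synthetic data (peak positions and widths are invented):

      import numpy as np
      from scipy.optimize import curve_fit

      gauss = lambda t, a, mu, s: a * np.exp(-0.5 * ((t - mu) / s)**2)
      def two_gauss(t, a1, m1, s1, a2, m2, s2):
          return gauss(t, a1, m1, s1) + gauss(t, a2, m2, s2)

      # Synthetic chromatogram: analyte peak overlapped by an interferent
      t = np.linspace(0, 10, 500)
      y = two_gauss(t, 1.0, 4.0, 0.4, 0.5, 5.0, 0.6) \
          + 0.01 * np.random.default_rng(0).standard_normal(t.size)

      p, _ = curve_fit(two_gauss, t, y, p0=[1, 4, 0.5, 0.5, 5, 0.5])
      area1 = p[0] * abs(p[2]) * np.sqrt(2 * np.pi)  # Gaussian area a*s*sqrt(2*pi)
      area2 = p[3] * abs(p[5]) * np.sqrt(2 * np.pi)
      print("analyte peak purity: %.3f" % (area1 / (area1 + area2)))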

  10. A preconcentration system for determination of copper and nickel in water and food samples employing flame atomic absorption spectrometry.

    PubMed

    Tuzen, Mustafa; Soylak, Mustafa; Citak, Demirhan; Ferreira, Hadla S; Korn, Maria G A; Bezerra, Marcos A

    2009-03-15

    A separation/preconcentration procedure using solid phase extraction has been proposed for the flame atomic absorption spectrometric determination of copper and nickel at trace level in food samples. The solid phase is Dowex Optipore SD-2 resin contained in a minicolumn, where the analyte ions are sorbed as 5-methyl-4-(2-thiazolylazo) resorcinol chelates. After elution using 1 mol L(-1) nitric acid solution, the analytes are determined employing flame atomic absorption spectrometry. The optimization step was performed using a full two-level factorial design, and the variables studied were pH, reagent concentration (RC) and amount of resin on the column (AR). Under the experimental conditions established in the optimization step, the procedure allows the determination of copper and nickel with limits of detection of 1.03 and 1.90 microg L(-1), respectively, and precisions of 7 and 8% at copper and nickel concentrations of 200 microg L(-1). The effect of matrix ions was also evaluated. The accuracy was confirmed by analyzing the following certified reference materials: NIST SRM 1515 Apple leaves and GBW 07603 Aquatic and Terrestrial Biological Products. The developed method was successfully applied for the determination of copper and nickel in real samples including human hair, chicken meat, black tea and canned fish.
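
    In a full two-level factorial design, main effects are simple contrasts of the coded design matrix with the measured responses. A minimal Python sketch with invented recovery values (the actual responses are in the paper):

      import numpy as np
      from itertools import product

      # Coded 2^3 design for pH, reagent concentration (RC), amount of resin (AR)
      design = np.array(list(product([-1, 1], repeat=3)), dtype=float)
      recovery = np.array([62, 70, 75, 88, 60, 73, 78, 95], dtype=float)  # invented

      # Main effect = mean(response at +1) - mean(response at -1)
      effects = design.T @ recovery / (len(recovery) / 2)
      for name, e in zip(["pH", "RC", "AR"], effects):
          print(f"main effect of {name}: {e:+.1f}")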

  11. Optimal design of an alignment-free two-DOF rehabilitation robot for the shoulder complex.

    PubMed

    Galinski, Daniel; Sapin, Julien; Dehez, Bruno

    2013-06-01

    This paper presents the optimal design of an alignment-free exoskeleton for the rehabilitation of the shoulder complex. The robot structure consists of two actuated joints and is linked to the arm through passive degrees of freedom (DOFs) to drive the flexion-extension and abduction-adduction movements of the upper arm. The optimal design of this structure is performed in two steps. The first step is a multi-objective optimization process aiming to find the best parameters characterizing the robot and its position relative to the patient. The second step is a comparison process aiming to select the best solution from the optimization results on the basis of several criteria related to practical considerations. The optimal design process leads to a solution outperforming an existing solution in aspects such as kinematics and ergonomics while being simpler.

  12. Displacement-dispersive liquid-liquid microextraction based on solidification of floating organic drop of trace amounts of palladium in water and road dust samples prior to graphite furnace atomic absorption spectrometry determination.

    PubMed

    Ghanbarian, Maryam; Afzali, Daryoush; Mostafavi, Ali; Fathirad, Fariba

    2013-01-01

    A new displacement-dispersive liquid-liquid microextraction method based on the solidification of a floating organic drop was developed for separation and preconcentration of Pd(II) in road dust and aqueous samples. This method involves two steps of dispersive liquid-liquid microextraction based on solidification. In Step 1, Cu ions react with diethyldithiocarbamate (DDTC) to form the Cu-DDTC complex, which is extracted by dispersive liquid-liquid microextraction based on a solidification procedure using 1-undecanol (extraction solvent) and ethanol (dispersive solvent). In Step 2, the extracted complex is first dispersed using ethanol in a sample solution containing Pd ions, then a dispersive liquid-liquid microextraction based on a solidification procedure is performed, creating an organic drop. In this step, Pd(II) replaces Cu(II) from the pre-extracted Cu-DDTC complex and goes into the extraction solvent phase. Finally, the Pd(II)-containing drop is introduced into a graphite furnace using a microsyringe, and Pd(II) is determined using atomic absorption spectrometry. Several factors that influence the extraction efficiency of Pd and its subsequent determination, such as extraction and dispersive solvent type and volume, pH of sample solution, centrifugation time, and concentration of DDTC, are optimized.

  13. Applying a probabilistic seismic-petrophysical inversion and two different rock-physics models for reservoir characterization in offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia

    2018-01-01

    We apply a two-step probabilistic seismic-petrophysical inversion for the characterization of a clastic, gas-saturated, reservoir located in offshore Nile Delta. In particular, we discuss and compare the results obtained when two different rock-physics models (RPMs) are employed in the inversion. The first RPM is an empirical, linear model directly derived from the available well log data by means of an optimization procedure. The second RPM is a theoretical, non-linear model based on the Hertz-Mindlin contact theory. The first step of the inversion procedure is a Bayesian linearized amplitude versus angle (AVA) inversion in which the elastic properties, and the associated uncertainties, are inferred from pre-stack seismic data. The estimated elastic properties constitute the input to the second step, a probabilistic petrophysical inversion in which we account for the noise contaminating the recorded seismic data and the uncertainties affecting both the derived rock-physics models and the estimated elastic parameters. In particular, a Gaussian mixture a priori distribution is used to properly take into account the facies-dependent behavior of petrophysical properties, related to the different fluid and rock properties of the different litho-fluid classes. In the synthetic and in the field data tests, the very minor differences between the results obtained by employing the two RPMs, and the good match between the estimated properties and well log information, confirm the applicability of the inversion approach and the suitability of the two different RPMs for reservoir characterization in the investigated area.
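
    The Bayesian linearized AVA step has a closed-form posterior because the prior and noise are Gaussian and the forward operator is linear. A generic Python sketch (the operator, prior and noise level are placeholders, not the Nile Delta parameterization, and the Gaussian-mixture facies prior of the second step is omitted):

      import numpy as np

      rng = np.random.default_rng(2)
      G = rng.standard_normal((20, 5))        # placeholder linearized AVA operator
      m_true = rng.standard_normal(5)
      d = G @ m_true + 0.1 * rng.standard_normal(20)

      m0, Cm = np.zeros(5), np.eye(5)         # prior mean/covariance of elastic model
      Cd = 0.1**2 * np.eye(20)                # data-noise covariance

      # Posterior mean and covariance of the linear-Gaussian inverse problem
      K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)
      m_post = m0 + K @ (d - G @ m0)
      C_post = Cm - K @ G @ Cm
      print(np.round(m_post, 2), np.round(np.sqrt(np.diag(C_post)), 2))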

  14. A Combined Structural and Electromechanical FE Approach for Industrial Ultrasonic Devices Design

    NASA Astrophysics Data System (ADS)

    Schorderet, Alain; Prenleloup, Alain; Colla, Enrico

    2011-05-01

    Ultrasonic assistance is widely used in manufacturing, both for conventional (e.g. grinding, drilling) and non-conventional (e.g. EDM) processes. Ultrasonic machining is also used as a stand-alone process, for instance for micro-drilling. Industrial application of these processes requires increasingly efficient and accurate development tools to predict the performance of the ultrasonic device: the so-called sonotrode and the piezo-transducer. This electromechanical system consists of a structural part and of a piezo-electrical part (actuator). In this paper, we show how to combine two simulation packages (one for structures, one for electromechanical devices) to perform a complete design analysis and optimization of a sonotrode for ultrasonic drilling applications. The usual design criteria are the eigenfrequencies of the desired vibrational modes. In addition, during the optimization phase, one also needs to consider the maximum achievable displacement for a given applied voltage. Therefore, one must be able to predict the electromechanical behavior of the integrated piezo-structure system, in order to define, adapt and optimize the electric power supply as well as the control strategy (search and tracking of the eigenfrequency). In this procedure, numerical modelling follows a two-step approach, by means of a solid mechanics FE code (ABAQUS) and an electromechanical simulation package (ATILA). The example presented illustrates the approach and describes the results obtained in the development of an industrial sonotrode system dedicated to ultrasonic micro-drilling of ceramics. The 3D model of the sonotrode serves as input for generating the FE mesh in ABAQUS, and this mesh is then translated into an input file for ATILA. ABAQUS results are used to perform the first optimization step in order to obtain a sonotrode design with the requested modal behaviour (eigenfrequency and corresponding dynamic amplification). The second step aims at evaluating the dynamic mechanical response of the complete sonotrode subjected to an ultrasonic voltage excitation; piezoelectric and damping properties are required for this step. The resulting electrical quantities (the complex impedance of the system and the electric current) are used to optimize the complete sonotrode-power supply system.

  15. A multiplexed microfluidic toolbox for the rapid optimization of affinity-driven partition in aqueous two phase systems.

    PubMed

    Bras, Eduardo J S; Soares, Ruben R G; Azevedo, Ana M; Fernandes, Pedro; Arévalo-Rodríguez, Miguel; Chu, Virginia; Conde, João P; Aires-Barros, M Raquel

    2017-09-15

    Antibodies and other protein products such as interferons and cytokines are biopharmaceuticals of critical importance which, in order to be safely administered, have to be thoroughly purified in a cost-effective and efficient manner. The use of aqueous two-phase extraction (ATPE) is a viable option for this purification, but these systems are difficult to model, and optimization procedures require lengthy and expensive screening processes. Here, a methodology for the rapid screening of antibody extraction conditions using a microfluidic channel-based toolbox is presented. A first microfluidic structure allows a simple negative-pressure-driven rapid screening of up to 8 extraction conditions simultaneously, using less than 20 μL of each phase-forming solution per experiment, while a second microfluidic structure allows the integration of multi-step extraction protocols based on the results obtained with the first device. In this paper, this microfluidic toolbox was used to demonstrate the potential of LYTAG fusion proteins as affinity tags to optimize the partitioning of antibodies in ATPE processes, where a maximum partition coefficient (K) of 9.2 in a PEG 3350/phosphate system was obtained for the antibody extraction in the presence of the LYTAG-Z dual ligand. This represents an increase of approx. 3.7 fold when compared with the same conditions without the affinity molecule (K=2.5). Overall, this miniaturized and versatile approach allowed the rapid optimization of molecule partition followed by a proof-of-concept demonstration of an integrated back-extraction procedure, both of which are critical steps towards obtaining high purity biopharmaceuticals using ATPE. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Finite element mesh refinement criteria for stress analysis

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1990-01-01

    This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the stiffness matrix trace (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.
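
    In practice, energy-driven h-refinement reduces to ranking elements by their contribution and refining the top fraction. A schematic Python sketch, assuming per-element strain energies are available from a prior FE solve:

      import numpy as np

      def mark_for_refinement(element_energy, fraction=0.2):
          """Indices of the highest-energy elements (h-refinement candidates)."""
          n = max(1, int(fraction * element_energy.size))
          return np.argsort(element_energy)[-n:]

      energy = np.random.default_rng(3).gamma(2.0, 1.0, size=50)  # placeholder
      print(np.sort(mark_for_refinement(energy)))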

  17. Determination of Optimal Subsidy for Materials Saving Investment through Recycle/Recovery at Industrial Level

    NASA Astrophysics Data System (ADS)

    Batzias, Dimitris F.

    2009-08-01

    This work deals with a methodological framework in the form of a simple/short algorithmic procedure (including 11 activity steps and 3 decision nodes) designed/developed for the determination of the optimal subsidy for materials saving investment through recycle/recovery (RR) at industrial level. Two case examples are presented, covering both situations, without and with recycling. The expected Relative Cost Decrease (RCD) due to recycling, which forms a critical index for decision making on subsidizing, is estimated. The developed procedure can be extended outside the industrial unit to include collection/transportation/processing of recyclable wasted products. Since, in such a case, transportation cost and processing cost are conflicting variables that both depend on the quantity collected/processed Q (the independent/explanatory variable), the determination of Qopt is examined under energy crisis conditions, when corresponding subsidies might be granted to re-establish the original equilibrium and avoid putting the recycling enterprise in jeopardy through a dangerous lowering of the first break-even point.
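
    The conflict between the two Q-dependent costs gives the total cost an interior minimum, which is what makes Qopt well defined. A stylized Python sketch with invented coefficients (for a linear transportation cost a*Q and a scale-economy processing cost b/Q, the analytic optimum is sqrt(b/a)):

      import numpy as np
      from scipy.optimize import minimize_scalar

      a, b = 0.8, 500.0
      total = lambda Q: a * Q + b / Q      # transportation + processing cost

      res = minimize_scalar(total, bounds=(1.0, 1000.0), method="bounded")
      print(res.x, np.sqrt(b / a))         # numerical vs analytic Q_opt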

  18. Transparent DNA/RNA Co-extraction Workflow Protocol Suitable for Inhibitor-Rich Environmental Samples That Focuses on Complete DNA Removal for Transcriptomic Analyses

    PubMed Central

    Lim, Natalie Y. N.; Roco, Constance A.; Frostegård, Åsa

    2016-01-01

    Adequate comparisons of DNA and cDNA libraries from complex environments require methods for co-extraction of DNA and RNA due to the inherent heterogeneity of such samples, or risk bias caused by variations in lysis and extraction efficiencies. Still, there are few methods and kits allowing simultaneous extraction of DNA and RNA from the same sample, and the existing ones generally require optimization. The proprietary nature of kit components, however, makes modifications of individual steps in the manufacturer’s recommended procedure difficult. Surprisingly, enzymatic treatments are often performed before purification procedures are complete, which we have identified here as a major problem when seeking efficient genomic DNA removal from RNA extracts. Here, we tested several DNA/RNA co-extraction commercial kits on inhibitor-rich soils, and compared them to a commonly used phenol-chloroform co-extraction method. Since none of the kits/methods co-extracted high-quality nucleic acid material, we optimized the extraction workflow by introducing small but important improvements. In particular, we illustrate the need for extensive purification prior to all enzymatic procedures, with special focus on the DNase digestion step in RNA extraction. These adjustments led to the removal of enzymatic inhibition in RNA extracts and made it possible to reduce genomic DNA to below detectable levels as determined by quantitative PCR. Notably, we confirmed that DNase digestion may not be uniform in replicate extraction reactions, thus the analysis of “representative samples” is insufficient. The modular nature of our workflow protocol allows optimization of individual steps. It also increases focus on additional purification procedures prior to enzymatic processes, in particular DNases, yielding genomic DNA-free RNA extracts suitable for metatranscriptomic analysis. PMID:27803690

  19. Sensing a heart infarction marker with surface plasmon resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Kunz, Ulrich; Katerkamp, Andreas; Renneberg, Reinhard; Spener, Friedrich; Cammann, Karl

    1995-02-01

    In this study a direct immunosensor for heart-type fatty acid binding protein (FABP) based on surface plasmon resonance spectroscopy (SPRS) is presented. FABP can be used as a heart infarction marker in clinical diagnostics. The development of a simple and cheap direct optical sensor device is reported in this paper, as well as immobilization procedures and optimization of the measuring conditions. The correct working of the SPRS device is verified by comparing the signals with theoretically calculated values. Two different immunoassay techniques were optimized for a sensitive FABP analysis. The competitive immunoassay was superior to the sandwich configuration as it had a lower detection limit (100 ng/ml), needed fewer antibodies and could be carried out in one step.

  20. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.

  1. Measurement of Mitochondrial Respiration in Isolated Protoplasts: Cytochrome and Alternative Pathways.

    PubMed

    Sunil, Bobba; Raghavendra, Agepati S

    2017-01-01

    The electron partitioning between the COX and AOX pathways of mitochondria and their coordination are necessary to meet energy demands as well as to maintain an optimized redox status in plants under varying environmental conditions. The relative contribution of these two pathways to total respiration is an important measure during a given stress condition. We describe in detail a procedure that allows the measurement of the parameters of the COX and AOX pathways of respiration in mesophyll protoplasts using a Clark-type O2 electrode. This chapter also lists the steps of a rapid isolation procedure for mesophyll protoplasts from pea leaves. The advantages and limitations of the use of metabolic inhibitors and of protoplasts for measuring respiration are also briefly discussed.

  2. Determining optimal clothing ensembles based on weather forecasts, with particular reference to outdoor winter military activities.

    PubMed

    Morabito, Marco; Pavlinic, Daniela Z; Crisci, Alfonso; Capecchi, Valerio; Orlandini, Simone; Mekjavic, Igor B

    2011-07-01

    Military and civil defense personnel are often involved in complex activities in a variety of outdoor environments. The choice of appropriate clothing ensembles represents an important strategy for the success of a military mission. The main aim of this study was to compare the known clothing insulation of the garment ensembles worn by soldiers during two winter outdoor field trials (hike and guard duty) with the estimated optimal clothing thermal insulation recommended to maintain thermoneutrality, assessed by using two different biometeorological procedures. The overall aim was to assess the applicability of such biometeorological procedures to weather forecast systems, thereby developing a comprehensive biometeorological tool for military operational forecast purposes. Military trials were carried out during winter 2006 in Pokljuka (Slovenia) by Slovene Armed Forces personnel. Gastrointestinal temperature, heart rate and environmental parameters were measured with portable data acquisition systems. The thermal characteristics of the clothing ensembles worn by the soldiers, namely their thermal resistance, were determined with a sweating thermal manikin. Results showed that the clothing ensemble worn by the military was appropriate during guard duty but generally inappropriate during the hike. A general under-estimation by the biometeorological forecast model in predicting the optimal clothing insulation value was observed, and an additional post-processing calibration might further improve forecast accuracy. This study represents the first step in the development of a comprehensive personalized biometeorological forecast system aimed at improving recommendations regarding the optimal thermal insulation of military garment ensembles for winter activities.

  3. Procedures for shape optimization of gas turbine disks

    NASA Technical Reports Server (NTRS)

    Cheu, Tsu-Chien

    1989-01-01

    Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks with geometric and stress constraints. The coordinates of selected points on the disk contours are used as the design variables. Structural weight, stresses and their derivatives with respect to the design variables are calculated by an efficient finite element method for design sensitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.
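
    Sequential linear programming repeatedly linearizes the objective and constraints about the current design and solves the resulting LP within move limits. A generic Python sketch on a toy two-variable problem (not the turbine-disk model):

      import numpy as np
      from scipy.optimize import linprog

      f = lambda x: x[0] + 2 * x[1]               # weight-like objective
      g = lambda x: 1.0 - x[0] * x[1]             # stress-like constraint g <= 0
      gradf = lambda x: np.array([1.0, 2.0])
      gradg = lambda x: np.array([-x[1], -x[0]])

      x = np.array([2.0, 2.0])
      for k in range(20):
          move = 0.5 * 0.8**k                     # shrinking move limits
          # Linearized constraint: g(x) + gradg(x) . dx <= 0
          res = linprog(gradf(x), A_ub=[gradg(x)], b_ub=[-g(x)],
                        bounds=[(-move, move)] * 2)
          if not res.success:
              break
          x = x + res.x
      print(np.round(x, 3), round(f(x), 3), round(g(x), 3))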

  4. Comparing light sensitivity, linearity and step response of electronic cameras for ophthalmology.

    PubMed

    Kopp, O; Markert, S; Tornow, R P

    2002-01-01

    To develop and test a procedure to measure and compare the light sensitivity, linearity and step response of electronic cameras. The pixel value (PV) of digitized images as a function of light intensity (I) was measured. The sensitivity was calculated from the slope of the PV(I) function; the linearity was estimated from the correlation coefficient of this function. To measure the step response, a short sequence of images was acquired. During acquisition, a light source was switched on and off using a fast shutter. The resulting PV was calculated for each video field of the sequence. A CCD camera optimized for the near-infrared (IR) spectrum showed the highest sensitivity for both visible and IR light. Differences in linearity were small. The step response depends on the procedure of integration and read-out.
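
    Both figures of merit reduce to elementary statistics on the PV(I) data: the sensitivity is the fitted slope and the linearity the correlation coefficient. A minimal Python sketch on synthetic camera data:

      import numpy as np

      I = np.linspace(0, 1, 10)                   # relative light intensity
      PV = 200 * I + 5 + np.random.default_rng(4).normal(0, 2, I.size)

      slope, offset = np.polyfit(I, PV, 1)        # sensitivity = slope of PV(I)
      r = np.corrcoef(I, PV)[0, 1]                # linearity = correlation coeff.
      print(f"sensitivity {slope:.1f}, linearity r = {r:.4f}")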

  5. UrQt: an efficient software for the Unsupervised Quality trimming of NGS data.

    PubMed

    Modolo, Laurent; Lerat, Emmanuelle

    2015-04-29

    Quality control is a necessary step of any Next Generation Sequencing analysis. Although customary, this step still requires manual interventions to empirically choose tuning parameters according to various quality statistics. Moreover, current quality control procedures that provide a "good quality" data set, are not optimal and discard many informative nucleotides. To address these drawbacks, we present a new quality control method, implemented in UrQt software, for Unsupervised Quality trimming of Next Generation Sequencing reads. Our trimming procedure relies on a well-defined probabilistic framework to detect the best segmentation between two segments of unreliable nucleotides, framing a segment of informative nucleotides. Our software only requires one user-friendly parameter to define the minimal quality threshold (phred score) to consider a nucleotide to be informative, which is independent of both the experiment and the quality of the data. This procedure is implemented in C++ in an efficient and parallelized software with a low memory footprint. We tested the performances of UrQt compared to the best-known trimming programs, on seven RNA and DNA sequencing experiments and demonstrated its optimality in the resulting tradeoff between the number of trimmed nucleotides and the quality objective. By finding the best segmentation to delimit a segment of good quality nucleotides, UrQt greatly increases the number of reads and of nucleotides that can be retained for a given quality objective. UrQt source files, binary executables for different operating systems and documentation are freely available (under the GPLv3) at the following address: https://lbbe.univ-lyon1.fr/-UrQt-.html .
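
    UrQt's probabilistic segmentation is more elaborate, but its effect, keeping the best informative segment framed by two unreliable ends, can be approximated by a maximum-scoring-segment pass over the shifted qualities (phred minus threshold). A simple Python stand-in, not the authors' algorithm:

      def trim(phred, t=20):
          """(start, end) of the maximum-scoring segment of phred - t."""
          best, best_range = 0, (0, 0)
          run, start = 0, 0
          for i, q in enumerate(phred):
              run += q - t
              if run <= 0:
                  run, start = 0, i + 1      # restart after an unreliable stretch
              elif run > best:
                  best, best_range = run, (start, i + 1)
          return best_range

      quals = [2, 8, 30, 35, 38, 34, 36, 12, 5, 33, 9, 3]
      s, e = trim(quals)
      print(quals[s:e])                       # the retained informative segment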

  6. Efficient computation of the genomic relationship matrix and other matrices used in single-step evaluation.

    PubMed

    Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S

    2011-12-01

    Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information, as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with the number of genotyped animals up to 30 000. Matrix multiplication in the computation of the genomic relationship was by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can be computed efficiently. Optimizations can be either by modifications of existing code or by the use of efficient automatic optimizations provided by open source or third-party libraries. © 2011 Blackwell Verlag GmbH.
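
    The genomic relationship matrix itself is one dense matrix product, which is why optimized multiplication dominates the cost. A NumPy sketch of VanRaden's first method (NumPy delegates the product to an optimized BLAS, the same kind of leverage the study describes):

      import numpy as np

      rng = np.random.default_rng(5)
      n_animals, n_snp = 1000, 40_000
      M = rng.integers(0, 3, size=(n_animals, n_snp)).astype(np.float32)  # 0/1/2

      p = M.mean(axis=0) / 2.0                       # allele frequencies
      Z = M - 2.0 * p                                # centred genotype matrix
      G = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))  # VanRaden (2008) G matrix
      print(G.shape, float(np.diag(G).mean()))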

  7. Reliable Transition State Searches Integrated with the Growing String Method.

    PubMed

    Zimmerman, Paul

    2013-07-09

    The growing string method (GSM) is highly useful for locating reaction paths connecting two molecular intermediates. GSM has often been used in a two-step procedure to locate exact transition states (TS), where GSM creates a quality initial structure for a local TS search. This procedure and others like it, however, do not always converge to the desired transition state because the local search is sensitive to the quality of the initial guess. This article describes an integrated technique for simultaneous reaction path and exact transition state search. This is achieved by implementing an eigenvector following optimization algorithm in internal coordinates with Hessian update techniques. After partial convergence of the string, an exact saddle point search begins under the constraint that the maximized eigenmode of the TS node Hessian has significant overlap with the string tangent near the TS. Subsequent optimization maintains connectivity of the string to the TS as well as locks in the TS direction, all but eliminating the possibility that the local search leads to the wrong TS. To verify the robustness of this approach, reaction paths and TSs are found for a benchmark set of more than 100 elementary reactions.
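
    Why the initial guess matters is easy to see on a toy surface: a plain Newton iteration converges to whichever stationary point is nearest, minimum or saddle, so a string-derived guess near the true TS (plus the eigenvector-overlap constraint) is what keeps the search from drifting to the wrong structure. A Python sketch on the double-well potential f = (x^2 - 1)^2 + y^2 (not the GSM algorithm itself):

      import numpy as np

      grad = lambda p: np.array([4 * p[0] * (p[0]**2 - 1), 2 * p[1]])
      hess = lambda p: np.array([[12 * p[0]**2 - 4, 0.0], [0.0, 2.0]])

      def newton_stationary(p, iters=50, tol=1e-10):
          """Newton steps converge to the nearest stationary point."""
          for _ in range(iters):
              step = np.linalg.solve(hess(p), grad(p))
              p = p - step
              if np.linalg.norm(step) < tol:
                  break
          return p

      print(newton_stationary(np.array([0.2, 0.5])))   # -> saddle at (0, 0)
      print(newton_stationary(np.array([1.3, 0.5])))   # -> minimum at (1, 0)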

  8. Optimization of PCR for quantification of simian immunodeficiency virus (SIV) genomic RNA in plasma of rhesus macaques (Macaca mulatta) using armored RNA

    PubMed Central

    Monjure, C. J.; Tatum, C. D.; Panganiban, A. T.; Arainga, M.; Traina-Dorge, V.; Marx, P. A.; Didier, E. S.

    2014-01-01

    Introduction: Quantification of plasma viral load (PVL) is used to monitor disease progression in SIV-infected macaques. This study was aimed at optimizing the performance characteristics of the quantitative PCR (qPCR) PVL assay. Methods: The PVL quantification procedure was optimized by inclusion of an exogenous control Hepatitis C Virus armored RNA (aRNA), a plasma concentration step, extended digestion with proteinase K, and a second RNA elution step. Efficiency of viral RNA (vRNA) extraction was compared using several commercial vRNA extraction kits. Various parameters of qPCR targeting the gag region of SIVmac239 and SIVsmE660 and the LTR region of SIVagmSAB were also optimized. Results: Modifications of the SIV PVL qPCR procedure increased vRNA recovery, reduced inhibition and improved analytical sensitivity. The PVL values determined by this SIV PVL qPCR correlated with quantification results of SIV-RNA in the same samples using the "industry standard" method of branched-DNA (bDNA) signal amplification. Conclusions: Quantification of SIV genomic RNA in plasma of rhesus macaques using this optimized SIV PVL qPCR is equivalent to the bDNA signal amplification method, less costly and more versatile. Use of heterologous aRNA as an internal control is useful for optimizing performance characteristics of PVL qPCRs. PMID:24266615

  9. Optimization of PCR for quantification of simian immunodeficiency virus genomic RNA in plasma of rhesus macaques (Macaca mulatta) using armored RNA.

    PubMed

    Monjure, C J; Tatum, C D; Panganiban, A T; Arainga, M; Traina-Dorge, V; Marx, P A; Didier, E S

    2014-02-01

    Quantification of plasma viral load (PVL) is used to monitor disease progression in SIV-infected macaques. This study was aimed at optimizing the performance characteristics of the quantitative PCR (qPCR) PVL assay. The PVL quantification procedure was optimized by inclusion of an exogenous control hepatitis C virus armored RNA (aRNA), a plasma concentration step, extended digestion with proteinase K, and a second RNA elution step. Efficiency of viral RNA (vRNA) extraction was compared using several commercial vRNA extraction kits. Various parameters of qPCR targeting the gag region of SIVmac239 and SIVsmE660 and the LTR region of SIVagmSAB were also optimized. Modifications of the SIV PVL qPCR procedure increased vRNA recovery, reduced inhibition and improved analytical sensitivity. The PVL values determined by this SIV PVL qPCR correlated with quantification results of SIV RNA in the same samples using the 'industry standard' method of branched-DNA (bDNA) signal amplification. Quantification of SIV genomic RNA in plasma of rhesus macaques using this optimized SIV PVL qPCR is equivalent to the bDNA signal amplification method, less costly and more versatile. Use of heterologous aRNA as an internal control is useful for optimizing performance characteristics of PVL qPCRs. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China

    PubMed Central

    Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong

    2017-01-01

    A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care including less waiting time as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed “two-step optimization for spatial accessibility improvement (2SO4SAI).” The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China. PMID:28484707

  11. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China.

    PubMed

    Luo, Jing; Tian, Lingling; Luo, Lei; Yi, Hong; Wang, Fahui

    2017-01-01

    A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care including less waiting time as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed "two-step optimization for spatial accessibility improvement (2SO4SAI)." The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China.
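
    The logic of the two steps can be sketched compactly: compute two-step floating catchment area (2SFCA) accessibility scores, then shift capacity toward the facility whose catchment is worst off. The Python toy below uses binary catchments and a greedy reallocation; it is an illustrative simplification, not the 2SO4SAI formulation itself.

      import numpy as np

      rng = np.random.default_rng(6)
      demand = rng.integers(100, 1000, size=8).astype(float)  # population at 8 sites
      dist = rng.uniform(1.0, 30.0, size=(8, 3))              # travel times, 3 facilities
      supply = np.array([50.0, 50.0, 50.0])                   # initial capacities
      W = (dist <= 15.0).astype(float)                        # binary catchments

      def accessibility(supply):
          # 2SFCA: supply-to-demand ratio per catchment, summed over reachable sites
          R = supply / np.maximum((W * demand[:, None]).sum(axis=0), 1e-9)
          return (W * R[None, :]).sum(axis=1)

      for _ in range(60):                  # step 2: greedy capacity reallocation
          A = accessibility(supply)
          served = [A[W[:, j] > 0].mean() if W[:, j].any() else np.inf
                    for j in range(3)]
          lo, hi = int(np.argmin(served)), int(np.argmax(served))
          if supply[hi] > 1.0:
              supply[lo] += 1.0
              supply[hi] -= 1.0
      print(np.round(supply, 0), np.round(accessibility(supply), 4))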

  12. Formulation for Simultaneous Aerodynamic Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.

    1993-01-01

    An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
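
    The adjoint-variable device that keeps the gradient cost independent of the number of design variables is worth a concrete sketch: for a discrete state equation A(x) u = b and objective J(u), one extra linear solve replaces a forward solve per design variable. A generic Python illustration (a random linear "flow" model, not the nozzle problem):

      import numpy as np

      rng = np.random.default_rng(7)
      n = 5
      A0 = rng.standard_normal((n, n)) + 5 * np.eye(n)
      A1 = rng.standard_normal((n, n))
      b = rng.standard_normal(n)

      A = lambda x: A0 + x * A1            # state equation A(x) u = b
      J = lambda u: 0.5 * u @ u            # objective on the state u

      x = 0.3
      u = np.linalg.solve(A(x), b)

      # Adjoint solve A(x)^T psi = dJ/du, then dJ/dx = -psi^T (dA/dx) u
      psi = np.linalg.solve(A(x).T, u)     # dJ/du = u for this J
      dJdx = -psi @ (A1 @ u)

      h = 1e-6                             # finite-difference check
      dJdx_fd = (J(np.linalg.solve(A(x + h), b)) - J(u)) / h
      print(dJdx, dJdx_fd)                 # the two values agree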

  13. CometBoards Users Manual Release 1.0

    NASA Technical Reports Server (NTRS)

    Guptill, James D.; Coroneos, Rula M.; Patnaik, Surya N.; Hopkins, Dale A.; Berke, Lazlo

    1996-01-01

    Several nonlinear mathematical programming algorithms for structural design applications are available at present. These include the sequence of unconstrained minimizations technique, the method of feasible directions, and the sequential quadratic programming technique. The optimality criteria technique and the fully utilized design concept are two other structural design methods. A project was undertaken to bring all these design methods under a common computer environment so that a designer can select any one of these tools that may be suitable for his/her application. To facilitate selection of a design algorithm, to validate and check out the computer code, and to ascertain the relative merits of the design tools, modest finite element structural analysis programs based on the concept of stiffness and integrated force methods have been coupled to each design method. The code that contains both these design and analysis tools, by reading input information from analysis and design data files, can cast the design of a structure as a minimum-weight optimization problem. The code can then solve it with a user-specified optimization technique and a user-specified analysis method. This design code is called CometBoards, which is an acronym for Comparative Evaluation Test Bed of Optimization and Analysis Routines for the Design of Structures. This manual describes for the user a step-by-step procedure for setting up the input data files and executing CometBoards to solve a structural design problem. The manual includes the organization of CometBoards; instructions for preparing input data files; the procedure for submitting a problem; illustrative examples; and several demonstration problems. A set of 29 structural design problems have been solved by using all the optimization methods available in CometBoards. A summary of the optimum results obtained for these problems is appended to this users manual. CometBoards, at present, is available for Posix-based Cray and Convex computers, Iris and Sun workstations, and the VM/CMS system.

  14. Optimized performance of quasi-solid-state DSSC with PEO-bismaleimide polymer blend electrolytes filled with a novel procedure.

    PubMed

    Lee, Dong Ha; Sun, Kyung Chul; Qadir, Muhammad Bilal; Jeong, Sung Hoon

    2014-12-01

    The dye-sensitized solar cell (DSSC) is an attractive renewable energy technology currently under intense investigation. The electrolyte plays an important role in the photovoltaic performance of DSSCs, and many efforts have been devoted to studying different kinds of electrolytes with various characteristics, such as liquid electrolytes, polymer electrolytes and so on. In this study, a DSSC is developed using a quasi-solid electrolyte, and a novel procedure is adopted for filling this electrolyte. The quasi-solid-state electrolyte was prepared by mixing poly(ethylene oxide) (PEO) and bismaleimide together, with the composition taken as PEO (15 wt%) at various bismaleimide concentrations (1, 3, 5 wt%). The novel electrolyte-filling procedure consists of three major steps (first step: filling with liquid electrolyte; second step: vaporization of the liquid electrolyte; third step: refilling with the quasi-solid-state electrolyte). The electrochemical and photovoltaic performances of DSSCs with these electrolytes were also investigated. Electrochemical impedance spectroscopy (EIS) indicated that the TiO2/dye/electrolyte impedance is reduced and the electron lifetime is increased, and consequently the efficiency of the cell is improved by this novel procedure. A photovoltaic power conversion efficiency of 6.39% was achieved under AM 1.5 simulated sunlight (100 mW/cm2) through this novel procedure and by using the specified blend of polymers.

  15. Optimal Cut-Off Points of Fasting Plasma Glucose for Two-Step Strategy in Estimating Prevalence and Screening Undiagnosed Diabetes and Pre-Diabetes in Harbin, China

    PubMed Central

    Sun, Bo; Lan, Li; Cui, Wenxiu; Xu, Guohua; Sui, Conglan; Wang, Yibaina; Zhao, Yashuang; Wang, Jian; Li, Hongyuan

    2015-01-01

    To identify optimal cut-off points of fasting plasma glucose (FPG) for two-step strategy in screening abnormal glucose metabolism and estimating prevalence in general Chinese population. A population-based cross-sectional study was conducted on 7913 people aged 20 to 74 years in Harbin. Diabetes and pre-diabetes were determined by fasting and 2 hour post-load glucose from the oral glucose tolerance test in all participants. Screening potential of FPG, cost per case identified by two-step strategy, and optimal FPG cut-off points were described. The prevalence of diabetes was 12.7%, of which 65.2% was undiagnosed. Twelve percent or 9.0% of participants were diagnosed with pre-diabetes using 2003 ADA criteria or 1999 WHO criteria, respectively. The optimal FPG cut-off points for two-step strategy were 5.6 mmol/l for previously undiagnosed diabetes (area under the receiver-operating characteristic curve of FPG 0.93; sensitivity 82.0%; cost per case identified by two-step strategy ¥261), 5.3 mmol/l for both diabetes and pre-diabetes or pre-diabetes alone using 2003 ADA criteria (0.89 or 0.85; 72.4% or 62.9%; ¥110 or ¥258), 5.0 mmol/l for pre-diabetes using 1999 WHO criteria (0.78; 66.8%; ¥399), and 4.9 mmol/l for IGT alone (0.74; 62.2%; ¥502). Using the two-step strategy, the underestimates of prevalence reduced to nearly 38% for pre-diabetes or 18.7% for undiagnosed diabetes, respectively. Approximately a quarter of the general population in Harbin was in hyperglycemic condition. Using optimal FPG cut-off points for two-step strategy in Chinese population may be more effective and less costly for reducing the missed diagnosis of hyperglycemic condition. PMID:25785585
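
    Operationally, choosing an FPG cut-off for a two-step screen is a sweep over candidate thresholds, trading the sensitivity of the first step against the number of confirmatory OGTTs. A schematic Python sketch on simulated data (distributions, prevalence and cost are invented, not the Harbin cohort):

      import numpy as np

      rng = np.random.default_rng(8)
      n = 5000
      diabetic = rng.random(n) < 0.13               # assumed prevalence
      fpg = np.where(diabetic, rng.normal(7.0, 1.2, n), rng.normal(5.0, 0.6, n))

      cost_ogtt = 30.0                              # assumed confirmatory-test cost
      for cut in (4.9, 5.3, 5.6, 6.1):
          screened = fpg >= cut                     # step 1: FPG; step 2: OGTT
          found = (screened & diabetic).sum()
          sens = found / diabetic.sum()
          cost_per_case = cost_ogtt * screened.sum() / max(found, 1)
          print(f"cut {cut}: sensitivity {sens:.2f}, cost/case {cost_per_case:.0f}")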

  16. Vitrification of zona-free rabbit expanded or hatching blastocysts: a possible model for human blastocysts.

    PubMed

    Cervera, R P; Garcia-Ximénez, F

    2003-10-01

    The purpose of this study was to test the effectiveness of one two-step (A) and two one-step (B1 and B2) vitrification procedures on denuded expanded or hatching rabbit blastocysts held in standard sealed plastic straws as a possible model for human blastocysts. The effect of blastocyst size was also studied on the basis of three size categories (I: diameter <200 micro m; II: diameter 200-299 micro m; III: diameter >/=300 micro m). Rabbit expanded or hatching blastocysts were vitrified at day 4 or 5. Before vitrification, the zona pellucida was removed using acidic phosphate buffered saline. For the two-step procedure, prior to vitrification, blastocysts were pre-equilibrated in a solution containing 10% dimethyl sulphoxide (DMSO) and 10% ethylene glycol (EG) for 1 min. Different final vitrification solutions were compared: 20% DMSO and 20% EG with (A and B1) or without (B2) 0.5 mol/l sucrose. Of 198 vitrified blastocysts, 181 (91%) survived, regardless of the vitrification procedure applied. Vitrification procedure A showed significantly higher re-expansion (88%), attachment (86%) and trophectoderm outgrowth (80%) rates than the two one-step vitrification procedures, B1 and B2 (46 and 21%, 20 and 33%, and 18 and 23%, respectively). After warming, blastocysts of greater size (II and III) showed significantly higher attachment (54 and 64%) and trophectoderm outgrowth (44 and 58%) rates than smaller blastocysts (I, attachment: 29%; trophectoderm outgrowth: 25%). These results demonstrate that denuded expanded or hatching rabbit blastocysts of greater size can be satisfactorily vitrified by use of a two-step procedure. The similarity of the vitrification solutions used in humans could make it feasible to test such a procedure on human denuded blastocysts of different sizes.

  17. Pudendal nerve neuromodulation with neurophysiology guidance: a potential treatment option for refractory chronic pelvi-perineal pain.

    PubMed

    Carmel, Maude; Lebel, Michel; Tu, Le Mai

    2010-05-01

    Refractory chronic pelvi-perineal pain (RCPPP) is a challenging entity that has devastating consequences for patients' quality of life. Many etiologies have been proposed, including pudendal neuralgia. Multiple treatment options are used, but the reported results are sub-optimal and temporary. In this article, we present the technique of pudendal nerve neuromodulation with neurophysiology guidance as a treatment option for RCPPP. This technique is a two-step procedure that includes electrode implantation under neurophysiology guidance, followed by the implantation of a permanent generator after a successful trial period. We report the cases of three women who underwent this procedure as a last-resort treatment option. After 2 years of follow-up, their symptoms remain significantly improved. No major complications occurred.

  18. Incorporation of isotopic, fluorescent, and heavy-atom-modified nucleotides into RNAs by position-selective labeling of RNA.

    PubMed

    Liu, Yu; Holmstrom, Erik; Yu, Ping; Tan, Kemin; Zuo, Xiaobing; Nesbitt, David J; Sousa, Rui; Stagno, Jason R; Wang, Yun-Xing

    2018-05-01

    Site-specific incorporation of labeled nucleotides is an extremely useful synthetic tool for many structural studies (e.g., NMR, electron paramagnetic resonance (EPR), fluorescence resonance energy transfer (FRET), and X-ray crystallography) of RNA. However, RNAs >60 nt labeled at specific positions are not commercially available on a milligram scale. Position-selective labeling of RNA (PLOR) has been applied to prepare large RNAs labeled at desired positions, and all the required reagents are commercially available. Here, we present a step-by-step protocol for the solid-liquid hybrid-phase PLOR method to synthesize 71-nt RNA samples with three different modification applications, containing (i) a (13)C/(15)N-labeled segment; (ii) discrete residues modified with Cy3, Cy5, or biotin; or (iii) two iodo-U residues. The flexible procedure enables a wide range of downstream biophysical analyses using precisely localized functionalized nucleotides. All three RNAs were obtained in <2 d, excluding the time for preparing reagents and optimizing experimental conditions. With optimization, the protocol can be applied to other RNAs with various labeling schemes, such as ligation of segmentally labeled fragments.

  19. [Adaptation of humans to walking in semi-hard and flexible space suits under terrestrial gravity].

    PubMed

    Panfilov, V E

    2011-01-01

    The spacesuit donning procedure can be viewed as the combining of two kinematic circuits into a single human-spacesuit functional system (HSS) for implementation of extravehicular operations. Optimal human-spacesuit interaction hinges on controllability and coordination of HSS mobile components, and also on spacesuit slaving to the central nervous system (CNS) mediated through the human locomotion apparatus. Analysis of walking patterns in semi-hard and flexible spacesuits elucidated the direct and feedback relations between the external (spacesuit) and internal (locomotion apparatus and CNS) circuits. Lack of regularity in the style of spacesuit design creates difficulties for direct CNS control of locomotion. Consequently, it is necessary to modify the locomotion command program in order to resolve these difficulties and to add flexibility to CNS control. The analysis also helped trace the algorithm of program modifications, with the ultimate result of induced (forced) walk optimization. Learning how to walk in the spacesuit Berkut requires no more than 2500 single steps, whereas about 300 steps must be made to master walking skills in the spacesuit SKV.

  20. Step-by-Step Technique for Segmental Reconstruction of Reverse Hill-Sachs Lesions Using Homologous Osteochondral Allograft.

    PubMed

    Alkaduhimi, Hassanin; van den Bekerom, Michel P J; van Deurzen, Derek F P

    2017-06-01

    Posterior shoulder dislocations are accompanied by high forces and can result in an anteromedial humeral head impression fracture called a reverse Hill-Sachs lesion. This lesion can result in serious complications, including posttraumatic osteoarthritis, posterior dislocations, osteonecrosis, persistent joint stiffness, and loss of shoulder function. Treatment is challenging and depends on the amount of bone loss. Several techniques have been reported describing the surgical treatment of lesions larger than 20%. However, there is still limited evidence with regard to the optimal procedure. Favorable results have been reported for segmental reconstruction of the reverse Hill-Sachs lesion with bone allograft. Although segmental reconstruction has been used in several studies, the technique has not yet been described in detail. In this report we provide a step-by-step description of how to perform a segmental reconstruction of a reverse Hill-Sachs defect.

  1. Studying the varied shapes of gold clusters by an elegant optimization algorithm that hybridizes the density functional tight-binding theory and the density functional theory

    NASA Astrophysics Data System (ADS)

    Yen, Tsung-Wen; Lim, Thong-Leng; Yoon, Tiem-Leong; Lai, S. K.

    2017-11-01

    We combined a new parametrized density functional tight-binding (DFTB) theory (Fihey et al. 2015) with an unbiased modified basin hopping (MBH) optimization algorithm (Yen and Lai 2015) and applied it to calculate the lowest energy structures of Au clusters. From the calculated topologies and their conformational changes, we find that this DFTB/MBH method is a necessary procedure for a systematic study of the structural development of Au clusters but is somewhat insufficient for a quantitative study. As a result, we propose an extended hybridized algorithm that proceeds in two steps. In the first step, the DFTB theory is employed to calculate the total energy of the cluster; this step (running DFTB/MBH optimization for a given number of Monte Carlo steps) is meant to efficiently bring the Au cluster near the region of the lowest energy minimum, since the cluster as a whole has explicitly, albeit semi-quantitatively, considered the interactions of valence electrons with ions. Then, in the second step, the energy-minimum search continues with the energy function calculated by the DFTB theory in the first step replaced by one calculated in the full density functional theory (DFT). In these subsequent calculations, we couple the DFT energy with the MBH strategy and proceed with the DFT/MBH optimization until the lowest energy value is found. We checked that this extended hybridized algorithm successfully predicts the twisted pyramidal structure for the Au40 cluster and correctly confirms the linear shape of C8, which our previous DFTB/MBH method failed to do. Perhaps more remarkable is the topological growth of Aun: it changes from planar (n = 3-11) → an oblate-like cage (n = 12-15) → a hollow-shape cage (n = 16-18) and finally a pyramidal-like cage (n = 19, 20). These varied cluster shapes are consistent with those reported in the literature.
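
    The two-step cheap-then-expensive search can be summarized in code. The sketch below is a minimal, generic version of the idea, assuming stand-in cheap_energy and expensive_energy functions in place of DFTB and DFT, and omitting the local relaxation that a production MBH run performs at each hop.

    # Minimal sketch of the two-stage strategy: basin hopping with a cheap
    # surrogate energy first, then continued hopping with an expensive energy.
    import numpy as np

    rng = np.random.default_rng(1)

    def cheap_energy(x):       # stand-in for a DFTB total energy
        return np.sum((x - 1.0) ** 2) + np.sin(5 * x).sum()

    def expensive_energy(x):   # stand-in for a full DFT total energy
        return cheap_energy(x) + 0.1 * np.cos(3 * x).sum()

    def basin_hop(energy, x, n_steps, step=0.5, temp=0.1):
        e = energy(x)
        best_x, best_e = x.copy(), e
        for _ in range(n_steps):
            trial = x + rng.normal(0.0, step, size=x.shape)
            e_trial = energy(trial)
            # Metropolis acceptance on the trial energies
            if e_trial < e or rng.random() < np.exp((e - e_trial) / temp):
                x, e = trial, e_trial
                if e < best_e:
                    best_x, best_e = x.copy(), e
        return best_x, best_e

    x0 = rng.normal(size=6)                        # toy "cluster" coordinates
    x1, _ = basin_hop(cheap_energy, x0, 500)       # stage 1: cheap surrogate
    x2, e2 = basin_hop(expensive_energy, x1, 100)  # stage 2: refined search
    print(e2)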

  2. Dispersive liquid-liquid microextraction for the determination of nitrophenols in soils by microvial insert large volume injection-gas chromatography-mass spectrometry.

    PubMed

    Cacho, J I; Campillo, N; Viñas, P; Hernández-Córdoba, M

    2016-07-22

    A rapid and sensitive procedure for the determination of six nitrophenols (NPs) in soils by gas chromatography and mass spectrometry (GC-MS) is proposed. Ultrasound-assisted extraction (UAE) is used for NP extraction from soil matrices into an organic solvent, while the environmentally friendly technique of dispersive liquid-liquid microextraction (DLLME) is used for the preconcentration of the resulting UAE extracts. NPs were derivatized by applying an "in situ" acetylation procedure before being injected into the GC-MS system using microvial insert large volume injection (LVI). Several parameters affecting the UAE, DLLME, derivatization and injection steps were investigated. The optimized procedure provided recoveries of 86-111% from spiked samples. Precision values (expressed as relative standard deviation, RSD) lower than 12%, and limits of quantification ranging from 1.3 to 2.6 ng g(-1), depending on the compound, were obtained. Twenty soil samples, obtained from military, industrial and agricultural areas, were analyzed by the proposed method. Two of the analytes were quantified in two of the samples from industrial areas, at concentrations in the 4.8-9.6 ng g(-1) range. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. A Novel Protocol for Model Calibration in Biological Wastewater Treatment

    PubMed Central

    Zhu, Ao; Guo, Jianhua; Ni, Bing-Jie; Wang, Shuying; Yang, Qing; Peng, Yongzhen

    2015-01-01

    Activated sludge models (ASMs) have been widely used for process design, operation and optimization in wastewater treatment plants. However, it is still a challenge to achieve an efficient calibration for reliable application using conventional approaches. Here, we propose a novel calibration protocol, the Numerical Optimal Approaching Procedure (NOAP), for the systematic calibration of ASMs. The NOAP consists of three key steps in an iterative scheme flow: i) global factor sensitivity analysis for factor fixing; ii) pseudo-global parameter correlation analysis for detection of non-identifiable factors; and iii) formation of a parameter subset through estimation using a genetic algorithm. The validity and applicability are confirmed using experimental data obtained from two independent wastewater treatment systems, including a sequencing batch reactor and a continuous stirred-tank reactor. The results indicate that the NOAP can effectively determine the optimal parameter subset and successfully perform model calibration and validation for these two different systems. The proposed NOAP is expected to be useful for automatic calibration of ASMs and to be potentially applicable to other ordinary differential equation models. PMID:25682959

  4. Remediation of hexavalent chromium contamination in chromite ore processing residue by sodium dithionite and sodium phosphate addition and its mechanism.

    PubMed

    Li, Yunyi; Cundy, Andrew B; Feng, Jingxuan; Fu, Hang; Wang, Xiaojing; Liu, Yangsheng

    2017-05-01

    Large amounts of chromite ore processing residue (COPR) wastes have been deposited in many countries worldwide, generating significant contamination issues from the highly mobile and toxic hexavalent chromium species (Cr(VI)). In this study, sodium dithionite (Na2S2O4) was used to reduce Cr(VI) to Cr(III) in COPR containing high available Fe, and then sodium phosphate (Na3PO4) was utilized to further immobilize Cr(III), via a two-step procedure (TSP). Remediation and immobilization processes and mechanisms were systematically investigated using batch experiments, sequential extraction studies, X-ray diffraction (XRD) and X-ray Photoelectron Spectroscopy (XPS). Results showed that Na2S2O4 effectively reduced Cr(VI) to Cr(III), catalyzed by Fe(III). The subsequent addition of Na3PO4 further immobilized Cr(III) by the formation of crystalline CrPO4·6H2O. However, addition of Na3PO4 simultaneously with Na2S2O4 (via a one-step procedure, OSP) impeded Cr(VI) reduction due to the competitive reaction of Na3PO4 and Na2S2O4 with Fe(III). Thus, the remediation efficiency of the TSP was much higher than that of the corresponding OSP. Using an optimal dosage in the two-step procedure (Na2S2O4 at a dosage of 12× the stoichiometric requirement for 15 days, and then Na3PO4 in a molar ratio (i.e. Na3PO4 : initial Cr(VI)) of 4:1 for another 15 days), the total dissolved Cr in the leachate determined via Toxicity Characteristic Leaching Procedure (TCLP Cr) testing of our samples was reduced to 3.8 mg/L (from an initial TCLP Cr of 112.2 mg/L, i.e. at >96% efficiency). Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. TU-D-201-07: Severity Indication in High Dose Rate Brachytherapy Emergency Response Procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, K; Rustad, F

    Purpose: Understanding the corresponding dose to different staff during the High Dose Rate (HDR) Brachytherapy emergency response procedure could help to develop an efficient and effective action strategy. In this study, a variation and risk analysis methodology was developed to simulate the HDR emergency response procedure based on severity indicators. Methods: A GammaMedplus iX HDR unit from Varian Medical Systems was used for this simulation. The emergency response procedure was decomposed based on risk management methods. Severity indexes were used to identify the impact of a risk occurrence at each step, including dose to the patient and dose to operating staff, by varying the time, HDR source activity, distance from the source to patient and staff, and the actions taken. The actions in the 7 steps were to press the interrupt button, press the emergency shutoff switch, press the emergency button on the afterloader keypad, turn the emergency hand-crank, remove the applicator from the patient, disconnect the transfer tube and move the afterloader from the patient, and execute emergency surgical recovery. Results: Given accumulated times in seconds at the assumed 7 steps of 15, 5, 30, 15, 180, 120 and 1800, and an HDR source activity of 10 Ci, the accumulated doses in cGy to the patient at 1 cm distance were 188, 250, 625, 813, 3063, 4563 and 27063, and the accumulated exposures in rem to the operator (outside the vault, then at 1 m and at 10 cm distance) were 0.0, 0.0, 0.1, 0.1, 22.6, 37.6 and 262.6. The variation was determined by the operators' actions at different times and distances from the HDR source. Conclusion: The time and dose were estimated for an HDR unit emergency response procedure, providing information for making optimal decisions during the emergency procedure. Further work would optimize and standardize the responses for other emergency procedures using a time-spatial-dose severity function.
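
    The quoted per-step doses follow from simple cumulative arithmetic: an assumed dose rate at 1 cm multiplied by each step duration, with inverse-square scaling for other distances. In the sketch below, the dose-rate constant of 12.5 cGy/s is an assumption chosen because it reproduces the quoted patient numbers; it is not stated in the abstract.

    # Sketch of the cumulative-dose bookkeeping described above. The step
    # durations are the seven values quoted in the abstract; the dose-rate
    # constant is an assumption (roughly Ir-192 at 10 Ci, ~12.5 cGy/s at 1 cm).
    step_seconds = [15, 5, 30, 15, 180, 120, 1800]
    dose_rate_at_1cm = 12.5   # cGy/s, assumed; scales linearly with activity

    def cumulative_dose(times_s, rate_1cm, distance_cm=1.0):
        """Accumulated dose after each step, inverse-square scaled with distance."""
        rate = rate_1cm / distance_cm ** 2
        total, out = 0.0, []
        for t in times_s:
            total += rate * t
            out.append(int(total + 0.5))   # round half up, as in the abstract
        return out

    # -> [188, 250, 625, 813, 3063, 4563, 27063], matching the quoted values
    print(cumulative_dose(step_seconds, dose_rate_at_1cm))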

  6. Two-steps extraction of essential oil, polysaccharides and biphenyl cyclooctene lignans from Schisandra chinensis Baill fruits.

    PubMed

    Cheng, Zhenyu; Yang, Yingjie; Liu, Yan; Liu, Zhigang; Zhou, Hongli; Hu, Haobin

    2014-08-05

    A method for the two-step extraction of essential oil, polysaccharides and lignans from Schisandra chinensis Baill has been established. First, S. chinensis was extracted by hydro-distillation; after the essential oil was collected, the extracted solution was separated from the water-insoluble residue and precipitated by adding dehydrated alcohol, and the precipitate was collected as the polysaccharide fraction. Finally, a second extraction was performed to obtain lignans from the water-insoluble residue with an ultrasonic-microwave assisted extraction (UMAE) method. Response surface methodology was employed to optimize the UMAE parameters; the optimal conditions were as follows: microwave power 430 W, ethanol concentration 84%, sample particle size passing 120-mesh sieves, ratio of water to raw material 15, and extraction time 2.1 min. Under these optimized conditions, the total extraction yield of the five lignans (Schisandrol A, Schisantherin A, Deoxyschisandrin, Schisandrin B and Schisandrin C) reached 14.22 ± 0.135 mg/g. Compared with the traditional method of directly extracting the different bioactive components in separate procedures, the extraction yields of polysaccharides and the five lignans reached 99% and 95%, respectively. The mean recoveries of the 5 lignan compounds and polysaccharides were 97.75-101.08%, and their RSD values were less than 3.88%. The approach proposed in this study not only improved the extraction yield of lignans, but also elevated the utilization of Schisandra resources. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Modeling Woven Polymer Matrix Composites with MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M. (Technical Monitor)

    2000-01-01

    NASA's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) is used to predict the elastic properties of plain weave polymer matrix composites (PMCs). The traditional one-step three-dimensional homogenization procedure that has been used in conjunction with MAC/GMC for modeling woven composites in the past is inaccurate due to the lack of shear coupling inherent to the model. However, by performing a two-step homogenization procedure, in which the woven composite repeating unit cell is homogenized independently in the through-thickness direction prior to homogenization in the plane of the weave, MAC/GMC can now accurately model woven PMCs. This two-step procedure is outlined and implemented, and predictions are compared with results from the traditional one-step approach and with other models and experiments from the literature. Full coupling of this two-step technique with MAC/GMC will result in a widely applicable, efficient, and accurate tool for the design and analysis of woven composite materials and structures.

  8. Detection of Agar, by Analysis of Sugar Markers, Associated with Bacillus Anthracis Spores, After Culture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wunschel, David S.; Colburn, Heather A.; Fox, Alvin

    2008-08-01

    Detection of small quantities of agar associated with spores of Bacillus anthracis could provide key information regarding its source or growth characteristics. Agar, widely used in the growth of bacteria on solid surfaces, consists primarily of repeating polysaccharide units of 3,6-anhydro-L-galactose (AGal) and galactose (Gal), with sulfated and O-methylated galactoses present as minor constituents. Two variants of the alditol acetate procedure were evaluated for detection of potential agar markers associated with spores. The first method employed a reductive hydrolysis step to stabilize labile anhydrogalactose by converting it to anhydrogalactitol. The second eliminated the reductive hydrolysis step, simplifying the procedure. Anhydrogalactitol, derived from agar, was detected using both derivatization methods followed by gas chromatography-mass spectrometry (GC-MS) analysis. However, challenges with artefactual background (reductive hydrolysis) or marker destruction (hydrolysis) led to the search for alternative sugar markers. A minor agar component, 6-O-methyl galactose (6-O-M gal), was readily detected in agar-grown but not broth-grown bacteria. Detection was optimized by the use of gas chromatography-tandem mass spectrometry (GC-MS-MS). With an appropriate choice of sugar marker and analytical procedure, detection of sugar markers for agar has considerable potential in microbial forensics.

  9. Parallel Algorithms and Patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robey, Robert W.

    2016-06-16

    This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Example problems include sorting, searching, optimization and matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are reductions, prefix scans and ghost cell updates. We only touch on parallel patterns in this presentation; the topic really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
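
    As a concrete illustration of the reduction pattern named above (my example, not taken from the presentation), the sketch below computes partial sums over chunks concurrently and then combines them in a final sequential step.

    # Minimal sketch of the "reduction" pattern: concurrent partial sums
    # over chunks, combined in a final step.
    from multiprocessing import Pool

    def chunk_sum(chunk):
        total = 0
        for v in chunk:
            total += v
        return total

    if __name__ == "__main__":
        data = list(range(1_000_000))
        n_workers = 4
        size = len(data) // n_workers
        chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
        with Pool(n_workers) as pool:
            partials = pool.map(chunk_sum, chunks)  # concurrent partial reductions
        print(sum(partials))                        # final combine step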

  10. Site-specific protein labeling with PRIME and chelation-assisted Click chemistry

    PubMed Central

    Uttamapinant, Chayasith; Sanchez, Mateo I.; Liu, Daniel S.; Yao, Jennifer Z.; White, Katharine A.; Grecian, Scott; Clarke, Scott; Gee, Kyle R.; Ting, Alice Y.

    2016-01-01

    This protocol describes an efficient method to site-specifically label cell-surface or purified proteins with chemical probes in two steps: PRobe Incorporation Mediated by Enzymes (PRIME) followed by chelation-assisted copper-catalyzed azide-alkyne cycloaddition (CuAAC). In the PRIME step, Escherichia coli lipoic acid ligase site-specifically attaches a picolyl azide derivative to a 13-amino acid recognition sequence that has been genetically fused onto the protein of interest. Proteins bearing picolyl azide are chemoselectively derivatized with an alkyne-probe conjugate by chelation-assisted CuAAC in the second step. We describe herein the optimized protocols to synthesize picolyl azide, perform PRIME labeling, and achieve CuAAC derivatization of picolyl azide on live cells, fixed cells, and purified proteins. Reagent preparations, including synthesis of picolyl azide probes and expression of lipoic acid ligase, take 12 d, while the procedure to perform site-specific picolyl azide ligation and CuAAC on cells or on purified proteins takes 40 min-3 h. PMID:23887180

  11. Design of Quiet Rotorcraft Approach Trajectories

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Burley, Casey L.; Boyd, D. Douglas, Jr.; Marcolini, Michael A.

    2009-01-01

    An optimization procedure for identifying quiet rotorcraft approach trajectories is proposed and demonstrated. The procedure employs a multi-objective genetic algorithm in order to reduce noise and create approach paths that will be acceptable to pilots and passengers. The concept is demonstrated by application to two different helicopters. The optimized paths are compared with one another and with a standard 6-deg approach path. The two demonstration cases validate the optimization procedure but highlight the need for improved noise prediction techniques and for additional rotorcraft acoustic data sets.

  12. Performance evaluation of different types of particle representation procedures of Particle Swarm Optimization in Job-shop Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Izah Anuar, Nurul; Saptari, Adi

    2016-02-01

    This paper addresses the types of particle representation (encoding) procedures used by a population-based stochastic optimization technique to solve scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. This mapping is an important step, since it allows each particle in PSO to represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective function is to minimize the makespan, using MATLAB software. Based on the experimental results, it is discovered that OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
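
    Of the encodings compared, the random-key idea is the easiest to illustrate: a continuous particle position is decoded into a discrete operation sequence by sorting. The sketch below is a minimal, hypothetical version of such a decoder using the common repetition-based JSP encoding; it is not the paper's exact implementation.

    # Minimal sketch of random-key decoding: a continuous PSO position vector
    # is turned into a job-shop operation sequence by sorting its components.
    import numpy as np

    def decode_random_keys(position, n_jobs, n_machines):
        """Map a position vector of length n_jobs*n_machines to an operation list.

        Sorting the keys gives a priority order; each slot is mapped back to a
        job index, so every job appears exactly n_machines times (one per
        operation), a common repetition-based JSP encoding.
        """
        order = np.argsort(position)      # ascending keys -> priority order
        return [int(slot % n_jobs) for slot in order]

    rng = np.random.default_rng(42)
    position = rng.random(6)              # 3 jobs x 2 machines
    print(decode_random_keys(position, n_jobs=3, n_machines=2))
    # e.g. [2, 0, 1, 0, 2, 1]: the k-th occurrence of job j is its k-th operation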

  13. Development and validation of a numerical model for cross-section optimization of a multi-part probe for soft tissue intervention.

    PubMed

    Frasson, L; Neubert, J; Reina, S; Oldfield, M; Davies, B L; Rodriguez Y Baena, F

    2010-01-01

    The popularity of minimally invasive surgical procedures is driving the development of novel, safer and more accurate surgical tools. In this context, a multi-part probe for soft tissue surgery is being developed in the Mechatronics in Medicine Laboratory at Imperial College, London. This study reports an optimization procedure using finite element methods for the identification of an interlock geometry able to limit the separation of the segments composing the multi-part probe. An optimal geometry was obtained and the corresponding three-dimensional finite element model was validated experimentally. Simulation results are shown to be consistent with the physical experiments. The outcome of this study is an important step in the provision of a novel miniature steerable probe for surgery.

  14. Direct methylation procedure for converting fatty amides to fatty acid methyl esters in feed and digesta samples.

    PubMed

    Jenkins, T C; Thies, E J; Mosley, E E

    2001-05-01

    Two direct methylation procedures often used for the analysis of total fatty acids in biological samples were evaluated for their application to samples containing fatty amides. Methylation of 5 mg of oleamide (cis-9-octadecenamide) in a one-step (methanolic HCl for 2 h at 70 degrees C) or a two-step (sodium methoxide for 10 min at 50 degrees C followed by methanolic HCl for 10 min at 80 degrees C) procedure gave 59 and 16% conversions of oleamide to oleic acid, respectively. Oleic acid recovery from oleamide was increased to 100% when the incubation in methanolic HCl was lengthened to 16 h and increased to 103% when the incubation in methoxide was modified to 24 h at 100 degrees C. However, conversion of oleamide to oleic acid in an animal feed sample was incomplete for the modified (24 h) two-step procedure but complete for the modified (16 h) one-step procedure. Unsaturated fatty amides in feed and digesta samples can be converted to fatty acid methyl esters by incubation in methanolic HCl if the time of exposure to the acid catalyst is extended from 2 to 16 h.

  15. Methods and reproducibility of grading optimized digital color fundus photographs in the Age-Related Eye Disease Study 2 (AREDS2 Report Number 2).

    PubMed

    Danis, Ronald P; Domalpally, Amitha; Chew, Emily Y; Clemons, Traci E; Armstrong, Jane; SanGiovanni, John Paul; Ferris, Frederick L

    2013-07-08

    To establish continuity with the grading procedures and outcomes from the historical data of the Age-Related Eye Disease Study (AREDS), color photographic imaging and evaluation procedures for the assessment of age-related macular degeneration (AMD) were modified for digital imaging in the AREDS2. The reproducibility of the grading of index AMD lesion components and for the AREDS severity scale was tested at the AREDS2 reading center. Digital color stereoscopic fundus photographs from 4203 AREDS2 subjects collected at baseline and annual follow-up visits were optimized for tonal balance and graded according to a standard protocol slightly modified from AREDS. The reproducibility of digital grading of AREDS2 images was assessed by reproducibility exercises, temporal drift (regrading a subset of baseline annually, n = 88), and contemporaneous masked regrading (ongoing, monthly regrade on 5% of submissions, n = 1335 eyes). In AREDS2, 91% and 96% of images received replicate grades within two steps of the baseline value on the AREDS severity scale for temporal drift and contemporaneous assessment, respectively (weighted Kappa of 0.73 and 0.76). Historical data for temporal drift in replicate gradings on the AREDS film-based images were 88% within two steps (weighted Kappa = 0.88). There was no difference in AREDS2-AREDS concordance for temporal drift (exact P = 0.57). Digital color grading has nearly the same reproducibility as historical film grading. There is substantial agreement for testing the predictive utility of the AREDS severity scale in AREDS2 as a clinical trial outcome. (ClinicalTrials.gov number, NCT00345176.)
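
    The agreement statistic quoted above, weighted kappa, discounts chance agreement and penalizes regrades by their distance on the ordinal scale. The sketch below computes a linearly weighted kappa on toy grades; linear weights are an assumption, as the abstract does not state the weighting scheme.

    # Sketch of weighted kappa for ordinal regrading agreement (linear weights
    # assumed; the example grades are invented).
    import numpy as np

    def weighted_kappa(a, b, n_categories):
        a, b = np.asarray(a), np.asarray(b)
        obs = np.zeros((n_categories, n_categories))
        for i, j in zip(a, b):
            obs[i, j] += 1
        obs /= obs.sum()
        # disagreement weights: |i - j| scaled to [0, 1]
        idx = np.arange(n_categories)
        w = np.abs(idx[:, None] - idx[None, :]) / (n_categories - 1)
        exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement
        return 1.0 - (w * obs).sum() / (w * exp).sum()

    first = [0, 1, 2, 3, 3, 4, 2, 1]
    regrade = [0, 1, 2, 4, 3, 4, 1, 1]
    print(weighted_kappa(first, regrade, n_categories=5))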

  16. Multivariate optimization of a procedure employing microwave-assisted digestion for the determination of nickel and vanadium in crude oil by ICP OES.

    PubMed

    Dos Anjos, Shirlei L; Alves, Jeferson C; Rocha Soares, Sarah A; Araujo, Rennan G O; de Oliveira, Olivia M C; Queiroz, Antonio F S; Ferreira, Sergio L C

    2018-02-01

    This work presents the optimization of a sample preparation procedure using microwave-assisted digestion for the determination of nickel and vanadium in crude oil employing inductively coupled plasma optical emission spectrometry (ICP OES). The optimization step was performed utilizing a two-level full factorial design involving the following factors: concentrated nitric acid volume, hydrogen peroxide volume, and microwave-assisted digestion temperature. Nickel and vanadium concentrations were used as responses. Additionally, a multiple response based on the normalization of the concentrations by the highest values was built to establish a compromise condition between the two analytes. A Doehlert matrix was used to optimize the instrumental conditions of the ICP OES spectrometer; in this design, plasma robustness was used as the chemometric response. The experiments were performed using a digested oil sample solution doped with magnesium(II) ions, as well as a standard magnesium solution. The optimized method allows for the determination of nickel and vanadium with quantification limits of 0.79 and 0.20 μg g(-1), respectively, for a digested sample mass of 0.1 g. The precision (expressed as relative standard deviation) was determined using five replicates of two oil samples, and the results obtained were 1.63% and 3.67% for nickel and 0.42% and 4.64% for vanadium. Bismuth and yttrium were also tested as internal standards, and the results demonstrate that yttrium allows better precision for the method. The accuracy was confirmed by the analysis of the certified reference material trace elements in fuel oil (CRM NIST 1634c). The proposed method was applied to the determination of nickel and vanadium in five crude oil samples from Brazilian basins. The metal concentrations found varied from 7.30 to 33.21 μg g(-1) for nickel and from 0.63 to 19.42 μg g(-1) for vanadium. Copyright © 2017. Published by Elsevier B.V.
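
    A two-level full factorial design over three factors, as used in the optimization step, is simply the 2^3 = 8 combinations of low/high levels. The sketch below enumerates such a design; the factor names and level values are illustrative, not the paper's.

    # Sketch of a two-level full factorial design: every low/high combination
    # of three factors. Names and levels are invented for the example.
    from itertools import product

    factors = {
        "HNO3_volume_mL": (2.0, 6.0),
        "H2O2_volume_mL": (1.0, 3.0),
        "digestion_temp_C": (160, 200),
    }

    runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    for i, run in enumerate(runs, 1):   # 2^3 = 8 runs
        print(i, run)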

  17. On the solution of evolution equations based on multigrid and explicit iterative methods

    NASA Astrophysics Data System (ADS)

    Zhukov, V. T.; Novikova, N. D.; Feodoritova, O. B.

    2015-08-01

    Two schemes for solving initial-boundary value problems for three-dimensional parabolic equations are studied. One is implicit and is solved using the multigrid method, while the other is an explicit iterative scheme based on optimal properties of the Chebyshev polynomials. In the explicit iterative scheme, the number of iteration steps and the iteration parameters are chosen based on the approximation and stability conditions, rather than on optimizing the convergence of the iterations to the solution of the implicit scheme. The features of the multigrid scheme include the implementation of intergrid transfer operators for the case of discontinuous coefficients in the equation and the adaptation of the smoothing procedure to the spectrum of the difference operators. The results produced by these schemes as applied to model problems with anisotropic discontinuous coefficients are compared.
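
    The explicit scheme rests on the optimality of Chebyshev polynomials over a known spectral interval. As a minimal illustration of the same idea, the sketch below applies the textbook Chebyshev iteration (cf. Saad, Iterative Methods for Sparse Linear Systems, Alg. 12.1) to a symmetric positive definite system with known eigenvalue bounds; the paper's actual scheme time-steps parabolic equations rather than solving a fixed linear system.

    # Sketch of Chebyshev iteration for A x = b when the spectrum of the SPD
    # matrix A lies in a known interval [lmin, lmax].
    import numpy as np

    def chebyshev_solve(A, b, lmin, lmax, n_iter=50):
        theta = 0.5 * (lmax + lmin)      # center of the spectral interval
        delta = 0.5 * (lmax - lmin)      # half-width of the interval
        sigma = theta / delta
        rho = 1.0 / sigma
        x = np.zeros_like(b)
        r = b - A @ x
        d = r / theta
        for _ in range(n_iter):
            x = x + d
            r = r - A @ d
            rho_new = 1.0 / (2.0 * sigma - rho)
            d = rho_new * rho * d + (2.0 * rho_new / delta) * r
            rho = rho_new
        return x

    A = np.diag([1.0, 2.0, 3.0])
    b = np.array([1.0, 1.0, 1.0])
    print(chebyshev_solve(A, b, lmin=1.0, lmax=3.0))   # ~ [1.0, 0.5, 0.333]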

  18. Information Fusion for High Level Situation Assessment and Prediction

    DTIC Science & Technology

    2007-03-01

    procedure includes deciding a sensor set that achieves the optimal trade-off between its cost and benefit, activating the identified sensors, integrating...and effective decision can be made by dynamic inference based on selecting a subset of sensors with the optimal trade-off between their cost and...first step is achieved by designing a sensor selection criterion that represents the trade-off between the sensor benefit and sensor cost. This is then

  19. Combining Approach in Stages with Least Squares for fits of data in hyperelasticity

    NASA Astrophysics Data System (ADS)

    Beda, Tibi

    2006-10-01

    The present work concerns a method of continuous block-wise approximation of a continuous function: a method combining the Approach in Stages with finite-domain Least Squares. It is an identification procedure by sub-domains: basic generating functions are determined step by step, permitting their weighting effects to be felt. This procedure allows one to control the signs, and to some extent the optimal values, of the estimated parameters, and consequently it provides a unique set of solutions that should represent the real physical parameters. Illustrations and comparisons are developed in rubber hyperelastic modeling. To cite this article: T. Beda, C. R. Mecanique 334 (2006).

  20. Genetic Interaction Mapping in Schizosaccharomyces pombe Using the Pombe Epistasis Mapper (PEM) System and a ROTOR HDA Colony Replicating Robot in a 1536 Array Format.

    PubMed

    Roguev, Assen; Xu, Jiewei; Krogan, Nevan

    2018-02-01

    This protocol describes an optimized high-throughput procedure for generating double deletion mutants in Schizosaccharomyces pombe using the colony replicating robot ROTOR HDA and the PEM (pombe epistasis mapper) system. The method is based on generating high-density colony arrays (1536 colonies per agar plate) and passaging them through a series of antidiploid and mating-type selection (ADS-MTS) and double-mutant selection (DMS) steps. Detailed program parameters for each individual replication step are provided. Using this procedure, batches of 25 or more screens can be routinely performed. © 2018 Cold Spring Harbor Laboratory Press.

  1. Microphytobenthos potential productivity estimated in three tidal embayments of the San Francisco Bay system

    USGS Publications Warehouse

    Guarini, Jean-Marc; Cloern, James E.; Edmunds, Jody L.; Gros, Philippe

    2002-01-01

    In this paper we describe a three-step procedure to infer the spatial heterogeneity of microphytobenthos primary productivity at the scale of tidal estuaries and embayments. The first step involves local measurement of the carbon assimilation rate of benthic microalgae to determine the parameters of the photosynthesis-irradiance (P-E) curves (using non-linear optimization methods). In the next step, a resampling technique is used to rebuild pseudo-sampling distributions of the local productivity estimates; these provide error estimates for determining the significance level of differences between sites. The third step combines the previous results with deterministic models of tidal elevation and solar irradiance to compute the mean and variance of the daily areal primary productivity over the entire intertidal mudflat area within each embayment. This scheme was applied to three different intertidal mudflat regions of the San Francisco Bay estuary during autumn 1998. Microphytobenthos productivity exhibits strong (ca. 3-fold) significant differences among the major sub-basins of San Francisco Bay. This spatial heterogeneity is attributed to two main causes: significant differences in the photosynthetic competence (P-E parameters) of the microphytobenthos in the different sub-basins, and spatial differences in the phase shifts between the tidal and solar cycles controlling the exposure of intertidal areas to sunlight. The procedure is general and can be used in other estuaries to assess the magnitude and patterns of spatial variability of microphytobenthos productivity at the ecosystem level.
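
    The first step, fitting P-E curves by non-linear optimization, can be sketched as follows. The tanh (Jassby-Platt) form used here is one common choice and the measurements are invented; the abstract does not specify the paper's exact model.

    # Sketch of fitting a photosynthesis-irradiance (P-E) curve by non-linear
    # least squares, assuming a tanh (Jassby-Platt) form and hypothetical data.
    import numpy as np
    from scipy.optimize import curve_fit

    def pe_curve(E, p_max, alpha):
        """Assimilation rate vs irradiance E; alpha is the initial slope."""
        return p_max * np.tanh(alpha * E / p_max)

    # Hypothetical irradiance/assimilation measurements
    E = np.array([25, 50, 100, 200, 400, 800, 1600], dtype=float)
    P = np.array([0.9, 1.7, 3.0, 4.6, 5.6, 5.9, 6.0])

    (p_max, alpha), _ = curve_fit(pe_curve, E, P, p0=[6.0, 0.04])
    print(p_max, alpha)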

  3. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    NASA Astrophysics Data System (ADS)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated by a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications, including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost among all admissible affine maps. The procedure can be used on both continuous measures and finite sample sets from distributions. In numerical examples, the procedure is applied to multivariate normal distributions, to a two-dimensional shape transform problem and to color transfer problems. For the second topic, we present an extension to anisotropic flows of the recently developed Helmholtz and wave-vortex decomposition method of Buhler et al. (J. Fluid Mech., vol. 756, 2014, pp. 1007-1026) for one-dimensional spectra measured along ship or aircraft tracks. While in the original method the flow was assumed to be homogeneous and isotropic in the horizontal plane, we allow the flow to have a simple kind of horizontal anisotropy that is chosen in a self-consistent manner and can be deduced from the one-dimensional power spectra of the horizontal velocity fields and their cross-correlation. The key result is that an exact and robust Helmholtz decomposition of the horizontal kinetic energy spectrum can be achieved in this anisotropic flow setting, which then also allows the subsequent wave-vortex decomposition step. The new method is developed theoretically and tested with encouraging results on challenging synthetic data as well as on ocean data from the Gulf Stream.
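
    The affine preconditioning idea from chapter 2 can be sketched with plain mean/covariance whitening: mapping each sample cloud to zero mean and identity covariance brings the two measures closer, while affine maps preserve the optimal transport solution up to composition. The whitening choice below is my simplification; the thesis derives the cost-minimizing affine pair.

    # Sketch of an affine preconditioning step for optimal transport between
    # two point clouds, using simple whitening as the affine map.
    import numpy as np

    def whiten(X):
        """Return whitened samples plus the affine map (A, mu) with Z=(X-mu)A."""
        mu = X.mean(axis=0)
        cov = np.cov(X - mu, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        A = evecs @ np.diag(evals ** -0.5) @ evecs.T   # inverse symmetric sqrt
        return (X - mu) @ A, A, mu

    rng = np.random.default_rng(3)
    X = rng.multivariate_normal([0, 0], [[4, 1], [1, 2]], size=500)
    Y = rng.multivariate_normal([5, 5], [[1, 0], [0, 9]], size=500)
    Xw, *_ = whiten(X)
    Yw, *_ = whiten(Y)
    # transport is now solved between Xw and Yw and composed with the affine maps
    print(np.cov(Xw, rowvar=False).round(2))   # ~ identity covariance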

  4. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang

    This work proposes an approach for distribution system load forecasting that aims to provide highly accurate short-term load forecasting with high resolution, utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameter optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. An SVR model is then trained on the load data to forecast the future load. For better SVR performance, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameter search from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
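
    A minimal version of the two-step search is sketched below: a coarse grid over log-scaled SVR hyperparameters, followed by a small PSO confined to a box around the grid winner. The data, parameter ranges, swarm settings, and the use of scikit-learn's SVR are all illustrative assumptions, not the paper's setup.

    # Sketch of a two-step hyperparameter search: coarse grid traverse, then
    # PSO restricted to the neighborhood of the grid winner.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    X = rng.uniform(-3, 3, size=(200, 2))                 # toy load features
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)

    def score(log_c, log_g):
        """Negative CV error for SVR with C=10**log_c, gamma=10**log_g."""
        model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_g)
        return cross_val_score(model, X, y, cv=3,
                               scoring="neg_mean_squared_error").mean()

    # Step 1: coarse grid traverse over log10(C) and log10(gamma)
    grid = [(c, g) for c in np.linspace(-1, 3, 5) for g in np.linspace(-3, 1, 5)]
    best = max(grid, key=lambda p: score(*p))

    # Step 2: PSO confined to a local box around the grid winner
    lo, hi = np.array(best) - 0.5, np.array(best) + 0.5
    pos = rng.uniform(lo, hi, size=(10, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([score(*p) for p in pos])
    gbest = pbest[pbest_val.argmax()]
    for _ in range(15):
        r1, r2 = rng.random((2, 10, 2))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([score(*p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()]
    print("best log10(C), log10(gamma):", gbest)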

  5. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
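
    Step one relies on Dijkstra's algorithm to extract the shortest-distance tree of the network. The sketch below is a standard single-source version returning distances and a predecessor map; the paper's multisource extension is not reproduced here (one common device is a virtual super-source connected to all sources at zero cost). The toy network is invented.

    # Sketch of Dijkstra's algorithm returning the shortest-distance tree as a
    # predecessor map. Graph: adjacency dict {node: [(neighbor, length), ...]}.
    import heapq

    def shortest_distance_tree(graph, source):
        dist = {source: 0.0}
        parent = {source: None}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                     # stale heap entry
            for v, w in graph[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], parent[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return dist, parent                  # tree edges: (parent[v], v)

    network = {
        "res": [("a", 100), ("b", 250)],
        "a":   [("b", 120), ("c", 200)],
        "b":   [("c", 80)],
        "c":   [],
    }
    print(shortest_distance_tree(network, "res"))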

  6. Output-Sensitive Construction of Reeb Graphs.

    PubMed

    Doraiswamy, H; Natarajan, V

    2012-01-01

    The Reeb graph of a scalar function represents the evolution of the topology of its level sets. This paper describes a near-optimal output-sensitive algorithm for computing the Reeb graph of scalar functions defined over manifolds or non-manifolds in any dimension. Key to the simplicity and efficiency of the algorithm is an alternate definition of the Reeb graph that considers equivalence classes of level sets instead of individual level sets. The algorithm works in two steps. The first step locates all critical points of the function in the domain. Critical points correspond to nodes in the Reeb graph. Arcs connecting the nodes are computed in the second step by a simple search procedure that works on a small subset of the domain that corresponds to a pair of critical points. The paper also describes a scheme for controlled simplification of the Reeb graph and two different graph layout schemes that help in the effective presentation of Reeb graphs for visual analysis of scalar fields. Finally, the Reeb graph is employed in four different applications: surface segmentation, spatially-aware transfer function design, visualization of interval volumes, and interactive exploration of time-varying data.

  7. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Slip artifacts are then eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and a maximum slip of 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.

  8. Bi-Level Integrated System Synthesis (BLISS)

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.

    1998-01-01

    BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.

  9. A variation-perturbation method for atomic and molecular interactions. I - Theory. II - The interaction potential and van der Waals molecule for Ne-HF

    NASA Astrophysics Data System (ADS)

    Gallup, G. A.; Gerratt, J.

    1985-09-01

    The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been directly calculated from total energies. But such a method has definite limitations as to the size of systems which can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure includes also the optimization of excited orbitals, and an approximation of atomic integrals and Hamiltonian matrix elements.

  10. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator allows one to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LES runs very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation that controls both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LES runs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.

  11. Environmentally friendly microwave-assisted sequential extraction method followed by ICP-OES and ion-chromatographic analysis for rapid determination of sulphur forms in coal samples.

    PubMed

    Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine

    2018-05-15

    A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized using multivariate mathematical tools. Pareto charts generated from a 2^3 full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%), and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sums of the sulphur forms from the sequential extraction steps showed consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To safeguard against the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notoriously acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Optimization of automated large-scale production of [(18)F]fluoroethylcholine for PET prostate cancer imaging.

    PubMed

    Pascali, Giancarlo; D'Antonio, Luca; Bovone, Paola; Gerundini, Paolo; August, Thorsten

    2009-07-01

    PET tumor imaging is gaining importance in current clinical practice. FDG-PET is the most utilized approach, but it suffers from inflammation influences and is not usable for prostate cancer detection. Recently, (11)C-choline analogues have been employed successfully in this field of imaging, leading to a growing interest in the utilization of (18)F-labeled analogues: [(18)F]fluoroethylcholine (FEC) has been demonstrated to be promising, especially in prostate cancer imaging. In this work we report an automatic radiosynthesis of this tracer with high yields, short synthesis time and ease of performance, potentially usable at routine production sites. We used a Modular Lab system to automatically perform the two-step/one-pot synthesis. In the first step, we labeled ethylene glycol ditosylate, obtaining [(18)F]fluoroethyltosylate; in the second step, we coupled the latter intermediate with neat dimethylethanolamine. The final mixture was purified by means of solid phase extraction; in particular, the product was trapped on a cation-exchange resin and eluted with isotonic saline. The optimized procedure resulted in a non-decay-corrected yield of 36% and produced 30-45 GBq of product already in injectable form. The product was analyzed for quality control and found to be pure and sterile; in addition, residual solvents were below the required threshold. In this work, we present an automatic FEC radiosynthesis that has been optimized for routine production. These findings should foster interest in a wider utilization of this radiomolecule for imaging of prostate cancer with PET, a field for which no gold-standard tracer has yet been validated.

  13. Grain Yield Observations Constrain Cropland CO2 Fluxes Over Europe

    NASA Astrophysics Data System (ADS)

    Combe, M.; de Wit, A. J. W.; Vilà-Guerau de Arellano, J.; van der Molen, M. K.; Magliulo, V.; Peters, W.

    2017-12-01

    Carbon exchange over croplands plays an important role in the European carbon cycle over daily to seasonal time scales. A better description of this exchange in terrestrial biosphere models (most of which currently treat crops as unmanaged grasslands) is needed to improve atmospheric CO2 simulations. In the framework we present here, we model gross European cropland CO2 fluxes with a crop growth model constrained by grain yield observations. Our approach follows a two-step procedure. In the first step, we calculate day-to-day crop carbon fluxes and pools with the WOrld FOod STudies (WOFOST) model. A scaling factor of crop growth is optimized regionally by minimizing the difference between the final grain carbon pool and crop yield observations from the Statistical Office of the European Union. In the second step, we re-run our WOFOST model for the full European 25 × 25 km gridded domain using the optimized scaling factors. We combine our optimized crop CO2 fluxes with a simple soil respiration model to obtain the net cropland CO2 exchange. We assess our model's ability to represent cropland CO2 exchange using 40 years of observations at seven European FluxNet sites and compare it with carbon fluxes produced by a typical terrestrial biosphere model. We conclude that our new model framework provides a more realistic and strongly observation-driven estimate of carbon exchange over European croplands. Its products will be made available to the scientific community through the ICOS Carbon Portal and will serve as a new cropland component in the CarbonTracker Europe inverse model.

  14. Novel actin crosslinker superfamily member identified by a two step degenerate PCR procedure.

    PubMed

    Byers, T J; Beggs, A H; McNally, E M; Kunkel, L M

    1995-07-24

    Actin-crosslinking proteins link F-actin into the bundles and networks that constitute the cytoskeleton. Dystrophin, beta-spectrin, alpha-actinin, ABP-120, ABP-280, and fimbrin share homologous actin-binding domains and comprise an actin crosslinker superfamily. We have identified a novel member of this superfamily (ACF7) using a degenerate primer-mediated PCR strategy that was optimized to resolve less-abundant superfamily sequences. The ACF7 gene is on human chromosome 1 and hybridizes to high molecular weight bands on northern blots. Sequence comparisons argue that ACF7 does not fit into one of the existing families, but represents a new class within the superfamily.

  15. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    PubMed

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data from complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of the chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms; a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates the range of mass scans that are plausibly correlated, and the correlation matrix is calculated only for these scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements; the computational complexity of this optimal path generation is reduced by dynamic programming. The program produces time-aligned surfaces. Using the first-step temporal offset in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ for the .NET 2 environment on Windows XP. In this work, we demonstrate the application of ChromAlign to the alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time-axis shifts and warping of chromatographic surfaces.
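
    Both steps lend themselves to a compact illustration. The sketch below is a simplified reconstruction of the idea rather than ChromAlign itself (and in Python rather than C++): it estimates the global offset by FFT-based cross-correlation of two profiles and scores a monotone alignment path through a correlation matrix with dynamic programming; path backtracking and the offset-based banding of the matrix are omitted for brevity.

      import numpy as np

      def global_offset(ref, sample):
          # Step 1: lag maximizing the circular cross-correlation, via FFT.
          xcorr = np.fft.ifft(np.fft.fft(ref) * np.conj(np.fft.fft(sample))).real
          lag = int(np.argmax(xcorr))
          return lag if lag <= len(ref) // 2 else lag - len(ref)

      def best_path_score(corr):
          # Step 2: maximal summed correlation along a monotone path
          # from (0, 0) to (n-1, m-1), computed by dynamic programming.
          n, m = corr.shape
          score = np.full((n, m), -np.inf)
          score[0, 0] = corr[0, 0]
          for i in range(n):
              for j in range(m):
                  if i == 0 and j == 0:
                      continue
                  prev = max(
                      score[i - 1, j] if i > 0 else -np.inf,
                      score[i, j - 1] if j > 0 else -np.inf,
                      score[i - 1, j - 1] if i > 0 and j > 0 else -np.inf,
                  )
                  score[i, j] = corr[i, j] + prev
          return score[-1, -1]

      rng = np.random.default_rng(0)
      profile = rng.random(256)
      print(global_offset(profile, np.roll(profile, -17)))   # prints 17
      print(best_path_score(rng.random((4, 4))))             # demo on a tiny matrix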

  16. In vivo dosimetry using Gafchromic films during pelvic intraoperative electron radiation therapy (IOERT)

    PubMed Central

    Costa, Filipa; Gomes, Dora; Magalhães, Helena; Arrais, Rosário; Moreira, Graciete; Cruz, Maria Fátima; Silva, José Pedro; Santos, Lúcio; Sousa, Olga

    2016-01-01

    Objective: To characterize in vivo dose distributions during pelvic intraoperative electron radiation therapy (IOERT) for rectal cancer and to assess the alterations introduced by irregular irradiation surfaces in the presence of bevelled applicators. Methods: In vivo measurements were performed with Gafchromic films during 32 IOERT procedures. One film per procedure was used for the first 20 procedures; the methodology was then optimized for the remaining 12 procedures by using a set of three films. Both the average dose and the two-dimensional dose distribution for each film were determined. Phantom measurements were performed for comparison. Results: For flat and concave surfaces, the doses measured in vivo agree with expected values. For concave surfaces with step-like irregularities, measured doses tend to be higher than expected. Results obtained with three films per procedure show large variability along the irradiated surface, with important differences from expected profiles. These results are consistent with the presence of surface hotspots, such as those observed in phantoms with step-like irregularities, as well as fluid build-up. Conclusion: Clinical dose distributions in the IOERT of rectal cancer are often different from the references used for prescription. Further studies are necessary to assess the impact of these differences on treatment outcomes. In vivo measurements are important, but need to be accompanied by accurate imaging of positioning and irradiated surfaces. Advances in knowledge: These results confirm that surface irregularities occur frequently in rectal cancer IOERT and have a measurable effect on the dose distribution. PMID:27188847

  17. [Indications of lung transplantation: Patients selection, timing of listing, and choice of procedure].

    PubMed

    Morisse Pradier, H; Sénéchal, A; Philit, F; Tronc, F; Maury, J-M; Grima, R; Flamens, C; Paulus, S; Neidecker, J; Mornex, J-F

    2016-02-01

    Lung transplantation (LT) is now considered an excellent treatment option for selected patients with end-stage pulmonary diseases such as COPD, cystic fibrosis, idiopathic pulmonary fibrosis, and pulmonary arterial hypertension. The two goals of LT are to provide a survival benefit and to improve quality of life. The three-step decision process leading to LT is discussed in this review. The first step is the selection of candidates, which requires a careful examination to check absolute and relative contraindications. The second step is the timing of listing for LT; it requires knowledge of the disease-specific prognostic factors available in international guidelines and discussed in this paper. The third step is the choice of procedure: indications for heart-lung, single-lung, and bilateral-lung transplantation are described. In conclusion, this document provides guidelines to help pulmonologists in the referral and selection of candidates for transplantation, in order to optimize the outcome of LT. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  18. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second is the general case where the polynomial matrices may depend on unmeasurable system states that must be estimated. For the latter case, two design procedures are proposed. The first obtains the polynomial fuzzy controller and observer gains in two steps. The second obtains the gains in a single step, overcoming the drawbacks of the two-step procedure. The resulting conditions are presented in terms of sums of squares (SOS), which can be solved via SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.

  19. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing the classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVMs). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while leave-one-out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, in which an inner CV loop tunes the parameters while an outer CV loop computes an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes in the "null" datasets, the CV error estimate for the Shrunken Centroid classifier with the optimal parameters was less than 30% on 18.5% of the simulated training datasets. For the SVM with optimal parameters, the estimated error rate was less than 30% on 38% of the "null" datasets. Performance of the optimized classifiers on the independent test sets was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent test set, for both Shrunken Centroids and SVM classifiers and for both "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
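
    As a concrete illustration of the recommended procedure, the sketch below nests a parameter-tuning CV loop inside an outer error-estimation loop using scikit-learn; the "null" data, the parameter grid, and the fold counts are arbitrary choices, not those of the study.

      import numpy as np
      from sklearn.model_selection import GridSearchCV, cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 50))       # "null" data: features carry no class signal
      y = rng.integers(0, 2, size=100)

      inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=5)   # inner loop: tuning
      outer_scores = cross_val_score(inner, X, y, cv=5)        # outer loop: error estimate
      print(f"nested-CV accuracy: {outer_scores.mean():.2f} (chance = 0.50)")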

  20. Investigations for Thermal and Electrical Conductivity of ABS-Graphene Blended Prototypes

    PubMed Central

    Singh, Rupinder; Sandhu, Gurleen S.; Penna, Rosa; Farina, Ilenia

    2017-01-01

    Thermoplastic materials such as acrylonitrile-butadiene-styrene (ABS) and nylon are widely used in three-dimensional printing of functional and non-functional prototypes. These polymer-based prototypes usually lack thermal and electrical conductivity. Graphene (Gr) has attracted considerable interest in recent years due to its intrinsic mechanical, thermal, and electrical properties. This paper presents a step-by-step procedure (as a case study) for the development of an in-house ABS-Gr blended composite feedstock filament for fused deposition modelling (FDM) applications. The feedstock filament was prepared by two different methods (mechanical and chemical mixing). For mechanical mixing, a twin-screw extrusion (TSE) process was used; for chemical mixing, the composite of Gr in an ABS matrix was prepared by chemical dissolution, followed by mechanical blending through TSE. Finally, the electrical and thermal conductivity of functional prototypes prepared from the composite feedstock filaments were optimized. PMID:28773244

  1. Preparation and characterization of silica xerogels as carriers for drugs.

    PubMed

    Czarnobaj, K

    2008-11-01

    The aim of the present study was to use the sol-gel method to synthesize different forms of xerogel matrices for drugs and to investigate how the synthesis conditions and the solubility of the drugs influence the drug-release profile and the structure of the matrices. Silica xerogels doped with drugs were prepared by the sol-gel method from a hydrolyzed tetraethoxysilane (TEOS) solution containing one of two model compounds: diclofenac diethylamine (DD), a water-soluble drug, or ibuprofen (IB), a water-insoluble drug. Two procedures were used for the synthesis of the sol-gel derived materials: a one-step procedure (the sol-gel reaction carried out under either acidic or basic conditions) and a two-step procedure (hydrolysis of TEOS under acidic conditions, followed by condensation of silanol groups under basic conditions), in order to obtain samples with different microstructures. In vitro release studies revealed a similar two-stage release profile: an initial diffusion-controlled release followed by a slower release rate. In all the cases studied, the amount of DD released was higher, and the release time shorter, than for IB from the same type of matrix. The amount of drug released from two-step prepared xerogels was always lower than that from one-step base-catalyzed xerogels. One-step acid-catalyzed xerogels proved unsuitable as carriers for the examined drugs.

  2. Optimization of proximity ligation assay (PLA) for detection of protein interactions and fusion proteins in non-adherent cells: application to pre-B lymphocytes.

    PubMed

    Debaize, Lydie; Jakobczyk, Hélène; Rio, Anne-Gaëlle; Gandemer, Virginie; Troadec, Marie-Bérengère

    2017-01-01

    Genetic abnormalities, including chromosomal translocations, are described for many hematological malignancies. From the clinical perspective, detection of chromosomal abnormalities is relevant not only for diagnostic and treatment purposes but also for prognostic risk assessment. From the translational research perspective, the identification of fusion proteins and protein interactions has allowed crucial breakthroughs in understanding the pathogenesis of malignancies and, consequently, major achievements in targeted therapy. We describe the optimization of the proximity ligation assay (PLA) to ascertain the presence of fusion proteins and protein interactions in non-adherent pre-B cells. PLA is an innovative method that combines the spatial information of microscopy with the precision of molecular biology, enabling detection of protein proximity theoretically ranging from 0 to 40 nm. We propose an optimized PLA procedure: we overcome the difficulty of retaining non-adherent hematological cells through traditional cytocentrifugation combined with optimized buffers, modified incubation times, and modified washing steps. Further, we provide convincing negative and positive controls and demonstrate that the optimized PLA procedure is sensitive to total protein level. The optimized procedure allows the detection of fusion proteins, their subcellular expression, and protein interactions in non-adherent cells, and can be readily applied to various non-adherent hematological cells, from cell lines to patients' cells. It therefore provides a new tool that can be adopted in a wide range of applications in the biological field.

  3. Electro-thermal battery model identification for automotive applications

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Yurkovich, S.; Guezennec, Y.; Yurkovich, B. J.

    This paper describes a procedure for identifying an electro-thermal model of lithium ion batteries used in automotive applications. The dynamic model structure adopted is based on an equivalent circuit model whose parameters are scheduled on state-of-charge, temperature, and current direction, with linear spline functions as the functional form of the parametric dependence. The model identified in this way is valid over a wide range of temperatures and states of charge, so the resulting model can be used for automotive applications such as on-board estimation of the state-of-charge and state-of-health. The model coefficients are identified using a multiple-step genetic-algorithm-based optimization procedure designed for large-scale optimization problems. The validity of the procedure is demonstrated experimentally for an A123 lithium iron phosphate battery.
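
    A minimal sketch of such a model structure is given below, assuming a first-order equivalent circuit whose open-circuit voltage and ohmic resistance are scheduled on state-of-charge through linear splines; the breakpoint values and RC constants are invented, and in the paper they would instead be identified by the genetic algorithm.

      import numpy as np

      soc_knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # spline breakpoints
      r0_knots  = np.array([8e-3, 6e-3, 5e-3, 5e-3, 7e-3])   # ohmic resistance, ohm
      ocv_knots = np.array([3.00, 3.20, 3.30, 3.35, 3.60])   # open-circuit voltage, V

      def terminal_voltage(soc, current, v_rc, r1=2e-3, c1=5e3, dt=1.0):
          # One Euler step of V = OCV(soc) - I*R0(soc) - V_rc, with the
          # parameters looked up by linear-spline interpolation.
          r0 = np.interp(soc, soc_knots, r0_knots)
          ocv = np.interp(soc, soc_knots, ocv_knots)
          v_rc = v_rc + dt * (current / c1 - v_rc / (r1 * c1))  # RC branch dynamics
          return ocv - current * r0 - v_rc, v_rc

      v, v_rc = terminal_voltage(soc=0.6, current=10.0, v_rc=0.0)
      print(f"terminal voltage under 10 A discharge: {v:.3f} V")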

  4. An Exploration Of Fuel Optimal Two-impulse Transfers To Cyclers in the Earth-Moon System

    NASA Astrophysics Data System (ADS)

    Hosseinisianaki, Saghar

    2011-12-01

    This research explores optimal two-impulse transfers between a low Earth orbit and cycler orbits in the Earth-Moon circular restricted three-body framework, with emphasis on the optimization strategy. Cyclers are periodic orbits that encounter both the Earth and the Moon periodically; a spacecraft on such a trajectory is under the influence of both the Earth's and the Moon's gravitational fields. Cyclers have gained recent interest as baseline orbits for several Earth-Moon mission concepts, notably in relation to human exploration. This thesis shows that a direct optimization starting from the classic Lambert initial guess may not be adequate for these problems, and proposes a three-step optimization solver to improve the domain of convergence toward an optimal solution. The first step consists of finding feasible trajectories with a given transfer time; Lambert's problem provides the initial guess for minimizing the error in arrival position, and the suitability of Lambert's solution as an initial guess is analyzed. Once a feasible trajectory is found, the velocity impulse is a function only of transfer time and the phases of the departure and arrival points. The second step consists of optimizing the impulse over the transfer time, which yields the minimum-impulse transfer for fixed end points. Finally, the third step maps the optimal solutions as the end points are varied.

  6. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is a power of two. Because of this, padding grids that are not already sized to a power of two up to the next power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids, where certain pad sizes work better than others. Therefore, a generalized strategy for determining optimal pad sizes is needed. There are three steps in the FFT algorithm: the first is to perform a one-dimensional transform on each row in the grid; the second is to transpose the resulting matrix; the third is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that strikes a balance between optimizing the grid pad size with small prime factors (which are optimal for one-dimensional operations) and with large prime factors (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases how often processor-requested data is found in the set-associative processor cache; cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes, because different computer architectures process commands differently. The test grid was 512 × 512. Using a 540 × 540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256 × 256 grid worked best. A Core2Duo computer preferred either a 1040 × 1040 (15 percent faster) or a 1008 × 1008 (30 percent faster) grid. Many fields can benefit from this algorithm, including optics, image processing, signal processing, and engineering applications.
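
    The idea is readily tried with library routines. The sketch below uses SciPy's next_fast_len, which returns the smallest 5-smooth size at or above n, as an approximation of the small-prime-factor criterion; the cache-driven row/column trade-off described in the article is not reproduced here.

      import numpy as np
      from scipy.fft import next_fast_len, fft2

      grid = np.random.default_rng(0).normal(size=(512, 512))
      n = next_fast_len(540)             # 540 = 2^2 * 3^3 * 5 is already 5-smooth
      padded = np.zeros((n, n))
      padded[:512, :512] = grid          # zero-pad to the fast size before the 2-D FFT
      spectrum = fft2(padded)
      print(f"padded grid size: {n} x {n}")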

  7. Designing nacre-like materials for simultaneous stiffness, strength and toughness: Optimum materials, composition, microstructure and size

    NASA Astrophysics Data System (ADS)

    Barthelat, Francois

    2014-12-01

    Nacre, bone, and spider silk are staggered composites in which inclusions of high aspect ratio reinforce a softer matrix. Such staggered composites have emerged through natural selection as the best configuration for producing stiffness, strength, and toughness simultaneously. As a result, these remarkable materials increasingly serve as models for synthetic composites with unusual and attractive performance. While several models have been developed to predict basic properties of biological and bio-inspired staggered composites, the designer is still left to struggle with finding optimum parameters. Unresolved issues include choosing optimum properties for inclusions and matrix, and resolving the contradictory effects of certain design variables. Here we overcome these difficulties with a multi-objective optimization for simultaneously high stiffness, strength, and energy absorption in staggered composites. Our optimization scheme includes the material properties of the inclusions and matrix as design variables. This process reveals new guidelines; for example, the staggered microstructure is only advantageous if the tablets are at least five times stronger than the interfaces, and only if high volume concentrations of tablets are used. We finally compile the results into a step-by-step optimization procedure that can be applied to the design of any type of high-performance staggered composite at any length scale. The procedure produces optimum designs consistent with the materials and microstructure of natural nacre, confirming that this natural material is indeed optimized for mechanical performance.

  8. Determination of the mass transfer limiting step of dye adsorption onto commercial adsorbent by using mathematical models.

    PubMed

    Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov

    2014-01-01

    The removal of Reactive Blue 5G dye in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer in the adsorption of the dye onto the adsorbent; the location of the dominant mass transfer resistance (external, internal, or surface adsorption) is what distinguishes the models. Two hypotheses were applied to describe the internal mass transfer resistance: first, the mass transfer coefficient was considered constant; second, the mass transfer coefficient was considered a function of the dye concentration in the adsorbent. Experimental breakthrough curves were obtained for different adsorbent particle diameters, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated using the downhill simplex optimization method. The results showed that the model considering internal resistance with a variable mass transfer coefficient was more flexible than the others and better described the dynamics of dye adsorption in the fixed-bed column. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
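
    The estimation step generalizes beyond this system. The sketch below shows a downhill simplex (Nelder-Mead) fit of a breakthrough-curve model to data; the logistic model, its two parameters, and the synthetic observations are invented stand-ins for the paper's mass transfer models and measured curves.

      import numpy as np
      from scipy.optimize import minimize

      t = np.linspace(0, 300, 60)                   # time, min
      c_obs = 1 / (1 + np.exp(-0.05 * (t - 150)))   # synthetic breakthrough data, C/C0

      def model(params, t):
          k, t50 = params                           # rate constant, midpoint time
          return 1 / (1 + np.exp(-k * (t - t50)))

      def sse(params):                              # objective: squared error
          return np.sum((model(params, t) - c_obs) ** 2)

      res = minimize(sse, x0=[0.01, 100.0], method="Nelder-Mead")
      print(f"fitted k = {res.x[0]:.3f} 1/min, t50 = {res.x[1]:.1f} min")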

  9. A Cooperative Traffic Control of Vehicle–Intersection (CTCVI) for the Reduction of Traffic Delays and Fuel Consumption

    PubMed Central

    Li, Jinjian; Dridi, Mahjoub; El-Moudni, Abdellah

    2016-01-01

    The problem of simultaneously reducing traffic delays and fuel consumption in a network of intersections without traffic lights is solved by a cooperative traffic control algorithm, where cooperation is achieved through Vehicle-to-Infrastructure (V2I) communication. The solution involves two main steps. The first step concerns the itinerary: the sequence of intersections each vehicle follows from its starting point to its destination. Based on the principle of minimal travel distance, each vehicle chooses its itinerary dynamically according to the traffic loads at adjacent intersections. The second step comprises the proposed cooperative procedures that allow vehicles to pass through each intersection rapidly and economically. On one hand, using the real-time information sent by vehicles via V2I at the edge of the communication zone, each intersection applies dynamic programming (DP) to cooperatively optimize the vehicle passing sequence with minimal traffic delays, so that vehicles may rapidly pass the intersection under the relevant safety constraints; on the other hand, after receiving this sequence, each vehicle finds the speed profile with minimal fuel consumption by an exhaustive search. The simulation results reveal that, under different traffic volumes, the proposed algorithm significantly reduces both travel delays and fuel consumption compared with previously published methods. PMID:27999333
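
    The sequencing problem of the second step can be illustrated compactly. The sketch below is a simplified single-intersection model with invented arrival times and headway, not the paper's formulation: it enumerates subsets of vehicles with dynamic programming, keeping Pareto-optimal (total delay, last entry time) states, and returns the minimum total delay.

      # Vehicles must enter the conflict zone at least H seconds apart.
      arrivals = [0.0, 1.0, 1.5, 4.0, 4.2]        # arrival times at the stop line, s
      H = 2.0                                      # safety headway, s
      n = len(arrivals)

      # states[mask] = non-dominated (total_delay, last_entry_time) pairs
      states = {0: [(0.0, float("-inf"))]}
      for mask in range(1, 1 << n):
          candidates = []
          for i in range(n):
              if not mask & (1 << i):
                  continue
              for delay, t_last in states[mask ^ (1 << i)]:
                  entry = max(arrivals[i], t_last + H)
                  candidates.append((delay + entry - arrivals[i], entry))
          candidates.sort()
          # keep a state only if no cheaper state finishes at least as early
          states[mask] = [p for k, p in enumerate(candidates)
                          if all(p[1] < q[1] for q in candidates[:k])]

      print(f"minimum total delay: {min(states[(1 << n) - 1])[0]:.1f} s")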

  10. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    PubMed

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights in a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method that adjusts the weighting factors automatically was investigated to remove the need for manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) A swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) A plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's best location and the population's global best location for the PSO are calculated from these results. (iii) The weighting factors are updated based on the particle's best and global best locations. Steps (ii) and (iii) are performed alternately until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose-volume histograms. Furthermore, a perturbation strategy, a hybrid of crossover and mutation operators, is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all cases without human planner intervention. A comparison with optimized solutions obtained using a similar optimization model but with human planner intervention revealed that the proposed algorithm produced plans superior to those developed manually. The proposed algorithm can generate admissible solutions within reasonable computational times and can be used to develop fully automated IMRT treatment planning methods, thus reducing human planners' workloads during iterative processes. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
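
    A bare-bones particle swarm loop captures the structure of steps (i)-(iii). In the sketch below, each particle is a weight vector, and a cheap quadratic stand-in replaces the expensive plan-optimization-plus-evaluation of step (ii); the swarm size, coefficients, and target weights are arbitrary choices, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(1)
      n_particles, dim, iters = 20, 3, 50
      w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social

      def fitness(weights):
          # Stand-in for running the plan optimizer and scoring the DVH.
          return np.sum((weights - np.array([0.2, 0.5, 0.3])) ** 2)

      x = rng.uniform(0, 1, (n_particles, dim))    # particle positions (weights)
      v = np.zeros_like(x)
      pbest = x.copy()
      pbest_f = np.array([fitness(p) for p in x])
      gbest = pbest[np.argmin(pbest_f)]

      for _ in range(iters):
          r1, r2 = rng.uniform(size=(2, n_particles, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, 0.0, 1.0)
          f = np.array([fitness(p) for p in x])
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]
          gbest = pbest[np.argmin(pbest_f)]

      print(f"best weights found: {np.round(gbest, 3)}")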

  11. Galerkin v. discrete-optimal projection in nonlinear model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin Thomas; Barone, Matthew Franklin; Antil, Harbir

    Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.

  12. Optimization of contoured hypersonic scramjet inlets with a least-squares parabolized Navier-Stokes procedure

    NASA Technical Reports Server (NTRS)

    Korte, J. J.; Auslender, A. H.

    1993-01-01

    A new optimization procedure, in which a parabolized Navier-Stokes solver is coupled with a nonlinear least-squares optimization algorithm, is applied to the design of a Mach 14, laminar, two-dimensional hypersonic subscale flight inlet with an internal contraction ratio of 15:1 and a length-to-throat-half-height ratio of 150:1. An automated numerical search over multiple geometric wall contours, defined by polynomial splines, yields the optimal geometry that maximizes total-pressure recovery for the compression process. Optimal inlet geometry is obtained for both inviscid and viscous flows, with the gas assumed to be either calorically or thermally perfect. The analysis with a calorically perfect gas results in an optimized inviscid inlet design, defined by two cubic splines, that yields a mass-weighted total-pressure recovery of 0.787, a 23% improvement over the optimized shock-canceled two-ramp inlet design. Similarly, the design procedure obtains an optimized contour for a viscous, calorically perfect gas yielding a mass-weighted total-pressure recovery of 0.749, and for a viscous, thermally perfect gas yielding 0.768. The design methodology incorporates both complex fluid-dynamic physics and optimal search techniques without excessive compromise of computational speed; hence, it is a practical technique applicable to optimal inlet design procedures.

  13. Towards Protein Crystallization as a Process Step in Downstream Processing of Therapeutic Antibodies: Screening and Optimization at Microbatch Scale

    PubMed Central

    Zang, Yuguo; Kammerer, Bernd; Eisenkolb, Maike; Lohr, Katrin; Kiefer, Hans

    2011-01-01

    Crystallization conditions of an intact monoclonal IgG4 (immunoglobulin G, subclass 4) antibody were established in vapor diffusion mode by sparse matrix screening and subsequent optimization. The procedure was transferred to microbatch conditions and a phase diagram was built showing surprisingly low solubility of the antibody at equilibrium. With up-scaling to process scale in mind, purification efficiency of the crystallization step was investigated. Added model protein contaminants were excluded from the crystals to more than 95%. No measurable loss of Fc-binding activity was observed in the crystallized and redissolved antibody. Conditions could be adapted to crystallize the antibody directly from concentrated and diafiltrated cell culture supernatant, showing purification efficiency similar to that of Protein A chromatography. We conclude that crystallization has the potential to be included in downstream processing as a low-cost purification or formulation step. PMID:21966480

  14. A heterogeneous Pd-Bi/C catalyst in the synthesis of L-lyxose and L-ribose from naturally occurring D-sugars.

    PubMed

    Fan, Ao; Jaenicke, Stephan; Chuah, Gaik-Khuan

    2011-10-26

    A critical step in the synthesis of the rare sugars L-lyxose and L-ribose from the corresponding D-sugars is the oxidation to the lactone. Instead of conventional oxidizing agents such as bromine or pyridinium dichromate, it was found that a heterogeneous catalyst, Pd-Bi/C, could be used for the direct oxidation with molecular oxygen. The composition of the catalyst was optimized, and the best results were obtained with a 5:1 atomic ratio of Pd to Bi. The overall yields of the five-step procedures to L-ribose and L-lyxose were 47% and 50%, respectively. The synthetic procedure is advantageous in terms of overall yield, reduced number of steps, and mild reaction conditions. Furthermore, the heterogeneous oxidation catalyst can easily be separated from the reaction mixture and reused with no loss of activity.

  15. A multiple-objective optimal exploration strategy

    USGS Publications Warehouse

    Christakos, G.; Olea, R.A.

    1988-01-01

    Exploration for natural resources is accomplished through partial sampling of extensive domains, and such imperfect knowledge is subject to sampling error. Complex systems of equations resulting from modelling based on the theory of correlated random fields are reduced to simple analytical expressions providing global indices of estimation variance. The indices are used by multiple-objective decision criteria to find the best sampling strategies. The approach is not limited by the geometric nature of the sampling, covers a wide range of spatial continuity, and leads to a step-by-step procedure. © 1988.

  16. Ionic liquid-salt aqueous two-phase extraction based on salting-out coupled with high-performance liquid chromatography for the determination of sulfonamides in water and food.

    PubMed

    Han, Juan; Wang, Yun; Liu, Yan; Li, Yanfang; Lu, Yang; Yan, Yongsheng; Ni, Liang

    2013-02-01

    Ionic liquid-salt aqueous two-phase extraction coupled with high-performance liquid chromatography with ultraviolet detection was developed for the determination of sulfonamides in water and food samples. In this procedure, the analytes are extracted from the aqueous samples into the ionic liquid top phase in one step. Three sulfonamides, sulfamerazine, sulfamethoxazole, and sulfamethizole, were selected as model compounds for developing and evaluating the method. The effects of various experimental parameters in the extraction step were studied using two optimization methods: one-variable-at-a-time and Box-Behnken design. The results showed that the amount of sulfonamides had no effect on the extraction efficiency. Therefore, a three-level, three-factor Box-Behnken experimental design combined with response surface modeling was used to optimize the sulfonamide extraction. Under the most favorable extraction parameters, the detection limits (S/N = 3) and quantification limits (S/N = 10) of the proposed method were within the ranges of 0.15-0.3 ng/mL and 0.5-1.0 ng/mL in spiked samples, respectively, which are lower than or comparable to other reported approaches for the same compounds. Finally, the proposed method was successfully applied to the determination of sulfonamide compounds in different water and food samples, and satisfactory recoveries of spiked target compounds in real samples were obtained.
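
    The design-and-fit machinery of this second optimization step is simple to reproduce. The sketch below builds the three-factor Box-Behnken design (twelve edge midpoints plus center points, coded -1/0/+1) and fits a full quadratic response surface by least squares; the responses are synthetic, standing in for measured extraction recoveries.

      import numpy as np
      from itertools import combinations

      # Box-Behnken design for 3 factors: +/-1 on each factor pair, 0 elsewhere
      runs = []
      for i, j in combinations(range(3), 2):
          for a in (-1, 1):
              for b in (-1, 1):
                  row = [0, 0, 0]
                  row[i], row[j] = a, b
                  runs.append(row)
      runs += [[0, 0, 0]] * 3                        # center points
      X = np.array(runs, dtype=float)

      y = 10 - (X ** 2).sum(axis=1) + 0.5 * X[:, 0]  # synthetic response surface

      # Full quadratic model: intercept, x_i, x_i^2, and x_i*x_j terms
      cols = [np.ones(len(X))] + [X[:, i] for i in range(3)] \
           + [X[:, i] ** 2 for i in range(3)] \
           + [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
      beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
      print(f"fitted coefficients: {np.round(beta, 2)}")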

  17. Optimized synthesis of phosphorothioate oligodeoxyribonucleotides substituted with a 5′-protected thiol function and a 3′-amino group

    PubMed Central

    Aubert, Yves; Bourgerie, Sylvain; Meunier, Laurent; Mayer, Roger; Roche, Annie-Claude; Monsigny, Michel; Thuong, Nguyen T.; Asseline, Ulysse

    2000-01-01

    A new deprotection procedure enables a medium scale preparation of phosphodiester and phosphorothioate oligonucleotides substituted with a protected thiol function at their 5′-ends and an amino group at their 3′-ends in good yield (up to 72 OD units/µmol for a 19mer phosphorothioate). Syntheses of 3′-amino-substituted oligonucleotides were carried out on a modified support. A linker containing the thioacetyl moiety was manually coupled in two steps by first adding its phosphoramidite derivative in the presence of tetrazole followed by either oxidation or sulfurization to afford the bis-derivatized oligonucleotide bound to the support. Deprotection was achieved by treating the fully protected oligonucleotide with a mixture of 2,2′-dithiodipyridine and concentrated aqueous ammonia in the presence of phenol and methanol. This procedure enables (i) cleavage of the oligonucleotide from the support, releasing the oligonucleotide with a free amino group at its 3′-end, (ii) deprotection of the phosphate groups and the amino functions of the nucleic bases, as well as (iii) transformation of the 5′-terminal S-acetyl function into a dithiopyridyl group. The bis-derivatized phosphorothioate oligomer was further substituted through a two-step procedure: first, the 3′-amino group was reacted with fluorescein isothiocyanate to yield a fluoresceinylated oligonucleotide; the 5′-dithiopyridyl group was then quantitatively reduced to give a free thiol group which was then substituted by reaction with an Nα-bromoacetyl derivative of a signal peptide containing a KDEL sequence to afford a fluoresceinylated peptide–oligonucleotide conjugate. PMID:10637335

  18. User's manual: Computer-aided design programs for inductor-energy-storage dc-to-dc electronic power converters

    NASA Technical Reports Server (NTRS)

    Huffman, S.

    1977-01-01

    Detailed instructions are provided on the use of two computer-aided design programs for designing the energy-storage inductor of single-winding and two-winding dc-to-dc converters. Step-by-step procedures illustrate the formatting of user input data. The procedures are illustrated by eight sample design problems that include the user input and the computer program output.

  19. The importance of a two-step impression procedure for complete denture fabrication: a systematic review of the literature.

    PubMed

    Regis, R R; Alves, C C S; Rocha, S S M; Negreiros, W A; Freitas-Pontes, K M

    2016-10-01

    The literature has questioned the real need for some clinical and laboratory procedures considered essential for achieving better results in complete denture fabrication. The aim of this study was to review the current literature concerning the relevance of a two-step impression procedure for achieving better clinical results in fabricating conventional complete dentures. Through an electronic search of the PubMed/MEDLINE database, randomised controlled clinical trials comparing complete dentures fabricated for adults with one-step versus two-step impression procedures were identified. The selections were made by three independent reviewers. Among the 540 titles initially identified, four studies (seven published papers), reporting on 257 patients and evaluating aspects such as oral health-related quality of life, patient satisfaction with the dentures in use, masticatory performance and chewing ability, denture quality, and direct and indirect costs, were considered eligible. The quality of the included studies was assessed according to the Cochrane guidelines. The clinical studies considered in this review suggest that a two-step impression procedure may not be mandatory for the success of conventional complete denture fabrication with regard to a variety of clinical aspects of denture quality and patients' perceptions of the treatment. © 2016 John Wiley & Sons Ltd.

  20. Display/control requirements for automated VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Kleinman, D. L.; Young, L. R.

    1976-01-01

    A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configurations evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined the system performance of six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.

  1. Optimization of Interior Permanent Magnet Motor by Quality Engineering and Multivariate Analysis

    NASA Astrophysics Data System (ADS)

    Okada, Yukihiro; Kawase, Yoshihiro

    This paper describes an optimization method based on the finite element method, using quality engineering and multivariate analysis as the optimization techniques. The method consists of two steps: in Step 1, the influence of each design parameter on the output is quantified; in Step 2, the number of FEM calculations is reduced accordingly. In this way, the optimal combination of design parameters satisfying the required characteristics can be searched for efficiently. The method is applied to the design of an IPM motor to reduce torque ripple. The final shape maintains the average torque while reducing the torque ripple by 65%. Furthermore, the amount of permanent magnet material can be reduced.
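
    Step 1 of such a method is essentially a designed experiment plus a main-effects analysis. The sketch below runs the standard L8 orthogonal array with a toy response in place of the FEM torque evaluations; the array is the textbook one, while the response model and effect sizes are invented.

      import numpy as np

      # Standard L8 orthogonal array: 8 runs, up to 7 two-level factors (0/1)
      L8 = np.array([[0,0,0,0,0,0,0],
                     [0,0,0,1,1,1,1],
                     [0,1,1,0,0,1,1],
                     [0,1,1,1,1,0,0],
                     [1,0,1,0,1,0,1],
                     [1,0,1,1,0,1,0],
                     [1,1,0,0,1,1,0],
                     [1,1,0,1,0,0,1]])

      rng = np.random.default_rng(0)
      # Toy response standing in for FEM-computed torque ripple
      response = 5 + 2.0 * L8[:, 0] - 1.2 * L8[:, 2] + rng.normal(0, 0.1, 8)

      for col in range(3):          # inspect the first three design parameters
          effect = (response[L8[:, col] == 1].mean()
                    - response[L8[:, col] == 0].mean())
          print(f"parameter {col}: main effect = {effect:+.2f}")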

  2. Vitamin B12 production from crude glycerol by Propionibacterium freudenreichii ssp. shermanii: optimization of medium composition through statistical experimental designs.

    PubMed

    Kośmider, Alicja; Białas, Wojciech; Kubiak, Piotr; Drożdżyńska, Agnieszka; Czaczyk, Katarzyna

    2012-02-01

    A two-step statistical experimental design was employed to optimize the medium for vitamin B12 production from crude glycerol by Propionibacterium freudenreichii ssp. shermanii. In the first step, using a Plackett-Burman design, five of 13 tested medium components (calcium pantothenate, NaH2PO4·2H2O, casein hydrolysate, glycerol, and FeSO4·7H2O) were identified as factors having a significant influence on vitamin production. In the second step, a central composite design was used to optimize the levels of the medium components selected in the first step. Valid statistical models describing the influence of the significant factors on vitamin B12 production were established for each optimization phase. The optimized medium provided a 93% increase in final vitamin concentration compared to the original medium. Copyright © 2011 Elsevier Ltd. All rights reserved.
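
    The screening step can be sketched with a standard construction. Below, a 16-run Hadamard matrix yields a two-level design that can screen up to 15 factors (13 are used here, matching the study's count); the responses are synthetic stand-ins for measured vitamin B12 titers, and the effect ranking mimics a Plackett-Burman significance screen.

      import numpy as np
      from scipy.linalg import hadamard

      H = hadamard(16)                 # 16 x 16 matrix of +1/-1 entries
      design = H[:, 1:14]              # drop the all-ones column; 13 factors
      rng = np.random.default_rng(0)
      # Synthetic titers: factors 0 and 4 are truly influential here
      titer = 10 + 1.5 * design[:, 0] - 0.8 * design[:, 4] + rng.normal(0, 0.2, 16)

      effects = design.T @ titer / 8   # difference between +1 and -1 level means
      ranking = np.argsort(-np.abs(effects))
      print("factors ranked by influence:", ranking[:5])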

  3. Spin density and orbital optimization in open shell systems: A rational and computationally efficient proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giner, Emmanuel, E-mail: gnrmnl@unife.it; Angeli, Celestino, E-mail: anc@unife.it

    2016-03-14

    The present work describes a new method to compute accurate spin densities for open shell systems. The proposed approach follows two steps: first, it provides molecular orbitals which correctly take into account the spin delocalization; second, a proper CI treatment allows one to account for the spin polarization effect while keeping a restricted formalism and avoiding spin contamination. The main idea of the optimization procedure is based on the orbital relaxation of the various charge transfer determinants responsible for the spin delocalization. The algorithm is tested and compared to other existing methods on a series of organic and inorganic open shell systems. The results reported here show that the new approach (almost black-box) provides accurate spin densities at a reasonable computational cost, making it suitable for a systematic study of open shell systems.

  4. Requirements analysis and preliminary design of a robotic assistant for reconstructive microsurgery.

    PubMed

    Vanthournhout, L; Herman, B; Duisit, J; Château, F; Szewczyk, J; Lengelé, B; Raucent, B

    2015-08-01

    Microanastomosis is a microsurgical gesture that involves suturing two very small blood vessels together. It is used in many operations, such as autografting of avulsed limbs, pediatric surgery, and reconstructive surgery, including breast reconstruction by free flap. When vessels have diameters smaller than one millimeter, hand tremor makes the movements difficult to control. This paper introduces our preliminary steps toward robotic assistance that helps surgeons perform microanastomosis under optimal conditions, in order to increase gesture quality and reliability even on smaller diameters. A general needs assessment and an experimental motion analysis were performed to define the requirements of the robot. Geometric parameters of the kinematic structure were then optimized to fulfill specific objectives. A prototype of the robot is currently being designed and built in order to provide a sufficient increase in accuracy without prolonging the duration of the procedure.

  5. A comprehensive and efficient process for counseling patients desiring sterilization.

    PubMed

    Haws, J M; Butta, P G; Girvin, S

    1997-06-01

    To optimize the time spent counseling a sterilization patient, this article presents a 10-step process that includes all steps necessary to ensure a comprehensive counseling session: (1) Discuss current contraception use and all available methods; (2) assess the client's interest in/readiness for sterilization; (3) emphasize that the procedure is meant to be permanent, but there is a possibility of failure; (4) explain the surgical procedure using visuals, and include a discussion of benefits and risks; (5) explain privately to the client the need to use condoms if engaging in risky sexual activity; (6) have the client read and sign an informed consent form; (7) schedule an appointment for the procedure and provide the patient with a copy of all necessary paperwork; (8) discuss cost and payment method; (9) provide written preoperative and postoperative instructions; and (10) schedule a postoperation visit, or a postoperation semen analysis.

  6. Optimal Elevation and Configuration of Hanford's Double-Shell Tank Waste Mixer Pumps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onishi, Yasuo; Yokuda, Satoru T.; Majumder, Catherine A.

    The objective of this study was to compare the mixing performance of the Lawrence pump, which has injection nozzles at the top, with an alternative pump that has injection nozzles at the bottom, and to determine the optimal elevation for the alternative pump. Sixteen cases were evaluated: two sludge thicknesses at eight levels. A two-step evaluation approach was used: Step 1 to evaluate all 16 cases with the non-rotating mixer pump model and Step 2 to further evaluate four of those cases with the more realistic rotating mixer pump model. The TEMPEST code was used.

  7. Alternative Procedure of Heat Integration Technique Selection between Two Unit Processes to Improve Energy Saving

    NASA Astrophysics Data System (ADS)

    Santi, S. S.; Renanto; Altway, A.

    2018-01-01

    The energy-use system of a production process, in this case its heat exchanger networks (HENs), plays a key role in the smoothness and sustainability of the industry itself. Optimizing the heat exchanger networks built from the process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration is an important requirement. In a plant, heat integration can be carried out internally or in combination between process units; however, determining a suitable heat integration technique conventionally requires long calculations and much time. In this paper, we propose an alternative procedure for selecting the heat integration technique by investigating six hypothetical units using a pinch analysis approach, with energy targets and total annual cost targets as the objective functions. The six hypothetical units, A through F, differ in the location of their process streams relative to the pinch temperature. The result is a heat integration potential (ΔH') formula that trims the conventional procedure from seven steps to three; the preferred heat integration technique is then determined by calculating the heat integration potential (ΔH') between the hypothetical process units. The calculations were implemented in the MATLAB programming language.
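
    The energy targets that anchor such comparisons come from the standard problem table algorithm of pinch analysis, sketched below for an invented four-stream example with ΔTmin = 10 K; the paper's ΔH' shortcut formula itself is not reproduced here.

      # (supply T in C, target T in C, heat capacity flowrate CP in kW/K)
      streams = [(180, 60, 3.0),    # hot stream
                 (150, 30, 1.5),    # hot stream
                 (20, 135, 2.0),    # cold stream
                 (80, 140, 4.0)]    # cold stream
      dTmin = 10.0

      # Shift hot streams down and cold streams up by dTmin/2
      shifted = [(ts - dTmin / 2, tt - dTmin / 2, cp) if ts > tt
                 else (ts + dTmin / 2, tt + dTmin / 2, cp)
                 for ts, tt, cp in streams]
      bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)

      # Cascade the net heat surplus down the temperature intervals
      cascade = [0.0]
      for hi, lo in zip(bounds, bounds[1:]):
          net_cp = sum(cp if ts > tt else -cp       # hot streams release heat
                       for ts, tt, cp in shifted
                       if min(ts, tt) <= lo and max(ts, tt) >= hi)
          cascade.append(cascade[-1] + net_cp * (hi - lo))

      hot_utility = max(0.0, -min(cascade))         # lift the cascade to feasibility
      print(f"minimum hot utility:  {hot_utility:.1f} kW")
      print(f"minimum cold utility: {cascade[-1] + hot_utility:.1f} kW")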

  8. Using Green Star Metrics to Optimize the Greenness of Literature Protocols for Syntheses

    ERIC Educational Resources Information Center

    Duarte, Rita C. C.; Ribeiro, M. Gabriela T. C.; Machado, Adélio A. S. C.

    2015-01-01

    A procedure to improve the greenness of a synthesis, without performing laboratory work, using alternative protocols available in the literature is presented. The greenness evaluation involves the separate assessment of the different steps described in the available protocols--reaction, isolation, and purification--as well as the global process,…

  9. Implementation and Development of the Incremental Hole Drilling Method for the Measurement of Residual Stress in Thermal Spray Coatings

    NASA Astrophysics Data System (ADS)

    Valente, T.; Bartuli, C.; Sebastiani, M.; Loreto, A.

    2005-12-01

    The experimental measurement of the residual stresses that arise within thick coatings deposited on solid substrates by thermal spray plays a fundamentally important role in the preliminary stages of coating design and process parameter optimization. The hole-drilling method is a versatile and widely used technique for the experimental determination of residual stress in the most superficial layers of a solid body. The consolidated procedure, however, can only be applied to bulk metallic materials or to homogeneous, linear elastic, isotropic materials. The main objective of the present investigation was to adapt the experimental method to the measurement of the stress fields built up in ceramic coating/metallic bond layer structures manufactured by plasma spray deposition. A finite element calculation procedure was implemented to identify the calibration coefficients needed to account for the elastic modulus discontinuities that characterize the layered structure through its thickness. Experimental adjustments were then proposed to overcome problems related to the low thermal conductivity of the coatings. The numbers of calculation steps and experimental drilling steps were finally optimized.

  10. Preparation of alpha-emitting nuclides by electrodeposition

    NASA Astrophysics Data System (ADS)

    Lee, M. H.; Lee, C. W.

    2000-06-01

    A method is described for electrodepositing alpha-emitting nuclides. To determine the optimum conditions for plating plutonium, the effects of electrolyte concentration, chelating reagent, current, electrolyte pH, and plating time on the electrodeposition were investigated, using an ammonium oxalate-ammonium sulfate electrolyte containing diethylenetriaminepentaacetic acid. The optimized electrodeposition procedure for the determination of plutonium was validated by application to environmental samples; its chemical yield in the electrodeposition step was slightly higher than that of Talvitie's method. The procedure developed in this study was then applied to the determination of radionuclides such as thorium, uranium, and americium, for which the electrodeposition yields were also slightly higher than those of the conventional method.

  11. Approach for ochratoxin A fast screening in spices using clean-up tandem immunoassay columns with confirmation by high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS).

    PubMed

    Goryacheva, I Yu; De Saeger, S; Lobeau, M; Eremin, S A; Barna-Vetró, I; Van Peteghem, C

    2006-09-01

    A fast, cost-effective screening approach for ochratoxin A (OTA), based on clean-up tandem immunoassay columns, was developed and optimized for OTA detection with a cut-off level of 10 µg kg(-1) in spices. Two procedures were tested and applied for OTA detection. A column with a bottom detection immunolayer was optimized for OTA determination in Capsicum spp. spices. A modified clean-up tandem immunoassay procedure with a top detection immunolayer was successfully applied to all tested spices; its main advantages were a reduced number of analysis steps, a lower antibody requirement, and minimized matrix effects. The total duration of extraction and analysis was about 40 min for six samples. Chilli, red pepper, pili-pili, cayenne, paprika, nutmeg, ginger, white pepper, and black pepper samples were analyzed for OTA contamination by the proposed clean-up tandem immunoassay procedures, and the results were confirmed by HPLC-MS/MS with immunoaffinity column clean-up. Among the 17 tested Capsicum spp. spices, 6 samples (35%) contained OTA at concentrations exceeding the 10 µg kg(-1) limit discussed by the European Commission. None of the tested nutmeg (n = 8), ginger (n = 5), white pepper (n = 7), or black pepper (n = 6) samples contained OTA above this action level.

  12. Optimization for minimum sensitivity to uncertain parameters

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw

    1994-01-01

    A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful based on comparisons of the optimization results with parametric studies.
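
    The nested structure can be illustrated with a toy problem: an inner optimization returns the optimum for fixed parameters, and an outer optimization minimizes an estimate of the optimum's sensitivity to those parameters. This is only a sketch of the idea; the paper obtains the sensitivity derivatives analytically from the SQP Lagrange multipliers rather than by finite differences, and the toy objective below is not the beam model.

      # Nested sensitivity minimization on a toy problem (illustrative only).
      import numpy as np
      from scipy.optimize import minimize

      def inner_optimum(p):
          """Inner loop: optimal design and objective f*(p) for fixed parameter p."""
          obj = lambda x: (x[0] - p) ** 2 + 0.1 * x[0] ** 4
          res = minimize(obj, x0=[0.0])
          return res.x[0], res.fun

      def optimum_sensitivity(p, h=1e-4):
          """Outer objective: finite-difference estimate of |d f*(p) / dp|."""
          _, f_plus = inner_optimum(p + h)
          _, f_minus = inner_optimum(p - h)
          return abs((f_plus - f_minus) / (2.0 * h))

      # Outer loop: choose the parameter at which the optimum is least sensitive.
      outer = minimize(lambda p: optimum_sensitivity(p[0]), x0=[1.0],
                       method="Nelder-Mead")
      print("least-sensitive parameter:", outer.x[0])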

  13. Solving the infeasible trust-region problem using approximations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs contain noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations in both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. Experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region with explicit constraints, however, may have no feasible solution. To remedy this problem, the mathematical community has developed different versions of a composite-step approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential two-step algorithms. In this paper, a description of the similarities is presented and an extension of the two-step algorithm to the case of approximations is given.
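
    A minimal numerical illustration of the composite-step decomposition described above, assuming the objective gradient, the constraint Jacobian, and the constraint values at the current iterate are available (a generic sketch, not the IPTRSAO algorithm):

      # One composite trust-region step: a normal step toward constraint
      # feasibility, then a tangential step that improves the objective without
      # changing the linearized constraints (illustrative sketch).
      import numpy as np

      def composite_step(grad_f, jac_c, c_val, delta):
          # Normal step: Gauss-Newton step toward c(x) = 0, clipped to 80% of
          # the trust-region radius.
          n = -np.linalg.pinv(jac_c) @ c_val
          if np.linalg.norm(n) > 0.8 * delta:
              n *= 0.8 * delta / np.linalg.norm(n)
          # Tangential step: steepest descent projected onto the null space of
          # jac_c, scaled to the radius left over after the normal step.
          P = np.eye(len(grad_f)) - np.linalg.pinv(jac_c) @ jac_c
          t = -P @ grad_f
          budget = np.sqrt(max(delta ** 2 - n @ n, 0.0))
          if np.linalg.norm(t) > 0:
              t *= budget / np.linalg.norm(t)
          return n + t

    The combined step n + t would then be accepted or rejected by the usual trust-region ratio test, with the radius updated accordingly.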

  14. Procedural key steps in laparoscopic colorectal surgery, consensus through Delphi methodology.

    PubMed

    Dijkstra, Frederieke A; Bosker, Robbert J I; Veeger, Nicolaas J G M; van Det, Marc J; Pierie, Jean Pierre E N

    2015-09-01

    While several procedural training curricula in laparoscopic colorectal surgery have been validated and published, none have focused on dividing surgical procedures into well-identified segments, which can be trained and assessed separately. This enables the surgeon and resident to focus on a specific segment, or combination of segments, of a procedure. Furthermore, it will provide a consistent and uniform method of training for residents rotating through different teaching hospitals. The goal of this study was to determine consensus on the key steps of laparoscopic right hemicolectomy and laparoscopic sigmoid colectomy among experts in our University Medical Center and affiliated hospitals. This will form the basis for the INVEST video-assisted side-by-side training curriculum. The Delphi method was used for determining consensus on key steps of both procedures. A list of 31 steps for laparoscopic right hemicolectomy and 37 steps for laparoscopic sigmoid colectomy was compiled from textbooks and national and international guidelines. In an online questionnaire, 22 experts in 12 hospitals within our teaching region were invited to rate all steps on a Likert scale on importance for the procedure. Consensus was reached in two rounds. Sixteen experts agreed to participate. Of these 16 experts, 14 (88%) completed the questionnaire for both procedures. Of the 14 who completed the first round, 13 (93%) completed the second round. Cronbach's alpha was 0.79 for the right hemicolectomy and 0.91 for the sigmoid colectomy, showing high internal consistency between the experts. For the right hemicolectomy, 25 key steps were established; for the sigmoid colectomy, 24 key steps were established. Expert consensus on the key steps for laparoscopic right hemicolectomy and laparoscopic sigmoid colectomy was reached. These key steps will form the basis for a video-assisted teaching curriculum.
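
    For reference, the internal-consistency statistic quoted above can be computed directly from the Likert ratings; a minimal sketch, assuming a data layout of one row per rated procedural step and one column per expert:

      # Cronbach's alpha for inter-expert consistency (illustrative layout:
      # rows = rated steps, columns = experts).
      import numpy as np

      def cronbach_alpha(scores):
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]                      # number of experts ("items")
          item_vars = scores.var(axis=0, ddof=1)   # variance of each expert's ratings
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1.0) * (1.0 - item_vars.sum() / total_var)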

  15. An efficient microwave-assisted synthesis method for the production of water soluble amine-terminated Si nanoparticles.

    PubMed

    Atkins, Tonya M; Louie, Angelique Y; Kauzlarich, Susan M

    2012-07-27

    Silicon nanoparticles can be considered a green material, especially when prepared via a microwave-assisted method without the use of highly reactive reducing agents or hydrofluoric acid. A simple solution synthesis of hydrogen-terminated Si- and Mn-doped Si nanoparticles via microwave-assisted synthesis is demonstrated. The reaction of the Zintl salt, Na(4)Si(4), or Mn-doped Na(4)Si(4), Na(4)Si(4(Mn)), with ammonium bromide, NH(4)Br, produces small dispersible nanoparticles along with larger particles that precipitate. Allylamine and 1-amino-10-undecene were reacted with the hydrogen-terminated Si nanoparticles to provide water solubility and stability. A one-pot, single-reaction process and a one-pot, two-step reaction process were investigated. Details of the microwave-assisted process are provided, with the optimal synthesis being the one-pot, two-step reaction procedure and a total time of about 15 min. The nanoparticles were characterized by transmission electron microscopy (TEM), x-ray diffraction, and fluorescence spectroscopies. The microwave-assisted method reliably produces a narrow size distribution of Si nanoparticles in solution.

  16. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure development to follow a similar approach. While the development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, from which a control strategy can be set.

  17. An optimized and validated SPE-LC-MS/MS method for the determination of caffeine and paraxanthine in hair.

    PubMed

    De Kesel, Pieter M M; Lambert, Willy E; Stove, Christophe P

    2015-11-01

    Caffeine is the probe drug of choice to assess the phenotype of the drug metabolizing enzyme CYP1A2. Typically, molar concentration ratios of paraxanthine, caffeine's major metabolite, to its precursor are determined in plasma following administration of a caffeine test dose. The aim of this study was to develop and validate an LC-MS/MS method for the determination of caffeine and paraxanthine in hair. The different steps of a hair extraction procedure were thoroughly optimized. Following a three-step decontamination procedure, caffeine and paraxanthine were extracted from 20 mg of ground hair using a solution of protease type VIII in Tris buffer (pH 7.5). Resulting hair extracts were cleaned up on Strata-X™ SPE cartridges. All samples were analyzed on a Waters Acquity UPLC® system coupled to an AB SCIEX API 4000™ triple quadrupole mass spectrometer. The final method was fully validated based on international guidelines. Linear calibration lines for caffeine and paraxanthine ranged from 20 to 500 pg/mg. Precision (%RSD) and accuracy (%bias) were below 12% and 7%, respectively. The isotopically labeled internal standards compensated for the ion suppression observed for both compounds. Relative matrix effects were below 15%RSD. The recovery of the sample preparation procedure was high (>85%) and reproducible. Caffeine and paraxanthine were stable in hair for at least 644 days. The effect of the hair decontamination procedure was evaluated as well. Finally, the applicability of the developed procedure was demonstrated by determining caffeine and paraxanthine concentrations in hair samples of ten healthy volunteers. The optimized and validated method for determination of caffeine and paraxanthine in hair proved to be reliable and may serve to evaluate the potential of hair analysis for CYP1A2 phenotyping. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. 48 CFR 6.102 - Use of competitive procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ACQUISITION PLANNING COMPETITION REQUIREMENTS Full and Open Competition 6.102 Use of competitive procedures. The competitive procedures available for use in fulfilling the requirement for full and open... procedures (e.g., two-step sealed bidding). (d) Other competitive procedures. (1) Selection of sources for...

  19. Utilization of optimized BCR three-step sequential and dilute HCl single extraction procedures for soil-plant metal transfer predictions in contaminated lands.

    PubMed

    Kubová, Jana; Matús, Peter; Bujdos, Marek; Hagarová, Ingrid; Medved', Ján

    2008-05-30

    The prediction of soil metal phytoavailability using chemical extractions is a conventional approach routinely used in soil testing. The adequacy of such soil tests for this purpose is commonly assessed through a comparison of extraction results with metal contents in relevant plants. In this work, the fractions of selected risk metals (Al, As, Cd, Cu, Fe, Mn, Ni, Pb, Zn) that can be taken up by various plants were obtained by an optimized BCR (Community Bureau of Reference) three-step sequential extraction procedure (SEP) and by single 0.5 mol L(-1) HCl extraction. These procedures were validated using five soil and sediment reference materials (SRM 2710, SRM 2711, CRM 483, CRM 701, SRM RTH 912) and applied to significantly different acidified soils for the fractionation of the studied metals. New indicative values of Al, Cd, Cu, Fe, Mn, P, Pb and Zn fractional concentrations for these reference materials were obtained by the dilute HCl single extraction. The influence of different soil geneses, the content of essential elements (Ca, Mg, K, P) and different anthropogenic sources of acidification on the extraction yields of individual risk metal fractions was investigated. The concentrations of the studied elements were determined by atomic spectrometry methods (flame, graphite furnace and hydride generation atomic absorption spectrometry and inductively coupled plasma optical emission spectrometry). It can be concluded that the extraction yields from the first (acid-extractable) step of the BCR SEP, together with soil-plant transfer coefficients, can be applied to predict the qualitative mobility of the selected risk metals in different soil systems.

  20. Automated modal parameter estimation using correlation analysis and bootstrap sampling

    NASA Astrophysics Data System (ADS)

    Yaghoubi, Vahid; Vakilzadeh, Majid K.; Abrahamsson, Thomas J. S.

    2018-02-01

    The estimation of modal parameters from a set of noisy measured data is a highly judgmental task, with user expertise playing a significant role in distinguishing between estimated physical and noise modes of a test-piece. Various methods have been developed to automate this procedure. The common approach is to identify models of different orders and cluster similar modes together. However, most methods based on this approach suffer from high-dimensional optimization problems in either the estimation or the clustering step. To overcome this problem, this study presents an algorithm for autonomous modal parameter estimation in which the only required optimization is performed in a three-dimensional space. To this end, a subspace-based identification method is employed for the estimation and a non-iterative correlation-based method is used for the clustering. This clustering is at the heart of the paper. The keys to success are correlation metrics that are able to treat the problems of spatial eigenvector aliasing and nonunique eigenvectors of coalescent modes simultaneously. The algorithm commences with the identification of an excessively high-order model from frequency response function test data. The high number of modes of this model provides bases for two subspaces: one for likely physical modes of the tested system and one for its complement, dubbed the subspace of noise modes. By employing the bootstrap resampling technique, several subsets are generated from the same basic dataset, and for each of them a model is identified to form a set of models. Then, by correlation analysis with the two aforementioned subspaces, highly correlated modes of these models which appear repeatedly are clustered together and the noise modes are collected in a so-called Trashbox cluster. Stray noise modes attracted to the mode clusters are trimmed away in a second step by correlation analysis. The final step of the algorithm is a fuzzy c-means clustering procedure applied to a three-dimensional feature space to assign a degree of physicalness to each cluster. The proposed algorithm is applied to two case studies: one with synthetic data and one with real test data obtained from a hammer impact test. The results indicate that the algorithm successfully clusters similar modes and gives a reasonable quantification of the extent to which each cluster is physical.
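
    The paper's specific correlation metrics handle eigenvector aliasing and coalescent modes; as a simpler point of reference, the classical modal assurance criterion (MAC) below is the standard correlation measure for pairing mode shapes, and a plausible building block for this kind of clustering:

      # Modal Assurance Criterion (MAC): correlation between two mode-shape
      # vectors, in [0, 1]; 1 means the shapes are identical up to scaling.
      import numpy as np

      def mac(phi_i, phi_j):
          """MAC between two (possibly complex) mode-shape vectors."""
          num = abs(np.vdot(phi_i, phi_j)) ** 2
          den = np.vdot(phi_i, phi_i).real * np.vdot(phi_j, phi_j).real
          return num / den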

  1. One-pot, two-step desymmetrization of symmetrical benzils catalyzed by the methylsulfinyl (dimsyl) anion.

    PubMed

    Ragno, Daniele; Bortolini, Olga; Giovannini, Pier Paolo; Massi, Alessandro; Pacifico, Salvatore; Zaghi, Anna

    2014-08-14

    An operationally simple one-pot, two-step procedure for the desymmetrization of benzils is herein described. It consists of the chemoselective cross-benzoin reaction of symmetrical benzils with aromatic aldehydes, catalyzed by the methylsulfinyl (dimsyl) anion, followed by microwave-assisted oxidation of the resulting benzoylated benzoins with nitrate, avoiding a costly isolation procedure. Both electron-withdrawing and electron-donating substituents may be accommodated on the aromatic rings of the final unsymmetrical benzil.

  2. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R(m x n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R(m x r) and H ∈ R(r x n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or the Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF; it accelerates MUR by searching for the optimal step size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory, and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. Preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets, ORL and PIE, and two text corpora, Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with representative GNMF solvers.
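
    For context, the baseline MUR iteration that MFGD and L-FGD accelerate can be sketched in a few lines of numpy (squared Euclidean loss, graph Laplacian L = D - A on the data points; initialization and hyperparameters are illustrative):

      # Multiplicative update rules for graph-regularized NMF: minimize
      # ||X - WH||^2 + lam * tr(H L H^T) with L = D - A (minimal sketch).
      import numpy as np

      def gnmf_mur(X, r, A, lam=1.0, iters=200, eps=1e-9):
          m, n = X.shape
          D = np.diag(A.sum(axis=1))
          rng = np.random.default_rng(0)
          W = rng.random((m, r))
          H = rng.random((r, n))
          for _ in range(iters):
              W *= (X @ H.T) / (W @ H @ H.T + eps)
              H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
          return W, H

    The fixed rescaling in these updates is exactly the "non-optimal step size" referred to above; MFGD and L-FGD instead search along the same direction for a better step.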

  3. An online replanning method using warm start optimization and aperture morphing for flattening-filter-free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, Ergun E., E-mail: eahunbay@mcw.edu; Ates,

    Purpose: In a situation where a couch shift for patient positioning is not preferred or is prohibited (e.g., MR-linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening-filter-free (FFF) beams, however, the SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here the authors propose a new two-step process to address both the translational effect of FFF beams and the target deformation. Methods: The replanning method consists of an offline and an online step. The offline step is to create a series of preshifted plans (PSPs) obtained by a so-called “warm start” optimization (starting optimization from the original plan, rather than from scratch) at a series of isocenter shifts. The PSPs all have the same number of segments with very similar shapes, since the warm start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by picking the closest PSP or linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and almost instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. The authors tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects of FFF beams, while SAM corrects for the target deformation. The plan interpolation method is effective in diminishing the unflat-beam effect and may allow reducing the required number of PSPs. The whole process takes the same time as the previously reported SAM process (5–10 min). Conclusions: The new two-step method plus SAM can address both the translational effects of FFF beams and target deformation, and can be executed in full automation except for the delineation of the target contour required by the SAM process.
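
    The online step reduces to a table lookup plus linear interpolation. A one-dimensional sketch of that idea follows; the shift grid, leaf arrays, and monitor units are placeholder structures, not the clinical data model:

      # Pick the two preshifted plans (PSPs) bracketing the measured shift and
      # linearly interpolate MLC leaf positions and monitor units (sketch).
      import numpy as np

      def interpolate_plan(shift, psp_shifts, psp_leaves, psp_mu):
          """psp_shifts: sorted 1D grid of isocenter shifts;
          psp_leaves: (n_shifts, n_leaves) leaf positions; psp_mu: (n_shifts,)."""
          i = int(np.clip(np.searchsorted(psp_shifts, shift),
                          1, len(psp_shifts) - 1))
          w = (shift - psp_shifts[i - 1]) / (psp_shifts[i] - psp_shifts[i - 1])
          leaves = (1 - w) * psp_leaves[i - 1] + w * psp_leaves[i]
          mu = (1 - w) * psp_mu[i - 1] + w * psp_mu[i]
          return leaves, mu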

  4. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution.

    PubMed

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-16

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.
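
    The Richardson-Lucy iteration at the core of both filtering steps is compact; a minimal 2D sketch for a known point spread function follows (float image assumed; the paper's SIM-specific pipeline builds the two filtering steps around such deconvolutions):

      # Richardson-Lucy deconvolution for a known PSF (minimal 2D sketch).
      import numpy as np
      from scipy.signal import fftconvolve

      def richardson_lucy(image, psf, iters=30, eps=1e-12):
          est = np.full(image.shape, image.mean(), dtype=float)
          psf_flip = psf[::-1, ::-1]           # adjoint of the blur operator
          for _ in range(iters):
              blurred = fftconvolve(est, psf, mode="same")
              ratio = image / (blurred + eps)  # multiplicative correction
              est *= fftconvolve(ratio, psf_flip, mode="same")
          return est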

  5. Optimal 2D-SIM reconstruction by two filtering steps with Richardson-Lucy deconvolution

    NASA Astrophysics Data System (ADS)

    Perez, Victor; Chang, Bo-Jui; Stelzer, Ernst Hans Karl

    2016-11-01

    Structured illumination microscopy relies on reconstruction algorithms to yield super-resolution images. Artifacts can arise in the reconstruction and affect the image quality. Current reconstruction methods involve a parametrized apodization function and a Wiener filter. Empirically tuning the parameters in these functions can minimize artifacts, but such an approach is subjective and produces volatile results. We present a robust and objective method that yields optimal results by two straightforward filtering steps with Richardson-Lucy-based deconvolutions. We provide a resource to identify artifacts in 2D-SIM images by analyzing two main reasons for artifacts, out-of-focus background and a fluctuating reconstruction spectrum. We show how the filtering steps improve images of test specimens, microtubules, yeast and mammalian cells.

  6. Functional-to-form mapping for assembly design automation

    NASA Astrophysics Data System (ADS)

    Xu, Z. G.; Liu, W. M.; Shen, W. D.; Yang, D. Y.; Liu, T. T.

    2017-11-01

    Assembly-level function-to-form mapping is the most effective route towards design automation. The research work mainly includes the assembly-level function definitions, the product network model and the two-step mapping mechanism. The function-to-form mapping is divided into two steps: function-to-behavior mapping (the first step) and behavior-to-structure mapping (the second step). After the first mapping step, the three-dimensional transmission chain (or 3D sketch) is studied, and feasible design computing tools are developed. The mapping procedure is relatively easy to implement interactively but quite difficult to complete automatically, so manual, semi-automatic, automatic and interactive modification of the mapping model are studied. A function-to-form mapping process for a mechanical hand is illustrated to verify the design methodology.

  7. Biomass-to-electricity: analysis and optimization of the complete pathway steam explosion--enzymatic hydrolysis--anaerobic digestion with ICE vs SOFC as biogas users.

    PubMed

    Santarelli, M; Barra, S; Sagnelli, F; Zitella, P

    2012-11-01

    The paper deals with the energy analysis and optimization of a complete biomass-to-electricity pathway, from raw biomass to the production of renewable electricity. The first step (biomass-to-biogas) is based on a real pilot plant located in Environment Park S.p.A. (Torino, Italy) with three main stages ((1) impregnation; (2) steam explosion; (3) enzymatic hydrolysis), completed by a two-step anaerobic fermentation. For the second step (biogas-to-electricity), the paper considers two technologies: internal combustion engines (ICE) and a stack of solid oxide fuel cells (SOFC). First, the complete pathway was modeled and validated against experimental data. The model was then used for an analysis and optimization of the complete thermo-chemical and biological process, with the objective of maximizing the energy balance at minimum consumption. The comparison between ICE and SOFC shows better performance for the integrated plants based on SOFC. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting that aims to provide highly accurate short-term load forecasts with high resolution, utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. An SVR model is then trained on the load data to forecast the future load. For better SVR performance, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameter search from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
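
    The two-step search can be sketched as a coarse grid traverse over log-scaled SVR parameters followed by a local refinement started from the best grid point; Nelder-Mead stands in below for the PSO step, and the cross-validation scoring is illustrative:

      # Two-step SVR hyperparameter search: coarse grid, then local refinement
      # (Nelder-Mead here as a stand-in for the paper's PSO step).
      import numpy as np
      from scipy.optimize import minimize
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVR

      def score(log_params, X, y):
          C, gamma = np.exp(log_params)
          model = SVR(C=C, gamma=gamma)
          return -cross_val_score(model, X, y, cv=3).mean()

      def two_step_search(X, y):
          grid = [(c, g) for c in np.linspace(-2, 6, 5)
                         for g in np.linspace(-6, 2, 5)]
          best = min(grid, key=lambda p: score(np.array(p), X, y))  # step 1
          res = minimize(score, x0=np.array(best), args=(X, y),     # step 2
                         method="Nelder-Mead")
          return np.exp(res.x)  # best (C, gamma)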

  9. Composting. Sludge Treatment and Disposal Course #166. Instructor's Guide [and] Student Workbook.

    ERIC Educational Resources Information Center

    Arasmith, E. E.

    Composting is a lesson developed for a sludge treatment and disposal course. The lesson discusses the basic theory of composting and the basic operation, in a step-by-step sequence, of the two typical composting procedures: windrow and forced air static pile. The lesson then covers basic monitoring and operational procedures. The instructor's…

  10. Global linear-irreversible principle for optimization in finite-time thermodynamics

    NASA Astrophysics Data System (ADS)

    Johal, Ramandeep S.

    2018-03-01

    There is intense effort into understanding the universal properties of finite-time models of thermal machines at optimal performance, such as efficiency at maximum power, coefficient of performance at maximum cooling power, and other such criteria. In this letter, a global principle consistent with linear irreversible thermodynamics is proposed for the whole cycle, without considering details of irreversibilities in the individual steps of the cycle. This helps to express the total duration of the cycle as $\tau \propto \bar{Q}^{2}/\Delta_{\mathrm{tot}}S$, where $\bar{Q}$ models the effective heat transferred through the machine during the cycle, and $\Delta_{\mathrm{tot}}S$ is the total entropy generated. By taking $\bar{Q}$ in the form of simple algebraic means (such as arithmetic and geometric means) over the heats exchanged by the reservoirs, the present approach is able to predict various standard expressions for figures of merit at optimal performance, as well as the bounds respected by them. It simplifies the optimization procedure to a one-parameter optimization, and provides a fresh perspective on the issue of universality at optimal performance for a small difference in reservoir temperatures. As an illustration, we compare the performance of a partially optimized four-step endoreversible cycle with the present approach.
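
    In outline, the one-parameter reduction can be reconstructed as follows (a hedged sketch from the abstract; only the relation for the cycle duration is stated there):

      % Hedged sketch: why power maximization reduces to one parameter.
      % For reservoirs at T_h > T_c exchanging heats Q_h and Q_c per cycle:
      \[
        \Delta_{\mathrm{tot}} S = -\frac{Q_h}{T_h} + \frac{Q_c}{T_c}, \qquad
        \tau \propto \frac{\bar{Q}^{2}}{\Delta_{\mathrm{tot}} S}, \qquad
        P = \frac{Q_h - Q_c}{\tau} \propto
            \frac{(Q_h - Q_c)\,\Delta_{\mathrm{tot}} S}{\bar{Q}^{2}}.
      \]
      % If \bar{Q} is an algebraic mean of Q_h and Q_c (arithmetic, geometric,
      % ...), every factor above is homogeneous in Q_h, so P depends only on
      % the ratio x = Q_c / Q_h and maximizing power is a one-parameter problem.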

  11. An Alternative Approach to the Operation of Multinational Reservoir Systems: Application to the Amistad & Falcon System (Lower Rio Grande/Río Bravo)

    NASA Astrophysics Data System (ADS)

    Serrat-Capdevila, A.; Valdes, J. B.

    2005-12-01

    An optimization approach for the operation of international multi-reservoir systems is presented. The approach uses stochastic dynamic programming (SDP) algorithms, both steady-state and real-time, to develop two models. In the first model, the reservoirs and flows of the system are aggregated to yield an equivalent reservoir, and the resulting operating policies are disaggregated using a non-linear optimization procedure for each reservoir and for each nation's water balance. In the second model, a multi-reservoir approach is applied, disaggregating the releases for each country's water share in each reservoir. The non-linear disaggregation algorithm uses SDP-derived operating policies as boundary conditions for a local time-step optimization. Finally, the performance of the different approaches and methods is compared. These models are applied to the Amistad-Falcon international reservoir system as part of a binational dynamic modeling effort to develop a decision support tool for better management of the water resources in the Lower Rio Grande Basin, which is currently enduring a severe drought.
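
    A generic backward-recursion sketch of the SDP component for a single (equivalent) reservoir follows; the storage and release grids, inflow distribution and benefit function are placeholders, and the binational disaggregation layer is not shown:

      # Backward recursion for stochastic dynamic programming on a discretized
      # storage grid (illustrative; release feasibility handled by clipping).
      import numpy as np

      def sdp_policy(storages, releases, inflows, probs, benefit, horizon):
          """Returns the release index to use at each (stage, storage) point."""
          V = np.zeros(len(storages))
          policy = np.zeros((horizon, len(storages)), dtype=int)
          for t in reversed(range(horizon)):
              V_new = np.full(len(storages), -np.inf)
              for si, s in enumerate(storages):
                  for ri, r in enumerate(releases):
                      val = 0.0
                      for q, p in zip(inflows, probs):  # expectation over inflow
                          s_next = np.clip(s + q - r, storages[0], storages[-1])
                          j = np.abs(storages - s_next).argmin()
                          val += p * (benefit(r, t) + V[j])
                      if val > V_new[si]:
                          V_new[si], policy[t, si] = val, ri
              V = V_new
          return policy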

  12. Optimized Enrichment of Phosphoproteomes by Fe-IMAC Column Chromatography.

    PubMed

    Ruprecht, Benjamin; Koch, Heiner; Domasinska, Petra; Frejno, Martin; Kuster, Bernhard; Lemeer, Simone

    2017-01-01

    Phosphorylation is among the most important post-translational modifications of proteins and has numerous regulatory functions across all domains of life. However, phosphorylation is often substoichiometric, requiring selective and sensitive methods to enrich phosphorylated peptides from complex cellular digests. Various methods have been devised for this purpose and we have recently described a Fe-IMAC HPLC column chromatography setup which is capable of comprehensive, reproducible, and selective enrichment of phosphopeptides out of complex peptide mixtures. In contrast to other formats such as StageTips or batch incubations using TiO2 or Ti-IMAC beads, Fe-IMAC HPLC columns do not suffer from issues regarding incomplete phosphopeptide binding or elution and enrichment efficiency scales linearly with the amount of starting material. Here, we provide a step-by-step protocol for the entire phosphopeptide enrichment procedure including sample preparation (lysis, digestion, desalting), Fe-IMAC column chromatography (column setup, operation, charging), measurement by LC-MS/MS (nHPLC gradient, MS parameters) and data analysis (MaxQuant). To increase throughput, we have optimized several key steps such as the gradient time of the Fe-IMAC separation (15 min per enrichment), the number of consecutive enrichments possible between two chargings (>20) and the column recharging itself (<1 h). We show that the application of this protocol enables the selective (>90%) identification of more than 10,000 unique phosphopeptides from 1 mg of HeLa digest within 2 h of measurement time (Q Exactive Plus).

  13. And never the twain shall meet? Integrating revenue cycle and supply chain functions.

    PubMed

    Matjucha, Karen A; Chung, Bianca

    2008-09-01

    Four initial steps to implementing a profit and loss management model are: Identify the supplies clinicians are using. Empower stakeholders to remove items that are not commonly used. Reduce factors driving wasted product. Review the chargemaster to ensure that supplies used in selected procedures are represented. Strategically set prices that optimize maximum allowable reimbursement.

  14. Application of the optimal homotopy asymptotic method to nonlinear Bingham fluid dampers

    NASA Astrophysics Data System (ADS)

    Marinca, Vasile; Ene, Remus-Daniel; Bereteu, Liviu

    2017-10-01

    Dynamic response time is an important feature for determining the performance of magnetorheological (MR) dampers in practical civil engineering applications. The objective of this paper is to show how to use the Optimal Homotopy Asymptotic Method (OHAM) to give approximate analytical solutions of the nonlinear differential equation of a modified Bingham model with non-viscous exponential damping. Our procedure does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. OHAM is very efficient in practice for ensuring very rapid convergence of the solution after only one iteration and with a small number of steps.

  15. Application of modified Rosenbrock's method for optimization of nutrient media used in microorganism culturing.

    PubMed

    Votruba, J; Pilát, P; Prokop, A

    1975-12-01

    Rosenbrock's procedure has been modified for the optimization of nutrient medium composition and found to be less tedious than the Box-Wilson method, especially for larger numbers of optimized parameters. Its merits are particularly obvious in multiparameter optimization, where the gradient method, so far the only one of the various optimization methods employed in microbiology (e.g., refs. 9 and 10), becomes impractical because of the excessive number of experiments required. The suggested method is also more stable during optimization than gradient methods, which are very sensitive to the selection of steps in the direction of the gradient and may thus easily shoot out of the optimized region. It is also anticipated that other direct search methods, particularly simplex design, may be easily adapted for the optimization of medium composition. It is obvious that direct search methods may find application in process improvement in the antibiotic and related industries.
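
    A simplified sketch of Rosenbrock's rotating-coordinate search, here maximizing a response (e.g., biomass yield) over medium component concentrations; the expansion/contraction factors and the rotation schedule are simplified relative to the original method:

      # Simplified Rosenbrock rotating-direction search (maximization sketch).
      import numpy as np

      def rosenbrock_search(f, x0, step=0.1, alpha=3.0, beta=-0.5, rounds=50):
          n = len(x0)
          dirs = np.eye(n)                    # orthonormal search directions
          steps = np.full(n, float(step))
          x = np.asarray(x0, float)
          fx = f(x)
          for _ in range(rounds):
              gains = np.zeros(n)
              for i in range(n):
                  trial = x + steps[i] * dirs[i]
                  f_trial = f(trial)
                  if f_trial > fx:            # success: keep point, expand step
                      gains[i] += steps[i]
                      x, fx = trial, f_trial
                      steps[i] *= alpha
                  else:                       # failure: shrink and reverse step
                      steps[i] *= beta
              progress = dirs.T @ gains       # net move in original coordinates
              if np.linalg.norm(progress) > 0:
                  # Rotate so the first direction follows the recent progress.
                  basis = np.column_stack([progress, dirs[1:].T])
                  q, _ = np.linalg.qr(basis)
                  dirs = q.T
          return x, fx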

  16. How quantizable matter gravitates: A practitioner's guide

    NASA Astrophysics Data System (ADS)

    Schuller, Frederic P.; Witte, Christof

    2014-05-01

    We present the practical step-by-step procedure for constructing canonical gravitational dynamics and kinematics directly from any previously specified quantizable classical matter dynamics, and then illustrate the application of this recipe by way of two completely worked case studies. Following the same procedure, any phenomenological proposal for fundamental matter dynamics must be supplemented with a suitable gravity theory providing the coefficients and kinematical interpretation of the matter theory, before any of the two theories can be meaningfully compared to experimental data.

  17. A Simulation Optimization Approach to Epidemic Forecasting

    PubMed Central

    Nsoesie, Elaine O.; Beckman, Richard J.; Shashaani, Sara; Nagaraj, Kalyani S.; Marathe, Madhav V.

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem, however this is a preliminary step and the results suggest that more can be achieved in this area. PMID:23826222
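
    To make the optimization step concrete, the sketch below fits the transmission parameters of a cheap discrete SIR stand-in (not the paper's individual-based network model) to an observed weekly epidemic curve with the Nelder-Mead simplex method:

      # Fit (beta, gamma) of a discrete weekly SIR model to observed new-case
      # counts using Nelder-Mead (illustrative stand-in for the SIMOP loop).
      import numpy as np
      from scipy.optimize import minimize

      def sir_curve(beta, gamma, n_weeks, n=1e5, i0=10.0):
          s, i, new_cases = n - i0, i0, []
          for _ in range(n_weeks):
              inf = beta * s * i / n          # new infections this week
              s, i = s - inf, i + inf - gamma * i
              new_cases.append(inf)
          return np.array(new_cases)

      def fit(observed):
          loss = lambda p: np.sum(
              (sir_curve(p[0], p[1], len(observed)) - observed) ** 2)
          return minimize(loss, x0=[0.5, 0.3], method="Nelder-Mead").x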

  18. A Simulation Optimization Approach to Epidemic Forecasting.

    PubMed

    Nsoesie, Elaine O; Beckman, Richard J; Shashaani, Sara; Nagaraj, Kalyani S; Marathe, Madhav V

    2013-01-01

    Reliable forecasts of influenza can aid in the control of both seasonal and pandemic outbreaks. We introduce a simulation optimization (SIMOP) approach for forecasting the influenza epidemic curve. This study represents the final step of a project aimed at using a combination of simulation, classification, statistical and optimization techniques to forecast the epidemic curve and infer underlying model parameters during an influenza outbreak. The SIMOP procedure combines an individual-based model and the Nelder-Mead simplex optimization method. The method is used to forecast epidemics simulated over synthetic social networks representing Montgomery County in Virginia, Miami, Seattle and surrounding metropolitan regions. The results are presented for the first four weeks. Depending on the synthetic network, the peak time could be predicted within a 95% CI as early as seven weeks before the actual peak. The peak infected and total infected were also accurately forecasted for Montgomery County in Virginia within the forecasting period. Forecasting of the epidemic curve for both seasonal and pandemic influenza outbreaks is a complex problem, however this is a preliminary step and the results suggest that more can be achieved in this area.

  19. Shape design of internal cooling passages within a turbine blade

    NASA Astrophysics Data System (ADS)

    Nowak, Grzegorz; Nowak, Iwona

    2012-04-01

    The article concerns the optimization of the shape and location of non-circular passages cooling a gas turbine blade. To model the shape, four Bezier curves forming a closed profile of the passage were used. In order to match the shape of the passage to the blade profile, a technique was put forward to copy and scale fragments of the blade profile into the component and build the outline of the passage from them. For the cooling passages defined in this way, optimization calculations were carried out with a view to finding their optimal shape and location in terms of the assumed objectives. The task was solved as a multi-objective problem with the use of the Pareto method, for a cooling system composed of four and five passages. The tool employed for the optimization was an evolutionary algorithm. The article presents the impact of the population on the convergence of the task, and discusses the impact of different optimization objectives on the Pareto-optimal solutions obtained. Because the individual objectives were found during the calculations to influence the position of the solution front to different degrees, a two-step optimization procedure was introduced. Comparative optimization calculations for a scalar objective function were also carried out and set against the non-dominated solutions obtained with the Pareto approach. The optimization process resulted in a configuration of the cooling system that allows a significant reduction in the temperature of the blade and its thermal stress.
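
    The shape model is easy to reproduce in outline: four cubic Bezier segments sharing endpoints give a closed passage profile. A minimal sketch follows, with purely illustrative control points (the paper derives them from scaled blade-profile fragments):

      # Evaluate a closed profile built from chained cubic Bezier segments.
      import numpy as np

      def bezier(p0, p1, p2, p3, t):
          """Cubic Bezier curve; p0..p3 are (2,) arrays, t a 1D parameter array."""
          t = t[:, None]
          return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                  + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

      def closed_profile(segments, samples=50):
          """segments: list of four (p0, p1, p2, p3) tuples; consecutive
          segments share endpoints so that the profile closes."""
          t = np.linspace(0.0, 1.0, samples)
          return np.vstack([bezier(*seg, t) for seg in segments])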

  20. A facile and efficient single-step approach for the fabrication of vancomycin functionalized polymer-based monolith as chiral stationary phase for nano-liquid chromatography.

    PubMed

    Xu, Dongsheng; Shao, Huikai; Luo, Rongying; Wang, Qiqin; Sánchez-López, Elena; Fanali, Salvatore; Marina, Maria Luisa; Jiang, Zhengjin

    2018-07-06

    A facile single-step preparation strategy for fabricating a vancomycin functionalized organic polymer-based monolith within a 100 μm fused-silica capillary was developed. The synthetic chiral functional monomer, i.e. the 2-isocyanatoethyl methacrylate (ICNEML) derivative of vancomycin, was co-polymerized with the cross-linker ethylene dimethacrylate (EDMA) in the presence of methanol and dimethyl sulfoxide as the selected porogens. The co-polymerization conditions were systematically optimized in order to obtain satisfactory column performance. Adequate permeability, stability and column morphology were observed for the optimized poly(ICNEML-vancomycin-co-EDMA) monolith. A series of chiral drugs were evaluated on the monolith in either polar organic-phase or reversed-phase mode. After optimization of the separation conditions, baseline or partial enantioseparation was obtained for a series of drugs including thalidomide, colchicine, carteolol, salbutamol, clenbuterol and several other β-blockers. The proposed single-step approach not only resulted in a vancomycin functionalized organic polymer-based monolith with acceptable performance, but also significantly simplified the preparation procedure by reducing time and labor. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Load forecasting via suboptimal seasonal autoregressive models and iteratively reweighted least squares estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mbamalu, G.A.N.; El-Hawary, M.E.

    The authors propose suboptimal least squares or IRWLS procedures for estimating the parameters of a seasonal multiplicative AR model encountered during power system load forecasting. The proposed method involves using an interactive computer environment to estimate the parameters of a seasonal multiplicative AR process. The method comprises five major computational steps. The first determines the order of the seasonal multiplicative AR process, and the second uses least squares or IRWLS to estimate the optimal nonseasonal AR model parameters. In the third step, one obtains the intermediate series by back forecasting, which is followed by using least squares or IRWLS to estimate the optimal seasonal AR parameters. The final step uses the estimated parameters to forecast future load. The method is applied to predict the Nova Scotia Power Corporation's 168-hour lead time hourly load. The results obtained are documented and compared with results based on the Box and Jenkins method.

  2. Modern concepts in facial nerve reconstruction

    PubMed Central

    2010-01-01

    Background Reconstructive surgery of the facial nerve is not daily routine for most head and neck surgeons. The published experience on strategies to ensure optimal functional results for the patients is based on small case series with a large variety of surgical techniques. Against this background, it is worthwhile to develop a standardized approach for the diagnosis and treatment of patients asking for facial rehabilitation. Conclusion A standardized approach is feasible: patients with chronic facial palsy first need an exact classification of the palsy's aetiology. A step-by-step clinical examination, if necessary with MRI imaging and electromyographic examination, allows a classification of the palsy's aetiology as well as the determination of the severity of the palsy and the functional deficits. Considering the patient's desire, age and life expectancy, an individual surgical concept is applicable using three main approaches: a) early extratemporal reconstruction; b) early reconstruction of proximal lesions if extratemporal reconstruction is not possible; c) late reconstruction, or reconstruction in cases of congenital palsy. Twelve to 24 months after the last step of surgical reconstruction, a standardized evaluation of the therapeutic results is recommended to assess the need for adjuvant surgical procedures or other adjuvant measures, e.g. botulinum toxin application. To date, controlled trials on the value of physiotherapy and other adjuvant measures are lacking, so no recommendations can be given for their optimal application. PMID:21040532

  3. Comparing Multi-Step IMAC and Multi-Step TiO2 Methods for Phosphopeptide Enrichment

    PubMed Central

    Yue, Xiaoshan; Schunter, Alissa; Hummon, Amanda B.

    2016-01-01

    Phosphopeptide enrichment from complicated peptide mixtures is an essential step in mass spectrometry-based phosphoproteomic studies to reduce sample complexity and ionization suppression effects. Typical methods for enriching phosphopeptides use immobilized metal affinity chromatography (IMAC) or titanium dioxide (TiO2) beads, which have a selective affinity for phosphopeptides. In this study, the IMAC enrichment method was compared with the TiO2 enrichment method, using a multi-step enrichment strategy from whole cell lysate, to evaluate their abilities to enrich different types of phosphopeptides. The peptide-to-beads ratios were optimized for both IMAC and TiO2 beads. Both IMAC and TiO2 enrichments were performed for three rounds to enable the maximum extraction of phosphopeptides from the whole cell lysates. The phosphopeptides unique to IMAC enrichment, unique to TiO2 enrichment, and identified with both IMAC and TiO2 enrichment were analyzed for their characteristics. Both IMAC and TiO2 enriched similar amounts of phosphopeptides with comparable enrichment efficiency. However, phosphopeptides unique to IMAC enrichment showed a higher percentage of multi-phosphopeptides, as well as a higher percentage of longer, basic, and hydrophilic phosphopeptides. Also, the IMAC and TiO2 procedures clearly enriched phosphopeptides with different motifs. Finally, further enrichment with two rounds of TiO2 from the supernatant after IMAC enrichment, or with two rounds of IMAC from the supernatant after TiO2 enrichment, does not fully recover the phosphopeptides that are not identified with the corresponding multi-step enrichment. PMID:26237447

  4. [Determination of serum or plasma alpha-tocopherol by high performance liquid chromatography: optimization of operative models].

    PubMed

    Jezequel-Cuer, M; Le Moël, G; Mounié, J; Peynet, J; Le Bizec, C; Vernet, M H; Artur, Y; Laschi-Loquerie, A; Troupel, S

    1995-01-01

    A previous multicentric study set up by the Société française de biologie clinique emphasized the usefulness of a standardized procedure for the determination of alpha-tocopherol in serum or plasma by high performance liquid chromatography. In our study, we tested every step of the different published procedures: internal standard addition, lipoprotein denaturation and vitamin extraction. Reproducibility of the results was improved by the use of tocol as an internal standard, compared with retinol or alpha-tocopherol acetates. Lipoprotein denaturation was more efficient with ethanol addition than with methanol, and when the ethanol/water ratio was > or = 0.7. Use of n-hexane or n-heptane gave the same recovery of alpha-tocopherol. When the organic solvent/water ratio was > or = 1, n-hexane efficiently extracted, in a one-step procedure, the alpha-tocopherol from both normo- and hyperlipidemic sera. The performance of the selected procedure was: detection limit, 0.5 microM; linear range, 750 microM; within-run coefficient of variation, 2.03%; day-to-day, 4.76%. Finally, this multicentric study allows us to propose an optimised procedure for the determination of alpha-tocopherol in serum or plasma.

  5. Practices & Procedures of Mason Tending I & II. Instructor Manual. Trainee Manual.

    ERIC Educational Resources Information Center

    Laborers-AGC Education and Training Fund, Pomfret Center, CT.

    This packet consists of the instructor and trainee manuals for two courses: practices and procedures of mason tending I and II. The instructor manual for mason tending I contains a schedule for a 40-hour, 5-day course and instructor outline. The outline provides a step-by-step description of the instructor's activities and includes answer sheets…

  6. A comparison of statistical criteria for setting optimally discriminating MCAT and GPA thresholds in medical school admissions.

    PubMed

    Albanese, Mark A; Farrell, Philip; Dottl, Susan L

    2005-01-01

    Using Medical College Admission Test-grade point average (MCAT-GPA) scores as a threshold has the potential to address issues raised in recent Supreme Court cases, but it introduces complicated methodological issues for medical school admissions. To assess various statistical indexes to determine optimally discriminating thresholds for MCAT-GPA scores. Entering classes from 1992 through 1998 (N = 752) are used to develop guidelines for cut scores that optimize discrimination between students who pass and do not pass the United States Medical Licensing Examination (USMLE) Step 1 on the first attempt. Risk differences, odds ratios, sensitivity, and specificity discriminated best for setting thresholds. Compensatory versus noncompensatory procedures both accounted for 54% of Step 1 failures, but demanded different performance requirements (noncompensatory: MCAT biological sciences = 8, physical sciences = 7, verbal reasoning = 7, sum of scores = 22; compensatory: MCAT total = 24). Rational and defensible intellectual achievement thresholds that are likely to comply with recent Supreme Court decisions can be set from MCAT scores and GPAs.

  7. Optimal Synthesis of Compliant Mechanisms using Subdivision and Commercial FEA (DETC2004-57497)

    NASA Technical Reports Server (NTRS)

    Hull, Patrick V.; Canfield, Stephen

    2004-01-01

    The field of distributed-compliance mechanisms has seen significant work in developing suitable topology optimization tools for their design. These optimal design tools have grown out of the techniques of structural optimization. This paper builds on the previous work in topology optimization and compliant mechanism design by proposing an alternative design space parameterization through control points and adding another step to the process, that of subdivision. The control points allow a specific design to be represented as a solid model during the optimization process. The process of subdivision creates an additional number of control points that help smooth the surface (for example, a C^2-continuous surface, depending on the method of subdivision chosen), creating a manufacturable design free of some traditional numerical instabilities. Note that these additional control points do not add to the number of design parameters. This alternative parameterization and description as a solid model effectively and completely separates the design variables from the analysis variables during the optimization procedure. The motivation behind this work is to create an automated design tool covering everything from task definition to a functional prototype created on a CNC or rapid-prototyping machine. This paper describes the proposed compliant mechanism design process and demonstrates the procedure on several examples common in the literature.
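
    As an illustration of the subdivision step, Chaikin's corner-cutting scheme below refines a control polygon toward a smooth limit curve without increasing the number of design variables; its limit curve is C^1 (a quadratic B-spline), while other subdivision schemes give the C^2 continuity mentioned above:

      # Chaikin corner-cutting subdivision of an open control polygon (sketch).
      import numpy as np

      def chaikin(points, passes=3):
          pts = np.asarray(points, float)
          for _ in range(passes):
              q = 0.75 * pts[:-1] + 0.25 * pts[1:]   # point 1/4 along each edge
              r = 0.25 * pts[:-1] + 0.75 * pts[1:]   # point 3/4 along each edge
              pts = np.column_stack([q, r]).reshape(-1, pts.shape[1])
          return pts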

  8. Rapid Two-Step Procedure for Large-Scale Purification of Pediocin-Like Bacteriocins and Other Cationic Antimicrobial Peptides from Complex Culture Medium

    PubMed Central

    Uteng, Marianne; Hauge, Håvard Hildeng; Brondz, Ilia; Nissen-Meyer, Jon; Fimland, Gunnar

    2002-01-01

    A rapid and simple two-step procedure suitable for both small- and large-scale purification of pediocin-like bacteriocins and other cationic peptides has been developed. In the first step, the bacterial culture was applied directly on a cation-exchange column (1-ml cation exchanger per 100-ml cell culture). Bacteria and anionic compounds passed through the column, and cationic bacteriocins were subsequently eluted with 1 M NaCl. In the second step, the bacteriocin fraction was applied on a low-pressure, reverse-phase column and the bacteriocins were detected as major optical density peaks upon elution with propanol. More than 80% of the activity that was initially in the culture supernatant was recovered in both purification steps, and the final bacteriocin preparation was more than 90% pure as judged by analytical reverse-phase chromatography and capillary electrophoresis. PMID:11823243

  9. Explant culture: An advantageous method for isolation of mesenchymal stem cells from human tissues.

    PubMed

    Hendijani, Fatemeh

    2017-04-01

    Mesenchymal stem cell (MSC) research is progressively moving towards clinical phases. Accordingly, a wide range of different procedures for MSC isolation from human tissues has been presented in the literature; however, there has not yet been any close focus on the details that would offer precise information for selecting the best method. Choosing a proper isolation method is a critical step in obtaining cells with optimal quality and yield, alongside clinical and economic considerations. In this regard, the current review discusses in detail the advantages of omitting the proteolysis step in the isolation process and of retaining tissue pieces in the primary culture of MSCs: removal of lytic stress on the cells, reduction of the in vivo to in vitro transition stress for migrated/isolated cells, reduction of cost, processing time and labour, removal of the risk of viral contamination, and the added supporting functions of the extracellular matrix and of growth factors released from the tissue explant. In the following sections, it provides an overall report of the technical highlights and molecular events of the explant culture method for the isolation of MSCs from human tissues including adipose tissue, bone marrow, dental pulp, hair follicle, cornea, umbilical cord and placenta. A focused and informative collection of methodological and molecular data about explant methods can make it easy for researchers to choose an optimal method for their experiments or clinical studies, and also stimulate them to investigate and optimize more efficient procedures according to clinical and economic benefits. © 2017 John Wiley & Sons Ltd.

  10. Radiometric and spectral stray light correction for the portable remote imaging spectrometer (PRISM) coastal ocean sensor

    NASA Astrophysics Data System (ADS)

    Haag, Justin M.; Van Gorp, Byron E.; Mouroulis, Pantazis; Thompson, David R.

    2017-09-01

    The airborne Portable Remote Imaging Spectrometer (PRISM) instrument is based on a fast (F/1.8) Dyson spectrometer operating at 350-1050 nm and a two-mirror telescope combined with a Teledyne HyViSI 6604A detector array. Raw PRISM data contain electronic and optical artifacts that must be removed prior to radiometric calibration. We provide an overview of the process transforming raw digital numbers to calibrated radiance values. Electronic panel artifacts are first corrected using empirical relationships developed from laboratory data. The instrument spectral response functions (SRF) are reconstructed using a measurement-based optimization technique. Removal of SRF effects from the data improves retrieval of true spectra, particularly in the typically low-signal near-ultraviolet and near-infrared regions. As a final step, radiometric calibration is performed using corrected measurements of an object of known radiance. Implementation of the complete calibration procedure maximizes data quality in preparation for subsequent processing steps, such as atmospheric removal and spectral signature classification.
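
    The final radiometric step is conceptually a per-pixel two-point calibration; a hedged sketch follows (frame names and inputs are illustrative, not the PRISM pipeline's actual data products):

      # Generic per-pixel radiometric calibration: derive gains from a dark
      # frame and a target of known radiance, then apply them to
      # artifact-corrected raw frames (illustrative sketch).
      import numpy as np

      def radiometric_gain(dn_target, dn_dark, target_radiance):
          return target_radiance / (dn_target - dn_dark)

      def to_radiance(dn_corrected, dn_dark, gain):
          return (dn_corrected - dn_dark) * gain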

  11. Optimization Under Uncertainty for Wake Steering Strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, Julian; Annoni, Jennifer; King, Ryan N

    2017-08-03

    This presentation covers the motivation for this research, the optimization-under-uncertainty problem formulation, a two-turbine case, the Princess Amalia Wind Farm case, and conclusions and next steps.

  12. Optimal leveling of flow over one-dimensional topography by Marangoni stresses

    NASA Astrophysics Data System (ADS)

    Gramlich, C. M.; Homsy, G. M.; Kalliadasis, Serafim

    2001-11-01

    A thin viscous film flowing over a step down in topography exhibits a capillary ridge near the step, which may be undesirable in applications. This paper investigates optimal leveling of the ridge by means of a Marangoni stress such as might be produced by a localized heater creating temperature variations at the film surface. Lubrication theory results in a differential equation for the free surface, which can be solved numerically for any given topography and temperature profile. Leveling the ridge is then formulated as an optimization problem to minimize the maximum free-surface height by varying the heater strength, position, and width. Optimized heaters with 'top-hat' or parabolic temperature profiles replace the original ridge with two smaller ridges of equal size, achieving leveling of better than 50%. An optimized asymmetric n-step temperature distribution results in (n+1) ridges and reduces the variation in surface height by a factor of better than 1/(n+1).

  13. Magnetostatic focal spot correction for x-ray tubes operating in strong magnetic fields using iterative optimization

    PubMed Central

    Lillaney, Prasheel; Shin, Mihye; Conolly, Steven M.; Fahrig, Rebecca

    2012-01-01

    Purpose: Combining x-ray fluoroscopy and MR imaging systems for guidance of interventional procedures has become more commonplace. By designing an x-ray tube that is immune to the magnetic fields outside of the MR bore, the two systems can be placed in close proximity to each other. A major obstacle to robust x-ray tube design is correcting for the effects of the magnetic fields on the x-ray tube focal spot. A potential solution is to design active shielding that locally cancels the magnetic fields near the focal spot. Methods: An iterative optimization algorithm is implemented to design resistive active shielding coils that will be placed outside the x-ray tube insert. The optimization procedure attempts to minimize the power consumption of the shielding coils while satisfying magnetic field homogeneity constraints. The algorithm is composed of a linear programming step and a nonlinear programming step that are interleaved with each other. The coil results are verified using a finite element space charge simulation of the electron beam inside the x-ray tube. To alleviate heating concerns, an optimized coil solution is derived that includes a neodymium permanent magnet. Any demagnetization of the permanent magnet is calculated prior to solving for the optimized coils. The temperature dynamics of the coil solutions are calculated using a lumped parameter model, which is used to estimate operation times of the coils before temperature failure. Results: For a magnetic field strength of 88 mT, the algorithm solves for coils that require a current density of 588 A/cm2. This specific coil geometry can operate for 15 min continuously before reaching temperature failure. By including a neodymium magnet in the design, the current density drops to 337 A/cm2, which increases the operation time to 59 min. Space charge simulations verify that the coil designs are effective, but for oblique x-ray tube geometries there is still distortion of the focal spot shape along with deflections of approximately 3 mm in the radial and circumferential directions on the anode. Conclusions: Active shielding is an attractive solution for correcting the effects of magnetic fields on the x-ray focal spot. If extremely long fluoroscopic exposure times are required, longer operation times can be achieved by including a permanent magnet with the active shielding design. PMID:22957623
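
    The interleaved LP/NLP design loop cannot be reproduced from the abstract, but its core trade-off, minimizing coil power subject to field-homogeneity constraints, can be sketched with a linear field model B = A·I and SLSQP (the matrix `A`, the tolerance and all values below are invented):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n_coils, n_points = 6, 20
    A = rng.normal(size=(n_points, n_coils))   # field per unit current (toy model)
    I_ref = rng.normal(size=n_coils)           # a feasible current pattern
    B_ext = -(A @ I_ref)                       # external field it cancels exactly
    tol = 1e-3                                 # homogeneity tolerance (tesla)

    power = lambda I: float(I @ I)             # proxy for resistive power, sum(I^2 R)

    # Constraints: residual field within +/- tol at every control point.
    cons = []
    for k in range(n_points):
        cons.append({"type": "ineq", "fun": lambda I, k=k: tol - (A[k] @ I + B_ext[k])})
        cons.append({"type": "ineq", "fun": lambda I, k=k: tol + (A[k] @ I + B_ext[k])})

    res = minimize(power, x0=I_ref, constraints=cons, method="SLSQP")
    print("feasible:", res.success, "power proxy:", round(res.fun, 4))
    print("coil currents:", res.x.round(3))
    ```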

  14. Optimization of cryoprotectant loading into murine and human oocytes.

    PubMed

    Karlsson, Jens O M; Szurek, Edyta A; Higgins, Adam Z; Lee, Sang R; Eroglu, Ali

    2014-02-01

    Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethyl sulfoxide (Me2SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me2SO exposure time, revealing that neither shrinkage nor Me2SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me2SO addition appears to result from interactions between the effects of Me2SO toxicity and osmotic stress. We also investigated Me2SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me2SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me2SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. Copyright © 2013 Elsevier Inc. All rights reserved.
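
    Optimizations of this kind typically rest on a membrane-transport ("two-parameter") model: water flux driven by the osmolality difference across the membrane and CPA flux driven by its own gradient. A toy sketch comparing one-step and two-step Me2SO addition (all constants are illustrative, not the paper's fitted values):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Two-parameter transport model (illustrative constants, arbitrary units).
    # State: intracellular water volume Vw and moles of permeating CPA Ns.
    Lp, Ps = 0.8, 0.15     # hydraulic conductivity, CPA permeability
    A = 1.0                # membrane area
    iso = 0.3              # isotonic osmolality of impermeant solutes

    def rhs(t, y, m_cpa_ext):
        Vw, Ns = y
        m_int = iso / Vw + Ns / Vw               # total internal osmolality (toy)
        m_ext = iso + m_cpa_ext(t)               # external osmolality
        dVw = -Lp * A * (m_ext - m_int)          # water leaves if outside is stronger
        dNs = Ps * A * (m_cpa_ext(t) - Ns / Vw)  # CPA diffuses down its gradient
        return [dVw, dNs]

    one_step = lambda t: 1.5                             # full 1.5 M from t = 0
    two_step = lambda t: 0.75 if t < 5.0 else 1.5        # half dose first, then full

    for name, protocol in [("one-step", one_step), ("two-step", two_step)]:
        sol = solve_ivp(rhs, (0.0, 15.0), [1.0, 0.0], args=(protocol,), max_step=0.01)
        # The two-step protocol keeps the minimum water volume higher,
        # i.e. less osmotic shrinkage during loading.
        print(name, "minimum normalized water volume:",
              round(float(sol.y[0].min()), 3))
    ```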

  15. Optimization of Cryoprotectant Loading into Murine and Human Oocytes

    PubMed Central

    Karlsson, Jens O.M.; Szurek, Edyta A.; Higgins, Adam Z.; Lee, Sang R.; Eroglu, Ali

    2014-01-01

    Loading of cryoprotectants into oocytes is an important step of the cryopreservation process, in which the cells are exposed to potentially damaging osmotic stresses and chemical toxicity. Thus, we investigated the use of physics-based mathematical optimization to guide design of cryoprotectant loading methods for mouse and human oocytes. We first examined loading of 1.5 M dimethylsulfoxide (Me2SO) into mouse oocytes at 23°C. Conventional one-step loading resulted in rates of fertilization (34%) and embryonic development (60%) that were significantly lower than those of untreated controls (95% and 94%, respectively). In contrast, the mathematically optimized two-step method yielded much higher rates of fertilization (85%) and development (87%). To examine the causes for oocyte damage, we performed experiments to separate the effects of cell shrinkage and Me2SO exposure time, revealing that neither shrinkage nor Me2SO exposure single-handedly impairs the fertilization and development rates. Thus, damage during one-step Me2SO addition appears to result from interactions between the effects of Me2SO toxicity and osmotic stress. We also investigated Me2SO loading into mouse oocytes at 30°C. At this temperature, fertilization rates were again lower after one-step loading (8%) in comparison to mathematically optimized two-step loading (86%) and untreated controls (96%). Furthermore, our computer algorithm generated an effective strategy for reducing Me2SO exposure time, using hypotonic diluents for cryoprotectant solutions. With this technique, 1.5 M Me2SO was successfully loaded in only 2.5 min, with 92% fertilizability. Based on these promising results, we propose new methods to load cryoprotectants into human oocytes, designed using our mathematical optimization approach. PMID:24246951

  16. One-Step and Two-Step Facility Acquisition for Military Construction: Project Selection and Implementation Procedures

    DTIC Science & Technology

    1990-08-01

    the guidance in this report. 1-4. Scope This guidance covers selection of projects suitable for a One-Step or Two-Step approach, development of design...conducted, focus on resolving proposal deficiencies; prices are not "negotiated" in the common use of the term. A Request for Proposal (RFP) states project ...carefully examines experience and past performance in the design of similar projects and building types. Quality of

  17. Parallel Nonnegative Least Squares Solvers for Model Order Reduction

    DTIC Science & Technology

    2016-03-01

    NNLS problems that arise when the Energy Conserving Sampling and Weighting hyper-reduction procedure is used when constructing a reduced-order model...ScaLAPACK and performance results are presented. nonnegative least squares, model order reduction, hyper-reduction, Energy Conserving Sampling and Weighting. Table 6: Reduced mesh sizes produced for each solver in the ECSW hyper-reduction step.
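
    The ECSW hyper-reduction step amounts to a nonnegative least-squares problem: choose nonnegative element weights so that the reduced mesh reproduces training quantities. A serial sketch with scipy's NNLS solver standing in for the parallel solvers of the report (data invented):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)
    n_train, n_elements = 10, 50
    G = rng.random((n_train, n_elements))    # per-element contributions (toy)
    b = G @ np.ones(n_elements)              # training targets: all weights = 1

    w, residual = nnls(G, b)                 # w >= 0 by construction; sparse in practice
    print("nonzero weights (reduced mesh size):", np.count_nonzero(w > 1e-10))
    print("residual:", residual)
    ```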

  18. Production and Isolation of Azaspiracid-1 and -2 from Azadinium spinosum Culture in Pilot Scale Photobioreactors

    PubMed Central

    Jauffrais, Thierry; Kilcoyne, Jane; Séchet, Véronique; Herrenknecht, Christine; Truquet, Philippe; Hervé, Fabienne; Bérard, Jean Baptiste; Nulty, Cíara; Taylor, Sarah; Tillmann, Urban; Miles, Christopher O.; Hess, Philipp

    2012-01-01

    Azaspiracid (AZA) poisoning has been reported following consumption of contaminated shellfish, and is of human health concern. Hence, it is important to have sustainable amounts of the causative toxins available for toxicological studies and for instrument calibration in monitoring programs, without having to rely on natural toxin events. Continuous pilot scale culturing was carried out to evaluate the feasibility of AZA production using Azadinium spinosum cultures. Algae were harvested using tangential flow filtration or continuous centrifugation. AZAs were extracted using solid phase extraction (SPE) procedures, and subsequently purified. When coupling two stirred photobioreactors in series, cell concentrations reached 190,000 and 210,000 cell·mL−1 at steady state in bioreactors 1 and 2, respectively. The AZA cell quota decreased as the dilution rate increased from 0.15 to 0.3 day−1, with optimum toxin production at 0.25 day−1. After optimization, SPE procedures allowed for the recovery of 79 ± 9% of AZAs. The preparative isolation procedure previously developed for shellfish was optimized for algal extracts, such that only four steps were necessary to obtain purified AZA1 and -2. A purification efficiency of more than 70% was achieved, and isolation from 1200 L of culture yielded 9.3 mg of AZA1 and 2.2 mg of AZA2 of >95% purity. This work demonstrated the feasibility of sustainably producing AZA1 and -2 from A. spinosum cultures. PMID:22822378

  19. Computerized optimization of multiple isocentres in stereotactic convergent beam irradiation

    NASA Astrophysics Data System (ADS)

    Treuer, U.; Treuer, H.; Hoevels, M.; Müller, R. P.; Sturm, V.

    1998-01-01

    A method for the fully computerized determination and optimization of positions of target points and collimator sizes in convergent beam irradiation is presented. In conventional interactive trial and error methods, which are very time consuming, the treatment parameters are chosen according to the operator's experience and improved successively. This time is reduced significantly by the use of a computerized procedure. After the definition of target volume and organs at risk in the CT or MR scans, an initial configuration is created automatically. In the next step the target point positions and collimator diameters are optimized by the program. The aim of the optimization is to find a configuration for which a prescribed dose at the target surface is approximated as closely as possible. At the same time, dose peaks inside the target volume are minimized and organs at risk and tissue surrounding the target are spared. To enhance the speed of the optimization, a fast method for approximate dose calculation in convergent beam irradiation is used. A possible application of the method for calculating the leaf positions when irradiating with a micromultileaf collimator is briefly discussed. The success of the procedure has been demonstrated for several clinical cases with up to six target points.

  20. Fungal bioleaching of WPCBs using Aspergillus niger: Observation, optimization and kinetics.

    PubMed

    Faraji, Fariborz; Golmohammadzadeh, Rabeeh; Rashchi, Fereshteh; Alimardani, Navid

    2018-07-01

    In this study, Aspergillus niger (A. niger) was employed as an environmentally friendly agent for fungal bioleaching of waste printed circuit boards (WPCBs). D-optimal response surface methodology (RSM) was utilized to optimize the bioleaching parameters, including the bioleaching method (one-step, two-step and spent medium) and pulp density (0.5 g L-1 to 20 g L-1), to maximize the recovery of Zn, Ni and Cu from WPCBs. According to high performance liquid chromatography analysis, citric, oxalic, malic and gluconic acids were the most abundant organic acids produced by A. niger in the 21-day experiments. Maximum recoveries of 98.57% of Zn, 43.95% of Ni and 64.03% of Cu were achieved, based on the acidolysis and complexolysis dissolution mechanisms of the organic acids. Based on the kinetic studies, the rate-controlling mechanism for Zn dissolution in the one-step approach was found to be diffusion through the liquid film, while it was mixed control for both the two-step and spent-medium approaches. Furthermore, the rate of Cu dissolution, which is controlled by diffusion in the one-step and two-step approaches, was found to be controlled by chemical reaction in the spent-medium approach. For Ni, the rate is controlled by chemical reaction in all the methods studied. Eventually, it was found that A. niger is capable of leaching 100% of Zn, 80.39% of Ni and 85.88% of Cu in 30 days. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Crafty Corner.

    ERIC Educational Resources Information Center

    Naturescope, 1986

    1986-01-01

    Presents step-by-step procedures for two arts and crafts lessons that focus on mammals. Directions are offered for making mammal-shaped dough magnets and also for creating mammal note cards. Examples of each are illustrated. (ML)

  2. VEIL Surgical Steps.

    PubMed

    Raghunath, S K; Nagaraja, H; Srivatsa, N

    2017-03-01

    Inguinal lymphadenectomy remains the standard of care for metastatic nodal disease in cases of penile, urethral, vulval and vaginal cancers. Outcomes, including cure rates and overall and progression-free survivals, have progressively improved in these diseases as criteria for offering inguinal lymph node dissection have been extended to patients 'at risk' of metastasis or loco-regional recurrence. Hence, despite the declining incidence of advanced stages of these cancers, many patients will still need to undergo lymphadenectomy for optimal oncological outcomes. Inguinal node dissection is a morbid procedure, with operative morbidity noted in almost two-thirds of patients. Video endoscopic inguinal lymphadenectomy (VEIL) has been described and is currently practiced with proven equivalent oncological outcomes. We describe our technique of VEIL using laparoscopic and robotic access as well as various new surgical strategies.

  3. The Effects of Varying Levels of Treatment Integrity on Child Compliance during Treatment with a Three-Step Prompting Procedure

    ERIC Educational Resources Information Center

    Wilder, David A.; Atwell, Julie; Wine, Byron

    2006-01-01

    The effects of three levels of treatment integrity (100%, 50%, and 0%) on child compliance were evaluated in the context of the implementation of a three-step prompting procedure. Two typically developing preschool children participated in the study. After baseline data on compliance to one of three common demands were collected, a therapist…

  4. Spotting optimization for oligo microarrays on aldehyde-glass.

    PubMed

    Dawson, Erica D; Reppert, Amy E; Rowlen, Kathy L; Kuck, Laura R

    2005-06-15

    Low-density microarrays that utilize short oligos (<100 nt) for capture are highly attractive for use in diagnostic applications, yet these experiments require strict quality control and meticulous reproducibility. However, a survey of current literature indicates vast inconsistencies in the spotting and processing procedures. In this study, spotting and processing protocols were optimized for aldehyde-functionalized glass substrates. Figures of merit were developed for quantitative comparison of spot quality and reproducibility. Experimental variables examined included oligo concentration in the spotting buffer, composition of the spotting buffer, postspotting "curing" conditions, and postspotting wash conditions. Optimized conditions included the use of 3-4 microM oligo in a 3x standard saline citrate/0.05% sodium dodecyl sulfate/0.001% 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS) spotting buffer, 24-h postspotting reaction at 100% relative humidity, and a four-step wash procedure. Evaluation of six types of aldehyde-functionalized glass substrates indicated that those manufactured by CEL Associates, Inc. yield the highest oligo coverage.

  5. Long range personalized cancer treatment strategies incorporating evolutionary dynamics.

    PubMed

    Yeang, Chen-Hsiang; Beckman, Robert A

    2016-10-22

    Current cancer precision medicine strategies match therapies to static consensus molecular properties of an individual's cancer, thus determining the next therapeutic maneuver. These strategies typically maintain a constant treatment while the cancer is not worsening. However, cancers feature complicated sub-clonal structure and dynamic evolution. We have recently shown, in a comprehensive simulation of two non-cross resistant therapies across a broad parameter space representing realistic tumors, that substantial improvement in cure rates and median survival can be obtained utilizing dynamic precision medicine strategies. These dynamic strategies explicitly consider intratumoral heterogeneity and evolutionary dynamics, including predicted future drug resistance states, and reevaluate optimal therapy every 45 days. However, the optimization is performed in single 45 day steps ("single-step optimization"). Herein we evaluate analogous strategies that think multiple therapeutic maneuvers ahead, considering potential outcomes at 5 steps ahead ("multi-step optimization") or 40 steps ahead ("adaptive long term optimization (ALTO)") when recommending the optimal therapy in each 45 day block, in simulations involving both 2 and 3 non-cross resistant therapies. We also evaluate an ALTO approach for situations where simultaneous combination therapy is not feasible ("Adaptive long term optimization: serial monotherapy only (ALTO-SMO)"). Simulations utilize populations of 764,000 and 1,700,000 virtual patients for 2 and 3 drug cases, respectively. Each virtual patient represents a unique clinical presentation including sizes of major and minor tumor subclones, growth rates, evolution rates, and drug sensitivities. While multi-step optimization and ALTO provide no significant average survival benefit, cure rates are significantly increased by ALTO. Furthermore, in the subset of individual virtual patients demonstrating clinically significant difference in outcome between approaches, by far the majority show an advantage of multi-step or ALTO over single-step optimization. ALTO-SMO delivers cure rates superior or equal to those of single- or multi-step optimization, in 2 and 3 drug cases respectively. In selected virtual patients incurable by dynamic precision medicine using single-step optimization, analogous strategies that "think ahead" can deliver long-term survival and cure without any disadvantage for non-responders. When therapies require dose reduction in combination (due to toxicity), optimal strategies feature complex patterns involving rapidly interleaved pulses of combinations and high dose monotherapy. This article was reviewed by Wendy Cornell, Marek Kimmel, and Andrzej Swierniak. Wendy Cornell and Andrzej Swierniak are external reviewers (not members of the Biology Direct editorial board). Andrzej Swierniak was nominated by Marek Kimmel.
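
    The mechanics of "thinking ahead" can be sketched as receding-horizon planning: at each block, enumerate drug sequences over a lookahead horizon, apply the first drug of the best sequence, and repeat. The toy two-clone model below is illustrative only and does not reproduce the paper's simulations:

    ```python
    import itertools
    import numpy as np

    # Toy model: two subclones, two non-cross resistant drugs.  kill[d] gives the
    # per-block log10 kill of drug d on each clone (values invented).
    growth = np.array([0.5, 0.4])              # log10 growth per block
    kill = {"A": np.array([2.0, 0.0]),         # drug A only kills clone 0
            "B": np.array([0.0, 2.0])}         # drug B only kills clone 1

    def step(pop, drug):
        return np.maximum(pop + growth - kill[drug], 0.0)   # log10 cell counts

    def final_burden(pop, seq):
        for d in seq:
            pop = step(pop, d)
        return pop.sum()                       # total burden at end of sequence

    def plan(pop, horizon):
        """Exhaustive lookahead: pick the first drug of the best sequence."""
        best = min(itertools.product("AB", repeat=horizon),
                   key=lambda seq: final_burden(pop, seq))
        return best[0]

    pop0 = np.array([6.0, 1.0])                # clone 0 dominant at diagnosis
    for horizon, label in [(1, "single-step"), (5, "multi-step")]:
        p = pop0.copy()
        for _ in range(6):                     # six 'blocks' of therapy
            p = step(p, plan(p, horizon))
        print(label, "final clone burdens:", p.round(2))
    ```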

  6. Multipurpose silicon photonics signal processor core.

    PubMed

    Pérez, Daniel; Gasulla, Ivana; Crudgington, Lee; Thomson, David J; Khokhar, Ali Z; Li, Ke; Cao, Wei; Mashanovich, Goran Z; Capmany, José

    2017-09-21

    Integrated photonics changes the scaling laws of information and communication systems offering architectural choices that combine photonics with electronics to optimize performance, power, footprint, and cost. Application-specific photonic integrated circuits, where particular circuits/chips are designed to optimally perform particular functionalities, require a considerable number of design and fabrication iterations leading to long development times. A different approach inspired by electronic Field Programmable Gate Arrays is the programmable photonic processor, where a common hardware implemented by a two-dimensional photonic waveguide mesh realizes different functionalities through programming. Here, we report the demonstration of such reconfigurable waveguide mesh in silicon. We demonstrate over 20 different functionalities with a simple seven hexagonal cell structure, which can be applied to different fields including communications, chemical and biomedical sensing, signal processing, multiprocessor networks, and quantum information systems. Our work is an important step toward this paradigm. Integrated optical circuits today are typically designed for a few special functionalities and require complex design and development procedures. Here, the authors demonstrate a reconfigurable but simple silicon waveguide mesh with different functionalities.

  7. An improved robust buffer allocation method for the project scheduling problem

    NASA Astrophysics Data System (ADS)

    Ghoddousi, Parviz; Ansari, Ramin; Makui, Ahmad

    2017-04-01

    Unpredictable uncertainties cause delays and additional costs for projects. Often, when using traditional approaches, the optimizing procedure of the baseline project plan fails and leads to delays. In this study, a two-stage multi-objective buffer allocation approach is applied for robust project scheduling. In the first stage, some decisions are made on buffer sizes and allocation to the project activities. A set of Pareto-optimal robust schedules is designed using the meta-heuristic non-dominated sorting genetic algorithm (NSGA-II) based on the decisions made in the buffer allocation step. In the second stage, the Pareto solutions are evaluated in terms of the deviation from the initial start time and due dates. The proposed approach was implemented on a real dam construction project. The outcomes indicated that the obtained buffered schedule reduces the cost of disruptions by 17.7% compared with the baseline plan, with an increase of about 0.3% in the project completion time.
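
    At the heart of NSGA-II-style selection is the identification of non-dominated solutions. A minimal Pareto filter over (disruption cost, completion time) pairs, with invented schedule data:

    ```python
    def pareto_front(points):
        """Return the non-dominated subset, minimizing all objectives."""
        front = []
        for p in points:
            dominated = any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                            for q in points)
            if not dominated:
                front.append(p)
        return front

    # (disruption cost, completion time) of candidate buffered schedules (toy data)
    schedules = [(17.7, 100.3), (20.0, 100.0), (15.0, 103.0), (21.0, 104.0)]
    print(pareto_front(schedules))   # the last schedule is dominated and drops out
    ```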

  8. Application of Rosenbrock search technique to reduce the drilling cost of a well in Bai-Hassan oil field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aswad, Z.A.R.; Al-Hadad, S.M.S.

    1983-03-01

    The powerful Rosenbrock search technique, which optimizes both the search directions using the Gram-Schmidt procedure and the step size using the Fibonacci line search method, has been used to optimize the drilling program of an oil well drilled in Bai-Hassan oil field in Kirkuk, Iraq, using the two-dimensional drilling model of Galle and Woods. This model shows the effect of the two major controllable variables, weight on bit and rotary speed, on the drilling rate, while considering other controllable variables such as the mud properties, hydrostatic pressure, hydraulic design, and bit selection. The effect of tooth dullness on the drilling rate is also considered. Increasing the weight on the drill bit with a small increase or decrease in rotary speed resulted in a significant decrease in the drilling cost for most bit runs. It was found that a 48% reduction in this cost and a 97-hour savings in the total drilling time were possible under certain conditions.

  9. Genetic algorithm in the structural design of Cooke triplet lenses

    NASA Astrophysics Data System (ADS)

    Hazra, Lakshminarayan; Banerjee, Saswatee

    1999-08-01

    This paper is in tune with our efforts to develop a systematic method for multicomponent lens design. Our aim is to find a suitable starting point in the final configuration space, so that popular local search methods like damped least squares (DLS) may directly lead to a useful solution. For 'ab initio' design problems, a thin lens layout specifying the powers of the individual components and the intercomponent separations is worked out analytically. Requirements of central aberration targets for the individual components in order to satisfy the prespecified primary aberration targets for the overall system are then determined by nonlinear optimization. The next step involves structural design of the individual components by optimization techniques. This general method may be adapted for the design of triplets and their derivatives. However, for the thin lens design of a Cooke triplet composed of three airspaced singlets, the two steps of optimization mentioned above may be combined into a single optimization procedure. The optimum configuration for each of the singlets, catering to the required Gaussian specification and primary aberration targets for the Cooke triplet, is determined by an application of a genetic algorithm (GA). Our implementation of this algorithm is based on simulations of some complex tools of natural evolution, like selection, crossover and mutation. Our version of GA may or may not converge to a unique optimum, depending on some of the algorithm-specific parameter values. With our algorithm, practically useful solutions are always available, although convergence to a global optimum cannot be guaranteed. This is perfectly in keeping with our need to allow 'floating' of aberration targets at the subproblem level. Some numerical results dealing with our preliminary investigations on this problem are presented.
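
    A minimal GA skeleton with the three operators the authors name (selection, crossover, mutation) looks like the sketch below; the merit function is a stand-in for the thin-lens aberration targets, not an optical model:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def merit(x):
        # Stand-in for the lens merit function: distance from fictitious
        # aberration targets.  A real design would evaluate ray aberrations.
        return np.sum((x - np.array([0.2, -0.5, 0.8])) ** 2)

    pop = rng.uniform(-1.0, 1.0, size=(40, 3))       # 40 candidate configurations
    for generation in range(100):
        fitness = np.array([merit(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[:20]]      # selection: keep best half
        # crossover: each child mixes two random parents gene-by-gene
        idx = rng.integers(0, 20, size=(40, 2))
        mask = rng.random((40, 3)) < 0.5
        pop = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # mutation: perturb roughly 20% of genes with small Gaussian noise
        pop += rng.normal(0.0, 0.05, pop.shape) * (rng.random(pop.shape) < 0.2)

    best = min(pop, key=merit)
    print("best configuration:", best.round(3), "merit:", round(merit(best), 5))
    ```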

  10. Experimental investigation of the structural behavior of equine urethra.

    PubMed

    Natali, Arturo Nicola; Carniel, Emanuele Luigi; Frigo, Alessandro; Fontanella, Chiara Giulia; Rubini, Alessandro; Avital, Yochai; De Benedictis, Giulia Maria

    2017-04-01

    An integrated experimental and computational investigation was developed aiming to provide a methodology for characterizing the structural response of the urethral duct. The investigation provides information suitable for the comprehension of lower urinary tract mechanical functionality and the optimal design of prosthetic devices. Experimental activity entailed the execution of inflation tests performed on segments of horse penile urethras from both proximal and distal regions. Inflation tests were developed imposing different volumes. Each test was performed according to a two-step procedure. The tubular segment was inflated almost instantaneously during the first step, while volume was held constant for about 300 s to allow the development of relaxation processes during the second step. Tests performed on the same specimen were interspersed with 600 s of rest to allow the recovery of the specimen's mechanical condition. Results from the experimental activities were statistically analyzed and processed by means of a specific mechanical model. This computational model was developed with the purpose of interpreting the general pressure-volume-time response of biological tubular structures. The model includes parameters that interpret the elastic and viscous behavior of hollow structures, directly correlated with the results from the experimental activities. Post-processing of the experimental data provided information about the non-linear elastic and time-dependent behavior of the urethral duct. In detail, statistically representative pressure-volume and pressure relaxation curves were identified and summarized by structural parameters. Considering elastic properties, initial stiffness ranged between 0.677 ± 0.026 kPa and 0.262 ± 0.006 kPa moving from the proximal to the distal region of the penile urethra. Viscous parameters showed values typical of soft biological tissues: τ1 = 0.153 ± 0.018 s and τ2 = 17.458 ± 1.644 s for the proximal region, and τ1 = 0.201 ± 0.085 s and τ2 = 8.514 ± 1.379 s for the distal region. A general procedure for the mechanical characterization of the urethral duct has been provided. The proposed methodology allows identifying mechanical parameters that properly express the mechanical behavior of the biological tube. The approach is especially suitable for evaluating the influence of degenerative phenomena on lower urinary tract mechanical functionality. This information is essential for the optimal design of potential surgical procedures and devices. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A two-step parameter optimization algorithm for improving estimation of optical properties using spatial frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Hu, Dong; Lu, Renfu; Ying, Yibin

    2018-03-01

    This research was aimed at optimizing the inverse algorithm for estimating the optical absorption (μa) and reduced scattering (μs′) coefficients from spatial frequency domain diffuse reflectance. Studies were first conducted to determine the optimal frequency resolution and start and end frequencies in terms of the reciprocal of mean free path (1/mfp′). The results showed that the optimal frequency resolution increased with μs′ and remained stable when μs′ was larger than 2 mm-1. The optimal end frequency decreased from 0.3/mfp′ to 0.16/mfp′ with μs′ ranging from 0.4 mm-1 to 3 mm-1, while the optimal start frequency remained at 0 mm-1. A two-step parameter estimation method was proposed based on the optimized frequency parameters, which improved estimation accuracies by 37.5% and 9.8% for μa and μs′, respectively, compared with the conventional one-step method. Experimental validations with seven liquid optical phantoms showed that the optimized algorithm resulted in mean absolute errors of 15.4%, 7.6%, 5.0% for μa and 16.4%, 18.0%, 18.3% for μs′ at the wavelengths of 675 nm, 700 nm, and 715 nm, respectively. Hence, implementation of the optimized parameter estimation method should be considered in order to improve the measurement of optical properties of biological materials when using the spatial frequency domain imaging technique.
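
    The generic two-step pattern, a coarse search to seed a local least-squares refinement, can be sketched as follows; the `forward` function is a toy stand-in for the diffusion-approximation reflectance model, so all values are illustrative only:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    freqs = np.linspace(0.0, 0.2, 15)                 # spatial frequencies, 1/mm

    def forward(mua, musp, f):
        # Toy stand-in: reflectance falls with mua and frequency, rises with musp.
        return musp / (musp + 10.0 * mua + 25.0 * f * (mua + f))

    true = (0.01, 1.2)                                # (mua, musp) in mm-1
    data = forward(*true, freqs) + np.random.default_rng(4).normal(0, 1e-3, freqs.size)

    # Step 1: coarse grid search for a good starting point.
    grid_a = np.linspace(0.001, 0.05, 25)
    grid_s = np.linspace(0.4, 3.0, 25)
    x0 = min(((a, s) for a in grid_a for s in grid_s),
             key=lambda p: np.sum((forward(*p, freqs) - data) ** 2))

    # Step 2: local refinement from the grid optimum.
    fit = least_squares(lambda p: forward(p[0], p[1], freqs) - data, x0,
                        bounds=([1e-4, 0.1], [0.1, 5.0]))
    print("estimated (mua, musp):", fit.x.round(4))
    ```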

  12. A Two-Step Approach to Analyze Satisfaction Data

    ERIC Educational Resources Information Center

    Ferrari, Pier Alda; Pagani, Laura; Fiorio, Carlo V.

    2011-01-01

    In this paper a two-step procedure based on Nonlinear Principal Component Analysis (NLPCA) and Multilevel models (MLM) for the analysis of satisfaction data is proposed. The basic hypothesis is that observed ordinal variables describe different aspects of a latent continuous variable, which depends on covariates connected with individual and…

  13. Read Two Impress: An Intervention for Disfluent Readers

    ERIC Educational Resources Information Center

    Young, Chase; Rasinski, Timothy; Mohr, Kathleen A. J.

    2016-01-01

    The authors describe a research-based method to increase students' reading fluency. The method is called Read Two Impress, which is derived from the Neurological Impress Method and the method of repeated readings. The authors provide step-by-step procedures to effectively implement the reading fluency intervention. Previous research indicates that…

  14. Two-Step Semi-Microscale Preparation of a Cinnamate Ester Sunscreen Analog

    ERIC Educational Resources Information Center

    Stabile, Ryan G.; Dicks, Andrew P.

    2004-01-01

    A student procedure focusing on multistep sunscreen synthesis and spectroscopic analysis is reported. Given the current high-profile nature of skin cancer and media attention towards sunscreens, a two-step synthetic pathway towards an analog of a commercially available UV light blocker is designed.

  15. A novel two-step method for screening shade tolerant mutant plants via dwarfism

    USDA-ARS?s Scientific Manuscript database

    When subjected to shade, plants undergo rapid shoot elongation, which often makes them more prone to disease and mechanical damage. It has been reported that, in turfgrass, induced dwarfism can enhance shade tolerance. Here, we describe a two-step procedure for isolating shade tolerant mutants of ...

  16. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    This paper describes a fully integrated aerodynamic/dynamic optimization procedure for helicopter rotor blades. The procedure combines performance and dynamics analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuver; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case the objective function involves power required (in hover, forward flight, and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  17. Fully integrated aerodynamic/dynamic optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Lamarsh, William J., II; Adelman, Howard M.

    1992-01-01

    A fully integrated aerodynamic/dynamic optimization procedure is described for helicopter rotor blades. The procedure combines performance and dynamic analyses with a general purpose optimizer. The procedure minimizes a linear combination of power required (in hover, forward flight, and maneuver) and vibratory hub shear. The design variables include pretwist, taper initiation, taper ratio, root chord, blade stiffnesses, tuning masses, and tuning mass locations. Aerodynamic constraints consist of limits on power required in hover, forward flight and maneuvers; airfoil section stall; drag divergence Mach number; minimum tip chord; and trim. Dynamic constraints are on frequencies, minimum autorotational inertia, and maximum blade weight. The procedure is demonstrated for two cases. In the first case, the objective function involves power required (in hover, forward flight and maneuver) and dynamics. The second case involves only hover power and dynamics. The designs from the integrated procedure are compared with designs from a sequential optimization approach in which the blade is first optimized for performance and then for dynamics. In both cases, the integrated approach is superior.

  18. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis techniques are then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  19. Stationary-phase optimized selectivity liquid chromatography: development of a linear gradient prediction algorithm.

    PubMed

    De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-03-01

    Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.
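
    The SOS-LC principle is that retention on a serially coupled column is, to a good approximation, the length-weighted combination of retention on the individual segments. Assuming that linear mixing rule, a brute-force search over segment lengths might look like this (retention factors invented):

    ```python
    import itertools

    # Isocratic retention factors of three analytes on three stationary phases
    # (invented numbers).  Coupled-column retention is taken as the
    # length-weighted average of the single-phase values.
    k = {"C18":    [2.0, 2.3, 6.0],
         "phenyl": [1.5, 3.5, 4.0],
         "cyano":  [4.0, 1.2, 2.5]}

    def coupled_k(lengths):
        total = sum(lengths.values())
        return [sum((L / total) * k[phase][i] for phase, L in lengths.items())
                for i in range(3)]

    def min_separation(ks):
        ks = sorted(ks)
        return min(b - a for a, b in zip(ks, ks[1:]))   # crude selectivity proxy

    # Enumerate 10 cm columns built from 1 cm segments of the three phases.
    best = max((dict(zip(k, c)) for c in itertools.product(range(11), repeat=3)
                if sum(c) == 10),
               key=lambda seg: min_separation(coupled_k(seg)))
    print("best segment lengths (cm):", best)
    print("predicted retention factors:", [round(v, 2) for v in coupled_k(best)])
    ```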

  20. Individualizing drug dosage with longitudinal data.

    PubMed

    Zhu, Xiaolu; Qu, Annie

    2016-10-30

    We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Application of the stepwise focusing method to optimize the cost-effectiveness of genome-wide association studies with limited research budgets for genotyping and phenotyping.

    PubMed

    Ohashi, J; Clark, A G

    2005-05-01

    The recent cataloguing of a large number of SNPs enables us to perform genome-wide association studies for detecting common genetic variants associated with disease. Such studies, however, generally have limited research budgets for genotyping and phenotyping. It is therefore necessary to optimize the study design by determining the most cost-effective numbers of SNPs and individuals to analyze. In this report we applied the stepwise focusing method, with two-stage design, developed by Satagopan et al. (2002) and Saito & Kamatani (2002), to optimize the cost-effectiveness of a genome-wide direct association study using a transmission/disequilibrium test (TDT). The stepwise focusing method consists of two steps: a large number of SNPs are examined in the first focusing step, and then all the SNPs showing a significant P-value are tested again using a larger set of individuals in the second focusing step. In the framework of optimization, the numbers of SNPs and families and the significance levels in the first and second steps were regarded as variables to be considered. Our results showed that the stepwise focusing method achieves a distinct gain of power compared to a conventional method with the same research budget.

  2. Optimization of Aerospace Structure Subject to Damage Tolerance Criteria

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.

    1999-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers. It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages. A common method for topology optimization is that of compliance minimization which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this. SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
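
    A sketch of the SMW idea in numpy: factor the baseline stiffness once, then obtain each damaged-configuration solution from rank-r update formulas instead of refactoring. The matrices below are toy stand-ins for a real stiffness matrix:

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(5)
    n = 200
    K = rng.normal(size=(n, n)) + n * np.eye(n)   # baseline 'stiffness' (toy, well conditioned)
    f = rng.normal(size=n)                        # load vector
    lu = lu_factor(K)                             # factor the baseline ONCE

    # Local damage touches few DOFs: K_damaged = K + U C V with small rank r.
    r = 3
    U = np.zeros((n, r)); U[:r, :] = np.eye(r)
    V = U.T
    C = -0.5 * n * np.eye(r)                      # stiffness loss on 3 DOFs

    # Woodbury: (K + U C V)^-1 f = K^-1 f - K^-1 U (C^-1 + V K^-1 U)^-1 V K^-1 f
    Kinv_f = lu_solve(lu, f)
    Kinv_U = lu_solve(lu, U)
    small = np.linalg.inv(np.linalg.inv(C) + V @ Kinv_U)   # only an r x r system
    u_damaged = Kinv_f - Kinv_U @ (small @ (V @ Kinv_f))

    direct = np.linalg.solve(K + U @ C @ V, f)             # check against full solve
    print("max difference vs direct solve:", np.abs(u_damaged - direct).max())
    ```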

  3. 16 CFR 1610.6 - Test procedure.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

... 16 Commercial Practices 2 Test procedure. 1610.6 Section 1610.6... FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.6 Test procedure. The test procedure is divided into two... according to paragraph (b)(1) of this section. (a) Step 1—Testing in the original state. (1) Tests shall be...

  4. 16 CFR 1610.6 - Test procedure.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

... 16 Commercial Practices 2 Test procedure. 1610.6 Section 1610.6... FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.6 Test procedure. The test procedure is divided into two... according to paragraph (b)(1) of this section. (a) Step 1—Testing in the original state. (1) Tests shall be...

  5. 16 CFR 1610.6 - Test procedure.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

... 16 Commercial Practices 2 Test procedure. 1610.6 Section 1610.6... FLAMMABILITY OF CLOTHING TEXTILES The Standard § 1610.6 Test procedure. The test procedure is divided into two... according to paragraph (b)(1) of this section. (a) Step 1—Testing in the original state. (1) Tests shall be...

  6. Fully automated two-step assay for detection of metallothionein through magnetic isolation using functionalized γ-Fe2O3 particles.

    PubMed

    Merlos Rodrigo, Miguel Angel; Krejcova, Ludmila; Kudr, Jiri; Cernei, Natalia; Kopel, Pavel; Richtera, Lukas; Moulick, Amitava; Hynek, David; Adam, Vojtech; Stiborova, Marie; Eckschlager, Tomas; Heger, Zbynek; Zitka, Ondrej

    2016-12-15

    Metallothioneins (MTs) are involved in heavy metal detoxification in a wide range of living organisms. Currently, it is well known that MTs play substantial role in many pathophysiological processes, including carcinogenesis, and they can serve as diagnostic biomarkers. In order to increase the applicability of MT in cancer diagnostics, an easy-to-use and rapid method for its detection is required. Hence, the aim of this study was to develop a fully automated and high-throughput assay for the estimation of MT levels. Here, we report the optimal conditions for the isolation of MTs from rabbit liver and their characterization using MALDI-TOF MS. In addition, we described a two-step assay, which started with an isolation of the protein using functionalized paramagnetic particles and finished with their electrochemical analysis. The designed easy-to-use, cost-effective, error-free and fully automated procedure for the isolation of MT coupled with a simple analytical detection method can provide a prototype for the construction of a diagnostic instrument, which would be appropriate for the monitoring of carcinogenesis or MT-related chemoresistance of tumors. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. [Design method of convex master gratings for replicating flat-field concave gratings].

    PubMed

    Zhou, Qian; Li, Li-Feng

    2009-08-01

    Flat-field concave diffraction grating is the key device of a portable grating spectrometer with the advantage of integrating dispersion, focusing and flat-field in a single device. It directly determines the quality of a spectrometer. The most important two performances determining the quality of the spectrometer are spectral image quality and diffraction efficiency. The diffraction efficiency of a grating depends mainly on its groove shape. But it has long been a problem to get a uniform predetermined groove shape across the whole concave grating area, because the incident angle of the ion beam is restricted by the curvature of the concave substrate, and this severely limits the diffraction efficiency and restricts the application of concave gratings. The authors present a two-step method for designing convex gratings, which are made holographically with two exposure point sources placed behind a plano-convex transparent glass substrate, to solve this problem. The convex gratings are intended to be used as the master gratings for making aberration-corrected flat-field concave gratings. To achieve high spectral image quality for the replicated concave gratings, the refraction effect at the planar back surface and the extra optical path lengths through the substrate thickness experienced by the two divergent recording beams are considered during optimization. This two-step method combines the optical-path-length function method and the ZEMAX software to complete the optimization with a high success rate and high efficiency. In the first step, the optical-path-length function method is used without considering the refraction effect to get an approximate optimization result. In the second step, the approximate result of the first step is used as the initial value for ZEMAX to complete the optimization including the refraction effect. An example of design problem was considered. The simulation results of ZEMAX proved that the spectral image quality of a replicated concave grating is comparable with that of a directly recorded concave grating.

  8. Two-step reconstruction method using global optimization and conjugate gradient for ultrasound-guided diffuse optical tomography.

    PubMed

    Tavakoli, Behnoosh; Zhu, Quing

    2013-01-01

    Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with the global optimization method, genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrates that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than the benign cases, respectively.
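
    The two-step global-then-local pattern can be sketched with scipy, using differential evolution (an evolutionary global optimizer standing in for the paper's genetic algorithm) to seed a conjugate-gradient refinement; the objective is a toy multimodal misfit, not a DOT forward model:

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def misfit(x):
        # Toy multimodal misfit standing in for the DOT data-fit functional.
        return np.sum(x ** 2) + 2.0 * np.sin(5.0 * x).sum() ** 2

    bounds = [(-3.0, 3.0)] * 4

    # Step 1: global search (evolutionary algorithm) to escape local minima.
    coarse = differential_evolution(misfit, bounds, seed=6, maxiter=50)

    # Step 2: conjugate-gradient refinement from the global estimate.
    fine = minimize(misfit, coarse.x, method="CG")
    print("global stage:", round(coarse.fun, 4), "-> refined:", round(fine.fun, 6))
    ```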

  9. Optimized statistical parametric mapping procedure for NIRS data contaminated by motion artifacts: Neurometric analysis of body schema extension.

    PubMed

    Suzuki, Satoshi

    2017-09-01

    This study investigated the spatial distribution of brain activity on body schema (BS) modification induced by natural body motion using two versions of a hand-tracing task. In Task 1, participants traced Japanese Hiragana characters using the right forefinger, requiring no BS expansion. In Task 2, participants performed the tracing task with a long stick, requiring BS expansion. Spatial distribution was analyzed using general linear model (GLM)-based statistical parametric mapping of near-infrared spectroscopy data contaminated with motion artifacts caused by the hand-tracing task. Three methods were utilized in series to counter the artifacts, and optimal conditions and modifications were investigated: a model-free method (Step 1), a convolution matrix method (Step 2), and a boxcar-function-based Gaussian convolution method (Step 3). The results revealed four methodological findings: (1) Deoxyhemoglobin was suitable for the GLM because both Akaike information criterion and the variance against the averaged hemodynamic response function were smaller than for other signals, (2) a high-pass filter with a cutoff frequency of .014 Hz was effective, (3) the hemodynamic response function computed from a Gaussian kernel function and its first- and second-derivative terms should be included in the GLM model, and (4) correction of non-autocorrelation and use of effective degrees of freedom were critical. Investigating z-maps computed according to these guidelines revealed that contiguous areas of BA7-BA40-BA21 in the right hemisphere became significantly activated ([Formula: see text], [Formula: see text], and [Formula: see text], respectively) during BS modification while performing the hand-tracing task.

  10. Robust learning for optimal treatment decision with NP-dimensionality

    PubMed Central

    Shi, Chengchun; Song, Rui; Lu, Wenbin

    2016-01-01

    In order to identify important variables that are involved in making optimal treatment decisions, Lu, Zhang and Zeng (2013) proposed a penalized least squares regression framework for a fixed number of predictors, which is robust against the misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high dimensional data sets, for example, when the dimension of predictors is of non-polynomial (NP) order in the sample size; (ii) both the propensity score and conditional mean models need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both steps, penalized regressions are employed with the non-concave penalty function, where the conditional mean model of the response given predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717

  11. Determination of Ignitable Liquids in Fire Debris: Direct Analysis by Electronic Nose

    PubMed Central

    Ferreiro-González, Marta; Barbero, Gerardo F.; Palma, Miguel; Ayuso, Jesús; Álvarez, José A.; Barroso, Carmelo G.

    2016-01-01

    Arsonists usually use an accelerant in order to start or accelerate a fire. The most widely used analytical method to determine the presence of such accelerants consists of a pre-concentration step of the ignitable liquid residues followed by chromatographic analysis. A rapid analytical method based on headspace-mass spectrometry electronic nose (E-Nose) has been developed for the analysis of Ignitable Liquid Residues (ILRs). The working conditions for the E-Nose analytical procedure were optimized by studying different fire debris samples. The optimized experimental variables were related to headspace generation, specifically, incubation temperature and incubation time. The optimal conditions were 115 °C and 10 min for these two parameters. Chemometric tools such as hierarchical cluster analysis (HCA) and linear discriminant analysis (LDA) were applied to the MS data (45–200 m/z) to establish the most suitable spectroscopic signals for the discrimination of several ignitable liquids. The optimized method was applied to a set of fire debris samples. In order to simulate post-burn samples several ignitable liquids (gasoline, diesel, citronella, kerosene, paraffin) were used to ignite different substrates (wood, cotton, cork, paper and paperboard). A full discrimination was obtained on using discriminant analysis. This method reported here can be considered as a green technique for fire debris analyses. PMID:27187407

  12. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
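
    The Kreisselmeier-Steinhauser function mentioned above aggregates several objectives (or constraints) into one smooth, conservative envelope of their maximum. A short sketch of the standard shifted form (objective values invented):

    ```python
    import numpy as np

    def ks(values, rho=50.0):
        """Kreisselmeier-Steinhauser envelope of several objective values.

        KS(g; rho) = g_max + (1/rho) * log(sum(exp(rho * (g_i - g_max))))
        A smooth, conservative approximation of max(g_i); larger rho tightens
        it.  The shifted form avoids overflow in exp().
        """
        g = np.asarray(values, dtype=float)
        g_max = g.max()
        return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

    # Normalized drag, sonic-boom loudness and structural-weight objectives (toy)
    objectives = [0.82, 0.95, 0.78]
    print("max:", max(objectives), "KS:", round(ks(objectives), 4))
    ```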

  13. Talbot-Lau x-ray deflectometry phase-retrieval methods for electron density diagnostics in high-energy density experiments.

    PubMed

    Valdivia, Maria Pia; Stutman, Dan; Stoeckl, Christian; Mileham, Chad; Begishev, Ildar A; Bromage, Jake; Regan, Sean P

    2018-01-10

    Talbot-Lau x-ray interferometry uses incoherent x-ray sources to measure refractive-index changes in matter. These measurements can provide accurate electron density mapping through phase retrieval. An adaptation of the interferometer has been developed to meet the specific requirements of high-energy density experiments. This adaptation is known as a moiré deflectometer, which allows for single-shot capabilities in the form of interferometric fringe patterns. The moiré x-ray deflectometry technique requires a set of object and reference images to provide electron density maps, which can be costly in the high-energy density environment. In particular, synthetic reference phase images obtained ex situ through a phase-scan procedure can provide a feasible solution. To test this procedure, an object phase map was retrieved from a single-shot moiré image obtained from a plasma-produced x-ray source. A reference phase map was then obtained from phase-stepping measurements using a continuous x-ray tube source in a small laboratory setting. The two phase maps were used to retrieve an electron density map. A comparison of the moiré and phase-stepping phase-retrieval methods was performed to evaluate single-exposure plasma electron density mapping for high-energy density and other transient plasma experiments. It was found that a combination of phase-retrieval methods can deliver accurate refraction angle mapping. Once x-ray backlighter quality is optimized, the ex situ method is expected to deliver electron density mapping with improved resolution. The steps necessary for improved diagnostic performance are discussed.
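
    Single-shot fringe analysis of the moiré kind can be illustrated by Fourier demodulation: isolate the fringe carrier's sideband in the spectrum, shift it to baseband, and take its argument as the phase map. The synthetic fringe image and assumed known carrier frequency below are illustrative, not the authors' implementation.

    ```python
    # Sketch: phase retrieval from a single fringe image by Fourier filtering.
    import numpy as np

    ny, nx, f0 = 8, 256, 0.125                 # 32 carrier cycles per row
    x = np.arange(nx)
    phi_true = 0.5 * np.exp(-((x - nx / 2) / 40.0) ** 2)   # refraction phase
    img = np.tile(1 + np.cos(2 * np.pi * f0 * x + phi_true), (ny, 1))

    F = np.fft.fft(img, axis=1)
    freqs = np.fft.fftfreq(nx)
    F[:, np.abs(freqs - f0) > 0.06] = 0        # keep only the +f0 sideband
    analytic = np.fft.ifft(F, axis=1)
    phase = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))  # remove carrier
    print(float(np.abs(phase[0] - phi_true).max()))           # small residual
    ```

    Subtracting a reference phase map obtained the same way (or ex situ by phase stepping, as in the record above) isolates the refraction-induced phase from which the electron density map is computed.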

  14. Modular synthesis of a dual metal-dual semiconductor nano-heterostructure

    DOE PAGES

    Amirav, Lilac; Oba, Fadekemi; Aloni, Shaul; ...

    2015-04-29

    Reported is the design and modular synthesis of a dual metal-dual semiconductor heterostructure with control over the dimensions and placement of its individual components. Analogous to molecular synthesis, colloidal synthesis is now evolving into a series of sequential synthetic procedures with separately optimized steps. Here we detail the challenges and parameters that must be considered when assembling such a multicomponent nanoparticle, and their solutions.

  15. Optimization of the Production of Inactivated Clostridium novyi Type B Vaccine Using Computational Intelligence Techniques.

    PubMed

    Aquino, P L M; Fonseca, F S; Mozzer, O D; Giordano, R C; Sousa, R

    2016-07-01

    Clostridium novyi causes necrotic hepatitis in sheep and cattle, as well as gas gangrene. The microorganism is strictly anaerobic, fastidious, and difficult to cultivate at industrial scale. C. novyi type B produces alpha and beta toxins, with the alpha toxin being linked to the presence of specific bacteriophages. The main strategy to combat diseases caused by C. novyi is vaccination, employing vaccines produced with toxoids or with toxoids and bacterins. To identify culture medium components and concentrations that maximized cell density and alpha toxin production, a neuro-fuzzy algorithm was applied to predict the yields of the fermentation process for production of C. novyi type B, within a global search procedure using the simulated annealing technique. Maximizing cell density and toxin production is a multi-objective optimization problem and could be treated by a Pareto approach; nevertheless, the approach chosen here was a step-by-step one. The optimum values obtained with this approach were validated at laboratory scale, and the results were used to reload the data matrix for re-parameterization of the neuro-fuzzy model, which was implemented for a final optimization step with regard to alpha toxin productivity. With this methodology, a threefold increase in alpha toxin production was achieved.
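
    The global search layer can be sketched independently of the neuro-fuzzy model: simulated annealing perturbs the medium composition, accepts worse candidates with a temperature-dependent probability, and gradually cools. The toy two-variable yield function below stands in for the trained model; all numbers are illustrative.

    ```python
    # Minimal simulated-annealing search over two medium-composition variables.
    import math, random

    def predicted_yield(x, y):            # stand-in for the neuro-fuzzy model
        return -((x - 3.0) ** 2 + (y - 1.5) ** 2)

    def anneal(steps=5000, t0=1.0, cooling=0.999):
        x, y = random.uniform(0, 10), random.uniform(0, 10)
        cur = predicted_yield(x, y)
        best, t = (x, y, cur), t0
        for _ in range(steps):
            cx, cy = x + random.gauss(0, 0.3), y + random.gauss(0, 0.3)
            cand = predicted_yield(cx, cy)
            # accept improvements always, worse moves with Boltzmann probability
            if cand > cur or random.random() < math.exp((cand - cur) / t):
                x, y, cur = cx, cy, cand
                if cur > best[2]:
                    best = (x, y, cur)
            t *= cooling
        return best

    print(anneal())   # should approach the optimum at (3.0, 1.5)
    ```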

  16. Two-step purification method of vitellogenin from three teleost fish species: rainbow trout (Oncorhynchus mykiss), gudgeon (Gobio gobio) and chub (Leuciscus cephalus).

    PubMed

    Brion, F; Rogerieux, F; Noury, P; Migeon, B; Flammarion, P; Thybaud, E; Porcher, J M

    2000-01-14

    A two-step purification protocol was developed to purify rainbow trout (Oncorhynchus mykiss) vitellogenin (Vtg) and was successfully applied to Vtg of chub (Leuciscus cephalus) and gudgeon (Gobio gobio). Capture and intermediate purification were performed by anion-exchange chromatography on a Resource Q column, and a polishing step was performed by gel permeation chromatography on a Superdex 200 column. The method is a rapid two-step purification procedure that gave a pure Vtg solution, as assessed by silver-stained electrophoresis and immunochemical characterisation.

  17. Opening wedge and anatomic-specific plates in foot and ankle applications.

    PubMed

    Kluesner, Andrew J; Morris, Jason B

    2011-08-01

    As surgeons continually push to improve techniques and outcomes, anatomic-specific and procedure-specific fixation options are becoming increasingly available. The unique size, shape, and function of the foot provide an ideal framework for the use of anatomic-specific plates. These distinctive plate characteristics range from anatomic contouring and screw placements to incorporated step-offs and wedges. By optimizing support, compression, and stabilization, patients may return to weight bearing and activity sooner, improving outcomes. This article discusses anatomic-specific plates and their use in forefoot and rearfoot surgical procedures. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Determination of endocrine-disrupting compounds in water samples by magnetic nanoparticle-assisted dispersive liquid-liquid microextraction combined with gas chromatography-tandem mass spectrometry.

    PubMed

    Pérez, Rosa Ana; Albero, Beatriz; Tadeo, José Luis; Sánchez-Brunete, Consuelo

    2016-11-01

    A rapid extraction procedure is presented for the determination of five endocrine-disrupting compounds, estrone, ethinylestradiol, bisphenol A, triclosan, and 2-ethylhexyl salicylate, in water samples. The analysis involves a two-step extraction procedure that combines dispersive liquid-liquid microextraction (DLLME) with dispersive micro-solid-phase extraction (D-μ-SPE) using magnetic nanoparticles, followed by in situ derivatization in the injection port of a gas chromatograph coupled to triple quadrupole mass spectrometry. The use of uncoated or oleate-coated Fe3O4 nanoparticles as sorbent in the extraction process was evaluated and compared. The main parameters involved in the extraction process were optimized by applying experimental designs. Uncoated Fe3O4 nanoparticles were selected to simplify the procedure and make it more cost-effective. DLLME was carried out at pH 3 for 2 min, followed by the addition of the nanoparticles for D-μ-SPE with an extraction time of 1 min. Analysis of spiked water samples from different sources gave satisfactory recoveries for all the compounds, with detection limits ranging from 7 to 180 ng L⁻¹. Finally, the procedure was applied to tap, well, and river water. Graphical abstract: Diagram of the extraction method using magnetic nanoparticles (MNPs).

  19. Two-step purification of His-tagged Nef protein in native condition using heparin and immobilized metal ion affinity chromatographies.

    PubMed

    Finzi, Andrés; Cloutier, Jonathan; Cohen, Eric A

    2003-07-01

    The Nef protein encoded by human immunodeficiency virus type 1 (HIV-1) has been shown to be an important factor in the progression of viral growth and pathogenesis both in vitro and in vivo. The lack of a simple procedure to purify Nef in its native conformation has limited molecular studies on Nef function. A two-step procedure that includes heparin and immobilized metal ion affinity chromatographies (IMACs) was developed to purify His-tagged Nef (His6-Nef) expressed in bacteria under native conditions. During the elaboration of this purification procedure, we identified two contaminating bacterial proteins, SlyD and GCHI, that migrate closely on SDS-PAGE and co-eluted with His6-Nef in IMAC under denaturing conditions, and we developed purification steps to eliminate these contaminants under native conditions. Overall, this study describes a protocol that allows rapid purification of His6-Nef protein expressed in bacteria under native conditions and that removes metal affinity resin-binding bacterial proteins that can contaminate recombinant His-tagged protein preparations.

  20. Bed usage in a Dublin teaching hospital: a prospective audit.

    PubMed

    John, A; Breen, D P; Ghafar, Aabdul; Olphert, T; Burke, C M

    2004-01-01

    We prospectively audited inpatient bed use in our hospital for the first three months of this year. While 70% (mean age 54 +/- 20.8 years) of our patients went home on the day they were medically discharged, 30% (mean age 70.3 +/- 18.3 years) remained in the hospital awaiting step-down facilities. The total of 486 bed days occupied by overstaying patients would, if available, have allowed treatment of 54% more patients without any increase in the hospital complement of beds, preventing the cancellation of elective procedures and preventing patients from remaining on trolleys overnight. These prospective data emphasise (1) a highly inefficient use of acute hospital beds; (2) the need for step-down facilities; (3) that efficient use of existing hospital beds is the highest priority both for optimal patient care and optimal use of expensive hospital resources; and (4) that efficient use of existing facilities should be achieved before the construction of additional facilities.

  1. Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover

    NASA Technical Reports Server (NTRS)

    Dangelo, K. R.

    1974-01-01

    A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data to improve the accuracy of the model's gradient. Least-squares approximation is used to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included, which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process, which includes the acquisition of data points, the two-step modeling process and the error analysis. Finally, to illustrate the procedure, a numerical example is included.
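
    The core of such a fit can be written as one overdetermined linear system in which height and gradient measurements appear as separate rows. A minimal sketch, assuming a quadratic surface model and synthetic noisy data (the actual model form and data in the report may differ):

    ```python
    # Fit z(x,y) = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2 from heights
    # AND gradients (dz/dx, dz/dy) stacked into one least-squares system.
    import numpy as np

    rng = np.random.default_rng(1)
    pts = rng.uniform(-1, 1, (30, 2))
    x, y = pts[:, 0], pts[:, 1]
    true = np.array([0.2, 0.5, -0.3, 0.8, 0.1, -0.6])

    def rows_h(x, y):    # height rows: z
        return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

    def rows_gx(x, y):   # gradient rows: dz/dx
        return np.column_stack([np.zeros_like(x), np.ones_like(x),
                                np.zeros_like(x), 2*x, y, np.zeros_like(x)])

    def rows_gy(x, y):   # gradient rows: dz/dy
        return np.column_stack([np.zeros_like(x), np.zeros_like(x),
                                np.ones_like(x), np.zeros_like(x), x, 2*y])

    A = np.vstack([rows_h(x, y), rows_gx(x, y), rows_gy(x, y)])
    b = A @ true + rng.normal(0, 0.01, A.shape[0])   # noisy z, dz/dx, dz/dy
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.round(coef, 2))                          # close to `true`
    ```

    Because the gradient rows constrain the surface slopes directly, the fitted model's gradient is more accurate than a fit to heights alone, which is the point of the two-step procedure.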

  2. Optimized enrichment for the detection of Escherichia coli O26 in French raw milk cheeses.

    PubMed

    Savoye, F; Rozand, C; Bouvier, M; Gleizal, A; Thevenot, D

    2011-06-01

    Our main objective was to optimize the enrichment of Escherichia coli O26 in raw milk cheeses for its subsequent detection with a new automated immunological method. Ten enrichment broths were tested for the detection of E. coli O26. Two categories of experimentally inoculated raw milk cheeses, semi-hard uncooked cheese and 'Camembert' type cheese, were initially used to investigate the relative efficacy of the different enrichments. The enrichments that were considered optimal for the growth of E. coli O26 in these cheeses were then challenged with other types of raw milk cheeses. Buffered peptone water supplemented with cefixime-tellurite and acriflavin was shown to optimize the growth of E. coli O26 artificially inoculated into the cheeses tested. Despite the low inoculum level (1-10 CFU per 25 g) in the cheeses, E. coli O26 counts reached at least 5 × 10⁴ CFU mL⁻¹ after 24 h of incubation at 41.5 °C in this medium. All the experimentally inoculated cheeses were found positive by the immunological method in the selected enrichment broth. Optimized E. coli O26 enrichment and rapid detection constitute the first steps of a complete procedure that could be used routinely to detect E. coli O26 in raw milk cheeses. © 2011 The Authors. Letters in Applied Microbiology © 2011 The Society for Applied Microbiology.

  3. Parameter Optimization for Feature and Hit Generation in a General Unknown Screening Method-Proof of Concept Study Using a Design of Experiment Approach for a High Resolution Mass Spectrometry Procedure after Data Independent Acquisition.

    PubMed

    Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas

    2018-03-06

    High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated as crucial or noncrucial. In a second step, the crucial parameters are optimized. The aim in this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (from 12.3 to 1.1% and from 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (from 68.2 to 86.4% and from 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
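
    The screening half of a DoE study can be illustrated with a two-level full factorial design: each parameter is coded -1/+1, every combination is run, and a parameter whose main effect is large relative to the noise is flagged as crucial. The factor names and the stand-in response below are hypothetical, not the study's software settings.

    ```python
    # Two-level full factorial screening of three hypothetical parameters.
    import itertools
    import numpy as np

    factors = ["mass_tol", "min_intensity", "rt_window"]   # hypothetical names

    def run_screening(levels):          # stand-in for the real data pipeline
        a, b, c = levels
        return 100 - 20 * a + 5 * b - 2 * c + np.random.normal(0, 1)

    design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 runs
    response = np.array([run_screening(row) for row in design])

    for i, name in enumerate(factors):
        hi = response[design[:, i] == 1].mean()
        lo = response[design[:, i] == -1].mean()
        print(f"{name:14s} main effect: {hi - lo:+.1f}")   # ~ -40, +10, -4
    ```

    Factors with main effects large relative to the replicate noise would be carried into the second, optimization step.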

  4. TEACH-M: A pilot study evaluating an instructional sequence for persons with impaired memory and executive functions.

    PubMed

    Ehlhardt, L A; Sohlberg, M M; Glang, A; Albin, R

    2005-08-10

    The purpose of this pilot study was to evaluate an instructional package that facilitates learning and retention of multi-step procedures for persons with severe memory and executive function impairments resulting from traumatic brain injury. The study used a multiple baseline across participants design. Four participants, two males and two females, ranging in age from 36-58 years, were taught a 7-step e-mail task. The instructional package (TEACH-M) was the experimental intervention and the number of correct e-mail steps learned was the dependent variable. Treatment effects were replicated across the four participants and maintained at 30 days post-treatment. Generalization and social validity data further supported the treatment programme. The results suggest that individuals with severe cognitive impairments are capable of learning new skills. Directions for future research include application of the instructional package to other multi-step procedures.

  5. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. An LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing four helium nozzles: two at Mach 15, one at Mach 12, and one at Mach 18. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles under different constraints, the first for a fixed length and exit diameter and the second for a fixed length and throat diameter. The computed flow field for the Mach 15 least-squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with that of the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and the uniform core region.

  6. Optimization of experimental conditions for the monitoring of nucleation and growth of racemic Diprophylline from the supercooled melt

    NASA Astrophysics Data System (ADS)

    Lemercier, Aurélien; Viel, Quentin; Brandel, Clément; Cartigny, Yohann; Dargent, Eric; Petit, Samuel; Coquerel, Gérard

    2017-08-01

    Since more and more pharmaceutical substances are developed as amorphous forms, it is now of major relevance to gain insight into the nucleation and growth mechanisms in supercooled melts (SCM). A step-by-step approach to recrystallization from a SCM is presented here, designed to elucidate the impact of various experimental parameters. Using the bronchodilator agent Diprophylline (DPL) as a model compound, it is shown that optimal conditions for informative observations of the crystallization behaviour of supercooled racemic DPL require placing samples between two cover slides with a maximum sample thickness of 20 μm, and monitoring recrystallization during an annealing step of 30 min at 70 °C, i.e. about 33 °C above the glass transition temperature. Under these optimized conditions, it could be established that DPL crystallization proceeds in two steps: spontaneous nucleation and growth of large, well-faceted particles of a new crystal form (primary crystals: PC), and subsequent crystallization of a previously known form (RII) that develops from specific surfaces of PC. The formation of PC particles therefore constitutes the key step of the crystallization events and is shown to be favoured by the presence of at least 2.33 wt% of the major chemical impurity, theophylline.

  7. Computer object segmentation by nonlinear image enhancement, multidimensional clustering, and geometrically constrained contour optimization

    NASA Astrophysics Data System (ADS)

    Bruynooghe, Michel M.

    1998-04-01

    In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has an attractive theoretical maximal complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor with an associated vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.

  8. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    NASA Astrophysics Data System (ADS)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-01

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  9. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory.

    PubMed

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M N; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes-in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  10. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE PAGES

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; ...

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  11. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
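
    The extended-Lagrangian idea in these records can be sketched compactly: instead of extrapolating previous converged solutions (which breaks time-reversal symmetry), an auxiliary guess is propagated by a time-reversible integrator and harmonically pulled toward the converged solution. The toy self-consistency problem below (an induced dipole satisfying mu = alpha*(E + k*mu)) and all parameter values are illustrative assumptions, not the AMOEBA or Onetep implementations.

    ```python
    # Time-reversible propagation of the SCF initial guess p:
    #     p[n+1] = 2 p[n] - p[n-1] + w2dt2 * (q[n] - p[n]),
    # where q[n] is the converged self-consistent solution at step n.
    import numpy as np

    alpha, k, w2dt2 = 0.8, 0.5, 1.0      # toy polarizability, coupling, (w*dt)^2

    def scf(E, guess, tol=1e-10):
        """Fixed-point iteration mu <- alpha*(E + k*mu); returns (mu, iters)."""
        mu, n = guess, 0
        while True:
            new = alpha * (E + k * mu)
            n += 1
            if abs(new - mu) < tol:
                return new, n
            mu = new

    E = lambda t: np.sin(0.1 * t)        # slowly varying external field
    p_prev = p = scf(E(0.0), 0.0)[0]
    total = 0
    for step in range(1, 200):
        q, iters = scf(E(float(step)), guess=p)   # warm-started SCF solve
        total += iters
        # reversible auxiliary update; no history-dependent extrapolation
        p, p_prev = 2 * p - p_prev + w2dt2 * (q - p), p
    print("mean SCF iterations per step:", total / 199)
    ```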

  12. Comparison of effects of dry versus wet swallowing on Eustachian tube function via a nine-step inflation/deflation test.

    PubMed

    Adali, M Kemal; Uzun, Cem

    2005-09-01

    The aim of the present study was to evaluate the effect of swallowing type (dry versus wet) on the outcome of a nine-step inflation/deflation tympanometric Eustachian tube function (ETF) test in healthy adults. Fourteen normal healthy volunteers, between 19 and 28 years of age, were included in the study. The nine-step test was performed in two different test procedures: (1) test with dry swallows (dry test procedure) and (2) test with liquid swallows (wet test procedure). If the equilibration of middle-ear (ME) pressure was successful in all the steps of the nine-step test, ETF was considered 'Good'. Otherwise, the test was considered 'Poor', and the test was repeated at a second session. In the dry test procedure, ETF was 'Good' in 21 ears at the first session and in 24 ears after the second session (p > 0.05). However, in the wet test procedure, ETF was 'Good' in 13 ears at the first session and in 21 ears after the second session (p < 0.05). At the first session, ETF was 'Good' in 21 and 13 ears in the dry and wet test procedures, respectively. The difference was statistically significant (p < 0.05). However, after the second session, the overall number of ears with 'Good' tubal function was almost the same in both test procedures (24 ears in the dry test procedure versus 21 ears in the wet test procedure; p > 0.05). Dry swallowing seems to be more effective for the equilibration of ME pressure. Thus, a single-session evaluation of ETF may be sufficient for the dry test procedure of the nine-step test. Swallowing with water may be easier for subjects, but a repetition of the test at a second session may be necessary when the test result is 'Poor'.

  13. Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which involves two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market using stochastic optimization. Historical data on day-ahead hourly electric power consumption are used to provide forecasts; the forecasting error is represented by a chance constraint and formulated into a deterministic form using a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.
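
    The chance-constraint step has a compact closed form in the Gaussian case: a requirement P(purchase >= demand) >= 1 - eps becomes the deterministic condition purchase >= mu + Phi^{-1}(1 - eps) * sigma; a GMM generalizes this by inverting the mixture CDF numerically. The numbers below are hypothetical:

    ```python
    # Deterministic equivalent of a Gaussian chance constraint.
    from scipy.stats import norm

    mu, sigma, eps = 100.0, 8.0, 0.05           # forecast mean/std (MWh), risk
    purchase = mu + norm.ppf(1 - eps) * sigma   # Phi^{-1}(0.95) ~ 1.645
    print(f"buy at least {purchase:.1f} MWh")   # ~113.2 for these numbers
    ```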

  14. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    PubMed

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses, and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H]+ or [M - H]-) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
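
    The flavor of expressing chemical rules as linear constraints can be shown with a toy mixed-integer program: choose integer element counts whose monoisotopic mass matches a target within a tolerance, minimizing the absolute deviation. This uses the PuLP modeling library and is only a schematic analogue of RAMSI, whose actual constraint set (adducts, fragments, multimers across related ions) is much richer.

    ```python
    # Toy MILP: integer element counts matching a target monoisotopic mass.
    import pulp

    masses = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}
    target, tol = 180.06339, 0.005                # e.g., a hexose; illustrative

    prob = pulp.LpProblem("formula", pulp.LpMinimize)
    n = {e: pulp.LpVariable(e, lowBound=0, upBound=40, cat="Integer")
         for e in masses}
    dev = pulp.LpVariable("dev", lowBound=0)
    total = pulp.lpSum(n[e] * m for e, m in masses.items())

    prob += dev                                    # objective: |total - target|
    prob += total - target <= dev
    prob += target - total <= dev
    prob += dev <= tol
    prob += 2 * n["C"] + 2 + n["N"] - n["H"] >= 0  # simple valence-style rule

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({e: int(n[e].value()) for e in masses}, "deviation:", dev.value())
    ```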

  15. Protocol for vital dye staining of corneal endothelial cells.

    PubMed

    Park, Sunju; Fong, Alan G; Cho, Hyung; Zhang, Cheng; Gritz, David C; Mian, Gibran; Herzlich, Alexandra A; Gore, Patrick; Morganti, Ashley; Chuck, Roy S

    2012-12-01

    To describe a step-by-step methodology to establish a reproducible staining protocol for the evaluation of human corneal endothelial cells. Four procedures were performed to determine the best protocol. (1) To determine the optimal trypan blue staining method, goat corneas were stained with 4 dilutions of trypan blue (0.4%, 0.2%, 0.1%, and 0.05%) and 1% alizarin red. (2) To determine the optimal alizarin red staining method, goat corneas were stained with 2 dilutions of alizarin red (1% and 0.5%) and 0.2% trypan blue. (3) To ensure that trypan blue truly stains damaged cells, goat corneas were exposed to either 3% hydrogen peroxide or to balanced salt solution, and then stained with 0.2% trypan blue and 0.5% alizarin red. (4) Finally, fresh human corneal buttons were examined; 1 group was stained with 0.2% trypan blue and another group with 0.4% trypan blue. For the 4 procedures performed, the results are as follows: (1) trypan blue staining was not observed in any of the normal corneal samples; (2) 0.5% alizarin red demonstrated sharper cell borders than 1% alizarin red; (3) positive trypan blue staining was observed in the hydrogen peroxide exposed tissue in damaged areas; (4) 0.4% trypan blue showed more distinct positive staining than 0.2% trypan blue. We were able to determine the optimal vital dye staining conditions for human corneal endothelial cells using 0.4% trypan blue and 0.5% alizarin red.

  16. An optimal open/closed-loop control method with application to a pre-stressed thin duralumin plate

    NASA Astrophysics Data System (ADS)

    Nadimpalli, Sruthi Raju

    The excessive vibrations of a pre-stressed duralumin plate, suppressed by a combination of open-loop and closed-loop controls, also known as open/closed-loop control, is studied in this thesis. The two primary steps involved in this process are: Step (I) with an assumption that the closed-loop control law is proportional, obtain the optimal open-loop control by direct minimization of the performance measure consisting of energy at terminal time and a penalty on open-loop control force via calculus of variations. If the performance measure also involves a penalty on closed-loop control effort then a Fourier based method is utilized. Step (II) the energy at terminal time is minimized numerically to obtain optimal values of feedback gains. The optimal closed-loop control gains obtained are used to describe the displacement and the velocity of open-loop, closed-loop and open/closed-loop controlled duralumin plate.

  17. The Use of Lean Six Sigma Methodology in Increasing Capacity of a Chemical Production Facility at DSM.

    PubMed

    Meeuwse, Marco

    2018-03-30

    Lean Six Sigma is an improvement method combining Lean, which focuses on removing 'waste' from a process, with Six Sigma, a data-driven approach that makes use of statistical tools. Traditionally it is used to improve the quality of products (reducing defects) or processes (reducing variability). However, it can also be used as a tool to increase the productivity or capacity of a production plant. The Lean Six Sigma methodology is therefore an important pillar of continuous improvement within DSM. In the example shown here, a multistep batch process is improved by analyzing the duration of the relevant process steps and optimizing the procedures. Process steps were performed in parallel instead of sequentially, and some steps were shortened. The variability was reduced, allowing tighter planning and thereby reducing waiting times. Without any investment in new equipment or technical modifications, the productivity of the plant was improved by more than 20%, solely by changing procedures and the programming of the process control system.

  18. A simple, rapid and green ultrasound assisted and ionic liquid dispersive microextraction procedure for the determination of tin in foods employing ETAAS.

    PubMed

    Tuzen, Mustafa; Uluozlu, Ozgur Dogan; Mendil, Durali; Soylak, Mustafa; Machado, Luana O R; Dos Santos, Walter N L; Ferreira, Sergio L C

    2018-04-15

    This paper proposes a simple, rapid and green ultrasound-assisted and ionic liquid dispersive microextraction procedure using pyrocatechol violet (PV) as complexing reagent and 1-hexyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imide [C6MIM][Tf2N] as ionic liquid for the determination of tin by electrothermal atomic absorption spectrometry (ETAAS). The optimization step was performed using a two-level full factorial design involving the following factors: pH of the working medium, amounts of reagents, ionic liquid volume and extraction time; the chemometric response was tin recovery. The procedure allowed the determination of tin with limits of detection and quantification of 3.4 and 11.3 ng L⁻¹, respectively. The relative standard deviation was 4.5% for a tin solution of 0.50 µg L⁻¹. The method was validated by analysis of a rice flour certified reference material. The method was applied to the quantification of tin in several food samples. The concentrations found ranged from 0.10 to 1.50 µg g⁻¹. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    NASA Astrophysics Data System (ADS)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra, a possible first preprocessing step is a phase correction, which is applied to the Fourier-transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially when series of high-resolution spectra are considered, automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective (Pareto) optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero-baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
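
    The Whittaker smoother at the heart of the baseline model minimizes ||y - z||^2 + lambda ||D2 z||^2 over the baseline z, which reduces to one sparse linear solve, (I + lambda D2'D2) z = y. A minimal sketch on a synthetic spectrum; the paper's modified smoother additionally handles peak regions, e.g. via the zero-baseline detection mentioned above.

    ```python
    # Plain Whittaker smoother: penalized least squares with a second-
    # difference roughness penalty, solved as one sparse linear system.
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def whittaker(y, lam=1e5):
        """Solve (I + lam * D2'D2) z = y for the smooth trend z."""
        m = len(y)
        D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(m - 2, m))
        return spsolve((sparse.eye(m) + lam * (D.T @ D)).tocsc(), y)

    x = np.linspace(0, 10, 500)
    baseline = 0.1 * x + 0.5 * np.sin(0.3 * x)
    y = baseline + np.exp(-((x - 5) / 0.05) ** 2)    # narrow NMR-like peak
    z = whittaker(y)
    print(np.abs(z - baseline).max())  # small away from the peak; a weighted
                                       # (asymmetric) variant ignores peaks fully
    ```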

  20. An on-line pre-concentration system for determination of cadmium in drinking water using FAAS.

    PubMed

    dos Santos, Walter N L; Costa, Jorge L O; Araujo, Rennan G O; de Jesus, Djane S; Costa, Antônio C S

    2006-10-11

    In the present paper, a minicolumn of polyurethane foam loaded with 4-(2-pyridylazo)-resorcinol (PAR) is proposed as a pre-concentration system for cadmium determination in drinking water samples by flame atomic absorption spectrometry. The optimization step was performed using a two-level full factorial design and a Doehlert matrix, involving the variables: sampling flow rate, eluent concentration, buffer concentration and pH. Using the experimental conditions established in the optimization step (pH 8.2, sampling flow rate 8.5 mL min⁻¹, buffer concentration 0.05 mol L⁻¹ and eluent concentration 1.0 mol L⁻¹), this system allows the determination of cadmium with a detection limit (3σ/S) of 20.0 ng L⁻¹ and a quantification limit (10σ/S) of 64 ng L⁻¹, precision expressed as relative standard deviation (R.S.D.) of 5.0 and 4.7% for cadmium concentrations of 5.0 and 40.0 µg L⁻¹, respectively, and a pre-concentration factor of 158 for a sample volume of 20.0 mL. The accuracy was confirmed by cadmium determination in the standard reference material NIST SRM 1643d, trace elements in natural water. This procedure was applied to cadmium determination in drinking water samples collected in Salvador City, Bahia, Brazil. For the five samples analyzed, the concentrations found varied from 0.31 to 0.86 µg L⁻¹.

  1. Evaluation of standardized sample collection, packaging, and decontamination procedures to assess cross-contamination potential during Bacillus anthracis incident response operations

    PubMed Central

    Calfee, M. Worth; Tufts, Jenia; Meyer, Kathryn; McConkey, Katrina; Mickelsen, Leroy; Rose, Laura; Dowell, Chad; Delaney, Lisa; Weber, Angela; Morse, Stephen; Chaitram, Jasmine; Gray, Marshall

    2016-01-01

    Sample collection procedures and primary receptacle (sample container and bag) decontamination methods should prevent contaminant transfer between contaminated and non-contaminated surfaces and areas during bio-incident operations. Cross-contamination of personnel, equipment, or sample containers may result in the exfiltration of biological agent from the exclusion (hot) zone and have unintended negative consequences on response resources, activities and outcomes. The current study was designed to: (1) evaluate currently recommended sample collection and packaging procedures to identify procedural steps that may increase the likelihood of spore exfiltration or contaminant transfer; (2) evaluate the efficacy of currently recommended primary receptacle decontamination procedures; and (3) evaluate the efficacy of outer packaging decontamination methods. Wet- and dry-deposited fluorescent tracer powder was used in contaminant transfer tests to qualitatively evaluate the currently-recommended sample collection procedures. Bacillus atrophaeus spores, a surrogate for Bacillus anthracis, were used to evaluate the efficacy of spray- and wipe-based decontamination procedures. Both decontamination procedures were quantitatively evaluated on three types of sample packaging materials (corrugated fiberboard, polystyrene foam, and polyethylene plastic), and two contamination mechanisms (wet or dry inoculums). Contaminant transfer results suggested that size-appropriate gloves should be worn by personnel, templates should not be taped to or removed from surfaces, and primary receptacles should be selected carefully. The decontamination tests indicated that wipe-based decontamination procedures may be more effective than spray-based procedures; efficacy was not influenced by material type but was affected by the inoculation method. Incomplete surface decontamination was observed in all tests with dry inoculums. This study provides a foundation for optimizing current B. anthracis response procedures to minimize contaminant exfiltration. PMID:27362274

  2. Evaluation of standardized sample collection, packaging, and decontamination procedures to assess cross-contamination potential during Bacillus anthracis incident response operations.

    PubMed

    Calfee, M Worth; Tufts, Jenia; Meyer, Kathryn; McConkey, Katrina; Mickelsen, Leroy; Rose, Laura; Dowell, Chad; Delaney, Lisa; Weber, Angela; Morse, Stephen; Chaitram, Jasmine; Gray, Marshall

    2016-12-01

    Sample collection procedures and primary receptacle (sample container and bag) decontamination methods should prevent contaminant transfer between contaminated and non-contaminated surfaces and areas during bio-incident operations. Cross-contamination of personnel, equipment, or sample containers may result in the exfiltration of biological agent from the exclusion (hot) zone and have unintended negative consequences on response resources, activities and outcomes. The current study was designed to: (1) evaluate currently recommended sample collection and packaging procedures to identify procedural steps that may increase the likelihood of spore exfiltration or contaminant transfer; (2) evaluate the efficacy of currently recommended primary receptacle decontamination procedures; and (3) evaluate the efficacy of outer packaging decontamination methods. Wet- and dry-deposited fluorescent tracer powder was used in contaminant transfer tests to qualitatively evaluate the currently-recommended sample collection procedures. Bacillus atrophaeus spores, a surrogate for Bacillus anthracis, were used to evaluate the efficacy of spray- and wipe-based decontamination procedures. Both decontamination procedures were quantitatively evaluated on three types of sample packaging materials (corrugated fiberboard, polystyrene foam, and polyethylene plastic), and two contamination mechanisms (wet or dry inoculums). Contaminant transfer results suggested that size-appropriate gloves should be worn by personnel, templates should not be taped to or removed from surfaces, and primary receptacles should be selected carefully. The decontamination tests indicated that wipe-based decontamination procedures may be more effective than spray-based procedures; efficacy was not influenced by material type but was affected by the inoculation method. Incomplete surface decontamination was observed in all tests with dry inoculums. This study provides a foundation for optimizing current B. anthracis response procedures to minimize contaminant exfiltration.

  3. Determination of nonylphenol and nonylphenol ethoxylates in environmental solid samples by ultrasonic-assisted extraction and high performance liquid chromatography-fluorescence detection.

    PubMed

    Núñez, L; Turiel, E; Tadeo, J L

    2007-04-06

    A simple and rapid analytical method for the determination of nonylphenol (NP) and nonylphenol ethoxylates (NPEOx) in solid environmental samples has been developed. The method combines an ultrasonic-assisted extraction procedure in small columns with an enrichment step on C18 solid-phase extraction cartridges prior to separation by HPLC with fluorescence detection. Method optimization was carried out using soil samples fortified at different concentration levels (from 0.1 to 100 µg/g). Under optimum conditions, 2 g of soil was placed in small glass columns and sonication-assisted extraction in small columns (SAESC) was performed at 45 °C in two consecutive 15-min steps using a mixture of H2O/MeOH (30/70). The obtained extracts were collected and loaded onto 500 mg C18 cartridges, and the analytes were eluted with 3 × 1 mL of methanol and 1 mL of acetonitrile. Finally, sample extracts were evaporated under a nitrogen stream, redissolved in 500 µL of H2O/AcN (50/50), and passed through a 0.45 µm nylon filter before final determination by HPLC-FL. The developed procedure achieved quantitative recoveries for NP and NPEOx and was properly validated. Finally, the method was applied to the determination of these compounds in soils and other environmental solid samples such as sediments, compost and sludge.

  4. Investigation of interaction between magnetic silica particles and lambda phage DNA fragment.

    PubMed

    Smerkova, Kristyna; Dostalova, Simona; Vaculovicova, Marketa; Kynicky, Jindrich; Trnkova, Libuse; Kralik, Miroslav; Adam, Vojtech; Hubalek, Jaromir; Provaznik, Ivo; Kizek, Rene

    2013-12-01

    Nucleic acids belong to the most important molecules, and therefore the understanding of their properties, function and behavior is crucial. Even though a range of analytical and biochemical methods have been developed for this purpose, one common step is essential for all of them: isolation of the nucleic acid from the complex sample matrix. The use of magnetic particles for the separation of nucleic acids has many advantages over other isolation methods. In this study, an isolation procedure for the extraction of DNA was optimized. Each step of the isolation process, including washing, immobilization and elution, was optimized; the efficiency was thereby increased from 1.7% to 28.7% and the total time was shortened from 75 to 30 min compared to the previously described method. Quantification of the influence of each parameter was performed by square-wave voltammetry using a hanging mercury drop electrode. Further, we compared the optimized method with standard chloroform extraction and applied it to the isolation of DNA from Staphylococcus aureus and Escherichia coli. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  6. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  7. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  8. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  9. 47 CFR 80.319 - Radiotelegraph distress call and message transmission procedure.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ..., when time is vital, the first and second steps may be omitted. These two steps of the distress... transmissions under paragraphs (a) (5) and (6) of this section, which are to permit direction finding stations...

  10. User's guide to four-body and three-body trajectory optimization programs

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A collection of computer programs and subroutines written in FORTRAN to calculate 4-body (sun-earth-moon-space) and 3-body (earth-moon-space) optimal trajectories is presented. The programs incorporate a variable-step integration technique and a quadrature formula to correct single-step errors. The programs provide the capability to solve the initial value problem; the two-point boundary value problem of a transfer from a given initial position to a given final position in fixed time; the optimal two-impulse transfer from an earth parking orbit of given inclination to a given final position and velocity in fixed time; and the optimal three-impulse transfer from a given position to a given final position and velocity in fixed time.
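
    Variable-step integration with single-step error control can be illustrated by step doubling: take one RK4 step of size h and two of size h/2, use their difference as the local error estimate, and grow or shrink h accordingly. The sketch below integrates a planar Kepler-like problem; it illustrates the numerical idea only and is not the FORTRAN programs' algorithm.

    ```python
    # Adaptive RK4 via step doubling on a planar two-body problem (mu = 1).
    import numpy as np

    def f(t, s):                       # s = [x, y, vx, vy]
        r3 = np.hypot(s[0], s[1]) ** 3
        return np.array([s[2], s[3], -s[0] / r3, -s[1] / r3])

    def rk4(t, s, h):
        k1 = f(t, s)
        k2 = f(t + h / 2, s + h / 2 * k1)
        k3 = f(t + h / 2, s + h / 2 * k2)
        k4 = f(t + h, s + h * k3)
        return s + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def integrate(s, t_end, h=0.01, tol=1e-8):
        t = 0.0
        while t < t_end:
            h = min(h, t_end - t)
            big = rk4(t, s, h)                        # one step of size h
            small = rk4(t + h / 2, rk4(t, s, h / 2), h / 2)  # two half steps
            err = np.max(np.abs(big - small))         # single-step error estimate
            if err <= tol:
                t, s = t + h, small
            h *= min(4.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
        return s

    # circular orbit of period 2*pi returns near its initial state
    print(integrate(np.array([1.0, 0.0, 0.0, 1.0]), 2 * np.pi))
    ```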

  11. Stirling engine design manual

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1978-01-01

    This manual is intended to serve both as an introduction to Stirling engine analysis methods and as a key to the open literature on Stirling engines. Over 800 references are listed and these are cross referenced by date of publication, author and subject. Engine analysis is treated starting from elementary principles and working through cycles analysis. Analysis methodologies are classified as first, second or third order depending upon degree of complexity and probable application; first order for preliminary engine studies, second order for performance prediction and engine optimization, and third order for detailed hardware evaluation and engine research. A few comparisons between theory and experiment are made. A second order design procedure is documented step by step with calculation sheets and a worked out example to follow. Current high power engines are briefly described and a directory of companies and individuals who are active in Stirling engine development is included. Much remains to be done. Some of the more complicated and potentially very useful design procedures are now only referred to. Future support will enable a more thorough job of comparing all available design procedures against experimental data which should soon be available.

  12. Optimization of chiral structures for microscale propulsion.

    PubMed

    Keaveny, Eric E; Walker, Shawn W; Shelley, Michael J

    2013-02-13

    Recent advances in micro- and nanoscale fabrication techniques allow for the construction of rigid, helically shaped microswimmers that can be actuated using applied magnetic fields. These swimmers represent the first steps toward the development of microrobots for targeted drug delivery and minimally invasive surgical procedures. To assess the performance of these devices and improve on their design, we perform shape optimization computations to determine swimmer geometries that maximize speed in the direction of a given applied magnetic torque. We directly assess aspects of swimmer shapes that have been developed in previous experimental studies, including helical propellers with elongated cross sections and attached payloads. From these optimizations, we identify key improvements to existing designs that result in swimming speeds that are 70-470% of their original values.

  13. Determination of Total Carbohydrates in Algal Biomass: Laboratory Analytical Procedure (LAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Wychen, Stefanie; Laurens, Lieve M. L.

    This procedure uses two-step sulfuric acid hydrolysis to hydrolyze the polymeric forms of carbohydrates in algal biomass into monomeric subunits. The monomers are then quantified by either HPLC or a suitable spectrophotometric method.

  14. Fumed silica nanoparticle mediated biomimicry for optimal cell-material interactions for artificial organ development.

    PubMed

    de Mel, Achala; Ramesh, Bala; Scurr, David J; Alexander, Morgan R; Hamilton, George; Birchall, Martin; Seifalian, Alexander M

    2014-03-01

    Replacement of organs irreversibly damaged by chronic disease with suitable tissue-engineered implants is now a familiar area of interest to clinicians and multidisciplinary scientists. Ideal tissue engineering approaches require scaffolds to be tailor-made to mimic the physiological environment of interest, with specific surface topographical and biological properties for optimal cell-material interactions. This study demonstrates a single-step procedure for inducing biomimicry in a novel nanocomposite base material scaffold, re-creating the extracellular matrix required for stem cell integration and differentiation into mature cells. The fumed silica nanoparticle-mediated scaffold functionalization procedure can potentially be adapted with multiple bioactive molecules to induce cellular biomimicry in the development of human organs. The proposed nanocomposite material is already in patients in a number of implants, including the world's first synthetic trachea, tear ducts and a vascular bypass graft. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Location detection and tracking of moving targets by a 2D IR-UWB radar system.

    PubMed

    Nguyen, Van-Han; Pyun, Jae-Young

    2015-03-19

    In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, ultra-wideband (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps such as clutter reduction, target detection, target localization and tracking. In this paper, we introduce a new combination of proposed signal-processing procedures. In the clutter-reduction step, a filtering method that uses a Kalman filter (KF) is proposed. Then, in the target detection step, a modification of the conventional CLEAN algorithm, which is used to estimate the impulse response of the observation region, is applied for the advanced elimination of false alarms. The output is then fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked by an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
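
    The CLEAN idea referenced in the detection step admits a compact illustration: iteratively locate the strongest template match in the residual, record its delay and amplitude, subtract its contribution, and stop when the residual falls below a threshold. The 1-D scan, Gaussian pulse template, and thresholds below are synthetic assumptions, not the paper's radar data.

    ```python
    # CLEAN-style iterative detection: match, record, subtract, repeat.
    import numpy as np

    def clean(signal, template, thresh=0.2, max_targets=5):
        residual, hits = signal.astype(float).copy(), []
        half = len(template) // 2
        while len(hits) < max_targets:
            corr = np.correlate(residual, template, mode="same")
            k = int(np.argmax(np.abs(corr)))
            amp = corr[k] / np.dot(template, template)   # matched amplitude
            if abs(amp) < thresh:
                break
            hits.append((k, amp))
            lo = max(0, k - half)
            hi = min(len(residual), k + half + len(template) % 2)
            residual[lo:hi] -= amp * template[(lo - (k - half)):(hi - (k - half))]
        return hits

    t = np.exp(-np.linspace(-2, 2, 21) ** 2)    # Gaussian pulse template
    x = np.zeros(300)
    x[80:101] += 1.0 * t                         # target 1 near sample 90
    x[200:221] += 0.6 * t                        # target 2 near sample 210
    x += np.random.default_rng(2).normal(0, 0.02, 300)
    print(clean(x, t))                           # ~[(90, 1.0), (210, 0.6)]
    ```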

  16. Hydrogen bonding in malonaldehyde: a density functional and reparametrized semiempirical approach

    NASA Astrophysics Data System (ADS)

    Kovačević, Goran; Hrenar, Tomica; Došlić, Nadja

    2003-08-01

    Intramolecular proton transfer in malonaldehyde (MA) has been investigated by density functional theory (DFT). The DFT results were used for the construction of a high-quality semiempirical potential energy surface with a reparametrized PM3 Hamiltonian. A two-step reparameterization procedure is proposed in which (i) the PM3-MAIS core-core functions for the O-H and H-H interactions were used and a new functional form for the O-O correction function was proposed, and (ii) a set of specific reaction parameters (SRP) was obtained via genetic algorithm optimization. The quality of the reparametrized semiempirical potential energy surfaces was tested by calculating the tunneling splitting of vibrational levels and the anharmonic vibrational frequencies of the system. The applicability to multi-dimensional dynamics in large molecular systems is discussed.
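    The SRP fitting step can be illustrated with a toy genetic algorithm that tunes the parameters of a correction term until a cheap surrogate matches reference energies. The functional forms and reference data below are stand-ins, not the PM3-MAIS forms.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical reference data: DFT energies along a proton-transfer coordinate.
    x_ref = np.linspace(-1.0, 1.0, 25)
    e_ref = (x_ref**2 - 0.5)**2          # stand-in double-well profile

    def model(params, x):
        """Cheap surrogate plus a Gaussian correction term, mimicking the role
        of an SRP-corrected semiempirical surface (illustrative only)."""
        a, b, c = params
        return a * x**4 + b * x**2 + c * np.exp(-4.0 * x**2)

    def fitness(params):
        return -np.sqrt(np.mean((model(params, x_ref) - e_ref)**2))  # negative RMSE

    def ga(pop_size=40, n_gen=200, sigma=0.1):
        pop = rng.normal(size=(pop_size, 3))
        for _ in range(n_gen):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]    # truncation selection
            kids = []
            while len(kids) < pop_size - len(parents):
                i, j = rng.integers(len(parents), size=2)
                mask = rng.random(3) < 0.5                        # uniform crossover
                child = np.where(mask, parents[i], parents[j])
                kids.append(child + sigma * rng.normal(size=3))   # mutation
            pop = np.vstack([parents, kids])
        return max(pop, key=fitness)

    best_srp = ga()   # best-fit "specific reaction parameters"
    ```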

  17. Selective extraction and separation of oxymatrine from Sophora flavescens Ait. extract by silica-confined ionic liquid.

    PubMed

    Bi, Wentao; Tian, Minglei; Row, Kyung Ho

    2012-01-01

    This study highlighted the application of a two-step extraction method for the extraction and separation of oxymatrine from Sophora flavescens Ait. extract, utilizing silica-confined ionic liquids as the sorbent. The optimized silica-confined ionic liquid was first mixed with the plant extract to adsorb oxymatrine; simultaneously, interferences such as matrine were removed. The obtained suspension was then added to a cartridge for solid phase extraction. Through these two steps, the target compound was adequately separated from interferences with 93.4% recovery. In comparison with traditional solid phase extraction, this method accelerates loading and reduces the use of organic solvents during washing. Moreover, the optimization of the loading volume is simplified to the optimization of the solid/liquid ratio. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Stability of discrete time recurrent neural networks and nonlinear optimization problems.

    PubMed

    Singh, Jayant; Barabanov, Nikita

    2016-02-01

    We consider the method of Reduction of Dissipativity Domain to prove global Lyapunov stability of discrete-time recurrent neural networks. The standard and advanced criteria for absolute stability of these essentially nonlinear systems produce rather weak results. The method mentioned above is proved to be more powerful. It involves a multi-step procedure with maximization of special nonconvex functions over polytopes at every step. We derive conditions which guarantee the existence of at most one point of local maximum for such functions over every hyperplane. This nontrivial result is valid for a wide range of neuron transfer functions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. [Management of spine injuries in polytraumatized patients].

    PubMed

    Heyde, C E; Ertel, W; Kayser, R

    2005-09-01

    The management of spine injuries in polytraumatized patients remains a great challenge: diagnosis and the institution of appropriate treatment require integrating spinal trauma care into the overall treatment concept while also following the treatment steps for the injured spine itself. The established concept of "damage control" and criteria regarding the optimal timing and manner of operative treatment of the injured spine in the polytrauma setting are presented and discussed.

  20. Improved quality-by-design compliant methodology for method development in reversed-phase liquid chromatography.

    PubMed

    Debrus, Benjamin; Guillarme, Davy; Rudaz, Serge

    2013-10-01

    A complete strategy dedicated to quality-by-design (QbD) compliant method development using design of experiments (DOE), multiple linear regression response modelling, and Monte Carlo simulations for error propagation was evaluated for liquid chromatography (LC). The proposed approach includes four main steps: (i) initial screening of column chemistry, mobile phase pH and organic modifier; (ii) selectivity optimization through changes in gradient time and mobile phase temperature; (iii) adaptation of column geometry to reach sufficient resolution; and (iv) robust resolution optimization and identification of the method design space. This procedure was employed to obtain a complex chromatographic separation of 15 widely prescribed basic antipsychotic drugs. To fully automate and expedite the QbD method development procedure, short columns packed with sub-2 μm particles were employed, together with a UHPLC system possessing column- and solvent-selection valves. Through this example, the possibilities of the proposed QbD method development workflow were demonstrated and the different steps of the automated strategy were critically discussed. A baseline separation of the mixture of antipsychotic drugs was achieved with an analysis time of less than 15 min, and the robustness of the method was demonstrated simultaneously with the method development phase. Copyright © 2013 Elsevier B.V. All rights reserved.
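    The Monte Carlo error-propagation step can be sketched as follows: fit a linear retention model to DOE data, sample the fitted parameters from their estimated covariance, and estimate the probability that a candidate operating point stays in specification. Data, model form, and the acceptance window are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical DOE data: retention time vs. gradient time and temperature.
    X = np.array([[1, tg, T] for tg in (5, 10, 15) for T in (25, 40, 55)], float)
    y = 2.0 + 0.30 * X[:, 1] - 0.02 * X[:, 2] + rng.normal(0, 0.05, len(X))

    # Ordinary least squares fit and parameter covariance estimate.
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    s2 = res.item() / dof
    cov = s2 * np.linalg.inv(X.T @ X)

    # Monte Carlo propagation: probability that the predicted retention at a
    # candidate operating point stays inside an acceptance window.
    point = np.array([1.0, 12.0, 30.0])
    draws = rng.multivariate_normal(beta, cov, size=20000) @ point
    p_ok = np.mean((draws > 4.5) & (draws < 5.5))   # crude "design space" criterion
    print(f"P(in spec) = {p_ok:.3f}")
    ```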

  1. Optimized breeding strategies for multiple trait integration: II. Process efficiency in event pyramiding and trait fixation.

    PubMed

    Peng, Ting; Sun, Xiaochun; Mumm, Rita H

    2014-01-01

    Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in light of the overall goal of MTI: recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Building on the results of optimized single event introgression (Peng et al. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression. Mol Breed, 2013), which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm and ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps of MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) identify an optimal breeding strategy for pyramiding eight events into the female RP and seven into the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity, as well as seed chipping and tissue sampling approaches to facilitate genotyping. With selfing approaches, two generations of selfing rather than one were utilized for trait fixation (i.e. 'F2 enrichment' as per Bonnett et al. in Strategies for efficient implementation of molecular markers in wheat breeding. Mol Breed 15:75-85, 2005) to eliminate bottlenecking due to extremely low frequencies of desired genotypes in the population. Efficiency indicators such as the total number of plants grown across generations, total number of marker data points, total number of generations, number of seeds sampled by seed chipping, number of plants requiring tissue sampling, and number of pollinations (i.e. selfing and crossing) were considered in comparisons of breeding strategies. A breeding strategy involving seed chipping and a two-generation selfing approach (SC + SELF) was determined to be the most efficient in terms of time to market and resource requirements. Doubled haploidy may have limited utility in trait fixation for MTI under the defined breeding scenario. This outcome paves the way for optimizing the last step in the MTI process, version testing, which involves hybridization of female and male RP conversions to create versions of the converted hybrid for performance evaluation and possible commercial release.
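    The case for two-generation "F2 enrichment" follows from elementary genotype frequencies; a back-of-the-envelope sketch under the simplifying assumption of independent (unlinked) loci, with all population-size targets purely illustrative:

    ```python
    from math import ceil, log

    def plants_needed(p, assurance=0.99):
        """Smallest population size n with 1 - (1 - p)^n >= assurance,
        i.e. at least one desired genotype recovered with high probability."""
        return ceil(log(1.0 - assurance) / log(1.0 - p))

    n = 8                      # events to fix in one recurrent parent

    # One-step fixation: all n loci homozygous in a single F2 generation.
    p_f2 = 0.25 ** n
    print("one-step F2 fixation:", plants_needed(p_f2))   # ~3e5 plants

    # Two-generation 'F2 enrichment': first select plants carrying all events
    # (homozygous or heterozygous, P = 0.75 per locus), then fix the loci that
    # are still heterozygous (on average 2/3 of them) in a second selfing.
    p_carrier = 0.75 ** n
    k_het = round(2 * n / 3)   # expected heterozygous loci in a selected carrier
    print("step 1 (carriers):", plants_needed(p_carrier))
    print("step 2 (expected case):", plants_needed(0.25 ** k_het))
    ```

    Splitting the work over two generations reduces the combined population requirement by roughly two orders of magnitude in this toy calculation, which is the bottleneck-elimination effect the abstract describes.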

  2. 16 CFR § 1610.6 - Test procedure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    16 CFR § 1610.6 (Commercial Practices, Standard for the Flammability of Clothing Textiles): Test procedure. The test procedure is divided into two steps according to paragraph (b)(1) of this section. (a) Step 1—Testing in the original state. (1) Tests shall be...

  3. Architecture design of a generic centralized adjudication module integrated in a web-based clinical trial management system

    PubMed Central

    Zhao, Wenle; Pauls, Keith

    2015-01-01

    Background Centralized outcome adjudication has been used widely in multi-center clinical trials to prevent potential biases and to reduce variation in important safety and efficacy outcome assessments. Adjudication procedures can vary significantly among studies. In practice, the coordination of outcome adjudication in many multicenter clinical trials remains a manual process with low efficiency and high risk of delay. Motivated by the demands of two large clinical trial networks, a generic outcome adjudication module has been developed by the networks' data management center within a homegrown clinical trial management system. In this paper, the system design strategy and database structure are presented. Methods A generic database model was created to translate different adjudication procedures into a unified set of sequential adjudication steps. Each adjudication step was defined by one activate condition, one lock condition, one to five categorical data items to capture adjudication results, and one free-text field for general comments. Based on this model, a generic outcome adjudication user interface and a generic data processing program were developed within the homegrown clinical trial management system to provide automated coordination of outcome adjudication. Results By the end of 2014, this generic outcome adjudication module had been implemented in 10 multicenter trials. A total of 29 adjudication procedures were defined, with the number of adjudication steps varying from 1 to 7. The implementation of a new adjudication procedure in this generic module took an experienced programmer one or two days. A total of 7,336 outcome events had been adjudicated and 16,235 adjudication step activities had been recorded. In one multicenter trial, 1,144 safety outcome event submissions went through a three-step adjudication procedure, with a median of 3.95 days from safety event case report form submission to adjudication completion. In another trial, 277 clinical outcome events were adjudicated by a six-step procedure, taking a median of 23.84 days from outcome event case report form submission to adjudication procedure completion. Conclusions A generic outcome adjudication module integrated in the clinical trial management system made automated coordination of efficacy and safety outcome adjudication a reality. PMID:26464429
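    The generic step model described above might be expressed along these lines; the class, field names, and the example procedure are illustrative, not the actual database schema:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AdjudicationStep:
        """One step of a study-specific adjudication procedure in the spirit
        of the generic model above: an activate condition, a lock condition,
        up to five categorical result items, and a free-text comment."""
        step_no: int
        activate_condition: str        # e.g. "previous step completed"
        lock_condition: str            # e.g. "final reviewer sign-off"
        result_items: list             # 1 to 5 categorical items
        comment: Optional[str] = None

        def __post_init__(self):
            if not 1 <= len(self.result_items) <= 5:
                raise ValueError("a step carries one to five categorical items")

    # A three-step safety-event procedure expressed in the generic model:
    procedure = [
        AdjudicationStep(1, "event submitted", "reviewer 1 done",
                         [("related", ["yes", "no"])]),
        AdjudicationStep(2, "step 1 locked", "reviewer 2 done",
                         [("related", ["yes", "no"])]),
        AdjudicationStep(3, "reviewers disagree", "chair sign-off",
                         [("final call", ["yes", "no"])]),
    ]
    ```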

  4. Design optimization studies using COSMIC NASTRAN

    NASA Technical Reports Server (NTRS)

    Pitrof, Stephen M.; Bharatram, G.; Venkayya, Vipperla B.

    1993-01-01

    The purpose of this study is to create, test and document a procedure to integrate mathematical optimization algorithms with COSMIC NASTRAN. This procedure is very important to structural design engineers who wish to capitalize on optimization methods to ensure that their design is optimized for its intended application. The OPTNAST computer program was created to link NASTRAN and design optimization codes into one package. This implementation was tested using two truss structure models and optimizing their designs for minimum weight, subject to multiple loading conditions and displacement and stress constraints. However, the process is generalized so that an engineer could design other types of elements by adding to or modifying some parts of the code.

  5. The use of optimization techniques to design controlled diffusion compressor blading

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1982-01-01

    A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.

  6. 32 CFR 644.409 - Procedures for Interchange of National Forest Lands.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    32 CFR § 644.409 (National Defense): Procedures for Interchange of National Forest Lands. (a) General. The interchange of national forest lands is accomplished in three steps: first, agreement must be reached between the two...

  7. Study of inhaler technique in asthma patients: differences between pediatric and adult patients

    PubMed Central

    Manríquez, Pablo; Acuña, Ana María; Muñoz, Luis; Reyes, Alvaro

    2015-01-01

    Objective: Inhaler technique comprises a set of procedures for drug delivery to the respiratory system. The oral inhalation of medications is the first-line treatment for lung diseases. Using the proper inhaler technique ensures sufficient drug deposition in the distal airways, optimizing therapeutic effects and reducing side effects. The purposes of this study were to assess inhaler technique in pediatric and adult patients with asthma; to determine the most common errors in each group of patients; and to compare the results between the two groups. Methods: This was a descriptive cross-sectional study. Using a ten-step protocol, we assessed inhaler technique in 135 pediatric asthma patients and 128 adult asthma patients. Results: The most common error among the pediatric patients was failing to execute a 10-s breath-hold after inhalation, whereas the most common error among the adult patients was failing to exhale fully before using the inhaler. Conclusions: Pediatric asthma patients appear to perform most of the inhaler technique steps correctly. However, the same does not seem to be true for adult patients. PMID:26578130

  8. A morphing-based scheme for large deformation analysis with stereo-DIC

    NASA Astrophysics Data System (ADS)

    Genovese, Katia; Sorgente, Donato

    2018-05-01

    A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization may prove very challenging and possibly fail when dealing with pairs of largely deformed images, such as those obtained from two angled views of non-flat objects or from the temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a proper image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step by step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out by taking a standard reference-updating approach as a benchmark. Finally, the morphing scheme is extended to the most general case of correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.
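    The intermediate-image idea can be sketched by scaling a dense displacement field and backward-warping the reference image. Here the field is assumed given (the paper obtains it from feature matching plus NURBS interpolation); the toy pattern and shear field are invented for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def intermediate_images(img_ref, u, v, n_steps=5):
        """Generate synthetic images between a reference image and its fully
        deformed counterpart by scaling a dense displacement field (u, v)."""
        rows, cols = np.indices(img_ref.shape)
        seq = []
        for t in np.linspace(0.0, 1.0, n_steps + 2)[1:-1]:
            # backward warp: sample the reference at positions pulled back by t
            coords = np.array([rows - t * v, cols - t * u])
            seq.append(map_coordinates(img_ref, coords, order=1, mode="nearest"))
        return seq

    # toy usage: a speckle-like pattern sheared gradually
    rng = np.random.default_rng(3)
    img = rng.random((64, 64))
    u = np.fromfunction(lambda r, c: 0.1 * r, (64, 64))   # horizontal shear
    v = np.zeros((64, 64))
    frames = intermediate_images(img, u, v)
    ```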

  9. SU-E-J-126: An Online Replanning Method for FFF Beams Without Couch Shift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, E; Ates, O; Li, X

    2015-06-15

    Purpose: In situations where a couch shift for patient positioning is not preferred or is prohibited (e.g., MR-Linac), segment aperture morphing (SAM) can address target dislocation and deformation. For IMRT/VMAT with flattening filter free (FFF) beams, however, the SAM method would lead to an adverse translational dose effect due to the beam unflattening. Here we propose a new two-step process to address both the translational effect of FFF beams and the target deformation. Methods: The replanning method consists of an offline step and an online step. The offline step is to create a series of pre-shifted plans (PSPs) obtained by a so-called "warm start" optimization (starting optimization from the original plan, rather than from scratch) at a series of isocenter shifts of fixed distance (e.g., 2 cm, at x,y,z = 2,0,0; 2,2,0; 0,2,0; ...; -2,0,0). The PSPs all have the same number of segments with very similar shapes, since the warm-start optimization only adjusts the MLC positions instead of regenerating them. In the online step, a new plan is obtained by linearly interpolating the MLC positions and the monitor units of the closest PSPs for the shift determined from the image of the day. This two-step process is completely automated and practically instantaneous (no optimization or dose calculation needed). The previously developed SAM algorithm is then applied for daily deformation. We tested the method on sample prostate and pancreas cases. Results: The two-step interpolation method can account for the adverse dose effects of FFF beams, while SAM corrects for the target deformation. The whole process takes the same time as the previously reported SAM process (5–10 min). Conclusion: The new two-step method plus SAM can address both the translational effects of FFF beams and target deformation, and can be executed in full automation, requiring no additional time beyond the SAM process. This research was supported by Elekta Inc. (Crawley, UK).
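    The online lookup reduces to a bracketing search plus linear interpolation; a 1D sketch with fictitious plan data (the clinical implementation interpolates per axis in 3D):

    ```python
    import numpy as np

    def interpolate_plan(psp_shifts, psp_mlc, psp_mu, shift):
        """Linearly interpolate MLC leaf positions and monitor units between
        the two pre-shifted plans (PSPs) bracketing the measured isocenter
        shift. All inputs are illustrative stand-ins for real plan data."""
        psp_shifts = np.asarray(psp_shifts, float)       # e.g. [-2, 0, 2] cm
        i = np.searchsorted(psp_shifts, shift) - 1
        i = np.clip(i, 0, len(psp_shifts) - 2)
        t = (shift - psp_shifts[i]) / (psp_shifts[i + 1] - psp_shifts[i])
        mlc = (1 - t) * psp_mlc[i] + t * psp_mlc[i + 1]  # per-segment leaf positions
        mu = (1 - t) * psp_mu[i] + t * psp_mu[i + 1]     # per-segment monitor units
        return mlc, mu

    # toy usage: 3 PSPs at -2, 0, +2 cm; 2 segments x 4 leaf pairs each
    shifts = [-2.0, 0.0, 2.0]
    mlc = np.stack([np.full((2, 4), s) for s in shifts])   # leaves follow the shift
    mu = np.array([[100.0, 80.0], [110.0, 85.0], [120.0, 90.0]])
    new_mlc, new_mu = interpolate_plan(shifts, mlc, mu, shift=0.7)
    ```

    Because both lookups are closed-form, no optimization or dose calculation is needed online, which is the source of the method's speed.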

  10. Two-step chlorination: A new approach to disinfection of a primary sewage effluent.

    PubMed

    Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan

    2017-01-01

    Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. The objective of achieving a disinfection goal and reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how the two-step chlorination was optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in the sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.

  12. Holistic irrigation water management approach based on stochastic soil water dynamics

    NASA Astrophysics Data System (ADS)

    Alizadeh, H.; Mousavi, S. J.

    2012-04-01

    Recognizing the essential gap between fundamental unsaturated-zone transport processes and soil and water management, due to the low effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function to address water productivity and profitability aspects in irrigation management through optimizing irrigation policy. Utilizing an optimization-simulation method, the model includes an eco-hydrological integrated simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. The interdependent hydrology of the soil unsaturated and saturated zones is treated in a semi-analytical approach in two steps. In the first step, analytical expressions are derived for the expected values of crop yield, total water requirement and soil water balance components, assuming a fixed level for the shallow water table; in the second step, a numerical Newton-Raphson procedure is employed to update the shallow water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance than a numerical Monte Carlo simulation-based technique, has led to an effective irrigation management tool that can contribute to bridging the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at a growing-season time scale, the developed model can be used in large-scale systems such as irrigation districts and agricultural catchments. Accordingly, the model has been applied in the Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in southwest Iran. The area suffers from water scarcity, and therefore the trade-off between the level of deficit and economic profit should be assessed. Based on the results, while the maximum net benefit was obtained for the stress-avoidance (SA) irrigation policy, the highest water profitability, defined by the economic net benefit gained per unit volume of irrigation water applied, resulted when only about 60% of the water used in the SA policy is applied.
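    The second, numerical step can be illustrated with a generic Newton-Raphson update of the water table level; the balance residual below is fictitious, standing in for the mismatch between the assumed level and the level implied by the analytical soil-water balance.

    ```python
    def newton_raphson(g, x0, tol=1e-8, max_iter=50, h=1e-6):
        """Solve g(x) = 0 by Newton-Raphson with a numerical derivative,
        as in the second step above: adjust the shallow water table level
        until the saturated and unsaturated zones are mutually consistent."""
        x = x0
        for _ in range(max_iter):
            gx = g(x)
            if abs(gx) < tol:
                return x
            dg = (g(x + h) - g(x - h)) / (2 * h)   # central difference
            x -= gx / dg
        return x

    # toy closure: residual between an assumed water table depth and the
    # depth implied by a (fictitious) analytical balance expression
    balance_residual = lambda d: d - (1.5 + 0.3 * d**0.5)
    depth = newton_raphson(balance_residual, x0=2.0)
    ```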

  13. Computer Assisted Design, Prediction, and Execution of Economical Organic Syntheses

    NASA Astrophysics Data System (ADS)

    Gothard, Nosheen Akber

    The synthesis of useful organic molecules via simple and cost-effective routes is a core challenge in organic chemistry. In industry or academia, organic chemists use their chemical intuition, technical expertise, and published procedures to determine an optimal pathway. This approach not only takes time and effort but is also cost prohibitive. Many potentially optimal routes sketched on paper never get experimentally tested. In addition, new methods discovered daily are often overlooked in favor of established techniques. This thesis reports a computational technique that assists the discovery of economical synthetic routes to useful organic targets. Organic chemistry exists as a network in which chemicals are connected by reactions, analogous to cities connected by roads on a geographic map. This network topology of organic reactions in the network of organic chemistry (NOC) allows the application of graph theory to devise algorithms for synthetic optimization of organic targets. A computational approach comprising customizable algorithms, pre-screening filters, and existing chemoinformatic techniques is capable of answering complex questions and performing mechanistic tasks desired by chemists, such as optimization of organic syntheses. One-pot reactions are central to modern synthesis since they save resources and time by avoiding isolation, purification, characterization, and production of chemical waste after each synthetic step. Sometimes such reactions are identified by chance or, more often, by careful inspection of the individual steps that are to be wired together. Here, algorithms are used to discover one-pot reactions, which are then validated experimentally, demonstrating that the computationally predicted sequences can indeed be carried out in good overall yields. The experimental examples are chosen from small networks of reactions around useful chemicals such as quinoline scaffolds, quinoline-based inhibitors of the phosphoinositide 3-kinase delta (PI3Kdelta) enzyme, and thiophene derivatives. In this way, individual synthetic connections are replaced with two-, three-, or even four-step one-pot sequences. Lastly, the computational method is utilized to devise hypothetical synthetic routes to popular pharmaceutical drugs like Naproxen® and Taxol®. The algorithmically generated optimal pathways are evaluated against chemical logic. Applying a labor/cost factor revealed that not all shorter synthesis routes are economical; sometimes the longest way round is the shortest way home, and lengthier routes prove more economical and environmentally friendly.
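    The map analogy suggests a shortest-path search; here is a sketch with Dijkstra's algorithm over a tiny fictitious reaction network, with edge costs standing in for whatever yield, labor, or reagent-price factors one chooses to encode.

    ```python
    import heapq

    def cheapest_route(network, start, target):
        """Dijkstra's shortest path over a reaction network: nodes are
        chemicals, weighted edges are reactions. Network and costs are
        fictitious, purely for illustration."""
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == target:
                break
            if d > dist.get(u, float("inf")):
                continue            # stale heap entry
            for v, cost in network.get(u, []):
                nd = d + cost
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [target], target
        while node != start:        # walk back through predecessors
            node = prev[node]
            path.append(node)
        return path[::-1], dist[target]

    network = {  # chemical -> [(product, reaction cost)]
        "aniline": [("intermediate A", 2.0), ("intermediate B", 5.0)],
        "intermediate A": [("quinoline scaffold", 3.0)],
        "intermediate B": [("quinoline scaffold", 1.0)],
    }
    route, cost = cheapest_route(network, "aniline", "quinoline scaffold")
    ```

    A labor/cost factor per step, as discussed above, can be folded directly into the edge weights, which is how a lengthier route can come out cheaper than a shorter one.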

  14. A METHOD FOR DETERMINING THE COMPATIBILITY OF HAZARDOUS WASTES

    EPA Science Inventory

    This report describes a method for determining the compatibility of the binary combinations of hazardous wastes. The method consists of two main parts, namely: (1) the step-by-step compatibility analysis procedures, and (2) the hazardous wastes compatibility chart. The key elemen...

  15. Deep generative learning for automated EHR diagnosis of traditional Chinese medicine.

    PubMed

    Liang, Zhaohui; Liu, Jun; Ou, Aihua; Zhang, Honglai; Li, Ziping; Huang, Jimmy Xiangji

    2018-05-04

    Computer-aided medical decision-making (CAMDM) is the method of utilizing massive EMR data as both empirical and evidence support for the decision procedures of healthcare activities. Well-developed information infrastructure, such as hospital information systems and disease surveillance systems, provides abundant data for CAMDM. However, the complexity of EMR data combined with abstract medical knowledge makes conventional models incompetent for the analysis. Thus a deep belief network (DBN)-based model is proposed to simulate the information analysis and decision-making procedure in medical practice. The purpose of this paper is to evaluate a deep learning architecture as an effective solution for CAMDM. A two-step model is applied in our study. In the first step, an optimized seven-layer deep belief network (DBN) is applied as an unsupervised learning algorithm to perform model training and acquire feature representations. A support vector machine model is then trained on the DBN features in the second, supervised learning step. Two data sets are used in the experiments: one a plain-text data set indexed by medical experts, the other a structured data set on primary hypertension. The data are randomly divided to generate the training set for the unsupervised learning and the testing set for the supervised learning. Model performance is evaluated by the statistics of mean and variance and by the average precision and coverage on the data sets. Two conventional shallow models (support vector machine / SVM and decision tree / DT) are used as comparisons to show the superiority of the proposed approach. The deep learning (DBN + SVM) model outperforms simple SVM and DT on both data sets in terms of all the evaluation measures, which confirms our motivation that the deep model is good at capturing key features with less dependence on manually built indexes. Our study shows that the two-step deep learning model achieves high performance for medical information retrieval over conventional shallow models. It is able to capture the features of both plain text and the highly structured EMR database. The performance of the deep model is superior to that of conventional shallow learning models such as SVM and DT, making it an appropriate knowledge-learning model for information retrieval in EMR systems. Therefore, deep learning provides a good solution to improve the performance of CAMDM systems. Copyright © 2018. Published by Elsevier B.V.
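    A compact scikit-learn sketch of the two-step split (unsupervised feature learning, then a supervised SVM). A single Bernoulli RBM stands in for the paper's seven-layer DBN (stacking more RBMs would approximate it), and a public digits dataset replaces the EHR data; both substitutions are assumptions for illustration.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import BernoulliRBM
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = Pipeline([
        ("scale", MinMaxScaler()),                  # RBMs expect [0, 1] inputs
        ("rbm", BernoulliRBM(n_components=128, learning_rate=0.05,
                             n_iter=20, random_state=0)),   # step 1: features
        ("svm", SVC(kernel="rbf", C=10.0)),                 # step 2: classifier
    ])
    model.fit(X_tr, y_tr)
    print("test accuracy:", model.score(X_te, y_te))
    ```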

  16. Optimized postweld heat treatment procedures for 17-4 PH stainless steels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaduri, A.K.; Sujith, S.; Srinivasan, G.

    1995-05-01

    The postweld heat treatment (PWHT) procedures for 17-4 PH stainless steel weldments of matching chemistry were optimized vis-a-vis the microstructure prior to welding, based on microstructural studies and room-temperature mechanical properties. The 17-4 PH stainless steel was welded in two different prior microstructural conditions (condition A and condition H1150) and then postweld heat treated to condition H900 or condition H1150, using different heat treatment procedures. Microstructures were investigated and room-temperature tensile properties were determined to study the combined effects of prior microstructure and PWHT procedure.

  17. Optimization of adenovirus 40 and 41 recovery from tap water using small disk filters.

    PubMed

    McMinn, Brian R

    2013-11-01

    Currently, the U.S. Environmental Protection Agency's Information Collection Rule (ICR) for the primary concentration of viruses from drinking and surface waters uses the 1MDS filter, but a more cost-effective option, the NanoCeram® filter, has been shown to recover comparable levels of enterovirus and norovirus from both matrices. In order to achieve the highest viral recoveries, filtration methods require the identification of optimal concentration conditions that are unique to each virus type. This study evaluated the effectiveness of 1MDS and NanoCeram filters in recovering adenovirus (AdV) 40 and 41 from tap water, and optimized two secondary concentration procedures: the celite and organic flocculation methods. Adjustments in pH were made to both the virus elution solutions and the sample matrices to determine which resulted in higher virus recovery. Samples were analyzed by quantitative PCR (qPCR) and Most Probable Number (MPN) techniques, and AdV recoveries were determined by comparing levels of virus in sample concentrates to those in the initial input. The recovery of adenovirus was highest for samples in unconditioned tap water (pH 8) using the 1MDS filter and celite for secondary concentration. An elution buffer containing 0.1% sodium polyphosphate at pH 10.0 was determined to be the most effective overall for both AdV types. Under these conditions, the average recovery for AdV40 and AdV41 was 49% and 60%, respectively. By optimizing the secondary elution steps, AdV recovery from tap water could be improved at least two-fold compared to the currently used methodology. Identification of the optimal concentration conditions for human AdV (HAdV) is important for timely and sensitive detection of these viruses in both surface and drinking waters. Published by Elsevier B.V.

  18. Simultaneous determination of PPCPs, EDCs, and artificial sweeteners in environmental water samples using a single-step SPE coupled with HPLC-MS/MS and isotope dilution.

    PubMed

    Tran, Ngoc Han; Hu, Jiangyong; Ong, Say Leong

    2013-09-15

    A high-throughput method for the simultaneous determination of 24 pharmaceuticals and personal care products (PPCPs), endocrine disrupting chemicals (EDCs) and artificial sweeteners (ASs) was developed. The method was based on a single-step solid phase extraction (SPE) coupled with high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) and isotope dilution. In this study, a single-step SPE procedure was optimized for simultaneous extraction of all target analytes. Good recoveries (≥ 70%) were observed for all target analytes when extraction was performed using Chromabond® HR-X (500 mg, 6 mL) cartridges under acidic conditions (pH 2). HPLC-MS/MS parameters were optimized for the simultaneous analysis of the 24 PPCPs, EDCs and ASs in a single injection. Quantification was performed using 13 isotopically labeled internal standards (ILIS), which efficiently corrects for the loss of analytes during the SPE procedure, matrix effects during HPLC-MS/MS, and fluctuations in MS/MS signal intensity due to the instrument. The method quantification limit (MQL) for most of the target analytes was below 10 ng/L in all water samples. The method was successfully applied to the simultaneous determination of PPCPs, EDCs and ASs in raw wastewater, surface water and groundwater samples collected in a local catchment area in Singapore. In conclusion, the developed method provides a valuable tool for investigating the occurrence, behavior, transport, and fate of PPCPs, EDCs and ASs in the aquatic environment. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Minimizing Postsampling Degradation of Peptides by a Thermal Benchtop Tissue Stabilization Method

    PubMed Central

    Segerström, Lova; Gustavsson, Jenny

    2016-01-01

    Enzymatic degradation is a major concern in peptide analysis. Postmortem metabolism in biological samples entails considerable risk for measurements misrepresentative of true in vivo concentrations. It is therefore vital to find reliable, reproducible, and easy-to-use procedures to inhibit enzymatic activity in fresh tissues before subjecting them to qualitative and quantitative analyses. The aim of this study was to test a benchtop thermal stabilization method to optimize measurement of endogenous opioids in brain tissue. Endogenous opioid peptides are generated from precursor proteins through multiple enzymatic steps that include conversion of one bioactive peptide to another, often with a different function. Ex vivo metabolism may, therefore, lead to erroneous functional interpretations. The efficacy of heat stabilization was systematically evaluated in a number of postmortem handling procedures. Dynorphin B (DYNB), Leu-enkephalin-Arg6 (LARG), and Met-enkephalin-Arg6-Phe7 (MEAP) were measured by radioimmunoassay in rat hypothalamus, striatum (STR), and cingulate cortex (CCX). Also, simplified extraction protocols for stabilized tissue were tested. Stabilization affected all peptide levels to varying degrees compared to those prepared by standard dissection and tissue handling procedures. Stabilization increased DYNB in hypothalamus, but not STR or CCX, whereas LARG generally decreased. MEAP increased in hypothalamus after all stabilization procedures, whereas for STR and CCX, the effect was dependent on the time point for stabilization. The efficacy of stabilization allowed samples to be left for 2 hours in room temperature (20°C) without changes in peptide levels. This study shows that conductive heat transfer is an easy-to-use and efficient procedure for the preservation of the molecular composition in biological samples. Region- and peptide-specific critical steps were identified and stabilization enabled the optimization of tissue handling and opioid peptide analysis. The result is improved diagnostic and research value of the samples with great benefits for basic research and clinical work. PMID:27007059

  20. Multi Objective Controller Design for Linear System via Optimal Interpolation

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay

    1996-01-01

    We propose a methodology for the design of a controller which satisfies a set of closed-loop objectives simultaneously. The set of objectives consists of: (1) pole placement, (2) decoupled command tracking of step inputs at steady state, and (3) minimization of step response transients with respect to envelope specifications. We first obtain a characterization of all controllers placing the closed-loop poles in a prescribed region of the complex plane. In this characterization, the free parameter matrix Q(s) is to be determined to attain objectives (2) and (3). Objective (2) is expressed as determining a Pareto optimal solution to a vector-valued optimization problem. The solution of this problem is obtained by transforming it to a scalar convex optimization problem. This solution determines Q(0), and the remaining freedom in choosing Q(s) is used to satisfy objective (3). We write Q(s) = (1/v(s)) bar-Q(s) for a prescribed polynomial v(s). Here bar-Q(s) is a polynomial matrix which is arbitrary except that Q(0) and the order of bar-Q(s) are fixed. Obeying these constraints, bar-Q(s) is now to be 'shaped' to minimize the step response characteristics of specific input/output pairs according to the maximum envelope violations. This problem is expressed as a vector-valued optimization problem using the concept of Pareto optimality. We then investigate a scalar optimization problem associated with this vector-valued problem and show that it is convex. The organization of the report is as follows. The next section includes some definitions and preliminary lemmas. We then give the problem statement, which is followed by a section including a detailed development of the design procedure. We then consider an aircraft control example. The last section gives some concluding remarks. The Appendix includes the proofs of technical lemmas, printouts of computer programs, and figures.

  1. 3D-fabrication of tunable and high-density arrays of crystalline silicon nanostructures

    NASA Astrophysics Data System (ADS)

    Wilbers, J. G. E.; Berenschot, J. W.; Tiggelaar, R. M.; Dogan, T.; Sugimura, K.; van der Wiel, W. G.; Gardeniers, J. G. E.; Tas, N. R.

    2018-04-01

    In this report, a procedure for the 3D-nanofabrication of ordered, high-density arrays of crystalline silicon nanostructures is described. Two nanolithography methods were utilized for the fabrication of the nanostructure array, viz. displacement Talbot lithography (DTL) and edge lithography (EL). DTL is employed to perform two (orthogonal) resist-patterning steps to pattern a thin Si3N4 layer. The resulting patterned double layer serves as an etch mask for all further etching steps for the fabrication of ordered arrays of silicon nanostructures. The arrays are made by means of anisotropic wet etching of silicon in combination with an isotropic retraction etch step of the etch mask, i.e. EL. The procedure enables fabrication of nanostructures with dimensions below 15 nm and a potential density of 1010 crystals cm-2.

  2. Isothermal DNA origami folding: avoiding denaturing conditions for one-pot, hybrid-component annealing

    NASA Astrophysics Data System (ADS)

    Kopielski, Andreas; Schneider, Anne; Csáki, Andrea; Fritzsche, Wolfgang

    2015-01-01

    The DNA origami technique offers great potential for nanotechnology. Using biomolecular self-assembly, defined 2D and 3D nanoscale DNA structures can be realized. DNA origami allows the positioning of proteins, fluorophores or nanoparticles with an accuracy of a few nanometers and enables thereby novel nanoscale devices. Origami assembly usually includes a thermal denaturation step at 90 °C. Additional components used for nanoscale assembly (such as proteins) are often thermosensitive, and possibly damaged by such harsh conditions. They have therefore to be attached in an extra second step to avoid defects. To enable a streamlined one-step nanoscale synthesis - a so called one-pot folding - an adaptation of the folding procedures is required. Here we present a thermal optimization of this process for a 2D DNA rectangle-shaped origami resulting in an isothermal assembly protocol below 60 °C without thermal denaturation. Moreover, a room temperature protocol is presented using the chemical additive betaine, which is biocompatible in contrast to chemical denaturing approaches reported previously.

  3. NASA Spinoff Article: Automated Procedures To Improve Safety on Oil Rigs

    NASA Technical Reports Server (NTRS)

    Garud, Sumedha

    2013-01-01

    On May 11th, 2013, two astronauts emerged from the interior of the International Space Station (ISS) and worked their way toward the far end of the spacecraft. Over the next 5 1/2 hours, the two replaced an ammonia pump that had developed a significant leak a few days before. On the ISS, ammonia serves the vital role of cooling components - in this case, one of the station's eight solar arrays. Throughout the extravehicular activity (EVA), the astronauts stayed in constant contact with mission control: every movement, every action strictly followed a carefully planned set of procedures to maximize crew safety and the chances of success. Though the leak had come as a surprise, NASA was prepared to handle it swiftly thanks in part to the thousands of procedures that have been written to cover every aspect of the ISS's operations. The ISS is not unique in this regard: every NASA mission requires well-written procedures - detailed lists of step-by-step instructions - that cover how to operate equipment in any scenario, from normal operations to the challenges created by malfunctioning hardware or software. Astronauts and mission control train and drill extensively in procedures to ensure they know what the proper procedures are and when they should be used. These procedures used to be written exclusively on paper, but over the past decade NASA has transitioned to digital formats. Electronic documentation simplifies storage and use, allowing astronauts and flight controllers to find instructions more quickly and display them through a variety of media. Electronic procedures are also a crucial step toward automation: once instructions are digital, procedure display software can be designed to assist in authoring, reviewing, and even executing them.

  4. Mechanochemical synthesis and intercalation of Ca(II)Fe(III)-layered double hydroxides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferencz, Zs.; Szabados, M.; Varga, G.

    2016-01-15

    A mechanochemical method (grinding the components without added water – dry grinding, followed by further grinding in the presence of a minute amount of water or NaOH solution – wet grinding) was used in this work for the preparation and intercalation of CaFe-layered double hydroxides (LDHs). Both the pristine LDHs and the amino acid anion (cystinate and tyrosinate) intercalated varieties were prepared by the two-step grinding procedure in a mixer mill. By systematically changing the conditions of the preparation method, a set of parameters could be determined which led to the formation of close to phase-pure LDH. The optimisation procedure was also applied for the intercalation processes of the amino acid anions. The resulting materials were structurally characterised by a range of methods (X-ray diffractometry, scanning electron microscopy, energy dispersive analysis, thermogravimetry, X-ray absorption and infra-red spectroscopies). It was proven that this simple mechanochemical procedure was able to produce complex organic–inorganic nanocomposites: LDHs intercalated with amino acid anions. - Graphical abstract: Amino acid anion-Ca(II)Fe(III)-LDHs were successfully prepared by a two-step milling procedure. - Highlights: • Synthesis of pristine and amino acid intercalated CaFe-LDHs by two-step milling. • Identifying the optimum synthesis and intercalation parameters. • Characterisation of the samples with a range of instrumental methods.

  5. Comparison of patency and cost-effectiveness of self-expandable metal and plastic stents used for malignant biliary strictures: a Polish single-center study.

    PubMed

    Budzyńska, Agnieszka; Nowakowska-Duława, Ewa; Marek, Tomasz; Hartleb, Marek

    2016-10-01

    Most patients with malignant biliary obstruction are suited only for palliation by endoscopic drainage with plastic stents (PS) or self-expandable metal stents (SEMS). The aim was to compare the clinical outcome and costs of biliary stenting with SEMS and PS in patients with malignant biliary strictures. A total of 114 patients with malignant jaundice who underwent 376 endoscopic retrograde biliary drainage (ERBD) procedures were studied. ERBD with the placement of PS was performed in 80 patients, with one-step SEMS in 20 patients and two-step SEMS in 14 patients. Significantly fewer ERBD interventions were performed in patients with one-step SEMS than with PS or the two-step SEMS technique (2.0±1.12 vs. 3.1±1.7 or 5.7±2.1, respectively, P<0.0001). The median hospitalization duration per procedure was similar for the three groups of patients. Survival time was longest in the two-step SEMS group in comparison with the one-step SEMS and PS groups (596±270 vs. 276±141 or 208±219 days, P<0.001). Overall median time to recurrent biliary obstruction was 89.3±159 days for PS and 120.6±101 days for SEMS (P=0.01). The total cost of hospitalization with ERBD was higher for two-step SEMS than for one-step SEMS or PS (1448±312, 1152±135 and 977±156 €, P<0.0001). However, the estimated annual cost of medical care for one-step SEMS was higher than that for the two-step SEMS or PS groups (4618, 4079, and 3995 €, respectively). Biliary decompression by SEMS is associated with longer patency and a reduced number of auxiliary procedures; however, repeated PS insertion still remains the most cost-effective strategy.

  6. Two-step liquid phase microextraction combined with capillary electrophoresis: a new approach to simultaneous determination of basic and zwitterionic compounds.

    PubMed

    Nojavan, Saeed; Moharami, Arezoo; Fakhari, Ali Reza

    2012-08-01

    In this work, a two-step hollow fiber-based liquid-phase microextraction procedure was evaluated for the extraction of the zwitterionic cetirizine (CTZ) and the basic hydroxyzine (HZ) from human plasma. In the first step of extraction, the pH of the sample was adjusted to 5.0 in order to promote liquid-phase microextraction of the zwitterionic CTZ. In the second step, the pH of the sample was increased to 11.0 for extraction of the basic HZ. In this procedure, the extraction times for the first and second steps were 30 and 20 min, respectively. Owing to the high ratio between the volumes of the donor and acceptor phases, CTZ and HZ were enriched by factors of 280 and 355, respectively. The linearity of the analytical method was verified for both compounds in the range of 10-500 ng mL(-1) (R(2) > 0.999). The limit of quantification (S/N = 10) for CTZ and HZ was 10 ng mL(-1), while the limit of detection (S/N = 3) was 3 ng mL(-1) for both compounds. Intraday and interday relative standard deviations (RSDs, n = 6) were in the range of 6.5-16.2%. This procedure enabled CTZ and HZ to be analyzed simultaneously by capillary electrophoresis. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Incorporating deliverable monitor unit constraints into spot intensity optimization in intensity modulated proton therapy treatment planning

    PubMed Central

    Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong

    2014-01-01

    The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
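    A toy two-stage LP in the spirit of enforcing deliverable MU values: stage 1 solves an LP ignoring the MU constraint, stage 2 drops sub-threshold spots and re-solves with a minimum-MU lower bound. The influence matrix, prescription, and the drop-then-reoptimize heuristic are illustrative simplifications of the paper's DSIO model, not a reproduction of it.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(4)
    n_vox, n_spots, mu_min = 40, 25, 2.0
    D = rng.random((n_vox, n_spots)) * 0.5     # fictitious dose-influence matrix
    presc = np.ones(n_vox)                     # prescribed dose per voxel

    def solve(bounds):
        # minimize total MU subject to every voxel reaching the prescription
        res = linprog(c=np.ones(n_spots), A_ub=-D, b_ub=-presc, bounds=bounds)
        return res.x

    # Stage 1: plain SIO-style LP, MU constraint ignored.
    w1 = solve([(0, None)] * n_spots)

    # Stage 2: spots below the deliverable minimum are dropped, the rest are
    # re-optimized with the minimum-MU lower bound enforced.
    bounds = [(0.0, 0.0) if w < mu_min else (mu_min, None) for w in w1]
    w2 = solve(bounds)
    ```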

  8. Efficient mixing scheme for self-consistent all-electron charge density

    NASA Astrophysics Data System (ADS)

    Shishidou, Tatsuya; Weinert, Michael

    2015-03-01

    In standard ab initio density-functional theory calculations, the charge density ρ is gradually updated using the "input" and "output" densities of the current and previous iteration steps. To accelerate the convergence, Pulay mixing has been widely used with great success. It expresses an "optimal" input density ρ_opt and its "residual" R_opt as a linear combination of the densities of the iteration sequence. In large-scale metallic systems, however, the long-range nature of the Coulomb interaction often causes the "charge sloshing" phenomenon and significantly impacts the convergence. Two treatments, represented in reciprocal space, are known to suppress the sloshing: (i) the inverse Kerker metric for the Pulay optimization and (ii) Kerker-type preconditioning in mixing R_opt. In all-electron methods, where the charge density does not have a converging Fourier representation, treatments equivalent or similar to (i) and (ii) have not been described so far. In this work, we show that, by going through the calculation of the Hartree potential, one can accomplish procedures (i) and (ii) without entering reciprocal space. Test calculations are done with a FLAPW method.
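    For contrast, the standard reciprocal-space form of Kerker preconditioning that the paper reproduces in real space can be sketched directly; grid, densities, and parameters below are illustrative.

    ```python
    import numpy as np

    def kerker_mix(rho_in, rho_out, alpha=0.5, q0=1.5, box=10.0):
        """Kerker-preconditioned linear mixing on a periodic grid: damp the
        long-wavelength (small-G) part of the residual that drives charge
        sloshing. Reciprocal-space form shown for clarity; the paper's point
        is that the same effect is obtainable via the Hartree potential,
        without an explicit Fourier representation of the density."""
        n = rho_in.shape[0]
        g = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
        gx, gy, gz = np.meshgrid(g, g, g, indexing="ij")
        g2 = gx**2 + gy**2 + gz**2
        resid = np.fft.fftn(rho_out - rho_in)
        precond = g2 / (g2 + q0**2)      # -> 1 for large G, -> 0 as G -> 0
        precond.flat[0] = 0.0            # explicitly leave total charge fixed
        return rho_in + alpha * np.real(np.fft.ifftn(precond * resid))

    rho_in = np.random.default_rng(5).random((16, 16, 16))
    rho_out = rho_in + 0.1 * np.sin(np.linspace(0, 2 * np.pi, 16))[:, None, None]
    rho_next = kerker_mix(rho_in, rho_out)
    ```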

  9. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method of solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points, improving solution efficiency. The set of nonlinear constraints (named the complicating constraints) that makes solution of the model complex and time consuming is eliminated from step one; the complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by directly solving the complete model in one single step. In all examples, the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful where computation time is a critical factor for obtaining an optimized solution in due time.
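    A minimal sketch of the two-step idea with SciPy: solve the model without the complicating constraint, then warm-start the complete model from that solution. Objective, bounds, and constraint are fictitious stand-ins for the conjunctive-use model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Step 1: solve the simplified model (complicating constraint removed).
    obj = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2
    bounds = [(0.0, 5.0), (0.0, 5.0)]
    step1 = minimize(obj, x0=np.zeros(2), method="SLSQP", bounds=bounds)

    # Step 2: add the complicating constraint back (x*y <= 4 here) and
    # warm-start the complete model from the step-1 solution.
    complicating = {"type": "ineq", "fun": lambda x: 4.0 - x[0] * x[1]}
    step2 = minimize(obj, x0=step1.x, method="SLSQP",
                     bounds=bounds, constraints=[complicating])
    print(step2.x, step2.fun)
    ```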

  10. The concurrent multiplicative-additive approach for gauge-radar/satellite multisensor precipitation estimates

    NASA Astrophysics Data System (ADS)

    Garcia-Pintado, J.; Barberá, G. G.; Erena Arrabal, M.; Castillo, V. M.

    2010-12-01

    Objective analysis schemes (OAS), also called "successive correction methods" or "observation nudging", have been proposed for multisensor precipitation estimation combining remote sensing data (meteorological radar or satellite) with data from ground-based raingauge networks. However, in contrast to the more complex geostatistical approaches, the OAS techniques for this use are not optimized. On the other hand, geostatistical techniques ideally require, at the least, modelling the covariance from the rain gauge data at every time step evaluated, which commonly cannot be done soundly. Here, we propose a new procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) for operational rainfall estimation using rain gauges and meteorological radar, which does not require explicit modelling of spatial covariances. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on the OAS, whose purpose is to estimate a filtered spatial field of the variable of interest through successive corrections of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at the ground level. The approach considers radar estimates as background a priori information (first guess), so that nudging to observations (gauges) may be relaxed smoothly to the first guess, with the relaxation shape obtained from the sequential optimization. The procedure is suited to relatively sparse rain gauge networks. To demonstrate the procedure, six storms are analyzed at hourly steps over 10,663 km². Results generally indicated improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, an OAS spatially variable adjustment with multiplicative factors, ordinary cokriging, and kriging with external drift. In theory, it could be equally applicable to gauge-satellite estimates and other hydrometeorological variables.
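
    The building block on which CMA-OAS rests, a plain successive-correction objective analysis, can be sketched as follows; the grid, gauge values, and shrinking Gaussian radii are invented toy data, and the multiplicative-additive bias decomposition of the paper is omitted.

    ```python
    import numpy as np

    def objective_analysis(grid_xy, background, gauge_xy, gauge_val,
                           radii=(20.0, 10.0, 5.0)):
        """Successive correction of gauge residuals with a Gaussian kernel."""
        analysis = background.copy()
        # Squared distances from every grid node to every gauge (n_grid x n_gauge).
        d2 = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2) ** 2
        nearest = np.argmin(d2, axis=0)          # grid node closest to each gauge
        for r in radii:                          # decreasing influence radius
            resid = gauge_val - analysis[nearest]
            w = np.exp(-d2 / (2.0 * r ** 2))
            analysis += (w @ resid) / (w.sum(axis=1) + 1e-12)
        return analysis

    # Toy field: radar first guess of 5 mm at 100 nodes, corrected by 4 gauges.
    rng = np.random.default_rng(2)
    grid = rng.uniform(0, 100, size=(100, 2))
    gauges = rng.uniform(0, 100, size=(4, 2))
    field = objective_analysis(grid, np.full(100, 5.0), gauges,
                               np.array([8.0, 6.5, 4.0, 9.0]))
    ```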

  11. Digital adaptive flight controller development

    NASA Technical Reports Server (NTRS)

    Kaufman, H.; Alag, G.; Berry, P.; Kotob, S.

    1974-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Two designs are described for an example aircraft. Each of these designs uses a weighted least squares procedure to identify parameters defining the dynamics of the aircraft. The two designs differ in the way in which control law parameters are determined. One uses the solution of an optimal linear regulator problem to determine these parameters while the other uses a procedure called single stage optimization. Extensive simulation results and analysis leading to the designs are presented.
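
    The identification step can be illustrated with a batch weighted least squares fit of a one-step discrete-time model; the toy system, regressors, and exponential forgetting weights below are assumptions, not the report's aircraft dynamics.

    ```python
    import numpy as np

    def weighted_least_squares(X, y, w):
        """theta minimizing sum_i w_i * (y_i - x_i . theta)^2."""
        W = np.diag(w)
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

    # Identify y[k] = a*y[k-1] + b*u[k-1] from noisy simulated data.
    rng = np.random.default_rng(3)
    u = rng.normal(size=200)
    y = np.zeros(200)
    for k in range(1, 200):
        y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()

    X = np.column_stack([y[:-1], u[:-1]])
    w = 0.98 ** np.arange(199)[::-1]     # weight recent samples more
    theta = weighted_least_squares(X, y[1:], w)
    print("identified [a, b] ~", np.round(theta, 3))
    ```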

  12. High-order time-marching reinitialization for regional level-set functions

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-02-01

    In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function to the level-set function, together with a high-order two-step reinitialization method that combines a closest-point-finding procedure with the HJ-WENO scheme. The convergence failure of the closest-point-finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. The reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.

  13. The Grad-Shafranov Reconstruction of Toroidal Magnetic Flux Ropes: Method Development and Benchmark Studies

    NASA Astrophysics Data System (ADS)

    Hu, Qiang

    2017-09-01

    We develop an approach of the Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium, as widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional F that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ2 minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.

  14. Optimization applications in aircraft engine design and test

    NASA Technical Reports Server (NTRS)

    Pratt, T. K.

    1984-01-01

    Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.

  15. One step DNA assembly for combinatorial metabolic engineering.

    PubMed

    Coussement, Pieter; Maertens, Jo; Beauprez, Joeri; Van Bellegem, Wouter; De Mey, Marjan

    2014-05-01

    The rapid and efficient assembly of multi-step metabolic pathways for generating microbial strains with desirable phenotypes is a critical procedure for metabolic engineering, and remains a significant challenge in synthetic biology. Although several DNA assembly methods have been developed and applied for metabolic pathway engineering, many of them are limited in their suitability for combinatorial pathway assembly. The introduction of transcriptional (promoter), translational (ribosome binding site (RBS)) and enzyme (mutant gene) variability to modulate pathway expression levels is essential for generating balanced metabolic pathways and maximizing the productivity of a strain. We report a novel, highly reliable and rapid single strand assembly (SSA) method for pathway engineering. The method was successfully optimized and applied to create constructs containing promoter, RBS and/or mutant enzyme libraries. To demonstrate its efficiency and reliability, the method was applied to fine-tune multi-gene pathways. Two promoter libraries were simultaneously introduced in front of two target genes, enabling orthogonal expression as demonstrated by principal component analysis. This shows that SSA will increase our ability to tune multi-gene pathways at all control levels for the biotechnological production of complex metabolites, achievable through the combinatorial modulation of transcription, translation and enzyme activity. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  16. Synthesis and Process Optimization of Electrospun PEEK-Sulfonated Nanofibers by Response Surface Methodology

    PubMed Central

    Boaretti, Carlo; Roso, Martina; Lorenzetti, Alessandra; Modesti, Michele

    2015-01-01

    In this study electrospun nanofibers of partially sulfonated polyether ether ketone have been produced as a preliminary step for a possible development of composite proton exchange membranes for fuel cells. Response surface methodology has been employed for the modelling and optimization of the electrospinning process, using a Box-Behnken design. The investigation, based on a second order polynomial model, has been focused on the analysis of the effect of both process (voltage, tip-to-collector distance, flow rate) and material (sulfonation degree) variables on the mean fiber diameter. The final model has been verified by a series of statistical tests on the residuals and validated by a comparison procedure of samples at different sulfonation degrees, realized according to optimized conditions, for the production of homogeneous thin nanofibers. PMID:28793427
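
    A second-order response-surface fit of this kind reduces to ordinary least squares on an augmented design matrix (intercept, linear, interaction, and squared terms). The sketch below uses invented coded factor levels and simulated diameters in place of the Box-Behnken runs.

    ```python
    import numpy as np
    from itertools import combinations

    def quadratic_design_matrix(X):
        """Full second-order model: 1, x_j, x_i*x_j, x_j^2."""
        k = X.shape[1]
        cols = [np.ones(len(X))]
        cols += [X[:, j] for j in range(k)]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
        cols += [X[:, j] ** 2 for j in range(k)]
        return np.column_stack(cols)

    # Invented stand-ins for (voltage, distance, flow rate) runs and the
    # measured mean fiber diameter (nm).
    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, size=(15, 3))         # coded factor levels
    y = (300 + 40 * X[:, 0] - 25 * X[:, 2] + 15 * X[:, 0] * X[:, 1]
         + 10 * X[:, 1] ** 2 + rng.normal(scale=5, size=15))

    beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    predict = lambda x: float(quadratic_design_matrix(np.atleast_2d(x)) @ beta)
    print("predicted diameter at the center point:", predict([0.0, 0.0, 0.0]))
    ```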

  17. H(2)- and H(infinity)-design tools for linear time-invariant systems

    NASA Technical Reports Server (NTRS)

    Ly, Uy-Loi

    1989-01-01

    Recent advances in optimal control have brought design techniques based on optimization of H(2) and H(infinity) norm criteria closer to being attractive alternatives to single-loop design methods for linear time-invariant systems. Significant steps forward in this technology are a deeper understanding of the performance and robustness issues of these design procedures and means to perform design trade-offs. However, acceptance of the technology is hindered by the lack of convenient design tools for exercising these powerful multivariable techniques while still allowing single-loop design formulations. Presented is a unique computer tool for designing arbitrary low-order linear time-invariant controllers that encompasses both performance and robustness issues via the familiar H(2) and H(infinity) norm optimization. Application to disturbance rejection design for a commercial transport is demonstrated.

  18. Synthesis and Process Optimization of Electrospun PEEK-Sulfonated Nanofibers by Response Surface Methodology.

    PubMed

    Boaretti, Carlo; Roso, Martina; Lorenzetti, Alessandra; Modesti, Michele

    2015-07-07

    In this study electrospun nanofibers of partially sulfonated polyether ether ketone have been produced as a preliminary step for a possible development of composite proton exchange membranes for fuel cells. Response surface methodology has been employed for the modelling and optimization of the electrospinning process, using a Box-Behnken design. The investigation, based on a second order polynomial model, has been focused on the analysis of the effect of both process (voltage, tip-to-collector distance, flow rate) and material (sulfonation degree) variables on the mean fiber diameter. The final model has been verified by a series of statistical tests on the residuals and validated by a comparison procedure of samples at different sulfonation degrees, realized according to optimized conditions, for the production of homogeneous thin nanofibers.

  19. SN-38 loading capacity of hydrophobic polymer blend nanoparticles: formulation, optimization and efficacy evaluation.

    PubMed

    Dimchevska, Simona; Geskovski, Nikola; Petruševski, Gjorgji; Chacorovska, Marina; Popeski-Dimovski, Riste; Ugarkovic, Sonja; Goracinova, Katerina

    2017-03-01

    One of the most important problems in nanoencapsulation of extremely hydrophobic drugs is poor drug loading due to rapid drug crystallization outside the polymer core. The effort to use nanoprecipitation, as a simple one-step procedure with good reproducibility and FDA-approved polymers like poly(lactic-co-glycolic acid) (PLGA) and polycaprolactone (PCL), will only potentiate this issue. Considering that drug loading is one of the key defining characteristics, in this study we examined whether a nanoparticle (NP) core composed of two hydrophobic polymers would provide increased drug loading for 7-ethyl-10-hydroxy-camptothecin (SN-38), relative to NPs prepared using the individual polymers. A D-optimal design was applied to optimize the PLGA/PCL ratio in the polymer blend and the mode of addition of the amphiphilic copolymer Lutrol® F127, in order to maximize SN-38 loading and obtain NPs with acceptable size for passive tumor targeting. Drug/polymer and polymer/polymer interaction analysis pointed to a high degree of compatibility and miscibility between the two hydrophobic polymers, providing a core configuration with higher drug loading capacity. Toxicity studies outlined the biocompatibility of the blank NPs. Increased in vitro efficacy of drug-loaded NPs compared with the free drug was confirmed by growth inhibition studies using the SW-480 cell line. Additionally, the optimized NP formulation showed a very promising blood circulation profile with an elimination half-time of 7.4 h.

  20. A twin purification/enrichment procedure based on two versatile solid/liquid extracting agents for efficient uptake of ultra-trace levels of lorazepam and clonazepam from complex bio-matrices.

    PubMed

    Hemmati, Maryam; Rajabi, Maryam; Asghari, Alireza

    2017-11-17

    In this research work, two consecutive dispersive solid/liquid-phase microextractions based on efficient extraction media were developed for the effective and clean pre-concentration of clonazepam and lorazepam from complicated bio-samples. The magnetic nature of the proposed nanoadsorbent made the clean-up step convenient and swift (~5 min), followed by further enrichment via a highly effective and rapid emulsification microextraction process (~4 min) based on a deep eutectic solvent (DES). Finally, the instrumental analysis step was carried out via high performance liquid chromatography with ultraviolet detection. The solid phase used was a magnetic nanocomposite termed polythiophene-sodium dodecyl benzene sulfonate/iron oxide (PTh-DBSNa/Fe3O4), easily and cost-effectively prepared by the co-precipitation method followed by an efficient in situ sonochemical oxidative polymerization approach. Characterization techniques, viz. FESEM, XRD, and EDX, confirmed the favorable physico-chemical properties of this nanosorbent. The liquid extraction agent, a DES based on biodegradable choline chloride, offered high efficiency, acceptable safety, low cost, and a facile and mild synthesis route. The parameters involved in this hyphenated procedure, evaluated via central composite design (CCD), showed that the best extraction conditions consisted of an initial pH value of 7.2, 17 mg of the PTh-DBSNa/Fe3O4 nanocomposite, 20 air-agitation cycles (first step), 245 μL of methanol, 250 μL of DES, 440 μL of THF, and 8 air-agitation cycles (second step). Under the optimal conditions, the studied drugs could be accurately determined over wide linear dynamic ranges (LDRs) of 4.0-3000 ng mL-1 and 2.0-2000 ng mL-1 for clonazepam and lorazepam, respectively, with limits of detection (LODs) ranging from 0.7 to 1.0 ng mL-1. The enrichment factor (EF) and percentage extraction recovery (%ER) values were found to be 75 and 57% for clonazepam and 56 and 42% for lorazepam at the spiked level of 75.0 ng mL-1, with proper repeatability (relative standard deviation (RSD) values below 5.9%, n=3). These analytical features provide accurate drug analyses at therapeutically low levels, below potentially toxic domains, confirming the proper purification/enrichment performance of the proposed microextraction procedure. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. SU-F-T-250: What Does It Take to Correctly Assess the High Failure Modes of an Advanced Radiotherapy Procedure Such as Stereotactic Body Radiation Therapy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, D; Vile, D; Rosu, M

    Purpose: To assess the implementation of the risk-based methodology of TG-100 for optimizing quality management and patient safety procedures for stereotactic body radiation therapy (SBRT). Methods: A detailed process map of the SBRT treatment procedure was generated by a team of three physicists with varying clinical experience at our institution to assess the potential high-risk failure modes. The probabilities of occurrence (O), severity (S) and detectability (D) for each potential failure mode in each step of the process map were assigned by these individuals independently on a scale from 1 to 10. The risk priority numbers (RPN) were computed and analyzed. The 30 highest-ranked potential failure modes from each physicist's analysis were then compared. Results: The RPN values assessed by the three physicists ranged from 30 to 300. The magnitudes of the RPN values from each physicist were different, and there was no concordance in the highest RPN values recorded by the three physicists independently. The 10 highest RPN values belonged to sub-steps of CT simulation, contouring and delivery in the SBRT process map. For these 10 highest RPN values, at least two physicists, irrespective of their length of experience, had concordance, but no general conclusions emerged. Conclusion: This study clearly shows that the risk-based assessment of a clinical process map requires a great deal of preparation, group discussion, and participation by all stakeholders. A single group, albeit physicists, cannot effectively implement the risk-based methodology proposed by TG-100. It should be a team effort in which the physicists can certainly play the leading role. This also corroborates the TG-100 recommendation that risk-based assessment of clinical processes is a multidisciplinary team effort.
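
    The RPN bookkeeping itself is straightforward to reproduce; the failure modes and O/S/D scores below are invented examples, not the study's data.

    ```python
    # TG-100-style scoring: occurrence (O), severity (S), detectability (D),
    # each on a 1-10 scale, ranked by RPN = O * S * D.
    failure_modes = {
        "wrong CT protocol at simulation": (4, 8, 6),
        "target contoured on wrong image set": (3, 9, 5),
        "couch shift entered with wrong sign": (2, 10, 4),
    }

    rpn = {mode: o * s * d for mode, (o, s, d) in failure_modes.items()}
    for mode, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
        print(f"RPN {value:4d}  {mode}")
    ```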

  2. Ocean regional circulation model sensitivity to the resolution of the lateral boundary conditions

    NASA Astrophysics Data System (ADS)

    Pham, Van Sy; Hwang, Jin Hwan

    2017-04-01

    Dynamical downscaling with nested regional oceanographic models is an effective approach for operational coastal weather forecasting and for projecting long-term climate on the ocean. However, nesting procedures introduce unwanted errors into the dynamic downscaling because of differences in numerical grid sizes and updating steps. Such unavoidable errors restrict the application of Ocean Regional Circulation Models (ORCMs) in both short-term forecasts and long-term projections. The current work identifies the effects of errors induced by computational limitations during nesting procedures on the downscaled results of the ORCMs. The errors are quantitatively evaluated, for each error source and its characteristics, by the Big-Brother Experiment (BBE). The BBE separates the identified errors from each other and quantitatively assesses the uncertainties by employing the same model for both the nesting and the nested simulations. Here, we focus on the errors arising from the two main issues associated with nesting procedures: the differences in spatial grids and the temporal updating steps. After running the diverse cases of the BBE separately, a Taylor diagram was adopted to analyze the results and suggest an optimization in terms of grid size, updating period and domain size. Key words: lateral boundary condition, error, ocean regional circulation model, Big-Brother Experiment. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Development of integrated estuarine management system" and a National Research Foundation of Korea (NRF) Grant (No. 2015R1A5A 7037372) funded by MSIP of Korea. The authors thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.

  3. Enrichment of human bone marrow aspirates for low-density mononuclear cells using a haemonetics discontinuous blood cell separator.

    PubMed

    Raijmakers, R; de Witte, T; Koekman, E; Wessels, J; Haanen, C

    1986-01-01

    Isopycnic density flotation centrifugation has been proven to be a suitable technique for enriching bone marrow aspirates for clonogenic cells on a small scale. We have tested a Haemonetics semicontinuous blood cell separator in order to process large volumes of bone marrow with minimal bone marrow manipulation. The efficacy of isopycnic density flotation was tested in one-step and two-step procedures. Both procedures showed a recovery of about 20% of the nucleated cells and 1-2% of the erythrocytes. The enrichment of clonogenic cells in the one-step procedure appeared superior to the two-step enrichment, in which buffy coat cells were separated first. The recovery of clonogenic cells was 70 and 50%, respectively. The repopulation capacity of the low-density cell fraction containing the clonogenic cells was excellent after autologous reinfusion (6 cases) and allogeneic bone marrow transplantation (3 cases). Fast enrichment of large volumes of bone marrow aspirates with the low-density cells containing the clonogenic cells by isopycnic density flotation centrifugation can be done safely using a Haemonetics blood cell separator.

  4. Computer-based planning of optimal donor sites for autologous osseous grafts

    NASA Astrophysics Data System (ADS)

    Krol, Zdzislaw; Chlebiej, Michal; Zerfass, Peter; Zeilhofer, Hans-Florian U.; Sader, Robert; Mikolajczak, Pawel; Keeve, Erwin

    2002-05-01

    Bone graft surgery is often necessary for reconstruction of craniofacial defects after trauma, tumor, infection or congenital malformation. In this operative technique the removed or missing bone segment is filled with a bone graft. The mainstay of craniofacial reconstruction rests with the replacement of the defective bone by autogenous bone grafts. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention is required. The major problem is to determine as accurately as possible the donor site from which the graft should be dissected and to define the shape of the desired transplant. A computer-aided method for semi-automatic selection of optimal donor sites for autografts in craniofacial reconstructive surgery has been developed. The non-automatic step of graft design and constraint setting is followed by a fully automatic procedure to find the best fitting position. In extension to preceding work, a new optimization approach based on the Levenberg-Marquardt method has been implemented and embedded into our computer-based surgical planning system. Once the pre-processing step has been performed, this new technique enables selection of the optimal donor site in less than one minute. The method has been applied during the surgical planning step in more than 20 cases. The postoperative observations have shown that functional results, such as speech and chewing ability as well as restoration of bony continuity, were clearly better compared with conventionally planned operations. Moreover, in most cases the duration of the surgical interventions was distinctly reduced.

  5. Adjoint Techniques for Topology Optimization of Structures Under Damage Conditions

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.; Haftka, Raphael T.

    2000-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation (Haftka and Gurdal, 1992) in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented for both bar and plate elements, including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. The cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers (Akgun et al., 1998a and 1999). It is desirable to optimize the topology of an aerospace structure subject to a large number of damage scenarios so that a damage-tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages (Haftka et al., 1983). A common method for topology optimization is that of compliance minimization (Bendsoe, 1995), which has not been used for damage-tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this (Akgun et al., 1998b). SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
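
    The SMW reuse trick mentioned at the end can be sketched directly: a local damage appears as a low-rank change U C V to the stiffness matrix, so the damaged solve costs only a small k x k system once A has been factored. The rank-2 update below is an invented example.

    ```python
    import numpy as np

    def smw_solve(A_inv, U, C, V, b):
        """Solve (A + U C V) x = b reusing A's inverse (or factorization).

        Sherman-Morrison-Woodbury:
        (A + UCV)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
        """
        AinvU = A_inv @ U                               # n x k
        small = np.linalg.inv(C) + V @ AinvU            # k x k
        y = A_inv @ b
        return y - AinvU @ np.linalg.solve(small, V @ y)

    # Check against a direct solve on a random 6x6 system, rank-2 "damage".
    rng = np.random.default_rng(5)
    A = rng.normal(size=(6, 6)) + 6 * np.eye(6)
    U, V = rng.normal(size=(6, 2)), rng.normal(size=(2, 6))
    C = np.diag([0.5, -0.3])
    b = rng.normal(size=6)
    assert np.allclose(smw_solve(np.linalg.inv(A), U, C, V, b),
                       np.linalg.solve(A + U @ C @ V, b))
    ```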

  6. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    NASA Astrophysics Data System (ADS)

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; Singh, Balwinder; Chen, Xingyuan; Edwards, Jim

    2017-02-01

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided using the Community Atmosphere Model (CAM) version 5.3 that the proposed procedure can be used to distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
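
    The pass/fail logic of such a test can be sketched as an ensemble comparison against a tolerance representing the reference model's known time-step sensitivity; the arrays and threshold below are invented, and the published test's statistics are more elaborate than this.

    ```python
    import numpy as np

    def tsc_test(test_runs, ref_runs, threshold):
        """Fail when the mean RMS difference between short ensembles from the
        modified and reference codes exceeds the time-step sensitivity."""
        rmsd = np.sqrt(np.mean((test_runs - ref_runs) ** 2, axis=1))
        return "PASS" if rmsd.mean() <= threshold else "FAIL"

    rng = np.random.default_rng(6)
    ref = rng.normal(size=(12, 1000))                    # 12-member ensemble
    print(tsc_test(ref + 1e-14 * rng.normal(size=ref.shape), ref, 1e-6))  # PASS
    print(tsc_test(ref + 1e-3 * rng.normal(size=ref.shape), ref, 1e-6))   # FAIL
    ```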

  7. Infiltration/cure modeling of resin transfer molded composite materials using advanced fiber architectures

    NASA Technical Reports Server (NTRS)

    Loos, Alfred C.; Weideman, Mark H.; Long, Edward R., Jr.; Kranbuehl, David E.; Kinsley, Philip J.; Hart, Sean M.

    1991-01-01

    A model was developed which can be used to simulate infiltration and cure of textile composites by resin transfer molding. Fabric preforms were resin infiltrated and cured using model generated optimized one-step infiltration/cure protocols. Frequency dependent electromagnetic sensing (FDEMS) was used to monitor in situ resin infiltration and cure during processing. FDEMS measurements of infiltration time, resin viscosity, and resin degree of cure agreed well with values predicted by the simulation model. Textile composites fabricated using a one-step infiltration/cure procedure were uniformly resin impregnated and void free. Fiber volume fraction measurements by the resin digestion method compared well with values predicted using the model.

  8. Biomass conversion determined via fluorescent cellulose decay assay.

    PubMed

    Wischmann, Bente; Toft, Marianne; Malten, Marco; McFarland, K C

    2012-01-01

    An example of a rapid microtiter plate assay (fluorescence cellulose decay, FCD) that determines the conversion of cellulose in a washed biomass substrate is reported. The conversion, as verified by HPLC, is shown to correlate with the monitored FCD in the assay. The FCD assay activity correlates with the performance of multicomponent enzyme mixtures and is thus useful for the biomass industry. The development of an optimized setup of the 96-well microtiter plate is described and is used to test a model that shortens the assay incubation time from 72 to 24 h. A step-by-step procedure of the final assay is described. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Integral-geometry characterization of photobiomodulation effects on retinal vessel morphology

    PubMed Central

    Barbosa, Marconi; Natoli, Riccardo; Valter, Kriztina; Provis, Jan; Maddess, Ted

    2014-01-01

    The morphological characterization of quasi-planar structures represented by gray-scale images is challenging when object identification is sub-optimal due to registration artifacts. We propose two alternative procedures that enhance object identification in the integral-geometry morphological image analysis (MIA) framework. The first variant streamlines the framework by introducing an active contours segmentation process whose time step is recycled as a multi-scale parameter. In the second variant, we used the refined object identification produced by the first variant to perform the standard MIA with the exact dilation radius as the multi-scale parameter. Using this enhanced MIA we quantify the extent of vaso-obliteration in oxygen-induced retinopathic vascular growth, the preventative effect (by photobiomodulation) of exposure to near-infrared light (NIR, 670 nm) during tissue development, and the lack of adverse effects due to exposure to NIR light. PMID:25071966

  10. Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.

    1998-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  11. FMRQ-A Multiagent Reinforcement Learning Algorithm for Fully Cooperative Tasks.

    PubMed

    Zhang, Zhen; Zhao, Dongbin; Gao, Junwei; Wang, Dongqing; Dai, Yujie

    2017-06-01

    In this paper, we propose a multiagent reinforcement learning algorithm for fully cooperative tasks, called frequency of the maximum reward Q-learning (FMRQ). FMRQ aims to achieve one of the optimal Nash equilibria so as to optimize the performance index in multiagent systems. The frequency of obtaining the highest global immediate reward, instead of the immediate reward itself, is used as the reinforcement signal. With FMRQ, each agent does not need to observe the other agents' actions and only shares its state and reward at each step. We validate FMRQ through case studies of repeated games: four cases of two-player two-action games and one case of a three-player two-action game. It is demonstrated that FMRQ can converge to one of the optimal Nash equilibria in these cases. Moreover, comparison experiments on tasks with multiple states and finite steps are conducted: one is box-pushing and the other is a distributed sensor network problem. Experimental results show that the proposed algorithm outperforms others with higher performance.
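
    A heavily simplified, stateless rendition of the idea, scoring each action by how often it coincided with the highest global reward observed so far, might look like the sketch below; the 2x2 payoff matrix and exploration schedule are invented, and the published algorithm additionally handles states and multi-step tasks.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    payoff = np.array([[10, 0], [0, 5]])   # fully cooperative 2x2 game
    n_actions, episodes, eps = 2, 5000, 0.1

    counts = np.zeros((2, n_actions))      # times agent i took action a
    hits = np.zeros((2, n_actions))        # ... and the max reward occurred
    r_max = -np.inf
    for _ in range(episodes):
        freq = hits / np.maximum(counts, 1)          # reinforcement signal
        acts = [a if rng.random() > eps else int(rng.integers(n_actions))
                for a in freq.argmax(axis=1)]
        r = payoff[acts[0], acts[1]]
        r_max = max(r_max, r)
        for i in (0, 1):
            counts[i, acts[i]] += 1
            hits[i, acts[i]] += float(r == r_max)

    print("learned joint action:", freq.argmax(axis=1))  # expect [0, 0]
    ```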

  12. The optimization of essential oils supercritical CO2 extraction from Lavandula hybrida through static-dynamic steps procedure and semi-continuous technique using response surface method

    PubMed Central

    Kamali, Hossein; Aminimoghadamfarouj, Noushin; Golmakani, Ebrahim; Nematollahi, Alireza

    2015-01-01

    Aim: The aim of this study was to examine and evaluate the crucial variables in the essential oil extraction process from Lavandula hybrida through static-dynamic and semi-continuous techniques using the response surface method. Materials and Methods: Essential oil components were extracted from Lavandula hybrida (Lavandin) flowers using supercritical carbon dioxide via a static-dynamic steps (SDS) procedure and a semi-continuous (SC) technique. Results: Using the response surface method, the optimum extraction yield (4.768%) was obtained via SDS at 108.7 bar, 48.5°C, 120 min (static: 8×15 min), 24 min (dynamic: 8×3 min), in contrast to the 4.620% extraction yield for the SC technique at 111.6 bar, 49.2°C, 14 min (static), 121.1 min (dynamic). Conclusion: The results indicated that a substantial reduction (81.56%) in solvent usage (kg CO2/g oil) is observed with the SDS method versus the conventional SC method. PMID:25598636

  13. Automatic procedures for the synthesis of difficult peptides using oxyma as activating reagent: A comparative study on the use of bases and on different deprotection and agitation conditions.

    PubMed

    Caporale, A; Doti, N; Monti, A; Sandomenico, A; Ruvo, M

    2018-04-01

    Solid-Phase Peptide Synthesis (SPPS) is a rapid and efficient methodology for the chemical synthesis of peptides and small proteins. However, the assembly of peptide sequences classified as "difficult" poses severe synthetic problems in SPPS, owing to the extensive aggregation of growing peptide chains, which often leads to synthesis failure. In this framework, we have investigated the impact of different synthetic procedures on the yield and final purity of three well-known "difficult peptides" prepared using oxyma as the additive for the coupling steps. In particular, we have comparatively investigated the use of piperidine and morpholine/DBU as deprotection reagents and the addition of DIPEA, collidine and N-methylmorpholine as bases to the coupling reagent. Moreover, the effect of different agitation modalities during the acylation reactions has been investigated. The data obtained represent a step forward in optimizing strategies for the synthesis of "difficult peptides". Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Platelet-rich plasma differs according to preparation method and human variability.

    PubMed

    Mazzocca, Augustus D; McCarthy, Mary Beth R; Chowaniec, David M; Cote, Mark P; Romeo, Anthony A; Bradley, James P; Arciero, Robert A; Beitzel, Knut

    2012-02-15

    Varying concentrations of blood components in platelet-rich plasma preparations may contribute to the variable results seen in recently published clinical studies. The purposes of this investigation were (1) to quantify the level of platelets, growth factors, red blood cells, and white blood cells in so-called one-step (clinically used commercial devices) and two-step separation systems and (2) to determine the influence of three separate blood draws on the resulting components of platelet-rich plasma. Three different platelet-rich plasma (PRP) separation methods (on blood samples from eight subjects with a mean age [and standard deviation] of 31.6 ± 10.9 years) were used: two single-spin processes (PRPLP and PRPHP) and a double-spin process (PRPDS) were evaluated for concentrations of platelets, red and white blood cells, and growth factors. Additionally, the effect of three repetitive blood draws on platelet-rich plasma components was evaluated. The content and concentrations of platelets, white blood cells, and growth factors for each method of separation differed significantly. All separation techniques resulted in a significant increase in platelet concentration compared with native blood. Platelet and white blood-cell concentrations of the PRPHP procedure were significantly higher than platelet and white blood-cell concentrations produced by the so-called single-step PRPLP and the so-called two-step PRPDS procedures, although significant differences between PRPLP and PRPDS were not observed. Comparing the results of the three blood draws with regard to the reliability of platelet number and cell counts, wide variations of intra-individual numbers were observed. Single-step procedures are capable of producing sufficient amounts of platelets for clinical usage. Within the evaluated procedures, platelet numbers and numbers of white blood cells differ significantly. The intra-individual results of platelet-rich plasma separations showed wide variations in platelet and cell numbers as well as levels of growth factors regardless of separation method.

  15. Insights on beer volatile profile: Optimization of solid-phase microextraction procedure taking advantage of the comprehensive two-dimensional gas chromatography structured separation.

    PubMed

    Martins, Cátia; Brandão, Tiago; Almeida, Adelaide; Rocha, Sílvia M

    2015-06-01

    The aroma profile of beer is crucial for its quality and consumer acceptance, and is modulated by a network of variables. The main goal of this study was to optimize the solid-phase microextraction experimental parameters (fiber coating, extraction temperature, and time), taking advantage of the structured separation afforded by comprehensive two-dimensional gas chromatography. As far as we know, this is the first time that this approach has been applied to the untargeted and comprehensive study of the beer volatile profile. Decarbonation is a critical sample preparation step, and two conditions were tested, static and under ultrasonic treatment; the static condition was selected. Considering the conditions that promoted the highest extraction efficiency, the following parameters were selected: poly(dimethylsiloxane)/divinylbenzene fiber coating, at 40°C, using 10 min of pre-equilibrium followed by 30 min of extraction. Around 700-800 compounds per sample were detected, corresponding to the beer volatile profile. An exploratory application was performed with commercial beers, using a set of 32 compounds with reported impact on beer aroma, in which different patterns can be observed through the structured chromatogram. In summary, the obtained results emphasize the potential of this methodology for an in-depth study of the volatile molecular composition of beer. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: first, an automatic coarse detection step is applied based on a statistical hypothesis test to initialize the classification. The original analytical formula proposed for the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in the compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Second, a post-classification step is introduced to optimize the noisy classification result of the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function; the optimal classification based on the MRF corresponds to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009, using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
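
    The incomplete-beta form is convenient because, for n-look intensity means x and y under the no-change hypothesis, the ratio q = x/(x+y) follows a Beta(n, n) distribution, so the tail probability is a single call to a built-in routine. The two-sided decision rule below is an illustrative reading, not necessarily the paper's exact statistic.

    ```python
    import numpy as np
    from scipy.special import betainc

    def no_change_p_value(mean_x, mean_y, n_looks):
        """CFAR-style ratio test via the regularized incomplete beta I_q(n, n)."""
        q = mean_x / (mean_x + mean_y)
        p = betainc(n_looks, n_looks, q)        # lower-tail probability
        return 2.0 * np.minimum(p, 1.0 - p)     # two-sided "no change" p-value

    print(no_change_p_value(1.0, 1.05, n_looks=4))  # close to 1 -> no change
    print(no_change_p_value(1.0, 8.00, n_looks=4))  # close to 0 -> change
    ```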

  17. Procedures and Standards for Residential Ventilation System Commissioning: An Annotated Bibliography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratton, J. Chris; Wray, Craig P.

    2013-04-01

    Beginning with the 2008 version of Title 24, new homes in California must comply with ANSI/ASHRAE Standard 62.2-2007 requirements for residential ventilation. Where installed, the limited data available indicate that mechanical ventilation systems do not always perform optimally or even as many codes and forecasts predict. Commissioning such systems when they are installed or during subsequent building retrofits is a step towards eliminating deficiencies and optimizing the tradeoff between energy use and acceptable IAQ. Work funded by the California Energy Commission about a decade ago at Berkeley Lab documented procedures for residential commissioning, but did not focus on ventilation systems. Since then, standards and approaches for commissioning ventilation systems have been an active area of work in Europe. This report describes our efforts to collect new literature on commissioning procedures and to identify information that can be used to support the future development of residential-ventilation-specific procedures and standards. We recommend that a standardized commissioning process and a commissioning guide for practitioners be developed, along with a combined energy and IAQ benefit assessment standard and tool, and a diagnostic guide for estimating continuous pollutant emission rates of concern in residences (including a database that lists emission test data for commercially-available labeled products).

  18. Effort versus Reward: Preparing Samples for Fungal Community Characterization in High-Throughput Sequencing Surveys of Soils

    PubMed Central

    Song, Zewei; Schlatter, Dan; Kennedy, Peter; Kinkel, Linda L.; Kistler, H. Corby; Nguyen, Nhu; Bates, Scott T.

    2015-01-01

    Next generation fungal amplicon sequencing is being used with increasing frequency to study fungal diversity in various ecosystems; however, the influence of sample preparation on the characterization of fungal community is poorly understood. We investigated the effects of four procedural modifications to library preparation for high-throughput sequencing (HTS). The following treatments were considered: 1) the amount of soil used in DNA extraction, 2) the inclusion of additional steps (freeze/thaw cycles, sonication, or hot water bath incubation) in the extraction procedure, 3) the amount of DNA template used in PCR, and 4) the effect of sample pooling, either physically or computationally. Soils from two different ecosystems in Minnesota, USA, one prairie and one forest site, were used to assess the generality of our results. The first three treatments did not significantly influence observed fungal OTU richness or community structure at either site. Physical pooling captured more OTU richness compared to individual samples, but total OTU richness at each site was highest when individual samples were computationally combined. We conclude that standard extraction kit protocols are well optimized for fungal HTS surveys, but because sample pooling can significantly influence OTU richness estimates, it is important to carefully consider the study aims when planning sampling procedures. PMID:25974078

  19. Building America's Industrial Revolution: The Boott Cotton Mills of Lowell, Massachusetts. Teaching with Historic Places.

    ERIC Educational Resources Information Center

    Stowell, Stephen

    1995-01-01

    Presents a high school unit about the U.S. Industrial Revolution featuring the Boott Cotton Mills of Lowell, Massachusetts. Includes student objectives, step-by-step instructional procedures, and discussion questions. Provides two maps, five illustrations, one photograph, and three student readings. (ACM)

  20. Dual Adaptive Filtering by Optimal Projection Applied to Filter Muscle Artifacts on EEG and Comparative Study

    PubMed Central

    Peyrodie, Laurent; Szurhaj, William; Bolo, Nicolas; Pinti, Antonio; Gallois, Philippe

    2014-01-01

    Muscle artifacts constitute one of the major problems in electroencephalogram (EEG) examinations, particularly for the diagnosis of epilepsy, where pathological rhythms occur within the same frequency bands as those of artifacts. This paper proposes the use of dual adaptive filtering by optimal projection (DAFOP) to automatically remove artifacts while preserving true cerebral signals. DAFOP is a two-step method. The first step consists in applying the common spatial pattern (CSP) method to two frequency windows to identify the slowest components, which are considered to be cerebral sources. The two frequency windows are defined by optimizing convolutional filters. The second step consists in using a regression method to reconstruct the signal independently within various frequency windows. This method was evaluated by two neurologists on a selection of 114 pages with muscle artifacts, from 20 clinical recordings of awake and sleeping adults presenting pathological signals and epileptic seizures. A blind comparison was then conducted with the canonical correlation analysis (CCA) method and conventional low-pass filtering at 30 Hz. The filtering rate was 84.3% for muscle artifacts with a 6.4% reduction of cerebral signals, even for the fastest waves. DAFOP was found to be significantly more efficient than CCA and 30 Hz filtering. The DAFOP method is fast and automatic and can be easily used in clinical EEG recordings. PMID:25298967

  1. Earth As An Unstructured Mesh and Its Recovery from Seismic Waveform Data

    NASA Astrophysics Data System (ADS)

    De Hoop, M. V.

    2015-12-01

    We consider multi-scale representations of Earth's interior from the point of view of their possible recovery from multi- and high-frequency seismic waveform data. These representations are intrinsically connected to (geologic, tectonic) structures, that is, geometric parametrizations of Earth's interior. Indeed, we address the construction and recovery of such parametrizations using local iterative methods with appropriately designed data misfits and guaranteed convergence. The geometric parametrizations contain interior boundaries (defining, for example, faults, salt bodies, tectonic blocks, slabs) which can, in principle, be obtained from successive segmentation. We make use of unstructured meshes. For the adaptation and recovery of an unstructured mesh we introduce an energy functional which is derived from the Hausdorff distance. Via an augmented Lagrangian method, we incorporate the mentioned data misfit. The recovery is constrained by shape optimization of the interior boundaries, and is reminiscent of Hausdorff warping. We use elastic deformation via finite elements as a regularization while following a two-step procedure. The first step is an update determined by the energy functional; in the second step, we modify the outcome of the first step where necessary to ensure that the new mesh is regular. This modification entails an array of techniques including topology correction involving interior boundary contacting and breakup, edge warping and edge removal. We implement this as a feedback mechanism from volume to interior boundary mesh optimization. We invoke and apply a criterion of mesh quality control for coarsening, and for dynamical local multi-scale refinement. We present a novel (fluid-solid) numerical framework based on the Discontinuous Galerkin method.
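
    The Hausdorff-distance ingredient of the energy functional can be evaluated directly with SciPy; the circle-versus-shifted-circle boundaries below are toy data, and the augmented-Lagrangian and remeshing machinery is omitted.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff_energy(mesh_pts, target_pts):
        """Symmetric Hausdorff distance used as a shape-mismatch energy."""
        d_ab = directed_hausdorff(mesh_pts, target_pts)[0]
        d_ba = directed_hausdorff(target_pts, mesh_pts)[0]
        return max(d_ab, d_ba)

    # Toy boundaries: a unit circle versus a scaled, shifted copy.
    t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
    mesh = np.column_stack([np.cos(t), np.sin(t)])
    target = 1.1 * mesh + np.array([0.05, 0.0])
    print("energy:", hausdorff_energy(mesh, target))
    ```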

  2. Optimum design of hybrid phase locked loops

    NASA Technical Reports Server (NTRS)

    Lee, P.; Yan, T.

    1981-01-01

    The design procedure of phase locked loops is described in which the analog loop filter is replaced by a digital computer. Specific design curves are given for the step and ramp input changes in phase. It is shown that the designed digital filter depends explicitly on the product of the sampling time and the noise bandwidth of the phase locked loop. This technique of optimization can be applied to the design of digital analog loops for other applications.

  3. Data Treatment for LC-MS Untargeted Analysis.

    PubMed

    Riccadonna, Samantha; Franceschi, Pietro

    2018-01-01

    Liquid chromatography-mass spectrometry (LC-MS) untargeted experiments require complex chemometrics strategies to extract information from the experimental data. Here we discuss "data preprocessing", the set of procedures performed on the raw data to produce a data matrix which will be the starting point for the subsequent statistical analysis. Data preprocessing is a crucial step on the path to knowledge extraction, which should be carefully controlled and optimized in order to maximize the output of any untargeted metabolomics investigation.

  4. Optimizing experimental procedures for quantitative evaluation of crop plant performance in high throughput phenotyping systems

    PubMed Central

    Junker, Astrid; Muraya, Moses M.; Weigelt-Fischer, Kathleen; Arana-Ceballos, Fernando; Klukas, Christian; Melchinger, Albrecht E.; Meyer, Rhonda C.; Riewe, David; Altmann, Thomas

    2015-01-01

    Detailed and standardized protocols for plant cultivation in environmentally controlled conditions are an essential prerequisite to conduct reproducible experiments with precisely defined treatments. Setting up appropriate and well defined experimental procedures is thus crucial for the generation of solid evidence and indispensable for successful plant research. Non-invasive and high throughput (HT) phenotyping technologies offer the opportunity to monitor and quantify performance dynamics of several hundreds of plants at a time. Compared to small scale plant cultivations, HT systems have much higher demands, from a conceptual and a logistic point of view, on experimental design, as well as the actual plant cultivation conditions, and the image analysis and statistical methods for data evaluation. Furthermore, cultivation conditions need to be designed that elicit plant performance characteristics corresponding to those under natural conditions. This manuscript describes critical steps in the optimization of procedures for HT plant phenotyping systems. Starting with the model plant Arabidopsis, HT-compatible methods were tested, and optimized with regard to growth substrate, soil coverage, watering regime, experimental design (considering environmental inhomogeneities) in automated plant cultivation and imaging systems. As revealed by metabolite profiling, plant movement did not affect the plants' physiological status. Based on these results, procedures for maize HT cultivation and monitoring were established. Variation of maize vegetative growth in the HT phenotyping system did match well with that observed in the field. The presented results outline important issues to be considered in the design of HT phenotyping experiments for model and crop plants. It thereby provides guidelines for the setup of HT experimental procedures, which are required for the generation of reliable and reproducible data of phenotypic variation for a broad range of applications. PMID:25653655

  5. Semi-automated discrimination of retinal pigmented epithelial cells in two-photon fluorescence images of mouse retinas.

    PubMed

    Alexander, Nathan S; Palczewska, Grazyna; Palczewski, Krzysztof

    2015-08-01

    Automated image segmentation is a critical step toward achieving a quantitative evaluation of disease states with imaging techniques. Two-photon fluorescence microscopy (TPM) has been employed to visualize the retinal pigmented epithelium (RPE) and provide images indicating the health of the retina. However, segmentation of RPE cells within TPM images is difficult due to small differences in fluorescence intensity between cell borders and cell bodies. Here we present a semi-automated method for segmenting RPE cells that relies upon multiple weak features that differentiate cell borders from the remaining image. These features were scored by a search optimization procedure that built up the cell border in segments around a nucleus of interest. With six images used as a test, our method correctly identified cell borders for 69% of nuclei on average. Performance was strongly dependent upon increasing retinosome content in the RPE. TPM image analysis has the potential of providing improved early quantitative assessments of diseases affecting the RPE.

  6. Use of EPANET solver to manage water distribution in Smart City

    NASA Astrophysics Data System (ADS)

    Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.

    2018-02-01

    This paper presents a method of using the EPANET solver to support the management of a water distribution system in a Smart City. The main task was to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application allows both single and cyclic simulations to be performed, with a specified step for changing the values of selected process variables. The architecture of the application is presented. The application supports the selection of the best device control algorithm using optimization methods. Optimization procedures are possible with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
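
    The control-optimization layer can be sketched as a SciPy loop around the hydraulic solver, exercising the three methods named above. In this sketch, simulate_pressures is a hypothetical stand-in for the call into the EPANET model, and the cost function is invented.

    ```python
    import numpy as np
    from scipy.optimize import brute, minimize

    def simulate_pressures(pump_speeds):
        """Hypothetical stand-in for an EPANET hydraulic simulation call;
        returns nodal pressures for the given relative pump speeds."""
        s = np.asarray(pump_speeds)
        return 30.0 + 25.0 * s - 6.0 * s ** 2

    def cost(pump_speeds, target=45.0):
        """Pumping-energy proxy plus a penalty for missing the target head."""
        p = simulate_pressures(pump_speeds)
        energy = np.sum(np.asarray(pump_speeds) ** 3)
        return energy + 10.0 * np.sum((p - target) ** 2)

    x0, box = np.array([0.8, 0.8]), [(0.3, 1.2)] * 2
    best_slsqp = minimize(cost, x0, method="SLSQP", bounds=box)
    best_powell = minimize(cost, x0, method="Powell", bounds=box)
    best_brute = brute(cost, ranges=box, Ns=15, finish=None)
    print(best_slsqp.x, best_powell.x, best_brute)
    ```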

  7. Optimization of operating parameters of hybrid vertical down-flow constructed wetland systems for domestic sewerage treatment.

    PubMed

    Huang, Zhujian; Zhang, Xianning; Cui, Lihua; Yu, Guangwei

    2016-09-15

    In this work, three hybrid vertical down-flow constructed wetland (HVDF-CW) systems with different compound substrates were fed with domestic sewage, and their pollutant removal performance under different hydraulic loadings and step-feeding ratios was investigated. The results showed that hydraulic loading and step-feeding ratio were the two crucial factors determining the removal efficiency of most pollutants, while substrate type significantly affected only the removal of COD and NH4(+)-N. Generally, the lower the hydraulic loading, the better the removal efficiency for all contaminants except TN. By contrast, increasing the step-feeding ratio slightly reduced the removal of ammonium and TP but markedly promoted TN removal. Therefore, optimal operation of these CWs can be achieved with low hydraulic loading combined with a 50% step-feeding ratio when TN removal is the priority, whereas medium or low hydraulic loading without step-feeding is suitable when TN removal is not a consideration. The results provide a guideline for the design and optimization of hybrid vertical flow constructed wetland systems to improve pollutant removal from domestic sewage. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Design and Optimization of Composite Gyroscope Momentum Wheel Rings

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2007-01-01

    Stress analysis and preliminary design/optimization procedures are presented for gyroscope momentum wheel rings composed of metallic, metal matrix composite, and polymer matrix composite materials. The design of these components involves simultaneously minimizing both true part volume and mass, while maximizing angular momentum. The stress analysis results are combined with an anisotropic failure criterion to formulate a new sizing procedure that provides considerable insight into the design of gyroscope momentum wheel ring components. Results compare the performance of two optimized metallic designs, an optimized SiC/Ti composite design, and an optimized graphite/epoxy composite design. The graphite/epoxy design appears to be far superior to the competitors considered unless a much greater premium is placed on volume efficiency compared to mass efficiency.

  9. Development of a purification procedure for the isolation of nucleosides from urine prior to mass spectrometric analysis.

    PubMed

    Dudley, E; El-Shakawi, S; Games, D E; Newton, R P

    2000-03-01

    A chromatographic separation of nucleosides from urine has been developed in order to facilitate their mass spectrometric analysis for clinical diagnosis. A number of chromatographic resins were studied in order to develop an effective and efficient purification procedure. The optimized sequential protocol comprises centrifugation, acidification, and neutralization steps, followed by an affinity chromatographic column and finally further separation on an acidic cation-exchange column and a basic anion exchanger. This scheme achieves effective clean-up of a standard radiolabelled nucleoside with a recovery of 92.5%, while nucleosides added to urine samples before extraction were recovered at 72-82%.

  10. Data Processing for Atmospheric Phase Interferometers

    NASA Technical Reports Server (NTRS)

    Acosta, Roberto J.; Nessel, James A.; Morabito, David D.

    2009-01-01

    This paper presents a detailed discussion of calibration procedures used to analyze data recorded from a two-element atmospheric phase interferometer (API) deployed at Goldstone, California. In addition, we describe the data products derived from those measurements that can be used for site intercomparison and atmospheric modeling. Simulated data is used to demonstrate the effectiveness of the proposed algorithm and as a means for validating our procedure. A study of the effect of block size filtering is presented to justify our process for isolating atmospheric fluctuation phenomena from other system-induced effects (e.g., satellite motion, thermal drift). A simulated 24 hr interferometer phase data time series is analyzed to illustrate the step-by-step calibration procedure and desired data products.
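
    The block-size filtering step can be pictured as subtracting a moving-block average so that slow system-induced effects fall in the trend while faster atmospheric fluctuations remain in the residual; a minimal sketch with assumed sampling rate and block length:

      import numpy as np

      def block_filter(phase, sample_rate_hz, block_seconds):
          # Subtract a moving-block average: slow effects (satellite motion,
          # thermal drift) live in the trend, atmosphere in the residual.
          n = max(1, int(block_seconds * sample_rate_hz))
          trend = np.convolve(phase, np.ones(n) / n, mode="same")
          return phase - trend

      t = np.arange(0, 86400)                               # one day at 1 Hz
      phase = 0.002 * t + np.random.normal(0, 0.5, t.size)  # drift + fluctuations
      residual = block_filter(phase, 1.0, 600.0)            # 10-minute blocks
      print(np.std(residual))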

  11. DETERMINATION OF PESTICIDES AND PCB'S IN INDUSTRIAL AND MUNICIPAL WASTEWATERS

    EPA Science Inventory

    Steps in the procedure for the analysis of 25 chlorinated pesticides and polychlorinated biphenyls were studied. Two gas chromatographic columns and two detectors (electron capture and Hall electrolytic conductivity) were evaluated. Extractions were performed with two solvents (d...

  12. An Optimized Analytical Method for the Simultaneous Detection of Iodoform, Iodoacetic Acid, and Other Trihalomethanes and Haloacetic Acids in Drinking Water

    PubMed Central

    Jiang, Songhui; Templeton, Michael R.; He, Gengsheng; Qu, Weidong

    2013-01-01

    An optimized method is presented using liquid-liquid extraction and derivatization for the extraction of iodoacetic acid (IAA) and other haloacetic acids (HAA9) and direct extraction of iodoform (IF) and other trihalomethanes (THM4) from drinking water, followed by detection by gas chromatography with electron capture detection (GC-ECD). A Doehlert experimental design was performed to determine the optimum conditions for the five most significant factors in the derivatization step: namely, the volume and concentration of acidic methanol (optimized values  = 15%, 1 mL), the volume and concentration of Na2SO4 solution (129 g/L, 8.5 mL), and the volume of saturated NaHCO3 solution (1 mL). Also, derivatization time and temperature were optimized by a two-variable Doehlert design, resulting in the following optimized parameters: an extraction time of 11 minutes for IF and THM4 and 14 minutes for IAA and HAA9; mass of anhydrous Na2SO4 of 4 g for IF and THM4 and 16 g for IAA and HAA9; derivatization time of 160 min and temperature at 40°C. Under optimal conditions, the optimized procedure achieves excellent linearity (R2 ranges 0.9990–0.9998), low detection limits (0.0008–0.2 µg/L), low quantification limits (0.008–0.4 µg/L), and good recovery (86.6%–106.3%). Intra- and inter-day precision were less than 8.9% and 8.8%, respectively. The method was validated by applying it to the analysis of raw, flocculated, settled, and finished waters collected from a water treatment plant in China. PMID:23613747
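
    For readers unfamiliar with the layout, a two-factor Doehlert design is a centred hexagon of seven runs in coded units; the snippet below scales it onto an assumed time/temperature region (the centres and half-ranges are illustrative, not the study's):

      import numpy as np

      # Coded two-factor Doehlert design: a centred hexagon, seven runs.
      DOEHLERT_2 = np.array([
          [ 0.0,  0.0  ],
          [ 1.0,  0.0  ], [ 0.5,  0.866], [-0.5,  0.866],
          [-1.0,  0.0  ], [-0.5, -0.866], [ 0.5, -0.866],
      ])

      def to_real(coded, centers, half_ranges):
          # Scale each coded column by its own maximum so both factors span
          # their full experimental range.
          scale = np.abs(coded).max(axis=0)
          return centers + coded / scale * half_ranges

      # Hypothetical region: derivatization time 130-190 min, temperature 30-50 C.
      runs = to_real(DOEHLERT_2, centers=np.array([160.0, 40.0]),
                     half_ranges=np.array([30.0, 10.0]))
      print(runs)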

  13. Spectrum online-tunable Mach-Zehnder interferometer based on step-like tapers and its refractive index sensing characteristics

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Chen, Mao-qing; Xia, Feng; Hu, Hai-feng

    2017-11-01

    A novel refractive index (RI) sensor based on an asymmetrical Mach-Zehnder interferometer (MZI) with two different step-like tapers is proposed. The step-like taper is fabricated by fusion splicing two half tapers with an appropriate offset. By further applying offset and discharge to the last fabricated step-like taper of the MZI, the influence of taper parameters on the interference spectrum is investigated using only one device. This simple technique provides an on-line way to sweep the parameters of step-like tapers and thereby speeds up the optimization of the interference spectrum. In RI sensing experiments, the sensor shows a high sensitivity of -185.79 nm/RIU (refractive index unit) in the RI range of 1.3333-1.3673.

  14. [Influence on microstructure of dental zirconia ceramics prepared by two-step sintering].

    PubMed

    Jian, Chao; Li, Ning; Wu, Zhikai; Teng, Jing; Yan, Jiazhen

    2013-10-01

    To investigate the microstructure of dental zirconia ceramics prepared by two-step sintering, nanostructured zirconia powder was dry compacted, cold isostatically pressed, and pre-sintered. The pre-sintered discs were cut and processed into samples. Conventional sintering, single-step sintering, and two-step sintering were carried out, and the density and grain size of the samples were measured. The suitable ranges of T1 and T2 for two-step sintering were then determined, the effects of the different sintering routes (two-step versus conventional) on microstructure were discussed, and the influence of T1 and T2 on density and grain size was analyzed. The range of T1 was 1450-1550 degrees C, and the range of T2 was 1250-1350 degrees C. Compared with conventional sintering, two-step sintering yielded a finer microstructure with higher density and smaller grains. Grain growth depended on T1, whereas density was largely independent of it; conversely, density depended on T2, which had minimal influence on grain size. Two-step sintering can thus produce a sintered body with high density and small grains, which is good for optimizing the microstructure of dental zirconia ceramics.

  15. Two-Step Production of Phenylpyruvic Acid from L-Phenylalanine by Growing and Resting Cells of Engineered Escherichia coli: Process Optimization and Kinetics Modeling.

    PubMed

    Hou, Ying; Hossain, Gazi Sakir; Li, Jianghua; Shin, Hyun-Dong; Liu, Long; Du, Guocheng; Chen, Jian

    2016-01-01

    Phenylpyruvic acid (PPA) is widely used in the pharmaceutical, food, and chemical industries. Here, a two-step bioconversion process, involving growing and resting cells, was established to produce PPA from L-phenylalanine using an engineered Escherichia coli strain constructed previously. First, the biotransformation conditions for growing cells were optimized (L-phenylalanine concentration 20.0 g·L-1, temperature 35°C) and a two-stage temperature control strategy (holding 20°C for 12 h, then raising the temperature to 35°C for the remainder of the biotransformation) was applied. The biotransformation conditions for resting cells were then optimized in a 3-L bioreactor as follows: agitation speed 500 rpm, aeration rate 1.5 vvm, and L-phenylalanine concentration 30 g·L-1. The total maximal production (mass conversion rate) reached 29.8 ± 2.1 g·L-1 (99.3%) and 75.1 ± 2.5 g·L-1 (93.9%) in the flask and the 3-L bioreactor, respectively. Finally, a kinetic model was established, which revealed that substrate and product inhibition were the main limiting factors for resting-cell biotransformation.
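
    The reported substrate and product inhibition can be illustrated with a Haldane-type rate law integrated over a batch; the functional form and every parameter value below are placeholders rather than the paper's fitted model:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative rate law with substrate (S) and product (P) inhibition.
      Vmax, Km, Ksi, Kpi = 8.0, 5.0, 60.0, 40.0      # g/L-based units (assumed)

      def rates(t, y):
          S, P = y
          v = Vmax * S / (Km + S + S**2 / Ksi) / (1.0 + P / Kpi)
          return [-v, v]                              # dS/dt, dP/dt

      sol = solve_ivp(rates, (0.0, 24.0), [30.0, 0.0])  # 30 g/L substrate, 24 h
      print(sol.y[:, -1])                             # final S and P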

  16. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
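
    The averaging methods compared in the calibration correspond to the standard equivalent-conductivity formulas for a cell with coarse fraction f; a small sketch (the function and argument names are ours):

      def equivalent_k(f_coarse, k_coarse, k_fine, method):
          # Equivalent hydraulic conductivity of a cell whose coarse-textured
          # fraction is f_coarse (0..1).
          f, g = f_coarse, 1.0 - f_coarse
          if method == "arithmetic":   # layers in parallel (horizontal flow)
              return f * k_coarse + g * k_fine
          if method == "harmonic":     # layers in series (vertical flow)
              return 1.0 / (f / k_coarse + g / k_fine)
          if method == "geometric":
              return k_coarse**f * k_fine**g
          raise ValueError(method)

      print(equivalent_k(0.4, 10.0, 0.01, "geometric"))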

  17. On the effect of response transformations in sequential parameter optimization.

    PubMed

    Wagner, Tobias; Wessing, Simon

    2012-01-01

    Parameter tuning of evolutionary algorithms (EAs) is attracting more and more interest. In particular, the sequential parameter optimization (SPO) framework for the model-assisted tuning of stochastic optimizers has resulted in established parameter tuning algorithms. In this paper, we enhance the SPO framework by introducing transformation steps before the response aggregation and before the actual modeling. Based on design-of-experiments techniques, we empirically analyze the effect of integrating different transformations. We show that in particular, a rank transformation of the responses provides significant improvements. A deeper analysis of the resulting models and additional experiments with adaptive procedures indicates that the rank and the Box-Cox transformation are able to improve the properties of the resultant distributions with respect to symmetry and normality of the residuals. Moreover, model-based effect plots document a higher discriminatory power obtained by the rank transformation.
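
    A minimal sketch of the two transformations that proved most useful, applied to skewed synthetic responses before surrogate modeling (SciPy's rankdata and boxcox; the data are illustrative):

      import numpy as np
      from scipy.stats import rankdata, boxcox

      y = np.random.default_rng(1).lognormal(size=50)   # skewed raw responses

      y_rank = rankdata(y)          # rank transformation of the responses
      y_bc, lam = boxcox(y)         # Box-Cox, lambda chosen by maximum likelihood

      # Either transformed vector would then be aggregated and passed to the
      # surrogate model (e.g., Kriging) in place of the raw responses.
      print(lam, y_rank[:5])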

  18. Deadlock-free genetic scheduling algorithm for automated manufacturing systems based on deadlock control policy.

    PubMed

    Xing, KeYi; Han, LiBin; Zhou, MengChu; Wang, Feng

    2012-06-01

    Deadlock-free control and scheduling are vital for optimizing the performance of automated manufacturing systems (AMSs) with shared resources and route flexibility. Based on Petri net models of AMSs, this paper embeds the optimal deadlock avoidance policy into a genetic algorithm and develops a novel deadlock-free genetic scheduling algorithm for AMSs. A candidate solution of the scheduling problem is coded as a chromosome representation that is a permutation with repetition of parts. By using the one-step look-ahead method from the optimal deadlock control policy, the feasibility of a chromosome is checked, and infeasible chromosomes are amended into feasible ones, which can easily be decoded into a feasible deadlock-free schedule. Together, the chromosome representation and the polynomial complexity of the checking and amending procedures strongly support the cooperative aspect of genetic search for scheduling problems.
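
    The amendment of infeasible chromosomes can be sketched as a greedy repair driven by the one-step look-ahead test; is_safe and advance below are hypothetical hooks standing in for the Petri-net deadlock avoidance policy:

      def amend(chromosome, is_safe, advance, initial_state):
          # Greedy repair: at each position take the next queued operation
          # whose firing keeps the system in a safe state under the one-step
          # look-ahead test `is_safe`; `advance` fires an operation on a state.
          state, pending, repaired = initial_state, list(chromosome), []
          while pending:
              for op in pending:
                  nxt = advance(state, op)
                  if is_safe(nxt):              # one-step look-ahead
                      state = nxt
                      repaired.append(op)
                      pending.remove(op)
                      break
              else:
                  raise RuntimeError("no safe operation available")
          return repaired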

  19. Estimating Slope and Level Change in N = 1 Designs

    ERIC Educational Resources Information Center

    Solanas, Antonio; Manolov, Rumen; Onghena, Patrick

    2010-01-01

    The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series before assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and…
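
    One plausible reading of the procedure, sketched under our own assumptions (ordinary least squares for the baseline trend; the authors' exact estimator may differ):

      import numpy as np

      def slope_level_change(a, b):
          # Remove the baseline (phase A) trend from the whole series, then
          # compare the detrended phases.
          slope_a, icept_a = np.polyfit(np.arange(len(a)), a, 1)
          t_all = np.arange(len(a) + len(b))
          resid = np.concatenate([a, b]) - (icept_a + slope_a * t_all)
          ra, rb = resid[:len(a)], resid[len(a):]
          level_change = rb.mean() - ra.mean()
          slope_change = np.polyfit(np.arange(len(rb)), rb, 1)[0]
          return slope_change, level_change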

  20. Perception of School Safety of a Local School

    ERIC Educational Resources Information Center

    Massey-Jones, Darla

    2013-01-01

    This qualitative case study investigated the perception of school safety, what current policies and procedures were effective, and what policies and procedures should be implemented. Data were collected in two steps, by survey and focus group interview. Analysis determined codes that revealed several themes relevant to the perception of school…

  1. Tracking Virus Particles in Fluorescence Microscopy Images Using Multi-Scale Detection and Multi-Frame Association.

    PubMed

    Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl

    2015-11-01

    Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
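
    The association idea can be compressed into a gated assignment between Kalman-predicted positions and detections; the Hungarian solver below is a stand-in for the paper's two-step local/global multi-frame optimization, not its actual algorithm:

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def associate(predicted, detections, gate=10.0):
          # predicted: Kalman-predicted particle positions, shape (n, 2)
          # detections: detected positions in the next frame, shape (m, 2)
          d = np.linalg.norm(predicted[:, None, :] - detections[None, :, :],
                             axis=2)
          d[d > gate] = 1e6                       # gating: forbid far matches
          rows, cols = linear_sum_assignment(d)   # globally optimal assignment
          return [(r, c) for r, c in zip(rows, cols) if d[r, c] < 1e6]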

  2. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

    Atomic force microscopy (AFM) images normally exhibit various artifacts, so image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground are automatically segmented with accurate boundary detection, and the extracted foreground features are taken as exclusion masks. In the second step, data points in the background are fitted as polynomial curves/surfaces, which are then subtracted from the raw images to give the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme is presented, followed by an investigation of the influence of the sliding-window size and polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
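
    A minimal sketch of the second step, fitting each scan line's background with the segmented foreground excluded; the array names and the line-by-line scheme are our assumptions (the paper also covers surface fits and sliding windows):

      import numpy as np

      def flatten(image, mask, order=2):
          # Line-by-line polynomial flattening; pixels flagged True in `mask`
          # (the segmented foreground features) are excluded from the
          # background fit. Assumes every line contains background pixels.
          out = np.empty(image.shape, dtype=float)
          x = np.arange(image.shape[1])
          for i, row in enumerate(image):
              bg = ~mask[i]
              coeffs = np.polyfit(x[bg], row[bg].astype(float), order)
              out[i] = row - np.polyval(coeffs, x)
          return out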

  3. Reduced-Order Models Based on POD-Tpwl for Compositional Subsurface Flow Simulation

    NASA Astrophysics Data System (ADS)

    Durlofsky, L. J.; He, J.; Jin, L. Z.

    2014-12-01

    A reduced-order modeling procedure applicable for compositional subsurface flow simulation will be described and applied. The technique combines trajectory piecewise linearization (TPWL) and proper orthogonal decomposition (POD) to provide highly efficient surrogate models. The method is based on a molar formulation (which uses pressure and overall component mole fractions as the primary variables) and is applicable for two-phase, multicomponent systems. The POD-TPWL procedure expresses new solutions in terms of linearizations around solution states generated and saved during previously simulated 'training' runs. High-dimensional states are projected into a low-dimensional subspace using POD. Thus, at each time step, only a low-dimensional linear system needs to be solved. Results will be presented for heterogeneous three-dimensional simulation models involving CO2 injection. Both enhanced oil recovery and carbon storage applications (with horizontal CO2 injectors) will be considered. Reasonably close agreement between full-order reference solutions and compositional POD-TPWL simulations will be demonstrated for 'test' runs in which the well controls differ from those used for training. Construction of the POD-TPWL model requires preprocessing overhead computations equivalent to about 3-4 full-order runs. Runtime speedups using POD-TPWL are, however, very significant - typically O(100-1000). The use of POD-TPWL for well control optimization will also be illustrated. For this application, some amount of retraining during the course of the optimization is required, which leads to smaller, but still significant, speedup factors.
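
    The POD projection at the core of the method can be sketched in a few lines: collect training snapshots, take an SVD, and keep the modes that capture most of the energy (the threshold is an assumed value):

      import numpy as np

      def pod_basis(snapshots, energy=0.9999):
          # snapshots: n_state x n_snapshots matrix of states saved during
          # the training runs; returns the basis capturing `energy` of the
          # snapshot variance, plus the snapshot mean.
          mean = snapshots.mean(axis=1, keepdims=True)
          U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
          k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
          return U[:, :k], mean

      # Reduction: z = U.T @ (x - mean); reconstruction: x ~ mean + U @ z.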

  4. Development of a novel naphthoic acid ionic liquid and its application in "no-organic solvent microextraction" for determination of triclosan and methyltriclosan in human fluids and the method optimization by central composite design.

    PubMed

    Wang, Hui; Gao, Jiajia; Yu, Nana; Qu, Jingang; Fang, Fang; Wang, Huili; Wang, Mei; Wang, Xuedong

    2016-07-01

    In traditional ionic liquid (IL)-based microextraction, hydrophobic and hydrophilic ILs are often used as extractant and disperser, respectively, but the functional effects of the ILs themselves are not exploited in the microextraction procedure. Herein, we introduced 1-naphthoic acid into the imidazolium ring to synthesize a novel ionic liquid, 1-butyl-3-methylimidazolium naphthoic acid salt ([C4MIM][NPA]), whose structure was characterized by IR, (1)H NMR and MS. On the basis of its acidic property and lower solubility than common [CnMIM][BF4], it was used as a mixed dispersive solvent with [C4MIM][BF4] in "functionalized ionic liquid-based no organic solvent microextraction (FIL-NOSM)". Using [C4MIM][NPA] in the FIL-NOSM procedure has two clear advantages: (1) it promotes a non-polar environment and increases the volume of the sedimented phase, enhancing the extraction recoveries of triclosan (TCS) and methyltriclosan (MTCS) by more than 10%; and (2) because of its acidic property, it acts as a pH modifier, avoiding an extra pH adjustment step. By combining single-factor optimization and central composite design, the main factors in the FIL-NOSM method were optimized. Under the optimal conditions, the relative recoveries of TCS and MTCS reached 98.60-106.09%, and their LODs were as low as 0.12-0.15 µg L(-1) in plasma and urine samples. Overall, this [C4MIM][NPA]-based FIL-NOSM method provided high extraction efficiency, required less pretreatment time, and used no organic solvent. To the best of our knowledge, this is the first application of a [C4MIM][NPA]-based microextraction method for the simultaneous quantification of trace TCS and MTCS in human fluids. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. High efficient perovskite solar cell material CH3NH3PbI3: Synthesis of films and their characterization

    NASA Astrophysics Data System (ADS)

    Bera, Amrita Mandal; Wargulski, Dan Ralf; Unold, Thomas

    2018-04-01

    Hybrid organometal perovskites have emerged as a promising solar cell material, with demonstrated cell efficiencies of more than 20%. Thin films of the methylammonium lead iodide perovskite CH3NH3PbI3 were synthesized by two different methods (one-step and two-step), and their morphological properties were studied by scanning electron microscopy and optical microscope imaging. The morphology of the perovskite layer is one of the most important parameters affecting solar cell efficiency. The film morphology revealed that the two-step method provides better surface coverage than the one-step method, although the grain sizes were smaller in the two-step case. Films prepared by the two-step method on different substrates showed that the grain size also depends on the substrate: it increased from glass, to FTO with a TiO2 blocking layer, to FTO, without any change in the surface coverage. The present study shows that improved film quality can be obtained with the two-step method by optimizing the synthesis process.

  6. Optimized manual and automated recovery of amplifiable DNA from tissues preserved in buffered formalin and alcohol-based fixative.

    PubMed

    Duval, Kristin; Aubin, Rémy A; Elliott, James; Gorn-Hondermann, Ivan; Birnboim, H Chaim; Jonker, Derek; Fourney, Ron M; Frégeau, Chantal J

    2010-02-01

    Archival tissue preserved in fixative constitutes an invaluable resource for histological examination, molecular diagnostic procedures and DNA typing analysis in forensic investigations. However, available material is often limited in size and quantity. Moreover, recovery of DNA is often severely compromised by the presence of covalent DNA-protein cross-links generated by formalin, the most prevalent fixative. We describe the evaluation of buffer formulations, sample lysis regimens and DNA recovery strategies, and define optimized manual and automated procedures for the extraction of high-quality DNA suitable for molecular diagnostics and genotyping. Using a 3-step enzymatic digestion protocol carried out in the absence of dithiothreitol, we demonstrate that DNA can be efficiently released from cells or tissues preserved in buffered formalin or the alcohol-based fixative GenoFix. This preparatory procedure can then be integrated with traditional phenol/chloroform extraction, a modified manual DNA IQ extraction, or an automated DNA IQ/Te-Shake-based extraction in order to recover DNA for downstream applications. Quantitative recovery of high-quality DNA was best achieved from specimens archived in GenoFix and extracted using magnetic bead capture.

  7. 14 CFR 302.37 - Waiver of procedural steps after hearing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Waiver of procedural steps after hearing... Applicability Oral Evidentiary Hearing Proceedings § 302.37 Waiver of procedural steps after hearing. The parties to any proceeding may agree to waive any one or more of the procedural steps provided in § 302.29...

  8. Architecture design of a generic centralized adjudication module integrated in a web-based clinical trial management system.

    PubMed

    Zhao, Wenle; Pauls, Keith

    2016-04-01

    Centralized outcome adjudication has been used widely in multicenter clinical trials in order to prevent potential biases and to reduce variation in important safety and efficacy outcome assessments. Adjudication procedures can vary significantly among studies, and in practice their coordination in many multicenter clinical trials remains a manual process with low efficiency and high risk of delay. Motivated by the demands of two large clinical trial networks, a generic outcome adjudication module has been developed by the networks' data management center within a homegrown clinical trial management system. In this article, the system design strategy and database structure are presented. A generic database model was created to translate different adjudication procedures into a unified set of sequential adjudication steps. Each adjudication step is defined by one activate condition, one lock condition, one to five categorical data items to capture adjudication results, and one free-text field for general comments. Based on this model, a generic outcome adjudication user interface and a generic data processing program were developed within the clinical trial management system to provide automated coordination of outcome adjudication. By the end of 2014, this generic outcome adjudication module had been implemented in 10 multicenter trials. A total of 29 adjudication procedures were defined, with the number of adjudication steps varying from 1 to 7; implementing a new adjudication procedure in this generic module took an experienced programmer 1 or 2 days. A total of 7336 outcome events had been adjudicated and 16,235 adjudication step activities recorded. In one multicenter trial, 1144 safety outcome event submissions went through a three-step adjudication procedure, with a median of 3.95 days from safety event case report form submission to adjudication completion. In another trial, 277 clinical outcome events were adjudicated by a six-step procedure, taking a median of 23.84 days from outcome event case report form submission to adjudication completion. A generic outcome adjudication module integrated in the clinical trial management system made the automated coordination of efficacy and safety outcome adjudication a reality. © The Author(s) 2015.
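
    The generic step model described above can be pictured as a small data structure; the field names and example steps below are illustrative, not the actual database schema:

      from dataclasses import dataclass, field
      from typing import List

      @dataclass
      class AdjudicationStep:
          # One row of the generic step table sketched from the article's
          # description; names are ours, not the production schema.
          name: str
          activate_condition: str     # expression deciding when the step opens
          lock_condition: str         # expression deciding when it locks
          categorical_items: List[str] = field(default_factory=list)  # 1-5 items
          comment: str = ""           # free-text general comments

      procedure = [
          AdjudicationStep("initial review", "event_submitted", "reviewed",
                           ["event_confirmed", "severity"]),
          AdjudicationStep("committee vote", "reviewed", "vote_complete",
                           ["final_classification"]),
      ]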

  9. A mixed optimization method for automated design of fuselage structures.

    NASA Technical Reports Server (NTRS)

    Sobieszczanski, J.; Loendorf, D.

    1972-01-01

    A procedure for automating the design of transport aircraft fuselage structures has been developed and implemented in the form of an operational program. The structure is designed in two stages. First, an overall distribution of structural material is obtained by means of optimality criteria to meet strength and displacement constraints. Subsequently, the detailed design of selected rings and panels consisting of skin and stringers is performed by mathematical optimization accounting for a set of realistic design constraints. The practicality and computer efficiency of the procedure are demonstrated on cylindrical and area-ruled large transport fuselages.

  10. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posterior probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posterior PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate through optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are initially inverted for using the least squares method without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the solution close to the 'true' one quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied to the slip parameters using the Monte Carlo Inversion (MCI) technique and all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a synthetic model with a 45-degree dip angle and oblique slip, together with corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. Details will be reported at the meeting.
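
    The interplay of the outer global search over fault geometry and the inner linear slip inversion can be sketched as follows; the Green's function is a toy stand-in, SciPy's dual_annealing replaces ASA, and nothing here is the authors' implementation:

      import numpy as np
      from scipy.optimize import dual_annealing, nnls

      rng = np.random.default_rng(0)
      obs = rng.normal(size=50)                   # stand-in InSAR observations

      def greens(geometry):
          # Toy Green's-function matrix; a real study would compute elastic
          # half-space responses for the given fault geometry.
          strike, dip = geometry
          basis = np.linspace(0.0, 1.0, 50)[:, None] ** np.arange(4)
          return basis * (1.0 + 0.1 * np.sin(strike) + 0.1 * np.cos(dip))

      def misfit(geometry):
          # Inner step: positivity-constrained least-squares slip inversion.
          slip, resid = nnls(greens(geometry), obs)
          return resid                            # data misfit drives the search

      best = dual_annealing(misfit, bounds=[(0.0, np.pi), (0.0, np.pi / 2)],
                            maxiter=50)
      print(best.x, best.fun)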

  11. Optimal experimental designs for the estimation of thermal properties of composite materials

    NASA Technical Reports Server (NTRS)

    Scott, Elaine P.; Moncman, Deborah A.

    1994-01-01

    Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.

  12. Modeling the BOD of Danube River in Serbia using spatial, temporal, and input variables optimized artificial neural network models.

    PubMed

    Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V

    2016-05-01

    This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.

  13. Three-dimensional aerodynamic shape optimization of supersonic delta wings

    NASA Technical Reports Server (NTRS)

    Burgreen, Greg W.; Baysal, Oktay

    1994-01-01

    A recently developed three-dimensional aerodynamic shape optimization procedure, AeSOP(sub 3D), is described. This procedure incorporates some of the most promising concepts from the area of computational aerodynamic analysis and design, specifically discrete sensitivity analysis, a fully implicit 3D Computational Fluid Dynamics (CFD) methodology, and 3D Bezier-Bernstein surface parameterizations. The new procedure is demonstrated in the preliminary design of supersonic delta wings. Starting from a symmetric clipped delta wing geometry, a Mach 1.62 asymmetric delta wing and two Mach 1.5 cranked delta wings were designed subject to various aerodynamic and geometric constraints.

  14. Speak Out (K-8) [and] Election '80.

    ERIC Educational Resources Information Center

    Illinois State Board of Education, Springfield.

    These two teaching guides contain step-by-step procedures for an election education program in which all Illinois school children vote for and elect a State animal. The program, mandated by the Illinois State Legislature, is intended to provide students with the unique opportunity to learn about the entire election process through actual voting…

  15. RBS Career Education. Evaluation Planning Manual. Education Is Going to Work.

    ERIC Educational Resources Information Center

    Kershner, Keith M.

    Designed for use with the Research for Better Schools career education program, this evaluation planning manual focuses on procedures and issues central to planning the evaluation of an educational program. Following a statement on the need for evaluation, nine sequential steps for evaluation planning are discussed. The first two steps, program…

  16. Optical pattern recognition algorithms on neural-logic equivalent models and demonstration of their prospects and possible implementations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.

    2001-03-01

    The historical background of the 'equivalental algebra' formalism for describing neural-network paradigms and algorithms is reviewed; this formalism unifies neural network (NN) theory, linear algebra, and generalized neurobiology, extended to the matrix case. A survey of 'equivalental models' of neural networks and associative memory is given, and new, modified matrix-tensor neurological equivalental models (MTNLEMs) with double adaptive-equivalental weighing (DAEW) are proposed for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that MTNLEMs with DAEW are the most general: they can describe NN processes both within the frames of known paradigms and within a new 'equivalental' paradigm of the non-interaction type. Using the proposed MTNLEMs with DAEW, the computation in such NNs reduces to two-step and multi-step algorithms with step-by-step matrix-tensor procedures (for SNIR) and to procedures that define space-dependent equivalental functions from two images (for SIR).

  17. Big Data-Based Approach to Detect, Locate, and Enhance the Stability of an Unplanned Microgrid Islanding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Li, Yan; Zhang, Yingchen

    In this paper, a big data-based approach is proposed for the security improvement of an unplanned microgrid islanding (UMI). The proposed approach contains two major steps: the first step is big data analysis of wide-area monitoring to detect a UMI and locate it; the second step is particle swarm optimization (PSO)-based stability enhancement for the UMI. First, an optimal synchrophasor measurement device selection (OSMDS) and matching pursuit decomposition (MPD)-based spatial-temporal analysis approach is proposed to significantly reduce the volume of data while keeping appropriate information from the synchrophasor measurements. Second, a random forest-based ensemble learning approach is trained to detect the UMI. When combined with grid topology, the UMI can be located. Then the stability problem of the UMI is formulated as an optimization problem and the PSO is used to find the optimal operational parameters of the UMI. An eigenvalue-based multiobjective function is proposed, which aims to improve the damping and dynamic characteristics of the UMI. Finally, the simulation results demonstrate the effectiveness and robustness of the proposed approach.

  18. A method for generating reduced-order combustion mechanisms that satisfy the differential entropy inequality

    NASA Astrophysics Data System (ADS)

    Ream, Allen E.; Slattery, John C.; Cizmas, Paul G. A.

    2018-04-01

    This paper presents a new method for determining the Arrhenius parameters of a reduced chemical mechanism such that it satisfies the second law of thermodynamics. The strategy is to approximate the progress of each reaction in the reduced mechanism from the species production rates of a detailed mechanism by using a linear least squares method. A series of non-linear least squares curve fittings are then carried out to find the optimal Arrhenius parameters for each reaction. At this step, the molar rates of production are written such that they comply with a theorem that provides the sufficient conditions for satisfying the second law of thermodynamics. This methodology was used to modify the Arrhenius parameters for the Westbrook and Dryer two-step mechanism and the Peters and Williams three-step mechanism for methane combustion. Both optimized mechanisms showed good agreement with the detailed mechanism for species mole fractions and production rates of most major species. Both optimized mechanisms showed significant improvement over previous mechanisms in minor species production rate prediction. Both optimized mechanisms produced no violations of the second law of thermodynamics.
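
    The final curve-fitting stage can be illustrated with a modified-Arrhenius fit in log space using SciPy; the synthetic 'detailed mechanism' rate data and all parameter values are placeholders:

      import numpy as np
      from scipy.optimize import curve_fit

      R = 8.314                      # J/(mol K)

      def ln_k(T, lnA, b, Ea):
          # Modified Arrhenius form: ln k = ln A + b ln T - Ea / (R T)
          return lnA + b * np.log(T) - Ea / (R * T)

      T = np.linspace(900.0, 2200.0, 40)
      lnk_target = ln_k(T, 20.0, 0.5, 1.2e5)        # stand-in "detailed" rates
      lnk_target += 0.05 * np.random.default_rng(2).normal(size=T.size)

      params, _ = curve_fit(ln_k, T, lnk_target, p0=[15.0, 0.0, 1.0e5])
      print(params)                                  # fitted lnA, b, Ea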

  19. Design of Ultra-Wideband Tapered Slot Antenna by Using Binomial Transformer with Corrugation

    NASA Astrophysics Data System (ADS)

    Chareonsiri, Yosita; Thaiwirot, Wanwisa; Akkaraekthalin, Prayoot

    2017-05-01

    In this paper, a tapered slot antenna (TSA) with corrugation is proposed for UWB applications. A multi-section binomial transformer is used to design the taper profile of the proposed TSA, avoiding time-consuming optimization. A step-by-step procedure for synthesizing the step impedance values, and the corresponding step slot widths of the taper profile, is presented. A smooth taper is then achieved by fitting a smoothing curve to the entire stepped slot. The TSA designed with this method yields a quite flat gain and a wide impedance bandwidth covering the UWB spectrum from 3.1 GHz to 10.6 GHz. To further improve the radiation characteristics, corrugation is added on both edges of the proposed TSA, and the effects of different corrugation shapes on the improvement of antenna gain and front-to-back ratio (F-to-B ratio) are investigated. To demonstrate the validity of the design, prototypes of the TSA without and with corrugation were fabricated and measured; the results show good agreement between simulation and measurement.
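
    The synthesis of the step impedances follows the standard binomial (maximally flat) transformer recursion ln(Z_{n+1}/Z_n) = 2^(-N) C(N,n) ln(Z_L/Z_0); a short sketch with example port impedances (mapping impedances to slot widths is a separate step described in the paper):

      import numpy as np
      from math import comb

      def binomial_transformer(z0, zl, n_sections):
          # ln(Z_{n+1}/Z_n) = 2**(-N) * C(N, n) * ln(ZL/Z0), n = 0 .. N-1
          z = [z0]
          for n in range(n_sections):
              step = 2.0 ** (-n_sections) * comb(n_sections, n) * np.log(zl / z0)
              z.append(z[-1] * np.exp(step))
          return z[1:]               # impedances of the N internal sections

      print(binomial_transformer(50.0, 100.0, 4))   # example: 50-ohm to 100-ohm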

  20. A green recyclable SO(3)H-carbon catalyst derived from glycerol for the production of biodiesel from FFA-containing karanja (Pongamia glabra) oil in a single step.

    PubMed

    Prabhavathi Devi, B L A; Vijai Kumar Reddy, T; Vijaya Lakshmi, K; Prasad, R B N

    2014-02-01

    A simultaneous esterification and transesterification method is employed for the single-step preparation of biodiesel from karanja (Pongamia glabra) oil containing 7.5% free fatty acids (FFA), using a water-resistant and reusable carbon-based solid acid catalyst derived from glycerol. The optimum reaction parameters for obtaining biodiesel in >99% yield by simultaneous esterification and transesterification are: methanol at an oil-to-methanol mole ratio of 1:45, catalyst at 20 wt.% of oil, temperature of 160°C, and a reaction time of 4 h. After the reaction, the catalyst was easily recovered by filtration and reused five times without any deactivation under the optimized conditions. This single-step process could be a potential route for biodiesel production from high-FFA oils, simplifying the procedure and reducing costs and effluent generation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Modified two-step emulsion solvent evaporation technique for fabricating biodegradable rod-shaped particles in the submicron size range.

    PubMed

    Safari, Hanieh; Adili, Reheman; Holinstat, Michael; Eniola-Adefeso, Omolola

    2018-05-15

    Though the emulsion solvent evaporation (ESE) technique has been previously modified to produce rod-shaped particles, it cannot generate small-sized rods for drug delivery applications due to the inherent coupling and contradicting requirements for the formation versus stretching of droplets. The separation of the droplet formation from the stretching step should enable the creation of submicron droplets that are then stretched in the second stage by manipulation of the system viscosity along with the surface-active molecule and oil-phase solvent. A two-step ESE protocol is evaluated where oil droplets are formed at low viscosity followed by a step increase in the aqueous phase viscosity to stretch droplets. Different surface-active molecules and oil phase solvents were evaluated to optimize the yield of biodegradable PLGA rods. Rods were assessed for drug loading via an imaging agent and vascular-targeted delivery application via blood flow adhesion assays. The two-step ESE method generated PLGA rods with major and minor axis down to 3.2 µm and 700 nm, respectively. Chloroform and sodium metaphosphate was the optimal solvent and surface-active molecule, respectively, for submicron rod fabrication. Rods demonstrated faster release of Nile Red compared to spheres and successfully targeted an inflamed endothelium under shear flow in vitro and in vivo. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Optimum Design of High-Speed Prop-Rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; McCarthy, Thomas Robert

    1993-01-01

    An integrated multidisciplinary optimization procedure is developed for application to rotary wing aircraft design. The necessary disciplines such as dynamics, aerodynamics, aeroelasticity, and structures are coupled within a closed-loop optimization process. The procedure developed is applied to address two different problems. The first problem considers the optimization of a helicopter rotor blade and the second problem addresses the optimum design of a high-speed tilting proprotor. In the helicopter blade problem, the objective is to reduce the critical vibratory shear forces and moments at the blade root, without degrading rotor aerodynamic performance and aeroelastic stability. In the case of the high-speed proprotor, the goal is to maximize the propulsive efficiency in high-speed cruise without deteriorating the aeroelastic stability in cruise and the aerodynamic performance in hover. The problems studied involve multiple design objectives; therefore, the optimization problems are formulated using multiobjective design procedures. A comprehensive helicopter analysis code is used for the rotary wing aerodynamic, dynamic and aeroelastic stability analyses and an algorithm developed specifically for these purposes is used for the structural analysis. A nonlinear programming technique coupled with an approximate analysis procedure is used to perform the optimization. The optimum blade designs obtained in each case are compared to corresponding reference designs.

  3. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    USGS Publications Warehouse

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  4. STIR: Redox-Switchable Olefin Polymerization Catalysis: Electronically Tunable Ligands for Controlled Polymer Synthesis

    DTIC Science & Technology

    2013-03-28

    positions leading us to utilize a two-step procedure in which the amines were treated with methyl chloroformate before being fully reduced with lithium ...was carried out using lithium aluminum hydride before undergoing a similar two-step methylation as described above to yield bisferrocenyl ligand 16...of Ni-based complex 30. CVs were run in DCM with tetrabutylammonium hexafluorophosphate electrolyte and referenced to a ferrocene standard. In

  5. Structural optimization of framed structures using generalized optimality criteria

    NASA Technical Reports Server (NTRS)

    Kolonay, R. M.; Venkayya, Vipperla B.; Tischler, V. A.; Canfield, R. A.

    1989-01-01

    The application of a generalized optimality criteria to framed structures is presented. The optimality conditions, Lagrangian multipliers, resizing algorithm, and scaling procedures are all represented as a function of the objective and constraint functions along with their respective gradients. The optimization of two plane frames under multiple loading conditions subject to stress, displacement, generalized stiffness, and side constraints is presented. These results are compared to those found by optimizing the frames using a nonlinear mathematical programming technique.

  6. Pricing strategy for aesthetic surgery: economic analysis of a resident clinic's change in fees.

    PubMed

    Krieger, L M; Shaw, W W

    1999-02-01

    The laws of microeconomics explain how prices affect consumer purchasing decisions and thus overall revenues and profits. These principles can easily be applied to the behavior of aesthetic plastic surgery patients. The UCLA Division of Plastic Surgery resident aesthetics clinic recently made a radical change in the prices of its services, and the effects of this change on demand for services and on revenue were tracked. Economic analysis was applied to determine whether this price change maximized total revenues or whether additional price changes could further optimize them. Economic analysis of pricing involves several steps. The first is to assess demand: the number of procedures performed by a given practice at different price levels can be plotted to create a demand curve, from which the price sensitivity of consumers (price elasticity of demand) can be calculated. This information can then be used to determine the price level that creates demand for exactly the number of procedures that yields optimal revenue. In economic parlance, revenues are maximized by pricing services such that elasticity equals 1 (the point of unit elasticity). At the UCLA resident clinic, average total fees per procedure were reduced by 40 percent. This resulted in a 250-percent increase in procedures performed for representative 4-month periods before and after the price change, and net revenues increased by 52 percent. Economic analysis showed that the price elasticity of demand was 6.2 before the price change and 1 after it. We conclude that the magnitude of the price change resulted in a fee schedule that yielded the highest possible revenues from the resident clinic. These results show that changes in price affect total revenue and that the nature of these effects can be understood, predicted, and maximized using the tools of microeconomics.
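
    The unit-elasticity condition can be made concrete with a linear demand curve, where revenue P·Q peaks exactly where elasticity equals 1; the demand parameters below are illustrative, not the clinic's data:

      # Linear demand Q = a - b*P; revenue R = P*Q peaks where the price
      # elasticity b*P/Q equals 1, i.e., at P* = a / (2*b).
      a, b = 70.0, 0.05              # illustrative demand parameters

      def elasticity(p):
          return b * p / (a - b * p)

      p_star = a / (2 * b)           # unit-elasticity, revenue-maximizing price
      print(p_star, elasticity(p_star))   # 700.0 1.0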

  7. Crew/computer communications study. Volume 1: Final report. [onboard computerized communications system for spacecrews

    NASA Technical Reports Server (NTRS)

    Johannes, J. D.

    1974-01-01

    Techniques, methods, and system requirements are reported for an onboard computerized communications system that provides on-line computing capability during manned space exploration. Communications between man and computer take place by sequential execution of each discrete step of a procedure, by interactive progression through a tree-type structure to initiate tasks or by interactive optimization of a task requiring man to furnish a set of parameters. Effective communication between astronaut and computer utilizes structured vocabulary techniques and a word recognition system.

  8. Disclosing medical mistakes: a communication management plan for physicians.

    PubMed

    Petronio, Sandra; Torke, Alexia; Bosslet, Gabriel; Isenberg, Steven; Wocial, Lucia; Helft, Paul R

    2013-01-01

    There is a growing consensus that disclosure of medical mistakes is ethically and legally appropriate, but such disclosures are made difficult by medical traditions of concern about malpractice suits and by physicians' own emotional reactions. Because the physician may have compelling reasons both to keep the information private and to disclose it to the patient or family, these situations can be conceptualized as privacy dilemmas. These dilemmas may create barriers to effectively addressing the mistake and its consequences. Although a number of interventions exist to address the privacy dilemmas that physicians face, current evidence suggests that physicians tend to be slow to adopt the practice of disclosing medical mistakes. This discussion proposes a theoretically based, streamlined, two-step plan that physicians can use as an initial guide for conversations with patients about medical mistakes. The mistake disclosure management plan draws on communication privacy management theory. The steps are 1) physician preparation, such as talking about the physician's emotions and seeking information about the mistake, and 2) use of mistake disclosure strategies that protect the physician-patient relationship, including optimal timing, context of disclosure delivery, content of mistake messages, sequencing, and apology. A case study highlights the disclosure process. This mistake disclosure management plan may help physicians in the early stages after mistake discovery to prepare for the initial disclosure of a medical mistake. The next step is testing implementation of the suggested procedures.

  9. Subspace methods for identification of human ankle joint stiffness.

    PubMed

    Zhao, Y; Westwick, D T; Kearney, R E

    2011-11-01

    Joint stiffness, the dynamic relationship between the angular position of a joint and the torque acting about it, describes the dynamic, mechanical behavior of a joint during posture and movement. Joint stiffness arises from both intrinsic and reflex mechanisms, but the torques due to these mechanisms cannot be measured separately experimentally, since they appear and change together; the direct estimation of the intrinsic and reflex stiffnesses is therefore difficult. In this paper, we present a new, two-step procedure to estimate the intrinsic and reflex components of ankle stiffness. In the first step, a discrete-time, subspace-based method is used to estimate a state-space model of the overall stiffness from the measured overall torque and then to predict the intrinsic and reflex torques. In the second step, continuous-time models of the intrinsic and reflex stiffnesses are estimated from the predicted intrinsic and reflex torques. Simulations and experimental results demonstrate that the algorithm estimates the intrinsic and reflex stiffnesses accurately. The new subspace-based algorithm has three advantages over previous algorithms: 1) it does not require iteration, and therefore always converges to an optimal solution; 2) it provides better estimates for data with high noise or short sample lengths; and 3) it provides much more accurate results for data acquired under the closed-loop conditions that prevail when subjects interact with compliant loads.

  10. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography.

    PubMed

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L; Armour, Wes; Waterman, David G; Iwata, So; Evans, Gwyndaf

    2013-08-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein.

  11. Clustering procedures for the optimal selection of data sets from multiple crystals in macromolecular crystallography

    PubMed Central

    Foadi, James; Aller, Pierre; Alguel, Yilmaz; Cameron, Alex; Axford, Danny; Owen, Robin L.; Armour, Wes; Waterman, David G.; Iwata, So; Evans, Gwyndaf

    2013-01-01

    The availability of intense microbeam macromolecular crystallography beamlines at third-generation synchrotron sources has enabled data collection and structure solution from microcrystals of <10 µm in size. The increased likelihood of severe radiation damage where microcrystals or particularly sensitive crystals are used forces crystallographers to acquire large numbers of data sets from many crystals of the same protein structure. The associated analysis and merging of multi-crystal data is currently a manual and time-consuming step. Here, a computer program, BLEND, that has been written to assist with and automate many of the steps in this process is described. It is demonstrated how BLEND has successfully been used in the solution of a novel membrane protein. PMID:23897484
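
    A minimal sketch of the data-set selection idea follows: cluster the crystals by the similarity of their unit-cell parameters (the criterion BLEND builds its dendrogram on) and merge only within a cluster. The cell constants below are made up, and BLEND itself applies additional filtering and merging statistics.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# One row per crystal: unit-cell axes (a, b, c) in angstroms (hypothetical).
cells = np.array([[78.1, 78.3, 37.0],
                  [78.0, 78.2, 37.1],
                  [79.5, 79.6, 38.2],     # different cell -> separate cluster
                  [78.2, 78.1, 37.0],
                  [79.4, 79.7, 38.1]])

Z = linkage(cells, method="ward")                 # hierarchical clustering
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into 2 clusters
for k in np.unique(labels):
    print(f"cluster {k}: crystals {np.where(labels == k)[0].tolist()} -> merge together")
```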

  12. A novel two-step procedure to expand Sca-1+ cells clonally

    PubMed Central

    Tang, Yao Liang; Shen, Leping; Qian, Keping; Phillips, M. Ian

    2007-01-01

    Resident cardiac stem cells (CSCs) are characterized by their capacity to self-renew in culture, and are multi-potent for forming normal cell types in hearts. CSCs were originally isolated directly from enzymatically digested hearts using stem cell markers. However, long exposure to enzymatic digestion can affect the integrity of stem cell markers on the cell surface, and also compromise stem cell function. Alternatively, resident CSCs can migrate from tissue explants and form cardiospheres in culture. However, fibroblast contamination can easily occur during CSC culture. To avoid these problems, we developed a two-step procedure: the cells are grown before the Sca-1+ cells are selected, and are then cultured in cardiac fibroblast-conditioned medium, which avoids fibroblast overgrowth. PMID:17577582

  13. Critical aspects of data analysis for quantification in laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Motto-Ros, V.; Syvilay, D.; Bassel, L.; Negre, E.; Trichard, F.; Pelascini, F.; El Haddad, J.; Harhira, A.; Moncayo, S.; Picard, J.; Devismes, D.; Bousquet, B.

    2018-02-01

    In this study, a collaborative contest focused on LIBS data processing was conducted in an original way: rather than analyzing the same samples on their own LIBS experiments, the participants all worked from a single set of LIBS spectra obtained in one experiment. Each participant was asked to provide the predicted concentrations of several elements for two glass samples. The analytical contest revealed a wide diversity of results among participants, even when the same spectral lines were considered for the analysis. A parametric study was then conducted to investigate the influence of each step of the data processing. This study was based on several analytical figures of merit such as the determination coefficient, uncertainty, limit of quantification and prediction ability (i.e., trueness). It was thus possible to interpret the results provided by the participants, emphasizing the fact that the type of data extraction, the baseline modeling, and the calibration model play key roles in the quantification performance of the technique. This work provides a set of recommendations based on a systematic evaluation of the quantification procedure with the aim of optimizing the methodological steps toward the standardization of LIBS.
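
    As a minimal sketch of the calibration step whose choices the contest probed, the block below fits a univariate linear calibration and reports the determination coefficient, a limit of quantification, and a prediction. All intensities and the blank standard deviation are invented numbers.

```python
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])                # standards, wt.%
intensity = np.array([3.0, 55.0, 110.0, 205.0, 520.0, 1010.0])  # line intensity, a.u.

slope, intercept = np.polyfit(conc, intensity, 1)    # linear calibration model
pred = slope * conc + intercept
r2 = 1 - np.sum((intensity - pred) ** 2) / np.sum((intensity - intensity.mean()) ** 2)

s_blank = 2.0                      # std. dev. of the blank signal (hypothetical)
loq = 10 * s_blank / slope         # common 10-sigma limit of quantification

unknown = 330.0                    # measured intensity of an unknown sample
print(f"R^2={r2:.4f}, LOQ={loq:.2f} wt.%, predicted={(unknown - intercept) / slope:.2f} wt.%")
```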

  14. Synthesis of core-shell molecularly imprinted polymer microspheres by precipitation polymerization for the inline molecularly imprinted solid-phase extraction of thiabendazole from citrus fruits and orange juice samples.

    PubMed

    Barahona, Francisco; Turiel, Esther; Cormack, Peter A G; Martín-Esteban, Antonio

    2011-01-01

    In this work, the synthesis of molecularly imprinted polymer microspheres with narrow particle size distributions and core-shell morphology by a two-step precipitation polymerization procedure is described. Polydivinylbenzene (poly DVB-80) core particles were used as seed particles in the production of molecularly imprinted polymer shells by copolymerization of divinylbenzene-80 with methacrylic acid in the presence of thiabendazole (TBZ) and an appropriate porogen. Thereafter, polymer particles were packed into refillable stainless steel HPLC columns used in the development of an inline molecularly imprinted SPE method for the determination of TBZ in citrus fruits and orange juice samples. Under optimized chromatographic conditions, recoveries of TBZ within the range 81.1-106.4%, depending upon the sample, were obtained, with RSDs lower than 10%. This novel method permits the unequivocal determination of TBZ in the samples under study, according to the maximum residue levels allowed within Europe, in less than 20 min and without any need for a clean-up step in the analytical protocol. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. A high performance pMOSFET with two-step recessed SiGe-S/D structure for 32 nm node and beyond

    NASA Astrophysics Data System (ADS)

    Yasutake, Nobuaki; Azuma, Atsushi; Ishida, Tatsuya; Ohuchi, Kazuya; Aoki, Nobutoshi; Kusunoki, Naoki; Mori, Shinji; Mizushima, Ichiro; Morooka, Tetsu; Kawanaka, Shigeru; Toyoshima, Yoshiaki

    2007-11-01

    A novel SiGe-S/D structure for high-performance pMOSFETs, called the two-step recessed SiGe source/drain (S/D), is developed through careful optimization of the recessed SiGe-S/D structure. With this method, hole mobility, short-channel effect and S/D resistance in the pMOSFET are improved compared with the conventional recessed SiGe-S/D structure. To enhance device performance, such as drain current drivability, the SiGe region has to be closer to the channel region. A conventional deep SiGe-S/D region with a carefully optimized shallow SiGe SDE region then showed additional device performance improvement without SCE degradation. As a result, a high-performance 24 nm gate length pMOSFET was demonstrated with a drive current of 451 μA/μm at |Vdd| of 0.9 V and Ioff of 100 nA/μm (552 μA/μm at |Vdd| of 1.0 V). Furthermore, by combining with Vdd scaling, we show the extendability of the two-step recessed SiGe-S/D structure down to the 15 nm node generation.

  16. [Implementation of a rational standard of hygiene for preparation of operating rooms].

    PubMed

    Bauer, M; Scheithauer, S; Moerer, O; Pütz, H; Sliwa, B; Schmidt, C E; Russo, S G; Waeschle, R M

    2015-10-01

    The assurance of high standards of care is a major requirement in German hospitals, while cost reduction and efficient use of resources are mandatory. These requirements are particularly evident in the high-risk and cost-intensive operating theatre field with its multiple process steps. The cleaning of operating rooms (OR) between surgical procedures is of major relevance for patient safety and requires time and human resources. The hygiene procedure plan for OR cleaning between operations at the university hospital in Göttingen was revised and optimized according to the plan-do-check-act principle, because specifications of responsibilities and use of resources were not clearly defined and because of prolonged process times and increased staff engagement. The current status was evaluated in 2012 as part of the first step, "plan". The subsequent step, "do", included an expert symposium with external consultants, interdisciplinary consensus conferences with an update of the former hygiene procedure plan, and the implementation process. All staff members involved were integrated into this change management process. The penetration rate of the training and information measures, as well as the acceptance of and compliance with the new hygiene procedure plan, were reviewed in the "check" step. The rates of positive swabs and air samples as well as of postoperative wound infections were analyzed for quality control, and no evidence of reduced effectiveness of the new hygiene plan was found. After the successful implementation of these measures, the next improvement cycle ("act") was performed in 2014, which led to a simplification of the hygiene plan by reducing the number of defined cleaning and disinfection programs for preparation of the OR. The reorganization measures described led to comprehensive commitment to the hygiene procedure plan through distinct specifications of responsibilities, courses of action and use of resources. Furthermore, a simplification of the plan, a rational staff assignment and reduced process times were accomplished. Finally, potential conflicts due to insufficient evidence-based knowledge among personnel were reduced. The present project description can be used by other hospitals as a guideline for similar changes in management processes.

  17. Structural tailoring of engine blades (STAEBL)

    NASA Technical Reports Server (NTRS)

    Platt, C. E.; Pratt, T. K.; Brown, K. W.

    1982-01-01

    A mathematical optimization procedure was developed for the structural tailoring of engine blades and was used to structurally tailor two engine fan blades constructed of composite materials without midspan shrouds. The first was a solid blade made from superhybrid composites, and the second was a hollow blade with metal matrix composite inlays. Three major computerized functions were needed to complete the procedure: approximate analysis with the established input variables, optimization of an objective function, and refined analysis for design verification.

  18. Image registration with auto-mapped control volumes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, Eduard; Xing Lei

    2006-04-15

    Many image registration algorithms rely on the use of homologous control points on the two input image sets to be registered. In reality, the interactive identification of the control points on both images is tedious, difficult, and often a source of error. We propose a two-step algorithm to automatically identify homologous regions that are used as a priori information during the image registration procedure. First, a number of small control volumes having distinct anatomical features are identified on the model image in a somewhat arbitrary fashion. Instead of attempting to find their correspondences in the reference image through user interaction, in the proposed method each of the control regions is mapped to the corresponding part of the reference image by an automated image registration algorithm. A normalized cross-correlation (NCC) function or mutual information was used as the auto-mapping metric, and a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was employed to optimize the function and find the optimal mapping. For rigid registration, the transformation parameters of the system are obtained by averaging those derived from the individual control volumes. In the deformable calculation, the mapped control volumes are treated as nodes or control points with known positions on the two images. If the number of control volumes is not enough to cover the whole image to be registered, additional nodes are placed on the model image and then located on the reference image in a manner similar to the conventional B-spline deformable calculation. For deformable registration, the correspondence established by the auto-mapped control volumes provides valuable guidance for the registration calculation and greatly reduces the dimensionality of the problem. The two-step registration was applied to three rigid registration cases (two PET-CT registrations and a brain MRI-CT registration) and one deformable registration of the inhale and exhale phases of a lung 4D CT. Algorithm convergence was confirmed by starting the registration calculations from a large number of initial transformation parameters. An accuracy of approximately 2 mm was achieved for both deformable and rigid registration. The proposed image registration method greatly reduces the complexity involved in the determination of homologous control points and allows us to minimize the subjectivity and uncertainty associated with the current manual interactive approach. Patient studies have indicated that the two-step registration technique is fast and reliable, and provides a valuable tool to facilitate both rigid and nonrigid image registrations.
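
    A translation-only 2-D illustration of the auto-mapping step follows: maximize the NCC between a control volume taken from the model image and a patch of the reference image, optimized with the L-BFGS family named in the abstract (SciPy's L-BFGS-B here). The images are synthetic; the paper's implementation handles clinical 3-D data and richer transformations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.optimize import minimize

rng = np.random.default_rng(1)
reference = gaussian_filter(rng.standard_normal((64, 64)), sigma=3.0)
true_offset = np.array([3.4, -2.1])
control_vol = nd_shift(reference, -true_offset)[20:44, 20:44]   # model-image patch

def neg_ncc(offset):
    """Negative normalized cross-correlation at a candidate translation."""
    patch = nd_shift(reference, -offset)[20:44, 20:44]
    a, b = patch - patch.mean(), control_vol - control_vol.mean()
    return -np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# L-BFGS-B with finite-difference gradients; a decent starting guess is
# needed in practice, since the NCC landscape is only locally smooth.
res = minimize(neg_ncc, x0=np.zeros(2), method="L-BFGS-B",
               bounds=[(-8, 8), (-8, 8)])
print("recovered offset:", np.round(res.x, 2), " true:", true_offset)
```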

  19. Tetraethylene glycol promoted two-step, one-pot rapid synthesis of indole-3-[1-11C]acetic acid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Sojeong; Qu, Wenchao; Alexoff, David L.

    2014-12-12

    An operationally friendly, two-step, one-pot process has been developed for the rapid synthesis of carbon-11 labeled indole-3-acetic acid ([11C]IAA or [11C]auxin). By replacing an aprotic polar solvent with tetraethylene glycol, nucleophilic [11C]cyanation and alkaline hydrolysis reactions were performed consecutively in a single pot without a time-consuming intermediate purification step. The entire production time for this updated procedure is 55 min, which dramatically simplifies the entire synthesis and reduces the starting radioactivity required for a whole-plant imaging study.

  20. Determination of the optimal number of components in independent components analysis.

    PubMed

    Kassouf, Amine; Jouan-Rimbaud Bouveresse, Delphine; Rutledge, Douglas N

    2018-03-01

    Independent components analysis (ICA) may be considered as one of the most established blind source separation techniques for the treatment of complex data sets in analytical chemistry. Like other similar methods, the determination of the optimal number of latent variables, in this case, independent components (ICs), is a crucial step before any modeling. Therefore, validation methods are required in order to decide about the optimal number of ICs to be used in the computation of the final model. In this paper, three new validation methods are formally presented. The first one, called Random_ICA, is a generalization of the ICA_by_blocks method. Its specificity resides in the random way of splitting the initial data matrix into two blocks, and then repeating this procedure several times, giving a broader perspective for the selection of the optimal number of ICs. The second method, called KMO_ICA_Residuals is based on the computation of the Kaiser-Meyer-Olkin (KMO) index of the transposed residual matrices obtained after progressive extraction of ICs. The third method, called ICA_corr_y, helps to select the optimal number of ICs by computing the correlations between calculated proportions and known physico-chemical information about samples, generally concentrations, or between a source signal known to be present in the mixture and the signals extracted by ICA. These three methods were tested using varied simulated and experimental data sets and compared, when necessary, to ICA_by_blocks. Results were relevant and in line with expected ones, proving the reliability of the three proposed methods. Copyright © 2017 Elsevier B.V. All rights reserved.
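
    The block below sketches the block-splitting idea behind ICA_by_blocks and its randomized generalization Random_ICA: split the samples at random into two blocks, extract k ICs from each, and check whether the matched component signatures still correlate strongly; agreement collapses once k exceeds the true number of sources. The data are simulated mixtures, and the thresholds, repeat counts, and matching rule are illustrative rather than the published methods' exact choices.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n, p, k_true = 200, 400, 3
S = rng.laplace(size=(k_true, p))                # source "spectra"
A = rng.random((n, k_true))                      # per-sample proportions
X = A @ S + 0.01 * rng.standard_normal((n, p))   # measured data matrix

def match_score(M1, M2):
    """Worst best-match |correlation| between two sets of component signatures."""
    k = M1.shape[1]
    C = np.abs(np.corrcoef(M1.T, M2.T)[:k, k:])
    return C.max(axis=1).min()

for k in range(1, 6):
    scores = []
    for _ in range(5):                           # several random splits
        idx = rng.permutation(n)
        sigs = []
        for half in (idx[: n // 2], idx[n // 2:]):
            ica = FastICA(n_components=k, random_state=0, max_iter=2000)
            ica.fit(X[half])
            sigs.append(ica.mixing_)             # (p, k) feature-domain signatures
        scores.append(match_score(*sigs))
    print(f"k={k}: agreement={np.mean(scores):.3f}")   # drops past k_true
```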

  1. Maze Procedures for Atrial Fibrillation, From History to Practice.

    PubMed

    Kik, Charles; Bogers, Ad J J C

    2011-10-01

    Atrial fibrillation may result in significant symptoms, (systemic) thrombo-embolism, as well as tachycardia-induced cardiomyopathy with cardiac failure, and consequently be associated with significant morbidity and mortality. Nowadays symptomatic atrial fibrillation can be treated with catheter-based ablation, surgical ablation or hybrid approaches. In this setting a fairly large number of surgical approaches and procedures are described and being practised. It should be clear that the Cox-maze procedure resulted from building up evidence and experience in different steps, while some of the present surgical approaches and techniques are based only on technical feasibility with limited experience, rather than on a process of consistent methodology. Some of the issues still under debate are whether the maze procedure can be limited to the left atrium or even to isolation of the pulmonary veins or whether bi-atrial procedures are indicated, whether or not cardiopulmonary bypass is to be applied, and which route of exposure facilitates an optimal result. In addition, maze procedures are not guided by electrophysiological mapping; at least in theory, not all lesions of the maze procedure are necessary in all patients. A history and aspects of current practice in the surgical treatment of atrial fibrillation are presented.

  2. Maze Procedures for Atrial Fibrillation, From History to Practice

    PubMed Central

    Kik, Charles; Bogers, Ad J.J.C.

    2011-01-01

    Atrial fibrillation may result in significant symptoms, (systemic) thrombo-embolism, as well as tachycardia-induced cardiomyopathy with cardiac failure, and consequently be associated with significant morbidity and mortality. Nowadays symptomatic atrial fibrillation can be treated with catheter-based ablation, surgical ablation or hybrid approaches. In this setting a fairly large number of surgical approaches and procedures are described and being practised. It should be clear that the Cox-maze procedure resulted from building up evidence and experience in different steps, while some of the present surgical approaches and techniques are based only on technical feasibility with limited experience, rather than on a process of consistent methodology. Some of the issues still under debate are whether the maze procedure can be limited to the left atrium or even to isolation of the pulmonary veins or whether bi-atrial procedures are indicated, whether or not cardiopulmonary bypass is to be applied, and which route of exposure facilitates an optimal result. In addition, maze procedures are not guided by electrophysiological mapping; at least in theory, not all lesions of the maze procedure are necessary in all patients. A history and aspects of current practice in the surgical treatment of atrial fibrillation are presented. PMID:28357007

  3. Synthesis of Bipartite Tetracysteine PNA Probes for DNA In Situ Fluorescent Labeling.

    PubMed

    Fang, Ge-Min; Seitz, Oliver

    2017-12-24

    "Label-free" fluorescent probes that avoid additional steps or building blocks for conjugation of fluorescent dyes with oligonucleotides can significantly reduce the time and cost of parallel bioanalysis of a large number of nucleic acid samples. A method for the synthesis of "label-free" bicysteine-modified PNA probes using solid-phase synthesis and procedures for sequence-specific DNA in situ fluorescent labeling is described here. The concept is based on the adjacent alignment of two bicysteine-modified peptide nucleic acids on a DNA target to form a structurally optimized bipartite tetracysteine motif, which induces a sequence-specific fluorogenic reaction with commercially available biarsenic dyes, even in complex media such as cell lysate. This unit will help researchers to quickly synthesize bipartite tetracysteine PNA probes and carry out low-cost DNA in situ fluorescent labeling experiments. © 2017 by John Wiley & Sons, Inc. Copyright © 2017 John Wiley & Sons, Inc.

  4. Apricot (Prunus armeniaca L.).

    PubMed

    Petri, César; Alburquerque, Nuria; Burgos, Lorenzo

    2015-01-01

    A protocol for Agrobacterium-mediated stable transformation of whole leaf explants of the apricot (Prunus armeniaca) cultivars 'Helena' and 'Canino' is described. Regenerated buds were selected using a two-step selection strategy with paromomycin sulfate and, for optimal survival, transferred to bud multiplication medium 1 week after they were detected. After buds were transferred to bud multiplication medium, the antibiotic was changed to kanamycin and its concentration increased gradually at each transfer to fresh medium in order to eliminate possible escapes and chimeras. Transformation efficiency, based on PCR analysis of individual putative transformed shoots from independent lines, was 5.6%. Green and healthy buds surviving the high kanamycin concentration were transferred to shoot multiplication medium, where they elongated into shoots and proliferated. Elongated transgenic shoots were rooted in a medium containing 70 μM kanamycin. Rooted plants were acclimatized following standard procedures. This constitutes the only transformation protocol described for apricot clonal tissues and one of the few for Prunus.

  5. [Extensive treatment of teacher's voice disorders in health spa].

    PubMed

    Niebudek-Bogusz, Ewa; Marszałek, Sławomir; Woźnicka, Ewelina; Minkiewicz, Zofia; Hima, Joanna; Sliwińska-Kowalska, Mariola

    2010-01-01

    Treatment in a health spa with proper infrastructure and professional medical care can provide optimal conditions for intensive voice rehabilitation, especially for people with occupational voice disorders. The most numerous group of people with voice disorders are teachers. In Poland, they have an opportunity to take care of, or regain, their health during a one-year paid leave. The authors describe a multi-specialist model of extensive treatment of voice disorders in a health spa, including holistic and interdisciplinary procedures in occupational dysphonia. Apart from balneotherapy, the spa treatment includes vocal training exercises, relaxation exercises, elements of physiotherapy with the larynx manual therapy and psychological workshops. The voice rehabilitation organized already for two groups of teachers has been received with great satisfaction by this occupational group. The implementation of a model program of extensive treatment of voice disorders in a health spa should become one of the steps aimed at preventing occupational voice diseases.

  6. Small-signal modeling with direct parameter extraction for impact ionization effect in high-electron-mobility transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guan, He; Lv, Hongliang; Guo, Hui, E-mail: hguan@stu.xidian.edu.cn

    2015-11-21

    Impact ionization affects the radio-frequency (RF) behavior of high-electron-mobility transistors (HEMTs), which have narrow-bandgap semiconductor channels, and this necessitates complex parameter extraction procedures for HEMT modeling. In this paper, an enhanced small-signal equivalent circuit model is developed to investigate the impact ionization, and an improved method is presented in detail for direct extraction of intrinsic parameters using two-step measurements in low-frequency and high-frequency regimes. The practicability of the enhanced model and the proposed direct parameter extraction method are verified by comparing the simulated S-parameters with published experimental data from an InAs/AlSb HEMT operating over a wide frequency range. The results demonstrate that the enhanced model with optimal intrinsic parameter values that were obtained by the direct extraction approach can effectively characterize the effects of impact ionization on the RF performance of HEMTs.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Zhang, Zixuan; Lu, Jie

    Carbon fiber composites have received growing attention because of their high performance. One economical method of manufacturing composite parts is the sequence of preforming followed by the compression molding process. In this sequence, the preforming procedure forms the prepreg, the composite with uncured resin, into the product geometry, while the molding process cures the resin. Slip between different prepreg layers is observed in the preforming step, and this paper reports a method to characterize the interaction between different prepreg layers, which is critical to predictive modeling and design optimization. An experimental setup was established to evaluate the interactions at various industrial production conditions. The experimental results were analyzed for an in-depth understanding of how the temperature, the relative sliding speed, and the fiber orientation affect the tangential interaction between two prepreg layers. The interaction factors measured from these experiments will be implemented in a computational preforming program.

  8. Perceptual Color Characterization of Cameras

    PubMed Central

    Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo

    2014-01-01

    Color camera characterization, mapping outputs from the camera sensors to an independent color space, such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures. PMID:25490586
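
    For reference, the least-squares baseline that the paper improves upon amounts to one matrix solve, as sketched below with synthetic patch data; the paper's contribution is to replace this objective with perceptual error measures minimized over spherically sampled matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
M_true = np.array([[0.41, 0.36, 0.18],      # invented "ground-truth" matrix
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
rgb = rng.random((24, 3))                            # 24 chart patches, camera RGB
xyz = rgb @ M_true.T + 0.005 * rng.standard_normal((24, 3))   # measured XYZ

# Solve xyz ~= rgb @ M.T for all three output channels in one lstsq call.
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = M_fit.T
print("fitted 3x3 characterization matrix:\n", np.round(M_fit, 3))
```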

  9. Full glowworm swarm optimization algorithm for whole-set orders scheduling in single machine.

    PubMed

    Yu, Zhang; Yang, Xiaomei

    2013-01-01

    By analyzing the characteristics of the whole-set orders problem and combining it with the theory of glowworm swarm optimization, a new glowworm swarm optimization algorithm for scheduling is proposed. A new hybrid encoding scheme combining two-dimensional encoding and random-key encoding is given. In order to enhance the capability of optimal searching and speed up the convergence rate, a dynamically changing step-size strategy is integrated into the algorithm. Furthermore, experimental results prove its feasibility and efficiency.
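
    A minimal glowworm swarm optimization loop with a shrinking step size is sketched below on a toy 2-D function, to make the "dynamically changing step" idea concrete. The parameter values and decay rate are illustrative and not those of the paper, which additionally works on the hybrid scheduling encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                      # maximize: single peak at the origin
    return -np.sum(x ** 2, axis=-1)

n, dim, iters = 30, 2, 100
pos = rng.uniform(-5, 5, (n, dim))
luciferin = np.zeros(n)
rho, gamma, radius, step = 0.4, 0.6, 3.0, 0.5

for _ in range(iters):
    luciferin = (1 - rho) * luciferin + gamma * fitness(pos)   # luciferin update
    new_pos = pos.copy()
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.where((d < radius) & (luciferin > luciferin[i]))[0]
        if nbrs.size:                          # move toward a brighter neighbor
            j = rng.choice(nbrs)
            new_pos[i] = pos[i] + step * (pos[j] - pos[i]) / (d[j] + 1e-12)
    pos = new_pos
    step *= 0.98                               # dynamically shrinking step size

best = pos[np.argmax(fitness(pos))]
print("best position:", np.round(best, 3), " fitness:", fitness(best))
```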

  10. Methods for the design and analysis of power optimized finite-state machines using clock gating

    NASA Astrophysics Data System (ADS)

    Chodorowski, Piotr

    2017-11-01

    The paper discusses two methods for the design of power-optimized FSMs. Both methods use clock gating techniques. The main objective of the research was to write a program capable of automatically generating hardware descriptions of finite-state machines in VHDL, together with testbenches to support power analysis. The creation of the relevant output files is detailed step by step. The program was tested using the LGSynth91 FSM benchmark package. An analysis of the generated circuits shows that the second method presented in this paper leads to a significant reduction in power consumption.

  11. Effect of Saliva on the Tensile Bond Strength of Different Generation Adhesive Systems: An In-Vitro Study.

    PubMed

    Gupta, Nimisha; Tripathi, Abhay Mani; Saha, Sonali; Dhinsa, Kavita; Garg, Aarti

    2015-07-01

    Newer developments in bonding agents have provided a better understanding of the factors affecting adhesion at the composite-dentin interface, with the aim of improving the longevity of restorations. The present study evaluated the influence of salivary contamination on the tensile bond strength of different generations of adhesive systems (two-step etch-and-rinse, two-step self-etch and one-step self-etch) at different bonding stages to dentin where isolation is not maintained. Superficial dentin surfaces of 90 extracted human molars were randomly divided into three study groups (Group A: two-step etch-and-rinse adhesive system; Group B: two-step self-etch adhesive system; and Group C: one-step self-etch adhesive system) according to the generation of adhesive used. According to the treatment conditions in the different bonding steps, each group was further divided into three subgroups of ten teeth each. After adhesive application, resin composite blocks were built on the dentin and subsequently light cured. The teeth were then stored in water for 24 hours before tensile bond strength testing with a universal testing machine. The collected data were statistically analysed using one-way ANOVA and the Tukey HSD test. The one-step self-etch adhesive system showed the maximum mean tensile bond strength, followed in descending order by the two-step self-etch and the two-step etch-and-rinse adhesive systems, in both uncontaminated and saliva-contaminated conditions. Unlike the one-step self-etch adhesive system, saliva contamination reduced the tensile bond strength of the two-step self-etch and two-step etch-and-rinse adhesive systems. Furthermore, both the step of the bonding procedure and the type of adhesive appear to affect the bond strength of adhesives contaminated with saliva.

  12. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments using an existing experimental procedure, for two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, these experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus exempting the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search, and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without assuming a parametric model as the proxy of the latent data structure, while the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.

  13. A novel two-step optimization method for tandem and ovoid high-dose-rate brachytherapy treatment for locally advanced cervical cancer.

    PubMed

    Sharma, Manju; Fields, Emma C; Todor, Dorin A

    2015-01-01

    To present a novel method allowing fast volumetric optimization of tandem and ovoid high-dose-rate treatments and to quantify its benefits. Twenty-seven CT-based treatment plans from 6 consecutive cervical cancer patients treated with four to five intracavitary tandem and ovoid insertions were used. The initial single-step optimized plans were the manually optimized, approved, and delivered plans, created with the goal of covering the high-risk clinical target volume (HR-CTV) with D90 >90% and minimizing rectum, bladder, and sigmoid D2cc. For the two-step optimized (TSO) plan, each single-step optimized plan was replanned by adding a structure created from the prescription isodose line to the existing physician-delineated HR-CTV, rectum, bladder, and sigmoid. New, more rigorous dose-volume histogram constraints for the critical organs at risk (OARs) were used for the optimization. HR-CTV D90 and OAR D2ccs were evaluated in both plans. TSO plans had consistently smaller D2ccs for all three OARs while preserving HR-CTV D90. On plans with "excellent" CTV coverage, average D90 of 96% (91-102%), sigmoid, bladder, and rectum D2cc were reduced on average by 37% (16-73%), 28% (20-47%), and 27% (15-45%), respectively. Similar reductions were obtained on plans with "good" coverage, average D90 of 93% (90-99%). For plans with "inferior" coverage, average D90 of 81%, the coverage increased to 87% with concurrent D2cc reductions of 31%, 18%, and 11% for sigmoid, bladder, and rectum, respectively. TSO can be added with minimal increase in planning time but with the potential for dramatic and systematic reductions in OAR D2ccs, and in some cases a concurrent increase in target dose coverage. These single-fraction modifications would be magnified over the course of four to five intracavitary insertions and may have real clinical implications in terms of decreasing both acute and late toxicities. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  14. A triangular thin shell finite element: Nonlinear analysis. [structural analysis

    NASA Technical Reports Server (NTRS)

    Thomas, G. R.; Gallagher, R. H.

    1975-01-01

    Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.
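
    A scalar toy version of this solution scheme is sketched below: one incremental (tangent-stiffness) predictor per load increment followed by a single Newton-Raphson correction. The "structure" is a softening spring P(u) = k·u − c·u³ with invented constants, not the shell element of the paper.

```python
def P_int(u):            # internal force of the toy softening spring
    return 100.0 * u - 40.0 * u**3

def K_t(u):              # tangent stiffness dP/du
    return 100.0 - 120.0 * u**2

u, n_steps, P_max = 0.0, 10, 30.0
dP = P_max / n_steps
for step in range(1, n_steps + 1):
    P_ext = step * dP
    u += dP / K_t(u)                      # incremental (tangent) predictor
    u += (P_ext - P_int(u)) / K_t(u)      # one Newton-Raphson correction
    print(f"P={P_ext:6.2f}  u={u:.5f}  residual={P_ext - P_int(u):+.2e}")
```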

  15. Oranges, Posters, Ribbons, and Lemonade: Concrete Computational Strategies for Dividing Fractions

    ERIC Educational Resources Information Center

    Kribs-Zaleta, Christopher M.

    2008-01-01

    This article describes how sixth-grade students developed concrete models to solve division of fractions story problems. Students developed separate two-step procedures to solve measurement and partitive problems, drawing on invented procedures for division of whole numbers. Errors also tended to be specific to the type of division problem…

  16. Precise non-steady-state characterization of solid active materials with no preliminary mechanistic assumptions

    DOE PAGES

    Constales, Denis; Yablonsky, Gregory S.; Wang, Lucun; ...

    2017-04-25

    This paper presents a straightforward and user-friendly procedure for extracting a reactivity characterization of catalytic reactions on solid materials under non-steady-state conditions, particularly in temporal analysis of products (TAP) experiments. The kinetic parameters derived by this procedure can help with the development of detailed mechanistic understanding. The procedure consists of the following two major steps: 1) three "Laplace reactivities" are first determined based on the moments of the exit flow pulse response data; 2) depending on the selected kinetic model, kinetic constants of elementary reaction steps can then be expressed as a function of the reactivities and determined accordingly. In particular, we distinguish two calculation methods based on the availability and reliability of reactant and product data. The theoretical results are illustrated using a reverse example with given parameters as well as an experimental example of CO oxidation over a supported Au/SiO2 catalyst. The procedure presented here provides an efficient tool for kinetic characterization of many complex chemical reactions.

  17. Designing a fully automated multi-bioreactor plant for fast DoE optimization of pharmaceutical protein production.

    PubMed

    Fricke, Jens; Pohlmann, Kristof; Jonescheit, Nils A; Ellert, Andree; Joksch, Burkhard; Luttmann, Reiner

    2013-06-01

    The identification of optimal expression conditions for state-of-the-art production of pharmaceutical proteins is a very time-consuming and expensive process. In this report a method for rapid and reproducible optimization of protein expression in an in-house designed small-scale BIOSTAT® multi-bioreactor plant is described. A newly developed BioPAT® MFCS/win Design of Experiments (DoE) module (Sartorius Stedim Systems, Germany) connects the process control system MFCS/win and the DoE software MODDE® (Umetrics AB, Sweden) and enables therefore the implementation of fully automated optimization procedures. As a proof of concept, a commercial Pichia pastoris strain KM71H has been transformed for the expression of potential malaria vaccines. This approach has allowed a doubling of intact protein secretion productivity due to the DoE optimization procedure compared to initial cultivation results. In a next step, robustness regarding the sensitivity to process parameter variability has been proven around the determined optimum. Thereby, a pharmaceutical production process that is significantly improved within seven 24-hour cultivation cycles was established. Specifically, regarding the regulatory demands pointed out in the process analytical technology (PAT) initiative of the United States Food and Drug Administration (FDA), the combination of a highly instrumented, fully automated multi-bioreactor platform with proper cultivation strategies and extended DoE software solutions opens up promising benefits and opportunities for pharmaceutical protein production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
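
    A generic DoE cycle of the kind such a plant automates can be sketched in a few lines: fit a quadratic response surface to titers measured at designed factor settings and read off the predicted optimum. The factor names, levels, and titers below are invented for illustration and do not reproduce the study's design.

```python
import numpy as np

# factors: induction temperature (deg C), methanol feed rate (g/L/h) -- hypothetical
X = np.array([[22, 1.0], [22, 3.0], [30, 1.0], [30, 3.0],
              [26, 2.0], [22, 2.0], [30, 2.0], [26, 1.0], [26, 3.0]])
y = np.array([0.8, 1.1, 0.9, 0.7, 1.6, 1.2, 1.0, 1.3, 1.4])   # titer, g/L

T, f = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(T), T, f, T * f, T**2, f**2])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)        # quadratic response surface

# evaluate the surface on a grid and report the predicted optimum
tt, ff = np.meshgrid(np.linspace(22, 30, 81), np.linspace(1, 3, 81))
G = np.column_stack([np.ones(tt.size), tt.ravel(), ff.ravel(),
                     (tt * ff).ravel(), (tt**2).ravel(), (ff**2).ravel()])
pred = G @ beta
i = np.argmax(pred)
print(f"predicted optimum: T={tt.ravel()[i]:.1f} C, feed={ff.ravel()[i]:.2f} g/L/h, "
      f"titer={pred[i]:.2f} g/L")
```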

  18. Characterization of PSII-LHCII supercomplexes isolated from pea thylakoid membrane by one-step treatment with α- and β-dodecyl-D-maltoside.

    PubMed

    Barera, Simone; Pagliano, Cristina; Pape, Tillmann; Saracco, Guido; Barber, James

    2012-12-19

    It was the work of Jan Anderson, together with Keith Boardman, that showed it was possible to physically separate photosystem I (PSI) from photosystem II (PSII), and it was Jan Anderson who realized the importance of this work in terms of the fluid-mosaic model as applied to the thylakoid membrane. Since then, there has been a steady progress in the development of biochemical procedures to isolate PSII and PSI both for physical and structural studies. Dodecylmaltoside (DM) has emerged as an effective mild detergent for this purpose. DM is a glucoside-based surfactant with a bulky hydrophilic head group composed of two sugar rings and a non-charged alkyl glycoside chain. Two isomers of this molecule exist, differing only in the configuration of the alkyl chain around the anomeric centre of the carbohydrate head group, axial in α-DM and equatorial in β-DM. We have compared the use of α-DM and β-DM for the isolation of supramolecular complexes of PSII by a single-step solubilization of stacked thylakoid membranes isolated from peas. As a result, we have optimized conditions to obtain homogeneous preparations of the C(2)S(2)M(2) and C(2)S(2) supercomplexes following the nomenclature of Dekker & Boekema (2005 Biochim. Biophys. Acta 1706, 12-39). These PSII-LHCII supercomplexes were subjected to biochemical and structural analyses.

  19. A new and inexpensive non-bit-for-bit solution reproducibility test based on time step convergence (TSC1.0)

    DOE PAGES

    Wan, Hui; Zhang, Kai; Rasch, Philip J.; ...

    2017-02-03

    A test procedure is proposed for identifying numerically significant solution changes in evolution equations used in atmospheric models. The test issues a fail signal when any code modifications or computing environment changes lead to solution differences that exceed the known time step sensitivity of the reference model. Initial evidence is provided, using the Community Atmosphere Model (CAM) version 5.3, that the proposed procedure can distinguish rounding-level solution changes from impacts of compiler optimization or parameter perturbation, which are known to cause substantial differences in the simulated climate. The test is not exhaustive since it does not detect issues associated with diagnostic calculations that do not feed back to the model state variables. Nevertheless, it provides a practical and objective way to assess the significance of solution changes. The short simulation length implies low computational cost. The independence between ensemble members allows for parallel execution of all simulations, thus facilitating fast turnaround. The new method is simple to implement since it does not require any code modifications. We expect that the same methodology can be used for any geophysical model to which the concept of time step convergence is applicable.
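
    The mechanism can be illustrated with a toy version of the idea: the accepted threshold is the ensemble envelope of solution differences caused by halving the time step, and a code change fails when it exceeds that envelope. The "model" below is forward-Euler exponential decay, purely for illustration; CAM's actual test works on full model state variables.

```python
import numpy as np

def run(dt, y0, rate, steps=100):
    y = y0
    for _ in range(steps):
        y = y - dt * rate * y              # forward Euler step
    return y

rng = np.random.default_rng(0)
y0s = 1.0 + 0.01 * rng.standard_normal(16)       # ensemble of initial states

ref  = np.array([run(0.010, y0, 1.0) for y0 in y0s])
half = np.array([run(0.005, y0, 1.0, steps=200) for y0 in y0s])
threshold = np.abs(ref - half).max()             # known time-step sensitivity

# "modified code": a perturbed rate constant standing in for a compiler or
# parameter change that is large enough to matter
test = np.array([run(0.010, y0, 1.005) for y0 in y0s])
diff = np.abs(ref - test).max()
print(f"threshold={threshold:.2e}  diff={diff:.2e}  ->",
      "FAIL" if diff > threshold else "PASS")
```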

  20. A covalent modification for graphene by adamantane groups through two-step chlorination-Grignard reactions

    NASA Astrophysics Data System (ADS)

    Sun, Xuzhuo; Li, Bo; Lu, Mingxia

    2017-07-01

    Chemical modification of graphene is a promising approach to manipulating its properties for end applications. Herein we designed a two-step route through chlorination-Grignard reactions to covalently decorate the surface of graphene with adamantane groups. The chemically modified graphene was characterized by Raman spectroscopy, atomic force microscopy, and X-ray photoelectron spectroscopy. Chlorination of graphene occurred rapidly, and the substitution of chlorine atoms on chlorinated graphene by the adamantane Grignard reagent afforded adamantane graphene in almost quantitative yield. Adamantane groups were found to be covalently bonded to the graphene carbons. The present two-step procedure may provide an effective and facile route for graphene modification with a variety of organic functional groups.

  1. The Synthesis of 2-acetyl-1,4-naphthoquinone: A Multi-step Synthesis.

    ERIC Educational Resources Information Center

    Green, Ivan R.

    1982-01-01

    Outlines 2 procedures for synthesizing 2-acetyl-1,4-naphthoquinone to compare relative merits of the two pathways. The major objective of the exercise is to demonstrate that certain factors should be considered when selecting a pathway for synthesis including availability of starting materials, cost of reagents, number of steps involved,…

  2. Woodrow Wilson and the U.S. Ratification of the Treaty of Versailles. Lesson Plan.

    ERIC Educational Resources Information Center

    Pyne, John; Sesso, Gloria

    1995-01-01

    Presents a high school lesson plan on the struggle over ratification of the Treaty of Versailles and U.S. participation in the League of Nations. Includes a timeline of events, four primary source documents, and biographical portraits of two opposing senators. Provides student objectives and step-by-step instructional procedures. (CFR)

  3. Computer-Based Feedback in Linear Algebra: Effects on Transfer Performance and Motivation

    ERIC Educational Resources Information Center

    Corbalan, Gemma; Paas, Fred; Cuypers, Hans

    2010-01-01

    Two studies investigated the effects on students' perceptions (Study 1) and learning and motivation (Study 2) of different levels of feedback in mathematical problems. In these problems, an error made in one step of the problem-solving procedure will carry over to the following steps and consequently to the final solution. Providing immediate…

  4. An automated, fast and accurate registration method to link stranded seeds in permanent prostate implants

    NASA Astrophysics Data System (ADS)

    Westendorp, Hendrik; Nuver, Tonnis T.; Moerland, Marinus A.; Minken, André W.

    2015-10-01

    The geometry of a permanent prostate implant varies over time. Seeds can migrate, and edema of the prostate affects the position of seeds. Seed movements directly influence dosimetry, which relates to treatment quality. We present a method that tracks all individual seeds over time, allowing quantification of seed movements. This linking procedure was tested on transrectal ultrasound (TRUS) and cone-beam CT (CBCT) datasets of 699 patients. These datasets were acquired intraoperatively during a dynamic implantation procedure that combines both imaging modalities. The procedure was subdivided into four automatic linking steps. (I) The Hungarian algorithm was applied to initially link seeds in CBCT and the corresponding TRUS datasets. (II) Strands were identified and optimized based on curvature and line fits: non-optimal links were removed. (III) The positions of unlinked seeds were reviewed and were linked to incomplete strands if within curvature and distance thresholds. (IV) Finally, seeds close to strands were linked, also if the curvature threshold was violated. After linking the seeds, an affine transformation was applied. The procedure was repeated until the results were stable or the 6th iteration ended. All results were visually reviewed for mismatches and uncertainties. Eleven implants showed a mismatch and in 12 cases an uncertainty was identified. On average the linking procedure took 42 ms per case. This accurate and fast method has the potential to be used for other time spans, like Day 30, and other imaging modalities. It can potentially be used during a dynamic implantation procedure to evaluate the quality of the permanent prostate implant faster and better.
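
    Step (I) of the procedure, the one-to-one assignment of seeds between the two datasets, can be sketched with the Hungarian algorithm on a pairwise distance cost matrix, as below. The coordinates are synthetic, and the real procedure adds the strand and curvature checks of steps (II)-(IV) on top of this matching.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
seeds_cbct = rng.uniform(0, 50, (60, 3))          # 60 seed positions, mm
# TRUS detections: same seeds in scrambled order plus localization noise
seeds_trus = seeds_cbct[rng.permutation(60)] + rng.normal(0, 0.8, (60, 3))

cost = cdist(seeds_cbct, seeds_trus)              # pairwise distance matrix
row, col = linear_sum_assignment(cost)            # Hungarian algorithm matching
print(f"mean link distance: {cost[row, col].mean():.2f} mm")
```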

  5. Raman-tailored photonic crystal fiber for telecom band photon-pair generation.

    PubMed

    Cordier, M; Orieux, A; Gabet, R; Harlé, T; Dubreuil, N; Diamanti, E; Delaye, P; Zaquine, I

    2017-07-01

    We report on the experimental characterization of a novel nonlinear liquid-filled hollow-core photonic crystal fiber for the generation of photon pairs at telecommunication wavelengths through spontaneous four-wave mixing (SFWM). We show that the optimization procedure for this application links the choice of the nonlinear liquid to the design parameters of the fiber, and we give an example of such an optimization at telecom wavelengths. Combining modeling of the fiber with classical characterization techniques at these wavelengths, we identify, for the chosen fiber and liquid combination, SFWM phase-matching frequency ranges with no Raman scattering noise contamination. This is a first step toward a fibered telecom-band photon-pair source with a high signal-to-noise ratio.

  6. Optimal control of CPR procedure using hemodynamic circulation model

    DOEpatents

    Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok

    2007-12-25

    A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
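
    A toy stand-in for this approach is sketched below, with invented coefficients: choose a bounded chest-pressure profile that maximizes total flow in a one-state difference equation, minus a quadratic effort penalty. The patented method uses a full hemodynamic circulation model and a proper optimal control algorithm; this only illustrates the "difference equations plus optimization" structure.

```python
import numpy as np
from scipy.optimize import minimize

a, b, T, lam = 0.9, 0.5, 30, 0.8      # toy dynamics, horizon, penalty weight

def neg_objective(p):
    x, total = 0.0, 0.0
    for t in range(T):
        x = a * x + b * p[t]          # circulation state (blood-flow proxy)
        total += x
    return -(total - lam * np.sum(p**2))   # flow minus pressure-effort penalty

res = minimize(neg_objective, x0=np.full(T, 0.5), method="L-BFGS-B",
               bounds=[(0.0, 1.0)] * T)    # pressure bounded per step
print("pressure profile:", np.round(res.x, 2))   # tapers toward the end
```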

  7. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    PubMed

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  8. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Learn, R.; Feigenbaum, E.

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  9. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width

    DOE PAGES

    Learn, R.; Feigenbaum, E.

    2016-05-27

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. Furthermore, the second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.
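
    One plausible reading of the first algorithm is sketched below: size the absorbing layer so that the initial field amplitude entering it is below a set fraction of the peak, then build a smooth absorber mask of that width. The threshold, ramp shape, and minimum width are assumptions for illustration; the published algorithm's exact criterion is not reproduced here.

```python
import numpy as np

N, L = 1024, 2.0e-3                        # grid points, window size (m)
x = np.linspace(-L / 2, L / 2, N)
w0 = 1.5e-4
field = np.exp(-(x / w0) ** 2)             # initial Gaussian beam shape

eps = 1e-4                                 # allowed relative amplitude at the absorber
inside = np.abs(field) / np.abs(field).max() > eps
width = max(N // 50, N - 1 - np.max(np.nonzero(inside)[0]))   # points beyond the beam

absorber = np.ones(N)
ramp = 0.5 * (1 + np.cos(np.linspace(0, np.pi, width)))       # smooth 1 -> 0 rolloff
absorber[:width] = ramp[::-1]              # fully absorbing at the window edges
absorber[-width:] = ramp
print(f"selected boundary layer width: {width} grid points")
```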

  10. Versatile synthesis and rational design of caged morpholinos.

    PubMed

    Ouyang, Xiaohu; Shestopalov, Ilya A; Sinha, Surajit; Zheng, Genhua; Pitt, Cameron L W; Li, Wen-Hong; Olson, Andrew J; Chen, James K

    2009-09-23

    Embryogenesis is regulated by genetic programs that are dynamically executed in a stereotypic manner, and deciphering these molecular mechanisms requires the ability to control embryonic gene function with similar spatial and temporal precision. Chemical technologies can enable such genetic manipulations, as exemplified by the use of caged morpholino (cMO) oligonucleotides to inactivate genes in zebrafish and other optically transparent organisms with spatiotemporal control. Here we report optimized methods for the design and synthesis of hairpin cMOs incorporating a dimethoxynitrobenzyl (DMNB)-based bifunctional linker that permits cMO assembly in only three steps from commercially available reagents. Using this simplified procedure, we have systematically prepared cMOs with differing structural configurations and investigated how the in vitro thermodynamic properties of these reagents correlate with their in vivo activities. Through these studies, we have established general principles for cMO design and successfully applied them to several developmental genes. Our optimized synthetic and design methodologies have also enabled us to prepare a next-generation cMO that contains a bromohydroxyquinoline (BHQ)-based linker for two-photon uncaging. Collectively, these advances establish the generality of cMO technologies and will facilitate the application of these chemical probes in vivo for functional genomic studies.

  11. Versatile Synthesis and Rational Design of Caged Morpholinos

    PubMed Central

    2009-01-01

    Embryogenesis is regulated by genetic programs that are dynamically executed in a stereotypic manner, and deciphering these molecular mechanisms requires the ability to control embryonic gene function with similar spatial and temporal precision. Chemical technologies can enable such genetic manipulations, as exemplified by the use of caged morpholino (cMO) oligonucleotides to inactivate genes in zebrafish and other optically transparent organisms with spatiotemporal control. Here we report optimized methods for the design and synthesis of hairpin cMOs incorporating a dimethoxynitrobenzyl (DMNB)-based bifunctional linker that permits cMO assembly in only three steps from commercially available reagents. Using this simplified procedure, we have systematically prepared cMOs with differing structural configurations and investigated how the in vitro thermodynamic properties of these reagents correlate with their in vivo activities. Through these studies, we have established general principles for cMO design and successfully applied them to several developmental genes. Our optimized synthetic and design methodologies have also enabled us to prepare a next-generation cMO that contains a bromohydroxyquinoline (BHQ)-based linker for two-photon uncaging. Collectively, these advances establish the generality of cMO technologies and will facilitate the application of these chemical probes in vivo for functional genomic studies. PMID:19708646

  12. Spectral Unmixing Analysis of Time Series Landsat 8 Images

    NASA Astrophysics Data System (ADS)

    Zhuo, R.; Xu, L.; Peng, J.; Chen, Y.

    2018-05-01

    Temporal analysis of Landsat 8 images opens up new opportunities in the unmixing procedure. Although spectral analysis of time series Landsat imagery has its own advantages, it has rarely been studied. Nevertheless, using the temporal information can provide improved unmixing performance when compared to independent image analyses. Moreover, different land cover types may demonstrate different temporal patterns, which can aid their discrimination. Therefore, this letter presents time series K-P-Means, a new solution to the problem of unmixing time series Landsat imagery. The proposed approach obtains "purified" pixels in order to achieve optimal unmixing performance. Vertex component analysis (VCA) is used to extract endmembers for endmember initialization. First, nonnegative least squares (NNLS) is used to estimate abundance maps from the endmembers. Then, each endmember is re-estimated as the mean value of its "purified" pixels, where a purified pixel is the residual of the mixed pixel after excluding the contributions of all nondominant endmembers. Assembling the two main steps (abundance estimation and endmember update) into an iterative optimization framework generates the complete algorithm. Experiments using both simulated and real Landsat 8 images show that the proposed "joint unmixing" approach provides more accurate endmember and abundance estimates than the "separate unmixing" approach.
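
    The two alternating steps can be sketched on toy data as below: (1) NNLS abundance estimation given the current endmembers, and (2) an endmember update from "purified" pixels (the mixed pixel minus the contributions of the other endmembers). The VCA initialization is replaced by a random one here, and the weighted-mean update is a simplified reading of the purification idea rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_px, n_bands, k = 500, 8, 3
E_true = rng.random((n_bands, k))                 # endmember spectra
A_true = rng.dirichlet(np.ones(k), size=n_px)     # abundances (sum to 1)
Y = A_true @ E_true.T + 0.005 * rng.standard_normal((n_px, n_bands))

E = rng.random((n_bands, k))                      # crude random initialization
for _ in range(20):
    # step 1: per-pixel abundance estimation by nonnegative least squares
    A = np.array([nnls(E, y)[0] for y in Y])
    # step 2: endmember update from purified pixels (weighted LS mean)
    for j in range(k):
        purified = Y - A @ E.T + np.outer(A[:, j], E[:, j])   # add j's part back
        w = A[:, j]
        if (w ** 2).sum() > 0:
            E[:, j] = (w[:, None] * purified).sum(axis=0) / (w ** 2).sum()

print("reconstruction error:", round(float(np.linalg.norm(Y - A @ E.T)), 4))
```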

  13. Optimal Frequency-Domain System Realization with Weighting

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Maghami, Peiman G.

    1999-01-01

    Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.
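
    As a concrete illustration of fitting frequency response data by weighted least squares, the sketch below applies a Levy-type equation-error linearization to a scalar transfer function with exponential frequency weighting. It is a scalar stand-in under stated assumptions, not the paper's matrix-fraction formulation; the function name and the weighting parameter `alpha` are illustrative.

    ```python
    import numpy as np

    def weighted_freqfit(w, G, n_num=2, n_den=2, alpha=0.0):
        """Equation-error (Levy-type) fit of a scalar transfer function
        N(s)/D(s) to frequency response data G at frequencies w, with
        exponential weighting W(w) = exp(-alpha * w)."""
        s = 1j * w
        # Unknowns: numerator b0..b_{n_num}, denominator a1..a_{n_den} (a0 = 1).
        # Model G = N/D with D = 1 + sum a_k s^k gives N - G * sum a_k s^k = G.
        cols = [s**k for k in range(n_num + 1)]           # N(s) terms
        cols += [-G * s**k for k in range(1, n_den + 1)]  # D(s) terms moved left
        M = np.column_stack(cols)
        rhs = G
        W = np.exp(-alpha * w)
        Mw, rhsw = M * W[:, None], rhs * W
        # Solve the complex LS problem by stacking real and imaginary parts
        A = np.vstack([Mw.real, Mw.imag])
        b = np.concatenate([rhsw.real, rhsw.imag])
        theta, *_ = np.linalg.lstsq(A, b, rcond=None)
        b_coef = theta[:n_num + 1]
        a_coef = np.concatenate([[1.0], theta[n_num + 1:]])
        return b_coef, a_coef
    ```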

  14. From medical invention to clinical practice: the reimbursement challenge facing new device procedures and technology--part 2: coverage.

    PubMed

    Raab, G Gregory; Parr, David H

    2006-10-01

    This paper, the second of 3 that discuss the reimbursement challenges facing new medical device technology in various issues of this journal, explains the key aspects of coverage that affect the adoption of medical devices. The process Medicare uses to make coverage determinations has become more timely and open over the past several years, but it still lacks the predictability that product innovators prefer. The continued uncertainty surrounding evidence requirements undermines the predictability needed for optimal product planning and innovation. Recent steps taken by the Centers for Medicare and Medicaid Services to provide coverage in return for evidence development should provide patients with access to promising new technologies and procedures while generating important evidence concerning their effectiveness.

  15. Image restoration techniques as applied to Landsat MSS and TM data

    USGS Publications Warehouse

    Meyer, David

    1987-01-01

    Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data, because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.

  16. Implied alignment: a synapomorphy-based multiple-sequence alignment method and its use in cladogram search

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    A method to align sequence data based on parsimonious synapomorphy schemes generated by direct optimization (DO; earlier termed optimization alignment) is proposed. DO directly diagnoses sequence data on cladograms without an intervening multiple-alignment step, thereby creating topology-specific, dynamic homology statements. Hence, no multiple alignment is required to generate cladograms. Unlike general and globally optimal multiple-alignment procedures, the method described here, implied alignment (IA), takes these dynamic homologies and traces them back through a single cladogram, linking the unaligned sequence positions in the terminal taxa via DO transformation series. These "lines of correspondence" link ancestor-descendent states and, when displayed as linearly arrayed columns without hypothetical ancestors, are largely indistinguishable from standard multiple alignment. Since this method is based on synapomorphy, the treatment of certain classes of insertion-deletion (indel) events may differ from that of other alignment procedures. As with all alignment methods, results depend on parameter assumptions such as indel cost and transversion:transition ratios. Such an IA could be used as a basis for phylogenetic search, but this would be questionable, since the homologies derived from the implied alignment depend on the cladogram that generated them, and any variance between DO and IA + search is due to the heuristic approach. The utility of this procedure in heuristic cladogram searches using DO, and the improvement of heuristic cladogram cost calculations, are discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  17. Stepwise detection of recombination breakpoints in sequence alignments.

    PubMed

    Graham, Jinko; McNeney, Brad; Seillier-Moiseiwitsch, Françoise

    2005-03-01

    We propose a stepwise approach to identifying recombination breakpoints in a sequence alignment. The approach can be applied to any recombination detection method that uses a permutation test and provides estimates of breakpoint locations. We illustrate the approach by analyses of a simulated dataset and alignments of real data from HIV-1 and human chromosome 7. The presented simulation results compare the statistical properties of one-step and two-step procedures. More breakpoints are found with a two-step procedure than with a single application of a given method, particularly at higher recombination rates. At higher recombination rates, the additional breakpoints were located at the cost of only a slight increase in the number of falsely declared breakpoints. However, a large proportion of breakpoints still go undetected. A makefile and C source code for phylogenetic profiling and the maximum χ2 method, tested with the gcc compiler on Linux and Windows XP, are available at http://stat-db.stat.sfu.ca/stepwise/ (contact: jgraham@stat.sfu.ca).
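
    To make the permutation-test machinery concrete, here is a toy sketch: the statistic is the largest two-sample mean difference over all candidate split points of a per-site score vector, and its significance is assessed by permuting site order. This illustrates the general idea only; it is not the maximum χ2 method or the authors' code, and the function and its inputs are hypothetical.

    ```python
    import numpy as np

    def perm_test_breakpoint(x, n_perm=999, seed=None):
        """Toy permutation test for a single breakpoint in a per-site
        score vector x (e.g., phylogenetic incongruence indicators
        along an alignment). Returns the estimated breakpoint, the
        observed statistic, and a permutation p-value."""
        rng = np.random.default_rng(seed)

        def stat(v):
            best, best_k = 0.0, None
            for k in range(1, len(v)):
                d = abs(v[:k].mean() - v[k:].mean())
                if d > best:
                    best, best_k = d, k
            return best, best_k

        t_obs, k_hat = stat(x)
        null = np.array([stat(rng.permutation(x))[0] for _ in range(n_perm)])
        p = (1 + np.sum(null >= t_obs)) / (n_perm + 1)
        return k_hat, t_obs, p
    ```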

  18. Parameter estimation techniques based on optimizing goodness-of-fit statistics for structural reliability

    NASA Technical Reports Server (NTRS)

    Starlinger, Alois; Duffy, Stephen F.; Palko, Joseph L.

    1993-01-01

    New methods are presented that utilize the optimization of goodness-of-fit statistics to estimate Weibull parameters from failure data. It is assumed that the underlying population is characterized by a three-parameter Weibull distribution. Goodness-of-fit tests are based on the empirical distribution function (EDF). The EDF is a step function, calculated from the failure data, that approximates the cumulative distribution function (CDF) of the underlying population. Statistics such as the Kolmogorov-Smirnov statistic and the Anderson-Darling statistic measure the discrepancy between the EDF and the CDF. These statistics are minimized with respect to the three Weibull parameters. Because of nonlinearities encountered in the minimization process, Powell's numerical optimization procedure is applied to obtain the optimum value of each goodness-of-fit statistic. Numerical examples show the applicability of these new estimation methods. The results are compared to the estimates obtained with Cooper's nonlinear regression algorithm.
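
    A minimal sketch of this estimation idea, assuming SciPy is available: minimize the Kolmogorov-Smirnov distance between the EDF of the failure data and a three-parameter Weibull CDF using Powell's method. The starting values and synthetic data are illustrative; the original work also treats the Anderson-Darling statistic, which would substitute for `ks_stat`.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    def ks_stat(params, data):
        """Kolmogorov-Smirnov distance between the EDF of `data` and a
        three-parameter Weibull CDF (shape, location, scale)."""
        shape, loc, scale = params
        if shape <= 0 or scale <= 0:
            return 1e6  # large penalty keeps Powell inside the feasible region
        x = np.sort(data)
        n = len(x)
        cdf = weibull_min.cdf(x, shape, loc=loc, scale=scale)
        edf_hi = np.arange(1, n + 1) / n
        edf_lo = np.arange(0, n) / n
        return max(np.max(edf_hi - cdf), np.max(cdf - edf_lo))

    # Example: estimate parameters from synthetic failure data
    data = weibull_min.rvs(1.8, loc=2.0, scale=5.0, size=50,
                           random_state=np.random.default_rng(1))
    res = minimize(ks_stat, x0=[1.0, 0.9 * data.min(), data.std()],
                   args=(data,), method="Powell")
    print(res.x)  # fitted (shape, location, scale)
    ```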

  19. Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR.

    PubMed

    Stein, Erica V; Duewer, David L; Farkas, Natalia; Romsos, Erica L; Wang, Lili; Cole, Kenneth D

    2017-01-01

    Droplet digital PCR (ddPCR) is being advocated as a reference method for measuring rare genomic targets, and it has consistently proven more sensitive and direct at discerning DNA copy numbers than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends on the target: small RNA nucleotide composition directly affects primer specificity in a manner that defeats traditional quantitation optimization strategies. Additionally, reagents optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appear to cause either false-positive or false-negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to inadequate enzymes, primers, and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations, as well as proposed ways to circumvent the identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values.
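
    For the droplet-counting side of ddPCR (not detailed in this abstract), absolute quantification conventionally rests on a Poisson correction: the mean number of target copies per droplet is estimated from the fraction of negative droplets. A minimal sketch follows; the droplet volume is an assumed, instrument-dependent value, not a figure from this work.

    ```python
    import numpy as np

    def ddpcr_copies_per_droplet(n_total, n_negative):
        """Poisson estimate of mean target copies per droplet from the
        fraction of negative droplets: lambda = -ln(n_neg / n_total)."""
        return -np.log(n_negative / n_total)

    # Example: 20,000 accepted droplets, 14,500 of them negative
    lam = ddpcr_copies_per_droplet(20000, 14500)
    droplet_volume_ul = 0.00085  # assumed ~0.85 nL per droplet (instrument-dependent)
    copies_per_ul = lam / droplet_volume_ul
    print(f"{lam:.4f} copies/droplet, {copies_per_ul:.0f} copies/uL")
    ```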

  20. Steps to achieve quantitative measurements of microRNA using two step droplet digital PCR

    PubMed Central

    Duewer, David L.; Farkas, Natalia; Romsos, Erica L.; Wang, Lili; Cole, Kenneth D.

    2017-01-01

    Droplet digital PCR (ddPCR) is being advocated as a reference method for measuring rare genomic targets, and it has consistently proven more sensitive and direct at discerning DNA copy numbers than other quantitative methods. However, one of the largest obstacles to measuring microRNA (miRNA) using ddPCR is that reverse transcription efficiency depends on the target: small RNA nucleotide composition directly affects primer specificity in a manner that defeats traditional quantitation optimization strategies. Additionally, reagents optimized for miRNA measurements using quantitative real-time PCR (qRT-PCR) appear to cause either false-positive or false-negative detection of certain targets when used with traditional ddPCR quantification methods. False readings are often related to inadequate enzymes, primers, and probes. Given that two-step miRNA quantification using ddPCR relies solely on reverse transcription and uses proprietary reagents previously optimized only for qRT-PCR, these barriers are substantial. Therefore, here we outline essential controls, optimization techniques, and an efficacy model to improve the quality of ddPCR miRNA measurements. We have applied two-step principles used for miRNA qRT-PCR measurements and leveraged synthetic miRNA targets to evaluate ddPCR following cDNA synthesis with four different commercial kits. We have identified inefficiencies and limitations, as well as proposed ways to circumvent the identified obstacles. Lastly, we show that we can apply these criteria to a model system to confidently quantify miRNA copy number. Our measurement technique is a novel way to quantify specific miRNA copy number in a single sample, without using standard curves for individual experiments. Our methodology can be used for validation and control measurements, as well as a diagnostic technique that allows scientists, technicians, clinicians, and regulators to base miRNA measures on a single unit of measurement rather than a ratio of values. PMID:29145448

  1. Determination of full piezoelectric complex parameters using gradient-based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kiyono, C. Y.; Pérez, N.; Silva, E. C. N.

    2016-02-01

    At present, numerical techniques allow the precise simulation of mechanical structures, but the results are limited by knowledge of the material properties. In the case of piezoelectric ceramics, full model determination in the linear range involves five elastic, three piezoelectric, and two dielectric complex parameters. A successful route to obtaining piezoceramic properties consists of comparing the experimentally measured impedance curve with the results of a numerical model based on the finite element method (FEM). In the present work, a new systematic optimization method is proposed to adjust the full set of piezoelectric complex parameters in the FEM model. Once implemented, the method requires only the experimental data (impedance modulus and phase acquired by an impedometer), the material density, the geometry, and initial values for the properties. The method combines an FEM routine, implemented using an 8-noded axisymmetric element, with a gradient-based optimization routine based on the method of moving asymptotes (MMA). The main objective of the optimization procedure is to minimize the quadratic difference between the experimental and numerical electrical conductance and resistance curves (to account for both resonance and antiresonance frequencies). To assure convergence, this work proposes restarting the optimization loop whenever the procedure ends in an undesired or infeasible solution. Two experimental examples, using PZ27 and APC850 samples, are presented to test the precision of the method and to check its dependence on the frequency range used, respectively.
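
    A schematic version of the fitting loop might look like the following, with SciPy's L-BFGS-B standing in for the MMA optimizer and the FEM solver reduced to a placeholder callable; the restart-on-failure logic mirrors the strategy described above only in spirit, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def objective(params, freqs, g_exp, r_exp, fem_model):
        """Quadratic misfit between experimental and simulated electrical
        conductance G and resistance R curves. `fem_model(params, freqs)`
        is a placeholder for the FEM solver returning (G_sim, R_sim)."""
        g_sim, r_sim = fem_model(params, freqs)
        return np.sum((g_sim - g_exp) ** 2) + np.sum((r_sim - r_exp) ** 2)

    def fit_with_restarts(x0, freqs, g_exp, r_exp, fem_model,
                          n_restarts=5, seed=None):
        """Local fit with perturbed restarts: re-launch the optimizer from
        a jittered point, keeping the best result found so far."""
        rng = np.random.default_rng(seed)
        best, x = None, np.asarray(x0, dtype=float)
        for _ in range(n_restarts):
            res = minimize(objective, x, args=(freqs, g_exp, r_exp, fem_model),
                           method="L-BFGS-B")
            if best is None or res.fun < best.fun:
                best = res
            # restart from a small random perturbation of the best point
            x = best.x * (1 + 0.05 * rng.standard_normal(len(x)))
        return best
    ```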

  2. Optimization of thermal processing of canned mussels.

    PubMed

    Ansorena, M R; Salvadori, V O

    2011-10-01

    The design and optimization of thermal processing of solid-liquid food mixtures, such as canned mussels, requires knowledge of the thermal history at the slowest-heating point. In general, this point does not coincide with the geometric center of the can, and the results show that it is located along the axial axis at a height that depends on the brine content. In this study, a mathematical model for predicting the temperature at this point was developed using the discrete transfer function approach. Transfer function coefficients were obtained experimentally, and prediction equations were fitted to extend the model to other can dimensions and sampling intervals. This model was coupled with an optimization routine that searches over retort temperature profiles to maximize a quality index. Both constant retort temperature (CRT) and variable retort temperature (VRT; discrete step-wise and exponential) profiles were considered. In the CRT process, the optimal retort temperature was always between 134 °C and 137 °C, and high values of thiamine retention were achieved. A significant improvement in the surface quality index was obtained for optimal VRT profiles compared to optimal CRT. The optimization procedure shown in this study produces results that justify its utilization in industry.

  3. A vibration-based health monitoring program for a large and seismically vulnerable masonry dome

    NASA Astrophysics Data System (ADS)

    Pecorelli, M. L.; Ceravolo, R.; De Lucia, G.; Epicoco, R.

    2017-05-01

    Vibration-based health monitoring of monumental structures must rely on efficient and, as far as possible, automatic modal analysis procedures. Relatively low excitation energy provided by traffic, wind and other sources is usually sufficient to detect structural changes, as those produced by earthquakes and extreme events. Above all, in-operation modal analysis is a non-invasive diagnostic technique that can support optimal strategies for the preservation of architectural heritage, especially if complemented by model-driven procedures. In this paper, the preliminary steps towards a fully automated vibration-based monitoring of the world’s largest masonry oval dome (internal axes of 37.23 by 24.89 m) are presented. More specifically, the paper reports on signal treatment operations conducted to set up the permanent dynamic monitoring system of the dome and to realise a robust automatic identification procedure. Preliminary considerations on the effects of temperature on dynamic parameters are finally reported.

  4. Two-port robotic hysterectomy: a novel approach.

    PubMed

    Moawad, Gaby N; Tyan, Paul; Khalil, Elias D Abi

    2018-03-24

    The objective of the study was to demonstrate a novel technique for two-port robotic hysterectomy with a particular focus on the challenging portions of the procedure. The study is designed as a technical video, showing step-by-step a two-port robotic hysterectomy approach (Canadian Task Force classification level III). IRB approval was not required for this study. The benefits of minimally invasive surgery for gynecological pathology have been clearly documented in multiple studies. Patients had fewer medical and surgical complications postoperatively, better cosmesis and quality of life. Most gynecological surgeons require 3-5 ports for the standard gynecological procedure. Even though the minimally invasive multiport system provides an excellent safety profile, multiple incisions are associated with a greater risk for morbidity including infection, pain, and hernia. In the past decade, various new methods have emerged to minimize the number of ports used in gynecological surgery. The interventions employed were a two-port robotic hysterectomy, using a camera port plus one robotic arm, with a focus on salpingectomy and cuff closure. We describe a transvaginal and a transabdominal approach for salpingectomy and a novel method for cuff closure. The transvaginal and transabdominal techniques for salpingectomy for two-port robotic-assisted hysterectomy provide excellent tension and exposure for a safe procedure without the need for an extra port. We also describe a transvaginal technique to place the vaginal cuff on tension during closure. With the necessary set of skills on a carefully chosen patient, two-port robotic-assisted total laparoscopic hysterectomy is a feasible procedure.

  5. Student Opinions about the Seven-Step Procedure in Problem-Based Hospitality Management Education

    ERIC Educational Resources Information Center

    Zwaal, Wichard; Otting, Hans

    2014-01-01

    This study investigates how hospitality management students appreciate the role and application of the seven-step procedure in problem-based learning. A survey was developed containing sections about personal characteristics, recall of the seven steps, overall report marks, and 30 statements about the seven-step procedure. The survey was…

  6. Annealing Induced Re-crystallization in CH3NH3PbI3−xClx for High Performance Perovskite Solar Cells

    PubMed Central

    Yang, Yingguo; Feng, Shanglei; Li, Meng; Xu, Weidong; Yin, Guangzhi; Wang, Zhaokui; Sun, Baoquan; Gao, Xingyu

    2017-01-01

    Using poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) as the hole conductor, a series of inverted planar CH3NH3PbI3−xClx perovskite solar cells (PSCs) were fabricated from perovskite annealed by an improved time-temperature dependent (TTD) procedure in a flowing nitrogen atmosphere for different times. Only after an optimum annealing time could an optimized power conversion efficiency of 14.36% be achieved. To understand the dependence of device performance on annealing time, in situ real-time synchrotron-based grazing incidence X-ray diffraction (GIXRD) was used to monitor the step-by-step structural transformation from distinct, mainly organic-inorganic hybrid materials into highly ordered CH3NH3PbI3 crystals during annealing. Notably, a re-crystallization process of the perovskite was observed for the first time during this annealing procedure, which helps to enhance the perovskite crystallization and preferential orientations. The present GIXRD findings explain the drops in open-circuit voltage (Voc) and fill factor (FF) during the temperature ramp, as well as the optimized power conversion efficiency achieved after an optimum annealing time. Thus, the present study not only clearly illustrates the decisive role of post-annealing in the formation of solution-processed perovskite, improving understanding of its formation mechanism, but also demonstrates the crucial dependence of device performance on the perovskite microstructure in PSCs. PMID:28429762

  7. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    1996-10-01

    One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ2 distributions for variances.
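
    The distinction is easy to demonstrate numerically. In the sketch below, the "fit" is simply an average: a priori weights use the known variance, a posteriori weights use each subset's estimated variance, and only the a priori merge reproduces the global fit exactly (for equal subset sizes). This is an illustrative toy, not the paper's analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sigma = 1.0
    data = rng.normal(10.0, sigma, size=120)
    subsets = np.split(data, 4)  # partition the data into 4 subsets

    # Step 1: fit each subset separately (here, the "fit" is a mean).
    # A priori weights use the KNOWN variance; a posteriori weights use
    # the variance ESTIMATED from each subset's own scatter.
    means = np.array([s.mean() for s in subsets])
    var_apriori = np.array([sigma**2 / len(s) for s in subsets])
    var_apost = np.array([s.var(ddof=1) / len(s) for s in subsets])

    # Step 2: merge the preliminary parameters with inverse-variance weights
    def merge(m, v):
        w = 1.0 / v
        return np.sum(w * m) / np.sum(w)

    print("global fit:        ", data.mean())
    print("a priori merge:    ", merge(means, var_apriori))
    print("a posteriori merge:", merge(means, var_apost))
    # With a priori weights the merge reproduces the global average exactly
    # (equal subset sizes); the a posteriori merge generally does not.
    ```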

  8. Investigation of the dependence of joint contact forces on musculotendon parameters using a codified workflow for image-based modelling.

    PubMed

    Modenese, Luca; Montefiori, Erica; Wang, Anqi; Wesarg, Stefan; Viceconti, Marco; Mazzà, Claudia

    2018-05-17

    The generation of subject-specific musculoskeletal models of the lower limb has become a feasible task thanks to improvements in medical imaging technology and musculoskeletal modelling software. Nevertheless, clinical use of these models in paediatric applications is still limited with regard to the estimation of muscle and joint contact forces. Aiming to improve the current state of the art, a methodology to generate highly personalized subject-specific musculoskeletal models of the lower limb based on magnetic resonance imaging (MRI) scans was codified as a step-by-step procedure and applied to data from eight juvenile individuals. The generated musculoskeletal models were used to simulate 107 gait trials using stereophotogrammetric and force platform data as input. To ensure completeness of the modelling procedure, the muscles' architecture needs to be estimated. Four methods to estimate the muscles' maximum isometric force and two methods to estimate musculotendon parameters (optimal fiber length and tendon slack length) were assessed and compared, in order to quantify their influence on the models' output. Reported results represent the first comprehensive subject-specific model-based characterization of juvenile gait biomechanics, including profiles of joint kinematics and kinetics, muscle forces, and joint contact forces. Our findings suggest that, when musculotendon parameters were linearly scaled from a reference model and the muscle force-length-velocity relationship was accounted for in the simulations, realistic knee contact forces could be estimated, and these forces were not sensitive to the method used to compute muscle maximum isometric force. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach which is a procedure based on two components: (a) first a maintenance model is used to determine the optimal time to perform PM operations for each site and second (b) a mixed integer program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods and dump trucks) in order to perform PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained more than 50% of cost savings in 90% of the sites.

  10. Two-Step Production of Phenylpyruvic Acid from L-Phenylalanine by Growing and Resting Cells of Engineered Escherichia coli: Process Optimization and Kinetics Modeling

    PubMed Central

    Hou, Ying; Hossain, Gazi Sakir; Li, Jianghua; Shin, Hyun-dong; Liu, Long; Du, Guocheng; Chen, Jian

    2016-01-01

    Phenylpyruvic acid (PPA) is widely used in the pharmaceutical, food, and chemical industries. Here, a two-step bioconversion process, involving growing and resting cells, was established to produce PPA from l-phenylalanine using an engineered Escherichia coli strain constructed previously. First, the biotransformation conditions for growing cells were optimized (l-phenylalanine concentration 20.0 g·L−1, temperature 35°C), and a two-stage temperature control strategy (hold at 20°C for 12 h, then raise the temperature to 35°C until the end of biotransformation) was applied. The biotransformation conditions for resting cells were then optimized in a 3-L bioreactor as follows: agitation speed 500 rpm, aeration rate 1.5 vvm, and l-phenylalanine concentration 30 g·L−1. The total maximal production (mass conversion rate) reached 29.8 ± 2.1 g·L−1 (99.3%) and 75.1 ± 2.5 g·L−1 (93.9%) in the flask and the 3-L bioreactor, respectively. Finally, a kinetic model was established, revealing that substrate and product inhibition were the main limiting factors for resting-cell biotransformation. PMID:27851793

  11. A multiobjective optimization framework for multicontaminant industrial water network design.

    PubMed

    Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge

    2011-07-01

    The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonist objectives are considered: F(1), the freshwater flow-rate at the network entrance, F(2), the water flow-rate at inlet of regeneration units, and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented and then an innovative strategy based on the global equivalent cost (GEC) in freshwater that turns out to be more efficient for choosing a good network according to a practical point of view. Copyright © 2011 Elsevier Ltd. All rights reserved.
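
    A lexicographic strategy of the kind described can be sketched generically: optimize the objectives in priority order, constraining each already-optimized objective near its achieved value before moving to the next. The continuous toy below (SciPy's SLSQP) omits the binary interconnection variables that make the actual WAP a MINLP; the function name and tolerances are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def lexicographic_minimize(objectives, x0, tol=1e-6):
        """Minimize objectives in priority order; after each stage,
        freeze that objective at its achieved level (plus slack) as an
        inequality constraint for all later stages."""
        x = np.asarray(x0, dtype=float)
        constraints = []
        for f in objectives:
            res = minimize(f, x, constraints=constraints, method="SLSQP")
            x, level = res.x, res.fun
            # g(x) >= 0  <=>  f(x) <= level + tol
            constraints.append(
                {"type": "ineq",
                 "fun": (lambda x, f=f, level=level: level + tol - f(x))})
        return x

    # Toy usage: two conflicting quadratic objectives in 2-D
    f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2
    f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2
    print(lexicographic_minimize([f1, f2], x0=[0.0, 0.0]))
    ```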

  12. An ITK framework for deterministic global optimization for medical image registration

    NASA Astrophysics Data System (ADS)

    Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.

    2006-03-01

    Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
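
    SciPy (version 1.8 or later) ships its own DIRECT implementation, which makes the global-then-local pattern described above easy to reproduce outside ITK. The sketch below chains scipy.optimize.direct with a Powell refinement on a stand-in multimodal "similarity metric"; it illustrates the pattern, not the paper's ITK class or its enhanced stopping criteria.

    ```python
    import numpy as np
    from scipy.optimize import direct, minimize

    # Stand-in "negative similarity metric": a multimodal function whose
    # global minimum plays the role of the best registration parameters.
    def neg_similarity(p):
        return np.sin(3 * p[0]) * np.cos(2 * p[1]) + 0.1 * np.sum(p**2)

    bounds = [(-3.0, 3.0), (-3.0, 3.0)]

    # Global stage: DIRECT (DIviding RECTangles) explores the search space
    coarse = direct(neg_similarity, bounds, maxfun=2000)

    # Local stage: Powell's method refines the DIRECT solution
    fine = minimize(neg_similarity, coarse.x, method="Powell")
    print(coarse.x, "->", fine.x, fine.fun)
    ```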

  13. Cellular traction force recovery: An optimal filtering approach in two-dimensional Fourier space.

    PubMed

    Huang, Jianyong; Qin, Lei; Peng, Xiaoling; Zhu, Tao; Xiong, Chunyang; Zhang, Youyi; Fang, Jing

    2009-08-21

    Quantitative estimation of cellular traction has significant physiological and clinical implications. As an inverse problem, traction force recovery is inherently susceptible to noise in the measured displacement data. In the traditional procedure of Fourier transform traction cytometry (FTTC), noise amplification accompanies the force reconstruction, and small tractions cannot be recovered from displacement fields with a low signal-to-noise ratio (SNR). To improve the FTTC process, we develop an optimal filtering scheme to suppress the noise in the force reconstruction procedure. In the framework of Wiener filtering theory, four filtering parameters are introduced in two-dimensional Fourier space, and their analytical expressions are derived from the minimum-mean-squared-error (MMSE) optimization criterion. The optimal filtering approach is validated with simulations and with experimental data on the adhesion of single cardiac myocytes to an elastic substrate. The results indicate that the proposed method can greatly enhance the SNR of the recovered forces, revealing tiny tractions in the cell-substrate interaction.
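
    The MMSE logic behind such a filter can be illustrated with the textbook scalar Wiener form in 2-D Fourier space, W = H*/(|H|² + 1/SNR). The sketch below is a generic deconvolution fragment under that assumption; the paper instead derives four filter parameters analytically for the FTTC problem, which this fragment does not reproduce.

    ```python
    import numpy as np

    def wiener_deconvolve_2d(measured, h_hat, snr):
        """Minimal 2-D Wiener-filter deconvolution sketch.

        measured : observed field (e.g., a displacement component)
        h_hat    : forward-operator transfer function in Fourier space,
                   same shape as `measured` (for FTTC this would come
                   from the elastic substrate solution; here it is input)
        snr      : assumed signal-to-noise power ratio per frequency
                   (scalar or array); the MMSE-optimal filter is
                   W = conj(H) / (|H|^2 + 1/SNR)
        """
        U = np.fft.fft2(measured)
        W = np.conj(h_hat) / (np.abs(h_hat) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(W * U))
    ```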

  14. An approach of ionic liquids/lithium salts based microwave irradiation pretreatment followed by ultrasound-microwave synergistic extraction for two coumarins preparation from Cortex fraxini.

    PubMed

    Liu, Zaizhi; Gu, Huiyan; Yang, Lei

    2015-10-23

    An ionic liquids/lithium salts solvent system was successfully introduced into the separation technique for the preparation of two coumarins (aesculin and aesculetin) from Cortex fraxini. An ionic liquids/lithium salts based microwave irradiation pretreatment followed by ultrasound-microwave synergistic extraction (ILSMP-UMSE) procedure was developed and optimized for efficient extraction of these two analytes. Several variables that can potentially influence the extraction yields, including pretreatment time and temperature, [C4mim]Br concentration, LiAc content, ultrasound-microwave synergistic extraction (UMSE) time, liquid-solid ratio, and UMSE power, were screened by Plackett-Burman design. Of the seven variables, UMSE time, liquid-solid ratio, and UMSE power were statistically significant, and these three factors were further optimized by Box-Behnken design to predict optimal extraction conditions and establish operating ranges with maximum extraction yields. Under optimum operating conditions, ILSMP-UMSE gave higher extraction yields of the two target compounds than the reference extraction solvents. Method validation studies also showed that ILSMP-UMSE is reliable for the preparation of the two coumarins from Cortex fraxini. This study indicates that the proposed procedure has broad application prospects for the preparation of natural products from plant materials. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Development of an ELISA for evaluation of swab recovery efficiencies of bovine serum albumin.

    PubMed

    Sparding, Nadja; Slotved, Hans-Christian; Nicolaisen, Gert M; Giese, Steen B; Elmlund, Jón; Steenhard, Nina R

    2014-01-01

    After a potential biological incident, the sampling strategy and sample analysis are crucial for the outcome of the investigation and identification. In this study, we developed a simple sandwich ELISA based on commercial components to quantify BSA (used as a surrogate for ricin) with a detection range of 1.32-80 ng/mL. We used the ELISA to evaluate different protein swabbing procedures (swabbing techniques and after-swabbing treatments) for two swab types: a cotton gauze swab and a flocked nylon swab. The optimal swabbing procedure for each swab type was used to obtain recovery efficiencies from different surface materials. The surface recoveries using the optimal swabbing procedure ranged from 0% to 60% and were significantly higher for nonporous surfaces than for porous surfaces. In conclusion, this study presents a swabbing-procedure evaluation and a simple BSA ELISA based on commercial components, both easy to perform in a laboratory with basic facilities. The data indicate that a different swabbing procedure was optimal for each of the tested swab types, and that the preferred swab depends on the surface material to be swabbed.

  16. Conventional and two step sintering of PZT-PCN ceramics

    NASA Astrophysics Data System (ADS)

    Keshavarzi, Mostafa; Rahmani, Hooman; Nemati, Ali; Hashemi, Mahdieh

    2018-02-01

    In this study, PZT-PCN ceramic was made via a sol-gel seeding method, and the effects of conventional sintering (CS) and two-step sintering (TSS) on microstructure, phase formation, density, and dielectric and piezoelectric properties were investigated. First, high-quality powder was obtained by the seeding method, in which a mixture of Co3O4 and Nb2O5 powders was added to the prepared PZT sol to form a PZT-PCN gel. After drying and calcination, pyrochlore-free PZT-PCN powder was synthesized. Second, CS and TSS were applied to obtain dense ceramics. The optimum temperature for 2 h of conventional sintering was found to be 1150 °C. Finally, the undesired ZrO2 phase formed during the CS procedure was successfully removed by the TSS procedure, and the dielectric and piezoelectric properties were improved compared with CS. The best electrical properties were obtained for the sample sintered by TSS at an initial temperature of T1 = 1200 °C and a secondary temperature of T2 = 1000 °C for 12 h.

  17. Efficient Geometry Minimization and Transition Structure Optimization Using Interpolated Potential Energy Surfaces and Iteratively Updated Hessians.

    PubMed

    Zheng, Jingjing; Frisch, Michael J

    2017-12-12

    An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
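
    Two ingredients of such schemes can be stated compactly: an iterative Hessian update from successive geometries and gradients, and a step obtained by minimizing a local surrogate model. The sketch below uses the familiar BFGS update and a quadratic surrogate as stand-ins; the paper's interpolated surfaces and its particular iterative update are not reproduced here.

    ```python
    import numpy as np

    def bfgs_update(B, s, y):
        """One Hessian update: B is the current Hessian approximation,
        s = x_new - x_old (geometry step), y = g_new - g_old (gradient
        change). BFGS shown as a common stand-in for Hessian updating."""
        Bs = B @ s
        return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

    def surrogate_step(x, g, B):
        """Minimize the local quadratic surrogate
        E(x + d) ~ E(x) + g.d + 0.5 d.B.d, giving d = -B^{-1} g.
        Interpolated-surface methods replace this quadratic model with a
        surface built from several previously computed points."""
        return x - np.linalg.solve(B, g)
    ```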

  18. Real time optimal guidance of low-thrust spacecraft: an application of nonlinear model predictive control.

    PubMed

    Arrieta-Camacho, Juan José; Biegler, Lorenz T

    2005-12-01

    Real-time optimal guidance is considered for a class of low-thrust spacecraft. In particular, nonlinear model predictive control (NMPC) is utilized for computing the optimal control actions required to transfer a spacecraft from a low Earth orbit to a mission orbit. The NMPC methodology presented is able to cope with unmodeled disturbances. The dynamics of the transfer are modeled using a set of modified equinoctial elements, because these do not exhibit singularities for zero inclination or zero eccentricity. The idea behind NMPC is the repeated solution of optimal control problems: at each time step, a new control action is computed. The optimal control problem is solved using a direct method, fully discretizing the equations of motion. The large-scale nonlinear program resulting from the discretization procedure is solved using IPOPT, a primal-dual interior-point algorithm. Stability and robustness characteristics of the NMPC algorithm are reviewed. A numerical example is presented that encourages further development of the proposed methodology: the transfer from low Earth orbit to a Molniya orbit.
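
    The receding-horizon idea is summarized by the toy sketch below: at each sampling instant, a finite-horizon optimal control problem is solved by fully discretizing the controls, and only the first control action is applied. SciPy's SLSQP stands in for IPOPT, and the double-integrator dynamics are purely illustrative, not the equinoctial-element transfer model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def nmpc_step(x0, dynamics, stage_cost, horizon, u_dim, u_guess):
        """One NMPC iteration: minimize the cost accumulated over the
        horizon with respect to the discretized control sequence, then
        return the first control action (receding-horizon principle)."""
        def total_cost(u_flat):
            u = u_flat.reshape(horizon, u_dim)
            x, J = np.asarray(x0, dtype=float), 0.0
            for k in range(horizon):
                J += stage_cost(x, u[k])
                x = dynamics(x, u[k])  # one discretized dynamics step
            return J
        res = minimize(total_cost, u_guess.ravel(), method="SLSQP")
        return res.x.reshape(horizon, u_dim)[0], res.x

    # Toy double-integrator usage: drive the state to the origin
    dyn = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u[0]])
    cost = lambda x, u: x @ x + 0.01 * (u @ u)
    x, u_warm = np.array([1.0, 0.0]), np.zeros(10)
    for _ in range(20):                       # closed-loop simulation
        u0, u_warm = nmpc_step(x, dyn, cost, 10, 1, u_warm)
        x = dyn(x, np.atleast_1d(u0))         # apply first action only
    print(x)
    ```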

  19. Observation of Stronger-than-Binary Correlations with Entangled Photonic Qutrits

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-Min; Liu, Bi-Heng; Guo, Yu; Xiang, Guo-Yong; Huang, Yun-Feng; Li, Chuan-Feng; Guo, Guang-Can; Kleinmann, Matthias; Vértesi, Tamás; Cabello, Adán

    2018-05-01

    We present the first experimental confirmation of the quantum-mechanical prediction of stronger-than-binary correlations. These are correlations that cannot be explained under the assumption that the occurrence of a particular outcome of an n ≥ 3-outcome measurement is due to a two-step process in which, in the first step, some classical mechanism precludes n − 2 of the outcomes and, in the second step, a binary measurement generates the outcome. Our experiment uses pairs of photonic qutrits distributed between two laboratories, where randomly chosen three-outcome measurements are performed. We report a violation by 9.3 standard deviations of the optimal inequality for nonsignaling binary correlations.

  20. Direct writing of gold nanostructures with an electron beam: On the way to pure nanostructures by combining optimized deposition with oxygen-plasma treatment

    PubMed Central

    Belić, Domagoj; Shawrav, Mostafa M; Bertagnolli, Emmerich

    2017-01-01

    This work presents a highly effective approach for the chemical purification of directly written 2D and 3D gold nanostructures suitable for plasmonics, biomolecule immobilisation, and nanoelectronics. Gold nano- and microstructures can be fabricated by one-step direct-write lithography process using focused electron beam induced deposition (FEBID). Typically, as-deposited gold nanostructures suffer from a low Au content and unacceptably high carbon contamination. We show that the undesirable carbon contamination can be diminished using a two-step process – a combination of optimized deposition followed by appropriate postdeposition cleaning. Starting from the common metal-organic precursor Me2-Au-tfac, it is demonstrated that the Au content in pristine FEBID nanostructures can be increased from 30 atom % to as much as 72 atom %, depending on the sustained electron beam dose. As a second step, oxygen-plasma treatment is established to further enhance the Au content in the structures, while preserving their morphology to a high degree. This two-step process represents a simple, feasible and high-throughput method for direct writing of purer gold nanostructures that can enable their future use for demanding applications. PMID:29259868
