Analytical and Computational Properties of Distributed Approaches to MDO
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Lewis, Robert Michael
2000-01-01
Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis
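The bi-level structure that Optimization by Linear Decomposition and Collaborative Optimization instantiate can be sketched generically as follows (the notation here is generic, not the paper's):

```latex
\begin{aligned}
\min_{x \in X} \quad & F\bigl(x,\, y^{*}(x)\bigr) \\
\text{s.t.} \quad & G\bigl(x,\, y^{*}(x)\bigr) \le 0, \\
& y^{*}(x) \in \arg\min_{y \in Y} \bigl\{\, f(x, y) \;:\; g(x, y) \le 0 \,\bigr\}
\end{aligned}
```

Here the upper (system) level chooses shared variables x while each discipline solves its own lower-level problem for y; the system-level consistency constraints that relax the interdisciplinary coupling live in G, and it is the analytical properties of these constraints that can cause difficulties for optimization algorithms.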
Nontangent, Developed Contour Bulkheads for a Single-Stage Launch Vehicle
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Lepsch, Roger A., Jr.
2000-01-01
Dry weights for single-stage launch vehicles that incorporate nontangent, developed contour bulkheads are estimated and compared to a baseline vehicle with 1.414 aspect ratio ellipsoidal bulkheads. Weights, volumes, and heights of optimized bulkhead designs are computed using a preliminary design bulkhead analysis code. The dry weights of vehicles that incorporate the optimized bulkheads are predicted using a vehicle weights and sizing code. Two optimization approaches are employed. A structural-level method, where the vehicle's three major bulkhead regions are optimized separately and then incorporated into a model for computation of the vehicle dry weight, predicts a reduction of 4365 lb (2.2%) from the 200,679-lb baseline vehicle dry weight. In the second, vehicle-level, approach, the vehicle dry weight is the objective function for the optimization. For the vehicle-level analysis, modified bulkhead designs are analyzed and incorporated into the weights model for computation of a dry weight. The optimizer simultaneously manipulates design variables for all three bulkheads to reduce the dry weight. The vehicle-level analysis predicts a dry weight reduction of 5129 lb, a 2.6% reduction from the baseline weight. Based on these results, nontangent, developed contour bulkheads may provide substantial weight savings for single-stage vehicles.
Advanced Information Technology in Simulation Based Life Cycle Design
NASA Technical Reports Server (NTRS)
Renaud, John E.
2003-01-01
In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision based design framework for non-deterministic optimization. To date CO strategies have been developed for use in application to deterministic systems design problems. In this research the decision based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single level optimization strategy that combines engineering decisions with business decisions in a single level optimization. By transforming this framework for use in collaborative optimization one can decompose the business and engineering decision making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO) the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.
Constrained Multi-Level Algorithm for Trajectory Optimization
NASA Astrophysics Data System (ADS)
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low cost access to space inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach of optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and atmospheric effects are gradually introduced. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve in the direct and indirect shooting methods. The problem is made more complex when different phases of the trajectory have different optimization objectives and also have different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables. This process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of non-linear sensitivity inherent in the single step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in a framework of an augmented Lagrangian.
Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref. 5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner loop optimizations required in the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints, and the multiple objectives of different trajectory phases. Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization", The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286. Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile using a Multitier Approach", Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669. Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs", Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304. Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization", Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207. Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization", Acta Astronautica, Vol. 15, 1987, pp. 845-850.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Heru Tjahjana, R.
2017-01-01
In this paper, we propose a mathematical model in the form of dynamic/multi-stage optimization to solve an integrated supplier selection problem and a tracking control problem for a single product inventory system with product discount. The product discount is stated as a piecewise linear function. We use dynamic programming to solve this proposed optimization, determining the optimal supplier and the optimal product volume that will be purchased from the optimal supplier for each time period, so that the inventory level tracks a reference trajectory given by the decision maker with minimal total cost. We give a numerical experiment to evaluate the proposed model. From the results, the optimal supplier was determined for each time period, and the inventory level followed the given reference well.
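The dynamic-programming recursion described above can be sketched as follows; the discretized inventory grid, the supplier tuples (base price, discount break, discounted price), and the tracking penalty are all illustrative assumptions, not the paper's data.

```python
# Hypothetical sketch of the multi-stage DP idea: discretized inventory
# levels, a few suppliers with piecewise-linear discounted prices, and a
# reference trajectory that the inventory level should track.

def unit_cost(supplier, volume):
    """Piecewise-linear discount: price drops once volume passes a break."""
    base, brk, disc = supplier
    if volume <= brk:
        return base * volume
    return base * brk + disc * (volume - brk)

def solve_dp(suppliers, demand, reference, horizon, max_inv, track_weight=1.0):
    """Backward DP: value[s] = minimal cost-to-go from inventory level s."""
    INF = float("inf")
    value = [0.0] * (max_inv + 1)   # terminal cost-to-go is zero
    policy = []
    for t in reversed(range(horizon)):
        new_value = [INF] * (max_inv + 1)
        step = {}
        for inv in range(max_inv + 1):
            for si, sup in enumerate(suppliers):
                for vol in range(max_inv + 1):
                    nxt = inv + vol - demand[t]
                    if not 0 <= nxt <= max_inv:
                        continue
                    cost = (unit_cost(sup, vol)
                            + track_weight * abs(nxt - reference[t])
                            + value[nxt])
                    if cost < new_value[inv]:
                        new_value[inv] = cost
                        step[inv] = (si, vol)   # best supplier and volume
        value = new_value
        policy.append(step)
    policy.reverse()                 # policy[t] now corresponds to period t
    return value, policy
```

A forward pass then reads the stored policy: starting from an initial inventory level, `policy[t][inv]` gives the chosen supplier index and purchase volume for period `t`.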
Optimal state transfer of a single dissipative two-level system
NASA Astrophysics Data System (ADS)
Jirari, Hamza; Wu, Ning
2016-04-01
Optimal state transfer of a single two-level system (TLS) coupled to an Ohmic boson bath via off-diagonal TLS-bath coupling is studied by using optimal control theory. In the weak system-bath coupling regime where the time-dependent Bloch-Redfield formalism is applicable, we obtain the Bloch equation to probe the evolution of the dissipative TLS in the presence of a time-dependent external control field. By using the automatic differentiation technique to compute the gradient for the cost functional, we calculate the optimal transfer integral profile that can achieve an ideal transfer within a dimer system in the Fenna-Matthews-Olson (FMO) model. The robustness of the control profile against temperature variation is also analyzed.
An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.
Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur
2017-01-01
Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level leader problem, subject to the optimality of a lower-level follower problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.
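The nested structure described above, in which every upper-level evaluation requires solving the lower-level problem to optimality, can be illustrated with a smooth toy problem; the ternary searches below merely stand in for the global and local search phases of BLMA, which this sketch does not reproduce.

```python
def ternary_min(f, lo, hi, iters=100):
    """Minimize a unimodal function on [lo, hi] by ternary search."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

def follower(x):
    # Lower level: y*(x) = argmin_y (y - x)^2, so y*(x) = x analytically.
    return ternary_min(lambda y: (y - x) ** 2, -10.0, 10.0)

def leader_objective(x):
    # Every upper-level evaluation triggers a full lower-level optimization;
    # this nesting is why bilevel solvers need so many function evaluations.
    y = follower(x)
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

x_star = ternary_min(leader_objective, 0.0, 5.0)
y_star = follower(x_star)
```

Since y*(x) = x here, the upper objective reduces to (x - 1)^2 + (x - 2)^2, whose minimum is at x = 1.5 with value 0.5, which the nested search recovers.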
Deb, Kalyanmoy; Sinha, Ankur
2010-01-01
Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper level solution must correspond to an optimal solution to a lower level optimization problem. These problems commonly appear in many practical problem solving tasks including optimal control, process optimization, game-playing strategy developments, transportation problems, and others. However, they are commonly converted into a single level optimization problem by using an approximate solution procedure to replace the lower level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem solving activity.
Optimal strategy analysis based on robust predictive control for inventory system with random demand
NASA Astrophysics Data System (ADS)
Saputra, Aditya; Widowati, Sutrisno
2017-12-01
In this paper, the optimal strategy for a single product, single supplier inventory system with random demand is analyzed by using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e. the optimal product volume that should be purchased from the supplier in each time period, so that the expected cost is minimal. A numerical simulation is performed in MATLAB with generated random inventory data, where the inventory level must be controlled as closely as possible to a given set point. From the results, the robust predictive control model provides the optimal strategy, i.e. the optimal product volume that should be purchased, and the inventory level followed the given set point.
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years there have been several hierarchic multilevel optimization algorithms proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present study, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single level solutions and in multilevel studies where the equality constraints have been handled indirectly.
Two-level optimization of composite wing structures based on panel genetic optimization
NASA Astrophysics Data System (ADS)
Liu, Boyang
The design of complex composite structures used in aerospace or automotive vehicles presents a major challenge in terms of computational cost. Discrete choices for ply thicknesses and ply angles lead to a combinatorial optimization problem that is too expensive to solve with presently available computational resources. We developed the following methodology for handling this problem for wing structural design: we used a two-level optimization approach with response-surface approximations to optimize panel failure loads for the upper-level wing optimization. We tailored efficient permutation genetic algorithms to the panel stacking sequence design on the lower level. We also developed an approach for improving the continuity of ply stacking sequences among adjacent panels. The decomposition approach led to a lower-level optimization of stacking sequence with a given number of plies in each orientation. An efficient permutation genetic algorithm (GA) was developed for handling this problem. We demonstrated through examples that the permutation GAs are more efficient for stacking sequence optimization than a standard GA. Repair strategies for the standard GA and the permutation GAs for dealing with constraints were also developed. The repair strategies can significantly reduce computation costs for both the standard GA and the permutation GA. A two-level optimization procedure for composite wing design subject to strength and buckling constraints is presented. At wing-level design, continuous optimization of ply thicknesses with orientations of 0°, 90°, and +/-45° is performed to minimize weight. At the panel level, the number of plies of each orientation (rounded to integers) and in-plane loads are specified, and a permutation genetic algorithm is used to optimize the stacking sequence. The process begins with many panel genetic optimizations for a range of loads and numbers of plies of each orientation. Next, a cubic polynomial response surface is fitted to the optimum buckling load.
The resulting response surface is used for wing-level optimization. In general, complex composite structures consist of several laminates. A common problem in the design of such structures is that some plies in the adjacent laminates terminate in the boundary between the laminates. These discontinuities may cause stress concentrations and may increase manufacturing difficulty and cost. We developed measures of continuity of two adjacent laminates. We studied tradeoffs between weight and continuity through a simple composite wing design. Finally, we compared the two-level optimization to a single-level optimization based on flexural lamination parameters. The single-level optimization is efficient and feasible for a wing consisting of unstiffened panels.
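The lower-level permutation GA can be sketched as follows; the ply multiset, the swap-mutation scheme, and the z²-weighted surrogate fitness are illustrative assumptions standing in for a real buckling analysis.

```python
import random

# Toy permutation-GA sketch for stacking-sequence design. The ply counts per
# orientation are fixed (as on the lower level of the two-level procedure);
# only the ordering evolves, via swap mutation, so every individual remains
# a valid permutation of the given plies.

ANGLES = [45, -45, 0, 0, 90, 90, 45, -45]   # fixed multiset of ply angles

def fitness(seq):
    """Made-up surrogate: reward +/-45 plies toward the outer surface,
    mimicking the z^2 weighting of flexural stiffness (not real buckling)."""
    n = len(seq)
    return sum((abs(i - (n - 1) / 2) ** 2) * (1.0 if abs(a) == 45 else 0.2)
               for i, a in enumerate(seq))

def swap_mutate(seq, rng):
    """Exchange two plies: preserves the ply counts per orientation."""
    i, j = rng.sample(range(len(seq)), 2)
    child = list(seq)
    child[i], child[j] = child[j], child[i]
    return child

def permutation_ga(angles, generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.sample(angles, len(angles)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the best half
        pop = elite + [swap_mutate(rng.choice(elite), rng) for _ in elite]
    return max(pop, key=fitness)

best = permutation_ga(ANGLES)
```

Because mutation only swaps positions, constraint repair is unnecessary here; in the article's setting, repair strategies handle the additional design constraints that swaps alone cannot preserve.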
EUVL back-insertion layout optimization
NASA Astrophysics Data System (ADS)
Civay, D.; Laffosse, E.; Chesneau, A.
2018-03-01
Extreme ultraviolet lithography (EUVL) is targeted for front-up insertion at advanced technology nodes but will be evaluated for back insertion at more mature nodes. EUVL can combine two or more mask levels onto one mask, depending upon the level(s) of the process at which insertion occurs. In this paper, layout optimization methods are discussed that can be implemented when EUVL back insertion is adopted. The layout optimizations can be focused on improving yield, reliability, or density, depending upon the design needs. The proposed methodology modifies the original two or more colored layers and generates an optimized single color EUVL layout design.
Acoustic design by topology optimization
NASA Astrophysics Data System (ADS)
Dühring, Maria B.; Jensen, Jakob S.; Sigmund, Ole
2008-11-01
To bring down noise levels in human surroundings is an important issue, and a method to reduce noise by means of topology optimization is presented here. The acoustic field is modeled by the Helmholtz equation, and the topology optimization method is based on continuous material interpolation functions in the density and bulk modulus. The objective function is the squared sound pressure amplitude. First, room acoustic problems are considered, and it is shown that the sound level can be reduced in a certain part of the room by an optimized distribution of reflecting material in a design domain along the ceiling, or by a distribution of absorbing and reflecting material along the walls. We obtain well-defined optimized designs for a single frequency or a frequency interval for both 2D and 3D problems when considering low frequencies. Second, it is shown that the method can be applied to design outdoor sound barriers in order to reduce the sound level in the shadow zone behind the barrier. A reduction of up to 10 dB for a single barrier, and almost 30 dB when using two barriers, is achieved compared to utilizing conventional sound barriers.
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Christiansen, Ove
2018-06-01
We study how geometry optimizations that rely on numerical gradients can be accelerated by means of Gaussian Process Regression (GPR). The GPR interpolates a local potential energy surface on which the structure is optimized. It is found to be efficient to combine results at a low computational level (HF or MP2) with the GPR-calculated gradient of the difference between the low-level method and the target method, which in this study is a variant of explicitly correlated Coupled Cluster Singles and Doubles with perturbative Triples correction, CCSD(F12*)(T). Overall convergence is achieved when both the potential and the geometry are converged. Compared to numerical gradient-based algorithms, the number of required single point calculations is reduced. Although the interpolation introduces an error, the optimized structures are sufficiently close to the minimum of the target level of theory, in the sense that the reference and predicted minima differ energetically only in the μEh regime.
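A minimal sketch of this delta-learning idea: fit a GP to the difference between a cheap and an expensive surface, then optimize the cheap surface plus the GP correction. The quadratic toy potentials, kernel length scale, and grid search below are assumptions for illustration, not the paper's CCSD(F12*)(T) setup.

```python
import numpy as np

def low_level(x):
    """Cheap surface (stands in for HF or MP2)."""
    return (x - 1.0) ** 2

def target_level(x):
    """Expensive surface (stands in for the coupled-cluster target)."""
    return (x - 1.3) ** 2 + 0.1

def gp_mean(x_train, y_train, x_query, length=0.7, noise=1e-8):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return k(x_query, x_train) @ alpha

# Train the GP on the low/target *difference* at a few expensive points.
x_train = np.array([0.0, 0.6, 1.2, 1.8, 2.4])
delta = target_level(x_train) - low_level(x_train)

# Optimize the corrected surface (cheap to evaluate everywhere) on a grid;
# a gradient-based optimizer would be used in practice.
grid = np.linspace(0.0, 2.4, 2001)
surrogate = low_level(grid) + gp_mean(x_train, delta, grid)
x_opt = grid[np.argmin(surrogate)]
```

The corrected minimum lands near the target minimum at x = 1.3 rather than the low-level minimum at x = 1.0, using only a handful of expensive evaluations.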
NASA Astrophysics Data System (ADS)
Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong
2017-06-01
In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of the asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are indicated by comparison with the single equivalent strategy.
An extension of the directed search domain algorithm to bilevel optimization
NASA Astrophysics Data System (ADS)
Wang, Kaiqiang; Utyuzhnikov, Sergey V.
2017-08-01
A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.
NASA Astrophysics Data System (ADS)
Buyuk, Ersin; Karaman, Abdullah
2017-04-01
We estimated transmissivity and storage coefficient values from single well water-level measurements positioned ahead of the mining face by using the particle swarm optimization (PSO) technique. The water-level response to the advancing mining face contains a semi-analytical function that is not suitable for conventional inversion schemes, because the partial derivatives are difficult to calculate. Moreover, the logarithmic behaviour of the model creates difficulty in obtaining an initial model that leads to stable convergence. The PSO appears to obtain a reliable solution that produces a reasonable fit between the water-level data and the model function response. Optimization methods are used to find optimum conditions consisting of either the minimum or maximum of a given objective function with regard to some criteria. Unlike PSO, traditional non-linear optimization methods have been used for many hydrogeologic and geophysical engineering problems. These methods suffer from difficulties such as dependence on the initial model, evaluation of the partial derivatives required when linearizing the model, and trapping at local optima. Recently, particle swarm optimization (PSO) has become a prominent modern global optimization method, inspired by the social behaviour of bird swarms, and appears to be a reliable and powerful algorithm for complex engineering applications. PSO, which does not depend on an initial model and is a derivative-free stochastic process, appears to be capable of searching all possible solutions in the model space around either local or global optimum points.
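A minimal PSO of the kind described can be sketched as follows; in the study the objective would be the misfit between measured water levels and the semi-analytical model response, with transmissivity and storage coefficient as the two unknowns, whereas here a toy quadratic bowl stands in.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Derivative-free global search: no initial model, no linearization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy two-parameter misfit with its minimum at (3, -1).
best, best_val = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
                     bounds=[(-10, 10), (-10, 10)])
```

The inertia and acceleration constants above are common textbook defaults; tuning them (or the swarm size) is an assumption left to the application.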
Multiobjective hyper heuristic scheme for system design and optimization
NASA Astrophysics Data System (ADS)
Rafique, Amer Farhan
2012-11-01
As system design is becoming more and more multifaceted, integrated, and complex, the traditional single objective optimization approach to optimal design is becoming less and less efficient and effective. Single objective optimization methods present a unique optimal solution, whereas multiobjective methods present a Pareto front. The foremost intent is to predict a reasonably distributed Pareto-optimal solution set independent of the problem instance through a multiobjective scheme. Another objective of the intended approach is to improve the worthiness of the outputs of the complex engineering system design process at the conceptual design phase. The process is automated in order to provide the system designer with the leverage of studying and analyzing a large number of possible solutions in a short time. This article presents a Multiobjective Hyper Heuristic Optimization Scheme based on low-level meta-heuristics developed for application in engineering system design. Herein, we present a stochastic function to manage the low-level meta-heuristics and increase the likelihood of reaching the global optimum solution. Genetic Algorithm, Simulated Annealing, and Swarm Intelligence are used as low-level meta-heuristics in this study. Performance of the proposed scheme is investigated through a comprehensive empirical analysis yielding acceptable results. One of the primary motives for performing multiobjective optimization is that current engineering systems require simultaneous optimization of multiple, conflicting objectives. Random decision making makes the implementation of this scheme attractive and easy. Injecting feasible solutions significantly alters the search direction and also adds diversity to the population, resulting in accomplishment of the pre-defined goals set in the proposed scheme.
Patel, B N; Thomas, J V; Lockhart, M E; Berland, L L; Morgan, D E
2013-02-01
To evaluate lesion contrast in pancreatic adenocarcinoma patients using spectral multidetector computed tomography (MDCT) analysis. The present institutional review board-approved, Health Insurance Portability and Accountability Act of 1996 (HIPAA)-compliant retrospective study evaluated 64 consecutive adults with pancreatic adenocarcinoma examined using a standardized, multiphasic protocol on a single-source, dual-energy MDCT system. Pancreatic phase images (35 s) were acquired in dual-energy mode; unenhanced and portal venous phases used standard MDCT. Lesion contrast was evaluated on an independent workstation using dual-energy analysis software, comparing tumour to non-tumoural pancreas attenuation (HU) differences and tumour diameter at three energy levels: 70 keV; individual subject-optimized viewing energy level (based on the maximum contrast-to-noise ratio, CNR); and 45 keV. The image noise was measured for the same three energies. Differences in lesion contrast, diameter, and noise between the different energy levels were analysed using analysis of variance (ANOVA). Quantitative differences in contrast gain between 70 keV and CNR-optimized viewing energies, and between CNR-optimized and 45 keV were compared using the paired t-test. Thirty-four women and 30 men (mean age 68 years) had a mean tumour diameter of 3.6 cm. The median optimized energy level was 50 keV (range 40-77). The mean ± SD lesion contrast values (non-tumoural pancreas - tumour attenuation) were: 57 ± 29, 115 ± 70, and 146 ± 74 HU (p = 0.0005); the lengths of the tumours were: 3.6, 3.3, and 3.1 cm, respectively (p = 0.026); and the contrast to noise ratios were: 24 ± 7, 39 ± 12, and 59 ± 17 (p = 0.0005) for 70 keV, the optimized energy level, and 45 keV, respectively. For individuals, the mean ± SD contrast gain from 70 keV to the optimized energy level was 59 ± 45 HU; and the mean ± SD contrast gain from the optimized energy level to 45 keV was 31 ± 25 HU (p = 0.007). 
Significantly increased pancreatic lesion contrast was noted at lower viewing energies using spectral MDCT. Individual patient CNR-optimized energy level images have the potential to improve lesion conspicuity. Copyright © 2012 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single product inventory control problem and supplier selection problem where the demand and purchasing cost parameters are random. For each time period, by using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from the optimal supplier, so that the inventory level will be located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. From the results, for each time period, the proposed model generated the optimal supplier, and the inventory level tracked the reference point well.
Incentives for Optimal Multi-level Allocation of HIV Prevention Resources
Malvankar, Monali M.; Zaric, Gregory S.
2013-01-01
HIV/AIDS prevention funds are often allocated at multiple levels of decision-making. Optimal allocation of HIV prevention funds maximizes the number of HIV infections averted. However, decision makers often allocate using simple heuristics such as proportional allocation. We evaluate the impact of using incentives to encourage optimal allocation in a two-level decision-making process. We model an incentive-based decision-making process consisting of an upper-level decision maker allocating funds to a single lower-level decision maker, who then distributes funds to local programs. We assume that the lower-level utility function is linear in the amount of the budget received from the upper level, the fraction of funds reserved for proportional allocation, and the number of infections averted. We assume that the upper-level objective is to maximize the number of infections averted. We illustrate with an example using data from California, U.S. PMID:23766551
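The gap between heuristic and optimal allocation can be illustrated with a toy model (the benefit curves, parameters, and budget below are assumptions, not the study's data); each program's infections averted is an increasing, concave function of funding:

```python
import math

# Hypothetical programs: infections averted by program i at funding x is
# a_i * (1 - exp(-x / b_i)). Parameters and budget are illustrative only.
programs = [(120, 40.0), (80, 15.0), (50, 25.0)]   # (a_i, b_i)
budget = 60.0

def averted(alloc):
    return sum(a * (1 - math.exp(-x / b)) for (a, b), x in zip(programs, alloc))

# Simple heuristic: proportional (here, equal-share) allocation.
prop = [budget / len(programs)] * len(programs)

# Near-optimal allocation by greedy marginal allocation in small increments,
# valid because each benefit function is increasing and concave.
n_steps = 600
step = budget / n_steps
alloc = [0.0] * len(programs)
for _ in range(n_steps):
    # give the next increment to the program with the largest marginal gain
    gains = [a / b * math.exp(-x / b) for (a, b), x in zip(programs, alloc)]
    alloc[gains.index(max(gains))] += step

print(round(averted(prop), 1), round(averted(alloc), 1))
```

The optimized allocation averts at least as many infections as the proportional heuristic, which is the gap the incentive scheme is meant to close.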
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity create a need to solve optimal power flow problems with large quantities of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
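A minimal Monte Carlo sketch of the union bound that Boole's inequality provides (the constraint margins and Gaussian disturbances below are synthetic stand-ins, not a power-flow model):

```python
import random

random.seed(0)
# Synthetic setup: constraint i is "violated" when a standard-normal
# disturbance xi_i exceeds its margin m_i. Margins are assumptions.
margins = [2.0, 2.2, 2.5]
N = 100_000
joint_viol = 0
single_viol = [0] * len(margins)
for _ in range(N):
    xi = [random.gauss(0.0, 1.0) for _ in margins]
    hits = [x > m for x, m in zip(xi, margins)]
    joint_viol += any(hits)                 # at least one constraint violated
    for i, h in enumerate(hits):
        single_viol[i] += h

p_joint = joint_viol / N                    # P(any constraint violated)
boole = sum(v / N for v in single_viol)     # Boole: P(any) <= sum_i P(i)
print(p_joint, boole)
```

The bound holds sample-by-sample (a run with at least one violation contributes at most as much to the left side as to the right), which is why Boole's inequality is conservative; the paper's contribution is a tighter replacement.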
Xu, Zixiang; Zheng, Ping; Sun, Jibin; Ma, Yanhe
2013-01-01
Gene knockout has been used as a common strategy to improve microbial strains for producing chemicals. Several algorithms are available to predict the target reactions to be deleted. Most of them apply mixed integer bi-level linear programming (MIBLP) based on metabolic networks, and use duality theory to transform bi-level optimization problem of large-scale MIBLP to single-level programming. However, the validity of the transformation was not proved. Solution of MIBLP depends on the structure of inner problem. If the inner problem is continuous, Karush-Kuhn-Tucker (KKT) method can be used to reformulate the MIBLP to a single-level one. We adopt KKT technique in our algorithm ReacKnock to attack the intractable problem of the solution of MIBLP, demonstrated with the genome-scale metabolic network model of E. coli for producing various chemicals such as succinate, ethanol, threonine and etc. Compared to the previous methods, our algorithm is fast, stable and reliable to find the optimal solutions for all the chemical products tested, and able to provide all the alternative deletion strategies which lead to the same industrial objective. PMID:24348984
Review of optimization techniques of polygeneration systems for building applications
NASA Astrophysics Data System (ADS)
Rong, A. Y.; Su, Y.; Lahdelma, R.
2016-08-01
Polygeneration means simultaneous production of two or more energy products in a single integrated process. Polygeneration is an energy-efficient technology and plays an important role in the transition into future low-carbon energy systems. It can find wide applications in utilities, different types of industrial sectors, and building sectors. This paper mainly focuses on polygeneration applications in the building sector. The scales of polygeneration systems in the building sector range from the micro level for a single home to the large level for residential districts. Also, the development of polygeneration microgrids is related to building applications. The paper aims at giving a comprehensive review of optimization techniques for designing, synthesizing, and operating different types of polygeneration systems for building applications.
Räikkönen, Katri; Matthews, Karen A.
2010-01-01
We tested the hypotheses that (1) high pessimism and low optimism (LOT-R overall and subscale scores) would predict high ambulatory blood pressure (ABP) level and 24-hour load (percentage of ABP values exceeding the pediatric 95th percentile) among healthy Black and White adolescents (n = 201; 14–16 yrs) across 2 consecutive school days and (2) that the relationships for the pessimism and optimism subscales would show nonlinear effects. The hypotheses were confirmed for pessimism but not for optimism. The results suggest that high pessimism may have different effects than low optimism on ABP and that even moderate levels of pessimism may affect blood pressure regulation. These results suggest that optimism and pessimism are not opposite poles of a single continuum but ought to be treated as separate constructs. PMID:18399951
Flexible Approximation Model Approach for Bi-Level Integrated System Synthesis
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Kim, Hongman; Ragon, Scott; Soremekun, Grant; Malone, Brett
2004-01-01
Bi-Level Integrated System Synthesis (BLISS) is an approach that allows design problems to be naturally decomposed into a set of subsystem optimizations and a single system optimization. In the BLISS approach, approximate mathematical models are used to transfer information from the subsystem optimizations to the system optimization. Accurate approximation models are therefore critical to the success of the BLISS procedure. In this paper, new capabilities that are being developed to generate accurate approximation models for the BLISS procedure are described. The benefits of using flexible approximation models such as Kriging are demonstrated in terms of convergence characteristics and computational cost. An approach for dealing with cases where a subsystem optimization cannot find a feasible design is investigated, using the new flexible approximation models for the violated local constraints.
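A minimal sketch of the kind of Kriging interpolation involved, written here as a plain zero-mean Gaussian-process interpolator over a toy response (the kernel, length-scale, and test function are assumptions, not BLISS internals):

```python
import numpy as np

# Kriging-style interpolator: fit exact interpolation weights through sampled
# "subsystem responses" (a toy sine here) using an RBF covariance.
def rbf(a, b, ls=0.2):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train)                      # sampled responses
K = rbf(x_train, x_train) + 1e-10 * np.eye(len(x_train))   # jitter for stability
alpha = np.linalg.solve(K, y_train)                        # interpolation weights

def predict(x_new):
    """Interpolate the response at new design points."""
    return rbf(x_new, x_train) @ alpha

print(predict(np.array([0.25, 0.75])))                     # close to [1, -1]
```

The surrogate reproduces the training samples exactly and stays smooth between them, which is the property the system-level optimization relies on when it queries subsystem behavior.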
Optimal configuration of microstructure in ferroelectric materials by stochastic optimization
NASA Astrophysics Data System (ADS)
Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.
2010-07-01
An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) whose crystallographic axes are aligned imperfectly. The piezoelectric properties of a polycrystalline (ceramic) FE are inextricably related to the grain orientation distribution (texture). The set of combinations of variables (the solution space) that dictates the texture of a ceramic is unlimited, and hence the choice of the optimal solution that maximizes the piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed to identify the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of the polycrystalline FE are calculated using mathematical homogenization at each iteration step. The configuration of grains, characterized by its orientations at each iteration, is generated using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single crystalline phase compares well with the experimental data. Apparent enhancement of the piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of results with the published data on single crystals, we proceed to apply the methodology to polycrystals. A configuration of crystallites, simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness, which would multiply the overall piezoelectricity in ceramic BaTiO3, is also identified. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°.
The piezoelectric coefficient in such a ceramic is found to be nearly three times that of the single crystal. Our optimization model provides designs for materials with enhanced piezoelectric performance, which should stimulate further studies involving materials possessing higher spontaneous polarization.
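The stochastic search over orientation distribution parameters can be caricatured with simulated annealing on a toy surrogate objective (the surrogate below merely peaks near a 45° mean tilt with a narrow spread; it is not the homogenized model of the paper):

```python
import math
import random

random.seed(1)

def effective_d33(mu, sigma):
    # Toy surrogate for the homogenized response: peaked at a 45-degree mean
    # orientation and a narrow spread. Purely illustrative.
    return math.exp(-((mu - 45.0) / 15.0) ** 2 - (sigma / 30.0) ** 2)

state, temp = (10.0, 40.0), 1.0        # initial (mean, spread) and temperature
best = state
for _ in range(5000):
    cand = (state[0] + random.gauss(0.0, 5.0),
            abs(state[1] + random.gauss(0.0, 5.0)))
    dE = effective_d33(*cand) - effective_d33(*state)
    if dE > 0 or random.random() < math.exp(dE / temp):   # Metropolis acceptance
        state = cand
        if effective_d33(*state) > effective_d33(*best):
            best = state
    temp *= 0.999                       # geometric cooling schedule

print(best)                             # mean orientation drifts toward 45 deg
```

Accepting occasional downhill moves at high temperature lets the search escape local optima in the orientation space before the cooling schedule freezes it near the best configuration found.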
Predicting the Health of a Natural Water System
ERIC Educational Resources Information Center
Graves, Gregory H.
2010-01-01
This project was developed as an interdisciplinary application of the optimization of a single-variable function. It was used in a freshman-level single-variable calculus course. After the first month of the course, students had been exposed to the concepts of the derivative as a rate of change, average and instantaneous velocities, derivatives of…
Guo, Xuezhen; Claassen, G D H; Oude Lansink, A G J M; Saatkamp, H W
2014-06-01
Economic analysis of hazard surveillance in livestock production chains is essential for surveillance organizations (such as food safety authorities) when making scientifically based decisions on optimization of resource allocation. To enable this, quantitative decision support tools are required at two levels of analysis: (1) single-hazard surveillance system and (2) surveillance portfolio. This paper addresses the first level by presenting a conceptual approach for the economic analysis of single-hazard surveillance systems. The concept includes objective and subjective aspects of single-hazard surveillance system analysis: (1) a simulation part to derive an efficient set of surveillance setups based on the technical surveillance performance parameters (TSPPs) and the corresponding surveillance costs, i.e., objective analysis, and (2) a multi-criteria decision making model to evaluate the impacts of the hazard surveillance, i.e., subjective analysis. The conceptual approach was checked for (1) conceptual validity and (2) data validity. Issues regarding the practical use of the approach, particularly the data requirement, were discussed. We concluded that the conceptual approach is scientifically credible for economic analysis of single-hazard surveillance systems and that the practicability of the approach depends on data availability. Copyright © 2014 Elsevier B.V. All rights reserved.
Intelligent fault recognition strategy based on adaptive optimized multiple centers
NASA Astrophysics Data System (ADS)
Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong
2018-06-01
For recognition methods based on a single optimized center, an important issue is that data with a nonlinear separatrix cannot be recognized accurately. In order to solve this problem, a novel recognition strategy based on adaptive optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix by using multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, covering recognition accuracy, the quantity of optimized centers, and the distance relationship. According to the characteristics of the data, the priority levels are adjusted to adapt the quantity of optimized centers while keeping the original accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural network, and Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability across data with different distribution characteristics.
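The core idea, that several centers per class can capture a nonlinear separatrix that defeats a single center, can be sketched with fixed toy centers on an XOR-like layout (all coordinates below are assumptions, not the paper's optimized centers):

```python
import math

# Toy multiple-centers classifier: class A occupies two opposite corners, so a
# single class center cannot separate it from class B; two centers per class can.
centers = {
    "A": [(0.0, 0.0), (4.0, 4.0)],
    "B": [(0.0, 4.0), (4.0, 0.0)],
}

def classify(p):
    # assign the point to the class owning the nearest center
    return min(
        ((math.dist(p, c), label) for label, cs in centers.items() for c in cs)
    )[1]

samples = [((0.2, 0.1), "A"), ((3.9, 4.2), "A"),
           ((0.1, 3.8), "B"), ((4.1, -0.2), "B")]
acc = sum(classify(p) == y for p, y in samples) / len(samples)
print(acc)   # 1.0 on this layout, where single-center classification fails
```

The paper's contribution is to choose the number and placement of such centers adaptively via prioritized multi-objective optimization rather than fixing them by hand.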
Optimality of affine control system of several species in competition on a sequential batch reactor
NASA Astrophysics Data System (ADS)
Rodríguez, J. C.; Ramírez, H.; Gajardo, P.; Rapaport, A.
2014-09-01
In this paper, we analyse the optimality of an affine control system of several species in competition for a single substrate on a sequential batch reactor, the objective being to reach a given (low) level of the substrate. We allow controls to be bounded measurable functions of time plus possible impulses. A suitable modification of the dynamics leads to a slightly different optimal control problem, without impulsive controls, to which we apply different optimality conditions derived from the Pontryagin principle and the Hamilton-Jacobi-Bellman equation. We thus characterise the singular trajectories of our problem as the extremal trajectories keeping the substrate at a constant level. We also establish conditions under which an immediate one impulse (IOI) strategy is optimal. Some numerical experiments are then included in order to illustrate our study and to show that those conditions are also necessary to ensure the optimality of the IOI strategy.
Displacement based multilevel structural optimization
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1995-01-01
Multidisciplinary design optimization (MDO) is expected to play a major role in the competitive transportation industries of tomorrow, i.e., in the design of aircraft and spacecraft, of high speed trains, boats, and automobiles. All of these vehicles require maximum performance at minimum weight to keep fuel consumption low and conserve resources. Here, MDO can deliver mathematically based design tools to create systems with optimum performance subject to the constraints of disciplines such as structures, aerodynamics, controls, etc. Although some applications of MDO are beginning to surface, the key to a widespread use of this technology lies in the improvement of its efficiency. This aspect is investigated here for the MDO subset of structural optimization, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures (here, statically indeterminate trusses and beams for proof of concept) is performed. In the system level optimization, the design variables are the coefficients of assumed displacement functions, and the load unbalance resulting from the solution of the stiffness equations is minimized. Constraints are placed on the deflection amplitudes and the weight of the structure. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. This approach is expected to prove very efficient, especially for complex structures, since the design task is broken down into a large number of small and efficiently handled subtasks, each with only a small number of variables. 
This partitioning will also allow for the use of parallel computing: first, by sending the system and subsystems level computations to two different processors and, ultimately, by performing all subsystems level optimizations in a massively parallel manner on separate processors. It is expected that the subsystems level optimizations can be further improved through the use of controlled growth, a method which reduces an optimization to a more efficient analysis with only a slight degradation in accuracy. The efficiency of all proposed techniques is being evaluated relative to the performance of the standard single level optimization approach, where the complete structure is weight minimized under the action of all given constraints by one processor, and to the performance of simultaneous analysis and design, which combines analysis and optimization into a single step. It is expected that the present approach can be expanded to include additional structural constraints (buckling, free and forced vibration, etc.) or other disciplines (passive and active controls, aerodynamics, etc.) for true MDO.
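The subsystems-level step can be sketched in closed form for a statically determinate case (the forces, lengths, and material limits below are illustrative assumptions, not a structure from the text): each member's minimum-weight area under a stress constraint is simply A_i = |F_i| / sigma_allow.

```python
# Subsystems-level sizing sketch: one independent minimization per member,
# then the system-level objective (total weight) is assembled from the results.
rho = 0.1            # material density, lb/in^3 (assumed)
sigma_allow = 25e3   # allowable stress, psi (assumed)
members = [          # (axial force F_i in lb, length L_i in in), assumed values
    (30e3, 100.0),
    (-18e3, 120.0),
    (12e3, 80.0),
]

# Each "subsystem optimization" reduces to a closed-form minimum area.
areas = [abs(F) / sigma_allow for F, _ in members]

# System-level objective: total structural weight.
weight = sum(rho * A * L for A, (_, L) in zip(areas, members))

for A, (F, _) in zip(areas, members):
    assert abs(F) / A <= sigma_allow + 1e-9   # stress constraint satisfied
print(round(weight, 2))                       # prints 24.48
```

For indeterminate structures the member forces depend on the sizes, which is why the paper iterates between the system-level displacement optimization and these element-level subproblems instead of solving once.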
Estimation of the discharges of the multiple water level stations by multi-objective optimization
NASA Astrophysics Data System (ADS)
Matsumoto, Kazuhiro; Miyamoto, Mamoru; Yamakage, Yuzuru; Tsuda, Morimasa; Yanami, Hitoshi; Anai, Hirokazu; Iwami, Yoichi
2016-04-01
This presentation shows two aspects of the parameter identification to estimate the discharges of the multiple water level stations by multi-objective optimization. One is how to adjust the parameters to estimate the discharges accurately. The other is which optimization algorithms are suitable for the parameter identification. Regarding the previous studies, there is a study that minimizes the weighted error of the discharges of the multiple water level stations by single-objective optimization. On the other hand, there are some studies that minimize multiple error assessment functions of the discharge of a single water level station by multi-objective optimization. The novelty of this presentation is the simultaneous minimization of the errors of the discharges of the multiple water level stations by multi-objective optimization. The Abe River basin in Japan is targeted. The basin area is 567.0km2. There are thirteen rainfall stations and three water level stations. Nine flood events are investigated. They occurred from 2005 to 2012, and the maximum discharges exceeded 1,000m3/s. The discharges are calculated with the PWRI distributed hydrological model. The basin is partitioned into meshes of 500m x 500m. Two-layer tanks are placed on each mesh. Fourteen parameters are adjusted to estimate the discharges accurately. Twelve of them are hydrological parameters and two of them are parameters of the initial water levels of the tanks. The three objective functions are the mean squared errors between the observed and calculated discharges at the water level stations. Latin Hypercube sampling is one of the uniform sampling algorithms. The discharges are calculated with respect to the parameter values sampled by a simplified version of Latin Hypercube sampling. The observed discharge is surrounded by the calculated discharges. This suggests that it might be possible to estimate the discharge accurately by adjusting the parameters.
In a sense, it is true that the discharge of a water level station can be accurately estimated by setting the parameter values optimized for the corresponding water level station. However, there are some cases where the discharge calculated with the parameter values optimized for one water level station does not match the observed discharge at another water level station. It is important to estimate the discharges of all the water level stations with some degree of accuracy. It turns out to be possible to select the parameter values from the Pareto-optimal solutions under the condition that, for every water level station, the error normalized by that station's minimum error is under 3. The optimization performance of five implementations of the algorithms and a simplified version of Latin Hypercube sampling are compared. The five implementations are NSGA2 and PAES of the optimization software inspyred, and MCO_NSGA2R, MOPSOCD, and NSGA2R_NSGA2R of the statistical software R. NSGA2, PAES, and MOPSOCD are a genetic algorithm, an evolution strategy, and a particle swarm optimization, respectively. The number of evaluations of the objective functions is 10,000. The two NSGA2 implementations in R outperform the others. They are promising candidates for the parameter identification of the PWRI distributed hydrological model.
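A compact sketch of two ingredients used here, a simplified Latin hypercube sampler and the extraction of the non-dominated (Pareto-optimal) set, with synthetic error functions standing in for the per-station discharge errors:

```python
import random

random.seed(42)

def lhs(n, dims):
    """Simplified Latin hypercube sampling on [0, 1)^dims: one point per
    stratum in each dimension, with independently shuffled columns."""
    cols = []
    for _ in range(dims):
        col = [(i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

def errors(p):
    # Synthetic stand-ins for the mean squared errors at two stations;
    # their minima sit at different parameter values, creating a trade-off.
    x, y = p
    return ((x - 0.3) ** 2 + y ** 2, x ** 2 + (y - 0.7) ** 2)

pts = lhs(50, 2)
objs = [errors(p) for p in pts]

# Pareto filter: keep objective vectors not weakly dominated by another point.
pareto = [
    o for o in objs
    if not any(all(q[k] <= o[k] for k in range(2)) and q != o for q in objs)
]
print(len(pareto), "non-dominated of", len(objs))
```

Selecting a single parameter set from this front (e.g., by the normalized-error threshold described above) is then a separate, post-hoc decision step.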
Multi-Objective Optimization of a Turbofan for an Advanced, Single-Aisle Transport
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.; Guynn, Mark D.
2012-01-01
Considerable interest surrounds the design of the next generation of single-aisle commercial transports in the Boeing 737 and Airbus A320 class. Aircraft designers will depend on advanced, next-generation turbofan engines to power these airplanes. The focus of this study is to apply single- and multi-objective optimization algorithms to the conceptual design of ultrahigh bypass turbofan engines for this class of aircraft, using NASA's Subsonic Fixed Wing Project metrics as multidisciplinary objectives for optimization. The independent design variables investigated include three continuous variables: sea level static thrust, wing reference area, and aerodynamic design point fan pressure ratio, and four discrete variables: overall pressure ratio, fan drive system architecture (i.e., direct- or gear-driven), bypass nozzle architecture (i.e., fixed- or variable-geometry), and the high- and low-pressure compressor work split. Ramp weight, fuel burn, noise, and emissions are the parameters treated as dependent objective functions. These optimized solutions provide insight to the ultrahigh bypass engine design process and provide information to NASA program management to help guide its technology development efforts.
Vibrational self-consistent field theory using optimized curvilinear coordinates.
Bulik, Ireneusz W; Frisch, Michael J; Vaccaro, Patrick H
2017-07-28
A vibrational SCF model is presented in which the functions forming the single-mode functions in the product wavefunction are expressed in terms of internal coordinates and the coordinates used for each mode are optimized variationally. This model involves no approximations to the kinetic energy operator and does not require a Taylor-series expansion of the potential. The non-linear optimization of coordinates is found to give much better product wavefunctions than the limited variations considered in most previous applications of SCF methods to vibrational problems. The approach is tested using published potential energy surfaces for water, ammonia, and formaldehyde. Variational flexibility allowed in the current ansätze results in excellent zero-point energies expressed through single-product states and accurate fundamental transition frequencies realized by short configuration-interaction expansions. Fully variational optimization of single-product states for excited vibrational levels also is discussed. The highlighted methodology constitutes an excellent starting point for more sophisticated treatments, as the bulk characteristics of many-mode coupling are accounted for efficiently in terms of compact wavefunctions (as evident from the accurate prediction of transition frequencies).
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
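The construction can be sketched on a toy two-input, single-output plant (the plant, cost function, and gradient-projection settings below are assumptions): sample desired outputs, find the cost-optimal input for each while holding the output fixed, then interpolate the optima into a continuous inverse function.

```python
import numpy as np

# Toy plant: output y = u1 + u2; cost J = u1^2 + 2*u2^2. For each sampled
# output we descend the cost gradient projected onto the constant-output set,
# mirroring "steps lowering the cost while maintaining the current output".
y_samples = np.linspace(0.0, 3.0, 7)

def optimal_input(y, steps=500, lr=0.05):
    u = np.array([y, 0.0])                   # feasible start: u1 + u2 = y
    for _ in range(steps):
        g = np.array([2 * u[0], 4 * u[1]])   # cost gradient
        g -= g.mean()                        # project onto {du1 + du2 = 0}
        u -= lr * g                          # step keeps the output unchanged
    return u

U = np.array([optimal_input(y) for y in y_samples])

def inverse(y_des):
    """Continuous inverse: interpolate each input channel over the optima."""
    return np.array([np.interp(y_des, y_samples, U[:, k]) for k in range(2)])

print(inverse(1.5))    # near the analytic optimum [1.0, 0.5] for y = 1.5
```

For this quadratic cost the analytic optimum at output y is (2y/3, y/3), so the interpolated inverse can be checked directly; the paper's method does the same with populations of agents and spline interpolation.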
Optimal control of quantum rings by terahertz laser pulses.
Räsänen, E; Castro, A; Werschnik, J; Rubio, A; Gross, E K U
2007-04-13
Complete control of single-electron states in a two-dimensional semiconductor quantum-ring model is established, opening a path into coherent laser-driven single-gate qubits. The control scheme is developed in the framework of optimal-control theory for laser pulses of two-component polarization. In terms of pulse lengths and target-state occupations, the scheme is shown to be superior to conventional control methods that exploit Rabi oscillations generated by uniform circularly polarized pulses. Current-carrying states in a quantum ring can be used to manipulate a two-level subsystem at the ring center. Combining our results, we propose a realistic approach to construct a laser-driven single-gate qubit that has switching times in the terahertz regime.
Bi-Level Integrated System Synthesis (BLISS) for Concurrent and Distributed Processing
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Altus, Troy D.; Phillips, Matthew; Sandusky, Robert
2002-01-01
The paper introduces a new version of the Bi-Level Integrated System Synthesis (BLISS) methods intended for optimization of engineering systems conducted by distributed specialty groups working concurrently and using a multiprocessor computing environment. The method decomposes the overall optimization task into subtasks, associated with disciplines or subsystems, in which the local design variables are numerous, and a single system-level optimization whose design variables are relatively few. The subtasks are fully autonomous as to their inner operations and decision making. Their purpose is to eliminate the local design variables and generate a wide spectrum of feasible designs whose behavior is represented by Response Surfaces to be accessed by a system-level optimization. It is shown that, if the problem is convex, the solution of the decomposed problem is the same as that obtained without decomposition. A simplified example of an aircraft design shows the method working as intended. The paper includes a discussion of the method's merits and demerits and recommendations for further research.
NASA Astrophysics Data System (ADS)
Kim, Jungkyu; Hong, Yushin; Kim, Taebok
2011-01-01
This article discusses joint pricing and ordering policies for price-dependent demand in a supply chain consisting of a single retailer and a single manufacturer. The retailer places orders for products according to an EOQ policy and the manufacturer produces them on a lot-for-lot basis. Four mechanisms with differing levels of coordination are presented. Mathematical models are formulated and solution procedures are developed to determine the optimal retail prices and order quantities. Through extensive numerical experiments, we analyse and compare the behaviours and characteristics of the proposed mechanisms, and find that enhancing the level of coordination has important benefits for the supply chain.
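The retailer's EOQ decision admits a one-line closed form; a sketch with assumed parameters (not values from the article):

```python
import math

# Assumed inputs: annual demand D, fixed ordering cost K, unit holding cost h.
D, K, h = 1200.0, 50.0, 6.0

Q_star = math.sqrt(2 * D * K / h)      # classic EOQ formula

def annual_cost(Q):
    # ordering cost (D/Q orders per year) + average holding cost
    return D / Q * K + h * Q / 2.0

# EOQ minimizes total cost: perturbed order quantities cost at least as much.
assert annual_cost(Q_star) <= annual_cost(Q_star * 0.9)
assert annual_cost(Q_star) <= annual_cost(Q_star * 1.1)
print(round(Q_star, 1), round(annual_cost(Q_star), 1))   # prints 141.4 848.5
```

In the coordinated mechanisms of the article, the retail price additionally shifts D, coupling the pricing and ordering decisions rather than leaving Q_star a standalone formula.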
Ye, Fang; Jiang, Jin; Chang, Honglong; Xie, Li; Deng, Jinjun; Ma, Zhibo; Yuan, Weizheng
2015-07-01
Cell studies at the single-cell level are becoming more and more critical for understanding the complex biological processes. Here, we present an optimization study investigating the positioning of single cells using micromolding in capillaries technology coupled with the cytophobic biomaterial poly (2-hydroxyethyl methacrylate) (poly (HEMA)). As a cytophobic biomaterial, poly (HEMA) was used to inhibit cells, whereas the glass was used as the substrate to provide a cell adhesive background. The poly (HEMA) chemical barrier was obtained using micromolding in capillaries, and the microchannel networks used for capillarity were easily achieved by reversibly bonding the polydimethylsiloxane mold and the glass. Finally, discrete cell adhesion regions were presented on the glass surface. This method is facile and low cost, and the reagents are commercially available. We validated the cytophobic abilities of the poly (HEMA), optimized the channel parameters for higher quality and more stable poly (HEMA) patterns by investigating the effects of changing the aspect ratio and the width of the microchannel on the poly (HEMA) grid pattern, and improved the single-cell occupancy by optimizing the dimensions of the cell adhesion regions.
A Minimally Invasive Method for Retrieving Single Adherent Cells of Different Types from Cultures
Zeng, Jia; Mohammadreza, Aida; Gao, Weimin; Merza, Saeed; Smith, Dean; Kelbauskas, Laimonas; Meldrum, Deirdre R.
2014-01-01
The field of single-cell analysis has gained significant momentum over the last decade. Separation and isolation of individual cells is an indispensable step in almost all currently available single-cell analysis technologies. However, stress levels introduced by such manipulations remain largely unstudied. We present a method for minimally invasive retrieval of selected individual adherent cells of different types from cell cultures. The method is based on a combination of mechanical (shear flow) force and biochemical (trypsin digestion) treatment. We quantified alterations in the transcription levels of stress response genes in individual cells exposed to varying levels of shear flow and trypsinization. We report the optimal temperature, RNA preservation reagents, shear force, and trypsinization conditions necessary to minimize changes in stress-related gene expression levels. The method and experimental findings are broadly applicable and can be used by the research community working in the field of single-cell analysis. PMID:24957932
Acoustic-noise-optimized diffusion-weighted imaging.
Ott, Martin; Blaimer, Martin; Grodzki, David M; Breuer, Felix A; Roesch, Julie; Dörfler, Arnd; Heismann, Björn; Jakob, Peter M
2015-12-01
This work was aimed at reducing acoustic noise in diffusion-weighted MR imaging (DWI) that might reach acoustic noise levels of over 100 dB(A) in clinical practice. A diffusion-weighted readout-segmented echo-planar imaging (EPI) sequence was optimized for acoustic noise by utilizing small readout segment widths to obtain low gradient slew rates and amplitudes instead of faster k-space coverage. In addition, all other gradients were optimized for low slew rates. Volunteer and patient imaging experiments were conducted to demonstrate the feasibility of the method. Acoustic noise measurements were performed and analyzed for four different DWI measurement protocols at 1.5T and 3T. An acoustic noise reduction of up to 20 dB(A) was achieved, which corresponds to a fourfold reduction in acoustic perception. The image quality was preserved at the level of a standard single-shot (ss)-EPI sequence, with a 27-54% increase in scan time. The diffusion-weighted imaging technique proposed in this study allowed a substantial reduction in the level of acoustic noise compared to standard single-shot diffusion-weighted EPI. This is expected to afford considerably more patient comfort, but a larger study would be necessary to fully characterize the subjective changes in patient experience.
Lee, Kyueun; Drekonja, Dimitri M; Enns, Eva A
2018-03-01
To determine the optimal antibiotic prophylaxis strategy for transrectal prostate biopsy (TRPB) as a function of the local antibiotic resistance profile. We developed a decision-analytic model to assess the cost-effectiveness of four antibiotic prophylaxis strategies: ciprofloxacin alone, ceftriaxone alone, ciprofloxacin and ceftriaxone in combination, and directed prophylaxis selection based on susceptibility testing. We used a payer's perspective and estimated the health care costs and quality-adjusted life-years (QALYs) associated with each strategy for a cohort of 66-year-old men undergoing TRPB. Costs and benefits were discounted at 3% annually. Base-case resistance prevalence was 29% to ciprofloxacin and 7% to ceftriaxone, reflecting susceptibility patterns observed at the Minneapolis Veterans Affairs Health Care System. Resistance levels were varied in sensitivity analysis. In the base case, single-agent prophylaxis strategies were dominated. The directed prophylaxis strategy was optimal at a willingness-to-pay threshold of $50,000/QALY gained. Relative to the directed prophylaxis strategy, the incremental cost-effectiveness ratio of the combination strategy was $123,333/QALY gained over the lifetime horizon. In sensitivity analysis, single-agent prophylaxis strategies were preferred only at extreme levels of resistance. Directed or combination prophylaxis strategies were optimal for a wide range of resistance levels. Facilities using single-agent antibiotic prophylaxis strategies before TRPB should re-evaluate their strategies unless extremely low levels of antimicrobial resistance are documented. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
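The decision rule of comparing strategies by incremental cost-effectiveness ratio (ICER) against a willingness-to-pay threshold can be sketched with illustrative numbers (assumed, not the study's actual estimates):

```python
# Hypothetical two-strategy comparison: (lifetime cost in $, QALYs).
strategies = {
    "directed":    (1200.0, 10.1000),
    "combination": (1350.0, 10.1012),
}

c0, q0 = strategies["directed"]
c1, q1 = strategies["combination"]
icer = (c1 - c0) / (q1 - q0)           # incremental $ per QALY gained

wtp = 50_000.0                          # willingness-to-pay threshold ($/QALY)
preferred = "combination" if icer <= wtp else "directed"
print(round(icer), preferred)           # ICER above the threshold -> directed
```

With these toy numbers the combination strategy buys its extra QALYs at an ICER above the $50,000/QALY threshold, so the directed strategy is preferred, mirroring the structure of the study's base-case conclusion.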
A kriging metamodel-assisted robust optimization method based on a reverse model
NASA Astrophysics Data System (ADS)
Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao
2018-02-01
The goal of robust optimization methods is to obtain a solution that is both optimal and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures that require a large amount of computational effort, because the robustness of each candidate solution delivered from the outer level must be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced to a single-loop optimization structure to ease the computational burden. Because it ignores the interpolation uncertainty of the kriging metamodel, however, K-RMRO may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of the kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner-level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of an individual can be changed by the interpolation uncertainties of the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.
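The single-loop idea, evaluating robustness on a cheap surrogate instead of running a nested inner optimization, can be sketched as below. This is a minimal toy, not the K-RMRO algorithm: the "metamodel" is an exact analytic stand-in for a fitted kriging predictor, and the objective and uncertainty band are hypothetical.

```python
def f_true(x):
    # Hypothetical objective: sharp, deep minimum near x = -0.5 and a
    # broad, shallow valley at x = +0.5.
    if abs(x + 0.5) < 0.02:
        return -1.5
    return (x - 0.5) ** 2 - 1.0

metamodel = f_true  # stand-in: pretend the surrogate predicts exactly

def robust_score(x, delta=0.1, n=21):
    # Worst case of the surrogate over the uncertainty band x +/- delta;
    # this single cheap sweep replaces the inner loop of a nested scheme.
    return max(metamodel(x + delta * (2 * i / (n - 1) - 1)) for i in range(n))

cands = [-1.0 + i / 100 for i in range(201)]
x_nom = min(cands, key=metamodel)       # nominal optimum: the fragile spike
x_rob = min(cands, key=robust_score)    # robust optimum: the broad valley
```

The nominal optimizer lands in the narrow spike, while the robust score steers the search to the flat valley, which is the behavior robust optimization is after; what K-RMRO adds beyond this sketch is the reverse model and the treatment of the surrogate's own interpolation uncertainty.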
Multiple anatomy optimization of accumulated dose
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W. Tyler, E-mail: watkinswt@virginia.edu; Siebers, Jeffrey V.; Moore, Joseph A.
Purpose: To investigate the potential advantages of multiple anatomy optimization (MAO) for lung cancer radiation therapy compared to the internal target volume (ITV) approach. Methods: MAO aims to optimize a single fluence to be delivered under free-breathing conditions such that the accumulated dose meets the plan objectives, where accumulated dose is defined as the sum of deformably mapped doses computed on each phase of a single four-dimensional computed tomography (4DCT) dataset. Phantom and patient simulation studies were carried out to investigate potential advantages of MAO compared to ITV planning. Through simulated delivery of the ITV and MAO plans, target dose variations were also investigated. Results: By optimizing the accumulated dose, MAO shows the potential to ensure dose to the moving target meets plan objectives while simultaneously reducing dose to organs at risk (OARs) compared with ITV planning. While consistently superior to the ITV approach, MAO resulted in equivalent OAR dosimetry at planning objective dose levels to within 2% volume in 14/30 plans and to within 3% volume in 19/30 plans for each of lung V20, esophagus V25, and heart V30. Despite large variations in per-fraction respiratory phase weights in simulated deliveries at high dose rates (e.g., treating 4/10 phases during single-fraction beams), the cumulative clinical target volume (CTV) dose after 30 fractions and the per-fraction dose were constant independent of planning technique. In one case considered, however, per-phase CTV dose varied from 74% to 117% of prescription, implying the level of ITV-dose heterogeneity may not be appropriate with conventional, free-breathing delivery. Conclusions: MAO incorporates 4DCT information in an optimized dose distribution and can achieve a superior plan in terms of accumulated dose to the moving target and OAR sparing compared to ITV plans. An appropriate level of dose heterogeneity in MAO plans must be further investigated.
SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.
Lee, Hyunyeol; Park, Jaeseok
2013-07-01
Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal fluid attenuated brain images without long inversion recovery preparation. Despite the advantages, the weighted-averaging-based technique suffers from noise amplification resulting from different levels of cerebrospinal fluid signal modulations over the two acquisitions. The purpose of this work is to develop a signal-to-noise ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution, while those in the second acquisition are calculated using a two-step pseudo-steady state signal transition with a high flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signals in both acquisitions. Low spatial frequency signals are sampled during the high flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate signal evolutions of brain tissues along the echo train and optimize imaging parameters. In vivo studies demonstrate that, compared with conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields a 74% increase in apparent signal-to-noise ratio for gray matter and a 32% decrease in imaging time. The proposed method can be a potential alternative to conventional fluid-attenuated imaging. Copyright © 2012 Wiley Periodicals, Inc.
Optimal load scheduling in commercial and residential microgrids
NASA Astrophysics Data System (ADS)
Ganji Tanha, Mohammad Mahdi
Residential and commercial electricity customers use more than two-thirds of the total energy consumed in the United States, representing a significant resource for demand response. Price-based demand response, in which customers respond to changes in electricity prices, adjusts load through optimal load scheduling (OLS). In this study, an efficient model for OLS is developed for residential and commercial microgrids that include aggregated loads in single units and communal loads. Single-unit loads, which include fixed, adjustable and shiftable loads, are controllable by the unit occupants. Communal loads, which include pool pumps, elevators and central heating/cooling systems, are shared among the units. In order to optimally schedule residential and commercial loads, a community-based optimal load scheduling (CBOLS) approach is proposed in this thesis. The CBOLS schedule considers hourly market prices, occupants' comfort levels, and microgrid operation constraints. The CBOLS objective in residential and commercial microgrids is the constrained minimization of the total cost of supplying the aggregator load, defined as the microgrid load minus the microgrid generation. This problem is represented by a large-scale mixed-integer optimization for supplying single-unit and communal loads. The Lagrangian relaxation methodology is used to relax the linking communal-load constraint and decompose the problem into independent single-unit subproblems that can be solved in parallel. The optimal solution is acceptable if the aggregator load limit and the duality gap are within bounds; if either criterion is not satisfied, the Lagrangian multiplier is updated and a new optimal load schedule is generated until both constraints are satisfied. The proposed method is applied to several case studies, and the results are presented for the Galvin Center load on the 16th floor of the IIT Tower in Chicago.
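The relax-decompose-update loop described above can be sketched in miniature. This toy is not the thesis model: the mixed-integer subproblems are reduced to two units each placing one shiftable load in one of two hours, a shared per-hour capacity plays the role of the communal constraint, and all prices, discomfort terms and step sizes are hypothetical.

```python
prices = [1.0, 2.0]            # $/kWh for hours 0 and 1 (hypothetical)
d = [[0.0, 0.5], [0.0, 0.1]]   # unit-specific discomfort of each hour
cap = 1.0                      # shared capacity: one unit per hour
lam = [0.0, 0.0]               # one multiplier per relaxed hourly constraint

for _ in range(100):
    # Decomposed subproblems: each unit minimizes its own energy cost
    # plus the congestion price lam[h], independently of the others.
    sched = [min((0, 1), key=lambda h: prices[h] + lam[h] + d[u][h])
             for u in (0, 1)]
    load = [sched.count(h) for h in (0, 1)]
    # Projected subgradient step on the relaxed constraint load[h] <= cap.
    lam = [max(0.0, l + 0.1 * (ld - cap)) for l, ld in zip(lam, load)]
```

Both units initially crowd into the cheap hour; the multiplier on that hour rises until the unit with the smaller discomfort penalty shifts away, after which the schedule is feasible and the multipliers stop moving. The duality-gap check and feasibility-recovery logic of the full method are omitted here.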
Optimal maintenance policy incorporating system level and unit level for mechanical systems
NASA Astrophysics Data System (ADS)
Duan, Chaoqun; Deng, Chao; Wang, Bingran
2018-04-01
This study considers a multi-level maintenance policy combining system level and unit level under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and also a two-level maintenance policy for each unit: maintenance of a unit is initiated when it exceeds its preventive maintenance (PM) threshold, or is performed opportunistically whenever any other unit undergoes maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, which arise because the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.
Structural tailoring of counter rotation propfans
NASA Technical Reports Server (NTRS)
Brown, Kenneth W.; Hopkins, D. A.
1989-01-01
The STAT program was designed for the optimization of single rotation, tractor propfan designs. New propfan designs, however, generally consist of two counter-rotating propfan rotors. STAT is constructed to contain two levels of analysis. An interior loop, consisting of accurate, efficient approximate analyses, is used to perform the primary propfan optimization. Once an optimum design has been obtained, a series of refined analyses is conducted. These analyses, while too computationally expensive for the optimization loop, are of sufficient accuracy to validate the optimized design. Should the design prove to be unacceptable, provisions are made for recalibration of the approximate analyses and subsequent reoptimization.
Creating single-copy genetic circuits
Lee, Jeong Wook; Gyorgy, Andras; Cameron, D. Ewen; Pyenson, Nora; Choi, Kyeong Rok; Way, Jeffrey C.; Silver, Pamela A.; Del Vecchio, Domitilla; Collins, James J.
2017-01-01
SUMMARY Synthetic biology is increasingly used to develop sophisticated living devices for basic and applied research. Many of these genetic devices are engineered using multi-copy plasmids, but as the field progresses from proof-of-principle demonstrations to practical applications, it is important to develop single-copy synthetic modules that minimize consumption of cellular resources and can be stably maintained as genomic integrants. Here we use empirical design, mathematical modeling and iterative construction and testing to build single-copy, bistable toggle switches with improved performance and reduced metabolic load that can be stably integrated into the host genome. Deterministic and stochastic models led us to focus on basal transcription to optimize circuit performance and helped to explain the resulting circuit robustness across a large range of component expression levels. The design parameters developed here provide important guidance for future efforts to convert functional multi-copy gene circuits into optimized single-copy circuits for practical, real-world use. PMID:27425413
Gerritz, Samuel W; Zhai, Weixu; Shi, Shuhao; Zhu, Shirong; Toyn, Jeremy H; Meredith, Jere E; Iben, Lawrence G; Burton, Catherine R; Albright, Charles F; Good, Andrew C; Tebben, Andrew J; Muckelbauer, Jodi K; Camac, Daniel M; Metzler, William; Cook, Lynda S; Padmanabha, Ramesh; Lentz, Kimberley A; Sofia, Michael J; Poss, Michael A; Macor, John E; Thompson, Lorin A
2012-11-08
This report describes the discovery and optimization of a BACE-1 inhibitor series containing an unusual acyl guanidine chemotype that was originally synthesized as part of a 6041-membered solid-phase library. The synthesis of multiple follow-up solid- and solution-phase libraries facilitated the optimization of the original micromolar hit into a single-digit nanomolar BACE-1 inhibitor in both radioligand binding and cell-based functional assay formats. The X-ray structure of representative inhibitors bound to BACE-1 revealed a number of key ligand:protein interactions, including a hydrogen bond between the side chain amide of flap residue Gln73 and the acyl guanidine carbonyl group, and a cation-π interaction between Arg235 and the isothiazole 4-methoxyphenyl substituent. Following subcutaneous administration in rats, an acyl guanidine inhibitor with single-digit nanomolar activity in cells afforded good plasma exposures and a dose-dependent reduction in plasma Aβ levels, but poor brain exposure was observed (likely due to Pgp-mediated efflux), and significant reductions in brain Aβ levels were not obtained.
Image segmentation using local shape and gray-level appearance models
NASA Astrophysics Data System (ADS)
Seghers, Dieter; Loeckx, Dirk; Maes, Frederik; Suetens, Paul
2006-03-01
A new generic model-based segmentation scheme is presented that can be trained from examples, akin to the Active Shape Model (ASM) approach, to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. In the ASM approach, the intensity and shape models are typically applied alternately during optimization: an optimal target location is first selected for each landmark separately, based on local gray-level appearance information only, and the shape model is subsequently fitted to these locations; the ASM may therefore be misled when landmark locations are selected wrongly. Instead, the proposed approach optimizes shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points, extracted from feature images, is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized non-iteratively using dynamic programming, which makes it possible to find the optimal landmark positions using combined shape and intensity information, without the need for initialization.
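The non-iterative dynamic-programming optimization over landmark candidates can be sketched as a Viterbi-style recursion. This is my own toy formulation for a chain of landmarks, not the paper's exact cost terms: `unary[i][k]` stands for the gray-level (appearance) cost of candidate k for landmark i, and `pair_cost` scores the shape plausibility of adjacent candidates.

```python
def dp_landmarks(unary, pair_cost):
    n = len(unary)
    best = [list(unary[0])]    # best[i][k]: cheapest total cost ending at (i, k)
    back = []                  # back-pointers for the traceback
    for i in range(1, n):
        row, ptr = [], []
        for b, ub in enumerate(unary[i]):
            costs = [best[-1][a] + pair_cost(i - 1, a, b)
                     for a in range(len(unary[i - 1]))]
            a = min(range(len(costs)), key=costs.__getitem__)
            row.append(costs[a] + ub)
            ptr.append(a)
        best.append(row)
        back.append(ptr)
    # Non-iterative global optimum: trace back from the cheapest endpoint.
    k = min(range(len(best[-1])), key=best[-1].__getitem__)
    path = [k]
    for ptr in reversed(back):
        k = ptr[k]
        path.append(k)
    return path[::-1]

# Hypothetical example: candidate positions per landmark, a shape term
# preferring a spacing of 5, and an intensity term that slightly favors
# the wrong candidate for the middle landmark.
pos = [[0, 5], [4, 10], [9, 20]]
unary = [[0.0, 0.0], [0.5, 0.0], [0.0, 0.0]]
path = dp_landmarks(unary, lambda i, a, b: abs(pos[i + 1][b] - pos[i][a] - 5))
```

Intensity alone would pick candidate 1 for the middle landmark, but the combined cost selects the shape-consistent configuration `[0, 0, 0]`, illustrating how joint optimization avoids being misled by a single wrong landmark detection.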
Trujillo, William A.; Sorenson, Wendy R.; La Luzerne, Paul; Austad, John W.; Sullivan, Darryl
2008-01-01
The presence of aristolochic acid in some dietary supplements is a concern to regulators and consumers. A method has been developed, by initially using a reference method as a guide, during single laboratory validation (SLV) for the determination of aristolochic acid I, also known as aristolochic acid A, in botanical species and dietary supplements at concentrations of approximately 2 to 32 μg/g. Higher levels were determined by dilution to fit the standard curve. Through the SLV, the method was optimized for quantification by liquid chromatography with ultraviolet detection (LC-UV) and LC/mass spectrometry (MS) confirmation. The test samples were extracted with organic solvent and water, then injected on a reverse phase LC column. Quantification was achieved with linear regression using a laboratory automation system. The SLV study included systematically optimizing the LC-UV method with regard to test sample size, fine grinding of solids, and solvent extraction efficiency. These parameters were varied in increments (and in separate optimization studies), in order to ensure that each parameter was individually studied; the test results include corresponding tables of parameter variations. In addition, the chromatographic conditions were optimized with respect to injection volume and detection wavelength. Precision studies produced overall relative standard deviation values from 2.44 up to 8.26% for aristolochic acid I. Mean recoveries were between 100 and 103% at the 2 μg/g level, between 102 and 103% at the 10 μg/g level, and 104% at the 30 μg/g level. PMID:16915829
Zhu, Mingyue; Zhang, Jing; Yi, Xingwen; Ying, Hao; Li, Xiang; Luo, Ming; Song, Yingxiong; Huang, Xiatao; Qiu, Kun
2018-03-19
We present the design and optimization of optical single-sideband (SSB) Nyquist four-level pulse amplitude modulation (PAM-4) transmission using dual-drive Mach-Zehnder modulator (DDMZM) modulation and direct detection (DD), aiming at cost-effective, high-speed and long-distance transmission in the C band. At the transmitter, the laser linewidth should be small to avoid phase-noise-to-amplitude-noise conversion and equalization-enhanced phase noise due to the large chromatic dispersion (CD). The optical SSB signal is generated after optimizing the optical modulation index (OMI), and hence the minimum phase condition required by the Kramers-Kronig (KK) receiver can also be satisfied. At the receiver, a simple AC-coupled photodiode (PD) is used and a virtual carrier is added for the KK operation to alleviate the signal-to-signal beating interference (SSBI). A Volterra filter (VF) is cascaded to mitigate the remaining nonlinearities. When the fiber nonlinearity becomes significant, we elect to use an optical band-pass filter with offset filtering. It can suppress the stimulated Brillouin scattering and the conjugated distortion by filtering out the imaging frequency components. With our design and optimization, we achieve single-channel, single-polarization 102.4-Gb/s Nyquist PAM-4 transmission over 800 km of standard single-mode fiber (SSMF).
A multi-product green supply chain under government supervision with price and demand uncertainty
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Zamani, Soma
2018-05-01
In this paper, a bi-level game-theoretic model is proposed to investigate the effects of governmental financial intervention on a green supply chain. The problem is formulated as a bi-level program for a green supply chain that produces various products with different environmental pollution levels. The model also accounts for uncertainties in market demand and in the sale prices of raw materials and products. The model is further transformed into a single-level nonlinear programming problem by replacing the lower-level optimization problem with its Karush-Kuhn-Tucker optimality conditions. A genetic algorithm is applied as the solution methodology for the nonlinear programming model. Finally, to investigate the validity of the proposed method, the computational results obtained through the genetic algorithm are compared with the global optimal solution attained by an enumerative method. Analytical results indicate that the proposed GA offers better solutions in large problems. We also conclude that financial intervention by the government, consisting of green taxation and subsidization, is an effective method to stabilize green supply chain members' performance.
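The KKT-based collapse from a bi-level to a single-level problem can be illustrated on a toy leader-follower game. This is not the paper's supply-chain model: here a government (leader) sets a hypothetical green tax t, a firm (follower) chooses output y to maximize (p - t)*y - 0.5*c*y**2 with y >= 0, and all numbers are invented. Because the follower's problem is concave, its KKT conditions give a closed-form best response, so the bi-level problem reduces to a one-dimensional search.

```python
p, c, k = 10.0, 1.0, 2.0   # price, firm cost curvature, damage per unit

def y_star(t):
    # Stationarity + complementarity of the follower's KKT conditions
    # yield the closed-form best response; substituting it collapses
    # the bi-level program into a single-level problem in t.
    return max((p - t) / c, 0.0)

def welfare(t):
    # Leader's objective: tax revenue minus pollution damage.
    return (t - k) * y_star(t)

ts = [i / 100 for i in range(0, 1001)]   # grid search over t in [0, 10]
t_best = max(ts, key=welfare)
```

With these numbers welfare(t) = (t - 2)(10 - t) on the active region, so the single-level search recovers the analytic optimum t = 6. The paper's lower level is not analytically solvable, which is why it appends the KKT conditions as constraints and applies a GA instead.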
Global dynamic optimization approach to predict activation in metabolic pathways.
de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R
2014-01-06
During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles for simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques on two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework yields more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions comparable to or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives.
This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and of the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
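A test function in the same spirit, closed-form, coupled, with every variable and the objective equal to unity at the optimum, can be sketched as follows. This is my own toy, not the paper's three-discipline construction; the coupling term standing in for interdisciplinary coupling is hypothetical.

```python
def objective(x):
    # Convex quadratic "system objective" with coupling between adjacent
    # variables; its minimum value is exactly 1.0, attained at x = (1, ..., 1),
    # so any optimizer can be checked against a known answer.
    n = len(x)
    coupling = sum((x[i] - x[(i + 1) % n]) ** 2 for i in range(n))
    return 1.0 + sum((xi - 1.0) ** 2 for xi in x) + 0.5 * coupling

# Verify the known optimum with plain gradient descent (forward differences).
def grad(x, h=1e-6):
    return [(objective(x[:i] + [x[i] + h] + x[i + 1:]) - objective(x)) / h
            for i in range(len(x))]

x = [0.0, 2.0, -1.0]
for _ in range(500):
    g = grad(x)
    x = [xi - 0.1 * gi for xi, gi in zip(x, g)]
```

Because the optimum is known by construction, a test problem like this exposes whether an MDO strategy converges to the right answer rather than merely converging.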
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. In simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
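The single-accuracy baseline mentioned above, the expected improvement criterion, has a well-known closed form for minimization given a Gaussian predictive mean mu and standard deviation sigma at a candidate point: EI = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu) / sigma. A minimal sketch of that baseline (not of the EQI/EQIE criteria themselves):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Classic EI acquisition for minimization: mu/sigma are the kriging
    predictive mean and standard deviation at a candidate input, and
    f_best is the best objective value observed so far."""
    if sigma <= 0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))          # normal cdf
    return (f_best - mu) * Phi + sigma * phi
```

EI trades off exploitation (low mu) against exploration (high sigma); the EQI/EQIE criteria of the article generalize this style of scoring to runs with tunable accuracy.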
Enhanced post wash retention of combed DNA molecules by varying multiple combing parameters.
Yadav, Hemendra; Sharma, Pulkit
2017-11-01
Recent advances in genomics have created a need for efficient techniques for deciphering information hidden in various genomes. Single-molecule analysis is one such technique for understanding molecular processes at the single-molecule level. Fiber-FISH performed with the help of DNA combing can help us understand genetic rearrangements and changes in the genome at the single-DNA-molecule level. Performing Fiber-FISH requires high post-wash retention of combed DNA molecules, as Fiber-FISH requires profuse washing. We optimized the combing process, involving the combing solution, the method of DNA mounting on glass slides and the coating of the glass slides, to enhance post-wash retention of DNA molecules. The average number of DNA molecules observed post-wash per field of view was maximal with our optimized combing solution. APTES-coated glass slides showed lower retention than the PEI surface, but fluorescence intensity was higher on the APTES-coated surface. The capillary method used to mount DNA on glass slides also showed lower retention, but straighter DNA molecules were observed compared with the force-flow method. Copyright © 2017 Elsevier Inc. All rights reserved.
Is patient size important in dose determination and optimization in cardiology?
NASA Astrophysics Data System (ADS)
Reay, J.; Chapple, C. L.; Kotre, C. J.
2003-12-01
Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure, and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by room and by individual clinician for the four most common examinations: left ventricular and/or coronary angiography, single-vessel stent insertion and single-vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization.
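The two computations involved, correlating dose with patient size, and normalizing survey doses to a reference patient size, can be sketched as below. The log-linear rescaling to a 75 kg reference patient is a commonly used approach and a hypothetical stand-in here; the paper does not specify that its "existing method" takes exactly this form.

```python
import math

def pearson_r(xs, ys):
    # Correlation between patient size and dose for one room/clinician.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def size_corrected(doses, weights, ref=75.0):
    # Fit ln(dose) = a + b * weight, then rescale every dose to the
    # reference weight, stripping the size effect out of survey data
    # before typical doses / reference levels are derived.
    n = len(doses)
    ln = [math.log(dose) for dose in doses]
    mw, ml = sum(weights) / n, sum(ln) / n
    b = (sum((w - mw) * (l - ml) for w, l in zip(weights, ln))
         / sum((w - mw) ** 2 for w in weights))
    return [dose * math.exp(b * (ref - w)) for dose, w in zip(doses, weights)]
```

If the size effect is weak, as the paper found for cardiology, the fitted slope b is near zero and the correction changes little; when it is strong, the corrected doses isolate the room- and operator-driven variation that reference levels are meant to capture.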
Liwo, Adam; Khalili, Mey; Czaplewski, Cezary; Kalinowski, Sebastian; Ołdziej, Stanisław; Wachucik, Katarzyna; Scheraga, Harold A.
2011-01-01
We report the modification and parameterization of the united-residue (UNRES) force field for energy-based protein-structure prediction and protein-folding simulations. We tested the approach on three training proteins separately: 1E0L (β), 1GAB (α), and 1E0G (α + β). Heretofore, the UNRES force field had been designed and parameterized to locate native-like structures of proteins as global minima of their effective potential-energy surfaces, which largely neglected the conformational entropy because decoys composed of only lowest-energy conformations were used to optimize the force field. Recently, we developed a mesoscopic dynamics procedure for UNRES and applied it with success to simulate protein-folding pathways. However, the force field turned out to be largely biased towards α-helical structures in canonical simulations because the conformational entropy had been neglected in the parameterization. We applied the hierarchical optimization method developed in our earlier work to optimize the force field, in which the conformational space of a training protein is divided into levels, each corresponding to a certain degree of native-likeness. The levels are ordered according to increasing native-likeness; level 0 corresponds to structures with no native-like elements, and the highest level corresponds to the fully native-like structures. The aim of optimization is to achieve the order of the free energies of the levels, decreasing as their native-likeness increases. The procedure is iterative, and decoys of the training protein(s) generated with the energy-function parameters of the preceding iteration are used to optimize the force field in the current iteration. We applied the multiplexing replica exchange molecular dynamics (MREMD) method, recently implemented in UNRES, to generate decoys; with this modification, conformational entropy is taken into account. 
Moreover, we optimized the free-energy gaps between levels at temperatures corresponding to a predominance of folded or unfolded structures, as well as to structures at the putative folding-transition temperature, changing the sign of the gaps at the transition temperature. This enabled us to obtain force fields characterized by a single peak in the heat capacity at the transition temperature. Furthermore, we introduced temperature dependence to the UNRES force field; this is consistent with the fact that it is a free-energy and not a potential-energy function. PMID:17201450
Ghafouri, H R; Mosharaf-Dehkordi, M; Afzalan, B
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than the model employing the classical one-level ICA. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Khogeer, Ahmed Sirag
2005-11-01
Petroleum refining is a capital-intensive business. With stringent environmental regulations on the processing industry, declining refining margins, political instability, and increased risk of war and terrorist attacks in which refineries and fuel transportation grids may be targeted, higher pressures are exerted on refiners to optimize performance and find the best combination of feeds and processes to produce salable products that meet stricter product specifications, while at the same time meeting refinery supply commitments and, of course, making a profit. This is done through multi-objective optimization. For corporate refining companies and at the national level, intra-refinery and inter-refinery optimization is the second step in optimizing the operation of the whole refining chain as a single system. Most refinery-wide optimization methods do not cover multiple objectives such as minimizing environmental impact, avoiding catastrophic failures, or enhancing product-specification upgrade effects. This work starts by carrying out refinery-wide, single-objective optimization, and then moves to multi-objective, single-refinery optimization. The last step is multi-objective, multi-refinery optimization, whose objectives are analysis of economic, environmental, product-specification, strategic, and catastrophic-failure effects. Simulation runs were carried out using both MATLAB and ASPEN PIMS, utilizing nonlinear techniques to solve the optimization problem. The results addressed the need to debottleneck some refineries or transportation media in order to meet the demand for essential products under partial or total failure scenarios. They also addressed how importing some high-spec products can help recover some of the losses and what is needed in order to accomplish this. In addition, the results showed nonlinear relations among local and global objectives for some refineries.
The results demonstrate that refineries can have a local multi-objective optimum that does not follow the same trends as either the global or the local single-objective optima. Catastrophic-failure effects on refinery operations and on local objectives are more significant than environmental-objective effects, and changes in capacity or in the local objectives follow a discrete behavioral pattern, in contrast to the environmental-objective cases, in which the effects are smoother. (Abstract shortened by UMI.)
ERIC Educational Resources Information Center
Frankenhuis, Willem E.; Panchanathan, Karthik; Belsky, Jay
2016-01-01
Children vary in the extent to which their development is shaped by particular experiences (e.g., maltreatment, social support). This variation raises a question: Why is there no single level of plasticity that maximizes biological fitness? One influential hypothesis states that when different levels of plasticity are optimal in different environmental…
Toward an optimal online checkpoint solution under a two-level HPC checkpoint model
Di, Sheng; Robert, Yves; Vivien, Frederic; ...
2016-03-29
The traditional single-level checkpointing method suffers from significant overhead on large-scale platforms. Hence, multilevel checkpointing protocols have been studied extensively in recent years. The multilevel checkpoint approach allows different levels of checkpoints to be set (each with different checkpoint overheads and recovery abilities), in order to further improve the fault tolerance performance of extreme-scale HPC applications. How to optimize the checkpoint intervals for each level, however, is an extremely difficult problem. In this paper, we construct an easy-to-use two-level checkpoint model. Checkpoint level 1 deals with errors with low checkpoint/recovery overheads, such as transient memory errors, while checkpoint level 2 deals with hardware crashes such as node failures. Compared with previous optimization work, our new optimal checkpoint solution offers two improvements: (1) it is an online solution without requiring knowledge of the job length in advance, and (2) it shows that periodic patterns are optimal and determines the best pattern. We evaluate the proposed solution and compare it with the most up-to-date related approaches on an extreme-scale simulation testbed constructed based on a real HPC application execution. Simulation results show that our proposed solution outperforms other optimized solutions and can improve the performance significantly in some cases. Specifically, with the new solution the wall-clock time can be reduced by up to 25.3% over that of other state-of-the-art approaches. Lastly, a brute-force comparison with all possible patterns shows that our solution is always within 1% of the best pattern in the experiments.
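For intuition, the classical single-level result that the two-level model generalizes is the Young/Daly period: checkpointing every sqrt(2·μ·C) seconds minimizes the first-order expected waste. A small sketch with hypothetical numbers (the paper's two-level pattern optimization is considerably more involved than this):

```python
import math

def expected_waste(period, checkpoint_cost, mtbf):
    """First-order waste fraction for single-level periodic checkpointing:
    time spent writing checkpoints plus expected re-execution after a failure."""
    return checkpoint_cost / period + period / (2.0 * mtbf)

def young_daly_period(checkpoint_cost, mtbf):
    # Classical first-order optimal period; the two-level optimum in the
    # paper generalizes this idea to two checkpoint costs and error rates.
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

C, mu = 60.0, 3600.0          # hypothetical: 60 s checkpoint cost, 1 h MTBF
T_opt = young_daly_period(C, mu)
print(T_opt, expected_waste(T_opt, C, mu))   # ~657 s period, ~18% waste
```

Perturbing the period in either direction raises the expected waste, which is the sense in which the periodic pattern is optimal at first order.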
Optimal Design of Gradient Materials and Bi-Level Optimization of Topology Using Targets (BOTT)
NASA Astrophysics Data System (ADS)
Garland, Anthony
The objective of this research is to understand the fundamental relationships necessary to develop a method to optimize both the topology and the internal gradient material distribution of a single object while meeting constraints and conflicting objectives. Functionally gradient material (FGM) objects possess continuously varying material properties throughout the object, and they allow an engineer to tailor individual regions of an object to have specific mechanical properties by locally modifying the internal material composition. A variety of techniques exists for topology optimization, and several methods exist for FGM optimization, but combining the two is difficult. Understanding the relationship between topology and material gradient optimization enables the selection of an appropriate model and the development of algorithms, which allow engineers to design high-performance parts that better meet design objectives than optimized homogeneous material objects. For this research effort, topology optimization means finding the optimal connected structure with an optimal shape. FGM optimization means finding the optimal macroscopic material properties within an object. Tailoring the material constitutive matrix as a function of position results in gradient properties. Once the target macroscopic properties are known, a mesostructure or a particular material nanostructure can be found which gives the target material properties at each macroscopic point. This research demonstrates that topology and gradient materials can both be optimized together for a single part. The algorithms use a discretized model of the domain and gradient-based optimization algorithms. In addition, when considering two conflicting objectives, the algorithms in this research generate clear 'features' within a single part.
This tailoring of material properties within different areas of a single part (automated design of 'features') using computational design tools is a novel benefit of gradient material designs. A macroscopic gradient can be achieved by varying the microstructure or the mesostructures of an object. The mesostructure interpretation allows more design freedom, since the mesostructures can be tuned to have non-isotropic material properties. A new algorithm called Bi-level Optimization of Topology using Targets (BOTT) seeks to find the best distribution of mesostructure designs throughout a single object in order to minimize an objective value. On the macro level, the BOTT algorithm optimizes the macro topology and the gradient material properties within the object. The BOTT algorithm optimizes the material gradient by finding the best constitutive matrix at each location within the object. To increase the likelihood that a mesostructure can be generated with the same equivalent constitutive matrix, the constitutive matrix is constrained to be orthotropic. The stiffness in the X and Y directions (of the base coordinate system) can change, in addition to rotating the orthotropic material to align with the loading in each region. Second, the BOTT algorithm designs mesostructures with macroscopic properties equal to the target properties found in step one, while at the same time seeking to minimize material usage in each mesostructure. The mesostructure algorithm maximizes the strain energy of the mesostructure's unit cell when a pseudo strain is applied to the cell. A set of experiments reveals the fundamental relationship between target cell density, the strain (or pseudo strain) applied to a unit cell, and the resulting effective properties of the mesostructure. At low density, only a few mesostructure unit cell designs are possible, while at higher density the mesostructure unit cell designs have many possibilities.
Therefore, at low densities the effective properties of the mesostructure are a step function of the applied pseudo strain. At high densities, the effective properties of the mesostructure are a continuous function of the applied pseudo strain. Finally, the macro and mesostructure designs are coordinated so that the macro and meso levels agree on the material properties at each macro region. In addition, the boundaries of adjacent mesostructure designs are coordinated so that the macro load path is transmitted from one mesostructure design to its neighbors. The BOTT algorithm has several advantages over existing algorithms in the literature. First, the BOTT algorithm significantly reduces the computational power required to run the algorithm. Second, the BOTT algorithm indirectly enforces a minimum mesostructure density constraint, which increases the manufacturability of the final design. Third, the BOTT algorithm seeks to transfer the load from one mesostructure to its neighbors by coordinating the boundaries of adjacent mesostructure designs. However, the BOTT algorithm can still be improved, since it may have difficulty converging due to the step-function nature of the mesostructure design problem at low density.
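The orthotropic-material constraint above means each macro region is described by a few independent stiffnesses plus a rotation angle. A sketch of the standard plane-stress transformation those design variables imply, using the classical laminate relations (the stiffness numbers are hypothetical, and this is not the BOTT code itself):

```python
import math

def rotate_orthotropic(Q11, Q22, Q12, Q66, theta):
    """Transformed reduced stiffnesses of a plane-stress orthotropic
    material rotated by angle theta (classical laminate relations)."""
    c, s = math.cos(theta), math.sin(theta)
    c2, s2 = c * c, s * s
    Qb11 = Q11 * c2 * c2 + 2 * (Q12 + 2 * Q66) * s2 * c2 + Q22 * s2 * s2
    Qb22 = Q11 * s2 * s2 + 2 * (Q12 + 2 * Q66) * s2 * c2 + Q22 * c2 * c2
    Qb12 = (Q11 + Q22 - 4 * Q66) * s2 * c2 + Q12 * (s2 * s2 + c2 * c2)
    Qb66 = (Q11 + Q22 - 2 * Q12 - 2 * Q66) * s2 * c2 + Q66 * (s2 * s2 + c2 * c2)
    return Qb11, Qb22, Qb12, Qb66

# rotating a hypothetical stiff-in-X material by 90 degrees swaps the axes
print(rotate_orthotropic(140.0, 10.0, 3.0, 5.0, math.pi / 2))
```

Rotating by 90 degrees exchanges the X and Y stiffnesses while leaving the coupling and shear terms unchanged, which is the sanity check one would apply before using such a transformation inside an optimizer.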
Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi
2011-12-01
Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density level, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks, with the aim of evaluating the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, air pollution was the most important factor, and, compared with a single objective factor, the weighted analysis of multiple objective factors provided an optimized spatial location selection for new urban green spaces. The combination of GIS technology with the LA model offers a new approach for the spatial optimization of urban green spaces.
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available, so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to override these, if desired.
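As an illustration of one of the one-dimensional search options named above, a generic Golden Section search can be sketched as follows. This is the textbook algorithm in Python, not ADS's FORTRAN implementation:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on [a, b] by Golden Section search.
    Each iteration shrinks the bracket by the inverse golden ratio and
    reuses one of the two interior function evaluations."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618
    x1 = b - inv_phi * (b - a)
    x2 = a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                 # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
        else:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2.0

print(golden_section_min(lambda x: (x - 2.0) ** 2, 0.0, 5.0))  # ~2.0
```

Within a program such as ADS, a search of this kind is applied along the direction produced by the optimizer level at each outer iteration.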
Anchang, Benedict; Davis, Kara L.; Fienberg, Harris G.; Bendall, Sean C.; Karacosta, Loukia G.; Tibshirani, Robert; Nolan, Garry P.; Plevritis, Sylvia K.
2018-01-01
An individual malignant tumor is composed of a heterogeneous collection of single cells with distinct molecular and phenotypic features, a phenomenon termed intratumoral heterogeneity. Intratumoral heterogeneity poses challenges for cancer treatment, motivating the need for combination therapies. Single-cell technologies are now available to guide effective drug combinations by accounting for intratumoral heterogeneity through the analysis of the signaling perturbations of an individual tumor sample screened by a drug panel. In particular, Mass Cytometry Time-of-Flight (CyTOF) is a high-throughput single-cell technology that enables the simultaneous measurements of multiple (>40) intracellular and surface markers at the level of single cells for hundreds of thousands of cells in a sample. We developed a computational framework, entitled Drug Nested Effects Models (DRUG-NEM), to analyze CyTOF single-drug perturbation data for the purpose of individualizing drug combinations. DRUG-NEM optimizes drug combinations by choosing the minimum number of drugs that produce the maximal desired intracellular effects based on nested effects modeling. We demonstrate the performance of DRUG-NEM using single-cell drug perturbation data from tumor cell lines and primary leukemia samples. PMID:29654148
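The minimal-combination objective can be illustrated with a greedy set-cover sketch: choose the fewest drugs whose combined intracellular effects cover the desired set. Note this only illustrates the idea; DRUG-NEM itself uses nested effects modeling, and the drug and marker names below are invented:

```python
def min_drugs_greedy(drug_effects, desired):
    """Greedy approximation to the fewest-drugs-covering-desired-effects
    objective. drug_effects maps drug name -> set of effects it produces."""
    remaining = set(desired)
    chosen = []
    while remaining:
        # pick the drug covering the most still-uncovered effects
        best = max(drug_effects, key=lambda d: len(drug_effects[d] & remaining))
        gained = drug_effects[best] & remaining
        if not gained:
            break                    # desired effects not reachable
        chosen.append(best)
        remaining -= gained
    return chosen

# hypothetical per-drug signaling effects measured by CyTOF
effects = {"drugA": {"pS6", "pERK"}, "drugB": {"pSTAT5"}, "drugC": {"pERK"}}
print(min_drugs_greedy(effects, {"pS6", "pERK", "pSTAT5"}))  # ['drugA', 'drugB']
```

The actual method additionally ranks combinations by a probabilistic score over nested effect hierarchies rather than treating effects as a flat set.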
Sousa, Sérgio Filipe; Fernandes, Pedro Alexandrino; Ramos, Maria João
2009-12-31
Gas-phase optimization of single biological molecules and of small active-site biological models has become a standard approach in first principles computational enzymology. The important role played by the surrounding environment (solvent, enzyme, both) is normally only accounted for through higher-level single point energy calculations performed using a polarizable continuum model (PCM) and an appropriate dielectric constant with the gas-phase-optimized geometries. In this study we analyze this widely used approximation, by comparing gas-phase-optimized geometries with geometries optimized with different PCM approaches (and considering different dielectric constants) for a representative data set of 20 very important biological molecules--the 20 natural amino acids. A total of 323 chemical bonds and 469 angles present in standard amino acid residues were evaluated. The results show that the use of gas-phase-optimized geometries can in fact be quite a reasonable alternative to the use of the more computationally intensive continuum optimizations, providing a good description of bond lengths and angles for typical biological molecules, even for charged amino acids, such as Asp, Glu, Lys, and Arg. This approximation is particularly successful if the protonation state of the biological molecule could be reasonably described in vacuum, a requirement that was already necessary in first principles computational enzymology.
Efficiency Improvements to the Displacement Based Multilevel Structural Optimization Algorithm
NASA Technical Reports Server (NTRS)
Plunkett, C. L.; Striz, A. G.; Sobieszczanski-Sobieski, J.
2001-01-01
Multilevel Structural Optimization (MSO) continues to be an area of research interest in engineering optimization. In the present project, the weight optimization of beams and trusses using Displacement based Multilevel Structural Optimization (DMSO), a member of the MSO set of methodologies, is investigated. In the DMSO approach, the optimization task is subdivided into a single system-level and multiple subsystem-level optimizations. The system-level optimization minimizes the load unbalance resulting from the use of displacement functions to approximate the structural displacements. The function coefficients are then the design variables. Alternatively, the system-level optimization can be solved using the displacements themselves as design variables, as was shown in previous research. Both approaches ensure that the calculated loads match the applied loads. In the subsystems level, the weight of the structure is minimized using the element dimensions as design variables. The approach is expected to be very efficient for large structures, since parallel computing can be utilized in the different levels of the problem. In this paper, the method is applied to a one-dimensional beam and a large three-dimensional truss. The beam was tested to study possible simplifications to the system-level optimization. In previous research, polynomials were used to approximate the global nodal displacements. The number of coefficients of the polynomials exactly matched the number of degrees of freedom of the problem. Here it was desired to see whether it is possible to match only a subset of the degrees of freedom in the system level. This would lead to a simplification of the system level, with a resulting increase in overall efficiency. However, the methods tested for this type of system-level simplification did not yield positive results. The large truss was utilized to test further improvements in the efficiency of DMSO.
In previous work, parallel processing was applied to the subsystems level, where the derivative verification feature of the optimizer NPSOL had been utilized in the optimizations. This resulted in large runtimes. In this paper, the optimizations were repeated without using the derivative verification, and the results are compared to those from the previous work. Also, the optimizations were run on both a network of SUN workstations using the MPICH implementation of the Message Passing Interface (MPI) and the faster Beowulf cluster at ICASE, NASA Langley Research Center, using the LAM implementation of MPI. The results on both systems were consistent and showed that it is not necessary to verify the derivatives and that omitting this step gives a large increase in the efficiency of the DMSO algorithm.
Ant groups optimally amplify the effect of transiently informed individuals
NASA Astrophysics Data System (ADS)
Gelblum, Aviram; Pinkoviezky, Itai; Fonio, Ehud; Ghosh, Abhijit; Gov, Nir; Feinerman, Ofer
2015-07-01
To cooperatively transport a large load, it is important that carriers conform in their efforts and align their forces. A downside of behavioural conformism is that it may decrease the group's responsiveness to external information. Combining experiment and theory, we show how ants optimize collective transport. On the single-ant scale, optimization stems from decision rules that balance individuality and compliance. Macroscopically, these rules poise the system at the transition between random walk and ballistic motion where the collective response to the steering of a single informed ant is maximized. We relate this peak in response to the divergence of susceptibility at a phase transition. Our theoretical models predict that the ant-load system can be transitioned through the critical point of this mesoscopic system by varying its size; we present experiments supporting these predictions. Our findings show that efficient group-level processes can arise from transient amplification of individual-based knowledge.
Preliminary Design Optimization For A Supersonic Turbine For Rocket Propulsion
NASA Technical Reports Server (NTRS)
Papila, Nilay; Shyy, Wei; Griffin, Lisa; Huber, Frank; Tran, Ken; McConnaughey, Helen (Technical Monitor)
2000-01-01
In this study, we present a method for optimizing, at the preliminary design level, a supersonic turbine for rocket propulsion system applications. Single-, two- and three-stage turbines are considered, with the number of design variables increasing from 6 to 11 to 15, in accordance with the number of stages. Due to its global nature and flexibility in handling different types of information, the response surface methodology (RSM) is applied in the present study. A major goal of the present optimization effort is to balance the desire to maximize aerodynamic performance against minimizing weight. To ascertain the required predictive capability of the RSM, a two-level domain refinement approach has been adopted. The accuracy of the predicted optimal design points based on this strategy is shown to be satisfactory. Our investigation indicates that the efficiency rises quickly from one stage to two stages but that the increase is much less pronounced with three stages. A one-stage turbine performs poorly under the engine balance boundary condition. A portion of the fluid kinetic energy is lost at the turbine discharge of the one-stage design due to the high stage pressure ratio and the high energy content, mostly hydrogen, of the working fluid. Regarding the optimization technique, issues related to the design of experiments (DOE) have also been investigated. It is demonstrated that the criteria for selecting the database exhibit a significant impact on the efficiency and effectiveness of the construction of the response surface.
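The core of response surface methodology is fitting a low-order polynomial to a handful of expensive analysis results and then optimizing the cheap surrogate. A minimal one-variable sketch (the study fits surfaces over 6 to 15 design variables; the data here are synthetic):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of a one-variable quadratic response surface
    y ~ b0 + b1*x + b2*x^2, via the normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]       # power sums S0..S4
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    r = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # solve the 3x3 system A b = r by Gauss-Jordan elimination
    for i in range(3):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        r[i] /= p
        for j in range(3):
            if j != i:
                m = A[j][i]
                A[j] = [vj - m * vi for vj, vi in zip(A[j], A[i])]
                r[j] -= m * r[i]
    return r  # [b0, b1, b2]

# exact data generated by y = 1 + 2x + 3x^2 is recovered
print(fit_quadratic([0, 1, 2, 3, 4], [1, 6, 17, 34, 57]))
```

In a real RSM study the surrogate's stationary point (or constrained optimum) is then located analytically or with a standard optimizer, and the surface is refined around it, which is the role of the two-level domain refinement mentioned above.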
Pointing and Jitter Control for the USNA Multi-Beam Combining System
2013-05-10
previous work, an adaptive H-infinity optimal controller has been developed to control a single beam using a beam position detector for feedback... turbulence and airborne particles, platform jitter, lack of feedback from the target, and current laser technology represent just a few of these... lasers. Solid state lasers, however, cannot currently provide high enough power levels to destroy a target using a single beam. On solid-state
Marin, Daniele; Ramirez-Giraldo, Juan Carlos; Gupta, Sonia; Fu, Wanyi; Stinnett, Sandra S; Mileto, Achille; Bellini, Davide; Patel, Bhavik; Samei, Ehsan; Nelson, Rendon C
2016-06-01
The purpose of this study is to investigate whether the reduction in noise using a second-generation monoenergetic algorithm can improve the conspicuity of hypervascular liver tumors on dual-energy CT (DECT) images of the liver. An anthropomorphic liver phantom in three body sizes and iodine-containing inserts simulating hypervascular lesions was imaged with DECT and single-energy CT at various energy levels (80-140 kV). In addition, a retrospective clinical study was performed in 31 patients with 66 hypervascular liver tumors who underwent DECT during the late hepatic arterial phase. Datasets at energy levels ranging from 40 to 80 keV were reconstructed using first- and second-generation monoenergetic algorithms. Noise, tumor-to-liver contrast-to-noise ratio (CNR), and CNR with a noise constraint (CNRNC) set with a maximum noise increase of 50% were calculated and compared among the different reconstructed datasets. The maximum CNR for the second-generation monoenergetic algorithm, which was attained at 40 keV in both phantom and clinical datasets, was statistically significantly higher than the maximum CNR for the first-generation monoenergetic algorithm (p < 0.001) or single-energy CT acquisitions across a wide range of kilovoltage values. With the second-generation monoenergetic algorithm, the optimal CNRNC occurred at 55 keV, corresponding to lower energy levels compared with first-generation algorithm (predominantly at 70 keV). Patient body size did not substantially affect the selection of the optimal energy level to attain maximal CNR and CNRNC using the second-generation monoenergetic algorithm. A noise-optimized second-generation monoenergetic algorithm significantly improves the conspicuity of hypervascular liver tumors.
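The two figures of merit in the study reduce to simple formulas: CNR is the tumor-to-liver signal difference divided by the noise, and CNRNC restricts the search to energy levels whose noise stays within 50% of a reference. A sketch with invented HU and noise values (not the study's measurements):

```python
def cnr(tumor_hu, liver_hu, noise_hu):
    """Tumor-to-liver contrast-to-noise ratio."""
    return abs(tumor_hu - liver_hu) / noise_hu

def best_energy_with_noise_constraint(levels, reference_noise, max_increase=0.5):
    """Pick the energy level (keV) maximizing CNR subject to the noise
    constraint (noise at most 50% above a reference), mirroring CNRNC.
    `levels` maps keV -> (tumor HU, liver HU, noise HU)."""
    feasible = {kev: v for kev, v in levels.items()
                if v[2] <= (1.0 + max_increase) * reference_noise}
    return max(feasible, key=lambda kev: cnr(*feasible[kev]))

# hypothetical measurements: low keV gives more contrast but more noise
levels = {40: (260.0, 120.0, 38.0), 55: (190.0, 110.0, 20.0),
          70: (140.0, 100.0, 14.0)}
print(best_energy_with_noise_constraint(levels, reference_noise=15.0))  # 55
```

With these made-up numbers the unconstrained maximum CNR sits at 40 keV, but the noise cap excludes it and the constrained optimum lands at 55 keV, qualitatively matching the pattern the study reports for the second-generation algorithm.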
NASA Astrophysics Data System (ADS)
Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.
2017-07-01
A model is proposed to identify characteristics of immiscible NAPL contaminant sources. The contaminant is immiscible in water and multi-phase flow is simulated. The model is a multi-level saturation-based optimization algorithm based on ICA. Each answer string in the second level is divided into a set of provinces. Each ICA is modified by incorporating a new knock-the-base model.
NASA Technical Reports Server (NTRS)
Stanley, Douglas O.; Unal, Resit; Joyner, C. R.
1992-01-01
The application of advanced technologies to future launch vehicle designs would allow the introduction of a rocket-powered, single-stage-to-orbit (SSTO) launch system early in the next century. For a selected SSTO concept, a dual mixture ratio, staged combustion cycle engine that employs a number of innovative technologies was selected as the baseline propulsion system. A series of parametric trade studies are presented to optimize both a dual mixture ratio engine and a single mixture ratio engine of similar design and technology level. The effect of varying lift-off thrust-to-weight ratio, engine mode transition Mach number, mixture ratios, area ratios, and chamber pressure values on overall vehicle weight is examined. The sensitivity of the advanced SSTO vehicle to variations in each of these parameters is presented, taking into account the interaction of each of the parameters with the others. This parametric optimization and sensitivity study employs a Taguchi design method. The Taguchi method is an efficient approach for determining near-optimum design parameters using orthogonal matrices from design of experiments (DOE) theory. Using orthogonal matrices significantly reduces the number of experimental configurations to be studied. The effectiveness and limitations of the Taguchi method for propulsion/vehicle optimization studies, as compared to traditional single-variable parametric trade studies, are also discussed.
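The orthogonal-matrix idea can be illustrated with the smallest case: an L4 array studies three two-level factors in four runs instead of the full 2^3 = 8, and main effects are estimated by averaging the response at each factor level. The response numbers below are invented, not from the study:

```python
# L4 orthogonal array: each pair of columns contains every level
# combination equally often, so main effects are estimated without bias
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def main_effects(responses):
    """Mean response at level 0 and level 1 for each of the 3 factors;
    the level with the better mean is chosen per factor."""
    effects = []
    for factor in range(3):
        lo = [y for run, y in zip(L4, responses) if run[factor] == 0]
        hi = [y for run, y in zip(L4, responses) if run[factor] == 1]
        effects.append((sum(lo) / len(lo), sum(hi) / len(hi)))
    return effects

# made-up vehicle dry weights (klb) for the four runs
print(main_effects([206.0, 202.0, 204.0, 200.0]))
```

Larger arrays (L8, L9, L16, ...) extend the same balancing property to more factors and levels, which is what makes a 15-variable engine/vehicle study tractable in relatively few configurations.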
Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming
Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy
2013-01-01
Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and poses challenges for understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits a connection between the USIV optimization problem and the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous technique, with a significant gain in the computation time of the algorithm. PMID:22291148
NASA Astrophysics Data System (ADS)
Frank, Milan; Jelínek, Michal; Kubeček, Václav
2015-01-01
In this paper, the operation of a pulsed diode-pumped Nd:GdVO4 laser oscillator in bounce geometry, passively mode-locked using a semiconductor saturable absorber mirror (SAM) and generating microjoule-level picosecond pulses at a wavelength of 1063 nm, is reported. Optimization of the output coupling for generation of either Q-switched mode-locked pulse trains or cavity-dumped single pulses with maximum energy was performed, which resulted in extraction of single pulses as short as 10 ps with an energy of 20 μJ. In comparison with previous results obtained with this Nd:GdVO4 oscillator and a saturable absorber in transmission mode, the achieved pulse duration is five times shorter. Using different absorbers and different parameters of single-pulse extraction enables generation of pulses with durations up to 100 ps and energies in the range from 10 to 20 μJ.
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design used a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, considering the single best model, variances that stem from uncertainty in the model structure will be ignored.
Second, when the best model does not have a dominant model weight, prediction variances may be under- or overestimated because other plausible propositions are ignored. Chance constraints allow a remediation design to be developed with a desired reliability; however, under the single best model, the achieved reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of the HBMA hierarchy. The results showed that, moving toward the top layers of HBMA, the achieved reliability converges to the chosen reliability. We then employed chance-constrained optimization with the HBMA framework to find the optimal location and pumpage for the scavenger well. Using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate changed. We therefore concluded that the optimal pumping rate is sensitive to the prediction variance, and that the prediction variance in turn changes with the extraction rate: at very high extraction rates, the prediction variances of chloride concentration at the production wells approach zero regardless of which HBMA models are used.
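The core of chance-constrained design can be illustrated with a minimal sketch. Assuming a Gaussian prediction of chloride concentration (an illustrative assumption; the paper's HBMA variances come from the model hierarchy, and the numbers below are made up), a reliability constraint P(c ≤ c_max) ≥ β reduces to the deterministic check mean + z_β·std ≤ c_max:

```python
from statistics import NormalDist

def deterministic_bound(mean, std, reliability):
    """Reduce P(concentration <= c_max) >= reliability to the
    deterministic check  mean + z*std <= c_max,  assuming a
    Gaussian prediction of chloride concentration."""
    z = NormalDist().inv_cdf(reliability)
    return mean + z * std

# A design meets a 95% reliability target only if this upper bound
# stays below the regulatory limit c_max (values are made up).
bound = deterministic_bound(mean=180.0, std=20.0, reliability=0.95)
print(round(bound, 1))  # 212.9 (= 180 + 1.645 * 20)
```

Larger prediction variances inflate the bound, which is why the optimal pumping rate in the paper shifts with the variance even when the well location does not.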
EmptyHeaded: A Relational Engine for Graph Processing
Aberger, Christopher R.; Tu, Susan; Olukotun, Kunle; Ré, Christopher
2016-01-01
There are two types of high-performance graph processing engines: low- and high-level engines. Low-level engines (Galois, PowerGraph, Snap) provide optimized data structures and computation models but require users to write low-level imperative code, placing the burden of efficiency on the user. In high-level engines, users write in query languages like Datalog (SociaLite) or SQL (Grail). High-level engines are easier to use but are orders of magnitude slower than low-level graph engines. We present EmptyHeaded, a high-level engine that supports a rich Datalog-like query language and achieves performance comparable to that of low-level engines. At the core of EmptyHeaded's design is a new class of join algorithms that satisfy strong theoretical guarantees but had thus far not achieved performance comparable to that of specialized graph processing engines. To achieve high performance, EmptyHeaded introduces a new join engine architecture, including a novel query optimizer and data layouts that leverage single-instruction multiple-data (SIMD) parallelism. With this architecture, EmptyHeaded outperforms high-level approaches by up to three orders of magnitude on graph pattern queries, PageRank, and Single-Source Shortest Paths (SSSP), and is an order of magnitude faster than many low-level baselines. We validate that EmptyHeaded competes with the best-of-breed low-level engine (Galois), achieving comparable performance on PageRank and at most 3× worse performance on SSSP. PMID:28077912
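The flavor of the join algorithms at EmptyHeaded's core can be sketched with a triangle query, which worst-case optimal engines evaluate one attribute at a time via set intersections. This plain-Python sketch ignores EmptyHeaded's SIMD-friendly layouts and query optimizer entirely:

```python
def count_triangles(edges):
    """Count triangles in an undirected graph by binding one query
    attribute at a time and intersecting adjacency sets -- the access
    pattern behind worst-case optimal joins (illustrative only;
    EmptyHeaded uses SIMD-optimized set layouts, not Python sets)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for a in adj:                          # bind attribute a
        for b in adj[a]:                   # bind b among a's neighbors
            if b > a:
                # bind c in the intersection of a's and b's neighbors;
                # c > b avoids counting each triangle six times
                count += sum(1 for c in adj[a] & adj[b] if c > b)
    return count

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(count_triangles(edges))  # 1
```

The intersection-driven loop never materializes the full pairwise join, which is where the strong theoretical guarantees of this algorithm class come from.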
The use of an integrated variable fuzzy sets in water resources management
NASA Astrophysics Data System (ADS)
Qiu, Qingtai; Liu, Jia; Li, Chuanzhe; Yu, Xinzhe; Wang, Yang
2018-06-01
Based on an evaluation of the current state of water resources and the development of water conservancy projects and the social economy, optimal allocation of regional water resources is an increasingly pressing need in water resources management, and it is also the most effective way to promote a harmonious relationship between humans and water. Traditional evaluations are limited in that they rely on a single index model for the optimal allocation of regional water resources. In this paper, building on the theories of variable fuzzy sets (VFS) and system dynamics (SD), an integrated variable fuzzy sets model (IVFS) is proposed to address dynamically complex problems in regional water resources management. The model is applied to evaluate the level of optimal allocation of regional water resources in Zoucheng, China. Results show that the levels of the water resources allocation schemes range from 2.5 to 3.5, generally trending toward the lower level. The model supports optimal regional management of water resources and markedly improves the reliability of water resources management assessment through the eigenvector of level H.
Workflow Optimization for Tuning Prostheses with High Input Channel
2017-10-01
Merrill, Daniel (Principal Investigator)
Award W81XWH-16-1-0767. …of Specific Aim 1 by driving a commercially available two-DoF wrist and single-DoF hand. The high-level control system will provide analog signals…
Development and application of optimum sensitivity analysis of structures
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.; Hallauer, W. L., Jr.
1984-01-01
The research focused on developing an algorithm that applies optimum sensitivity analysis to multilevel optimization. The research efforts were devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single-level solutions was completed and tested.
Optimization and evaluation of single-cell whole-genome multiple displacement amplification.
Spits, C; Le Caignec, C; De Rycke, M; Van Haute, L; Van Steirteghem, A; Liebaers, I; Sermon, K
2006-05-01
The scarcity of genomic DNA can be a limiting factor in some fields of genetic research. One of the methods developed to overcome this difficulty is whole genome amplification (WGA). Recently, multiple displacement amplification (MDA) has proved very efficient in the WGA of small DNA samples and pools of cells, the reaction being catalyzed by the phi29 or the Bst DNA polymerases. The aim of the present study was to develop a reliable, efficient, and fast protocol for MDA at the single-cell level. We first compared the efficiency of the phi29 and Bst polymerases on DNA samples and single cells. The phi29 polymerase accurately generated, in a short time and from a single cell, sufficient DNA for a large set of tests, whereas the Bst enzyme showed low efficiency and a high error rate. A single-cell protocol was optimized using the phi29 polymerase and evaluated on 60 single cells; the obtained DNA was assessed by 22 locus-specific PCRs. This new protocol can be useful for many applications involving minute quantities of starting material, such as forensic DNA analysis, prenatal and preimplantation genetic diagnosis, or cancer research. (c) 2006 Wiley-Liss, Inc.
Hyun, Seung Won; Wong, Weng Kee
2016-01-01
We construct an optimal design to simultaneously estimate three common interesting features in a dose-finding trial with possibly different emphasis on each feature. These features are (1) the shape of the dose-response curve, (2) the median effective dose and (3) the minimum effective dose level. A main difficulty of this task is that an optimal design for a single objective may not perform well for other objectives. There are optimal designs for dual objectives in the literature but we were unable to find optimal designs for 3 or more objectives to date with a concrete application. A reason for this is that the approach for finding a dual-objective optimal design does not work well for a 3 or more multiple-objective design problem. We propose a method for finding multiple-objective optimal designs that estimate the three features with user-specified higher efficiencies for the more important objectives. We use the flexible 4-parameter logistic model to illustrate the methodology but our approach is applicable to find multiple-objective optimal designs for other types of objectives and models. We also investigate robustness properties of multiple-objective optimal designs to mis-specification in the nominal parameter values and to a variation in the optimality criterion. We also provide computer code for generating tailor made multiple-objective optimal designs. PMID:26565557
Huang, Dao-sheng; Shi, Wei; Han, Lei; Sun, Ke; Chen, Guang-bo; Wu Jian-xiong; Xu, Gui-hong; Bi, Yu-an; Wang, Zhen-zhong; Xiao, Wei
2015-06-01
To optimize the belt-drying process conditions for Gardeniae Fructus extract from Reduning injection by Box-Behnken design-response surface methodology, a three-factor, three-level Box-Behnken experimental design was employed on the basis of single-factor experiments. With drying temperature, drying time, and feeding speed as independent variables and the content of geniposide as the dependent variable, the experimental data were fitted to a second-order polynomial equation, establishing the mathematical relationship between the content of geniposide and the respective variables. With the experimental data analyzed by Design-Expert 8.0.6, the optimal drying parameters were as follows: drying temperature 98.5 degrees C, drying time 89 min, and feeding speed 99.8 r x min(-1). Three verification experiments were conducted under these conditions, and the measured average content of geniposide was 564.108 mg x g(-1), close to the model prediction of 563.307 mg x g(-1). According to the verification tests, the Gardeniae Fructus belt-drying process is stable and feasible. Thus, single-factor experiments combined with response surface methodology (RSM) can be used to optimize the drying technology of the Gardeniae Fructus extract from Reduning injection.
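The response-surface step can be sketched in a few lines: fit a second-order polynomial to response data and solve for its stationary point. The data below are synthetic, constructed to peak at the paper's reported 98.5 degrees C, not the actual measurements:

```python
import numpy as np

# Illustrative response-surface sketch for one factor (drying
# temperature). Yields are synthetic, built to peak at 98.5 C.
temps = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
yield_ = 560.0 - 0.5 * (temps - 98.5) ** 2

# Design matrix for the second-order model y = b0 + b1*T + b2*T^2
X = np.column_stack([np.ones_like(temps), temps, temps ** 2])
b0, b1, b2 = np.linalg.lstsq(X, yield_, rcond=None)[0]

# Stationary point of the fitted quadratic: dy/dT = 0 at -b1/(2*b2)
t_opt = -b1 / (2 * b2)
print(round(t_opt, 1))  # 98.5
```

The paper's full model does the same with three factors, so the stationary point is found by solving a small linear system instead of a one-variable derivative.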
Bates, Timothy C.
2015-01-01
Optimism and pessimism are associated with important outcomes including health and depression. Yet it is unclear whether these apparent polar opposites form a single dimension or reflect two distinct systems. The extent to which personality accounts for differences in optimism/pessimism is also controversial. Here, we addressed these questions in a genetically informative sample of 852 pairs of twins. Distinct genetic influences on optimism and pessimism were found. Significant family-level environment effects also emerged, accounting for much of the negative relationship between optimism and pessimism, as well as a link to neuroticism. A general positive genetic factor exerted significant influence across both personality and life-orientation traits. Both optimism bias and pessimism also showed genetic variance distinct from all effects of personality, and from each other. PMID:26561494
NASA Astrophysics Data System (ADS)
Mejid Elsiti, Nagwa; Noordin, M. Y.; Idris, Ani; Saed Majeed, Faraj
2017-10-01
This paper presents an optimization of process parameters of the micro-electrical discharge machining (micro-EDM) process with (γ-Fe2O3) nano-powder-mixed dielectric, using the multi-response Grey Relational Analysis (GRA) optimization method instead of single-response optimization. The parameters were optimized based on a 2-level factorial design combined with Grey Relational Analysis. The machining parameters chosen for experimentation were peak current, gap voltage, and pulse-on time. The performance characteristics chosen for this study are material removal rate (MRR), tool wear rate (TWR), taper, and overcut. Experiments were conducted using electrolytic copper as the tool and CoCrMo as the workpiece. Experimental results show that machining performance was improved through this approach.
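The grey relational grade that collapses several responses into a single rank per experiment can be sketched as follows. The data and the distinguishing coefficient ζ = 0.5 are illustrative, not the paper's measurements:

```python
import numpy as np

def grey_relational_grade(data, larger_better, zeta=0.5):
    """Grey Relational Analysis sketch: normalize each response,
    compute grey relational coefficients against the ideal sequence,
    and average them into one grade per experiment."""
    data = np.asarray(data, dtype=float)
    norm = np.empty_like(data)
    for j in range(data.shape[1]):
        col = data[:, j]
        if larger_better[j]:   # e.g. MRR: larger is better
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:                  # e.g. TWR, taper, overcut: smaller is better
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm                       # deviation from the ideal (=1)
    coef = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return coef.mean(axis=1)

# rows: experiments; columns: MRR (maximize), TWR (minimize) -- made up
grades = grey_relational_grade([[10, 2], [14, 3], [12, 1]],
                               larger_better=[True, False])
print(int(grades.argmax()))  # 2: the third setting balances both responses best
```

The experiment with the highest grade is taken as the best compromise across all responses, which is what makes GRA a multi-response rather than single-response method.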
Ma, Changxi; Hao, Wei; Pan, Fuquan; Xiang, Wang
2018-01-01
Route optimization is one of the basic steps in ensuring the safety of hazardous materials transportation, and an optimization scheme may pose a safety risk if road screening is not completed before the distribution route is optimized. For the road screening problem in hazardous materials transportation, a screening algorithm is built based on a genetic algorithm and a Levenberg-Marquardt neural network (GA-LM-NN), analyzing 15 attributes of each road network section. A multi-objective robust optimization model with adjustable robustness is then constructed for the hazardous materials transportation problem of a single distribution center, to minimize transportation risk and time. A multi-objective genetic algorithm is designed to solve the problem according to the characteristics of the model. The algorithm uses an improved strategy for the selection operation, applies partial matching crossover and single ortho swap methods for the crossover and mutation operations, and employs an exclusive method to construct Pareto optimal solutions. Studies show that the sets of roads suitable for hazardous materials transportation can be found quickly through the proposed GA-LM-NN screening algorithm, and Pareto-optimal distribution routes with different levels of robustness can be found rapidly through the proposed multi-objective robust optimization model and algorithm.
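The Pareto set that such a multi-objective genetic algorithm maintains can be illustrated with a plain dominance filter over (risk, time) pairs, both minimized. This is a generic sketch; the paper's "exclusive method" for constructing Pareto solutions is not detailed here:

```python
def pareto_front(solutions):
    """Return the non-dominated solutions among (risk, time) tuples,
    both objectives minimized. A solution is dropped if some other
    solution is at least as good in every objective."""
    front = []
    for s in solutions:
        dominated = any(
            all(o <= v for o, v in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front

# candidate routes as (transportation risk, travel time) -- made up
routes = [(5, 9), (3, 12), (4, 10), (6, 8), (5, 8)]
print(pareto_front(routes))  # [(3, 12), (4, 10), (5, 8)]
```

Each point on the front trades risk against time; the "adjustable robustness" in the paper selects among such fronts rather than among single routes.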
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Donghai; Deng, Yongkai; Chu, Saisai
2016-07-11
Single-nanoparticle two-photon microscopy shows great application potential in super-resolution cell imaging. Here, we report in situ adaptive optimization of single-nanoparticle two-photon luminescence signals by phase and polarization modulations of broadband laser pulses. For polarization-independent quantum dots, phase-only optimization was carried out to compensate the phase dispersion at the focus of the objective. Enhancement of the two-photon excitation fluorescence intensity under dispersion-compensated femtosecond pulses was achieved. For a polarization-dependent single gold nanorod, in situ polarization optimization resulted in further enhancement of two-photon photoluminescence intensity beyond phase-only optimization. The application of in situ adaptive control of femtosecond pulses provides a way for object-oriented optimization of single-nanoparticle two-photon microscopy for its future applications.
Optimization of investment portfolio weight of stocks affected by market index
NASA Astrophysics Data System (ADS)
Azizah, E.; Rusyaman, E.; Supian, S.
2017-01-01
Stock price assessment, selection of an optimal combination, and measurement of the risk of a portfolio investment are important issues for investors. In this paper, the single index model is used for stock price assessment, and an optimization model is developed using the Lagrange multiplier technique to determine the proportion of assets to be invested. The level of risk is estimated using variance. These models are used to analyse stock price data for Lippo Bank and Bumi Putera.
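For the special case of a minimum-variance portfolio, the Lagrange multiplier technique yields a closed form: minimizing w'Σw subject to the weights summing to one gives w = Σ⁻¹1 / (1'Σ⁻¹1). The covariance matrix below is made up, not the Lippo Bank / Bumi Putera data:

```python
import numpy as np

def min_variance_weights(cov):
    """Closed-form minimum-variance weights from the Lagrangian of
    min w'Σw subject to sum(w) = 1:  w = Σ^{-1}1 / (1'Σ^{-1}1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # Σ^{-1} 1 without explicit inverse
    return w / w.sum()

# illustrative 2-asset covariance matrix (variances 0.04 and 0.09)
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
w = min_variance_weights(cov)
print(np.round(w, 3))  # the lower-variance asset receives the larger weight
```

Adding a target-return constraint introduces a second multiplier but the system stays linear, which is what makes the Lagrange approach attractive for portfolio weight optimization.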
Tailoring the Psychotherapy to the Borderline Patient
HORWITZ, LEONARD; GABBARD, GLEN O.; ALLEN, JON G.; COLSON, DONALD B.; FRIESWYK, SIEBOLT; NEWSOM, GAVIN E.; COYNE, LOLAFAYE
1996-01-01
Views still differ as to the optimal psychodynamic treatment of borderline patients. Recommendations range from psychoanalysis and exploratory psychotherapy to an explicitly supportive treatment aimed at strengthening adaptive defenses. The authors contend that no single approach is appropriate for all patients in this wide-ranging diagnostic category, which spans a continuum from close-to-neurotic to close-to-psychotic levels of functioning. Careful differentiations based on developmental considerations, ego structures, and relationship patterns provide the basis for the optimal treatment approach. PMID:22700301
Access to specialist care: Optimizing the geographic configuration of trauma systems
Jansen, Jan O.; Morrison, Jonathan J.; Wang, Handing; He, Shan; Lawrenson, Robin; Hutchison, James D.; Campbell, Marion K.
2015-01-01
BACKGROUND The optimal geographic configuration of health care systems is key to maximizing accessibility while promoting the efficient use of resources. This article reports the use of a novel approach to inform the optimal configuration of a national trauma system. METHODS This is a prospective cohort study of all trauma patients, 15 years and older, attended to by the Scottish Ambulance Service, between July 1, 2013, and June 30, 2014. Patients underwent notional triage to one of three levels of care (major trauma center [MTC], trauma unit, or local emergency hospital). We used geographic information systems software to calculate access times, by road and air, from all incident locations to all candidate hospitals. We then modeled the performance of all mathematically possible network configurations and used multiobjective optimization to determine geospatially optimized configurations. RESULTS A total of 80,391 casualties were included. A network with only high- or moderate-volume MTCs (admitting at least 650 or 400 severely injured patients per year, respectively) would be optimally configured with a single MTC. A network accepting lower-volume MTCs (at least 240 severely injured patients per year) would be optimally configured with two MTCs. Both configurations would necessitate an increase in the number of helicopter retrievals. CONCLUSION This study has shown that a novel combination of notional triage, network analysis, and mathematical optimization can be used to inform the planning of a national clinical network. Scotland’s trauma system could be optimized with one or two MTCs. LEVEL OF EVIDENCE Care management study, level IV. PMID:26335775
Computational Approaches to Simulation and Optimization of Global Aircraft Trajectories
NASA Technical Reports Server (NTRS)
Ng, Hok Kwan; Sridhar, Banavar
2016-01-01
This study examines three possible approaches to improving the speed of generating wind-optimal routes for air traffic at the national or global level: (a) using the resources of a supercomputer, (b) running the computations on multiple commercially available computers, and (c) implementing the same algorithms in NASA's Future ATM Concepts Evaluation Tool (FACET); each is compared to a standard implementation run on a single CPU. Wind-optimal aircraft trajectories are computed using global air traffic schedules. The run time and wait time on the supercomputer for trajectory optimization, using from 80 to 10,240 CPUs, are compared with the total computational time for running the same computation on a single desktop computer and on multiple commercially available computers, to assess the potential enhancement from parallel processing on computer clusters. The study also re-implements the trajectory optimization algorithm to further reduce computational time through algorithm modifications, and integrates it with FACET so that existing FACET applications can calculate time-optimal routes between worldwide airport pairs in a wind field. The implementations of the trajectory optimization algorithms use the MATLAB, Python, and Java programming languages. The performance evaluations compare computational efficiency and consider the potential applications of the optimized trajectories. The paper shows that, in the absence of special privileges on a supercomputer, a cluster of commercially available computers provides a feasible approach for national and global air traffic system studies.
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
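The pixel-level idea can be sketched with the simplest convex objective of this kind: a least-squares pull toward both inputs, whose minimizer is a closed-form weighted average. This illustrates the formulation style only, not the paper's actual constrained objective:

```python
import numpy as np

def fuse(M, L, lam=0.25):
    """Minimize the convex objective J(Y) = ||Y - M||^2 + lam*||Y - L||^2
    per pixel, where M is the high-resolution monochrome channel and L
    the upsampled color image's luminance. Setting dJ/dY = 0 gives the
    closed-form weighted average below (illustrative sketch)."""
    return (M + lam * L) / (1.0 + lam)

# tiny made-up 2x2 luminance patches in [0, 1]
M = np.array([[0.8, 0.6], [0.4, 0.2]])
L = np.array([[0.4, 0.6], [0.8, 1.0]])
print(fuse(M, L))  # each fused pixel lies between M and L, closer to M
```

Because the objective decouples pixel by pixel, every pixel can be solved independently, which is exactly what makes such formulations embarrassingly parallelizable.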
Optimization of enzyme complexes for efficient hydrolysis of corn stover to produce glucose.
Yu, Xiaoxiao; Liu, Yan; Meng, Jiatong; Cheng, Qiyue; Zhang, Zaixiao; Cui, Yuxiao; Liu, Jiajing; Teng, Lirong; Lu, Jiahui; Meng, Qingfan; Ren, Xiaodong
2015-05-01
Hydrolysis of cellulose to glucose is the critical step in converting lignocellulose into industrial chemicals. To improve the conversion rate of corn stover cellulose to glucose, cocktails of cellulase with other auxiliary enzymes and chemicals were studied in this work. Single-factor tests and response surface methodology (RSM) were applied to optimize the enzyme mixture, targeting maximum glucose release from corn stover. In the single-factor tests, the glucan-to-glucose conversion rate reached higher levels when the cellulase was supplemented separately with 1.7 μl Tween-80/g cellulose, 300 μg β-glucosidase/g cellulose, 400 μg pectinase/g cellulose, and 0.75 mg/ml sodium thiosulphate. To further improve the glucan conversion, β-glucosidase, pectinase, and sodium thiosulphate were selected for subsequent optimization with RSM. The results showed that the maximum increase in yield was 45.8%, at 377 μg/g cellulose Novozyme 188, 171 μg/g cellulose pectinase, and 1 mg/ml sodium thiosulphate.
Lee, Ji-Eun; Han, Ye Ri; Ham, Sujin; Jun, Chul-Ho; Kim, Dongho
2017-11-08
We have investigated the fundamental photophysical properties of surface-bound perylene bisimide (PBI) molecules in the solution phase at the single-molecule level. By efficiently immobilizing single PBIs on glass, we were able to simultaneously monitor fluorescence intensity trajectories, fluorescence lifetimes, and emission spectra of individual PBIs in organic and aqueous media using confocal microscopy. We showed that the fluorescence dynamics of single PBIs in the solution phase is highly dependent on their local and chemical environments. Furthermore, we visualized different spatial fluctuations of surface-bound PBIs using defocused wide-field imaging. While PBIs show more steric flexibility in organic media, the flexible motion of PBI molecules in aqueous solution is relatively prohibited due to a cage effect from a hydrogen-bonding network, an effect not previously observed. Our method opens up a new possibility for investigating the photophysical properties of multi-chromophoric systems in various solvents at the single-molecule level, toward developing optimal molecular devices such as water-proof devices.
Zhang, Changsheng; Cai, Hongmin; Huang, Jingying; Song, Yan
2016-09-17
Variations in DNA copy number make an important contribution to the development of several diseases, including autism, schizophrenia and cancer. Single-cell sequencing technology allows the dissection of genomic heterogeneity at the single-cell level, thereby providing important evolutionary information about cancer cells. In contrast to traditional bulk sequencing, single-cell sequencing requires amplification of the whole genome of a single cell to accumulate enough material for sequencing. However, the amplification process inevitably introduces amplification bias, resulting in an over-dispersed portion of the sequencing data. A recent study showed that the over-dispersed portion of single-cell sequencing data can be well modelled by negative binomial distributions. We developed a read-depth based method, nbCNV, to detect copy number variants (CNVs). The nbCNV method uses two constraints, sparsity and smoothness, to fit the CNV patterns under the assumption that the read signals are negative binomially distributed. The problem of CNV detection was formulated as a quadratic optimization problem and was solved by an efficient numerical solution based on the classical alternating direction minimization method. Extensive experiments comparing nbCNV with existing benchmark models were conducted on both simulated data and empirical single-cell sequencing data. The results demonstrate that nbCNV achieves superior performance and high robustness for the detection of CNVs in single-cell sequencing data.
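The smoothness half of such a quadratic formulation can be sketched directly: penalizing squared first differences of the fitted read-depth profile turns the fit into a single linear solve. nbCNV itself also carries a sparsity term and uses an alternating direction method; the read depths below are made up:

```python
import numpy as np

def smooth_depth(y, lam=5.0):
    """Quadratic-smoothness sketch of read-depth fitting: minimize
    ||x - y||^2 + lam * ||Dx||^2 with D the first-difference matrix,
    i.e. solve (I + lam * D'D) x = y. nbCNV adds a sparsity term and
    uses ADMM; here the smoothness part alone gives a direct solve."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n first differences
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

# made-up read depths with a copy-number step from ~2 to ~4
y = np.array([2.1, 1.9, 2.0, 4.1, 3.9, 4.0])
x = smooth_depth(y)
print(np.round(x, 2))  # noise flattened, the step softened but preserved
```

The sparsity constraint that this sketch omits is what keeps the fitted profile piecewise-constant, so that true copy-number breakpoints survive the smoothing.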
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency, [Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e., ε → 0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
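Fisher's method, the special case of the Lancaster procedure in which every gene receives equal weight 2, is easy to sketch in pure Python: for even degrees of freedom the chi-square survival function has a closed form. The p-values below are illustrative, not from the paper:

```python
from math import exp, log, factorial

def fisher_combined(pvals):
    """Fisher's method -- the Lancaster procedure with every gene
    weighted equally (weight 2): X = -2 * sum(ln p_i) follows a
    chi-square distribution with 2k degrees of freedom under the null.
    For even df the survival function is exp(-x/2) * sum_{i<k} (x/2)^i / i!,
    so no stats library is needed."""
    k = len(pvals)
    x = -2.0 * sum(log(p) for p in pvals)
    half = x / 2.0
    return exp(-half) * sum(half ** i / factorial(i) for i in range(k))

# three per-gene SKAT-style p-values combined into one pathway p-value
print(round(fisher_combined([0.01, 0.04, 0.10]), 4))  # 0.0025
```

The full Lancaster procedure replaces the constant weight 2 with gene-specific weights (and the paper further corrects for correlation between genes), but the combination logic is the same.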
This paper presents a novel atomic layer deposition (ALD) based ZnO functionalization of surface pre-treated multi-walled carbon nanotubes (MWCNTs) for highly sensitive methane chemoresistive sensors. The temperature optimization of the ALD process leads to enhanced ZnO nanopart...
Applications of Chemiluminescence in the Teaching of Experimental Design
ERIC Educational Resources Information Center
Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan
2015-01-01
This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…
A model of optimal voluntary muscular control.
FitzHugh, R
1977-07-19
In the absence of detailed knowledge of how the CNS controls a muscle through its motor fibers, a reasonable hypothesis is that of optimal control. This hypothesis is studied using a simplified mathematical model of a single muscle, based on A. V. Hill's equations, with the series elastic element omitted and with the motor signal represented by a single input variable. Two cost functions were used. The first was total energy expended by the muscle (work plus heat). If the load is a constant force, with no inertia, Hill's optimal velocity of shortening results. If the load includes a mass, analysis by optimal control theory shows that the motor signal to the muscle consists of three phases: (1) maximal stimulation, to accelerate the mass to the optimal velocity as quickly as possible; (2) an intermediate level of stimulation, to hold the velocity at its optimal value once reached; and (3) zero stimulation, to permit the mass to slow down, as quickly as possible, to zero velocity at the specified distance shortened. If the latter distance is too small, or the mass too large, the optimal velocity is not reached, and phase (2) is absent. For lengthening, there is no optimal velocity; there are only two phases, zero stimulation followed by maximal stimulation. The second cost function was total time. The optimal control for shortening consists of only phases (1) and (3) above, and is identical to the minimal-energy control whenever phase (2) is absent from the latter. Generalizations of this model to include viscous loads and a series elastic element are discussed.
Skoczinski, Pia; Volkenborn, Kristina; Fulton, Alexander; Bhadauriya, Anuseema; Nutschel, Christina; Gohlke, Holger; Knapp, Andreas; Jaeger, Karl-Erich
2017-09-25
Bacillus subtilis produces and secretes proteins in amounts of up to 20 g/l under optimal conditions. However, protein production can be challenging if transcription and cotranslational secretion are negatively affected, or the target protein is degraded by extracellular proteases. This study aims at elucidating the influence of a target protein on its own production by a systematic mutational analysis of the homologous B. subtilis model protein lipase A (LipA). We have covered the full natural diversity of single amino acid substitutions at 155 positions of LipA by site saturation mutagenesis excluding only highly conserved residues and qualitatively and quantitatively screened about 30,000 clones for extracellular LipA production. Identified variants with beneficial effects on production were sequenced and analyzed regarding B. subtilis growth behavior, extracellular lipase activity and amount as well as changes in lipase transcript levels. In total, 26 LipA variants were identified showing an up to twofold increase in either amount or activity of extracellular lipase. These variants harbor single amino acid or codon substitutions that did not substantially affect B. subtilis growth. Subsequent exemplary combination of beneficial single amino acid substitutions revealed an additive effect solely at the level of extracellular lipase amount; however, lipase amount and activity could not be increased simultaneously. Single amino acid and codon substitutions can affect LipA secretion and production by B. subtilis. Several codon-related effects were observed that either enhance lipA transcription or promote a more efficient folding of LipA. Single amino acid substitutions could improve LipA production by increasing its secretion or stability in the culture supernatant. Our findings indicate that optimization of the expression system is not sufficient for efficient protein production in B. subtilis. 
The sequence of the target protein should also be considered as an optimization target for successful protein production. Our results further suggest that variants with improved properties might be identified much faster and easier if mutagenesis is prioritized towards elements that contribute to enzymatic activity or structural integrity.
An approach for aerodynamic optimization of transonic fan blades
NASA Astrophysics Data System (ADS)
Khelghatibana, Maryam
Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver and numerical optimization methods that can be applied to both single and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin.
An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed a +0.28 point improvement in isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed. First, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation of the Pareto-optimal solutions of this optimization shows that the stall margin increases with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents +0.41, +0.56 and +0.9 point improvements in near-peak efficiency, near-stall efficiency and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choke margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage and shock/tip-leakage interactions in both single and multi-point optimizations.
NASA Astrophysics Data System (ADS)
Bertinotti, A.; Viallet, V.; Colson, D.; Marucco, J.-F.; Hammann, J.; Forget, A.; Le Bras, G.
1996-02-01
Single crystals of HgBa2CuO4+δ of submillimetric sizes were grown with the same one-step, low-pressure, gold amalgamation technique used to obtain single crystals of HgBa2Ca2Cu3O8+δ. Remarkable superconducting properties are displayed by the samples, which are optimally doped as grown. The sharpness of the transition profiles of the magnetic susceptibility, its anisotropy dependence and the volume fraction exhibiting the Meissner effect exceed the values obtained with the best crystal samples of Hg-1223. X-ray analysis shows that no substitutional defects are present in the mercury plane, in particular no mixed occupancy of copper at the mercury site. The interstitial oxygen content at (1/2, 1/2, 0), δ = 0.066 ± 0.008, is about one third that observed in optimally doped Hg-1223, resulting in an identical doping level per CuO2 plane in both compounds.
Experimental Optimal Single Qubit Purification in an NMR Quantum Information Processor
Hou, Shi-Yao; Sheng, Yu-Bo; Feng, Guan-Ru; Long, Gui-Lu
2014-01-01
High quality single qubits are the building blocks in quantum information processing. But they are vulnerable to environmental noise. To overcome noise, purification techniques, which generate qubits with higher purities from qubits with lower purities, have been proposed. Purification has attracted much interest and been widely studied. However, the full experimental demonstration of an optimal single qubit purification protocol proposed by Cirac, Ekert and Macchiavello [Phys. Rev. Lett. 82, 4344 (1999), the CEM protocol] more than one and a half decades ago still remains an experimental challenge, as it requires more complicated networks and a higher level of precision control. In this work, we design an experiment scheme that realizes the CEM protocol with explicit symmetrization of the wave functions. The purification scheme was successfully implemented in a nuclear magnetic resonance quantum information processor. The experiment fully demonstrated the purification protocol, and showed that it is an effective way of protecting qubits against errors and decoherence. PMID:25358758
Single Mothers and Their Infants: Factors Associated with Optimal Parenting.
ERIC Educational Resources Information Center
Barratt, Marguerite Stevenson; And Others
1991-01-01
Examined factors that might influence optimal early parenting by Caucasian single mothers (n=53). Results indicated optimal parenting was linked with older maternal age, fewer maternal psychological symptoms, and less difficult infant temperament. Recommends that the particular needs of single mothers be considered when formulating public policy.…
NASA Astrophysics Data System (ADS)
Olivia, G.; Santoso, A.; Prayogo, D. N.
2017-11-01
Nowadays, competition between supply chains is getting tighter, and a good coordination system between supply chain members is crucial in addressing this issue. This paper focuses on the development of a coordination model between a single supplier and multiple buyers in a supply chain. The proposed optimization model determines the optimal number of deliveries from the supplier to the buyers in order to minimize the total cost over a planning horizon. The total supply chain cost consists of transportation costs, handling costs for the supplier and buyers, and stock-out costs. In the proposed model, the supplier can supply various types of items to retailers whose item demand patterns are probabilistic. Sensitivity analysis of the proposed model was conducted to test the effect of changes in transportation costs, handling costs and production capacities of the supplier. The results of the sensitivity analysis showed a significant influence of changes in transportation cost, handling costs and production capacity on the decisions regarding the optimal number of deliveries for each item to the buyers.
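The abstract gives the shape of the decision problem but not its equations. As a hedged sketch only, the following Python toy shows the kind of trade-off such a model resolves: choosing a number of deliveries per item so that transportation cost (which grows with delivery frequency) balances handling and stock-out costs (which shrink with it). All cost terms, parameter names and numbers here are invented for illustration, not taken from the paper.

```python
def total_cost(n, demand, trans_cost, handle_cost, stockout_cost):
    """Toy per-item cost for n deliveries over the planning horizon:
    transport grows with n; average inventory (hence handling) and
    expected stock-outs shrink as deliveries become more frequent."""
    transport = n * trans_cost
    handling = handle_cost * demand / (2 * n)    # average cycle stock
    stockout = stockout_cost * demand / (n + 1)  # crude shortage proxy
    return transport + handling + stockout

def best_deliveries(demand, trans_cost, handle_cost, stockout_cost, n_max=50):
    # exhaustive search over candidate delivery counts
    return min(range(1, n_max + 1),
               key=lambda n: total_cost(n, demand, trans_cost,
                                        handle_cost, stockout_cost))

n_star = best_deliveries(demand=1200, trans_cost=40.0,
                         handle_cost=2.0, stockout_cost=5.0)
print(n_star)  # 13 for these invented parameters
```

The real model couples many items and buyers; this sketch only illustrates why an interior optimum for the number of deliveries exists at all.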
SMT-Aware Instantaneous Footprint Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Probir; Liu, Xu; Song, Shuaiwen
Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without a careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce the memory contention across SMT threads.
Press, Neil J; Taylor, Roger J; Fullerton, Joseph D; Tranter, Pamela; McCarthy, Clive; Keller, Thomas H; Arnold, Nicola; Beer, David; Brown, Lyndon; Cheung, Robert; Christie, Julie; Denholm, Alastair; Haberthuer, Sandra; Hatto, Julia D I; Keenan, Mark; Mercer, Mark K; Oakman, Helen; Sahri, Helene; Tuffnell, Andrew R; Tweed, Morris; Trifilieff, Alexandre
2015-09-10
Herein we describe the optimization of a series of PDE4 inhibitors, with special focus on solubility and pharmacokinetics, leading to clinical compound 2, 4-(8-(3-fluorophenyl)-1,7-naphthyridin-6-yl)-trans-cyclohexanecarboxylic acid. Although compound 2 produces emesis in humans when given as a single dose, its exemplary pharmacokinetic properties enabled a novel dosing regime comprising multiple escalating doses and the resultant achievement of high plasma drug levels without associated nausea or emesis.
Harańczyk, Maciej; Gutowski, Maciej
2007-01-01
We describe a procedure for finding low-energy tautomers of a molecule. The procedure consists of (i) combinatorial generation of a library of tautomers, (ii) screening based on the results of geometry optimization of initial structures performed at the density functional level of theory, and (iii) final refinement of geometry for the top hits at the second-order Møller-Plesset level of theory followed by single-point energy calculations at the coupled cluster level of theory with single, double, and perturbative triple excitations. The library of initial structures of various tautomers is generated with TauTGen, a tautomer generator program. The procedure proved to be successful for those molecular systems for which common chemical knowledge had not been sufficient to predict the most stable structures.
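The three-stage procedure (generate, cheap screen, expensive refine) is a generic funnel that can be sketched without any quantum chemistry. In this hedged Python illustration the two energy functions are toy stand-ins for the DFT screen and the MP2/CCSD(T) refinement, and the tautomer labels and energies are invented:

```python
def screening_funnel(tautomers, cheap_energy, refined_energy, top_k=3):
    """Schematic of the workflow: rank all combinatorially generated
    tautomers with a cheap energy model, then re-rank only the
    surviving top hits with an expensive one."""
    survivors = sorted(tautomers, key=cheap_energy)[:top_k]
    return sorted(survivors, key=refined_energy)

# toy energies (arbitrary units); the two models disagree about the top hit
cheap = {"t1": 0.0, "t2": 0.3, "t3": 0.5, "t4": 2.1, "t5": 3.0}
refined = {"t1": 0.4, "t2": 0.0, "t3": 0.9, "t4": 0.1, "t5": 2.5}
ranked = screening_funnel(cheap, cheap.get, refined.get)
print(ranked)  # ['t2', 't1', 't3']; t4 was cut at the cheap stage
```

Note that t4, which the refined model actually prefers to t1, never reaches stage (iii) because the cheap model ranked it poorly; the size of the surviving pool (top_k) controls this risk.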
The Development of Models to Optimize Selection of Nuclear Fuels through Atomic-Level Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prof. Simon Phillpot; Prof. Susan B. Sinnott; Prof. Hans Seifert
2009-01-26
Demonstrated that FRAPCON can be modified to accept data generated from first-principles studies, and that the results obtained from the modified FRAPCON make sense in terms of the inputs. Determined the temperature dependence of the thermal conductivity of single-crystal UO2 from atomistic simulation.
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
Synthesizing epidemiological and economic optima for control of immunizing infections.
Klepac, Petra; Laxminarayan, Ramanan; Grenfell, Bryan T
2011-08-23
Epidemic theory predicts that the vaccination threshold required to interrupt local transmission of an immunizing infection like measles depends only on the basic reproductive number and hence transmission rates. When the search for optimal strategies is expanded to incorporate economic constraints, the optimum for disease control in a single population is determined by relative costs of infection and control, rather than transmission rates. Adding a spatial dimension, which precludes local elimination unless it can be achieved globally, can reduce or increase optimal vaccination levels depending on the balance of costs and benefits. For weakly coupled populations, local optimal strategies agree with the global cost-effective strategy; however, asymmetries in costs can lead to divergent control optima in more strongly coupled systems--in particular, strong regional differences in costs of vaccination can preclude local elimination even when elimination is locally optimal. Under certain conditions, it is locally optimal to share vaccination resources with other populations.
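The threshold result quoted in the first sentence is the classical herd-immunity formula p_c = 1 - 1/R0. The sketch below contrasts it with a deliberately crude economic optimum; the residual-burden model is an invented illustration, not the paper's, but it reproduces the qualitative point: when infection is cheap relative to vaccination, the cost-minimizing coverage can sit far below the elimination threshold, and when infection is costly it moves up to the threshold.

```python
def critical_coverage(R0):
    """Elimination threshold from epidemic theory: transmission is
    interrupted once a fraction 1 - 1/R0 of the population is immune."""
    return max(0.0, 1.0 - 1.0 / R0)

def optimal_coverage(R0, cost_infection, cost_vaccination, grid=1000):
    """Illustrative economic optimum (invented burden model, not the
    paper's): choose the coverage p minimizing infection cost plus
    vaccination cost, with residual burden ~ max(0, 1 - 1/(R0 (1 - p)))."""
    def total_cost(p):
        r_eff = R0 * (1.0 - p)
        burden = max(0.0, 1.0 - 1.0 / r_eff) if r_eff > 0 else 0.0
        return cost_infection * burden + cost_vaccination * p
    return min((i / grid for i in range(grid + 1)), key=total_cost)

print(critical_coverage(15))  # measles-like R0 demands ~93% coverage
```

With cheap infections (cost ratio 0.5) the grid search settles at zero vaccination, while a cost ratio of 100 pushes it to the elimination threshold, mirroring the abstract's "relative costs of infection and control" conclusion.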
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klymenko, M. V.; Remacle, F., E-mail: fremacle@ulg.ac.be
2014-10-28
A methodology is proposed for designing a low-energy consuming ternary-valued full adder based on a quantum dot (QD) electrostatically coupled with a single electron transistor operating as a charge sensor. The methodology is based on design optimization: the values of the physical parameters of the system required for implementing the logic operations are optimized using a multiobjective genetic algorithm. The searching space is determined by elements of the capacitance matrix describing the electrostatic couplings in the entire device. The objective functions are defined as the maximal absolute error over actual device logic outputs relative to the ideal truth tables for the sum and the carry-out in base 3. The logic units are implemented on the same device: a single dual-gate quantum dot and a charge sensor. Their physical parameters are optimized to compute either the sum or the carry-out outputs and are compatible with current experimental capabilities. The outputs are encoded in the value of the electric current passing through the charge sensor, while the logic inputs are supplied by the voltage levels on the two gate electrodes attached to the QD. The complex ternary logic operations are directly implemented on an extremely simple device, characterized by small size and low energy consumption compared to devices based on switching single-electron transistors. The design methodology is general and provides a rational approach for realizing non-switching logic operations on QD devices.
NASA Astrophysics Data System (ADS)
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14 as well as IEEE 30 bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
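GSO itself is not reproduced here, but the spirit of the reformulation can be: a single-level profit function with no KKT conditions, maximized by a minimal population-based search loop standing in for GSO. Everything in this sketch (the linear toy market, the parameter values, the search settings) is an invented illustration, not the paper's model.

```python
import random

def profit(bid, cost=20.0, a=100.0, k=2.0):
    """Toy market: dispatched quantity falls linearly with the offered
    price, so profit (bid - cost) * quantity has an interior maximum."""
    qty = max(0.0, a - k * bid)
    return (bid - cost) * qty

def population_search(f, lo, hi, pop=40, gens=60, seed=1):
    """Minimal population-based maximizer (random-restart hill-climb
    flavour) standing in for the group search optimiser."""
    rng = random.Random(seed)
    best = max((rng.uniform(lo, hi) for _ in range(pop)), key=f)
    for _ in range(gens):
        # sample around the incumbent and keep any improvement
        cands = [min(hi, max(lo, best + rng.gauss(0.0, (hi - lo) * 0.1)))
                 for _ in range(pop)]
        best = max(cands + [best], key=f)
    return best

b = population_search(profit, 20.0, 50.0)
print(round(b, 1), round(profit(b), 1))  # near the analytic optimum b* = 35
```

For this toy the optimum can be checked by hand: d/db [(b - 20)(100 - 2b)] = 0 gives b* = 35 and profit 450, which the population search approaches without any derivative information.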
Access to specialist care: Optimizing the geographic configuration of trauma systems.
Jansen, Jan O; Morrison, Jonathan J; Wang, Handing; He, Shan; Lawrenson, Robin; Hutchison, James D; Campbell, Marion K
2015-11-01
The optimal geographic configuration of health care systems is key to maximizing accessibility while promoting the efficient use of resources. This article reports the use of a novel approach to inform the optimal configuration of a national trauma system. This is a prospective cohort study of all trauma patients, 15 years and older, attended to by the Scottish Ambulance Service, between July 1, 2013, and June 30, 2014. Patients underwent notional triage to one of three levels of care (major trauma center [MTC], trauma unit, or local emergency hospital). We used geographic information systems software to calculate access times, by road and air, from all incident locations to all candidate hospitals. We then modeled the performance of all mathematically possible network configurations and used multiobjective optimization to determine geospatially optimized configurations. A total of 80,391 casualties were included. A network with only high- or moderate-volume MTCs (admitting at least 650 or 400 severely injured patients per year, respectively) would be optimally configured with a single MTC. A network accepting lower-volume MTCs (at least 240 severely injured patients per year) would be optimally configured with two MTCs. Both configurations would necessitate an increase in the number of helicopter retrievals. This study has shown that a novel combination of notional triage, network analysis, and mathematical optimization can be used to inform the planning of a national clinical network. Scotland's trauma system could be optimized with one or two MTCs. Care management study, level IV.
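The study's enumerate-then-filter pattern (score every mathematically possible configuration, then keep the multiobjective optima) can be sketched on a toy one-dimensional geography. The incident and hospital locations below are invented, and the two objectives (mean access distance, number of designated centers) only stand in for the paper's richer travel-time and volume criteria.

```python
from itertools import combinations

def pareto_front(points):
    """Keep (mean_access, n_centers) pairs not dominated by any other:
    dominated means some other point is no worse on both objectives."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

# toy 1-D geography (arbitrary units)
incidents = [3, 8, 15, 22, 30, 41, 47]
hospitals = {"H1": 5, "H2": 20, "H3": 45}

def evaluate(config):
    # objective 1: mean distance to the nearest designated center
    # objective 2: resource use, counted as number of centers
    mean_access = sum(min(abs(x - hospitals[h]) for h in config)
                      for x in incidents) / len(incidents)
    return (mean_access, len(config))

configs = [c for r in range(1, len(hospitals) + 1)
           for c in combinations(hospitals, r)]
front = sorted(pareto_front([evaluate(c) for c in configs]))
print(front)  # best achievable accessibility for 1, 2 and 3 centers
```

For each center count the front keeps only the best-access configuration, which is exactly the information a planner needs to weigh one versus two major trauma centers.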
NASA Astrophysics Data System (ADS)
Arfawi Kurdhi, Nughthoh; Adi Diwiryo, Toray; Sutanto
2016-02-01
This paper presents an integrated single-vendor two-buyer production-inventory model with stochastic demand and service level constraints. Shortages are permitted in the model and are partially backordered, with the remainder treated as lost sales. The lead time demand is assumed to follow a normal distribution, and the lead time can be reduced by adding crashing cost. The lead time and ordering cost reductions are interdependent, with a logarithmic functional relationship. A service level constraint corresponding to each buyer is considered in the model in order to limit the level of inventory shortages. The purpose of this research is to minimize the joint total cost of the inventory model by finding the optimal order quantity, safety stock, lead time, and the number of lots delivered in one production run. The optimal production-inventory policy, obtained by the Lagrange method, is shaped to account for the service level restrictions. Finally, a numerical example and effects of the key parameters are presented to illustrate the results of the proposed model.
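For context, the "normally distributed lead-time demand with a service level" ingredient has a standard textbook form: safety stock equals the service-level quantile of the standard normal times the standard deviation of demand over the lead time. The sketch below is that generic relation, not the paper's joint vendor-buyer model, and the numbers are invented.

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(daily_sd, lead_time_days, service_level):
    """Safety stock for normally distributed lead-time demand:
    z-quantile of the service level times sigma_daily * sqrt(lead time)."""
    z = NormalDist().inv_cdf(service_level)
    return z * daily_sd * sqrt(lead_time_days)

def reorder_point(daily_mean, daily_sd, lead_time_days, service_level):
    # expected lead-time demand plus the safety buffer
    return (daily_mean * lead_time_days
            + safety_stock(daily_sd, lead_time_days, service_level))

ss = safety_stock(daily_sd=8, lead_time_days=9, service_level=0.95)
print(round(ss, 1))  # ~39.5 units for these invented parameters
```

The paper's contribution is to let lead time itself be a decision variable (via crashing cost) inside this relation, which is why safety stock, lead time and lot count must be optimized jointly.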
MO-AB-BRA-01: A Global Level Set Based Formulation for Volumetric Modulated Arc Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, D; Lyu, Q; Ruan, D
2016-06-15
Purpose: The current clinical Volumetric Modulated Arc Therapy (VMAT) optimization is formulated as a non-convex problem and various greedy heuristics have been employed for an empirical solution, jeopardizing plan consistency and quality. We introduce a novel global direct aperture optimization method for VMAT to overcome these limitations. Methods: The global VMAT (gVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term and an anisotropic total variation term. A level set function was used to describe the aperture shapes and adjacent aperture shapes were penalized to control MLC motion range. An alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc gVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme (GBM), lung (LNG), and 2 head and neck cases—one with 3 PTVs (H&N3PTV) and one with 4 PTVs (H&N4PTV). The plans were compared against the clinical VMAT (cVMAT) plans utilizing two overlapping coplanar arcs. Results: The optimization of the gVMAT plans had converged within 600 iterations. gVMAT reduced the average max and mean OAR dose by 6.59% and 7.45% of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N3PTV case. PTV coverages (D95, D98, D99) were within 0.25% of the prescription dose. By globally considering all beams, the gVMAT optimizer allowed some beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel VMAT approach allows for the search of an optimal plan in the global solution space and generates deliverable apertures directly. The single arc VMAT approach fully utilizes the digital linacs’ capability in dose rate and gantry rotation speed modulation.
Varian Medical Systems, NIH grant R01CA188300, NIH grant R43CA183390.
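The "alternating optimization strategy" in the Methods is a generic pattern: freeze one block of variables, solve for the other, then switch. Here is a minimal hedged sketch on a toy quadratic; nothing below is the paper's actual fluence/aperture objective, and the exact-solve lambdas are invented for illustration.

```python
def alternating_minimize(solve_x, solve_y, y0, iters=50):
    """Alternating strategy as described in the abstract: hold one block
    fixed and solve for the other, then switch, until the pair settles."""
    y = y0
    for _ in range(iters):
        x = solve_x(y)   # e.g. fluence update with aperture shapes frozen
        y = solve_y(x)   # e.g. aperture update with fluence frozen
    return x, y

# toy objective f(x, y) = (x - 2y)^2 + (y - 3)^2
# argmin over x is x = 2y; argmin over y solves -4(x - 2y) + 2(y - 3) = 0
x, y = alternating_minimize(lambda y: 2 * y,
                            lambda x: (4 * x + 6) / 10,
                            y0=0.0)
print(round(x, 3), round(y, 3))  # converges to the joint minimum (6, 3)
```

On this convex toy the alternation contracts geometrically to the joint optimum; the paper's point is that the full VMAT objective is non-convex, so such block updates are a heuristic whose behavior global formulations like gVMAT aim to improve.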
THE SPECTRUM OF THORIUM FROM 250 nm TO 5500 nm: RITZ WAVELENGTHS AND OPTIMIZED ENERGY LEVELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redman, Stephen L.; Nave, Gillian; Sansonetti, Craig J.
2014-03-01
We have made precise observations of a thorium-argon hollow cathode lamp emission spectrum in the region between 350 nm and 1175 nm using a high-resolution Fourier transform spectrometer. Our measurements are combined with results from seven previously published thorium line lists to re-optimize the energy levels of neutral, singly, and doubly ionized thorium (Th I, Th II, and Th III). Using the optimized level values, we calculate accurate Ritz wavelengths for 19,874 thorium lines between 250 nm and 5500 nm (40,000 cm^-1 to 1800 cm^-1). We have also found 102 new thorium energy levels. A systematic analysis of previous measurements in light of our new results allows us to identify and propose corrections for systematic errors in Palmer and Engleman and typographical errors and incorrect classifications in Kerber et al. We also found a large scatter with respect to the thorium line list of Lovis and Pepe. We anticipate that our Ritz wavelengths will lead to improved measurement accuracy for current and future spectrographs that make use of thorium-argon or thorium-neon lamps as calibration standards.
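A Ritz wavelength follows directly from optimized level energies: the transition wavenumber is the difference of the two level values in cm^-1, and the vacuum wavelength in nm is 10^7 divided by that wavenumber. A minimal sketch (the level values below are placeholders, not actual thorium levels):

```python
def ritz_wavelength_nm(upper_cm1, lower_cm1):
    """Ritz vacuum wavelength from two optimized energy levels:
    wavenumber = E_upper - E_lower (cm^-1); lambda = 1e7 / wavenumber (nm)."""
    wavenumber = upper_cm1 - lower_cm1
    return 1e7 / wavenumber

# the survey's 250 nm and 5500 nm endpoints correspond to roughly
# 40,000 cm^-1 and 1800 cm^-1 wavenumbers, as quoted in the abstract
print(ritz_wavelength_nm(40000.0, 0.0))  # 250.0
```

Because every line's Ritz wavelength is a difference of shared level values, re-optimizing the levels against all seven line lists simultaneously improves all derived wavelengths at once.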
Trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Mease, Kenneth D.; Vanburen, Mark A.
1989-01-01
The first step in the approach to developing guidance laws for a horizontal take-off, air-breathing single-stage-to-orbit vehicle is to characterize the minimum-fuel ascent trajectories. The capability to generate constrained, minimum-fuel ascent trajectories for a single-stage-to-orbit vehicle was developed. A key component of this capability is the general purpose trajectory optimization program OTIS. The pre-production version, OTIS 0.96, was installed and run on a Convex C-1. A propulsion model was developed covering the entire flight envelope of a single-stage-to-orbit vehicle. Three separate propulsion modes, corresponding to an afterburning turbojet, a ramjet and a scramjet, are used in the air-breathing propulsion phase. The Generic Hypersonic Aerodynamic Model Example aerodynamic model of a hypersonic air-breathing single-stage-to-orbit vehicle was obtained and implemented. Preliminary results pertaining to the effects of variations in acceleration constraints, available thrust level and fuel specific impulse on the shape of the minimum-fuel ascent trajectories were obtained. The results show that, if the air-breathing engines are sized for acceleration to orbital velocity, it is the acceleration constraint rather than the dynamic pressure constraint that is active during ascent.
Han, Xiaoping; Chen, Haide; Huang, Daosheng; Chen, Huidong; Fei, Lijiang; Cheng, Chen; Huang, He; Yuan, Guo-Cheng; Guo, Guoji
2018-04-05
Human pluripotent stem cells (hPSCs) provide powerful models for studying cellular differentiations and unlimited sources of cells for regenerative medicine. However, a comprehensive single-cell level differentiation roadmap for hPSCs has not been achieved. We use high throughput single-cell RNA-sequencing (scRNA-seq), based on optimized microfluidic circuits, to profile early differentiation lineages in the human embryoid body system. We present a cellular-state landscape for hPSC early differentiation that covers multiple cellular lineages, including neural, muscle, endothelial, stromal, liver, and epithelial cells. Through pseudotime analysis, we construct the developmental trajectories of these progenitor cells and reveal the gene expression dynamics in the process of cell differentiation. We further reprogram primed H9 cells into naïve-like H9 cells to study the cellular-state transition process. We find that genes related to hemogenic endothelium development are enriched in naïve-like H9. Functionally, naïve-like H9 show higher potency for differentiation into hematopoietic lineages than primed cells. Our single-cell analysis reveals the cellular-state landscape of hPSC early differentiation, offering new insights that can be harnessed for optimization of differentiation protocols.
Passive states as optimal inputs for single-jump lossy quantum channels
NASA Astrophysics Data System (ADS)
De Palma, Giacomo; Mari, Andrea; Lloyd, Seth; Giovannetti, Vittorio
2016-06-01
The passive states of a quantum system minimize the average energy among all the states with a given spectrum. We prove that passive states are the optimal inputs of single-jump lossy quantum channels. These channels arise from a weak interaction of the quantum system of interest with a large Markovian bath in its ground state, such that the interaction Hamiltonian couples only consecutive energy eigenstates of the system. We prove that the output generated by any input state ρ majorizes the output generated by the passive input state ρ0 with the same spectrum of ρ . Then, the output generated by ρ can be obtained applying a random unitary operation to the output generated by ρ0. This is an extension of De Palma et al. [IEEE Trans. Inf. Theory 62, 2895 (2016)], 10.1109/TIT.2016.2547426, where the same result is proved for one-mode bosonic Gaussian channels. We also prove that for finite temperature this optimality property can fail already in a two-level system, where the best input is a coherent superposition of the two energy eigenstates.
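The defining property in the first sentence is easy to state computationally: among all pairings of a fixed spectrum with fixed energy levels, the passive arrangement (largest populations on lowest energies) minimizes the average energy. A small self-checking sketch with an invented three-level spectrum:

```python
from itertools import permutations

def average_energy(populations, energies):
    return sum(p * e for p, e in zip(populations, energies))

def passive_energy(populations, energies):
    """Passive arrangement: sort the spectrum in decreasing order against
    the energies in increasing order, so the most populated eigenstate
    sits on the lowest level."""
    return average_energy(sorted(populations, reverse=True), sorted(energies))

spectrum = [0.5, 0.3, 0.2]   # invented density-matrix eigenvalues
levels = [0.0, 1.0, 2.0]     # invented energy eigenvalues
e_passive = passive_energy(spectrum, levels)
# no rearrangement of the same spectrum gives a lower average energy:
print(all(average_energy(p, levels) >= e_passive
          for p in permutations(spectrum)))  # True
```

This is the rearrangement inequality in disguise; the paper's result is the much stronger statement that this energy-ordered input also majorizes the channel outputs of all its rearrangements.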
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stark, Christopher C.; Roberge, Aki; Mandell, Avi
ExoEarth yield is a critical science metric for future exoplanet imaging missions. Here we estimate exoEarth candidate yield using single visit completeness for a variety of mission design and astrophysical parameters. We review the methods used in previous yield calculations and show that the method choice can significantly impact yield estimates as well as how the yield responds to mission parameters. We introduce a method, called Altruistic Yield Optimization, that optimizes the target list and exposure times to maximize mission yield, adapts maximally to changes in mission parameters, and increases exoEarth candidate yield by up to 100% compared to previous methods. We use Altruistic Yield Optimization to estimate exoEarth candidate yield for a large suite of mission and astrophysical parameters using single visit completeness. We find that exoEarth candidate yield is most sensitive to telescope diameter, followed by coronagraph inner working angle, followed by coronagraph contrast, and finally coronagraph contrast noise floor. We find a surprisingly weak dependence of exoEarth candidate yield on exozodi level. Additionally, we provide a quantitative approach to defining a yield goal for future exoEarth-imaging missions.
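The abstract does not spell out the algorithm, so the sketch below only mimics the central idea under invented assumptions: given a saturating single-visit completeness curve per target, repeatedly award the next slice of the exposure-time budget to whichever target currently offers the largest marginal completeness gain, rather than letting each target keep "its own" time. The completeness curve and difficulty scales are toys, not the paper's photometric model.

```python
import math

def completeness(t, tau):
    """Toy single-visit completeness: saturates with exposure time t;
    tau is an invented per-target difficulty scale."""
    return 1.0 - math.exp(-t / tau)

def allocate(taus, total_time, step=0.1):
    """Greedy marginal-gain allocation of a fixed exposure-time budget."""
    times = [0.0] * len(taus)
    for _ in range(int(total_time / step)):
        gains = [completeness(times[i] + step, taus[i])
                 - completeness(times[i], taus[i])
                 for i in range(len(taus))]
        j = max(range(len(taus)), key=gains.__getitem__)
        times[j] += step
    return times, sum(completeness(t, tau) for t, tau in zip(times, taus))

times, total_yield = allocate([1.0, 2.0, 5.0], total_time=6.0)
print(round(total_yield, 2))  # well above spending all 6 units on one target
```

Because the curves are concave, the greedy loop equalizes marginal gains across targets, which is the "altruistic" behavior: easy targets give up time once their returns diminish.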
Ground Fault Overvoltage With Inverter-Interfaced Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ropp, Michael; Hoke, Anderson; Chakraborty, Sudipta
Ground Fault Overvoltage can occur in situations in which a four-wire distribution circuit is energized by an ungrounded voltage source during a single phase to ground fault. The phenomenon is well-documented with ungrounded synchronous machines, but there is considerable discussion about whether inverters cause this phenomenon, and consequently whether inverters require effective grounding. This paper examines the overvoltages that can be supported by inverters during single phase to ground faults via theory, simulation and experiment, identifies the relevant physical mechanisms, quantifies expected levels of overvoltage, and makes recommendations for optimal mitigation.
NASA Technical Reports Server (NTRS)
Stevenson, T. R.; Hsieh, W.-T.; Li, M. J.; Prober, D. E.; Rhee, K. W.; Schoelkopf, R. J.; Stahle, C. M.; Teufel, J.; Wollack, E. J.
2004-01-01
For high resolution imaging and spectroscopy in the FIR and submillimeter, space observatories will demand sensitive, fast, compact, low-power detector arrays with 10^4 pixels and sensitivity less than 10^-20 W/Hz^0.5. Antenna-coupled superconducting tunnel junctions with integrated rf single-electron transistor readout amplifiers have the potential for achieving this high level of sensitivity, and can take advantage of an rf multiplexing technique. The device consists of an antenna to couple radiation into a small superconducting volume and cause quasiparticle excitations, and a single-electron transistor to measure current through junctions contacting the absorber. We describe optimization of device parameters, and results on fabrication techniques for producing devices with high yield for detector arrays. We also present modeling of expected saturation power levels, antenna coupling, and rf multiplexing schemes.
Beqiri, Arian; Price, Anthony N; Padormo, Francesco; Hajnal, Joseph V; Malik, Shaihan J
2017-06-01
Cardiac magnetic resonance imaging (MRI) at high field presents challenges because of the high specific absorption rate and significant transmit field (B1+) inhomogeneities. Parallel transmission MRI offers the ability to correct for both issues at the level of individual radiofrequency (RF) pulses, but must operate within strict hardware and safety constraints. The constraints are themselves affected by sequence parameters, such as the RF pulse duration and TR, meaning that an overall optimal operating point exists for a given sequence. This work seeks to obtain optimal performance by performing a 'sequence-level' optimization in which pulse sequence parameters are included as part of an RF shimming calculation. The method is applied to balanced steady-state free precession cardiac MRI with the objective of minimizing TR, hence reducing the imaging duration. Results are demonstrated using an eight-channel parallel transmit system operating at 3 T, with an in vivo study carried out on seven male subjects of varying body mass index (BMI). Compared with single-channel operation, a mean-squared-error shimming approach leads to reduced imaging durations of 32 ± 3% with simultaneous improvement in flip angle homogeneity of 32 ± 8% within the myocardium. © 2017 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.
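At its core, a mean-squared-error shim is a least-squares fit of complex channel weights to a target field. As a hedged two-channel sketch (the B1+ samples below are invented numbers, and real shimming adds the SAR and hardware constraints the abstract describes), the normal equations can be solved in closed form:

```python
def shim_weights(A, target):
    """Least-squares shim for two transmit channels: minimize
    sum_v |A[v][0]*w0 + A[v][1]*w1 - target[v]|^2 by solving the
    2x2 normal equations (A^H A) w = A^H b with Cramer's rule."""
    n = len(A)
    h = [[sum(A[v][i].conjugate() * A[v][j] for v in range(n))
          for j in range(2)] for i in range(2)]
    g = [sum(A[v][i].conjugate() * target[v] for v in range(n))
         for i in range(2)]
    det = h[0][0] * h[1][1] - h[0][1] * h[1][0]
    return ((g[0] * h[1][1] - h[0][1] * g[1]) / det,
            (h[0][0] * g[1] - h[1][0] * g[0]) / det)

# toy complex B1+ samples at three voxels for two channels; uniform target
A = [[1.0 + 0.0j, 0.5 + 0.5j],
     [0.8 + 0.2j, 0.9 - 0.1j],
     [0.3 - 0.4j, 1.0 + 0.0j]]
target = [1.0 + 0.0j] * 3
w0, w1 = shim_weights(A, target)

def residual(w):
    return sum(abs(A[v][0] * w[0] + A[v][1] * w[1] - target[v]) ** 2
               for v in range(3))

print(residual((w0, w1)) <= residual((1.0 + 0j, 0.0j)))  # True
```

The paper's "sequence-level" twist is that the feasible set for these weights (power, SAR) itself depends on pulse duration and TR, so the shim and the sequence timing are optimized together rather than in isolation.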
NASA Astrophysics Data System (ADS)
Chen, Yizhong; Lu, Hongwei; Li, Jing; Ren, Lixia; He, Li
2017-05-01
This study presents the mathematical formulation and implementation of a synergistic optimization framework based on an understanding of water availability and reliability together with the characteristics of multiple water demands. This framework simultaneously integrates a set of leader-followers-interactive objectives established by different decision makers during the synergistic optimization. The upper-level model (the leader's) determines the optimal pollutant discharge to satisfy the environmental target. The lower-level model (the follower's) accepts the dispatch requirement from the upper-level one and determines the optimal water-allocation strategy to maximize economic benefits on behalf of the regional authority. The complicated bi-level model significantly improves upon conventional programming methods through the mutual influence and restriction between the upper- and lower-level decision processes, particularly when limited water resources are available for multiple competing users. To solve the problem, a bi-level interactive solution algorithm based on satisfactory degree is introduced into the decision-making process for measuring to what extent the constraints are met and the objective reaches its optimum. The capabilities of the proposed model are illustrated through a real-world case study of a water resources management system in the district of Fengtai, located in Beijing, China. Feasible decisions in association with water resources allocation, wastewater emission and pollutant discharge would be sequentially generated for balancing the objectives subject to the given water-related constraints, which can enable stakeholders to grasp the inherent conflicts and trade-offs between the environmental and economic interests. The performance of the developed bi-level model is enhanced by comparing with single-level models.
Moreover, in consideration of the uncertainty in water demand and availability, sensitivity analysis and policy analysis are employed for identifying their impacts on the final decisions and improving the practical applications.
A vibratory stimulation-based inhibition system for nocturnal bruxism: a clinical report.
Watanabe, T; Baba, K; Yamagata, K; Ohyama, T; Clark, G T
2001-03-01
For the single subject tested to date, the bruxism-contingent vibratory-feedback system for occlusal appliances effectively inhibited bruxism without inducing substantial sleep disturbance. Whether the reduction in bruxism would continue if the device no longer provided feedback and whether the force levels applied are optimal to induce suppression remain to be determined.
Multilevel geometry optimization
NASA Astrophysics Data System (ADS)
Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.
2000-02-01
Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.
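The distinction between unit-coefficient and multicoefficient combinations can be sketched as follows; the level labels, component energies, and fitted coefficients here are illustrative stand-ins, not values from the paper:

```python
# Hypothetical single-level energies (hartree) for one molecule.
E = {"MP2/small": -76.20, "MP2/big": -76.26, "CCSD(T)/small": -76.24}

def g2_style(E):
    # Gaussian-2/3-style combination: components are added and subtracted
    # with unit coefficients (a basis-set extension layered on a
    # higher-level result).
    return E["CCSD(T)/small"] + (E["MP2/big"] - E["MP2/small"])

def multicoefficient(E, c):
    # Multicoefficient Gaussian-x style: the same components enter with
    # noninteger coefficients fitted against accurate reference data.
    return (c[0] * E["CCSD(T)/small"]
            + c[1] * E["MP2/big"]
            + c[2] * E["MP2/small"])

e_unit = g2_style(E)                             # -76.30 hartree
e_mc = multicoefficient(E, (1.08, 1.02, -1.10))  # made-up coefficients
```

In a multilevel geometry optimization, either combined energy (with its gradient) plays the role of the potential energy surface being minimized.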
Poon, S K; Peacock, L; Gibson, W; Gull, K; Kelly, S
2012-02-01
Here, we present a simple modular extendable vector system for introducing the T7 RNA polymerase and tetracycline repressor genes into Trypanosoma brucei. This novel system exploits developments in our understanding of gene expression and genome organization to produce a streamlined plasmid optimized for high levels of expression of the introduced transgenes. We demonstrate the utility of this novel system in bloodstream and procyclic forms of Trypanosoma brucei, including the genome strain TREU927/4. We validate these cell lines using a variety of inducible experiments that recapture previously published lethal and non-lethal phenotypes. We further demonstrate the utility of the single marker (SmOx) TREU927/4 cell line for in vivo experiments in the tsetse fly and provide a set of plasmids that enable both whole-fly and salivary gland-specific inducible expression of transgenes.
Cherubin, Patrick; Quiñones, Beatriz; Teter, Ken
2018-02-06
Ricin, Shiga toxin, exotoxin A, and diphtheria toxin are AB-type protein toxins that act within the host cytosol and kill the host cell through pathways involving the inhibition of protein synthesis. It is thought that a single molecule of cytosolic toxin is sufficient to kill the host cell. Intoxication is therefore viewed as an irreversible process. Using flow cytometry and a fluorescent reporter system to monitor protein synthesis, we show a single molecule of cytosolic toxin is not sufficient for complete inhibition of protein synthesis or cell death. Furthermore, cells can recover from intoxication: cells with a partial loss of protein synthesis will, upon removal of the toxin, increase the level of protein production and survive the toxin challenge. Thus, in contrast to the prevailing model, ongoing toxin delivery to the cytosol appears to be required for the death of cells exposed to sub-optimal toxin concentrations.
Constraining the braneworld with gravitational wave observations.
McWilliams, Sean T
2010-04-09
Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at the ~1 μm level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l ≤ 5 μm.
Constraining the Braneworld with Gravitational Wave Observations
NASA Technical Reports Server (NTRS)
McWilliams, Sean T.
2011-01-01
Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, L, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining L via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain L at the approximately 1 micron level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of L less than or equal to 5 microns.
Static inverter with synchronous output waveform synthesized by time-optimal-response feedback
NASA Technical Reports Server (NTRS)
Kernick, A.; Stechschulte, D. L.; Shireman, D. W.
1976-01-01
A time-optimal-response 'bang-bang' or 'bang-hang' technique, using four feedback control loops, synthesizes the static inverter's sinusoidal output waveform by self-oscillatory yet synchronous pulse-frequency modulation (SPFM). A single modular power stage per phase of AC output entails minimal circuit complexity, while feedback synthesis simultaneously provides individual phase-voltage regulation, phase-position control, and inherent compensation for line and load disturbances. Clipped-sinewave performance under off-limit load or input-voltage conditions is described, and approaches to high power levels, three-phase arraying, and parallel modular connection are given.
Zhao, Xi; Wu, Xiaoli; Zhou, Hui; Jiang, Tao; Chen, Chun; Liu, Mingshi; Jin, Yuanbao; Yang, Dongsheng
2014-11-01
To optimize the preparation factors for argan oil microcapsules prepared by complex coacervation of chitosan cross-linked with gelatin, a hybrid-level orthogonal array design was modeled in SPSS. Of the ten factors affecting microcapsule preparation, eight relatively significant factors were first screened by the single-factor method and selected for the orthogonal array design, with 9, 9, 9, 9, 7, 6, 2, and 2 levels, respectively. The priority order and optimum levels of the preparation factors were determined from the percentage of microcapsules with diameters of 30-40 μm, analyzed in SPSS. Experimental data showed that the optimum conditions were a chitosan/gelatin ratio of 1:2, a system concentration of 1.5%, a core/shell ratio of 1:7, a complex-coacervation pH of 6.4, cross-linking and complex-coacervation times of 75 min and 30 min, glucono-delta-lactone as the cross-linking agent, and chitosan with a molecular weight of 2000-3000.
Lai, Chi-Chih; Friedman, Michael; Lin, Hsin-Ching; Wang, Pa-Chun; Hwang, Michelle S; Hsu, Cheng-Ming; Lin, Meng-Chih; Chin, Chien-Hung
2015-08-01
To identify standard clinical parameters that may predict the optimal level of continuous positive airway pressure (CPAP) in adult patients with obstructive sleep apnea/hypopnea syndrome (OSAHS). This is a retrospective study in a tertiary academic medical center that included 129 adult patients (117 males and 12 females) with OSAHS confirmed by diagnostic polysomnography (PSG). All OSAHS patients underwent successful full-night manual titration to determine the optimal CPAP pressure level for OSAHS treatment. The PSG parameters and a complete physical examination, including body mass index, tonsil size grading, modified Mallampati grade (also known as updated Friedman's tongue position [uFTP]), uvular length, neck circumference, waist circumference, hip circumference, thyroid-mental distance, and hyoid-mental distance (HMD), were recorded. When the physical examination variables and OSAHS severity were correlated singly with the optimal CPAP pressure, we found that uFTP, HMD, and apnea/hypopnea index (AHI) were reliable predictors of CPAP pressure (P = .013, P = .002, and P < .001, respectively, by multiple regression). When all important factors were considered in a stepwise multiple linear regression analysis, a significant correlation with optimal CPAP pressure was formulated by factoring uFTP, HMD, and AHI (optimal CPAP pressure = 1.01 uFTP + 0.74 HMD + 0.059 AHI - 1.603). This study established the correlation of uFTP, HMD, and AHI with the optimal CPAP pressure. The structure of the upper airway (especially tongue base obstruction) and disease severity may predict the effective level of CPAP pressure. Level of evidence: 4. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
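The stepwise regression above gives a directly computable predictor. A minimal sketch; the example patient values are hypothetical, and HMD is assumed to be in the units used by the authors:

```python
def predicted_cpap(uftp, hmd, ahi):
    """Optimal CPAP pressure from the study's stepwise regression:
    1.01*uFTP + 0.74*HMD + 0.059*AHI - 1.603."""
    return 1.01 * uftp + 0.74 * hmd + 0.059 * ahi - 1.603

# Hypothetical patient: uFTP grade 3, HMD of 5, AHI of 40 events/h
pressure = predicted_cpap(3, 5, 40)  # 7.487
```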
NASA Astrophysics Data System (ADS)
Rosenblum, Serge; Borne, Adrien; Dayan, Barak
2017-03-01
The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.
HIITE: HIV-1 incidence and infection time estimator.
Park, Sung Yong; Love, Tanzy M T; Kapoor, Shivankur; Lee, Ha Youn
2018-06-15
Around 2.1 million new HIV-1 infections were reported in 2015, a reminder that the HIV-1 epidemic remains a significant global health challenge. Precise incidence assessment strengthens epidemic monitoring efforts and guides strategy optimization for prevention programs. Estimating the onset time of HIV-1 infection can facilitate optimal clinical management, identify the key populations largely responsible for epidemic spread, and thereby help infer HIV-1 transmission chains. Our goal is to develop a genomic assay estimating incidence and infection time in a single cross-sectional survey setting. We created a web-based platform, the HIV-1 incidence and infection time estimator (HIITE), which processes envelope gene sequences using hierarchical clustering algorithms and reports the stage of infection, along with time since infection for incident cases. HIITE's performance was evaluated using envelope gene sequences from 585 incident and 305 chronic specimens collected from global cohorts including HIV-1 vaccine trial participants. HIITE identified chronically infected individuals as chronic with an error of less than 1% and correctly classified 94% of recently infected individuals as incident. Using a mixed-effect model, an incident specimen's time since infection was estimated from its single-lineage diversity, with a 14% prediction error. HIITE is the first algorithm to inform these two key metrics from a single-time-point sequence sample. HIITE has the capacity to assess not only population-level epidemic spread but also individual-level transmission events from a single survey, advancing HIV prevention and intervention programs. Web-based HIITE and its source code are available at http://www.hayounlee.org/software.html. Supplementary data are available at Bioinformatics online.
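The idea of reading time since infection off within-host sequence diversity can be sketched as follows. The diversity measure is a standard mean pairwise distance; the linear calibration (slope, intercept) is a hypothetical stand-in for HIITE's fitted mixed-effect model, and the toy sequences are not real env fragments:

```python
import itertools

def pairwise_diversity(seqs):
    """Mean pairwise Hamming distance per site over aligned sequences."""
    pairs = list(itertools.combinations(seqs, 2))
    total = sum(sum(a != b for a, b in zip(s, t)) / len(s) for s, t in pairs)
    return total / len(pairs)

def days_since_infection(diversity, slope=8000.0, intercept=10.0):
    # Hypothetical linear calibration; the real estimator fits a
    # mixed-effect model to single-lineage diversity.
    return intercept + slope * diversity

env = ["ACGTACGT", "ACGTACGA", "ACGAACGT"]  # toy aligned fragments
div = pairwise_diversity(env)               # 1/6 of sites differ on average
```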
Large-scale structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.
1983-01-01
Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large-scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single design methodology rather than trade-offs, and the incompatibility of large-scale optimization with single-program, single-computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop; full analysis is then performed only periodically. Problem-dependent software can be separated from the generic code using a systems programming technique, and then embodies the definitions of design variables, objective function, and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.
Li, Jing; He, Li; Fan, Xing; Chen, Yizhong; Lu, Hongwei
2017-08-01
This study presents a synergic optimization of greenhouse gas (GHG) emission control and system cost in integrated municipal solid waste (MSW) management on the basis of bi-level programming. The bi-level program is formulated by integrating minimization of GHG emissions at the leader level and of system cost at the follower level into a general MSW framework. Different from traditional single- or multi-objective approaches, the proposed bi-level programming is capable of not only addressing the tradeoffs but also dealing with the leader-follower relationship between decision makers who have dissimilar perspectives and interests. Placing GHG emission control at the leader level emphasizes the significant environmental concern in MSW management. A bi-level decision-making process based on satisfactory degree is then suitable for solving highly nonlinear problems with computational effectiveness. The capabilities and effectiveness of the proposed bi-level programming are illustrated by an application to an MSW management problem in Canada. Results show that the obtained optimal management strategy can bring considerable revenues, approximately 76 to 97 million dollars. To control GHG emissions, it would give priority to the development of the recycling facility throughout the whole period, especially in the latter periods. In terms of capacity, the existing landfill is sufficient for the next 30 years without development of new landfills, while expansion of the composting and recycling facilities should receive more attention.
A stochastic model for optimizing composite predictors based on gene expression profiles.
Ramanathan, Murali
2003-07-01
This project was done to develop a mathematical model for optimizing composite predictors based on gene expression profiles from DNA arrays and proteomics. The problem was amenable to a formulation and solution analogous to the portfolio optimization problem in mathematical finance: it requires the optimization of a quadratic function subject to linear constraints. The performance of the approach was compared to that of neighborhood analysis using a data set containing cDNA array-derived gene expression profiles from 14 multiple sclerosis patients receiving intramuscular inteferon-beta1a. The Markowitz portfolio model predicts that the covariance between genes can be exploited to construct an efficient composite. The model predicts that a composite is not needed for maximizing the mean value of a treatment effect: only a single gene is needed, but the usefulness of the effect measure may be compromised by high variability. The model optimized the composite to yield the highest mean for a given level of variability or the least variability for a given mean level. The choices that meet this optimization criteria lie on a curve of composite mean vs. composite variability plot referred to as the "efficient frontier." When a composite is constructed using the model, it outperforms the composite constructed using the neighborhood analysis method. The Markowitz portfolio model may find potential applications in constructing composite biomarkers and in the pharmacogenomic modeling of treatment effects derived from gene expression endpoints.
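The portfolio-style step, minimizing composite variability for a given mean (tracing out the efficient frontier), reduces to an equality-constrained quadratic program. A minimal sketch with made-up gene statistics, solving the KKT system directly; this is not the paper's code:

```python
import numpy as np

def min_variance_composite(Sigma, mu, target_mean):
    """Weights w minimizing composite variance w' Sigma w subject to
    sum(w) = 1 and mu' w = target_mean (equality-constrained QP via the
    KKT linear system)."""
    n = len(mu)
    A = np.vstack([np.ones(n), mu])              # constraint matrix
    K = np.block([[2 * Sigma, A.T],
                  [A, np.zeros((2, 2))]])
    rhs = np.concatenate([np.zeros(n), [1.0, target_mean]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                               # drop Lagrange multipliers

# Toy per-gene treatment-effect means and covariance (illustrative only)
mu = np.array([1.0, 0.8, 0.5])
Sigma = np.array([[0.10, 0.02, 0.00],
                  [0.02, 0.08, 0.01],
                  [0.00, 0.01, 0.12]])
w = min_variance_composite(Sigma, mu, target_mean=0.8)
composite_var = float(w @ Sigma @ w)
```

Sweeping `target_mean` and plotting `composite_var` against it traces the efficient frontier described above.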
NASA Astrophysics Data System (ADS)
Saloman, Edward B.; Kramida, Alexander
2017-08-01
The energy levels, observed spectral lines, and transition probabilities of singly ionized vanadium, V II, have been compiled. The experimentally derived energy levels belong to the configurations 3d⁴, 3d³ns (n = 4, 5, 6), 3d³np, 3d³nd (n = 4, 5), 3d³4f, 3d²4s², and 3d²4s4p. Also included are values for some forbidden lines that may be of interest to the astrophysical community. Experimental Landé g-factors and leading percentages for the levels are included when available, as well as Ritz wavelengths calculated from the energy levels. Wavelengths and transition probabilities are reported for 3568 and 1896 transitions, respectively. From the list of observed wavelengths, 407 energy levels are determined. The observed intensities, normalized to a common scale, are provided. From the newly optimized energy levels, a revised value for the ionization energy is derived, 118,030(60) cm⁻¹, corresponding to 14.634(7) eV. This is 130 cm⁻¹ higher than the previously recommended value from Iglesias et al.
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Li; Jiao, Yong-Chang
2016-07-01
This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued (continuous) decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions of the lower-level program, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and a smoothing technique is then applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem with only inequality constraints is handled using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
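The reduction can be illustrated on a toy integer bilevel problem in which the lower level is simple enough that its KKT conditions give a closed form, and the upper-level integers are enumerated as a stand-in for the discrete differential-evolution search; the problem itself is made up for illustration:

```python
def lower_level_solution(x):
    # Lower level: min_y (y - x)^2  s.t.  y >= 1.
    # KKT: 2(y - x) - mu = 0, mu(y - 1) = 0, mu >= 0, y >= 1,
    # which collapses to the closed form y* = max(x, 1).
    return max(x, 1.0)

def upper_objective(x, y):
    # Upper level: minimize F(x, y*) over the integer variable x.
    return (x - 2) ** 2 + y ** 2

# Enumerate the integer upper-level variable (a stand-in for the
# discrete differential-evolution search used in the paper).
best = min((upper_objective(x, lower_level_solution(x)), x)
           for x in range(6))
# best == (2.0, 1): x = 1 balances the two levels' objectives
```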
NASA Astrophysics Data System (ADS)
Ayadi, Omar; Felfel, Houssem; Masmoudi, Faouzi
2017-07-01
The current manufacturing environment has changed from traditional single-plant to multi-site supply chain where multiple plants are serving customer demands. In this article, a tactical multi-objective, multi-period, multi-product, multi-site supply-chain planning problem is proposed. A corresponding optimization model aiming to simultaneously minimize the total cost, maximize product quality and maximize the customer satisfaction demand level is developed. The proposed solution approach yields to a front of Pareto-optimal solutions that represents the trade-offs among the different objectives. Subsequently, the analytic hierarchy process method is applied to select the best Pareto-optimal solution according to the preferences of the decision maker. The robustness of the solutions and the proposed approach are discussed based on a sensitivity analysis and an application to a real case from the textile and apparel industry.
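The two-stage structure, generating a Pareto front and then letting the decision maker pick from it, can be sketched as follows. The candidate plans and weights are made up, and the final scoring is a plain weighted sum standing in for the full analytic hierarchy process (AHP derives its weights from pairwise comparisons):

```python
def dominates(a, b):
    """a dominates b when a is no worse in every objective and strictly
    better in at least one (all objectives to be minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy plans as (cost, -quality, -service level), all to be minimized
plans = [(100, -0.9, -0.95), (80, -0.7, -0.90),
         (120, -0.95, -0.99), (90, -0.6, -0.80)]
front = pareto_front(plans)          # the last plan is dominated

# Decision-maker step: rank the front with illustrative weights
weights = (0.5, 0.3, 0.2)
best = min(front, key=lambda p: sum(w * v for w, v in zip(weights, p)))
```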
Optimization of simultaneous tritium–radiocarbon internal gas proportional counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonicalzi, R. M.; Aalseth, C. E.; Day, A. R.
Specific environmental applications can benefit from dual tritium and radiocarbon measurements in a single compound. Assuming typical environmental levels, it is often the low tritium activity relative to the higher radiocarbon activity that limits the dual measurement. In this paper, we explore the parameter space for a combined tritium and radiocarbon measurement using a methane sample mixed with an argon fill gas in low-background proportional counters of a specific design. We present an optimized methane percentage, detector fill pressure, and analysis energy windows to maximize measurement sensitivity while minimizing count time. The final optimized method uses a 9-atm fill of P35 (35% methane, 65% argon) and a tritium analysis window from 1.5 to 10.3 keV, which stops short of the tritium beta-decay endpoint energy of 18.6 keV. This method optimizes tritium counting efficiency while minimizing radiocarbon beta-decay interference.
ERIC Educational Resources Information Center
Fallon, Lindsay M.; Collier-Meek, Melissa A.; Maggin, Daniel M.; Sanetti, Lisa M. H.; Johnson, Austin H.
2015-01-01
Optimal levels of treatment fidelity, a critical moderator of intervention effectiveness, are often difficult to sustain in applied settings. It is unknown whether performance feedback, a widely researched method for increasing educators' treatment fidelity, is an evidence-based practice. The purpose of this review was to evaluate the current…
Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers
NASA Astrophysics Data System (ADS)
Lemyre Garneau, Mathieu
A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single- or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, categorical), the number of constraints (from 5 to 17), and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) plant with molten-salt thermal storage. The model simulates the performance of the power plant using a high-level model of each of its main components, namely a heliostat field, a central cavity receiver, a molten-salt heat storage, a steam generator, and an idealized power block. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering, and spillage losses as a function of the design parameters. A Monte Carlo integration method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten-salt storage inventory is used to transfer thermal energy to the power block, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses, and component failures. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.
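The cosine-efficiency term in the layout strategy follows from the heliostat normal bisecting the sun and receiver directions. A toy Monte Carlo sketch averaging over sun elevations; the geometry and sampling are illustrative, not the model's:

```python
import math
import random

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cosine_efficiency(heliostat, receiver, sun_dir):
    """cos of the incidence angle on a heliostat whose normal bisects the
    sun direction and the heliostat-to-receiver direction."""
    to_receiver = unit(tuple(r - h for r, h in zip(receiver, heliostat)))
    s = unit(sun_dir)
    bisector = unit(tuple(a + b for a, b in zip(s, to_receiver)))
    return sum(a * b for a, b in zip(s, bisector))

# Monte Carlo average over sun elevations through a day (toy model: sun
# due south of the field, tower to the north; metres, z up).
random.seed(0)
receiver = (0.0, 0.0, 80.0)        # receiver atop the tower
heliostat = (0.0, -100.0, 0.0)     # one heliostat position
samples = []
for _ in range(1000):
    elev = random.uniform(math.radians(15), math.radians(75))
    sun = (0.0, -math.cos(elev), math.sin(elev))
    samples.append(cosine_efficiency(heliostat, receiver, sun))
avg = sum(samples) / len(samples)  # mean cosine efficiency for this spot
```

Repeating this per candidate position gives the per-heliostat score the layout strategy ranks.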
Zhou, Liang; Kwok, Chi-Chung; Cheng, Gang; Zhang, Hongjie; Che, Chi-Ming
2013-07-15
In this work, organic electroluminescent (EL) devices with double light-emitting layers (EMLs) having stepwise energy levels were designed to improve the EL performance of a red-light-emitting platinum(II) Schiff base complex. A series of devices with single or double EML(s) were fabricated and characterized. Compared with single-EML devices, double-EML devices showed improved EL efficiency and brightness, attributed to better balance in carriers. In addition, the stepwise distribution in energy levels of host materials is instrumental in broadening the recombination zone, thus delaying the roll-off of EL efficiency. The highest EL current efficiency and power efficiency of 17.36 cd/A and 14.73 lm/W, respectively, were achieved with the optimized double-EML devices. At high brightness of 1000 cd/m², EL efficiency as high as 8.89 cd/A was retained.
Chen, Zhenning; Shao, Xinxing; Xu, Xiangyang; He, Xiaoyuan
2018-02-01
The performance of digital image correlation (DIC), a technique widely used for noncontact deformation measurement in both scientific and engineering fields, is greatly affected by the quality of the speckle pattern. This study concerns the optimization of the digital speckle pattern (DSP) for DIC in consideration of both accuracy and efficiency. The root-mean-square error of the inverse compositional Gauss-Newton algorithm and the average number of iterations were used as quality metrics. Moreover, the influence of subset size and image noise level, which are the basic parameters in the quality assessment formulations, was also considered. The simulated binary speckle patterns were first compared with Gaussian speckle patterns and captured DSPs. Both single-radius and multi-radius DSPs were optimized. Experimental tests and analyses were conducted to obtain the optimized and recommended DSP. The vector diagram of the optimized speckle pattern is also provided as a reference.
Use of the Collaborative Optimization Architecture for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, R. D.; Moore, A. A.; Kroo, I. M.
1996-01-01
Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence that an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum-weight and minimum-cost concepts. The operational advantages of the collaborative optimization
Multi-level systems modeling and optimization for novel aircraft
NASA Astrophysics Data System (ADS)
Subramanian, Shreyas Vathul
This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems - a surveillance UAV swarm problem, and the design of noise optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. 
Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission best achieved via a large collection of interacting simple systems, or a relatively few highly capable, complex air vehicles). The vastly unexplored area of optimization in evolving design spaces will be studied and incorporated into the SoS optimization framework. We envision a framework that resembles a multi-level, multi-fidelity, multi-disciplinary assemblage of optimization problems. The challenge is not simply one of scaling up to a new level (the SoS), but recognizing that the aircraft sub-systems and the integrated vehicle are now intensely cyber-physical, with hardware and software components interacting in complex ways that give rise to new and improved capabilities. The work presented here is a step closer to modeling the information flow that exists in realistic SoS optimization problems between sub-contractors, contractors and the SoS architect.
A new high dynamic range ROIC with smart light intensity control unit
NASA Astrophysics Data System (ADS)
Yazici, Melik; Ceylan, Omer; Shafique, Atia; Abbasi, Shahbaz; Galioglu, Arman; Gurbuz, Yasar
2017-05-01
This paper presents a new high dynamic range ROIC with a smart pixel that contains two pre-amplifiers controlled by a circuit inside the pixel. Each pixel automatically decides which pre-amplifier is used according to the incoming illumination level. Instead of using a single pre-amplifier, two input pre-amplifiers, which are optimized for different signal levels, are placed inside each pixel. The smart circuit mechanism, which selects the best input circuit according to the incoming light level, is also designed into each pixel. In short, an individual pixel has the ability to select the input amplifier circuit that yields the highest SNR for the incoming signal level. A 32 × 32 ROIC prototype chip was designed to demonstrate the concept in 0.18 μm CMOS technology. The prototype is optimized for the NIR and SWIR bands. Instead of a detector, process-variation-optimized current sources are placed inside the ROIC. The chip achieves a minimum input-referred noise of 8.6 e- and a dynamic range of 98.9 dB, the highest dynamic range reported in the literature for analog ROICs in the SWIR band. It operates at room temperature with a power consumption of 2.8 μW per pixel.
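The per-pixel selection logic described above can be sketched abstractly. The threshold, gain values, and function names below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of per-pixel "smart" pre-amplifier selection:
# each pixel compares its input level against a threshold and routes the
# signal to the pre-amplifier optimized for that regime.

def select_preamp(photocurrent_nA, threshold_nA=10.0):
    """Return which pre-amplifier a pixel would use for a given input level."""
    # Low-light regime: high-gain, low-noise input stage.
    # High-light regime: low-gain, high-capacity input stage.
    return "high_gain" if photocurrent_nA < threshold_nA else "low_gain"

def readout(photocurrent_nA, threshold_nA=10.0):
    """Toy conversion: choosing gain per pixel extends usable dynamic range."""
    gains = {"high_gain": 100.0, "low_gain": 1.0}   # arbitrary relative gains
    amp = select_preamp(photocurrent_nA, threshold_nA)
    return amp, photocurrent_nA * gains[amp]

print(readout(0.5))    # dim pixel takes the high-gain path
print(readout(500.0))  # bright pixel takes the low-gain path
```

The real circuit makes this decision in analog hardware per pixel; the sketch only captures the routing decision.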
Coding stimulus amplitude by correlated neural activity
NASA Astrophysics Data System (ADS)
Metzen, Michael G.; Ávila-Åkerberg, Oscar; Chacron, Maurice J.
2015-04-01
While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated, but not single-neuron, activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. These results have furthermore demonstrated that such coding required, and was optimal for, a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, which takes into account key single-neuron properties such as firing rate and variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models over a range of parameter values. Further, we demonstrate a form of stochastic resonance, as optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated, but not single-neuron, activity can code for stimulus amplitude, and of how key single-neuron properties such as firing rate and variability influence such coding. Coding of stimulus amplitude by correlated, but not single-neuron, activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.
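The core phenomenon, correlation between two neurons tracking the amplitude of a shared input, can be illustrated with a toy Gaussian model. This is a gross simplification of the paper's linear-response treatment (no integrate-and-fire dynamics, no firing-rate dependence), intended only to show why pairwise correlation can carry amplitude information that single outputs do not:

```python
import numpy as np

rng = np.random.default_rng(0)

def output_correlation(amplitude, n=20000, noise_sd=1.0):
    """Correlation of two 'neurons' receiving a shared signal of the given
    amplitude plus independent noise. For this Gaussian toy model the
    expected value is amplitude**2 / (amplitude**2 + noise_sd**2)."""
    shared = amplitude * rng.standard_normal(n)
    a = shared + noise_sd * rng.standard_normal(n)
    b = shared + noise_sd * rng.standard_normal(n)
    return np.corrcoef(a, b)[0, 1]

weak, strong = output_correlation(0.3), output_correlation(1.5)
print(weak, strong)   # correlation increases with stimulus amplitude
```

Each individual output has the same zero mean regardless of amplitude, yet the pairwise correlation grows with it, mirroring the envelope-coding result described above.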
GaN-based superluminescent diodes with long lifetime
NASA Astrophysics Data System (ADS)
Castiglia, A.; Rossetti, M.; Matuschek, N.; Rezzonico, R.; Duelk, M.; Vélez, C.; Carlin, J.-F.; Grandjean, N.
2016-02-01
We report on the reliability of GaN-based super-luminescent light emitting diodes (SLEDs) emitting at a wavelength of 405 nm. We show that the Mg doping level in the p-type layers has an impact on both the device electro-optical characteristics and their reliability. Optimized doping levels allow decreasing the operating voltage on single-mode devices from more than 6 V to less than 5 V for an injection current of 100 mA. Furthermore, maximum output powers as high as 350 mW (for an injection current of 500 mA) have been achieved in continuous-wave operation (CW) at room temperature. Modules with standard and optimized p-type layers were finally tested in terms of lifetime, at a constant output power of 10 mW, in CW operation and at a case temperature of 25 °C. The modules with non-optimized p-type doping showed a fast and remarkable increase in the drive current during the first hundreds of hours together with an increase of the device series resistance. No degradation of the electrical characteristics was observed over 2000 h on devices with optimized p-type layers. The estimated lifetime for those devices was longer than 5000 h.
Mitigation of epidemics in contact networks through optimal contact adaptation *
Youssef, Mina; Scoglio, Caterina
2013-01-01
This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes while a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is shown to be a unique candidate representing the dynamical weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results shed light on the infection level at which the mitigation strategies are most effectively applied to the contact weights. PMID:23906209
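The tradeoff the formulation addresses can be sketched with a toy discrete-time SIR model over a weighted contact matrix. The rates, the `w_min` floor, and the update rule below are illustrative assumptions, not the paper's continuous-time optimal control model:

```python
import numpy as np

def sir_total_infections(W, beta=0.3, gamma=0.1, contact_scale=1.0,
                         w_min=0.1, steps=200, seed_node=0):
    """Discrete-time SIR over weighted contact matrix W. Contact weights are
    scaled down by `contact_scale` but kept above a global minimum w_min,
    mirroring the preserved-contact-level constraint described above."""
    n = W.shape[0]
    Wc = np.where(W > 0, np.maximum(W * contact_scale, w_min), 0.0)
    s, i, r = np.ones(n), np.zeros(n), np.zeros(n)
    s[seed_node], i[seed_node] = 0.0, 1.0
    for _ in range(steps):
        force = beta * (Wc @ i)                # per-node infection pressure
        new_inf = s * (1.0 - np.exp(-force))   # probability-style update
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return float((i + r).sum())               # cumulative infection cases

rng = np.random.default_rng(0)
W = (rng.random((30, 30)) < 0.2) * rng.random((30, 30))
W = (W + W.T) / 2.0
full = sir_total_infections(W, contact_scale=1.0)
reduced = sir_total_infections(W, contact_scale=0.3)
print(full, reduced)   # with these parameters, reduction lowers total cases
```

The optimal control problem in the paper chooses the reduction dynamically over time to balance this infection benefit against the cost of cutting contacts.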
Chen, Wen; Zhu, Ming-Dong; Yan, Xiao-Lan; Lin, Li-Jun; Zhang, Jian-Feng; Li, Li; Wen, Li-Yong
2011-06-01
To understand and evaluate the quality of feces examination for schistosomiasis in province-level laboratories of Zhejiang Province. Using the single-blind method, stool samples were examined by the stool hatching method and the sediment detection method. In the 3 quality control assessments in 2006, 2008 and 2009, most laboratories finished the examinations on time. The accordance rates of the detections were 88.9%, 100% and 93.9%, respectively. The province-level laboratories for schistosomiasis feces examination in Zhejiang Province are becoming standardized, and the techniques of schistosomiasis feces examination are being gradually optimized.
Robust design of configurations and parameters of adaptable products
NASA Astrophysics Data System (ADS)
Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua
2014-03-01
An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environmental impact through replacement of multiple different products with a single adaptable one. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design, and different product configuration states in operation, to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and the relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration and its parameter values for the adaptable product. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.
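A two-level configuration/parameter search of the kind described can be sketched as follows. The configuration performance models, the uncertainty range, and the mean-plus-spread robustness measure are hypothetical stand-ins, not the paper's formulation:

```python
import numpy as np

# Hypothetical performance models y(x, d): x is a design parameter,
# d a small uncertain perturbation. config_B is more sensitive to d.
configs = {
    "config_A": lambda x, d: (x - 2.0) ** 2 + 0.5 * d * x,
    "config_B": lambda x, d: (x - 1.0) ** 2 + 2.0 * d * x,
}

def robustness(perf, x, deltas=np.linspace(-0.1, 0.1, 21)):
    """Penalize both the mean performance and its spread under uncertainty."""
    vals = np.array([perf(x, d) for d in deltas])
    return vals.mean() + vals.std()

best = None
for cfg_name, perf in configs.items():           # outer: configuration level
    xs = np.linspace(0.0, 3.0, 301)
    scores = [robustness(perf, x) for x in xs]   # inner: parameter level
    i = int(np.argmin(scores))
    cand = (scores[i], cfg_name, float(xs[i]))
    if best is None or cand[0] < best[0]:
        best = cand

score, name, x_opt = best
print(name, round(x_opt, 2))   # the less uncertainty-sensitive configuration
```

The outer loop plays the role of the configuration-level search and the inner grid the parameter-level optimization; a real implementation would use a proper optimizer at both levels.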
Process Parameters Optimization in Single Point Incremental Forming
NASA Astrophysics Data System (ADS)
Gulati, Vishal; Aryal, Ashmin; Katyal, Puneet; Goswami, Amitesh
2016-04-01
This work aims to optimize the formability and surface roughness of parts formed by the single-point incremental forming process for an Aluminium-6063 alloy. The tests are based on Taguchi's L18 orthogonal array, selected on the basis of degrees of freedom (DOF). The tests have been carried out on a vertical machining center (DMC70V) using CAD/CAM software (SolidWorks V5/MasterCAM). Two levels of tool radius and three levels each of sheet thickness, step size, tool rotational speed, feed rate, and lubrication have been considered as the input process parameters. Wall angle and surface roughness have been considered as the process responses. The influential process parameters for formability and surface roughness have been identified with the help of statistical tools (response tables, main effect plots, and ANOVA). The parameter with the utmost influence on both formability and surface roughness is lubrication. In the case of formability, lubrication, followed by tool rotational speed, feed rate, sheet thickness, step size, and tool radius, have influence in descending order; in the case of surface roughness, lubrication, followed by feed rate, step size, tool radius, sheet thickness, and tool rotational speed, have influence in descending order. The predicted optimal values for the wall angle and surface roughness are 88.29° and 1.03225 µm. The confirmation experiments were conducted thrice, and the wall angle and surface roughness were found to be 85.76° and 1.15 µm, respectively.
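The response-table analysis used to rank parameter influence reduces to computing, for each factor, the mean response at each of its levels; the factor whose level means span the widest range is the most influential. A minimal sketch with a hypothetical 6-run fragment (not the paper's L18 data):

```python
import numpy as np

def main_effects(levels, response):
    """Mean response at each level of one factor (a 'response table' column)."""
    levels, response = np.asarray(levels), np.asarray(response)
    return {lv: response[levels == lv].mean() for lv in np.unique(levels)}

# Toy 6-run fragment: factor A at 2 levels, factor B at 3 levels.
A = [1, 1, 1, 2, 2, 2]
B = [1, 2, 3, 1, 2, 3]
y = [82.0, 84.0, 86.0, 83.0, 85.0, 88.0]   # e.g. wall angle, degrees

eff_A = main_effects(A, y)
eff_B = main_effects(B, y)

# The factor with the largest range of level means has the most influence.
rank = max([("A", eff_A), ("B", eff_B)],
           key=lambda kv: max(kv[1].values()) - min(kv[1].values()))[0]
print(rank)   # -> B (its level means span 82.5 to 87.0)
```

With a balanced orthogonal array such as the L18, each factor's level means are estimated from an equal number of runs, which is what makes this simple averaging valid.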
NASA Technical Reports Server (NTRS)
Stevenson, T. R.; Hsieh, W.-T.; Li, M. J.; Stahle, C. M.; Wollack, E. J.; Schoelkopf, R. J.; Teufel, J.; Krebs, Carolyn (Technical Monitor)
2002-01-01
Antenna-coupled superconducting tunnel junction detectors have the potential for photon-counting sensitivity at sub-mm wavelengths. The device consists of an antenna structure to couple radiation into a small superconducting volume and cause quasiparticle excitations, and a single-electron transistor to measure currents through tunnel junction contacts to the absorber volume. We will describe optimization of device parameters, and recent results on fabrication techniques for producing devices with high yield for detector arrays. We will also present modeling of expected saturation power levels, antenna coupling, and rf multiplexing schemes.
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to perform a simplification by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level. Consequently, subtle interactions among the design variables, which could be exploited to achieve the design objectives, go undetected. The proposed method combines Modern Design of Experiments techniques, to direct the exploration of the multi-dimensional design space, with a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insight into the available tradeoffs among competing design objectives, are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, providing a systematic foundation for advancements in structural design.
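The weakness of one-factor-at-a-time (OFAT) studies that the paper highlights can be shown in two lines of algebra: OFAT perturbs each variable around a fixed baseline and therefore never excites the cross term, while a two-level factorial design estimates it directly. A toy response with a deliberate interaction (purely illustrative, not the balance model):

```python
import itertools

# Toy response with an interaction term: y = x1 + x2 + 3*x1*x2.
def y(x1, x2):
    return x1 + x2 + 3.0 * x1 * x2

# One-factor-at-a-time around the baseline (0, 0) sees only main effects...
ofat_x1 = y(1, 0) - y(-1, 0)   # = 2.0, the slope of x1 alone
ofat_x2 = y(0, 1) - y(0, -1)   # = 2.0; the interaction is invisible

# ...while a 2^2 factorial design also estimates the interaction contrast.
runs = list(itertools.product([-1, 1], repeat=2))
obs = [y(a, b) for a, b in runs]
interaction = sum(a * b * v for (a, b), v in zip(runs, obs)) / 2.0
print(interaction)   # -> 6.0: the factorial design detects what OFAT misses
```

This is the rationale for replacing OFAT parameter studies with a designed experiment over the full set of balance design variables.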
On-chip Magnetic Separation and Cell Encapsulation in Droplets
NASA Astrophysics Data System (ADS)
Chen, A.; Byvank, T.; Bharde, A.; Miller, B. L.; Chalmers, J. J.; Sooryakumar, R.; Chang, W.-J.; Bashir, R.
2012-02-01
The demand for high-throughput single-cell assays is gaining importance because of the heterogeneity of many cell suspensions, even after significant initial sorting. These suspensions may display cell-to-cell variability at the gene expression level that could impact single-cell functional genomics, cancer and stem-cell research, and drug screening. On-chip monitoring of individual cells in an isolated environment can prevent cross-contamination and provide high recovery yield and the ability to study biological traits at the single-cell level. These advantages of on-chip biological experiments contrast with conventional methods, which require bulk samples that provide only averaged information on cell metabolism. We report on a device that integrates microfluidic technology with a magnetic tweezers array to combine the functionality of separation and encapsulation of objects, such as immunomagnetically labeled cells or magnetic beads, into pico-liter droplets on the same chip. The ability to control the separation throughput independently of the hydrodynamic droplet generation rate allows the encapsulation efficiency to be optimized. The device can potentially be integrated with on-chip labeling and/or bio-detection to become a powerful single-cell analysis device.
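Cell-in-droplet occupancy in random encapsulation is commonly modeled as Poisson, P(k) = λ^k e^(−λ)/k!, where λ is the mean number of cells per droplet. This is a standard assumption added here for illustration, not a claim from the paper, but it shows why decoupling the cell delivery rate from the droplet generation rate matters: single-cell yield P(1) peaks at λ = 1.

```python
import math

def p_single(lam):
    """Poisson probability of exactly one cell per droplet: lam * e^-lam."""
    return lam * math.exp(-lam)

rates = [0.1, 0.5, 1.0, 2.0, 4.0]   # mean cells per droplet (toy values)
best = max(rates, key=p_single)
print(best, round(p_single(best), 3))   # -> 1.0 0.368
```

A device that tunes the separation throughput toward λ ≈ 1 can thus approach the ~37% single-occupancy ceiling of purely random loading.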
NASA Astrophysics Data System (ADS)
Cleve, Marianne; Krämer, Martin; Gussew, Alexander; Reichenbach, Jürgen R.
2017-06-01
Phase and frequency corrections of magnetic resonance spectroscopic data are of major importance for obtaining reliable and unambiguous metabolite estimates, as validated in recent research for single-shot scans with the same spectral fingerprint. However, when using the J-difference editing technique 1H MEGA-PRESS, misalignment between the mean edited (ON) and non-edited (OFF) spectra that may remain even after correction of the corresponding individual single-shot scans results in subtraction artefacts, compromising reliable GABA quantitation. We present a fully automatic routine that iteratively optimizes the relative frequencies and phases between the mean ON and OFF 1H MEGA-PRESS spectra while minimizing the sum of the magnitude of the difference spectrum (L1 norm). The proposed method was applied to simulated spectra at different SNR levels with deliberately preset frequency and phase errors. Difference optimization proved to be more sensitive to small signal fluctuations, e.g. those arising from subtraction artefacts, and outperformed the alternative spectral registration approach, which, in contrast to our proposed linear approach, uses a nonlinear least squares minimization (L2 norm), at all investigated levels of SNR. Moreover, the proposed method was applied to 47 MEGA-PRESS datasets acquired in vivo at 3 T. The results of the alignment between the mean OFF and ON spectra were compared by applying (a) no correction, (b) difference optimization, or (c) spectral registration. Since the true frequency and phase errors are not known for in vivo data, manually corrected spectra were used as the gold standard reference (d). Automatically corrected data, applying either method (b) or method (c), showed distinct improvements in spectral quality, as revealed by a mean Pearson correlation coefficient between corresponding real-part mean DIFF spectra of R_bd = 0.997 ± 0.003 (method (b) vs. (d)), compared to R_ad = 0.764 ± 0.220 (method (a) vs. (d)) with no alignment between OFF and ON. Method (c) revealed a slightly lower correlation coefficient of R_cd = 0.972 ± 0.028 compared to R_bd, which can be ascribed to small remaining subtraction artefacts in the final DIFF spectrum. In conclusion, difference optimization performs robustly with no restrictions regarding the input data range or user intervention and represents a complementary tool to optimize the final DIFF spectrum, following the mandatory frequency and phase corrections of single ON and OFF scans prior to averaging.
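The L1-based difference optimization can be illustrated in a toy 1-D setting. The Gaussian "spectra", the grid search (standing in for the paper's iterative routine), and all parameter ranges below are assumptions for illustration only:

```python
import numpy as np

f = np.linspace(-10.0, 10.0, 512)            # arbitrary frequency axis
off = np.exp(-(f - 1.0) ** 2)                # toy OFF spectrum: one peak
true_shift, true_phase = 0.35, 0.4           # deliberately preset errors
on = np.exp(-(f - 1.0 - true_shift) ** 2) * np.exp(1j * true_phase)

def l1_cost(shift, phase):
    """L1 norm of the real-part difference after undoing shift and phase."""
    corrected = (on * np.exp(-1j * phase)).real
    realigned = np.interp(f + shift, f, corrected)
    return np.abs(realigned - off).sum()

# Coarse grid search stands in for the paper's iterative optimization.
shifts = np.linspace(-0.5, 0.5, 101)
phases = np.linspace(-1.0, 1.0, 101)
_, s_hat, p_hat = min((l1_cost(s, p), s, p) for s in shifts for p in phases)
print(round(s_hat, 2), round(p_hat, 2))   # approximately recovers the errors
```

Minimizing the L1 norm of the difference penalizes the sharp residual features characteristic of subtraction artefacts, which is the intuition behind preferring it over an L2 fit here.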
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rinaldi, I; Ludwig Maximilian University, Garching, DE; Heidelberg University Hospital, Heidelberg, DE
2015-06-15
Purpose: We present an improved method to calculate patient-specific calibration curves to convert X-ray computed tomography (CT) Hounsfield Units (HU) to relative stopping powers (RSP) for proton therapy treatment planning. Methods: By optimizing the HU-RSP calibration curve, the difference between a proton radiographic image and a digitally reconstructed X-ray radiograph (DRR) is minimized. The feasibility of this approach has previously been demonstrated. This scenario assumes that all discrepancies between the proton radiograph and the DRR originate from uncertainties in the HU-RSP curve. In reality, external factors cause imperfections in the proton radiograph, such as misalignment relative to the DRR and unfaithful representation of geometric structures ("blurring"). We analyze these effects based on synthetic datasets of anthropomorphic phantoms and suggest an extended optimization scheme which explicitly accounts for them. Performance of the method has been tested for various simulated irradiation parameters. The ultimate purpose of the optimization is to minimize uncertainties in the HU-RSP calibration curve. We therefore suggest and perform a thorough statistical treatment to quantify the accuracy of the optimized HU-RSP curve. Results: We demonstrate that without extending the optimization scheme, spatial blurring (equivalent to convolution with a FWHM = 3 mm kernel) in the proton radiographs can cause up to 10% deviation between the optimized and the ground-truth HU-RSP calibration curve. Instead, results obtained with our extended method reach 1% or better correspondence. We have further calculated gamma index maps for different acceptance levels. With DTA = 0.5 mm and RD = 0.5%, a passing ratio of 100% is obtained with the extended method, while an optimization neglecting the effects of spatial blurring only reaches ∼90%.
Conclusion: Our contribution underlines the potential of a single proton radiograph to generate a patient-specific calibration curve and to improve dose delivery by optimizing the HU-RSP calibration curve, as long as all sources of systematic incongruence are properly modeled.
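The gamma-index acceptance test used in the evaluation above can be sketched in 1-D (clinical gamma analysis is typically 2-D or 3-D; the profiles here are toy Gaussians, with DTA and RD values matching those quoted in the abstract):

```python
import numpy as np

def gamma_pass_ratio(x, ref, x_eval, ev, dta=0.5, rd=0.005):
    """1-D gamma index: a reference point passes if some evaluated point lies
    inside the combined distance-to-agreement / relative-dose tolerance."""
    passed = 0
    for xi, di in zip(x, ref):
        g2 = ((x_eval - xi) / dta) ** 2 + ((ev - di) / (rd * ref.max())) ** 2
        passed += g2.min() <= 1.0
    return passed / len(ref)

x = np.linspace(0.0, 10.0, 101)
ref = np.exp(-((x - 5.0) / 2.0) ** 2)    # toy reference profile
ev = np.exp(-((x - 5.2) / 2.0) ** 2)     # evaluated profile, shifted 0.2 mm
ratio = gamma_pass_ratio(x, ref, x, ev)
print(ratio)   # shift is well inside DTA = 0.5 mm, so all points pass
```

A 0.2 mm shift passes at DTA = 0.5 mm; deviations outside the ellipse defined by DTA and RD would lower the passing ratio, as in the ~90% figure reported for the unextended optimization.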
Optimization of Support Vector Machine (SVM) for Object Classification
NASA Technical Reports Server (NTRS)
Scholten, Matthew; Dhingra, Neil; Lu, Thomas T.; Chao, Tien-Hsin
2012-01-01
The Support Vector Machine (SVM) is a powerful algorithm for classifying data into classes. The SVMs implemented in this research were used as classifiers for the final stage in a Multistage Automatic Target Recognition (ATR) system. A single-kernel SVM known as SVMlight, and a modified version known as an SVM with K-Means Clustering, were used. These SVM algorithms were tested as classifiers under varying conditions: image noise levels varied, and the orientation of the targets changed. The classifiers were then optimized to demonstrate their maximum potential. The results demonstrate the reliability of the SVM as a method for classification; from trial to trial, the SVM produces consistent results.
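As a generic illustration of the SVM idea (the paper used SVMlight and a K-Means variant; the from-scratch linear SVM and toy data below are stand-ins, not the ATR pipeline):

```python
import numpy as np

# Two well-separated toy classes in 2-D, labeled -1 and +1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])

# Subgradient descent on the L2-regularized hinge loss (a minimal linear SVM).
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for _ in range(200):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) < 1.0:        # margin violation
            w += lr * (yi * xi - lam * w)
            b += lr * yi
        else:
            w -= lr * lam * w

accuracy = float((np.sign(X @ w + b) == y).mean())
print(accuracy)   # separable toy data, so near-perfect classification
```

Kernel SVMs such as SVMlight generalize this by replacing the inner product with a kernel function, which is what allows nonlinear decision boundaries in the ATR setting.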
The optimal power puzzle: scrutiny of the monotone likelihood ratio assumption in multiple testing.
Cao, Hongyuan; Sun, Wenguang; Kosorok, Michael R
2013-01-01
In single hypothesis testing, power is a non-decreasing function of type I error rate; hence it is desirable to test at the nominal level exactly to achieve optimal power. The puzzle lies in the fact that for multiple testing, under the false discovery rate paradigm, such a monotonic relationship may not hold. In particular, exact false discovery rate control may lead to a less powerful testing procedure if a test statistic fails to fulfil the monotone likelihood ratio condition. In this article, we identify different scenarios wherein the condition fails and give caveats for conducting multiple testing in practical settings.
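For reference, the step-up procedure that defines the false discovery rate paradigm discussed above is the classic Benjamini-Hochberg method (standard background, not this paper's contribution), which can be written compactly:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up procedure: reject the k smallest p-values, where k is the
    largest index with p_(k) <= q * k / m. Returns a boolean rejection mask."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
rejected = benjamini_hochberg(pvals, q=0.05)
print(rejected.sum())   # -> 2: only the two smallest p-values are rejected
```

The paper's point is that, unlike the single-test setting, procedures built on this paradigm can lose power when the test statistic violates the monotone likelihood ratio condition, so exact FDR control is not automatically optimal.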
An optimal brain can be composed of conflicting agents
Livnat, Adi; Pippenger, Nicholas
2006-01-01
Many behaviors have been attributed to internal conflict within the animal and human mind. However, internal conflict has not been reconciled with evolutionary principles, in that it appears maladaptive relative to a seamless decision-making process. We study this problem through a mathematical analysis of decision-making structures. We find that, under natural physiological limitations, an optimal decision-making system can involve “selfish” agents that are in conflict with one another, even though the system is designed for a single purpose. It follows that conflict can emerge within a collective even when natural selection acts on the level of the collective only. PMID:16492775
Verguet, Stéphane; Johri, Mira; Morris, Shaun K.; Gauvreau, Cindy L.; Jha, Prabhat; Jit, Mark
2015-01-01
Background The Measles & Rubella Initiative, a broad consortium of global health agencies, has provided support to measles-burdened countries, focusing on sustaining high coverage of routine immunization of children and supplementing it with a second dose opportunity for measles vaccine through supplemental immunization activities (SIAs). We estimate optimal scheduling of SIAs in countries with the highest measles burden. Methods We develop an age-stratified dynamic compartmental model of measles transmission. We explore the frequency of SIAs in order to achieve measles control in selected countries and two Indian states with high measles burden. Specifically, we compute the maximum allowable time period between two consecutive SIAs to achieve measles control. Results Our analysis indicates that a single SIA will not control measles transmission in any of the countries with high measles burden. However, regular SIAs at high coverage levels are a viable strategy to prevent measles outbreaks. The periodicity of SIAs differs between countries and even within a single country, and is determined by population demographics and existing routine immunization coverage. Conclusions Our analysis can guide country policymakers deciding on the optimal scheduling of SIA campaigns and the best combination of routine and SIA vaccination to control measles. PMID:25541214
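The scheduling logic behind the analysis can be caricatured with a susceptible-buildup sketch: unvaccinated births accumulate susceptibles, an SIA periodically immunizes a fraction of them, and outbreaks become possible once the susceptible fraction exceeds the textbook threshold 1/R0. The birth rate, coverages, and threshold below are illustrative assumptions, not the paper's calibrated age-stratified model:

```python
def max_susceptible_fraction(birth_rate=0.03, routine_cov=0.8,
                             sia_interval_yrs=4, sia_cov=0.9, years=40):
    """Peak susceptible fraction over time: births missed by routine
    immunization accumulate; each SIA immunizes a fraction of them."""
    s, peak = 0.0, 0.0
    for month in range(years * 12):
        s += birth_rate / 12 * (1 - routine_cov)   # susceptible inflow
        if sia_interval_yrs and month and month % (sia_interval_yrs * 12) == 0:
            s *= (1 - sia_cov)                     # SIA immunizes a fraction
        peak = max(peak, s)
    return peak

R0 = 15.0
threshold = 1.0 / R0           # ~6.7% susceptible permits an outbreak
no_sia = max_susceptible_fraction(sia_interval_yrs=0)
with_sia = max_susceptible_fraction(sia_interval_yrs=4)
print(no_sia > threshold, with_sia < threshold)
```

In this caricature a single campaign cannot help for long, while regular campaigns cap the susceptible pool, and the maximum allowable interval between SIAs depends on the birth rate and routine coverage, which is the qualitative conclusion of the paper.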
An Effective and Novel Neural Network Ensemble for Shift Pattern Detection in Control Charts.
Barghash, Mahmoud
2015-01-01
Pattern recognition in control charts is critical to striking a balance between discovering faults as early as possible and reducing the number of false alarms. This work is devoted to designing a multistage neural network ensemble that achieves this balance, reducing rework and scrap without reducing productivity. The ensemble under focus is composed of a series of neural network stages and a series of decision points. Initially, this work compared the effect of multiple decision points versus a single decision point on the performance of the ANN, which showed that multiple decision points are highly preferable. This work also tested the effect of population percentages on the ANN and used this to optimize the ANN's performance. It further used optimized and non-optimized ANNs in an ensemble and showed that including non-optimized ANNs may reduce the performance of the ensemble. The ensemble that used only optimized ANNs improved performance over both individual ANNs and the three-sigma rule. In that respect, using the designed ensemble can help in reducing the number of false stops and increasing productivity. It can also be used to discover even small shifts in the mean as early as possible.
Impact of NICU design on environmental noise.
Szymczak, Stacy E; Shellhaas, Renée A
2014-04-01
For neonates requiring intensive care, the optimal sound environment is uncertain. Minimal disruptions from medical staff create quieter environments for sleep, but limit language exposure necessary for proper language development. There are two models of neonatal intensive care units (NICUs): open-bay, in which 6-to-10 infants are cared for in a single large room; and single-room, in which neonates are housed in private, individual hospital rooms. We compared the acoustic environments in the two NICU models. We extracted the audio tracks from video-electroencephalography (EEG) monitoring studies from neonates in an open-bay NICU and compared the acoustic environment to that recorded from neonates in a new single-room NICU. From each NICU, 18 term infants were studied (total N=36; mean gestational age 39.3±1.9 weeks). Neither z-scores of the sound level variance (0.088±0.03 vs. 0.083±0.03, p=0.7), nor percent time with peak sound variance (above 2 standard deviations; 3.6% vs. 3.8%, p=0.6) were different. However, time below 0.05 standard deviations was higher in the single-room NICU (76% vs. 70%, p=0.02). We provide objective evidence that single-room NICUs have equal sound peaks and overall noise level variability compared with open-bay units, but the former may offer significantly more time at lower noise levels.
A terahertz performance of hybrid single walled CNT based amplifier with analytical approach
NASA Astrophysics Data System (ADS)
Kumar, Sandeep; Song, Hanjung
2018-01-01
This work focuses on the terahertz performance of a hybrid single-walled carbon nanotube (CNT) based amplifier, proposed for soil-parameter measurement applications. The proposed circuit topology provides a hybrid structure that achieves a wide impedance bandwidth of 0.33 THz, within the range of 1.07 THz to 1.42 THz, with a fractional bandwidth of 28%. The single-walled RF CNT network realizes the proposed design and demonstrates its ability to resonate at 1.25 THz, supported by an analytical approach. Moreover, an RF microstrip transmission line radiator is used as a compensator in the circuit topology, achieving more than 30 dB of gain. A suitable methodology is chosen to achieve stability at the circuit level in order to obtain the desired optimal conditions. The fundamental approach optimizes the matched impedance condition at (50+j0) Ω and the noise variation under the impact of series resistances for the proposed hybrid circuit topology, and demonstrates the accuracy of the performance parameters at the circuit level. The proposed circuit was fabricated in a commercial 45 nm RF CMOS process, and the fabricated chip reveals promising results consistent with simulation. Additionally, power measurement analysis achieves a highest output power of 26 dBm with a power-added efficiency of 78%. The achieved minimum noise figure of 0.4 dB to 0.6 dB is an outstanding result for a circuit topology in the terahertz range. The chip area of the hybrid circuit is 0.65 mm2, with a power consumption of 9.6 mW.
NASA Astrophysics Data System (ADS)
Crayton, Samuel
The rapidly progressing field of nanotechnology promises to revolutionize healthcare in the 21st century, with applications in the prevention, diagnosis, and treatment of a wide range of diseases. However, before nanoparticulate agents can be brought into clinical use, they must first be developed, optimized, and evaluated in animal models. In the typical pre-clinical paradigm, almost all of the optimization is done at the in vitro level, with only a few select agents reaching the level of animal studies. Since only one experimental nanoparticle formulation can be investigated in a single animal, and in vivo experiments have relatively higher complexity, cost, and time requirements, it is not feasible to evaluate a very large number of agents at the in vivo stage. A major drawback of this approach, however, is that in vitro assays do not always accurately predict how a nanoparticle will perform in animal studies. Therefore, a method that allows many agents to be evaluated in a single animal subject would allow for much more efficient and predictive optimization of nanoparticles. We have found that by incorporating lanthanide tracer metals into nanoparticle formulations, we are successfully able to use inductively coupled plasma mass spectrometry (ICP-MS) to quantitatively determine a nanoparticle's blood clearance kinetics, biodistribution, and tumor delivery. This approach was applied to evaluate both passive and active tumor targeting, as well as metabolically directed targeting of nanoparticles to low pH tumor microenvironments. Importantly, we found that these in vivo measurements could be made for many nanoparticle formulations simultaneously, in single animals, due to the high-order multiplexing capability of mass spectrometry. This approach allowed for efficient and reproducible comparison of performance between different nanoparticle formulations, by eliminating the effects of subject-to-subject variability. 
In the future, we envision that this "higher-throughput" evaluation of agents at the in vivo level, using ICP-MS multiplex analysis, will constitute a powerful tool to accelerate pre-clinical evaluation of nanoparticles in animal models.
Scheef, Lukas; Nordmeyer-Massner, Jurek A; Smith-Collins, Adam Pr; Müller, Nicole; Stegmann-Woessner, Gaby; Jankowski, Jacob; Gieseke, Jürgen; Born, Mark; Seitz, Hermann; Bartmann, Peter; Schild, Hans H; Pruessmann, Klaas P; Heep, Axel; Boecker, Henning
2017-01-01
Functional magnetic resonance imaging (fMRI) in neonates has been introduced as a non-invasive method for studying sensorimotor processing in the developing brain. However, previous neonatal studies have delivered conflicting results regarding localization, lateralization, and directionality of blood oxygenation level dependent (BOLD) responses in sensorimotor cortex (SMC). Confounding factors in interpreting neonatal fMRI studies include the use of standard adult MR-coils, which provide insufficient signal to noise, and liberal statistical thresholds, which compromise clinical interpretation at the single-subject level. Here, we employed a custom-designed neonatal MR-coil adapted and optimized to the head size of a newborn in order to improve the robustness, reliability, and validity of neonatal sensorimotor fMRI. Thirteen preterm infants with a median gestational age of 26 weeks were scanned at term-corrected age using a prototype 8-channel neonatal head coil at 3T (Achieva, Philips, Best, NL). Sensorimotor stimulation was elicited by passive extension/flexion of the elbow at 1 Hz in a block design. Analysis of temporal signal to noise ratio (tSNR) was performed on the whole brain and the SMC, and was compared to data acquired with an 'adult' 8-channel head coil published previously. Task-evoked activation was determined by single-subject SPM8 analyses, thresholded at p < 0.05, whole-brain FWE-corrected. Using the custom-designed neonatal MR-coil, we found significant positive BOLD responses in contralateral SMC after unilateral passive sensorimotor stimulation in all neonates (analyses restricted to artifact-free data sets = 8/13). Improved imaging characteristics of the neonatal MR-coil were evidenced by additional phantom and in vivo tSNR measurements: phantom studies revealed a 240% global increase in tSNR; in vivo studies revealed a 73% global and a 55% local (SMC) increase in tSNR, as compared to the 'adult' MR-coil.
Our findings strengthen the importance of using optimized coil settings for neonatal fMRI, yielding robust and reproducible SMC activation at the single subject level. We conclude that functional lateralization of SMC activation, as found in children and adults, is already present in the newborn period.
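The tSNR comparison reported above reduces to a simple per-voxel computation: the temporal mean of the signal divided by its temporal standard deviation, with coil improvements expressed as a relative gain. A minimal sketch (the voxel time series below are hypothetical, not study data):

```python
from statistics import mean, pstdev

def tsnr(timeseries):
    """Temporal SNR of one voxel: mean signal divided by its SD over time."""
    m = mean(timeseries)
    s = pstdev(timeseries)
    return m / s if s > 0 else float("inf")

def percent_gain(tsnr_new, tsnr_old):
    """Relative tSNR improvement, e.g. neonatal vs. adult coil."""
    return 100.0 * (tsnr_new - tsnr_old) / tsnr_old

# Toy voxel time series (illustrative values only)
neonatal = [100, 102, 99, 101, 100, 98, 101, 100]
adult    = [100, 110, 92, 105, 96, 108, 90, 104]
```

A whole-brain map would apply `tsnr` voxel-wise; the 73% global gain reported above corresponds to `percent_gain` averaged over such a map.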
Kim, Hyunsoo; Tanatar, M A; Martin, C; Blomberg, E C; Ni, Ni; Bud'ko, S L; Canfield, P C; Prozorov, R
2018-06-06
Doping evolution of the superconducting gap anisotropy was studied in single crystals of 4d-electron doped Ba(Fe1-xRhx)2As2 using tunnel diode resonator measurements of the temperature variation of the London penetration depth Δλ(T). Single crystals with doping levels representative of an underdoped regime x = 0.039 (Tc = 15.5 K), close to optimal doping x = 0.057 (Tc = 24.4 K), and overdoped x = 0.079 (Tc = 21.5 K) and x = 0.131 (Tc = 4.9 K) were studied. Superconducting energy gap anisotropy was characterized by the exponent n, obtained by fitting the data to the power law Δλ = A·T^n. The exponent n varies non-monotonically with x, increasing to a maximum n = 2.5 for x = 0.079 and rapidly decreasing towards overdoped compositions to 1.6 for x = 0.131. This behavior is qualitatively similar to the doping evolution of the superconducting gap anisotropy in other iron pnictides, including hole-doped (Ba,K)Fe2As2 and 3d-electron-doped Ba(Fe,Co)2As2 superconductors, finding a full gap near optimal doping and strong anisotropy toward the ends of the superconducting dome in the T-x phase diagram. The normalized superfluid density in an optimally Rh-doped sample is almost identical to the temperature dependence in the optimally doped Ba(Fe,Co)2As2 samples. Our study supports the universal superconducting gap variation with doping and s± pairing at least in iron-based superconductors of the BaFe2As2 family.
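The exponent n in the power law Δλ = A·T^n is typically extracted by a least-squares fit, which becomes linear regression in log-log coordinates. A minimal sketch on synthetic data (not the measured penetration depths):

```python
import math

def fit_power_law(T, dlam):
    """Fit dlam = A * T**n by least squares in log-log space.
    Returns (A, n)."""
    xs = [math.log(t) for t in T]
    ys = [math.log(d) for d in dlam]
    N = len(xs)
    xbar = sum(xs) / N
    ybar = sum(ys) / N
    # slope of the log-log regression line is the exponent n
    n = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    logA = ybar - n * xbar
    return math.exp(logA), n

# Synthetic data with n = 2.5 (the exponent reported at x = 0.079)
T = [0.5, 1.0, 1.5, 2.0, 3.0]
dlam = [3.0 * t ** 2.5 for t in T]
A, n = fit_power_law(T, dlam)
```

In practice the fit is restricted to the low-temperature range (T ≪ Tc), where the power-law form of Δλ(T) is expected to hold.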
Gao, Shun-Yu; Zhang, Xiao-Peng; Cui, Yong; Sun, Ying-Shi; Tang, Lei; Li, Xiao-Ting; Zhang, Xiao-Yan; Shan, Jun
2014-08-01
To explore whether single and fused monochromatic images can improve liver tumor detection and delineation by single source dual energy CT (ssDECT) in patients with hepatocellular carcinoma (HCC) during arterial phase. Fifty-seven patients with HCC who underwent ssDECT scanning at Beijing Cancer Hospital were enrolled retrospectively. Twenty-one sets of monochromatic images from 40 to 140 keV were reconstructed at 5 keV intervals in arterial phase. The optimal contrast-noise ratio (CNR) monochromatic images of the liver tumor and the lowest-noise monochromatic images were selected for image fusion. We evaluated the image quality of the optimal-CNR monochromatic images, the lowest-noise monochromatic images and the fused monochromatic images, respectively. The evaluation indicators included the spatial resolution of the anatomical structure, the noise level, the contrast and CNR of the tumor. In arterial phase, the anatomical structure of the liver can be displayed most clearly in the 65-keV monochromatic images, with the lowest image noise. The optimal-CNR monochromatic images of HCC tumor were 50-keV monochromatic images in which the internal structural features of the liver tumors were displayed most clearly and meticulously. For tumor detection, the fused monochromatic images and the 50-keV monochromatic images had similar performances, and were more sensitive than 65-keV monochromatic images. We achieved good arterial phase images by fusing the optimal-CNR monochromatic images of the HCC tumor and the lowest-noise monochromatic images. The fused images displayed liver tumors and anatomical structures more clearly, which is potentially helpful for identifying more and smaller HCC tumors.
NASA Technical Reports Server (NTRS)
Stevenson, T. R.; Hsieh, W.-T.; Li, M. J.; Stahle, C. M.; Wollack, E. J.; Schoelkopf, R. J.; Krebs, Carolyn (Technical Monitor)
2002-01-01
The science drivers for the SPIRIT/SPECS missions demand sensitive, fast, compact, low-power, large-format detector arrays for high resolution imaging and spectroscopy in the far infrared and submillimeter. Detector arrays with 10,000 pixels and sensitivity less than 10(exp -20) W/Hz(exp 0.5) are needed. Antenna-coupled superconducting tunnel junction detectors with integrated rf single-electron transistor readout amplifiers have the potential for achieving this high level of sensitivity, and can take advantage of an rf multiplexing technique when forming arrays. The device consists of an antenna structure to couple radiation into a small superconducting volume and cause quasiparticle excitations, and a single-electron transistor to measure currents through tunnel junction contacts to the absorber volume. We will describe optimization of device parameters, and recent results on fabrication techniques for producing devices with high yield for detector arrays. We will also present modeling of expected saturation power levels, antenna coupling, and rf multiplexing schemes.
Single-trial log transformation is optimal in frequency analysis of resting EEG alpha.
Smulders, Fren T Y; Ten Oever, Sanne; Donkers, Franc C L; Quaedflieg, Conny W E M; van de Ven, Vincent
2018-02-01
The appropriate definition and scaling of the magnitude of electroencephalogram (EEG) oscillations is an underdeveloped area. The aim of this study was to optimize the analysis of resting EEG alpha magnitude, focusing on alpha peak frequency and nonlinear transformation of alpha power. A family of nonlinear transforms, Box-Cox transforms, were applied to find the transform that (a) maximized an undisputed effect: the increase in alpha magnitude when the eyes are closed (Berger effect), and (b) made the distribution of alpha magnitude closest to normal across epochs within each participant, or across participants. The transformations were performed either at the single epoch level or at the epoch-average level. Alpha peak frequency showed large individual differences, yet good correspondence between various ways to estimate it in 2 min of eyes-closed and 2 min of eyes-open resting EEG data. Both alpha magnitude and the Berger effect were larger for individual alpha than for a generic (8-12 Hz) alpha band. The log-transform on single epochs (a) maximized the t-value of the contrast between the eyes-open and eyes-closed conditions when tested within each participant, and (b) rendered near-normally distributed alpha power across epochs and participants, thereby making further transformation of epoch averages superfluous. The results suggest that the log-normal distribution is a fundamental property of variations in alpha power across time in the order of seconds. Moreover, effects on alpha power appear to be multiplicative rather than additive. These findings support the use of the log-transform on single epochs to achieve appropriate scaling of alpha magnitude. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
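A minimal sketch of the one-parameter Box-Cox family, whose λ = 0 limit is the log-transform singled out above, together with a paired-contrast t-value of the kind used to score the Berger effect (the epoch values are invented for illustration, not study data):

```python
import math

def box_cox(x, lam):
    """One-parameter Box-Cox transform; lam = 0 reduces to the log-transform."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def berger_effect_t(closed, open_, lam):
    """Paired t-value of eyes-closed minus eyes-open alpha power after
    transforming single epochs with Box-Cox(lam)."""
    diffs = [box_cox(c, lam) - box_cox(o, lam) for c, o in zip(closed, open_)]
    n = len(diffs)
    m = sum(diffs) / n
    var = sum((d - m) ** 2 for d in diffs) / (n - 1)  # sample variance
    return m / math.sqrt(var / n)

# Hypothetical single-epoch alpha powers for one participant
t_log = berger_effect_t([2.0, 3.5, 9.0, 15.0], [1.0, 2.0, 4.0, 8.0], lam=0)
```

The study's criterion amounts to scanning λ and keeping the value that maximizes this t-value; λ = 0 winning is what supports the log-normal interpretation.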
NASA Astrophysics Data System (ADS)
Dubroca, Guilhem; Richert, Michaël; Loiseaux, Didier; Caron, Jérôme; Bézy, Jean-Loup
2015-09-01
To increase the accuracy of earth-observation spectro-imagers, it is necessary to achieve high levels of depolarization of the incoming beam. The preferred device in space instruments is the so-called polarization scrambler, made of birefringent crystal wedges arranged in a single or dual Babinet configuration. Today, with required radiometric accuracies on the order of 0.1%, it is necessary to develop tools that find optimal, low-sensitivity solutions quickly and that measure performance with a high level of accuracy.
High-level expression of Camelid nanobodies in Nicotiana benthamiana.
Teh, Yi-Hui Audrey; Kavanagh, Tony A
2010-08-01
Nanobodies (or VHHs) are single-domain antigen-binding fragments derived from Camelid heavy chain-only antibodies. Their small size, monomeric behaviour, high stability and solubility, and ability to bind epitopes not accessible to conventional antibodies make them especially suitable for many therapeutic and biotechnological applications. Here we describe high-level expression, in Nicotiana benthamiana, of three versions of an anti-hen egg white lysozyme (HEWL) nanobody which include the original VHH from an immunized library (cAbLys3), a codon-optimized derivative, and a codon-optimized hybrid nanobody comprising the CDRs of cAbLys3 grafted onto an alternative 'universal' nanobody framework. His6- and StrepII-tagged derivatives of each nanobody were targeted for accumulation in the cytoplasm, chloroplast and apoplast using different pre-sequences. When targeted to the apoplast, intact functional nanobodies accumulated at an exceptionally high level (up to 30% total leaf protein), demonstrating the great potential of plants as a nanobody production system.
The Optimal Forest Rotation: A Discussion and Annotated Bibliography
David H. Newman
1988-01-01
The literature contains six different criteria of the optimal forest rotation: (1) maximum single-rotation physical yield, (2) maximum single-rotation annual yield, (3) maximum single-rotation discounted net revenues, (4) maximum discounted net revenues from an infinite series of rotations, (5) maximum annual net revenues, and (6) maximum internal rate of return. First...
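Criteria (3) and (4) above differ only in whether the land is re-used after harvest: criterion (4) is the Faustmann formula, which divides the single-rotation discounted net revenue by 1 - e^(-rT) to account for an infinite series of identical rotations. A sketch under standard forestry-economics assumptions (the revenue function, rate, and horizon are hypothetical):

```python
import math

def single_rotation_npv(net_revenue, T, r, cost=0.0):
    """Criterion 3: discounted net revenue of one rotation of length T."""
    return net_revenue * math.exp(-r * T) - cost

def faustmann_npv(net_revenue, T, r, cost=0.0):
    """Criterion 4: value of an infinite series of identical rotations
    (Faustmann land expectation value)."""
    return single_rotation_npv(net_revenue, T, r, cost) / (1.0 - math.exp(-r * T))

def argmax_rotation(revenue_fn, r, horizon, cost=0.0, step=1):
    """Grid-search the rotation age that maximizes the Faustmann value."""
    best_T, best_v = None, float("-inf")
    for T in range(step, horizon + 1, step):
        v = faustmann_npv(revenue_fn(T), T, r, cost)
        if v > best_v:
            best_T, best_v = T, v
    return best_T, best_v

# Hypothetical stand-value growth curve
def revenue(T):
    return 1000.0 * (1.0 - math.exp(-0.08 * T)) ** 3

best_T, best_v = argmax_rotation(revenue, r=0.05, horizon=100)
```

Because future rotations add an opportunity cost of holding the land, the Faustmann-optimal rotation is shorter than the age that maximizes a single rotation's discounted revenue.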
MobiDB-lite: fast and highly specific consensus prediction of intrinsic disorder in proteins.
Necci, Marco; Piovesan, Damiano; Dosztányi, Zsuzsanna; Tosatto, Silvio C E
2017-05-01
Intrinsic disorder (ID) is established as an important feature of protein sequences. Its use in proteome annotation is however hampered by the availability of many methods with similar performance at the single residue level, which have mostly not been optimized to predict long ID regions of size comparable to domains. Here, we have focused on providing a single consensus-based prediction, MobiDB-lite, optimized for highly specific (i.e. few false positive) predictions of long disorder. The method uses eight different predictors to derive a consensus which is then filtered for spurious short predictions. Consensus prediction is shown to outperform the single methods when annotating long ID regions. MobiDB-lite can be useful in large-scale annotation scenarios and has indeed already been integrated in the MobiDB, DisProt and InterPro databases. MobiDB-lite is available as part of the MobiDB database from URL: http://mobidb.bio.unipd.it/. An executable can be downloaded from URL: http://protein.bio.unipd.it/mobidblite/. silvio.tosatto@unipd.it. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
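The consensus-then-filter logic described above can be sketched as follows; the 5-of-8 agreement threshold and 20-residue minimum length are assumptions chosen for illustration, not MobiDB-lite's published parameters:

```python
def consensus_disorder(predictions, agreement=5, min_len=20):
    """Consensus intrinsic-disorder call from several per-residue predictors.

    predictions: equal-length 0/1 lists (1 = residue predicted disordered).
    agreement: minimum number of predictors that must agree per residue.
    min_len: discard disordered stretches shorter than this, keeping only
    long ID regions and so trading sensitivity for specificity.
    Returns a 0/1 consensus list.
    """
    n = len(predictions[0])
    raw = [1 if sum(p[i] for p in predictions) >= agreement else 0
           for i in range(n)]
    out = raw[:]
    i = 0
    while i < n:                       # erase spurious short runs of 1s
        if raw[i] == 1:
            j = i
            while j < n and raw[j] == 1:
                j += 1
            if j - i < min_len:
                for k in range(i, j):
                    out[k] = 0
            i = j
        else:
            i += 1
    return out
```

The length filter is what makes the consensus "highly specific": isolated short calls, which single predictors disagree on most, never survive.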
Coelho, Joseph R.; Hastings, Jon M.; Holliday, Charles W.
2012-01-01
This study evaluated foraging effectiveness of Pacific cicada killers (Sphecius convallis) by comparing observed prey loads to those predicted by an optimality model. Female S. convallis preyed exclusively on the cicada Tibicen parallelus, resulting in a mean loaded flight muscle ratio (FMR) of 0.187 (N = 46). This value lies just above the marginal level, and only seven wasps (15%) were below 0.179. The low standard error (0.002) suggests that S. convallis comes closer to this ideal than any flying predator so far examined. Preying on a single species may have allowed stabilizing selection to adjust the morphology of females to a nearly ideal size. That the loaded FMR is slightly above the marginal level may provide a small safety factor for wasps that do not have optimal thorax temperatures or that have to contend with attempted prey theft. Operational FMR was directly related to wasp body mass. Smaller wasps were overloaded in spite of provisioning with smaller cicadas, while larger wasps were underloaded despite provisioning with larger cicadas. Small wasps may have abandoned larger cicadas because of difficulty with carriage. PMID:26467953
Adaptive track scheduling to optimize concurrency and vectorization in GeantV
Apostolakis, J.; Bandieramonte, M.; Bitzes, G.; ...
2015-05-22
The GeantV project is focused on the R&D of new particle transport techniques to maximize parallelism on multiple levels, profiting from the use of both SIMD instructions and co-processors for the CPU-intensive calculations specific to this type of application. In our approach, vectors of tracks belonging to multiple events and matching different locality criteria must be gathered and dispatched to algorithms having vector signatures. While the transport propagates tracks and changes their individual states, data locality becomes harder to maintain. The scheduling policy has to be changed to maintain efficient vectors while keeping an optimal level of concurrency. The model has complex dynamics requiring tuning of the thresholds used to switch between the normal regime and special modes, i.e. prioritizing events to allow flushing memory, adding new events to the transport pipeline to boost locality, dynamically adjusting the particle vector size, or switching from vector to single-track mode when vectorization causes only overhead. Finally, this work requires a comprehensive study to optimize these parameters and make the behaviour of the scheduler self-adapting; we present its initial results here.
NASA Astrophysics Data System (ADS)
Rahman, Shariffah Nurhidayah Syed Abdul; Kalil, Mohd Sahaid; Hamid, Aidil Abdul
2018-04-01
Optimization of the fermentation medium for production of docosahexaenoic acid (DHA) by Aurantiochytrium sp. SW1 was carried out. In this study, levels of fructose, monosodium glutamate (MSG) and sea salt were optimized for enhanced lipid and DHA production using response surface methodology (RSM). The design contained a total of 20 runs with 6 central-point replications. Cultivation was carried out in 500 mL flasks containing 100 mL nitrogen-limited medium at 30°C for 96 h. Sequential model sum of squares (SS) revealed that the system was adequately represented by a quadratic model (p < 0.0001). ANOVA results showed that fructose and MSG each had a significant positive effect, as single factors, on the DHA content of SW1. The estimated optimal levels of the factors were 100 g/L fructose, 8 g/L MSG and 47% sea salt. Subsequent cultivation employing the suggested values confirmed that the predicted response values were experimentally achievable and reproducible, with 8.82 g/L DHA (51.34% g/g lipid) achieved.
Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto
2017-06-26
Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance in accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on optimized geometries using MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method as well as MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with a MP2/CBS single point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error for the methods described and we hope will spur further research into QM and MM methods.
Park, Jae-Yong; Koo, Bon Seok
2014-06-01
Despite an excellent prognosis, cervical lymph node (LN) metastases are common in patients with papillary thyroid cancer (PTC). The presence of metastasis is associated with an increased risk of locoregional recurrence, which significantly impairs quality of life and may decrease survival. The presence of lateral cervical metastasis has therefore been an important determinant of the extent of lateral LN dissection in the initial treatment of PTC. However, the optimal extent of therapeutic lateral neck dissection (ND) remains controversial. Optimizing the surgical extent of LN dissection is fundamental for balancing the surgical morbidity and oncological benefits of ND in PTC patients with lateral neck metastasis. We reviewed the currently available literature regarding the optimal extent of lateral LN dissection in PTC patients with lateral neck metastasis. Even in cases with suspected metastatic LN at a single lateral level, or an isolated metastatic lateral LN, ND including all sublevels from IIa and IIb to Va and Vb may be overtreatment, given the surgical morbidity. When there is no suspicion of LN metastasis at levels II and V, or when multilevel aggressive neck metastasis is not found, sublevel IIb and Va dissection may not be necessary in PTC patients with lateral neck metastasis. Thus, consideration of the individualized optimal surgical extent of lateral ND is important when treating PTC patients with lateral cervical metastasis.
A comprehensive formulation for volumetric modulated arc therapy planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Dan; Lyu, Qihui; Ruan, Dan
2016-07-15
Purpose: Volumetric modulated arc therapy (VMAT) is a widely employed radiation therapy technique, showing comparable dosimetry to static beam intensity modulated radiation therapy (IMRT) with reduced monitor units and treatment time. However, current VMAT optimization employs various greedy heuristics for an empirical solution, which jeopardizes plan consistency and quality. The authors introduce a novel direct aperture optimization method for VMAT to overcome these limitations. Methods: The comprehensive VMAT (comVMAT) planning was formulated as an optimization problem with an L2-norm fidelity term to penalize the difference between the optimized dose and the prescribed dose, as well as an anisotropic total variation term to promote piecewise continuity in the fluence maps, preparing it for direct aperture optimization. A level set function was used to describe the aperture shapes, and the difference between aperture shapes at adjacent angles was penalized to control MLC motion range. A proximal-class optimization solver was adopted to solve the large scale optimization problem, and an alternating optimization strategy was implemented to solve the fluence intensity and aperture shapes simultaneously. Single arc comVMAT plans, utilizing 180 beams with 2° angular resolution, were generated for a glioblastoma multiforme case, a lung (LNG) case, and two head and neck cases—one with three PTVs (H&N{sub 3PTV}) and one with four PTVs (H&N{sub 4PTV})—to test the efficacy. The plans were optimized using an alternating optimization strategy. The plans were compared against the clinical VMAT (clnVMAT) plans utilizing two overlapping coplanar arcs for treatment. Results: The optimization of the comVMAT plans converged within 600 iterations of the block minimization algorithm. comVMAT plans were able to consistently reduce the dose to all organs-at-risk (OARs) as compared to the clnVMAT plans.
On average, comVMAT plans reduced the max and mean OAR dose by 6.59% and 7.45%, respectively, of the prescription dose. Reductions in max dose and mean dose were as high as 14.5 Gy in the LNG case and 15.3 Gy in the H&N{sub 3PTV} case. PTV coverages measured by D95, D98, and D99 were within 0.25% of the prescription dose. By comprehensively optimizing all beams, the comVMAT optimizer gained the freedom to allow some selected beams to deliver higher intensities, yielding a dose distribution that resembles a static beam IMRT plan with beam orientation optimization. Conclusions: The novel nongreedy VMAT approach simultaneously optimizes all beams in an arc and then directly generates deliverable apertures. The single arc VMAT approach thus fully utilizes the digital Linac’s capability in dose rate and gantry rotation speed modulation. In practice, the new single VMAT algorithm generates plans superior to existing VMAT algorithms utilizing two arcs.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
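Two-level (nested) external cross-validation keeps model selection strictly inside each outer training fold, so the outer error estimate is never touched by the selection step that causes optimization bias. A self-contained sketch with a toy one-dimensional threshold classifier (not the PAMR-based code the authors provide):

```python
import random

def error_rate(t, data):
    """Misclassification rate of the rule 'predict 1 iff x > t'."""
    return sum((x > t) != bool(y) for x, y in data) / len(data)

def k_folds(data, k):
    """Split data into k (train, test) pairs by strided slicing."""
    chunks = [data[i::k] for i in range(k)]
    return [(sum(chunks[:i] + chunks[i + 1:], []), chunks[i])
            for i in range(k)]

def two_level_cv(data, thresholds, k_outer=5, k_inner=4):
    """Inner loop picks the threshold; outer loop estimates its error
    on data unseen by the selection step."""
    outer_errors = []
    for train, test in k_folds(data, k_outer):
        # model selection uses only the outer-training data
        best_t = min(thresholds, key=lambda t: sum(
            error_rate(t, val) for _, val in k_folds(train, k_inner)))
        outer_errors.append(error_rate(best_t, test))
    return sum(outer_errors) / len(outer_errors)

# Separable toy data: class 0 near 0, class 1 near 1
random.seed(0)
data = [(random.gauss(0, 0.1), 0) for _ in range(40)] + \
       [(random.gauss(1, 0.1), 1) for _ in range(40)]
random.shuffle(data)
err = two_level_cv(data, thresholds=[i / 10 for i in range(10)])
```

Reporting the minimum inner-loop error instead of `err` would reproduce exactly the optimization bias the paper warns about.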
Hasinoff, Samuel W; Kutulakos, Kiriakos N
2011-11-01
In this paper, we consider the problem of imaging a scene with a given depth of field at a given exposure level in the shortest amount of time possible. We show that by 1) collecting a sequence of photos and 2) controlling the aperture, focus, and exposure time of each photo individually, we can span the given depth of field in less total time than it takes to expose a single narrower-aperture photo. Using this as a starting point, we obtain two key results. First, for lenses with continuously variable apertures, we derive a closed-form solution for the globally optimal capture sequence, i.e., that collects light from the specified depth of field in the most efficient way possible. Second, for lenses with discrete apertures, we derive an integer programming problem whose solution is the optimal sequence. Our results are applicable to off-the-shelf cameras and typical photography conditions, and advocate the use of dense, wide-aperture photo sequences as a light-efficient alternative to single-shot, narrow-aperture photography.
Force measurements in stiff, 3D, opaque granular materials
NASA Astrophysics Data System (ADS)
Hurley, Ryan C.; Hall, Stephen A.; Andrade, José E.; Wright, Jonathan
2017-06-01
We present results from two experiments that provide the first quantification of inter-particle force networks in stiff, 3D, opaque granular materials. Force vectors between all grains were determined using a mathematical optimization technique that seeks to satisfy grain equilibrium and strain measurements. Quantities needed in the optimization - the spatial location of the inter-particle contact network and tensor grain strains - were found using 3D X-ray diffraction and X-ray computed tomography. The statistics of the force networks are consistent with those found in past simulations and 2D experiments. In particular, we observe an exponential decay of normal forces above the mean and a partition of forces into strong and weak networks. In the first experiment, involving 77 single-crystal quartz grains, we also report on the temporal correlation of the force network across two sequential load cycles. In the second experiment, involving 1099 single-crystal ruby grains, we characterize force network statistics at low levels of compression.
Master-slave control scheme in electric vehicle smart charging infrastructure.
Chung, Ching-Yen; Chynoweth, Joshua; Chu, Chi-Cheng; Gadh, Rajit
2014-01-01
WINSmartEV is a software based plug-in electric vehicle (PEV) monitoring, control, and management system. It not only incorporates intelligence at every level so that charge scheduling can avoid grid bottlenecks, but it also multiplies the number of PEVs that can be plugged into a single circuit. This paper proposes, designs, and executes many upgrades to WINSmartEV. These upgrades include new hardware that makes the level 1 and level 2 chargers faster, more robust, and more scalable. It includes algorithms that provide a more optimal charge scheduling for the level 2 (EVSE) and an enhanced vehicle monitoring/identification module (VMM) system that can automatically identify PEVs and authorize charging.
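The circuit-sharing idea can be sketched in a few lines (hypothetical parameters and a simple least-charged-first policy, standing in for WINSmartEV's actual scheduling algorithms): the scheduler grants current per time slice so the circuit limit is never exceeded.

```python
# Sketch of multiplexing several PEVs on one circuit: serve vehicles in
# priority order (here: lowest state of charge first) without exceeding
# the circuit's current limit.
def schedule(vehicles, circuit_amps):
    """vehicles: name -> (state_of_charge %, requested amps).
    Returns amps granted per vehicle for this time slice."""
    grants = {}
    remaining = circuit_amps
    for name, (soc, req) in sorted(vehicles.items(), key=lambda kv: kv[1][0]):
        give = min(req, remaining)
        grants[name] = give
        remaining -= give
    return grants

vehicles = {"car1": (80, 16), "car2": (20, 16), "car3": (55, 16)}
grants = schedule(vehicles, 30)
print(grants)   # car2 first (lowest charge), then car3, then car1
```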
Song, Yong-Hong; Sun, Xue-Wen; Jiang, Bo; Liu, Ji-En; Su, Xian-Hui
2015-12-01
Design of experiments (DoE) is a statistics-based technique for experimental design that can overcome the shortcomings of the traditional one-factor-at-a-time (OFAT) approach to protein purification optimization. In this study, a DoE approach was applied to optimize purification of a recombinant single-chain variable fragment (scFv) against the type 1 insulin-like growth factor receptor (IGF-1R) expressed in Escherichia coli. In the first capture step using Capto L, a 2-level fractional factorial analysis and then a central composite circumscribed (CCC) design were used to identify the optimal elution conditions. Two main effects, pH and trehalose, were identified, and high recovery (above 95%) and a low aggregate ratio (below 10%) were achieved at pH 2.9-3.0 with 32-35% (w/v) trehalose added. In the second step, using cation exchange chromatography, an initial screening of media and elution pH followed by a CCC design was performed, whereby the optimal selectivity of the scFv was obtained on Capto S at pH near 6.0, and the optimal conditions for achieving high dynamic binding capacity (DBC) and purity were identified as a pH range of 5.9-6.1 and a loading conductivity range of 5-12.5 mS/cm. After a further gel filtration step, the final purified scFv was obtained with a purity of 98%. Finally, the optimized conditions were verified by a 20-fold scale-up experiment. The purities and yields of the intermediate and final products all fell within the regions predicted by the DoE approach, suggesting the robustness of the optimized conditions. We propose that the DoE approach described here is also applicable to the production of other recombinant antibody constructs.
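The CCC design used in both chromatography steps can be generated in a few lines. This is the generic coded-unit construction; mapping coded levels back to actual pH and trehalose values is specific to the study and not reproduced here.

```python
import numpy as np
from itertools import product

def ccc_design(k, alpha=None):
    """Central composite circumscribed design for k factors in coded
    units: 2^k factorial corners, 2k axial points at +/-alpha, and one
    center point. alpha = (2^k)**0.25 gives a rotatable design."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((1, k))
    return np.vstack([corners, axial, center])

D = ccc_design(2)          # two factors, e.g. pH and trehalose, coded
print(D.shape)             # (9, 2): 4 corners + 4 axial + 1 center
```

A quadratic response surface is then fitted to the measured responses at these nine points, which is what lets the CCC design locate the optimum with so few runs.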
Operating Spin Echo in the Quantum Regime for an Atomic-Ensemble Quantum Memory
NASA Astrophysics Data System (ADS)
Rui, Jun; Jiang, Yan; Yang, Sheng-Jun; Zhao, Bo; Bao, Xiao-Hui; Pan, Jian-Wei
2015-09-01
Spin echo is a powerful technique to extend atomic or nuclear coherence times by overcoming the dephasing due to inhomogeneous broadenings. However, there are disputes about the feasibility of applying this technique to an ensemble-based quantum memory at the single-quanta level. In this experimental study, we find that noise due to imperfections of the rephasing pulses has both intense superradiant and weak isotropic parts. By properly arranging the beam directions and optimizing the pulse fidelities, we successfully manage to operate the spin echo technique in the quantum regime by observing nonclassical photon-photon correlations as well as the quantum behavior of retrieved photons. Our work for the first time demonstrates the feasibility of harnessing the spin echo method to extend the lifetime of ensemble-based quantum memories at the single-quanta level.
General Methodology for Designing Spacecraft Trajectories
NASA Technical Reports Server (NTRS)
Condon, Gerald; Ocampo, Cesar; Mathur, Ravishankar; Morcos, Fady; Senent, Juan; Williams, Jacob; Davis, Elizabeth C.
2012-01-01
A methodology for designing spacecraft trajectories in any gravitational environment within the solar system has been developed. The methodology facilitates modeling and optimization for problems ranging from that of a single spacecraft orbiting a single celestial body to that of a mission involving multiple spacecraft and multiple propulsion systems operating in gravitational fields of multiple celestial bodies. The methodology consolidates almost all spacecraft trajectory design and optimization problems into a single conceptual framework requiring solution of either a system of nonlinear equations or a parameter-optimization problem with equality and/or inequality constraints.
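The "system of nonlinear equations" side of this framework can be illustrated with the smallest such equation in trajectory work, Kepler's equation M = E - e·sin(E) (our choice of example, not one from the paper), solved here by Newton iteration:

```python
import math

# Newton iteration for Kepler's equation M = E - e*sin(E), the kind of
# nonlinear equation a trajectory propagator solves at every time step.
def solve_kepler(M, e, tol=1e-12):
    E = M if e < 0.8 else math.pi          # standard initial guess
    for _ in range(50):
        f = E - e * math.sin(E) - M
        if abs(f) < tol:
            break
        E -= f / (1.0 - e * math.cos(E))   # Newton step
    return E

E = solve_kepler(M=1.0, e=0.3)
print(abs(E - 0.3 * math.sin(E) - 1.0) < 1e-10)   # residual check
```

The consolidated framework described in the abstract generalizes this pattern: boundary conditions and mission constraints become residual equations or constraints in a parameter-optimization problem.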
Feng, Guitao; Li, Junyu; Colberts, Fallon J M; Li, Mengmeng; Zhang, Jianqi; Yang, Fan; Jin, Yingzhi; Zhang, Fengling; Janssen, René A J; Li, Cheng; Li, Weiwei
2017-12-27
A series of "double-cable" conjugated polymers were developed for application in efficient single-component polymer solar cells, in which high quantum efficiencies could be achieved due to the optimized nanophase separation between donor and acceptor parts. The new double-cable polymers contain electron-donating poly(benzodithiophene) (BDT) as a linear conjugated backbone for hole transport and pendant electron-deficient perylene bisimide (PBI) units for electron transport, connected via a dodecyl linker. Sulfur and fluorine substituents were introduced to tune the energy levels and crystallinity of the conjugated polymers. The double-cable polymers adopt a "face-on" orientation in which the conjugated BDT backbone and the pendant PBI units have a preferential π-π stacking direction perpendicular to the substrate, favorable for interchain charge transport normal to the plane. The linear conjugated backbone acts as a scaffold for the crystallization of the PBI groups, providing a double-cable nanophase separation of donor and acceptor phases. The optimized nanophase separation enables efficient exciton dissociation as well as charge transport, as evidenced by the high internal quantum efficiency for photon-to-electron conversion, up to 80%. In single-component organic solar cells, the double-cable polymers provide power conversion efficiencies of up to 4.18%, one of the highest values reported for such cells. The nanophase-separated design can likely be used to achieve high-performance single-component organic solar cells.
Chan, Christian S; Rhodes, Jean E; Pérez, John E
2012-03-01
This prospective study examined the pathways by which religious involvement affected the post-disaster psychological functioning of women who survived Hurricanes Katrina and Rita. The participants were 386 low-income, predominantly Black, single mothers. The women were enrolled in the study before the hurricane, providing a rare opportunity to document changes in mental health from before to after the storm, and to assess the protective role of religious involvement over time. Results of structural equation modeling indicated that, controlling for level of exposure to the hurricanes, pre-disaster physical health, age, and number of children, pre-disaster religiousness predicted higher levels of post-disaster (1) social resources and (2) optimism and sense of purpose. The latter, but not the former, was associated with better post-disaster psychological outcome. Mediation analysis confirmed the mediating role of optimism and sense of purpose.
Optimally robust redundancy relations for failure detection in uncertain systems
NASA Technical Reports Server (NTRS)
Lou, X.-C.; Willsky, A. S.; Verghese, G. C.
1986-01-01
All failure detection methods are based, either explicitly or implicitly, on the use of redundancy, i.e. on (possibly dynamic) relations among the measured variables. The robustness of the failure detection process consequently depends to a great degree on the reliability of the redundancy relations, which in turn is affected by the inevitable presence of model uncertainties. In this paper the problem of determining redundancy relations that are optimally robust is addressed in a sense that includes several major issues of importance in practical failure detection and that provides a significant amount of intuition concerning the geometry of robust failure detection. A procedure is given involving the construction of a single matrix and its singular value decomposition for the determination of a complete sequence of redundancy relations, ordered in terms of their level of robustness. This procedure also provides the basis for comparing levels of robustness in redundancy provided by different sets of sensors.
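The SVD-based construction can be sketched as follows (a toy sensor set of our own, not the paper's example): parity vectors w with wᵀC ≈ 0 define redundancy relations among the measurements, and ordering the left singular vectors by increasing singular value orders the relations from most to least robust.

```python
import numpy as np

def redundancy_relations(C):
    """Left singular vectors of the measurement matrix C, ordered by
    increasing singular value: vectors w with w @ C ~ 0 are parity
    (redundancy) relations, and smaller singular values indicate more
    robust relations in this ordering."""
    U, s, _ = np.linalg.svd(C)
    svals = np.zeros(C.shape[0])
    svals[: len(s)] = s
    order = np.argsort(svals)              # smallest first
    return U[:, order].T, svals[order]

# toy example (ours): 4 sensors measuring 2 states
C = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
W, svals = redundancy_relations(C)
print(np.allclose(W[:2] @ C, 0))           # two exact parity relations
```

With four sensors and two states there are exactly two independent parity relations, and the SVD recovers them as the left singular vectors with zero singular value.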
A self-organising model of market with single commodity
NASA Astrophysics Data System (ADS)
Chakraborti, Anirban; Pradhan, Srutarshi; Chakrabarti, Bikas K.
2001-08-01
We have studied here the self-organising features of the dynamics of a model market, where the agents 'trade' for a single commodity with their money. The model market consists of fixed numbers of economic agents, a fixed money supply, and a fixed amount of commodity. We demonstrate that the model, apart from showing self-organising behaviour, indicates a crucial role for the money supply in the market: the self-organising behaviour is significantly affected when the money supply falls below an optimum level. We also observed that this optimal money-supply level depends on the amount of 'frustration', or scarcity, in the commodity market.
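A minimal kinetic-exchange sketch in the same family of models (not the paper's exact dynamics) shows the basic mechanics: trades redistribute money between random pairs of agents while the total money supply is conserved by construction.

```python
import random

# Kinetic-exchange sketch: two random agents pool their money and split
# the pool at a random fraction; total money is conserved exactly.
def trade_cycle(money, steps, rng):
    n = len(money)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        pool = money[i] + money[j]
        eps = rng.random()                 # random split of the pool
        money[i], money[j] = eps * pool, (1 - eps) * pool
    return money

rng = random.Random(42)
money = trade_cycle([1.0] * 500, 50_000, rng)
print(abs(sum(money) - 500.0) < 1e-6)      # money supply conserved
```

Models of this family famously self-organize toward an exponential (Gibbs-like) money distribution; the paper's model adds a commodity and studies how the money supply affects that self-organization.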
Counterfactual quantum key distribution with high efficiency
NASA Astrophysics Data System (ADS)
Sun, Ying; Wen, Qiao-Yan
2010-11-01
In a counterfactual quantum key distribution scheme, a secret key can be generated merely by transmitting the split vacuum pulses of single particles. We improve the efficiency of the first quantum key distribution scheme based on the counterfactual phenomenon. This scheme not only achieves the same security level as the original one but also has higher efficiency. We also analyze how to achieve the optimal efficiency under various conditions.
Technology optimization techniques for multicomponent optical band-pass filter manufacturing
NASA Astrophysics Data System (ADS)
Baranov, Yuri P.; Gryaznov, Georgiy M.; Rodionov, Andrey Y.; Obrezkov, Andrey V.; Medvedev, Roman V.; Chivanov, Alexey N.
2016-04-01
Narrowband optical devices (IR sensing devices, celestial navigation systems, solar-blind UV systems, and many others) are one of the fastest-growing areas in optical manufacturing. However, signal strength in this type of application is quite low, and device performance depends on the attenuation level of wavelengths outside the operating range. Modern detectors (photodiodes, matrix detectors, photomultiplier tubes, and others) usually do not have the required selectivity, or at worst have higher sensitivity to the background spectrum. Manufacturing a single-component band-pass filter with a high attenuation level is a resource-intensive task, and sometimes no solution exists with current technologies. Different types of filters exhibit technological variations in transmittance profile shape due to various production factors. At the same time, there are multiple tasks with strict requirements for background spectrum attenuation in narrowband optical devices; for example, in a solar-blind UV system, wavelengths above 290-300 nm must be attenuated by 180 dB. In this paper, techniques are proposed for assembling multi-component optical band-pass filters from multiple single elements with technological variations in transmittance profile shape, so as to achieve an optimal signal-to-noise ratio (SNR). Relationships between the signal-to-noise ratio and different characteristics of the transmittance profile shape are shown. The practical results obtained are in rather good agreement with our calculations.
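The assembly problem has a simple combinatorial core: attenuations in dB add when filter elements are stacked, so choosing a stack is a search over subsets of the inventory. A sketch with a hypothetical four-element inventory (our numbers, not the paper's):

```python
from itertools import combinations

# Hypothetical inventory: per-element (in-band loss dB, out-of-band
# attenuation dB). Stacking adds dB values; we want to meet the
# out-of-band spec with the least in-band loss (a proxy for best SNR).
filters = {
    "A": (0.5, 70.0),
    "B": (0.8, 95.0),
    "C": (0.6, 60.0),
    "D": (1.2, 110.0),
}

def best_stack(filters, required_oob_db=180.0):
    best = None
    names = list(filters)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            in_band = sum(filters[n][0] for n in combo)
            oob = sum(filters[n][1] for n in combo)
            if oob >= required_oob_db and (best is None or in_band < best[1]):
                best = (combo, in_band, oob)
    return best

combo, loss, oob = best_stack(filters)
print(combo, round(loss, 2), oob)   # ('A', 'D') 1.7 180.0
```

The paper's techniques go further by exploiting the per-element variations in full transmittance profile shape rather than two scalar figures per element.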
High Level Rule Modeling Language for Airline Crew Pairing
NASA Astrophysics Data System (ADS)
Mutlu, Erdal; Birbil, Ş. Ilker; Bülbül, Kerem; Yenigün, Hüsnü
2011-09-01
The crew pairing problem is an airline optimization problem where a set of least costly pairings (consecutive flights to be flown by a single crew) that covers every flight in a given flight network is sought. A pairing is defined by using a very complex set of feasibility rules imposed by international and national regulatory agencies, and also by the airline itself. The cost of a pairing is also defined by using complicated rules. When an optimization engine generates a sequence of flights from a given flight network, it has to check all these feasibility rules to ensure whether the sequence forms a valid pairing. Likewise, the engine needs to calculate the cost of the pairing by using certain rules. However, the rules used for checking the feasibility and calculating the costs are usually not static. Furthermore, the airline companies carry out what-if-type analyses through testing several alternate scenarios in each planning period. Therefore, embedding the implementation of feasibility checking and cost calculation rules into the source code of the optimization engine is not a practical approach. In this work, a high level language called ARUS is introduced for describing the feasibility and cost calculation rules. A compiler for ARUS is also implemented in this work to generate a dynamic link library to be used by crew pairing optimization engines.
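The motivation for a rule language can be seen in miniature: if feasibility rules live outside the engine as data, they can change without touching engine code. A sketch with two hypothetical rules, far simpler than ARUS:

```python
from datetime import timedelta

# Externalized pairing rules: the optimization engine only ever calls
# is_feasible(); the rules themselves are registered separately and can
# be swapped per scenario without recompiling the engine.
RULES = []

def rule(fn):
    RULES.append(fn)
    return fn

@rule
def max_duty_time(pairing):
    return pairing["duty"] <= timedelta(hours=14)

@rule
def max_legs(pairing):
    return len(pairing["flights"]) <= 5

def is_feasible(pairing):
    return all(r(pairing) for r in RULES)

p = {"flights": ["XY101", "XY204", "XY317"], "duty": timedelta(hours=11)}
print(is_feasible(p))   # True under these two sample rules
```

ARUS takes this idea further: rules are written in a dedicated high-level language and compiled into a dynamic link library the engine loads, so what-if scenarios only require recompiling the rules, not the engine.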
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soer, Wouter
LED luminaires have seen dramatic changes in cost breakdown over the past few years. The LED component cost, which until recently was the dominant portion of luminaire cost, has fallen to a level of the same order as the other luminaire components, such as the driver, housing, and optics. With the current state of the technology, further luminaire performance improvement and cost reduction is realized most effectively by optimization of the whole system, rather than a single component. This project focuses on improving the integration between LEDs and drivers. Lumileds has developed a light engine platform based on low-cost high-power LEDs and driver topologies optimized for integration with these LEDs on a single substrate. The integration of driver and LEDs enables an estimated luminaire cost reduction of about 25% for targeted applications, mostly due to significant reductions in driver and housing cost. The high-power LEDs are based on Lumileds' patterned sapphire substrate flip-chip (PSS-FC) technology, affording reduced die fabrication and packaging cost compared to existing technology. Two general versions of PSS-FC die were developed in order to create the desired voltage and flux increments for driver integration: (i) small single-junction die (0.5 mm²), optimal for distributed lighting applications, and (ii) larger multi-junction die (2 mm² and 4 mm²) for high-power directional applications. Two driver topologies were developed: a tapped linear driver topology and a single-stage switch-mode topology, taking advantage of the flexible voltage configurations of the new PSS-FC die and the simplification opportunities enabled by integration of LEDs and driver on the same board. A prototype light engine was developed for an outdoor "core module" application based on the multi-junction PSS-FC die and the single-stage switch-mode driver. The light engine meets the project efficacy target of 128 lm/W at a luminous flux greater than 4100 lm, a correlated color temperature (CCT) of 4000 K, and a color rendering index (CRI) greater than 70.
Task driven optimal leg trajectories in insect-scale legged microrobots
NASA Astrophysics Data System (ADS)
Doshi, Neel; Goldberg, Benjamin; Jayaram, Kaushik; Wood, Robert
Origami inspired layered manufacturing techniques and 3D-printing have enabled the development of highly articulated legged robots at the insect-scale, including the 1.43 g Harvard Ambulatory MicroRobot (HAMR). Research on these platforms has expanded its focus from manufacturing aspects to include design optimization and control for application-driven tasks. Consequently, the choice of gait selection, body morphology, leg trajectory, foot design, etc. have become areas of active research. HAMR has two controlled degrees-of-freedom per leg, making it an ideal candidate for exploring leg trajectory. We will discuss our work towards optimizing HAMR's leg trajectories for two different tasks: climbing using electroadhesives and level-ground running (5-10 BL/s). These tasks demonstrate the ability of a single platform to adapt to vastly different locomotive scenarios: quasi-static climbing with controlled ground contact, and dynamic running with uncontrolled ground contact. We will utilize trajectory optimization methods informed by existing models and experimental studies to determine leg trajectories for each task. We also plan to discuss how task specifications and choice of objective function have contributed to the shape of these optimal leg trajectories.
Biswas, Subhodip; Kundu, Souvik; Das, Swagatam
2014-10-01
In real life, we often need to find multiple optimally sustainable solutions of an optimization problem. Evolutionary multimodal optimization algorithms can be very helpful in such cases. They detect and maintain multiple optimal solutions during the run by incorporating specialized niching operations in their actual framework. Differential evolution (DE) is a powerful evolutionary algorithm (EA) well known for its ability and efficiency as a single-peak global optimizer for continuous spaces. This article suggests a niching scheme integrated with DE for achieving a stable and efficient niching behavior by combining the newly proposed parent-centric mutation operator with a synchronous crowding replacement rule. The proposed approach is designed by considering the difficulties associated with problem-dependent niching parameters (like niche radius) and does not make use of such a control parameter. The mutation operator helps to maintain the population diversity at an optimum level by using well-defined local neighborhoods. Based on a comparative study involving 13 well-known state-of-the-art niching EAs tested on an extensive collection of benchmarks, we observe a consistent statistical superiority enjoyed by our proposed niching algorithm.
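The crowding-replacement idea can be sketched on a two-peak toy problem (a simplified DE without crossover or the paper's parent-centric operator, to isolate the niching mechanism): offspring compete only with their nearest neighbor, which lets distinct peaks survive in a single population.

```python
import random

# DE with crowding replacement (simplified sketch): maxima of f are at
# x = -2 and x = +2; nearest-neighbor competition preserves both niches.
def f(x):
    return -(x ** 2 - 4) ** 2

def crowding_de(pop_size=40, gens=200, F=0.5, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-4, 4) for _ in range(pop_size)]
    for _ in range(gens):
        for _ in range(pop_size):
            a, b, c = rng.sample(range(pop_size), 3)
            trial = pop[a] + F * (pop[b] - pop[c])
            # crowding rule: compete with the nearest population member
            j = min(range(pop_size), key=lambda k: abs(pop[k] - trial))
            if f(trial) > f(pop[j]):
                pop[j] = trial
    return pop

pop = crowding_de()
print(max(f(x) for x in pop))          # best fitness, close to the optimum 0
```

Without the crowding rule (i.e., standard DE replacement), the whole population typically collapses onto a single peak; with it, both basins retain members.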
Gigahertz dynamics of a strongly driven single quantum spin.
Fuchs, G D; Dobrovitski, V V; Toyli, D M; Heremans, F J; Awschalom, D D
2009-12-11
Two-level systems are at the core of numerous real-world technologies such as magnetic resonance imaging and atomic clocks. Coherent control of the state is achieved with an oscillating field that drives dynamics at a rate determined by its amplitude. As the strength of the field is increased, a different regime emerges where linear scaling of the manipulation rate breaks down and complex dynamics are expected. By calibrating the spin rotation with an adiabatic passage, we have measured the room-temperature "strong-driving" dynamics of a single nitrogen-vacancy center in diamond on sub-nanosecond time scales. Contrary to conventional thinking, this breakdown of the rotating wave approximation provides opportunities for time-optimal quantum control of a single spin.
Optimal shapes of surface-slip driven self-propelled swimmers
NASA Astrophysics Data System (ADS)
Vilfan, Andrej; Osterman, Natan
2012-11-01
If one defines the swimming efficiency of a microorganism as the power needed to move it against viscous drag, divided by the total dissipated power, one usually finds values no better than 1%. In order to find out how close this is to the theoretically achievable optimum, we first introduced a new efficiency measure at the level of a single cilium or an infinite ciliated surface and numerically determined the optimal beating patterns according to this criterion. In the following we also determined the optimal shape of a swimmer such that the total power is minimal while maintaining the volume and the swimming speed. The resulting shape depends strongly on the allowed maximum curvature. When sufficient curvature is allowed the optimal swimmer exhibits two protrusions along the symmetry axis. The results show that prolate swimmers such as Paramecium have an efficiency that is ~ 20% higher than that of a spherical body, whereas some microorganisms have shapes that allow even higher efficiency.
Chen, Zhi; Yuan, Yuan; Zhang, Shu-Shen; Chen, Yu; Yang, Feng-Lin
2013-01-01
Critical environmental and human health concerns are associated with the rapidly growing fields of nanotechnology and manufactured nanomaterials (MNMs). The main risk arises from occupational exposure via chronic inhalation of nanoparticles. This research presents a chance-constrained nonlinear programming (CCNLP) optimization approach, developed to maximize nanomaterial production while minimizing the risks of workplace exposure to MNMs. The CCNLP method integrates nonlinear programming (NLP) and chance-constrained programming (CCP), and handles uncertainties associated with both nanomaterial production and workplace exposure control. The method was examined through a single-walled carbon nanotube (SWNT) manufacturing process. The study results provide optimal production strategies and alternatives. They reveal that a high control measure guarantees that environmental health and safety (EHS) regulations are met, while a lower control level leads to increased risk of violating EHS regulations. The CCNLP optimization approach is a decision support tool for optimizing the growing MNM manufacturing under workplace safety constraints and uncertainty. PMID:23531490
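The chance-constrained ingredient has a standard deterministic equivalent when the uncertainty is Gaussian, which a small numeric sketch makes concrete (hypothetical numbers of our own, not the SWNT case study): requiring Pr(e·x ≤ limit) ≥ 1 − α with e ~ N(μ, σ) is the same as requiring (μ + z₁₋α·σ)·x ≤ limit.

```python
from statistics import NormalDist

# Chance constraint Pr(e * x <= limit) >= 1 - alpha with Gaussian
# exposure-per-unit e, rewritten deterministically via the normal quantile.
mu, sigma = 2.0, 0.4      # exposure per unit produced (assumed Gaussian)
limit = 100.0             # workplace exposure standard
alpha = 0.05
z = NormalDist().inv_cdf(1 - alpha)        # ~1.645 for alpha = 0.05

x_max = limit / (mu + z * sigma)           # largest compliant production level
print(round(x_max, 2))                     # ~37.6 units
```

Tightening α (demanding higher compliance probability) raises z and shrinks the allowable production, which is exactly the production-versus-safety trade-off the CCNLP model optimizes.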
Alqabandi, Jassim A; Abdel-Motal, Ussama M; Youcef-Toumi, Kamal
2009-02-01
Cancer cells have distinctive electrochemical properties. This work sheds light on the system design aspects and key challenges that should be considered when experimentally analyzing and extracting the electrical characteristics of a tumor cell line. In this study, we developed a cellular-based functional microfabricated device using lithography technology. This device was used to investigate the electrochemical parameters of cultured cancer cells at the single-cell level. Using impedance spectroscopy analyses, we determined the average specific capacitance and resistance of the membrane of the cancer cell line B16-F10 to be 1.154 ± 0.29 μF/cm² and 3.9 ± 1.15 kΩ·cm² (mean ± SEM, n = 14 cells), respectively. The consistency of our findings across different trials supports the validity of our experimental procedure. Furthermore, the data were compared with a proposed analytical circuit model. The results of this work may greatly assist researchers in defining an optimal procedure for extracting the electrical properties of cancer cells. Detecting electrical signals at the single-cell level could lead to the development of novel approaches for the analysis of malignant cells in human tissues and biopsies.
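The kind of lumped-circuit model fitted to such impedance spectra can be sketched directly. The parameter values below are the reported membrane averages, but the parallel-RC topology is a simplifying assumption of ours, not necessarily the paper's full circuit model:

```python
import cmath

C_m = 1.154e-6        # F/cm^2, reported membrane specific capacitance
R_m = 3.9e3           # ohm*cm^2, reported membrane specific resistance

def membrane_impedance(freq_hz):
    """Impedance of a parallel R_m || C_m membrane patch (per cm^2)."""
    omega = 2 * cmath.pi * freq_hz
    return 1 / (1 / R_m + 1j * omega * C_m)

z_low = membrane_impedance(1.0)
z_high = membrane_impedance(1e6)
print(abs(z_low) > abs(z_high))   # capacitance shorts the membrane at high f
```

Sweeping frequency and fitting this expression to measured spectra is how the capacitance and resistance values are extracted in impedance spectroscopy.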
Power optimal single-axis articulating strategies
NASA Technical Reports Server (NTRS)
Kumar, Renjith R.; Heck, Michael L.
1991-01-01
Power-optimal single-axis articulating PV array motion for Space Station Freedom is investigated. The motivation is to eliminate one of the articulating joints to reduce Station costs. Optimal (maximum power) beta tracking is addressed for local vertical local horizontal (LVLH) and non-LVLH attitudes. Effects of intra-array shadowing are also presented. Maximum power availability while beta tracking is compared to full sun tracking and optimal alpha tracking. The results are quantified in orbital and yearly minimum, maximum, and average values of power availability.
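A first-order illumination model (an assumption of ours, much simpler than the paper's simulation) shows why eliminating a joint costs power: with two-axis tracking the array is always normal to the sun, while with a single remaining axis the pointing error grows with the solar beta angle, reducing collected power by roughly its cosine.

```python
import math

# Toy model: power fraction collected by a single-axis tracking array,
# relative to ideal two-axis sun tracking, at solar beta angle b.
def power_fraction_single_axis(beta_deg):
    return math.cos(math.radians(beta_deg))

for beta in (0, 26, 52):
    print(beta, round(power_fraction_single_axis(beta), 3))
```

Real analyses like the one in the abstract must also account for orbit eclipse and intra-array shadowing, which this one-line model ignores.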
Dopamine, Affordance and Active Inference
Friston, Karl J.; Shiner, Tamara; FitzGerald, Thomas; Galea, Joseph M.; Adams, Rick; Brown, Harriet; Dolan, Raymond J.; Moran, Rosalyn; Stephan, Klaas Enno; Bestmann, Sven
2012-01-01
The role of dopamine in behaviour and decision-making is often cast in terms of reinforcement learning and optimal decision theory. Here, we present an alternative view that frames the physiology of dopamine in terms of Bayes-optimal behaviour. In this account, dopamine controls the precision or salience of (external or internal) cues that engender action. In other words, dopamine balances bottom-up sensory information and top-down prior beliefs when making hierarchical inferences (predictions) about cues that have affordance. In this paper, we focus on the consequences of changing tonic levels of dopamine firing using simulations of cued sequential movements. Crucially, the predictions driving movements are based upon a hierarchical generative model that infers the context in which movements are made. This means that we can confuse agents by changing the context (order) in which cues are presented. These simulations provide a (Bayes-optimal) model of contextual uncertainty and set switching that can be quantified in terms of behavioural and electrophysiological responses. Furthermore, one can simulate dopaminergic lesions (by changing the precision of prediction errors) to produce pathological behaviours that are reminiscent of those seen in neurological disorders such as Parkinson's disease. We use these simulations to demonstrate how a single functional role for dopamine at the synaptic level can manifest in different ways at the behavioural level. PMID:22241972
An Integrated Method for Airfoil Optimization
NASA Astrophysics Data System (ADS)
Okrent, Joshua B.
Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However this method can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts since by only focusing on one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs as well as including the effects of other physical phenomena or design criteria and/or constraints. 
This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal operational conditions from a broad design space with the use of minimal computational resources, on both an absolute scale and relative to traditional analysis techniques. Aerodynamicists, program managers, aircraft configuration specialists, and anyone else in charge of aircraft configuration, design studies, and program-level decisions may find the proposed evaluation and optimization method of interest.
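The genetic-algorithm search over a parametric design space can be sketched generically. The toy fitness below is a stand-in of our own; in the thesis the genome encodes airfoil-family parameters and fitness comes from the coupled viscous-inviscid solver over a mission profile.

```python
import random

# Minimal elitist GA sketch: evolve a 4-gene parameter vector toward
# the (known) optimum at 0.3 per gene, standing in for airfoil search.
def fitness(x):
    return -sum((xi - 0.3) ** 2 for xi in x)

def ga(pop_size=30, genes=4, gens=150, seed=7):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)       # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:              # gaussian mutation
                i = rng.randrange(genes)
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(fitness(best))     # near 0, the optimum of this toy objective
```

Because the GA only ever evaluates the fitness function, swapping the toy objective for a flow-solver evaluation (or a multi-objective aggregate over a mission profile) leaves the search loop unchanged, which is what makes GAs attractive for broad design-space sweeps.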
NASA Astrophysics Data System (ADS)
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of a standard single-objective deterministic Genetic Algorithm.
The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
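As a toy illustration of a Genetic Algorithm over discrete subsystem technology options like those described above, the sketch below evolves a chromosome of one option index per subsystem. The subsystem names, option scores, and GA settings are all invented; the dissertation's tools add multi-objective and uncertainty handling on top of this kind of skeleton.

```python
import random

random.seed(7)

# Hypothetical per-subsystem technology options and their scores;
# the chromosome is one option index per subsystem.
OPTION_SCORES = [
    [3.0, 5.0, 1.0],   # e.g. power subsystem options
    [2.0, 2.5, 4.0],   # e.g. communications options
    [1.5, 6.0, 3.0],   # e.g. attitude-control options
]

def fitness(chrom):
    # Separable toy objective: sum of the chosen options' scores.
    return sum(OPTION_SCORES[i][g] for i, g in enumerate(chrom))

def evolve(pop_size=20, generations=40, p_mut=0.2):
    pop = [[random.randrange(len(opts)) for opts in OPTION_SCORES]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        children = [best[:]]  # elitism: always keep the best design
        while len(children) < pop_size:
            # Tournament selection + one-point crossover + mutation.
            a, b = (max(random.sample(pop, 2), key=fitness) for _ in range(2))
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:
                j = random.randrange(len(child))
                child[j] = random.randrange(len(OPTION_SCORES[j]))
            children.append(child)
        pop = children
        best = max(best, max(pop, key=fitness), key=fitness)
    return best

best = evolve()
```

Because the chromosome is purely discrete, no encoding tricks are needed: crossover and mutation operate directly on option indices, which is what makes the GA a natural fit for concept-architecting trade studies.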
The p40 Subunit of Interleukin (IL)-12 Promotes Stabilization and Export of the p35 Subunit
Jalah, Rashmi; Rosati, Margherita; Ganneru, Brunda; Pilkington, Guy R.; Valentin, Antonio; Kulkarni, Viraj; Bergamaschi, Cristina; Chowdhury, Bhabadeb; Zhang, Gen-Mu; Beach, Rachel Kelly; Alicea, Candido; Broderick, Kate E.; Sardesai, Niranjan Y.; Pavlakis, George N.; Felber, Barbara K.
2013-01-01
IL-12 is a 70-kDa heterodimeric cytokine composed of the p35 and p40 subunits. To maximize cytokine production from plasmid DNA, molecular steps controlling IL-12p70 biosynthesis at the posttranscriptional and posttranslational levels were investigated. We show that the combination of RNA/codon-optimized gene sequences and fine-tuning of the relative expression levels of the two subunits within a cell resulted in increased production of the IL-12p70 heterodimer. We found that the p40 subunit plays a critical role in enhancing the stability, intracellular trafficking, and export of the p35 subunit. This posttranslational regulation mediated by the p40 subunit is conserved in mammals. Based on these findings, dual gene expression vectors were generated, producing an optimal ratio of the two subunits, resulting in a ∼1 log increase in human, rhesus, and murine IL-12p70 production compared with vectors expressing the wild type sequences. Such optimized DNA plasmids also produced significantly higher levels of systemic bioactive IL-12 upon in vivo DNA delivery in mice compared with plasmids expressing the wild type sequences. A single therapeutic injection of an optimized murine IL-12 DNA plasmid showed significantly more potent control of tumor development in the B16 melanoma cancer model in mice. Therefore, the improved IL-12p70 DNA vectors have promising potential for in vivo use as molecular vaccine adjuvants and in cancer immunotherapy. PMID:23297419
Performance comparison of extracellular spike sorting algorithms for single-channel recordings.
Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert
2012-01-30
Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (p<0.01) with optimized parameters than with the default ones. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (p<0.01) than WaveClus for signals with a noise level in the range 0.15-0.30. KlustaKwik achieved similar scores to WaveClus for signals with low noise level 0.00-0.15 and was worse otherwise. In conclusion, none of the three compared algorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.
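The study's parameter search maximizes Adjusted Mutual Information between the sorter's output and ground truth. As a simplified sketch, the code below grid-searches one parameter of a toy threshold "sorter" by maximizing plain (unadjusted) mutual information; the amplitudes, labels, and candidate thresholds are invented.

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Mutual information (in nats) between two labelings of equal length,
    computed from the joint and marginal label frequencies."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    joint = Counter(zip(a, b))
    return sum((c / n) * math.log((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in joint.items())

# Toy "sorter": assign each spike to a unit by thresholding its amplitude.
def sort_spikes(amplitudes, threshold):
    return [1 if amp >= threshold else 0 for amp in amplitudes]

amps  = [0.2, 0.3, 0.25, 0.9, 1.1, 0.95]   # invented spike amplitudes
truth = [0, 0, 0, 1, 1, 1]                  # ground-truth unit labels

# Grid search the threshold, keeping the setting with maximal MI.
best_thr = max([0.1, 0.5, 1.0],
               key=lambda t: mutual_information(sort_spikes(amps, t), truth))
```

The adjusted variant used in the paper additionally corrects MI for chance agreement, which matters when comparing clusterings with different numbers of units.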
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schanen, Michel; Marin, Oana; Zhang, Hong
Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50,000+ cores.
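The binomial memory checkpointing component mentioned above rests on a classic capacity result (Griewank's revolve analysis): with s checkpoint slots and at most t recomputations of any forward step, up to C(s+t, s) steps can be adjoined. A small sketch of that bound, not of the paper's full two-level disk/memory scheme:

```python
from math import comb

def max_steps(slots, sweeps):
    """Maximum number of forward steps reversible with `slots` memory
    checkpoints when each step may be recomputed at most `sweeps` times
    (the binomial checkpointing bound C(slots + sweeps, slots))."""
    return comb(slots + sweeps, slots)

def min_sweeps(n_steps, slots):
    """Smallest recomputation depth t such that n_steps fit in `slots`."""
    t = 0
    while max_steps(slots, t) < n_steps:
        t += 1
    return t
```

The bound makes the storage/recomputation trade-off explicit: halving memory does not double recomputation, because the reachable step count grows combinatorially in both arguments.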
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ide, Toshiki; Hofmann, Holger F.; JST-CREST, Graduate School of Advanced Sciences of Matter, Hiroshima University, Kagamiyama 1-3-1, Higashi Hiroshima 739-8530
The information encoded in the polarization of a single photon can be transferred to a remote location by two-channel continuous-variable quantum teleportation. However, the finite entanglement used in the teleportation causes random changes in photon number. If more than one photon appears in the output, the continuous-variable teleportation accidentally produces clones of the original input photon. In this paper, we derive the polarization statistics of the N-photon output components and show that they can be decomposed into an optimal cloning term and completely unpolarized noise. We find that the accidental cloning of the input photon is nearly optimal at experimentally feasible squeezing levels, indicating that the loss of polarization information is partially compensated by the availability of clones.
Experimental optimization of directed field ionization
NASA Astrophysics Data System (ADS)
Liu, Zhimin Cheryl; Gregoric, Vincent C.; Carroll, Thomas J.; Noel, Michael W.
2017-04-01
The state distribution of an ensemble of Rydberg atoms is commonly measured using selective field ionization. The resulting time resolved ionization signal from a single energy eigenstate tends to spread out due to the multiple avoided Stark level crossings atoms must traverse on the way to ionization. The shape of the ionization signal can be modified by adding a perturbation field to the main field ramp. Here, we present experimental results of the manipulation of the ionization signal using a genetic algorithm. We address how both the genetic algorithm and the experimental parameters were adjusted to achieve an optimized result. This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377.
Heger, Dominic; Herff, Christian; Schultz, Tanja
2014-01-01
In this paper, we show that multiple operations of the typical pattern recognition chain of an fNIRS-based BCI, including feature extraction and classification, can be unified by solving a convex optimization problem. We formulate a regularized least squares problem that learns a single affine transformation of raw HbO(2) and HbR signals. We show that this transformation can achieve competitive results in an fNIRS BCI classification task, as it significantly improves recognition of different levels of workload over previously published results on a publicly available n-back data set. Furthermore, we visualize the learned models and analyze their spatio-temporal characteristics.
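The paper's core idea, learning a single affine transformation by regularized least squares, can be sketched in one dimension by solving the ridge normal equations (X^T X + λI) w = X^T y directly. The data below are invented, and the real model maps multichannel HbO2/HbR time series rather than scalars.

```python
# Regularized least squares fit of an affine map y ≈ a*x + b,
# solving the 2x2 normal equations (X^T X + λI) w = X^T y by Cramer's rule.
def ridge_affine(xs, ys, lam=1e-6):
    n = len(xs)
    sxx = sum(x * x for x in xs) + lam   # regularized x-column dot product
    sx  = sum(xs)
    snn = n + lam                        # regularized bias-column dot product
    sxy = sum(x * y for x, y in zip(xs, ys))
    sy  = sum(ys)
    det = sxx * snn - sx * sx
    a = (sxy * snn - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

a, b = ridge_affine([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])  # y = 2x + 1
```

Because the problem is convex, the fitted transform is the unique global optimum, which is what lets the paper unify feature extraction and classification in one solve.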
NASA Technical Reports Server (NTRS)
Chen, Guanrong
1991-01-01
An optimal trajectory planning problem for a single-link, flexible joint manipulator is studied. A global feedback-linearization is first applied to formulate the nonlinear inequality-constrained optimization problem in a suitable way. Then, an exact and explicit structural formula for the optimal solution of the problem is derived and the solution is shown to be unique. It turns out that the optimal trajectory planning and control can be done off-line, so that the proposed method is applicable to both theoretical analysis and real time tele-robotics control engineering.
NASA Astrophysics Data System (ADS)
Lin, Juan; Liu, Chenglian; Guo, Yongning
2014-10-01
The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using the single equivalent current dipole (sECD) model and single-time-slice data. The results show that PSO is an effective global optimization method for MEG source localization when a single dipole is located at varying depths.
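A minimal global-best PSO of the kind used here can be sketched as follows. The cost function is a stand-in (squared distance to an invented "true dipole" position); the real objective would be the misfit between measured and forward-modeled MEG fields, and the swarm settings are generic defaults, not the paper's.

```python
import random

random.seed(0)

def pso(cost, dim=3, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer (global-best topology)."""
    xs = [[random.uniform(lo, hi) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # per-particle best positions
    pcost = [cost(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]       # swarm-wide best
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - x[d])
                            + c2 * random.random() * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            c = cost(x)
            if c < pcost[i]:
                pbest[i], pcost[i] = x[:], c
                if c < gcost:
                    gbest, gcost = x[:], c
    return gbest, gcost

# Stand-in cost: squared distance to a hypothetical dipole at (1, -2, 3).
target = (1.0, -2.0, 3.0)
best, best_cost = pso(lambda p: sum((a - t) ** 2 for a, t in zip(p, target)))
```

Because PSO needs only cost evaluations, no gradients of the MEG forward model are required, which is part of its appeal for deep-source localization.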
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for 1) the reinforcement learning-based training method to the optimal solution, 2) the training error, and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights and it provides comprehensive feedback solutions online, though it is trained offline.
Tian, J; Andreadis, S T
2009-07-01
Expression of multiple genes from the same target cell is required in several technological and therapeutic applications such as quantitative measurements of promoter activity or in vivo tracking of stem cells. In spite of such need, reaching independent and high-level dual-gene expression cannot be reliably accomplished by current gene transfer vehicles. To address this issue, we designed a lentiviral vector carrying two transcriptional units separated by polyadenylation, terminator and insulator sequences. With this design, the expression level of both genes was as high as that yielded from lentiviral vectors containing only a single transcriptional unit. Similar results were observed with several promoters and cell types including epidermal keratinocytes, bone marrow mesenchymal stem cells and hair follicle stem cells. Notably, we demonstrated quantitative dynamic monitoring of gene expression in primary cells with no need for selection protocols suggesting that this optimized lentivirus may be useful in high-throughput gene expression profiling studies.
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Torigian, Drew A.; Wu, Caiyun; Christie, Jason; Lederer, David J.
2016-03-01
Chest fat estimation is important for identifying high-risk lung transplant candidates. In this paper, an approach to chest fat quantification based on a recently formulated concept of standardized anatomic space (SAS) is presented. The goal of this paper is to seek answers to the following questions related to chest fat quantification on single-slice versus whole-volume CT, which have not been addressed in the literature. What level of correlation exists between total chest fat volume and fat areas measured on single abdominal and thigh slices? What is the anatomic location in the chest where maximal correlation of fat area with fat volume can be expected? Do the components of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) have the same area-to-volume correlative behavior, or do they differ? The SAS approach includes two steps: calibration followed by a transformation that maps the patient slice locations nonlinearly to SAS. The optimal slice locations found for SAT and VAT based on SAS differ: the mid-level of the T8 vertebral body for SAT and the mid-level of the T7 vertebral body for VAT. Fat volume and area on the optimal slices for SAT and VAT are correlated, with Pearson correlation coefficients of 0.97 and 0.86, respectively. The correlation of chest fat volume with abdominal and thigh fat areas is weak to modest.
Analytical optimization of demand management strategies across all urban water use sectors
NASA Astrophysics Data System (ADS)
Friedman, Kenneth; Heaney, James P.; Morales, Miguel; Palenchar, John
2014-07-01
An effective urban water demand management program can greatly influence both peak and average demand and therefore long-term water supply and infrastructure planning. Although a theoretical framework for evaluating residential indoor demand management has been well established, little has been done to evaluate other water use sectors such as residential irrigation in a compatible manner for integrating these results into an overall solution. This paper presents a systematic procedure to evaluate the optimal blend of single family residential irrigation demand management strategies to achieve a specified goal based on performance functions derived from parcel-level tax assessor's data linked to customer-level monthly water billing data. This framework is then generalized to apply to any urban water sector, as exponential functions can be fit to all resulting cumulative water savings functions. Two alternative formulations are presented: maximize net benefits, or minimize total costs subject to satisfying a target water savings. Explicit analytical solutions are presented for both formulations based on appropriate exponential best fits of performance functions. A direct result of this solution is the dual variable, which represents the marginal cost of water saved at a specified target water savings goal. A case study of 16,303 single family irrigators in Gainesville Regional Utilities, utilizing high-quality tax assessor and monthly billing data along with parcel-level GIS data, provides an illustrative example of these techniques. Spatial clustering of targeted homes can be easily performed in GIS to identify priority demand management areas.
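The cost-minimization formulation above can be sketched concretely. Assume, as the paper does, savings curves of exponential form, here w_i(c) = A_i(1 − e^{−k_i c}) for dollars spent c (the curve parameters below are invented). At the optimum all active strategies share the same marginal savings per dollar, which is exactly the dual variable; the sketch recovers it by bisection.

```python
import math

# Hypothetical demand-management strategies: savings(c) = A*(1 - exp(-k*c))
# for dollars spent c; (A, k) are invented exponential-fit parameters.
strategies = [(100.0, 0.02), (60.0, 0.05)]

def spend_at(mu):
    """Cost-minimizing spend per strategy when the marginal savings per
    dollar equals mu (the dual variable of the savings-target constraint)."""
    return [max(0.0, math.log(A * k / mu) / k) for A, k in strategies]

def savings(costs):
    return sum(A * (1 - math.exp(-k * c))
               for (A, k), c in zip(strategies, costs))

def allocate(target, lo=1e-9, hi=10.0, iters=200):
    """Bisect on mu: a lower mu means more spend and hence more savings."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if savings(spend_at(mid)) < target:
            hi = mid
        else:
            lo = mid
    return spend_at(0.5 * (lo + hi))

costs = allocate(target=80.0)
```

The recovered mu is directly interpretable for planners: it is the marginal cost of the next unit of water saved at the chosen target.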
Optimized image acquisition for breast tomosynthesis in projection and reconstruction space.
Chawla, Amarpreet S; Lo, Joseph Y; Baker, Jay A; Samei, Ehsan
2009-11-01
Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. Those include dose, number of angular projections, and the total angular span of those projections. In this study, the authors investigated the effects of these acquisition parameters on the overall diagnostic image quality of breast tomosynthesis in both the projection and reconstruction space. Five mastectomy specimens were imaged using a prototype tomosynthesis system. 25 angular projections of each specimen were acquired at 6.2 times typical single-view clinical dose level. Images at lower dose levels were then simulated using a noise modification routine. Each projection image was supplemented with 84 simulated 3 mm 3D lesions embedded at the center of 84 nonoverlapping ROIs. The projection images were then reconstructed using a filtered backprojection algorithm at different combinations of acquisition parameters to investigate which of the many possible combinations maximizes the performance. Performance was evaluated in terms of a Laguerre-Gauss channelized Hotelling observer model-based measure of lesion detectability. The analysis was also performed without reconstruction by combining the model results from projection images using Bayesian decision fusion algorithm. The effect of acquisition parameters on projection images and reconstructed slices were then compared to derive an optimization rule for tomosynthesis. The results indicated that projection images yield comparable but higher performance than reconstructed images. Both modes, however, offered similar trends: Performance improved with an increase in the total acquisition dose level and the angular span. 
Using a constant dose level and angular span, the performance rolled off beyond a certain number of projections, indicating that simply increasing the number of projections in tomosynthesis may not necessarily improve its performance. The best performance for both projection images and tomosynthesis slices was obtained with 15-17 projections spanning an angular arc of approximately 45 degrees (the maximum tested in our study) and an acquisition dose equal to that of single-view mammography. The optimization framework developed in this work is applicable to other reconstruction techniques and other multiprojection systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steudle, Gesine A.; Knauer, Sebastian; Herzog, Ulrike
2011-05-15
We present an experimental implementation of optimum measurements for quantum state discrimination. Optimum maximum-confidence discrimination and optimum unambiguous discrimination of two mixed single-photon polarization states were performed. For the latter the states of rank 2 in a four-dimensional Hilbert space are prepared using both path and polarization encoding. Linear optics and single photons from a true single-photon source based on a semiconductor quantum dot are utilized.
2009-01-01
Many studies of RNA folding and catalysis have revealed conformational heterogeneity, metastable folding intermediates, and long-lived states with distinct catalytic activities. We have developed a single-molecule imaging approach for investigating the functional heterogeneity of in vitro-evolved RNA aptamers. Monitoring the association of fluorescently labeled ligands with individual RNA aptamer molecules has allowed us to record binding events over the course of multiple days, thus providing sufficient statistics to quantitatively define the kinetic properties at the single-molecule level. The ligand binding kinetics of the highly optimized RNA aptamer studied here displays a remarkable degree of uniformity and lack of memory. Such homogeneous behavior is quite different from the heterogeneity seen in previous single-molecule studies of naturally derived RNA and protein enzymes. The single-molecule methods we describe may be of use in analyzing the distribution of functional molecules in heterogeneous evolving populations or even in unselected samples of random sequences. PMID:19572753
Song, Do Kyeong; Oh, Jee-Young; Lee, Hyejin; Sung, Yeon-Ah
2017-07-01
Although an increased serum anti-Müllerian hormone (AMH) level has been suggested to be a surrogate marker of polycystic ovarian morphology (PCOM), its association with polycystic ovary syndrome (PCOS) is controversial, and its diagnostic value has not been determined. We aimed to observe the relationship between the AMH level and PCOS phenotypes and to determine the optimal cutoff value of AMH for the diagnosis of PCOS in young Korean women. We recruited 207 women with PCOS (120 with PCOM and 87 without PCOM) and 220 regularly cycling women with normoandrogenemia (100 with PCOM and 120 without PCOM). Subjects underwent testing at a single outpatient visit. Serum AMH level was measured. Women with PCOS had higher serum AMH levels than regularly cycling women with normoandrogenemia (p < 0.05). Women with PCOM had higher serum AMH levels than women without PCOM, regardless of PCOS status (p < 0.05). The optimal AMH cutoff value for the diagnosis of PCOS was 10.0 ng/mL (71% sensitivity, 93% specificity). Serum AMH was an independent determinant of total testosterone after adjustment for age, body mass index, and the number of menses per year (β = 0.31, p < 0.01). An association between AMH and hyperandrogenism was observed only in women with PCOS, and it was independent of the presence of PCOM. The serum AMH level can be useful for the diagnosis of PCOS at any age less than 40 years, and the optimal cutoff value for the diagnosis of PCOS identified in this study of young Korean women was 10.0 ng/mL.
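One common way such a diagnostic cutoff is chosen is by maximizing Youden's J (sensitivity + specificity − 1) over candidate thresholds. The sketch below illustrates that generic procedure on invented values; it does not reproduce the study's data or claim this is the method the authors used.

```python
def best_cutoff(values_pos, values_neg):
    """Candidate cutoff maximizing Youden's J = sensitivity + specificity - 1,
    declaring a subject positive when her value >= cutoff."""
    candidates = sorted(set(values_pos) | set(values_neg))

    def j(cut):
        sens = sum(v >= cut for v in values_pos) / len(values_pos)
        spec = sum(v < cut for v in values_neg) / len(values_neg)
        return sens + spec - 1

    return max(candidates, key=j)

# Invented AMH-like values (ng/mL) for cases and controls.
cases    = [12.0, 15.0, 8.0, 11.0, 14.0]
controls = [4.0, 6.0, 5.0, 7.5, 3.0]
cut = best_cutoff(cases, controls)
```

Reporting the sensitivity and specificity at the chosen cutoff, as the abstract does (71% and 93%), lets readers judge the trade-off the threshold encodes.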
Design and Optimization of AlN based RF MEMS Switches
NASA Astrophysics Data System (ADS)
Hasan Ziko, Mehadi; Koel, Ants
2018-05-01
Radio frequency microelectromechanical system (RF MEMS) switch technology might have the potential to replace semiconductor technology in future communication systems, including communication satellites and wireless and mobile phones. This study explores the possibilities of RF MEMS switch design and optimization with an aluminium nitride (AlN) thin film as the piezoelectric actuation material. Achieving low actuation voltage and high contact force with optimal geometry using the piezoelectric effect is the main motivation for this research. Analytical and numerical models of a single-beam RF MEMS switch are used to analyse the design parameters and optimize them for minimum actuation voltage and high contact force. An analytical model using isotropic AlN material properties is used to obtain the optimal parameters. The optimized device length, width, and thickness are 2000 µm, 500 µm, and 0.6 µm, respectively, for the single-beam RF MEMS switch. The analytical analysis yields an actuation voltage of less than 2 V and a contact force of about 100 µN for the optimal geometry. Additionally, the single-beam RF MEMS switch design is validated by comparing the analytical and finite element modelling (FEM) results.
Optimization for Guitar Fingering on Single Notes
NASA Astrophysics Data System (ADS)
Itoh, Masaru; Hayashida, Takumi
This paper presents an optimization method for guitar fingering. The fingering problem is to determine a unique combination of string, fret, and finger corresponding to each note. The method aims to generate the best fingering pattern for guitar robots rather than for beginners, and it can be applied to any musical score consisting of single notes. A fingering action can be decomposed into three motions: pressing a string, releasing a string, and moving the fretting hand. The cost of moving the hand is estimated on the basis of Manhattan distance, the sum of the distances along the fret and string directions. The objective is to minimize the total fingering cost, subject to fret, string, and finger constraints. As the sequence of notes on the score forms a line in time, the optimization of guitar fingering can be resolved into a multistage decision problem, which dynamic programming solves very effectively. A level concept is introduced into the rendering states so that multiple optimal DP solutions reduce to a unique one during the backward pass. For example, if two fingerings have the same cost at different states on a stage, the low position takes precedence over the high position, and the index finger over the middle finger.
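The multistage DP described above can be sketched with a simplified state (string, fret only, ignoring the finger dimension and the level-based tie-breaking) and the Manhattan transition cost; the three-note phrase and its candidate positions are invented.

```python
# Stage-wise DP over candidate (string, fret) positions per note;
# the transition cost is the Manhattan distance along string and fret axes.
def min_fingering_cost(candidates):
    prev = {pos: 0 for pos in candidates[0]}   # first note: no movement cost
    for options in candidates[1:]:
        # For each position of the current note, take the cheapest
        # predecessor plus the hand-movement cost to reach it.
        prev = {pos: min(cost + abs(pos[0] - q[0]) + abs(pos[1] - q[1])
                         for q, cost in prev.items())
                for pos in options}
    return min(prev.values())

# Invented three-note phrase; each note is playable at several positions.
notes = [
    [(1, 1), (2, 5)],
    [(1, 3), (2, 1)],
    [(1, 2)],
]
```

Each note is one DP stage, so the total work is linear in the number of notes and quadratic in the candidate positions per note, which is what makes the method practical for whole scores.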
Calcium channel blockers in hypertension. Is there still a controversy?
Izzo, Joseph L
2005-08-01
There are several reasons why no single antihypertensive drug class is ideal in all clinical situations. First, pathophysiologic heterogeneity in hypertension and diversity of mechanisms of antihypertensive drugs dictate that no single drug class can be optimally effective in all subpopulations. Second, sustained blood pressure control generally requires combination therapy to block the reflex stimulation of physiologic mechanisms that attempt to restore blood pressure to pretreatment levels. Third, while effective blood pressure control is more important than choice of initial drug in the prevention of hypertension-related morbidity and mortality, specific drug classes are indicated for optimal treatment of complications of hypertension (e.g. heart failure, kidney disease). Fourth, although antihypertensive drug side effects are uncommon, alternative strategies are required in some patients. Given these principles, past controversies regarding whether calcium channel blockers (CCBs) should be used in the treatment of hypertension become moot. CCBs are extremely effective in lowering blood pressure and in preventing stroke and cardiovascular disease. When additional blood pressure lowering is necessary to meet strict targets, CCBs may be added, even in heart failure or chronic kidney disease, where CCBs alone may not achieve optimal outcomes. Combinations of CCBs with "anti-neurohumoral" drugs such as ACE inhibitors are particularly useful to achieve sustained blood pressure control, reduce adverse effects such as edema, and improve outcomes.
NASA Technical Reports Server (NTRS)
Holmes, B. J.
1980-01-01
A design study has been conducted to optimize a single-engine airplane for a high-performance cruise mission. The mission analyzed included a cruise speed of about 300 knots, a cruise range of about 1300 nautical miles, and a six-passenger payload (5340 N (1200 lb)). The purpose of the study is to investigate the combinations of wing design, engine, and operating altitude required for the mission. The results show that these mission performance characteristics can be achieved with fuel efficiencies competitive with present-day high-performance, single- and twin-engine, business airplanes. It is noted that relaxation of the present Federal Aviation Regulation, Part 23, stall-speed requirement for single-engine airplanes facilitates the optimization of the airplane for fuel efficiency.
NASA Astrophysics Data System (ADS)
Liu, Yong; Zhou, Lin; Sun, Kewei; Straszheim, Warren E.; Tanatar, Makariy A.; Prozorov, Ruslan; Lograsso, Thomas A.
2018-02-01
We present a thorough study of doping-dependent magnetic hysteresis and relaxation characteristics in single crystals of (Ba1-xKx)Fe2As2 (0.18 ≤ x ≤ 1). The critical current density Jc reaches a maximum in the underdoped sample x = 0.26 and then decreases in the optimally doped and overdoped samples. Meanwhile, the magnetic relaxation rate S rapidly increases and the flux creep activation barrier U0 sharply decreases in the overdoped sample x = 0.70. These results suggest that vortex pinning is very strong in the underdoped regime, but it is greatly reduced in the optimally doped and overdoped regimes. Transmission electron microscope (TEM) measurements reveal the existence of dislocations and inclusions in all three studied samples, x = 0.38, 0.46, and 0.65. An investigation of the paramagnetic Meissner effect (PME) suggests that spatial variations in Tc become small in the samples x = 0.43 and 0.46, slightly above the optimal doping level. Our results support that two types of pinning sources dominate the (Ba1-xKx)Fe2As2 crystals: (i) strong δl pinning, which results from fluctuations in the mean free path l, together with δTc pinning from spatial variations in Tc in the underdoped regime, and (ii) weak δTc pinning in the optimally doped and overdoped regime.
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by using exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, most heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then turns into "finding the best parameter setting" for the heuristics to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects the interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models, including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design, and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter settings, the preliminary results show that optimal solutions for multiple instances were found efficiently.
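A 2-level full factorial design, the simplest of the DOE models compared above, codes each factor at ±1 and runs every combination; the main effect of a factor is then the mean response at its high level minus the mean at its low level. A minimal sketch (the response function standing in for GA performance is invented):

```python
from itertools import product

def full_factorial(k):
    """All 2^k runs of a two-level design, with factors coded -1/+1."""
    return list(product((-1, 1), repeat=k))

def main_effects(design, responses):
    """Main effect of factor j = mean(response | +1) - mean(response | -1),
    computed via the usual contrast sum divided by half the run count."""
    n = len(design)
    k = len(design[0])
    return [sum(r * run[j] for run, r in zip(design, responses)) / (n / 2)
            for j in range(k)]

# Invented linear response standing in for GA performance as a function of
# three coded parameters (e.g. population size, mutation rate, crossover rate).
design = full_factorial(3)
y = [10 + 3 * a + 2 * b - 1 * c for a, b, c in design]
effects = main_effects(design, y)
```

Unlike OFAT, the factorial design also supports interaction estimates (by using products of coded columns as contrasts), which is precisely the information OFAT discards.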
Overexpression and characterization of laccase from Trametes versicolor in Pichia pastoris.
Li, Q; Pei, J; Zhao, L; Xie, J; Cao, F; Wang, G
2014-01-01
A laccase-encoding gene of Trametes versicolor, lccA, was cloned and expressed in Pichia pastoris X33. The lccA gene consists of a 1560 bp open reading frame encoding 519 amino acids, which was classified into the blue copper oxidase family. To improve the expression level of recombinant laccase in P. pastoris, the fermentation conditions were optimized by single-factor experiments. The optimal fermentation conditions for laccase production in shake-flask cultivation using BMGY medium were: an initial pH of 7.0, the presence of 0.5 mM Cu2+, and 0.6% methanol added to the culture every 24 h. The laccase activity reached 11.972 U/L under optimal conditions after 16 days of induction in a medium with 4% peptone. After 100 h of large-scale production in a 5 L fermenter, the enzyme activity reached 18.123 U/L. The recombinant laccase was purified by ultrafiltration and (NH4)2SO4 precipitation, showing a single band on SDS-PAGE with a molecular mass of 58 kDa. The optimum pH and temperature for the laccase were pH 2.0 and 50 °C with 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) as a substrate. The recombinant laccase was stable over a pH range of 2.0-7.0. The Km and Vmax values of LccA were 0.43 mM and 82.3 U/mg for ABTS, respectively.
Structural and electronic properties of carbon nanotube-reinforced epoxy resins.
Suggs, Kelvin; Wang, Xiao-Qian
2010-03-01
Nanocomposites of cured epoxy resin reinforced by single-walled carbon nanotubes exhibit a plethora of interesting behaviors at the molecular level. We have employed a combination of force-field-based molecular mechanics and first-principles calculations to study the corresponding binding and charge-transfer behavior. The simulation study of various nanotube species and curing agent configurations provides insight into the optimal structures with respect to interfacial stability. An analysis of the charge distributions of the epoxy-functionalized semiconducting and metallic tubes reveals distinct level hybridizations. The implications of these results for understanding the dispersion mechanism and future nano-reinforced composite developments are discussed.
Aerodynamic configuration design using response surface methodology analysis
NASA Technical Reports Server (NTRS)
Engelund, Walter C.; Stanley, Douglas O.; Lepsch, Roger A.; Mcmillin, Mark M.; Unal, Resit
1993-01-01
An investigation has been conducted to determine a set of optimal design parameters for a single-stage-to-orbit reentry vehicle. Several configuration geometry parameters which had a large impact on the entry vehicle flying characteristics were selected as design variables: the fuselage fineness ratio, the nose to body length ratio, the nose camber value, the wing planform area scale factor, and the wing location. The optimal geometry parameter values were chosen using a response surface methodology (RSM) technique which allowed for a minimum dry weight configuration design that met a set of aerodynamic performance constraints on the landing speed, and on the subsonic, supersonic, and hypersonic trim and stability levels. The RSM technique utilized, specifically the central composite design method, is presented, along with the general vehicle conceptual design process. Results are presented for an optimized configuration along with several design trade cases.
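The central composite design used in this RSM study combines 2^k factorial corners, axial points at ±α, and replicated center runs. A minimal sketch generating such a design in coded units (the factor count and center-run number here are illustrative, not the study's actual design):

```python
import numpy as np
from itertools import product

def central_composite(k, n_center=4):
    """Rotatable central composite design in k coded factors."""
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))  # 2^k factorial part
    alpha = (2 ** k) ** 0.25        # rotatability condition: alpha = (2^k)^(1/4)
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

design = central_composite(2)
print(design.shape)  # 4 corners + 4 axial + 4 center = 12 runs
```

A quadratic response surface fitted to runs at these points supports estimation of curvature, which the factorial corners alone cannot provide.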
Chemical release from single-PMMA microparticles monitored by CARS microscopy
NASA Astrophysics Data System (ADS)
Enejder, Annika; Svedberg, Fredrik; Nordstierna, Lars; Nydén, Magnus
2011-03-01
Microparticles loaded with antigens, proteins, DNA, fungicides, and other functional agents are emerging as ideal vehicles for vaccines, drug delivery, genetic therapy, and surface and crop protection. The microscopic size of the particles and their collectively large specific surface area enable highly active and localized release of the functional substance. In order to develop designs with release profiles optimized for the specific application, it is desirable to map the distribution of the active substance within the particle and how parameters such as size, material and morphology affect release rates at the single-particle level. Current imaging techniques are limited in resolution, sensitivity, image acquisition time, or sample treatment, excluding dynamic studies of active agents in microparticles. Here, we demonstrate that the combination of CARS and THG microscopy can successfully be used for this purpose, by mapping the spatial distribution and release rates of the fungicide and food preservative IPBC from different designs of PMMA microparticles at the single-particle level. By fitting a radial diffusion model to the experimental data, single-particle diffusion coefficients can be determined. We show that release rates are highly dependent on the size and morphology of the particles. Hence, CARS and THG microscopy provide adequate sensitivity and spatial resolution for quantitative studies of how single-particle properties affect the diffusion of active agents at the microscopic level. This will aid the design of innovative microencapsulating systems for controlled release.
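A radial diffusion model of the kind mentioned above can be fitted to measured release curves. The sketch below uses Crank's series solution for Fickian release from a homogeneous sphere with a perfect-sink boundary and recovers a diffusion coefficient from synthetic data; the radius, time grid, and D value are illustrative assumptions, not values from the study.

```python
import numpy as np

def sphere_release(t, D, r, n_terms=100):
    """Fractional release M_t/M_inf from a sphere of radius r
    (Crank's series solution, constant zero surface concentration)."""
    n = np.arange(1, n_terms + 1)[:, None]
    series = np.sum(np.exp(-D * n**2 * np.pi**2 * t / r**2) / n**2, axis=0)
    return 1.0 - (6.0 / np.pi**2) * series

# Crude grid search for the single-particle diffusion coefficient D that
# best matches a "measured" release curve (here generated synthetically).
r = 5e-6                      # particle radius, m (assumed)
t = np.linspace(1, 3600, 50)  # s
observed = sphere_release(t, 1e-16, r)  # stand-in for CARS-derived data

grid = np.logspace(-18, -14, 200)
errs = [np.sum((sphere_release(t, D, r) - observed) ** 2) for D in grid]
D_fit = grid[int(np.argmin(errs))]
print(f"fitted D = {D_fit:.2e} m^2/s")
```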
NASA Astrophysics Data System (ADS)
Janneck, Robby; Vercesi, Federico; Heremans, Paul; Genoe, Jan; Rolin, Cedric
2016-09-01
Organic thin film transistors (OTFTs) based on single crystalline thin films of organic semiconductors have seen considerable development in recent years. The most successful methods for the fabrication of single crystalline films are solution-based meniscus-guided coating techniques such as dip-coating, solution shearing or zone casting. These upscalable methods enable rapid and efficient film formation without additional processing steps. The single-crystalline film quality is strongly dependent on solvent choice, substrate temperature and coating speed. So far, however, process optimization has been conducted by trial-and-error methods, involving, for example, the variation of coating speeds over several orders of magnitude. Through a systematic study of solvent phase-change dynamics in the meniscus region, we develop a theoretical framework that links the optimal coating speed to the solvent choice and the substrate temperature. In this way, we can accurately predict an optimal processing window, enabling fast process optimization. Our approach is verified through systematic OTFT fabrication based on films grown with different semiconductors, solvents and substrate temperatures. The use of the best predicted coating speeds delivers state-of-the-art devices. In the case of C8BTBT, OTFTs show well-behaved characteristics with mobilities up to 7 cm2/Vs and onset voltages close to 0 V. Our approach also explains well the optimal recipes published in the literature. This route considerably accelerates parameter screening for all meniscus-guided coating techniques and unveils the physics of single crystalline film formation.
Efficient Symbolic Task Planning for Multiple Mobile Robots
2016-12-13
Efficient Symbolic Task Planning for Multiple Mobile Robots. Yuqian Jiang, December 13, 2016. Abstract: Symbolic task planning enables a robot to make ... high-level decisions toward a complex goal by computing a sequence of actions with minimum expected costs. This thesis builds on a single-robot ... time complexity of optimal planning for multiple mobile robots. In this thesis we first investigate the performance of the state-of-the-art solvers of
Detailed study of the water trimer potential energy surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, J.E.; Schaefer, H.F. III
The potential energy surface of the water trimer has been studied through the use of ab initio quantum mechanical methods. Five stationary points were located, including one minimum and two transition states. All geometries were optimized at levels up to the double-ζ plus polarization plus diffuse (DZP + diff) single and double excitation coupled cluster (CCSD) level of theory. Single-point CCSD energies were obtained for the minimum, two transition states, and the water monomer using the triple-ζ plus double polarization plus diffuse (TZ2P + diff) basis at the geometries predicted by the DZP + diff CCSD method. Reported are the following: geometrical parameters, total and relative energies, harmonic vibrational frequencies and infrared intensities for the minimum, and zero-point vibrational energies for the minimum, two transition states, and three separated water molecules. 27 refs., 5 figs., 10 tabs.
NASA Astrophysics Data System (ADS)
Giri, B. C.; Maiti, T.
2013-05-01
This article develops a single-manufacturer, single-retailer supply chain model under two-level permissible delay in payments, where the manufacturer follows a lot-for-lot policy in response to the retailer's demand. The manufacturer offers a trade credit period to the retailer under the contract that the retailer must share a fraction of the profit earned during the trade credit period. On the other hand, the retailer provides his customer a partial trade credit which is less than that of the manufacturer. The demand at the retailer is assumed to be dependent on the selling price and the trade credit period offered to the customers. The average net profit of the supply chain is derived and an algorithm for finding the optimal solution is developed. Numerical examples are given to demonstrate the coordination policy of the supply chain and examine the sensitivity of key model parameters.
Conformational and functional analysis of molecular dynamics trajectories by Self-Organising Maps
2011-01-01
Background Molecular dynamics (MD) simulations are powerful tools to investigate the conformational dynamics of proteins, which is often a critical element of their function. Identification of functionally relevant conformations is generally done by clustering the large ensemble of structures that are generated. Recently, Self-Organising Maps (SOMs) were reported to perform more accurately and provide more consistent results than traditional clustering algorithms in various data mining problems. We present a novel strategy to analyse and compare conformational ensembles of protein domains using a two-level approach that combines SOMs and hierarchical clustering. Results The conformational dynamics of the α-spectrin SH3 protein domain and six single mutants were analysed by MD simulations. The Cartesian coordinates of the Cα atoms of conformations sampled in the essential space were used as input data vectors for SOM training; complete-linkage clustering was then performed on the SOM prototype vectors. A specific protocol to optimize a SOM for structural ensembles was proposed: the optimal SOM was selected by means of a Taguchi experimental design plan applied to different data sets, and the optimal sampling rate of the MD trajectory was selected. The proposed two-level approach was applied to single trajectories of the SH3 domain independently as well as to groups of them at the same time. The results demonstrated the potential of this approach in the analysis of large ensembles of molecular structures: the possibility of producing a topological mapping of the conformational space in a simple 2D visualisation, as well as of effectively highlighting differences in the conformational dynamics directly related to biological functions. Conclusions The use of a two-level approach combining SOMs and hierarchical clustering for conformational analysis of structural ensembles of proteins was proposed. 
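A minimal sketch of this two-level idea, assuming a small online-trained SOM (plain numpy) followed by complete-linkage clustering of the prototype vectors (scipy). The synthetic 6-D data stand in for the Cα-coordinate vectors; map size, rates, and epochs are arbitrary choices, not the Taguchi-optimized protocol.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Stand-in for conformation vectors: two synthetic "states" in 6-D space.
data = np.vstack([rng.normal(0.0, 0.1, (100, 6)),
                  rng.normal(1.0, 0.1, (100, 6))])

# First level: train a 4x4 SOM with decaying rate and neighborhood radius.
grid = np.array([(i, j) for i in range(4) for j in range(4)])
proto = rng.normal(0.5, 0.3, (16, 6))          # prototype vectors
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)
    radius = 2.0 * (1 - epoch / 20) + 0.5
    for x in rng.permutation(data):
        bmu = np.argmin(((proto - x) ** 2).sum(axis=1))   # best-matching unit
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * radius**2))
        proto += lr * h[:, None] * (x - proto)

# Second level: complete-linkage clustering of the SOM prototypes.
labels = fcluster(linkage(proto, method='complete'), t=2, criterion='maxclust')
print(sorted(set(labels)))
```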
It can easily be extended to other study cases and to conformational ensembles from other sources. PMID:21569575
Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Sun, Yuchun; Wang, Yong
2015-02-01
The purpose of this study was to establish a depth-control method for enamel-cavity ablation by optimizing the timing of the focal-plane-normal stepping and the single-step size of a three-axis, numerically controlled picosecond laser. Although it has been proposed that picosecond lasers may be used to ablate dental hard tissue, the viability of such a depth-control method in enamel-cavity ablation remains uncertain. Forty-two enamel slices with approximately level surfaces were prepared and subjected to two-dimensional ablation by a picosecond laser. The additive-pulse layer, n, was set to 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, and 70. A three-dimensional microscope was then used to measure the ablation depth, d, to obtain a quantitative function relating n and d. Six enamel slices were then subjected to three-dimensional ablation to produce 10 cavities, with the additive-pulse layer and single-step size set to corresponding values. The difference between the theoretical and measured values was calculated for both the cavity depth and the ablation depth of a single step. These were used to determine minimum-difference values for both the additive-pulse layer (n) and the single-step size (d). When the additive-pulse layer and the single-step size were set to 5 and 45, respectively, the depth error had a minimum of 2.25 μm, and 450 μm deep enamel cavities were produced. When performing three-dimensional ablation of enamel with a picosecond laser, adjusting the timing of the focal-plane-normal stepping and the single-step size allows the ablation-depth error to be controlled to the order of micrometers.
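The calibration step described above (fit d as a function of n, then pick the step setting for a target cavity) can be sketched as follows. The linear model and all numbers are synthetic stand-ins, chosen only so that the worked answer is on the same scale as the reported 5-layer, 45 μm setting.

```python
import numpy as np

# Hypothetical calibration data: additive-pulse layers n vs. measured
# ablation depth d (μm); real values would come from the 3-D microscope.
n = np.arange(5, 75, 5)
d = 9.0 * n + 2.0 + np.random.default_rng(1).normal(0, 1.5, n.size)

# Fit the quantitative function d(n) with a straight line.
slope, intercept = np.polyfit(n, d, 1)

# Choose the layer setting for a 450 μm target cavity machined in 10 steps,
# i.e. 45 μm of material removed per focal-plane step.
target, steps = 450.0, 10
n_star = round((target / steps - intercept) / slope)
print("layers per step:", n_star)
```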
Texas two-step: a framework for optimal multi-input single-output deconvolution.
Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G
2007-11-01
Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
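The two-step reduction can be sketched in the frequency domain: with known blurs, the matched-filter sum of the observations is a sufficient statistic whose effective blur is the sum of the squared blur magnitudes, after which any standard SISO deconvolver applies. The kernels, noise level, and simple Tikhonov regularizer below are illustrative choices, not the paper's wavelet- or curvelet-based solvers.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = np.zeros(N); x[60] = 1.0; x[130] = -0.7          # target signal

# Two known blur kernels (assumed), circular convolution model.
h1 = np.zeros(N); h1[:5] = [1, 4, 6, 4, 1]; h1 /= h1.sum()
h2 = np.zeros(N); h2[:3] = [1, 2, 1];       h2 /= h2.sum()
H1, H2, X = np.fft.fft(h1), np.fft.fft(h2), np.fft.fft(x)
y1 = np.fft.ifft(H1 * X).real + 0.01 * rng.standard_normal(N)
y2 = np.fft.ifft(H2 * X).real + 0.01 * rng.standard_normal(N)

# Step 1: collapse MISO to SISO. The matched-filter sum sum_i conj(H_i) Y_i
# is a sufficient statistic; its effective blur is G = sum_i |H_i|^2.
Ys = np.conj(H1) * np.fft.fft(y1) + np.conj(H2) * np.fft.fft(y2)
G = np.abs(H1) ** 2 + np.abs(H2) ** 2

# Step 2: solve the SISO problem with any standard deconvolver;
# here, a simple Tikhonov-regularized inversion.
lam = 1e-3
x_hat = np.fft.ifft(Ys / (G + lam)).real
print(np.argmax(x_hat), np.argmin(x_hat))
```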
Unusually high critical current of clean P-doped BaFe2As2 single crystalline thin film
NASA Astrophysics Data System (ADS)
Kurth, F.; Tarantini, C.; Grinenko, V.; Hänisch, J.; Jaroszynski, J.; Reich, E.; Mori, Y.; Sakagami, A.; Kawaguchi, T.; Engelmann, J.; Schultz, L.; Holzapfel, B.; Ikuta, H.; Hühne, R.; Iida, K.
2015-02-01
Microstructurally clean, isovalently P-doped BaFe2As2 (Ba-122) single crystalline thin films have been prepared on MgO (001) substrates by molecular beam epitaxy. These films show a superconducting transition temperature (Tc) of over 30 K although P content is around 0.22, which is lower than the optimal one for single crystals (i.e., 0.33). The enhanced Tc at this doping level is attributed to the in-plane tensile strain. The strained film shows high transport self-field critical current densities (Jc) of over 6 MA/cm2 at 4.2 K, which are among the highest for Fe based superconductors (FeSCs). In-field Jc exceeds 0.1 MA/cm2 at μ 0 H = 35 T for H ‖ a b and μ 0 H = 18 T for H ‖ c at 4.2 K, respectively, in spite of moderate upper critical fields compared to other FeSCs with similar Tc. Structural investigations reveal no defects or misoriented grains pointing to strong pinning centers. We relate this unexpected high Jc to a strong enhancement of the vortex core energy at optimal Tc, driven by in-plane strain and doping. These unusually high Jc make P-doped Ba-122 very favorable for high-field magnet applications.
Design and optimization of reverse-transcription quantitative PCR experiments.
Tichopad, Ales; Kitchen, Rob; Riedmaier, Irmgard; Becker, Christiane; Ståhlberg, Anders; Kubista, Mikael
2009-10-01
Quantitative PCR (qPCR) is a valuable technique for accurately and reliably profiling and quantifying gene expression. Typically, samples obtained from the organism of study have to be processed via several preparative steps before qPCR. We estimated the errors of sample withdrawal and extraction, reverse transcription (RT), and qPCR that are introduced into measurements of mRNA concentrations. We performed hierarchically arranged experiments with 3 animals, 3 samples, 3 RT reactions, and 3 qPCRs and quantified the expression of several genes in solid tissue, blood, cell culture, and single cells. A nested ANOVA design was used to model the experiments, and relative and absolute errors were calculated with this model for each processing level in the hierarchical design. We found that intersubject differences became easily confounded by sample heterogeneity for single cells and solid tissue. In cell cultures and blood, the noise from the RT and qPCR steps contributed substantially to the overall error because the sampling noise was less pronounced. We recommend the use of sample replicates in preference to any other replicates when working with solid tissue, cell cultures, and single cells, and the use of RT replicates when working with blood. We show how an optimal sampling plan can be calculated for a limited budget.
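The budget-constrained sampling plan mentioned in the last sentence can be sketched as a search over replicate counts: in a nested design, the variance of the overall mean is the sum of per-level variance components divided by the products of replicate numbers above them. The variance components, unit costs, and budget below are hypothetical.

```python
from itertools import product

# Hypothetical variance components (from a nested ANOVA) and per-unit costs
# for each processing level; real values are assay-specific.
var_sample, var_rt, var_qpcr = 0.40, 0.10, 0.05   # sampling, RT, qPCR noise
cost_sample, cost_rt, cost_qpcr = 10.0, 4.0, 1.0  # cost units per replicate
budget = 100.0

best = None
for ns, nr, nq in product(range(1, 11), repeat=3):
    cost = ns * (cost_sample + nr * (cost_rt + nq * cost_qpcr))
    if cost > budget:
        continue
    # Variance of the overall mean for the nested (hierarchical) design.
    v = var_sample / ns + var_rt / (ns * nr) + var_qpcr / (ns * nr * nq)
    if best is None or v < best[0]:
        best = (v, ns, nr, nq)

print(best)  # lowest-variance plan within budget
```

With these invented numbers the search spends most of the budget on extra samples, mirroring the paper's advice to prioritize sample replicates when sampling noise dominates.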
Wigley, K; Wakelin, S A; Moot, D J; Hammond, S; Ridgway, H J
2016-08-01
The aim of this work was to develop a tool to investigate the influence of soil factors on the carbon utilization activity of single micro-organisms. The assay for Rhizobium leguminosarum bv. trifolii in γ-irradiated soil, using the MicroResp(™) system, was optimized for sterility, incubation time, and moisture level. The optimized method was validated with experiments that assessed (i) differences in C utilization between rhizobia strains and (ii) how this was affected by soil type. Carbon utilization differed among strains of the same species (and symbiovar), but some strains were more responsive to the soil environment than others. This novel modification of the MicroResp(™) system has enabled the carbon-utilization patterns of single bacterial strains, such as Rh. leguminosarum bv. trifolii, to be studied in soil. The system is a new tool with applications in microbial ecology, adaptable to the study of many culturable bacterial and fungal soil-borne taxa. It will allow a micro-organism's ability to utilize common C sources released in rhizosphere exudates to be measured in a physical soil background. This knowledge may improve the selection efficiency and deployment of commercial microbial inoculants. © 2016 The Society for Applied Microbiology.
reaxFF Reactive Force Field for Disulfide Mechanochemistry, Fitted to Multireference ab Initio Data.
Müller, Julian; Hartke, Bernd
2016-08-09
Mechanochemistry, in particular in the form of single-molecule atomic force microscopy experiments, is difficult to model theoretically, for two reasons: Covalent bond breaking is not captured accurately by single-determinant, single-reference quantum chemistry methods, and experimental times of milliseconds or longer are hard to simulate with any approach. Reactive force fields have the potential to alleviate both problems, as demonstrated in this work: Using nondeterministic global parameter optimization by evolutionary algorithms, we have fitted a reaxFF force field to high-level multireference ab initio data for disulfides. The resulting force field can be used to reliably model large, multifunctional mechanochemistry units with disulfide bonds as designed breaking points. Explorative calculations show that a significant part of the time scale gap between AFM experiments and dynamical simulations can be bridged with this approach.
Precision thermometry and the quantum speed limit
NASA Astrophysics Data System (ADS)
Campbell, Steve; Genoni, Marco G.; Deffner, Sebastian
2018-04-01
We assess precision thermometry for an arbitrary single quantum system. For a d-dimensional harmonic system we show that the gap sets a single temperature that can be optimally estimated. Furthermore, we establish a simple linear relationship between the gap and this temperature, and show that the precision exhibits a quadratic relationship. We extend our analysis to explore systems with arbitrary spectra, showing that exploiting anharmonicity and degeneracy can greatly enhance the precision of thermometry. Finally, we critically assess the dynamical features of two thermometry protocols for a two-level system. By calculating the quantum speed limit we find that, despite the gap fixing a preferred temperature to probe, there is no evidence of this emerging in the dynamical features.
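For the two-level case, the preferred probe temperature set by the gap can be located numerically. The sketch below maximizes the (classical) Fisher information of the thermal excited-state population, in units where k_B = ħ = 1 and with an assumed unit gap; it is a simplified stand-in for the full quantum analysis in the paper.

```python
import numpy as np

def fisher_T(T, delta=1.0):
    """Fisher information of temperature for a two-level system with gap
    delta, probed via its thermal excited-state population."""
    p = 1.0 / (1.0 + np.exp(delta / T))          # excited-state population
    dp_dT = (delta / T**2) * p * (1.0 - p)       # d p / d T
    return dp_dT**2 / (p * (1.0 - p))            # binomial Fisher information

T = np.linspace(0.05, 2.0, 4000)
T_opt = T[np.argmax(fisher_T(T))]
print(f"optimal probe temperature ~ {T_opt:.3f} * delta")
```

Because the optimum occurs at a fixed ratio of gap to temperature, the optimal temperature scales linearly with delta, consistent with the linear relationship stated in the abstract.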
Requirements for high-efficiency solar cells
NASA Technical Reports Server (NTRS)
Sah, C. T.
1986-01-01
Minimum recombination and low injection level are essential for high efficiency. Twenty percent AM1 efficiency requires a dark recombination current density of 2 × 10⁻¹³ A/cm² and a recombination center density of less than 10¹⁰ cm⁻³. Recombination mechanisms at thirteen locations in a conventional single crystalline silicon cell design are reviewed. Three additional recombination locations are described at grain boundaries in polycrystalline cells. Material perfection and fabrication process optimization requirements for high efficiency are outlined. Innovative device designs to reduce recombination in the bulk and interfaces of single crystalline cells and in the grain boundaries of polycrystalline cells are reviewed.
Alecu, I M; Zheng, Jingjing; Zhao, Yan; Truhlar, Donald G
2010-09-14
Optimized scale factors for calculating vibrational harmonic and fundamental frequencies and zero-point energies have been determined for 145 electronic model chemistries, including 119 based on approximate functionals depending on occupied orbitals, 19 based on single-level wave function theory, three based on the neglect-of-diatomic-differential-overlap approximation, two based on doubly hybrid density functional theory, and two based on multicoefficient correlation methods. Forty of the scale factors are obtained from large databases, which are also used to derive two universal scale factor ratios that can be used to interconvert between scale factors optimized for various properties, enabling the derivation of three key scale factors at the effort of optimizing only one of them. A reduced scale factor optimization model is formulated in order to further reduce the cost of optimizing scale factors, and the reduced model is illustrated by using it to obtain 105 additional scale factors. Using root-mean-square errors from the values in the large databases, we find that scaling reduces errors in zero-point energies by a factor of 2.3 and errors in fundamental vibrational frequencies by a factor of 3.0, but it reduces errors in harmonic vibrational frequencies by only a factor of 1.3. It is shown that, upon scaling, the balanced multicoefficient correlation method based on coupled cluster theory with single and double excitations (BMC-CCSD) can lead to very accurate predictions of vibrational frequencies. With a polarized, minimally augmented basis set, the density functionals with zero-point energy scale factors closest to unity are MPWLYP1M (1.009), τHCTHhyb (0.989), BB95 (1.012), BLYP (1.013), BP86 (1.014), B3LYP (0.986), MPW3LYP (0.986), and VSXC (0.986).
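The least-squares scale factor underlying such optimizations has a simple closed form: minimizing the sum of squared deviations of scaled computed frequencies from reference values gives λ = Σ(ω_calc·ω_ref) / Σ(ω_calc²). A sketch with hypothetical frequencies (not the paper's databases):

```python
import numpy as np

# Hypothetical frequencies (cm^-1): computed harmonics vs. reference values.
omega_calc = np.array([1650.0, 3100.0, 1200.0, 750.0, 3650.0])
omega_ref  = np.array([1600.0, 2990.0, 1170.0, 730.0, 3520.0])

# Closed-form least-squares scale factor: minimizes sum (lam*calc - ref)^2.
lam = np.sum(omega_calc * omega_ref) / np.sum(omega_calc ** 2)

rmse_raw    = np.sqrt(np.mean((omega_calc - omega_ref) ** 2))
rmse_scaled = np.sqrt(np.mean((lam * omega_calc - omega_ref) ** 2))
print(f"lambda = {lam:.4f}, RMSE {rmse_raw:.1f} -> {rmse_scaled:.1f} cm^-1")
```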
NASA Astrophysics Data System (ADS)
Cheng, Xi; He, Li; Lu, Hongwei; Chen, Yizhong; Ren, Lixia
2016-09-01
A major concern associated with current shale-gas extraction is its high consumption of water resources. However, decision-making problems regarding water consumption and shale-gas extraction have not yet been solved through systematic approaches. This study develops a new bilevel optimization problem based on goals at two different levels: minimization of water demands at the lower level and maximization of system benefit at the upper level. The model is used to solve a real-world case across Pennsylvania and West Virginia. Results show that surface water would be the largest contributor to gas production (over 80.00% from 2015 to 2030) and groundwater would account for the least (less than 2.00% from 2015 to 2030) in both districts over the planning span. Comparative analysis between the proposed model and conventional single-level models indicates that the bilevel model could provide coordinated schemes to comprehensively attain the goals of both water resources authorities and energy sectors. Sensitivity analysis shows that the change of water use per unit gas production (WU) has significant effects upon system benefit, gas production and pollutant (i.e., barium, chloride and bromide) discharge, but does not significantly change water demands.
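The bilevel structure (upper-level benefit maximization wrapped around lower-level water-demand minimization) can be sketched as a toy nested evaluation. Every capacity, price, and water-use number below is invented for illustration and is not taken from the study.

```python
import numpy as np

def lower_level(q, wu=5.0):
    """Lower level: minimize freshwater demand for gas production q by
    preferring reuse, then surface water, then groundwater (toy capacities)."""
    demand = wu * q                        # water use per unit gas * production
    reuse = min(demand, 20.0)              # assumed reuse/recycling capacity
    surface = min(demand - reuse, 300.0)   # assumed surface-water capacity
    ground = demand - reuse - surface      # groundwater takes the remainder
    return surface + ground, (reuse, surface, ground)

def upper_level():
    """Upper level: maximize net benefit over a production grid, with the
    lower level's optimal water split embedded (nested evaluation)."""
    best = None
    for q in np.linspace(0.0, 60.0, 601):
        freshwater, split = lower_level(q)
        benefit = 12.0 * q - 0.3 * freshwater**1.5  # concave water-cost penalty
        if best is None or benefit > best[0]:
            best = (benefit, q, split)
    return best

benefit, q_opt, (reuse, surface, ground) = upper_level()
print(round(q_opt, 1), round(surface, 1))
```

Even in this toy, the coordinated optimum draws on surface water before groundwater, echoing the source mix reported in the abstract.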
SU-D-12A-06: A Comprehensive Parameter Analysis for Low Dose Cone-Beam CT Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W; Southern Medical University, Guangzhou; Yan, H
Purpose: There is always a parameter in compressive sensing based iterative reconstruction (IR) methods for low-dose cone-beam CT (CBCT) that controls the weight of regularization relative to data fidelity. A clear understanding of the relationship between image quality and parameter values is important. The purpose of this study is to investigate this subject based on experimental data and a representative advanced IR algorithm using Tight-frame (TF) regularization. Methods: Three data sets of a Catphan phantom acquired at low, regular and high dose levels are used. For each test, 90 projections covering a 200-degree scan range are used for reconstruction. Three different regions of interest (ROIs) of different contrasts are used to calculate contrast-to-noise ratios (CNR) for contrast evaluation. A single point structure is used to measure the modulation transfer function (MTF) for spatial-resolution evaluation. Finally, we analyze CNRs and MTFs to study the relationship between image quality and parameter selections. Results: It was found that: 1) there is no universal optimal parameter; the optimal parameter value depends on the specific task and dose level. 2) There is a clear trade-off between CNR and resolution; the parameter for the best CNR is always smaller than that for the best resolution. 3) Optimal parameters are also dose-specific; data acquired under a high-dose protocol require less regularization, yielding smaller optimal parameter values. 4) Compared with conventional FDK images, TF-based CBCT images are better under certain optimally selected parameters, and the advantages are more obvious for low-dose data. Conclusion: We have investigated the relationship between image quality and parameter values in the TF-based IR algorithm. Preliminary results indicate optimal parameters are specific to both the task types and dose levels, providing guidance for selecting parameters in advanced IR algorithms.
This work is supported in part by NIH (1R01CA154747-01)
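The CNR-versus-regularization trade-off reported above can be illustrated with a toy sweep, modeling heavier regularization as stronger smoothing. This is a stand-in for the effect of the parameter, not the tight-frame algorithm itself; phantom, noise level, and smoothing kernel are all invented.

```python
import numpy as np
from scipy.signal import convolve2d

def cnr(image, roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(bg)| / std(bg)."""
    return abs(image[roi].mean() - image[background].mean()) / image[background].std()

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[20:30, 20:30] = 1.0   # contrast insert
roi = (slice(20, 30), slice(20, 30))
bg = (slice(40, 60), slice(40, 60))
noisy = truth + 0.3 * rng.standard_normal(truth.shape)

scores = []
for weight in [0.0, 0.5, 2.0]:   # stand-in for the regularization parameter
    # Heavier regularization modeled as a wider averaging kernel.
    k = int(1 + 4 * weight)
    recon = convolve2d(noisy, np.ones((k, k)) / k**2, mode='same', boundary='symm')
    scores.append(cnr(recon, roi, bg))

print([round(s, 1) for s in scores])  # CNR rises as regularization increases
```

The same smoothing that raises CNR blurs the point structure, which is the resolution side of the trade-off the study quantifies with the MTF.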
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Changhoon; Hong, Beom-Ju; Bok, Seoyeon
Purpose: To investigate the serial changes of tumor hypoxia in response to single high-dose irradiation by various clinical and preclinical methods to propose an optimal fractionation schedule for stereotactic ablative radiation therapy. Methods and Materials: Syngeneic Lewis lung carcinomas were grown either orthotopically or subcutaneously in C57BL/6 mice and irradiated with a single dose of 15 Gy to mimic stereotactic ablative radiation therapy used in the clinic. Serial [18F]-misonidazole (F-MISO) positron emission tomography (PET) imaging, pimonidazole fluorescence-activated cell sorting analyses, hypoxia-responsive element-driven bioluminescence, and Hoechst 33342 perfusion were performed before irradiation (day −1), at 6 hours (day 0), and at 2 (day 2) and 6 (day 6) days after irradiation for both subcutaneous and orthotopic lung tumors. For F-MISO, the tumor/brain ratio was analyzed. Results: Hypoxic signals were too low to quantitate for orthotopic tumors using F-MISO PET or hypoxia-responsive element-driven bioluminescence imaging. In subcutaneous tumors, the maximum tumor/brain ratio was 2.87 ± 0.483 at day −1, 1.67 ± 0.116 at day 0, 2.92 ± 0.334 at day 2, and 2.13 ± 0.385 at day 6, indicating that tumor hypoxia was decreased immediately after irradiation and had returned to the pretreatment levels at day 2, followed by a slight decrease by day 6 after radiation. Pimonidazole analysis also revealed similar patterns. Using Hoechst 33342 vascular perfusion dye, CD31, and cleaved caspase 3 co-immunostaining, we found a rapid and transient vascular collapse, which might have resulted in poor intratumor perfusion of the F-MISO PET tracer or pimonidazole delivered at day 0, leading to decreased hypoxic signals at day 0 by PET or pimonidazole analyses.
Conclusions: We found tumor hypoxia levels decreased immediately after delivery of a single dose of 15 Gy, had returned to the pretreatment levels 2 days after irradiation, and had decreased slightly by day 6. Our results indicate that single high-dose irradiation can produce a rapid, but reversible, vascular collapse in tumors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saloman, Edward B.; Kramida, Alexander
2017-08-01
The energy levels, observed spectral lines, and transition probabilities of singly ionized vanadium, V II, have been compiled. The experimentally derived energy levels belong to the configurations 3d⁴, 3d³ns (n = 4, 5, 6), 3d³np, 3d³nd (n = 4, 5), 3d³4f, 3d²4s², and 3d²4s4p. Also included are values for some forbidden lines that may be of interest to the astrophysical community. Experimental Landé g-factors and leading percentages for the levels are included when available, as well as Ritz wavelengths calculated from the energy levels. Wavelengths and transition probabilities are reported for 3568 and 1896 transitions, respectively. From the list of observed wavelengths, 407 energy levels are determined. The observed intensities, normalized to a common scale, are provided. From the newly optimized energy levels, a revised value for the ionization energy is derived, 118,030(60) cm⁻¹, corresponding to 14.634(7) eV. This is 130 cm⁻¹ higher than the previously recommended value from Iglesias et al.
Analysis and optimization of hybrid electric vehicle thermal management systems
NASA Astrophysics Data System (ADS)
Hamut, H. S.; Dincer, I.; Naterer, G. F.
2014-02-01
In this study, the thermal management system of a hybrid electric vehicle is optimized using single- and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single-objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are 0.29, ¢28 h⁻¹ and 77.3 mPts h⁻¹, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to the baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
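A LINMAP-style selection of the kind used above picks the Pareto solution closest to the ideal point in normalized objective space. The Pareto set below is hypothetical (loosely patterned on the abstract's objective types), and the normalization is one common convention, not necessarily the paper's exact implementation.

```python
import numpy as np

# Hypothetical Pareto-front points: (exergy efficiency [max], cost rate [min],
# environmental impact rate [min]); values are illustrative only.
pareto = np.array([
    [0.29, 28.0, 77.3],
    [0.33, 29.5, 88.0],
    [0.31, 26.6, 73.4],
    [0.32, 27.5, 81.0],
])

# Normalize each objective, orient everything as "maximize", form the ideal
# point (best value per objective), and pick the closest Pareto solution.
signs = np.array([1.0, -1.0, -1.0])                   # +1 maximize, -1 minimize
z = signs * pareto / np.linalg.norm(pareto, axis=0)
ideal = z.max(axis=0)
best = int(np.argmin(np.linalg.norm(z - ideal, axis=1)))
print("selected design:", pareto[best])
```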
Morovati, Amirhosein; Ghaffari, Alireza; Erfani Jabarian, Lale; Mehramizi, Ali
2017-01-01
Guaifenesin, a highly water-soluble active ingredient (50 mg/mL), is classified as a BCS class I drug. Owing to its poor flowability and compressibility, formulating tablets, especially high-dose ones, may be a challenge, and direct compression may not be feasible. The bilayer tablet technology applied to Mucinex® faces challenges in delivering a robust formulation. To overcome the challenges of bilayer-tablet manufacturing and poor powder compressibility, an optimized single-layer tablet prepared from a binary mixture (two-in-one), mimicking the dual drug-release character of Mucinex®, was proposed. A 3-factor, 3-level Box-Behnken design was applied to optimize seven dependent variables (% release at 1, 2, 4, 6, 8, 10, and 12 h) with respect to the levels of the independent ones (X1: cetyl alcohol, X2: Starch 1500®, X3: HPMC K100M amounts). Two granule portions were prepared by melt and wet granulation and blended together prior to compression. An optimum formulation was obtained (X1: 37.10, X2: 2, X3: 42.49 mg), with a desirability function of 0.616. The f2 and f1 values between the release profiles of Mucinex® and the optimum formulation were 74 and 3, respectively. An n-value of about 0.5 for both the optimum and Mucinex® formulations indicated a diffusion-controlled (Fickian) release mechanism. However, increasing HPMC K100M to 70 mg together with cetyl alcohol at 60 mg led to first-order kinetics (n = 0.6962). K values of 1.56 indicated identical burst drug release. Cetyl alcohol and Starch 1500® modulated guaifenesin release from the HPMC K100M matrices and, owing to their binding properties, also improved its poor flowability and compressibility.
Ando, Wataru; Kutcher, Josh J; Krawetz, Roman; Sen, Arindom; Nakamura, Norimasa; Frank, Cyril B; Hart, David A
2014-06-01
Previous studies have demonstrated that porcine synovial membrane stem cells can adhere to a cartilage defect in vivo through the use of a tissue-engineered construct approach. To optimize this model, we compared tissue sources to determine whether porcine synovial fluid, synovial membrane, bone marrow, and skin replicate our understanding of human synovial fluid mesenchymal stromal cells or mesenchymal progenitor cells at both the population level and the single-cell level. Synovial fluid clones were subsequently isolated and characterized to identify cells with a highly characterized optimal phenotype. The chondrogenic, osteogenic, and adipogenic potentials were assessed in vitro for skin-, bone marrow-, adipose-, synovial fluid- and synovial membrane-derived stem cells. Synovial fluid cells then underwent limiting dilution analysis to isolate single clonal populations. These clonal populations were assessed for proliferative and differentiation potential using standardized protocols. Porcine-derived cells demonstrated the same relationship between cell sources as that demonstrated previously for humans, suggesting that the pig may be an ideal preclinical animal model. Synovial fluid cells demonstrated the highest chondrogenic potential and were further characterized, demonstrating the existence of a unique clonal phenotype with enhanced chondrogenic potential. Porcine stem cells demonstrate characteristics similar to those of human-derived mesenchymal stromal cells from the same sources. Synovial fluid-derived stem cells possess an inherent phenotype that may be optimal for cartilage repair; this must be investigated more fully before future use in the in vivo tissue-engineered construct approach in this physiologically relevant preclinical porcine model. Copyright © 2014 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.
Yu, Xiang; Zhang, Xueqing
2017-01-01
Comprehensive learning particle swarm optimization (CLPSO) is a powerful state-of-the-art single-objective metaheuristic. Extending from CLPSO, this paper proposes multiswarm CLPSO (MSCLPSO) for multiobjective optimization. MSCLPSO involves multiple swarms, with each swarm associated with a separate original objective. Each particle's personal best position is determined just according to the corresponding single objective. Elitists are stored externally. MSCLPSO differs from existing multiobjective particle swarm optimizers in three aspects. First, each swarm focuses on optimizing the associated objective using CLPSO, without learning from the elitists or any other swarm. Second, mutation is applied to the elitists and the mutation strategy appropriately exploits the personal best positions and elitists. Third, a modified differential evolution (DE) strategy is applied to some extreme and least crowded elitists. The DE strategy updates an elitist based on the differences of the elitists. The personal best positions carry useful information about the Pareto set, and the mutation and DE strategies help MSCLPSO discover the true Pareto front. Experiments conducted on various benchmark problems demonstrate that MSCLPSO can find nondominated solutions distributed reasonably over the true Pareto front in a single run.
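The comprehensive-learning velocity update at the heart of CLPSO can be sketched as follows. This is a minimal single-particle illustration; the exemplar-selection probabilities, bounds handling, and the multiswarm, mutation, and DE machinery of MSCLPSO are omitted:

```python
import random

def clpso_velocity(v, x, pbests, exemplar, w=0.7, c=1.5, rng=random):
    """One comprehensive-learning velocity update for a single particle.

    v, x: the particle's current velocity and position.
    pbests: personal best positions of all particles in the swarm.
    exemplar: for each dimension d, the index of the particle whose
    personal best this particle learns from in that dimension (its own
    index or another particle's, as assigned by CLPSO's learning rule).
    """
    return [w * v[d] + c * rng.random() * (pbests[exemplar[d]][d] - x[d])
            for d in range(len(x))]

# Dimension 0 learns from particle 0's best, dimension 1 from particle 1's.
v_new = clpso_velocity([0.0, 0.0], [1.0, 1.0],
                       [[0.0, 0.0], [2.0, 2.0]], [0, 1])
```

The key difference from canonical PSO is visible in the single attraction term: each dimension pulls toward a potentially different particle's personal best, rather than toward one global best.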
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibilities that deteriorate the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed algorithm found better optimal conditions for the complex mixed refrigerant natural gas liquefaction process than existing methodologies. By applying it, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.
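The coordinate-descent core of such an algorithm can be sketched as follows. This is a generic illustration on an analytic test function, not the authors' hybrid modified variant or their Aspen Hysys® coupling; in the real process, `f` would be a simulator call returning, say, specific compression power:

```python
def coordinate_descent(f, x0, step=0.5, tol=1e-6, max_iter=200):
    """Minimize f by adjusting one decision variable at a time.

    A plain coordinate-descent sketch: sweep over coordinates, try a
    move of +/- step in each, keep improvements, and halve the step
    whenever a full sweep yields no improvement.
    """
    x = list(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    improved = True
        if not improved:
            step *= 0.5          # shrink the step when no move helps
            if step < tol:
                break
    return x

# Quadratic test bowl with its minimum at (1, -2).
xmin = coordinate_descent(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                          [0.0, 0.0])
```

Because each iteration changes a single variable, the method needs no gradients from the process simulator, which is what makes it attractive for black-box flowsheet optimization.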
NASA Astrophysics Data System (ADS)
Mailhot, Jason M.; Garnick, Jerry J.
1996-04-01
The purpose of our research is to determine the effects of a KTP laser on root cementum and fibroblast attachment. Initial work has been completed in testing the effect of different energy levels on root surfaces, and from these studies optimal energy levels were determined. In subsequent studies, the working distance and exposure time required to obtain significant fibroblast attachment to healthy cementum surfaces were investigated. Results showed that lased cemental surfaces exhibited changes in surface topography ranging from a melted surface to an apparent slight fusion of the surface of the covering smear layer. When the optimal energy level was used, fibroblasts demonstrated attachment on the specimens, resulting in a monolayer of cells on the control surfaces as well as on the surfaces lased at this energy level. The present study investigates the treatment of pathological root surfaces and calculus with a KTP laser utilizing these optimal parameters determined previously. Thirty single-rooted teeth with advanced periodontal disease and ten healthy teeth were obtained; crowns were sectioned and roots split longitudinally. Forty test specimens were assigned to one of four groups: pathologic root, not lased; pathologic root, lased; root-planed root; and healthy root-planed root. Human gingival fibroblasts were seeded on specimens and cultured for 24 hours. Specimens were processed for SEM. The findings suggest that when the KTP laser at a predetermined energy level was applied to pathological root surfaces, the lased surfaces provided an unacceptable surface for fibroblast attachment. However, the procedural control using healthy root-planed surfaces did demonstrate fibroblast attachment.
Kosovac, D; Wild, J; Ludwig, C; Meissner, S; Bauer, A P; Wagner, R
2011-02-01
Advanced gene delivery techniques can be combined with rational gene design to further improve the efficiency of plasmid DNA (pDNA)-mediated transgene expression in vivo. Herein, we analyzed the influence of intragenic sequence modifications on transgene expression in vitro and in vivo using murine erythropoietin (mEPO) as a transgene model. A single electro-gene transfer of an RNA- and codon-optimized mEPOopt gene into skeletal muscle resulted in a 3- to 4-fold increase of mEPO production sustained for >1 year and triggered a significant increase in hematocrit and hemoglobin without causing adverse effects. mEPO expression and hematologic levels were significantly lower when using comparable amounts of the wild type (mEPOwt) gene and only marginal effects were induced by mEPOΔCpG lacking intragenic CpG dinucleotides, even at high pDNA amounts. Corresponding with these observations, in vitro analysis of transfected cells revealed a 2- to 3-fold increased (mEPOopt) and 50% decreased (mEPOΔCpG) erythropoietin expression compared with mEPOwt, respectively. RNA analyses demonstrated that the specific design of the transgene sequence influenced expression levels by modulating transcriptional activity and nuclear plus cytoplasmic RNA amounts rather than translation. In sum, whereas CpG depletion negatively interferes with efficient expression in postmitotic tissues, mEPOopt doses <0.5 μg were sufficient to trigger optimal long-term hematologic effects encouraging the use of sequence-optimized transgenes to further reduce effective pDNA amounts.
Baur, Kilian; Wolf, Peter; Riener, Robert; Duarte, Jaime E
2017-07-01
Multiplayer environments are thought to increase training intensity in robot-aided rehabilitation therapy after stroke. We developed a haptic-based environment to investigate the dynamics of two-player training performing time-constrained reaching movements using the ARMin rehabilitation robot. We implemented a challenge-level adaptation algorithm that controlled a virtual damping coefficient to reach a desired success rate. We tested the algorithm's effectiveness in regulating the success rate during game play in a simulation with computer-controlled players, in a feasibility study with six unimpaired players, and in a single session with one stroke patient. The algorithm demonstrated its capacity to adjust the damping coefficient to reach three levels of success rate (low [50%], moderate [70%], and high [90%]) during single-player and multiplayer training. For the patient, tested in single-player mode at the moderate success rate only, the algorithm also showed promising behavior. Results of the feasibility study suggest that, to increase a player's willingness to play at a more challenging task condition, the effect of challenge-level adaptation, whether in single-player or multiplayer mode, may be more important than the provision of a multiplayer setting alone. Furthermore, the multiplayer setting tends to be a motivating and encouraging therapy component. Based on these results we will optimize and expand the multiplayer training platform and further investigate multiplayer settings in stroke therapy.
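A success-rate-driven damping adjustment of the kind described can be sketched as a simple proportional rule. The paper does not specify its control law, so the update rule, gain, and bounds below are illustrative assumptions:

```python
def adapt_damping(b, success, target, gain=0.05, b_min=0.0, b_max=10.0):
    """Adjust a virtual damping coefficient toward a target success rate.

    If the player's measured success rate exceeds the target, damping
    is increased to make the time-constrained reaches harder; if it
    falls short, damping is decreased. The result is clamped to a
    physically sensible range.
    """
    b += gain * (success - target)
    return min(max(b, b_min), b_max)

# After a trial block with 80% success against a 70% target, damping rises.
b_new = adapt_damping(1.0, success=0.8, target=0.7)
```

Repeating this update after every block of trials drives the observed success rate toward the configured level (50%, 70%, or 90% in the study).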
Titanium dioxide antireflection coating for silicon solar cells by spray deposition
NASA Technical Reports Server (NTRS)
Kern, W.; Tracy, E.
1980-01-01
A high-speed production process is described for depositing a single-layer, quarter-wavelength-thick antireflection coating of titanium dioxide on metal-patterned single-crystal silicon solar cells for terrestrial applications. Controlled atomization spraying of an organotitanium solution was selected as the most cost-effective method of film deposition using commercial automated equipment. The optimal composition consists of titanium isopropoxide as the titanium source, n-butyl acetate as the diluent solvent, sec-butanol as the leveling agent, and 2-ethyl-1-hexanol to render the material uniformly depositable. Application of the process to the coating of circular, large-diameter solar cells with either screen-printed silver metallization or vacuum-evaporated Ti/Pd/Ag metallization showed increases of over 40% in electrical conversion efficiency. Optical characteristics, corrosion resistance, and several other important properties of the spray-deposited film are reported. Experimental evidence indicates that overall cell efficiency tolerates a wide variation in coating thickness. Considerations pertaining to the optimization of AR coatings in general are discussed, and a comprehensive critical survey of the literature is presented.
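The quarter-wavelength design rule behind such a coating can be illustrated as follows, using nominal refractive indices (TiO2 ≈ 2.0, silicon ≈ 3.9) and a 600 nm design wavelength that are assumptions for illustration, not values from the report:

```python
def quarter_wave_thickness(wavelength_nm, n_film):
    """Physical thickness giving a quarter-wave optical thickness."""
    return wavelength_nm / (4.0 * n_film)

def reflectance_at_design(n0, n_film, n_sub):
    """Normal-incidence reflectance at the design wavelength for a
    single lossless quarter-wave layer between media n0 and n_sub."""
    r = (n0 * n_sub - n_film ** 2) / (n0 * n_sub + n_film ** 2)
    return r * r

# Illustrative values: TiO2 film on silicon in air, designed for 600 nm.
t = quarter_wave_thickness(600.0, 2.0)       # physical thickness in nm
R = reflectance_at_design(1.0, 2.0, 3.9)     # coated surface
R_bare = ((1.0 - 3.9) / (1.0 + 3.9)) ** 2    # bare-silicon reference
```

With these nominal indices the coated reflectance at the design wavelength drops from roughly 35% (bare silicon) to well under 1%, which is why a single quarter-wave TiO2 layer is so effective; the reported thickness tolerance follows from the slow variation of this reflectance away from the exact quarter-wave condition.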
Long-Acting Phospholipid Gel of Exenatide for Long-Term Therapy of Type II Diabetes.
Hu, Mei; Zhang, Yu; Xiang, Nanxi; Zhong, Ying; Gong, Tao; Zhang, Zhi-Rong; Fu, Yao
2016-06-01
This study aimed to develop a sustained-release formulation of exenatide (EXT) for long-term therapeutic efficacy in the treatment of type II diabetes. We present an injectable phospholipid gel prepared by mixing biocompatible phospholipid S100 and medium-chain triglyceride (MCT) with 85% (w/w) ethanol. A systematic pre-formulation study was carried out to improve the stability of EXT during formulation fabrication. With the optimized formulation, pharmacokinetic profiles in rats were studied, and two diabetic animal models were employed to evaluate the therapeutic effect of the EXT phospholipid gel given as a single subcutaneous injection versus repeated injections of normal saline and EXT solution. Sustained release of exenatide in vivo over three consecutive weeks was observed after a single subcutaneous injection. Moreover, the pharmacodynamic study in the two diabetic models demonstrated that the gel formulation achieved a hypoglycemic effect and blood glucose control comparable to the exenatide solution-treated group. EXT-loaded phospholipid gel represents a promising controlled-release system for the long-term therapy of type II diabetes.
Ultralow-phase-noise oscillators based on BAW resonators.
Li, Mingdong; Seok, Seonho; Rolland, Nathalie; Rolland, Paul; El Aabbaoui, Hassan; de Foucauld, Emeric; Vincent, Pierre; Giordano, Vincent
2014-06-01
This paper presents two 2.1-GHz low-phase noise oscillators based on BAW resonators. Both a single-ended common base structure and a differential Colpitts structure have been implemented in a 0.25-μm BiCMOS process. The detailed design methods including the realization, optimization, and test are reported. The differential Colpitts structure exhibits a phase noise 6.5 dB lower than the single-ended structure because of its good performance of power noise immunity. Comparison between the two structures is also carried out. The differential Colpitts structure shows a phase noise level of -87 dBc/Hz at 1-kHz offset frequency and a phase noise floor of -162 dBc/Hz, with an output power close to -6.5 dBm and a core consumption of 21.6 mW. Furthermore, with the proposed optimization methods, both proposed devices have achieved promising phase noise performance compared with state-of-the-art oscillators described in the literature. Finally, we briefly present the application of the proposed BAW oscillator to a micro-atomic clock.
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Sellappan, R.
1977-01-01
The problem of optimal control with a minimum-time criterion, as applied to a single-boom system for achieving two-axis control, is discussed. The special case is analyzed in which the initial conditions are such that the system can be driven to the equilibrium state with only a single switching maneuver in the bang-bang optimal sequence. The system responses are presented. Application of the linear regulator problem to the optimal control of the telescoping system is extended to consider the effects of measurement and plant noise. The noise uncertainties are included through an application of the estimator (Kalman filter) problem. Different schemes for measuring the components of the angular velocity are considered. Analytical results are obtained for special cases, and numerical results are presented for the general case.
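For intuition, the single-switch bang-bang logic can be illustrated on the textbook double integrator. The switching-curve law below is the standard minimum-time result for that toy plant, not the boom system's actual two-axis dynamics:

```python
def bang_bang_u(x1, x2):
    """Minimum-time bang-bang control for the double integrator
    x1' = x2, x2' = u, |u| <= 1 (classical switching-curve law).
    """
    s = x1 + 0.5 * x2 * abs(x2)   # switching function
    if s > 0.0:
        return -1.0
    if s < 0.0:
        return 1.0
    # On the switching curve: ride the final control segment to the origin,
    # so no further switch is needed (the single-switch special case).
    return -1.0 if x2 > 0.0 else (1.0 if x2 < 0.0 else 0.0)

# Starting from rest at x1 = 1, the optimal sequence is u = -1,
# one switch on the curve, then u = +1 to the origin.
u0 = bang_bang_u(1.0, 0.0)
```

Initial states lying exactly on the switching curve reach equilibrium with the current control and no switch at all, which is the sense in which only a single switching maneuver (or none) occurs.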
Mounting evidence favoring single-family room neonatal intensive care.
Stevens, D; Thompson, P; Helseth, C; Pottala, J
2015-01-01
Controversy regarding the optimal design for neonatal intensive care has existed for more than 20 years. Recent evidence confirms that in comparison with the traditional open-bay design, the single-room facility provides for improved control of excessive noise and light, improved staff and parental satisfaction with care and equal, or possibly reduced, cost of care. Single-room care was not associated with any increase in adverse outcomes. To optimize long term developmental outcomes, single-room care must be augmented with appropriate developmental therapy and programs to actively support parental involvement.
NASA Astrophysics Data System (ADS)
Svensson, Mats; Humbel, Stéphane; Morokuma, Keiji
1996-09-01
The integrated MO+MO (IMOMO) method, recently proposed for geometry optimization, is tested for accurate single point calculations. The principal idea of the IMOMO method is to reproduce the results of a high level MO calculation for a large "real" system by dividing it into a small "model" system and the rest, and applying different levels of MO theory to the two parts. Test examples are the activation barrier of the SN2 reaction of Cl- + alkyl chlorides, the C=C double bond dissociation of olefins, and the energy of reaction for epoxidation of benzene. The effects of basis set and method in the lower level calculation, as well as the effects of the choice of model system, are investigated in detail. The IMOMO method gives an approximation to the high level MO energetics on the real system, in most cases with very small errors, at a small additional cost over the low level calculation. For instance, when the MP2 (Møller-Plesset second-order perturbation) method is used as the lower level method, the IMOMO method reproduces the results of a very high level MO method to within 2 kcal/mol, with less than 50% additional computer time, for the first two test examples. When the HF (Hartree-Fock) method is used as the lower level method, it is less accurate and depends more on the choice of model system, though the improvement over the HF energy is still very significant. Thus the IMOMO single point calculation provides a method for obtaining reliable local energetics such as bond energies and activation barriers for a large molecular system.
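The IMOMO energy combination is a simple subtractive scheme, which can be written out as follows; the numerical values are illustrative placeholders, not results from the paper:

```python
def imomo_energy(e_high_model, e_low_model, e_low_real):
    """IMOMO single-point energy: the high-level result on the small
    model system, corrected by the low-level description of the rest:

        E(IMOMO) = E_high(model) + [E_low(real) - E_low(model)]

    Only the small model system ever sees the expensive high-level
    method, which is where the cost saving comes from.
    """
    return e_high_model + (e_low_real - e_low_model)

# Illustrative energies in hartree (placeholders, not from the paper):
e = imomo_energy(-40.350, -40.200, -118.950)
```

The bracketed difference cancels the low-level model-system energy, so errors of the low-level method largely cancel between the real and model systems, provided the model captures the chemically active region.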
Cost optimization in low volume VLSI circuits
NASA Technical Reports Server (NTRS)
Cook, K. B., Jr.; Kerns, D. V., Jr.
1982-01-01
The relationship of integrated circuit (IC) cost to electronic system cost is developed using models for integrated circuit cost which are based on the design/fabrication approach. Emphasis is on understanding the relationship between cost and volume for custom circuits suitable for NASA applications. In this report, reliability is a major consideration in the models developed. Results are given for several typical IC designs using off-the-shelf, full-custom, and semicustom ICs with single- and double-level metallization.
[Autocontrol of muscle relaxation with vecuronium].
Sibilla, C; Zatelli, R; Marchi, M; Zago, M
1990-01-01
The optimal conditions for maintaining desired levels of muscle relaxation with vecuronium are obtained by means of the continuous intravenous infusion technique. Frequent correction of the infusion rate is required, since the exact amount of muscle relaxant needed cannot be predicted in an individual case. To overcome these limits, the authors propose a practical infusion system for the automatic control of muscle relaxation, and they view its possible daily clinical application favorably.
System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO
NASA Technical Reports Server (NTRS)
Olds, John R.
1994-01-01
This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.
Cell wall-bound silicon optimizes ammonium uptake and metabolism in rice cells.
Sheng, Huachun; Ma, Jie; Pu, Junbao; Wang, Lijun
2018-05-16
Turgor-driven plant cell growth depends on cell wall structure and mechanics. Strengthening of cell walls on the basis of an association and interaction with silicon (Si) could lead to improved nutrient uptake and optimized growth and metabolism in rice (Oryza sativa). However, the structural basis and physiological mechanisms of nutrient uptake and metabolism optimization under Si assistance remain obscure. Single-cell level biophysical measurements, including in situ non-invasive micro-testing (NMT) of NH4+ ion fluxes, atomic force microscopy (AFM) of cell walls, and electrolyte leakage and membrane potential, as well as whole-cell proteomics using isobaric tags for relative and absolute quantification (iTRAQ), were performed. The altered cell wall structure increases the uptake rate of the main nutrient NH4+ in Si-accumulating cells, whereas the rate is only half in Si-deprived counterparts. Rigid cell walls enhanced by a wall-bound form of Si as the structural basis stabilize cell membranes. This, in turn, optimizes nutrient uptake of the cells in the same growth phase without any requirement for up-regulation of transmembrane ammonium transporters. Optimization of cellular nutrient acquisition strategies can substantially improve performance in terms of growth, metabolism and stress resistance.
Yang, Qingbo; Wang, Hanzheng; Lan, Xinwei; Cheng, Baokai; Chen, Sisi; Shi, Honglan; Xiao, Hai; Ma, Yinfa
2015-02-01
pH sensing at the single-cell level without negatively affecting living cells is very important but remains an open issue in biomedical studies. A 70 μm reflection-mode fiber-optic micro-pH sensor was designed and fabricated by dip-coating a thin layer of organically modified aerogel onto a tapered spherical probe head. The pH-sensitive fluorescent dye 2',7'-bis(2-carbonylethyl)-5(6)-carboxyfluorescein (BCECF) was employed and covalently bonded within the aerogel networks. By tuning the alkoxide mixing ratio and adjusting the hexamethyldisilazane (HMDS) priming procedure, the sensor can be optimized for high stability and pH sensing ability. The in vitro real-time sensing capability was then demonstrated in a simple spectroscopic way and showed linear measurement responses with a pH resolution averaging 0.049 pH units within a narrow but biologically meaningful pH range of 6.12-7.81. Its high spatial resolution, reflection-mode operation, fast response, high stability, and linear response within a biologically meaningful pH range, together with its high pH resolution, make this novel pH probe a very cost-effective tool for chemical/biological sensing, especially in single-cell level research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aramaki, Takeshi, E-mail: t.aramaki@scchr.jp; Moriguchi, Michihisa, E-mail: m.moriguchi@scchr.jp; Bekku, Emima, E-mail: e.bekku@scchr.jp
2015-02-15
Purpose: To assess the optimal bed-rest duration after vascular intervention by way of the common femoral artery using 3F introducer sheaths. Materials and Methods: Eligibility criteria for this single-center, prospective study included clinically necessary angiography, no coagulopathy or anticoagulant therapy, no hypersensitivity to contrast medium, age >20 years, and written informed consent. Enrolled patients were assigned to one of three groups (105/group) with the duration of bed rest decreased sequentially. A sheath was inserted by way of the common femoral artery using the Seldinger technique. The first group (level 1) received 3 h of bed rest after the vascular intervention. If no bleeding or hematomas developed, the next group (level 2) received 2.5 h of bed rest. If still no bleeding or hematomas developed, the final group (level 3) received 2 h of bed rest. If any patient had bleeding or hematomas after bed rest, the study was terminated, and the bed rest of the preceding level was considered the optimal duration. Results: A total of 105 patients were enrolled at level 1 between November 2010 and September 2011. Eight patients were excluded from analysis because cessation of bed rest was delayed. None of the remaining subjects experienced postoperative bleeding; therefore, patient enrollment at level 2 began in September 2011. However, puncture site bleeding occurred in the 52nd patient immediately after cessation of bed rest, necessitating study termination. Conclusion: To prevent bleeding, at least 3 h of postoperative bed rest is recommended for patients undergoing angiography using 3F sheaths.
Song, Do Kyeong; Oh, Jee-Young; Lee, Hyejin; Sung, Yeon-Ah
2017-01-01
Background/Aims: Although an increased serum anti-Müllerian hormone (AMH) level has been suggested as a surrogate marker of polycystic ovarian morphology (PCOM), its association with polycystic ovary syndrome (PCOS) is controversial, and its diagnostic value has not been determined. We aimed to observe the relationship between the AMH level and PCOS phenotypes and to determine the optimal cutoff value of AMH for the diagnosis of PCOS in young Korean women. Methods: We recruited 207 women with PCOS (120 with PCOM and 87 without PCOM) and 220 regularly cycling women with normoandrogenemia (100 with PCOM and 120 without PCOM). Subjects underwent testing at a single outpatient visit, and serum AMH level was measured. Results: Women with PCOS had higher serum AMH levels than regularly cycling women with normoandrogenemia (p < 0.05). Women with PCOM had higher serum AMH levels than women without PCOM, regardless of PCOS status (p < 0.05). The optimal AMH cutoff value for the diagnosis of PCOS was 10.0 ng/mL (71% sensitivity, 93% specificity). Serum AMH was an independent determinant of total testosterone after adjustment for age, body mass index, and the number of menses/year (β = 0.31, p < 0.01). An association between AMH and hyperandrogenism was observed only in women with PCOS, and it was independent of the presence of PCOM. Conclusion: The serum AMH level can be useful for the diagnosis of PCOS at any age less than 40 years, and the optimal cutoff value for the diagnosis of PCOS identified in this study of young Korean women was 10.0 ng/mL. PMID:27899014
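Cutoff selection of this kind is often done by maximizing Youden's J = sensitivity + specificity - 1 over candidate thresholds on an ROC curve. The sketch below is generic; the paper does not state its exact criterion, and the marker values are toy numbers, not study data:

```python
def best_cutoff(values_pos, values_neg, candidates):
    """Choose the cutoff maximizing Youden's J over candidate thresholds.

    values_pos: marker levels in cases (e.g., women with PCOS).
    values_neg: marker levels in controls.
    A subject is called positive when its value is >= the cutoff.
    """
    best, best_j = None, -1.0
    for c in candidates:
        sens = sum(v >= c for v in values_pos) / len(values_pos)
        spec = sum(v < c for v in values_neg) / len(values_neg)
        j = sens + spec - 1.0
        if j > best_j:
            best, best_j = c, j
    return best, best_j

# Toy data: cases tend to have higher marker levels than controls.
cut, j = best_cutoff([12, 15, 9, 11], [4, 6, 8, 11],
                     candidates=[5, 9, 10, 12])
```

In practice every observed marker value is tried as a candidate threshold, and the reported sensitivity/specificity pair (here 71%/93% at 10.0 ng/mL) corresponds to the chosen point on the ROC curve.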
Focusing light through random photonic layers by four-element division algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zhang, Xicheng; Zuo, Haoyi; Pang, Lin
2018-02-01
The propagation of waves in turbid media is a fundamental problem of optics with vast applications. Optical phase optimization approaches for focusing light through turbid media using phase control algorithms have been widely studied in recent years, owing to the rapid development of spatial light modulators. Existing approaches fall into element-based algorithms (the stepwise sequential and continuous sequential algorithms) and whole-element optimization approaches (the partitioning algorithm, the transmission matrix approach, and genetic algorithms). The advantage of element-based approaches is that the phase contribution of each element is very clear; however, because the intensity contribution of each element to the focal point is small, especially for a large number of elements, determining the optimal phase for a single element is difficult. In other words, the signal-to-noise ratio of the measurement is low, which can trap the optimization in local maxima. In whole-element optimization approaches, all elements are employed in the optimization, so the signal-to-noise ratio is improved. However, because more randomness is introduced into the process, these optimizations take longer to converge than single-element-based approaches. Combining the advantages of both families, we propose the four-element division algorithm (FEDA). Comparisons with existing approaches show that FEDA takes only one third of the measurement time to reach the optimum, which makes FEDA promising for practical applications such as deep-tissue imaging.
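The stepwise sequential idea that FEDA builds on can be sketched as follows. This is a generic illustration, not the authors' FEDA implementation: the complex `field_contribs` stand in for the physical field each SLM element delivers to the focus through the medium, and the intensity function stands in for a detector measurement:

```python
import cmath
import math

def optimize_phases(field_contribs, phase_steps=8):
    """Stepwise sequential optimization: for each element in turn, try a
    discrete set of phases and keep the one maximizing focal intensity.
    """
    phases = [0.0] * len(field_contribs)

    def focal_intensity():
        total = sum(c * cmath.exp(1j * p)
                    for c, p in zip(field_contribs, phases))
        return abs(total) ** 2

    for i in range(len(phases)):
        best_p, best_val = phases[i], focal_intensity()
        for k in range(phase_steps):
            phases[i] = 2 * math.pi * k / phase_steps
            trial = focal_intensity()
            if trial > best_val:
                best_p, best_val = phases[i], trial
        phases[i] = best_p
    return phases, focal_intensity()

# Four elements with arbitrary transmission phases; all amplitudes are 1,
# so the theoretical maximum focal intensity is 4**2 = 16.
contribs = [cmath.exp(1j * t) for t in (0.0, 1.0, 2.0, 3.0)]
phases, intensity = optimize_phases(contribs)
```

After one pass each element is phase-aligned to within half a phase step of the running total, so the focal intensity approaches the fully coherent maximum; the weak per-element signal this loop must resolve is exactly the noise problem the paper's grouped-element approach addresses.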
The fractionated dipole antenna: A new antenna for body imaging at 7 Tesla.
Raaijmakers, Alexander J E; Italiaander, Michel; Voogt, Ingmar J; Luijten, Peter R; Hoogduin, Johannes M; Klomp, Dennis W J; van den Berg, Cornelis A T
2016-03-01
Dipole antennas in ultrahigh field MRI have demonstrated advantages over more conventional designs. In this study, the fractionated dipole antenna is presented: a dipole where the legs are split into segments that are interconnected by capacitors or inductors. A parameter study has been performed on dipole antenna length using numerical simulations. A subsequent simulation study investigates the optimal intersegment capacitor/inductor value. The resulting optimal design has been constructed and compared to a previous design, the single-side adapted dipole (SSAD) by simulations and measurements. An array of eight elements has been constructed for prostate imaging on four subjects (body mass index 20-27.5) using 8 × 2 kW amplifiers. For prostate imaging at 7T, lowest peak local specific-absorption rate (SAR) levels are achieved if the antenna is 30 cm or longer. A fractionated dipole antenna design with inductors between segments has been chosen to achieve even lower SAR levels and more homogeneous receive sensitivities. With the new design, good quality prostate images are acquired. SAR levels are reduced by 41% to 63% in comparison to the SSAD. Coupling levels are moderate (average nearest neighbor: -14.6 dB) for each subject and prostate B1+ levels range from 12 to 18 μT. © 2015 Wiley Periodicals, Inc.
Designing Industrial Networks Using Ecological Food Web Metrics.
Layton, Astrid; Bras, Bert; Weissburg, Marc
2016-10-18
Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) has traditionally focused on the unit operation and single product level. In contrast, this paper describes how principles of network organization derived from analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost and emissions optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and can provide guidelines for designing sustainable industrial networks.
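Two of the standard structural food web metrics that such analyses draw on are connectance (links per species squared) and linkage density (links per species); whether these exact two were among the study's chosen parameters is not stated here, so treat the sketch as a generic illustration on an invented 4-actor flow matrix.

```python
# Illustrative sketch: structural food-web metrics computed on a directed
# adjacency matrix, where adj[i][j] = 1 if material flows from actor i to
# actor j. The network below is hypothetical.

def connectance(adj):
    """Links / S^2, the fraction of possible directed links realized."""
    s = len(adj)
    links = sum(sum(row) for row in adj)
    return links / s ** 2

def linkage_density(adj):
    """Links / S, the average number of links per actor."""
    s = len(adj)
    links = sum(sum(row) for row in adj)
    return links / s

adj = [
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
]
```

In a design optimization, metrics like these become objective terms or constraints driving the network toward ecosystem-like organization.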
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loflin, Leonard
Through this grant, the U.S. Department of Energy (DOE) will review several functional areas within a nuclear power plant, including fire protection, operations and operations support, refueling, training, procurement, maintenance, site engineering, and others. Several functional areas need to be examined, since there appears to be no single staffing area or approach that alone has the potential for significant staff optimization at new nuclear power plants. Several of the functional areas will require a review of technology options that may be applied to support optimization, such as automation, remote monitoring, fleet-wide monitoring, new and specialized instrumentation, human factors engineering, risk-informed analysis and PRAs, component and system condition monitoring and reporting, just-in-time training, electronic and automated procedures, and electronic tools for configuration management and license and design basis information. Additionally, the project will require a review of key regulatory issues that affect staffing and could be optimized with additional technology input. Opportunities to further optimize staffing levels and staffing functions through the selection of design attributes of physical systems and structures also need to be identified. A goal of this project is to develop a prioritized assessment of the functional areas, and of the R&D actions needed for those functional areas, to provide the best optimization.
NASA Astrophysics Data System (ADS)
Sharqawy, Mostafa H.
2016-12-01
Pore network models (PNMs) of Berea and Fontainebleau sandstones were constructed using nonlinear programming (NLP) and optimization methods. The constructed PNMs are considered a digital representation of the rock samples, based on matching the macroscopic properties of the porous media, and were used to conduct fluid transport simulations including single- and two-phase flow. The PNMs consisted of cubic networks of randomly distributed pore and throat sizes with various connectivity levels. The networks were optimized such that the upper and lower bounds of the pore sizes are determined using the capillary tube bundle model and the Nelder-Mead method instead of guessing them, which significantly reduces the computational time of the optimization. An open-source PNM framework was employed to conduct transport and percolation simulations such as invasion percolation and Darcian flow. The PNM was subsequently used to compute the macroscopic properties: porosity, absolute permeability, specific surface area, breakthrough capillary pressure, and the primary drainage curve. The pore networks were optimized so that the simulation results for the macroscopic properties are in excellent agreement with the experimental measurements. This study demonstrates that nonlinear programming and optimization methods provide a promising approach to pore network modeling when computed tomography imaging is not readily available.
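The capillary tube bundle model mentioned above gives a quick closed-form link between porosity, pore radius and permeability: for a medium idealized as parallel tubes of radius r, k = φr²/8. Inverting this for a target permeability yields a radius estimate that can seed the pore-size bounds of the network optimization. The sketch below uses invented parameter values, not the Berea or Fontainebleau data.

```python
import math

# Capillary tube bundle model: permeability of parallel tubes of radius r
# and porosity phi is k = phi * r^2 / 8. Numbers are hypothetical.

def bundle_permeability(porosity, radius_m):
    return porosity * radius_m ** 2 / 8.0

def radius_for_permeability(porosity, k_target):
    """Invert the bundle model to estimate a characteristic pore radius."""
    return math.sqrt(8.0 * k_target / porosity)

phi = 0.20
k = 1.0e-12                      # target permeability in m^2 (~1 darcy)
r = radius_for_permeability(phi, k)   # ~6.3 micrometres for these values
```

A Nelder-Mead search over the network's size distribution would then refine this estimate against all matched macroscopic properties at once.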
NASA Astrophysics Data System (ADS)
Hoffmann, Marcin; Szarecka, Agnieszka; Rychlewski, Jacek
A review of recent ab initio studies carried out at both the RHF and MP2 levels on (R,R)-tartaric acid (TA), its diamide (DA) and tetramethyldiamide (TMDA), and on three prototypic model systems (each constituting half of the respective parent molecule), i.e. 2-hydroxyacetic acid (HA), 2-hydroxyacetamide (HD) and 2-hydroxy-N,N-dimethylacetamide (HMD), is presented. (R,R)-tartaric acid and its derivatives have been completely optimized at the RHF/6-31G* level, and single-point energies of all conformers have subsequently been calculated using second-order perturbation theory according to the scheme MP2/6-31G*//RHF/6-31G*. In the complete optimization of the model molecules at the RHF level we have employed relatively large basis sets, augmented with polarisation and diffuse functions, namely 3-21G, 6-31G*, 6-31++G** and 6-311++G**. Electron correlation has been included with the largest basis set used in this study, i.e. MP2/6-311++G**//RHF/6-311++G** single-point energy calculations have been performed. General conformational preferences of tartaric acid derivatives have been analysed, and an attempt has been made to identify the main factors affecting the conformational behaviour of these molecules in the isolated state, in particular the role and stability of intramolecular hydrogen bonding. For the model compounds, our study principally concerned the conformational preferences and hydrogen bonding structure within the α-hydroxy-X moiety, where X = COOH, CONH2 or CON(CH3)2.
NASA Astrophysics Data System (ADS)
Lu, Zheng; Chen, Xiaoyi; Zhou, Ying
2018-04-01
A particle tuned mass damper (PTMD) is a creative combination of the widely used tuned mass damper (TMD) and the efficient particle damper (PD) from the vibration control area. The performance of a one-storey steel frame with an attached PTMD is investigated through free vibration and shaking table tests. The influence of several key parameters (filling ratio of particles, auxiliary mass ratio, and particle density) on the vibration control effect is investigated, and it is shown that the attenuation level depends significantly on the filling ratio of particles. Based on the experimental parametric study, guidelines for optimization of the PTMD that mainly consider the filling ratio are proposed. Furthermore, an approximate analytical solution based on the concept of an equivalent single-particle damper is proposed, and it shows satisfactory agreement between simulation and experimental results. This simplified method is then used for the preliminary optimal design of a PTMD system, and a case study of a PTMD system attached to a five-storey steel structure following this optimization process is presented.
NASA Astrophysics Data System (ADS)
Ren, Wei; Wang, Shujun; Lü, Mingsheng; Wang, Xiaobei; Fang, Yaowei; Jiao, Yuliang; Hu, Jianen
2016-03-01
We adopted response surface methodology, using single-factor and orthogonal experiments, to optimize four types of antimicrobial agents that could inhibit biofilm formation by Streptococcus mutans, which is commonly found in the human oral cavity and causes tooth decay. The objective was to improve the function of a marine Arthrobacter oxydans KQ11 dextranase mouthwash (designed and developed by our laboratory). The experiment used a three-level, four-variable central composite design to determine the best combination of ZnSO4, lysozyme, citric acid and chitosan. The optimized antibacterial agents were 2.16 g/L ZnSO4, 14 g/L lysozyme, 4.5 g/L citric acid and 5 g/L chitosan, with which biofilm formation inhibition reached 84.49%. In addition, microscopic observation of the biofilm was performed using scanning electron microscopy and confocal laser scanning microscopy. The optimized formula was tested in the marine Arthrobacter oxydans KQ11 dextranase mouthwash and enhanced the inhibition of S. mutans. This work may inform the design and development of future marine dextranase oral care products.
Comparison of cryogenic low-pass filters.
Thalmann, M; Pernau, H-F; Strunk, C; Scheer, E; Pietsch, T
2017-11-01
Low-temperature electronic transport measurements with high energy resolution require both effective low-pass filtering of high-frequency input noise and an optimized thermalization of the electronic system of the experiment. In recent years, elaborate filter designs have been developed for cryogenic low-level measurements, driven by the growing interest in fundamental quantum-physical phenomena at energy scales corresponding to temperatures in the few millikelvin regime. However, a single filter concept is often insufficient to thermalize the electronic system to the cryogenic bath and eliminate spurious high frequency noise. Moreover, the available concepts often provide inadequate filtering to operate at temperatures below 10 mK, which are routinely available now in dilution cryogenic systems. Herein we provide a comprehensive analysis of commonly used filter types, introduce a novel compact filter type based on ferrite compounds optimized for the frequency range above 20 GHz, and develop an improved filtering scheme providing adaptable broad-band low-pass characteristic for cryogenic low-level and quantum measurement applications at temperatures down to few millikelvin.
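As a reference point for comparing filter concepts like those surveyed above, the attenuation of a simple first-order RC low-pass is |H(f)| = 1/√(1 + (f/f_c)²) with f_c = 1/(2πRC); the ferrite and multi-stage designs in the paper are far more complex, so this is only a back-of-envelope sketch with invented component values.

```python
import math

# First-order RC low-pass reference: cutoff frequency and attenuation in dB.
# Component values are illustrative, not from the paper.

def cutoff_hz(r_ohm, c_farad):
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def attenuation_db(f_hz, fc_hz):
    h = 1.0 / math.sqrt(1.0 + (f_hz / fc_hz) ** 2)
    return -20.0 * math.log10(h)

fc = cutoff_hz(1e3, 1e-9)                  # 1 kOhm, 1 nF -> fc ~ 159 kHz
print(round(attenuation_db(fc, fc), 2))    # prints 3.01 (dB at the cutoff)
```

A single such stage rolls off at only 20 dB/decade, which is one reason a single filter concept is insufficient for millikelvin work.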
Suppressing four-wave mixing in warm-atomic-vapor quantum memory
NASA Astrophysics Data System (ADS)
Vurgaftman, Igor; Bashkansky, Mark
2013-06-01
Warm-atomic-vapor cells may be employed as quantum-memory components in an experimentally convenient implementation of the Duan-Lukin-Cirac-Zoller protocol. Previous studies have shown that the performance of these cells is limited by the combination of collisional fluorescence during the writing process and four-wave mixing during the reading process, and have proposed to overcome this by a combination of optimized detuning and prepumping with circularly polarized write and read beams. Here we show that the Raman matrix elements involving the excited P (F' = I - 1/2 and F' = I + 1/2) levels of all alkali atoms are always equal in magnitude and opposite in sign when the write and the anti-Stokes (Stokes) photons have opposite helicity, and the Raman transitions via the two levels interfere destructively. The existence of an optimal detuning is demonstrated for a given dark-count rate of the single-photon detector. The predicted behavior is observed experimentally in a warm Rb cell with buffer gas.
Magnetic microfluidic system for isolation of single cells
NASA Astrophysics Data System (ADS)
Mitterboeck, Richard; Kokkinis, Georgios; Berris, Theocharis; Keplinger, Franz; Giouroudi, Ioanna
2015-06-01
This paper presents the design and realization of a compact, portable and cost-effective microfluidic system for the isolation and detection of rare circulating tumor cells (CTCs) in suspension. The innovative aspect of the proposed isolation method is that it utilizes superparamagnetic particles (SMPs) to label CTCs and then isolates them using microtraps with integrated current-carrying microconductors. The magnetically labeled and trapped CTCs can then be detected by integrated magnetic microsensors, e.g. giant magnetoresistive (GMR) or giant magnetoimpedance (GMI) sensors. The channel and trap dimensions are optimized to protect the cells from shear stress and achieve high trapping efficiency. The intact single CTCs can then be used for additional analysis, testing and patient-specific drug screening. Being able to analyze the metastasis-driving capabilities of CTCs at the single cell level is considered of great importance for developing patient-specific therapies. Experiments showed that it is possible to capture single labeled cells in multiple microtraps and hold them there without a permanent electric current and magnetic field.
RECOVERY OF VASCULAR FUNCTION AFTER EXPOSURE TO A SINGLE BOUT OF SEGMENTAL VIBRATION
Krajnak, Kristine; Waugh, Stacey; Miller, G. Roger; Johnson, Claud
2015-01-01
Work rotation schedules may be used to reduce the negative effects of vibration on vascular function. This study determined how long it takes vascular function to recover after a single exposure to vibration in rats (125 Hz, acceleration 5g). The responsiveness of rat-tail arteries to the vasoconstricting factor UK14304, an α2C-adrenoreceptor agonist, and the vasodilating factor acetylcholine (ACh) were measured ex vivo 1, 2, 7, or 9 d after exposure to a single bout of vibration. Vasoconstriction induced by UK14304 returned to control levels after 1 d of recovery. However, re-dilation induced by ACh did not return to baseline until after 9 d of recovery. Exposure to vibration exerted prolonged effects on peripheral vascular function, and altered vascular responses to a subsequent exposure. To optimize the positive results of work rotation schedules, it is suggested that studies assessing recovery of vascular function after exposure to a single bout of vibration be performed in humans. PMID:25072825
NASA Technical Reports Server (NTRS)
Connolly, J. C.; Carlin, D. B.; Ettenberg, M.
1989-01-01
A high power, single spatial mode, channeled substrate planar AlGaAs semiconductor diode laser was developed. The emission wavelength was optimized at 860 to 880 nm. The operating characteristics (power-current behavior, single spatial mode behavior, far field radiation patterns, and spectral behavior) and the results of computer modeling studies on the performance of the laser are discussed. A reliability assessment at high output levels is included. Performance results are presented for a new type of channeled substrate planar diode laser incorporating current blocking layers, grown by metalorganic chemical vapor deposition, to more effectively focus the operational current into the lasing region. The optoelectronic behavior and fabrication procedures for this new diode laser are discussed. Highlights include single spatial mode devices with up to 160 mW output at 8600 Å, and quantum efficiencies of 70 percent (1 W/amp) with demonstrated operating lifetimes of 10,000 h at 50 mW.
Single cell digital polymerase chain reaction on self-priming compartmentalization chip
Zhu, Qiangyuan; Qiu, Lin; Xu, Yanan; Li, Guang; Mu, Ying
2017-01-01
Single cell analysis provides a new framework for understanding biology and disease; however, absolute quantification of single cell gene expression still faces many challenges. Microfluidic digital polymerase chain reaction (PCR) provides a unique method to absolutely quantify single cell gene expression, but only a few devices have been developed to analyze a single cell, and these show detection variation. This paper describes a self-priming compartmentalization (SPC) microfluidic digital PCR chip capable of performing single molecule amplification from a single cell. The chip can detect four single cells simultaneously with 85% sample digitization. With the optimized protocol for the SPC chip, we first tested the ability, precision, and sensitivity of our SPC digital PCR chip by assessing β-actin DNA gene expression in 1, 10, 100, and 1000 cells. The reproducibility of the SPC chip was evaluated by testing 18S rRNA of single cells, with a coefficient of variation of 1.6%–4.6%. Finally, by detecting expression of the lung cancer-related gene PLAU in A549 cells at the single cell level, single cell heterogeneity was demonstrated. With this power-free, valve-free SPC chip, the gene copy number of single cells can be quantified absolutely with higher sensitivity and reduced labor time and reagent use. We expect that this chip will enable new studies of biology and disease. PMID:28191267
Cooperation and age structure in spatial games
NASA Astrophysics Data System (ADS)
Wang, Zhen; Wang, Zhen; Zhu, Xiaodan; Arenzon, Jeferson J.
2012-01-01
We study the evolution of cooperation in evolutionary spatial games when the payoff correlates with the increasing age of players (the level of correlation is set through a single parameter, α). The demographic heterogeneous age distribution, directly affecting the outcome of the game, is thus shown to be responsible for enhancing the cooperative behavior in the population. In particular, moderate values of α allow cooperators not only to survive but to outcompete defectors, even when the temptation to defect is large and the ageless, standard α=0 model does not sustain cooperation. The interplay between age structure and noise is also considered, and we obtain the conditions for optimal levels of cooperation.
One size fits all? An assessment tool for solid waste management at local and national levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broitman, Dani, E-mail: danib@techunix.technion.ac.il; Ayalon, Ofira; Kan, Iddo
2012-10-15
Highlights: • Waste management schemes are generally implemented at the national or regional level. • Local conditions, characteristics and constraints are often neglected. • We developed an economic model able to compare multi-level waste management options. • A detailed test case with real economic data and a best-fit scenario is described. • The most efficient schemes combine clear national directives with local-level flexibility. - Abstract: As environmental awareness rises, integrated solid waste management (WM) schemes are increasingly being implemented all over the world. The different WM schemes usually address issues such as landfilling restrictions (mainly due to methane emissions and competing land use), packaging directives and compulsory recycling goals. These schemes are, in general, designed at a national or regional level, whereas local conditions and constraints are sometimes neglected. When national top-down WM policies, in addition to setting goals, also dictate the methods by which they are to be achieved, local authorities lose their freedom to optimize their operational WM schemes according to their specific characteristics. There are a myriad of implementation options at the local level, and by carrying out a bottom-up approach the overall national WM system can be made optimal on both economic and environmental scales. This paper presents a model for optimizing waste strategies at the local level and evaluates their effect at the national level. This is achieved by using a waste assessment model which enables us to compare both the economic viability of several WM options at the local (single municipal authority) level and aggregated results at the regional or national level.
A test case based on various WM approaches in Israel (several implementations of mixed and separated waste) shows that local characteristics significantly influence WM costs, and therefore the optimal scheme is one under which each local authority is able to implement its best-fitting mechanism, provided that national guidelines are kept. The main result is that strict national/regional WM policies may be less efficient unless some type of local flexibility is implemented. Our model is designed for both top-down and bottom-up assessment, and can be easily adapted for a wide range of WM option comparisons at different levels.
Optimal moment determination in POME-copula based hydrometeorological dependence modelling
NASA Astrophysics Data System (ADS)
Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi
2017-07-01
Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework in hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, in previous POME-based studies, determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME for developing a coupled optimal moment-POME-copula framework to model hydrometeorological multivariate events. In this framework, margins (marginals, or marginal distributions) are derived with the use of POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined, based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and the corresponding copulas reflect a good statistical performance in correlation simulation. Also, the derived copulas, capturing more patterns which traditional correlation coefficients cannot reflect, provide an efficient way in other applied scenarios concerning hydrometeorological multivariate modelling.
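The copula half of the framework described above can be illustrated independently of the POME margin fitting. The sketch below samples a Clayton copula (one plausible candidate; the study compares several and its selected families are not stated here) by conditional inversion, producing uniform pairs with positive upper-level dependence for θ > 0. All parameters are illustrative.

```python
import random

# Illustrative copula sampling sketch (margin fitting via POME omitted).
# Clayton copula C(u,v) = (u^-t + v^-t - 1)^(-1/t); conditional inversion:
# given u and w ~ U(0,1), v = ((w^(-t/(1+t)) - 1) * u^-t + 1)^(-1/t).

def sample_clayton(theta, n, seed=1):
    random.seed(seed)
    pairs = []
    for _ in range(n):
        u, w = random.random(), random.random()
        v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
        pairs.append((u, v))
    return pairs

pairs = sample_clayton(theta=2.0, n=2000)  # theta=2 -> Kendall's tau = 0.5
```

Feeding these uniform pairs through the inverse CDFs of the POME-derived margins would then yield dependent streamflow-water level (or multi-site) samples.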
NASA Astrophysics Data System (ADS)
Holmes, Timothy W.
2001-01-01
A detailed tomotherapy inverse treatment planning method is described which incorporates leakage and head scatter corrections during each iteration of the optimization process, allowing these effects to be directly accounted for in the optimized dose distribution. It is shown that the conventional inverse planning method for optimizing incident intensity can be extended to include a 'concurrent' leaf sequencing operation from which the leakage and head scatter corrections are determined. The method is demonstrated using the steepest-descent optimization technique with constant step size and a least-squared-error objective. The method was implemented in the MATLAB scientific programming environment and its feasibility demonstrated for 2D test cases simulating treatment delivery using a single coplanar rotation. The results indicate that this modification does not significantly affect convergence of the intensity optimization method when exposure times of individual leaves are stratified to a large number of levels (>100) during leaf sequencing. In general, the addition of aperture-dependent corrections, especially 'head scatter', reduces the incident fluence in local regions of the modulated fan beam, resulting in increased exposure times for individual collimator leaves. These local variations can result in 5% or greater local variation in the optimized dose distribution compared to the uncorrected case. The overall efficiency of the modified intensity optimization algorithm is comparable to that of the original unmodified case.
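The core loop described above (steepest descent, constant step, least-squares dose objective) can be sketched on a toy problem. Here a 2×2 dose-deposition matrix D maps beamlet intensities w to voxel doses, and we minimize ‖Dw − d‖²; the leakage and head scatter corrections the paper adds at each iteration are omitted, and all numbers are invented.

```python
# Hypothetical sketch of steepest descent with constant step on the
# least-squares objective f(w) = ||D w - d||^2, with nonnegative
# intensities. Leakage/head-scatter corrections are omitted.

D = [[1.0, 0.2],
     [0.3, 0.9]]          # toy dose-deposition matrix (beamlet -> voxel)
d = [1.0, 1.0]            # prescribed voxel doses

w = [0.0, 0.0]            # beamlet intensities
step = 0.2                # constant step size
for _ in range(500):
    # residual r = D w - d
    r = [sum(D[i][j] * w[j] for j in range(2)) - d[i] for i in range(2)]
    # gradient of f: 2 * D^T r
    grad = [2 * sum(D[i][j] * r[i] for i in range(2)) for j in range(2)]
    # descent step, clipped so intensities stay nonnegative
    w = [max(0.0, w[j] - step * grad[j]) for j in range(2)]
```

For this D and d the exact solution of Dw = d is w = (5/6, 5/6), which the loop approaches; in the paper the matrix would instead be updated with aperture-dependent corrections between iterations.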
NASA Astrophysics Data System (ADS)
Chung, Kun-Jen
2012-08-01
Cardenas-Barron [Cardenas-Barron, L.E. (2010) 'A Simple Method to Compute Economic order Quantities: Some Observations', Applied Mathematical Modelling, 34, 1684-1688] indicates that there are several functions in which the arithmetic-geometric mean method (AGM) does not give the minimum. This article presents another situation to reveal that the AGM inequality to locate the optimal solution may be invalid for Teng, Chen, and Goyal [Teng, J.T., Chen, J., and Goyal S.K. (2009), 'A Comprehensive Note on: An Inventory Model under Two Levels of Trade Credit and Limited Storage Space Derived without Derivatives', Applied Mathematical Modelling, 33, 4388-4396], Teng and Goyal [Teng, J.T., and Goyal S.K. (2009), 'Comment on 'Optimal Inventory Replenishment Policy for the EPQ Model under Trade Credit Derived without Derivatives', International Journal of Systems Science, 40, 1095-1098] and Hsieh, Chang, Weng, and Dye [Hsieh, T.P., Chang, H.J., Weng, M.W., and Dye, C.Y. (2008), 'A Simple Approach to an Integrated Single-vendor Single-buyer Inventory System with Shortage', Production Planning and Control, 19, 601-604]. So, the main purpose of this article is to adopt the calculus approach not only to overcome shortcomings of the arithmetic-geometric mean method of Teng et al. (2009), Teng and Goyal (2009) and Hsieh et al. (2008), but also to develop the complete solution procedures for them.
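The issue above is easy to demonstrate on the one case where the AGM inequality does work: the classic EOQ cost f(Q) = KD/Q + hQ/2, whose two terms are balanced at Q* = √(2KD/h). For cost functions without this symmetric product structure (as in the cited models), the AGM bound need not be attained, which is why the calculus approach is the safe route. The sketch below uses invented parameters and checks the AGM answer numerically.

```python
import math

# Illustrative EOQ check: f(Q) = K*D/Q + h*Q/2, AGM minimizer
# Q* = sqrt(2*K*D/h). Parameter values are hypothetical.

def eoq_cost(Q, K, D, h):
    return K * D / Q + h * Q / 2.0

def eoq_agm(K, D, h):
    return math.sqrt(2.0 * K * D / h)

K, D, h = 100.0, 1000.0, 5.0
q_star = eoq_agm(K, D, h)                       # 200.0 for these values

# brute-force numerical confirmation that q_star minimizes the cost
grid = [q / 10.0 for q in range(100, 5000)]
q_num = min(grid, key=lambda q: eoq_cost(q, K, D, h))
```

When the two cost terms cannot be written as a product that is constant in Q, the AGM step fails and one must fall back on the derivative (or a complete solution procedure, as this article develops).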
NASA Astrophysics Data System (ADS)
Yoon, Sangpil; Wang, Yingxiao; Shung, K. K.
2016-03-01
An acoustic-transfection technique has been developed for the first time by integrating a high frequency ultrasonic transducer and a fluorescence microscope. High frequency ultrasound with a center frequency over 150 MHz can focus an acoustic field into a confined area with a diameter of 10 μm or less. This focusing capability was used to perturb the lipid bilayer of the cell membrane to induce intracellular delivery of macromolecules. Single cell level imaging was performed to investigate the behavior of a targeted single cell after acoustic-transfection. A FRET-based Ca2+ biosensor was used to monitor the intracellular concentration of Ca2+ after acoustic-transfection, and the fluorescence intensity of propidium iodide (PI) was used to observe the influx of PI molecules. We varied the peak-to-peak voltage and pulse duration to optimize the input parameters of the acoustic pulse. Input parameters that can induce strong perturbations of the cell membrane were found, and size-dependent intracellular delivery of macromolecules was explored. To increase the amount of delivered molecules, we applied several acoustic pulses, and the PI fluorescence intensity increased stepwise. Finally, the optimized input parameters of the acoustic-transfection system were used to deliver pMax-E2F1 plasmid into HeLa cells, and GFP expression was confirmed 24 hours after intracellular delivery.
Nie, Jianhui; Wang, Wenbo; Wen, Zhiheng; Song, Aijing; Hong, Kunxue; Lu, Shan; Zhong, Ping; Xu, Jianqing; Kong, Wei; Li, Jingyun; Shang, Hong; Ling, Hong; Ruan, Li; Wang, Youchun
2012-11-01
Among neutralizing antibody evaluation assays, the single-cycle pseudovirus infection assay is high-throughput and can provide rapid, sensitive and reproducible measurements after a single cycle of infection. Cell counts, pseudovirus inoculation levels, the amount of diethylaminoethyl-dextran (DEAE-dextran), and the nonspecific effects of serum and plasma were tested to identify the optimal conditions for a pseudovirus-based neutralizing antibody assay. Optimal conditions for cell count, pseudovirus inoculum, and amount of DEAE-dextran were 1 × 10^4 cells/well, 200 TCID50/well, and 15 μg/ml, respectively. Compared with serum samples, high-concentration anticoagulants reduced the relative light unit (RLU) value. The RLU value initially increased sharply but then decreased slowly with dilution of the plasma sample. Test kits containing 10 HIV-1 CRF07/08_BC pseudovirus strains and 10 plasma samples from individuals infected with HIV-1 CRF07/08_BC were assembled into two packages and distributed to nine laboratories with a standard operating procedure included. For the 10 laboratories that evaluated the test, 17 of 44 (37%) laboratory pairs were considered equivalent. A statistical qualification rule was developed based on the testing results from 5 experienced laboratories, where a laboratory qualified if at least 83% of its values lay within the acceptable range. Copyright © 2012 Elsevier B.V. All rights reserved.
Vogel, Michael W; Vegh, Viktor; Reutens, David C
2013-05-01
This paper investigates the optimal placement of a localized single-axis magnetometer for ultralow field (ULF) relaxometry for various sample shapes and sizes. The authors used the finite element method to determine the magnetic field environment of the sample and to evaluate the optimal location of the single-axis magnetometer. For the different samples, the authors analysed the magnetic field distribution around the sample and determined the optimal orientation and possible positions of the sensor that maximize signal strength, that is, the power of the free induction decay. The authors demonstrate that a flat-bottomed glass vial of 10 ml volume is the best structure for achieving the highest signal among the samples studied. This paper demonstrates the importance of taking into account the combined effects of sensor configuration and sample parameters on signal generation before designing and constructing ULF systems with a single-axis magnetometer. Through numerical simulations the authors were able to optimize structural parameters, such as sample shape and size and sensor orientation and location, to maximize the measured signal in ultralow field relaxometry.
Kaneda, Shohei; Ono, Koichi; Fukuba, Tatsuhiro; Nojima, Takahiko; Yamamoto, Takatoki; Fujii, Teruo
2011-01-01
In this paper, a rapid and simple method to determine the optimal temperature conditions for denaturant electrophoresis using a temperature-controlled on-chip capillary electrophoresis (CE) device is presented. Since on-chip CE operations including sample loading, injection and separation are carried out just by switching the electric field, we can repeat consecutive run-to-run CE operations on a single on-chip CE device by programming the voltage sequences. By utilizing the high-speed separation and the repeatability of the on-chip CE, a series of electrophoretic operations with different running temperatures can be implemented. Using separations of reaction products of single-stranded DNA (ssDNA) with a peptide nucleic acid (PNA) oligomer, the effectiveness of the presented method to determine the optimal temperature conditions required to discriminate a single-base substitution (SBS) between two different ssDNAs is demonstrated. It is shown that a single run for one temperature condition can be executed within 4 min, and the optimal temperature to discriminate the SBS could be successfully found using the present method. PMID:21845077
Ross, Scott E.; Linens, Shelley W.; Wright, Cynthia J.; Arnold, Brent L.
2013-01-01
Context: Stochastic resonance stimulation (SRS) administered at an optimal intensity could maximize the effects of treatment on balance. Objective: To determine if a customized optimal SRS intensity is better than a traditional SRS protocol (applying the same percentage sensory threshold intensity for all participants) for improving double- and single-legged balance in participants with or without functional ankle instability (FAI). Design: Case-control study with an embedded crossover design. Setting: Laboratory. Patients or Other Participants: Twelve healthy participants (6 men, 6 women; age = 22 ± 2 years, height = 170 ± 7 cm, mass = 64 ± 10 kg) and 12 participants (6 men, 6 women; age = 23 ± 3 years, height = 174 ± 8 cm, mass = 69 ± 10 kg) with FAI. Intervention(s): The SRS optimal intensity level was determined by finding the intensity from 4 experimental intensities at the percentage sensory threshold (25% [SRS25], 50% [SRS50], 75% [SRS75], 90% [SRS90]) that produced the greatest improvement in resultant center-of-pressure velocity (R-COPV) over a control condition (SRS0) during double-legged balance. We examined double- and single-legged balance tests, comparing optimal SRS (SRSopt1) and SRS0 using a battery of center-of-pressure measures in the frontal and sagittal planes. Main Outcome Measure(s): Anterior-posterior (A-P) and medial-lateral (M-L) center-of-pressure velocity (COPV) and center-of-pressure excursion (COPE), R-COPV, and 95th percentile center-of-pressure area ellipse (COPA-95). Results: Data were organized into bins that represented optimal (SRSopt1), second (SRSopt2), third (SRSopt3), and fourth (SRSopt4) improvement over SRS0. The SRSopt1 enhanced R-COPV (P ≤ .05) over SRS0 and other SRS conditions (SRS0 = 0.94 ± 0.32 cm/s, SRSopt1 = 0.80 ± 0.19 cm/s, SRSopt2 = 0.88 ± 0.24 cm/s, SRSopt3 = 0.94 ± 0.25 cm/s, SRSopt4 = 1.00 ± 0.28 cm/s). However, SRS did not improve R-COPV over SRS0 when data were categorized by sensory threshold. 
Furthermore, SRSopt1 improved double-legged balance over SRS0 by 11% to 25% in all participants for the center-of-pressure frontal- and sagittal-plane assessments (P ≤ .05). The SRSopt1 also improved single-legged balance over SRS0 by 10% to 17% in participants with FAI for the center-of-pressure frontal- and sagittal-plane assessments (P ≤ .05). The SRSopt1 did not improve single-legged balance in participants with stable ankles. Conclusions: The SRSopt1 improved double-legged balance, and this benefit transferred to the single-legged balance deficits associated with FAI. PMID:23724774
Optical Manipulation of Single Magnetic Beads in a Microwell Array on a Digital Microfluidic Chip.
Decrop, Deborah; Brans, Toon; Gijsenbergh, Pieter; Lu, Jiadi; Spasic, Dragana; Kokalj, Tadej; Beunis, Filip; Goos, Peter; Puers, Robert; Lammertyn, Jeroen
2016-09-06
The detection of single molecules in magnetic microbead microwell array formats revolutionized the development of digital bioassays. However, retrieval of individual magnetic beads from these arrays has not been realized until now despite having great potential for studying captured targets at the individual level. In this paper, optical tweezers were implemented on a digital microfluidic platform for accurate manipulation of single magnetic beads seeded in a microwell array. Successful optical trapping of magnetic beads was found to be dependent on Brownian motion of the beads, suggesting a 99% chance of trapping a vibrating bead. A tailor-made experimental design was used to screen the effect of bead type, ionic buffer strength, surfactant type, and concentration on the Brownian activity of beads in microwells. With the optimal conditions, the manipulation of magnetic beads was demonstrated by their trapping, retrieving, transporting, and repositioning to a desired microwell on the array. The presented platform combines the strengths of digital microfluidics, digital bioassays, and optical tweezers, resulting in a powerful dynamic microwell array system for single molecule and single cell studies.
Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke
2017-06-29
X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.
Alkaline Comet Assay for Assessing DNA Damage in Individual Cells.
Pu, Xinzhu; Wang, Zemin; Klaunig, James E
2015-08-06
Single-cell gel electrophoresis, commonly called the comet assay, is a simple and sensitive method for assessing DNA damage at the single-cell level. It is an important technique in genetic toxicology studies. The comet assay performed under alkaline conditions (pH >13) is considered the optimal version for identifying agents with genotoxic activity. The alkaline comet assay is capable of detecting DNA double-strand breaks, single-strand breaks, alkali-labile sites, DNA-DNA/DNA-protein cross-linking, and incomplete excision-repair sites. The inclusion of a digestion step with lesion-specific DNA repair enzymes allows the detection of various DNA base alterations, such as oxidative base damage. This unit describes alkaline comet assay procedures for assessing DNA strand breaks and oxidative base alterations. These methods can be applied to a variety of cells from in vitro and in vivo experiments, as well as human studies. Copyright © 2015 John Wiley & Sons, Inc.
Operating Quantum States in Single Magnetic Molecules: Implementation of Grover's Quantum Algorithm.
Godfrin, C; Ferhat, A; Ballou, R; Klyatskaya, S; Ruben, M; Wernsdorfer, W; Balestro, F
2017-11-03
Quantum algorithms use the principles of quantum mechanics, such as quantum superposition, to solve particular problems faster than standard computation allows. They have been developed for cryptography, searching, optimization, simulation, and solving large systems of linear equations. Here, we implement Grover's quantum algorithm, proposed to find an element in an unsorted list, using a single nuclear spin 3/2 carried by a Tb ion in a single-molecule magnet transistor. The coherent manipulation of this multilevel quantum system (qudit) is achieved by means of electric fields only. Grover's search algorithm is implemented by constructing a quantum database via a multilevel Hadamard gate. The Grover sequence then allows us to select each state. The presented method is of universal character and can be implemented in any multilevel quantum system with unequally spaced energy levels, opening the way to novel quantum search algorithms.
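The amplitude-amplification step at the heart of Grover's algorithm can be illustrated with a small statevector simulation. This is a generic textbook sketch in Python/NumPy, not the molecular-spin implementation described in the abstract: the oracle phase-flips one marked index, and the diffusion operator inverts all amplitudes about their mean.

```python
import numpy as np

def grover_search(n_items: int, target: int) -> np.ndarray:
    """Simulate Grover's search over an unsorted list of n_items entries."""
    # The uniform superposition plays the role of the 'quantum database'.
    state = np.full(n_items, 1.0 / np.sqrt(n_items))
    oracle = np.eye(n_items)
    oracle[target, target] = -1.0                 # phase-flip the marked item
    diffusion = 2.0 / n_items - np.eye(n_items)   # inversion about the mean
    for _ in range(int(np.floor(np.pi / 4 * np.sqrt(n_items)))):
        state = diffusion @ (oracle @ state)
    return state ** 2                             # measurement probabilities

probs = grover_search(8, target=5)                # ~0.94 on the marked item
```

After the optimal ⌊(π/4)√N⌋ iterations the marked item dominates the measurement statistics, mirroring the state-selection step of the Grover sequence.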
Application of dual-fuel propulsion to a single stage AMLS vehicle
NASA Technical Reports Server (NTRS)
Lepsch, Roger A., Jr.; Stanley, Douglas O.; Unal, Resit
1993-01-01
As part of NASA's Advanced Manned Launch System (AMLS) study to determine a follow-on, or complement, to the Space Shuttle, a reusable single-stage-to-orbit concept utilizing dual-fuel rocket propulsion has been examined. Several dual-fuel propulsion concepts were investigated. These include: a separate engine concept combining Russian RD-170 kerosene-fueled engines with SSME-derivative engines; the kerosene and hydrogen-fueled Russian RD-701 engine concept; and a dual-fuel, dual-expander engine concept. Analysis to determine vehicle weight and size characteristics was performed using conceptual level design techniques. A response surface methodology for multidisciplinary design was utilized to optimize the dual-fuel vehicle concepts with respect to several important propulsion system and vehicle design parameters in order to achieve minimum empty weight. Comparisons were then made with a hydrogen-fueled reference, single-stage vehicle. The tools and methods employed in the analysis process are also summarized.
Operating Quantum States in Single Magnetic Molecules: Implementation of Grover's Quantum Algorithm
NASA Astrophysics Data System (ADS)
Godfrin, C.; Ferhat, A.; Ballou, R.; Klyatskaya, S.; Ruben, M.; Wernsdorfer, W.; Balestro, F.
2017-11-01
Quantum algorithms use the principles of quantum mechanics, such as quantum superposition, to solve particular problems faster than standard computation allows. They have been developed for cryptography, searching, optimization, simulation, and solving large systems of linear equations. Here, we implement Grover's quantum algorithm, proposed to find an element in an unsorted list, using a single nuclear spin 3/2 carried by a Tb ion in a single-molecule magnet transistor. The coherent manipulation of this multilevel quantum system (qudit) is achieved by means of electric fields only. Grover's search algorithm is implemented by constructing a quantum database via a multilevel Hadamard gate. The Grover sequence then allows us to select each state. The presented method is of universal character and can be implemented in any multilevel quantum system with unequally spaced energy levels, opening the way to novel quantum search algorithms.
Singha, Poonam; Muthukumarappan, Kasiviswanathan
2018-07-01
Response surface methodology was used to investigate the single-screw extrusion of apple pomace-defatted soy flour-corn grits blends and the resulting product properties. Five blends containing 0-20% w/w apple pomace were extrusion cooked with varied barrel and die temperature (100-140 °C), screw speed (100-200 rpm), and feed moisture content (14-20% wet basis). Increasing the apple pomace content of the blends significantly (P < 0.05) increased the bulk density, total phenolic content, and antioxidant activity of the extrudates. The expansion ratio increased at a pomace inclusion level of 5% but decreased significantly (P < 0.05) at higher inclusion levels (10-20%). Moisture content had a quadratic influence on the water absorption and solubility indices. The extrusion conditions most likely to produce apple pomace-enriched extruded snack products were 140 °C barrel and die temperature, 20% feed moisture content, and 200 rpm screw speed. The results indicated active interaction between apple pomace and starch during the expansion process.
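The core of a response-surface analysis like the one above is fitting a second-order polynomial to designed experiments and locating its optimum. The sketch below uses hypothetical coded factors and an invented quadratic response; the paper's actual data and factor ranges are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented "true" quadratic response in two coded factors (-1..1), e.g.
# barrel temperature (t) and feed moisture (m); purely illustrative.
def true_response(t, m):
    return 3.0 + 0.4 * t - 0.3 * m - 0.5 * t**2 - 0.2 * m**2 + 0.1 * t * m

# A 3x3 factorial design plus small measurement noise.
pts = np.array([(t, m) for t in (-1, 0, 1) for m in (-1, 0, 1)], float)
y = true_response(pts[:, 0], pts[:, 1]) + 0.01 * rng.standard_normal(len(pts))

# Fit the full second-order model by least squares.
t, m = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones_like(t), t, m, t**2, m**2, t * m])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate the fitted surface on a grid to locate the optimum settings.
grid = np.linspace(-1, 1, 201)
tt, mm = np.meshgrid(grid, grid)
Z = (coef[0] + coef[1] * tt + coef[2] * mm + coef[3] * tt**2
     + coef[4] * mm**2 + coef[5] * tt * mm)
best = np.unravel_index(np.argmax(Z), Z.shape)
t_opt, m_opt = grid[best[1]], grid[best[0]]
```

With real data the design points would come from a central composite or Box-Behnken design, and the stationary point can also be obtained analytically from the fitted coefficients.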
NASA Astrophysics Data System (ADS)
Kar, Soumen; Rao, V. V.
2018-07-01
In this first attempt to design a single-phase R-SFCL in India, a typical medium-voltage rating (3.3 kVrms, 200 Arms, single phase) was chosen. The step-by-step design procedure for the R-SFCL involves conductor selection, time-dependent electro-thermal simulations, and optimization of the recovery time after fault removal. In the numerical analysis, effective limitation of a 5 kA fault current by the medium-voltage R-SFCL is simulated. The maximum normal-state resistance and the maximum temperature rise in the SFCL coil during current limitation are estimated using a one-dimensional energy balance equation. Further, a cryogenic system is conceptually designed for this medium-voltage R-SFCL, considering inner and outer vessel materials, wall thickness, and thermal insulation. Finally, the total thermal load of the designed R-SFCL cryostat is calculated in order to select a suitable cryo-refrigerator for LN2 re-condensation.
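The current-limiting action described above can be illustrated with a lumped (0-D) electro-thermal toy model, a drastic simplification of the paper's one-dimensional energy-balance analysis. All parameter values below (normal-state resistance, critical current, heat capacity) are illustrative assumptions, not the paper's design data:

```python
import numpy as np

V_PEAK = 3.3e3 * np.sqrt(2)   # 3.3 kVrms supply, 50 Hz
R_FAULT = 0.9                 # short-circuit path: ~5.2 kA prospective peak
R_N = 2.0                     # assumed normal-state SFCL resistance [ohm]
I_C = 400.0                   # assumed critical current [A]
HEAT_CAP = 5e3                # assumed lumped heat capacity [J/K]

dt, t_end = 1e-5, 0.1
T, quenched = 77.0, False     # element starts at LN2 temperature
currents = []
for i in range(int(t_end / dt)):
    r_sfcl = R_N if quenched else 0.0
    current = V_PEAK * np.sin(2 * np.pi * 50 * i * dt) / (R_FAULT + r_sfcl)
    if abs(current) > I_C:
        quenched = True       # quench: the element turns resistive
    # Energy balance: Joule heating raises the lumped element temperature.
    T += current**2 * r_sfcl * dt / HEAT_CAP
    currents.append(current)
```

In this toy run the fault current is clipped from a prospective peak of about 5.2 kA to roughly V_PEAK/(R_FAULT + R_N) ≈ 1.6 kA, and the integrated I²R term gives the temperature rise that a recovery-time analysis would then have to dissipate.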
Long range personalized cancer treatment strategies incorporating evolutionary dynamics.
Yeang, Chen-Hsiang; Beckman, Robert A
2016-10-22
Current cancer precision medicine strategies match therapies to static consensus molecular properties of an individual's cancer, thus determining the next therapeutic maneuver. These strategies typically maintain a constant treatment while the cancer is not worsening. However, cancers feature complicated subclonal structure and dynamic evolution. We have recently shown, in a comprehensive simulation of two non-cross-resistant therapies across a broad parameter space representing realistic tumors, that substantial improvement in cure rates and median survival can be obtained using dynamic precision medicine strategies. These dynamic strategies explicitly consider intratumoral heterogeneity and evolutionary dynamics, including predicted future drug-resistance states, and reevaluate the optimal therapy every 45 days. However, the optimization is performed in single 45-day steps ("single-step optimization"). Herein we evaluate analogous strategies that think multiple therapeutic maneuvers ahead, considering potential outcomes 5 steps ahead ("multi-step optimization") or 40 steps ahead ("adaptive long term optimization (ALTO)") when recommending the optimal therapy in each 45-day block, in simulations involving both 2 and 3 non-cross-resistant therapies. We also evaluate an ALTO approach for situations where simultaneous combination therapy is not feasible ("Adaptive long term optimization: serial monotherapy only (ALTO-SMO)"). The simulations utilize populations of 764,000 and 1,700,000 virtual patients for the 2- and 3-drug cases, respectively. Each virtual patient represents a unique clinical presentation, including the sizes of major and minor tumor subclones, growth rates, evolution rates, and drug sensitivities. While multi-step optimization and ALTO provide no significant average survival benefit, cure rates are significantly increased by ALTO.
Furthermore, in the subset of individual virtual patients demonstrating clinically significant differences in outcome between approaches, by far the majority show an advantage of multi-step optimization or ALTO over single-step optimization. ALTO-SMO delivers cure rates superior or equal to those of single- or multi-step optimization in the 2- and 3-drug cases, respectively. In selected virtual patients incurable by dynamic precision medicine using single-step optimization, analogous strategies that "think ahead" can deliver long-term survival and cure without any disadvantage for non-responders. When therapies require dose reduction in combination (due to toxicity), optimal strategies feature complex patterns involving rapidly interleaved pulses of combinations and high-dose monotherapy. This article was reviewed by Wendy Cornell, Marek Kimmel, and Andrzej Swierniak. Wendy Cornell and Andrzej Swierniak are external reviewers (not members of the Biology Direct editorial board). Andrzej Swierniak was nominated by Marek Kimmel.
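The difference between single-step and multi-step optimization can be illustrated with a deliberately tiny two-subclone model. Everything here is invented for illustration (growth and kill factors, the mutation term seeding a resistant clone); it is not the paper's simulation, which tracks far richer states across hundreds of thousands of virtual patients:

```python
from itertools import product

GROWTH, KILL, MUTATION = 1.4, 0.5, 1e-4   # assumed per-step factors

def step(state, drug):
    """Drug 'A' kills clone 1, 'B' kills clone 2; clone 1 seeds clone 2."""
    s1, s2 = state
    f1 = KILL if drug == "A" else GROWTH
    f2 = KILL if drug == "B" else GROWTH
    return (s1 * f1, s2 * f2 + MUTATION * s1)

def greedy_plan(state, horizon):
    """Single-step optimization: pick the drug minimizing next-step burden."""
    plan = []
    for _ in range(horizon):
        drug = min("AB", key=lambda d: sum(step(state, d)))
        state = step(state, drug)
        plan.append(drug)
    return plan, sum(state)

def lookahead_plan(state, horizon):
    """Multi-step optimization: exhaustive search over all drug sequences."""
    best_seq, best_burden = None, float("inf")
    for seq in product("AB", repeat=horizon):
        s = state
        for d in seq:
            s = step(s, d)
        if sum(s) < best_burden:
            best_seq, best_burden = list(seq), sum(s)
    return best_seq, best_burden

greedy_seq, greedy_burden = greedy_plan((1e6, 1.0), 8)
best_seq, best_burden = lookahead_plan((1e6, 1.0), 8)
```

Because the exhaustive search contains the greedy sequence as one candidate, the look-ahead burden can never exceed the greedy one; in regimes where the seeded resistant clone matters it is strictly lower, echoing the paper's finding that "thinking ahead" benefits a subset of patients.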
Varga, Peter; Inzana, Jason A; Schwiedrzik, Jakob; Zysset, Philippe K; Gueorguiev, Boyko; Blauth, Michael; Windolf, Markus
2017-05-01
High incidence and increased mortality related to secondary, contralateral proximal femoral fractures may justify invasive prophylactic augmentation that reinforces the osteoporotic proximal femur to reduce fracture risk. Bone cement-based approaches (femoroplasty) may deliver the required strengthening effect; however, the significant variation in the results of previous studies calls for a systematic analysis and optimization of this method. Our hypothesis was that efficient generalized augmentation strategies can be identified via computational optimization. This study investigated, by means of finite element analysis, the effect of cement location and volume on the biomechanical properties of fifteen proximal femora in a sideways fall. Novel cement cloud locations were developed using the principles of bone remodeling and compared to the "single central" location that was previously reported to be optimal. The new augmentation strategies provided significantly greater biomechanical benefits compared to the "single central" cement location. Augmenting with approximately 12 ml of cement in the newly identified location achieved increases of 11% in stiffness, 64% in yield force, 156% in yield energy, and 59% in maximum force, on average, compared to the non-augmented state. The weaker bones experienced a greater biomechanical benefit from augmentation than stronger bones. The effect of cement volume on the biomechanical properties was approximately linear. Results of the "single central" model showed good agreement with previous experimental studies. These findings indicate enhanced potential of cement-based prophylactic augmentation using the newly developed cementing strategy. Future studies should determine the required level of strengthening and confirm these numerical results experimentally. Copyright © 2017 Elsevier Ltd. All rights reserved.
Analysis and Design of Novel Nanophotonic Structures
NASA Astrophysics Data System (ADS)
Shugayev, Roman
Nanophotonic devices hold promise to revolutionize the fields of optical communications, quantum computing, and bioimaging. Designing viable solutions to these pressing problems requires developing accurate models of the relevant systems. While a great deal of work has been performed in developing individual models with varying levels of fidelity, some of the more complex systems still require improved links between scales to allow accurate design and optimization within a reasonable amount of computing time. For instance, color centers in nanocrystals appear to be a promising platform for room-temperature scalable quantum information science, but questions remain about the optimal structures to control single-photon emitter rates, coupling fidelity, and suitable scaling architectures. In this work, a method for efficient optical access and readout of nanocrystal states via magnetic transitions was demonstrated. Separately, novel Mie-resonant devices that guarantee on-demand enhancement of emission from single-vacancy sources were shown. To improve the addressability of crystal-based impurities, a new approach to the realization of single-photon electro-optical devices is also proposed. Furthermore, the color centers in nanocrystals studied in this work were shown to be sensitive to the local refractive-index environment, which allows the system to be adapted to biomedical applications such as sensitive, minimally invasive cancer detection. A novel scheme for propagation-loss-free sensing of the local refractive index using nanocrystal probes with broken symmetry is carefully investigated. In conclusion, this thesis develops several novel simulation and optimization techniques that combine existing nanophotonic modeling tools into a unique multi-scale modeling tool, which has been successfully applied to nanophotonically tuned color vacancy centers.
Potential applications span optical communications, quantum information processing, and biomedical sensing.
Identification of vehicle suspension parameters by design optimization
NASA Astrophysics Data System (ADS)
Tey, J. Y.; Ramli, R.; Kheng, C. W.; Chong, S. Y.; Abidin, M. A. Z.
2014-05-01
The design of a vehicle suspension system through simulation requires accurate representation of the design parameters. These parameters are usually difficult to measure or sometimes unavailable. This article proposes an efficient approach to identifying the unknown parameters through optimization based on experimental results, where the covariance matrix adaptation evolution strategy (CMA-ES) is utilized to improve the agreement between simulation and experiment in kinematic and compliance tests. This speeds up the design and development cycle by recovering all the unknown data with respect to a set of kinematic measurements through a single optimization process. A case study employing a MacPherson strut suspension system is modelled in a multi-body dynamic system. Three kinematic and compliance tests are examined, namely vertical parallel wheel travel, opposite wheel travel, and single wheel travel. The problem is formulated as a multi-objective optimization problem with 40 objectives and 49 design parameters. A hierarchical clustering method based on global sensitivity analysis is used to reduce the number of objectives to 30 by grouping correlated objectives together. A dynamic summation of rank values is then used as a pseudo-objective function to reformulate the multi-objective optimization as a single-objective optimization problem. The optimized results show a significant improvement in the correlation between the simulated model and the experimental model. Once an accurate representation of the vehicle suspension model is achieved, further analyses, such as ride and handling performance, can be implemented for further optimization.
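The parameter-identification loop has the shape "propose parameters, simulate, compare with measurement, adapt". The sketch below uses a (1+1) evolution strategy with a 1/5th-success-style step-size rule as a much simpler stand-in for CMA-ES, fitting two invented suspension parameters (stiffness and damping of a toy linear force model) to synthetic "measurements":

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "test-rig measurements" from a toy linear suspension model
# with unknown stiffness k = 25 kN/m and damping c = 1.2 kN*s/m.
disp = np.linspace(0.0, 0.1, 20)          # wheel travel [m]
vel = np.linspace(-1.0, 1.0, 20)          # travel rate [m/s]
measured = 25e3 * disp + 1.2e3 * vel      # force [N]

def misfit(params):
    """Sum of squared differences between simulation and measurement."""
    k, c = params
    return float(np.sum((k * disp + c * vel - measured) ** 2))

x0 = np.array([10e3, 0.5e3])              # initial parameter guess
x, sigma = x0.copy(), 5e3                 # current best and step size
for _ in range(2000):
    cand = x + sigma * rng.standard_normal(2)
    if misfit(cand) < misfit(x):
        x, sigma = cand, sigma * 1.1      # success: widen the search
    else:
        sigma *= 0.98                     # failure: tighten it
```

CMA-ES generalizes this idea by adapting a full covariance matrix rather than a single step size, which matters for the 49 correlated parameters of the real problem.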
Siebers, Jeffrey V
2008-04-04
Monte Carlo (MC) is rarely used for IMRT plan optimization outside of research centres due to the extensive computational resources or long computation times required to complete the process. Time can be reduced by degrading the statistical precision of the MC dose calculation used within the optimization loop. However, this eventually introduces optimization convergence errors (OCEs). This study determines the statistical noise levels that can be tolerated during MC-IMRT optimization under the condition that the optimized plan has OCEs <100 cGy (1.5% of the prescription dose). Seven-field prostate IMRT treatment plans for 10 prostate patients are used in this study. Pre-optimization is performed for deliverable beams with a pencil-beam (PB) dose algorithm. Further deliverable-based optimization proceeds using: (1) MC-based optimization, where dose is recomputed with MC after each intensity update, or (2) a once-corrected (OC) MC-hybrid optimization, where an MC dose computation defines beam-by-beam dose correction matrices that are used during a PB-based optimization. Optimizations are performed with nominal per-beam MC statistical precisions of 2, 5, 8, 10, 15, and 20%. Following optimizer convergence, beams are recomputed with MC at 2% per-beam nominal statistical precision, and the 2 PTV and 10 OAR dose indices used in the optimization objective function are tallied. For both the MC-optimization and OC-optimization methods, statistical equivalence tests found that OCEs are less than 1.5% of the prescription dose for plans optimized with nominal statistical uncertainties of up to 10% per beam. The achieved statistical uncertainty in the patient for the 10% per-beam simulations, combining the 7 beams, is ~3% with respect to maximum dose for voxels with D > 0.5 Dmax.
The MC dose computation time for the OC-optimization is only 6.2 minutes on a single 3 GHz processor, with results clinically equivalent to high-precision MC computations.
High-power all-fiber ultra-low noise laser
NASA Astrophysics Data System (ADS)
Zhao, Jian; Guiraud, Germain; Pierre, Christophe; Floissat, Florian; Casanova, Alexis; Hreibi, Ali; Chaibi, Walid; Traynor, Nicholas; Boullet, Johan; Santarelli, Giorgio
2018-06-01
High-power ultra-low noise single-mode single-frequency lasers are in great demand for interferometric metrology. Robust, compact all-fiber lasers represent one of the most promising technologies to replace the current laser sources in use based on injection-locked ring resonators or multi-stage solid-state amplifiers. Here, a linearly polarized high-power ultra-low noise all-fiber laser is demonstrated at a power level of 100 W. Special care has been taken in the study of relative intensity noise (RIN) and its reduction. Using an optimized servo actuator to directly control the driving current of the pump laser diode, we obtain a large feedback bandwidth of up to 1.3 MHz. The RIN reaches −160 dBc/Hz between 3 and 20 kHz.
Instrumentation, control, and automation for submerged anaerobic membrane bioreactors.
Robles, Ángel; Durán, Freddy; Ruano, María Victoria; Ribes, Josep; Rosado, Alfredo; Seco, Aurora; Ferrer, José
2015-01-01
A submerged anaerobic membrane bioreactor (AnMBR) demonstration plant with two commercial hollow-fibre ultrafiltration systems (PURON®, Koch Membrane Systems, PUR-PSH31) was designed and operated for urban wastewater treatment. An instrumentation, control, and automation (ICA) system was designed and implemented for proper process performance. Several single-input-single-output (SISO) feedback control loops based on conventional on-off and PID algorithms were implemented to control the following operating variables: flow-rates (influent, permeate, sludge recycling and wasting, and recycled biogas through both reactor and membrane tanks), sludge wasting volume, temperature, transmembrane pressure, and gas sparging. The proposed ICA for AnMBRs for urban wastewater treatment enables the optimization of this new technology to be achieved with a high level of process robustness towards disturbances.
Entangling measurements for multiparameter estimation with two qubits
NASA Astrophysics Data System (ADS)
Roccia, Emanuele; Gianani, Ilaria; Mancino, Luca; Sbroscia, Marco; Somma, Fabrizia; Genoni, Marco G.; Barbieri, Marco
2018-01-01
Carefully tailoring the quantum state of probes offers the capability of investigating matter at unprecedented precisions. Rarely, however, is the interaction with the sample fully encompassed by a single parameter, and the information contained in the probe must then be partitioned among multiple parameters. There exist, then, practical bounds on the ultimate joint-estimation precision set by the unavailability of a single optimal measurement for all parameters. Here, we discuss how these considerations are modified for two-level quantum probes (qubits) by the use of two copies and entangling measurements. We find that the joint estimation of phase and phase diffusion benefits from such a collective measurement, while for multiple phases no enhancement can be observed. We demonstrate this in a proof-of-principle photonic setup.
Multiple-taper spectral analysis: A stand-alone C-subroutine
NASA Astrophysics Data System (ADS)
Lees, Jonathan M.; Park, Jeffrey
1995-03-01
A simple set of subroutines in ANSI-C are presented for multiple taper spectrum estimation. The multitaper approach provides an optimal spectrum estimate by minimizing spectral leakage while reducing the variance of the estimate by averaging orthogonal eigenspectrum estimates. The orthogonal tapers are Slepian nπ prolate functions used as tapers on the windowed time series. Because the taper functions are orthogonal, combining them to achieve an average spectrum does not introduce spurious correlations as standard smoothed single-taper estimates do. Furthermore, estimates of the degrees of freedom and F-test values at each frequency provide diagnostics for determining levels of confidence in narrow band (single frequency) periodicities. The program provided is portable and has been tested on both Unix and Macintosh systems.
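The averaging of orthogonal eigenspectra described above is straightforward to reproduce. The abstract's code is a stand-alone C subroutine; as a sketch, an equivalent unweighted multitaper estimate in Python using SciPy's DPSS (Slepian) tapers looks like this (the adaptive weighting and F-test diagnostics of the full method are omitted):

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, nw=4.0, k=7):
    """Average of k orthogonal Slepian-tapered periodograms (unweighted)."""
    n = len(x)
    tapers = dpss(n, nw, k)           # (k, n) discrete prolate spheroidal seqs
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return eigenspectra.mean(axis=0)  # averaging reduces estimator variance

fs = 256                              # 1 s of data sampled at 256 Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 40.0 * t) + 0.1 * rng.standard_normal(fs)
psd = multitaper_psd(x)               # spectral peak near the 40 Hz bin
```

Because the tapers are orthogonal, the k eigenspectra are approximately uncorrelated, so the average has roughly 2k degrees of freedom per frequency instead of the 2 of a single-taper periodogram.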
Multiscale modelling of Flow-Induced Blood Cell Damage
NASA Astrophysics Data System (ADS)
Liu, Yaling; Sohrabi, Salman
2017-11-01
We study red blood cell (RBC) damage and hemolysis at the cellular level. Under high shear rates, pores form in the RBC membrane through which hemoglobin (Hb) leaks out, increasing the free Hb content of plasma and leading to hemolysis. By coupling lattice Boltzmann and spring-connected network models through the immersed boundary method, we estimate the hemolysis of a single RBC under various shear rates. The developed cellular damage model can be used as a predictive tool for the hydrodynamic and hematologic design optimization of blood-wetting medical devices.
Low vibration laboratory with a single-stage vibration isolation for microscopy applications.
Voigtländer, Bert; Coenen, Peter; Cherepanov, Vasily; Borgens, Peter; Duden, Thomas; Tautz, F Stefan
2017-02-01
The construction and the vibrational performance of a low vibration laboratory for microscopy applications comprising a 100 ton floating foundation supported by passive pneumatic isolators (air springs), which rest themselves on a 200 ton solid base plate, are discussed. The optimization of the air spring system leads to a vibration level on the floating floor below that induced by an acceleration of 10 ng for most frequencies. Additional acoustic and electromagnetic isolation is accomplished by a room-in-room concept.
Potgieter, Danielle; Simmers, Dale; Ryan, Lisa; Biccard, Bruce M; Lurati-Buse, Giovanna A; Cardinale, Daniela M; Chong, Carol P W; Cnotliwy, Miloslaw; Farzi, Sylvia I; Jankovic, Radmilo J; Lim, Wen Kwang; Mahla, Elisabeth; Manikandan, Ramaswamy; Oscarsson, Anna; Phy, Michael P; Rajagopalan, Sriram; Van Gaal, William J; Waliszek, Marek; Rodseth, Reitze N
2015-08-01
N-terminal fragment B-type natriuretic peptide (NT-proBNP) prognostic utility is commonly determined post hoc by identifying a single optimal discrimination threshold tailored to the individual study population. The authors aimed to determine how using these study-specific post hoc thresholds impacts meta-analysis results. The authors conducted a systematic review of studies reporting the ability of preoperative NT-proBNP measurements to predict the composite outcome of all-cause mortality and nonfatal myocardial infarction at 30 days after noncardiac surgery. Using individual patient-level data, NT-proBNP thresholds were determined with two different methodologies. First, a single combined NT-proBNP threshold was determined for the entire cohort of patients, and a meta-analysis conducted using this single threshold. Second, study-specific thresholds were determined for each individual study, with meta-analysis being conducted using these study-specific thresholds. The authors obtained individual patient data from 14 studies (n = 2,196). Using a single NT-proBNP cohort threshold, the odds ratio (OR) associated with an increased NT-proBNP measurement was 3.43 (95% CI, 2.08 to 5.64). Using individual study-specific thresholds, the OR associated with an increased NT-proBNP measurement was 6.45 (95% CI, 3.98 to 10.46). In smaller studies (<100 patients), a single cohort threshold was associated with an OR of 5.4 (95% CI, 2.27 to 12.84) as compared with an OR of 14.38 (95% CI, 6.08 to 34.01) for study-specific thresholds. Post hoc identification of study-specific prognostic biomarker thresholds artificially maximizes biomarker predictive power, resulting in an amplification or overestimation during meta-analysis of these results. This effect is accentuated in small studies.
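For orientation, an odds ratio with a 95% confidence interval of the kind quoted above can be computed from a single study's 2×2 table with the Woolf (logit) method; the counts below are invented for illustration and are not from the paper.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    # Odds ratio for a 2x2 table with a Woolf (logit) 95% CI:
    # a, b = outcome / no outcome above the biomarker threshold,
    # c, d = outcome / no outcome below it.
    oratio = (a * d) / (b * c)
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(oratio) - z * se)
    hi = math.exp(math.log(oratio) + z * se)
    return oratio, lo, hi
```

Choosing the threshold post hoc to maximize `a*d/(b*c)` in each study is exactly the practice the abstract warns inflates the pooled OR.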
NASA Astrophysics Data System (ADS)
Chen, Gongdai; Deng, Hongchang; Yuan, Libo
2018-07-01
Aiming at a more compact, flexible, and simpler core-to-fiber coupling approach, optimal combinations of two graded refractive index (GRIN) lenses have been demonstrated for the interconnection between a twin-core single-mode fiber and two single-core single-mode fibers. The optimal two-lens combinations achieve an efficient core-to-fiber separating coupling and allow the fibers and lenses to be coaxially assembled. Finally, axial deviations and transverse displacements of the components are discussed, and the latter increases the coupling loss more significantly. The gap length between the two lenses is designed to be fine-tuned to compensate for the transverse displacement, and the good linear compensation relationship contributes to the device manufacturing. This approach has potential applications in low coupling loss and low crosstalk devices without sophisticated alignment and adjustment, and enables channel separation for multicore fibers.
Single-Molecule Counting of Point Mutations by Transient DNA Binding
NASA Astrophysics Data System (ADS)
Su, Xin; Li, Lidan; Wang, Shanshan; Hao, Dandan; Wang, Lei; Yu, Changyuan
2017-03-01
High-confidence detection of point mutations is important for disease diagnosis and clinical practice. Hybridization probes are extensively used, but are hindered by their poor single-nucleotide selectivity. Shortening the length of DNA hybridization probes weakens the stability of the probe-target duplex, leading to transient binding between complementary sequences. The kinetics of probe-target binding events are highly dependent on the number of complementary base pairs. Here, we present a single-molecule assay for point mutation detection based on transient DNA binding and use of total internal reflection fluorescence microscopy. Statistical analysis of single-molecule kinetics enabled us to effectively discriminate between wild type DNA sequences and single-nucleotide variants at the single-molecule level. A higher single-nucleotide discrimination is achieved than in our previous work by optimizing the assay conditions, which is guided by statistical modeling of kinetics with a gamma distribution. The KRAS c.34 A mutation can be clearly differentiated from the wild type sequence (KRAS c.34 G) at a relative abundance as low as 0.01% mutant to WT. To demonstrate the feasibility of this method for analysis of clinically relevant biological samples, we used this technology to detect mutations in single-stranded DNA generated from asymmetric RT-PCR of mRNA from two cancer cell lines.
A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme
NASA Astrophysics Data System (ADS)
Ghoman, Satyajit S.
The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction.
The snapshot of the candidate population is updated iteratively using the evolutionary-algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of the evolutionary algorithm as well as POD-based reduced-order modeling, while overcoming the shortcomings inherent in these techniques. When linked with M3DOE, this strategy offers a computationally efficient methodology for problems with a high level of complexity and a challenging design space. This newly developed framework is demonstrated for its robustness on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
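The POD extraction step — dominant modes from an ensemble of candidate snapshots — can be sketched with a power iteration on the snapshot covariance. This toy version (function name, sizes, and iteration count are illustrative, not from the dissertation) recovers only the single most energetic mode.

```python
import math

def dominant_pod_mode(snapshots, iters=200):
    # Method-of-snapshots sketch: power iteration on the snapshot
    # covariance C = sum_k s_k s_k^T extracts the most energetic POD mode.
    m = len(snapshots[0])
    v = [1.0 / math.sqrt(m)] * m     # initial guess, unit norm
    for _ in range(iters):
        w = [0.0] * m
        for s in snapshots:
            c = sum(s[i] * v[i] for i in range(m))   # projection <s, v>
            for i in range(m):
                w[i] += c * s[i]                     # accumulate C v
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]                    # renormalize
    return v
```

In the full framework the first few such modes span a reduced design space in which the evolutionary search then operates.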
High-Power Fiber Lasers Using Photonic Band Gap Materials
NASA Technical Reports Server (NTRS)
DiDomenico, Leo; Dowling, Jonathan
2005-01-01
High-power fiber lasers (HPFLs) would be made from photonic band gap (PBG) materials, according to the proposal. Such lasers would be scalable in the sense that a large number of fiber lasers could be arranged in an array or bundle and then operated in a phase-locked condition to generate a superposed, highly directed high-power laser beam. It has been estimated that an average power level as high as 1,000 W per fiber could be achieved in such an array. Examples of potential applications for the proposed single-fiber lasers include welding and laser surgery. Additionally, the bundled fibers have applications in beaming power through free space for autonomous vehicles, laser weapons, free-space communications, and inducing photochemical reactions in large-scale industrial processes. The proposal has been inspired in part by recent improvements in the capabilities of single-mode fiber amplifiers and lasers to produce continuous high-power radiation. In particular, it has been found that the average output power of a single strand of a fiber laser can be increased by suitably changing the doping profile of active ions in its gain medium to optimize the spatial overlap of the electromagnetic field with the distribution of active ions. Such optimization minimizes pump power losses and increases the gain in the fiber laser system. The proposal would expand the basic concept of this type of optimization to incorporate exploitation of the properties (including, in some cases, nonlinearities) of PBG materials to obtain power levels and efficiencies higher than are now possible. Another element of the proposal is to enable pumping by concentrated sunlight. Somewhat more specifically, the proposal calls for exploitation of the properties of PBG materials to overcome a number of stubborn adverse phenomena that have impeded prior efforts to perfect HPFLs.
The most relevant of those phenomena is amplified spontaneous emission (ASE), which causes saturation of gain and power at undesirably low levels, and scattering of light from dopants. In designing a given fiber laser for reduced ASE, care must be taken to maintain a correct fiber structure for eventual scaling to an array of many such lasers such that the interactions among all the members of the array would cause them to operate in phase lock. Hence, the problems associated with improving a single-fiber laser are not entirely separate from the bundling problem, and some designs for individual fiber lasers may be better than others if the fibers are to be incorporated into bundles. Extensive calculations, expected to take about a year, must be performed in order to determine design parameters before construction of prototype individual and fiber lasers can begin. The design effort can be expected to include calculations to optimize overlaps between the electromagnetic modes and the gain media and calculations of responses of PBG materials to electromagnetic fields. Design alternatives and physical responses that may be considered include simple PBG fibers with no intensity-dependent responses, PBG fibers with intensity- dependent band-gap shifting (see figure), and broad-band pumping made possible by use of candidate broad-band pumping media in place of the air or vacuum gaps used in prior PBG fibers.
The Myth of Optimality in Clinical Neuroscience.
Holmes, Avram J; Patrick, Lauren M
2018-03-01
Clear evidence supports a dimensional view of psychiatric illness. Within this framework the expression of disorder-relevant phenotypes is often interpreted as a breakdown or departure from normal brain function. Conversely, health is reified, conceptualized as possessing a single ideal state. We challenge this concept here, arguing that there is no universally optimal profile of brain functioning. The evolutionary forces that shape our species select for a staggering diversity of human behaviors. To support our position we highlight pervasive population-level variability within large-scale functional networks and discrete circuits. We propose that, instead of examining behaviors in isolation, psychiatric illnesses can be best understood through the study of domains of functioning and associated multivariate patterns of variation across distributed brain systems. Copyright © 2018 Elsevier Ltd. All rights reserved.
Meniscus repair: the role of accelerated rehabilitation in return to sport.
Kozlowski, Erick J; Barcia, Anthony M; Tokish, John M
2012-06-01
With increasing understanding of the detrimental effects of the meniscectomized knee on outcomes and long-term durability, there is an ever increasing emphasis on meniscal preservation through repair. Repair in the young athlete is particularly challenging given the goals of returning to high-level sports. A healed meniscus is only the beginning of successful return to activity, and the understanding of "protection with progression" must be emphasized to ensure optimal return to performance. The principles of progression from low to high loads, single to multiplane activity, slow to high speeds, and stable to unstable platforms are cornerstones to this process. Emphasis on the kinetic chain environment that the knee will function within cannot be overemphasized. Communication between the operating surgeon and rehabilitation specialist is critical to optimizing effective return to sports.
2009-01-01
Single-phase fluid flow in microchannels has been widely investigated (Morini, 2006; Abdelaziz et al., 2008) and it was verified that the conventional... [Reference: Morini, G. L., 2006, "Scaling Effects for Liquid Flows in Microchannels," Heat Transfer Engineering, Vol. 27, No. 4, pp]
NASA Astrophysics Data System (ADS)
Kumar, Rishi; Mevada, N. Ramesh; Rathore, Santosh; Agarwal, Nitin; Rajput, Vinod; Sinh Barad, AjayPal
2017-08-01
To improve the welding quality of aluminum (Al) plate, a TIG welding system was prepared in which welding current, shielding gas flow rate, and current polarity can be controlled during the welding process. In the present work, an attempt has been made to study the effect of welding current, current polarity, and shielding gas flow rate on the tensile strength of the weld joint. Based on the number of parameters and their levels, the Response Surface Methodology technique was selected as the design of experiments. To understand the influence of the input parameters on the ultimate tensile strength of the weldment, an ANOVA analysis was carried out. The TIG welding process is also described and optimized using the Firefly algorithm, a metaheuristic nature-inspired algorithm developed by Xin-She Yang at Cambridge University in 2007. A general formulation of the Firefly algorithm is presented, together with an analytical mathematical model that optimizes the TIG welding process through a single equivalent objective function.
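For reference, the Firefly algorithm in Yang's standard formulation can be sketched as follows. This is a generic minimizer over a box, not the paper's welding objective, and all parameter defaults are illustrative.

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=60, beta0=1.0, gamma=1.0, alpha=0.2, seed=1):
    # Minimal Firefly Algorithm: fireflies move toward brighter
    # (lower-objective) ones with attractiveness beta0*exp(-gamma*r^2)
    # that decays with distance, plus a small random walk of scale alpha.
    rng = random.Random(seed)
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    light = [f(x) for x in pop]                 # brightness: lower is brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:         # firefly j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = f(pop[i])
    return min(pop, key=f)
```

In the paper's setting, `f` would be the single equivalent objective built from the response-surface model of tensile strength.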
Steca, Patrizia; Monzani, Dario; Greco, Andrea; Chiesi, Francesca; Primi, Caterina
2015-06-01
This study is aimed at testing the measurement properties of the Life Orientation Test-Revised (LOT-R) for the assessment of dispositional optimism by employing item response theory (IRT) analyses. The LOT-R was administered to a large sample of 2,862 Italian adults. First, confirmatory factor analyses demonstrated the theoretical conceptualization of the construct measured by the LOT-R as a single bipolar dimension. Subsequently, IRT analyses for polytomous, ordered response category data were applied to investigate the items' properties. The equivalence of the items across gender and age was assessed by analyzing differential item functioning. Discrimination and severity parameters indicated that all items were able to distinguish people with different levels of optimism and adequately covered the spectrum of the latent trait. Additionally, the LOT-R appears to be gender invariant and, with minor exceptions, age invariant. Results provided evidence that the LOT-R is a reliable and valid measure of dispositional optimism. © The Author(s) 2014.
NASA Technical Reports Server (NTRS)
Foyle, David C.
1993-01-01
Based on existing integration models in the psychological literature, an evaluation framework is developed to assess sensor fusion displays as might be implemented in an enhanced/synthetic vision system. The proposed framework for evaluating the operator's ability to use such systems is normative: the pilot's performance with the sensor fusion image is compared to models' predictions based on the pilot's performance when viewing the original component sensor images prior to fusion. This allows for determining when a sensor fusion system leads to: poorer performance than one of the original sensor displays (clearly an undesirable system, in which the fused sensor system causes some distortion or interference); better performance than with either single-sensor system alone, but at a sub-optimal level compared to model predictions; optimal performance compared to model predictions; or super-optimal performance, which may occur if the operator is able to use some highly diagnostic 'emergent features' in the sensor fusion display that were unavailable in the original sensor displays.
Patel, Nitin R; Ankolekar, Suresh
2007-11-30
Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
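A toy version of the expected-profit trade-off can make the idea concrete. The power model below is a standard one-sided two-arm z-test approximation, and every economic figure is hypothetical rather than taken from the paper.

```python
import math

def expected_profit(n, value=500.0, cost_per_patient=0.05, effect=0.3, z_alpha=1.96):
    # Expected profit (hypothetical $M) of a two-arm Phase III trial with
    # n patients total: P(success) * market value - trial cost, where
    # P(success) is the power of a one-sided z-test at effect size 0.3 SD.
    z = effect * math.sqrt(n / 4.0) - z_alpha
    power = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return power * value - cost_per_patient * n

# Scan a grid of sample sizes for the profit-maximizing n: unlike a
# fixed-power design, the optimum balances marginal power gain against
# marginal recruitment cost.
best_n = max(range(50, 2001, 10), key=expected_profit)
```

The portfolio extension in the abstract replaces this one-dimensional scan with a constrained allocation of sample sizes across several trials.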
Optimal ciliary beating patterns
NASA Astrophysics Data System (ADS)
Vilfan, Andrej; Osterman, Natan
2011-11-01
We introduce a measure for the energetic efficiency of single or collective biological cilia. We define the efficiency of a single cilium as Q^2/P, where Q is the volume flow rate of the pumped fluid and P is the dissipated power. For ciliary arrays, we define it as (ρQ)^2/(ρP), with ρ denoting the surface density of cilia. We then numerically determine the optimal beating patterns according to this criterion. For a single cilium, optimization leads to curly, somewhat counterintuitive patterns. But when looking at a densely ciliated surface, the optimal patterns become remarkably similar to what is observed in microorganisms like Paramecium. The optimal beating pattern then consists of a fast effective stroke and a slow sweeping recovery stroke. Metachronal waves lead to a significantly higher efficiency than synchronous beating. Efficiency also increases with an increasing density of cilia up to the point where crowding becomes a problem. We finally relate the pumping efficiency of cilia to the swimming efficiency of a spherical microorganism and show that the experimentally estimated efficiency of Paramecium is surprisingly close to the theoretically possible optimum.
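The two efficiency measures translate directly into code; units and values below are arbitrary. Note that the array form reduces to ρ·Q²/P, which is why efficiency rises with density until crowding intervenes.

```python
def single_cilium_efficiency(q, p):
    # Q^2 / P: Q is the volume flow rate, P the dissipated power.
    return q * q / p

def array_efficiency(rho, q, p):
    # (rho*Q)^2 / (rho*P) for a surface with cilia density rho;
    # algebraically this equals rho * Q^2 / P, so the array measure
    # scales linearly with density for fixed per-cilium Q and P.
    return (rho * q) ** 2 / (rho * p)
```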
Optimal design of low-density SNP arrays for genomic prediction: algorithm and applications
USDA-ARS?s Scientific Manuscript database
Low-density (LD) single nucleotide polymorphism (SNP) arrays provide a cost-effective solution for genomic prediction and selection, but algorithms and computational tools are needed for their optimal design. A multiple-objective, local optimization (MOLO) algorithm was developed for design of optim...
Li, Qiheng; Chen, Wenxing; Xiao, Hai; Gong, Yue; Li, Zhi; Zheng, Lirong; Zheng, Xusheng; Yan, Wensheng; Cheong, Weng-Chon; Shen, Rongan; Fu, Ninghua; Gu, Lin; Zhuang, Zhongbin; Chen, Chen; Wang, Dingsheng; Peng, Qing; Li, Jun; Li, Yadong
2018-06-01
Heteroatom-doped Fe-NC catalyst has emerged as one of the most promising candidates to replace noble metal-based catalysts for highly efficient oxygen reduction reaction (ORR). However, delicate controls over their structure parameters to optimize the catalytic efficiency and molecular-level understandings of the catalytic mechanism are still challenging. Herein, a novel pyrrole-thiophene copolymer pyrolysis strategy to synthesize Fe-isolated single atoms on sulfur and nitrogen-codoped carbon (Fe-ISA/SNC) with controllable S, N doping is rationally designed. The catalytic efficiency of Fe-ISA/SNC shows a volcano-type curve with the increase of sulfur doping. The optimized Fe-ISA/SNC exhibits a half-wave potential of 0.896 V (vs reversible hydrogen electrode (RHE)), which is more positive than those of Fe-isolated single atoms on nitrogen codoped carbon (Fe-ISA/NC, 0.839 V), commercial Pt/C (0.841 V), and most reported nonprecious metal catalysts. Fe-ISA/SNC is methanol tolerable and shows negligible activity decay in alkaline condition during 15 000 voltage cycles. X-ray absorption fine structure analysis and density functional theory calculations reveal that the incorporated sulfur engineers the charges on N atoms surrounding the Fe reactive center. The enriched charge facilitates the rate-limiting reductive release of OH* and therefore improved the overall ORR efficiency. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Kahn, Johannes; Kaul, David; Böning, Georg; Rotzinger, Roman; Freyhardt, Patrick; Schwabe, Philipp; Maurer, Martin H; Renz, Diane Miriam; Streitparth, Florian
2017-09-01
Purpose As a supra-regional level-I trauma center, we evaluated computed tomography (CT) acquisitions of polytraumatized patients for quality and dose optimization purposes. Adapted statistical iterative reconstruction [(AS)IR] levels, tube voltage reduction as well as a split-bolus contrast agent (CA) protocol were applied. Materials and Methods 61 patients were split into 3 different groups that differed with respect to tube voltage (120 - 140 kVp) and level of applied ASIR reconstruction (ASIR 20 - 50 %). The CT protocol included a native acquisition of the head followed by a single contrast-enhanced acquisition of the whole body (64-MSCT). CA (350 mg/ml iodine) was administered as a split bolus injection of 100 ml (2 ml/s), 20 ml NaCl (1 ml/s), 60 ml (4 ml/s), 40 ml NaCl (4 ml/s) with a scan delay of 85 s to detect injuries of both the arterial system and parenchymal organs in a single acquisition. Both the quantitative (SNR/CNR) and qualitative (5-point Likert scale) image quality was evaluated in parenchymal organs that are often injured in trauma patients. Radiation exposure was assessed. Results The use of IR combined with a reduction of tube voltage resulted in good qualitative and quantitative image quality and a significant reduction in radiation exposure of more than 40 % (DLP 1087 vs. 647 mGyxcm). Image quality could be improved due to a dedicated protocol that included different levels of IR adapted to different slice thicknesses, kernels and the examined area for the evaluation of head, lung, body and bone injury patterns. In synopsis of our results, we recommend the implementation of a polytrauma protocol with a tube voltage of 120 kVp and the following IR levels: cCT 5mm: ASIR 20; cCT 0.625 mm: ASIR 40; lung 2.5 mm: ASIR 30, body 5 mm: ASIR 40; body 1.25 mm: ASIR 50; body 0.625 mm: ASIR 0. 
Conclusion A dedicated adaptation of the CT trauma protocol (level of reduction of tube voltage and of IR) according to the examined body region (head, lung, body, bone), combined with a split-bolus CA injection protocol, allows for a high-quality CT examination and a relevant reduction of radiation exposure in the examination of polytraumatized patients. Key Points · Dedicated adaptation of the CT trauma protocol allows for an optimized examination. · Different levels of iterative reconstruction, tube voltage and the CA injection protocol are crucial. · A reduction of radiation exposure of more than 40 % with good image quality is possible. Citation Format · Kahn J, Kaul D, Böning G et al. Quality and Dose Optimized CT Trauma Protocol - Recommendation from a University Level-I Trauma Center. Fortschr Röntgenstr 2017; 189: 844 - 854. © Georg Thieme Verlag KG Stuttgart · New York.
Single-shot ADC imaging for fMRI.
Song, Allen W; Guo, Hua; Truong, Trong-Kha
2007-02-01
It has been suggested that apparent diffusion coefficient (ADC) contrast can be sensitive to cerebral blood flow (CBF) changes during brain activation. However, current ADC imaging techniques have an inherently low temporal resolution due to the requirement of multiple acquisitions with different b-factors, as well as potential confounds from cross talk between the deoxyhemoglobin-induced background gradients and the externally applied diffusion-weighting gradients. In this report a new method is proposed and implemented that addresses these two limitations. Specifically, a single-shot pulse sequence that sequentially acquires one gradient-echo (GRE) and two diffusion-weighted spin-echo (SE) images was developed. In addition, the diffusion-weighting gradient waveform was numerically optimized to null the cross terms with the deoxyhemoglobin-induced background gradients to fully isolate the effect of diffusion weighting from that of oxygenation-level changes. The experimental results show that this new single-shot method can acquire ADC maps with sufficient signal-to-noise ratio (SNR), and establish its practical utility in functional MRI (fMRI) to complement the blood oxygenation level-dependent (BOLD) technique and provide differential sensitivity for different vasculatures to better localize neural activity originating from the small vessels. Copyright (c) 2007 Wiley-Liss, Inc.
Determination of Ochratoxin A in Rye and Rye-Based Products by Fluorescence Polarization Immunoassay
Lippolis, Vincenzo; Porricelli, Anna C. R.; Cortese, Marina; Zanardi, Sandro; Pascale, Michelangelo
2017-01-01
A rapid fluorescence polarization immunoassay (FPIA) was optimized and validated for the determination of ochratoxin A (OTA) in rye and rye crispbread. Samples were extracted with a mixture of acetonitrile/water (60:40, v/v) and purified by SPE-aminopropyl column clean-up before performing the FPIA. Overall mean recoveries were 86 and 95% for spiked rye and rye crispbread, respectively, with relative standard deviations lower than 6%. The limit of detection (LOD) of the optimized FPIA was 0.6 μg/kg for both rye and rye crispbread. Good correlations (r > 0.977) were observed between OTA contents in contaminated samples obtained by FPIA and high-performance liquid chromatography (HPLC) with immunoaffinity cleanup used as the reference method. Furthermore, single laboratory validation and small-scale collaborative trials were carried out for the determination of OTA in rye according to Regulation 519/2014/EU laying down procedures for the validation of screening methods. The precision profile of the method, cut-off level and rate of false suspect results confirm the satisfactory analytical performance of the assay as a screening method. These findings show that the optimized FPIA is suitable for high-throughput screening, and permits reliable quantitative determination of OTA in rye and rye crispbread at levels that fall below the EU regulatory limits. PMID:28954398
Consistent integration of experimental and ab initio data into molecular and coarse-grained models
NASA Astrophysics Data System (ADS)
Vlcek, Lukas
As computer simulations are increasingly used to complement or replace experiments, highly accurate descriptions of physical systems at different time and length scales are required to achieve realistic predictions. The questions of how to objectively measure model quality in relation to reference experimental or ab initio data, and how to transition seamlessly between different levels of resolution are therefore of prime interest. To address these issues, we use the concept of statistical distance to define a measure of similarity between statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the systems' measurable properties. Through systematic coarse-graining, we arrive at appropriate expressions for optimization loss functions consistently incorporating microscopic ab initio data as well as macroscopic experimental data. The design of coarse-grained and multiscale models is then based on factoring the model system partition function into terms describing the system at different resolution levels. The optimization algorithm takes advantage of thermodynamic perturbation expressions for fast exploration of the model parameter space, enabling us to scan millions of parameter combinations per hour on a single CPU. The robustness and generality of the new model optimization framework and its efficient implementation are illustrated on selected examples including aqueous solutions, magnetic systems, and metal alloys.
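One concrete realization of a statistical distance between a model and its target is the Bhattacharyya angle between their discrete distributions; this sketch is a generic version and not necessarily the paper's exact definition.

```python
import math

def statistical_distance(p, q):
    # Bhattacharyya-angle form of the statistical distance between two
    # discrete distributions p and q: arccos of the Bhattacharyya
    # coefficient. Zero iff p == q; pi/2 for non-overlapping support.
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return math.acos(min(1.0, bc))   # clamp guards against rounding above 1
```

Minimizing such a distance over model parameters drives the model's sampled properties toward the target's, which is the convergence property the abstract invokes.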
Blana, Dimitra; Hincapie, Juan G; Chadwick, Edward K; Kirsch, Robert F
2013-01-01
Neuroprosthetic systems based on functional electrical stimulation aim to restore motor function to individuals with paralysis following spinal cord injury. Identifying the optimal electrode set for the neuroprosthesis is complicated because it depends on the characteristics of the individual (such as injury level), the force capacities of the muscles, the movements the system aims to restore, and the hardware limitations (number and type of electrodes available). An electrode-selection method has been developed that uses a customized musculoskeletal model. Candidate electrode sets are created based on desired functional outcomes and the hardware limitations of the proposed system. Inverse-dynamic simulations are performed to determine the proportion of target movements that can be accomplished with each set; the set that allows the most movements to be performed is chosen as the optimal set. The technique is demonstrated here for a system recently developed by our research group to restore whole-arm movement to individuals with high-level tetraplegia. The optimal set included selective nerve-cuff electrodes for the radial and musculocutaneous nerves; single-channel cuffs for the axillary, suprascapular, upper subscapular, and long-thoracic nerves; and muscle-based electrodes for the remaining channels. The importance of functional goals, hardware limitations, muscle and nerve anatomy, and surgical feasibility is highlighted.
Xu, Xuexin; Zhang, Yinghua; Li, Jinpeng; Zhang, Meng; Zhou, Xiaonan; Zhou, Shunli; Wang, Zhimin
2018-01-01
Improving winter wheat grain yield and water use efficiency (WUE) with minimum irrigation is very important for ensuring agricultural and ecological sustainability in the Northern China Plain (NCP). A three-year field experiment was conducted to determine how single irrigation can improve grain yield and WUE by manipulating the "sink-source" relationships. To achieve this, no-irrigation after sowing (W0) as a control, and five single irrigation treatments after sowing (75 mm of each irrigation) were established. They included irrigation at upstanding (WU), irrigation at jointing (WJ), irrigation at booting (WB), irrigation at anthesis (WA) and irrigation at medium milk (WM). Results showed that compared with no-irrigation after sowing (W0), WU, WJ, WB, WA and WM significantly improved mean grain yield by 14.1%, 19.9%, 17.9%, 11.6%, and 7.5%, respectively. WJ achieved the highest grain yield (8653.1 kg ha-1) and WUE (20.3 kg ha-1 mm-1), and WB observed the same level of grain yield and WUE as WJ. In comparison to WU, WJ and WB coordinated pre- and post-anthesis water use while reducing pre-anthesis and total evapotranspiration (ET). They also retained higher soil water content above 180 cm soil layers at anthesis, increased post-anthesis water use, and ultimately increased WUE. WJ and WB optimized population quantity and individual leaf size, delayed leaf senescence, extended grain-filling duration, improved post-anthesis biomass and biomass remobilization (source supply capacity) as well as post-anthesis biomass per unit anthesis leaf area (PostBA-leaf ratio). WJ also optimized the allocation of assimilation, increased the spike partitioning index (SPI, spike biomass/biomass at anthesis) and grain production efficiency (GPE, the ratio of grain number to biomass at anthesis), thus improved mean sink capacity by 28.1%, 5.7%, 21.9%, and 26.7% in comparison to W0, WU, WA and WM, respectively. 
Compared with WA and WM, WJ and WB also increased sink capacity, post-anthesis biomass, and biomass remobilization. These results demonstrate that a single irrigation at jointing or booting can improve grain yield and WUE by coordinating the "source-sink" relationships through high sink capacity and source supply capacity. Therefore, we propose that under adequate soil moisture conditions before sowing, a single 75-mm irrigation between jointing and booting is the optimal minimum irrigation practice for wheat production in this region.
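The reported yield and WUE figures imply a seasonal evapotranspiration that can be back-calculated, assuming WUE is defined as grain yield divided by ET (the input values are the abstract's; the check itself is only illustrative):

```python
# Consistency check on the reported WJ figures, assuming WUE = yield / ET.
yield_kg_ha = 8653.1          # WJ grain yield, kg ha-1
wue = 20.3                    # WJ water use efficiency, kg ha-1 mm-1
et_mm = yield_kg_ha / wue     # implied seasonal evapotranspiration
print(round(et_mm, 1))        # ~426.3 mm
```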
Optimal trajectories for an aerospace plane. Part 2: Data, tables, and graphs
NASA Technical Reports Server (NTRS)
Miele, Angelo; Lee, W. Y.; Wu, G. D.
1990-01-01
Data, tables, and graphs relative to the optimal trajectories for an aerospace plane are presented. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied for a single aerodynamic model (GHAME) and three engine models. Four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (1) minimization of the weight of fuel consumed; (2) minimization of the peak dynamic pressure; (3) minimization of the peak heating rate; and (4) minimization of the peak tangential acceleration. The above optimization studies are carried out for different combinations of constraints, specifically: initial path inclination that is either free or given; dynamic pressure that is either free or bounded; and tangential acceleration that is either free or bounded.
Ellis, BL; Hirsch, ML; Porter, SN; Samulski, RJ; Porteus, MH
2016-01-01
An emerging strategy for the treatment of monogenic diseases uses genetic engineering to precisely correct the mutation(s) at the genome level. Recent advancements in this technology have demonstrated therapeutic levels of gene correction using a zinc-finger nuclease (ZFN)-induced DNA double-strand break in conjunction with an exogenous DNA donor substrate. This strategy requires efficient nucleic acid delivery and among viral vectors, recombinant adeno-associated virus (rAAV) has demonstrated clinical success without pathology. However, a major limitation of rAAV is the small DNA packaging capacity and to date, the use of rAAV for ZFN gene delivery has yet to be reported. Theoretically, an ideal situation is to deliver both ZFNs and the repair substrate in a single vector to avoid inefficient gene targeting and unwanted mutagenesis, both complications of a rAAV co-transduction strategy. Therefore, a rAAV format was generated in which a single polypeptide encodes the ZFN monomers connected by a ribosome skipping 2A peptide and furin cleavage sequence. On the basis of this arrangement, a DNA repair substrate of 750 nucleotides was also included in this vector. Efficient polypeptide processing to discrete ZFNs is demonstrated, as well as the ability of this single vector format to stimulate efficient gene targeting in a human cell line and mouse model derived fibroblasts. Additionally, we increased rAAV-mediated gene correction up to sixfold using a combination of Food and Drug Administration-approved drugs, which act at the level of AAV vector transduction. Collectively, these experiments demonstrate the ability to deliver ZFNs and a repair substrate by a single AAV vector and offer insights for the optimization of rAAV-mediated gene correction using drug therapy. PMID:22257934
Zhang, Jian Qing; Loughlin, Kevin R; Zou, Kelly H; Haker, Steven; Tempany, Clare M C
2007-06-01
To evaluate the role of the combination of endorectal coil and external multicoil array magnetic resonance imaging (MRI) in the management of prostate cancer and in predicting the surgical margin status in a single-surgeon practice. We reviewed all patients referred by a single surgeon from January 1993 to May 2002 for staging prostate MRI before selecting treatment. All MRI examinations were performed at 1.5 T (Signa, GE Medical Systems) with a combination of endorectal and pelvic multicoil arrays. The tumor size, stage, and total gland volume on MRI, prostate-specific antigen (PSA) level, and Gleason score were all compared with the pathologic stage and diagnosis of positive surgical margins (PSMs). A total of 232 patients were evaluated, of whom 110 underwent radical prostatectomy, all performed by one surgeon (group 1), and 122 did not (group 2). Between the groups, MRI stage, PSA level, and age were all significantly different (P < 0.001). In group 1, the results showed a high specificity (99%) and accuracy (91%) for MRI staging of T3 cancer. Postoperative follow-up (median 4.5 years) revealed that 90% of men had PSA levels of less than 0.1 ng/mL. The PSM rate was 16%. No significant difference was found on MRI between the PSM and non-PSM groups. A single tumor length greater than 1.8 cm was the cutpoint above which PSMs were found (P = 0.002). The results of our study show that the combined use of clinical data and endorectal MRI can help optimize patient treatment and selection for surgery and, in a single surgeon's practice, lead to successful outcomes.
Operant Conditioning of Primate Prefrontal Neurons
Schultz, Wolfram; Sakagami, Masamichi
2010-01-01
An operant is a behavioral act that has an impact on the environment to produce an outcome, constituting an important component of voluntary behavior. Because the environment can be volatile, the same action may cause different consequences. Thus to obtain an optimal outcome, it is crucial to detect action–outcome relationships and adapt the behavior accordingly. Although prefrontal neurons are known to change activity depending on expected reward, it remains unknown whether prefrontal activity contributes to obtaining reward. We investigated this issue by setting variable relationships between levels of single-neuron activity and rewarding outcomes. Lateral prefrontal neurons changed their spiking activity according to the specific requirements for gaining reward, without the animals making a motor response. Thus spiking activity constituted an operant response. Data from a control task suggested that these changes were unlikely to reflect simple reward predictions. These data demonstrate a remarkable capacity of prefrontal neurons to adapt to specific operant requirements at the single-neuron level. PMID:20107129
A Technical Analysis Information Fusion Approach for Stock Price Analysis and Modeling
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
In this paper, we address the problem of technical analysis information fusion for improving stock market index-level prediction. We present an approach for analyzing stock market price behavior based on different categories of technical analysis metrics and a multiple predictive system. Each category of technical analysis measures is used to characterize stock market price movements. The predictive system is based on an ensemble of neural networks (NN) coupled with particle swarm intelligence for parameter optimization, where each single neural network is trained with a specific category of technical analysis measures. The experimental evaluation on three international stock market indices and three individual stocks shows that the presented ensemble-based technical indicator fusion system significantly improves forecasting accuracy in comparison with a single NN. It also outperforms the classical neural network trained with index-level lagged values and an NN trained with stationary wavelet transform details and approximation coefficients. As a result, technical information fusion in an NN ensemble architecture helps improve prediction accuracy.
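The fusion idea, one expert model per indicator category with the ensemble averaging their predictions, can be sketched on synthetic data. Here simple ridge-regression experts stand in for the paper's PSO-tuned neural networks, and the "lag" and "moving-average" feature categories are illustrative, not the paper's indicator sets:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "index" series; two illustrative feature categories stand in
# for the paper's technical-indicator categories.
n = 200
t = np.arange(n)
price = np.sin(0.1 * t) + 0.01 * t + 0.1 * rng.standard_normal(n)

i = np.arange(5, n - 1)
y = price[i + 1]                                   # next-step target
ma5 = np.convolve(price, np.ones(5) / 5, mode="valid")
cat_lag = np.column_stack([price[i], price[i - 1]])          # lag features
cat_ma = np.column_stack([ma5[i - 4], price[i] - ma5[i - 4]])  # MA features

def fit(X, y, lam=1e-3):
    """Ridge-regression 'expert' (stand-in for one trained NN)."""
    X1 = np.column_stack([X, np.ones(len(X))])
    return np.linalg.solve(X1.T @ X1 + lam * np.eye(X1.shape[1]), X1.T @ y)

def predict(X, w):
    return np.column_stack([X, np.ones(len(X))]) @ w

tr, te = slice(0, 150), slice(150, None)           # train/test split
preds = [predict(X[te], fit(X[tr], y[tr])) for X in (cat_lag, cat_ma)]
ensemble = np.mean(preds, axis=0)                  # fuse by averaging

rmse = lambda p: float(np.sqrt(np.mean((p - y[te]) ** 2)))
print(rmse(ensemble), [rmse(p) for p in preds])
```

By convexity of the squared error, the averaged prediction's MSE never exceeds the mean of the experts' MSEs, which is the basic motivation for fusing the category-specific models.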
Jing, Luyang; Wang, Taiyong; Zhao, Ming; Wang, Peng
2017-01-01
A fault diagnosis approach based on multi-sensor data fusion is a promising tool to deal with complicated damage detection problems of mechanical systems. Nevertheless, this approach suffers from two challenges: (1) feature extraction from various types of sensory data and (2) selection of a suitable fusion level. It is usually difficult to choose an optimal feature or fusion level for a specific fault diagnosis task, and extensive domain expertise and human labor are also highly required during these selections. To address these two challenges, we propose an adaptive multi-sensor data fusion method based on deep convolutional neural networks (DCNN) for fault diagnosis. The proposed method can learn features from raw data and optimize a combination of different fusion levels adaptively to satisfy the requirements of any fault diagnosis task. The proposed method is tested on a planetary gearbox test rig. Handcrafted features, manually selected fusion levels, single sensory data, and two traditional intelligent models, back-propagation neural networks (BPNN) and a support vector machine (SVM), are used as comparisons in the experiment. The results demonstrate that the proposed method is able to detect the conditions of the planetary gearbox effectively, with the best diagnosis accuracy among all comparative methods in the experiment. PMID:28230767
Interplay of superconductivity and magnetic fluctuations in single crystals of BaFe2-xCoxAs2
NASA Astrophysics Data System (ADS)
Bag, Biplab; Kumar, Ankit; Banerjee, S. S.; Vinod, K.; Bharathi, A.
2018-04-01
We report an unusual pinning response in optimally doped and overdoped single crystals of BaFe2-xCoxAs2. We use a magneto-optical imaging technique to measure the local magnetization response, which shows an unusual transformation from a low-temperature diamagnetic state to a high-temperature positive magnetization response. Our data suggest the coexistence of magnetic fluctuations and superconductivity in the optimally doped crystal. The strength of the magnetic fluctuations is greatest in the optimally doped compound, which has the highest Tc.
Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1997-01-01
A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set-based geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body, eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance-metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis.
Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.
González-Bacerio, Jorge; Osuna, Joel; Ponce, Amaia; Fando, Rafael; Figarella, Katherine; Méndez, Yanira; Charli, Jean-Louis; Chávez, María de Los Á
2014-12-01
Plasmodium falciparum neutral metallo-aminopeptidase (PfAM1), a member of the M1 family of metalloproteases, is a promising target for malaria, a devastating human parasitic disease. We report the high-level expression of PfAM1 in Escherichia coli BL21. An optimized gene, with a codon adaptation index and an average G/C content higher than those of the native gene, was synthesized and cloned in the pTrcHis2B vector. Optimal expression was achieved by induction with 1 mM IPTG at 37°C for 18 h. This allowed obtaining 100 mg of recombinant PfAM1 (rPfAM1) per L of culture medium; 19% of the E. coli soluble protein mass was rPfAM1. rPfAM1, fused to an amino-terminal 6×His tag, was purified in a single step by immobilized metal ion affinity chromatography. The protein showed only limited signs of proteolytic degradation, and this step increased purity 27-fold. The kinetic characteristics of rPfAM1, such as a neutral optimal pH, a preference for substrates with basic or hydrophobic amino acids at the P1 position, an inhibition profile typical of metallo-aminopeptidases, and inhibition by Zn(2+) excess, were similar to those of the native PfAM1. We have thus optimized an expression system that should be useful for identifying new PfAM1 inhibitors. Copyright © 2014 Elsevier Inc. All rights reserved.
Optimization of Ferroelectric Ceramics by Design at the Microstructure Level
NASA Astrophysics Data System (ADS)
Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.
2010-05-01
Ferroelectric materials show remarkable physical behaviors that make them essential for many devices, and they have been extensively studied for applications in nonvolatile random access memory (NVRAM) and high-speed random access memories. Although ferroelectric ceramics (polycrystals) are easy to manufacture and to modify compositionally, and represent the widest application area of these materials, computational and theoretical studies are sparse for many reasons, including the large number of constituent atoms. Macroscopic properties of ferroelectric polycrystals are dominated by inhomogeneities at the crystallographic domain/grain level. The orientation of grains/domains is critical to the electromechanical response of single-crystalline and polycrystalline materials. Polycrystalline materials have the potential to exhibit better performance at a macroscopic scale through design of the domain/grain configuration at the domain-size scale. This suggests that piezoelectric properties can be optimized by a proper choice of the parameters which control the distribution of grain orientations. Nevertheless, this choice is complicated, and it is impossible to analyze all possible combinations of the distribution parameters or the angles themselves. Hence we have implemented the stochastic optimization technique of simulated annealing, combined with homogenization, for the optimization problem. The mathematical homogenization theory of a piezoelectric medium is implemented in the finite element method (FEM) by solving the coupled electrical and mechanical equilibrium fields. This implementation enables the study of the dependence of the macroscopic electromechanical properties of a typical crystalline and polycrystalline ferroelectric ceramic on the grain orientation.
Optimal noise reduction in 3D reconstructions of single particles using a volume-normalized filter
Sindelar, Charles V.; Grigorieff, Nikolaus
2012-01-01
The high noise level found in single-particle electron cryo-microscopy (cryo-EM) image data presents a special challenge for three-dimensional (3D) reconstruction of the imaged molecules. The spectral signal-to-noise ratio (SSNR) and related Fourier shell correlation (FSC) functions are commonly used to assess and mitigate the noise-generated error in the reconstruction. Calculation of the SSNR and FSC usually includes the noise in the solvent region surrounding the particle and therefore does not accurately reflect the signal in the particle density itself. Here we show that the SSNR in a reconstructed 3D particle map is linearly proportional to the fractional volume occupied by the particle. Using this relationship, we devise a novel filter (the “single-particle Wiener filter”) to minimize the error in a reconstructed particle map, if the particle volume is known. Moreover, we show how to approximate this filter even when the volume of the particle is not known, by optimizing the signal within a representative interior region of the particle. We show that the new filter improves on previously proposed error-reduction schemes, including the conventional Wiener filter as well as figure-of-merit weighting, and quantify the relationship between all of these methods by theoretical analysis as well as numeric evaluation of both simulated and experimentally collected data. The single-particle Wiener filter is applicable across a broad range of existing 3D reconstruction techniques, but is particularly well suited to the Fourier inversion method, leading to an efficient and accurate implementation. PMID:22613568
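The linear SSNR-volume relationship can be illustrated with a toy real-space analogue (not the paper's Fourier-shell computation): two independent noisy "half-maps" of a particle occupying a given fraction of the box, with the global correlation between them converted to an SSNR via the standard SSNR = FSC/(1 - FSC) relation. All box sizes and noise levels below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def ssnr_from_half_maps(frac, n_vox=64**3, sigma=1.0):
    """Toy estimate of volume-averaged SSNR from two noisy half-maps of a
    particle occupying fraction `frac` of the box (illustrative only)."""
    signal = np.zeros(n_vox)
    n_part = int(frac * n_vox)
    signal[:n_part] = rng.standard_normal(n_part)   # unit-variance density
    half1 = signal + sigma * rng.standard_normal(n_vox)
    half2 = signal + sigma * rng.standard_normal(n_vox)
    c = np.corrcoef(half1, half2)[0, 1]             # global analogue of FSC
    return c / (1.0 - c)                            # SSNR = FSC / (1 - FSC)

s_small, s_large = ssnr_from_half_maps(0.1), ssnr_from_half_maps(0.2)
print(s_small, s_large)   # doubling the occupied fraction ~doubles the SSNR
```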
Ogura, Takahiro; Tsuchiya, Akihiro; Minas, Tom; Mizuno, Shuichi
2018-04-01
Objective The effects of hydrostatic pressure (HP) on matrix synthesis by human articular chondrocytes have been reported elsewhere. In order to optimize the production of extracellular matrix, we aimed to clarify the effects of repetitive HP on the metabolic function of human articular chondrocytes. Design Human articular chondrocytes were expanded and embedded within a collagen gel/sponge scaffold. We incubated these constructs with and without HP followed by atmospheric pressure (AP) and repeated a second HP followed by AP over 14 days. Genomic, biochemical, and histological evaluations were performed to compare the effects of each regimen on the constructs. Results The gene expressions of collagen type II and aggrecan core protein were significantly upregulated with repetitive HP regimens compared with a single HP or AP by 14 days (P < 0.01 or 0.05). Matrix metalloproteinase-13 (MMP-13) in AP was upregulated significantly compared with the other HP regimens at day 14 (P < 0.01). No significant difference was observed in tissue inhibitor of metalloproteinases-II. Immunohistology demonstrated that application of HP (both repetitive and single) promoted the accumulation of specific extracellular matrix and reduced MMP-13. A single regimen of HP followed by AP significantly increased the amount of sulfated glycosaminoglycan compared with AP, whereas repetitive HP remained at a level similar to that of AP. Conclusions Repetitive HP had a greater effect on anabolic activity by chondrocytes than a single HP regimen, which will be advantageous for producing a matrix-rich cell construct.
NASA Astrophysics Data System (ADS)
Zorn, Martin; Hülsewede, Ralf; Pietrzak, Agnieszka; Meusel, Jens; Sebastian, Jürgen
2015-03-01
Laser bars, laser arrays, and single emitters are highly desired light sources, e.g., for direct material processing, as pump sources for solid-state and fiber lasers, or for medical applications. These sources require high output powers with optimal efficiency together with good reliability, resulting in a long device lifetime. Desired wavelengths range from 760 nm for esthetic skin treatment over 915 nm, 940 nm, and 976 nm to 1030 nm for direct material processing and pumping applications. In this publication we present our latest developments for the different application-defined wavelengths in continuous-wave operation mode. At 760 nm, laser bars with a 30% filling factor and 1.5 mm resonator length show optical output powers of around 90-100 W using an optimized design. For longer wavelengths between 915 nm and 1030 nm, laser bars with a 4 mm resonator length and 50% filling factor show reliable output powers above 200 W. The efficiency reached lies above 60%, and the slow-axis divergence (95% power content) is below 7°. Further developments of bars tailored for 940 nm emission reach output powers of 350 W. Reliable single emitters for effective fiber coupling, with emitter widths of 90 μm and 195 μm, are presented. They emit optical powers of 12 W and 24 W, respectively, at emission wavelengths of 915 nm, 940 nm, and 976 nm. Moreover, reliability tests of the 90-μm single emitters at a power level of 12 W currently show a lifetime of over 3500 h.
Single molecule fluorescence burst detection of DNA fragments separated by capillary electrophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haab, B.B.; Mathies, R.A.
A method has been developed for detecting DNA separated by capillary gel electrophoresis (CGE) using single molecule photon burst counting. A confocal fluorescence microscope was used to observe the fluorescence bursts from single molecules of DNA multiply labeled with the thiazole orange derivative TO6 as they passed through the nearly 2-μm-diameter focused laser beam. Amplified photoelectron pulses from the photomultiplier are grouped into bins of 360-450 μs in duration, and the resulting histogram is stored in a computer for analysis. Solutions of M13 DNA were first flowed through the capillary at various concentrations, and the resulting data were used to optimize the parameters for digital filtering using a low-pass Fourier filter, selecting a discriminator level for peak detection, and applying a peak-calling algorithm. The optimized single molecule counting method was then applied to an electrophoretic separation of M13 DNA and to a separation of pBR 322 DNA from pRL 277 DNA. Clusters of discrete fluorescence bursts were observed at the expected appearance time of each DNA band. The autocorrelation function of these data indicated transit times that were consistent with the observed electrophoretic velocity. These separations were easily detected when only 50-100 molecules of DNA per band traveled through the detection region. This new detection technology should lead to the routine analysis of DNA in capillary columns with an on-column sensitivity of nearly 100 DNA molecules/band or better. 45 refs., 10 figs.
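The processing chain described (binned counts, low-pass Fourier filtering, discriminator-level peak calling) can be sketched on synthetic data. The bin count, burst rates, Gaussian frequency-domain roll-off, and 5-sigma discriminator below are illustrative choices, not the paper's calibrated parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic binned photon counts: Poisson background plus four bursts.
n_bins = 4000
counts = rng.poisson(2.0, n_bins).astype(float)
for center in (500, 1500, 2500, 3500):
    counts[center - 5 : center + 5] += rng.poisson(20.0, 10)

# Low-pass filter in the Fourier domain (Gaussian roll-off avoids ringing).
spec = np.fft.rfft(counts)
k = np.arange(spec.size)
spec *= np.exp(-((k / 200.0) ** 2))
smooth = np.fft.irfft(spec, n_bins)

# Discriminator level from a robust background estimate, then peak calling:
# each contiguous run of bins above threshold counts as one burst.
med = np.median(smooth)
mad_sigma = 1.4826 * np.median(np.abs(smooth - med))
above = smooth > med + 5.0 * mad_sigma
n_bursts = int(above[0]) + int(np.sum(np.diff(above.astype(int)) == 1))
print(n_bursts)   # expected: 4 for this synthetic trace
```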
Yango, Pamela; Altman, Eran; Smith, James F.; Klatsky, Peter C.; Tran, Nam D.
2015-01-01
Objective To determine whether optimal human spermatogonial stem cell (SSC) cryopreservation is best achieved with testicular tissue or single cell suspension cryopreservation. This study compares the effectiveness of these two approaches using testicular SSEA-4+ cells, a known population containing SSCs. Design In vitro human testicular tissues. Setting Academic research unit. Patients Adult testicular tissues (n = 4) collected from subjects with normal spermatogenesis and normal fetal testicular tissues (n = 3). Intervention(s) Testicular tissue vs. single cell suspension cryopreservation. Main Outcome Measures Cell viability, total cell recovery per milligram of tissue, as well as viable and SSEA-4+ cell recovery. Results Single cell suspension cryopreservation yielded higher recovery of SSEA-4+ cells enriched in adult SSCs, whereas fetal SSEA-4+ cell recovery was similar between testicular tissue and single cell suspension cryopreservation. Conclusions Adult and fetal human SSEA-4+ populations exhibited differential sensitivity to cryopreservation based on whether they were cryopreserved in situ as testicular tissues or as single cells. Thus, optimal preservation of human SSCs depends on the patient age, type of samples cryopreserved, and end points of therapeutic applications. PMID:25241367
Optimized Free Energies from Bidirectional Single-Molecule Force Spectroscopy
NASA Astrophysics Data System (ADS)
Minh, David D. L.; Adib, Artur B.
2008-05-01
An optimized method for estimating path-ensemble averages using data from processes driven in opposite directions is presented. Based on this estimator, bidirectional expressions for reconstructing free energies and potentials of mean force from single-molecule force spectroscopy—valid for biasing potentials of arbitrary stiffness—are developed. Numerical simulations on a model potential indicate that these methods perform better than unidirectional strategies.
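The paper's optimized bidirectional estimator builds on acceptance-ratio ideas; as a hedged stand-in, the classical Bennett acceptance ratio (BAR), which the path-ensemble estimator generalizes, can be sketched on synthetic Gaussian work distributions consistent with the Crooks fluctuation theorem. The values of beta, the true free-energy difference, sigma, and the sample sizes are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

beta, dF_true, sigma, n = 1.0, 2.0, 1.0, 20000
# Gaussian work distributions consistent with the Crooks fluctuation
# theorem: mean dissipated work beta*sigma^2/2 in each direction.
w_f = dF_true + 0.5 * beta * sigma**2 + sigma * rng.standard_normal(n)
w_r = -dF_true + 0.5 * beta * sigma**2 + sigma * rng.standard_normal(n)

def bar_residual(c):
    """BAR self-consistency residual for equal forward/reverse counts."""
    lhs = np.sum(1.0 / (1.0 + np.exp(beta * (w_f - c))))
    rhs = np.sum(1.0 / (1.0 + np.exp(beta * (w_r + c))))
    return lhs - rhs

lo, hi = -10.0, 10.0
for _ in range(60):                 # bisection: residual is monotone in c
    mid = 0.5 * (lo + hi)
    if bar_residual(mid) < 0.0:
        lo = mid
    else:
        hi = mid
dF_est = 0.5 * (lo + hi)
print(dF_est)   # close to dF_true = 2.0
```

BAR combines both pulling directions with minimum-variance weights, which is why bidirectional estimates outperform one-sided (Jarzynski-type) averages when reverse data are available.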
Koziel, David; Michaelis, Uwe; Kruse, Tobias
2018-08-01
Endotoxins contaminate proteins that are produced in E. coli. High levels of endotoxins can influence cellular assays and cause severe adverse effects when administered to humans. Thus, endotoxin removal is important in protein purification for academic research and in GMP manufacturing of biopharmaceuticals. Several methods exist to remove endotoxin, but they often require additional downstream-processing steps, decrease protein yield, and are costly. These disadvantages can be avoided by using an integrated endotoxin depletion (iED) wash-step that utilizes Triton X-114 (TX114). In this paper, we show that the iED wash-step is broadly applicable in most commonly used chromatographies: it reduces endotoxin by a factor of 10^3 to 10^6 during NiNTA-, MBP-, SAC-, GST-, Protein A, and CEX-chromatography, but not during AEX- or HIC-chromatography. We characterized the iED wash-step using Design of Experiments (DoE) and identified optimal experimental conditions for application scenarios that are relevant to academic research or industrial GMP manufacturing. A single iED wash-step with 0.75% (v/v) TX114 added to the feed and wash buffer can reduce endotoxin levels to below 2 EU/ml or deplete most endotoxin while keeping manufacturing costs as low as possible. This comprehensive characterization enables academia and industry to widely adopt the iED wash-step for routine, efficient, and cost-effective depletion of endotoxin during protein purification at any scale. Copyright © 2018. Published by Elsevier B.V.
Fractional Programming for Communication Systems—Part I: Power Control and Beamforming
NASA Astrophysics Data System (ADS)
Shen, Kaiming; Yu, Wei
2018-05-01
This two-part paper explores the use of fractional programming (FP) in the design and optimization of communication systems. Part I focuses on FP theory and on solving continuous problems. The main theoretical contribution is a novel quadratic transform technique for tackling the multiple-ratio concave-convex FP problem, in contrast to conventional FP techniques, which mostly deal only with the single-ratio or max-min-ratio case. Multiple-ratio FP problems are important for the optimization of communication networks, because system-level design often involves multiple signal-to-interference-plus-noise ratio terms. This paper considers the applications of FP to solving continuous problems in communication system design, particularly power control, beamforming, and energy efficiency maximization. These application cases illustrate that the proposed quadratic transform can greatly facilitate optimization involving ratios by recasting the original nonconvex problem as a sequence of convex problems. This FP-based problem reformulation gives rise to an efficient iterative optimization algorithm with provable convergence to a stationary point. The paper further demonstrates close connections between the proposed FP approach and other well-known algorithms in the literature, such as fixed-point iteration and weighted minimum mean-square-error beamforming. The optimization of discrete problems is discussed in Part II.
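A single-ratio instance of the quadratic transform can be sketched on a scalar energy-efficiency problem (the channel gain, circuit power, and grid search below are illustrative choices, not from the paper): to maximize A(x)/B(x), introduce an auxiliary variable y and alternate between the closed-form update y = sqrt(A)/B and maximizing the surrogate 2*y*sqrt(A(x)) - y^2*B(x), which never decreases the original ratio:

```python
import numpy as np

# Toy energy-efficiency problem: maximize rate/power = A(p)/B(p) with
# A(p) = log2(1 + g*p), B(p) = p + p_c (g and p_c are illustrative values).
g, p_c = 4.0, 1.0
A = lambda p: np.log2(1.0 + g * p)
B = lambda p: p + p_c

grid = np.linspace(1e-6, 10.0, 100001)   # grid search for the inner step
p = 5.0                                  # arbitrary initialization
history = []
for _ in range(15):
    y = np.sqrt(A(p)) / B(p)             # closed-form auxiliary update
    # maximize the concave surrogate 2*y*sqrt(A) - y^2*B over p
    p = grid[np.argmax(2.0 * y * np.sqrt(A(grid)) - y**2 * B(grid))]
    history.append(float(A(p) / B(p)))

print(p, history[-1])   # converges near p ~ 1.0, ratio ~ 1.16
```

The monotone improvement follows because the surrogate equals the ratio at the current point after the y-update and lower-bounds it everywhere else, which is the mechanism behind the provable convergence mentioned in the abstract.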
Zhang, J D; Yang, Q
2015-03-13
The aim of this study was to develop a protocol for the production of fungal bio-pesticides with high efficiency, low cost, and non-polluting fermentation, while also increasing their survival rate under field conditions. This is the first study to develop biocontrol Trichoderma harzianum transformants (TS1) that are resistant to benzimidazole fungicides. Agricultural corn stover and wheat bran waste were used as the medium and inducing carbon source for solid fermentation. Spore production was observed, and the method was optimized using single-factor tests with 4 factors at 3 levels in an orthogonal experimental design to determine the optimal culture conditions for T. harzianum TS1. In this step, we determined the best conditions for fermenting the biocontrol fungi. The optimal culture conditions for T. harzianum TS1 were: cultivation for 8 days, a straw-to-wheat-bran ratio of 1:3, ammonium persulfate as the nitrogen source, and a water content of 30 mL. Under these conditions, the sporulation of T. harzianum TS1 reached 1.49 × 10^10 CFU/g, 1.46-fold higher than that achieved before optimization. Increased sporulation of T. harzianum TS1 results in better utilization of space and nutrients to achieve control of plant pathogens. This method also allows for the recycling of agricultural waste straw.
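A 4-factor, 3-level orthogonal design of the kind described is conventionally the Taguchi L9(3^4) array, which covers the factor space in 9 runs instead of the 3^4 = 81 full-factorial runs. The sketch below (illustrative; the paper does not print its array) writes out the standard L9 and verifies its defining balance property:

```python
from itertools import combinations

import numpy as np

# Standard L9(3^4) orthogonal array, levels coded 0..2: one row per run,
# one column per factor.
L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])

# Orthogonality: every pair of columns contains each of the 9 possible
# level combinations exactly once.
for i, j in combinations(range(4), 2):
    pairs = {(a, b) for a, b in zip(L9[:, i], L9[:, j])}
    assert len(pairs) == 9
print("balanced")
```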
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
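The single-constraint case admits an essentially closed-form optimality criterion. As a sketch on an illustrative sizing problem (not one of the paper's examples): minimize the weight sum(w_i*A_i) subject to a displacement-type constraint sum(c_i/A_i) = d, where Lagrangian stationarity gives w_i = lam*c_i/A_i^2 and motivates the classic resizing rule:

```python
import numpy as np

# Illustrative single-constraint sizing problem (not from the paper):
# minimize W = sum(w_i * A_i) subject to sum(c_i / A_i) = d.
w = np.array([2.0, 1.0, 3.0, 1.5])    # weight per unit area of each member
c = np.array([1.0, 0.5, 2.0, 1.0])    # flexibility coefficients
d = 1.0                               # displacement limit

A = np.ones_like(w)                   # initial design
eta = 0.5                             # OC step exponent
for _ in range(20):
    # resize toward the stationarity condition w_i = lam * c_i / A_i**2;
    # lam is absorbed by rescaling the design onto the active constraint.
    A = A * (c / (w * A**2)) ** eta
    A *= np.sum(c / A) / d            # enforce sum(c_i / A_i) = d exactly

# closed-form optimum for comparison: A_i = sqrt(lam * c_i / w_i)
lam_opt = (np.sum(np.sqrt(c * w)) / d) ** 2
A_opt = np.sqrt(lam_opt * c / w)
print(np.max(np.abs(A - A_opt)))      # ~0: the iteration hits the optimum
```

With a separable objective and a single constraint the iteration converges immediately; the multiple-constraint case requires the more complex Lagrange-multiplier evaluation the abstract mentions.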
Wang, Han; Chen, Beibei; He, Man; Hu, Bin
2017-05-02
Single cell analysis has become a significant research field in recent years, reflecting the heterogeneity of cells in a biological system. In this work, a facile droplet chip was fabricated and coupled online with time-resolved inductively coupled plasma mass spectrometry (ICPMS) via a microflow nebulizer for the determination of zinc in single HepG2 cells. On the PDMS microfluidic chip, with its flow-focusing geometry, the aqueous cell suspension was ejected and divided by hexanol to generate droplets. Droplets encapsulating single cells remain intact during transport into the ICP for subsequent detection. Under the optimized conditions, the droplet generation frequency is 3-6 × 10^6 min^-1 and the injected cell number is 2500 min^-1, which ensures single cell encapsulation. ZnO nanoparticles (NPs) were used for the quantification of zinc in single cells, and the accuracy was validated against a conventional acid digestion-ICPMS method. ZnO NP-incubated HepG2 cells were analyzed as model samples, and the results exhibit the heterogeneity of HepG2 cells in the uptake/adsorption of ZnO NPs. The developed online droplet-chip-ICPMS analysis system achieves stable single cell encapsulation and has high throughput for single cell analysis. It has the potential to monitor the content as well as the distribution of trace elements/NPs at the single cell level.
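The reported rates imply that multi-cell droplets are vanishingly rare, which can be checked with a Poisson occupancy model (a standard assumption for droplet encapsulation; the rates are the abstract's, the model is an illustration):

```python
import math

# Poisson occupancy: ~3e6 droplets/min (lower bound) vs. 2500 cells/min.
lam = 2500 / 3e6                               # mean cells per droplet
p_multi = 1.0 - math.exp(-lam) * (1.0 + lam)   # P(>= 2 cells in a droplet)
print(p_multi)    # ~3.5e-7, so doublets are negligible
```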
Optimizing drug-dose alerts using commercial software throughout an integrated health care system.
Saiyed, Salim M; Greco, Peter J; Fernandes, Glenn; Kaelber, David C
2017-11-01
All default electronic health record and drug reference database vendor drug-dose alerting recommendations (single dose, daily dose, dose frequency, and dose duration) were silently turned on in inpatient, outpatient, and emergency department areas for pediatric-only and nonpediatric-only populations. Drug-dose alerts were evaluated during a 3-month period. Drug-dose alerts fired on 12% of orders (104 098/834 911). System-level and drug-specific strategies to decrease drug-dose alerts were analyzed. System-level strategies included: (1) turning off all minimum drug-dosing alerts, (2) turning off all incomplete-information drug-dosing alerts, (3) increasing the maximum single-dose drug-dose alert threshold to 125%, (4) increasing the maximum daily-dose drug-dose alert threshold to 125%, and (5) increasing the dose-frequency drug-dose alert threshold to more than 2 doses per day above the initial threshold. Drug-specific strategies included changing drug-specific maximum single-dose and maximum daily-dose alerting parameters for the top 22 drug categories by alert frequency. System-level approaches decreased alerting to 5% (46 988/834 911) and drug-specific approaches decreased alerts to 3% (25 455/834 911). Drug-dose alerts varied between care settings and patient populations. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
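The effect of raising a dose-cap threshold to 125% can be illustrated with synthetic data (the dose-ratio distributions below are invented for illustration and are not the study's order data; the 100% and 125% cutoffs mirror the abstract's threshold change):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical ordered-dose / reference-dose ratios: routine orders cluster
# near 1.0, while a small tail of clear overdoses sits well above it.
ratios = np.concatenate([
    rng.normal(1.0, 0.15, 9800),     # routine orders
    rng.uniform(1.5, 3.0, 200),      # clear overdoses
])

alert_rate_100 = float(np.mean(ratios > 1.00))   # alert at the reference cap
alert_rate_125 = float(np.mean(ratios > 1.25))   # cap raised to 125%
caught = float(np.mean(ratios[9800:] > 1.25))    # overdoses still flagged
print(alert_rate_100, alert_rate_125, caught)
```

The point of the sketch is the trade-off the study exploits: a modest threshold increase suppresses the bulk of near-reference nuisance alerts while still flagging every order in the clearly excessive tail.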
28nm node process optimization: a lithography centric view
NASA Astrophysics Data System (ADS)
Seltmann, Rolf
2014-10-01
Many experts claim that the 28nm technology node will forever remain the most cost-effective technology node. This results primarily from the cost of manufacturing, since 28nm is the last true single patterning (SP) node, and is also driven by the dramatic increase in design costs and the limited shrink factor of the following nodes. Thus, it is assumed that this technology will still be alive for many years. To be cost competitive, high yields are mandatory. Meanwhile, leading-edge foundries have optimized the yield of the 28nm node to such a level that it is nearly exclusively defined by random defectivity. However, reaching that level was a long journey. In this talk I concentrate on the contribution of lithography to this yield learning curve, taking a critical metal patterning application as an example. I show what was needed to optimize the process window to a level beyond the usual OPC model work that was common on previous nodes. Reducing process variability, in particular focus variability, is a complementary need. It will be shown which improvements were needed in tooling, process control and design-mask-wafer interaction to remove all systematic yield detractors. Over the last couple of years, new scanner platforms were introduced that targeted both better productivity and better parametric performance. But this was not a smooth path: it took extra effort from the tool suppliers, together with the fab, to bring tool variability down to the necessary level. Another important topic in reducing variability is the interaction of wafer non-planarity and lithography optimization; accurate knowledge of within-die topography is essential for optimum patterning. By completing both the variability reduction work and the process window enhancement work, we were able to turn the originally marginal process budget into a robust positive budget, thus ensuring high yield and low costs.
CFD-based optimization in plastics extrusion
NASA Astrophysics Data System (ADS)
Eusterholz, Sebastian; Elgeti, Stefanie
2018-05-01
This paper presents novel ideas in the numerical design of mixing elements in single-screw extruders. The actual design process is reformulated as a shape optimization problem, given some functional, but possibly inefficient, initial design. Thereby automatic optimization can be incorporated and the design process is advanced beyond the simulation-supported, but still experience-based, approach. This paper proposes concepts to extend a method, developed and validated for die design, to the design of mixing elements. For simplicity, it focuses on single-phase flows only. The developed method conducts forward simulations to predict the quasi-steady melt behavior in the relevant part of the extruder. The result of each simulation is used in a black-box optimization procedure based on an efficient low-order parameterization of the geometry. To minimize user interaction, an objective function is formulated that quantifies the product's quality based on the forward simulation. This paper covers two aspects: (1) it reviews the set-up of the optimization framework as discussed in [1], and (2) it details the necessary extensions for the optimization of mixing elements in single-screw extruders. It concludes with a presentation of first advances in the unsteady flow simulation of a metering and mixing section with the SSMUM [2] using the Carreau material model.
Acquisition of decision making criteria: reward rate ultimately beats accuracy.
Balci, Fuat; Simen, Patrick; Niyogi, Ritwik; Saxe, Andrew; Hughes, Jessica A; Holmes, Philip; Cohen, Jonathan D
2011-02-01
Speed-accuracy trade-offs strongly influence the rate of reward that can be earned in many decision-making tasks. Previous reports suggest that human participants often adopt suboptimal speed-accuracy trade-offs in single-session, two-alternative forced-choice tasks. We investigated whether humans acquire optimal speed-accuracy trade-offs when extensively trained with multiple signal qualities. When performance was characterized in terms of decision time and accuracy, our participants eventually performed nearly optimally in the case of higher signal qualities. Rather than adopting decision criteria that were individually optimal for each signal quality, participants adopted a single threshold that was nearly optimal for most signal qualities. However, setting a single threshold for different coherence conditions resulted in only negligible decrements in the maximum possible reward rate. Finally, we tested two hypotheses regarding the possible sources of suboptimal performance: (1) favoring accuracy over reward rate, and (2) misestimating the reward rate due to timing uncertainty. Our findings provide support for both hypotheses, but also for the hypothesis that participants can learn to approach optimality. We find specifically that an accuracy bias dominates early performance but diminishes greatly with practice. The residual discrepancy between optimal and observed performance can be explained by an adaptive response to uncertainty in time estimation.
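The speed-accuracy trade-off behind this study can be made concrete with the standard closed-form expressions for an unbiased drift-diffusion decision model: raising the decision threshold lowers the error rate but lengthens decision time, so reward rate peaks at an interior threshold. The drift, noise, nondecision time, and intertrial interval below are illustrative assumptions, not the paper's fitted values.

```python
# Reward rate (correct responses per second) as a function of decision
# threshold z, using the standard closed forms for an unbiased diffusion
# with drift a and noise c. Maximizing over z gives the optimal trade-off.
import math

def reward_rate(z, a=1.0, c=1.0, t0=0.3, iti=1.0):
    er = 1.0 / (1.0 + math.exp(2.0 * a * z / c ** 2))   # error rate
    dt = (z / a) * math.tanh(a * z / c ** 2)            # mean decision time
    return (1.0 - er) / (dt + t0 + iti)

thresholds = [i * 0.05 for i in range(1, 200)]
best = max(thresholds, key=reward_rate)
print(best, reward_rate(best))
```

Very low thresholds (fast guessing) and very high thresholds (slow, near-perfect accuracy) both yield less reward per unit time than the interior optimum, which is the quantity the participants gradually approached.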
NASA Astrophysics Data System (ADS)
Najafi, Amir Abbas; Pourahmadi, Zahra
2016-04-01
Selecting the optimal combination of assets in a portfolio is one of the most important decisions in investment management. Since investment is a long-term activity, treating portfolio optimization as a single-period problem may forgo opportunities that only a long-term view can exploit. Hence, we extend the problem from a single-period to a multi-period model. We include trading costs and uncertain conditions, which makes the model more realistic and more complex, and we propose an efficient heuristic method to tackle it. The efficiency of the method is examined and compared with the results of rolling single-period optimization and the buy-and-hold method, demonstrating the superiority of the proposed approach.
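A toy example (not the paper's method) shows why transaction costs make myopic single-period rebalancing suboptimal over multiple periods: each switch pays a cost, so chasing the per-period winner can lose to simply holding. The returns, cost rate, and the "myopic" rule below are all invented for illustration, and the myopic rule is even given per-period foresight.

```python
# Two assets over three periods; switching assets costs a proportional fee.
returns = {
    "A": [0.10, -0.05, 0.10],
    "B": [0.02, 0.02, 0.02],
}
COST = 0.05  # hypothetical proportional cost charged on every switch

def wealth(path, cost=COST):
    """Final wealth of 1.0 invested along a sequence of asset choices."""
    w, held = 1.0, path[0]
    for t, asset in enumerate(path):
        if asset != held:
            w *= 1.0 - cost  # pay to switch
            held = asset
        w *= 1.0 + returns[asset][t]
    return w

# Myopic rule: each period, hold whichever asset returns more that period.
myopic = [max(returns, key=lambda a: returns[a][t]) for t in range(3)]
buy_and_hold = ["A", "A", "A"]

print(myopic, wealth(myopic))
print(buy_and_hold, wealth(buy_and_hold))
```

Here the myopic path switches twice and ends below buy-and-hold despite picking the best single-period return every time, which is the kind of multi-period effect the paper's model captures.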
Improved crystal orientation and physical properties from single-shot XFEL stills
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sauter, Nicholas K., E-mail: nksauter@lbl.gov; Hattne, Johan; Brewster, Aaron S.
X-ray free-electron laser crystallography relies on the collection of still-shot diffraction patterns. New methods are developed for optimal modeling of the crystals' orientations and mosaic block properties. X-ray diffraction patterns from still crystals are inherently difficult to process because the crystal orientation is not uniquely determined by measuring the Bragg spot positions. Only one of the three rotational degrees of freedom is directly coupled to spot positions; the other two rotations move Bragg spots in and out of the reflecting condition but do not change the direction of the diffracted rays. This hinders the ability to recover accurate structure factors from experiments that depend on single-shot exposures, such as femtosecond diffract-and-destroy protocols at X-ray free-electron lasers (XFELs). Here, additional methods are introduced to optimally model the diffraction. The best orientation is obtained by requiring, for the brightest observed spots, that each reciprocal-lattice point be placed into the exact reflecting condition implied by Bragg's law with a minimal rotation. This approach reduces the experimental uncertainties in noisy XFEL data, improving the crystallographic R factors and sharpening anomalous differences that are near the level of the noise.
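The geometric idea of a "minimal rotation into the reflecting condition" can be sketched in a 2D simplification (the published method works in 3D with mosaic models; this is our reduction, not the authors' code). A reciprocal-lattice point q is on the Ewald sphere when |q + s0| = |s0|, with s0 the incident wavevector of length 1/λ; that fixes the required angle between q and s0, and the minimal rotation is the smallest angle change reaching it.

```python
# 2D sketch: minimal in-plane rotation bringing a reciprocal-lattice point
# onto the Ewald sphere. Wavelength and the test point are illustrative.
import math

WAVELENGTH = 1.0
S0 = (0.0, -1.0 / WAVELENGTH)   # incident beam wavevector, |s0| = 1/lambda

def rotate(q, angle):
    c, s = math.cos(angle), math.sin(angle)
    return (c * q[0] - s * q[1], s * q[0] + c * q[1])

def minimal_rotation(q):
    """Smallest-magnitude rotation (about z) putting q on the Ewald sphere."""
    qn = math.hypot(q[0], q[1])
    # Reflecting condition |q + s0| = |s0| fixes the angle phi between q
    # and s0: cos(phi) = -|q| * lambda / 2 (Bragg's law in reciprocal space).
    phi = math.acos(-qn * WAVELENGTH / 2.0)
    # Current signed angle of q measured from s0.
    alpha = math.atan2(S0[0] * q[1] - S0[1] * q[0],
                       q[0] * S0[0] + q[1] * S0[1])
    candidates = (phi - alpha, -phi - alpha)
    wrapped = [math.atan2(math.sin(d), math.cos(d)) for d in candidates]
    return min(wrapped, key=abs)

delta = minimal_rotation((1.0, 0.0))
print(delta)  # pi/6 for this geometry
```

After applying the returned rotation, the point satisfies the reflecting condition exactly; in the paper this is done per bright spot to pin down the two orientation angles that spot positions alone leave free.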
A preparation for studying electrical stimulation of the retina in vivo in rat
NASA Astrophysics Data System (ADS)
Baig-Silva, M. S.; Hathcock, C. D.; Hetling, J. R.
2005-03-01
A remaining challenge to the development of electronic prostheses for vision is improving the effectiveness of retinal stimulation. Electrode design and stimulus parameters need to be optimized such that the neural output from the retina conveys information to the mind's eye that aids the patient in interpreting his or her environment. This optimization will require a detailed understanding of the response of the retina to electrical stimulation. The identity and response characteristics of the cellular targets of stimulation need to be defined and evaluated. Described here is an in vivo preparation for studying electrical stimulation of the retina in rat at the cellular level. The use of rat makes available a number of well-described models of retinal disease that motivate prosthesis development. Artificial stimulation can be investigated by adapting techniques traditionally employed to study the response of the retina to photic stimuli, such as recording at the cornea, single-cell recording, and pharmacological dissection of the response. Pilot studies include amplitude-intensity response data for subretinal and transretinal stimulation paradigms recorded in wild-type rats and a transgenic rat model of autosomal dominant retinitis pigmentosa. The ability to record single-unit ganglion cell activity in vivo is also demonstrated.
Dobryakov, A L; Kovalenko, S A; Weigel, A; Pérez-Lustres, J L; Lange, J; Müller, A; Ernsting, N P
2010-11-01
A setup for pump/supercontinuum-probe spectroscopy is described which (i) is optimized to cancel fluctuations of the probe light by single-shot referencing, and (ii) extends the probe range into the near-UV (1000-270 nm). Reflective optics allow a 50 μm spot size in the sample and upon entry into two separate spectrographs. The correlation γ(same) between sample and reference readings of the probe light level at every pixel exceeds 0.99, compared to γ(consec) < 0.92 reported for consecutive referencing. Statistical analysis provides the confidence interval of the induced optical density, ΔOD. For demonstration we first examine a dye (Hoechst 33258) bound in the minor groove of double-stranded DNA. A weak 1.1 ps spectral oscillation in the fluorescence region, assigned to DNA breathing, is shown to be significant. A second example concerns the weak vibrational structure around t = 0 which reflects stimulated Raman processes. With 1% fluctuations of probe power, the baseline noise for a transient absorption spectrum becomes 25 μOD rms in 1 s at 1 kHz, allowing resonance Raman spectra of flavin adenine dinucleotide in the S0 and S1 states to be recorded.
Mudge, Elizabeth M; Liu, Ying; Lund, Jensen A; Brown, Paula N
2016-11-01
Suitably validated analytical methods that can be used to quantify medicinally active phytochemicals in natural health products are required by regulators, manufacturers, and consumers. Hawthorn (Crataegus) is a botanical ingredient in natural health products used for the treatment of cardiovascular disorders. A method for the quantitation of vitexin-2″-O-rhamnoside, vitexin, isovitexin, rutin, and hyperoside in hawthorn leaf and flower raw materials and finished products was optimized and validated according to AOAC International guidelines. A two-level partial factorial study was used to guide the optimization of the sample preparation. The optimal conditions were found to be a 60-minute extraction using 50:48:2 methanol:water:acetic acid followed by a 25-minute separation on a reversed-phase liquid chromatography column with ultraviolet absorbance detection. The single-laboratory validation study evaluated method selectivity, accuracy, repeatability, linearity, limit of quantitation, and limit of detection. Individual flavonoid content ranged from 0.05 mg/g to 17.5 mg/g in solid dosage forms and raw materials. Repeatability ranged from 0.7 to 11.7% relative standard deviation, corresponding to HorRat values from 0.2 to 1.6. Calibration curves for each flavonoid were linear within the analytical ranges, with correlation coefficients greater than 99.9%. Herein is the first report of a validated method that is fit for the purpose of quantifying five major phytochemical marker compounds in both raw materials and finished products made from North American (Crataegus douglasii) and European (Crataegus monogyna and Crataegus laevigata) hawthorn species. The method includes optimized extraction of samples without a prolonged drying process and reduced liquid chromatography separation time. Georg Thieme Verlag KG Stuttgart · New York.
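The HorRat values cited above are ratios of observed repeatability to the RSD predicted by the Horwitz function, PRSD(%) = 2·C^(-0.1505) with C the mass-fraction concentration. A minimal sketch, with an example concentration taken from the 17.5 mg/g upper analyte level and a hypothetical observed RSD:

```python
# Horwitz ratio (HorRat): observed RSD divided by the Horwitz-predicted RSD.
def predicted_rsd(c):
    """Horwitz predicted RSD (%) at mass-fraction concentration c."""
    return 2.0 * c ** -0.1505

def horrat(observed_rsd, c):
    return observed_rsd / predicted_rsd(c)

# Example: analyte at 17.5 mg/g = 0.0175 mass fraction, observed RSD 3% (hypothetical).
print(predicted_rsd(0.0175), horrat(3.0, 0.0175))
```

Values between roughly 0.5 and 2 are conventionally taken as acceptable precision, which is why the reported 0.2-1.6 range supports the method's repeatability.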
Genetic algorithm in the structural design of Cooke triplet lenses
NASA Astrophysics Data System (ADS)
Hazra, Lakshminarayan; Banerjee, Saswatee
1999-08-01
This paper is in tune with our efforts to develop a systematic method for multicomponent lens design. Our aim is to find a suitable starting point in the final configuration space, so that popular local search methods like damped least squares (DLS) may directly lead to a useful solution. For 'ab initio' design problems, a thin-lens layout specifying the powers of the individual components and the intercomponent separations is worked out analytically. Requirements of central aberration targets for the individual components, in order to satisfy the prespecified primary aberration targets for the overall system, are then determined by nonlinear optimization. The next step involves structural design of the individual components by optimization techniques. This general method may be adapted for the design of triplets and their derivatives. However, for the thin-lens design of a Cooke triplet composed of three airspaced singlets, the two optimization steps mentioned above may be combined into a single optimization procedure. The optimum configuration for each of the singlets, catering to the required Gaussian specification and primary aberration targets for the Cooke triplet, is determined by an application of a genetic algorithm (GA). Our implementation of this algorithm is based on simulations of some complex tools of natural evolution, like selection, crossover and mutation. Our version of the GA may or may not converge to a unique optimum, depending on some of the algorithm-specific parameter values. With our algorithm, practically useful solutions are always available, although convergence to a global optimum cannot be guaranteed. This is perfectly in keeping with our need to allow 'floating' of aberration targets at the subproblem level. Some numerical results dealing with our preliminary investigations on this problem are presented.
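The evolutionary operators named above (selection, crossover, mutation) can be sketched in a minimal real-encoded GA. The merit function here is a toy stand-in for the thin-lens aberration targets, not actual lens physics, and all parameter values are illustrative.

```python
# Minimal real-encoded GA: truncation selection, arithmetic (blend)
# crossover, and Gaussian mutation, minimizing a toy "aberration residual".
import random

random.seed(1)

def merit(x):
    """Toy residual: squared distance from hypothetical aberration targets."""
    targets = (0.5, -0.2, 0.1)
    return sum((xi - ti) ** 2 for xi, ti in zip(x, targets))

def evolve(pop_size=40, dims=3, gens=60, pmut=0.2):
    pop = [[random.uniform(-1, 1) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=merit)
        parents = pop[: pop_size // 2]           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()                   # blend (arithmetic) crossover
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]
            if random.random() < pmut:            # Gaussian mutation
                i = random.randrange(dims)
                child[i] += random.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    return min(pop, key=merit)

best = evolve()
print(best, merit(best))
```

As the abstract notes for the real problem, such a GA reliably yields practically useful solutions without guaranteeing the global optimum; the mutation rate and population size play the role of the "algorithm-specific parameter values" mentioned there.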
Graph SLAM correction for single scanner MLS forest data under boreal forest canopy
NASA Astrophysics Data System (ADS)
Kukko, Antero; Kaijaluoto, Risto; Kaartinen, Harri; Lehtola, Ville V.; Jaakkola, Anttoni; Hyyppä, Juha
2017-10-01
Mobile laser scanning (MLS) provides kinematic means to collect three-dimensional data from the surroundings for various mapping and environmental analysis purposes. Vehicle-based MLS has been used for road and urban asset surveys for about a decade. The equipment to derive the trajectory information for point cloud generation from the laser data is almost without exception based on the GNSS-IMU (Global Navigation Satellite System - Inertial Measurement Unit) technique. That is because of the ability of GNSS to maintain global accuracy, and of the IMU to produce the attitude information needed to orient the laser scanning and imaging sensor data. However, there are known challenges in maintaining accurate positioning when the GNSS signal is weak or even absent over long periods of time. The duration of the signal loss affects the severity of degradation of the positioning solution, depending on the quality and performance level of the IMU in use. The situation could be improved to a certain extent with higher-performance IMUs, but the increased system expense makes such an approach unsustainable in general. Another way to tackle the problem is to attach additional sensors to the system that observe features from the environment and solve for short-term system movements accurately enough to keep the IMU solution from drifting. This results in more complex system integration, with a need for more calibration and synchronization of multiple sensors into an operational approach. In this paper we study the operation of an ATV (all-terrain vehicle) mounted, GNSS-IMU based, single-scanner MLS system in boreal forest conditions. The data generated by the RoamerR2 system is targeted at generating 3D terrain and tree maps for optimizing harvester operations and for forest inventory purposes at the individual tree level.
We investigate a process flow and propose a graph-optimization-based method which uses data from a single-scanner MLS to correct the post-processed GNSS-IMU trajectory for positional drift under mature boreal forest canopy conditions. The results show that we can improve the internal conformity of the data significantly, from 0.7 m to 1 cm, based on tree stem feature locations. When the optimization result is compared to a reference at plot level, we reach a mean error of 6 cm in absolute tree stem locations. The approach can be generalized to any MLS point cloud data, and as such provides a remarkable contribution to harnessing MLS for practical forestry and for high-precision terrain and structural modeling in GNSS-obstructed environments.
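The pose-graph correction idea can be illustrated with a 1D toy (not the authors' implementation; all numbers invented): odometry constraints say each step is about +1.0 but drift accumulates, while a single loop-closure constraint, standing in for a re-observed tree stem, pins the final pose. Least squares over the graph spreads the drift back along the trajectory.

```python
# 1D pose graph: minimize odometry residuals plus a weighted loop closure,
# with pose x0 anchored at 0, by solving the normal equations directly.
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

u = [1.02, 0.98, 1.03]     # odometry: each step should be ~1.0 but drifts
closure, w = 3.00, 100.0   # loop closure: pose 3 observed at +3.0, high weight
# Normal equations of  (x1-u0)^2 + (x2-x1-u1)^2 + (x3-x2-u2)^2 + w(x3-c)^2
A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 1.0 + w]]
b = [u[0] - u[1], u[1] - u[2], u[2] + w * closure]
x1, x2, x3 = gauss_solve(A, b)

dead_reckoned = sum(u)     # 3.03: the drifted estimate before correction
print(dead_reckoned, x3)
```

The optimized final pose lands essentially on the loop closure while each odometry step absorbs a small share of the residual, which is the same mechanism the paper uses at scale with stem features as constraints.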
Andrews, J R
1981-01-01
Two methods dominate cancer treatment: first, the traditional best-practice, individualized treatment method, and second, the a priori determined decision method of the interinstitutional, cooperative clinical trial. In the first, choices are infinite and can be made at the time of treatment; in the second, choices are finite and are made in advance of treatment on a random basis. Neither method systematically selects, identifies, or formalizes the optimum level of effect in the treatment chosen. Of the two, it can be argued that the first, other things being equal, is more likely to select the optimum treatment. Determining the level of effect for the optimization of cancer treatment requires the generation of dose-response relationships for both benefit and risk, and the introduction of benefit and risk considerations and judgements. The clinical trial, as presently constituted, does not yield this kind of information, it being, generally, of the binary yes-or-no, better-or-worse type. The best-practice, individualized treatment method can yield, when adequately documented, both a range of dose-response relationships and a variety of benefit and risk considerations. The presentation is limited to a single modality of cancer treatment, radiation therapy, but an analogy with other modalities of cancer treatment will be inferred. Criteria for optimization are developed, and graphic means for its identification and formalization are demonstrated with examples taken from the radiotherapy literature. The general problem of optimization theory and practice is discussed; the necessity for its exploration in relation to the increasing complexity of cancer treatment is developed; and recommendations for clinical research are made, including a proposal for the support of clinics as an alternative to the support of programs.
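The benefit-versus-risk dose-response idea can be sketched numerically: with a sigmoid benefit curve (tumour control) and a sigmoid risk curve (complication) shifted to higher dose, the probability of benefit without complication peaks at an intermediate dose. Both curves and all parameters below are made-up illustrations, not clinical data.

```python
# Toy dose optimization: maximize benefit * (1 - risk) over a dose grid,
# with logistic dose-response curves for benefit and risk.
import math

def logistic(d, d50, k):
    return 1.0 / (1.0 + math.exp(-k * (d - d50)))

def uncomplicated_control(d):
    """P(benefit without complication) at dose d (hypothetical curves)."""
    benefit = logistic(d, 50.0, 0.2)   # tumour-control-like curve
    risk = logistic(d, 70.0, 0.15)     # complication-like curve
    return benefit * (1.0 - risk)

doses = [i * 0.5 for i in range(0, 201)]   # 0 to 100 (arbitrary dose units)
best_dose = max(doses, key=uncomplicated_control)
print(best_dose, uncomplicated_control(best_dose))
```

The optimum sits between the two half-maximal doses: low enough to limit risk, high enough to secure benefit, which is exactly the trade-off the author argues binary trial outcomes cannot reveal.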
Optimal percolation on multiplex networks.
Osat, Saeed; Faqeeh, Ali; Radicchi, Filippo
2017-11-16
Optimal percolation is the problem of finding the minimal set of nodes whose removal from a network fragments the system into non-extensive disconnected clusters. The solution to this problem is important for strategies of immunization in disease spreading, and influence maximization in opinion dynamics. Optimal percolation has received considerable attention in the context of isolated networks. However, its generalization to multiplex networks has not yet been considered. Here we show that approximating the solution of the optimal percolation problem on a multiplex network with solutions valid for single-layer networks extracted from the multiplex may have serious consequences in the characterization of the true robustness of the system. We reach this conclusion by extending many of the methods for finding approximate solutions of the optimal percolation problem from single-layer to multiplex networks, and performing a systematic analysis on synthetic and real-world multiplex networks.
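The paper's warning can be reproduced on a toy two-layer multiplex (graphs invented for illustration): a removal set chosen from one layer's degrees barely harms the mutually connected giant component, while the true optimum, found by brute force, destroys it. The mutual component is computed here by the common fixed-point iteration of intersecting layer-wise giant components, which is our simplification of the general definition.

```python
# Toy multiplex attack: single-layer heuristic vs brute-force optimum.
from itertools import combinations

LAYER1 = [(0, 1), (0, 2), (0, 3), (0, 4), (4, 5), (5, 6), (6, 7)]
LAYER2 = [(7, 6), (7, 5), (7, 4), (7, 3), (3, 2), (2, 1), (1, 0)]
NODES = set(range(8))

def largest_cc(edges, active):
    """Largest connected component among the active nodes (BFS)."""
    adj = {n: set() for n in active}
    for a, b in edges:
        if a in active and b in active:
            adj[a].add(b)
            adj[b].add(a)
    best, seen = set(), set()
    for start in active:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            seen.add(n)
            stack.extend(adj[n] - comp)
        if len(comp) > len(best):
            best = comp
    return best

def mcgc_size(removed):
    """Mutually connected giant component size after removing nodes."""
    active = NODES - set(removed)
    while True:
        new = largest_cc(LAYER1, active) & largest_cc(LAYER2, active)
        if new == active:
            return len(active)
        active = new

deg1 = {n: sum(n in e for e in LAYER1) for n in NODES}
single_layer_pick = sorted(range(8), key=deg1.get, reverse=True)[:2]
best_pair = min(combinations(range(8), 2), key=mcgc_size)
print(single_layer_pick, mcgc_size(single_layer_pick))
print(best_pair, mcgc_size(best_pair))
```

Removing layer 1's two highest-degree nodes leaves a mutual component of size 3, while the brute-force pair shatters it completely, so the single-layer view overestimates robustness, which is the paper's central point.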
Zhao, Yong-Ming; Yang, Jian-Ming; Liu, Ying-Hui; Zhao, Ming; Wang, Jin
2018-02-01
The aim of this study was to optimize the extraction process of polysaccharides from the fruiting bodies of Lentinus edodes and investigate their anti-hepatitis B virus activity. The extraction parameters, including ultrasonic power (240-320 W), extraction temperature (40-60°C) and extraction time (15-25 min), were optimized using a three-variable, three-level Box-Behnken design based on single-factor experiments. Data analysis showed that the optimal conditions for extracting LEPs were an extraction temperature of 45°C, an extraction time of 21 min and an ultrasonic power of 290 W. Under these optimal conditions, the experimental yield of LEPs was 9.75%, a 1.62-fold increase compared with conventional hot water extraction (HWE). In addition, the crude polysaccharides were purified to obtain two fractions (LEP-1 and LEP-2). Chemical analysis showed that these components were rich in glucose, arabinose and mannose. Furthermore, HepG2.2.15 cells were used as in vitro models to evaluate their anti-hepatitis B virus (HBV) activity. The results suggest that LEPs possess potent anti-HBV activity in vitro. Copyright © 2017 Elsevier B.V. All rights reserved.
Asadzadeh, Farrokh; Maleki-Kaklar, Mahdi; Soiltanalinejad, Nooshin; Shabani, Farzin
2018-02-08
Citric acid (CA) was evaluated for its efficiency as a biodegradable chelating agent in removing zinc (Zn) from heavily contaminated soil using a soil washing process. To determine preliminary ranges of the variables in the washing process, single-factor experiments were carried out with different CA concentrations, pH levels and washing times. Optimization of the batch washing conditions followed, using a response surface methodology (RSM) approach based on a central composite design (CCD). CCD-predicted values and experimental results showed strong agreement, with an R² value of 0.966. A maximum removal of 92.8% occurred with a CA concentration of 167.6 mM, pH of 4.43, and washing time of 30 min as the optimal variable values. A leaching column experiment followed, to examine the efficiency of the optimum conditions established by the CCD model. A comparison of the two soil washing techniques indicated that the removal efficiency of the column experiment (85.8%) closely matched that of the batch experiment (92.8%). The methodology supporting this experimentation for optimizing Zn removal may be useful in the design of protocols for practical engineering soil decontamination applications.
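The core response-surface step, fitting a quadratic to designed experimental points and taking its stationary point as the predicted optimum, can be sketched for one factor at three levels. The pH levels and removal percentages below are hypothetical illustration data, not the study's measurements (the study's reported optimum pH was 4.43).

```python
# One-factor response-surface sketch: parabola through three design points,
# optimum taken at the vertex (Newton divided-difference form).
def quadratic_vertex(xs, ys):
    """Vertex x of the parabola through three (x, y) points."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    c1 = (y1 - y0) / (x1 - x0)
    c2 = ((y2 - y1) / (x2 - x1) - c1) / (x2 - x0)
    # p(x) = y0 + c1(x-x0) + c2(x-x0)(x-x1); p'(x) = 0 at the vertex.
    return (x0 + x1) / 2.0 - c1 / (2.0 * c2)

ph_levels = [3.0, 4.5, 6.0]
removal = [80.0, 92.0, 84.0]   # hypothetical Zn removal (%) at each pH
print(quadratic_vertex(ph_levels, removal))
```

A full CCD fits the same kind of second-order model in several factors simultaneously (with interaction terms); the one-factor case shows why an optimum interior to the tested range can be predicted between design points.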
FEC decoder design optimization for mobile satellite communications
NASA Technical Reports Server (NTRS)
Roy, Ashim; Lewi, Leng
1990-01-01
A new telecommunications service for location determination via satellite is being proposed for the continental USA and Europe, which provides users with the capability to find the location of, and communicate from, a moving vehicle to a central hub and vice versa. This communications system is expected to operate in an extremely noisy channel in the presence of fading. In order to achieve high levels of data integrity, it is essential to employ forward error correcting (FEC) encoding and decoding techniques in such mobile satellite systems. A constraint length k = 7 FEC decoder has been implemented in a single chip for such systems. The single chip implementation of the maximum likelihood decoder helps to minimize the cost, size, and power consumption, and improves the bit error rate (BER) performance of the mobile earth terminal (MET).
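Maximum-likelihood decoding of a convolutional code, the task the single-chip k = 7 decoder performs, can be illustrated at toy scale. The sketch below uses the classic constraint-length k = 3, rate-1/2 code (generators 7 and 5 octal) rather than the paper's k = 7 code, and decodes by brute-force nearest-codeword search, which is ML but only feasible for short blocks (a Viterbi decoder computes the same answer efficiently).

```python
# Rate-1/2, k=3 convolutional encoder plus brute-force ML decoding by
# minimum Hamming distance over all candidate messages.
from itertools import product

G = (0b111, 0b101)  # generator polynomials (7, 5 octal)

def encode(bits):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111          # 3-bit shift register
        out += [bin(state & g).count("1") % 2 for g in G]
    return out

def ml_decode(received, n):
    """Pick the n-bit message whose codeword is nearest in Hamming distance."""
    return min(
        (list(msg) for msg in product((0, 1), repeat=n)),
        key=lambda m: sum(a != b for a, b in zip(encode(m), received)),
    )

msg = [1, 0, 1, 1, 0]
code = encode(msg)
corrupted = code[:]
corrupted[3] ^= 1          # flip one channel bit (a "noisy channel")
print(ml_decode(corrupted, len(msg)))  # recovers the original message
```

The single flipped bit is corrected because the true codeword remains the closest one, which is exactly the redundancy-versus-noise trade that motivates FEC in a fading mobile satellite channel.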
Propeller performance analysis and multidisciplinary optimization using a genetic algorithm
NASA Astrophysics Data System (ADS)
Burger, Christoph
A propeller performance analysis program has been developed and integrated into a genetic algorithm for design optimization. The design tool produces optimal propeller geometries for a given goal, which includes performance and/or acoustic signature. A vortex lattice model is used for the propeller performance analysis, and a subsonic compact source model is used for the acoustic signature determination. Compressibility effects are taken into account with the implementation of Prandtl-Glauert domain stretching. Viscous effects are considered with a simple Reynolds-number-based model to account for the effects of viscosity in the spanwise direction. An empirical flow separation model developed from experimental lift and drag coefficient data of a NACA 0012 airfoil is included. The propeller geometry is generated using a recently introduced Class/Shape function methodology to allow for efficient use of a wide design space. Optimizing the angle of attack, the chord, the sweep and the local airfoil sections produced blades with favorable tradeoffs between single-point and multiple-point optimizations of propeller performance and acoustic noise signatures. Optimization runs with a binary-encoded IMPROVE(c) Genetic Algorithm (GA) and a real-encoded GA both exhibited some premature convergence. The newly developed real-encoded GA was used to obtain the majority of the results, as it generally showed better convergence characteristics than the binary-encoded GA. The optimization trade-offs show that single-point optimized propellers have favorable performance, but their circulation distributions were less smooth when compared to dual-point or multiobjective optimizations. Some of the single-point optimizations generated propellers with proplets, which show a loading shift toward the blade tip region.
When noise is included in the objective functions, some propellers show a circulation shift to the inboard sections of the propeller as well as a reduction in propeller diameter. In addition, the propeller number was increased in some optimizations to reduce the acoustic blade signature.
Kharmanda, Ghias; Kharma, Mohamed-Yaser
2017-06-01
The objective of this work is to integrate structural optimization and reliability concepts into the mini-plate fixation strategy used in symphysis mandibular fractures. The structural reliability levels are then estimated considering a single failure mode and multiple failure modes. A 3-dimensional finite element model is developed in order to evaluate the ability to reduce the negative effects of stabilizing the fracture. A topology optimization process is applied in the conceptual design stage to predict possible fixation layouts. In the detailed design stage, suitable mini-plates are selected taking into account the resulting topology and different anatomical considerations. Several muscle forces are considered in order to obtain realistic predictions. Since some muscles can be cut or harmed during surgery and cannot operate at their maximum capacity, there is a strong motivation to introduce loading uncertainties in order to obtain reliable designs. The structural reliability analysis is carried out for a single failure mode and for multiple failure modes. The results are validated against a clinical case of a male patient with a symphysis fracture in which, although an upper plate fixation with four holes was used, only two screws were applied in order to protect an adjacent vital structure; this did not affect the stability of the fracture. The proposed strategy to optimize bone plates leads to fewer complications and second surgeries, less patient discomfort, and a shorter time of healing.
Feedback-tuned, noise resilient gates for encoded spin qubits
NASA Astrophysics Data System (ADS)
Bluhm, Hendrik
Spin-1/2 particles form native two-level systems and thus lend themselves to a natural qubit implementation. However, encoding a single qubit in several spins entails benefits, such as reducing the resources necessary for qubit control and protection from certain decoherence channels. While several varieties of such encoded spin qubits have been implemented, accurate control remains challenging, and leakage out of the subspace of valid qubit states is a potential issue. Optimal performance typically requires large pulse amplitudes for fast control, which is prone to systematic errors and prohibits standard control approaches based on Rabi flopping. Furthermore, the exchange interaction typically used to electrically manipulate encoded spin qubits is inherently sensitive to charge noise. I will discuss all-electrical, high-fidelity single-qubit operations for a spin qubit encoded in two electrons in a GaAs double quantum dot. Starting from a set of numerically optimized control pulses, we employ an iterative tuning procedure based on measured error syndromes to remove systematic errors. Randomized benchmarking yields an average gate fidelity exceeding 98% and a leakage rate into invalid states of 0.2%. These gates exhibit a certain degree of resilience to both slow charge and nuclear spin fluctuations due to dynamical correction analogous to a spin echo. Furthermore, the numerical optimization minimizes the impact of fast charge noise. Both types of noise make relevant contributions to gate errors. The general approach is also adaptable to other qubit encodings and exchange-based two-qubit gates.
Liu, Yong; Zhou, Lin; Sun, Kewei; ...
2018-02-16
Here, we present a thorough study of doping-dependent magnetic hysteresis and relaxation characteristics in single crystals of (Ba1-xKx)Fe2As2 (0.18 ≤ x ≤ 1). The critical current density Jc reaches a maximum in the underdoped sample x = 0.26 and then decreases in the optimally doped and overdoped samples. Meanwhile, the magnetic relaxation rate S rapidly increases and the flux creep activation barrier U0 sharply decreases in the overdoped sample x = 0.70. These results suggest that vortex pinning is very strong in the underdoped regime, but it is greatly reduced in the optimally doped and overdoped regime. Transmission electron microscope (TEM) measurements reveal the existence of dislocations and inclusions in all three studied samples x = 0.38, 0.46, and 0.65. An investigation of the paramagnetic Meissner effect (PME) suggests that spatial variations in Tc become small in the samples x = 0.43 and 0.46, slightly above the optimal doping level. Our results support that two types of pinning sources dominate the (Ba1-xKx)Fe2As2 crystals: (i) strong δl pinning, which results from fluctuations in the mean free path l, together with δTc pinning from spatial variations in Tc, in the underdoped regime, and (ii) weak δTc pinning in the optimally doped and overdoped regime.
Optimal control of information epidemics modeled as Maki Thompson rumors
NASA Astrophysics Data System (ADS)
Kandhway, Kundan; Kuri, Joy
2014-12-01
We model the spread of information in a homogeneously mixed population using the Maki Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration to model practical situations in which the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns.
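As a rough illustration of the dynamics being controlled, the mean-field Maki-Thompson equations with a simple constant recruitment control can be integrated directly. The sketch below uses plain Euler integration; the rate constant k, the control level u, and the initial fractions are illustrative assumptions, not values from the paper. The control term converts ignorants and stiflers into spreaders, as the abstract describes:

```python
def simulate_maki_thompson(k=0.5, u=0.1, x0=0.98, y0=0.02, z0=0.0,
                           t_end=30.0, dt=0.01):
    """Euler-integrate the mean-field Maki-Thompson rumor model with a
    constant campaigning control u that recruits ignorants and stiflers
    into spreaders.  x, y, z are population fractions of ignorants,
    spreaders and stiflers; the three derivatives sum to zero, so the
    total population is conserved."""
    x, y, z = x0, y0, z0
    for _ in range(int(t_end / dt)):
        dx = -k * x * y - u * x                      # ignorants converted
        dy = k * x * y - k * y * (y + z) + u * (x + z)  # spreaders gained/stifled
        dz = k * y * (y + z) - u * z                 # stiflers re-recruited
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x, y, z
```

Comparing runs with u = 0.1 and u = 0.0 shows the control depleting the ignorant pool faster; the actual paper optimizes a time-varying u(t) under a budget constraint rather than fixing it.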
NASA Astrophysics Data System (ADS)
Lu, Qianbo; Bai, Jian; Wang, Kaiwei; Lou, Shuqi; Jiao, Xufen; Han, Dandan; Yang, Guoguang
2016-08-01
The ultrahigh static displacement-acceleration sensitivity of a mechanical sensing chip is essential for an ultrasensitive accelerometer. In this paper, an optimal design implemented in a single-axis MOEMS accelerometer, consisting of a grating interferometry cavity and a micromachined sensing chip, is presented. The micromachined sensing chip is composed of a proof mass along with its mechanical cantilever suspension and substrate. The dimensional parameters of the sensing chip, including the length, width, thickness and position of the cantilevers, are evaluated and optimized both analytically and by finite-element-method (FEM) simulation to yield an unprecedented acceleration-displacement sensitivity. Compared with one of the most sensitive single-axis MOEMS accelerometers reported in the literature, the optimal mechanical design yields a profound sensitivity improvement for an equal footprint area, specifically, a 200% improvement in displacement-acceleration sensitivity with moderate resonant frequency and dynamic range. The modified design was microfabricated, packaged with the grating interferometry cavity and tested. The experimental results demonstrate that the MOEMS accelerometer with the modified design can achieve an acceleration-displacement sensitivity of about 150 μm/g and an acceleration sensitivity of greater than 1500 V/g, which validates the effectiveness of the optimal design.
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
Dynamics and Control of a Quadrotor with Active Geometric Morphing
NASA Astrophysics Data System (ADS)
Wallace, Dustin A.
Quadrotors are manufactured in a wide variety of shapes, sizes, and performance levels to fulfill a multitude of roles. Robodub Inc. has patented a morphing quadrotor which will allow active reconfiguration between various shapes for performance optimization across a wider spectrum of roles. The dynamics of the system are studied and modeled using Newtonian Mechanics. Controls are developed and simulated using both Linear Quadratic and Numerical Nonlinear Optimal control for a symmetric simplification of the system dynamics. Various unique vehicle capabilities are investigated, including novel single-throttle flight control using symmetric geometric morphing, as well as recovery from motor loss by reconfiguring into a trirotor configuration. The system dynamics were found to be complex and highly nonlinear. All attempted control strategies resulted in controllability, suggesting that further research into each may lead to multiple viable control strategies for a physical prototype.
Radiation Mitigation and Power Optimization Design Tools for Reconfigurable Hardware in Orbit
NASA Technical Reports Server (NTRS)
French, Matthew; Graham, Paul; Wirthlin, Michael; Wang, Li; Larchev, Gregory
2005-01-01
The Reconfigurable Hardware in Orbit (RHinO) project is focused on creating a set of design tools that facilitate and automate design techniques for reconfigurable computing in space, using SRAM-based field-programmable gate array (FPGA) technology. In the second year of the project, design tools that leverage an established FPGA design environment have been created to visualize and analyze an FPGA circuit for radiation weaknesses and power inefficiencies. For radiation, a single-event upset (SEU) emulator, a persistence analysis tool, and a half-latch removal tool for Xilinx/Virtex-II devices have been created. Research is underway on a persistence mitigation tool and on multiple-bit upset (MBU) studies. For power, synthesis-level dynamic power visualization and analysis tools have been completed. Power optimization tools are under development, and preliminary test results are positive.
Yang, Jin-ling; He, Hui-xia; Zhu, Hui-xin; Cheng, Ke-di; Zhu, Ping
2009-01-01
The technology of liquid fermentation for producing the recombinant analgesic peptide BmK AngM1 from Buthus martensii Karsch in Pichia pastoris was studied by single-factor and orthogonal tests. The results showed that the optimal culture conditions were as follows: 1.2% methanol, 0.6% casamino acids, initial pH 6.0, and three times the basal inoculation volume. Under these culture conditions, the expression level of recombinant BmK AngM1 in Pichia pastoris was above 500 mg·L(-1), more than three times that of the control. The study lays a foundation for the large-scale preparation of BmK AngM1 to meet the needs of theoretical research on BmK AngM1 and the development of new medicines.
Optical realization of optimal symmetric real state quantum cloning machine
NASA Astrophysics Data System (ADS)
Hu, Gui-Yu; Zhang, Wen-Hai; Ye, Liu
2010-01-01
We present a uniform linear optical scheme to experimentally implement the optimal 1→2 symmetric and optimal 1→3 symmetric economical real-state quantum cloning machines for the polarization state of a single photon. The scheme requires single-photon sources and a two-photon polarization entangled state as input states. It also involves linear optical elements and three-photon coincidence detection. We then consider a realistic realization of the scheme using parametric down-conversion as the photon resource. It is shown that, under certain conditions, the scheme is feasible with current experimental technology.
Replica Analysis for Portfolio Optimization with Single-Factor Model
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2017-06-01
In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report describes the current state of development of methods for calculating optimal orbital transfers with large numbers of burns. The first method reported on is the homotopy-motivated, so-called direction correction method. So far this method has been partially tested with one solver; the final step has yet to be implemented. The second is the patched transfer method. This method is rooted in simplifying approximations made to the original optimal control problem. The transfer is broken up into single-burn segments; each single burn is solved as a predictor step, and the whole problem is then solved with a corrector step.
Guevara-Torres, A.; Joseph, A.; Schallek, J. B.
2016-01-01
Measuring blood cell dynamics within the capillaries of the living eye provides crucial information regarding the health of the microvascular network. To date, the study of single blood cell movement in this network has been obscured by optical aberrations, hindered by weak optical contrast, and often required injection of exogenous fluorescent dyes to perform measurements. Here we present a new strategy to non-invasively image single blood cells in the living mouse eye without contrast agents. Eye aberrations were corrected with an adaptive optics camera coupled with a fast, 15 kHz scanned beam orthogonal to a capillary of interest. Blood cells were imaged as they flowed past a near infrared imaging beam to which the eye is relatively insensitive. Optical contrast of cells was optimized using differential scatter of blood cells in the split-detector imaging configuration. Combined, these strategies provide label-free, non-invasive imaging of blood cells in the retina as they travel in single file in capillaries, enabling determination of cell flux, morphology, class, velocity, and rheology at the single cell level. PMID:27867728
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
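A minimal sketch of this kind of multi-mode model problem and GA: the landscape below is built from Gaussian hills, and the optimizer is a plain real-coded GA with tournament selection, blend crossover, Gaussian mutation, and one-elite survival. The function names, operators, and parameter values are illustrative assumptions, not those of the paper:

```python
import math
import random

def hills(x, peaks):
    """Multi-modal test landscape: the max over Gaussian hills,
    each peak given as (center, height, width)."""
    return max(h * math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / w)
               for c, h, w in peaks)

def ga_maximize(fitness, n_genes, pop_size=40, gens=60, p_mut=0.15, sigma=0.1):
    """Plain real-coded GA on [0, 1]^n_genes: tournament selection,
    blend crossover, Gaussian mutation, one elite carried over."""
    pop = [[random.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = max(pop, key=fitness)

        def pick():  # size-2 tournament
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b

        children = [elite]
        while len(children) < pop_size:
            p, q = pick(), pick()
            w = random.random()
            child = [w * pi + (1 - w) * qi for pi, qi in zip(p, q)]
            if random.random() < p_mut:
                g = random.randrange(n_genes)
                child[g] = min(1.0, max(0.0, child[g] + random.gauss(0, sigma)))
            children.append(child)
        pop = children
    return max(pop, key=fitness)
```

With two peaks of unequal height, the GA should settle near the taller one; runs like this make the paper's point that harder, higher-dimensional multi-mode spaces simply cost more function evaluations.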
Integrated aerodynamic/dynamic optimization of helicopter rotor blades
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Walsh, Joanne L.; Riley, Michael F.
1989-01-01
An integrated aerodynamic/dynamic optimization procedure is used to minimize blade weight and 4-per-rev vertical hub shear for a rotor blade in forward flight. The coupling of aerodynamics and dynamics is accomplished through the inclusion of airloads which vary with the design variables during the optimization process. Both single and multiple objective functions are used in the optimization formulation. The Global Criteria Approach is used to formulate the multiple-objective optimization, and results are compared with those obtained by using single-objective-function formulations. Constraints are imposed on natural frequencies, autorotational inertia, and centrifugal stress. The program CAMRAD is used for the blade aerodynamic and dynamic analyses, and the program CONMIN is used for the optimization. Since the spanwise and azimuthal variations of loading are responsible for most rotor vibration and noise, the vertical airload distributions on the blade, before and after optimization, are compared. The total power required by the rotor to produce the same amount of thrust for a given area is also calculated before and after optimization. Results indicate that integrated optimization can significantly reduce the blade weight, the hub shear, the amplitude of the vertical airload distributions on the blade, and the total power required by the rotor.
A distributed system for fast alignment of next-generation sequencing data.
Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D
2010-12-01
We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
Additive Manufacturing in Production: A Study Case Applying Technical Requirements
NASA Astrophysics Data System (ADS)
Ituarte, Iñigo Flores; Coatanea, Eric; Salmi, Mika; Tuomi, Jukka; Partanen, Jouni
Additive manufacturing (AM) is expanding manufacturing capabilities. However, the quality of AM-produced parts depends on a number of machine, geometry and process parameters. The variability of these parameters affects manufacturing drastically, and therefore standardized processes and harmonized methodologies need to be developed to characterize the technology for end-use applications and enable it for manufacturing. This research proposes a composite methodology integrating Taguchi Design of Experiments, multi-objective optimization and statistical process control to optimize the manufacturing process and fulfil multiple requirements imposed on an arbitrary geometry. The proposed methodology aims to characterize AM technology as a function of manufacturing process variables, as well as to perform a comparative assessment of three AM technologies (Selective Laser Sintering, Laser Stereolithography and Polyjet). Results indicate that only one machine, laser-based Stereolithography, could simultaneously fulfil the macro- and micro-level geometrical requirements, but its mechanical properties were not at the required level. Future research will study a single AM system at a time to characterize the machine's technical capabilities and stimulate pre-normative initiatives for end-use applications of the technology.
NASA Astrophysics Data System (ADS)
Wahl, Michael; Rahn, Hans-Jürgen; Gregor, Ingo; Erdmann, Rainer; Enderlein, Jörg
2007-03-01
Time-correlated single photon counting is a powerful method for sensitive time-resolved fluorescence measurements down to the single molecule level. The method is based on the precisely timed registration of single photons of a fluorescence signal. Historically, its primary goal was the determination of fluorescence lifetimes upon optical excitation by a short light pulse. This goal is still important today and therefore has a strong influence on instrument design. However, modifications and extensions of the early designs allow for the recovery of much more information from the detected photons and enable entirely new applications. Here, we present a new instrument that captures single photon events on multiple synchronized channels with picosecond resolution and over virtually unlimited time spans. This is achieved by means of crystal-locked time digitizers with high resolution and very short dead time. Subsequent event processing in programmable logic permits classical histogramming as well as time tagging of individual photons and their streaming to the host computer. Through the latter, any algorithms and methods for the analysis of fluorescence dynamics can be implemented either in real time or offline. Instrument test results from single molecule applications will be presented.
Zhang, Zheng; Milias-Argeitis, Andreas; Heinemann, Matthias
2018-02-01
Recent work has shown that metabolism can differ between individual bacterial cells in an otherwise isogenic population. To investigate such heterogeneity, experimental methods to zoom into the metabolism of individual cells are required. To this end, the autofluorescence of the redox cofactors NADH and NADPH offers great potential for single-cell dynamic NAD(P)H measurements. However, NAD(P)H excitation requires UV light, which can cause cell damage. In this work, we developed a method for time-lapse NAD(P)H imaging in single E. coli cells. Our method combines a setup with reduced background emission, UV-enhanced microscopy equipment and optimized exposure settings, overall generating acceptable NAD(P)H signals from single cells with minimal negative effect on cell growth. Through different experiments in which we perturb E. coli's redox metabolism, we demonstrate that the acquired fluorescence signal indeed corresponds to NAD(P)H. Using this new method, we report for the first time that intracellular NAD(P)H levels oscillate along the bacterial cell division cycle. The developed method for dynamic measurement of NAD(P)H in single bacterial cells will be an important tool for zooming into the metabolism of individual cells.
Modeling genome coverage in single-cell sequencing
Daley, Timothy; Smith, Andrew D.
2014-01-01
Motivation: Single-cell DNA sequencing is necessary for examining genetic variation at the cellular level, which remains hidden in bulk sequencing experiments. But because they begin with such small amounts of starting material, the amount of information that is obtained from single-cell sequencing experiment is highly sensitive to the choice of protocol employed and variability in library preparation. In particular, the fraction of the genome represented in single-cell sequencing libraries exhibits extreme variability due to quantitative biases in amplification and loss of genetic material. Results: We propose a method to predict the genome coverage of a deep sequencing experiment using information from an initial shallow sequencing experiment mapped to a reference genome. The observed coverage statistics are used in a non-parametric empirical Bayes Poisson model to estimate the gain in coverage from deeper sequencing. This approach allows researchers to know statistical features of deep sequencing experiments without actually sequencing deeply, providing a basis for optimizing and comparing single-cell sequencing protocols or screening libraries. Availability and implementation: The method is available as part of the preseq software package. Source code is available at http://smithlabresearch.org/preseq. Contact: andrewds@usc.edu Supplementary information: Supplementary material is available at Bioinformatics online. PMID:25107873
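The flavor of this kind of extrapolation can be illustrated with the classic Good-Toulmin alternating series over the count-frequency histogram, which underlies this family of estimators. The preseq method itself uses a more sophisticated (rational-function accelerated) scheme to extrapolate much further; the sketch below is the unaccelerated series, which is only valid up to a doubling of depth, and the function name and toy input are mine:

```python
from collections import Counter

def good_toulmin_extra_coverage(per_base_counts, fold=2.0):
    """Estimate how many currently-uncovered genome positions would become
    covered if sequencing depth were multiplied by `fold`, via the
    Good-Toulmin alternating series over the count-frequency histogram
    n_j = number of positions observed exactly j times.  The raw series
    only converges for fold <= 2 (preseq accelerates it beyond that)."""
    freqs = Counter(c for c in per_base_counts if c > 0)
    t = fold - 1.0
    return sum((-1) ** (j + 1) * (t ** j) * n_j for j, n_j in freqs.items())
```

For doubling (fold = 2) the estimate reduces to n1 - n2 + n3 - ..., i.e. singleton-covered positions dominate the predicted gain, which is why libraries with many once-covered bases extrapolate well.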
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and is a kind of swarm intelligence optimization. In this study we propose an improved marriage in honey bees optimization (IMBO), adding a Lévy flight algorithm for the queen's mating flight and a neighborhood search for improving the worker drones. The IMBO algorithm's performance and success are tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms.
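The Lévy-flight ingredient can be sketched with Mantegna's algorithm, a standard way to draw heavy-tailed step lengths with tail exponent beta. This is a generic implementation, not the authors' code, and the default beta = 1.5 is a common choice rather than a value from the paper:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-flight step via Mantegna's algorithm: the ratio
    u / |v|^(1/beta) with u ~ N(0, sigma_u^2), v ~ N(0, 1) has a
    power-law step-length tail ~ |s|^-(1+beta), so occasional very
    long jumps escape local optima during the queen's mating flight."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

In an MBO-style search, a queen candidate would be perturbed as `new = old + step_size * levy_step()` per coordinate; most moves stay local while rare long flights explore globally.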
Brown, Joshua B; Gestring, Mark L; Leeper, Christine M; Sperry, Jason L; Peitzman, Andrew B; Billiar, Timothy R; Gaines, Barbara A
2017-06-01
The Injury Severity Score (ISS) is the most commonly used injury scoring system in trauma research and benchmarking. An ISS greater than 15 conventionally defines severe injury; however, no studies evaluate whether ISS performs similarly between adults and children. Our objective was to evaluate the ISS and the Abbreviated Injury Scale (AIS) to predict mortality and define optimal thresholds of severe injury in pediatric trauma. Patients from the Pennsylvania trauma registry, 2000-2013, were included. Children were defined as younger than 16 years. Logistic regression predicted mortality from ISS for children and adults. The optimal ISS cutoff for mortality that maximized diagnostic characteristics was determined in children. Regression also evaluated the association between mortality and maximum AIS in each body region, controlling for age, mechanism, and nonaccidental trauma. Analysis was performed for single-system and multisystem injuries. Sensitivity analyses with alternative outcomes were performed. Included were 352,127 adults and 50,579 children. Children had similar predicted mortality at ISS of 25 as adults at ISS of 15 (5%). The optimal ISS cutoff in children was ISS greater than 25, with a positive predictive value of 19% and a negative predictive value of 99%, compared to a positive predictive value of 7% and a negative predictive value of 99% for ISS greater than 15 to predict mortality. In single-system-injured children, mortality was associated with head (odds ratio, 4.80; 95% confidence interval, 2.61-8.84; p < 0.01) and chest AIS (odds ratio, 3.55; 95% confidence interval, 1.81-6.97; p < 0.01), but not abdomen, face, neck, spine, or extremity AIS (p > 0.05). For multisystem injury, all body-region AIS scores were associated with mortality except the extremities. Sensitivity analysis demonstrated that ISS greater than 23 (to predict need for full trauma activation) and ISS greater than 26 (to predict impaired functional independence) were optimal thresholds.
An ISS greater than 25 may be a more appropriate definition of severe injury in children. Pattern of injury is important, as only head and chest injury drive mortality in single-system-injured children. These findings should be considered in benchmarking and performance improvement efforts. Epidemiologic study, level III.
Scheduling time-critical graphics on multiple processors
NASA Technical Reports Server (NTRS)
Meyer, Tom W.; Hughes, John F.
1995-01-01
This paper describes an algorithm for the scheduling of time-critical rendering and computation tasks on single- and multiple-processor architectures, with minimal pipelining. It was developed to manage scientific visualization scenes consisting of hundreds of objects, each of which can be computed and displayed at thousands of possible resolution levels. The algorithm generates the time-critical schedule using progressive-refinement techniques; it always returns a feasible schedule and, when allowed to run to completion, produces a near-optimal schedule which takes advantage of almost the entire multiple-processor system.
High-level expression of a synthetic gene encoding a sweet protein, monellin, in Escherichia coli.
Chen, Zhongjun; Cai, Heng; Lu, Fuping; Du, Lianxiang
2005-11-01
The expression in E. coli of a synthetic gene encoding monellin, a sweet protein, under the control of the phage T7 promoter is described. The single-chain monellin gene was designed based on the biased codons of E. coli so as to optimize its expression. Monellin was produced and accounted for 45% of total soluble protein. It was purified to yield 43 mg protein per gram dry cell weight. The purity of the recombinant protein was confirmed by SDS-PAGE.
Chapwanya, A; Clegg, T; Stanley, P; Vaughan, L
2008-09-15
Determination of the optimal breeding time in bitches earmarked for a single insemination is based on measurement of the progesterone (P4) concentration in peripheral blood serum or plasma. In this paper a comparison is made between a radioimmunoassay (RIA) and a chemiluminescent assay (Immulite) for determination of P4 concentrations in the bitch. The Immulite assay is shown to be an accurate and reliable method for serum or plasma P4 measurement. It compares favourably with other methods in terms of turn-around time, cost and accessibility for veterinarians in practice.
NASA Astrophysics Data System (ADS)
Voigtländer, Bert; Coenen, Peter; Cherepanov, Vasily; Borgens, Peter; Duden, Thomas; Tautz, F. Stefan
2018-01-01
The construction and vibrational performance of a low-vibration laboratory for microscopy applications is discussed, comprising a 100-ton floating foundation supported by passive pneumatic isolators (air springs), which themselves rest on a 200-ton solid base plate. Optimization of the air-spring system led to a vibration level on the floating floor below that induced by an acceleration of 10 ng for most frequencies. Additional acoustic and electromagnetic isolation is accomplished by a room-in-room concept.
Computer Simulations of Ion Transport in Polymer Electrolyte Membranes.
Mogurampelly, Santosh; Borodin, Oleg; Ganesan, Venkat
2016-06-07
Understanding the mechanisms and optimizing ion transport in polymer membranes have been the subject of active research for more than three decades. We present an overview of the progress and challenges involved with the modeling and simulation aspects of the ion transport properties of polymer membranes. We are concerned mainly with atomistic and coarser level simulation studies and discuss some salient work in the context of pure binary and single ion conducting polymer electrolytes, polymer nanocomposites, block copolymers, and ionic liquid-based hybrid electrolytes. We conclude with an outlook highlighting future directions.
Multiobjective Optimization Using a Pareto Differential Evolution Approach
NASA Technical Reports Server (NTRS)
Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
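Two building blocks of such a Pareto-based DE, dominance comparison and DE/rand/1/bin trial-vector generation, can be sketched as follows. This is a generic sketch, not the authors' implementation; in a Pareto-based variant a trial vector typically replaces its target only if the target does not dominate it:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Filter a list of objective vectors down to its Pareto front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

def de_trial(pop, i, f=0.8, cr=0.9, rng=random):
    """Classic DE/rand/1/bin trial vector for target index i: add a scaled
    difference of two random members to a third, then binomial crossover
    with the target (jrand guarantees at least one mutant gene survives)."""
    r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
    mutant = [a + f * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]
    jrand = rng.randrange(len(pop[i]))
    return [m if (rng.random() < cr or j == jrand) else t
            for j, (m, t) in enumerate(zip(mutant, pop[i]))]
```

A full multiobjective loop would evaluate each trial on all objectives, apply the dominance test against its target, and periodically prune the archive with `nondominated`.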
Exploring efficacy of residential energy efficiency programs in Florida
NASA Astrophysics Data System (ADS)
Taylor, Nicholas Wade
Electric utilities, government agencies, and private interests in the U.S. have invested, and continue to invest, substantial resources in the pursuit of energy efficiency and conservation through demand-side management (DSM) programs. Program investments, and the demand for impact evaluations that accompany them, are projected to grow in coming years due to increased pressure from state-level energy regulation, the costs and challenges of building additional production capacity, fuel costs, and potential carbon or renewable energy regulation. This dissertation provides detailed analyses of ex-post energy savings from energy efficiency programs in three key sectors of residential buildings: new single-family detached homes; retrofits to existing single-family detached homes; and retrofits to existing multifamily housing units. Each of the energy efficiency programs analyzed resulted in statistically significant energy savings at the full program group level, yet savings for individual participants and participant subgroups were highly variable. Even though savings estimates were statistically greater than zero, those energy savings did not always meet expectations. Results also show that high variability in energy savings among participant groups or subgroups can negatively impact overall program performance and can undermine marketing efforts for future participation. Design, implementation, and continued support of conservation programs based solely on deemed or projected savings are inherently counter to the pursuit of meaningful energy conservation and reductions in greenhouse gas emissions. To fully understand and optimize program impacts, consistent and robust measurement and verification protocols must be instituted in the design phase and maintained over time. Furthermore, marketing for program participation must target those who have the greatest opportunity for savings.
In most utility territories it is not possible to gain access to the type of large scale datasets that would facilitate robust program analysis. Along with measuring and optimizing energy conservation programs, utilities should provide public access to historical consumption data. Open access to data, program optimization, consistent measurement and verification and transparency in reported savings are essential to reducing energy use and its associated environmental impacts.
Production of knock-in mice in a single generation from embryonic stem cells.
Ukai, Hideki; Kiyonari, Hiroshi; Ueda, Hiroki R
2017-12-01
The system-level identification and analysis of molecular networks in mammals can be accelerated by 'next-generation' genetics, defined as genetics that does not require crossing of multiple generations of animals in order to achieve the desired genetic makeup. We have established a highly efficient procedure for producing knock-in (KI) mice within a single generation, by optimizing the genome-editing protocol for KI embryonic stem (ES) cells and the protocol for the generation of fully ES-cell-derived mice (ES mice). Using this protocol, the production of chimeric mice is eliminated, and, therefore, there is no requirement for the crossing of chimeric mice to produce mice that carry the KI gene in all cells of the body. Our procedure thus shortens the time required to produce KI ES mice from about a year to ∼3 months. Various kinds of KI ES mice can be produced with a minimized amount of work, facilitating the elucidation of organism-level phenomena using a systems biology approach. In this report, we describe the basic technologies and protocols for this procedure, and discuss the current challenges for next-generation mammalian genetics in organism-level systems biology studies.
Memoryless cooperative graph search based on the simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Hou, Jian; Yan, Gang-Feng; Fan, Zhen
2011-04-01
We study the problem of reaching a globally optimal segment in a graph-like environment with a single autonomous mobile agent or a group of them. First, two efficient simulated-annealing-like algorithms are given for a single agent to solve the problem in a partially known environment and an unknown environment, respectively. We show that under both proposed control strategies the agent eventually converges to a globally optimal segment with probability 1. Second, we use multi-agent search to simultaneously reduce computational complexity and accelerate convergence, building on the single-agent algorithms. By exploiting graph partitioning, a gossip-consensus-based scheme is presented to update the key parameter, the radius of the graph, ensuring that the agents spend much less time finding a globally optimal segment.
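The single-agent strategy described above (move to a neighboring segment, occasionally accepting worse segments with a temperature-dependent probability) can be sketched as follows; the graph, costs, cooling schedule, and parameter values are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random

def anneal_graph_search(costs, neighbors, start, steps=5000, t0=1.0, alpha=0.999, seed=0):
    """Simulated-annealing-like walk over the nodes (segments) of a graph.

    costs: dict node -> segment cost (lower is better)
    neighbors: dict node -> list of adjacent nodes
    Worse moves are accepted with probability exp(-delta/T); T decays geometrically.
    """
    rng = random.Random(seed)
    current, best = start, start
    t = t0
    for _ in range(steps):
        candidate = rng.choice(neighbors[current])
        delta = costs[candidate] - costs[current]
        # always accept improving moves; accept worsening moves with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            current = candidate
        if costs[current] < costs[best]:
            best = current
        t *= alpha  # geometric cooling
    return best
```

On a small ring graph this walk reliably locates the minimum-cost node; the convergence-with-probability-1 guarantee claimed in the paper depends on a suitably slow cooling schedule.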
Optimal radiotherapy dose schedules under parametric uncertainty
NASA Astrophysics Data System (ADS)
Badri, Hamidreza; Watanabe, Yoichi; Leder, Kevin
2016-01-01
We consider the effects of parameter uncertainty on the optimal radiation schedule in the context of the linear-quadratic model. Our interest arises from the observation that if inter-patient variability in normal and tumor tissue radiosensitivity or sparing factor of the organs-at-risk (OAR) are not accounted for during radiation scheduling, the performance of the therapy may be strongly degraded or the OAR may receive a substantially larger dose than the allowable threshold. This paper proposes a stochastic radiation scheduling concept to incorporate inter-patient variability into the scheduling optimization problem. Our method is based on a probabilistic approach, where the model parameters are given by a set of random variables. Our probabilistic formulation ensures that our constraints are satisfied with a given probability, and that our objective function achieves a desired level with a stated probability. We used a variable transformation to reduce the resulting optimization problem to two dimensions. We showed that the optimal solution lies on the boundary of the feasible region and we implemented a branch and bound algorithm to find the global optimal solution. We demonstrated how the configuration of optimal schedules in the presence of uncertainty compares to optimal schedules in the absence of uncertainty (conventional schedule). We observed that in order to protect against the possibility of the model parameters falling into a region where the conventional schedule is no longer feasible, it is required to avoid extremal solutions, i.e. a single large dose or very large total dose delivered over a long period. Finally, we performed numerical experiments in the setting of head and neck tumors including several normal tissues to reveal the effect of parameter uncertainty on optimal schedules and to evaluate the sensitivity of the solutions to the choice of key model parameters.
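The chance-constrained idea above, that an organ-at-risk dose constraint must hold with a stated probability when the linear-quadratic parameters are random, can be illustrated with a simple Monte Carlo feasibility check; the distributions and constants below are invented for illustration and are not taken from the paper.

```python
import random

def constraint_satisfaction_prob(d, n, bed_max, samples=10000, seed=1):
    """Monte Carlo estimate of P[OAR BED <= bed_max] under the LQ model.

    BED = n * delta * d * (1 + delta * d / ab), where the sparing factor
    delta and the OAR alpha/beta ratio ab are sampled from assumed
    (illustrative) Gaussian distributions.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        delta = rng.gauss(0.7, 0.05)  # sparing factor: assumed distribution
        ab = rng.gauss(3.0, 0.3)      # OAR alpha/beta [Gy]: assumed distribution
        bed = n * delta * d * (1 + delta * d / ab)
        hits += bed <= bed_max
    return hits / samples
```

A schedule of n fractions of dose d would then be declared feasible at, say, the 0.95 level when the returned probability is at least 0.95; larger doses per fraction lower this probability, which is why extremal single-large-dose solutions are avoided under uncertainty.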
NASA Astrophysics Data System (ADS)
Gao, F.; Song, X. H.; Zhang, Y.; Li, J. F.; Zhao, S. S.; Ma, W. Q.; Jia, Z. Y.
2017-05-01
In order to reduce the adverse effects of uncertainty on optimal dispatch in an active distribution network, an optimal dispatch model based on chance-constrained programming is proposed in this paper. In this model, the active and reactive power of distributed generation (DG) can be dispatched with the aim of reducing operating cost. The effect of the operation strategy on cost is reflected in the objective, which contains the cost of network loss, DG curtailment, DG reactive power ancillary service, and power quality compensation. At the same time, the probabilistic constraints reflect the degree of operational risk. The optimal dispatch model is then simplified into a series of single-stage models, which avoids a large variable dimension and improves convergence speed. The single-stage model is solved using a combination of particle swarm optimization (PSO) and the point estimate method (PEM). Finally, the proposed optimal dispatch model and method are verified on the IEEE 33-bus test system.
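A minimal particle swarm optimizer of the kind used to solve the single-stage model might look like the following sketch; the inertia weight and acceleration coefficients are common textbook defaults, and the quadratic test function is a stand-in for the dispatch objective (which in the paper is evaluated together with the point estimate method for the probabilistic constraints).

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over box bounds [(lo, hi), ...]."""
    rng = random.Random(seed)
    dim = len(bounds)
    xs = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]          # personal best positions
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                # clamp position to the box
                xs[i][d] = min(max(xs[i][d] + vs[i][d], bounds[d][0]), bounds[d][1])
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = xs[i][:], fi
    return gbest, gbest_f
```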
NASA Technical Reports Server (NTRS)
Cunefare, K. A.; Koopmann, G. H.
1991-01-01
This paper presents the theoretical development of an approach to active noise control (ANC) applicable to three-dimensional radiators. The active noise control technique, termed ANC Optimization Analysis, is based on minimizing the total radiated power by adding secondary acoustic sources on the primary noise source. ANC Optimization Analysis determines the optimum magnitude and phase at which to drive the secondary control sources in order to achieve the best possible reduction in the total radiated power from the noise source/control source combination. For example, ANC Optimization Analysis predicts a 20 dB reduction in the total power radiated from a sphere of radius a at a dimensionless wavenumber ka of 0.125, for a single control source representing 2.5 percent of the total area of the sphere. ANC Optimization Analysis is based on a boundary element formulation of the Helmholtz integral equation; thus, the optimization analysis applies to a single frequency, while multiple frequencies can be treated through repeated analyses.
Feng, Qiang; Chen, Yiran; Sun, Bo; Li, Songjie
2014-01-01
An optimization method for condition-based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into a combinatorial optimization problem over single-aircraft strategies. The remaining useful life (RUL) distribution of the key line-replaceable module (LRM) is transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The costs and risks for a mission are then calculated from the health status matrix and the maintenance matrix. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that the method can optimize and control an aircraft fleet oriented to mission success.
Gonnissen, J; De Backer, A; den Dekker, A J; Sijbers, J; Van Aert, S
2016-11-01
In the present paper, the optimal detector design is investigated for both detecting and locating light atoms from high-resolution scanning transmission electron microscopy (HR STEM) images. The principles of detection theory are used to quantify the probability of error for the detection of light atoms from HR STEM images. To determine the optimal experiment design for locating light atoms, use is made of the so-called Cramér-Rao lower bound (CRLB). We investigate whether a single optimal design can be found for both the detection and the location problem. Furthermore, the incoming electron dose is optimised for both research goals, and it is shown that picometre-range precision is feasible for the estimation of atom positions when an appropriate incoming electron dose is used under the optimal detector settings for detecting light atoms. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model reduces the prediction risk of a single model and improves prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was mapped precisely using kriging methods. When the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combinatorial model, and the reference value anywhere in China can be obtained from the geographical distribution map.
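One simple, commonly used way to derive combination weights from competing forecasting models is inverse mean-squared-error weighting; this is a hedged stand-in for the paper's optimal weighting scheme, whose exact form is not given in the abstract.

```python
def inverse_mse_weights(predictions, actual):
    """Combine several models' predictions with weights proportional to the
    inverse of each model's mean squared error on a validation set.

    predictions: list of prediction lists, one per model (assumed nonzero error)
    actual: list of observed values
    Returns (weights, combined_predictions); weights sum to 1.
    """
    mses = []
    for preds in predictions:
        mses.append(sum((p - a) ** 2 for p, a in zip(preds, actual)) / len(actual))
    inv = [1.0 / m for m in mses]   # fails if a model is exactly error-free
    s = sum(inv)
    weights = [v / s for v in inv]
    combined = [sum(w * preds[i] for w, preds in zip(weights, predictions))
                for i in range(len(actual))]
    return weights, combined
```

The better a model fits, the larger its weight, so the combined forecast is pulled toward the more accurate single models while still pooling information from all of them.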
Optimization of a Turboprop UAV for Maximum Loiter and Specific Power Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Dinc, Ali
2016-09-01
In this study, a custom code was developed for the optimization of selected parameters of a turboprop engine for an unmanned aerial vehicle (UAV) employing an elitist genetic algorithm. First, the code performed preliminary sizing of the UAV and its turboprop engine for a given mission profile. Second, single- and multi-objective optimizations were carried out on selected engine parameters to maximize the loiter duration of the UAV, the specific power of the engine, or both. In the first single-objective case, UAV loiter time was improved by 17.5% over the baseline within the given bounds, or constraints, on compressor pressure ratio and burner exit temperature. In the second case, specific power was enhanced by 12.3% over the baseline. In the multi-objective case, where the two previous objectives are considered together, loiter time and specific power were increased by 14.2% and 9.7% over the baseline, respectively, for the same constraints.
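An elitist genetic algorithm of the general kind named above can be sketched as follows; the operators (arithmetic crossover, uniform mutation) and the one-dimensional test objective are illustrative choices, not the paper's engine model.

```python
import random

def elitist_ga(f, bounds, pop_size=30, gens=100, elite=2, mut=0.1, seed=0):
    """Minimal elitist genetic algorithm maximizing f over box bounds.

    The top `elite` individuals survive unchanged into each new generation,
    which guarantees the best fitness never decreases.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)
        nxt = [ind[:] for ind in pop[:elite]]            # elitism
        while len(nxt) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)  # select from the fitter half
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # arithmetic crossover
            for d in range(dim):
                if rng.random() < mut:                   # uniform mutation
                    child[d] = rng.uniform(*bounds[d])
                child[d] = min(max(child[d], bounds[d][0]), bounds[d][1])
            nxt.append(child)
        pop = nxt
    return max(pop, key=f)
```

In the paper's setting, the chromosome would hold the engine parameters (e.g. compressor pressure ratio and burner exit temperature) and f would return loiter time, specific power, or a combination of the two.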
Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J.-W.
2001-01-01
The formulation and implementation of an optimization method called Simultaneous Aerodynamic Analysis and Design Optimization (SAADO) are extended from single-discipline analysis (aerodynamics only) to multidisciplinary analysis - in this case, static aero-structural analysis - and applied to a simple 3-D wing problem. The method aims to reduce the computational expense incurred in performing shape optimization using state-of-the-art Computational Fluid Dynamics (CFD) flow analysis, Finite Element Method (FEM) structural analysis, and sensitivity analysis tools. Results for this small problem show that the method reaches the same local optimum as conventional optimization. However, unlike its application to the wing (single-discipline analysis), the method, as implemented here, may not show a significant reduction in computational cost. Similar reductions were seen in the two-design-variable (DV) problem results but not in the 8-DV results given here.
Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih
2015-11-01
This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of both biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and the percentage of other contents are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that applying the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity produced at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
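The "predict then optimize" pattern can be sketched as follows: a small multi-layer perceptron forward pass stands in for the trained prediction model, and a simple random search over the input bounds stands in for the particle swarm step (a full PSO could be substituted); all weights and bounds here are placeholders, not values from the study.

```python
import math
import random

def mlp_forward(x, w1, b1, w2, b2):
    """Single-hidden-layer perceptron forward pass with tanh hidden units.

    w1: hidden-layer weight rows, b1: hidden biases,
    w2: output weights, b2: output bias.
    """
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, h)) + b2

def random_search_max(predict, bounds, n=5000, seed=0):
    """Maximize a trained surrogate over box-bounded inputs by random search
    (the paper uses particle swarm optimization; random search stands in here)."""
    rng = random.Random(seed)
    best_x, best_y = None, float("-inf")
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        y = predict(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

In the study's setting, `predict` would be the trained network mapping process inputs to methane percentage or biogas production, and the returned `best_x` would be the recommended operating point.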
Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors
NASA Technical Reports Server (NTRS)
Palumbo, D. L.; Padula, S. L.
1997-01-01
Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as the basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization; unconstrained optimizations selected actuators that require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.
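A tabu search over fixed-size actuator subsets, in the spirit described above, can be sketched as follows; the additive score is a toy stand-in for the noise-reduction prediction derived from the measured transfer functions, and the force constraint is omitted.

```python
import random

def tabu_subset_search(score, n_items, k, iters=200, tabu_len=10, seed=0):
    """Tabu search for a k-of-n subset maximizing score(frozenset).

    Neighborhood: swap one selected item for one unselected item.
    Recently swapped-in items are tabu for `tabu_len` moves, which
    discourages cycling back to previously visited subsets.
    """
    rng = random.Random(seed)
    current = set(rng.sample(range(n_items), k))
    best, best_s = set(current), score(frozenset(current))
    tabu = []
    for _ in range(iters):
        moves = []
        for out in current:
            for inn in set(range(n_items)) - current:
                if inn in tabu:
                    continue
                cand = (current - {out}) | {inn}
                moves.append((score(frozenset(cand)), cand, inn))
        if not moves:
            break
        s, cand, inn = max(moves, key=lambda m: m[0])  # best non-tabu move, even if worse
        current = cand
        tabu.append(inn)
        tabu = tabu[-tabu_len:]
        if s > best_s:
            best, best_s = set(cand), s
    return best, best_s
```

With the additive toy score the search simply climbs to the top-k items; with a realistic multi-frequency noise-reduction predictor, the tabu list is what lets the search escape local optima.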
Fast Raman single bacteria identification: toward a routine in-vitro diagnostic
NASA Astrophysics Data System (ADS)
Douet, Alice; Josso, Quentin; Marchant, Adrien; Dutertre, Bertrand; Filiputti, Delphine; Novelli-Rousseau, Armelle; Espagnon, Isabelle; Kloster-Landsberg, Meike; Mallard, Frédéric; Perraut, Francois
2016-04-01
Timely microbiological results are essential to allow clinicians to optimize the prescribed treatment, ideally at the initial stage of the therapeutic process. Several approaches, such as molecular biology, have been proposed to solve this issue and to provide the microbiological result in a few hours directly from the sample. However fast and sensitive, those methods are not based on phenotypic information, which presents several drawbacks and limitations. Optical methods have the advantage of allowing single-cell sensitivity and of probing the phenotype of the measured cells. Here we present a process and a prototype that allow automated single-bacterium phenotypic analysis. This prototype is based on the use of digital in-line holography techniques combined with a specially designed Raman spectrometer using a dedicated device to capture bacteria. The localization of a single cell is finely determined by using holograms and a proper propagation kernel. Holographic images are also used to analyze bacteria in the sample, to sort potential pathogens from flora-dwelling species or other biological particles. This accurate localization enables the use of a small confocal volume adapted to the measurement of a single cell. Along with the confocal volume adaptation, we also modified every component of the spectrometer to optimize single-bacterium Raman measurements. This optimization allowed us to acquire informative single-cell spectra using an integration time of only 0.5 s. Identification results obtained with this prototype are presented, based on a database of 65,144 Raman spectra acquired automatically on 48 bacterial strains belonging to 8 species.
Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment
Wan, Long
2014-01-01
We investigate a common due-date assignment scheduling problem with a variable maintenance on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties of an optimal solution for our problem. For a special case with identical jobs we propose an optimal polynomial-time algorithm, followed by a numerical example. PMID:25147861
Optimization of knowledge sharing through multi-forum using cloud computing architecture
NASA Astrophysics Data System (ADS)
Madapusi Vasudevan, Sriram; Sankaran, Srivatsan; Muthuswamy, Shanmugasundaram; Ram, N. Sankar
2011-12-01
Knowledge sharing is done through various knowledge-sharing forums, which requires multiple logins through multiple browser instances. Here a single multi-forum knowledge-sharing concept is introduced that requires only one login session, allowing the user to connect to multiple forums and display the data in a single browser window. A few optimization techniques to speed up access time using a cloud computing architecture are also introduced.
Kudchadkar, Sapna R; Beers, M Claire; Ascenzi, Judith A; Jastaniah, Ebaa; Punjabi, Naresh M
2016-09-01
The architectural design of the pediatric intensive care unit may play a major role in optimizing the environment to promote patients' sleep while improving stress levels and the work experience of critical care nurses. To examine changes in nurses' perceptions of the environment of a pediatric critical care unit for promotion of patients' sleep and the nurses' work experience after a transition from multipatient rooms to single-patient rooms. A cross-sectional survey of nurses was conducted before and after the move to a new hospital building in which all rooms in the pediatric critical care unit were single-patient rooms. Nurses reported that compared with multipatient rooms, single-patient private rooms were more conducive to patients sleeping well at night and promoted a more normal sleep-wake cycle (P < .001). Monitors/alarms and staff conversations were the biggest factors that adversely influenced the environment for sleep promotion in both settings. Nurses were less annoyed by noise in single-patient rooms (33%) than in multipatient rooms (79%; P < .001) and reported improved exposure to sunlight. Use of single-patient rooms rather than multipatient rooms improved nurses' perceptions of the pediatric intensive care unit environment for promoting patients' sleep and the nurses' own work experience. ©2016 American Association of Critical-Care Nurses.
Bilateral step length estimation using a single inertial measurement unit attached to the pelvis
2012-01-01
Background: The estimation of spatio-temporal gait parameters is of primary importance in both physical activity monitoring and clinical contexts. A method for estimating step length bilaterally, during level walking, using a single inertial measurement unit (IMU) attached to the pelvis is proposed. In contrast to previous studies, based either on a simplified representation of the human gait mechanics or on a general linear regressive model, the proposed method estimates the step length directly from the integration of the acceleration along the direction of progression. Methods: The IMU was placed at pelvis level, fixed to the subject's belt on the right side. The method was validated using measurements from a stereo-photogrammetric (SP) system as a gold standard on nine subjects walking ten laps along a closed-loop track of about 25 m, varying their speed. For each loop, only the IMU data recorded in a 4 m long portion of the track included in the calibrated volume of the SP system were used for the analysis. The method takes advantage of the cyclic nature of gait and requires an accurate determination of the foot contact instants. A combination of a Kalman filter and an optimally filtered direct and reverse integration applied to the IMU signals formed a single novel method (Kalman and Optimally filtered Step length Estimation - KOSE method). A correction of the IMU displacement due to the pelvic rotation occurring in gait was implemented to estimate the step length and the traversed distance. Results: The step length was estimated for all subjects with less than 3% error. Traversed distance was assessed with less than 2% error. Conclusions: The proposed method provided estimates of step length and traversed distance more accurate than any other method applied to measurements from a single IMU reported in the literature.
In healthy subjects, it is reasonable to expect that errors in traversed distance estimation during daily activity monitoring would be of the same order of magnitude as those presented. PMID:22316235
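The direct-and-reverse integration at the heart of the KOSE method can be illustrated with a simplified sketch: integrate the acceleration forward assuming zero velocity at the start of the gait cycle, integrate it backward assuming zero velocity at the end, and blend the two with linearly varying weights so that a constant accelerometer bias cancels. The Kalman filtering, optimal filtering, and pelvic-rotation correction of the actual method are omitted here.

```python
def direct_reverse_displacement(acc, dt):
    """Blend forward and time-reversed integrations of acceleration over one
    gait cycle to suppress integration drift (simplified direct-and-reverse
    integration). Assumes zero velocity at both cycle boundaries.
    Returns displacement along the direction of progression.
    """
    n = len(acc)
    # forward velocity, v(0) = 0
    vf = [0.0]
    for a in acc[:-1]:
        vf.append(vf[-1] + a * dt)
    # reverse velocity, v(end) = 0, integrated backwards in time
    vr = [0.0]
    for a in reversed(acc[1:]):
        vr.append(vr[-1] - a * dt)
    vr.reverse()
    # linear blend: trust the forward pass early, the reverse pass late
    v = [(1 - i / (n - 1)) * vf[i] + (i / (n - 1)) * vr[i] for i in range(n)]
    return sum(vi * dt for vi in v)
```

With this linear blend, a constant bias added to the accelerometer signal contributes equal and opposite drift to the two passes and cancels exactly, which is the basic reason drift-corrected integration can recover step length accurately.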
AC signal characterization for optimization of a CMOS single-electron pump
NASA Astrophysics Data System (ADS)
Murray, Roy; Perron, Justin K.; Stewart, M. D., Jr.; Zimmerman, Neil M.
2018-02-01
Pumping single electrons at a set rate is being widely pursued as an electrical current standard. Semiconductor charge pumps have been operated in a variety of modes, including single-gate ratchets, a variety of 2-gate ratchet pumps, and 2-gate turnstiles. Whether pumping with one or two AC signals, lower error rates can result from better knowledge of the properties of the AC signal at the device. In this work, we operated a CMOS single-electron pump with a 2-gate ratchet style measurement and used the results to characterize and optimize our two AC signals. Fitting these data at various frequencies revealed both a difference in signal path length and attenuation between our two AC lines. Using these data, we corrected for the difference in signal path length and attenuation by applying an offset in both the phase and the amplitude at the signal generator. Operating the device as a turnstile with the optimized parameters determined from the 2-gate ratchet measurement led to much flatter, more robust charge-pumping plateaus. This method was useful in tuning our device for optimal charge pumping, and may prove useful to the semiconductor quantum dot community for determining signal attenuation and path differences at the device.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, D; Salmon, H; Pavan, G
2014-06-01
Purpose: Evaluate and compare retrospective prostate treatment plans using the volumetric modulated arc therapy (RapidArc™ - Varian) technique with single or double arcs at the COI Group. Methods: Ten patients with prostate and seminal vesicle neoplasia were replanned with a target treatment volume and a prescribed dose of 78 Gy. A baseline plan using a single arc was developed for each case, seeking the best result on the PTV while minimizing the dose to organs at risk (OAR). Maintaining the same optimization objectives used in the baseline plan, two copies, optimizing single and double arcs respectively, were developed. The plans were performed with a 10 MV photon beam on Eclipse software, version 11.0, using a Trilogy linear accelerator with a Millennium HD120 multileaf collimator. Comparisons on the PTV were performed for maximum, minimum, and mean dose and dose gradient, as well as the number of monitor units, treatment time, and homogeneity and conformity indices. OAR dose constraints were evaluated for both optimizations. Results: Regarding PTV coverage, the differences in minimum, maximum, and mean dose were 1.28%, 0.7%, and 0.2% respectively, higher for the single arc. The homogeneity index showed a difference of 0.99%, higher when compared with double arcs; however, the homogeneity index was on average 0.97% lower using a single arc. The doses to the OARs in both cases complied with the limits recommended by RTOG 0415. With a single arc, the number of monitor units was 10.1% lower and the beam-on time 41.78% lower than with double arcs. Conclusion: For the optimization of patients with prostate and seminal vesicle neoplasia, the use of a single arc reaches similar objectives to double arcs, while decreasing treatment time and the number of monitor units.
Research of grasping algorithm based on scara industrial robot
NASA Astrophysics Data System (ADS)
Peng, Tao; Zuo, Ping; Yang, Hai
2018-04-01
As the tobacco industry grows and faces the challenge of international tobacco giants, efficient logistics service is one of the key factors. Completing tobacco sorting tasks efficiently and economically is the goal of tobacco sorting and optimization research. The current cigarette distribution system uses a single line to carry out single-brand sorting tasks; this article adopts a single line to realize cigarette sorting tasks for different brands. Using a dedicated sorting and packaging algorithm for the SCARA robot, the optimization scheme significantly improves the performance indicators of the cigarette sorting system, saving labor and clearly improving production efficiency.
Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost
NASA Technical Reports Server (NTRS)
Martin, J. A.
1973-01-01
An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.
Miller, Joseph D; Roy, Sukesh; Slipchenko, Mikhail N; Gord, James R; Meyer, Terrence R
2011-08-01
High-repetition-rate, single-laser-shot measurements are important for the investigation of unsteady flows where temperature and species concentrations can vary significantly. Here, we demonstrate single-shot, pure-rotational, hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering (fs/ps RCARS) thermometry based on a kHz-rate fs laser source. Interferences that can affect nanosecond (ns) and ps CARS, such as nonresonant background and collisional dephasing, are eliminated by selecting an appropriate time delay between the 100-fs pump/Stokes pulses and the pulse-shaped 8.4-ps probe. A time- and frequency-domain theoretical model is introduced to account for rotational-level dependent collisional dephasing and indicates that the optimal probe-pulse time delay is 13.5 ps to 30 ps. This time delay allows for uncorrected best-fit N2-RCARS temperature measurements with ~1% accuracy. Hence, the hybrid fs/ps RCARS approach can be performed with kHz-rate laser sources while avoiding corrections that can be difficult to predict in unsteady flows.
NASA Astrophysics Data System (ADS)
Suresh Kumar, G. S.; Antony Muthu Prabhu, A.; Bhuvanesh, N.
2014-10-01
We have studied the self-catalyzed Knoevenagel condensation, spectral characterization, DPPH radical scavenging activity, cytotoxicity, and molecular properties of 5-arylidene-2,2-dimethyl-1,3-dioxane-4,6-diones using single-crystal XRD and DFT techniques. In the absence of any catalyst, a series of novel 5-arylidene-2,2-dimethyl-1,3-dioxane-4,6-diones were synthesized using Meldrum’s acid and formylphenoxyaliphatic acid(s) in water. These molecules are arranged in dimer form through intermolecular H-bonding in the single-crystal XRD structure. The compounds show good DPPH radical scavenging activity and cytotoxicity against the A431 cancer cell line. The optimized molecular structure, natural bond orbital analysis, electrostatic potential map, HOMO-LUMO energies, molecular properties, and atomic charges of these molecules have been studied at the DFT/B3LYP/3-21G(*) level of theory in the gas phase.
Claridge, Shelley A.; Thomas, John C.; Silverman, Miles A.; Schwartz, Jeffrey J.; Yang, Yanlian; Wang, Chen; Weiss, Paul S.
2014-01-01
Single-molecule measurements of complex biological structures such as proteins are an attractive route for determining structures of the large number of important biomolecules that have proved refractory to analysis through standard techniques such as X-ray crystallography and nuclear magnetic resonance. We use a custom-built low-current scanning tunneling microscope to image peptide structure at the single-molecule scale in a model peptide that forms β sheets, a structural motif common in protein misfolding diseases. We successfully differentiate between histidine and alanine amino acid residues, and further differentiate side chain orientations in individual histidine residues, by correlating features in scanning tunneling microscope images with those in energy-optimized models. Beta sheets containing histidine residues are used as a model system due to the role histidine plays in transition metal binding associated with amyloid oligomerization in Alzheimer’s and other diseases. Such measurements are a first step toward analyzing peptide and protein structures at the single-molecule level. PMID:24219245
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurkin, N V; Konyshev, V A; Novikov, A G
2015-01-31
We have studied, experimentally and using numerical simulations and a phenomenological analytical model, the dependence of the bit error rate (BER) on the signal power and length of a coherent single-span communication line with transponders employing polarisation-division multiplexing and four-level phase modulation (100 Gbit/s DP-QPSK format). In comparing the experimental data, numerical simulations and theoretical analysis, we found two optimal powers: the power at which the BER is minimal and the power at which the fade margin in the line is maximal. We derived and analysed the dependence of the BER on the optical signal power at the fibre line input, as well as the admissible input signal power range, for communication lines with lengths from 30 – 50 km up to a maximum of 250 km. (optical transmission of information)
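The existence of an optimum launch power follows from the competition between fixed amplifier (ASE) noise and nonlinear interference that grows with power. A minimal sketch of this trade-off, assuming a GN-model-style cubic nonlinearity with purely illustrative coefficients (`n_ase` and `eta` are not values from this study):

```python
import math

def snr(p_mw, n_ase=0.02, eta=0.05):
    # GN-model-style SNR: fixed ASE noise plus cubic nonlinear interference.
    return p_mw / (n_ase + eta * p_mw ** 3)

def ber_qpsk(snr_lin):
    # Gray-coded QPSK bit error rate at a given linear SNR.
    return 0.5 * math.erfc(math.sqrt(snr_lin / 2.0))

powers = [0.01 * k for k in range(1, 201)]            # 0.01-2.0 mW sweep
p_best = min(powers, key=lambda p: ber_qpsk(snr(p)))  # grid-search optimum
p_analytic = (0.02 / (2.0 * 0.05)) ** (1.0 / 3.0)     # SNR peak: n_ase = 2*eta*P^3
```

The analytic optimum comes from maximizing P/(n_ase + eta·P³), which peaks where n_ase = 2·eta·P³; below it the line is noise-limited, above it nonlinearity-limited.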
NASA Astrophysics Data System (ADS)
Meier, W. R.; Kong, T.; Bud'ko, S. L.; Canfield, P. C.
2017-06-01
Measurements of the anisotropic properties of single crystals play a crucial role in probing the physics of new materials. Determining a growth protocol that yields suitable high-quality single crystals can be particularly challenging for multicomponent compounds. Here we present a case study of how we refined a procedure to grow single crystals of CaKFe4As4 from a high-temperature, quaternary liquid solution rich in iron and arsenic ("FeAs self-flux"). Temperature-dependent resistance and magnetization measurements are emphasized, in addition to x-ray diffraction, to detect intergrown CaKFe4As4, CaFe2As2, and KFe2As2 within what appear to be single crystals. Guided by the rules of phase equilibria and these data, we adjusted growth parameters to suppress formation of the impurity phases. The resulting optimized procedure yielded phase-pure single crystals of CaKFe4As4. This optimization process offers insight into the growth of quaternary compounds and a glimpse of the four-component phase diagram in the pseudoternary FeAs-CaFe2As2-KFe2As2 system.
Jayasuriya, W J A B; Suresh, T S; Abeytunga, D; Fernando, G H; Wanigatunga, C A
2012-01-01
This study investigates the oral hypoglycemic activity of Pleurotus ostreatus (P.o.) and P. cystidiosus (P.c.) mushrooms in normal and alloxan-induced diabetic Wistar rats. Different doses (250, 500, 750, 1000, and 1250 mg/kg body weight) of suspensions of freeze-dried and powdered (SFDP) P.o. and P.c. were administered to normal rats, and postprandial serum glucose levels were measured. The optimal time of activity was investigated using the 500 mg/kg dose. The hypoglycemic effect of a single dose of SFDP P.o. and P.c. (500 mg/kg) was investigated using diabetic male and female rats at different stages of the estrous cycle and compared with metformin and glibenclamide. Chronic hypoglycemic activity of SFDP P.o. and P.c. (500 mg/kg) was studied using serum glucose and glycosylated hemoglobin levels. The maximally effective dose of SFDP P.o. and P.c. was 500 mg/kg. The highest reduction in serum glucose was observed 120 minutes after administration of the mushrooms. A single dose of P.o. and P.c. significantly (P < 0.05) reduced the serum glucose levels of male diabetic rats. The hypoglycemic activity in female rats was highest in the proestrous stage. The hypoglycemic effect of P.o. and P.c. is comparable with that of metformin and glibenclamide. Daily single administrations of P.o. and P.c. to diabetic rats exert apparent control over blood glucose homeostasis. SFDP P.o. and P.c. possessed marked and significant oral hypoglycemic activity, suggesting that consumption of these mushrooms may offer health benefits.
Anterior surgical management of single-level cervical disc disease: a cost-effectiveness analysis.
Lewis, Daniel J; Attiah, Mark A; Malhotra, Neil R; Burnett, Mark G; Stein, Sherman C
2014-12-01
Cost-effectiveness analysis with decision analysis and meta-analysis. To determine the relative cost-effectiveness of anterior cervical discectomy with fusion (with autograft, allograft, or spacers), anterior cervical discectomy without fusion (ACD), and cervical disc replacement (CDR) for the treatment of 1-level cervical disc disease. There is debate as to the optimal anterior surgical strategy to treat single-level cervical disc disease. Surgical strategies include 3 techniques of anterior cervical discectomy with fusion (autograft, allograft, or spacer-assisted fusion), ACD, and CDR. Several controlled trials have compared these treatments but have yielded mixed results. Decision analysis provides a structure for making a quantitative comparison of the costs and outcomes of each treatment. A literature search was performed and yielded 156 case series that fulfilled our search criteria describing nearly 17,000 cases. Data were abstracted from these publications and pooled meta-analytically to estimate the incidence of various outcomes, including index-level and adjacent-level reoperation. A decision analytic model calculated the expected costs in US dollars and outcomes in quality-adjusted life years for a typical adult patient with 1-level cervical radiculopathy subjected to each of the 5 approaches. At 5 years postoperatively, patients who had undergone ACD alone had significantly (P < 0.001) more quality-adjusted life years (4.885 ± 0.041) than those receiving other treatments. Patients with ACD also exhibited highly significant (P < 0.001) differences in costs, incurring the lowest societal costs ($16,558 ± $539). Follow-up data were inadequate for comparison beyond 5 years. The results of our decision analytic model indicate advantages for ACD, both in effectiveness and costs, over other strategies. Thus, ACD is a cost-effective alternative to anterior cervical discectomy with fusion and CDR in patients with single-level cervical disc disease. 
Definitive conclusions about degenerative changes after ACD and adjacent-level disease after CDR await longer follow-up. Level of Evidence: 4.
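The decision-analytic comparison ultimately reduces to checking cost-effectiveness dominance: a strategy is ruled out if an alternative is simultaneously cheaper and yields more QALYs. A toy sketch, where only the ACD entry uses figures reported above ($16,558; 4.885 QALYs) and the other two rows are hypothetical placeholders:

```python
strategies = {
    # (societal cost in USD, QALYs at 5 years); only "ACD" uses the
    # reported figures; "ACDF" and "CDR" values are hypothetical.
    "ACDF": (19000.0, 4.700),
    "ACD":  (16558.0, 4.885),
    "CDR":  (24000.0, 4.750),
}

def dominated(name):
    # A strategy is dominated if some alternative costs less AND yields more QALYs.
    cost, qaly = strategies[name]
    return any(c < cost and q > qaly
               for other, (c, q) in strategies.items() if other != name)
```

With these inputs ACD is undominated, which is how a strategy can be declared cost-effective without choosing a willingness-to-pay threshold.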
On the Run-Time Optimization of the Boolean Logic of a Program.
ERIC Educational Resources Information Center
Cadolino, C.; Guazzo, M.
1982-01-01
Considers problem of optimal scheduling of Boolean expression (each Boolean variable represents binary outcome of program module) on single-processor system. Optimization discussed consists of finding operand arrangement that minimizes average execution costs representing consumption of resources (elapsed time, main memory, number of…
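For a short-circuited AND of independent tests, the classic result is to evaluate operands in increasing order of cost divided by the probability of yielding false. A sketch with hypothetical module costs and truth probabilities, cross-checked against brute-force enumeration:

```python
from itertools import permutations

def expected_cost(order, cost, p_true):
    # Expected evaluation cost of a short-circuited AND: module i runs
    # only if every earlier module in the order returned true.
    total, p_reach = 0.0, 1.0
    for i in order:
        total += p_reach * cost[i]
        p_reach *= p_true[i]
    return total

def ratio_rule(cost, p_true):
    # Optimal ordering: increasing cost / P(module returns false).
    return sorted(range(len(cost)), key=lambda i: cost[i] / (1.0 - p_true[i]))

cost = [4.0, 1.0, 3.0]     # hypothetical per-module execution costs
p_true = [0.9, 0.5, 0.2]   # hypothetical probabilities of returning true
```

The rule follows from an adjacent-swap argument: exchanging two neighbouring operands changes the expected cost only through their cost/(1 - p) ratios.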
Singular perturbation analysis of AOTV-related trajectory optimization problems
NASA Technical Reports Server (NTRS)
Calise, Anthony J.; Bae, Gyoung H.
1990-01-01
The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. 
A first order state variable inequality constraint was imposed on the full order AOTV point mass equations of motion, using a simple aerodynamic heating rate model.
Minimum airflow reset of single-duct VAV terminal boxes
NASA Astrophysics Data System (ADS)
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lowering overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE Standard 62.1 and the maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the air-handling units (AHUs) will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points.
Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and applied to actual systems for performance validation. The results of the theoretical analysis, numeric simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. Improved control helps to stabilize room air temperatures. The vertical difference in the room air temperature was lower than the comfort value. Measurements of room CO2 levels indicate that when the minimum airflow set point was reduced it did not adversely affect the indoor air quality. According to the measured energy results, optimal control algorithms give a lower rate of reheating energy consumption than conventional controls.
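A sketch of the two quantities any such minimum set point must cover: the ASHRAE 62.1 breathing-zone ventilation rate (Vbz = Rp·Pz + Ra·Az, shown with the office-occupancy default rates) and the airflow needed to meet the design heating load (IP units, Q = q/(1.08·ΔT)). The zone inputs are illustrative, not values from the study:

```python
def breathing_zone_cfm(people, area_ft2, rp=5.0, ra=0.06):
    # ASHRAE 62.1 breathing-zone outdoor air, Vbz = Rp*Pz + Ra*Az
    # (defaults shown are the office values: 5 cfm/person, 0.06 cfm/ft2).
    return rp * people + ra * area_ft2

def heating_cfm(load_btuh, supply_f, room_f):
    # Airflow to meet a sensible heating load, IP units: Q = q / (1.08 * dT).
    return load_btuh / (1.08 * (supply_f - room_f))

def min_airflow_setpoint(people, area_ft2, load_btuh, supply_f=90.0, room_f=70.0):
    # The reset target: the larger of the ventilation and heating needs.
    return max(breathing_zone_cfm(people, area_ft2),
               heating_cfm(load_btuh, supply_f, room_f))
```

In heating-dominated zones the second term governs, which is why resetting the discharge air temperature and the minimum airflow together matters.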
Graham, Victoria A.; Bewley, Kevin R.; Dennis, Mike; Taylor, Irene; Funnell, Simon G. P.; Bate, Simon R.; Steeds, Kimberley; Tipton, Thomas; Bean, Thomas; Hudson, Laura; Atkinson, Deborah J.; McLuckie, Gemma; Charlwood, Melanie; Roberts, Allen D. G.; Vipond, Julia
2013-01-01
To support the licensure of a new and safer vaccine to protect people against smallpox, a monkeypox model of infection in cynomolgus macaques, which simulates smallpox in humans, was used to evaluate two vaccines, Acam2000 and Imvamune, for protection against disease. Animals vaccinated with a single immunization of Imvamune were not protected completely from severe and/or lethal infection, whereas those receiving either a prime and boost of Imvamune or a single immunization with Acam2000 were protected completely. Additional parameters, including clinical observations, radiographs, viral load in blood, throat swabs, and selected tissues, vaccinia virus-specific antibody responses, immunophenotyping, extracellular cytokine levels, and histopathology were assessed. There was no significant difference (P > 0.05) between the levels of neutralizing antibody in animals vaccinated with a single immunization of Acam2000 (132 U/ml) and the prime-boost Imvamune regime (69 U/ml) prior to challenge with monkeypox virus. After challenge, there was evidence of viral excretion from the throats of 2 of 6 animals in the prime-boost Imvamune group, whereas there was no confirmation of excreted live virus in the Acam2000 group. This evaluation of different human smallpox vaccines in cynomolgus macaques helps to provide information about optimal vaccine strategies in the absence of human challenge studies. PMID:23658452
Wilson, Catherine L; Johnson, David; Oakley, Ed
2016-02-01
Systematic review of knowledge translation studies focused on paediatric emergency care to describe and assess the interventions used in emergency department settings. Electronic databases were searched for knowledge translation studies conducted in the emergency department that included the care of children. Two researchers independently reviewed the studies. From 1305 publications identified, 15 studies of varied design were included. Four were cluster-controlled trials, two patient-level randomised controlled trials, two interrupted time series, one descriptive study and six before and after intervention studies. Knowledge translation interventions were predominantly aimed at the treating clinician, with some targeting the organisation. Studies assessed effectiveness of interventions over 6-12 months in before and after studies, and 3-28 months in cluster or patient level controlled trials. Changes in clinical practice were variable, with studies on single disease and single treatments in a single site showing greater improvement. Evidence for effective methods to translate knowledge into practice in paediatric emergency medicine is fairly limited. More optimal study designs with more explicit descriptions of interventions are needed to facilitate other groups to effectively apply these procedures in their own setting. © 2016 The Authors Journal of Paediatrics and Child Health © 2016 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, W.L.; Pines, H.S.; Silvester, L.F.
1978-03-01
A new heat exchanger program, SIZEHX, is described. This program allows single-step multiparameter cost optimizations on single-phase or supercritical exchanger arrays with variable properties and arbitrary fouling for a multitude of matrix configurations and fluids. SIZEHX uses a simplified form of Tinker's method for characterization of shell-side performance; the Starling-modified BWR equation for thermodynamic properties of hydrocarbons; and transport properties developed by NBS. Results of four-parameter cost optimizations on exchangers for specific geothermal applications are included. The relative mix of capital cost, pumping cost, and brine cost ($/Btu) is determined for geothermal exchangers, illustrating the invariant nature of the optimal cost distribution for fixed unit costs.
Single-photon quantum key distribution in the presence of loss
NASA Astrophysics Data System (ADS)
Curty, Marcos; Moroder, Tobias
2007-05-01
We investigate two-way and one-way single-photon quantum key distribution (QKD) protocols in the presence of loss introduced by the quantum channel. Our analysis is based on a simple precondition for secure QKD in each case. In particular, the legitimate users need to prove that there exists no separable state (in the case of two-way QKD), or no quantum state having a symmetric extension (one-way QKD), that is compatible with the available measurement results. We show that both criteria can be formulated as a convex optimization problem known as a semidefinite program, which can be efficiently solved. Moreover, we prove that the solution to the dual optimization corresponds to the evaluation of an optimal witness operator belonging to the minimal verification set for the given two-way (or one-way) QKD protocol. A positive expectation value of this optimal witness operator indicates that no secret key can be distilled from the available measurement results. We apply this analysis to several well-known single-photon QKD protocols under losses.
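The role a witness operator plays can be illustrated on a two-qubit toy example (this is a simple entanglement witness, not the paper's SDP construction): W = I/2 - |Φ+⟩⟨Φ+| has non-negative expectation on every separable state and negative expectation on the Bell state it detects.

```python
def trace_prod(a, b):
    # Tr(A B) for real square matrices stored as nested lists.
    n = len(a)
    return sum(a[i][j] * b[j][i] for i in range(n) for j in range(n))

# Projector onto the Bell state |Phi+> = (|00> + |11>)/sqrt(2).
proj = [[0.0] * 4 for _ in range(4)]
for i in (0, 3):
    for j in (0, 3):
        proj[i][j] = 0.5

# Witness W = I/2 - |Phi+><Phi+|: Tr(W rho) >= 0 on all separable
# states, negative on states the witness "detects".
witness = [[(0.5 if i == j else 0.0) - proj[i][j] for j in range(4)]
           for i in range(4)]

rho_sep = [[0.0] * 4 for _ in range(4)]
rho_sep[0][0] = 1.0  # the product state |00><00|
```

In the QKD setting the analogous sign test says whether the observed correlations are compatible with a state from which no key can be distilled.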
NASA Astrophysics Data System (ADS)
Averin, Dmitri V.; Pekola, Jukka P.
2017-03-01
According to Landauer's principle, erasure of information is the only part of a computation process that unavoidably involves energy dissipation. If done reversibly, such an erasure generates the minimal heat of $k_B T \ln 2$ per erased bit of information. The goal of this work is to discuss the actual reversal of the optimal erasure, which can serve as the basis for a Maxwell's demon operating with the ultimate thermodynamic efficiency dictated by the second law of thermodynamics. The demon extracts $k_B T \ln 2$ of heat from an equilibrium reservoir at temperature $T$ per bit of information obtained about the measured system used by the demon. We have analyzed this Maxwell's demon in the situation when it uses a general quantum system with a discrete spectrum of energy levels as its working body. In the case of the effectively two-level system, which has been realized experimentally based on tunneling of an individual electron in a single-electron box [J. V. Koski et al., PNAS 111, 13786 (2014)], we also studied and minimized corrections to the ideal reversible operation of the demon. These corrections include, in particular, the non-adiabatic terms, which are described by a version of the classical fluctuation-dissipation theorem. The overall reversibility of the Maxwell's demon requires, besides the reversibility of the intrinsic working-body dynamics, the reversibility of the measurement and feedback processes. The single-electron demon can, in principle, be made fully reversible by developing a thermodynamically reversible single-electron charge detector for measurements of the individual charge states of the single-electron box.
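The Landauer bound itself is a one-line computation; a sketch evaluating the minimal heat k_B·T·ln 2 per erased bit:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_heat(bits, temp_k):
    # Minimal heat dissipated by reversibly erasing `bits` of information.
    return bits * K_B * temp_k * math.log(2.0)
```

At room temperature this is about 3 zeptojoules per bit, roughly eight orders of magnitude below the switching energy of present-day logic.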
NASA Astrophysics Data System (ADS)
Hu, Xuemin; Chen, Long; Tang, Bo; Cao, Dongpu; He, Haibo
2018-02-01
This paper presents a real-time dynamic path planning method for autonomous driving that avoids both static and moving obstacles. The proposed path planning method determines not only an optimal path, but also the appropriate acceleration and speed for the vehicle. In this method, we first construct a center line from a set of predefined waypoints, which are usually obtained from a lane-level map. A series of path candidates are generated by the arc length and offset to the center line in the s-ρ coordinate system. Then, all of these candidates are converted into Cartesian coordinates. The optimal path is selected considering the total cost of static safety, comfort, and dynamic safety; meanwhile, the appropriate acceleration and speed for the optimal path are also identified. Various types of roads, including single-lane roads and multi-lane roads with static and moving obstacles, are designed to test the proposed method. The simulation results demonstrate the effectiveness of the proposed method and indicate its wide practical applicability to autonomous driving.
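The candidate-generation-and-selection step can be sketched as follows, with a toy cost that rejects candidates passing too close to an obstacle and otherwise prefers small lateral offset. The straight-line candidates, clearance radius, and cost form are illustrative stand-ins, not the paper's formulation:

```python
def make_candidates(offsets, n_points=20, length=30.0):
    # Straight-line candidates in the s-rho frame: the lateral offset grows
    # linearly to its end value over the arc length (a stand-in for the
    # polynomial candidates a real planner would generate).
    return [[(length * k / (n_points - 1), rho_end * k / (n_points - 1))
             for k in range(n_points)] for rho_end in offsets]

def path_cost(path, obstacles, clearance=1.0):
    # Static safety: reject candidates that pass within `clearance` of an
    # obstacle.  Comfort: among the rest, prefer small lateral offsets.
    for s, rho in path:
        for obs_s, obs_rho in obstacles:
            if (s - obs_s) ** 2 + (rho - obs_rho) ** 2 < clearance ** 2:
                return float("inf")
    return sum(abs(rho) for _, rho in path)

obstacles = [(15.0, 0.0)]  # one static obstacle on the center line
candidates = make_candidates([-2.0, -1.0, 0.0, 1.0, 2.0])
best = min(candidates, key=lambda p: path_cost(p, obstacles))
```

Working in the s-ρ frame keeps obstacle checks and comfort terms one-dimensional; conversion to Cartesian coordinates happens only after selection.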
Sparse bursts optimize information transmission in a multiplexed neural code.
Naud, Richard; Sprekeler, Henning
2018-06-22
Many cortical neurons combine the information ascending and descending the cortical hierarchy. In the classical view, this information is combined nonlinearly to give rise to a single firing-rate output, which collapses all input streams into one. We analyze the extent to which neurons can simultaneously represent multiple input streams by using a code that distinguishes spike timing patterns at the level of a neural ensemble. Using computational simulations constrained by experimental data, we show that cortical neurons are well suited to generate such multiplexing. Interestingly, this neural code maximizes information for short and sparse bursts, a regime consistent with in vivo recordings. Neurons can also demultiplex this information, using specific connectivity patterns. The anatomy of the adult mammalian cortex suggests that these connectivity patterns are used by the nervous system to maintain sparse bursting and optimal multiplexing. Contrary to firing-rate coding, our findings indicate that the physiology and anatomy of the cortex may be interpreted as optimizing the transmission of multiple independent signals to different targets. Copyright © 2018 the Author(s). Published by PNAS.
Lowden, Jonathan; Miller Neilan, Rachael; Yahdi, Mohammed
2014-03-01
The rising prevalence of vancomycin-resistant enterococci (VRE) is a major health problem in intensive care units (ICU) because of its association with increased mortality and high health care costs. We present a mathematical framework for determining cost-effective strategies for prevention and treatment of VRE in the ICU. A system of five ordinary differential equations describes the movement of ICU patients in and out of five VRE-related states. Two control variables representing the prevention and treatment of VRE are incorporated into the system. The basic reproductive number is derived and calculated for different levels of the two controls. An optimal control problem is formulated to minimize VRE-related deaths and costs associated with prevention and treatment controls over a finite time period. Numerical solutions illustrate optimal single and dual allocations of the controls for various cost values. Results show that preventive care has the greatest impact in reducing the basic reproductive number, while treatment of VRE infections has the most impact on reducing VRE-related deaths. Copyright © 2014 Elsevier Inc. All rights reserved.
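The structure of such a model can be sketched with a two-compartment toy (not the paper's five-state ICU system): a prevention control scales down transmission, a treatment control speeds clearance, and both reduce the basic reproductive number. All rate constants below are illustrative:

```python
def r0(beta, gamma, u_prev, u_treat):
    # Basic reproductive number with a prevention control (scales down
    # transmission) and a treatment control (speeds clearance).
    return beta * (1.0 - u_prev) / (gamma + u_treat)

def simulate(beta=0.4, gamma=0.1, u_prev=0.0, u_treat=0.0, days=200, dt=0.1):
    # Forward-Euler SIS-style toy: s = susceptible, i = colonized fraction.
    s, i = 0.99, 0.01
    for _ in range(int(days / dt)):
        new_col = beta * (1.0 - u_prev) * s * i
        clearance = (gamma + u_treat) * i
        s += dt * (clearance - new_col)
        i += dt * (new_col - clearance)
    return i
```

Driving R0 below 1 makes colonization die out, consistent with the paper's finding that prevention has the greatest leverage on the reproductive number.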
Optimization of data retrieval process for spectroscopic CO2 isotopologue ratio measurements
NASA Astrophysics Data System (ADS)
Hovorka, J.; Čermák, P.; Veis, P.
2017-05-01
In this work, a numerical model was developed for critical evaluation of 13CO2/12CO2 ratio retrievals (Δδ value) from laser absorption spectra. The goal of the analysis was to determine the dependence of the absolute error of δ on different experimental parameters, in order to find the optimal conditions for isotopic ratio retrievals without using calibrated reference samples. In our study, the target precision for Δδ was set at a level of ≤ 1%. The analysis was performed in the spectral range of the ν1 + ν3 CO2 band at 1.6 μm, with the theoretical data originating from the HITRAN database. The proposed fitting algorithm allowed efficient compensation of the interference from weak transitions which are not well recognizable in a single spectrum; this effect was found to make a dominant contribution to the Δδ value. Finally, the optimal conditions for such an experiment regarding pressure, spectral range and spectrum noise were found and discussed from the perspective of widely tunable laser applications.
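Isotopologue ratios retrieved from line areas are conventionally reported on the delta scale relative to the VPDB standard. A minimal sketch of that final conversion step (the line areas and strengths used in the check are arbitrary illustrative numbers):

```python
R_VPDB = 0.011180  # 13C/12C isotope ratio of the VPDB reference standard

def delta13c_permil(area13, area12, s13, s12):
    # Ratio of integrated line areas scaled by the HITRAN line strengths,
    # expressed on the delta scale (permil deviation from VPDB).
    ratio = (area13 / s13) / (area12 / s12)
    return (ratio / R_VPDB - 1.0) * 1000.0
```

Because δ is a ratio of ratios, any bias in a fitted line area (e.g. from unrecognized weak interfering transitions) propagates directly into Δδ, which is why the fit quality dominates the error budget.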
Tajabadi, Naser; Ebrahimpour, Afshin; Baradaran, Ali; Rahim, Raha Abdul; Mahyudin, Nor Ainy; Manap, Mohd Yazid Abdul; Bakar, Fatimah Abu; Saari, Nazamid
2015-04-15
Dominant strains of lactic acid bacteria (LAB) isolated from honey bees were evaluated for their γ-aminobutyric acid (GABA)-producing ability. Out of 24 strains, strain Taj-Apis362 showed the highest GABA-producing ability (1.76 mM) in MRS broth containing 50 mM initial glutamic acid cultured for 60 h. Effects of fermentation parameters, including initial glutamic acid level, culture temperature, initial pH and incubation time on GABA production were investigated via a single parameter optimization strategy. The optimal fermentation condition for GABA production was modeled using response surface methodology (RSM). The results showed that the culture temperature was the most significant factor for GABA production. The optimum conditions for maximum GABA production by Lactobacillus plantarum Taj-Apis362 were an initial glutamic acid concentration of 497.97 mM, culture temperature of 36 °C, initial pH of 5.31 and incubation time of 60 h, which produced 7.15 mM of GABA. The value is comparable with the predicted value of 7.21 mM.
Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne
2010-04-01
Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. Response surface regression was applied to determine the optimal levels of two inputs, dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, 8 weeks), to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed through a dimensionality-reduction strategy involving calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing a systolic blood pressure of 208 mmHg, critical oxygen delivery of 4.03 mL/min, and M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression and simultaneous optimization techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods help ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of clinically interacting comorbidities with shock.
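The desirability approach combines each response's individual desirability (a 0-1 score) by geometric mean, so that a single unacceptable response vetoes the whole setting. A sketch in the Derringer-Suich style, using the target values from the abstract with hypothetical acceptability bounds:

```python
def desirability_target(y, low, target, high):
    # Derringer-Suich "target is best": 0 outside [low, high], 1 at target,
    # linear ramps in between.
    if y <= low or y >= high:
        return 0.0
    if y <= target:
        return (y - low) / (target - low)
    return (high - y) / (high - target)

def desirability_larger(y, low, high):
    # "Larger is better": 0 at or below `low`, 1 at or above `high`.
    if y <= low:
        return 0.0
    return min(1.0, (y - low) / (high - low))

def overall(desirabilities):
    # Geometric mean: any single zero desirability vetoes the run.
    product = 1.0
    for d in desirabilities:
        product *= d
    return product ** (1.0 / len(desirabilities))
```

The input setting (here, [NaCl] and weeks on diet) that maximizes `overall` over the fitted response surfaces is the recommended protocol.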
Application of high-performance computing to numerical simulation of human movement
NASA Technical Reports Server (NTRS)
Anderson, F. C.; Ziegler, J. M.; Pandy, M. G.; Whalen, R. T.
1995-01-01
We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.
Yu, Li; Jin, Weifeng; Li, Xiaohong; Zhang, Yuyan
2018-01-01
The ultraviolet spectrophotometric method is often used for determining the content of glycyrrhizic acid from the Chinese herbal medicine Glycyrrhiza glabra. Based on the traditional single-variable approach, four extraction parameters, ammonia concentration, ethanol concentration, reflux time, and liquid-solid ratio, are adopted as the independent extraction variables. In the present work, a central composite design of four factors and five levels is applied to design the extraction experiments. Subsequently, prediction models based on response surface methodology, artificial neural networks, and genetic algorithm-artificial neural networks are developed to analyze the obtained experimental data, while the genetic algorithm is utilized to find the optimal extraction parameters for the above well-established models. The optimized extraction conditions are found to be ammonia concentration 0.595%, ethanol concentration 58.45%, reflux time 2.5 h, and liquid-solid ratio 11.065:1. Under these conditions, the model predictive value is 381.24 mg, the experimental average value is 376.46 mg, and the discrepancy is 4.78 mg. For the first time, a comparative study of these three approaches is conducted for the evaluation and optimization of the effects of the independent extraction variables. Furthermore, it is demonstrated that the combination of genetic algorithm and artificial neural networks provides a more reliable and more accurate strategy for design and optimization of glycyrrhizic acid extraction from Glycyrrhiza glabra.
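The genetic-algorithm stage searches the fitted model for the best parameter combination. A much-simplified sketch (truncation selection with Gaussian mutation only, no crossover, and a made-up quadratic surface standing in for the trained network; the peak location is arbitrary):

```python
import random

def response(x):
    # Hypothetical fitted response surface standing in for the trained ANN:
    # a smooth peak at (0.60, 0.58) on a normalized 0-1 parameter scale.
    a, b = x
    return 1.0 - (a - 0.60) ** 2 - (b - 0.58) ** 2

def ga_maximize(fitness, n_pop=40, n_gen=60, sigma=0.05, seed=1):
    # Truncation selection plus Gaussian mutation; parents survive (elitism).
    random.seed(seed)
    pop = [(random.random(), random.random()) for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: n_pop // 4]
        children = []
        for _ in range(n_pop - len(parents)):
            pa = random.choice(parents)
            children.append(tuple(min(1.0, max(0.0, v + random.gauss(0.0, sigma)))
                                  for v in pa))
        pop = parents + children
    return max(pop, key=fitness)

best = ga_maximize(response)
```

In the GA-ANN hybrid the fitness calls go to the trained network, so the search cost is decoupled from the number of wet-lab experiments.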
Stochastic optimal control of non-stationary response of a single-degree-of-freedom vehicle model
NASA Astrophysics Data System (ADS)
Narayanan, S.; Raju, G. V.
1990-09-01
An active suspension system to control the non-stationary response of a single-degree-of-freedom (sdf) vehicle model traversing a rough road at variable velocity is investigated. The suspension is optimized with respect to ride comfort and road holding using stochastic optimal control theory. The ground excitation is modelled as a spatially homogeneous random process, obtained as the output of a linear shaping filter driven by white noise. The effect of the rolling contact of the tyre is considered by an additional filter in cascade. The non-stationary response with the active suspension is compared with that of a passive system.
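The LQR machinery underlying such a stochastic optimal control design can be sketched in miniature. The 1-DOF model matrices, sampling time, and weights below are illustrative stand-ins, not the paper's, and the shaping-filter covariance analysis is omitted; only the Riccati-based gain computation is shown.

```python
# Minimal discrete-time LQR sketch for a 1-DOF suspension-like model
# (states: deflection and deflection rate; control: actuator force).
def mm(A, B):                      # matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def tr(A):                         # transpose
    return [list(r) for r in zip(*A)]

def lqr_gain(A, B, Q, r, iters=1000):
    # Value iteration of the discrete algebraic Riccati equation:
    #   P <- Q + A'PA - A'PB (r + B'PB)^-1 B'PA
    n = len(A)
    P = [row[:] for row in Q]
    for _ in range(iters):
        BtP = mm(tr(B), P)                       # 1 x n
        S = r + mm(BtP, B)[0][0]                 # scalar (single input)
        K = [[v / S for v in mm(BtP, A)[0]]]     # gain: S^-1 B'PA
        APA = mm(mm(tr(A), P), A)
        APBK = mm(mm(mm(tr(A), P), B), K)
        P = [[Q[i][j] + APA[i][j] - APBK[i][j] for j in range(n)]
             for i in range(n)]
    return K, P

dt = 0.05
A = [[1.0, dt], [0.0, 1.0]]        # double-integrator discretization
B = [[0.0], [dt]]
Q = [[10.0, 0.0], [0.0, 1.0]]      # illustrative comfort/holding weights
K, P = lqr_gain(A, B, Q, 1.0)
```

The resulting feedback u = -Kx stabilizes the model; in the paper the same ingredients are applied to an augmented state that includes the shaping-filter dynamics, which is what makes the non-stationary excitation tractable.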
Celik, Yuksel; Ulker, Erkan
2013-01-01
Marriage in honey bees optimization (MBO) is a metaheuristic optimization algorithm inspired by the mating and fertilization process of honey bees, and is a form of swarm intelligence. In this study we propose an improved marriage in honey bees optimization (IMBO) that adds a Levy flight algorithm for the queen's mating flight and a neighbourhood search for improving the worker drones. The IMBO algorithm's performance is tested on six well-known unconstrained test functions and compared with other metaheuristic optimization algorithms. PMID:23935416
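A common way to generate the heavy-tailed steps of a Levy flight is Mantegna's algorithm, sketched below; the stability index beta and the way steps would be applied to the queen's position are generic illustrations, not necessarily the paper's exact settings.

```python
import math
import random

# Mantegna's algorithm: step = u / |v|^(1/beta) with u ~ N(0, sigma),
# v ~ N(0, 1) approximates a symmetric Levy-stable step distribution.
def levy_step(beta=1.5, rng=random):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

random.seed(1)
steps = [levy_step() for _ in range(2000)]
```

Most steps are small local moves, but the heavy tail occasionally produces a very long jump, which is what gives the mating flight its mix of exploitation and exploration.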
Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks
Bistra Dilkina; Rachel Houtman; Carla P. Gomes; Claire A. Montgomery; Kevin S. McKelvey; Katherine Kendall; Tabitha A. Graves; Richard Bernstein; Michael K. Schwartz
2016-01-01
Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a...
Spatial targeting of agri-environmental policy using bilevel evolutionary optimization
USDA-ARS?s Scientific Manuscript database
In this study we describe the optimal designation of agri-environmental policy as a bilevel optimization problem and propose an integrated solution method using a hybrid genetic algorithm. The problem is characterized by a single leader, the agency, that establishes a policy with the goal of optimiz...
Dynamic nuclear polarization and optimal control spatial-selective 13C MRI and MRS
NASA Astrophysics Data System (ADS)
Vinding, Mads S.; Laustsen, Christoffer; Maximov, Ivan I.; Søgaard, Lise Vejby; Ardenkjær-Larsen, Jan H.; Nielsen, Niels Chr.
2013-02-01
Aimed at 13C metabolic magnetic resonance imaging (MRI) and spectroscopy (MRS) applications, we demonstrate that dynamic nuclear polarization (DNP) may be combined with optimal control 2D spatial selection to simultaneously obtain high sensitivity and well-defined spatial restriction. This is achieved through the development of spatial-selective single-shot spiral-readout MRI and MRS experiments combined with DNP-hyperpolarized [1-13C]pyruvate on a 4.7 T pre-clinical MR scanner. The method stands out from related techniques by providing anatomically shaped region-of-interest (ROI) single-metabolite signals, available for higher image resolution or single-peak spectra. The 2D spatial-selective rf pulses were designed using a novel Krotov-based optimal control approach capable of rapidly and iteratively providing successful pulse sequences in the absence of qualified initial guesses. The technique may be important for early detection of abnormal metabolism, monitoring disease progression, and drug research.
Risk-based planning analysis for a single levee
NASA Astrophysics Data System (ADS)
Hui, Rui; Jachens, Elizabeth; Lund, Jay
2016-04-01
Traditional risk-based analysis for levee planning focuses primarily on overtopping failure. Although many levees fail before overtopping, few planning studies explicitly include intermediate geotechnical failures in flood risk analysis. This study develops a risk-based model for two simplified levee failure modes: overtopping failure and overall intermediate geotechnical failure from through-seepage, determined by the levee cross section represented by levee height and crown width. Overtopping failure is based only on water level and levee height, while through-seepage failure depends on many geotechnical factors as well, mathematically represented here as a function of levee crown width using levee fragility curves developed from professional judgment or analysis. These levee planning decisions are optimized to minimize the annual expected total cost, which sums expected (residual) annual flood damage and annualized construction costs. Applicability of this optimization approach to planning new levees or upgrading existing levees is demonstrated preliminarily for a levee on a small river protecting agricultural land, and a major levee on a large river protecting a more valuable urban area. Optimized results show higher likelihood of intermediate geotechnical failure than overtopping failure. The effects of uncertainty in levee fragility curves, economic damage potential, construction costs, and hydrology (changing climate) are explored. Optimal levee crown width is more sensitive to these uncertainties than height, while the derived general principles and guidelines for risk-based optimal levee planning remain the same.
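The risk-based framing can be illustrated with a toy model in which the annual expected total cost is annualized construction cost plus expected annual damage from the two failure modes; the exceedance and fragility curves and all cost figures below are invented for illustration, not calibrated values from the study.

```python
import math

# Toy risk-based levee design: minimize annualized construction cost
# plus expected annual flood damage over height and crown width.
def annual_expected_cost(height, width, damage=5.0e6,
                         cost_h=2.0e4, cost_w=1.0e4, crf=0.05):
    p_overtop = math.exp(-height)        # stage-exceedance proxy
    p_seepage = math.exp(-0.5 * width)   # through-seepage fragility proxy
    p_fail = 1.0 - (1.0 - p_overtop) * (1.0 - p_seepage)
    construction = crf * (cost_h * height + cost_w * width)
    return construction + p_fail * damage

# grid search over candidate heights and crown widths (metres)
best = min(((h / 2.0, w / 2.0) for h in range(2, 21) for w in range(2, 41)),
           key=lambda hw: annual_expected_cost(*hw))
```

With these toy numbers the optimizer trades section size against residual risk, and the seepage term dominates unless the crown is widened, echoing the study's finding that intermediate geotechnical failure is the more likely mode. In the study itself the fragility curves come from professional judgment or geotechnical analysis, not the exponential proxies used here.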
Wing Configuration Impact on Design Optimums for a Subsonic Passenger Transport
NASA Technical Reports Server (NTRS)
Wells, Douglas P.
2014-01-01
This study sought to compare four aircraft wing configurations at a conceptual level using a multi-disciplinary optimization (MDO) process. The MDO framework used was created by Georgia Institute of Technology and Virginia Polytechnic Institute and State University. They created a multi-disciplinary design and optimization environment that could capture the unique features of the truss-braced wing (TBW) configuration. The four wing configurations selected for the study were a low wing cantilever installation, a high wing cantilever, a strut-braced wing, and a single jury TBW. The mission that was used for this study was a 160 passenger transport aircraft with a design range of 2,875 nautical miles at the design payload, flown at a cruise Mach number of 0.78. This paper includes discussion and optimization results for multiple design objectives. Five design objectives were chosen to illustrate the impact of selected objective on the optimization result: minimum takeoff gross weight (TOGW), minimum operating empty weight, minimum block fuel weight, maximum start of cruise lift-to-drag ratio, and minimum start of cruise drag coefficient. The results show that the design objective selected will impact the characteristics of the optimized aircraft. Although minimum life cycle cost was not one of the objectives, TOGW is often used as a proxy for life cycle cost. The low wing cantilever had the lowest TOGW followed by the strut-braced wing.
Wu, Xueyun; Yang, Dong; Zhu, Xiangcheng; Feng, Zhiyang; Lv, Zhengbin; Zhang, Yaozhou; Shen, Ben; Xu, Zhinan
2011-01-01
The heterologous production of iso-migrastatin (iso-MGS) was successfully demonstrated in an engineered S. lividans SB11002 strain, derived from S. lividans K4-114 by introduction of pBS11001, which harbors the entire mgs biosynthetic gene cluster. However, under similar fermentation conditions, the iso-MGS titer in the engineered strain was significantly lower than that in the native producer, Streptomyces platensis NRRL 18993. To circumvent the problem of low iso-MGS titers and to expand the utility of this heterologous system for iso-MGS biosynthesis and engineering, systematic optimization of the fermentation medium was carried out. The effects of the major components of the cultivation medium, including carbon sources and organic and inorganic nitrogen sources, were investigated using a single-factor optimization method. Sucrose and yeast extract were determined to be the best carbon and organic nitrogen sources for iso-MGS production, whereas all of the inorganic nitrogen sources evaluated inhibited iso-MGS production to varying degrees. The final optimized R2YE production medium yielded iso-MGS at a titer of 86.5 mg/L, about 3.6-fold higher than the original R2YE medium and 1.5-fold higher than the native S. platensis NRRL 18993 producer. PMID:21625393
Application of the gravity search algorithm to multi-reservoir operation optimization
NASA Astrophysics Data System (ADS)
Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.
2016-12-01
Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy in solving benchmark functions and single-reservoir and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's solutions are closer to the optimal solutions than the GA's in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem, for which the global solution equals 1.213. For the four-reservoir problem, the GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97%. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in fewer function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate the superior performance of the GSA in optimizing general mathematical problems and the operation of reservoir systems.
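The GSA's core mechanics (fitness-derived masses, decaying gravitational constant, force-driven velocity updates) fit in a short sketch. The version below minimises the sphere benchmark; G0, alpha, and the search domain are common textbook defaults, not necessarily the study's settings, and the kbest elitism of the full algorithm is omitted.

```python
import math
import random

# Compact gravitational search algorithm (GSA) sketch: agents attract
# each other with forces proportional to fitness-derived masses under a
# gravitational "constant" that decays over the iterations.
def gsa(fitness, dim=2, n=20, iters=200, G0=100.0, alpha=20.0,
        lo=-5.0, hi=5.0, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    gbest, gbest_f = None, float("inf")
    for t in range(iters):
        f = [fitness(x) for x in X]
        for xi, fi in zip(X, f):
            if fi < gbest_f:
                gbest_f, gbest = fi, xi[:]
        fb, fw = min(f), max(f)
        m = [(fi - fw) / (fb - fw - 1e-12) for fi in f]   # best agent -> 1
        msum = sum(m) + 1e-12
        M = [mi / msum for mi in m]
        G = G0 * math.exp(-alpha * t / iters)             # decaying gravity
        for i in range(n):
            a = [0.0] * dim
            for j in range(n):
                if i == j:
                    continue
                R = math.dist(X[i], X[j]) + 1e-12
                for d in range(dim):
                    a[d] += rng.random() * G * M[j] * (X[j][d] - X[i][d]) / R
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + a[d]
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
    return gbest, gbest_f

best, best_f = gsa(lambda x: sum(v * v for v in x))
```

For reservoir operation the fitness function would instead score a release schedule against storage, demand, and continuity constraints; the search dynamics are unchanged.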
Wu, Qixue; Snyder, Karen Chin; Liu, Chang; Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Wen, Ning
2016-09-30
Treatment of patients with multiple brain metastases using single-isocenter volumetric modulated arc therapy (VMAT) has been shown to decrease treatment time, at the cost of a larger low-dose spread to normal brain tissue. We have developed an efficient Projection Summing Optimization Algorithm to optimize the treatment geometry in order to reduce the dose to normal brain tissue for radiosurgery of multiple metastases with single-isocenter VMAT. The algorithm: (a) measures the coordinates of the outer boundary points of each lesion to be treated using the Eclipse Scripting Application Programming Interface, (b) determines the rotations of couch, collimator, and gantry using three matrices about the cardinal axes, (c) projects the outer boundary points of the lesion onto the beam's-eye-view projection plane, (d) optimizes couch and collimator angles by selecting the least total unblocked area for each specific treatment arc, and (e) generates a treatment plan with the optimized angles. The results showed a significant reduction in the mean dose and low-dose volume to normal brain, while maintaining similar treatment plan quality, for the thirteen patients treated previously. The algorithm is flexible with regard to beam arrangements and can be integrated directly into the treatment planning system for clinical application.
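The geometry in steps (b)-(d) can be sketched in miniature: rotate lesion boundary points with rotation matrices about the cardinal axes, project onto a beam's-eye-view plane, and grid-search the collimator angle with the least unblocked (here, bounding-box) area. Lesion coordinates and the single-angle search below are hypothetical simplifications; the actual algorithm optimizes couch and collimator jointly per arc.

```python
import math

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def apply(M, p):
    return [sum(M[i][k] * p[k] for k in range(3)) for i in range(3)]

def bev_area(points, couch, gantry, coll):
    # couch and collimator rotate about z, gantry about y; dropping the
    # third coordinate gives the beam's-eye-view projection.
    proj = []
    for p in points:
        q = apply(rot_z(coll), apply(rot_y(gantry), apply(rot_z(couch), p)))
        proj.append((q[0], q[1]))
    xs = [x for x, _ in proj]
    ys = [y for _, y in proj]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

# boundary points of two hypothetical lesions offset along a diagonal
lesions = [(0, 0, 0), (1, 0, 0), (0, 1, 0),
           (3, 3, 1), (4, 3, 1), (3, 4, 1)]
best_coll = min((k * math.pi / 36 for k in range(36)),
                key=lambda c: bev_area(lesions, 0.0, 0.0, c))
```

Aligning the collimator with the diagonal joining the two lesions roughly halves the unblocked area in this example, which is exactly the effect the algorithm exploits to spare normal brain.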
NASA Astrophysics Data System (ADS)
Tiwari, Shivendra N.; Padhi, Radhakant
2018-01-01
Following the philosophy of adaptive optimal control, a neural network-based state feedback optimal control synthesis approach is presented in this paper. First, accounting for a nominal system model, a single network adaptive critic (SNAC) based multi-layered neural network (called NN1) is synthesised offline. Then, another linear-in-weight neural network (called NN2) is trained online and augmented to NN1 in such a manner that their combined output represents the desired optimal costate for the actual plant. To do this, the nominal model needs to be updated online to adapt to the actual plant, which is done by synthesising yet another linear-in-weight neural network (called NN3) online. NN3 is trained using the error information between the nominal and actual states, with the necessary Lyapunov stability analysis carried out using a Sobolev norm based Lyapunov function. This helps train NN2 successfully to capture the required optimal relationship. The overall architecture is named 'Dynamically Re-optimised Single Network Adaptive Critic (DR-SNAC)'. Numerical results for two motivating illustrative problems are presented, including a comparison with the closed-form solution for one problem, which clearly demonstrates the effectiveness and benefit of the proposed approach.
[Imaging center - optimization of the imaging process].
Busch, H-P
2013-04-01
Hospitals around the world are under increasing pressure to optimize the economic efficiency of treatment processes. Imaging is responsible for a great part of the success, but also of the costs, of treatment. In routine work, an excessive supply of imaging methods leads to an "as well as" strategy up to the limit of capacity, without critical reflection. Exams that have no predictable influence on the clinical outcome are an unjustified burden for the patient; they are useless and threaten the financial situation and existence of the hospital. In recent years the focus of process optimization was exclusively on the quality and efficiency of single examinations. In the future, critical discussion of the effectiveness of single exams in relation to the clinical outcome will become more important. Unnecessary exams can be avoided only if, in addition to the optimization of single exams (efficiency), there is an optimization strategy for the total imaging process (efficiency and effectiveness). This requires a new definition of processes (Imaging Pathway), new organizational structures (Imaging Center), and a new way of thinking on the part of the medical staff. Motivation has to shift from the gratification of performed exams to the gratification of process quality (medical quality, service quality, economics), including the avoidance of additional (unnecessary) exams. © Georg Thieme Verlag KG Stuttgart · New York.
Dimensions of design space: a decision-theoretic approach to optimal research design.
Conti, Stefano; Claxton, Karl
2009-01-01
Bayesian decision theory can be used not only to establish the optimal sample size and its allocation in a single clinical study but also to identify an optimal portfolio of research combining different types of study design. Within a single study, the highest societal payoff to proposed research is achieved when its sample sizes and allocation between available treatment options are chosen to maximize the expected net benefit of sampling (ENBS). Where a number of different types of study informing different parameters in the decision problem could be conducted, the simultaneous estimation of ENBS across all dimensions of the design space is required to identify the optimal sample sizes and allocations within such a research portfolio. This is illustrated through a simple example of a decision model of zanamivir for the treatment of influenza. The possible study designs include: 1) a single trial of all the parameters, 2) a clinical trial providing evidence only on clinical endpoints, 3) an epidemiological study of natural history of disease, and 4) a survey of quality of life. The possible combinations, samples sizes, and allocation between trial arms are evaluated over a range of cost-effectiveness thresholds. The computational challenges are addressed by implementing optimization algorithms to search the ENBS surface more efficiently over such large dimensions.
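The sample-size dimension of this design space can be illustrated with a toy ENBS maximisation. The EVSI curve below (EVPI scaled by n/(n+h)) and all monetary figures are invented stand-ins, not the zanamivir model from the paper, where EVSI would come from a full value-of-information computation.

```python
# Toy ENBS: value of sample information grows with diminishing returns
# in n, while sampling cost grows linearly; the optimum balances them.
def enbs(n, evpi=1.0e6, h=50.0, fixed=5.0e4, per_patient=500.0):
    evsi = evpi * n / (n + h)      # diminishing information gain
    return evsi - (fixed + per_patient * n)

# grid search over candidate trial sizes
best_n = max(range(0, 2001), key=enbs)
```

At the optimum the marginal value of one more patient equals the marginal sampling cost; research is only worthwhile at all where the maximised ENBS is positive, which is the decision rule the portfolio analysis applies across every candidate design.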
Westerwalbesloh, Christoph; Grünberger, Alexander; Stute, Birgit; Weber, Sophie; Wiechert, Wolfgang; Kohlheyer, Dietrich; von Lieres, Eric
2015-11-07
A microfluidic device for microbial single-cell cultivation of bacteria was modeled and simulated using COMSOL Multiphysics. The liquid velocity field and the mass transfer within the supply channels and cultivation chambers were calculated to gain insight into the distribution of supplied nutrients and metabolic products secreted by the cultivated bacteria. The goal was to identify potential substrate limitations or product accumulations within the cultivation device. The metabolic uptake and production rates, colony size, and growth medium composition were varied, covering a wide range of operating conditions. Simulations with glucose as substrate did not show limitations within the typically used concentration range, but for alternative substrates limitations could not be ruled out. This lays the foundation for further studies and the optimization of existing picoliter bioreactor systems.
Kumar, S Chaitanya; Samanta, G K; Ebrahim-Zadeh, M
2009-08-03
Characteristics of high-power, narrow-linewidth, continuous-wave (cw) green radiation obtained by simple single-pass second-harmonic generation (SHG) of a cw ytterbium fiber laser at 1064 nm in the nonlinear crystals PPKTP and MgO:sPPLT are studied and compared. Temperature tuning and SHG power scaling up to nearly 10 W for input fundamental power levels up to 30 W are performed. Various contributions to thermal effects in both crystals, which limit the SHG conversion efficiency, are studied. Optimal focusing conditions and thermal management schemes are investigated to maximize SHG performance in MgO:sPPLT. Stable green output power and high spatial beam quality, with M^2 < 1.33 and M^2 < 1.34, are achieved in MgO:sPPLT and PPKTP, respectively.
Dal Santo, Vladimiro; Liguori, Francesca; Pirovano, Claudio; Guidotti, Matteo
2010-05-26
Nanostructured single-site heterogeneous catalysts possess the advantages of classical solid catalysts, in terms of easy recovery and recycling, together with a defined tailored chemical and steric environment around the catalytically active metal site. The use of inorganic oxide supports with selected shape and porosity at a nanometric level may have a relevant impact on the regio- and stereochemistry of the catalytic reaction. Analogously, by choosing the optimal preparation techniques to obtain spatially isolated and well-characterised active sites, it is possible to achieve performances that are comparable to (or, in the most favourable cases, better than) those obtained with homogeneous systems. Such catalysts are therefore particularly suitable for the transformation of highly-functionalised fine chemicals and some relevant examples where high chemo-, regio- and stereoselectivity are crucial will be described.
NASA Astrophysics Data System (ADS)
Goltz, T.; Kamusella, S.; Jeevan, H. S.; Gegenwart, P.; Luetkens, H.; Materne, P.; Spehling, J.; Sarkar, R.; Klauss, H.-H.
2014-12-01
We present the results of a local-probe study on EuFe2(As1-xPx)2 single crystals with x = 0.13, 0.19, and 0.28 by means of muon spin rotation and 57Fe Mössbauer spectroscopy. We focus our discussion on the sample with x = 0.19, i.e., at the optimal substitution level, where bulk superconductivity (TSC = 28 K) sets in above the static europium order (TEu = 20 K) but well below the onset of the iron antiferromagnetic (AFM) transition (~100 K). We find enhanced spin dynamics in the Fe sublattice closely above TSC and propose that these are related to enhanced Eu fluctuations due to the evident coupling of the two sublattices observed in our experiments.
NASA Astrophysics Data System (ADS)
Gagnon, Hugo
This thesis represents a step toward bringing geometry parameterization and control on par with the disciplinary analyses involved in shape optimization, particularly high-fidelity aerodynamic shape optimization. Central to the proposed methodology is the non-uniform rational B-spline, used here to develop a new geometry generator and geometry control system applicable to the aerodynamic design of both conventional and unconventional aircraft. The geometry generator adopts a component-based approach, where any number of predefined but modifiable (parametric) wing, fuselage, junction, etc., components can be arbitrarily assembled to generate the outer mold line of an aircraft geometry. A unique Python-based user interface incorporating an interactive OpenGL windowing system is proposed. Together, these tools allow for the generation of high-quality, C2-continuous (or higher), and customized aircraft geometry with fast turnaround. The geometry control system tightly integrates shape parameterization with volume mesh movement using a two-level free-form deformation approach. The framework is augmented with axial curves, which are shown to be flexible and efficient at parameterizing wing systems of arbitrary topology. A key aspect of this methodology is that very large shape deformations can be achieved with only a few, intuitive control parameters. Shape deformation consumes a few tenths of a second on a single processor and surface sensitivities are machine accurate. The geometry control system is implemented within an existing aerodynamic optimizer comprising a flow solver for the Euler equations and a sequential quadratic programming optimizer. Gradients are evaluated exactly with discrete-adjoint variables.
The algorithm is first validated by recovering an elliptical lift distribution on a rectangular wing, and then demonstrated through the exploratory shape optimization of a three-pronged feathered winglet leading to a span efficiency of 1.22 under a height-to-span ratio constraint of 0.1. Finally, unconventional aircraft configurations sized for a regional mission are compared against a conventional baseline. Each aircraft is optimized by varying wing section and wing planform (excluding span) under lift and trim constraints at a single operating point. Based on inviscid pressure drag, the box-wing, C-tip blended-wing-body, and braced-wing configurations considered here are respectively 22%, 25%, and 45% more efficient than the tube-and-wing configuration.
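The free-form deformation idea at the heart of the geometry control system can be shown in miniature. The sketch below is a single-level Bernstein FFD in 2D, not the thesis's two-level B-spline framework with axial curves; it only demonstrates the basic mechanism of moving embedded points by displacing lattice control points.

```python
import math

def bernstein(n, i, t):
    return math.comb(n, i) * t ** i * (1 - t) ** (n - i)

def ffd(points, lattice):
    # lattice: (m+1) x (n+1) grid of 2D control points over the unit square;
    # each embedded point (s, t) is a Bernstein-weighted blend of them.
    m, n = len(lattice) - 1, len(lattice[0]) - 1
    out = []
    for s, t in points:
        x = y = 0.0
        for i in range(m + 1):
            for j in range(n + 1):
                w = bernstein(m, i, s) * bernstein(n, j, t)
                x += w * lattice[i][j][0]
                y += w * lattice[i][j][1]
        out.append((x, y))
    return out

# control points at their parametric positions reproduce the input exactly
identity = [[(i / 2, j / 2) for j in range(3)] for i in range(3)]
bumped = [row[:] for row in identity]
bumped[1][1] = (0.5, 0.7)            # lift the centre control point
```

Moving the single centre control point smoothly deforms every embedded point in its basis support, which is why a handful of intuitive control parameters suffice for very large shape changes; surface sensitivities follow analytically from the (polynomial) basis functions.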
Fernández, Elena; Vidal, Lorena; Canals, Antonio
2017-11-23
A new, fast, easy to handle, and environmentally friendly magnetic headspace single-drop microextraction (Mag-HS-SDME) based on a magnetic ionic liquid (MIL) as an extractant solvent is presented. A small drop of the MIL 1-ethyl-3-methylimidazolium tetraisothiocyanatocobaltate(II) ([Emim]2[Co(NCS)4]) is located on one end of a small neodymium magnet to extract nine chlorobenzenes (1,2-dichlorobenzene, 1,3-dichlorobenzene, 1,4-dichlorobenzene, 1,2,3-trichlorobenzene, 1,2,4-trichlorobenzene, 1,3,5-trichlorobenzene, 1,2,3,4-tetrachlorobenzene, 1,2,4,5-tetrachlorobenzene, and pentachlorobenzene) as model analytes from water samples prior to thermal desorption-gas chromatography-mass spectrometry determination. A multivariate optimization strategy was employed to optimize experimental parameters affecting Mag-HS-SDME. The method was evaluated under optimized extraction conditions (i.e., sample volume, 20 mL; MIL volume, 1 μL; extraction time, 10 min; stirring speed, 1500 rpm; and ionic strength, 15% NaCl (w/v)), obtaining a linear response from 0.05 to 5 μg L-1 for all analytes. The repeatability of the proposed method was evaluated at 0.7 and 3 μg L-1 spiking levels and coefficients of variation ranged between 3 and 18% (n = 3). Limits of detection were in the order of nanograms per liter, ranging from 4 ng L-1 for 1,4-dichlorobenzene and 1,2,3,4-tetrachlorobenzene to 8 ng L-1 for 1,2,4,5-tetrachlorobenzene. Finally, tap water, pond water, and wastewater were selected as real water samples to assess the applicability of the method. Relative recoveries varied between 82 and 114%, showing negligible matrix effects. Graphical abstract: Magnetic headspace single-drop microextraction followed by thermal desorption-gas chromatography-mass spectrometry.