On Matrices, Automata, and Double Counting
NASA Astrophysics Data System (ADS)
Beldiceanu, Nicolas; Carlsson, Mats; Flener, Pierre; Pearson, Justin
Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables M, with the same constraint defined by a finite-state automaton A on each row of M and a global cardinality constraint gcc on each column of M. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the gcc constraints from the automaton A. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We evaluate the impact of our methods on a large set of nurse rostering problem instances.
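A minimal sketch (Python, with illustrative matrix sizes and an invented occurrence bound) of the double-counting identity that the derived necessary conditions build on: counting occurrences of a value over rows and over columns of the same matrix must agree, so bounds that a row automaton imposes on row-wise counts translate into linear conditions on the column cardinality variables.

```python
import numpy as np

# Double-counting sketch: row-wise and column-wise totals of any value v
# over the same matrix M are equal. If the row automaton A only accepts
# rows containing between lo and hi occurrences of v, the gcc cardinality
# variables c_j (count of v in column j) must satisfy
#     n_rows * lo <= sum_j c_j <= n_rows * hi,
# a linear necessary condition. Sizes and bounds here are illustrative.

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(5, 7))               # 5 rows, 7 columns, values 0..2

for v in range(3):
    row_total = sum(int((row == v).sum()) for row in M)    # row side (automaton)
    col_total = sum(int((col == v).sum()) for col in M.T)  # column side (gcc)
    assert row_total == col_total
    print(f"value {v}: {row_total} occurrences counted both ways")
```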
A depth-first search algorithm to compute elementary flux modes by linear programming.
Quek, Lake-Ee; Nielsen, Lars K
2014-07-30
The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is nearly impossible. Even for moderately sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs exhaustively. Constraints can be introduced to directly generate the subset of EFMs satisfying them. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment on computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description implementation, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
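As a hedged illustration of the kind of LP feasibility test such a depth-first search can branch on, the sketch below asks whether any nonzero steady-state flux survives after a search branch suppresses a set of reactions; the toy network, the pinning normalization, and the all-irreversible reactions are simplifying assumptions, not the paper's actual elementarity test.

```python
import numpy as np
from scipy.optimize import linprog

# LP feasibility test for a search branch: steady state S v = 0 with
# v >= 0 (all reactions irreversible here), reactions in zero_set forced
# to zero, and one reaction pinned to 1 to exclude the trivial solution.

def flux_feasible(S, zero_set, pinned):
    n = S.shape[1]
    bounds = [(0.0, None)] * n
    for j in zero_set:
        bounds[j] = (0.0, 0.0)          # suppressed on this branch
    bounds[pinned] = (1.0, 1.0)         # normalization, rules out v = 0
    res = linprog(c=np.zeros(n), A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=bounds, method="highs")
    return res.status == 0              # 0 = feasible optimum found

# Toy chain: R0: -> A, R1: A -> B, R2: B ->  (rows = metabolites A, B)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
print(flux_feasible(S, zero_set=[],  pinned=0))   # True: v = (1, 1, 1)
print(flux_feasible(S, zero_set=[1], pinned=0))   # False: blocking A->B kills flux
```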
NASA Astrophysics Data System (ADS)
Li, Shuo; Wang, Hui; Wang, Liyong; Yu, Xiangzhou; Yang, Le
2018-01-01
The uneven illumination phenomenon reduces the quality of remote sensing images and interferes with subsequent processing and applications. A variational method based on Retinex with double-norm hybrid constraints for uneven illumination correction is proposed. The L1 norm and the L2 norm are adopted to constrain the textures and details of the reflectance image and the smoothness of the illumination image, respectively. The problem of separating the illumination image from the reflectance image is transformed into finding the optimal solution of the variational model. To accelerate the solution, the split Bregman method is used to decompose the variational model into three subproblems, which are calculated by alternating iteration. Two groups of experiments are conducted on two synthetic images and three real remote sensing images. Compared with the variational Retinex method with a single-norm constraint and the Mask method, the proposed method performs better in both visual evaluation and quantitative measurements. The proposed method can effectively eliminate uneven illumination while maintaining the textures and details of the remote sensing image. Moreover, the proposed method using the split Bregman solver is more than 10 times faster than the same model solved with the steepest descent method.
Multiple quay cranes scheduling for double cycling in container terminals
Chu, Yanling; Zhang, Xiaoju; Yang, Zhongzhen
2017-01-01
Double cycling is an efficient tool to increase the efficiency of quay crane (QC) in container terminals. In this paper, an optimization model for double cycling is developed to optimize the operation sequence of multiple QCs. The objective is to minimize the makespan of the ship handling operation considering the ship balance constraint. To solve the model, an algorithm based on Lagrangian relaxation is designed. Finally, we compare the efficiency of the Lagrangian relaxation based heuristic with the branch-and-bound method and a genetic algorithm using instances of different sizes. The results of numerical experiments indicate that the proposed model can effectively reduce the unloading and loading times of QCs. The effects of the ship balance constraint are more notable when the number of QCs is high. PMID:28692699
Bazargan-Lari, Y; Eghtesad, M; Khoogar, A; Mohammad-Zadeh, A
2014-09-01
Despite some successful dynamic simulation studies of the self-impact double pendulum (SIDP), used to model humanoid robot legs or arms, limited information is available about the control of single-leg locomotion. The main goal of this research is to improve the reliability of models of mammalian leg locomotion and to build more elaborate models close to natural movements by modeling the swing leg as a SIDP. This paper also presents the control design for a SIDP by a nonlinear model-based control method. To achieve this goal, available data on normal human gait are taken as the desired trajectories of the hip and knee joints. The model is characterized by the constraint that occurs at the knee joint (the lower joint of the model) in both the dynamic modeling and the control design. Since the system dynamics is nonlinear, the MIMO input-output feedback linearization method is employed for control purposes. The first constraint in the forward impact simulation occurs at 0.5 rad, where the speed of the upper link increases to 2.5 rad/s and the speed of the lower link is reduced to -5 rad/s. The subsequent constraints occur rather moderately. In the simulation with both backward and forward constraints, the backward impact occurs at -0.5 rad, and the speeds of the upper and lower links increase to 2.2 and 1.5 rad/s, respectively. The designed controller performed suitably well and regulated the system accurately.
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2018-03-01
Conventional engineering optimization problems involving uncertainties are based on probabilistic models. However, a probabilistic model may be unavailable when there is insufficient objective information to construct precise probability distributions of the uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for uncertain structural-acoustic systems based on the fuzzy set model, which can be constructed from expert opinions. The objective of robust design is to optimize the expectation and the variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, whereby the optimization problem is transformed into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.
Canonical formulation and conserved charges of double field theory
Naseer, Usman
2015-10-26
We provide the canonical formulation of double field theory. It is shown that this dynamics is subject to primary and secondary constraints. The Poisson bracket algebra of secondary constraints is shown to close on-shell according to the C-bracket. We also give a systematic way of writing boundary integrals in doubled geometry. Finally, by including appropriate boundary terms in the double field theory Hamiltonian, expressions for conserved energy and momentum of an asymptotically flat doubled space-time are obtained and applied to a number of solutions.
Spontaneous formation of non-uniform double helices for elastic rods under torsion
NASA Astrophysics Data System (ADS)
Li, Hongyuan; Zhao, Shumin; Xia, Minggang; He, Siyu; Yang, Qifan; Yan, Yuming; Zhao, Hanqiao
2017-02-01
The spontaneous formation of double helices in filaments under torsion is common and significant. For example, research on the supercoiling of DNA is helpful for understanding its replication and transcription. Similar double helices appear in carbon nanotube yarns, cables, telephone wires and so forth. We noticed that non-uniform double helices can be produced by the surface friction induced by self-contact. Therefore, an ideal model was presented to investigate the formation of double helices in elastic rods under torque. A general equilibrium condition, valid for both smooth and rough surfaces, is derived using the variational method. By adding further constraints, the smooth-surface and rough-surface situations are each investigated in detail. Additionally, the model shows that the specific process of twisting and slackening the rod determines the surface friction and hence influences the configuration of the double helix formed by rods with rough surfaces. Based on this principle, a method of manufacturing double helices with designed configurations was proposed and demonstrated. Finally, experiments were performed to verify the model, and the results agreed well with the theory.
On the motion of one-dimensional double pendulum
NASA Astrophysics Data System (ADS)
Burian, S. N.; Kalnitsky, V. S.
2018-05-01
A two-dimensional dynamic Lagrangian system, a double mathematical pendulum with one special constraint, is considered. The configuration spaces for the given constraints (ellipses) are studied. Diagrams of the paths and of the constraint reactions during motion along them are shown. Calculations are given for the case of transversal intersection and for the case of tangency.
Simultaneous multislice refocusing via time optimal control.
Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf
2018-02-09
We address the joint design of minimum-duration RF pulses and slice-selective gradient shapes for MRI via time-optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time-optimal control problem with inequality constraints describing the refocusing quality and the physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near-global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to PINS (power independent of the number of slices) pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum-duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging sequences or the echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.
Towards weakly constrained double field theory
NASA Astrophysics Data System (ADS)
Lee, Kanghoon
2016-08-01
We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint of double field theory. We show that the X-ray (Radon) transform on a torus is well suited for describing weakly constrained double fields, and that any weakly constrained field can be represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level-matching constraint. Based on this formalism, we construct a consistent gauge transformation and a gauge-invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.
Advanced data assimilation in strongly nonlinear dynamical systems
NASA Technical Reports Server (NTRS)
Miller, Robert N.; Ghil, Michael; Gauthiez, Francois
1994-01-01
Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence. Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided. Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method, based on an empirical statistical model derived from a Monte Carlo simulation, is formulated and shown to work very well. Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.
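A minimal sketch of the scalar version of the filtering problem described above: an extended Kalman filter tracking the stochastically forced double well dx = (x - x^3)dt + q dW from noisy, intermittent observations. The linearization F = 1 - 3x^2 and all parameter values are illustrative; whether the filter follows a regime transition depends on the observation cadence and noise levels, as the abstract notes.

```python
import numpy as np

# EKF on the double well: Euler-Maruyama truth, linearized covariance
# propagation dP/dt = 2 F P + q^2 with F = 1 - 3 x^2, scalar updates.

rng = np.random.default_rng(1)
dt, q, r, obs_every = 0.01, 0.25, 0.1, 50        # step, forcing, obs noise, cadence

x_true, x_est, P = -1.0, -1.0, 0.1               # start in the left well
for k in range(5000):
    x_true += (x_true - x_true**3) * dt + q * np.sqrt(dt) * rng.standard_normal()
    F = 1.0 - 3.0 * x_est**2                     # local linearization
    x_est += (x_est - x_est**3) * dt             # nonlinear mean forecast
    P += dt * (2.0 * F * P + q**2)               # linearized covariance forecast
    if k % obs_every == 0:                       # analysis step (H = 1)
        y = x_true + r * rng.standard_normal()
        K = P / (P + r**2)                       # Kalman gain
        x_est += K * (y - x_est)
        P *= (1.0 - K)

print(f"truth {x_true:+.2f}, EKF estimate {x_est:+.2f}")
```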
NASA Astrophysics Data System (ADS)
Seifert, C.; Lobell, D. B.
2014-12-01
In adapting U.S. agriculture to the climate of the 21st century, multiple cropping presents a unique opportunity to help offset projected negative trends in agricultural production while moving critical crop yield formation periods outside the hottest months of the year. Critical constraints on this practice include moisture availability and, more importantly, growing season length. We review evidence that this last constraint has weakened over the past quarter century, allowing more winter wheat/soybean double cropping in previously phenologically constrained areas. We also carry this pattern forward to 2100, showing a 126% to 211% increase in the area phenologically suitable for double cropping under the RCP4.5 and RCP8.5 scenarios, respectively. These results suggest that climate change will relieve phenological constraints on wheat-soy double cropping systems over much of the United States, changing production patterns and crop rotations as areas become suitable for the practice.
Scheduling double round-robin tournaments with divisional play using constraint programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach remains reasonably fast for even larger league sizes. Experimental evaluation shows that the integrated approach takes considerably less computational effort to schedule Elitserien than the previous decomposed approach. (C) 2016 Elsevier B.V. All rights reserved.
Using large spectroscopic surveys to test the double degenerate model for Type Ia supernovae
NASA Astrophysics Data System (ADS)
Breedt, E.; Steeghs, D.; Marsh, T. R.; Gentile Fusillo, N. P.; Tremblay, P.-E.; Green, M.; De Pasquale, S.; Hermes, J. J.; Gänsicke, B. T.; Parsons, S. G.; Bours, M. C. P.; Longa-Peña, P.; Rebassa-Mansergas, A.
2017-07-01
An observational constraint on the contribution of double degenerates to Type Ia supernovae requires multiple radial velocity measurements of ideally thousands of white dwarfs. This is because only a small fraction of the double degenerate population is massive enough, with orbital periods short enough, to be considered viable Type Ia progenitors. We show how the radial velocity information available from public surveys such as the Sloan Digital Sky Survey can be used to pre-select targets for variability, leading to a 10-fold reduction in observing time required compared to an unranked or random survey. We carry out Monte Carlo simulations to quantify the detection probability of various types of binaries in the survey and show that this method, even in the most pessimistic case, doubles the survey size of the largest survey to date (the SPY Survey) in less than 15 per cent of the required observing time. Our initial follow-up observations corroborate the method, yielding 15 binaries so far (eight known and seven new), as well as orbital periods for four of the new binaries.
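A hedged sketch of the pre-selection idea: with a handful of survey epochs per target, a chi-squared-like statistic for RV scatter about the mean flags short-period binaries long before a full orbit solution is possible. Epoch counts, errors, and the three example systems are invented for illustration.

```python
import numpy as np

# Rank targets by excess RV scatter relative to per-epoch errors.

rng = np.random.default_rng(2)

def rv_epochs(K, period_days, times, sigma):
    """Noisy RVs (km/s) of a circular orbit with semi-amplitude K."""
    phase = 2 * np.pi * times / period_days + rng.uniform(0, 2 * np.pi)
    return K * np.sin(phase) + sigma * rng.standard_normal(times.size)

times = np.sort(rng.uniform(0, 60, size=4))      # four epochs over 60 days
sigma = 15.0                                     # per-epoch error, km/s

for name, K, P in [("short-period DWD", 250.0, 0.04),
                   ("wide binary",       20.0, 300.0),
                   ("single WD",          0.0, 1.0)]:
    rv = rv_epochs(K, P, times, sigma)
    stat = np.sum(((rv - rv.mean()) / sigma) ** 2)   # scatter statistic
    print(f"{name:17s} variability statistic: {stat:8.1f}")
```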
NASA Astrophysics Data System (ADS)
Sun, Ning; Wu, Yiming; Chen, He; Fang, Yongchun
2018-03-01
Underactuated cranes play an important role in modern industry. In most practical applications, crane systems exhibit significant double pendulum characteristics, which makes the control problem quite challenging. Moreover, most existing planners/controllers obtained with standard methods/techniques for double pendulum cranes cannot minimize the energy consumption when fulfilling the transportation tasks. Therefore, from a practical perspective, this paper proposes an energy-optimal solution for transportation control of double pendulum cranes. By applying the presented approach, the transportation objective, including fast trolley positioning and swing elimination, is achieved with minimized energy consumption, and the residual oscillations are suppressed effectively with all the state constraints being satisfied during the entire transportation process. As far as we know, this is the first energy-optimal solution for transportation control of underactuated double pendulum cranes with various state and control constraints. Hardware experimental results are included to verify the effectiveness of the proposed approach, whose superior performance is demonstrated by experimental comparison with several other controllers.
Harmonic mode-locking using the double interval technique in quantum dot lasers.
Li, Yan; Chiragh, Furqan L; Xin, Yong-Chun; Lin, Chang-Yi; Kim, Junghoon; Christodoulou, Christos G; Lester, Luke F
2010-07-05
Passive harmonic mode-locking in a quantum dot laser is realized using the double interval technique, which uses two separate absorbers to stimulate a specific higher-order repetition rate relative to the fundamental. Operating alone, each absorber would reinforce a lower harmonic frequency, but operating together they produce the harmonic corresponding to their least common multiple. Mode-locking at a nominal 60 GHz repetition rate, which is the 10th harmonic of the fundamental frequency of the device, is achieved unambiguously despite the constraint of a uniformly segmented, multi-section device layout. The diversity of repetition rates available with this method is also discussed.
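A one-line worked example of the least-common-multiple selection rule: if one absorber alone favors the 2nd harmonic and the other the 5th (harmonic numbers assumed here for illustration), together they select the 10th, which at the 6 GHz fundamental implied by the quoted figures gives the nominal 60 GHz repetition rate.

```python
from math import lcm

# Double-interval selection rule: joint operation picks lcm(h1, h2).
f_fundamental_GHz = 6.0          # implied by 60 GHz being the 10th harmonic
h1, h2 = 2, 5                    # harmonics each absorber favors alone (assumed)
h_joint = lcm(h1, h2)
print(h_joint, h_joint * f_fundamental_GHz, "GHz")   # 10 60.0 GHz
```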
Evaluating Emergent Constraints for Equilibrium Climate Sensitivity
Caldwell, Peter M.; Zelinka, Mark D.; Klein, Stephen A.
2018-04-23
Emergent constraints are quantities that are observable from current measurements and have skill predicting future climate. This study explores 19 previously proposed emergent constraints related to equilibrium climate sensitivity (ECS; the global-average equilibrium surface temperature response to CO2 doubling). Several constraints are shown to be closely related, emphasizing the importance of careful understanding of proposed constraints. A new method is presented for decomposing the correlation between an emergent constraint and ECS into terms related to physical processes and geographical regions. Using this decomposition, one can determine whether the processes and regions explaining correlation with ECS correspond to the physical explanation offered for the constraint. Shortwave cloud feedback is generally found to be the dominant contributor to correlations with ECS because it is the largest source of intermodel spread in ECS. In all cases, correlation results from interaction between a variety of terms, reflecting the complex nature of ECS and the fact that feedback terms and forcing are themselves correlated with each other. For 4 of the 19 constraints, the originally proposed explanation for correlation is borne out by our analysis. These four constraints all predict relatively high climate sensitivity. The credibility of six other constraints is called into question owing to correlation with ECS coming mainly from unexpected sources and/or lack of robustness to changes in ensembles. Another six constraints lack a testable explanation and hence cannot be confirmed. Lastly, the fact that this study casts doubt upon more constraints than it confirms highlights the need for caution when identifying emergent constraints from small ensembles.
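The decomposition rests on the bilinearity of covariance: if ECS across an ensemble is written as a sum of feedback contributions, cov(metric, ECS) splits into one term per contribution. A minimal sketch on a synthetic ensemble (all numbers invented, standing in for CMIP output):

```python
import numpy as np

# cov(x, sum_j c_j) = sum_j cov(x, c_j): attribute a metric's covariance
# with ECS to individual feedback terms. The parts do not sum to exactly
# 100% here because ECS also contains an unattributed residual term.

rng = np.random.default_rng(3)
n = 30                                            # synthetic model ensemble
sw_cloud = rng.normal(0.6, 0.4, n)                # shortwave cloud term
lapse_wv = rng.normal(1.0, 0.1, n)                # lapse-rate/water-vapor term
ecs = 1.2 + sw_cloud + lapse_wv + rng.normal(0, 0.1, n)

metric = 0.8 * sw_cloud + rng.normal(0, 0.2, n)   # observable tied to SW cloud

total = np.cov(metric, ecs)[0, 1]
for name, term in [("sw_cloud", sw_cloud), ("lapse_wv", lapse_wv)]:
    share = np.cov(metric, term)[0, 1] / total
    print(f"{name}: {share:6.1%} of cov(metric, ECS)")
```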
Physics constraints on double-pulse LIA engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekdahl, Carl August Jr.
2015-05-20
The options for advanced-radiography double-pulse linear induction accelerators (LIAs) under consideration naturally fall into three categories that differ in the number of cells required. Since the two major physics issues, beam breakup (BBU) and corkscrew, also depend on the number of cells, it may be useful for the decision process to review the engineering consequences of beam physics constraints for each class. The LIAs can be categorized in three different ways, and this report compares the categories based upon the physics of their beams.
Correcting for the free energy costs of bond or angle constraints in molecular dynamics simulations
König, Gerhard; Brooks, Bernard R.
2014-01-01
Background Free energy simulations are an important tool in the arsenal of computational biophysics, allowing the calculation of thermodynamic properties of binding or enzymatic reactions. This paper introduces methods to increase the accuracy and precision of free energy calculations by calculating the free energy costs of constraints during post-processing. The primary purpose of employing constraints for these free energy methods is to increase the phase space overlap between ensembles, which is required for accuracy and convergence. Methods The free energy costs of applying or removing constraints are calculated as additional explicit steps in the free energy cycle. The new techniques focus on hard degrees of freedom and use both gradients and Hessian estimation. Enthalpy, vibrational entropy, and Jacobian free energy terms are considered. Results We demonstrate the utility of this method with simple classical systems involving harmonic and anharmonic oscillators, four-atomic benchmark systems, an alchemical mutation of ethane to methanol, and free energy simulations between alanine and serine. The errors for the analytical test cases are all below 0.0007 kcal/mol, and the accuracy of the free energy results of ethane to methanol is improved from 0.15 to 0.04 kcal/mol. For the alanine to serine case, the phase space overlaps of the unconstrained simulations range between 0.15 and 0.9%. The introduction of constraints increases the overlap up to 2.05%. On average, the overlap increases by 94% relative to the unconstrained value and precision is doubled. Conclusions The approach reduces errors arising from constraints by about an order of magnitude. Free energy simulations benefit from the use of constraints through enhanced convergence and higher precision. General Significance The primary utility of this approach is to calculate free energies for systems with disparate energy surfaces and bonded terms, especially in multi-scale molecular mechanics/quantum mechanics simulations. PMID:25218695
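A minimal numeric illustration of charging an extra leg of the thermodynamic cycle for a confinement term: here a stiff harmonic restraint on a 1D anharmonic bond stands in for the rigid constraints of the paper, and its free-energy cost is evaluated directly from configurational partition functions by quadrature. Potential forms and parameters are invented for illustration.

```python
import numpy as np

# dF = -kT ln(Z_restrained / Z_free), both Z's by simple quadrature.

kT = 0.593                                   # kcal/mol near 298 K
x = np.linspace(-1.0, 1.0, 20001)            # bond-length deviation grid
dx = x[1] - x[0]

U_free = 100.0 * x**2 + 30.0 * x**3          # anharmonic bond (illustrative)
U_rest = U_free + 500.0 * x**2               # plus a stiff harmonic restraint

Z_free = np.exp(-U_free / kT).sum() * dx     # configurational partition functions
Z_rest = np.exp(-U_rest / kT).sum() * dx
dF = -kT * np.log(Z_rest / Z_free)
print(f"free-energy cost of the restraint: {dF:.3f} kcal/mol")
```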
NASA Astrophysics Data System (ADS)
Liu, Chang; Lv, Xiangyu; Guo, Li; Cai, Lixia; Jie, Jinxing; Su, Kuo
2017-05-01
With the increasing penetration of distributed generation in the smart grid, the problems of growing power losses and of short-circuit capacity exceeding the rated capacity of circuit breakers become more serious. In this paper, a methodology (modified BPSO) is presented for network reconfiguration, based on a hybrid of Tabu Search and BPSO algorithms, which prevents local convergence and decreases calculation time by using double fitnesses to handle the constraints. Moreover, an average load simulated (ALS) method that takes load variation into account is proposed, in which the average load value is used instead of the actual load in the calculations. Finally, the simulation results of a case study confirm that the approach drastically decreases losses and clearly improves voltage profiles, while the short-circuit capacity is reduced below the breaking capacity of the circuit breakers. Power losses do not increase much even when the short-circuit capacity constraint is considered, and voltage profiles are better with this constraint included. The ALS method is simple and fast to compute.
Factorization of differential expansion for non-rectangular representations
NASA Astrophysics Data System (ADS)
Morozov, A.
2018-04-01
Factorization of the differential expansion (DE) coefficients for colored HOMFLY-PT polynomials of antiparallel double braids, originally discovered for rectangular representations R, is extended to the first non-rectangular representations, R = [2, 1] and R = [3, 1]. This increases the chances that such factorization takes place for generic R, thus fixing the shape of the DE. We illustrate the power of the method by conjecturing the DE-induced expression for double-braid polynomials for all R = [r, 1]. At variance with the rectangular case, the knowledge for double braids is not fully sufficient to deduce the exclusive Racah matrix S̄: the entries in the sectors with nontrivial multiplicities sum up and remain unseparated. Still, a considerable piece of the matrix is extracted directly, and its other elements can be found by solving the unitarity constraints.
Upscaling of Hydraulic Conductivity using the Double Constraint Method
NASA Astrophysics Data System (ADS)
El-Rawy, Mustafa; Zijl, Wouter; Batelaan, Okke
2013-04-01
The mathematics and modeling of flow through porous media play an increasingly important role in groundwater supply, subsurface contaminant remediation and petroleum reservoir engineering. In hydrogeology, hydraulic conductivity data are often collected at a scale smaller than the grid block dimensions of a groundwater model (e.g. MODFLOW). For instance, hydraulic conductivities determined from slug and packer tests are measured on the order of centimeters to meters, whereas numerical groundwater models require conductivities representative of tens to hundreds of meters of grid cell length. There is therefore a need for upscaling to decrease the number of grid blocks in a groundwater flow model. Moreover, models with relatively few grid blocks are simpler to apply, especially when the model has to run many times, as is the case when it is used to assimilate time-dependent data. Since the 1960s different methods have been used to transform a detailed description of the spatial variability of hydraulic conductivity into a coarser description. In this work we investigate a relatively simple but instructive approach, the Double Constraint Method (DCM), to identify the coarse-scale conductivities. Its main advantages are robustness and easy implementation, enabling computations to be based on any standard flow code with some post-processing added. The inversion step of the double constraint method is based on a first forward run with all known fluxes on the boundary and in the wells, followed by a second forward run based on the heads measured on the phreatic surface (i.e. measured in shallow observation wells) and in deeper observation wells. Upscaling, in turn, is inverse modeling (DCM) to determine conductivities in coarse-scale grid blocks from conductivities in fine-scale grid blocks, in such a way that the head and flux boundary conditions applied to the fine-scale model are also honored at the coarse scale. An application is presented for the Kleine Nete catchment, Belgium. As a result we identified coarse-scale conductivities while decreasing the number of grid blocks, with the advantage that a model run requires less computation time and memory. In addition, ranking of models was investigated.
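In its simplest 1D form the two forward runs of the DCM reduce to Darcy's law: take the flux from the flux-constrained run, the head drop from the head-constrained run, and their ratio gives the coarse-block conductivity. The sketch below (illustrative numbers, consistent synthetic data) shows that for layers in series this reproduces the harmonic mean.

```python
import numpy as np

# 1D Double Constraint Method: K_coarse = q * L_block / dh_block.

K_fine = np.array([2.0, 0.5, 10.0, 1.0])     # fine-scale conductivities (m/d)
L = np.full(4, 5.0)                          # segment lengths (m); one coarse block

q = 0.3                                      # run 1: prescribed boundary flux (m/d)
dh_block = q * np.sum(L / K_fine)            # run 2: measured head drop (m),
                                             # generated here from the fine model

K_coarse = q * L.sum() / dh_block            # DCM estimate for the block
K_harmonic = L.sum() / np.sum(L / K_fine)    # analytic series-flow answer
print(K_coarse, K_harmonic)                  # identical for consistent data
```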
Action detection by double hierarchical multi-structure space-time statistical matching model
NASA Astrophysics Data System (ADS)
Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang
2018-03-01
To handle the complex information in videos and the low efficiency of existing detectors, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain similarity matrices on both large and small scales, combining double hierarchical structural constraints in the model through both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, a multi-scale composite template extends the model to multi-view application. Experimental results of DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.
The Double Cone: A Mechanical Paradox or a Geometrical Constraint?
ERIC Educational Resources Information Center
Gallitto, Aurelio Agliolo; Fiordilino, Emilio
2011-01-01
In the framework of the Italian National Plan "Lauree Scientifiche" (PLS) in collaboration with secondary schools, we have investigated the mechanical paradox of the double cone. We have calculated the geometric condition for obtaining an upward movement. Based on this result, we have built a mechanical model with a double cone made of aluminum…
NASA Technical Reports Server (NTRS)
Yelle, Roger V.; Wallace, Lloyd
1989-01-01
A versatile and efficient technique for the solution of the resonance line scattering problem with frequency redistribution in planetary atmospheres is introduced. Similar to the doubling approach commonly used in monochromatic scattering problems, the technique has been extended to include the frequency dependence of the radiation field. Methods for solving problems with external or internal sources and coupled spectral lines are presented, along with comparison of some sample calculations with results from Monte Carlo and Feautrier techniques. The doubling technique has also been applied to the solution of resonance line scattering problems where the R-parallel redistribution function is appropriate, both neglecting and including polarization as developed by Yelle and Wallace (1989). With the constraint that the atmosphere is illuminated from the zenith, the only difficulty of consequence is that of performing precise frequency integrations over the line profiles. With that problem solved, it is no longer necessary to use the Monte Carlo method to solve this class of problem.
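A sketch of the doubling recursion in its simplest scalar (monochromatic, one-stream) form, before any of the frequency dependence the paper adds: two identical layers with reflectance r and transmittance t combine through the geometric series of interreflections, and repeated doubling builds a thick slab from an optically thin one. Starting values are illustrative.

```python
# Scalar doubling: combine two identical symmetric layers, then iterate.

def double_layer(r, t):
    denom = 1.0 - r * r                  # sum over back-and-forth reflections
    return r + t * r * t / denom, t * t / denom

r, t = 1e-4, 1.0 - 1e-4                  # thin, conservatively scattering layer
for _ in range(20):                      # 2**20 thin layers -> thick slab
    r, t = double_layer(r, t)
print(f"R = {r:.4f}, T = {t:.4f}, R + T = {r + t:.6f}")   # energy conserved
```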
Qu, Xin; Hall, Alex; DeAngelis, Anthony M.; ...
2018-01-11
Differences among climate models in equilibrium climate sensitivity (ECS; the equilibrium surface temperature response to a doubling of atmospheric CO2) remain a significant barrier to the accurate assessment of societally important impacts of climate change. Relationships between ECS and observable metrics of the current climate in model ensembles, so-called emergent constraints, have been used to constrain ECS. Here a statistical method (including a backward selection process) is employed to achieve a better statistical understanding of the connections between four recently proposed emergent constraint metrics and individual feedbacks influencing ECS. The relationship between each metric and ECS is largely attributable to a statistical connection with shortwave low cloud feedback, the leading cause of intermodel ECS spread. This result bolsters confidence in some of the metrics, which had assumed such a connection in the first place. Additional analysis is conducted with a few thousand artificial metrics that are randomly generated but are well correlated with ECS. The relationships between the contrived metrics and ECS can also be linked statistically to shortwave cloud feedback. Thus, any proposed or forthcoming ECS constraint based on the current generation of climate models should be viewed as a potential constraint on shortwave cloud feedback, and physical links with that feedback should be investigated to verify that the constraint is real. Additionally, any proposed ECS constraint should not be taken at face value, since other factors influencing ECS besides shortwave cloud feedback could be systematically biased in the models.
Galerkin-collocation domain decomposition method for arbitrary binary black holes
NASA Astrophysics Data System (ADS)
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-05-01
We present a new computational framework for the Galerkin-collocation method on a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high-resolution calculations of initial data for two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass, and display features of the conformal factor for different masses, spins and linear momenta.
NASA Technical Reports Server (NTRS)
Pepin, T. J.
1977-01-01
The inversion methods used to determine the vertical profile of the extinction coefficient due to stratospheric aerosols from data measured during the ASTP/SAM solar occultation experiment are reported. The inversion methods include the onion-skin peel technique and methods of solving the Fredholm equation for the problem subject to smoothing constraints. The latter approach involves a double inversion scheme. Comparisons are made between the inverted results from the SAM experiment and near-simultaneous measurements made by lidar and balloon-borne dustsonde. The results are used to demonstrate the assumptions required to perform the inversions for aerosols.
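A minimal sketch of the onion-skin peel technique on simplified circular-shell geometry: each tangent ray's optical depth is a path-length-weighted sum of per-shell extinctions, giving a lower-triangular system that is inverted from the top shell downward. Shell spacing and extinction values are illustrative.

```python
import numpy as np

# Onion peel: tau = L @ k with L lower triangular; solve from the top down.

R0 = 6370.0                                   # planet radius (km)
radii = R0 + np.arange(40.0, 9.0, -5.0)       # shell boundaries, top first (km)
n = len(radii) - 1

L = np.zeros((n, n))                          # chord lengths through each shell
for i in range(n):                            # ray tangent at shell i's bottom
    rt = radii[i + 1]
    for j in range(i + 1):
        L[i, j] = 2.0 * (np.sqrt(radii[j]**2 - rt**2)
                         - np.sqrt(radii[j + 1]**2 - rt**2))

k_true = np.linspace(1e-4, 8e-4, n)           # per-shell extinction (1/km)
tau = L @ k_true                              # synthetic "measured" depths

k_est = np.linalg.solve(L, tau)               # the peel (back-substitution)
print(np.allclose(k_est, k_true))             # True
```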
Cunningham, David A.; Varnerin, Nicole; Machado, Andre; Bonnett, Corin; Janini, Daniel; Roelle, Sarah; Potter-Baker, Kelsey; Sankarasubramanian, Vishwanath; Wang, Xiaofeng; Yue, Guang; Plow, Ela B.
2016-01-01
Purpose To demonstrate, in a proof-of-concept study, whether potentiating ipsilesional higher motor areas (premotor cortex and supplementary motor area) augments and accelerates recovery associated with constraint-induced movement. Methods In a randomized, double-blinded pilot clinical study, 12 patients with chronic stroke were assigned to receive anodal transcranial direct current stimulation (tDCS) (n = 6) or sham (n = 6) to the ipsilesional higher motor areas during constraint-induced movement therapy. We assessed functional and neurophysiologic outcomes before and after 5 weeks of therapy. Results Only patients receiving tDCS demonstrated gains in function and dexterity. Gains were accompanied by an increase in excitability of the contralesional rather than the ipsilesional hemisphere. Conclusions Our proof-of-concept study provides early evidence that stimulating higher motor areas can help recruit the contralesional hemisphere in an adaptive role in cases of greater ipsilesional injury. Whether this early evidence of promise translates to remarkable gains in functional recovery compared to existing approaches of stimulation remains to be confirmed in large-scale clinical studies that can reasonably dissociate stimulation of higher motor areas from that of the traditional primary motor cortices. PMID:26484700
Geometric Stitching Method for Double Cameras with Weak Convergence Geometry
NASA Astrophysics Data System (ADS)
Zhou, N.; He, H.; Bao, Y.; Yue, C.; Xing, K.; Cao, S.
2017-05-01
In this paper, a new geometric stitching method is proposed that utilizes digital elevation model (DEM)-aided block adjustment to solve the relative orientation parameters for a dual camera with weak convergence geometry. A rational function model (RFM) with an affine transformation is chosen as the relative orientation model. To deal with the weak geometry, a reference DEM is used as an additional constraint in the block adjustment, which then only calculates the planimetric coordinates of tie points (TPs). The obtained affine transform coefficients are then used to generate a virtual grid and update the rational polynomial coefficients (RPCs) to complete the geometric stitching. Our proposed method was tested on GaoFen-2 (GF-2) dual-camera panchromatic (PAN) images. The test results show that the proposed method can achieve an accuracy of better than 0.5 pixel in planimetry and a seamless visual effect. For regions with small relief, when a global DEM with 1 km grid, SRTM with 90 m grid or ASTER GDEM V2 with 30 m grid replaces a 1 m grid DEM as the elevation constraint, there is almost no loss of accuracy. The test results prove the effectiveness and feasibility of the stitching method.
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models.
van Elburg, Ronald A J; van Ooyen, Arjen
2009-07-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on the time constants of the synaptic currents, which hamper its general applicability. This letter addresses this problem in two ways. First, we provide physical arguments demonstrating why these constraints on the time constants can be relaxed. Second, we give a formal proof showing which constraints can be abolished. As part of our formal proof, we introduce the generalized Carnevale-Hines lemma, a new tool for comparing double exponentials as they naturally occur in many cascaded decay systems, including receptor-neurotransmitter dissociation followed by channel closing. Through repeated application of the generalized lemma, we lift most of the original constraints on the time constants. Thus, we show that the Carnevale-Hines integration scheme for the integrate-and-fire model can be employed for simulating a much wider range of neuron and synapse types than was previously thought.
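A hedged sketch of the event-based idea: between events nothing needs time-stepping, because the membrane and the exponentially decaying synaptic drive have a closed-form solution. The denominator (1/tau_m - 1/tau_s) shows where time-constant constraints enter: the update degenerates when tau_m = tau_s, exactly the kind of restriction discussed above. Equations and parameters are illustrative, not the Carnevale-Hines scheme itself.

```python
import numpy as np

# Exact advance for  dv/dt = -v/tau_m + g,  dg/dt = -g/tau_s  (tau_m != tau_s).

def advance(v0, g0, dt, tau_m, tau_s):
    em, es = np.exp(-dt / tau_m), np.exp(-dt / tau_s)
    v = v0 * em + g0 * (es - em) / (1.0 / tau_m - 1.0 / tau_s)
    return v, g0 * es

v, g = 0.0, 1.0                       # state just after a synaptic event
for dt in [0.5, 1.2, 3.1]:            # irregular inter-event intervals (ms)
    v, g = advance(v, g, dt, tau_m=10.0, tau_s=2.0)
    print(f"after +{dt} ms: v = {v:.4f}, g = {g:.4f}")
```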
Earthquake mechanisms from linear-programming inversion of seismic-wave amplitude ratios
Julian, B.R.; Foulger, G.R.
1996-01-01
The amplitudes of radiated seismic waves contain far more information about earthquake source mechanisms than do first-motion polarities, but amplitudes are severely distorted by the effects of heterogeneity in the Earth. This distortion can be reduced greatly by using the ratios of amplitudes of appropriately chosen seismic phases, rather than simple amplitudes, but existing methods for inverting amplitude ratios are severely nonlinear and require computationally intensive searching methods to ensure that solutions are globally optimal. Searching methods are particularly costly if general (moment tensor) mechanisms are allowed. Efficient linear-programming methods, which do not suffer from these problems, have previously been applied to inverting polarities and wave amplitudes. We extend these methods to amplitude ratios, in which formulation an inequality constraint for an amplitude ratio takes the same mathematical form as a polarity observation. Three-component digital data for an earthquake at the Hengill-Grensdalur geothermal area in southwestern Iceland illustrate the power of the method. Polarities of P, SH, and SV waves, unusually well distributed on the focal sphere, cannot distinguish between diverse mechanisms, including a double couple. Amplitude ratios, on the other hand, clearly rule out the double-couple solution and require a large explosive isotropic component.
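The key point, that an amplitude-ratio bound is linear once polarities fix the signs, can be shown in a few lines: predicted amplitudes are linear in the mechanism parameters, a_i = g_i . m, so lo <= a_i/a_j <= hi with a_j > 0 becomes two one-sided linear inequalities of the same form as a polarity constraint. The toy kernels below are invented; a real study would use P/SH/SV excitation kernels on the focal sphere.

```python
import numpy as np
from scipy.optimize import linprog

# Feasibility LP: polarity a_SH >= eps plus ratio bounds on a_P / a_SH.

g_P  = np.array([1.0, -0.5,  0.2])      # toy amplitude kernels, 3 mechanism params
g_SH = np.array([0.3,  1.0, -0.4])

lo, hi = 0.5, 2.0                       # observed bounds on a_P / a_SH
eps = 1e-3                              # margin enforcing the strict sign
A_ub = np.vstack([-g_SH,                # a_SH >= eps
                  g_P - hi * g_SH,      # a_P <= hi * a_SH
                  lo * g_SH - g_P])     # a_P >= lo * a_SH
b_ub = np.array([-eps, 0.0, 0.0])

res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub,
              bounds=[(-1, 1)] * 3, method="highs")
print("a mechanism satisfying all constraints:", res.x)
```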
NASA Astrophysics Data System (ADS)
Sahara, D. P.; Widiyantoro, S.; Nugraha, A. D.; Sule, R.; Luehr, B. G.
2010-12-01
Seismic and volcanic activities in Central Java are closely related to the subduction of the Indo-Australian plate. In the MERapi AMphibious EXperiment (MERAMEX), a network of 169 seismographic stations was installed onshore and offshore in central Java and recorded 282 events during the operation. In this study, we present the results of relative hypocenter relocation using the Double Difference (DD) method to image the subduction zone beneath the volcanic chain in central Java. The DD method is an iterative procedure using least-squares optimization to determine high-resolution hypocenter locations over large distances. This relocation method uses absolute travel-time measurements and/or cross-correlation P- and S-wave differential travel-time measurements. The preliminary results of our study show that the algorithm collapses the diffuse event locations obtained from a previous study into a sharp image of the seismicity structure and reduces the residual travel-time errors significantly (7-60%). As a result, narrow regions of a double seismic zone correlated with the subducting slab can be determined more accurately. The dip angle of the slab increases gradually from almost horizontal beneath the offshore region to very steep (65-80 degrees) beneath the northern part of central Java. An aseismic gap at depths of 140-185 km is also depicted clearly. The next step of this ongoing research is to provide detailed quantitative constraints on the structures of the mantle wedge and crust beneath central Java and to image the ascending paths of fluids and partially molten materials below the volcanic arc by applying the Double-Difference tomography method (TomoDD).
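A toy sketch of one double-difference iteration in a uniform-velocity medium: the pairwise datum dr = (t_i^obs - t_j^obs) - (t_i^calc - t_j^calc) is linearized in the event coordinates and solved by least squares, so path errors common to both events cancel. Only relative positions are well resolved; the geometry, velocity, and 2D restriction are illustrative.

```python
import numpy as np

# Gauss-Newton on double-difference data for one event pair, 4 stations.

v = 5.0                                            # km/s, uniform medium
stations = np.array([[0, 30], [25, -20], [-30, 5], [15, 25]], float)
true  = np.array([[0.0, 0.0], [1.5, -0.8]])        # true epicenters (km)
guess = np.array([[0.5, 1.0], [0.0,  0.5]])        # starting locations

tt = lambda e: np.linalg.norm(stations - e, axis=1) / v

t_obs = [tt(e) for e in true]
for _ in range(5):
    t_cal = [tt(e) for e in guess]
    J = [(guess[i] - stations)
         / (np.linalg.norm(stations - guess[i], axis=1)[:, None] * v)
         for i in range(2)]                        # d(travel time)/d(event coords)
    G = np.hstack([J[0], -J[1]])                   # pair (0, 1) kernel
    d = (t_obs[0] - t_obs[1]) - (t_cal[0] - t_cal[1])
    dm, *_ = np.linalg.lstsq(G, d, rcond=None)
    guess += dm.reshape(2, 2)

print("recovered relative offset:", guess[0] - guess[1])
print("true relative offset:     ", true[0] - true[1])
```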
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Kremer, Kyle; Bueno, Michael; Larson, Shane L.; Coughlin, Scott; Kalogera, Vassiliki
2018-02-01
We demonstrate a method to fully characterize mass-transferring double white dwarf (DWD) systems with a helium-rich (He) white dwarf (WD) donor based on the mass–radius (M–R) relationship for He WDs. Using a simulated Galactic population of DWDs, we show that donor and accretor masses can be inferred for up to ∼60 systems observed by both the Laser Interferometer Space Antenna (LISA) and Gaia. Half of these systems will have mass constraints ΔM_D ≲ 0.2 M_⊙ and ΔM_A ≲ 2.3 M_⊙. We also show how the orbital frequency evolution due to astrophysical processes and gravitational radiation can be decoupled from the total orbital frequency evolution for up to ∼50 of these systems.
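The core of the donor-mass inference can be sketched in a few lines: a semi-detached donor fills its Roche lobe, and combining the Paczynski lobe approximation with Kepler's third law ties the donor's mean density to the orbital period alone; intersecting that with a He WD mass-radius relation yields the donor mass from the GW frequency. The zero-temperature M-R coefficient below is a rough assumption for illustration, not the paper's relation.

```python
import numpy as np

# Roche lobe R_L ~ 0.462 a (M_D/M_tot)^(1/3) + Kepler a^3 = G M_tot P^2/(4 pi^2)
# => R_L = A(P) * M_D^(1/3), independent of the accretor mass. Intersect with
# an assumed He WD relation R ~ 0.012 R_sun (M/M_sun)^(-1/3).

G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8

def donor_mass(f_gw_hz):
    P = 2.0 / f_gw_hz                                   # orbital period (s)
    A = 0.462 * (G * P**2 / (4 * np.pi**2)) ** (1 / 3)  # R_L = A * M_D**(1/3)
    B = 0.012 * R_sun * M_sun ** (1 / 3)                # R_WD = B * M_D**(-1/3)
    return (B / A) ** 1.5 / M_sun                       # solve A M^(1/3) = B M^(-1/3)

for f in [1e-3, 3e-3, 1e-2]:
    print(f"f_GW = {f:.0e} Hz  ->  M_D ~ {donor_mass(f):.3f} M_sun")
```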
Giersch, C; Cornish-Bowden, A
1996-10-07
The double modulation method for determining the elasticities of pathway enzymes, originally devised by Kacser & Burns (Biochem. Soc. Trans. 7, 1149-1160, 1979), is extended to pathways of complex topological structure, including branching and feedback loops. An explicit system of linear equations for the unknown elasticities is derived. The constraints imposed on this linear system imply that modulations of more than one enzyme are not necessarily independent. Simple combinatorial rules are described for identifying, without any algebra, the set of independent modulations that allows determination of the elasticities of any enzyme. By repeated application, the minimum number of modulations required to determine the elasticities of all enzymes of a given pathway can be found. The procedure is illustrated with numerous examples.
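A toy version of the underlying logic for a two-step pathway X0 -> S -> X1: modulate E1, and since E2 itself was not touched, its rate can only have changed through S, so dJ/J = eps * dS/S gives the elasticity of E2 with respect to S; with several variable metabolites each modulation contributes one such linear equation and independent modulations build a solvable system. Kinetics and numbers are invented for illustration.

```python
# Toy pathway: v1 = e1*x0 (E1), v2 = e2*S/(K+S) (E2); steady state v1 = v2 = J.

def steady_state(e1, e2, x0=10.0, K=1.0):
    J = e1 * x0                                  # steady-state flux
    S = K * J / (e2 - J)                         # from v2 = J (requires e2 > J)
    return J, S

J0, S0 = steady_state(1.00, 20.0)
J1, S1 = steady_state(1.05, 20.0)                # modulate E1 by 5%

eps_measured = ((J1 - J0) / J0) / ((S1 - S0) / S0)
eps_true = 1.0 / (1.0 + S0)                      # analytic K/(K+S) at K = 1
print(eps_measured, eps_true)                    # agree for small modulations
```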
The structure of poly(carbonsuboxide) on the atomic scale: a solid-state NMR study.
Schmedt auf der Günne, Jörn; Beck, Johannes; Hoffbauer, Wilfried; Krieger-Beck, Petra
2005-07-18
In this contribution we present a study of the structure of amorphous poly(carbonsuboxide) (C3O2)x by 13C solid-state NMR spectroscopy supported by infrared spectroscopy and chemical analysis. Poly(carbonsuboxide) was obtained by polymerization of carbonsuboxide C3O2, which in turn was synthesized from malonic acid bis(trimethylsilylester). Two different 13C labeling schemes were applied to probe inter- and intramonomeric bonds in the polymer by dipolar solid-state NMR methods and also to allow quantitative 13C MAS NMR spectra. Four types of carbon environments can be distinguished in the NMR spectra. Double-quantum and triple-quantum 2D correlation experiments were used to assign the observed peaks using the through-space and through-bond dipolar coupling. In order to obtain distance constraints for the intermonomeric bonds, double-quantum constant-time experiments were performed. In these experiments an additional filter step was applied to suppress contributions from not directly bonded 13C,13C spin pairs. The 13C NMR intensities, chemical shifts, connectivities and distances gave constraints for both the polymerization mechanism and the short-range order of the polymer. The experimental results were complemented by bond lengths predicted by density functional theory methods for several previously suggested models. Based on the presented evidence we can unambiguously exclude models based on gamma-pyronic units and support models based on alpha-pyronic units. The possibility of planar ladder- and bracelet-like alpha-pyronic structures is discussed.
Pellejero-Ibanez, Marco; Chuang, Chia-Hsun; Rubino-Martin, J. A.; ...
2016-03-28
Here, we develop a new methodology called double-probe analysis with the aim of minimizing informative priors in the estimation of cosmological parameters. We extract dark-energy-model-independent cosmological constraints from the joint data sets of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy sample and the Planck cosmic microwave background (CMB) measurement. We measure the mean values and covariance matrix of {R, l_a, Ω_b h², n_s, log(A_s), Ω_k, H(z), D_A(z), f(z)σ_8(z)}, which give an efficient summary of the Planck data and 2-point statistics from the BOSS galaxy sample, where R = √(Ω_m H_0²) and l_a = π r(z_*)/r_s(z_*), z_* is the redshift at the last scattering surface, and r(z_*) and r_s(z_*) denote our comoving distance to z_* and the sound horizon at z_*, respectively. The advantage of this method is that we do not need to put informative priors on the cosmological parameters that galaxy clustering is not able to constrain well, i.e. Ω_b h² and n_s. Using our double-probe results, we obtain Ω_m = 0.304 ± 0.009, H_0 = 68.2 ± 0.7 km s⁻¹ Mpc⁻¹, and σ_8 = 0.806 ± 0.014 assuming ΛCDM; and Ω_k = 0.002 ± 0.003 and w = -1.00 ± 0.07 assuming owCDM. The results show no tension with the flat ΛCDM cosmological paradigm. By comparing with full-likelihood analyses with fixed dark energy models, we demonstrate that the double-probe method provides robust cosmological parameter constraints which can be conveniently used to study dark energy models. We extend our study to measure the sum of neutrino masses and obtain Σm_ν < 0.10/0.22 eV (68%/95%) assuming ΛCDM and Σm_ν < 0.26/0.52 eV (68%/95%) assuming wCDM. This paper is part of a set that analyses the final galaxy clustering dataset from BOSS.
NASA Astrophysics Data System (ADS)
Ranaivomiarana, Narindra; Irisarri, François-Xavier; Bettebghor, Dimitri; Desmorat, Boris
2018-04-01
An optimization methodology is proposed to find, concurrently, the spatial distribution of material and the distribution of material anisotropy for orthotropic, linear elastic two-dimensional membrane structures. The shape of the structure is parameterized by a density variable that determines the presence or absence of material. The polar method is used to parameterize a general orthotropic material by the invariants of its elasticity tensor under change of frame. A global structural stiffness maximization problem, written as a compliance minimization problem under a volume constraint, is treated. The compliance minimization can be recast as a double minimization of the complementary energy. An extension of the alternate directions algorithm is proposed to solve this double minimization problem: the algorithm iterates between local minimizations in each element of the structure and global minimizations. Thanks to the polar method, the local minimizations are solved explicitly, providing analytical solutions; the global minimizations are performed with finite element calculations. The method is shown to be straightforward and efficient. Concurrent optimization of the density and anisotropy distributions of a cantilever beam and a bridge is presented.
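The alternate-directions pattern itself is simple to convey. The toy below alternates a closed-form "local" update with a "global" linear solve (standing in for the finite element step) on an illustrative quadratic; it shows the iteration structure only, not the paper's elasticity formulation.

```python
import numpy as np

# Illustrative coupled quadratic: E(x, y) = ||A x - b||^2 + ||x - y||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

x = np.zeros(2)      # plays the role of the global (finite element) field
y = np.zeros(2)      # plays the role of the local, element-wise variables

for _ in range(50):
    y = x.copy()                                   # local step: explicit minimizer in y
    x = np.linalg.solve(A.T @ A + np.eye(2),       # global step: normal equations in x
                        A.T @ b + y)

print(x)             # approaches the minimizer of ||A x - b||^2
```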
The topology of Double Field Theory
NASA Astrophysics Data System (ADS)
Hassler, Falk
2018-04-01
We describe the doubled space of Double Field Theory as a group manifold G with an arbitrary generalized metric. Local information from the latter is not relevant to our discussion, so G only captures the topology of the doubled space. Strong Constraint solutions are maximally isotropic submanifolds M in G. We construct them and their Generalized Geometry in Double Field Theory on Group Manifolds. In general, G admits different physical subspaces M which are Poisson-Lie T-dual to each other. By studying two examples, we reproduce the topology changes induced by T-duality with non-trivial H-flux which were discussed by Bouwknegt, Evslin and Mathai [1].
A Two-Phase Model for Trade Matching and Price Setting in Double Auction Water Markets
NASA Astrophysics Data System (ADS)
Xu, Tingting; Zheng, Hang; Zhao, Jianshi; Liu, Yicheng; Tang, Pingzhong; Yang, Y. C. Ethan; Wang, Zhongjing
2018-04-01
Delivery in water markets is generally operated by agencies through channel systems, which imposes physical and institutional market constraints. Many water markets allow water users to post selling and buying requests on a board. However, water users may not be able to choose efficiently when the information (including the constraints) becomes complex. This study proposes an innovative two-phase model to address this problem based on practical experience in China. The first phase seeks and determines the optimal assignment that maximizes the incremental improvement of the system's social welfare according to the bids and asks in the water market. The second phase sets appropriate prices under constraints. Applying this model to China's Xiying Irrigation District shows that it can improve social welfare more than the current "pool exchange" method can. Within the second phase, we evaluate three objective functions (minimum variance, threshold-based balance, and two-sided balance), which represent different managerial goals. The threshold-based balance function should be preferred by most users, while the two-sided balance should be preferred by players who post extreme prices.
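For intuition on the first phase, the sketch below solves the unconstrained special case: one homogeneous good, unit quantities, and welfare-maximizing matching of posted bids and asks (the greedy sort-and-pair rule is optimal in this special case). The paper's model additionally handles channel-delivery constraints, which are omitted here.

```python
def match_trades(bids, asks):
    """Return matched (buyer_price, seller_price) pairs maximizing total surplus."""
    bids = sorted(bids, reverse=True)   # most eager buyers first
    asks = sorted(asks)                 # cheapest sellers first
    matches = []
    for b, a in zip(bids, asks):
        if b >= a:                      # trade only if it adds non-negative surplus
            matches.append((b, a))
        else:
            break
    return matches

pairs = match_trades(bids=[10, 8, 6, 4], asks=[3, 5, 7, 9])
welfare = sum(b - a for b, a in pairs)
print(pairs, welfare)                   # [(10, 3), (8, 5)] -> welfare 10
```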
Einarsson, Rasmus; Persson, U Martin
2017-01-01
This paper presents a spatially explicit method for making regional estimates of the potential for biogas production from crop residues and manure, accounting for key technical, biochemical, environmental and economic constraints. Methods for making such estimates are important as biofuels from agricultural residues are receiving increasing policy support from the EU and major biogas producers, such as Germany and Italy, in response to concerns over unintended negative environmental and social impacts of conventional biofuels. This analysis comprises a spatially explicit estimate of crop residue and manure production for the EU at 250 m resolution, and a biogas production model accounting for local constraints such as the sustainable removal of residues, transportation of substrates, and the substrates' biochemical suitability for anaerobic digestion. In our base scenario, the EU biogas production potential from crop residues and manure is about 0.7 EJ/year, nearly double the current EU production of biogas from agricultural substrates, most of which does not come from residues or manure. An extensive sensitivity analysis of the model shows that the potential could easily be 50% higher or lower, depending on the stringency of economic, technical and biochemical constraints. We find that the potential is particularly sensitive to constraints on the substrate mixtures' carbon-to-nitrogen ratio and dry matter concentration. Hence, the potential to produce biogas from crop residues and manure in the EU depends to a large extent on the possibility of overcoming the challenges associated with these substrates, either by complementing them with suitable co-substrates (e.g. household waste and energy crops), or through further development of biogas technology (e.g. pretreatment of substrates and recirculation of effluent).
ERIC Educational Resources Information Center
Agirre, Ainara Imaz; García Mayo, María del Pilar
2014-01-01
The present study examines the acquisition of double object constructions (DOCs) ("Susan gave Peter an apple") by 90 Basque/Spanish learners of English as a third language (L3). The aim of this study was to explore whether (i) learners established a distinction when accepting DOCs vs. prepositional phrase constructions (PPCs)…
NASA Astrophysics Data System (ADS)
Luo, G. W.; Chu, Y. D.; Zhang, Y. L.; Zhang, J. G.
2006-11-01
A multi-degree-of-freedom system having symmetrically placed rigid stops and subjected to periodic excitation is considered. The system consists of linear components, but the maximum displacement of one of the masses is limited to a threshold value by the symmetrical rigid stops. Repeated impacts usually occur in such a vibratory system due to the rigid amplitude constraints, so these models play an important role in studies of mechanical systems with clearances or gaps. Double Neimark-Sacker bifurcation of the system is analyzed using the center manifold and normal form method of maps. The period-one double-impact symmetrical motion and the associated disturbed map of the system are derived analytically. A center manifold theorem technique is applied to reduce the Poincaré map to a four-dimensional one, and the normal form map associated with the double Neimark-Sacker bifurcation is obtained. The bifurcation sets for the normal-form map are illustrated in detail. Local behavior of vibratory systems with symmetrical rigid stops near the points of double Neimark-Sacker bifurcation is reported through results for a three-degree-of-freedom vibratory system with symmetrical stops. The existence and stability of period-one double-impact symmetrical motion are analyzed explicitly. Local bifurcations at the points of change in stability are also analyzed, giving some information on the dynamical behavior near the points of double Neimark-Sacker bifurcation. Near the double Neimark-Sacker bifurcation value there exist period-one double-impact symmetrical motion and quasi-periodic impact motions. The quasi-periodic impact motions are represented by a closed circle and a "tire-like" attractor in projected Poincaré sections. With changes in system parameters, the quasi-periodic impact motions usually lead to chaos via "tire-like" torus doubling.
Recombineering: A Homologous Recombination-Based Method of Genetic Engineering
Sharan, Shyam K.; Thomason, Lynn C.; Kuznetsov, Sergey G.; Court, Donald L.
2009-01-01
Recombineering is an efficient method of in vivo genetic engineering applicable to chromosomal as well as episomal replicons in E. coli. This method circumvents the need for most standard in vitro cloning techniques. Recombineering allows construction of DNA molecules with precise junctions without constraints being imposed by restriction enzyme site location. Bacteriophage homologous recombination proteins catalyze these recombineering reactions using double- and single-strand linear DNA substrates, so-called targeting constructs, introduced by electroporation. Gene knockouts, deletions and point mutations are readily made, gene tags can be inserted, and regions of bacterial artificial chromosomes (BACs) or the E. coli genome can be subcloned by gene retrieval using recombineering. Most of these constructs can be made within about a week's time. PMID:19180090
ERIC Educational Resources Information Center
Frank, Stefan L.; Trompenaars, Thijs; Vasishth, Shravan
2016-01-01
An English double-embedded relative clause from which the middle verb is omitted can often be processed more easily than its grammatical counterpart, a phenomenon known as the grammaticality illusion. This effect has been found to be reversed in German, suggesting that the illusion is language specific rather than a consequence of universal…
Implications of PSR J0737-3039B for the Galactic NS-NS binary merger rate
NASA Astrophysics Data System (ADS)
Kim, Chunglee; Perera, Benetge Bhakthi Pranama; McLaughlin, Maura A.
2015-03-01
The Double Pulsar (PSR J0737-3039) is the only neutron star-neutron star (NS-NS) binary in which both NSs have been detectable as radio pulsars. The Double Pulsar has been assumed to dominate the Galactic NS-NS binary merger rate R_g among all known systems, based solely on the properties of the first-born, recycled pulsar (PSR J0737-3039A, or A) with an assumed beaming correction factor of 6. In this work, we carefully correct observational biases for the second-born, non-recycled pulsar (PSR J0737-3039B, or B) and estimate the contribution of the Double Pulsar to R_g using constraints available from both A and B. Observational constraints from the B pulsar favour a small beaming correction factor for A (∼2), which is consistent with a bipolar model. Considering the known NS-NS binaries with the best observational constraints, including both A and B, we obtain R_g = 21^{+28}_{-14} Myr⁻¹ at 95 per cent confidence from our reference model. We expect the detection rate of gravitational waves from NS-NS inspirals for the advanced ground-based gravitational-wave detectors to be 8^{+10}_{-5} yr⁻¹ at 95 per cent confidence. Within several years, gravitational-wave detections of NS-NS inspirals will provide useful information to improve pulsar population models.
MaxEnt analysis of a water distribution network in Canberra, ACT, Australia
NASA Astrophysics Data System (ADS)
Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael; Noack, Bernd R.
2015-01-01
A maximum entropy (MaxEnt) method is developed to infer the state of a pipe flow network, for situations in which there is insufficient information to form a closed equation set. This approach substantially extends existing deterministic methods for the analysis of engineered flow networks (e.g. Newton's method or the Hardy Cross scheme). The network is represented as an undirected graph structure, in which the uncertainty is represented by a continuous relative entropy on the space of internal and external flow rates. The head losses (potential differences) on the network are treated as dependent variables, using specified pipe-flow resistance functions. The entropy is maximised subject to "observable" constraints on the mean values of certain flow rates and/or potential differences, and also "physical" constraints arising from the frictional properties of each pipe and from Kirchhoff's nodal and loop laws. A numerical method is developed in Matlab for solution of the integral equation system, based on multidimensional quadrature. Several nonlinear resistance functions (e.g. power-law and Colebrook) are investigated, necessitating numerical solution of the implicit Lagrangian by a double iteration scheme. The method is applied to a 1123-node, 1140-pipe water distribution network for the suburb of Torrens in the Australian Capital Territory, Australia, using network data supplied by water authority ACTEW Corporation Limited. A number of different assumptions are explored, including various network geometric representations, prior probabilities and constraint settings, yielding useful predictions of network demand and performance. We also propose this methodology be used in conjunction with in-flow monitoring systems, to obtain better inferences of user consumption without large investments in monitoring equipment and maintenance.
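At its core the inference step is a constrained entropy maximization. The bare-bones sketch below maximizes a relative entropy over a discretized flow-rate distribution subject to normalization and one "observable" mean-flow constraint; the continuous-entropy formulation, Kirchhoff-type physical constraints, and the implicit-Lagrangian double iteration of the paper are all omitted.

```python
import numpy as np
from scipy.optimize import minimize

q = np.linspace(-1.0, 1.0, 51)           # discretized flow rates
prior = np.full_like(q, 1.0 / q.size)    # uniform prior

def neg_entropy(p):
    """Negative relative entropy (to be minimized)."""
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p / prior))

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},   # normalization
        {"type": "eq", "fun": lambda p: p @ q - 0.3}]     # observed mean flow

res = minimize(neg_entropy, prior, constraints=cons,
               bounds=[(0.0, 1.0)] * q.size)
print(res.x @ q)                          # mean flow honored, ~0.3
```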
Existence domains of dust-acoustic solitons and supersolitons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maharaj, S. K.; Bharuthram, R.; Singh, S. V.
2013-08-15
Using the Sagdeev potential method, the existence of large amplitude dust-acoustic solitons and supersolitons is investigated in a plasma comprising cold negative dust, adiabatic positive dust, Boltzmann electrons, and non-thermal ions. This model supports the existence of positive potential supersolitons in a certain region in parameter space in addition to regular solitons having negative and positive potentials. The lower Mach number limit for supersolitons coincides with the occurrence of double layers, whereas the upper limit is imposed by the constraint that the adiabatic positive dust number density must remain real valued. The upper Mach number limits for negative potential (positive potential) solitons coincide with limiting values of the negative (positive) potential for which the negative (positive) dust number density is real valued. Alternatively, the existence of positive potential solitons can terminate when positive potential double layers occur.
Augmenting Transport versus Increasing Cold Storage to Improve Vaccine Supply Chains
Haidari, Leila A.; Connor, Diana L.; Wateska, Angela R.; Brown, Shawn T.; Mueller, Leslie E.; Norman, Bryan A.; Schmitz, Michelle M.; Paul, Proma; Rajgopal, Jayant; Welling, Joel S.; Leonard, Jim; Chen, Sheng-I; Lee, Bruce Y.
2013-01-01
Background When addressing the urgent task of improving vaccine supply chains, especially to accommodate the introduction of new vaccines, there is often a heavy emphasis on stationary storage. Currently, donations to vaccine supply chains occur largely in the form of storage equipment. Methods This study utilized a HERMES-generated detailed, dynamic, discrete event simulation model of the Niger vaccine supply chain to compare the impacts on vaccine availability of adding stationary cold storage versus transport capacity at different levels and to determine whether adding stationary storage capacity alone would be enough to relieve potential bottlenecks when pneumococcal and rotavirus vaccines are introduced by 2015. Results Relieving regional level storage bottlenecks increased vaccine availability (by 4%) more than relieving storage bottlenecks at the district (1% increase), central (no change), and clinic (no change) levels alone. Increasing transport frequency (or capacity) yielded far greater gains (e.g., 15% increase in vaccine availability when doubling transport frequency to the district level and 18% when tripling). In fact, relieving all stationary storage constraints could only increase vaccine availability by 11%, whereas doubling the transport frequency throughout the system led to a 26% increase and tripling the frequency led to a 30% increase. Increasing transport frequency also reduced the amount of stationary storage space needed in the supply chain. The supply chain required an additional 61,269L of storage to relieve constraints with the current transport frequency, 55,255L with transport frequency doubled, and 51,791L with transport frequency tripled. Conclusions When evaluating vaccine supply chains, it is important to understand the interplay between stationary storage and transport. The HERMES-generated dynamic simulation model showed how augmenting transport can result in greater gains than only augmenting stationary storage and can reduce stationary storage needs. PMID:23717590
Single-Receiver GPS Phase Bias Resolution
NASA Technical Reports Server (NTRS)
Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.
2010-01-01
Existing software has been modified to yield the benefits of integer-fixed double-differenced GPS phase ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and data from only the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth, and the method has parameters to account for this; in particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp.
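For readers unfamiliar with the wide-lane quantities being averaged per arc, the snippet below forms the standard wide-lane carrier-phase combination from L1/L2 phases expressed in meters and reports per-arc statistics. The sample values are hypothetical, and this is a sketch of the combination only, not the JPL processing chain.

```python
import numpy as np

F1, F2 = 1575.42e6, 1227.60e6          # GPS L1/L2 carrier frequencies, Hz

def wide_lane(phase_l1_m, phase_l2_m):
    """Wide-lane combination of carrier phases given in meters."""
    return (F1 * phase_l1_m - F2 * phase_l2_m) / (F1 - F2)

# Per-arc statistics as described in the text: mean and scatter of the wide lane
arc = wide_lane(np.array([0.010, 0.012, 0.009]),
                np.array([0.030, 0.031, 0.029]))
print(arc.mean(), arc.std())
```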
Loop corrections in double field theory: non-trivial dilaton potentials
NASA Astrophysics Data System (ADS)
Lv, Songlin; Wu, Houwen; Yang, Haitang
2014-10-01
It is believed that invariance under the generalised diffeomorphisms prevents any non-trivial dilaton potential in double field theory, which makes it difficult to include loop corrections in the formalism. We show that by redefining a non-local dilaton field, under the strong constraint which is necessary to preserve the gauge invariance of double field theory, the theory does permit non-constant dilaton potentials and loop corrections. If the fields depend on only one single coordinate, the non-local dilaton is identical to the ordinary one up to an additive constant.
Greenfield, P E; Roberts, D H; Burke, B F
1980-05-02
A full 12-hour synthesis at 6-centimeter wavelength with the Very Large Array confirms the major features previously reported for the double quasar 0957+561. In addition, the existence of radio jets apparently associated with both quasars is demonstrated. Gravitational lens models are now favored on the basis of recent optical observations, and the radio jets place severe constraints on such models. Further radio observations of the double quasar are needed to establish the expected relative time delay in variations between the images.
Aminoacyl transfer from an adenylate anhydride to polyribonucleotides
NASA Technical Reports Server (NTRS)
Weber, A. L.; Lacey, J. C., Jr.
1975-01-01
Imidazole catalysis of phenylalanyl transfer from phenylalanine adenylate to hydroxyl groups of homopolyribonucleotides is studied as a possible chemical model of the biochemical aminoacylation of transfer RNA (tRNA). The effect of pH on imidazole-catalyzed transfer of phenylalanyl residues to poly(U) and poly(A) double-helix strands, the number of peptide linkages and their lability to base and neutral hydroxylamine, and the nature of the adenylate condensation products are investigated. The chemical model exhibits a constraint in that it does not acylate the hydroxyl groups of polyribonucleotides in a double helix, a constraint consistent with selective biochemical aminoacylation at the tRNA terminus. Interest in imidazole as a model of the histidine residue in protoenzymes participating in prebiotic aminoacyl transfer to polyribonucleotides, and in rendering tRNA a more efficient adaptor, is indicated.
Penna, Frank J; Bowlin, Paul; Alyami, Fahad; Bägli, Darius J; Koyle, Martin A; Lorenzo, Armando J
2015-10-01
In children with congenital obstructive uropathy, including posterior urethral valves, lower urinary tract decompression is recommended pending definitive surgical intervention. Current options, which are limited to a feeding tube or Foley catheter, pose unappreciated constraints in luminal diameter and are associated with potential problems. We assess the impact of luminal diameter on the current draining options and present a novel alternative method, repurposing a widely available stent that optimizes drainage. We retrospectively reviewed patients diagnosed with posterior urethral valves between January 2013 and December 2014. In all patients a 6Fr 12 cm Double-J ureteral stent was advanced over a guidewire in retrograde fashion into the bladder. Luminal flow and cross-sectional areas were also assessed for each of 3 tubes for urinary drainage, i.e. the 6Fr Double-J stent, 5Fr feeding tube and 6Fr Foley catheter. A total of 30 patients underwent uneventful bedside Double-J stent placement. Mean ± SD age at valve ablation was 28.5 ± 16.6 days. Mean ± SD peak serum creatinine was 2.23 ± 0.97 mg/dl after birth and 0.56 ± 0.22 mg/dl at the procedure. Urine output after stent placement was excellent in all patients. The Foley catheter and feeding tube drained approximately 18 and 6 times more slowly, respectively, and exhibited half the calculated cross-sectional luminal area compared to the Double-J stent. Use of Double-J stents in neonates with posterior urethral valves is a safe and effective alternative method for lower urinary tract decompression that optimizes the flow/lumen relationship compared to conventional drainage options.
Wet refractivity tomography with an improved Kalman-Filter method
NASA Astrophysics Data System (ADS)
Cao, Yunchang; Chen, Yongqi; Li, Pingwha
2006-10-01
An improved retrieval method, which uses the solution with a Gaussian constraint as the initial state variables for the Kalman filtering (KF) method, was developed to retrieve wet refractivity profiles from slant wet delays (SWDs) extracted by the double-differenced (DD) GPS method. The accuracy of the GPS-derived SWDs is also tested in this study against measurements from a water vapor radiometer (WVR) and a weather model. It is concluded that the GPS-derived SWDs have accuracy similar to those measured with the WVR and are much higher in quality than those derived from the weather model used. The developed method is used to retrieve the 3D wet refractivity distribution in the Hong Kong region. The retrieved profiles agree well with radiosonde observations, with a difference of about 4 mm km⁻¹ at low levels. The accurate profiles obtained with this method are applicable in a number of meteorological applications.
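The filtering machinery itself is the standard linear Kalman update, sketched below for a toy state of three refractivity nodes observed through one slant path. The observation matrix, noise levels, and dimensions are placeholders, not values from the paper.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """One measurement update: returns posterior state and covariance."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_post = x + K @ (z - H @ x)
    P_post = (np.eye(len(x)) - K @ H) @ P
    return x_post, P_post

x0 = np.zeros(3)                 # prior refractivity state (e.g., Gaussian-constrained)
P0 = np.eye(3)                   # prior covariance
H = np.array([[0.5, 0.3, 0.2]])  # one slant path's weights through the grid
z = np.array([4.0])              # one observed slant wet delay
x1, P1 = kf_update(x0, P0, z, H, np.array([[0.1]]))
print(x1)
```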
Implications for Climate Sensitivity from the Response to Individual Forcings
NASA Technical Reports Server (NTRS)
Marvel, Kate; Schmidt, Gavin A.; Miller, Ron L.; Nazarenko, Larissa
2015-01-01
Climate sensitivity to doubled CO2 is a widely used metric of the large-scale response to external forcing. Climate models predict a wide range for two commonly used definitions: the transient climate response (TCR: the warming after 70 years of CO2 concentrations that rise at 1% per year), and the equilibrium climate sensitivity (ECS: the equilibrium temperature change following a doubling of CO2 concentrations). Many observational datasets have been used to constrain these values, including temperature trends over the recent past, inferences from paleo-climate, and process-based constraints from the modern satellite era. However, as the IPCC recently reported, different classes of observational constraints produce somewhat incongruent ranges. Here we show that climate sensitivity estimates derived from recent observations must account for the efficacy of each forcing active during the historical period. When we use single-forcing experiments to estimate these efficacies and calculate climate sensitivity from the observed twentieth-century warming, our estimates of both TCR and ECS are revised upward compared to previous studies, improving the consistency with independent constraints.
NASA Astrophysics Data System (ADS)
Relaix, Sabrina; Mitov, Michel
2008-08-01
Polymer-stabilized cholesteric liquid crystals (PSCLCs) with a double-handed circularly polarized reflection band are fabricated. Geometric and electric constraints prove to be relevant parameters in obtaining a single-layer CLC structure with a clear-cut double-handed circularly polarized reflection band, since light scattering phenomena can alter the reflection properties when the PSCLC is cooled from the elaboration temperature to the operating one. A compromise must be found between the populations of LC molecules that are bound to the polymer network by strong surface effects and those that are not. Moreover, a monodomain texture is preserved if the PSCLC is subjected to an electric field at the same time as the thermal process intrinsic to the elaboration procedure. As a consequence, light scattering is reduced and both kinds of circularly polarized reflected light beams are demonstrated. Related potential applications are smart reflective windows for solar light management and reflective polarizer-free displays with higher brightness.
Methods for constraining fine structure constant evolution with OH microwave transitions.
Darling, Jeremy
2003-07-04
We investigate the constraints that OH microwave transitions in megamasers and molecular absorbers at cosmological distances may place on the evolution of the fine structure constant α = e²/ℏc. The centimeter OH transitions are a combination of hyperfine splitting and lambda doubling that can constrain the cosmic evolution of α from a single species, avoiding systematic errors in α measurements from multiple species which may have relative velocity offsets. The most promising method compares the 18 and 6 cm OH lines, includes a calibration of systematic errors, and offers multiple determinations of α in a single object. Comparisons of OH lines to the HI 21 cm line and CO rotational transitions also show promise.
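As a quick sanity check on the definition (α = e²/ℏc in Gaussian units, equivalently e²/4πε₀ℏc in SI), the snippet below evaluates it from tabulated constants:

```python
from scipy.constants import e, hbar, c, epsilon_0, pi, fine_structure

alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)   # SI form of e^2 / (hbar c)
print(alpha, 1 / alpha)    # ~0.0072974, ~137.036
print(fine_structure)      # scipy's tabulated value agrees
```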
Electric Double-Layer Interaction between Dissimilar Charge-Conserved Conducting Plates.
Chan, Derek Y C
2015-09-15
Small metallic particles used to form nanostructured materials that impart novel optical, catalytic, or tribo-rheological properties can be modeled as conducting particles with equipotential surfaces that carry a net surface charge. The value of the surface potential will vary with the separation between interacting particles, and in the absence of charge transfer or electrochemical reactions across the particle surface, the total charge of each particle must also remain constant. These two physical conditions require the electrostatic boundary condition for metallic nanoparticles to satisfy an equipotential, whole-of-particle charge conservation constraint that has not been studied previously. This constraint gives rise to a globally charge-conserved constant-potential boundary condition that results in multibody effects in the electric double-layer interaction that are either absent or very small under the familiar constant-potential, constant-charge, or surface electrochemical equilibrium conditions.
Observational constraints on varying neutrino-mass cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.
We consider generic models of quintessence and investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations for a scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and derive constraints on field-dependent neutrino masses from the observational data.
The impact of weight classification on safety: timing steps to adapt to external constraints
Gill, S.V.
2015-01-01
Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults’ ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Examinations of participants’ ability to meet the metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658
Li, Zeyu; Li, Lei; Qin, Yu; Li, Guangbin; Wang, Du; Zhou, Xun
2016-09-05
We demonstrate the enhancement of resolution and image quality in terahertz (THz) lens-free in-line digital holography by sub-pixel sampling with double-distance reconstruction. Multiple sub-pixel-shifted low-resolution (LR) holograms recorded by a pyroelectric array detector (100 μm × 100 μm pixel pitch, 124 × 124 pixels) are aligned precisely to synthesize a high-resolution (HR) hologram. By this method, the lateral resolution is no longer limited by the pixel pitch, and a lateral resolution of 150 μm is obtained, which corresponds to 1.26λ with respect to the illuminating wavelength of 118.8 μm (2.52 THz). Compared with other published works to date, this is the highest resolution in THz digital holography relative to the illuminating wavelength. In addition, to suppress the twin-image and zero-order artifacts, the complex amplitude distributions of both the object and the illuminating background wave fields are reconstructed simultaneously. This is achieved by iterative phase retrieval between the double HR holograms and background images at the two recording planes, which does not require any constraints on the object plane or a priori knowledge of the sample.
Visual and tactile information in double bass intonation control.
Lage, Guilherme Menezes; Borém, Fausto; Vieira, Maurílio Nunes; Barreiros, João Pardal
2007-04-01
Traditionally, the teaching of intonation on the non-tempered orchestral strings (violin, viola, cello, and double bass) has resorted to the auditory and proprioceptive senses only. This study aims at understanding the role of visual and tactile information in the control of the non-tempered intonation of the acoustic double bass. Eight musicians played 11 trials of an atonal sequence of musical notes on two double basses of different sizes under different sensorial constraints. The accuracy of the played notes was analyzed by measuring their frequencies and comparing them with respective target values. The main finding was that the performance which integrated visual and tactile information was superior in relation to the other performances in the control of double bass intonation. This contradicts the traditional belief that proprioception and hearing are the most effective feedback information in the performance of stringed instruments.
A triangle voting algorithm based on double feature constraints for star sensors
NASA Astrophysics Data System (ADS)
Fan, Qiaoyun; Zhong, Xuyang
2018-02-01
A novel autonomous star identification algorithm is presented in this study. In the proposed algorithm, each sensor star constructs multiple triangles with its bright neighboring stars and obtains its candidates through a triangle voting process, in which the triangle is the basic voting element. To accelerate the algorithm and reduce the memory required for the star database, feature extraction is carried out to reduce the dimension of the triangles, and each triangle is described by its base and height. During identification, a voting scheme based on double feature constraints is used to implement the triangle voting. This scheme guarantees that only a catalog star satisfying both features can vote for the sensor star, which improves robustness against false stars. Simulation and real star image tests demonstrate that, compared with two other algorithms, the proposed algorithm is more robust to position noise, magnitude noise and false stars.
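The (base, height) reduction is easy to illustrate. The sketch below computes the feature pair for one triangle of star coordinates and performs a tolerance match that would cast a single vote. The convention that the base is the longest side, and all numbers, are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def triangle_feature(p1, p2, p3):
    """Reduce a triangle to (base, height); base taken as the longest side."""
    pts = [np.asarray(p, dtype=float) for p in (p1, p2, p3)]
    sides = [(np.linalg.norm(pts[i] - pts[j]), i, j)
             for i, j in ((0, 1), (1, 2), (0, 2))]
    base, i, j = max(sides)
    apex = pts[3 - i - j]
    u, v = pts[j] - pts[i], apex - pts[i]
    area2 = abs(u[0] * v[1] - u[1] * v[0])     # twice the triangle area
    return base, area2 / base                  # height = 2 * area / base

def votes(feature, catalog, tol=1e-3):
    """Catalog entries whose base AND height both match within tolerance."""
    b, h = feature
    return [k for k, (bc, hc) in catalog.items()
            if abs(b - bc) < tol and abs(h - hc) < tol]

feat = triangle_feature((0.0, 0.0), (1.0, 0.0), (0.3, 0.8))
print(feat, votes(feat, {42: feat}))           # the lone catalog triangle votes
```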
NASA Astrophysics Data System (ADS)
Waqas, M.; Hayat, T.; Shehzad, S. A.; Alsaedi, A.
2018-03-01
A mathematical model is formulated to characterize non-Fourier heat flux and non-Fickian double-diffusive transport of heat and mass in the moving flow of a modified Burgers liquid. Temperature-dependent conductivity of the liquid is taken into account, and the concept of stratification is incorporated in the equations governing energy and mass species. Boundary layer theory is employed to obtain the mathematical model of the physical problem considered. The resulting partial differential system is converted into an ordinary one with the help of relevant variables. The homotopy approach leads to convergent solutions of the governing expressions; convergence is attained and acceptable values are certified by the so-called ℏ-curves and a numerical benchmark. Several graphs are presented for different values of the physical parameters to explore the mechanisms of heat and mass transport. We find that the liquid temperature and concentration decrease for larger values of the thermal/concentration relaxation time parameters.
Constraints of beyond Standard Model parameters from the study of neutrinoless double beta decay
NASA Astrophysics Data System (ADS)
Stoica, Sabin
2017-12-01
Neutrinoless double beta (0νββ) decay is a beyond Standard Model (BSM) process whose discovery would clarify whether lepton number is conserved, decide the character of the neutrinos (are they Dirac or Majorana particles?), and give a hint of the scale of their absolute masses. From the study of 0νββ one can also constrain other BSM parameters related to the different scenarios by which this process can occur. In this paper I first give a short review of the current challenges in calculating precisely the phase space factors and nuclear matrix elements entering the 0νββ decay lifetimes, and report our group's results for these quantities. Then, taking advantage of the most recent experimental limits on 0νββ lifetimes, I present new constraints on the neutrino mass parameters associated with different mechanisms by which the 0νββ decay mode may occur.
Aggarwal, Priya; Gupta, Anubha
2017-12-01
A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with the proposed method based on l1-l1 norm constraints, wherein we impose the first l1-norm sparsity on the voxel time series (temporal data) in a transformed domain and the second l1-norm sparsity on the successive differences of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of the reproducibility of brain Resting State Networks (RSNs), demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to existing methods, the DTSR method shows promising potential, with an improvement of 10-12 dB in PSNR at acceleration factors up to 3.5 on resting-state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs.
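Reading the two penalties off that description, the per-voxel cost has the shape sketched below: a data-fidelity term for the undersampled measurements, an l1 term on the transformed time series, and an l1 term on its successive differences. The operators, weights, and sizes are placeholders for illustration, not the paper's implementation.

```python
import numpy as np

def dtsr_cost(x, A, y, Psi, lam1=0.1, lam2=0.1):
    """l1-l1 reconstruction cost for one voxel time series x."""
    fidelity = 0.5 * np.linalg.norm(A @ x - y) ** 2      # undersampled data fit
    sparsity = lam1 * np.linalg.norm(Psi @ x, 1)         # transform-domain sparsity
    smoothness = lam2 * np.linalg.norm(np.diff(x), 1)    # successive-difference sparsity
    return fidelity + sparsity + smoothness

rng = np.random.default_rng(0)
T = 64
A = rng.standard_normal((32, T))      # stand-in for the undersampling operator
x = rng.standard_normal(T)
y = A @ x
print(dtsr_cost(x, A, y, np.eye(T)))  # Psi = identity as a placeholder transform
```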
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, Louise O. V.; Fadda, Dario; Frayer, David T., E-mail: louise@ipac.caltech.ed
2010-12-01
We announce the first discovery of a bent double lobe radio source (DLRS) in a known cluster filament. The bent DLRS is found at a distance of 3.4 Mpc from the center of the rich galaxy cluster A1763. We derive a bend angle α = 25° and infer that the source is most likely seen at a viewing angle of Φ = 10°. From measuring the flux in the jet between the core and the farther lobe and assuming a spectral index of 1, we calculate the minimum pressure in the jet, (8.0 ± 3.2) × 10⁻¹³ dyn cm⁻², and derive constraints on the intrafilament medium (IFM) assuming the bend of the jet is due to ram pressure. We constrain the IFM density to be between (1-20) × 10⁻²⁹ g cm⁻³. This is consistent with recent direct probes of the IFM and theoretical models. These observations justify future searches for bent double lobe radio sources located several megaparsecs from cluster cores, as they may be good markers of supercluster filaments.
2014-01-01
The free vibration response of double-walled carbon nanotubes (DWCNTs) is investigated. The DWCNTs are modelled as two beams interacting through van der Waals forces, and the nonlocal Euler-Bernoulli beam theory is used. The governing equations of motion are derived using a variational approach, and the free vibration frequencies are obtained with two different approaches. In the first method, the double-walled carbon nanotubes are discretized by means of the so-called "cell discretization method" (CDM), in which each nanotube is reduced to a set of rigid bars linked together by elastic cells. The resulting discrete system takes into account nonlocal effects, constraint elasticities, and the van der Waals forces. The second proposed approach, belonging to the semianalytical methods, is an optimized version of the classical Rayleigh quotient, as proposed originally by Schmidt; the resulting conditions are solved numerically. Numerical examples end the paper, in which the two approaches give lower and upper bounds on the true values, and some comparisons with existing results are offered. Comparisons of the present numerical results with those from the open literature show an excellent agreement. PMID:24715807
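The Rayleigh-quotient side of this pairing is compactly stated for any discrete system: R(v) = vᵀKv / vᵀMv upper-bounds the lowest squared frequency for every trial shape v, and minimizing over a trial basis tightens the bound. The sketch below uses toy stiffness and mass matrices, not a nanotube model.

```python
import numpy as np
from scipy.linalg import eigh

K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # toy stiffness matrix
M = np.eye(2)                               # toy mass matrix

v = np.array([1.0, 0.9])                    # trial mode shape
R = (v @ K @ v) / (v @ M @ v)               # Rayleigh quotient
exact = eigh(K, M, eigvals_only=True)[0]    # lowest generalized eigenvalue
print(R, exact)                             # R is an upper bound: R >= exact
```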
Lightweight Double Neutron Star Found
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2018-02-01
More than forty years after the first discovery of a double neutron star, we still haven't found many others, but a new survey is working to change that.

The Hunt for Pairs
[Figure: the observed shift in the Hulse-Taylor binary's orbital period over time as it loses energy to gravitational-wave emission. Weisberg & Taylor, 2004]
In 1974, Russell Hulse and Joseph Taylor discovered the first double neutron star: two compact objects locked in a close orbit about each other. Hulse and Taylor's measurements of this binary's decaying orbit over subsequent years led to a Nobel prize and the first clear evidence of gravitational waves carrying energy and angular momentum away from massive binaries.
Forty years later, we have since confirmed the existence of gravitational waves directly with the Laser Interferometer Gravitational-Wave Observatory (LIGO). Nonetheless, finding and studying pre-merger neutron-star binaries remains a top priority. Observing such systems before they merge reveals crucial information about late-stage stellar evolution, binary interactions, and the types of gravitational-wave signals we expect to find with current and future observatories.
Since the Hulse-Taylor binary, we've found a total of 16 additional double neutron-star systems, which represents only a tiny fraction of the more than 2,600 pulsars currently known. Recently, however, a large number of pulsar surveys are turning their eyes toward the sky with a focus on finding more double neutron stars, and at least one of them has had success.

A Low-Mass Double
[Figure: the pulse profile for PSR J1411+2551 at 327 MHz. Martinez et al. 2017]
Conducted with the 1,000-foot Arecibo radio telescope in Puerto Rico, the Arecibo 327 MHz Drift Pulsar Survey has enabled the recent discovery of dozens of pulsars and transients. Among them, as reported by Jose Martinez (Max Planck Institute for Radio Astronomy) and coauthors in a recent publication, is PSR J1411+2551: a new double neutron star with one of the lowest masses ever measured for such a system.
Through meticulous observations over the span of 2.5 years, Martinez and collaborators were able to obtain a number of useful measurements for the system, including the pulsar's period (62 ms), the period of the binary (2.62 days), and the system's eccentricity (e = 0.17).
In addition, the team measured the rate of advance of periastron of the system, allowing them to estimate the total mass of the system: M = 2.54 solar masses. This mass, combined with the eccentricity of the orbit, demonstrates that the companion of the pulsar in PSR J1411+2551 is almost certainly a neutron star, and the system is one of the lightest known to date, even including the double neutron-star merger that was observed by LIGO in August this past year.

Constraining Stellar Physics
[Figure: based on its measured properties, PSR J1411+2551 is most likely a recycled pulsar in a double neutron-star system. Martinez et al. 2017]
The intriguing orbital properties and low mass of PSR J1411+2551 have already allowed the authors to explore a number of constraints on stellar evolution models, including narrowing the possible equations of state for neutron stars that could produce such a system. These constraints will be interesting to compare to constraints from LIGO and Virgo in the future, as more merging neutron-star systems are observed.
Meanwhile, our best bet for obtaining further constraints is to continue searching for more pre-merger double neutron-star systems like the Hulse-Taylor binary and PSR J1411+2551. Let the hunt continue!

Citation: J. G. Martinez et al. 2017 ApJL 851 L29. doi:10.3847/2041-8213/aa9d87
Uehara, Erica; Deguchi, Tetsuo
2014-01-28
For a double-ring polymer in solution we evaluate the mean-square radius of gyration and the diffusion coefficient through simulation of off-lattice self-avoiding double polygons consisting of cylindrical segments of unit length with radius r_ex. Here, a self-avoiding double polygon consists of twin self-avoiding polygons connected by a cylindrical segment. We show numerically that several statistical and dynamical properties of double-ring polymers in solution depend on the linking number of the constituent twin ring polymers. The ratio of the mean-square radius of gyration of self-avoiding double polygons with zero linking number to that with no topological constraint is larger than 1, in particular when the radius of the cylindrical segments r_ex is small. However, the ratio is almost constant with respect to the number of vertices N. The large-N behavior of the topological swelling is thus quite different from the case of knotted random polygons.
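For reference, the basic observable quoted here is straightforward to compute from bead coordinates. The sketch below evaluates the mean-square radius of gyration for a hypothetical discretized ring (a unit circle, for which the answer is 1):

```python
import numpy as np

def rg_squared(coords):
    """Mean-square radius of gyration of a set of bead coordinates."""
    com = coords.mean(axis=0)                     # center of mass
    return ((coords - com) ** 2).sum(axis=1).mean()

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
ring = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
print(rg_squared(ring))                           # -> 1.0 for a unit circle
```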
Constraints and spandrels of interareal connectomes
Rubinov, Mikail
2016-01-01
Interareal connectomes are whole-brain wiring diagrams of white-matter pathways. Recent studies have identified modules, hubs, module hierarchies and rich clubs as structural hallmarks of these wiring diagrams. An influential current theory postulates that connectome modules are adequately explained by evolutionary pressures for wiring economy, but that the other hallmarks are not explained by such pressures and are therefore less trivial. Here, we use constraint network models to test these postulates in current gold-standard vertebrate and invertebrate interareal-connectome reconstructions. We show that empirical wiring-cost constraints inadequately explain connectome module organization, and that simultaneous module and hub constraints induce the structural byproducts of hierarchies and rich clubs. These byproducts, known as spandrels in evolutionary biology, include the structural substrate of the default-mode network. Our results imply that currently standard connectome characterizations are based on circular analyses or double dipping, and we emphasize an integrative approach to future connectome analyses for avoiding such pitfalls. PMID:27924867
Improving rapeseed production practices in the southeastern United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, D.L.; Breve, M.A.; Raymer, P.L.
1990-04-01
Oilseed rape or rapeseed is a crop which offers a potential for double-cropping in the southeastern United States. This final project report describes the results from a three-year study aimed at evaluating the effect of different planting and harvesting practices on the establishment and yield of three rape cultivars, and the double-cropping potential of rapeseed in the southeastern United States. The project was conducted on two yield sites in Tifton, Georgia during 1986-87, 1987-88 and 1988-89. The general objective of this research is to improve the seed and biomass yield of winter rapeseed in the southeastern United States by developing appropriate agronomic practices for the region. The primary constraint is to grow rapeseed within the allowable period for double cropping with an economically desirable crop, such as peanut or soybean. Planting and harvesting are the most critical steps in this process. Therefore, the specific objectives of this research were: evaluate and improve the emergence of rapeseed by developing planting techniques that enhance the soil, water and seed regimes for winter rapeseed in the southeast, and evaluate and improve the yields of harvested rapeseed by developing techniques for determining the optimum timing of harvest and efficient methods for harvesting winter rapeseed in the southeast. 6 refs., 12 figs., 9 tabs.
T-duality constraints on higher derivatives revisited
NASA Astrophysics Data System (ADS)
Hohm, Olaf; Zwiebach, Barton
2016-04-01
We ask to what extent the higher-derivative corrections of string theory are constrained by T-duality. The seminal early work by Meissner tests T-duality by reduction to one dimension, using a distinguished choice of field variables in which the bosonic string action takes a Gauss-Bonnet-type form. By analyzing all field redefinitions that may or may not be duality covariant and may or may not be gauge covariant, we extend the procedure to test T-duality starting from an action expressed in arbitrary field variables. We illustrate the method by showing that it determines uniquely the first-order α' corrections of the bosonic string, up to terms that vanish in one dimension. We also use the method to glean information about the O(α'²) corrections in the double field theory with Green-Schwarz deformation.
Effect of shear stress on cell cultures and other reactor problems
NASA Technical Reports Server (NTRS)
Schleier, H.
1981-01-01
Anchorage-dependent cell cultures in fluidized beds are tested. Feasibility calculations indicate the allowed parameters and estimate the shear stresses therein. In addition, the diffusion equation with a first-order reaction is solved for the spherical-shell (double bubble) reactor under various constraints.
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
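The Double-then-Quad escalation can be mimicked in miniature with iterative refinement across two ordinary floating-point precisions: solve in the lower precision, then correct using residuals computed in the higher one. The sketch below uses float32/float64 as stand-ins for double/quad; it illustrates the principle, not MINOS itself.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)   # keep the toy system well-posed
x_true = rng.standard_normal(50)
b = A @ x_true

# "Low-precision" solve (float32 standing in for double precision)
x = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)

for _ in range(3):                    # refinement sweeps
    r = b - A @ x                     # residual in the higher precision (float64)
    x += np.linalg.solve(A.astype(np.float32),
                         r.astype(np.float32)).astype(np.float64)

print(np.linalg.norm(x - x_true))     # error shrinks with each refinement sweep
```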
Longitudinal Double-Spin Asymmetry for Inclusive Jet Production in p⃗+p⃗ Collisions at √s = 200 GeV
NASA Astrophysics Data System (ADS)
Abelev, B. I.; Aggarwal, M. M.; Ahammed, Z.; Anderson, B. D.; Arkhipkin, D.; Averichev, G. S.; Bai, Y.; Balewski, J.; Barannikova, O.; Barnby, L. S.; Baudot, J.; Baumgart, S.; Belaga, V. V.; Bellingeri-Laurikainen, A.; Bellwied, R.; Benedosso, F.; Betts, R. R.; Bhardwaj, S.; Bhasin, A.; Bhati, A. K.; Bichsel, H.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Blyth, S.-L.; Bombara, M.; Bonner, B. E.; Botje, M.; Bouchet, J.; Brandin, A. V.; Burton, T. P.; Bystersky, M.; Cai, X. Z.; Caines, H.; Calderón de La Barca Sánchez, M.; Callner, J.; Catu, O.; Cebra, D.; Cervantes, M. C.; Chajecki, Z.; Chaloupka, P.; Chattopadhyay, S.; Chen, H. F.; Chen, J. H.; Chen, J. Y.; Cheng, J.; Cherney, M.; Chikanian, A.; Christie, W.; Chung, S. U.; Clarke, R. F.; Codrington, M. J. M.; Coffin, J. P.; Cormier, T. M.; Cosentino, M. R.; Cramer, J. G.; Crawford, H. J.; Das, D.; Dash, S.; Daugherity, M.; de Moura, M. M.; Dedovich, T. G.; Dephillips, M.; Derevschikov, A. A.; Didenko, L.; Dietel, T.; Djawotho, P.; Dogra, S. M.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Du, F.; Dunin, V. B.; Dunlop, J. C.; Dutta Mazumdar, M. R.; Edwards, W. R.; Efimov, L. G.; Elhalhuli, E.; Emelianov, V.; Engelage, J.; Eppley, G.; Erazmus, B.; Estienne, M.; Fachini, P.; Fatemi, R.; Fedorisin, J.; Feng, A.; Filip, P.; Finch, E.; Fine, V.; Fisyak, Y.; Fu, J.; Gagliardi, C. A.; Gaillard, L.; Ganti, M. S.; Garcia-Solis, E.; Ghazikhanian, V.; Ghosh, P.; Gorbunov, Y. N.; Gos, H.; Grebenyuk, O.; Grosnick, D.; Grube, B.; Guertin, S. M.; Guimaraes, K. S. F. F.; Gupta, A.; Gupta, N.; Haag, B.; Hallman, T. J.; Hamed, A.; Harris, J. W.; He, W.; Heinz, M.; Henry, T. W.; Heppelmann, S.; Hippolyte, B.; Hirsch, A.; Hjort, E.; Hoffman, A. M.; Hoffmann, G. W.; Hofman, D. J.; Hollis, R. S.; Horner, M. J.; Huang, H. Z.; Hughes, E. W.; Humanic, T. J.; Igo, G.; Iordanova, A.; Jacobs, P.; Jacobs, W. W.; Jakl, P.; Jones, P. G.; Judd, E. G.; Kabana, S.; Kang, K.; Kapitan, J.; Kaplan, M.; Keane, D.; Kechechyan, A.; Kettler, D.; Khodyrev, V. Yu.; Kiryluk, J.; Kisiel, A.; Kislov, E. M.; Klein, S. R.; Knospe, A. G.; Kocoloski, A.; Koetke, D. D.; Kollegger, T.; Kopytine, M.; Kotchenda, L.; Kouchpil, V.; Kowalik, K. L.; Kravtsov, P.; Kravtsov, V. I.; Krueger, K.; Kuhn, C.; Kulikov, A. I.; Kumar, A.; Kurnadi, P.; Kuznetsov, A. A.; Lamont, M. A. C.; Landgraf, J. M.; Lange, S.; Lapointe, S.; Laue, F.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, C.-H.; Lehocka, S.; Levine, M. J.; Li, C.; Li, Q.; Li, Y.; Lin, G.; Lin, X.; Lindenbaum, S. J.; Lisa, M. A.; Liu, F.; Liu, H.; Liu, J.; Liu, L.; Ljubicic, T.; Llope, W. J.; Longacre, R. S.; Love, W. A.; Lu, Y.; Ludlam, T.; Lynn, D.; Ma, G. L.; Ma, J. G.; Ma, Y. G.; Mahapatra, D. P.; Majka, R.; Mangotra, L. K.; Manweiler, R.; Margetis, S.; Markert, C.; Martin, L.; Matis, H. S.; Matulenko, Yu. A.; McShane, T. S.; Meschanin, A.; Millane, J.; Miller, M. L.; Minaev, N. G.; Mioduszewski, S.; Mischke, A.; Mitchell, J.; Mohanty, B.; Morozov, D. A.; Munhoz, M. G.; Nandi, B. K.; Nattrass, C.; Nayak, T. K.; Nelson, J. M.; Nepali, C.; Netrakanti, P. K.; Nogach, L. V.; Nurushev, S. B.; Odyniec, G.; Ogawa, A.; Okorokov, V.; Olson, D.; Pachr, M.; Pal, S. K.; Panebratsev, Y.; Pavlinov, A. I.; Pawlak, T.; Peitzmann, T.; Perevoztchikov, V.; Perkins, C.; Peryt, W.; Phatak, S. C.; Planinic, M.; Pluta, J.; Poljak, N.; Porile, N.; Poskanzer, A. M.; Potekhin, M.; Potrebenikova, E.; Potukuchi, B. V. K. S.; Prindle, D.; Pruneau, C.; Pruthi, N. K.; Putschke, J.; Qattan, I. A.; Raniwala, R.; Raniwala, S.; Ray, R. L.; Relyea, D.; Ridiger, A.; Ritter, H. 
G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Rose, A.; Roy, C.; Ruan, L.; Russcher, M. J.; Sahoo, R.; Sakrejda, I.; Sakuma, T.; Salur, S.; Sandweiss, J.; Sarsour, M.; Sazhin, P. S.; Schambach, J.; Scharenberg, R. P.; Schmitz, N.; Seger, J.; Selyuzhenkov, I.; Seyboth, P.; Shabetai, A.; Shahaliev, E.; Shao, M.; Sharma, M.; Shen, W. Q.; Shimanskiy, S. S.; Sichtermann, E. P.; Simon, F.; Singaraju, R. N.; Skoby, M. J.; Smirnov, N.; Snellings, R.; Sorensen, P.; Sowinski, J.; Speltz, J.; Spinka, H. M.; Srivastava, B.; Stadnik, A.; Stanislaus, T. D. S.; Staszak, D.; Stock, R.; Strikhanov, M.; Stringfellow, B.; Suaide, A. A. P.; Suarez, M. C.; Subba, N. L.; Sumbera, M.; Sun, X. M.; Sun, Z.; Surrow, B.; Symons, T. J. M.; Szanto de Toledo, A.; Takahashi, J.; Tang, A. H.; Tarnowsky, T.; Thomas, J. H.; Timmins, A. R.; Timoshenko, S.; Tokarev, M.; Trainor, T. A.; Tram, V. N.; Trentalange, S.; Tribble, R. E.; Tsai, O. D.; Ulery, J.; Ullrich, T.; Underwood, D. G.; van Buren, G.; van der Kolk, N.; van Leeuwen, M.; Vander Molen, A. M.; Varma, R.; Vasilevski, I. M.; Vasiliev, A. N.; Vernet, R.; Vigdor, S. E.; Viyogi, Y. P.; Vokal, S.; Voloshin, S. A.; Wada, M.; Waggoner, W. T.; Wang, F.; Wang, G.; Wang, J. S.; Wang, X. L.; Wang, Y.; Webb, J. C.; Westfall, G. D.; Whitten, C., Jr.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, J.; Wu, Y.; Xu, N.; Xu, Q. H.; Xu, Z.; Yepes, P.; Yoo, I.-K.; Yue, Q.; Yurevich, V. I.; Zawisza, M.; Zhan, W.; Zhang, H.; Zhang, W. M.; Zhang, Y.; Zhang, Z. P.; Zhao, Y.; Zhong, C.; Zhou, J.; Zoulkarneev, R.; Zoulkarneeva, Y.; Zubarev, A. N.; Zuo, J. X.
2008-06-01
We report a new STAR measurement of the longitudinal double-spin asymmetry A_LL for inclusive jet production at midrapidity in polarized p+p collisions at a center-of-mass energy of √s = 200 GeV. The data, which cover jet transverse momenta 5
Stabilization of computational procedures for constrained dynamical systems
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1988-01-01
A new stabilization method of treating constraints in multibody dynamical systems is presented. By tailoring a penalty form of the constraint equations, the method achieves stabilization without artificial damping and yields a companion matrix differential equation for the constraint forces, which are then obtained by integrating that companion equation in time. A principal feature of the method is that the error committed in each constraint condition decays with the characteristic time scale associated with its constraint force. Numerical experiments indicate that the method yields a marked improvement over existing techniques.
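For contrast with the technique this abstract improves on, the following is a minimal sketch, under invented parameters, of classical Baumgarte-style constraint stabilization for a planar pendulum in Cartesian coordinates; the feedback gains α and β are the artificial-damping ingredient that the penalty formulation above is designed to avoid.

```python
# Minimal sketch (NOT the paper's algorithm): classical Baumgarte-style
# stabilization of a pendulum's length constraint. Gains, time step, and
# initial state are illustrative assumptions.
import numpy as np

L, g, alpha, beta = 1.0, 9.81, 5.0, 5.0   # rod length, gravity, feedback gains

def rhs(state):
    x, y, vx, vy = state
    C = 0.5 * (x * x + y * y - L * L)      # holonomic constraint C(q) = 0
    Cdot = x * vx + y * vy
    # Enforce Cddot + 2*alpha*Cdot + beta^2*C = 0 and solve for the
    # multiplier lam (unit mass); the constraint force is lam * grad(C).
    lam = -((vx * vx + vy * vy) - g * y
            + 2 * alpha * Cdot + beta ** 2 * C) / (x * x + y * y)
    return np.array([vx, vy, lam * x, lam * y - g])

state = np.array([L, 0.0, 0.0, 0.0])       # released horizontally, at rest
dt = 1e-3
for _ in range(5000):                      # forward Euler, 5 s simulated
    state = state + dt * rhs(state)
print("constraint drift:", abs(np.hypot(state[0], state[1]) - L))
```

With the gains set to zero the drift grows steadily; with feedback it stays bounded, which is the behavior that the per-constraint error-decay property described above formalizes.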
Reassessing the fundamentals: On the evolution, ages and masses of neutron stars
NASA Astrophysics Data System (ADS)
Kiziltan, Bulent
The evolution, ages and masses of neutron stars are the fundamental threads that make pulsars accessible to other sub-disciplines of astronomy and physics. A realistic and accurate determination of these indirectly probed features plays an important role in understanding a very broad range of astrophysical processes that are, in many cases, not empirically accessible otherwise. For the majority of pulsars, the only observables are the rotational period (P) and its derivative (Ṗ), which gives the rate of change in the spin. I start by calculating the joint P-Ṗ distributions of millisecond pulsars for the standard evolutionary model in order to assess whether millisecond pulsars are the unequivocal descendants of low-mass X-ray binaries. We show that the P-Ṗ density implied by the standard evolutionary model is inconsistent with observations, which suggests that it is unlikely that millisecond pulsars have evolved from a single coherent progenitor population. In the absence of constraints from the binary companion or supernova remnant, the standard method for estimating pulsar ages is to infer an age from the rate of spin-down. I parametrically incorporate constraints that arise from binary evolution and limiting physics to derive a "modified spin-down age" for millisecond pulsars. We show that the standard method can be improved by this approach to achieve age estimates closer to the true age. Then, I critically review radio pulsar mass measurements and present a detailed examination through which we are able to put stringent constraints on the underlying neutron star mass distribution. For the first time, we are able to analyze a sizable population of neutron star-white dwarf systems in addition to double neutron star systems with a technique that accounts for systematically different measurement errors. We find that neutron stars that have evolved through different evolutionary paths reflect distinctive signatures through dissimilar distribution peak and mass cutoff values. Neutron stars in double neutron star and neutron star-white dwarf systems show consistent respective peaks at 1.35 M⊙ and 1.50 M⊙, which suggests that significant mass accretion (Δm ≈ 0.15 M⊙) has occurred during the spin-up phase. We find a mass cutoff at 2 M⊙ for neutron stars with white dwarf companions, which establishes a firm lower bound for the maximum neutron star mass. This rules out the majority of strange quark and soft equation-of-state models as viable configurations for neutron star matter. The lack of truncation close to the maximum mass cutoff suggests that the 2 M⊙ limit is set by evolutionary constraints rather than nuclear physics or general relativity, and the existence of rare super-massive neutron stars is possible.
Food, Feed, and Fuel: Integrating Energy Double Crops in Conventional Farming Systems
USDA-ARS?s Scientific Manuscript database
The increasing demand for renewable energy, coupled with global demand for agricultural products and a range of environmental constraints, requires a re-thinking of current agricultural practices. Growing markets for cellulosic and other biomass feedstocks create new opportunities for farmers to div...
Computer-Aided Design of RNA Origami Structures.
Sparvath, Steffen L; Geary, Cody W; Andersen, Ebbe S
2017-01-01
RNA nanostructures can be used as scaffolds to organize, combine, and control molecular functionalities, with great potential for applications in nanomedicine and synthetic biology. The single-stranded RNA origami method allows RNA nanostructures to be folded as they are transcribed by the RNA polymerase. RNA origami structures provide a stable framework that can be decorated with functional RNA elements such as riboswitches, ribozymes, interaction sites, and aptamers for binding small molecules or protein targets. The rich library of RNA structural and functional elements combined with the possibility to attach proteins through aptamer-based binding creates virtually limitless possibilities for constructing advanced RNA-based nanodevices.In this chapter we provide a detailed protocol for the single-stranded RNA origami design method using a simple 2-helix tall structure as an example. The first step involves 3D modeling of a double-crossover between two RNA double helices, followed by decoration with tertiary motifs. The second step deals with the construction of a 2D blueprint describing the secondary structure and sequence constraints that serves as the input for computer programs. In the third step, computer programs are used to design RNA sequences that are compatible with the structure, and the resulting outputs are evaluated and converted into DNA sequences to order.
Mayer, Carl; Li, Nan; Mara, Nathan Allan; ...
2014-11-07
Nanolaminate composites show promise as high-strength, high-toughness materials. Still, due to the limited volume of these materials, micron-scale mechanical testing methods must be used to determine the properties of these films. To this end, a novel approach combining a double-notch shear testing geometry and compression with a flat punch in a nanoindenter was developed to determine the mechanical properties of these films under shear loading. To further elucidate the failure mechanisms under shear loading, in situ TEM experiments were performed using a double-notch geometry cut into the TEM foil. Aluminum layer thicknesses of 50 nm and 100 nm were used to show the effect of constraint on the deformation. Higher shear strength was observed in the 50 nm sample (690±54 MPa) compared to the 100 nm sample (423±28.7 MPa). Additionally, failure occurred along the Al-SiC interface in the 50 nm sample as opposed to failure within the Al layer in the 100 nm sample.
Mobility and Position Error Analysis of a Complex Planar Mechanism with Redundant Constraints
NASA Astrophysics Data System (ADS)
Sun, Qipeng; Li, Gangyan
2018-03-01
Mechanisms with redundant constraints are now widely used and have attracted much attention for their merits. The role of redundant constraints in a mechanical system is analyzed in this paper. An analysis method for planar linkages with repetitive structure is proposed to obtain the number and type of constraints. According to the differences in application and constraint characteristics, redundant constraints are divided into theoretical planar redundant constraints and space-planar redundant constraints. A calculation formula for the number of redundant constraints and a method for judging their type are derived. A complex mechanism with redundant constraints is then analyzed to determine the influence of the redundant constraints on mechanical performance. Combining theoretical derivation with simulation, an analysis method for the position error of complex mechanisms with redundant constraints is put forward. This points the way toward eliminating or reducing the influence of redundant constraints.
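The abstract does not reproduce its counting formula; as a reference point, the standard planar Grübler-Kutzbach mobility criterion, extended with a redundancy term, captures the kind of bookkeeping involved (a sketch, not the paper's own expression):

```latex
% Planar mobility with a redundant-constraint correction term q:
%   n   = number of links (frame included)
%   p_l = lower pairs (revolute/prismatic), each removing 2 DOF
%   p_h = higher pairs, each removing 1 DOF
%   q   = number of redundant constraints, added back so that M
%         matches the mechanism's actual degrees of freedom
M = 3(n - 1) - 2p_l - p_h + q
```

For a four-bar linkage (n = 4, p_l = 4, p_h = 0, q = 0) this gives M = 1, the familiar single degree of freedom.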
Design of Optimally Robust Control Systems.
1980-01-01
approach is that the optimization framework is an artificial device. While some design constraints can easily be incorporated into a single cost function...indicating that that point was indeed the solution. Also, an intelligent initial guess for k was important in order to avoid being hung up at the double
Diagnosis of Enzyme Inhibition Using Excel Solver: A Combined Dry and Wet Laboratory Exercise
ERIC Educational Resources Information Center
Dias, Albino A.; Pinto, Paula A.; Fraga, Irene; Bezerra, Rui M. F.
2014-01-01
In enzyme kinetic studies, linear transformations of the Michaelis-Menten equation, such as the Lineweaver-Burk double-reciprocal transformation, present some constraints. The linear transformation distorts the experimental error and the relationship between "x" and "y" axes; consequently, linear regression of transformed data…
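A dry-lab version of the same point fits in a few lines; the sketch below uses Python/scipy in place of the Excel Solver named in the abstract to fit the Michaelis-Menten equation directly by nonlinear regression and compares it with a Lineweaver-Burk double-reciprocal fit. Vmax, Km, the substrate values and the noise level are illustrative assumptions.

```python
# Sketch: nonlinear Michaelis-Menten fit vs Lineweaver-Burk double-reciprocal
# fit on synthetic data (scipy standing in for Excel Solver).
import numpy as np
from scipy.optimize import curve_fit

def mm(S, Vmax, Km):
    return Vmax * S / (Km + S)

rng = np.random.default_rng(0)
S = np.array([0.5, 1, 2, 4, 8, 16, 32.0])                  # substrate, mM
v = mm(S, Vmax=10.0, Km=4.0) + rng.normal(0, 0.3, S.size)  # noisy rates

popt, _ = curve_fit(mm, S, v, p0=[5.0, 1.0])               # direct fit

# Double-reciprocal: 1/v = (Km/Vmax)(1/S) + 1/Vmax. Reciprocation
# inflates the weight of the noisiest (low-rate) points.
slope, intercept = np.polyfit(1 / S, 1 / v, 1)
Vmax_lb, Km_lb = 1 / intercept, slope / intercept

print("nonlinear:         Vmax=%.2f Km=%.2f" % tuple(popt))
print("double-reciprocal: Vmax=%.2f Km=%.2f" % (Vmax_lb, Km_lb))
```

On noisy data the transformed fit typically drifts further from the true parameters, which is the distortion the exercise is designed to expose.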
Analysis of gun barrel rifling twist
NASA Astrophysics Data System (ADS)
Sun, Jia; Chen, Guangsong; Qian, Linfang; Liu, Taisu
2017-05-01
Aiming at the problem of gun barrel rifling twist, the constraint relation between rifling and projectile is investigated. A constraint model of rifling and projectile is established and the geometric relation between the twist and the motion of the projectile is analyzed. Based on the constraint model, the stress on and the motion law of the rotating band in the bore during firing are analyzed. The effects of different rifling types (rib rifling, increasing rifling and combined rifling) on the rotating band (double rotating band or wide driving band) are also investigated. The model is demonstrated by several examples. The numerical examples and the constraint model show that increasing rifling and combined rifling introduce uncertainty factors while the projectile moves in the bore. Considering the amplitude and strength of the twist acting on the rotating band and the stability of the projectile's rotational motion, rib rifling is the better choice.
A survey of methods of feasible directions for the solution of optimal control problems
NASA Technical Reports Server (NTRS)
Polak, E.
1972-01-01
Three methods of feasible directions for optimal control are reviewed. These methods are an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a Zoutendijk method. The categories of continuous optimal control problems are shown as: (1) fixed time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed time problems with inequality state space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.
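As a concrete anchor for the first of these, here is a minimal finite-dimensional sketch of the Frank-Wolfe (conditional-gradient) step: minimizing a quadratic over a box, the discrete analogue of a fixed-time problem with simple bounds on the control. The problem data are invented for illustration.

```python
# Frank-Wolfe (conditional gradient) on a box-constrained quadratic:
# each iteration solves a *linear* subproblem over the feasible set,
# which is what makes the method attractive when constraints are simple.
import numpy as np

Q = np.array([[2.0, 0.5], [0.5, 1.0]])    # positive definite
c = np.array([-1.0, -2.0])
lo, hi = -1.0, 1.0                         # box constraints

x = np.zeros(2)
for k in range(100):
    grad = Q @ x + c                       # gradient of 0.5 x'Qx + c'x
    s = np.where(grad > 0, lo, hi)         # minimize grad.s over the box
    x += (2.0 / (k + 2)) * (s - x)         # classical step size 2/(k+2)
print("approximate minimizer:", x)
```

In the optimal-control setting the linear subproblem is solved pointwise in time over the control constraint set, but the descent logic is the same.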
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, C; Wessels, B; Mansur, D
2015-06-15
Purpose: We investigate the effect of residual setup and motion errors in lung irradiation for VMAT, double scattering (DS) proton beams and spot scanning (IMPT) in a case study. Methods: The CT image and contour sets of a lung patient treated with 6 MV VMAT are re-planned with DS as well as IMPT subject to the same constraints: V20(lung), V10(lung) and V5(lung) < 15%, 20% and 25% respectively, V20(heart) < 25% and V100%(PTV) ≥ 95%. In addition, uncertainty analysis in the form of isocenter shifts (±1-3 mm) was incorporated in the DVH calculations to assess plan robustness. Results: Only the IMPT plan satisfies all the specified constraints. The 3D-conformal DS proton plan achieves better sparing of the lung and heart compared to VMAT. For the lung, V20, V10 and V5 are 13%, 19% and 25% respectively for IMPT; 18%, 23% and 30% respectively for DS; and 20%, 30% and 42% respectively for VMAT. For the heart: 0.6% for IMPT, 2.4% for DS and 30% for VMAT. When incorporating isocenter shifts in the DVH calculations, the maximum changes in V20, V10 and V5 for lung are 14%, 21% and 28% respectively for IMPT. The corresponding maximum changes are 19%, 24% and 32% respectively for DS, and 22%, 32% and 44% respectively for VMAT. The largest change occurs in the PTV coverage. For IMPT, V100%(PTV) varies between 88-96%, while V100%(PTV) for VMAT suffers a larger change compared to DS (Δ=5.5% vs 3.3%). Conclusion: While only IMPT satisfies the stringent dose-volume constraints for lung irradiation, it is not as robust as the 3D-conformal DS plan. DS also has better sparing of lung and heart compared to VMAT and similar PTV coverage. By including isocenter shifts in dose-volume calculations in treatment planning of the lung, DS appears to be more robust than VMAT.
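The dose-volume bookkeeping behind these numbers is simple to state; the sketch below computes Vx metrics and PTV coverage from a dose grid and re-evaluates them under voxel-sized isocenter shifts, mimicked by rolling the grid. The dose array and structure masks are synthetic stand-ins, not patient data.

```python
# Sketch: V20/V10/V5 and V100%(PTV) from a dose grid, re-evaluated under
# rigid shifts. All arrays are random stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(1)
dose = rng.uniform(0, 66, size=(40, 40, 40))   # toy 3D dose grid, Gy
lung = rng.random(dose.shape) < 0.30           # toy lung mask
ptv = rng.random(dose.shape) < 0.05            # toy PTV mask
rx = 60.0                                      # prescription dose, Gy

def V(d, mask, threshold):
    """Percent of the structure receiving at least `threshold` Gy."""
    return 100.0 * np.mean(d[mask] >= threshold)

for shift in [(0, 0, 0), (1, 0, 0), (0, 2, 0)]:   # isocenter shifts, voxels
    d = np.roll(dose, shift, axis=(0, 1, 2))
    print(shift, "V20=%.1f V10=%.1f V5=%.1f V100%%(PTV)=%.1f"
          % (V(d, lung, 20), V(d, lung, 10), V(d, lung, 5), V(d, ptv, rx)))
```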
Adamczyk, L.
2015-08-26
We report a new measurement of the midrapidity inclusive jet longitudinal double-spin asymmetry, A_LL, in polarized pp collisions at center-of-mass energy √s = 200 GeV. The STAR data place stringent constraints on polarized parton distribution functions extracted at next-to-leading order from global analyses of inclusive deep-inelastic scattering (DIS), semi-inclusive DIS, and RHIC pp data. The measured asymmetries also provide evidence at the 3σ level for positive gluon polarization in the Bjorken-x region x > 0.05.
Double-Cascade Events from New Physics in IceCube [Double Bangs from New Physics in IceCube]
Coloma, Pilar; Machado, Pedro A. N.; Martinez-Soler, Ivan; ...
2017-11-16
A variety of new physics models allows for neutrinos to up-scatter into heavier states. If the incident neutrino is energetic enough, the heavy neutrino may travel some distance before decaying. In this work, we consider the atmospheric neutrino flux as a source of such events. At IceCube, this would lead to a "double-bang" (DB) event topology, similar to what is predicted to occur for tau neutrinos at ultrahigh energies. The DB event topology has an extremely low background rate from coincident atmospheric cascades, making this a distinctive signature of new physics. Our results indicate that IceCube should already be able to derive new competitive constraints on models with GeV-scale sterile neutrinos using existing data.
Optimization of a bundle divertor for FED
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hively, L.M.; Rothe, K.E.; Minkoff, M.
1982-01-01
Optimal double-T bundle divertor configurations have been obtained for the Fusion Engineering Device (FED). On-axis ripple is minimized, while satisfying a series of engineering constraints. The ensuing non-linear optimization problem is solved via a sequence of quadratic programming subproblems, using the VMCON algorithm. The resulting divertor designs are substantially improved over previous configurations.
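VMCON's sequence-of-quadratic-programming strategy is the same sequential quadratic programming idea available in standard libraries; the toy sketch below uses scipy's SLSQP on an invented stand-in problem, where the objective and constraints are placeholders for ripple minimization under engineering limits, not the FED model.

```python
# Sketch of the sequence-of-QP idea (SLSQP standing in for VMCON) on a toy
# problem; the objective and constraints are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

ripple = lambda x: (x[0] - 1.0) ** 2 + 0.1 * (x[1] - 2.0) ** 2
cons = [
    {"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2},  # envelope fits
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2.0},            # geometry tie
]
res = minimize(ripple, x0=np.array([0.5, 0.5]), method="SLSQP", constraints=cons)
print("optimum:", res.x, "objective:", round(res.fun, 4))
```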
NASA Astrophysics Data System (ADS)
Onoyama, Takashi; Maekawa, Takuya; Kubota, Sen; Tsuruta, Setuso; Komoda, Norihisa
To build a cooperative logistics network covering multiple enterprises, a planning method that can build a long-distance transportation network is required. Many strict constraints are imposed on this type of problem. To solve these strict-constraint problems, a selfish constraint satisfaction genetic algorithm (GA) is proposed. In this GA, each gene of an individual satisfies only its own constraint selfishly, disregarding the constraints of other genes in the same individual. Moreover, a constraint pre-checking method is also applied to improve the GA convergence speed. Experimental results show that the proposed method obtains accurate solutions in a practical response time.
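The selfish repair and pre-checking steps are easy to illustrate; the sketch below applies them to an invented toy problem (each gene must lie in its own interval) rather than the paper's logistics network, so every problem-specific detail here is an assumption.

```python
# Sketch of selfish constraint satisfaction in a steady-state GA: each gene
# repairs itself against only its own constraint; a cheap pre-check screens
# candidates before fitness evaluation. Toy problem, invented for illustration.
import random

BOUNDS = [(0, 5), (2, 8), (1, 4), (3, 9)]           # one constraint per gene
fitness = lambda ind: sum(abs(a - b) for a, b in zip(ind, ind[1:]))

def selfish_repair(ind):
    # Each gene clamps itself into its own interval, ignoring the others.
    return [min(max(g, lo), hi) for g, (lo, hi) in zip(ind, BOUNDS)]

def precheck(ind):
    # Fast feasibility screen before the (possibly expensive) fitness call.
    return all(lo <= g <= hi for g, (lo, hi) in zip(ind, BOUNDS))

random.seed(0)
pop = [selfish_repair([random.uniform(0, 10) for _ in BOUNDS]) for _ in range(30)]
for _ in range(200):
    a, b = random.sample(pop, 2)
    cut = random.randrange(1, len(BOUNDS))
    child = selfish_repair(a[:cut] + b[cut:])                   # crossover + repair
    child[random.randrange(len(child))] += random.gauss(0, 1)   # mutation
    if precheck(child):                                         # constraint pre-check
        worst = max(range(len(pop)), key=lambda i: fitness(pop[i]))
        if fitness(child) < fitness(pop[worst]):
            pop[worst] = child                                  # steady-state replace
print("best fitness:", round(min(map(fitness, pop)), 3))
```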
NASA Astrophysics Data System (ADS)
Kang, Yeon June
In this thesis an elastic-absorption finite element model of isotropic elastic porous noise control materials is first presented as a means of investigating the effects of finite dimension and edge constraints on the sound absorption by, and transmission through, layers of acoustical foams. Methods for coupling foam finite elements with conventional acoustic and structural finite elements are also described. The foam finite element model based on the Biot theory allows for the simultaneous propagation of the three types of waves known to exist in an elastic porous material. Various sets of boundary conditions appropriate for modeling open, membrane-sealed and panel-bonded foam surfaces are formulated and described. Good agreement was achieved when finite element predictions were compared with previously established analytical results for the plane wave absorption coefficient and transmission loss in the case of wave propagation both in foam-filled waveguides and through foam-lined double panel structures of infinite lateral extent. The primary effect of the edge constraints of a foam layer was found to be an acoustical stiffening of the foam. Constraining the ends of the facing panels in foam-lined double panel systems was also found to increase the sound transmission loss significantly in the low frequency range. In addition, a theoretical multi-dimensional model for wave propagation in anisotropic elastic porous materials was developed to study the effect of anisotropy on the sound transmission of foam-lined noise control treatments. The predictions of the theoretical anisotropic model have been compared with experimental measurements for the random incidence sound transmission through double panel structure lined with polyimide foam. The predictions were made by using the measured and estimated macroscopic physical parameters of polyimide foam samples which were known to be anisotropic. It has been found that the macroscopic physical parameters in the direction normal to the face of foam layer play the principal role in determining the acoustical behavior of polyimide foam layers, although more satisfactory agreement between experimental measurements and theoretical predictions of transmission loss is obtained when the anisotropic properties are allowed in the model.
[Addictions: Motivated or forced care].
Cottencin, Olivier; Bence, Camille
2016-12-01
Patients presenting with addictions are often obliged to consult. This constraint can be explicit (partner, children, parents, doctor, police, justice) or implicit (for their children, for their families, or for their health). Thus, beyond the paradox of caring for subjects who do not ask for treatment, the caregiver also faces a double bind: being seen either as an enforcer of the social order or as a helper of patients. The transtheoretical model of change is complex, showing that change is neither fixed in time nor perpetual for a given individual. This model includes ambivalence, resistance and even relapse, but it still treats constraint as more of a brake than an effective tool. The therapist must have adequate communication tools to enable everyone (coerced or not) to understand that involvement in care will allow them to regain their free will, even if getting there required coercion. In this article we detail the first steps with a patient presenting with addiction: looking for the constraint (implicit or explicit), working with the constraint, avoiding creating resistance ourselves, and making constraint a powerful motivator for change. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
On constraining the speed of gravitational waves following GW150914
NASA Astrophysics Data System (ADS)
Blas, D.; Ivanov, M. M.; Sawicki, I.; Sibiryakov, S.
2016-05-01
We point out that the observed time delay between the detection of the signal at the Hanford and Livingston LIGO sites from the gravitational wave event GW150914 places an upper bound on the speed of propagation of gravitational waves, c_gw ≲ 1.7 in units of the speed of light. Combined with the lower bound from the absence of gravitational Cherenkov losses by cosmic rays, which rules out most subluminal velocities, this gives a model-independent double-sided constraint 1 ≲ c_gw ≲ 1.7. We compare this result to model-specific constraints from pulsar timing and cosmology.
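The arithmetic behind the upper bound is a one-liner; the sketch below reconstructs it from the widely reported numbers (inter-site light travel time of about 10 ms and an observed delay of about 7 ms with sub-ms uncertainty), which are quoted here as assumptions rather than taken from the abstract.

```latex
% The delay satisfies \Delta t = (D\cos\theta)/c_{\mathrm{gw}} \le D/c_{\mathrm{gw}},
% so the observed delay bounds the propagation speed from above:
\frac{c_{\mathrm{gw}}}{c} \;\le\; \frac{D/c}{\Delta t_{\min}}
  \;\approx\; \frac{10\,\mathrm{ms}}{6\,\mathrm{ms}} \;\approx\; 1.7 ,
% with \Delta t_{\min} the lower edge of the measured delay's error interval.
```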
NASA Astrophysics Data System (ADS)
Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Ambrogi, F.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Grossmann, J.; Hrubec, J.; Jeitler, M.; König, A.; Krammer, N.; Krätschmer, I.; Liko, D.; Madlener, T.; Mikulec, I.; Pree, E.; Rabady, D.; Rad, N.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Spanring, M.; Spitzbart, D.; Waltenberger, W.; Wittmann, J.; Wulz, C.-E.; Zarucki, M.; Chekhovsky, V.; Mossolov, V.; Suarez Gonzalez, J.; De Wolf, E. A.; Di Croce, D.; Janssen, X.; Lauwers, J.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; De Bruyn, I.; De Clercq, J.; Deroover, K.; Flouris, G.; Lontkovskyi, D.; Lowette, S.; Moortgat, S.; Moreels, L.; Python, Q.; Skovpen, K.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Parijs, I.; Beghin, D.; Brun, H.; Clerbaux, B.; De Lentdecker, G.; Delannoy, H.; Dorney, B.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Luetic, J.; Maerschalk, T.; Marinov, A.; Randle-conde, A.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Vannerom, D.; Yonamine, R.; Zenoni, F.; Zhang, F.; Cimmino, A.; Cornelis, T.; Dobur, D.; Fagot, A.; Gul, M.; Khvastunov, I.; Poyraz, D.; Roskas, C.; Salva, S.; Tytgat, M.; Verbeke, W.; Zaganidis, N.; Bakhshiansohi, H.; Bondu, O.; Brochet, S.; Bruno, G.; Caputo, C.; Caudron, A.; De Visscher, S.; Delaere, C.; Delcourt, M.; Francois, B.; Giammanco, A.; Jafari, A.; Komm, M.; Krintiras, G.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Musich, M.; Piotrzkowski, K.; Quertenmont, L.; Vidal Marono, M.; Wertz, S.; Beliy, N.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Correa Martins Junior, M.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Coelho, E.; Da Costa, E. M.; Da Silveira, G. G.; De Jesus Damiao, D.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Melo De Almeida, M.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Torres Da Silva De Araujo, F.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Misheva, M.; Rodozov, M.; Shopova, M.; Sultanov, G.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Fang, W.; Gao, X.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Chen, Y.; Jiang, C. H.; Leggat, D.; Liao, H.; Liu, Z.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Yazgan, E.; Zhang, H.; Zhang, S.; Zhao, J.; Ban, Y.; Chen, G.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; González Hernández, C. F.; Ruiz Alvarez, J. D.; Courbon, B.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Sculac, T.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Ferencek, D.; Kadija, K.; Mesic, B.; Starodumov, A.; Susa, T.; Ather, M. W.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Finger, M.; Finger, M.; Carrera Jarrin, E.; Assran, Y.; Mahmoud, M. A.; Mahrous, A.; Dewanjee, R. 
K.; Kadastik, M.; Perrini, L.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Järvinen, T.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Tuominen, E.; Tuominiemi, J.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Faure, J. L.; Ferri, F.; Ganjour, S.; Ghosh, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Kucher, I.; Locci, E.; Machet, M.; Malcles, J.; Negro, G.; Rander, J.; Rosowsky, A.; Sahin, M. Ö.; Titov, M.; Abdulsalam, A.; Amendola, C.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Charlot, C.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Lobanov, A.; Martin Blanco, J.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Stahl Leiton, A. G.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Zghiche, A.; Agram, J.-L.; Andrea, J.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Jansová, M.; Le Bihan, A.-C.; Tonon, N.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fay, J.; Finco, L.; Gascon, S.; Gouzevitch, M.; Grenier, G.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Popov, A.; Sordini, V.; Vander Donckt, M.; Viret, S.; Toriashvili, T.; Tsamalaidze, Z.; Autermann, C.; Feld, L.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Preuten, M.; Schomakers, C.; Schulz, J.; Verlage, T.; Zhukov, V.; Albert, A.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hamer, M.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Teyssier, D.; Thüer, S.; Flügge, G.; Kargoll, B.; Kress, T.; Künsken, A.; Lingemann, J.; Müller, T.; Nehrkorn, A.; Nowack, A.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Arndt, T.; Asawatangtrakuldee, C.; Beernaert, K.; Behnke, O.; Behrens, U.; Bermúdez Martínez, A.; Bin Anuar, A. A.; Borras, K.; Botta, V.; Campbell, A.; Connor, P.; Contreras-Campana, C.; Costanza, F.; Diez Pardos, C.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Eren, E.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Grados Luyando, J. M.; Grohsjean, A.; Gunnellini, P.; Guthoff, M.; Harb, A.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Kasemann, M.; Keaveney, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Lelek, A.; Lenz, T.; Leonard, J.; Lipka, K.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Ntomari, E.; Pitzl, D.; Raspereza, A.; Roland, B.; Savitskyi, M.; Saxena, P.; Shevchenko, R.; Spannagel, S.; Stefaniuk, N.; Van Onsem, G. P.; Walsh, R.; Wen, Y.; Wichmann, K.; Wissing, C.; Zenaiev, O.; Bein, S.; Blobel, V.; Centis Vignali, M.; Dreyer, T.; Garutti, E.; Gonzalez, D.; Haller, J.; Hinzmann, A.; Hoffmann, M.; Karavdina, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Kurz, S.; Lapsien, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Niedziela, M.; Nowatschin, D.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Scharf, C.; Schleper, P.; Schmidt, A.; Schumann, S.; Schwandt, J.; Sonneveld, J.; Stadie, H.; Steinbrück, G.; Stober, F. 
M.; Stöver, M.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Akbiyik, M.; Barth, C.; Baur, S.; Butz, E.; Caspart, R.; Chwalek, T.; Colombo, F.; De Boer, W.; Dierlamm, A.; Freund, B.; Friese, R.; Giffels, M.; Haitz, D.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Kassel, F.; Kudella, S.; Mildner, H.; Mozer, M. U.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Schröder, M.; Shvetsov, I.; Sieber, G.; Simonis, H. J.; Ulrich, R.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Topsis-Giotis, I.; Karathanasis, G.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Kousouris, K.; Evangelou, I.; Foudas, C.; Kokkas, P.; Mallios, S.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Triantis, F. A.; Csanad, M.; Filipovic, N.; Pasztor, G.; Veres, G. I.; Bencze, G.; Hajdu, C.; Horvath, D.; Hunyadi, Á.; Sikler, F.; Veszpremi, V.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Makovec, A.; Molnar, J.; Szillasi, Z.; Bartók, M.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Choudhury, S.; Komaragiri, J. R.; Bahinipati, S.; Bhowmik, S.; Mal, P.; Mandal, K.; Nayak, A.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Dhingra, N.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Kumari, P.; Mehta, A.; Singh, J. B.; Walia, G.; Kumar, Ashok; Shah, Aashaq; Bhardwaj, A.; Chauhan, S.; Choudhary, B. C.; Garg, R. B.; Keshri, S.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, R.; Bhardwaj, R.; Bhattacharya, R.; Bhattacharya, S.; Bhawandeep, U.; Dey, S.; Dutt, S.; Dutta, S.; Ghosh, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Nandan, S.; Purohit, A.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Thakur, S.; Behera, P. K.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Netrakanti, P. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Dugad, S.; Mahakud, B.; Mitra, S.; Mohanty, G. B.; Sur, N.; Sutar, B.; Banerjee, S.; Bhattacharya, S.; Chatterjee, S.; Das, P.; Guchait, M.; Jain, Sa.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Sarkar, T.; Wickramage, N.; Chauhan, S.; Dube, S.; Hegde, V.; Kapoor, A.; Kothekar, K.; Pandey, S.; Rane, A.; Sharma, S.; Chenarani, S.; Eskandari Tadavani, E.; Etesami, S. M.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Errico, F.; Fiore, L.; Iaselli, G.; Lezki, S.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. 
P.; Tosi, N.; Albergo, S.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Chatterjee, K.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Lenzi, P.; Meschini, M.; Paoletti, S.; Russo, L.; Sguazzoni, G.; Strom, D.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Robutti, E.; Tosi, S.; Benaglia, A.; Brianza, L.; Brivio, F.; Ciriolo, V.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pauwels, K.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Fabozzi, F.; Fienga, F.; Iorio, A. O. M.; Khan, W. A.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Benettoni, M.; Boletti, A.; Carlin, R.; Carvalho Antunes De Oliveira, A.; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Dosselli, U.; Gasparini, F.; Gasparini, U.; Gozzelino, A.; Lacaprara, S.; Lujan, P.; Margoni, M.; Pozzobon, N.; Ronchese, P.; Rossin, R.; Simonetto, F.; Torassa, E.; Ventura, S.; Zanetti, M.; Zotto, P.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Ressegotti, M.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Cecchi, C.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Manoni, E.; Mantovani, G.; Mariani, V.; Menichelli, M.; Rossi, A.; Santocchia, A.; Spiga, D.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Boccali, T.; Borrello, L.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Fedi, G.; Giannini, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Manca, E.; Mandorli, G.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; Daci, N.; Del Re, D.; Di Marco, E.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Marzocchi, B.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Cenna, F.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Monteno, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Shchelina, K.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, J.; Lee, S.; Lee, S. W.; Moon, C. S.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Lee, A.; Kim, H.; Moon, D. H.; Oh, G.; Brochero Cifuentes, J. A.; Goh, J.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Kim, J. S.; Lee, H.; Lee, K.; Nam, K.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. h.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Choi, Y.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. N.; Zolkapli, Z.; Reyes-Almanza, R.; Ramirez-Sanchez, G.; Duran-Osuna, M. 
C.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Rabadan-Trejo, R. I.; Lopez-Fernandez, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Pyskir, A.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Di Francesco, A.; Faccioli, P.; Galinhas, B.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. V.; Seixas, J.; Strong, G.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Baginyan, A.; Golunov, A.; Golutvin, I.; Karjavin, V.; Korenkov, V.; Kozlov, G.; Lanev, A.; Malakhov, A.; Matveev, V.; Mitsyn, V. V.; Palichik, V.; Perelygin, V.; Shmatov, S.; Smirnov, V.; Voytishin, N.; Yuldashev, B. S.; Zarubin, A.; Zhiltsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Stepennov, A.; Toms, M.; Vlasov, E.; Zhokin, A.; Aushev, T.; Bylinkin, A.; Chadeeva, M.; Parygin, P.; Philippov, D.; Polikarpov, S.; Popova, E.; Rusinov, V.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Ershov, A.; Gribushin, A.; Khein, L.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Lukina, O.; Miagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Blinov, V.; Shtol, D.; Skovpen, Y.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Barrio Luna, M.; Cerrada, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Moran, D.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; Álvarez Fernández, A.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Cuevas, J.; Erice, C.; Fernandez Menendez, J.; Gonzalez Caballero, I.; González Fernández, J. R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Vischia, P.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Chazin Quero, B.; Curras, E.; Duarte Campderros, J.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Martinez Ruiz del Arbol, P.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Baillon, P.; Ball, A. 
H.; Barney, D.; Bianco, M.; Bloch, P.; Bocci, A.; Botta, C.; Camporesi, T.; Castello, R.; Cepeda, M.; Cerminara, G.; Chapon, E.; Chen, Y.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Dobson, M.; du Pree, T.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Everaerts, P.; Fallavollita, F.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gilbert, A.; Gill, K.; Glege, F.; Gulhan, D.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Karacheban, O.; Kieseler, J.; Kirschenmann, H.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Milenovic, P.; Moortgat, F.; Mulders, M.; Neugebauer, H.; Ngadiuba, J.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Seidel, M.; Selvaggi, M.; Sharma, A.; Silva, P.; Sphicas, P.; Stakia, A.; Steggemann, J.; Stoye, M.; Tosi, M.; Treille, D.; Triossi, A.; Tsirou, A.; Veckalns, V.; Verweij, M.; Zeuner, W. D.; Bertl, W.; Caminada, L.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Wiederkehr, S. A.; Bäni, L.; Berger, P.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Klijnsma, T.; Lustermann, W.; Mangano, B.; Marionneau, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Reichmann, M.; Schönenberger, M.; Shchutska, L.; Tavolaro, V. R.; Theofilatos, K.; Vesterbacka Olsson, M. L.; Wallny, R.; Zhu, D. H.; Aarrestad, T. K.; Amsler, C.; Canelli, M. F.; De Cosa, A.; Del Burgo, R.; Donato, S.; Galloni, C.; Hreus, T.; Kilminster, B.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Seitz, C.; Takahashi, Y.; Zucchetta, A.; Candelise, V.; Doan, T. H.; Jain, Sh.; Khurana, R.; Kuo, C. M.; Lin, W.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chao, Y.; Chen, K. F.; Chen, P. H.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Paganis, E.; Psallidas, A.; Steen, A.; Tsai, J. f.; Asavapibhop, B.; Kovitanggoon, K.; Singh, G.; Srimanobhas, N.; Bakirci, M. N.; Boran, F.; Cerci, S.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Eskut, E.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Polatoz, A.; Topakli, H.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Bilin, B.; Karapinar, G.; Ocalan, K.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Tekten, S.; Yetkin, E. A.; Agaras, M. N.; Atay, S.; Cakir, A.; Cankocak, K.; Grynyov, B.; Levchuk, L.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Davignon, O.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. 
R.; Williams, T.; Auzinger, G.; Bainbridge, R.; Borg, J.; Breeze, S.; Buchmuller, O.; Bundock, A.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Di Maria, R.; Elwood, A.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Lane, R.; Laner, C.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Matsushita, T.; Nash, J.; Nikitenko, A.; Palladino, V.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Scott, E.; Seez, C.; Shtipliyski, A.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wardle, N.; Winterbottom, D.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Smith, C.; Bartek, R.; Dominguez, A.; Buccilli, A.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Cutts, D.; Garabedian, A.; Hakala, J.; Heintz, U.; Hogan, J. M.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Pazzini, J.; Piperov, S.; Sagir, S.; Syarif, R.; Yu, D.; Band, R.; Brainerd, C.; Breedon, R.; Burns, D.; Calderon De La Barca Sanchez, M.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Shi, M.; Smith, J.; Stolp, D.; Tos, K.; Tripathi, M.; Wang, Z.; Bachtis, M.; Bravo, C.; Cousins, R.; Dasgupta, A.; Florent, A.; Hauser, J.; Ignatenko, M.; Mccoll, N.; Regnard, S.; Saltzberg, D.; Schnaible, C.; Valuev, V.; Bouvier, E.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Heilman, J.; Kennedy, E.; Lacroix, F.; Long, O. R.; Olmedo Negrete, M.; Paneva, M. I.; Shrinivas, A.; Si, W.; Wang, L.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Hashemi, B.; Holzner, A.; Klein, D.; Kole, G.; Krutelyov, V.; Letts, J.; Macneill, I.; Masciovecchio, M.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Amin, N.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Franco Sevilla, M.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Heller, R.; Incandela, J.; Mullin, S. D.; Ovcharova, A.; Qu, H.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Bendavid, J.; Bornheim, A.; Lawhorn, J. M.; Newman, H. B.; Nguyen, T.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhang, Z.; Zhu, R. Y.; Andrews, M. B.; Ferguson, T.; Mudholkar, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Weinberg, M.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Mulholland, T.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Abdullin, S.; Albrow, M.; Alyari, M.; Apollinari, G.; Apresyan, A.; Apyan, A.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Canepa, A.; Cerati, G. B.; Cheung, H. W. K.; Chlebana, F.; Cremonesi, M.; Duarte, J.; Elvira, V. D.; Freeman, J.; Gecse, Z.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Harris, R. 
M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. M.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Schneider, B.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Stoynev, S.; Strait, J.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Field, R. D.; Furic, I. K.; Konigsberg, J.; Korytov, A.; Kotov, K.; Ma, P.; Matchev, K.; Mei, H.; Mitselmakher, G.; Rank, D.; Sperka, D.; Terentyev, N.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Joshi, Y. R.; Linn, S.; Markowitz, P.; Rodriguez, J. L.; Ackert, A.; Adams, T.; Askew, A.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Kolberg, T.; Martinez, G.; Perry, T.; Prosper, H.; Saha, A.; Santra, A.; Sharma, V.; Yohay, R.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Cavanaugh, R.; Chen, X.; Evdokimov, O.; Gerber, C. E.; Hangal, D. A.; Hofman, D. J.; Jung, K.; Kamin, J.; Sandoval Gonzalez, I. D.; Tonjes, M. B.; Trauger, H.; Varelas, N.; Wang, H.; Wu, Z.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Castle, J.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Royon, C.; Sanders, S.; Schmitz, E.; Tapia Takaki, J. D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Hadley, N. J.; Jabeen, S.; Jeng, G. Y.; Kellogg, R. G.; Kunkle, J.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Azzolini, V.; Barbieri, R.; Baty, A.; Bi, R.; Brandt, S.; Busza, W.; Cali, I. A.; D'Alfonso, M.; Demiragli, Z.; Gomez Ceballos, G.; Goncharov, M.; Hsu, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Maier, B.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Tatar, K.; Velicanu, D.; Wang, J.; Wang, T. W.; Wyslouch, B.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Hansen, P.; Kalafut, S.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Claes, D. R.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Monroy, J.; Siado, J. E.; Snow, G. R.; Stieger, B.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Hortiangtham, A.; Massironi, A.; Morse, D. 
M.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wood, D.; Bhattacharya, S.; Charaf, O.; Hahn, K. A.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Loukas, N.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Wayne, M.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Ji, W.; Liu, B.; Luo, W.; Puigh, D.; Winer, B. L.; Wulsin, H. W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Higginbotham, S.; Lange, D.; Luo, J.; Marlow, D.; Mei, K.; Ojalvo, I.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Tully, C.; Malik, S.; Norberg, S.; Barker, A.; Barnes, V. E.; Das, S.; Folgueras, S.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Khatiwada, A.; Miller, D. H.; Neumeister, N.; Peng, C. C.; Schulte, J. F.; Sun, J.; Wang, F.; Xie, W.; Cheng, T.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. t.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Ciesielski, R.; Goulianos, K.; Mesropian, C.; Agapitos, A.; Chou, J. P.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Montalvo, R.; Nash, K.; Osherson, M.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Delannoy, A. G.; Foerster, M.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Castaneda Hernandez, A.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Damgov, J.; De Guio, F.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Padeken, K.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Hirosky, R.; Joyce, M.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Wang, Y.; Wolfe, E.; Xia, F.; Harr, R.; Karchin, P. E.; Sturdy, J.; Zaleski, S.; Brodski, M.; Buchanan, J.; Caillol, C.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Hussain, U.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Polese, G.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.
2018-02-01
A first search for same-sign WW production via double-parton scattering is performed based on proton-proton collision data at a center-of-mass energy of 8 TeV using dimuon and electron-muon final states. The search is based on the analysis of data corresponding to an integrated luminosity of 19.7 fb⁻¹. No significant excess of events is observed above the expected single-parton scattering yields. A 95% confidence level upper limit of 0.32 pb is set on the inclusive cross section for same-sign WW production via the double-parton scattering process. This upper limit is used to place a 95% confidence level lower limit of 12.2 mb on the effective double-parton cross section parameter, closely related to the transverse distribution of partons in the proton. This limit on the effective cross section is consistent with previous measurements as well as with Monte Carlo event generator predictions.
Anomalous transport in discrete arcs and simulation of double layers in a model auroral circuit
NASA Technical Reports Server (NTRS)
Smith, Robert A.
1987-01-01
The evolution and long-time stability of a double layer (DL) in a discrete auroral arc require that the parallel current in the arc, which may be considered uniform at the source, be diverted within the arc to charge the flanks of the U-shaped double layer potential structure. A simple model is presented in which this current redistribution is effected by anomalous transport based on electrostatic lower hybrid waves driven by the flank structure itself. This process provides the limiting constraint on the double layer potential. The flank charging may be represented as that of a nonlinear transmission line. A simplified model circuit, in which the transmission line is represented by a nonlinear impedance in parallel with a variable resistor, is incorporated in a one-dimensional simulation model to give the current density at the DL boundaries. Results are presented for the scaling of the DL potential as a function of the width of the arc and the saturation efficiency of the lower hybrid instability mechanism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A
2016-06-15
Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
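The convex pieces of this formulation are easy to reproduce. The following is a minimal cvxpy sketch in the spirit of the abstract, not the authors' ConRad code; the dose-influence matrix, prescription numbers, structure indices, and slack weight are all illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

# Illustrative problem sizes and structures (not from the paper).
np.random.seed(0)
A = np.abs(np.random.randn(1000, 200))   # dose-influence matrix
target = slice(0, 300)                   # planning target volume voxels
oar = slice(300, 1000)                   # organ-at-risk voxels

x = cp.Variable(200, nonneg=True)        # beamlet intensities
s = cp.Variable(nonneg=True)             # slack on the OAR max-dose limit
dose = A @ x

constraints = [
    cp.sum(dose[target]) / 300 >= 60.0,  # mean target dose (convex)
    dose[target] >= 54.0,                # minimum target dose (convex)
    dose[oar] <= 20.0 + s,               # max OAR dose, softened by slack
]
# Penalizing the slack keeps the problem feasible even when the user
# specifies incompatible limits, as in the paper's slack variant.
prob = cp.Problem(cp.Minimize(cp.sum_squares(dose[target] - 60.0) + 1e3 * s),
                  constraints)
prob.solve(solver=cp.ECOS)
```

Dose-volume (percentile) constraints would additionally need the CVaR-style convex restriction described in the abstract, which this sketch omits.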
Narrative Configuration: Some Notes on the Workings of Hindsight
ERIC Educational Resources Information Center
Kvernbekk, Tone
2013-01-01
In this paper I analyze the role of hindsight in narrative configuration. Configuration means the grasping together of disparate elements into a coherent whole. I argue that hindsight, importantly, brings the temporal constraints on what we can know to the fore, but is a double-edged sword. On the one hand, hindsight is an indispensable tool both…
NASA Astrophysics Data System (ADS)
Gueddana, Amor; Attia, Moez; Chatta, Rihab
2015-03-01
In this work, we study the error sources behind the imperfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent the photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.
NASA Astrophysics Data System (ADS)
Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico
2017-06-01
Even the quantum simulation of an apparently simple molecule such as Fe₂S₂ requires a considerable number of qubits of the order of 10⁶, while more complex molecules such as alanine (C₃H₇NO₂) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage of the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency range of a silicon complementary metal-oxide-semiconductor quantum processor to be between 1 and 100 GHz. Such a constraint limits the feasibility of fault-tolerant quantum information processing with complementary metal-oxide-semiconductor technology only to the most advanced nodes. The compatibility with classical complementary metal-oxide-semiconductor control circuitry is discussed, focusing on the cryogenic complementary metal-oxide-semiconductor operation required to bring the classical controller as close as possible to the quantum processor and to enable interfacing thousands of qubits on the same chip via time-division, frequency-division, and space-division multiplexing. The operation time range prospected for cryogenic control electronics is found to be compatible with the operation time expected for qubits. By combining the forecast of the development of scaled technology nodes with operation time and classical circuitry constraints, we derive a maximum quantum information density for logical qubits of 2.8 and 4 Mqb/cm² for the 10- and 7-nm technology nodes, respectively, for the Steane code. The density is one and two orders of magnitude less for surface codes and for concatenated codes, respectively. Such values provide a benchmark for the development of fault-tolerant quantum algorithms by circuital quantum information based on silicon platforms and a guideline for other technologies in general.
Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering
NASA Astrophysics Data System (ADS)
Koehler, Sarah Muraoka
Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is suited for nonlinear optimization problems. The parallel computation of the algorithm exploits iterative linear algebra methods for the main linear algebra computations in the algorithm. We show that the splitting of the algorithm is flexible and can thus be applied to various distributed platform configurations. The two proposed algorithms are applied to two main energy and transportation control problems. The first application is energy efficient building control. Buildings represent 40% of energy consumption in the United States. Thus, it is significant to improve the energy efficiency of buildings. The goal is to minimize energy consumption subject to the physics of the building (e.g. heat transfer laws), the constraints of the actuators as well as the desired operating constraints (thermal comfort of the occupants), and heat load on the system. In this thesis, we describe the control systems of forced air building systems in practice. We discuss the "Trim and Respond" algorithm which is a distributed control algorithm that is used in practice, and show that it performs similarly to a one-step explicit DMPC algorithm. Then, we apply the novel distributed primal-dual active-set method and provide extensive numerical results for the building MPC problem. The second main application is the control of ramp metering signals to optimize traffic flow through a freeway system. This application is particularly important since urban congestion has more than doubled in the past few decades. 
The ramp metering problem is to maximize freeway throughput subject to freeway dynamics (derived from mass conservation), actuation constraints, freeway capacity constraints, and predicted traffic demand. In this thesis, we develop a hybrid model predictive controller for ramp metering that is guaranteed to be persistently feasible and stable. This contrasts to previous work on MPC for ramp metering where such guarantees are absent. We apply a smoothing method to the hybrid model predictive controller and apply the inexact interior point method to this nonlinear non-convex ramp metering problem.
Ibe, Masahiro; Kusenko, Alexander; Yanagida, Tsutomu T.
2016-05-12
Here, we discuss an anthropic explanation of why there exist three generations of fermions. If one assumes that the right-handed neutrino sector is responsible for both the matter-antimatter asymmetry and the dark matter, then anthropic selection favors three or more families of fermions. For successful leptogenesis, at least two right-handed neutrinos are needed, while the third right-handed neutrino is invoked to play the role of dark matter. The number of the right-handed neutrinos is tied to the number of generations by the anomaly constraints of the U(1)_{B−L} gauge symmetry. Combining anthropic arguments with observational constraints, we obtain predictions for the X-ray observations, as well as for neutrinoless double-beta decay.
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, which is one of the most essential tasks of inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model is used to guide the selection of the voxels. The new geometric distance sorting technique largely reduces the unexpected increase of the objective function value that constraint adding inevitably causes, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given, and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases (head-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
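To make the loop concrete, here is a schematic Python version of the iteration, with cvxpy standing in for the interior point QP solver and an arbitrary scoring function standing in for the geometric distance sorting; every name and parameter is illustrative:

```python
import cvxpy as cp
import numpy as np

def solve_with_dv_constraints(A, oar, d_max, frac, score, n_add=20, iters=50):
    """Iteratively add per-voxel dose constraints until the dose-volume
    constraint 'at most frac of OAR voxels exceed d_max' is satisfied."""
    x = cp.Variable(A.shape[1], nonneg=True)
    capped = []                                  # voxels with hard constraints so far
    for _ in range(iters):
        cons = [A[v] @ x <= d_max for v in capped]
        prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - 1.0)), cons)
        prob.solve()                             # interior point QP in the paper
        dose = A @ x.value
        hot = [v for v in oar if dose[v] > d_max and v not in capped]
        if len(hot) <= frac * len(oar):
            return x.value                       # dose-volume constraint met
        # Choose the next candidates by the sorting criterion (the paper's
        # geometric distance; any per-voxel score function here).
        hot.sort(key=score)
        capped.extend(hot[:n_add])
    return x.value
```

The ordering function is where the geometric distance sorting replaces traditional dose sorting; the rest of the loop structure is common to both.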
Method and Apparatus for Powered Descent Guidance
NASA Technical Reports Server (NTRS)
Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)
2013-01-01
A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum error landing problem with convexified constraints, then applies that solution to a minimum fuel landing problem with convexified constraints. The result is a minimum error and minimum fuel solution that is also a feasible solution to the analogous system with non-convex thruster constraints.
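As a rough sketch of the two-problem structure (not the patented algorithm; the double-integrator dynamics, numbers, and simplified convexification are all assumptions), in cvxpy:

```python
import cvxpy as cp
import numpy as np

N, dt, g = 50, 0.5, np.array([0.0, 0.0, -3.71])   # illustrative horizon, Mars gravity
r = cp.Variable((N + 1, 3)); v = cp.Variable((N + 1, 3)); u = cp.Variable((N, 3))

dyn = [r[0] == np.array([2000.0, 500.0, 1500.0]),
       v[0] == np.array([-30.0, 0.0, -75.0])]
for k in range(N):
    dyn += [v[k + 1] == v[k] + dt * (u[k] + g),    # thrust acceleration u
            r[k + 1] == r[k] + dt * v[k],
            cp.norm(u[k]) <= 12.0]                 # convexified thrust bound
dyn += [v[N] == 0, r[N][2] == 0]                   # at rest on the ground plane

# Pass 1: minimum landing error (distance from the target at touchdown).
p1 = cp.Problem(cp.Minimize(cp.norm(r[N][:2])), dyn)
p1.solve()
err = p1.value

# Pass 2: minimum fuel among trajectories achieving that landing error.
p2 = cp.Problem(cp.Minimize(sum(cp.norm(u[k]) for k in range(N))),
                dyn + [cp.norm(r[N][:2]) <= err])
p2.solve()
```

The patent's contribution concerns showing the convexified solution remains feasible for the true non-convex thruster constraints (e.g., a nonzero lower thrust bound), which this toy version does not model.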
Sensitivity of Lumped Constraints Using the Adjoint Method
NASA Technical Reports Server (NTRS)
Akgun, Mehmet A.; Haftka, Raphael T.; Wu, K. Chauncey; Walsh, Joanne L.
1999-01-01
Adjoint sensitivity calculation of stress, buckling and displacement constraints may be much less expensive than direct sensitivity calculation when the number of load cases is large. Adjoint stress and displacement sensitivities are available in the literature. Expressions for local buckling sensitivity of isotropic plate elements are derived in this study. Computational efficiency of the adjoint method is sensitive to the number of constraints and, therefore, the method benefits from constraint lumping. A continuum version of the Kreisselmeier-Steinhauser (KS) function is chosen to lump constraints. The adjoint and direct methods are compared for three examples: a truss structure, a simple HSCT wing model, and a large HSCT model. These sensitivity derivatives are then used in optimization.
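For reference, the KS aggregate and its gradient are a few lines of numpy; the convention g_i ≤ 0 for feasible designs and the value of the draw-down parameter ρ are assumptions, not values from the paper:

```python
import numpy as np

def ks(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of constraint values g_i.

    A smooth, conservative upper bound on max(g); shifting by g_max
    keeps the exponentials from overflowing.
    """
    g = np.asarray(g)
    g_max = g.max()
    return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

def ks_grad(g, rho=50.0):
    """Gradient of the KS aggregate w.r.t. each g_i (softmax weights)."""
    g = np.asarray(g)
    w = np.exp(rho * (g - g.max()))
    return w / w.sum()
```

Because one KS value stands in for many constraints, a single adjoint solve per load case replaces one solve per constraint, which is the efficiency the abstract refers to.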
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amendt, P; Robey, H F; Park, H-S
2003-08-22
An experimental campaign to study hohlraum-driven ignition-like double-shell target performance using the Omega laser facility has begun. These targets are intended to incorporate as many ignition-like properties of the proposed National Ignition Facility (NIF) double-shell ignition design [1,2] as possible, given the energy constraints of the Omega laser. In particular, this latest generation of Omega double-shells is nominally predicted to produce over 99% of the (clean) DD neutron yield from the compressional or stagnation phase of the implosion as required in the NIF ignition design. By contrast, previous double-shell experience on Omega [3] was restricted to cases where a significant fraction of the observed neutron yield was produced during the earlier shock convergence phase where the effects of mix are deemed negligibly small. These new targets are specifically designed to have optimized fall-line behavior for mitigating the effects of pusher-fuel mix after deceleration onset and, thereby, providing maximum neutron yield from the stagnation phase. Experimental results from this recent Omega ignition-like double-shell implosion campaign show favorable agreement with two-dimensional integrated hohlraum simulation studies when enhanced (gold) hohlraum M-band (2-5 keV) radiation is included at a level consistent with observations.
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
Systems and methods for maintaining multiple objects within a camera field-of-view
Gans, Nicholas R.; Dixon, Warren
2016-03-15
In one embodiment, a system and method for maintaining objects within a camera field of view include identifying constraints to be enforced, each constraint relating to an attribute of the viewed objects, identifying a priority rank for the constraints such that more important constraints have a higher priority than less important constraints, and determining the set of solutions that satisfy the constraints relative to the order of their priority rank such that solutions that satisfy lower-ranking constraints are only considered viable if they also satisfy any higher-ranking constraints, each solution providing an indication as to how to control the camera to maintain the objects within the camera field of view.
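A generic sketch of that priority-ranked screening, with constraints as predicates over candidate camera controls (all names illustrative):

```python
def rank_filter(candidates, constraints):
    """Keep solutions satisfying constraints in priority order.

    `constraints` is a list of (priority, predicate) pairs; a lower-ranked
    constraint may only discard candidates that already satisfy every
    higher-ranked constraint. If no candidate satisfies a constraint,
    that constraint is skipped rather than emptying the viable set.
    """
    viable = list(candidates)
    for _, pred in sorted(constraints, key=lambda c: c[0]):
        survivors = [x for x in viable if pred(x)]
        if survivors:            # relax the constraint if it cannot be met
            viable = survivors
    return viable
```

Any surviving candidate then satisfies the largest prefix of the priority list that is jointly satisfiable, which matches the lexicographic behavior the claim describes.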
ERIC Educational Resources Information Center
Barnes, Norine R.; Frazier, Billie H.
This series of single- and double-sheet articles is designed to help parents better understand the role of parents, the skills and constraints involved in parenting, the effects of parenting on child development, and the effects of child development on parenting. The series contains a set of articles which address general aspects of parenting,…
Dynamic Generation of Reduced Ontologies to Support Resource Constraints of Mobile Devices
ERIC Educational Resources Information Center
Schrimpsher, Dan
2011-01-01
As Web Services and the Semantic Web become more important, enabling technologies such as web service ontologies will grow larger. At the same time, use of mobile devices to access web services has doubled in the last year. The ability of these resource constrained devices to download and reason across these ontologies to support service discovery…
Star products on graded manifolds and α′-corrections to Courant algebroids from string theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deser, Andreas, E-mail: andreas.deser@itp.uni-hannover.de
2015-09-15
Courant algebroids, originally used to study integrability conditions for Dirac structures, have turned out to be of central importance to study the effective supergravity limit of string theory. The search for a geometric description of T-duality leads to Double Field Theory (DFT), whose gauge algebra is governed by the C-bracket, a generalization of the Courant bracket in the sense that it reduces to the latter by solving a specific constraint. Recently, in DFT deformations of the C-bracket and O(d, d)-invariant bilinear form to first order in the closed string sigma model coupling α′ were derived by analyzing the transformation properties of the Neveu-Schwarz B-field. By choosing a particular Poisson structure on the Drinfel’d double corresponding to the Courant algebroid structure of the generalized tangent bundle, we are able to interpret the C-bracket and bilinear form in terms of Poisson brackets. As a result, we reproduce the α′-deformations for a specific solution to the strong constraint of DFT as expansion of a graded version of the Moyal-Weyl star product.
A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints
NASA Astrophysics Data System (ADS)
Estiningsih, Y.; Farikhin; Tjahjana, R. H.
2018-03-01
Modelling and solving practical optimization problems are important techniques in linear programming. Redundant constraints are considered for their effects on general linear programming problems: identifying and removing them avoids unnecessary calculations when solving the associated linear programming problem. Many methods have been proposed for identifying redundant constraints. This paper presents a comparison of the Heuristic method and Llewellyn’s rules for the identification of redundant constraints.
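A common baseline that such rules try to beat is the brute-force LP test: row i of Ax ≤ b is redundant when maximizing a_iᵀx over the remaining rows cannot exceed b_i. A minimal scipy sketch (example data and tolerance are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def redundant_constraints(A, b, tol=1e-9):
    """Indices i for which a_i . x <= b_i is implied by the other rows."""
    redundant = []
    for i in range(A.shape[0]):
        keep = [j for j in range(A.shape[0]) if j != i]
        # linprog minimizes, so maximize a_i . x by minimizing -a_i . x.
        res = linprog(-A[i], A_ub=A[keep], b_ub=b[keep], bounds=(None, None))
        if res.status == 0 and -res.fun <= b[i] + tol:
            redundant.append(i)
    return redundant

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 3.0])          # x + y <= 3 is implied by x <= 1, y <= 1
print(redundant_constraints(A, b))     # -> [2]
```

Because this runs one LP per constraint, cheaper screening rules such as Llewellyn’s are attractive for large systems; the LP test remains useful as a correctness check.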
NASA Technical Reports Server (NTRS)
Fadel, G. M.
1991-01-01
The two point exponential approximation method was introduced by Fadel et al. (Fadel, 1990) and tested on structural optimization problems with stress and displacement constraints. Results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.
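As usually stated in the literature, the two point exponential approximation fits per-variable exponents so that the approximation's gradient also matches the previous design point. A sketch under the assumption of positive design variables that changed between iterations (the clipping range is a common safeguard, not necessarily the paper's):

```python
import numpy as np

def tpea(x, x1, g1, dg1, x0, dg0, p_min=-1.0, p_max=1.0):
    """Two-point exponential approximation of a constraint g near x1.

    g1, dg1: value and gradient at the current point x1; dg0: gradient at
    the previous point x0. Exponents p_i are chosen so the approximation's
    gradient also matches dg0 at x0. Assumes positive design variables.
    """
    x, x1, x0, dg1, dg0 = map(np.asarray, (x, x1, x0, dg1, dg0))
    # Gradient matching gives p_i = 1 + ln(dg0_i/dg1_i) / ln(x0_i/x1_i).
    p = 1.0 + np.log(np.abs(dg0 / dg1)) / np.log(x0 / x1)
    p = np.clip(p, p_min, p_max)
    p = np.where(np.abs(p) < 1e-6, 1e-6, p)   # guard the 1/p factor below
    return g1 + np.sum((x**p - x1**p) * x1**(1.0 - p) * dg1 / p)
```

With p_i = 1 this reduces to the linear Taylor approximation, and p_i = -1 recovers the reciprocal approximation popular for stress constraints, which is why the exponent is typically clipped to that range.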
Performance characteristics of a nanoscale double-gate reconfigurable array
NASA Astrophysics Data System (ADS)
Beckett, Paul
2008-12-01
The double gate transistor is a promising device applicable to deep sub-micron design due to its inherent resistance to short-channel effects and superior subthreshold performance. Using both TCAD and SPICE circuit simulation, it is shown that the characteristics of fully depleted dual-gate thin-body Schottky barrier silicon transistors will not only uncouple the conflicting requirements of high performance and low standby power in digital logic, but will also allow the development of a locally-connected reconfigurable computing mesh. The magnitude of the threshold shift effect will scale with device dimensions and will remain compatible with oxide reliability constraints. A field-programmable architecture based on the double gate transistor is described in which the operating point of the circuit is biased via one gate while the other gate is used to form the logic array, such that complex heterogeneous computing functions may be developed from this homogeneous, mesh-connected organization.
Cirigliano, V.; Dekens, W.; de Vries, J.; ...
2017-12-15
Here, we analyze neutrinoless double beta decay (0νββ) within the framework of the Standard Model Effective Field Theory. Apart from the dimension-five Weinberg operator, the first contributions appear at dimension seven. We classify the operators and evolve them to the electroweak scale, where we match them to effective dimension-six, -seven, and -nine operators. In the next step, after renormalization group evolution to the QCD scale, we construct the chiral Lagrangian arising from these operators. We then develop a power-counting scheme and derive the two-nucleon 0νββ currents up to leading order in the power counting for each lepton-number-violating operator. We argue that the leading-order contribution to the decay rate depends on a relatively small number of nuclear matrix elements. We test our power counting by comparing nuclear matrix elements obtained by various methods and by different groups. We find that the power counting works well for nuclear matrix elements calculated from a specific method, while, as in the case of light Majorana neutrino exchange, the overall magnitude of the matrix elements can differ by factors of two to three between methods. We also calculate the constraints that can be set on dimension-seven lepton-number-violating operators from 0νββ experiments and study the interplay between dimension-five and -seven operators, discussing how dimension-seven contributions affect the interpretation of 0νββ in terms of the effective Majorana mass m_ββ.
Crack Instability Predictions Using a Multi-Term Approach
NASA Technical Reports Server (NTRS)
Zanganeh, Mohammad; Forman, Royce G.
2015-01-01
Present crack instability analysis for fracture critical flight hardware is normally performed using a single parameter, K(sub C), fracture toughness value obtained from standard ASTM 2D geometry test specimens made from the appropriate material. These specimens do not sufficiently match the boundary conditions and the elastic-plastic constraint characteristics of the hardware component, and also, the crack instability of most commonly used aircraft and aerospace structural materials have some amount of stable crack growth before fracture which makes the normal use of a K(sub C) single parameter toughness value highly approximate. In the past, extensive studies have been conducted to improve the single parameter (K or J controlled) approaches by introducing parameters accounting for the geometry or in-plane constraint effects. Using 'J-integral' and 'A' parameter as a measure of constraint is one of the most accurate elastic-plastic crack solutions currently available. In this work the feasibility of the J-A approach for prediction of the crack instability was investigated first by ignoring the effects of stable crack growth i.e. using a critical J and A and second by considering the effects of stable crack growth using the corrected J-delta a using the 'A' parameter. A broad range of initial crack lengths and a wide range of specimen geometries including C(T), M(T), ESE(T), SE(T), Double Edge Crack (DEC), Three-Hole-Tension (THT) and NC (crack from a notch) manufactured from Al7075 were studied. Improvements in crack instability predictions were observed compared to the other methods available in the literature.
Mean protein evolutionary distance: a method for comparative protein evolution and its application.
Wise, Michael J
2013-01-01
Proteins are under tight evolutionary constraints, so if a protein changes it can only do so in ways that do not compromise its function. In addition, the proteins in an organism evolve at different rates. Leveraging the history of patristic distance methods, a new method for analysing comparative protein evolution, called Mean Protein Evolutionary Distance (MeaPED), measures differential resistance to evolutionary pressure across viral proteomes and is thereby able to point to the proteins' roles. Different species' proteomes can also be compared because the results, consistent across virus subtypes, concisely reflect the very different lifestyles of the viruses. The MeaPED method is here applied to influenza A virus, hepatitis C virus, human immunodeficiency virus (HIV), dengue virus, rotavirus A, polyomavirus BK and measles, which span the positive and negative single-stranded, double-stranded and reverse-transcribing RNA viruses, and double-stranded DNA viruses. From this analysis, host interaction proteins including hemagglutinin (influenza), and viroporins agnoprotein (polyomavirus), p7 (hepatitis C) and VPU (HIV) emerge as evolutionary hot-spots. By contrast, RNA-directed RNA polymerase proteins including L (measles), PB1/PB2 (influenza) and VP1 (rotavirus), and internal serine proteases such as NS3 (dengue and hepatitis C virus) emerge as evolutionary cold-spots. The hot spot influenza hemagglutinin protein is contrasted with the related cold spot H protein from measles. It is proposed that evolutionary cold-spot proteins can become significant targets for second-line anti-viral therapeutics, in cases where front-line vaccines are not available or have become ineffective due to mutations in the hot-spot, generally more antigenically exposed proteins. The MeaPED package is available from www.pam1.bcs.uwa.edu.au/~michaelw/ftp/src/meaped.tar.gz.
Constraints on Yukawa parameters by double pulsars
NASA Astrophysics Data System (ADS)
Deng, Xue-Mei; Xie, Yi; Huang, Tian-Yi
2013-03-01
Although Einstein's general relativity has passed all the tests so far, alternative theories are still required for a deeper understanding of the nature of gravity. Double pulsars provide us a significant opportunity to test them. In order to probe some modified gravities which try to explain some astrophysical phenomena without dark matter, we use the periastron advance ω̇ of four binary pulsars (PSR B1913+16, PSR B1534+12, PSR J0737-3039 and PSR B2127+11C) to constrain their Yukawa parameters: λ = (3.97 ± 0.01) × 10⁸ m and α = (2.40 ± 0.02) × 10⁻⁸. It might help us to distinguish different gravity theories and get closer to the new physics.
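For context, the Yukawa parametrization being constrained is the standard modification of the Newtonian potential (notation assumed here):

```latex
V(r) = -\frac{G M}{r}\left(1 + \alpha\, e^{-r/\lambda}\right)
```

The exponential term perturbs a binary's orbital precession, so a measured periastron advance ω̇ bounds the strength α at orbital separations comparable to the range λ.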
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
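The LSC baseline that the ASC modifies is ordinary Tikhonov-style smoothing: minimize ‖Gm − d‖² + β²‖Lm‖² with L a discrete Laplacian over the fault patches. A minimal 1-D numpy sketch (G, d, and β are illustrative):

```python
import numpy as np

def laplacian_smoothed_inversion(G, d, beta):
    """Solve min ||G m - d||^2 + beta^2 ||L m||^2 for the slip vector m."""
    n = G.shape[1]
    # Second-difference (1-D Laplacian) regularization matrix.
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    lhs = G.T @ G + beta**2 * (L.T @ L)
    return np.linalg.solve(lhs, G.T @ d)
```

The ASC of the paper replaces the fixed L with one adapted to the slip field, and the regularization parameter β is what the Helmert variance component estimation selects.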
Gemperline, Paul J; Cash, Eric
2003-08-15
A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared spectroscopy (NIR). Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The introduction of incomplete or partial reference information into self-modeling curve resolution models is also described. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
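The core building block, a least squares solve with a soft (penalty) non-negativity constraint, can be sketched in a few lines; the penalty weight and the re-solve loop are illustrative choices, not necessarily the authors' exact formulation:

```python
import numpy as np

def penalized_nnls(A, b, gamma=1e3, iters=20):
    """Least squares with a soft non-negativity penalty on x.

    Approximately minimizes ||A x - b||^2 + gamma * ||min(x, 0)||^2 by
    re-solving with penalty rows added for the currently negative entries.
    """
    n = A.shape[1]
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        neg = x < 0
        if not neg.any():
            break
        # Append sqrt(gamma) * e_i rows for each negative entry, pulling it to 0.
        P = np.sqrt(gamma) * np.eye(n)[neg]
        A_aug = np.vstack([A, P])
        b_aug = np.concatenate([b, np.zeros(neg.sum())])
        x = np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]
    return x
```

Inside an alternating least squares resolution, each concentration or spectral update would use such a penalized solve, so noise produces small constraint violations instead of the hard-clipping distortions the abstract mentions.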
A Novel Face-on-Face Contact Method for Nonlinear Solid Mechanics
NASA Astrophysics Data System (ADS)
Wopschall, Steven Robert
The implicit solution of contact problems in nonlinear solid mechanics poses many difficulties. Traditional node-to-segment methods may suffer from locking and experience contact force chatter in the presence of sliding. More recent developments include mortar-based methods, which resolve local contact interactions over face-pairs and feature a kinematic constraint in integral form that smoothes contact behavior, especially in the presence of sliding. These methods have been shown to perform well in the presence of geometric nonlinearities and are demonstrably more robust than node-to-segment methods. These methods are typically biased, however, interpolating contact tractions and gap equations on a designated non-mortar face, which leads to an asymmetry in the formulation. Another challenge is constraint enforcement. The general selection of the active set of constraints is fraught with difficulty, often leading to non-physical solutions and easily resulting in missed face-pair interactions. Details on reliable constraint enforcement methods are lacking in the greater contact literature. This work presents an unbiased contact formulation utilizing a median-plane methodology. Up to linear polynomials are used for the discrete pressure representation, and integral gap constraints are enforced using a novel subcycling procedure. This procedure reliably determines the active set of contact constraints, leading to physical and kinematically admissible solutions void of heuristics and user action. The contact method presented herein successfully solves difficult quasi-static contact problems in the implicit computational setting. These problems feature finite deformations, material nonlinearity, and complex interface geometries, all of which are challenging characteristics for contact implementations and constraint enforcement algorithms. The subcycling procedure is a key feature of this method, handling active constraint selection for complex interfaces and mesh geometries.
Proposal of Constraints Analysis Method Based on Network Model for Task Planning
NASA Astrophysics Data System (ADS)
Tomiyama, Tomoe; Sato, Tatsuhiro; Morita, Toyohisa; Sasaki, Toshiro
Deregulation has been accelerating several activities toward reengineering business processes, such as railway through-service and modal shift in logistics. To make those activities successful, business entities have to define new business rules or know-how (we call them ‘constraints’). According to the new constraints, they need to manage business resources such as instruments, materials, workers and so on. In this paper, we propose a constraint analysis method to define constraints for task planning of the new business processes. To visualize each constraint's influence on planning, we propose a network model which represents allocation relations between tasks and resources. The network can also represent task ordering relations and resource grouping relations. The proposed method formalizes the way of defining constraints manually as repeatedly checking the network structure and finding conflicts between constraints. Application to crew scheduling problems shows that the method can adequately represent and define constraints of task planning problems with the following fundamental features: (1) specifying work patterns for some resources, (2) restricting the number of resources for some works, (3) requiring multiple resources for some works, (4) prior allocation of some resources to some works and (5) considering the workload balance between resources.
A Framework of Covariance Projection on Constraint Manifold for Data Fusion.
Bakr, Muhammad Abu; Lee, Sukhan
2018-05-17
A general framework of data fusion is presented based on projecting the probability distribution of true states and measurements around the predicted states and actual measurements onto the constraint manifold. The constraint manifold represents the constraints to be satisfied among true states and measurements, which is defined in the extended space with all the redundant sources of data such as state predictions and measurements considered as independent variables. By the general framework, we mean that it is able to fuse any correlated data sources while directly incorporating constraints and identifying inconsistent data without any prior information. The proposed method, referred to here as the Covariance Projection (CP) method, provides an unbiased and optimal solution in the sense of minimum mean square error (MMSE), if the projection is based on the minimum weighted distance on the constraint manifold. The proposed method not only offers a generalization of the conventional formula for handling constraints and data inconsistency, but also provides a new insight into data fusion in terms of a geometric-algebraic point of view. Simulation results are provided to show the effectiveness of the proposed method in handling constraints and data inconsistency.
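For the special case of a linear manifold {x : Hx = b} with Gaussian errors, the projection reduces to the familiar equality-constrained update; a numpy sketch of that case (the full method also covers nonlinear manifolds and inconsistency detection, omitted here):

```python
import numpy as np

def covariance_projection(x, P, H, b):
    """Project estimate (x, P) onto the manifold {x : H x = b}.

    Minimum-weighted-distance projection with weight P^{-1}, i.e. the
    MMSE-optimal constrained estimate for Gaussian errors.
    """
    S = H @ P @ H.T
    K = P @ H.T @ np.linalg.inv(S)
    x_c = x - K @ (H @ x - b)       # constrained mean
    P_c = P - K @ H @ P             # covariance collapsed onto the manifold
    return x_c, P_c
```

Fusing two redundant sensors is recovered by stacking both estimates into the extended state and using H to encode that they measure the same true state.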
Estimating acreage by double sampling using LANDSAT data
NASA Technical Reports Server (NTRS)
Pont, F.; Horwitz, H.; Kauth, R. (Principal Investigator)
1982-01-01
Double sampling techniques employing LANDSAT data for estimating the acreage of corn and soybeans were investigated and evaluated. The evaluation was based on estimated costs and correlations between two existing procedures having differing cost/variance characteristics, and included consideration of their individual merits when coupled with a fictional 'perfect' procedure of zero bias and variance. Two features of the analysis are: (1) the simultaneous estimation of two or more crops; and (2) the imposition of linear cost constraints among two or more types of resource. A reasonably realistic operational scenario was postulated. The costs were estimated from current experience with the measurement procedures involved, and the correlations were estimated from a set of 39 LACIE-type sample segments located in the U.S. Corn Belt. For a fixed variance of the estimate, double sampling with the two existing LANDSAT measurement procedures can result in a 25% or 50% cost reduction. Double sampling which included the fictional perfect procedure results in a more cost-effective combination when it is used with the lower cost/higher variance representative of the existing procedures.
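The classical double-sampling regression estimator underlying such designs pairs a large, cheap sample (the low-cost LANDSAT procedure) with a small, expensive subsample. A generic numpy sketch, not the LACIE-specific estimator:

```python
import numpy as np

def double_sampling_estimate(x_large, x_small, y_small):
    """Regression estimator for the mean of y.

    y is measured only on the small, expensive subsample; x is measured on
    both samples, and the x-y correlation is exploited to cut variance.
    """
    beta = np.cov(x_small, y_small)[0, 1] / np.var(x_small, ddof=1)
    return y_small.mean() + beta * (x_large.mean() - x_small.mean())
```

The variance reduction, and hence the 25-50% cost saving at fixed variance, grows with the correlation between the cheap and expensive procedures.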
Tailored Hypersound Generation in Single Plasmonic Nanoantennas.
Della Picca, Fabricio; Berte, Rodrigo; Rahmani, Mohsen; Albella, Pablo; Bujjamer, Juan M; Poblet, Martín; Cortés, Emiliano; Maier, Stefan A; Bragas, Andrea V
2016-02-10
Ultrashort laser pulses impinging on a plasmonic nanostructure trigger a highly dynamic scenario in the interplay of electronic relaxation with lattice vibrations, which can be experimentally probed via the generation of coherent phonons. In this Letter, we present studies of hypersound generation in the range of a few to tens of gigahertz on single gold plasmonic nanoantennas, which have additionally been subjected to predesigned mechanical constraints via silica bridges. Using these hybrid gold/silica nanoantennas, we demonstrate experimentally and via numerical simulations how mechanical constraints allow control over their vibrational mode spectrum. Degenerate pump-probe techniques with double modulation are performed in order to detect the small changes produced in the probe transmission by the mechanical oscillations of these single nanoantennas.
A coarse-grid projection method for accelerating incompressible flow computations
NASA Astrophysics Data System (ADS)
San, Omer; Staples, Anne
2011-11-01
We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for consequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
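The two grid-transfer operators named above are standard multigrid components. A numpy sketch for square grids with 2N+1 points per side, treating boundary coarse points by injection:

```python
import numpy as np

def restrict_full_weighting(f):
    """Fine (2N+1)^2 field -> coarse (N+1)^2 field via 9-point full weighting."""
    c = f[::2, ::2].copy()                       # injection on the boundary
    c[1:-1, 1:-1] = (4 * f[2:-2:2, 2:-2:2]
                     + 2 * (f[1:-3:2, 2:-2:2] + f[3:-1:2, 2:-2:2]
                            + f[2:-2:2, 1:-3:2] + f[2:-2:2, 3:-1:2])
                     + (f[1:-3:2, 1:-3:2] + f[1:-3:2, 3:-1:2]
                        + f[3:-1:2, 1:-3:2] + f[3:-1:2, 3:-1:2])) / 16.0
    return c

def prolong_bilinear(c):
    """Coarse (N+1)^2 field -> fine (2N+1)^2 field by bilinear interpolation."""
    n = c.shape[0]
    f = np.zeros((2 * n - 1, 2 * n - 1))
    f[::2, ::2] = c                              # coincident points copied
    f[1::2, ::2] = 0.5 * (c[:-1, :] + c[1:, :])  # edge midpoints
    f[::2, 1::2] = 0.5 * (c[:, :-1] + c[:, 1:])
    f[1::2, 1::2] = 0.25 * (c[:-1, :-1] + c[:-1, 1:]
                            + c[1:, :-1] + c[1:, 1:])   # cell centers
    return f
```

In a CGP step, the fine-grid Poisson right-hand side (the vorticity) is restricted, the Poisson equation is solved on the coarse grid, and the streamfunction is prolonged back before the next Runge-Kutta substep.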
Structural optimization of large structural systems by optimality criteria methods
NASA Technical Reports Server (NTRS)
Berke, Laszlo
1992-01-01
The fundamental concepts of the optimality criteria method of structural optimization are presented. The effect of the separability properties of the objective and constraint functions on the optimality criteria expressions is emphasized. The single constraint case is treated first, followed by the multiple constraint case with a more complex evaluation of the Lagrange multipliers. Examples illustrate the efficiency of the method.
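For the single-constraint case, the classic OC resizing rule scales each design variable by its efficiency, the ratio of constraint sensitivity to cost sensitivity, with the Lagrange multiplier found by bisection so the constraint just goes active. A generic sketch with a damping exponent and move limits, as commonly implemented (all values illustrative):

```python
import numpy as np

def oc_update(x, df, dg, g, x_min=1e-3, x_max=1.0, eta=0.5, move=0.2):
    """One optimality-criteria resize for: min f(x) s.t. g(x) <= 0.

    df, dg: gradients of objective and constraint, with dg < 0 assumed
    (adding material relieves the constraint). Bisection on the Lagrange
    multiplier lam drives the constraint toward active, g ~ 0.
    """
    lo, hi = 1e-9, 1e9
    while hi - lo > 1e-10 * (1 + hi):
        lam = 0.5 * (lo + hi)
        factor = (-dg * lam / df) ** eta          # efficiency-based scaling
        x_new = np.clip(x * factor,
                        np.maximum(x - move, x_min),
                        np.minimum(x + move, x_max))
        # First-order estimate of the constraint after the resize.
        if g + dg @ (x_new - x) > 0:
            lo = lam                              # still violated: add material
        else:
            hi = lam
    return x_new
```

The fixed-point character of this update, rather than a line search along a gradient, is what makes OC iterations cheap when the number of active constraints is small, consistent with the observations in the abstract.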
Beyramysoltan, Samira; Rajkó, Róbert; Abdollahi, Hamid
2013-08-12
The results obtained by soft-modeling multivariate curve resolution methods are often not unique and are questionable because of rotational ambiguity, meaning that a range of feasible solutions fit the experimental data equally well and fulfill the constraints. In the chemometrics literature, surveying constraints that usefully reduce rotational ambiguity remains a significant challenge. It is worthwhile to study the effects of applying constraints on the reduction of rotational ambiguity, since this can help in choosing the constraints to impose in multivariate curve resolution methods when analyzing data sets. In this work, we have investigated the effect of the equality constraint on decreasing the rotational ambiguity. For calculation of all feasible solutions corresponding to a known spectrum, a novel systematic grid search method based on Species-based Particle Swarm Optimization is proposed for a three-component system.
Multiconstrained gene clustering based on generalized projections
2010-01-01
Background Gene clustering for annotating gene functions is one of the fundamental issues in bioinformatics. The best clustering solution is often regularized by multiple constraints such as gene expressions, Gene Ontology (GO) annotations and gene network structures. How to integrate multiple constraints for an optimal clustering solution remains an unsolved problem. Results We propose a novel multiconstrained gene clustering (MGC) method within the generalized projection onto convex sets (POCS) framework used widely in image reconstruction. Each constraint is formulated as a corresponding set. The generalized projector iteratively projects the clustering solution onto these sets in order to find a consistent solution included in the intersection set that satisfies all constraints. Compared with previous MGC methods, POCS can integrate multiple constraints of different natures without distorting the original constraints. To evaluate the clustering solution, we also propose a new performance measure referred to as Gene Log Likelihood (GLL) that considers genes having more than one function and hence belonging to more than one cluster. Comparative experimental results show that our POCS-based gene clustering method outperforms current state-of-the-art MGC methods. Conclusions The POCS-based MGC method can successfully combine multiple constraints of different natures for gene clustering. Also, the proposed GLL is an effective performance measure for soft clustering solutions. PMID:20356386
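The POCS iteration itself is compact: cyclically project the current solution onto each constraint set until it stops moving. A generic numpy sketch with two common convex sets (non-negativity and an affine set), not the paper's gene-specific constraints:

```python
import numpy as np

def project_nonneg(x):
    return np.maximum(x, 0.0)

def make_affine_projector(A, b):
    """Projector onto {x : A x = b}, with A of full row rank."""
    AAt_inv = np.linalg.inv(A @ A.T)
    return lambda x: x - A.T @ (AAt_inv @ (A @ x - b))

def pocs(x0, projectors, tol=1e-8, max_iter=1000):
    """Alternating projections; converges into the intersection of the
    convex sets whenever that intersection is non-empty."""
    x = x0.copy()
    for _ in range(max_iter):
        x_prev = x
        for P in projectors:
            x = P(x)
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x
```

Each biological constraint (expression similarity, GO consistency, network structure) plays the role of one projector, which is why new constraints can be added without reformulating the others.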
NASA Astrophysics Data System (ADS)
Dzuba, Sergei A.
2016-08-01
The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow for the determination of their distance distribution function, P(r). P(r) is derived as a solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of automatically fulfilling the non-negativity constraint on P(r). For the case of overlapping broad and narrow distributions, the regularization by increasing the distance discretization length may be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
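In discretized form the problem is V = KP with P ≥ 0, and the role of the discretization length is easy to expose by coarsening the distance grid before a non-negative least-squares solve. A sketch with a generic kernel matrix (the real DEER kernel involves a powder average over dipolar orientations):

```python
import numpy as np
from scipy.optimize import nnls

def solve_distance_distribution(K, V):
    """Non-negative LS solution of the discretized Fredholm equation
    V = K @ P; coarsening the r-grid (fewer columns of K) regularizes."""
    P, residual = nnls(K, V)
    return P, residual

def coarsen_kernel(K, r, step):
    """Merge adjacent distance bins: columns averaged in groups of `step`."""
    n = (K.shape[1] // step) * step
    K_c = K[:, :n].reshape(K.shape[0], -1, step).mean(axis=2)
    r_c = r[:n].reshape(-1, step).mean(axis=1)
    return K_c, r_c
```

Increasing `step` until the fit residual starts to grow mimics the paper's criterion of the largest discretization length still consistent with experiment; the Monte Carlo search over P(r) described in the abstract is an alternative solver with the same built-in non-negativity.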
Obstacle avoidance handling and mixed integer predictive control for space robots
NASA Astrophysics Data System (ADS)
Zong, Lijun; Luo, Jianjun; Wang, Mingming; Yuan, Jianping
2018-04-01
This paper presents a novel obstacle avoidance constraint and a mixed integer predictive control (MIPC) method for space robots avoiding obstacles and satisfying physical limits while performing tasks. Firstly, a novel kind of obstacle avoidance constraint for space robots, which requires the assumption that the manipulator links and the obstacles can be represented by convex bodies, is proposed by limiting the relative velocity between the two closest points, which are on the manipulator and the obstacle, respectively. Furthermore, logical variables are introduced into the obstacle avoidance constraint, so that the constraint form automatically changes to satisfy different obstacle avoidance requirements in different distance intervals between the space robot and the obstacle. Afterwards, the obstacle avoidance constraint and other physical limits of the system, such as joint angle ranges and the amplitude boundaries of joint velocities and joint torques, are described as inequality constraints of a quadratic programming (QP) problem by using the model predictive control (MPC) method. To guarantee the feasibility of the obtained multi-constraint QP problem, the constraints are treated as soft constraints and assigned levels of priority based on propositional logic theory, so that the constraints with lower priorities are always violated first to recover the feasibility of the QP problem. Since logical variables have been introduced, the optimization problem including obstacle avoidance and system physical limits as prioritized inequality constraints is termed the MIPC method for space robots, and its computational complexity, as well as possible strategies for reducing the amount of calculation, is analyzed. Simulations of the space robot unfolding its manipulator and tracking the end-effector's desired trajectories in the presence of obstacles and physical limits are presented to demonstrate the effectiveness of the proposed obstacle avoidance strategy and the MIPC control method for space robots.
Relating constrained motion to force through Newton's second law
NASA Astrophysics Data System (ADS)
Roithmayr, Carlos M.
When a mechanical system is subject to constraints its motion is in some way restricted. In accordance with Newton's second law, motion is a direct result of forces acting on a system; hence, constraint is inextricably linked to force. The presence of a constraint implies the application of particular forces needed to compel motion in accordance with the constraint; absence of a constraint implies the absence of such forces. The objective of this thesis is to formulate a comprehensive, consistent, and concise method for identifying a set of forces needed to constrain the behavior of a mechanical system modeled as a set of particles and rigid bodies. The goal is accomplished in large part by expressing constraint equations in vector form rather than entirely in terms of scalars. The method developed here can be applied whenever constraints can be described at the acceleration level by a set of independent equations that are linear in acceleration. Hence, the range of applicability extends to servo-constraints or program constraints described at the velocity level with relationships that are nonlinear in velocity. All configuration constraints, and an important class of classical motion constraints, can be expressed at the velocity level by using equations that are linear in velocity; therefore, the associated constraint equations are linear in acceleration when written at the acceleration level. Two new approaches are presented for deriving equations governing motion of a system subject to constraints expressed at the velocity level with equations that are nonlinear in velocity. By using partial accelerations instead of the partial velocities normally employed with Kane's method, it is possible to form dynamical equations that either do or do not contain evidence of the constraint forces, depending on the analyst's interests.
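For orientation, a generic LaTeX sketch of constraint equations written at the acceleration level and of the constraint forces they imply (generic notation assumed for illustration, not the thesis's own vector formulation):

    % Constraints linear in acceleration, one row per independent equation:
    A(q,\dot{q},t)\,\ddot{q} = b(q,\dot{q},t)
    % The forces needed to compel motion consistent with the constraints
    % enter the dynamical equations through the transpose of A,
    % with one undetermined multiplier per constraint:
    M(q)\,\ddot{q} = f(q,\dot{q},t) + A^{\mathsf{T}}\lambda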
Current-mode subthreshold MOS implementation of the Herault-Jutten autoadaptive network
NASA Astrophysics Data System (ADS)
Cohen, Marc H.; Andreou, Andreas G.
1992-05-01
Translinear circuits in subthreshold MOS technology and current-mode design techniques for the implementation of neuromorphic analog network processing are investigated. The architecture, also known as the Herault-Jutten network, performs an independent component analysis and is essentially a continuous-time recursive linear adaptive filter. Analog I/O interfaces, weight coefficients, and adaptation blocks are all integrated on the chip. A small network with six neurons and 30 synapses was fabricated in a 2-μm n-well double-polysilicon, double-metal CMOS process. Circuit designs at the transistor level yield area-efficient implementations of the neurons, synapses, and adaptation blocks. The design methodology and constraints, as well as test results from the fabricated chips, are discussed.
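A brief numerical sketch of the Herault-Jutten adaptation rule in a common textbook form (the odd nonlinearities f(y) = y^3 and g(y) = tanh(y), the adaptation rate, and the toy sources are assumptions; the fabricated circuit's exact functions may differ): the recursive network computes y = (I + W)^{-1} x and adapts only the off-diagonal weights.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    s = np.vstack([np.sin(0.05 * np.arange(n)),        # two independent sources
                   rng.uniform(-1, 1, n)])
    A = np.array([[1.0, 0.6], [0.5, 1.0]])             # unknown mixing matrix
    x = A @ s                                          # observed mixtures

    W = np.zeros((2, 2))                               # off-diagonal adaptive weights
    mu = 1e-4                                          # adaptation rate (assumed)
    I = np.eye(2)
    for k in range(n):
        y = np.linalg.solve(I + W, x[:, k])            # recursive net: y = x - W y
        dW = mu * np.outer(y**3, np.tanh(y))           # HJ rule: dW_ij ~ f(y_i) g(y_j)
        np.fill_diagonal(dW, 0.0)                      # only i != j synapses adapt
        W += dW
    # After adaptation, y approximately recovers the independent sources.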
Large neighborhood search for the double traveling salesman problem with multiple stacks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, Russell W; Van Hentenryck, Pascal
This paper considers a complex real-life short-haul/long-haul pickup and delivery application. The problem can be modeled as a double traveling salesman problem (TSP) in which the pickups and the deliveries happen in the first and second TSPs, respectively. Moreover, the application features multiple stacks in which the items must be stored, and the pickups and deliveries must take place in reverse (LIFO) order for each stack. The goal is to minimize the total travel time while satisfying these constraints. This paper presents a large neighborhood search (LNS) algorithm which improves the best-known results on 65% of the available instances and is always within 2% of the best-known solutions.
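A generic large neighborhood search skeleton of the kind the paper describes (a sketch, not the authors' algorithm; the destroy and repair operators, which would enforce LIFO stack feasibility during reinsertion, are passed in as placeholders):

    import random

    def lns(initial_tour, cost, destroy, repair, iters=1000, seed=0):
        # Generic LNS loop: relax part of the solution, reoptimize, keep improvements.
        random.seed(seed)
        best = current = initial_tour
        for _ in range(iters):
            partial = destroy(current)        # remove some pickups/deliveries
            candidate = repair(partial)       # reinsert them, respecting LIFO stacks
            if cost(candidate) < cost(current):
                current = candidate
                if cost(current) < cost(best):
                    best = current
        return best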
Duraisamy, P; Malathy, R
1991-01-01
Cross-sectional and time-series analyses are conducted with 1971 and 1981 rural district-level data for India in order to estimate variations in program impacts on household decisionmaking concerning fertility, child mortality, and schooling; to analyze how the variation in public program subsidies and services influences sex-specific investments in schooling; and to examine the bias in cross-sectional estimates by employing fixed-effects methodology. The theory of household production uses the framework developed by Rosenzweig and Wolpin. The utility function is expressed as a function of a family's desired number of children, sex-specific investment in the human capital of children measured by the schooling of males and females, and a composite consumption good. Budget constraints are characterized in terms of the biological supply of births or natural fertility, the number of births averted by fertility control, exogenous money income, the prices of number of children, contraceptives, child schooling, and consumption of goods. Demand functions are constructed by maximizing the utility function subject to the budget constraint. The data cover 40% of the total districts and 50% of the rural population. The empirical specification of the linear model and variable descriptions are provided. Other explanatory variables included are adult educational attainment; % of scheduled castes and tribes and % Muslim; and % rural population. Estimation methods are described and justification is provided for the use of ordinary least squares and fixed-effects methods. The results of the cross-sectional analysis reveal that own-program effects of family planning and primary health centers reduced family size in 1971 and 1981. The increase in secondary school enrollment is evidenced only in 1971. There is a significant effect of family planning (FP) clinics on the demand for surviving children only in 1971. The presence of a secondary school in a village reduces the demand for children in both years. Primary health centers (PHC) and hospitals in a village encourage boys' and girls' schooling only in 1981. Doubling the number of PHCs per 1000 population would reduce the total fertility rate from 4.05 to 3.85. Doubling secondary schools alone would reduce the total fertility rate to 3.75. A 12% decline in fertility, or a 20% decrease in population growth, would be realized with this doubling. Promotion of female higher education would reduce family size and increase the schooling of females, equalizing enrollments between the sexes. Muslim population increases fertility and reduces schooling for both sexes. The panel results suggest that the effects of hospitals are overstated cross-sectionally, and the effects of FP and secondary schools are understated. Both analyses showed increases in schools to improve female educational attainment.
A novel method for trajectory planning of cooperative mobile manipulators.
Bolandi, Hossein; Ehyaei, Amir Farhad
2011-01-01
We have designed a two-stage scheme to consider the trajectory planning problem of two mobile manipulators for cooperative transportation of a rigid body in the presence of static obstacles. In the first stage, with regard to the static obstacles, we develop a method that searches the workspace for the shortest possible path between the start and goal configurations, by constructing a graph on a portion of the configuration space that satisfies the collision and closure constraints. The final stage is to calculate a sequence of time-optimal trajectories to go between the consecutive points of the path, with regard to the nonholonomic constraints and the maximum allowed joint accelerations. This approach allows geometric constraints such as joint limits and closed-chain constraints, along with differential constraints such as nonholonomic velocity constraints and acceleration limits, to be incorporated into the planning scheme. The simulation results illustrate the effectiveness of the proposed method.
A Novel Method for Trajectory Planning of Cooperative Mobile Manipulators
Bolandi, Hossein; Ehyaei, Amir Farhad
2011-01-01
We have designed a two-stage scheme to consider the trajectory planning problem of two mobile manipulators for cooperative transportation of a rigid body in the presence of static obstacles. In the first stage, with regard to the static obstacles, we develop a method that searches the workspace for the shortest possible path between the start and goal configurations, by constructing a graph on a portion of the configuration space that satisfies the collision and closure constraints. The final stage is to calculate a sequence of time-optimal trajectories to go between the consecutive points of the path, with regard to the nonholonomic constraints and the maximum allowed joint accelerations. This approach allows geometric constraints such as joint limits and closed-chain constraints, along with differential constraints such as nonholonomic velocity constraints and acceleration limits, to be incorporated into the planning scheme. The simulation results illustrate the effectiveness of the proposed method. PMID:22606656
NASA Astrophysics Data System (ADS)
Wolin, Scott; Phenix Collaboration
2011-10-01
The gluon polarization, ΔG = ∫_0^1 Δg(x) dx, is constrained in the region 0.05 < x < 0.2 from measurements of double spin asymmetries, A_LL, for inclusive hadron and jet production at mid-rapidity at RHIC. Theoretical analysis of experimental results shows that ∫_{0.05}^{0.2} Δg(x) dx = 0.013^{+0.106}_{-0.120}. This is not large enough to account for the missing proton spin. However, Δg(x) is unconstrained at low x, and a measurement sensitive to this region will provide important input for future global analyses. The measurement of A_LL for inclusive hadrons and di-hadrons with the Muon Piston Calorimeter (MPC), 3.1 < η < 3.9, provides this sensitivity down to x ≈ 10^{-3} and will lead to the first constraints on Δg(x) at x < 0.05. The di-hadron measurement is especially interesting as it is sensitive to the sign of ΔG and best constrains the parton kinematics, giving the most precise access to x_gluon. The inclusive measurement provides a looser constraint on the event kinematics but has a higher yield. We will present the status of these measurements for the 2009 dataset at √s = 500 GeV and √s = 200 GeV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Reid F.; Zhai, Huifang; Both, Stefan
Purpose: Uncontrolled local growth is the cause of death in ∼30% of patients with unresectable pancreatic cancers. The addition of standard-dose radiotherapy to gemcitabine has been shown to confer a modest survival benefit in this population. Radiation dose escalation with three-dimensional planning is not feasible, but high-dose intensity-modulated radiation therapy (IMRT) has been shown to improve local control. Still, dose-escalation remains limited by gastrointestinal toxicity. In this study, the authors investigate the potential use of double scattering (DS) and pencil beam scanning (PBS) proton therapy in limiting dose to critical organs at risk. Methods: The authors compared DS, PBS, and IMRT plans in 13 patients with unresectable cancer of the pancreatic head, paying particular attention to duodenum, small intestine, stomach, liver, kidney, and cord constraints in addition to target volume coverage. All plans were calculated to 5500 cGy in 25 fractions with equivalent constraints and normalized to prescription dose. All statistics were by two-tailed paired t-test. Results: Both DS and PBS decreased stomach, duodenum, and small bowel dose in low-dose regions compared to IMRT (p < 0.01). However, protons yielded increased doses in the mid to high dose regions (e.g., 23.6–53.8 and 34.9–52.4 Gy for duodenum using DS and PBS, respectively; p < 0.05). Protons also increased generalized equivalent uniform dose to duodenum and stomach, however these differences were small (<5% and 10%, respectively; p < 0.01). Doses to other organs-at-risk were within institutional constraints and placed no obvious limitations on treatment planning. Conclusions: Proton therapy does not appear to reduce OAR volumes receiving high dose. Protons are able to reduce the treated volume receiving low-intermediate doses, however the clinical significance of this remains to be determined in future investigations.
State estimation with incomplete nonlinear constraint
NASA Astrophysics Data System (ADS)
Huang, Yuan; Wang, Xueying; An, Wei
2017-10-01
A problem of state estimation with a new type of constraint, named the incomplete nonlinear constraint, is considered. Targets often move on curved roads; if the width of the road is neglected, the road can be treated as a constraint, and since the positions of the sensors (e.g., radar) are known in advance, this information can be used to enhance the performance of the tracking filter. The problem of how to incorporate this prior knowledge is considered. In this paper, a second-order state constraint is considered. An ellipse-fitting algorithm is adopted to incorporate the prior knowledge by estimating the radius of the trajectory; the fitting problem is transformed into a nonlinear estimation problem. The estimated ellipse function is used to approximate the nonlinear constraint. Then, the typical nonlinear constraint methods proposed in recent works can be used to constrain the target state. Monte Carlo simulation results are presented to illustrate the effectiveness of the proposed method for state estimation with an incomplete constraint.
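A minimal sketch of the ellipse-fitting step, assuming a plain algebraic least-squares conic fit (the paper's exact fitting algorithm is not specified here; the road track is synthetic): noisy track positions are fit to ax² + bxy + cy² + dx + ey + f = 0, which can then serve as an approximate state constraint.

    import numpy as np

    rng = np.random.default_rng(2)
    theta = np.linspace(0, np.pi / 2, 40)
    x = 100 * np.cos(theta) + rng.normal(0, 0.5, theta.size)  # noisy curved-road track
    y = 60 * np.sin(theta) + rng.normal(0, 0.5, theta.size)

    # Algebraic fit: the smallest right singular vector of the design matrix
    # minimizes ||D p|| subject to ||p|| = 1, with p = (a, b, c, d, e, f).
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    p = np.linalg.svd(D)[2][-1]

    def conic_residual(px, py, p):
        a, b, c, d, e, f = p
        return a * px**2 + b * px * py + c * py**2 + d * px + e * py + f

    # The fitted conic approximates the road constraint; a constrained filter
    # can project state predictions toward conic_residual(...) = 0.
    print(conic_residual(x, y, p).std())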
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz
This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
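A compact sketch of the particle swarm step at the core of such a design loop (generic PSO with standard inertia and acceleration coefficients; the torque-density objective and the bound values are placeholders for the MEC evaluation and the geometric constraints):

    import numpy as np

    def objective(x):
        # Placeholder for the MEC torque-density evaluation of a design
        # x = (stator pole length, magnet length, rotor thickness).
        return np.sum((x - np.array([0.4, 0.2, 0.1]))**2)

    rng = np.random.default_rng(3)
    lo = np.array([0.1, 0.05, 0.02])      # assumed geometric lower bounds
    hi = np.array([0.8, 0.40, 0.30])      # assumed geometric upper bounds
    n, dim = 30, 3
    X = rng.uniform(lo, hi, (n, dim))     # particle positions
    V = np.zeros((n, dim))
    pbest = X.copy()
    pcost = np.array([objective(x) for x in X])
    g = pbest[pcost.argmin()]             # global best

    w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration coefficients
    for _ in range(200):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)        # keep designs inside the search space
        cost = np.array([objective(x) for x in X])
        better = cost < pcost
        pbest[better], pcost[better] = X[better], cost[better]
        g = pbest[pcost.argmin()]
    print(g)                              # best design found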
The structure and dynamics in solution of Cu(I) pseudoazurin from Paracoccus pantotrophus.
Thompson, G. S.; Leung, Y. C.; Ferguson, S. J.; Radford, S. E.; Redfield, C.
2000-01-01
The solution structure and backbone dynamics of Cu(I) pseudoazurin, a 123 amino acid electron transfer protein from Paracoccus pantotrophus, have been determined using NMR methods. The structure was calculated to high precision, with a backbone RMS deviation for secondary structure elements of 0.35±0.06 Å, using 1,498 distance and 55 torsion angle constraints. The protein has a double-wound Greek-key fold with two alpha-helices toward its C-terminus, similar to that of its oxidized counterpart determined by X-ray crystallography. Comparison of the Cu(I) solution structure with the X-ray structure of the Cu(II) protein shows only small differences in the positions of some of the secondary structure elements. Order parameters S2, measured for amide nitrogens, indicate that the backbone of the protein is rigid on the picosecond to nanosecond timescale. PMID:10850794
Neutron Characterization of Encapsulated ATF-1/LANL-1 Mockup Fuel Capsules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogel, Sven C.; Borges, Nicholas Paul; Losko, Adrian Simon
Twenty pellets of mock-up accident tolerant fuels UN-U3Si5 were produced at LANL and loaded in two rodlet/capsule assemblies. Tomographic imaging and diffraction measurements were performed to characterize these samples at the Flight-Path 5 and HIPPO beam lines at LANSCE/LANL between November 2016 and January 2017 as well as in August 2017. The entire ~10 cm long, ~1 cm diameter fuel volume could be characterized; however, due to time constraints, only 2 mm slices in 4 mm increments were characterized with neutron diffraction, and a 28 mm subset of the entire sample was characterized with energy-resolved neutron imaging. The double encapsulation of the fuel into two steel containers does not pose a problem for the neutron analysis, and the methods could be applied to enriched as well as irradiated fuels.
NASA Astrophysics Data System (ADS)
Meng, Qizhi; Xie, Fugui; Liu, Xin-Jun
2018-06-01
This paper deals with the conceptual design, kinematic analysis and workspace identification of a novel four degrees-of-freedom (DOFs) high-speed spatial parallel robot for pick-and-place operations. The proposed spatial parallel robot consists of a base, four arms and a 1½ mobile platform. The mobile platform is a major innovation that avoids output singularity and offers the advantages of both single and double platforms. To investigate the characteristics of the robot's DOFs, a line graph method based on Grassmann line geometry is adopted in mobility analysis. In addition, the inverse kinematics is derived, and the constraint conditions to identify the correct solution are also provided. On the basis of the proposed concept, the workspace of the robot is identified using a set of presupposed parameters by taking input and output transmission index as the performance evaluation criteria.
NASA Astrophysics Data System (ADS)
Valsecchi, Francesca
Binary star systems hosting black holes, neutron stars, and white dwarfs are unique laboratories for investigating both extreme physical conditions and stellar and binary evolution. Black holes and neutron stars are observed in X-ray binaries, where mass accretion from a stellar companion renders them X-ray bright. Although instruments like Chandra have revolutionized the field of X-ray binaries, our theoretical understanding of their origin and formation lags behind. Progress can be made by unravelling the evolutionary history of observed systems. As part of my thesis work, I have developed an analysis method that uses detailed stellar models and all the observational constraints of a system to reconstruct its evolutionary path. This analysis models the orbital evolution from compact-object formation to the present time, the binary orbital dynamics due to explosive mass loss and a possible kick at core collapse, and the evolution from the progenitor's Zero Age Main Sequence to compact-object formation. This method led to a theoretical model for M33 X-7, one of the most massive X-ray binaries known and originally marked as an evolutionary challenge. Compact objects are also expected gravitational wave (GW) sources. In particular, double white dwarfs are both guaranteed GW sources and observed electromagnetically. Although known systems show evidence of tidal deformation and successful GW astronomy requires realistic models of the sources, detached double white dwarfs are generally approximated as point masses. For the first time, I used realistic models to study tidally-driven periastron precession in eccentric binaries. I demonstrated that its imprint on the GW signal yields constraints on the components' masses and that the source would be misclassified if tides are neglected. Beyond this adiabatic precession, tidal dissipation creates a sink of orbital angular momentum. Its efficiency is strongest when tides are dynamic and excite the components' free oscillation modes. Accounting for this effect will determine whether our interpretation of current and future observations will constrain the sources' true physical properties. To investigate dynamic tides I have developed CAFein, a novel code that calculates forced non-adiabatic stellar oscillations using a highly stable and efficient numerical method.
Variability-aware double-patterning layout optimization for analog circuits
NASA Astrophysics Data System (ADS)
Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Lee, Zhao Chuan; Tseng, I.-Lun; Ong, Jonathan Yoong Seang
2018-03-01
The semiconductor industry has adopted multi-patterning techniques to manage the delay in extreme ultraviolet lithography technology. During the design of double-patterning lithography layout masks, two polygons are assigned to different masks if their spacing is less than the minimum printable spacing. With these additional design constraints, it is very difficult to find experienced layout-design engineers who understand the circuit well enough to manually optimize the mask layers and minimize color-induced circuit variations. In this work, we investigate the impact of double-patterning lithography on analog circuits and provide quantitative analysis for our designers to select the optimal mask to minimize the circuit's mismatch. To overcome this problem and improve the turn-around time, we propose a smart "anchoring" placement technique to optimize mask decomposition for analog circuits. We have developed a software prototype that is capable of providing anchoring markers in the layout, allowing industry-standard tools to perform the automated color decomposition process.
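A small sketch of the color-decomposition core (a standard conflict-graph bipartition by BFS, assuming polygons are reduced to points with a minimum printable spacing; real decomposition works on polygon geometry and reports odd conflict cycles as colorability violations):

    from collections import deque

    min_space = 2.0                              # minimum printable spacing (assumed units)
    polys = [(0, 0), (1, 0), (2.5, 0), (-1, 1)]  # toy polygon centroids

    def dist(a, b):
        return ((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5

    # Conflict edge whenever spacing is below the minimum printable spacing.
    adj = {i: [j for j in range(len(polys)) if j != i
               and dist(polys[i], polys[j]) < min_space] for i in range(len(polys))}

    color = {}
    for start in range(len(polys)):
        if start in color:
            continue
        color[start] = 0
        q = deque([start])
        while q:                                 # BFS two-coloring of each component
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]      # assign the opposite mask
                    q.append(v)
                elif color[v] == color[u]:
                    raise ValueError("odd conflict cycle: not two-colorable")
    print(color)                                 # mask assignment per polygon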
Study report on a double isotope method of calcium absorption
NASA Technical Reports Server (NTRS)
1978-01-01
Some of the pros and cons of three methods to study gastrointestinal calcium absorption are briefly discussed. The methods are: (1) a balance study; (2) a single isotope method; and (3) a double isotope method. A procedure for the double isotope method is also included.
Advanced Doubling Adding Method for Radiative Transfer in Planetary Atmospheres
NASA Astrophysics Data System (ADS)
Liu, Quanhua; Weng, Fuzhong
2006-12-01
The doubling adding method (DA) is one of the most accurate tools for detailed multiple-scattering calculations. The principle of the method goes back to the nineteenth century in a problem dealing with reflection and transmission by glass plates. Since then the doubling adding method has been widely used as a reference tool for other radiative transfer models. The method has never been used in operational applications owing to tremendous demand on computational resources from the model. This study derives an analytical expression replacing the most complicated thermal source terms in the doubling adding method. The new development is called the advanced doubling adding (ADA) method. Thanks also to the efficiency of matrix and vector manipulations in FORTRAN 90/95, the advanced doubling adding method is about 60 times faster than the doubling adding method. The radiance (i.e., forward) computation code of ADA is easily translated into tangent linear and adjoint codes for radiance gradient calculations. The simplicity in forward and Jacobian computation codes is very useful for operational applications and for the consistency between the forward and adjoint calculations in satellite data assimilation.
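A scalar toy version of the doubling step (a sketch for a homogeneous, symmetric layer with reflection r and transmission t per sub-layer; the values are illustrative assumptions, and the operational method works with full matrices plus the thermal source terms the abstract describes): doubling combines two identical layers, summing the geometric series of inter-reflections.

    # Start from an optically very thin layer, where single scattering gives
    # r and t to good accuracy (illustrative values, slightly absorbing).
    r, t = 1e-4, 1.0 - 1.2e-4    # thin-layer reflection / transmission

    def double_layer(r, t):
        # Two identical layers: multiple reflections form a geometric series
        # 1 + (r*r) + (r*r)**2 + ... = 1 / (1 - r*r).
        s = 1.0 / (1.0 - r * r)
        return r + t * r * t * s, t * t * s   # combined (R, T)

    for _ in range(20):          # 2**20 thin layers -> optically thick slab
        r, t = double_layer(r, t)
    print(r, t)                  # slab reflection and transmission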
Multi-Objective Programming for Lot-Sizing with Quantity Discount
NASA Astrophysics Data System (ADS)
Kang, He-Yau; Lee, Amy H. I.; Lai, Chun-Mei; Kang, Mei-Sung
2011-11-01
Multi-objective programming (MOP) is one of the popular methods for decision making in a complex environment. In a MOP, decision makers try to optimize two or more objectives simultaneously under various constraints. A complete optimal solution seldom exists, and a Pareto-optimal solution is usually used. Some methods, such as the weighting method, which assigns priorities to the objectives and sets aspiration levels for them, are used to derive a compromise solution. The ɛ-constraint method is a modified weighting method: one of the objective functions is optimized while the other objective functions are treated as constraints and incorporated in the constraint part of the model. This research considers a stochastic lot-sizing problem with multiple suppliers and quantity discounts. The model is then transformed into a mixed integer programming (MIP) model based on the ɛ-constraint method. An illustrative example is used to demonstrate the practicality of the proposed model. The results show that the model is an effective and accurate tool for determining the replenishment of a manufacturer from multiple suppliers over multiple periods.
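A tiny ɛ-constraint illustration with scipy (a sketch, not the paper's lot-sizing model; the two objectives and the feasibility constraint are made up): minimize f1 while treating a second objective f2 as the constraint f2 ≤ ɛ, then sweep ɛ to trace Pareto-optimal trade-offs.

    import numpy as np
    from scipy.optimize import linprog

    # Two objectives over x = (x1, x2) >= 0 with x1 + x2 >= 1:
    #   f1 = 3*x1 + x2 (minimized), f2 = x1 + 4*x2 (epsilon-constrained).
    for eps in [1.0, 2.0, 3.0]:
        res = linprog(c=[3, 1],
                      A_ub=[[-1, -1],     # x1 + x2 >= 1  ->  -x1 - x2 <= -1
                            [1, 4]],      # f2 <= eps
                      b_ub=[-1, eps],
                      bounds=[(0, None), (0, None)])
        print(eps, res.x, res.fun)        # one Pareto point per epsilon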
Level-Set Topology Optimization with Aeroelastic Constraints
NASA Technical Reports Server (NTRS)
Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia
2015-01-01
Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated using three problems with mass, linear buckling and flutter objective and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
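A short sketch of one common constraint-aggregation choice, the Kreisselmeier-Steinhauser (KS) function (the paper does not state which aggregate it uses, so this specific form is an assumption): many buckling constraints g_i ≤ 0 collapse into one smooth, conservative constraint.

    import numpy as np

    def ks_aggregate(g, rho=50.0):
        # KS envelope of constraints g_i <= 0; conservative: KS >= max(g).
        m = g.max()
        return m + np.log(np.sum(np.exp(rho * (g - m)))) / rho

    g = np.array([-0.30, -0.05, -0.12, -0.01])   # toy per-panel buckling margins
    print(g.max(), ks_aggregate(g))              # single smooth constraint value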
Augmenting transport versus increasing cold storage to improve vaccine supply chains.
Haidari, Leila A; Connor, Diana L; Wateska, Angela R; Brown, Shawn T; Mueller, Leslie E; Norman, Bryan A; Schmitz, Michelle M; Paul, Proma; Rajgopal, Jayant; Welling, Joel S; Leonard, Jim; Chen, Sheng-I; Lee, Bruce Y
2013-01-01
When addressing the urgent task of improving vaccine supply chains, especially to accommodate the introduction of new vaccines, there is often a heavy emphasis on stationary storage. Currently, donations to vaccine supply chains occur largely in the form of storage equipment. This study utilized a HERMES-generated detailed, dynamic, discrete event simulation model of the Niger vaccine supply chain to compare the impacts on vaccine availability of adding stationary cold storage versus transport capacity at different levels and to determine whether adding stationary storage capacity alone would be enough to relieve potential bottlenecks when pneumococcal and rotavirus vaccines are introduced by 2015. Relieving regional level storage bottlenecks increased vaccine availability (by 4%) more than relieving storage bottlenecks at the district (1% increase), central (no change), and clinic (no change) levels alone. Increasing transport frequency (or capacity) yielded far greater gains (e.g., 15% increase in vaccine availability when doubling transport frequency to the district level and 18% when tripling). In fact, relieving all stationary storage constraints could only increase vaccine availability by 11%, whereas doubling the transport frequency throughout the system led to a 26% increase and tripling the frequency led to a 30% increase. Increasing transport frequency also reduced the amount of stationary storage space needed in the supply chain. The supply chain required an additional 61,269 L of storage to relieve constraints with the current transport frequency, 55,255 L with transport frequency doubled, and 51,791 L with transport frequency tripled. When evaluating vaccine supply chains, it is important to understand the interplay between stationary storage and transport. The HERMES-generated dynamic simulation model showed how augmenting transport can result in greater gains than only augmenting stationary storage and can reduce stationary storage needs.
Oh, Keonyoung; Baek, Juhyun; Park, Sukyung
2012-11-15
To maintain steady and level walking, push-off propulsion during the double support phase compensates for the energy loss through heel strike collisions in an energetically optimal manner. However, a large portion of daily gait activities also contains transient gait responses, such as acceleration or deceleration, during which the observed dominance of the push-off work or the energy optimality may not hold. In this study, we examined whether the push-off propulsion during the double support phase served as a major energy source for gait acceleration, and we also studied the energetic optimality of accelerated gait using a simple bipedal walking model. Seven healthy young subjects participated in the over-ground walking experiments. The subjects walked at four different constant gait speeds ranging from a self-selected speed to a maximum gait speed, and then they accelerated their gait from zero to the maximum gait speed using a self-selected acceleration ratio. We measured the ground reaction force (GRF) of three consecutive steps and the corresponding leg configuration using force platforms and an optical marker system, respectively, and we compared the mechanical work performed by the GRF during each single and double support phase. In contrast to the model prediction of an increase in the push-off propulsion that is proportional to the acceleration and minimizes the mechanical energy cost, the push-off propulsion was slightly increased, and a significant increase in the mechanical work during the single support phase was observed. The results suggest that gait acceleration occurs while accommodating a feasible push-off propulsion constraint. Copyright © 2012 Elsevier Ltd. All rights reserved.
Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert
2011-01-01
Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints derived from the RPGs, especially those non-local in sequence, were important for building models nearer to the native structure. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729
NASA Astrophysics Data System (ADS)
Halpaap, Felix; Rondenay, Stéphane; Ottemöller, Lars
2016-04-01
The Western Hellenic subduction zone is characterized by a transition from oceanic to continental subduction. In the southern oceanic portion of the system, abundant seismicity reaches intermediate depths of 100-120 km, while the northern continental portion rarely exhibits deep earthquakes. Our study aims to investigate how this oceanic-continental transition affects fluid release and related seismicity along strike, by focusing on the distribution of intermediate depth earthquakes. To obtain a detailed image of the seismicity, we carry out a tomographic inversion for P- and S-velocities and double-difference earthquake relocation using a dataset of unprecedented spatial coverage in this area. Here we present results of these analyses in conjunction with high-resolution profiles from migrated receiver function images obtained from the MEDUSA experiment. We generate tomographic models by inverting data from 237 manually picked, well-locatable events recorded at up to 130 stations. Stations from the permanent Greek network and the EGELADOS experiment supplement the 3-D coverage of the modeled domain, which covers a large part of mainland Greece and surrounding offshore areas. Corrections for the sphericity of the Earth and our update to the SIMULR16 package, which now allows S-inversion, help improve our previous models. Flexible gridding focuses the inversion on the domains of highest gradient around the slab, and we evaluate the resolution with checkerboard tests. We use the resulting velocity model to relocate earthquakes via the double-difference method, using a large dataset of differential traveltimes obtained by cross-correlation of seismograms. Tens of earthquakes align along two planes forming a double seismic zone in the southern, oceanic portion of the subduction zone. With increasing subduction depth, the earthquakes appear closer to the center of the slab, outlining probable deserpentinization of the slab and concomitant eclogitization of dry crustal rocks. Against expectations, we relocate one robust deep event at ≈70 km depth in the northern, continental part of the subduction zone.
A Compact, Tunable Near-UV Source for Quantitative Microgravity Combustion Diagnostics
NASA Technical Reports Server (NTRS)
Peterson, K. A.; Oh, D. B.
1999-01-01
There is a need for improved optical diagnostic methods for use in microgravity combustion research. Spectroscopic methods with fast time response that can provide absolute concentrations and concentration profiles of important chemical species in flames are needed to facilitate the understanding of combustion kinetics in microgravity. Although a variety of sophisticated laser-based diagnostics (such as planar laser induced fluorescence, degenerate four wave mixing and coherent Raman methods) have been applied to the study of combustion in laboratory flames, the instrumentation associated with these methods is not well suited to microgravity drop tower or space station platforms. Important attributes of diagnostic systems for such applications include compact size, low power consumption, ruggedness, and reliability. We describe a diode laser-based near-UV source designed with the constraints of microgravity research in mind. Coherent light near 420 nm is generated by frequency doubling in a nonlinear crystal. This light source is single mode with a very narrow bandwidth suitable for gas phase diagnostics, can be tuned over several cm⁻¹, and can be wavelength modulated at up to MHz frequencies. We demonstrate the usefulness of this source for combustion diagnostics by measuring CH radical concentration profiles in an atmospheric pressure laboratory flame. The radical concentrations are measured using wavelength modulation spectroscopy (WMS) to obtain the line-of-sight integrated absorption for different paths through the flame. Laser induced fluorescence (LIF) measurements are also demonstrated with this instrument, showing the feasibility of simultaneous WMS absorption and LIF measurements with the same light source. LIF detection perpendicular to the laser beam can be used to map relative species densities along the line-of-sight while the integrated absorption available through WMS provides a mathematical constraint on the extraction of quantitative information from the LIF data. Combining absorption with LIF - especially if the measurements are made simultaneously with the same excitation beam - may allow elimination of geometrical factors and effects of intensity fluctuations (common difficulties with the analysis of LIF data) from the analysis.
Tracking fin whales in the northeast Pacific Ocean with a seafloor seismic network.
Wilcock, William S D
2012-10-01
Ocean bottom seismometer (OBS) networks represent a tool of opportunity to study fin and blue whales. A small OBS network on the Juan de Fuca Ridge in the northeast Pacific Ocean in ~2.3 km of water recorded an extensive data set of 20-Hz fin whale calls. An automated method has been developed to identify arrival times based on instantaneous frequency and amplitude and to locate calls using a grid search even in the presence of a few bad arrival times. When only one whale is calling near the network, tracks can generally be obtained up to distances of ~15 km from the network. When the calls from multiple whales overlap, user supervision is required to identify tracks. The absolute and relative amplitudes of arrivals and their three-component particle motions provide additional constraints on call location but are not useful for extending the distance to which calls can be located. The double-difference method inverts for changes in relative call locations using differences in residuals for pairs of nearby calls recorded on a common station. The method significantly reduces the unsystematic component of the location error, especially when inconsistencies in arrival time observations are minimized by cross-correlation.
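A minimal sketch of the grid-search step (assuming a constant effective sound speed, known station positions, and synthetic arrival picks; the real method also handles bad picks, amplitudes, and particle motions):

    import numpy as np

    c = 1.48                                   # km/s, assumed effective sound speed
    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    true_pos, t0 = np.array([3.0, 7.0]), 5.0
    t_obs = t0 + np.linalg.norm(stations - true_pos, axis=1) / c  # synthetic picks

    xs = ys = np.arange(-15.0, 25.0, 0.1)      # search grid (km)
    best, best_rms = None, np.inf
    for x in xs:
        for y in ys:
            tt = np.linalg.norm(stations - np.array([x, y]), axis=1) / c
            resid = t_obs - tt
            resid -= resid.mean()              # unknown origin time removed by demeaning
            rms = np.sqrt(np.mean(resid**2))
            if rms < best_rms:
                best, best_rms = (x, y), rms
    print(best, best_rms)                      # should recover ~(3.0, 7.0)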
Optimization of structures to satisfy aeroelastic requirements
NASA Technical Reports Server (NTRS)
Rudisill, C. S.
1975-01-01
A method for the optimization of structures to satisfy flutter velocity constraints is presented along with a method for determining the flutter velocity. A method for the optimization of structures to satisfy divergence velocity constraints is included.
An implicit adaptation algorithm for a linear model reference control system
NASA Technical Reports Server (NTRS)
Mabius, L.; Kaufman, H.
1975-01-01
This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.
Delamination Modeling of Composites for Improved Crash Analysis
NASA Technical Reports Server (NTRS)
Fleming, David C.
1999-01-01
Finite element crash modeling of composite structures is limited by the inability of current commercial crash codes to accurately model delamination growth. Efforts are made to implement and assess delamination modeling techniques using a current finite element crash code, MSC/DYTRAN. Three methods are evaluated, including a straightforward method based on monitoring forces in elements or constraints representing an interface; a cohesive fracture model proposed in the literature; and the virtual crack closure technique commonly used in fracture mechanics. Results are compared with dynamic double cantilever beam test data from the literature. Examples show that it is possible to accurately model delamination propagation in this case. However, the computational demands required for accurate solution are great and reliable property data may not be available to support general crash modeling efforts. Additional examples are modeled including an impact-loaded beam, damage initiation in laminated crushing specimens, and a scaled aircraft subfloor structure in which composite sandwich structures are used as energy-absorbing elements. These examples illustrate some of the difficulties in modeling delamination as part of a finite element crash analysis.
Importance of parametrizing constraints in quantum-mechanical variational calculations
NASA Technical Reports Server (NTRS)
Chung, Kwong T.; Bhatia, A. K.
1992-01-01
In variational calculations of quantum mechanics, constraints are sometimes imposed explicitly on the wave function. These constraints, which are deduced by physical arguments, are often not uniquely defined. In this work, the advantage of parametrizing constraints and letting the variational principle determine the best possible constraint for the problem is pointed out. Examples are carried out to show the surprising effectiveness of the variational method when constraints are parametrized. It is also shown that misleading results may be obtained if a constraint is not parametrized.
Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign
2007-01-01
Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly lesser computational resources. Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer length sequences. The revised Dynalign code is freely available for download. PMID:17445273
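A schematic sketch of the constraint-derivation step described above (the matrix shapes, the stand-in posteriors, and the threshold value are assumptions; the hidden Markov model that produces the real posteriors is not shown): alignment and insertion posteriors are combined additively into co-incidence probabilities, which are then thresholded into a boolean alignment-constraint mask for Dynalign.

    import numpy as np

    rng = np.random.default_rng(4)
    L1, L2 = 120, 118                       # toy sequence lengths (~5S RNA scale)
    P_aln = rng.dirichlet(np.ones(L2), L1)  # stand-in aligned-pair posteriors
    P_ins1 = rng.random((L1, L2)) * 0.01    # stand-in insertion posteriors, seq 1
    P_ins2 = rng.random((L1, L2)) * 0.01    # stand-in insertion posteriors, seq 2

    # Additive combination into co-incidence probabilities (schematic), then
    # thresholding yields the permissible (i, j) pairs handed to Dynalign.
    P_co = P_aln + P_ins1 + P_ins2
    threshold = 0.01                        # assumed cutoff
    allowed = P_co >= threshold             # boolean alignment-constraint mask
    print(allowed.sum(), "of", allowed.size, "position pairs remain permissible")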
Evolutionary branching under multi-dimensional evolutionary constraints.
Ito, Hiroshi; Sasaki, Akira
2016-10-21
The fitness of an existing phenotype and of a potential mutant should generally depend on the frequencies of other existing phenotypes. Adaptive evolution driven by such frequency-dependent fitness functions can be analyzed effectively using adaptive dynamics theory, assuming rare mutation and asexual reproduction. When possible mutations are restricted to certain directions due to developmental, physiological, or physical constraints, the resulting adaptive evolution may be restricted to subspaces (constraint surfaces) with fewer dimensionalities than the original trait spaces. To analyze such dynamics along constraint surfaces efficiently, we develop a Lagrange multiplier method in the framework of adaptive dynamics theory. On constraint surfaces of arbitrary dimensionalities described with equality constraints, our method efficiently finds local evolutionarily stable strategies, convergence stable points, and evolutionary branching points. We also derive the conditions for the existence of evolutionary branching points on constraint surfaces when the shapes of the surfaces can be chosen freely. Copyright © 2016 Elsevier Ltd. All rights reserved.
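In outline, the first-order condition such a Lagrange multiplier method enforces on a constraint surface can be sketched in LaTeX as follows (generic notation assumed for illustration; the paper's full conditions also classify convergence and evolutionary stability):

    % Singular points of adaptive dynamics restricted to g(x) = 0:
    % the selection gradient must be normal to the constraint surface,
    \nabla_{x'} f(x',x)\big|_{x'=x} = \lambda\,\nabla g(x),
    % with \lambda the Lagrange multiplier; stability is then judged
    % within the tangent space of the constraint surface.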
Study on dynamic deformation synchronized measurement technology of double-layer liquid surfaces
NASA Astrophysics Data System (ADS)
Tang, Huiying; Dong, Huimin; Liu, Zhanwei
2017-11-01
Accurate measurement of the dynamic deformation of double-layer liquid surfaces plays an important role in many fields, such as fluid mechanics, biomechanics, the petrochemical industry and aerospace engineering. It is difficult for traditional methods to measure the dynamic deformation of double-layer liquid surfaces synchronously. In this paper, a novel and effective method for full-field static and dynamic deformation measurement of double-layer liquid surfaces has been developed: geometric phase analysis (GPA) of the wavefront distortion of double-wavelength transmitted light. The double-wavelength lattice patterns used here are produced by two techniques, one using a double-wavelength laser and the other a liquid crystal display (LCD). The techniques exploit characteristics of the liquid such as high transparency, low reflectivity and fluidity. Two-color lattice patterns produced by the laser and the LCD were projected at a certain angle through the tested double-layer liquid surfaces simultaneously. On the basis of the difference between the refractive indexes of the two transmitted lights, the double-layer liquid surfaces were decoupled with the GPA method. Combined with the derived relationship between the phase variation of the transmission-lattice patterns and the out-of-plane heights of the two surfaces, as well as the height curves of the liquid level, the double-layer liquid surfaces can be reconstructed successfully. Compared with traditional measurement methods, the developed method not only has the common advantages of optical measurement methods, such as high precision, full field and non-contact, but is also simple, low cost and easy to set up.
Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.
2013-01-01
Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272
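A toy TMFA-style coupling for a single reaction A -> B (a schematic sketch with assumed numbers, not the paper's genome-scale E. coli model): because log-concentrations enter the free energy linearly, the direction-coupling can be written as a mixed-integer linear program, here with PuLP and big-M constraints.

    import pulp

    RT, M, eps = 2.48, 100.0, 1e-3           # kJ/mol at ~298 K; big-M; sign margin
    dG0 = -5.0                               # assumed standard free energy (kJ/mol)

    prob = pulp.LpProblem("tmfa_toy", pulp.LpMaximize)
    v = pulp.LpVariable("v", -10, 10)        # net flux
    z = pulp.LpVariable("z", cat="Binary")   # 1 -> forward direction allowed
    lnA = pulp.LpVariable("lnA", -11.5, 0)   # assumed ln-concentration bounds (M)
    lnB = pulp.LpVariable("lnB", -11.5, 0)

    dG = dG0 + RT * (lnB - lnA)              # reaction free energy, linear in ln c
    prob += v                                # objective: maximize forward flux
    prob += v <= 10 * z                      # forward flux only if z = 1
    prob += v >= -10 * (1 - z)               # backward flux only if z = 0
    prob += dG <= -eps + M * (1 - z)         # z = 1 forces dG < 0
    prob += dG >= eps - M * z                # z = 0 forces dG > 0
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.value(v), pulp.value(dG))     # feasible flux with consistent dG sign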
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method benefits from the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
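A small sketch of the commute-time computation that underlies the method (the standard identity via the Laplacian pseudoinverse; the graph here is a toy adjacency matrix, not the paper's data):

    import numpy as np

    W = np.array([[0, 1, 1, 0],      # toy weighted adjacency graph
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    L = np.diag(W.sum(1)) - W        # graph Laplacian
    Lp = np.linalg.pinv(L)           # Moore-Penrose pseudoinverse
    vol = W.sum()                    # graph volume (sum of degrees)

    def commute_time(i, j):
        # Expected steps for a random walk i -> j -> i:
        # C(i, j) = vol * (Lp_ii + Lp_jj - 2 * Lp_ij).
        return vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])

    print(commute_time(0, 1), commute_time(0, 3))  # well-connected pair vs. periphery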
Structural optimization for joined-wing synthesis
NASA Technical Reports Server (NTRS)
Gallman, John W.; Kroo, Ilan M.
1992-01-01
The differences between fully stressed and minimum-weight joined-wing structures are identified, and these differences are quantified in terms of weight, stress, and direct operating cost. A numerical optimization method and a fully stressed design method are used to design joined-wing structures. Both methods determine the sizes of 204 structural members, satisfying 1020 stress constraints and five buckling constraints. Monotonic splines are shown to be a very effective way of linking spanwise distributions of material to a few design variables. Both linear and nonlinear analyses are employed to formulate the buckling constraints. With a constraint on buckling, the fully stressed design is shown to be very similar to the minimum-weight structure. It is suggested that a fully stressed design method based on nonlinear analysis is adequate for an aircraft optimization study.
Zhao, Bo; Haldar, Justin P.; Christodoulou, Anthony G.; Liang, Zhi-Pei
2012-01-01
Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k, t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method. PMID:22695345
A dual method for optimal control problems with initial and final boundary constraints.
NASA Technical Reports Server (NTRS)
Pironneau, O.; Polak, E.
1973-01-01
This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.
NASA Astrophysics Data System (ADS)
Holanda, R. F. L.
2018-05-01
In this paper, we propose a new method to obtain the depletion factor γ(z), the ratio by which the measured baryon fraction in galaxy clusters is depleted with respect to the universal mean. We use exclusively galaxy cluster data, namely, X-ray gas mass fraction (fgas) and angular diameter distance measurements from Sunyaev-Zel'dovich effect plus X-ray observations. The galaxy clusters are the same in both data sets, and the non-isothermal spherical double β-model was used to describe their electron density and temperature profiles. In order to compare our results with those from recent cosmological hydrodynamical simulations, we suppose a possible time evolution for γ(z), such as γ(z) = γ0(1 + γ1z). Our main conclusions are as follows: the γ0 value is in full agreement with the simulations; on the other hand, although the γ1 value found in our analysis is compatible with γ1 = 0 within 2σ c.l., our results show a non-negligible time evolution for the depletion factor, unlike the results of the simulations. However, we also put constraints on γ(z) by using the fgas measurements and angular diameter distances obtained from the flat ΛCDM model (Planck results) and from a sample of galaxy clusters described by an elliptical profile. For these cases no significant time evolution for γ(z) was found. Then, if a constant depletion factor is an inherent characteristic of these structures, our results show that the spherical double β-model used to describe the galaxy clusters considered does not affect the quality of their fgas measurements.
Rolling-circle amplification under topological constraints
Kuhn, Heiko; Demidov, Vadim V.; Frank-Kamenetskii, Maxim D.
2002-01-01
We have performed rolling-circle amplification (RCA) reactions on three DNA templates that differ distinctly in their topology: an unlinked DNA circle, a linked DNA circle within a pseudorotaxane-type structure and a linked DNA circle within a catenane. In the linked templates, the single-stranded circle (dubbed earring probe) is threaded, with the aid of two peptide nucleic acid openers, between the two strands of double-stranded DNA (dsDNA). We have found that the efficiency of RCA was essentially unaffected when the linked templates were employed. By showing that the DNA catenane remains intact after RCA reactions, we prove that certain DNA polymerases can carry out the replicative synthesis under topological constraints, allowing detection of several hundred copies of a dsDNA marker without DNA denaturation. Our finding may have practical implications in the area of DNA diagnostics. PMID:11788721
European Non-Dissipative Bypass Switch For Li-Ion Batteries And Prospective
NASA Astrophysics Data System (ADS)
Pasquier, E.; Castric, AF.; Mosset, E.; Chandeneau, A.
2011-10-01
Li-ion batteries are made of cells or modules connected in series. In case one becomes too weak or fails, it is necessary to remove it from the serial circuit. This is the by-pass operation, which provides overcharge/open-circuit protection, limits possible constraints linked to over-discharge/reversal/"self-short", and avoids jeopardizing the rest of the battery. One system is particularly adapted to space Li-ion batteries: the "make before break" Single Pole Double Throw (SPDT) switch, which avoids an open circuit on the power circuit when correctly activated. This paper presents the component constraints, the development in the frame of the ESA Artès 3 program up to its qualification, as well as the motorization approach linked to ECSS-E-30 (Mechanical - Part 3: Mechanisms) and future opportunities for such a system.
Constraining compensated isocurvature perturbations using the CMB
NASA Astrophysics Data System (ADS)
Smith, Tristan L.; Smith, Rhiannon; Yee, Kyle; Munoz, Julian; Grin, Daniel
2017-01-01
Compensated isocurvature perturbations (CIPs) are variations in the cosmic baryon fraction which leave the total non-relativistic matter (and radiation) density unchanged. They are predicted by models of inflation which involve more than one scalar field, such as the curvaton scenario. At linear order, they leave the CMB two-point correlation function nearly unchanged: this is why existing constraints to CIPs are so much more permissive than constraints to typical isocurvature perturbations. Recent work articulated an efficient way to calculate the second order CIP effects on the CMB two-point correlation. We have implemented this method in order to explore constraints to the CIP amplitude using current Planck temperature and polarization data. In addition, we have computed the contribution of CIPs to the CMB lensing estimator which provides us with a novel method to use CMB data to place constraints on CIPs. We find that Planck data places a constraint to the CIP amplitude which is competitive with other methods.
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization mean. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.
Systems and methods for energy cost optimization in a building system
Turney, Robert D.; Wenzel, Michael J.
2016-09-06
Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.
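The masking of demand-charge constraints for inactive pricing periods can be shown in a toy one-shot LP. This is a hypothetical stand-in for the patent's cascaded MPC: the names, the single "total energy = load" stand-in for the temperature constraints, and all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def plan_power(prices, demand_rate, period_of, active, load, u_max):
    """Pick power draws u_t over T steps to minimize energy cost plus demand
    charges, with one peak variable per pricing period. Demand-charge costs and
    constraints of inactive periods are simply dropped (the masking rule).
    """
    T, K = len(prices), len(demand_rate)
    # decision vector: [u_0 .. u_{T-1}, peak_0 .. peak_{K-1}]
    c = np.concatenate([prices, np.where(active, demand_rate, 0.0)])
    A_ub, b_ub = [], []
    for t in range(T):
        k = period_of[t]
        if active[k]:                       # u_t <= peak_k only while period k is active
            row = np.zeros(T + K)
            row[t], row[T + k] = 1.0, -1.0
            A_ub.append(row); b_ub.append(0.0)
    A_eq = [np.concatenate([np.ones(T), np.zeros(K)])]   # total energy must be served
    res = linprog(c, A_ub=np.array(A_ub) if A_ub else None, b_ub=b_ub or None,
                  A_eq=A_eq, b_eq=[load], bounds=[(0, u_max)] * T + [(0, None)] * K)
    return res.x[:T], res.x[T:]

u, peaks = plan_power(prices=np.array([0.3, 0.3, 0.1, 0.1]),
                      demand_rate=np.array([5.0, 2.0]),
                      period_of=[0, 0, 1, 1], active=[True, True],
                      load=6.0, u_max=3.0)
```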
High Velocity Firings of Slug Projectiles in a Double-Travel 120-MM Gun System
1991-04-01
constraints presented by TBD. This charge configuration was then tested using aluminium slug projectiles to avoid the unnecessary expenditure of APFSDS...test projectile was a depleted uranium alloy (U-.75Ti) rod with a standard, four piece, aluminum sabot assembly. The launch package had a nominal...the rod is shown in Figure 2. Figure 2. Scaled, Long Rod Penetrator. Figure 3. Aluminium Slug Projectile. The aluminium slug rounds, fired at Range 18
Procurement of a fully licensed radioisotope thermoelectric generator transportation system
NASA Astrophysics Data System (ADS)
Adkins, Harold E.; Bearden, Thomas E.
The present transportation system for radioisotope thermoelectric generators and heater units is being developed to comply with all applicable U.S. DOT regulations, including a doubly-contained 'bell jar' concept for the required double-containment of plutonium. Modifications in handling equipment and procedures are entailed by this novel packaging design, and will affect high-capacity forklifts, overhead cranes, He-backfilling equipment, etc. Attention is given to the design constraints involved, and to the Federal procurement process.
The role of diffusion tensor imaging tractography for Gamma Knife thalamotomy planning.
Gomes, João Gabriel Ribeiro; Gorgulho, Alessandra Augusta; de Oliveira López, Amanda; Saraiva, Crystian Wilian Chagas; Damiani, Lucas Petri; Pássaro, Anderson Martins; Salvajoli, João Victor; de Oliveira Siqueira, Ludmila; Salvajoli, Bernardo Peres; De Salles, Antônio Afonso Ferreira
2016-12-01
OBJECTIVE The role of tractography in Gamma Knife thalamotomy (GK-T) planning is still unclear. Pyramidal tractography might reduce the risk of radiation injury to the pyramidal tract and reduce motor complications. METHODS In this study, the ventralis intermedius nucleus (VIM) targets of 20 patients were bilaterally defined using Iplannet Stereotaxy Software, according to the anterior commissure-posterior commissure (AC-PC) line and considering the localization of the pyramidal tract. The 40 targets and tractography were transferred as objects to the GammaPlan Treatment Planning System (GP-TPS). New targets were defined, according to the AC-PC line in the functional targets section of the GP-TPS. The target offsets required to maintain the internal capsule (IC) constraint of < 15 Gy were evaluated. In addition, the strategies available in GP-TPS to maintain the minimum conventional VIM target dose at > 100 Gy were determined. RESULTS A difference was observed between the positions of both targets and the doses to the IC. The lateral (x) and the vertical (z) coordinates were adjusted 1.9 mm medially and 1.3 mm cranially, respectively. The targets defined considering the position of the pyramidal tract were more medial and superior, based on the constraint of 15 Gy touching the object representing the IC in the GP-TPS. The best strategy to meet the set constraints was 90° Gamma angle (GA) with automatic shaping of dose distribution; this was followed by 110° GA. The worst GA was 70°. Treatment time was substantially increased by the shaping strategy, approximately doubling delivery time. CONCLUSIONS Routine use of DTI pyramidal tractography might be important to fine-tune GK-T planning. DTI tractography, as well as anisotropy showing the VIM, promises to improve Gamma Knife functional procedures. They allow for a more objective definition of dose constraints to the IC and targeting. DTI pyramidal tractography introduced into the treatment planning may reduce the incidence of motor complications and improve efficacy. This needs to be validated in a large clinical series.
MO-FG-CAMPUS-TeP3-04: Deliverable Robust Optimization in IMPT Using Quadratic Objective Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shan, J; Liu, W; Bues, M
Purpose: To find and evaluate a way of applying deliverable MU constraints in robust spot-intensity optimization in Intensity-Modulated Proton Therapy (IMPT), to prevent plan quality and robustness from degrading due to machine-deliverable MU constraints. Methods: Currently, the influence of the deliverable MU constraints is retrospectively evaluated by post-processing immediately following optimization. In this study, we propose a new method based on the quasi-Newton-like L-BFGS-B algorithm, with which we turn deliverable MU constraints on and off alternately during optimization. Seven patients with two different machine settings (small and large spot size) were planned with both the conventional and the new method. For each patient, three kinds of plans were generated: a conventional non-deliverable plan (plan A), a conventional deliverable plan with post-processing (plan B), and a new deliverable plan (plan C). We performed this study with both realistic (small) and artificial (large) deliverable MU constraints. Results: With small minimum MU constraints considered, the new method achieved a slightly better plan quality than the conventional method (D95% CTV normalized to the prescription dose: 0.994 [0.992∼0.996] (plan C) vs 0.992 [0.986∼0.996] (plan B)). With large minimum MU constraints considered, the results show that the new method maintains plan quality while plan quality from the conventional method is degraded greatly (D95% CTV normalized to the prescription dose: 0.987 [0.978∼0.994] (plan C) vs 0.797 [0.641∼1.000] (plan B)). Meanwhile, the plan robustness of the two methods is comparable (for all 7 patients, CTV DVH band gap at D95% normalized to the prescription dose: 0.015 [0.005∼0.043] (plan C) vs 0.012 [0.006∼0.038] (plan B) with small MU constraints, and 0.019 [0.009∼0.039] (plan C) vs 0.030 [0.015∼0.041] (plan B) with large MU constraints). Conclusion: A positive correlation was found between plan quality degradation and the magnitude of the deliverable minimal MU. Compared to the conventional post-processing method, our new method of incorporating deliverable minimal MU constraints directly into plan optimization can produce machine-deliverable plans with better plan quality and non-compromised plan robustness. This research was supported by the National Cancer Institute Career Developmental Award K25CA168984, by the Fraternal Order of Eagles Cancer Research Fund Career Development Award, by The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, by Mayo Arizona State University Seed Grant and by The Kemper Marley Foundation.
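A sketch of the alternating on/off scheme under stated assumptions: a quadratic dose objective over spot weights optimized with L-BFGS-B, with the deliverable minimum-MU rule enforced between rounds by snapping infeasibly small weights to 0 or mu_min. The dose-influence matrix D and all names are illustrative, not the authors' planning system.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_spots(D, d_target, mu_min, rounds=3):
    """Alternate L-BFGS-B dose optimization with a minimum-MU deliverability
    projection. D: dose-influence matrix (voxels x spots)."""
    n = D.shape[1]
    w = np.full(n, mu_min)

    def f(w):
        r = D @ w - d_target
        return 0.5 * r @ r, D.T @ r          # quadratic objective and its gradient

    bounds = [(0.0, None)] * n
    for _ in range(rounds):
        w = minimize(f, w, jac=True, method="L-BFGS-B", bounds=bounds).x
        # deliverability: spots in (0, mu_min) snap to the nearer of 0 or mu_min
        low = (w > 0) & (w < mu_min)
        w[low] = np.where(w[low] > 0.5 * mu_min, mu_min, 0.0)
        # freeze snapped-off spots at zero for the next optimization round
        bounds = [(0.0, 0.0) if w[i] == 0.0 else (0.0, None) for i in range(n)]
    return w
```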
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces, which is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response requires solving a large system of linear equations (critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment of overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints requires special care; together with the aforementioned projection for restraints, a novel efficient solution scheme is presented. The collision integrator does not influence the critical time step, hence the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
NASA Astrophysics Data System (ADS)
Birkby, Jayne; Alonso, Roi; Brogi, Matteo; Charbonneau, David; Fortney, Jonathan; Hoyer, Sergio; Johnson, John Asher; de Kok, Remco; Lopez-Morales, Mercedes; Montet, Ben; Snellen, Ignas
2015-12-01
High-resolution spectroscopy (R>25,000) is a robust and powerful tool in the near-infrared characterization of exoplanet atmospheres. It has unambiguously revealed the presence of carbon monoxide and water in several hot Jupiters, measured the rotation rate of beta Pic b, and suggested the presence of fast day-to-night winds in one atmosphere. The method is applicable to transiting, non-transiting, and directly-imaged planets. It works by resolving broad molecular bands in the planetary spectrum into a dense, unique forest of individual lines and tracing them directly by their Doppler shift, while the star and tellurics remain essentially stationary. I will focus on two ongoing efforts to expand this technique. First, I will present new results on 51 Peg b revealing its infrared atmospheric compositional properties, then I will discuss an ongoing optical HARPS-N/TNG campaign (due mid-October 2015) to obtain a detailed albedo spectrum of 51 Peg b at 387-691 nm in bins of 50 nm. This spectrum would provide strong constraints on the previously claimed high albedo and potentially cloudy nature of this planet. Second, I will discuss preliminary results from Keck/NIRSPAO observations (due late September 2015) of LHS 6343 C, a 1000 K transiting brown dwarf with an M-dwarf host star. The high-resolution method converts this system into an eclipsing, double-lined spectroscopic binary, thus allowing dynamical mass and radius estimates of the components, free from astrophysical assumptions. Alongside probing the atmospheric composition of the brown dwarf, these data would provide the first model-independent study of the bulk properties of an old brown dwarf, with masses accurate to <5%, placing a crucial constraint on brown dwarf evolution models.
Method and System for Air Traffic Rerouting for Airspace Constraint Resolution
NASA Technical Reports Server (NTRS)
Erzberger, Heinz (Inventor); Morando, Alexander R. (Inventor); Sheth, Kapil S. (Inventor); McNally, B. David (Inventor); Clymer, Alexis A. (Inventor); Shih, Fu-tai (Inventor)
2017-01-01
A dynamic constraint avoidance route system automatically analyzes routes of aircraft flying, or to be flown, in or near constraint regions and attempts to find more time and fuel efficient reroutes around current and predicted constraints. The dynamic constraint avoidance route system continuously analyzes all flight routes and provides reroute advisories that are dynamically updated in real time. The dynamic constraint avoidance route system includes a graphical user interface that allows users to visualize, evaluate, modify if necessary, and implement proposed reroutes.
NASA Astrophysics Data System (ADS)
Pellejero-Ibanez, Marcos; Chuang, Chia-Hsun; Rubiño-Martín, J. A.; Cuesta, Antonio J.; Wang, Yuting; Zhao, Gongbo; Ross, Ashley J.; Rodríguez-Torres, Sergio; Prada, Francisco; Slosar, Anže; Vazquez, Jose A.; Alam, Shadab; Beutler, Florian; Eisenstein, Daniel J.; Gil-Marín, Héctor; Grieb, Jan Niklas; Ho, Shirley; Kitaura, Francisco-Shu; Percival, Will J.; Rossi, Graziano; Salazar-Albornoz, Salvador; Samushia, Lado; Sánchez, Ariel G.; Satpathy, Siddharth; Seo, Hee-Jong; Tinker, Jeremy L.; Tojeiro, Rita; Vargas-Magaña, Mariana; Brownstein, Joel R.; Nichol, Robert C.; Olmstead, Matthew D.
2017-07-01
We develop a new computationally efficient methodology called double-probe analysis with the aim of minimizing informative priors (those coming from extra probes) in the estimation of cosmological parameters. Using our new methodology, we extract the dark energy model-independent cosmological constraints from the joint data sets of the Baryon Oscillation Spectroscopic Survey (BOSS) galaxy sample and Planck cosmic microwave background (CMB) measurements. We measure the mean values and covariance matrix of {R, la, Ωbh², ns, log(As), Ωk, H(z), DA(z), f(z)σ8(z)}, which give an efficient summary of the Planck data and two-point statistics from the BOSS galaxy sample. The CMB shift parameters are R = √(Ωm H0²) r(z*) and la = π r(z*)/rs(z*), where z* is the redshift at the last scattering surface, and r(z*) and rs(z*) denote our comoving distance to z* and the sound horizon at z*, respectively; Ωb is the baryon fraction at z = 0. This approximate methodology guarantees that we do not need to put informative priors on the cosmological parameters that galaxy clustering is unable to constrain, i.e. Ωbh² and ns. The main advantage is that the computational time required for extracting these parameters is decreased by a factor of 60 with respect to exact full-likelihood analyses. The results obtained show no tension with the flat Λ cold dark matter (ΛCDM) cosmological paradigm. By comparing with the full-likelihood exact analysis with fixed dark energy models, on one hand we demonstrate that the double-probe method provides robust cosmological parameter constraints that can be conveniently used to study dark energy models, and on the other hand we provide a reliable set of measurements assuming dark energy models to be used, for example, in distance estimations. We extend our study to measure the sum of the neutrino masses using different methodologies, including double-probe analysis (introduced in this study), full-likelihood analysis, and single-probe analysis. From full-likelihood analysis, we obtain Σmν < 0.12 eV (68 per cent) assuming ΛCDM and Σmν < 0.20 eV (68 per cent) assuming owCDM. We also find that there is a degeneracy between observational systematics and neutrino masses, which suggests that one should take great care when estimating these parameters when the systematics of a given sample are not under control.
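The shift parameters can be evaluated directly from the definitions quoted above. A sketch under assumed flat-ΛCDM values; the parameter values and z* ≈ 1089.9 are illustrative round numbers, not the paper's fitted results.

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458                          # km/s
h, Om, Obh2 = 0.677, 0.31, 0.0224       # assumed background parameters
H0 = 100.0 * h
Ogh2 = 2.469e-5                         # photon density today
Or = Ogh2 * (1.0 + 0.2271 * 3.046) / h**2   # photons + relativistic neutrinos
OL = 1.0 - Om - Or
zs = 1089.9                             # assumed last-scattering redshift

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + Or * (1 + z)**4 + OL)

r_com = c * quad(lambda z: 1.0 / H(z), 0.0, zs)[0]   # comoving distance, Mpc

def cs(z):                              # baryon-photon sound speed
    Rb = 3.0 * Obh2 / (4.0 * Ogh2 * (1 + z))
    return c / np.sqrt(3.0 * (1.0 + Rb))

rs = quad(lambda z: cs(z) / H(z), zs, np.inf, limit=200)[0]  # sound horizon, Mpc

R = np.sqrt(Om) * H0 * r_com / c        # dimensionless shift parameter
la = np.pi * r_com / rs
print(f"R ~ {R:.3f}, l_a ~ {la:.1f}")   # roughly R ~ 1.75, l_a ~ 301
```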
Prosthetic Leg Control in the Nullspace of Human Interaction.
Gregg, Robert D; Martin, Anne E
2016-07-01
Recent work has extended the control method of virtual constraints, originally developed for autonomous walking robots, to powered prosthetic legs for lower-limb amputees. Virtual constraints define desired joint patterns as functions of a mechanical phasing variable, which are typically enforced by torque control laws that linearize the output dynamics associated with the virtual constraints. However, the output dynamics of a powered prosthetic leg generally depend on the human interaction forces, which must be measured and canceled by the feedback linearizing control law. This feedback requires expensive multi-axis load cells, and actively canceling the interaction forces may minimize the human's influence over the prosthesis. To address these limitations, this paper proposes a method for projecting virtual constraints into the nullspace of the human interaction terms in the output dynamics. The projected virtual constraints naturally render the output dynamics invariant with respect to the human interaction forces, which instead enter into the internal dynamics of the partially linearized prosthetic system. This method is illustrated with simulations of a transfemoral amputee model walking with a powered knee-ankle prosthesis that is controlled via virtual constraints with and without the proposed projection.
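The nullspace projection at the heart of the method can be illustrated in a few lines. The schematic output dynamics y_ddot = L u + J F + b, with human forces F entering through an interaction matrix J, is an assumption made here for illustration; see the paper for the actual prosthesis model.

```python
import numpy as np

def interaction_nullspace_projector(J):
    """Return P with P @ J = 0: a projector onto the left-nullspace of the
    human-interaction matrix J. Outputs composed through P are invariant to
    the interaction forces F in y_ddot = L u + J F + b, so F need not be
    measured or canceled by feedback."""
    return np.eye(J.shape[0]) - J @ np.linalg.pinv(J)

# toy check: the projected contribution of any human force vanishes
rng = np.random.default_rng(0)
J = rng.normal(size=(4, 2))     # 2 interaction-force channels entering 4 outputs
P = interaction_nullspace_projector(J)
print(np.max(np.abs(P @ J)))    # ~1e-16: human forces drop out of P @ y_ddot
```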
Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L
2013-07-16
Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
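A toy sketch of the flavor of thermodynamic constraints on a steady-state flux LP, assuming reaction free-energy signs are known in advance and used to fix directions. The actual TMFA of the paper is a mixed-integer program that also couples free energies to metabolite log-concentrations, which this sketch omits; the network and all numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([[ 1, -1,  0,  0],          # toy network: 2 metabolites, 4 reactions
              [ 0,  1, -1, -1]])
dG = np.array([-5.0, -1.0, +2.0, -3.0])  # illustrative free energies (kJ/mol)
lb = np.where(dG < 0, 0.0, -10.0)        # dG < 0  ->  v >= 0 (forward only)
ub = np.where(dG < 0, 10.0, 0.0)         # dG > 0  ->  v <= 0 (backward only)
lb[0], ub[0] = 1.0, 1.0                  # fix the uptake flux
c = np.zeros(4); c[3] = -1.0             # maximize v4 (the "biomass" reaction)
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=list(zip(lb, ub)))
print(res.x)                             # a thermodynamically consistent flux vector
```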
Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
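A compact sketch of the hard-constraint idea under stated assumptions: a standard measurement update followed by a P-weighted projection of the estimate onto each violated inequality, using the closed-form solution of min (x'-x)' P^{-1} (x'-x) subject to D_i x' = d_i. Interfaces are illustrative, not the paper's turbofan code.

```python
import numpy as np

def kf_update_constrained(x, P, H, R, z, D, d):
    """Kalman measurement update, then projection onto violated rows of
    D x <= d, one constraint at a time."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    # hard-constraint projection for each violated inequality
    for i in np.where(D @ x > d)[0]:
        Di = D[i]
        x = x - P @ Di * ((Di @ x - d[i]) / (Di @ P @ Di))
    return x, P
```

The weighting by P pulls poorly known components of the state more strongly toward the constraint surface, which is what gives the constrained filter its accuracy advantage over naive clipping.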
Zahabiun, Farzaneh; Sadjjadi, Seyed Mahmoud; Esfandiari, Farideh
2015-01-01
Background: Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. Regarding this problem, a modified double glass mounting method was developed and compared with the classic method. Methods: A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and the classic dehydration method using Canada balsam as their mounting media. The slides were evaluated at different dates and times over more than four years. Photos were taken at different magnifications during the evaluation period. Results: The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes using the double glass mounting method, with well-defined and clear differentiation between the different organs of the nematodes. Conclusion: This method is cost effective and fast for mounting small nematodes compared to the classic method. PMID:26811729
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, L.uis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty, where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l∞ formulation for the efficient manipulation of hyper-rectangular sets is proposed.
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1971-01-01
The method of steepest descent used in optimizing one-dimensional layered radiation shields is extended to multidimensional, multiconstraint situations. The multidimensional optimization algorithm and equations are developed for the case of a dose constraint in any one direction being dependent only on the shield thicknesses in that direction and independent of shield thicknesses in other directions. Expressions are derived for one-, two-, and three-dimensional cases (one, two, and three constraints). The procedure is applicable to the optimization of shields where there are different dose constraints and layering arrangements in the principal directions.
Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.
Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner
2011-09-26
Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been recognized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of numerical phase retrieval from experimental diffraction patterns, a fact which has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes, applying constraints in both planes. While the constraint in the detector plane employed in most phase retrieval methods requires the amplitude of the complex wave to equal the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
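For reference, the conventional baseline the paper improves on looks roughly like the error-reduction loop below: a modulus constraint in the detector plane alternated with support and positivity constraints in the object plane. The paper's contribution replaces the detector-plane step with its holography-inspired Fourier-domain constraint, which is not reproduced here.

```python
import numpy as np

def error_reduction(measured_amplitude, support, iters=200, seed=0):
    """Plain error-reduction phase retrieval: detector-plane modulus
    constraint + object-plane support/positivity constraints."""
    rng = np.random.default_rng(seed)
    G = measured_amplitude * np.exp(1j * 2 * np.pi * rng.random(measured_amplitude.shape))
    for _ in range(iters):
        g = np.fft.ifft2(G)
        g = np.where(support, g.real.clip(min=0), 0.0)     # object-plane constraints
        G = np.fft.fft2(g)
        G = measured_amplitude * np.exp(1j * np.angle(G))  # modulus constraint
    return g
```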
Zahabiun, Farzaneh; Sadjjadi, Seyed Mahmoud; Esfandiari, Farideh
2015-01-01
Permanent slide preparation of nematodes, especially small ones, is time consuming and difficult, and the specimens develop scarious margins. Regarding this problem, a modified double glass mounting method was developed and compared with the classic method. A total of 209 nematode samples of human and animal origin were fixed and stained with Formaldehyde Alcohol Azocarmine Lactophenol (FAAL), followed by double glass mounting and the classic dehydration method using Canada balsam as their mounting media. The slides were evaluated at different dates and times over more than four years. Photos were taken at different magnifications during the evaluation period. The double glass mounting method was stable during this time and comparable with the classic method. There were no changes in the morphologic structures of nematodes using the double glass mounting method, with well-defined and clear differentiation between the different organs of the nematodes. This method is cost effective and fast for mounting small nematodes compared to the classic method.
An approach to constrained aerodynamic design with application to airfoils
NASA Technical Reports Server (NTRS)
Campbell, Richard L.
1992-01-01
An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.
The Evolution of the Type Ia Supernova Luminosity Function
NASA Astrophysics Data System (ADS)
Shen, Ken J.; Toonen, Silvia; Graur, Or
2017-12-01
Type Ia supernovae (SNe Ia) exhibit a wide diversity of peak luminosities and light curve shapes: the faintest SNe Ia are 10 times less luminous and evolve more rapidly than the brightest SNe Ia. Their differing characteristics also extend to their stellar age distributions, with fainter SNe Ia preferentially occurring in old stellar populations and vice versa. In this Letter, we quantify this SN Ia luminosity–stellar age connection using data from the Lick Observatory Supernova Search (LOSS). Our binary population synthesis calculations agree qualitatively with the observed trend in the > 1 Gyr old populations probed by LOSS if the majority of SNe Ia arise from prompt detonations of sub-Chandrasekhar-mass white dwarfs (WDs) in double WD systems. Under appropriate assumptions, we show that double WD systems with less massive primaries, which yield fainter SNe Ia, interact and explode at older ages than those with more massive primaries. We find that prompt detonations in double WD systems are capable of reproducing the observed evolution of the SN Ia luminosity function, a constraint that any SN Ia progenitor scenario must confront.
Structural design and static analysis of a double-ring deployable truss for mesh antennas
NASA Astrophysics Data System (ADS)
Xu, Yan; Guan, Fuling; Chen, Jianjun; Zheng, Yao
2012-12-01
This paper addresses the structural design, the deployment control design, the static analysis and the model testing of a new double-ring deployable truss that is intended for large mesh antennas. This deployable truss is a multi-DOF (degree-of-freedom), over-constrained mechanism. Two kinds of deployable basic elements were introduced, as well as a process to synthesise the structure of the deployable truss. The geometric equations were formulated to determine the length of each strut, including the effects of the joint size. A DOF evaluation showed that the mechanism requires two active cables and requires deployment control. An open-loop control system was designed to control the rotational velocities of two motors. The structural stiffness of the truss was assessed by static analysis that considered the effects of the constraint condition and the pre-stress of the passive cables. A 4.2-metre demonstration model of an antenna was designed and fabricated. The geometry and the deployment behaviour of the double-ring truss were validated by the experiments using this model.
Spinning particles, axion radiation, and the classical double copy
NASA Astrophysics Data System (ADS)
Goldberger, Walter D.; Li, Jingping; Prabhu, Siddharth G.
2018-05-01
We extend the perturbative double copy between radiating classical sources in gauge theory and gravity to the case of spinning particles. We construct, to linear order in spins, perturbative radiating solutions to the classical Yang-Mills equations sourced by a set of interacting color charges with chromomagnetic dipole spin couplings. Using a color-to-kinematics replacement rule proposed earlier by one of the authors, these solutions map onto radiation in a theory of interacting particles coupled to massless fields that include the graviton, a scalar (dilaton) ϕ and the Kalb-Ramond axion field Bμν. Consistency of the double copy imposes constraints on the parameters of the theory on both the gauge and gravity sides of the correspondence. In particular, the color charges carry a chromomagnetic interaction which, in d = 4, corresponds to a gyromagnetic ratio equal to Dirac's value g = 2. The color-to-kinematics map implies that on the gravity side, the bulk theory of the fields (ϕ, gμν, Bμν) has interactions which match those of d-dimensional "string gravity," as is the case both in the BCJ double copy of pure gauge theory scattering amplitudes and the KLT relations between the tree-level S-matrix elements of open and closed string theory.
NASA Astrophysics Data System (ADS)
Webb, G. M.; Hu, Q.; Dasgupta, B.; Zank, G. P.
2012-02-01
Double Alfvén wave solutions of the magnetohydrodynamic equations in which the physical variables (the gas density ρ, fluid velocity u, gas pressure p, and magnetic field induction B) depend only on two independent wave phases ϕ1(x,t) and ϕ2(x,t) are obtained. The integrals for the double Alfvén wave are the same as for simple waves, namely, the gas pressure, magnetic pressure, and group velocity of the wave are constant. Compatibility conditions on the evolution of the magnetic field B due to changes in ϕ1 and ϕ2, as well as constraints due to Gauss's law ∇ · B = 0 are discussed. The magnetic field lines and hodographs of B in which the tip of the magnetic field B moves on the sphere |B| = B = const. are used to delineate the physical characteristics of the wave. Hamilton's equations for the simple Alfvén wave with wave normal n(ϕ), and with magnetic induction B(ϕ) in which ϕ is the wave phase, are obtained by using the Frenet-Serret equations for curves x=X(ϕ) in differential geometry. The use of differential geometry of 2D surfaces in a 3D Euclidean space to describe double Alfvén waves is briefly discussed.
Standard Model as a Double Field Theory.
Choi, Kang-Sin; Park, Jeong-Hyuck
2015-10-23
We show that, without any extra physical degree introduced, the standard model can be readily reformulated as a double field theory. Consequently, the standard model can couple to an arbitrary stringy gravitational background in an O(4,4) T-duality covariant manner and manifest two independent local Lorentz symmetries, Spin(1,3)×Spin(3,1). While the diagonal gauge fixing of the twofold spin groups leads to the conventional formulation on the flat Minkowskian background, the enhanced symmetry makes the standard model more rigid, and also stringy, than it appeared. The CP violating θ term may no longer be allowed by the symmetry, and hence the strong CP problem can be solved. There are now stronger constraints imposed on the possible higher order corrections. We speculate that the quarks and the leptons may belong to the two different spin classes.
Has neutrinoless double β decay of 76Ge been really observed?
NASA Astrophysics Data System (ADS)
Zdesenko, Yu. G.; Danevich, F. A.; Tretyak, V. I.
2002-10-01
The claim of discovery of the neutrinoless double beta (0ν2β) decay of 76Ge [Mod. Phys. Lett. A 16 (2001) 2409] is considered critically, and a firm conclusion about, at least, the prematurity of such a claim is derived on the basis of a simple statistical analysis of the measured spectra. This result is also proved by analyzing the cumulative data sets of the Heidelberg-Moscow and IGEX experiments. Besides, it allows us to establish the highest worldwide half-life limit on the 0ν2β decay of 76Ge: T1/2(0ν) ⩾ 2.5 (4.2) × 10^25 yr at 90% (68%) C.L. This bound corresponds to the most stringent constraint on the Majorana neutrino mass: mν ⩽ 0.3 (0.2) eV at 90% (68%) C.L.
Three-Triplet Model with Double SU(3) Symmetry
DOE R&D Accomplishments Database
Han, M. Y.; Nambu, Y.
1965-01-01
With a view to avoiding some of the kinematical and dynamical difficulties involved in the single triplet quark model, a model for the low lying baryons and mesons based on three triplets with integral charges is proposed, somewhat similar to the two-triplet model introduced earlier by one of us (Y. N.). It is shown that in a U(3) scheme of triplets with integral charges, one is naturally led to three triplets located symmetrically about the origin of the I3-Y diagram under the constraint that the Nishijima-Gell-Mann relation remains intact. A double SU(3) symmetry scheme is proposed in which the large mass splittings between different representations are ascribed to one of the SU(3), while the other SU(3) is the usual one for the mass splittings within a representation of the first SU(3).
Method for double-sided processing of thin film transistors
Yuan, Hao-Chih; Wang, Guogong; Eriksson, Mark A.; Evans, Paul G.; Lagally, Max G.; Ma, Zhenqiang
2008-04-08
This invention provides methods for fabricating thin film electronic devices with both front- and backside processing capabilities. Using these methods, high temperature processing steps may be carried out during both frontside and backside processing. The methods are well-suited for fabricating back-gate and double-gate field effect transistors, double-sided bipolar transistors and 3D integrated circuits.
NASA Technical Reports Server (NTRS)
Newman, C. M.
1976-01-01
The constraints and limitations for STS Consumables Management are studied. Variables imposing constraints on the consumables-related subsystems are identified, and a method for determining constraint violations with the simplified consumables model in the Mission Planning Processor is presented.
Kato, Akio
2006-11-14
The invention provides methods for chromosome doubling in plants. The technique overcomes the low yields of doubled progeny associated with the use of prior techniques for doubling chromosomes in plants such as grasses. The technique can be used in large scale applications and has been demonstrated to be highly effective in maize. Following treatment in accordance with the invention, plants remain amenable to self fertilization, thereby allowing the efficient isolation of doubled progeny plants.
A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
1996-01-01
Three methods to optimize rotorcraft aeromechanical behavior for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint penalty methods such as those employed by conventional controllers in that the constraints are handled as actual constraints to an optimization problem rather than as just additional terms in the performance index. The first method is to use a Non-linear Programming algorithm to solve the problem directly. The second method is to solve the full set of non-linear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then to simply select the best solution. The effects of maneuvers and aeroelasticity on the systems matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the systems matrix. Cases run to date indicate that the first method of solution is reliable, robust, and easiest to use, and that it was superior to the conventional controllers which were considered.
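The third method above (pre-selecting each constraint as active or inactive and solving the resulting reduced optimality conditions) can be shown in miniature for a quadratic vibration metric with linear inequality constraints. This is an illustrative active-set enumeration, not the stand-alone code described in the abstract.

```python
import numpy as np
from itertools import combinations

def min_quadratic_with_ineq(Q, c, A, b):
    """Minimize 0.5 u'Qu + c'u subject to A u <= b by enumerating every
    candidate active set, solving its equality-constrained KKT system, and
    keeping the best candidate that is feasible with nonnegative multipliers.
    Exponential in the constraint count, fine for a handful of constraints."""
    m, n = A.shape
    best, best_val = None, np.inf
    for k in range(m + 1):
        for act in map(list, combinations(range(m), k)):
            Aa = A[act]
            KKT = np.block([[Q, Aa.T], [Aa, np.zeros((k, k))]]) if k else Q
            rhs = np.concatenate([-c, b[act]]) if k else -c
            try:
                sol = np.linalg.solve(KKT, rhs)
            except np.linalg.LinAlgError:
                continue
            u, lam = sol[:n], sol[n:]
            if np.all(A @ u <= b + 1e-9) and np.all(lam >= -1e-9):
                val = 0.5 * u @ Q @ u + c @ u
                if val < best_val:
                    best, best_val = u, val
    return best
```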
NASA Astrophysics Data System (ADS)
Xu, Feng; Dubovik, Oleg; Zhai, Peng-Wang; Diner, David J.; Kalashnikova, Olga V.; Seidel, Felix C.; Litvinov, Pavel; Bovchaliuk, Andrii; Garay, Michael J.; van Harten, Gerard; Davis, Anthony B.
2016-07-01
An optimization approach has been developed for simultaneous retrieval of aerosol properties and normalized water-leaving radiance (nLw) from multispectral, multiangular, and polarimetric observations over ocean. The main features of the method are (1) use of a simplified bio-optical model to estimate nLw, followed by an empirical refinement within a specified range to improve its accuracy; (2) improved algorithm convergence and stability by applying constraints on the spatial smoothness of aerosol loading and Chlorophyll a (Chl a) concentration across neighboring image patches and spectral constraints on aerosol optical properties and nLw across relevant bands; and (3) enhanced Jacobian calculation by modeling and storing the radiative transfer (RT) in aerosol/Rayleigh mixed layer, pure Rayleigh-scattering layers, and ocean medium separately, then coupling them to calculate the field at the sensor. This approach avoids unnecessary and time-consuming recalculations of RT in unperturbed layers in Jacobian evaluations. The Markov chain method is used to model RT in the aerosol/Rayleigh mixed layer and the doubling method is used for the uniform layers of the atmosphere-ocean system. Our optimization approach has been tested using radiance and polarization measurements acquired by the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) over the AERONET USC_SeaPRISM ocean site (6 February 2013) and near the AERONET La Jolla site (14 January 2013), which, respectively, reported relatively high and low aerosol loadings. Validation of the results is achieved through comparisons to AERONET aerosol and ocean color products. For comparison, the USC_SeaPRISM retrieval is also performed by use of the Generalized Retrieval of Aerosol and Surface Properties algorithm (Dubovik et al., 2011). Uncertainties of aerosol and nLw retrievals due to random and systematic instrument errors are analyzed by truth-in/truth-out tests with three Chl a concentrations, five aerosol loadings, three different types of aerosols, and nine combinations of solar incidence and viewing geometries.
Convolutional coding combined with continuous phase modulation
NASA Technical Reports Server (NTRS)
Pizzi, S. V.; Wilson, S. G.
1985-01-01
Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
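A minimal example of the kind of short-constraint-length code involved: a rate-1/2 convolutional encoder with constraint length 3 and the classic (7, 5) octal generators. The paper's own generator polynomials for the CPM pairing may differ; this just illustrates the encoding step.

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3, generators (7,5)
    octal: each input bit produces one output bit per generator, computed as
    the parity of the tapped 3-bit shift-register contents."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0b111        # shift the new bit in
        out.extend(bin(state & gi).count("1") & 1 for gi in g)
    return out

print(conv_encode([1, 0, 1, 1]))   # two coded symbols per input bit
```

The coded symbol pairs would then drive the 4-ary CPM modulator, trading bandwidth for the several decibels of coding gain noted above.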
Development of Sudanese women in physics
NASA Astrophysics Data System (ADS)
Eassa, Nashwa; Elmardi, Maye; Adam, Buthaina; Elgadi, Mayada; Abass, Sara
2015-12-01
At Al Neelain University, Khartoum, Sudan, enrollment of female undergraduate physics students is significantly higher than that of males (double or more). However, most of the female staff in the Physics Department at this university hold lecturer positions because few women hold the necessary higher degrees. Sudanese students who seek higher qualifications must study abroad. Lack of financial support for such study affects both genders, but this situation imposes additional challenges on female students because of cultural and religious constraints that limit women's ability to study abroad.
Updated constraints on the light-neutrino exchange mechanisms of the 0νββ-decay
NASA Astrophysics Data System (ADS)
Štefánik, Dušan; Dvornický, Rastislav; Šimkovic, Fedor
2015-10-01
The neutrinoless double-beta (0νββ) decay associated with light neutrino exchange mechanisms, which are due to both left-handed V-A and right-handed V+A leptonic and hadronic currents, is discussed in light of the recent progress achieved by the GERDA, EXO and KamLAND-Zen experiments. The upper limits for the effective neutrino mass mββ and the parameters <λ> and <η> characterizing the right-handed current mechanisms are deduced from the data on the 0νββ-decay of 76Ge and 136Xe, using nuclear matrix elements calculated within the nuclear shell model and the quasiparticle random phase approximation, and phase-space factors calculated with exact Dirac wave functions with finite nuclear size and electron screening. The careful analysis of upper constraints on effective lepton-number-violating parameters assumes a competition of the above mechanisms and arbitrary values of the involved CP violating phases.
Rolling scheduling of electric power system with wind power based on improved NNIA algorithm
NASA Astrophysics Data System (ADS)
Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.
2017-11-01
This paper puts forth a rolling modification strategy for day-ahead scheduling of an electric power system with wind power, which takes the operation cost increment of units and the curtailed wind power of the power grid as double modification functions. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for the solution. The proposed rolling scheduling model further improves the operation cost of the system in the intra-day generation process, enhances the system's accommodation capacity for wind power, and modifies the key transmission section power flow in a rolling manner to satisfy the security constraints of the power grid. The improved NNIA algorithm defines an antibody preference relation model based on equal incremental rate, regulation deviation constraints, and the maximum and minimum technical outputs of units. The model can noticeably guide the direction of antibody evolution, significantly speed up convergence to the final solution, and enhance local search capability.
Streamwise-Localized Solutions with natural 1-fold symmetry
NASA Astrophysics Data System (ADS)
Altmeyer, Sebastian; Willis, Ashley; Hof, Björn
2014-11-01
It has been proposed in recent years that turbulence is organized around unstable invariant solutions, which provide the building blocks of the chaotic dynamics. In direct numerical simulations of pipe flow we show that when imposing a minimal symmetry constraint (reflection in an axial plane only) the formation of turbulence can indeed be explained by dynamical systems concepts. The hypersurface separating laminar from turbulent motion, the edge of turbulence, is spanned by the stable manifolds of an exact invariant solution, a periodic orbit of a spatially localized structure. The turbulent states themselves (turbulent puffs in this case) are shown to arise in a bifurcation sequence from a related localized solution (the upper branch orbit). The rather complex bifurcation sequence involves secondary Hopf bifurcations, frequency locking and a period doubling cascade until eventually turbulent puffs arise. In addition we report preliminary results of the transition sequence for pipe flow without symmetry constraints.
Development of a Fatigue Crack Growth Coupon for Highly Plastic Stress Conditions
NASA Technical Reports Server (NTRS)
Allen, Phillip A.; Aggarwal, Pravin K.; Swanson, Gregory R.
2003-01-01
The analytical approach used to develop a novel fatigue crack growth coupon for highly plastic stress field conditions is presented in this paper. The flight hardware investigated is a large separation bolt with a deep notch, which produces a large plastic zone at the notch root when highly loaded. Four test specimen configurations are analyzed in an attempt to match the elastic-plastic stress field and crack constraint conditions present in the separation bolt. Elastic-plastic finite element analysis is used to compare the stress fields and critical fracture parameters. Of the four test specimens analyzed, the modified double-edge notch tension - 3 (MDENT-3) most closely approximates the stress field, J values, and crack constraint conditions found in the flight hardware. The MDENT-3 is also the least sensitive to load misalignment and/or load redistribution during crack growth.
Analyses of deep mammalian sequence alignments and constraint predictions for 1% of the human genome
Margulies, Elliott H.; Cooper, Gregory M.; Asimenos, George; Thomas, Daryl J.; Dewey, Colin N.; Siepel, Adam; Birney, Ewan; Keefe, Damian; Schwartz, Ariel S.; Hou, Minmei; Taylor, James; Nikolaev, Sergey; Montoya-Burgos, Juan I.; Löytynoja, Ari; Whelan, Simon; Pardi, Fabio; Massingham, Tim; Brown, James B.; Bickel, Peter; Holmes, Ian; Mullikin, James C.; Ureta-Vidal, Abel; Paten, Benedict; Stone, Eric A.; Rosenbloom, Kate R.; Kent, W. James; Bouffard, Gerard G.; Guan, Xiaobin; Hansen, Nancy F.; Idol, Jacquelyn R.; Maduro, Valerie V.B.; Maskeri, Baishali; McDowell, Jennifer C.; Park, Morgan; Thomas, Pamela J.; Young, Alice C.; Blakesley, Robert W.; Muzny, Donna M.; Sodergren, Erica; Wheeler, David A.; Worley, Kim C.; Jiang, Huaiyang; Weinstock, George M.; Gibbs, Richard A.; Graves, Tina; Fulton, Robert; Mardis, Elaine R.; Wilson, Richard K.; Clamp, Michele; Cuff, James; Gnerre, Sante; Jaffe, David B.; Chang, Jean L.; Lindblad-Toh, Kerstin; Lander, Eric S.; Hinrichs, Angie; Trumbower, Heather; Clawson, Hiram; Zweig, Ann; Kuhn, Robert M.; Barber, Galt; Harte, Rachel; Karolchik, Donna; Field, Matthew A.; Moore, Richard A.; Matthewson, Carrie A.; Schein, Jacqueline E.; Marra, Marco A.; Antonarakis, Stylianos E.; Batzoglou, Serafim; Goldman, Nick; Hardison, Ross; Haussler, David; Miller, Webb; Pachter, Lior; Green, Eric D.; Sidow, Arend
2007-01-01
A key component of the ongoing ENCODE project involves rigorous comparative sequence analyses for the initially targeted 1% of the human genome. Here, we present orthologous sequence generation, alignment, and evolutionary constraint analyses of 23 mammalian species for all ENCODE targets. Alignments were generated using four different methods; comparisons of these methods reveal large-scale consistency but substantial differences in terms of small genomic rearrangements, sensitivity (sequence coverage), and specificity (alignment accuracy). We describe the quantitative and qualitative trade-offs concomitant with alignment method choice and the levels of technical error that need to be accounted for in applications that require multisequence alignments. Using the generated alignments, we identified constrained regions using three different methods. While the different constraint-detecting methods are in general agreement, there are important discrepancies relating to both the underlying alignments and the specific algorithms. However, by integrating the results across the alignments and constraint-detecting methods, we produced constraint annotations that were found to be robust based on multiple independent measures. Analyses of these annotations illustrate that most classes of experimentally annotated functional elements are enriched for constrained sequences; however, large portions of each class (with the exception of protein-coding sequences) do not overlap constrained regions. The latter elements might not be under primary sequence constraint, might not be constrained across all mammals, or might have expendable molecular functions. Conversely, 40% of the constrained sequences do not overlap any of the functional elements that have been experimentally identified. Together, these findings demonstrate and quantify how many genomic functional elements await basic molecular characterization. PMID:17567995
Rotational-path decomposition based recursive planning for spacecraft attitude reorientation
NASA Astrophysics Data System (ADS)
Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying
2018-02-01
The spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planned path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied in two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the onboard constrained attitude reorientation planning problem.
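A toy sketch of the check-and-subdivide recursion, assuming quaternion SLERP between attitudes and a single keep-out constraint (boresight must stay away from the Sun). The midpoint-nudging detour rule is an invented stand-in for the paper's nearest critical increment approach.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R, Slerp

def violates(q, boresight, sun, min_angle_deg=30.0):
    """Pointing constraint: instrument boresight must stay >= min_angle from the Sun."""
    v = R.from_quat(q).apply(boresight)
    return np.degrees(np.arccos(np.clip(v @ sun, -1, 1))) < min_angle_deg

def plan(q0, q1, boresight, sun, depth=0):
    """Sample the direct rotation; if any node violates the constraint, detour
    through a perturbed midpoint attitude and plan each half recursively."""
    if depth > 8:
        raise RuntimeError("no constraint-free path found")
    slerp = Slerp([0, 1], R.from_quat([q0, q1]))
    nodes = slerp(np.linspace(0, 1, 20)).as_quat()
    if not any(violates(q, boresight, sun) for q in nodes):
        return [q0, q1]
    mid = slerp([0.5]).as_quat()[0]
    nudge = R.from_rotvec(np.radians(15) * np.array([0.0, 0.0, 1.0]))
    mid = (nudge * R.from_quat(mid)).as_quat()
    return plan(q0, mid, boresight, sun, depth + 1)[:-1] + plan(mid, q1, boresight, sun, depth + 1)
```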
Modeling Multibody Stage Separation Dynamics Using Constraint Force Equation Methodology
NASA Technical Reports Server (NTRS)
Tartabini, Paul V.; Roithmayr, Carlos M.; Toniolo, Matthew D.; Karlgaard, Christopher D.; Pamadi, Bandu N.
2011-01-01
This paper discusses the application of the constraint force equation methodology and its implementation for multibody separation problems using three specially designed test cases. The first test case involves two rigid bodies connected by a fixed joint, the second involves two rigid bodies connected by a universal joint, and the third is the Mach 7 separation of the X-43A vehicle. For the first two cases, the solutions obtained using the constraint force equation method compare well with those obtained using industry-standard benchmark codes. For the X-43A case, the constraint force equation solutions show reasonable agreement with the flight-test data. Use of the constraint force equation method facilitates the analysis of stage separation in end-to-end simulations of launch vehicle trajectories.
Constraints on binary neutron star merger product from short GRB observations
NASA Astrophysics Data System (ADS)
Gao, He; Zhang, Bing; Lü, Hou-Jun
2016-02-01
Binary neutron star (NS) mergers are strong gravitational-wave (GW) sources and the leading candidates to interpret short-duration gamma-ray bursts (SGRBs). Under the assumptions that SGRBs are produced by double neutron star mergers and that the x-ray plateau followed by a steep decay as observed in SGRB x-ray light curves marks the collapse of a supramassive neutron star to a black hole (BH), we use the statistical observational properties of Swift SGRBs and the mass distribution of Galactic double neutron star systems to place constraints on the neutron star equation of state (EoS) and the properties of the post-merger product. We show that current observations already impose the following interesting constraints. (1) A neutron star EoS with a maximum mass close to the parametrization Mmax = 2.37 M⊙ (1 + 1.58 × 10^-10 P^-2.84) is favored. (2) The fractions for the several outcomes of NS-NS mergers are as follows: ∼40% prompt BHs, ∼30% supramassive NSs that collapse to BHs over a range of delay time scales, and ∼30% stable NSs that never collapse. (3) The initial spin of the newly born supramassive NSs should be near the breakup limit (Pi ∼ 1 ms), which is consistent with the merger scenario. (4) The surface magnetic field of the merger products is typically ∼10^15 G. (5) The ellipticity of the supramassive NSs is ε ∼ (0.004-0.007), so that strong GW radiation is released after the merger. (6) Even though the initial spin energy of the merger product is similar, the final energy output of the merger product that goes into the electromagnetic channel varies over a wide range, from several 10^49 to several 10^52 erg, since a good fraction of the spin energy is either released in the form of GWs or falls into the black hole as the supramassive NS collapses.
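A quick worked evaluation of the quoted maximum-mass parametrization, assuming the spin period P enters in seconds (the reading that gives a sensible few-percent rotational mass boost at the millisecond periods inferred above):

```python
# M_max = 2.37 Msun * (1 + 1.58e-10 * P**-2.84), P assumed in seconds
P = 1e-3                                   # breakup-limit spin period, ~1 ms
M_max = 2.37 * (1.0 + 1.58e-10 * P**-2.84)
print(f"M_max ~ {M_max:.2f} Msun")         # ~2.49 Msun: rapid rotation adds ~5%
```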
Future climate stimulates population out-breaks by relaxing constraints on reproduction.
Heldt, Katherine A; Connell, Sean D; Anderson, Kathryn; Russell, Bayden D; Munguia, Pablo
2016-09-14
When conditions are stressful, reproduction and population growth are reduced, but when favourable, reproduction and population size can boom. Theory suggests climate change is an increasingly stressful environment, predicting extinctions or decreased abundances. However, if favourable conditions align, such as an increase in resources or release from competition and predation, future climate can fuel population growth. Tests of such population growth models and the mechanisms by which they are enabled are rare. We tested whether intergenerational increases in population size might be facilitated by adjustments in reproductive success to favourable environmental conditions in a large-scale mesocosm experiment. Herbivorous amphipod populations responded to future climate by increasing 20-fold, suggesting that future climate might relax environmental constraints on fecundity. We then assessed whether future climate reduces variation in mating success, boosting population fecundity and size. The proportion of gravid females doubled, and variance in phenotypic variation of male secondary sexual characters (i.e. gnathopods) was significantly reduced. While future climate can enhance individual growth and survival, it may also reduce constraints on mechanisms of reproduction such that enhanced intra-generational productivity and reproductive success transfers to subsequent generations. Where both intra- and intergenerational production is enhanced, population sizes might boom.
NUCLEOSYNTHESIS CONSTRAINTS ON THE NEUTRON STAR-BLACK HOLE MERGER RATE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauswein, A.; Ardevol Pulpillo, R.; Janka, H.-T.
2014-11-01
We derive constraints on the time-averaged event rate of neutron star-black hole (NS-BH) mergers by using estimates of the population-integrated production of heavy rapid neutron-capture (r-process) elements with nuclear mass numbers A > 140 by such events in comparison to the Galactic repository of these chemical species. Our estimates are based on relativistic hydrodynamical simulations convolved with theoretical predictions of the binary population. This allows us to determine a strict upper limit on the average NS-BH merger rate of ∼6 × 10⁻⁵ per year. We quantify the uncertainties of this estimate to be within factors of a few, mostly because of the unknown BH spin distribution of such systems, the uncertain equation of state of NS matter, and possible errors in the Galactic content of r-process material. Our approach implies a correlation between the merger rates of NS-BH binaries and of double NS systems. Predictions of the detection rate of gravitational-wave signals from such compact object binaries by Advanced LIGO and Advanced Virgo on the optimistic side are incompatible with the constraints set by our analysis.
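The double-counting logic here, comparing population-integrated production with the Galactic repository, reduces to a one-line rate bound; the sketch below illustrates it in Python with invented placeholder numbers that are not the paper's values.

# Hedged back-of-the-envelope sketch of the rate-bound logic described above:
# R_max ~ M_galactic_rprocess / (mean ejecta mass per merger * enrichment time).
# All numbers below are illustrative placeholders, not values from the paper.
M_RPROCESS_GALAXY = 1.0e4   # solar masses of heavy (A > 140) r-process material (assumed)
M_EJECTA_PER_EVENT = 0.02   # solar masses ejected per NS-BH merger (assumed)
T_ENRICHMENT = 1.0e10       # years of Galactic enrichment (assumed)

rate_upper_limit = M_RPROCESS_GALAXY / (M_EJECTA_PER_EVENT * T_ENRICHMENT)
print(f"Upper limit on the merger rate ~ {rate_upper_limit:.1e} per year")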
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years several hierarchic multilevel optimization algorithms have been proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished by using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties which prevented their multilevel optimization from reaching the same minimum found in conventional single-level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present study, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optima achieved are comparable to those achieved in single-level solutions and in multilevel studies where the equality constraints have been handled indirectly.
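As a minimal illustration of direct handling, the sketch below passes a coupling equality constraint straight to the optimizer instead of eliminating variables. SciPy offers no GRG implementation, so SLSQP stands in, and the two-variable system/subsystem objective and the coupling relation are invented for illustration.

# Hedged sketch: treat the coupling equality constraint directly in the optimizer
# instead of eliminating subsystem variables. SciPy has no GRG solver, so SLSQP
# stands in; the objective and coupling relation are invented toy examples.
from scipy.optimize import minimize

def objective(v):
    x_sys, x_sub = v            # system-level and subsystem-level variables
    return (x_sys - 3.0) ** 2 + (x_sub + 1.0) ** 2

coupling = {"type": "eq", "fun": lambda v: v[0] - 2.0 * v[1]}  # x_sys = 2 * x_sub

res = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=[coupling])
print(res.x, res.fun)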
NASA Astrophysics Data System (ADS)
Rahman, Md. Saifur; Lee, Yiu-Yin
2017-10-01
In this study, a new modified multi-level residue harmonic balance method is presented and adopted to investigate the forced nonlinear vibrations of axially loaded double beams. Although numerous nonlinear beam or linear double-beam problems have been tackled and solved, there have been few studies of this nonlinear double-beam problem. The geometric nonlinear formulations for a double-beam model are developed. The main advantage of the proposed method is that a set of decoupled nonlinear algebraic equations is generated at each solution level. This greatly reduces the computational effort compared with solving the coupled nonlinear algebraic equations generated by the classical harmonic balance method. The proposed method can generate the higher-level nonlinear solutions that are neglected by the previous modified harmonic balance method. The results from the proposed method agree reasonably well with those from the classical harmonic balance method. The effects of damping, axial force, and excitation magnitude on the nonlinear vibrational behaviour are examined.
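For orientation, a commonly used axially loaded double-beam model couples two Euler-Bernoulli beams through an elastic inner layer; a generic form (a sketch of the usual setup, not necessarily this paper's exact formulation) is

EI_1 w_1'''' + P w_1'' + m_1 \ddot{w}_1 + c \dot{w}_1 + k (w_1 - w_2) + N_1[w_1] = F \cos(\Omega t),
EI_2 w_2'''' + P w_2'' + m_2 \ddot{w}_2 + c \dot{w}_2 + k (w_2 - w_1) + N_2[w_2] = 0,

where P is the axial force, k the stiffness of the connecting layer and N_i[w_i] the geometric (stretching-induced) nonlinear terms; the harmonic balance expansion is applied to a coupled system of this type.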
Finite-time stabilisation of a class of switched nonlinear systems with state constraints
NASA Astrophysics Data System (ADS)
Huang, Shipei; Xiang, Zhengrong
2018-06-01
This paper investigates the finite-time stabilisation of a class of switched nonlinear systems with state constraints. Some power orders of the system are allowed to be ratios of positive even integers over odd integers. A barrier Lyapunov function is introduced to guarantee that the state constraint is not violated at any time. Using the convex combination method and a recursive design approach, a state-dependent switching law and state feedback controllers for the individual subsystems are constructed such that the closed-loop system is finite-time stable without violation of the state constraint. Two examples are provided to show the effectiveness of the proposed method.
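One standard choice of barrier Lyapunov function for a constraint |x| < k_b (a common construction in this literature; the paper's exact form may differ) is

V_b(x) = \frac{1}{2} \log \frac{k_b^2}{k_b^2 - x^2},

which is positive definite on the constrained set and grows without bound as |x| \to k_b, so keeping V_b bounded along closed-loop trajectories guarantees the state constraint is never violated.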
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether, relative to the feasible-directions algorithm, linear programming techniques can improve performance on design optimization problems with large numbers of design variables and constraints. The second is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint reduces the cost of the total optimization. Comparisons are made using solutions obtained with linear and nonlinear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.
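For reference, the KS function mentioned here aggregates many constraints g_i ≤ 0 into one smooth, conservative envelope; a minimal sketch follows, where the draw-down factor rho is an illustrative choice rather than a value from the report.

# Hedged sketch of the Kreisselmeier-Steinhauser (KS) constraint aggregate:
# KS(g) = g_max + (1/rho) * ln(sum(exp(rho * (g_i - g_max)))),
# a smooth, conservative envelope of many constraints g_i <= 0.
import numpy as np

def ks_aggregate(g, rho=50.0):
    g = np.asarray(g, dtype=float)
    g_max = g.max()                       # shift for numerical stability
    return g_max + np.log(np.exp(rho * (g - g_max)).sum()) / rho

print(ks_aggregate([-0.2, -0.05, -0.1]))  # close to the most critical constraint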
NASA Technical Reports Server (NTRS)
Ider, Sitki Kemal
1989-01-01
Conventionally, kinematical constraints in multibody systems are treated similarly to geometrical constraints and are modeled by constraint reaction forces which are perpendicular to the constraint surfaces. In reality, however, one may want to achieve the desired kinematical conditions by control forces having different directions in relation to the constraint surfaces. The conventional equations of motion for multibody systems subject to kinematical constraints are generalized by introducing general direction control forces. Conditions for the selection of the control force directions are also discussed. A redundant robotic system subject to prescribed end-effector motion is analyzed to illustrate the proposed methods.
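In the conventional treatment sketched here (generic notation for illustration, not necessarily the report's), the constrained equations of motion read

M(q) \ddot{q} = Q(q, \dot{q}) + A^T(q) \lambda, \qquad A(q) \dot{q} = b(q, t),

where the reactions A^T \lambda act perpendicular to the constraint surfaces. The generalization replaces A^T \lambda by control forces B(q) u with prescribed directions B; the same kinematical conditions can then be enforced provided, roughly, that A M^{-1} B is invertible, which is one way to state a condition on admissible control force directions.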
Compact binary merger rates: Comparison with LIGO/Virgo upper limits
Belczynski, Krzysztof; Repetto, Serena; Holz, Daniel E.; ...
2016-03-03
Here, we compare evolutionary predictions of double compact object merger rate densities with initial and forthcoming LIGO/Virgo upper limits. We find that: (i) Due to the cosmological reach of advanced detectors, current conversion methods of population synthesis predictions into merger rate densities are insufficient. (ii) Our optimistic models are a factor of 18 below the initial LIGO/Virgo upper limits for BH–BH systems, indicating that a modest increase in observational sensitivity (by a factor of ~2.5) may bring the first detections or first gravitational wave constraints on binary evolution. (iii) Stellar-origin massive BH–BH mergers should dominate event rates in advanced LIGO/Virgo and can be detected out to redshift z ≃ 2 with templates including inspiral, merger, and ringdown. Normal stars (< 150 M⊙) can produce such mergers with total redshifted mass up to M_tot,z ≃ 400 M⊙. (iv) High black hole (BH) natal kicks can severely limit the formation of massive BH–BH systems (both in isolated binary and in dynamical dense cluster evolution), and thus would eliminate detection of these systems even at full advanced LIGO/Virgo sensitivity. We find that low and high BH natal kicks are allowed by current observational electromagnetic constraints. (v) The majority of our models yield detections of all types of mergers (NS–NS, BH–NS, BH–BH) with advanced detectors. Numerous massive BH–BH merger detections will indicate small (if any) natal kicks for massive BHs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jallet, Denis; Caballero, Michael A.; Gallina, Alessandra A.
2016-06-11
Photosynthetic microbes respond to changing light environments to balance photosynthetic processes with light-induced damage and photoinhibition. There have been very few characterizations of photosynthetic physiology or biomass partitioning during the day in mass culture. Understanding the constraints on photosynthetic efficiency and biomass accumulation is necessary for engineering superior strains or cultivation methods. We observed the photosynthetic physiology of nutrient-replete Phaeodactylum tricornutum growing in light environments that mimic those found in rapidly mixing, outdoor, low-biomass photobioreactors. We found little evidence for photoinhibition or non-photochemical quenching in situ, suggesting photosynthesis remains highly efficient throughout the day. Cells doubled their organic carbon from dawn to dusk, and a small percentage (around 20%) of this carbon was allocated to carbohydrates or triacylglycerol. We thus conclude that the self-shading provided by dense culturing of P. tricornutum inhibits the induction of photodamage and of energy dissipation processes that would otherwise lower productivity in an outdoor photobioreactor.
Toward Optimal Transport Networks
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Kincaid, Rex K.; Vargo, Erik P.
2008-01-01
Strictly evolutionary approaches to improving the air transport system, a highly complex network of interacting systems, no longer suffice in the face of demand that is projected to double or triple in the near future. Thus evolutionary approaches should be augmented with active design methods. The ability to actively design, optimize and control a system presupposes the existence of predictive modeling and reasonably well-defined functional dependences between the controllable variables of the system and the objective and constraint functions for optimization. Following recent advances in the studies of the effects of network topology on dynamics, we investigate the performance of dynamic processes on transport networks as a function of the first nontrivial eigenvalue of the network's Laplacian, which, in turn, is a function of the network's connectivity and modularity. The last two characteristics can be controlled and tuned via optimization. We consider design optimization problem formulations. We have developed a flexible simulation of network topology coupled with flows on the network for use as a platform for computational experiments.
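A minimal sketch of the quantity being tuned, the first nontrivial (second-smallest) eigenvalue of the graph Laplacian, computed on an invented four-node toy network:

# Hedged sketch: the "first nontrivial eigenvalue of the network's Laplacian"
# (algebraic connectivity). The 4-node adjacency matrix is illustrative only.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian L = D - A
eigvals = np.linalg.eigvalsh(L)         # sorted ascending; eigvals[0] ~ 0
print("algebraic connectivity:", eigvals[1])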
Analytical Model-Based Design Optimization of a Transverse Flux Machine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hasan, Iftekhar; Husain, Tausif; Sozer, Yilmaz
This paper proposes an analytical machine design tool using magnetic equivalent circuit (MEC)-based particle swarm optimization (PSO) for a double-sided, flux-concentrating transverse flux machine (TFM). The magnetic equivalent circuit method is applied to analytically establish the relationship between the design objective and the input variables of prospective TFM designs. This is computationally less intensive and more time efficient than finite element solvers. A PSO algorithm is then used to design a machine with the highest torque density within the specified power range along with some geometric design constraints. The stator pole length, magnet length, and rotor thickness are the variables that define the optimization search space. Finite element analysis (FEA) was carried out to verify the performance of the MEC-PSO optimized machine. The proposed analytical design tool helps save computation time by at least 50% when compared to commercial FEA-based optimization programs, with results found to be in agreement with less than 5% error.
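For orientation, the sketch below shows a bare-bones PSO loop of the kind described; the one-variable objective stands in for the MEC torque-density evaluation, and all swarm hyperparameters and bounds are illustrative, not the paper's settings.

# Hedged sketch of a basic PSO loop; the objective is a stand-in for the
# MEC-based torque-density evaluation, and all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
def objective(x):                      # placeholder for the MEC model evaluation
    return (x - 1.3) ** 2

n, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
x = rng.uniform(-5, 5, n); v = np.zeros(n)
pbest = x.copy(); pbest_f = objective(x)
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, -5, 5)          # geometric design constraints as box bounds
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print("optimum near:", gbest)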
Research on liquid impact forming technology of double-layered tubes
NASA Astrophysics Data System (ADS)
Sun, Changying; Liu, Jianwei; Yao, Xinqi; Huang, Beixing; Li, Yuhan
2018-03-01
A double-layered tube is widely used and developed in various fields because of its excellent overall performance and design flexibility. As double-layered tubes become more common, the requirements on forming quality, manufacturing cost and forming efficiency keep rising, so new forming methods continue to emerge and hold great potential for the future. Liquid impact forming technology is a combination of stamping technology and hydroforming technology, and it offers substantial advantages in production cost, quality and efficiency when forming a double-layered tube.
NASA Astrophysics Data System (ADS)
Nagai, S.; Wu, Y.; Suppe, J.; Hirata, N.
2009-12-01
The island of Taiwan lies in an ongoing arc-continent collision zone between the Philippine Sea Plate and the Eurasian Plate. Numerous geophysical and geological studies have been carried out in and around Taiwan to develop models that explain the tectonic processes of the region. The active and young tectonics and the associated high seismicity in Taiwan provide a unique opportunity to explore and understand the processes related to the arc-continent collision. Nagai et al. [2009] imaged eastward-dipping, alternating high- and low-velocity bodies at depths of 5 to 25 km from the western side of the Central Mountain Range to the eastern part of Taiwan, by double-difference tomography [Zhang and Thurber, 2003] using three temporary seismic networks together with the Central Weather Bureau Seismic Network (CWBSN). The three temporary networks are the aftershock observation after the 1999 Chi-Chi Taiwan earthquake and two dense linear array observations, one across central Taiwan in 2001 and another across southern Taiwan in 2005. We proposed a new orogenic model, the 'Upper Crustal Stacking Model', inferred from our tomographic images. To understand the seismic structure in more detail, we relocate earthquakes more precisely in central and southern Taiwan, using the three-dimensional velocity model [Nagai et al., 2009] and P- and S-wave arrival times from both the CWBSN and the three temporary networks. We use the double-difference tomography method to improve relative and absolute location accuracy simultaneously. The relocated seismicity is concentrated along parts of the boundaries between low- and high-velocity bodies. In particular, earthquakes beneath the eastern Central Range, triggered by the 1999 Chi-Chi earthquake, delineate subsurface structural boundaries when compared with profiles of estimated seismic velocity. The relocated catalog and 3-D seismic velocity model provide constraints for reconstructing the orogenic model of Taiwan. We present the relocated seismicity with P- and S-wave velocity profiles, together with focal mechanisms [e.g. Wu et al., 2008] and their spatio-temporal variation, in central and southern Taiwan, and discuss tectonic processes in Taiwan.
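For orientation, the double-difference method minimizes, for pairs of nearby events i and j recorded at a station k, residuals of the form

dr_k^{ij} = (t_k^i - t_k^j)^{obs} - (t_k^i - t_k^j)^{cal},

so that travel-time errors common to the two nearby ray paths largely cancel and relative locations (and, in the tomographic variant, velocity structure) are sharpened; this is a standard statement of the technique, not a quotation from the abstract.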
NASA Astrophysics Data System (ADS)
Li, Leihong
A modular structural design methodology for composite blades is developed. This design method can be used to design composite rotor blades with sophisticated geometric cross-sections. The method hierarchically decomposes the highly coupled interdisciplinary rotor analysis into global and local levels. At the global level, aeroelastic response analysis and rotor trim are conducted based on multi-body dynamic models. At the local level, variational asymptotic beam sectional analysis methods are used to obtain the equivalent one-dimensional beam properties. Compared with the traditional design methodology, the proposed method is more efficient and accurate. The proposed method is then used to study three design problems that have not been investigated before. The first is the addition of manufacturing constraints to the design optimization. The introduction of manufacturing constraints complicates the optimization process, but a design that satisfies them benefits the manufacturing process and reduces the risk of violating major performance constraints. Next, a new design procedure for structural design against fatigue failure is proposed. This procedure combines fatigue analysis with the optimization process; the durability or fatigue analysis employs a strength-based model, and the design is subject to stiffness, frequency, and durability constraints. Finally, the impacts of manufacturing uncertainty on rotor blade aeroelastic behavior are investigated, and a probabilistic design method is proposed to control the impacts of uncertainty on blade structural performance. The uncertainty factors include dimensions, shapes, material properties, and service loads.
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU.
Zhao, Xu; Dou, Lihua; Su, Zhong; Liu, Ning
2018-03-16
A snake robot is a type of highly redundant mobile robot that differs significantly from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in environments without external orientation aids, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method achieves autonomous navigation without external nodes or assistance, using only the robot's own Micro-Electromechanical-Systems (MEMS) Inertial Measurement Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and the fixed relationship, and imposes zero-state constraints based on the motion features and control modes of a snake robot. Finally, it realizes autonomous navigation positioning based on the Extended Kalman Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method; the position error is less than 5% of the Total-Traveled-Distance (TDD). In a short-distance environment, this method meets the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other similar multi-link robots.
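A minimal sketch of how a motion-characteristic constraint can enter the EKF as a pseudo-measurement (for example, near-zero lateral velocity during a gait phase); the three-state model and noise levels are invented for illustration and are not the paper's filter design.

# Hedged sketch: fuse a motion-model pseudo-measurement (zero lateral velocity)
# as a standard EKF update. State, matrices and noise levels are illustrative.
import numpy as np

x = np.array([0.0, 0.1, 0.02])          # [position, forward vel, lateral vel]
P = np.eye(3) * 0.1

H = np.array([[0.0, 0.0, 1.0]])         # observe lateral velocity
R = np.array([[1e-4]])                  # pseudo-measurement noise
z = np.array([0.0])                     # constraint: lateral velocity is zero

y = z - H @ x                           # innovation
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
x = x + K @ y
P = (np.eye(3) - K @ H) @ P
print(x)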
Testing deformation hypotheses by constraints on a time series of geodetic observations
NASA Astrophysics Data System (ADS)
Velsink, Hiddo
2018-01-01
In geodetic deformation analysis, observations are used to identify form and size changes of a geodetic network representing objects on the earth's surface. The network points are monitored, often continuously, because of suspected deformations. A deformation may affect many points during many epochs. The problem is that the best description of the deformation is, in general, unknown. To find it, different hypothesised deformation models have to be tested systematically for agreement with the observations. The tests have to be capable of stating, with a certain probability, the size of detectable deformations, and they have to be datum invariant. A statistical criterion is needed to find the best deformation model. Existing methods do not fulfil these requirements. Here we propose a method that formulates the different hypotheses as sets of constraints on the parameters of a least-squares adjustment model. The constraints can relate to subsets of epochs and to subsets of points, thus combining time series analysis and congruence model analysis. The constraints are formulated as nonstochastic observations in an adjustment model of observation equations. This gives an easy way to test the constraints and to obtain a quality description. The proposed method aims at providing a well-discriminating method for finding the best description of a deformation, and it is expected to improve the quality of geodetic deformation analysis. We demonstrate the method with an elaborate example.
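In adjustment terms, the construction can be stated generically (a sketch, not the paper's exact notation): with observation equations y = A x + e, e \sim (0, Q_y), a deformation hypothesis contributes constraints C x = c that are appended as nonstochastic (zero-variance) pseudo-observations; the hypothesis is then tested by comparing the constrained and unconstrained least-squares residuals, for example with a generalized likelihood-ratio test, which also yields the quality description mentioned above.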
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
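For the frequency constraints mentioned here, a standard analytic sensitivity expression (given for orientation; the MSC/NASTRAN implementation details differ) for a mass-normalized mode \phi_i with eigenvalue \lambda_i is

\partial \lambda_i / \partial x = \phi_i^T (\partial K / \partial x - \lambda_i \, \partial M / \partial x) \phi_i,

so the sensitivity coefficients follow directly from element-level stiffness and mass derivatives with respect to the design variable x.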
NASA Astrophysics Data System (ADS)
Campolina, Bruno L.
The prediction of aircraft interior noise involves the vibroacoustic modelling of the fuselage with noise control treatments. This structure is composed of a stiffened metallic or composite panel, lined with a thermal and acoustic insulation layer (glass wool), and structurally connected via vibration isolators to a commercial lining panel (trim). This work aims at tailoring the noise control treatments while taking design constraints such as weight and space optimization into account. For this purpose, a representative aircraft double-wall is modelled using the Statistical Energy Analysis (SEA) method. Laboratory excitations such as a diffuse acoustic field and a point force are addressed, and trends are derived for applications under in-flight conditions, considering turbulent boundary layer excitation. The effect of porous layer compression is addressed first. In aeronautical applications, compression can result from the installation of equipment and cables. It is studied analytically and experimentally, using a single panel and a fibrous layer uniformly compressed over 100% of its surface. When compression increases, a degradation of the transmission loss of up to 5 dB for a 50% compression of the porous thickness is observed, mainly in the mid-frequency range (around 800 Hz). For realistic cases, however, the effect should be smaller, since the compression rate is lower and compression occurs locally. The transmission through structural connections between panels is then addressed using a four-pole approach that links the force-velocity pair at each side of the connection. The modelling integrates experimental dynamic stiffness data for the isolators, derived using an adapted test rig. The structural transmission is then experimentally validated and included in the double-wall SEA model as an equivalent coupling loss factor (CLF) between panels. The tested structures being flat, only axial transmission is addressed. Finally, the dominant sound transmission paths are identified in the 100 Hz to 10 kHz frequency range for double-walls under diffuse acoustic field and point-force excitations. Non-resonant transmission is higher at low frequencies (below 1 kHz), while the structure-borne and airborne paths dominate at mid and high frequencies (around 1 kHz and higher, respectively). An experimental validation on double-walls shows that the model is able to predict changes in the overall transmission caused by different structural couplings (rigid coupling, coupling via isolators, and structurally uncoupled). Noise reduction means adapted to each transmission path, such as absorption, dissipation and structural decoupling, may then be derived. Keywords: Statistical energy analysis, Vibration isolator, Double-wall, Transfer path analysis, Transmission loss.
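The SEA model referred to here rests on the standard steady-state power balance (a generic statement, for orientation): for subsystem i,

P_{in,i} = \omega \eta_i E_i + \sum_{j \neq i} \omega \eta_{ij} n_i (E_i / n_i - E_j / n_j),

where \eta_i is the damping loss factor, \eta_{ij} the coupling loss factor (CLF), n_i the modal density and E_i the subsystem energy; the experimentally derived isolator stiffness enters this balance through the equivalent CLF between panels described above.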
Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.
Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph
2018-06-01
There are numerous configurations of double-row fixation for rotator cuff tears; however, there is not yet a consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compare in biomechanical strength to standard double-row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the new double-row equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeri sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double-row equivalent method demonstrated increased survival as well as higher ultimate strength, at 415 N, compared to the other test groups, along with contact area and pressure equivalent to standard double-row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double-row techniques. These data provide a new method of rotator cuff fixation which should be further evaluated in the clinical setting. Basic science biomechanical study.
NASA Astrophysics Data System (ADS)
Howlader, Harun Or Rashid; Matayoshi, Hidehito; Noorzad, Ahmad Samim; Muarapaz, Cirio Celestino; Senjyu, Tomonobu
2018-05-01
This paper presents a smart-house-based power system for a thermal unit commitment programme. The proposed power system consists of smart houses, renewable energy plants and conventional thermal units, and transmission constraints are considered for the proposed system. The power generated by a large-capacity renewable energy plant can violate transmission constraints in the thermal unit commitment programme; therefore, the transmission constraints should be taken into account. This paper focuses on the optimal operation of the thermal units incorporating controllable loads, such as the electric vehicles and heat pump water heaters of the smart houses. The proposed method is compared with thermal unit operation without controllable loads and with optimal operation that ignores the transmission constraints. Simulation results validate the proposed method.
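A generic form of the transmission-constrained unit commitment described here, under DC power-flow assumptions and with invented symbols (a sketch, not the paper's exact formulation):

\min \sum_t \sum_i [ c_i(p_{i,t}) + s_i u_{i,t} ]
s.t. \sum_i p_{i,t} + p^{RE}_t = d_t + p^{EV}_t + p^{HP}_t (power balance)
     |f_{l,t}| \le f^{max}_l, with f_{l,t} given by the DC flow equations (line limits)
     u_{i,t} p^{min}_i \le p_{i,t} \le u_{i,t} p^{max}_i (unit limits),

where the controllable electric vehicle and heat pump loads shift part of the demand d_t in time, relieving the line limits that large renewable injections would otherwise violate.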
Guzman, Gustavo; Fitzgerald, Janna Anneke; Fulop, Liz; Hayes, Kathryn; Poropat, Arthur; Avery, Mark; Campbell, Steve; Fisher, Ron; Gapp, Rod; Herington, Carmel; McPhail, Ruth; Vecchio, Nerina
2015-01-01
In spite of significant investment in quality programs and activities, there is a persistent struggle to achieve quality outcomes and performance improvements within the constraints and support of sociopolitical parsimonies. Equally, such constraints have intensified the need to better understand the best-practice methods for achieving quality improvements in health care organizations over time. This study proposes a conceptual framework to assist with strategies for the copying, transferring, and/or translation of best practice between different health care facilities. Applying a deductive logic, the conceptual framework was developed by blending selected theoretical lenses drawn from the knowledge management and organizational learning literatures. The proposed framework highlights that (a) major constraints need to be addressed to turn best practices into everyday practices and (b) double-loop learning is an adequate learning mode for copying and transferring best practices, whereas deutero-learning is a more suitable mode for translating best practice. We also found that, in complex organizations, copying, transferring, and translating new knowledge is more difficult than in smaller, less complex organizations. We also posit that knowledge translation cannot happen without transfer and copy, and transfer cannot happen without copy of best practices. Hence, an integration of all three learning processes is required for knowledge translation (copy best practice, transfer knowledge about best practice, translate best practice into the new context). In addition, the higher the level of complexity of the organization, the more tacit-oriented best practice is and, in this case, the higher the level of knowledge and learning capabilities required to successfully copy, transfer, and/or translate best practices between organizations. The approach provides a framework for assessing organizational context and capabilities to guide the copy/transfer/translation of best practices. A roadmap is provided to assist managers and practitioners in selecting appropriate learning modes for building success and positive systemic change.
A point cloud modeling method based on geometric constraints combined with the robust least squares method
NASA Astrophysics Data System (ADS)
Yue, JIanping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan
2016-10-01
The appearance of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information. It has been widely used in surveying and mapping engineering because it is automatic and highly precise. The 3D laser scanning workflow mainly includes field laser data acquisition, in-office registration (splicing) of the laser data, and subsequent 3D modeling and data system integration. Point cloud modeling has been studied extensively at home and abroad. Surface reconstruction techniques mainly include the point-shape model, the triangle model, the triangular Bezier surface model and the rectangular surface model; neural networks and Alpha shapes are also used in surface reconstruction. These methods, however, often focus on fitting single surfaces, or on automatic or manual block-wise fitting, which ignores the model's integrity. This leads to a serious problem in the model after stitching: surfaces fitted separately often fail to satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Modeling theory that incorporates dimension and position constraints, however, is not yet widely used. One traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm); its stability is good, but in the course of this research it was found to be greatly influenced by the initial values. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to improve the accuracy of the initial values, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve the problem using the L-M algorithm. The experimental results show that the internal accuracy is improved and that the improved point cloud modeling method proposed in this paper outperforms traditional point cloud modeling methods.
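A minimal sketch of the penalty-plus-Levenberg-Marquardt step described above, fitting two planes under a parallelism constraint with SciPy; the synthetic points and the penalty weight mu are invented for illustration.

# Hedged sketch of the penalty + Levenberg-Marquardt idea: append a weighted
# constraint residual (parallel normals) to the data residuals and solve with LM.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
pts1 = rng.normal(size=(50, 2)); z1 = 0.2 * pts1[:, 0] + 0.1 * pts1[:, 1] + 1.0
pts2 = rng.normal(size=(50, 2)); z2 = 0.2 * pts2[:, 0] + 0.1 * pts2[:, 1] + 3.0

def residuals(p, mu=10.0):
    a1, b1, c1, a2, b2, c2 = p          # planes z = a x + b y + c
    r1 = a1 * pts1[:, 0] + b1 * pts1[:, 1] + c1 - z1
    r2 = a2 * pts2[:, 0] + b2 * pts2[:, 1] + c2 - z2
    penalty = mu * np.array([a1 - a2, b1 - b2])   # parallelism constraint
    return np.concatenate([r1, r2, penalty])

fit = least_squares(residuals, x0=np.zeros(6), method="lm")
print(fit.x)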
Boiret, Mathieu; de Juan, Anna; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel
2015-09-10
Raman chemical imaging provides chemical and spatial information about a pharmaceutical drug product. By using resolution methods on the acquired spectra, the objective is to calculate pure spectra and distribution maps of the image compounds. With multivariate curve resolution-alternating least squares, constraints are used to improve the performance of the resolution and to decrease the ambiguity linked to the final solution. Non-negativity and spatial local rank constraints have been identified as the most powerful constraints to be used. In this work, an alternative method to set local rank constraints is proposed. The method is based on an orthogonal projection pretreatment. For each drug product compound, the raw Raman spectra are orthogonally projected to a basis including all the variability from the formulation compounds other than the compound of interest. Presence or absence of the compound of interest is determined by observing the correlations between the orthogonally projected spectra and a pure spectrum orthogonally projected to the same basis. By selecting an appropriate threshold, maps of presence/absence can be set up for all the product compounds. This method appears to be a powerful approach for identifying a low-dose compound within a pharmaceutical drug product. The maps of presence/absence of compounds can be used as local rank constraints in resolution methods, such as the multivariate curve resolution-alternating least squares process, in order to improve the resolution of the system. The proposed method is particularly suited to pharmaceutical systems, where the identity of all compounds in the formulation is known and, therefore, the space of interferences can be well defined.
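A minimal numpy sketch of the orthogonal-projection pretreatment, assuming an interference basis B built from the other formulation compounds; the correlation threshold of 0.8 is an illustrative placeholder.

# Hedged sketch: project spectra onto the orthogonal complement of the
# interference subspace B, then correlate with the similarly projected pure
# spectrum of the compound of interest to build a presence/absence map.
import numpy as np

def presence_map(spectra, B, pure, threshold=0.8):
    # spectra: (n_pixels, n_wavenumbers); B: (n_wavenumbers, k) interference basis
    P = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)   # projector onto B's complement
    S = spectra @ P.T                                # projected pixel spectra
    t = P @ pure                                     # projected pure spectrum
    corr = (S @ t) / (np.linalg.norm(S, axis=1) * np.linalg.norm(t) + 1e-12)
    return corr > threshold                          # presence/absence per pixel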
Robust, Optimal Subsonic Airfoil Shapes
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2014-01-01
A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. Starting from an arbitrary initial airfoil shape, the method determines a robust, optimal, subsonic airfoil shape and imposes the necessary constraints on the design. The method is also flexible and extensible to a larger class of requirements and to changes in the imposed constraints.
Iterative repair for scheduling and rescheduling
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Deale, Michael
1991-01-01
An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results of applying the technique to the NASA Space Shuttle ground processing problem are also shown. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
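A minimal sketch of iterative repair with simulated annealing on an invented toy scheduling constraint (adjacent tasks must not share a slot); the temperature schedule and move operator are illustrative, not the system described above.

# Hedged sketch of iterative repair with simulated annealing: move a randomly
# chosen task and accept the move with the Metropolis rule.
import math, random

def conflicts(schedule):
    # toy constraint: adjacent tasks must not share a time slot
    return sum(a == b for a, b in zip(schedule, schedule[1:]))

def anneal(schedule, slots=5, t0=2.0, cooling=0.995, steps=5000):
    t = t0
    for _ in range(steps):
        i = random.randrange(len(schedule))
        old, cost = schedule[i], conflicts(schedule)
        schedule[i] = random.randrange(slots)        # repair move
        delta = conflicts(schedule) - cost
        if delta > 0 and random.random() >= math.exp(-delta / t):
            schedule[i] = old                        # reject uphill move
        t *= cooling
    return schedule

print(conflicts(anneal([0] * 20)))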
A DATA-DRIVEN MODEL FOR SPECTRA: FINDING DOUBLE REDSHIFTS IN THE SLOAN DIGITAL SKY SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsalmantza, P.; Hogg, David W., E-mail: vivitsal@mpia.de
2012-07-10
We present a data-driven method, heteroscedastic matrix factorization (a kind of probabilistic factor analysis), for modeling or performing dimensionality reduction on observed spectra or other high-dimensional data with known but non-uniform observational uncertainties. The method uses an iterative inverse-variance-weighted least-squares minimization procedure to generate a best set of basis functions. The method is similar to principal components analysis (PCA), but with the substantial advantage that it uses measurement uncertainties in a responsible way and accounts naturally for poorly measured and missing data; it models the variance in the noise-deconvolved data space. A regularization can be applied, in the form of a smoothness prior (inspired by Gaussian processes) or a non-negativity constraint, without making the method prohibitively slow. Because the method optimizes a justified scalar (related to the likelihood), the basis provides a better fit to the data in a probabilistic sense than any PCA basis. We test the method on Sloan Digital Sky Survey (SDSS) spectra, concentrating on spectra known to contain two redshift components: these are spectra of gravitational lens candidates and massive black hole binaries. We apply a hypothesis test to compare one-redshift and two-redshift models for these spectra, utilizing the data-driven model trained on a random subset of all SDSS spectra. This test confirms 129 of the 131 lens candidates in our sample and all of the known binary candidates, and turns up very few false positives.
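A minimal numpy sketch of the iterative inverse-variance-weighted least-squares factorization described above; shapes, initialization and iteration count are illustrative, and the published method includes refinements (regularization, convergence tests, missing-data handling) omitted here.

# Hedged sketch of heteroscedastic matrix factorization: alternately solve
# inverse-variance-weighted least squares for coefficients A and basis G so that
# X ~ A @ G under per-entry variances V.
import numpy as np

def hmf(X, V, k=3, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    A, G = rng.normal(size=(n, k)), rng.normal(size=(k, m))
    W = 1.0 / V                                   # inverse-variance weights
    for _ in range(n_iter):
        for i in range(n):                        # row-wise weighted LS for A
            Gw = G * W[i]                         # basis weighted per wavelength
            A[i] = np.linalg.solve(Gw @ G.T, Gw @ X[i])
        for j in range(m):                        # column-wise weighted LS for G
            Aw = A * W[:, j][:, None]
            G[:, j] = np.linalg.solve(Aw.T @ A, Aw.T @ X[:, j])
    return A, G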
Constraints on Stress Components at the Internal Singular Point of an Elastic Compound Structure
NASA Astrophysics Data System (ADS)
Pestrenin, V. M.; Pestrenina, I. V.
2017-03-01
The classical analytical and numerical methods for investigating the stress-strain state (SSS) in the vicinity of a singular point consider the point as a mathematical one (having no linear dimensions). The reliability of the solution obtained by such methods is valid only outside a small vicinity of the singular point, because the macroscopic equations become incorrect in this vicinity and microscopic ones have to be used to describe the SSS there. Also, it is impossible to set constraints, or to formulate solutions in stress-strain terms, at a mathematical point. These problems do not arise if the singular point is identified with a representative volume of the material of the structure studied. In the authors' opinion, this approach is consistent with the postulates of continuum mechanics. In this case, the formulation of constraints at a singular point and their investigation becomes an independent problem of mechanics for bodies with singularities. This method was used to explore constraints at an internal singular point (representative volume) of a compound wedge and a compound rib. It is shown that, in addition to the constraints given by the classical approach, there are also constraints that depend on the macroscopic parameters of the constituent materials. These constraints turn problems for deformable bodies with an internal singular point into nonclassical ones. Combinations of material parameters determine the number of additional constraints and the critical stress state at the singular point. The results of this research can be used in the mechanics of composite materials and fracture mechanics and in studying stress concentrations in composite structural elements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adare, A.; Aidala, C.; Ajitanand, N. N.
2015-02-02
We present midrapidity charged-pion invariant cross sections, the ratio of the π⁻ to π⁺ cross sections, and the charge-separated double-spin asymmetries in polarized p+p collisions at √s = 200 GeV. While the cross section measurements are consistent within the errors of next-to-leading-order (NLO) perturbative quantum chromodynamics (pQCD) predictions, the same calculations overestimate the ratio of the charged-pion cross sections. This discrepancy arises from the cancellation of the substantial systematic errors associated with the NLO-pQCD predictions in the ratio and highlights the constraints these data will place on flavor-dependent pion fragmentation functions. The charge-separated pion asymmetries presented here sample an x range of ∼0.03-0.16 and provide unique information on the sign of the gluon-helicity distribution.
NASA Astrophysics Data System (ADS)
Adare, A.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Akimoto, R.; Al-Ta'Ani, H.; Alexander, J.; ...; Zajc, W. A.; Zelenski, A.; Zhou, S.; Phenix Collaboration
2015-02-01
We present midrapidity charged-pion invariant cross sections, the ratio of the π⁻ to π⁺ cross sections and the charge-separated double-spin asymmetries in polarized p+p collisions at √s = 200 GeV. While the cross section measurements are consistent within the errors of next-to-leading-order (NLO) perturbative quantum chromodynamics predictions (pQCD), the same calculations overestimate the ratio of the charged-pion cross sections. This discrepancy arises from the cancellation of the substantial systematic errors associated with the NLO-pQCD predictions in the ratio and highlights the constraints these data will place on flavor-dependent pion fragmentation functions. The charge-separated pion asymmetries presented here sample an x range of ∼0.03-0.16 and provide unique information on the sign of the gluon-helicity distribution.
Klous, Miriam; Klous, Sander
2010-07-01
The aim of skin-marker-based motion analysis is to reconstruct the motion of a kinematical model from the noisy measured motion of skin markers. Existing kinematic models for the reconstruction of chains of segments can be divided into two categories: analytical methods that do not take joint constraints into account, and numerical global optimization methods that do take joint constraints into account but require numerical optimization of a large number of degrees of freedom, especially when the number of segments increases. In this study, a new and largely analytical method is presented for a chain of rigid bodies interconnected by spherical joints (the chain-method). In this method, the number of generalized coordinates to be determined through numerical optimization is three, irrespective of the number of segments. This new method is compared with the analytical method of Veldpaus et al. [1988, "A Least-Squares Algorithm for the Equiform Transformation From Spatial Marker Co-Ordinates," J. Biomech., 21, pp. 45-54] (Veldpaus-method, a method of the first category) and the numerical global optimization method of Lu and O'Connor [1999, "Bone Position Estimation From Skin-Marker Co-Ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129-134] (Lu-method, a method of the second category) regarding the effects of continuous noise simulating skin movement artifacts and regarding systematic errors in joint constraints. The study is based on simulated data to allow a comparison of the results of the different algorithms with true (noise- and error-free) marker locations. Results indicate a clear trend that the accuracy of the chain-method is higher than that of the Veldpaus-method and similar to that of the Lu-method. Because large parts of the equations in the chain-method can be solved analytically, the speed of convergence of this method is substantially higher than that of the Lu-method. With only three segments, the average number of required iterations with the chain-method is 3.0 ± 0.2 times lower than with the Lu-method when skin movement artifacts are simulated by applying a continuous noise model. When simulating systematic errors in joint constraints, the number of iterations for the chain-method was almost a factor of 5 lower than for the Lu-method. However, the Lu-method performs slightly better than the chain-method: the RMSD value between the reconstructed and actual marker positions is approximately 57% of the systematic error on the joint center positions for the Lu-method, compared with 59% for the chain-method.
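For orientation, the analytical core shared by such marker-based methods is a least-squares rigid-body fit; the sketch below uses the SVD (Kabsch) form, which is equivalent in spirit to, though not identical with, the Veldpaus algorithm.

# Hedged sketch of the least-squares rigid-body fit underlying such methods:
# find the rotation R and translation t mapping model marker positions onto
# measured ones in the least-squares sense.
import numpy as np

def rigid_fit(model, measured):
    # model, measured: (n_markers, 3) arrays of corresponding positions
    cm, cs = model.mean(axis=0), measured.mean(axis=0)
    H = (model - cm).T @ (measured - cs)                 # cross-dispersion matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cs - R @ cm                                # rotation and translation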
Predit: A temporal predictive framework for scheduling systems
NASA Technical Reports Server (NTRS)
Paolucci, E.; Patriarca, E.; Sem, M.; Gini, G.
1992-01-01
Scheduling can be formalized as a constraint satisfaction problem (CSP). Within this framework, activities belonging to a plan are interconnected via temporal constraints that account for slack among them. The temporal representation must include methods for constraint propagation and provide a logic for symbolic and numerical deductions. In this paper we describe a support framework for opportunistic reasoning in constraint-directed scheduling. In order to focus the attention of an incremental scheduler on critical problem aspects, some discrete temporal indexes are presented. They are also useful for predicting the degree of resource contention. The predictive method expressed through our indexes can be seen as a knowledge source for an opportunistic scheduler with a blackboard architecture.
Application of singular value decomposition to structural dynamics systems with constraints
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Pinson, L. D.
1985-01-01
Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
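A minimal numpy sketch of the construction: for constraints C q = 0, the right singular vectors associated with zero singular values span the admissible subspace, giving q = N η with C N = 0; the constraint matrix below is an invented toy example.

# Hedged sketch of the SVD-based coordinate transformation for C q = 0.
import numpy as np

C = np.array([[1.0, -1.0, 0.0],        # e.g., q1 = q2
              [0.0,  1.0, -1.0]])      # e.g., q2 = q3
U, s, Vt = np.linalg.svd(C)
rank = np.sum(s > 1e-12)
N = Vt[rank:].T                        # null-space basis: independent coordinates
print(np.allclose(C @ N, 0.0))         # constraints satisfied identically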
Sunada, Keijiro; Yamamoto, Hironori; Kita, Hiroto; Yano, Tomonori; Sato, Hiroyuki; Hayashi, Yoshikazu; Miyata, Tomohiko; Sekine, Yutaka; Kuno, Akiko; Iwamoto, Michiko; Ohnishi, Hirohide; Ido, Kenichi; Sugano, Kentaro
2005-01-01
AIM: To evaluate the clinical outcome of enteroscopy using the double-balloon method, focusing on the involvement of neoplasms in strictures of the small intestine. METHODS: Enteroscopy using the double-balloon method was performed between December 1999 and December 2002 at Jichi Medical School Hospital, Japan, and strictures of the small intestine were found in 17 out of 62 patients. These 17 consecutive patients were subjected to analysis. RESULTS: Double-balloon enteroscopy contributed to the diagnosis of the small intestinal neoplasms found in 3 out of 17 patients, by direct observation of the strictures as well as biopsy sampling. Surgical procedures were chosen for these three patients, while balloon dilation was chosen for the strictures in four patients diagnosed with inflammation without involvement of neoplasm. CONCLUSION: Double-balloon enteroscopy is a useful method for the diagnosis and treatment of strictures in the small bowel. PMID:15742422
Okuthe, O S; McLeod, A; Otte, J M; Buyu, G E
2003-09-01
Assessment of livestock production constraints in the smallholder dairy systems of the western Kenya highlands was carried out using both qualitative and quantitative epidemiological methods. Rapid rural appraisals (qualitative) were conducted in rural and peri-urban areas. A cross-sectional survey (quantitative) was then conducted on a random sample of farms in the study area. Diseases, poor communication, lack of marketing of livestock produce, lack of artificial insemination services, and feed and water shortages during the dry season were identified as the major constraints on cattle production in both areas. Tick-borne diseases (especially East Coast fever) were identified as the major constraint on cattle production. Qualitative methods were found to be more flexible and cheaper than quantitative methods, by a ratio of between 2.0 and 2.19. The two methods were found to complement each other; qualitative studies could be applied in preliminary studies before initiating more specific follow-up quantitative studies.
Complete denture tooth arrangement technology driven by a reconfigurable rule.
Dai, Ning; Yu, Xiaoling; Fan, Qilei; Yuan, Fulai; Liu, Lele; Sun, Yuchun
2018-01-01
The conventional technique for the fabrication of complete dentures is complex, with a long fabrication process and difficult-to-control restoration quality. In recent years, digital complete denture design has become a research focus. Digital complete denture tooth arrangement is a challenging problem that is difficult to implement efficiently under the constraints of complex tooth arrangement rules and the patient's individualized functional aesthetics. The present study proposes a complete denture automatic tooth arrangement method driven by a reconfigurable rule; it uses four typical operators, namely a position operator, a scaling operator, a posture operator, and a contact operator, to establish the constraint mapping association between the teeth and the constraint set of the individual patient. By reorganizing the sequence of the different constraint operators, this method can flexibly implement different clinical tooth arrangement rules. When combined with a virtual occlusion algorithm based on progressive iterative Laplacian deformation, the proposed method can achieve automatic and individualized tooth arrangement. Finally, the experimental results verify that the proposed method is flexible and efficient.
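A minimal sketch of the reconfigurable-rule idea: each operator maps a tooth placement to an updated placement, and a rule is just reorderable data. All names are invented for illustration, and a contact operator would follow the same pattern.

# Hedged sketch: operators as composable functions; a "rule" is an ordered,
# reconfigurable list of (operator, argument) pairs applied to a placement.
from dataclasses import dataclass, replace

@dataclass
class Placement:
    position: tuple
    scale: float
    angle: float

def position_op(p, arch_point):  return replace(p, position=arch_point)
def scaling_op(p, factor):       return replace(p, scale=p.scale * factor)
def posture_op(p, tilt):         return replace(p, angle=p.angle + tilt)

def arrange(p, rule):
    for op, arg in rule:                 # the rule is reconfigurable data
        p = op(p, arg)
    return p

rule = [(position_op, (1.0, 2.0)), (scaling_op, 0.95), (posture_op, 5.0)]
print(arrange(Placement((0, 0), 1.0, 0.0), rule))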
Constraints on B and Higgs physics in minimal low energy supersymmetric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carena, Marcela; /Fermilab; Menon, A.
2006-03-01
We study the implications of minimal flavor violating low energy supersymmetry scenarios for the search for new physics in the B and Higgs sectors at the Tevatron collider and the LHC. We show that the already stringent Tevatron bound on the decay rate B_s → μ⁺μ⁻ sets strong constraints on the possibility of generating large corrections to the mass difference ΔM_s of the B_s eigenstates. We also show that the B_s → μ⁺μ⁻ bound, together with the constraint on the branching ratio of the rare decay b → sγ, has strong implications for the search for light, non-standard Higgs bosons at hadron colliders. In doing this, we demonstrate that the former expressions derived for the analysis of the double penguin contributions in the kaon sector need to be corrected by additional terms for a realistic analysis of these effects. We also study a specific non-minimal flavor violating scenario, where there are flavor-changing gluino-squark-quark interactions governed by the CKM matrix elements, and show that the B and Higgs physics constraints are similar to those in the minimal flavor violating case. Finally, we show that in scenarios like electroweak baryogenesis, which have light stops and charginos, there may be enhanced effects on the B and K mixing parameters without any significant effect on the rate of B_s → μ⁺μ⁻.
Invalid-point removal based on epipolar constraint in the structured-light method
NASA Astrophysics Data System (ADS)
Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin
2018-06-01
In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
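A minimal sketch of the invalidation criterion described above, assuming a known fundamental matrix F that maps camera pixels to epipolar lines in the projector image plane; array names and the threshold value are illustrative, not taken from the paper:

```python
import numpy as np

def remove_invalid_points(cam_pts, proj_pts, F, threshold=1.0):
    """Flag pixels whose projector image coordinate (PIC) lies too far
    from the epipolar line implied by the fundamental matrix F.

    cam_pts, proj_pts: (N, 2) arrays of camera pixels and retrieved PICs.
    Returns a boolean mask of points that satisfy the epipolar constraint.
    """
    ones = np.ones((cam_pts.shape[0], 1))
    x_cam = np.hstack([cam_pts, ones])    # homogeneous camera coordinates
    x_prj = np.hstack([proj_pts, ones])   # homogeneous projector coordinates

    lines = x_cam @ F.T                   # epipolar lines l = F x per pixel
    # point-to-line distance |ax + by + c| / sqrt(a^2 + b^2)
    num = np.abs(np.sum(lines * x_prj, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / den <= threshold
```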
Model-based control strategies for systems with constraints of the program type
NASA Astrophysics Data System (ADS)
Jarzębowska, Elżbieta
2006-08-01
The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material and non-material ones, the latter referred to as program constraints. The program constraint equations represent tasks put upon system motions; they can be differential equations of order higher than one or two, and can be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary order differential constraints, and a dynamic control model. The reference model serves as a motion planner, which generates inputs to the dynamic control model. It is based upon a generalized program motion equations (GPME) method. The method enables material and program constraints to be combined and merged into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they apply to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints. It extends "trajectory tracking" to "program motion tracking". We also demonstrate that our tracking strategy can be extended to hybrid program motion/force tracking.
Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun
2014-01-01
This paper is devoted to develop an approximation method for scheduling refinery crude oil operations by taking into consideration the demand uncertainty. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying proximity level on the satisfaction of product demands. However, the joint chance constraints usually hold strong nonlinearity and consequently, it is still hard to handle it directly. In this paper, an approximation method combines a relax-and-tight technique to approximately transform the joint chance constraints to a serial of parameterized linear constraints so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, nonlinear constraints by a lot of easily handled linear constraints which will lead to a well balance between the problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case without considering the demand correlation. PMID:24757433
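The paper's relax-and-tight approximation for the joint chance constraint is not reproduced here, but the marginal building block — reducing an individual chance constraint on a normally distributed demand to a linear constraint — can be sketched as follows (function name and numbers are illustrative):

```python
import numpy as np
from scipy.stats import norm

def individual_chance_to_linear(mu, sigma, p):
    """Reduce P(x >= D) >= p, with demand D ~ N(mu, sigma^2), to the
    linear constraint x >= mu + z_p * sigma (z_p: standard normal
    quantile). Joint chance constraints over correlated demands, as in
    the paper, require the relax-and-tight scheme rather than this
    marginal rule."""
    return mu + norm.ppf(p) * sigma

# e.g. demand ~ N(100, 15^2) with a 95% satisfaction level
x_min = individual_chance_to_linear(100.0, 15.0, 0.95)   # ~124.7
```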
HEARTBEAT STARS: SPECTROSCOPIC ORBITAL SOLUTIONS FOR SIX ECCENTRIC BINARY SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smullen, Rachel A.; Kobulnicky, Henry A., E-mail: rsmullen@email.arizona.edu
2015-08-01
We present multi-epoch spectroscopy of “heartbeat stars,” eccentric binaries with dynamic tidal distortions and tidally induced pulsations originally discovered with the Kepler satellite. Optical spectra of six known heartbeat stars using the Wyoming Infrared Observatory 2.3 m telescope allow measurement of stellar effective temperatures and radial velocities from which we determine orbital parameters including the periods, eccentricities, approximate mass ratios, and component masses. These spectroscopic solutions confirm that the stars are members of eccentric binary systems with eccentricities e > 0.34 and periods P = 7–20 days, strengthening conclusions from prior works that utilized purely photometric methods. Heartbeat stars in this sample have A- or F-type primary components. Constraints on orbital inclinations indicate that four of the six systems have minimum mass ratios q = 0.3–0.5, implying that most secondaries are probable M dwarfs or earlier. One system is an eclipsing, double-lined spectroscopic binary with roughly equal-mass mid-A components (q = 0.95), while another shows double-lined behavior only near periastron, indicating that the F0V primary has a G1V secondary (q = 0.65). This work constitutes the first measurements of the masses of secondaries in a statistical sample of heartbeat stars. The good agreement between our spectroscopic orbital elements and those derived using a photometric model support the idea that photometric data are sufficient to derive reliable orbital parameters for heartbeat stars.
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints comprises equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.
Double sampling to estimate density and population trends in birds
Bart, Jonathan; Earnst, Susan L.
2002-01-01
We present a method for estimating density of nesting birds based on double sampling. The approach involves surveying a large sample of plots using a rapid method such as uncorrected point counts, variable circular plot counts, or the recently suggested double-observer method. A subsample of those plots is also surveyed using intensive methods to determine actual density. The ratio of the mean count on those plots (using the rapid method) to the mean actual density (as determined by the intensive searches) is used to adjust results from the rapid method. The approach works well when results from the rapid method are highly correlated with actual density. We illustrate the method with three years of shorebird surveys from the tundra in northern Alaska. In the rapid method, surveyors covered ~10 ha h⁻¹ and surveyed each plot a single time. The intensive surveys involved three thorough searches, required ~3 h ha⁻¹, and took 20% of the study effort. Surveyors using the rapid method detected an average of 79% of birds present. That detection ratio was used to convert the index obtained in the rapid method into an essentially unbiased estimate of density. Trends estimated from several years of data would also be essentially unbiased. Other advantages of double sampling are that (1) the rapid method can be changed as new methods become available, (2) domains can be compared even if detection rates differ, (3) total population size can be estimated, and (4) valuable ancillary information (e.g. nest success) can be obtained on intensive plots with little additional effort. We suggest that double sampling be used to test the assumption that rapid methods, such as variable circular plot and double-observer methods, yield density estimates that are essentially unbiased. The feasibility of implementing double sampling in a range of habitats needs to be evaluated.
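A minimal sketch of the ratio adjustment described above (array names are illustrative; the variance of the estimator is not treated):

```python
import numpy as np

def double_sampling_density(rapid_all, rapid_sub, actual_sub):
    """Ratio-adjusted density estimate from double sampling.

    rapid_all:  rapid-method counts on the full sample of plots
    rapid_sub:  rapid-method counts on the intensively searched subsample
    actual_sub: actual densities on that subsample (intensive searches)
    """
    detection_ratio = np.mean(rapid_sub) / np.mean(actual_sub)  # e.g. ~0.79
    return np.mean(rapid_all) / detection_ratio
```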
Zhou, Guoxu; Yang, Zuyuan; Xie, Shengli; Yang, Jun-Mei
2011-04-01
Online blind source separation (BSS) is proposed to overcome the high computational cost problem, which limits the practical applications of traditional batch BSS algorithms. However, the existing online BSS methods are mainly used to separate independent or uncorrelated sources. Recently, nonnegative matrix factorization (NMF) has shown great potential to separate correlated sources, where some constraints are often imposed to overcome the non-uniqueness of the factorization. In this paper, an incremental NMF with a volume constraint is derived and utilized for solving online BSS. The volume constraint on the mixing matrix enhances the identifiability of the sources, while the incremental learning mode reduces the computational cost. The proposed method takes advantage of the natural gradient based multiplicative update rule, and it performs especially well in the recovery of dependent sources. Simulations in BSS for dual-energy X-ray images, online encrypted speech signals, and highly correlated face images show the validity of the proposed method.
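For orientation, a baseline batch NMF with the classic Lee-Seung multiplicative updates is sketched below; the paper's volume constraint on the mixing matrix and its incremental (online) update are omitted, so this is only the starting point the abstract builds on:

```python
import numpy as np

def nmf_multiplicative(V, r, iters=200, eps=1e-9, seed=None):
    """Baseline NMF V ~ W H via multiplicative updates (Frobenius loss).

    V: (m, n) nonnegative data matrix; r: number of sources/components.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update source matrix
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update mixing matrix
    return W, H
```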
Power Distribution System Planning with GIS Consideration
NASA Astrophysics Data System (ADS)
Wattanasophon, Sirichai; Eua-Arporn, Bundhit
This paper proposes a method for solving radial distribution system planning problems taking into account geographical information. The proposed method can automatically determine the appropriate location and size of a substation, the routing of feeders, and the sizes of conductors while satisfying all constraints, i.e. technical constraints (voltage drop and thermal limit) and geographical constraints (obstacles, existing infrastructure, and high-cost passages). Sequential quadratic programming (SQP) and a minimum path algorithm (MPA) are applied to solve the planning problem based on net present value (NPV) considerations. In addition, this method integrates the planner's experience with the optimization process to achieve an appropriate practical solution. The proposed method has been tested on an actual distribution system, and the results indicate that it can provide satisfactory plans.
NASA Technical Reports Server (NTRS)
Britt, Daniel L.; Geoffroy, Amy L.; Gohring, John R.
1990-01-01
Various temporal constraints on the execution of activities are described, and their representation in the scheduling system MAESTRO is discussed. Initial examples are presented using a sample activity. Those examples are then expanded to include a second activity, and the types of temporal constraints that can hold between two activities are explored. Soft constraints, or preferences, in activity placement are discussed. Multiple performances of activities are considered, with respect to both hard and soft constraints. The primary methods used in MAESTRO to handle temporal constraints are described, as are certain aspects of contingency handling with respect to temporal constraints. A discussion of the overall approach, with indications of future directions for this research, concludes the study.
DNA purification by triplex-affinity capture and affinity capture electrophoresis
Cantor, Charles R.; Ito, Takashi; Smith, Cassandra L.
1996-01-01
The invention provides a method for purifying or isolating double stranded DNA intact using triple helix formation. The method includes the steps of complexing an oligonucleotide and double stranded DNA to generate a triple helix and immobilization of the triple helix on a solid phase by means of a molecular recognition system such as avidin/biotin. The purified DNA is then recovered intact by treating the solid phase with a reagent that breaks the bonds between the oligonucleotide and the intact double stranded DNA while not affecting the Watson-Crick base pairs of the double helix. The present invention also provides a method for purifying or isolating double stranded DNA intact by complexing the double stranded DNA with a specific binding partner and recovering the complex during electrophoresis by immobilizing it on a solid phase trap imbedded in an electrophoretic gel.
Development and application of a unified balancing approach with multiple constraints
NASA Technical Reports Server (NTRS)
Zorzi, E. S.; Lee, C. C.; Giordano, J. C.
1985-01-01
The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and the application of modal trial weight ratios is also described.
Constraint reasoning in deep biomedical models.
Cruz, Jorge; Barahona, Pedro
2005-05-01
Deep biomedical models are often expressed by means of differential equations. Despite their expressive power, they are difficult to reason about and to base decisions on, given their non-linearity and the important effects that uncertainty in the data may cause. The objective of this work is to propose a constraint reasoning framework to support safe decisions based on deep biomedical models. The methods used in our approach include generic constraint propagation techniques for reducing the bounds of uncertainty of the numerical variables, complemented with new constraint reasoning techniques that we developed to handle differential equations. The results of our approach are illustrated in biomedical models for the diagnosis of diabetes, tuning of drug design and epidemiology, where it was a valuable decision-supporting tool notwithstanding the uncertainty in the data. The main conclusion that follows from the results is that, in biomedical decision support, constraint reasoning may be a worthwhile alternative to traditional simulation methods, especially when safe decisions are required.
26 CFR 1.381(c)(5)-1 - Inventories.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...
26 CFR 1.381(c)(5)-1 - Inventories.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the dollar-value method, use the double-extension method, pool under the natural business unit method... double-extension method, pool under the natural business unit method, and value annual inventory... natural business unit method while P corporation pools under the multiple pool method. In addition, O...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Cheng-Chung; Tsai, Tsung-Yuan; Hsu, Shih-Jung
2013-03-15
Purpose: The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine; and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. Methods: The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, was based on the integration of geometrical anticollision constraints for adjacent vertebrae and the weighted edge-matching score (WEMS) method that matched the digitally reconstructed radiographs of the CT models of the vertebrae and the measured single-plane fluoroscopy images. Three variations of the anticollision constraints, namely, T-DOF, R-DOF, and A-DOF methods, were proposed. An in vitro experiment using four porcine cervical spines in different postures was performed to evaluate the performance of the WEMS and the WEMAC methods. Results: The WEMS method gave high precision and small bias in all components for both vertebral pose and intervertebral pose measurements, except for relatively large errors for the out-of-plane translation component. The WEMAC method successfully reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five degrees of freedom (DOF) more or less unaltered. The means (standard deviations) of the out-of-plane translational errors were less than -0.5 (0.6) and -0.3 (0.8) mm for the T-DOF method and the R-DOF method, respectively. Conclusions: The proposed single-plane fluoroscopy-to-CT registration method reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five DOF more or less unaltered. With the submillimeter and subdegree accuracy, the WEMAC method was considered accurate for measuring 3D intervertebral kinematics during various functional activities for research and clinical applications.
Study of the Navigation Method for a Snake Robot Based on the Kinematics Model with MEMS IMU
Dou, Lihua; Su, Zhong; Liu, Ning
2018-01-01
A snake robot is a type of highly redundant mobile robot that significantly differs from tracked, wheeled and legged robots. To address the issue of a snake robot performing self-localization in application environments without orientation assistance, an autonomous navigation method is proposed based on the snake robot's motion characteristic constraints. The method realizes autonomous navigation of the snake robot, without assistant nodes or external aiding, using only its own Micro-Electro-Mechanical Systems (MEMS) Inertial Measurement Unit (IMU). First, it studies the snake robot's motion characteristics, builds the kinematics model, and then analyses the motion constraint characteristics and motion error propagation properties. Second, it explores the snake robot's navigation layout, proposes a constraint criterion and a fixed relationship, and applies zero-state constraints based on the motion features and control modes of the snake robot. Finally, it realizes autonomous navigation positioning based on the Extended Kalman Filter (EKF) position estimation method under the constraints of its motion characteristics. Tests with the self-developed snake robot verify the proposed method; the position error is less than 5% of Total-Traveled-Distance (TDD). In a short-distance environment, this method is able to meet the requirements for a snake robot to perform autonomous navigation and positioning in traditional applications, and it can be extended to other similar multi-link robots. PMID:29547515
Multi-Maneuver Clohessy-Wiltshire Targeting
NASA Technical Reports Server (NTRS)
Dannemiller, David P.
2011-01-01
Orbital rendezvous involves execution of a sequence of maneuvers by a chaser vehicle to bring the chaser to a desired state relative to a target vehicle while meeting intermediate and final relative constraints. Intermediate and final relative constraints are necessary to meet a multitude of requirements such as to control approach direction, ensure relative position is adequate for operation of space-to-space communication systems and relative sensors, provide fail-safe trajectory features, and provide contingency hold points. The effect of maneuvers on constraints is often coupled, so the maneuvers must be solved for as a set. For example, maneuvers that affect orbital energy change both the chaser's height and downrange position relative to the target vehicle. Rendezvous designers use experience and rules-of-thumb to design a sequence of maneuvers and constraints. A non-iterative method is presented for targeting a rendezvous scenario that includes a sequence of maneuvers and relative constraints. This method is referred to as Multi-Maneuver Clohessy-Wiltshire Targeting (MM_CW_TGT). When a single maneuver is targeted to a single relative position, the classic CW targeting solution is obtained. The MM_CW_TGT method involves manipulation of the CW state transition matrix to form a linear system. As a starting point for forming the algorithm, the effects of a series of impulsive maneuvers on the state are derived. Simple and moderately complex examples are used to demonstrate the pattern of the resulting linear system. The general form of the pattern results in an algorithm for formation of the linear system. The resulting linear system relates the effect of maneuver components and initial conditions on relative constraints specified by the rendezvous designer. Solution of the linear system includes the straightforward inverse of a square matrix. Inversion of the square matrix is assured if the designer poses a controllable scenario - a scenario where the constraints can be met by the sequence of maneuvers. Matrices in the linear system are dependent on selection of maneuvers and constraints by the designer, but the matrices are independent of the chaser's initial conditions. For scenarios where the sequence of maneuvers and constraints are fixed, the linear system can be formed and the square matrix inverted prior to real-time operations. Example solutions are presented for several rendezvous scenarios to illustrate the utility of the method. The MM_CW_TGT method has been used during the preliminary design of rendezvous scenarios and is expected to be useful for iterative methods in the generation of an initial guess and corrections.
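The classic single-maneuver CW targeting solution, which the abstract notes is recovered as a special case, can be sketched for the in-plane problem as follows; the multi-maneuver linear system stacks such state-transition-matrix blocks (variable names and the example numbers are illustrative):

```python
import numpy as np

def cw_stm_inplane(n, t):
    """In-plane Clohessy-Wiltshire state transition blocks (x radial,
    y along-track) for mean motion n [rad/s] and transfer time t [s]."""
    s, c = np.sin(n * t), np.cos(n * t)
    Prr = np.array([[4.0 - 3.0 * c, 0.0],
                    [6.0 * (s - n * t), 1.0]])
    Prv = np.array([[s / n, 2.0 * (1.0 - c) / n],
                    [-2.0 * (1.0 - c) / n, (4.0 * s - 3.0 * n * t) / n]])
    return Prr, Prv

def single_maneuver_dv(r0, v0, r_target, n, t):
    """Impulsive dv so that r(t) = r_target:
    solve  Prr r0 + Prv (v0 + dv) = r_target  for dv."""
    Prr, Prv = cw_stm_inplane(n, t)
    return np.linalg.solve(Prv, r_target - Prr @ r0) - v0

# e.g. ISS-like orbit (n ~ 0.00113 rad/s), reach the origin in 1500 s
dv = single_maneuver_dv(np.array([1000.0, -2000.0]), np.zeros(2),
                        np.zeros(2), 0.00113, 1500.0)
```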
A Framework for Dynamic Constraint Reasoning Using Procedural Constraints
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Frank, Jeremy D.
1999-01-01
Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
NASA Astrophysics Data System (ADS)
Guo, H.; Zhang, H.
2016-12-01
Relocating high-precision earthquakes is a central task for monitoring earthquakes and studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses the catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to the common stations to reduce the effect of the velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method which uses the differential times from common events to pairs of stations to reduce the effect of the velocity uncertainties near the source region, to relocate the non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To utilize the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses the differential times from pairs of events to pairs of stations. The new method can remove the event origin time and station correction terms from the inversion system and cancel out the effects of the velocity uncertainties near and outside the source region simultaneously. We tested and applied the new method to northern California regular earthquakes to validate its performance. In comparison, among the three DD location methods, the new double-pair DD method can determine more accurate relative locations and the station-pair DD method can better improve the absolute locations. Thus, we further proposed a new location strategy combining station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick the first arrivals and derive the WCC event-pair differential times, thus the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered due to the low-precision relative locations. Because double-pair data can be constructed directly from station-pair data, the double-pair DD method can also be used to improve NVT locations. We have applied the new method to the NVTs beneath the SAF near Cholame, California. Compared to the previous results, the new double-pair DD tremor locations are more concentrated and show more detailed structures.
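A minimal sketch of how double-pair differential times can be assembled from per-event, per-station arrival times, illustrating why origin-time and station terms cancel by construction (the data layout is assumed, not taken from the paper):

```python
def double_pair_times(t):
    """Build double-pair differential times from arrival times
    t[event][station] for event pairs (i, j) and station pairs (a, b):

        dt = (t_i^a - t_i^b) - (t_j^a - t_j^b)

    Any per-event origin-time shift or per-station correction added to
    t cancels exactly in dt, which is the key property of the method.
    """
    out = []
    events = sorted(t)
    for i_idx, i in enumerate(events):
        for j in events[i_idx + 1:]:
            stations = sorted(set(t[i]) & set(t[j]))
            for a_idx, a in enumerate(stations):
                for b in stations[a_idx + 1:]:
                    dt = (t[i][a] - t[i][b]) - (t[j][a] - t[j][b])
                    out.append((i, j, a, b, dt))
    return out
```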
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
Ordinarily, the Job Shop Scheduling Problem (JSSP) is known to be an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods. Thus, current studies on the JSSP concentrate mainly on applying different methods to improve the heuristics for optimizing the JSSP. However, there still exist obstacles to efficient optimization of the JSSP, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. Therefore, to solve this problem, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features in the JSSP, performed with a constraint satisfaction model; (2) satisfaction of the constraints by means of consistency technology and a constraint spreading algorithm, in order to improve the performance of the ACO algorithm; on this basis, the JSSP model based on the improved ACO algorithm is constructed; (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
The Casalbuoni-Brink-Schwarz superparticle with covariant, reducible constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayi, O.F.
1992-04-30
This paper discusses the fermionic constraints of the massless Casalbuoni-Brink-Schwarz superparticle in d = 10, which are separated covariantly into first- and second-class constraints that are infinitely reducible. Although the reducibility conditions of the second-class constraints include the first-class ones, a consistent quantization is possible. The ghost structure needed to quantize the system by BFV-BRST methods is given and unitarity is shown.
Maggi's equations of motion and the determination of constraint reactions
NASA Astrophysics Data System (ADS)
Papastavridis, John G.
1990-04-01
This paper presents a geometrical derivation of the constraint reaction-free equations of Maggi for mechanical systems subject to linear (first-order) nonholonomic and/or holonomic constraints. These results follow directly from the proper application of the concepts of virtual displacement and quasi-coordinates to the variational equation of motion, i.e., Lagrange's principle. The method also makes clear how to compute the constraint reactions (kinetostatics) without introducing Lagrangian multipliers.
Double quick, double click reversible peptide "stapling".
Grison, Claire M; Burslem, George M; Miles, Jennifer A; Pilsl, Ludwig K A; Yeo, David J; Imani, Zeynab; Warriner, Stuart L; Webb, Michael E; Wilson, Andrew J
2017-07-01
The development of constrained peptides for inhibition of protein-protein interactions is an emerging strategy in chemical biology and drug discovery. This manuscript introduces a versatile, rapid and reversible approach to constrain peptides in a bioactive helical conformation using BID and RNase S peptides as models. Dibromomaleimide is used to constrain BID and RNase S peptide sequence variants bearing cysteine (Cys) or homocysteine (hCys) amino acids spaced at i and i + 4 positions by double substitution. The constraint can be readily removed by displacement of the maleimide using excess thiol. This new constraining methodology results in enhanced α-helical conformation (BID and RNase S peptide) as demonstrated by circular dichroism and molecular dynamics simulations, resistance to proteolysis (BID) as demonstrated by trypsin proteolysis experiments, and retained or enhanced potency of inhibition for Bcl-2 family protein-protein interactions (BID), or greater capability to restore the hydrolytic activity of the RNase S protein (RNase S peptide). Finally, use of a dibromomaleimide functionalized with an alkyne permits further divergent functionalization through alkyne-azide cycloaddition chemistry on the constrained peptide with fluorescein, oligoethylene glycol or biotin groups to facilitate biophysical and cellular analyses. Hence this methodology may extend the scope and accessibility of peptide stapling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konacki, Maciej; Helminiak, Krzysztof G.; Muterspaugh, Matthew W.
2009-10-10
We present preliminary results of the first and on-going radial velocity survey for circumbinary planets. With a novel radial velocity technique employing an iodine absorption cell, we achieve an unprecedented radial velocity (RV) precision of up to 2 m s⁻¹ for double-lined binary stars. The high-resolution spectra collected with the Keck I/Hires, TNG/Sarg, and Shane/CAT/Hamspec telescopes/spectrographs over the years 2003-2008 allow us to derive RVs and compute planet detection limits for 10 double-lined binary stars. For this initial sample of targets, we can rule out planets on dynamically stable orbits with masses as small as ≈0.3 to 3 M_Jup for orbital periods of up to ≈5.3 years. Even though the presented sample of stars is too small to make any strong conclusions, it is clear that the search for circumbinary planets is now technique-wise possible and eventually will provide new constraints for the planet formation theories.
A neural-network-based approach to the double traveling salesman problem.
Plebe, Alessio; Anile, Angelo Marcello
2002-02-01
The double traveling salesman problem is a variation of the basic traveling salesman problem where targets can be reached by two salespersons operating in parallel. The real problem addressed by this work concerns the optimization of the harvest sequence for the two independent arms of a fruit-harvesting robot. This application poses further constraints, like a collision-avoidance function. The proposed solution is based on a self-organizing map structure, initialized with as many artificial neurons as the number of targets to be reached. One of the key components of the process is the combination of competitive relaxation with a mechanism for deleting and creating artificial neurons. Moreover, in the competitive relaxation process, information about the trajectory connecting the neurons is combined with the distance of neurons from the target. This strategy prevents tangles in the trajectory and collisions between the two tours. Results of tests indicate that the proposed approach is efficient and reliable for harvest sequence planning. Moreover, the enhancements added to the pure self-organizing map concept are of wider importance, as proved by a traveling salesman problem version of the program, simplified from the double version for comparison.
Zolghadri, Jaleh; Younesi, Masoumeh; Asadi, Nasrin; Khosravi, Dezire; Behdin, Shabnam; Tavana, Zohre; Ghaffarpasand, Fariborz
2014-02-01
To compare the effectiveness of the double cervical cerclage method versus the single method in women with recurrent second-trimester delivery. In this randomized clinical trial, we included 33 singleton pregnancies suffering from recurrent second-trimester pregnancy loss (≥2 consecutive fetal loss during second-trimester or with a history of unsuccessful procedures utilizing the McDonald method), due to cervical incompetence. Patients were randomly assigned to undergo either the classic McDonald method (n = 14) or the double cerclage method (n = 19). The successful pregnancy rate and gestational age at delivery was also compared between the two groups. The two study groups were comparable regarding their baseline characteristics. The successful pregnancy rate did not differ significantly between those who underwent the double cerclage method or the classic McDonald cerclage method (100% vs 85.7%; P = 0.172). In the same way, the preterm delivery rate (<34 weeks of gestation) was comparable between the two study groups (10.5% vs 35.7%; P = 0.106). Those undergoing the double cerclage method had longer gestational duration (37.2 ± 2.6 vs 34.3 ± 3.8 weeks; P = 0.016). The double cervical cerclage method seems to provide better cervical support, as compared with the classic McDonald cerclage method, in those suffering from recurrent pregnancy loss, due to cervical incompetence. © 2013 The Authors. Journal of Obstetrics and Gynaecology Research © 2013 Japan Society of Obstetrics and Gynecology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Yajun
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method effective and convenient to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
NASA Astrophysics Data System (ADS)
Smith, Deborah C.; Jang, Shinho
2011-12-01
This case study of a fifth-year elementary intern's pathway in learning to teach science focused on her science methods course, placement science teaching, and reflections as a first-year teacher. We studied the sociocultural contexts within which the intern learned, their affordances and constraints, and participants' perspectives on their roles and responsibilities, and her learning. Semi-structured interviews were conducted with all participants. Audiotapes of the science methods class, videotapes of her science teaching, and field notes were collected. Data were transcribed and searched for affordances or constraints within contexts, perspectives on roles and responsibilities, and how views of her progress changed. Findings show the intern's substantial progress, the ways in which affordances sometimes became constraints, and participants' sometimes contradictory perspectives.
NASA Astrophysics Data System (ADS)
Yang, Yi-Yan; Chen, Li; Linghu, Rong-Feng; Zhang, Li-Yun; Taani, Ali
2017-12-01
Not Available. Supported by the National Program on Key Research and Development Project under Grant No 2016YFA0400801, the National Natural Science Foundation of China under Grant Nos 11173034, 11673023 and 11364007, the Fundamental Research Funds for the Central Universities, the Key Support Disciplines of Theoretical Physics of Guizhou Province Education Bureau under Grant No ZDXK[2015]38, and the Youth Talents Project of Science and Technology in Education Bureau of Guizhou Province under Grant No KY[2017]204.
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
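A minimal sketch of a relaxed powered-descent problem of this type, assuming the cvxpy modeling library, a fixed vehicle mass, and illustrative numbers; the flight PDG algorithm additionally handles mass depletion, glide-slope and speed constraints:

```python
import cvxpy as cp
import numpy as np

# Relaxed-PDG sketch: fixed mass, discretized double integrator.
N, dt, m = 40, 1.0, 1900.0                       # steps, step [s], mass [kg]
g = np.array([0.0, 0.0, -3.71])                  # Mars gravity [m/s^2]
rho1, rho2 = 4000.0, 12000.0                     # thrust bounds [N]
theta = np.radians(45.0)                         # pointing half-angle about +z
r0 = np.array([400.0, 200.0, 600.0])             # initial position [m]
v0 = np.array([-10.0, 5.0, -40.0])               # initial velocity [m/s]

r = cp.Variable((3, N + 1))
v = cp.Variable((3, N + 1))
T = cp.Variable((3, N))                          # thrust vector per step
G = cp.Variable(N)                               # slack for thrust magnitude
cons = [r[:, 0] == r0, v[:, 0] == v0, r[:, N] == 0, v[:, N] == 0]
for k in range(N):
    a = T[:, k] / m + g
    cons += [v[:, k + 1] == v[:, k] + dt * a,
             r[:, k + 1] == r[:, k] + dt * v[:, k] + 0.5 * dt ** 2 * a,
             cp.norm(T[:, k]) <= G[k],           # relaxed magnitude constraint
             G[k] >= rho1, G[k] <= rho2,         # convexified thrust bounds
             T[2, k] >= G[k] * np.cos(theta)]    # relaxed pointing cone
prob = cp.Problem(cp.Minimize(cp.sum(G) * dt), cons)  # fuel-use proxy
prob.solve()
```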
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint optimization based mapping method for solving the virtual network mapping problem. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard problems. A node mapping algorithm and a link mapping algorithm are proposed for solving the two phases, respectively. The node mapping algorithm adopts a greedy strategy and mainly considers two factors: the available resources supplied by the nodes and the distance between the nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with the minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that the method performs very well.
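A minimal sketch of a greedy node-mapping phase of the kind described, assuming networkx graphs whose nodes carry a "cpu" attribute for available/required resources; the score combining resources and distance is an illustrative stand-in for the paper's exact rule:

```python
import networkx as nx

def greedy_node_mapping(virtual, substrate):
    """Greedy node mapping: place the most resource-demanding virtual
    node first, on the substrate node with the best balance of
    available resources and closeness to already-chosen hosts."""
    dist = dict(nx.all_pairs_shortest_path_length(substrate))
    mapping = {}
    for vn in sorted(virtual.nodes,
                     key=lambda n: virtual.nodes[n]["cpu"], reverse=True):
        need = virtual.nodes[vn]["cpu"]
        candidates = [sn for sn in substrate.nodes
                      if substrate.nodes[sn]["cpu"] >= need
                      and sn not in mapping.values()]
        if not candidates:
            return None                          # node mapping fails
        best = max(candidates,
                   key=lambda sn: substrate.nodes[sn]["cpu"]
                   - sum(dist[sn][h] for h in mapping.values()))
        mapping[vn] = best
        substrate.nodes[best]["cpu"] -= need     # reserve the resources
    return mapping
```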
Modifier constraint in alkali borophosphate glasses using topological constraint theory
NASA Astrophysics Data System (ADS)
Li, Xiang; Zeng, Huidan; Jiang, Qi; Zhao, Donghui; Chen, Guorong; Wang, Zhaofeng; Sun, Luyi; Chen, Jianding
2016-12-01
In recent years, composition-dependent properties of glasses have been successfully predicted using the topological constraint theory. The constraints of the glass network are derived from two main parts: network formers and network modifiers. The constraints of the network formers can be calculated on the basis of the topological structure of the glass. However, the latter cannot be accurately calculated in this way, because of the existence of ionic bonds. In this paper, the constraints of the modifier ions in phosphate glasses were thoroughly investigated using the topological constraint theory. The results show that the constraints of the modifier ions increase gradually with the addition of alkali oxides. Furthermore, an improved topological constraint theory for borophosphate glasses is proposed by taking the composition-dependent constraints of the network modifiers into consideration. The proposed theory is subsequently evaluated by analyzing the composition dependence of the glass transition temperature in alkali borophosphate glasses. This method is expected to extend to other similar glass systems containing alkali ions.
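The generic constraint count that underlies such analyses can be sketched as below (standard Phillips-Thorpe counting); the paper's contribution, composition-dependent counts for the modifier ions themselves, would replace the fixed per-species coordination numbers passed in (function and argument names are illustrative):

```python
def average_constraints(coordination, fractions):
    """Mean bond constraints per atom in a network glass: r/2
    bond-stretching plus (2r - 3) bond-bending constraints for an atom
    of coordination r >= 2; rigidity percolates at n_c = 3 in 3D."""
    n_c = 0.0
    for r, x in zip(coordination, fractions):
        bending = 2 * r - 3 if r >= 2 else 0
        n_c += x * (r / 2 + bending)
    return n_c

# e.g. a toy three-species composition with assumed coordinations
print(average_constraints([4, 2, 1], [0.3, 0.6, 0.1]))
```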
Covariant constraints in ghost free massive gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deffayet, C.; Mourad, J.; Zahariade, G., E-mail: deffayet@iap.fr, E-mail: mourad@apc.univ-paris7.fr, E-mail: zahariad@apc.univ-paris7.fr
2013-01-01
We show that the reformulation of the de Rham-Gabadadze-Tolley massive gravity theory using vielbeins leads to a very simple and covariant way to count constraints, and hence degrees of freedom. Our method singles out a subset of theories, in the de Rham-Gabadadze-Tolley family, where an extra constraint, needed to eliminate the Boulware Deser ghost, is easily seen to appear. As a side result, we also introduce a new method, different from the Stuckelberg trick, to extract kinetic terms for the polarizations propagating in addition to those of the massless graviton.
A Method for Scheduling Air Traffic with Uncertain En Route Capacity Constraints
NASA Technical Reports Server (NTRS)
Arneson, Heather; Bloem, Michael
2009-01-01
A method for scheduling ground delay and airborne holding for flights scheduled to fly through airspace with uncertain capacity constraints is presented. The method iteratively solves linear programs for departure rates and airborne holding as new probabilistic information about future airspace constraints becomes available. The objective function is the expected value of the weighted sum of ground and airborne delay. In order to limit operationally costly changes to departure rates, they are updated only when such an update would lead to a significant cost reduction. Simulation results show a 13% cost reduction over a rough approximation of current practices. Comparison between the proposed as-needed replanning method and a similar method that uses fixed-frequency replanning shows a typical cost reduction of 1% to 2%, and even up to a 20% cost reduction in some cases.
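A deliberately tiny deterministic analogue of one planning stage, assuming scipy's linprog: weighted ground and airborne delay are traded off under a displacement requirement standing in for a capacity constraint. The paper's method additionally handles probabilistic capacity scenarios and iterative replanning:

```python
import numpy as np
from scipy.optimize import linprog

# Each flight i needs g_i + a_i >= need_i units of delay to respect
# capacity; airborne holding is weighted as costlier than ground delay.
n_flights = 4
w_ground, w_air = 1.0, 3.0
need = np.array([1.0, 1.0, 0.0, 0.0])        # two flights must be delayed

c = np.concatenate([w_ground * np.ones(n_flights),   # ground delay g_i
                    w_air * np.ones(n_flights)])     # airborne holding a_i
A_ub = -np.hstack([np.eye(n_flights), np.eye(n_flights)])  # -(g+a) <= -need
b_ub = -need
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (2 * n_flights))
print(res.x[:n_flights], res.x[n_flights:])  # all delay taken on the ground
```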
Beyramysoltan, Samira; Abdollahi, Hamid; Rajkó, Róbert
2014-05-27
Analytical self-modeling curve resolution (SMCR) methods resolve data sets into a range of feasible solutions using only non-negativity constraints. The Lawton-Sylvestre method was the first direct method for analyzing a two-component system; it was generalized as the Borgen plot for determining the feasible regions of three-component systems. A geometrical view seems to be required when considering curve resolution methods: the complicated, purely algebraic conceptions are why the general study of Borgen's work stalled for 20 years. Rajkó and István revised and elucidated the principles of the existing theory of SMCR methods and subsequently introduced computational geometry tools for developing an algorithm to draw Borgen plots for three-component systems. These developments are theoretical inventions, and the formulations cannot always be given in closed form or regularized formalism, especially for geometric descriptions; that is why several algorithms had to be developed and provided even for the theoretical deductions and determinations. In this study, analytical SMCR methods are revised and described using simple concepts. The details of a drawing algorithm for a developmental type of Borgen plot are given. Additionally, for the first time in the literature, equality and unimodality constraints are successfully implemented in the Lawton-Sylvestre method. To this end, a new state-of-the-art procedure is proposed to impose an equality constraint in Borgen plots. Two- and three-component HPLC-DAD data sets were simulated and analyzed by the new analytical curve resolution methods with and without additional constraints. Detailed descriptions and explanations are given based on the obtained abstract spaces. Copyright © 2014 Elsevier B.V. All rights reserved.
A Monte Carlo Approach for Adaptive Testing with Content Constraints
ERIC Educational Resources Information Center
Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander
2008-01-01
This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…
DNA purification by triplex-affinity capture and affinity capture electrophoresis
Cantor, C.R.; Ito, Takashi; Smith, C.L.
1996-01-09
The invention provides a method for purifying or isolating double stranded DNA intact using triple helix formation. The method includes the steps of complexing an oligonucleotide and double stranded DNA to generate a triple helix and immobilization of the triple helix on a solid phase by means of a molecular recognition system such as avidin/biotin. The purified DNA is then recovered intact by treating the solid phase with a reagent that breaks the bonds between the oligonucleotide and the intact double stranded DNA while not affecting the Watson-Crick base pairs of the double helix. The present invention also provides a method for purifying or isolating double stranded DNA intact by complexing the double stranded DNA with a specific binding partner and recovering the complex during electrophoresis by immobilizing it on a solid phase trap imbedded in an electrophoretic gel. 6 figs.
Magnetic Pair Creation Attenuation Altitude Constraints in Gamma-Ray Pulsars
NASA Astrophysics Data System (ADS)
Baring, Matthew; Story, Sarah
The Fermi gamma-ray pulsar database now exceeds 150 sources and has defined an important part of Fermi's science legacy, providing rich information for the interpretation of young energetic pulsars and old millisecond pulsars. Among the well established population characteristics is the common occurrence of exponential turnovers in the 1-10 GeV range. These turnovers are too gradual to arise from magnetic pair creation in the strong magnetic fields of pulsar inner magnetospheres, so their energy can be used to provide lower bounds to the typical altitude of GeV band emission. We explore such constraints due to single-photon pair creation transparency at and below the turnover energy. Our updated computations span both domains where general relativistic influences are important and locales where flat spacetime photon propagation is modified by rotational aberration effects. The altitude bounds, typically in the range of 2-5 stellar radii, provide key information on the emission altitude in radio quiet pulsars that do not possess double-peaked pulse profiles. However, the exceptional case of the Crab pulsar provides an altitude bound of around 20% of the light cylinder radius if pair transparency persists out to 350 GeV, the maximum energy detected by MAGIC. This is an impressive new physics-based constraint on the Crab's gamma-ray emission locale.
Middleton, David A
2011-02-01
Solid-state nuclear magnetic resonance (SSNMR) is a powerful technique for the structural analysis of amyloid fibrils. With suitable isotope labelling patterns, SSNMR can provide constraints on the secondary structure, alignment and registration of β-strands within amyloid fibrils and identify the tertiary and quaternary contacts defining the packing of the β-sheet layers. Detection of (14)N-(13)C dipolar couplings may provide potentially useful additional structural constraints on β-sheet packing within amyloid fibrils but has not until now been exploited for this purpose. Here a frequency-selective, transfer of population in double resonance SSNMR experiment is used to detect a weak (14)N-(13)C dipolar coupling in amyloid-like fibrils of the peptide H(2)N-SNNFGAILSS-COOH, which was uniformly (13)C and (15)N labelled across the four C-terminal amino acids. The (14)N-(13)C interatomic distance between leucine and asparagine side groups is constrained between 2.4 and 3.8 Å, which allows current structural models of the β-spine arrangement within the fibrils to be refined. This procedure could be useful for the general structural analysis of other proteins in condensed phases and environments, such as biological membranes. Copyright © 2011 John Wiley & Sons, Ltd.
Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan
2016-08-22
Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The information that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented recover the convergence properties of optimizers which can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to methods which are currently employed in systems and computational biology.
van Aggelen, Helen; Verstichel, Brecht; Bultinck, Patrick; Van Neck, Dimitri; Ayers, Paul W; Cooper, David L
2011-02-07
Variational second order density matrix theory under "two-positivity" constraints tends to dissociate molecules into unphysical fractionally charged products with too low energies. We aim to construct a qualitatively correct potential energy surface for F(3)(-) by applying subspace energy constraints on mono- and diatomic subspaces of the molecular basis space. Monoatomic subspace constraints do not guarantee correct dissociation: the constraints are thus geometry dependent. Furthermore, the number of subspace constraints needed for correct dissociation does not grow linearly with the number of atoms. The subspace constraints do impose correct chemical properties in the dissociation limit and size-consistency, but the structure of the resulting second order density matrix method does not exactly correspond to a system of noninteracting units.
Free energy from molecular dynamics with multiple constraints
NASA Astrophysics Data System (ADS)
den Otter, W. K.; Briels, W. J.
In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force method (PMCF) for systems in which only the reaction coordinate is constrained was published recently. Here we will consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.
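A minimal sketch of the final integration step, assuming the mean constraint forces have already been computed (and metric-tensor corrected) at a grid of constrained reaction-coordinate values, and following the abstract's convention that the free energy derivative equals the average constraint force:

```python
import numpy as np

def pmf_from_mean_forces(xi, mean_constraint_force):
    """Free energy profile by thermodynamic integration of the mean
    constraint force, F(xi_k) - F(xi_0) = integral of <f_c> d(xi),
    using the trapezoidal rule on the grid of constrained values.

    mean_constraint_force is assumed to already include the
    metric-tensor correction discussed in the abstract.
    """
    xi = np.asarray(xi, dtype=float)
    f = np.asarray(mean_constraint_force, dtype=float)
    increments = 0.5 * (f[1:] + f[:-1]) * np.diff(xi)   # trapezoids
    return np.concatenate([[0.0], np.cumsum(increments)])
```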
NASA Astrophysics Data System (ADS)
Hassan, Said A.; Elzanfaly, Eman S.; Salem, Maissa Y.; El-Zeany, Badr A.
2016-01-01
A novel spectrophotometric method was developed for determination of ternary mixtures without previous separation, showing significant advantages over conventional methods. The new method is based on mean centering of double divisor ratio spectra. The mathematical explanation of the procedure is illustrated. The method was evaluated by determination of a model ternary mixture and by the determination of Amlodipine (AML), Aliskiren (ALI) and Hydrochlorothiazide (HCT) in laboratory prepared mixtures and in a commercial pharmaceutical preparation. For proper presentation of the advantages and applicability of the new method, a comparative study was established between the new mean centering of double divisor ratio spectra (MCDD) method and two similar methods used for analysis of ternary mixtures, namely mean centering (MC) and double divisor of ratio spectra-derivative spectrophotometry (DDRS-DS). The method was also compared with a reported one for analysis of the pharmaceutical preparation. The method was validated according to the ICH guidelines, and accuracy, precision, repeatability and robustness were found to be within the acceptable limits.
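A minimal sketch of the core MCDD transformation, assuming mixture spectra and the standard spectra of the two interfering components on a common wavelength grid (array names are illustrative; calibration against standards of the analyte uses the same transformation):

```python
import numpy as np

def mcdd_signal(spectra, divisor1, divisor2):
    """Mean centering of double divisor ratio spectra (MCDD), sketched
    for one analyte of a ternary mixture: divide each mixture spectrum
    by the sum of the standard spectra of the other two components,
    then mean-center the ratio spectrum to cancel the constant term.

    spectra: (N, L) mixture absorbances; divisor1/divisor2: (L,) standards.
    """
    ratio = spectra / (divisor1 + divisor2)           # double divisor ratio
    return ratio - ratio.mean(axis=1, keepdims=True)  # mean centering
```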
[Design of Dual-Beam Spectrometer in Spectrophotometer for Colorimetry].
Liu, Yi-xuan; Yan, Chang-xiang
2015-07-01
Spectrophotometers for colorimetry are usually composed of two independent and identical spectrometers. In order to reduce the volume of a spectrophotometer for colorimetry, a design method for a double-beam spectrometer is put forward: a traditional spectrometer is modified so that a single new spectrometer can perform the function of two, which is especially suitable for portable instruments. The single slit is replaced by a double slit, so that two beams of spectrum can be detected. The working principle and design requirements of the double-beam spectrometer are described, and a spectrometer for a portable spectrophotometer is designed by this method. A toroidal imaging mirror is used in the Czerny-Turner double-beam spectrometer presented here, which better corrects astigmatism and prevents crosstalk between the two spectral beams. The results demonstrate that the double-beam spectrometer designed by this method meets the design specifications, with a spectral resolution of less than 10 nm, a spectral length of 9.12 mm, a volume of 57 mm x 54 mm x 23 mm, and no overlap of the two beams' spectra on the detector. Compared with a traditional spectrophotometer, the modified instrument uses a single double-beam spectrometer instead of two separate spectrometers, which greatly reduces the volume. The design method is particularly suited to portable spectrophotometers but can also be widely applied to other double-beam instruments, offering a new approach to the design of dual-beam spectrophotometers.
Interferometric Methods of Measuring Refractive Indices and Double-Refraction of Fibres.
ERIC Educational Resources Information Center
Hamza, A. A.; El-Kader, H. I. Abd
1986-01-01
Presents two methods used to measure the refractive indices and double-refraction of fibers. Experiments are described, one involving the use of a Pluta microscope in the double-beam interference technique and the other employing the multiple-beam technique. Immersion liquids that can be used in the experiments are discussed. (TW)
Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.
ERIC Educational Resources Information Center
Rowell, R. Kevin
In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
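A minimal Python sketch of the double cross-validation idea (our own rendering, not the specific procedure the truncated abstract goes on to describe): each half-sample's regression equation is scored on the opposite half, and two similar, high cross-correlations indicate stable results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def double_cross_validation(X, y, seed=0):
    # Split the sample in half, derive a prediction equation on each half,
    # and correlate its predictions with observed scores on the other half.
    idx = np.random.default_rng(seed).permutation(len(y))
    a, b = idx[: len(y) // 2], idx[len(y) // 2 :]
    r_ab = np.corrcoef(LinearRegression().fit(X[a], y[a]).predict(X[b]), y[b])[0, 1]
    r_ba = np.corrcoef(LinearRegression().fit(X[b], y[b]).predict(X[a]), y[a])[0, 1]
    return r_ab, r_ba
```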
Scaling and kinematics optimisation of the scapula and thorax in upper limb musculoskeletal models
Prinold, Joe A.I.; Bull, Anthony M.J.
2014-01-01
Accurate representation of individual scapula kinematics and subject geometries is vital in musculoskeletal models applied to upper limb pathology and performance. In applying individual kinematics to a model's cadaveric geometry, model constraints are commonly prescriptive. These rely on thorax scaling to effectively define the scapula's path but do not consider the area underneath the scapula in scaling, and assume a fixed conoid ligament length. These constraints may not allow continuous solutions or close agreement with directly measured kinematics. A novel method is presented to scale the thorax based on palpated scapula landmarks. The scapula and clavicle kinematics are optimised with the constraint that the scapula medial border does not penetrate the thorax. Conoid ligament length is not used as a constraint. This method is simulated in the UK National Shoulder Model and compared to four other methods, including the standard technique, during three pull-up techniques (n=11). These are high-performance activities covering a large range of motion. Model solutions without substantial jumps in the joint kinematics data were improved from 23% of trials with the standard method, to 100% of trials with the new method. Agreement with measured kinematics was significantly improved (more than 10° closer at p<0.001) when compared to standard methods. The removal of the conoid ligament constraint and the novel thorax scaling correction factor were shown to be key. Separation of the medial border of the scapula from the thorax was large, although this may be physiologically correct due to the high loads and high arm elevation angles. PMID:25011621
Chiew, Mark; Graedel, Nadine N; Miller, Karla L
2018-07-01
Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Modeling delamination growth in composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reedy, E.D. Jr.; Mello, F.J.
1996-12-01
A method for modeling the initiation and growth of discrete delaminations in shell-like composite structures is presented. The laminate is divided into two or more sublaminates, with each sublaminate modeled with four-noded quadrilateral shell elements. A special, eight-noded hex constraint element connects opposing sublaminate shell elements. It supplies the nodal forces and moments needed to make the two opposing shell elements act as a single shell element until a prescribed failure criterion is satisfied. Once the failure criterion is attained, the connection is broken, creating or growing a discrete delamination. This approach has been implemented in a 3D finite element code. This code uses explicit time integration, and can analyze shell-like structures subjected to large deformations and complex contact conditions. The shell elements can use existing composite material models that include in-plane laminate failure modes. This analysis capability was developed to perform crashworthiness studies of composite structures, and is useful whenever there is a need to estimate peak loads, energy absorption, or the final shape of a highly deformed composite structure. This paper describes the eight-noded hex constraint element used to model the initiation and growth of a delamination, and discusses associated implementation issues. Particular attention is focused on the delamination growth criterion, and it is verified that calculated results do not depend on element size. In addition, results for double cantilever beam and end notched flexure specimens are presented and compared to measured data to assess the ability of the present approach to model a growing delamination.
Nakata, Maho; Braams, Bastiaan J; Fujisawa, Katsuki; Fukuda, Mituhiro; Percus, Jerome K; Yamashita, Makoto; Zhao, Zhengji
2008-04-28
The reduced density matrix (RDM) method, a variational calculation based on the second-order reduced density matrix, is applied to the ground state energies and the dipole moments for 57 different states of atoms and molecules, and to the ground state energies and the elements of the 2-RDM for the Hubbard model. We explore the well-known N-representability conditions (P, Q, and G) together with the more recent and much stronger T1 and T2(') conditions. The T2(') condition was recently rederived and implies the T2 condition. Using these N-representability conditions, we can usually calculate correlation energies ranging from 100% to 101% of the reference values, an accuracy similar to that of CCSD(T) and even better for high-spin states or anion systems where CCSD(T) fails. Highly accurate calculations are carried out by handling equality constraints and/or by developing multiple precision arithmetic in the semidefinite programming (SDP) solver. Results show that handling equality constraints correctly improves the accuracy by 0.1 to 0.6 mhartree. Additionally, improvements from replacing the T2 condition with the T2(') condition are typically 0.1-0.5 mhartree. The newly developed multiple precision arithmetic version of the SDP solver calculates extraordinarily accurate energies for the one-dimensional Hubbard model and the Be atom: it gives at least 16 significant digits for energies, where double precision calculations give only two to eight digits. It also provides physically meaningful results for the Hubbard model in the high correlation limit.
Constraints on the symmetry energy from neutron star observations
NASA Astrophysics Data System (ADS)
Newton, W. G.; Gearheart, M.; Wen, De-Hua; Li, Bao-An
2013-03-01
The modeling of many neutron star observables incorporates the microphysics of both the stellar crust and core, which is tied intimately to the properties of the nuclear matter equation of state (EoS). We explore the predictions of such models over the range of experimentally constrained nuclear matter parameters, focusing on the slope of the symmetry energy at nuclear saturation density, L. We use a consistent model of the composition and EoS of neutron star crust and core matter to model the binding energy of pulsar B of the double pulsar system J0737-3039, the frequencies of torsional oscillations of the neutron star crust, and the instability region for r-modes in the neutron star core damped by electron-electron viscosity at the crust-core interface. By confronting these models with observations, we illustrate the potential of astrophysical observables to offer constraints on poorly known nuclear matter parameters complementary to terrestrial experiments, and demonstrate that our models consistently predict L < 70 MeV.
Silver (I) as DNA glue: Ag+-mediated guanine pairing revealed by removing Watson-Crick constraints
Swasey, Steven M.; Leal, Leonardo Espinosa; Lopez-Acevedo, Olga; Pavlovich, James; Gwinn, Elisabeth G.
2015-01-01
Metal ion interactions with DNA have far-reaching implications in biochemistry and DNA nanotechnology. Ag+ is uniquely interesting because it binds exclusively to the bases rather than the backbone of DNA, without the toxicity of Hg2+. In contrast to prior studies of Ag+ incorporation into double-stranded DNA, we remove the constraints of Watson-Crick pairing by focusing on homo-base DNA oligomers of the canonical bases. High resolution electro-spray ionization mass spectrometry reveals an unanticipated Ag+-mediated pairing of guanine homo-base strands, with higher stability than canonical guanine-cytosine pairing. By exploring unrestricted binding geometries, quantum chemical calculations find that Ag+ bridges between non-canonical sites on guanine bases. Circular dichroism spectroscopy shows that the Ag+-mediated structuring of guanine homobase strands persists to at least 90 °C under conditions for which canonical guanine-cytosine duplexes melt below 20 °C. These findings are promising for DNA nanotechnology and metal-ion based biomedical science. PMID:25973536
Swasey, Steven M; Leal, Leonardo Espinosa; Lopez-Acevedo, Olga; Pavlovich, James; Gwinn, Elisabeth G
2015-05-14
Metal ion interactions with DNA have far-reaching implications in biochemistry and DNA nanotechnology. Ag(+) is uniquely interesting because it binds exclusively to the bases rather than the backbone of DNA, without the toxicity of Hg(2+). In contrast to prior studies of Ag(+) incorporation into double-stranded DNA, we remove the constraints of Watson-Crick pairing by focusing on homo-base DNA oligomers of the canonical bases. High resolution electro-spray ionization mass spectrometry reveals an unanticipated Ag(+)-mediated pairing of guanine homo-base strands, with higher stability than canonical guanine-cytosine pairing. By exploring unrestricted binding geometries, quantum chemical calculations find that Ag(+) bridges between non-canonical sites on guanine bases. Circular dichroism spectroscopy shows that the Ag(+)-mediated structuring of guanine homobase strands persists to at least 90 °C under conditions for which canonical guanine-cytosine duplexes melt below 20 °C. These findings are promising for DNA nanotechnology and metal-ion based biomedical science.
Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S
2012-12-01
In this work, the sub-manifold projection based semi-supervised dimensionality reduction (DR) problem of learning from partially constrained data is discussed. Two semi-supervised DR algorithms, termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP), are proposed. MS³MP in the singular case is also discussed, and we present the weighted least squares view of MS³MP. Based on specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and the discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC-guided methods, exploring and selecting the informative constraints is challenging, and random constraint subsets significantly affect the performance of algorithms. This paper also introduces an effective technique to select informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms is evaluated on benchmark problems. Extensive simulations show that our algorithms can deliver promising results over some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.
Lee, Hyun-Soo; Choi, Seung Hong; Park, Sung-Hong
2017-07-01
To develop single and double acquisition methods to compensate for artifacts from eddy currents and transient oscillations in balanced steady-state free precession (bSSFP) with centric phase-encoding (PE) order for magnetization-prepared bSSFP imaging. A single and four different double acquisition methods were developed and evaluated with Bloch equation simulations, phantom/in vivo experiments, and quantitative analyses. For the single acquisition method, multiple PE groups, each of which was composed of N linearly changing PE lines, were ordered in a pseudocentric manner for optimal contrast and minimal signal fluctuations. Double acquisition methods used complex averaging of two images that had opposite artifact patterns from different acquisition orders or from different numbers of dummy scans. Simulation results showed high sensitivity of eddy-current and transient-oscillation artifacts to off-resonance frequency and PE schemes. The artifacts were reduced with the PE-grouping with N values from 3 to 8, similar to or better than the conventional pairing scheme of N = 2. The proposed double acquisition methods removed the remaining artifacts significantly. The proposed methods conserved detailed structures in magnetization transfer imaging well, compared with the conventional methods. The proposed single and double acquisition methods can be useful for artifact-free magnetization-prepared bSSFP imaging with desired contrast and minimized dummy scans. Magn Reson Med 78:254-263, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
A Method for the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.; Whitesides, John L.; Campbell, Richard L.; Mineck, Raymond E.
1996-01-01
A fully automated iterative design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed while maintaining other aerodynamic and geometric constraints. Drag reductions have been realized using the design method over a range of Mach numbers, Reynolds numbers and airfoil thicknesses. The main strengths of the method are its ability to calculate a target N-factor distribution that forces the flow to undergo transition at the desired location; the target-pressure-N-factor relationship used to reduce the N-factors and thereby delay transition; and its ability to design airfoils that meet lift, pitching-moment, thickness and leading-edge-radius constraints while also meeting the natural laminar flow constraint. The method uses several existing CFD codes and can design a new airfoil in only a few days on a Silicon Graphics IRIS workstation.
Max-margin multiattribute learning with low-rank constraint.
Zhang, Qiang; Chen, Lin; Li, Baoxin
2014-07-01
Attribute learning has attracted a great deal of interest in recent years for its ability to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently, without explicitly considering their intrinsic relatedness. In this paper, we propose max-margin multiattribute learning with a low-rank constraint, which learns a set of attributes simultaneously using only relative rankings of the attributes for the data. By learning all the attributes simultaneously through the low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative rankings, the method avoids the restrictive binary labels of attributes that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data, including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, as it strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The assessment of the multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by standard optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
Rossi, P; Oldner, A; Wanecek, M; Leksell, L G; Rudehill, A; Konrad, D; Weitzberg, E
2003-03-01
To compare a molecular double-indicator dilution technique with the gravimetric reference method for measurement of extra-vascular lung water in porcine endotoxin shock. Open comparative experimental study. Animal research laboratory. In fourteen anaesthetised, mechanically ventilated landrace pigs, central and pulmonary haemodynamics as well as pulmonary gas exchange were measured. Extra-vascular lung water was quantitated gravimetrically as well as with a molecular double-indicator dilution technique. Eight of these animals were subjected to endotoxaemia, the rest serving as sham controls. No difference in extra-vascular lung water was observed between the two methods in sham animals. Furthermore, extra-vascular lung water assessed with the molecular double-indicator dilution technique at the initiation of endotoxin infusion did not differ significantly from the corresponding values for sham animals. Endotoxaemia induced a hypodynamic shock with concurrent pulmonary hypertension and a pronounced deterioration in gas exchange. No increase in extra-vascular lung water was detected with the molecular double-indicator dilution technique in response to endotoxin, whereas this parameter was significantly higher when assessed with the gravimetric method. The molecular double-indicator dilution technique showed results similar to those of the gravimetric method for assessment of extra-vascular lung water in non-endotoxaemic conditions. However, during endotoxin-induced lung injury the molecular double-indicator dilution technique failed to detect the significant increase in extra-vascular lung water measured by the gravimetric method. These data suggest that the molecular double-indicator dilution technique may be of limited value during sepsis-induced lung injury.
Computing Determinants by Double-Crossing
ERIC Educational Resources Information Center
Leggett, Deanna; Perry, John; Torrence, Eve
2011-01-01
Dodgson's method of computing determinants is attractive, but fails if an interior entry of an intermediate matrix is zero. This paper reviews Dodgson's method and introduces a generalization, the double-crossing method, that provides a workaround for many interesting cases.
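As a concrete reference for the condensation step that the double-crossing method generalizes, here is a minimal Python sketch of classical Dodgson condensation; the workaround itself is not reproduced, and the function name is our illustration.

```python
import numpy as np

def dodgson_determinant(A):
    # Dodgson condensation: repeatedly replace the matrix by its grid of
    # connected 2x2 minors, dividing by the interior of the matrix from
    # two steps earlier; the final 1x1 entry is det(A).
    curr = np.asarray(A, dtype=float)
    n = curr.shape[0]
    prev = np.ones((n + 1, n + 1))  # all-ones "step -1" matrix makes the first division trivial
    while curr.shape[0] > 1:
        m = curr.shape[0]
        nxt = np.empty((m - 1, m - 1))
        for i in range(m - 1):
            for j in range(m - 1):
                if prev[i + 1, j + 1] == 0:
                    # the failure case the double-crossing method works around
                    raise ZeroDivisionError("zero interior entry in intermediate matrix")
                nxt[i, j] = (curr[i, j] * curr[i + 1, j + 1]
                             - curr[i, j + 1] * curr[i + 1, j]) / prev[i + 1, j + 1]
        prev, curr = curr, nxt
    return curr[0, 0]
```

For example, dodgson_determinant([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) returns -3, dividing by the interior entry 5 at the last step.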
NASA Astrophysics Data System (ADS)
Bekkouche, Toufik; Bouguezel, Saad
2018-03-01
We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.
Constrained maximum likelihood modal parameter identification applied to structural dynamics
NASA Astrophysics Data System (ADS)
El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim
2016-05-01
A new modal parameter estimation method that directly establishes modal models of structural dynamic systems satisfying two physically motivated constraints is presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped, so normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (or, say, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
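For orientation, here is a minimal Python sketch of the classical feasible-direction gradient flow that this line of work builds on; it uses a pseudoinverse as a numerical stopgap where the standard projector becomes singular, and it is not the paper's singularity-free projection.

```python
import numpy as np

def projected_gradient_flow(grad_f, jac_g, x0, dt=1e-3, steps=20000):
    # Euler integration of x' = -P(x) grad f(x), with the classical
    # projector P = I - J^T (J J^T)^+ J onto the tangent space of {g(x)=0}.
    # P is ill-defined when the constraint gradients in J become linearly
    # dependent -- exactly the singularity the paper's new projection avoids.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        J = np.atleast_2d(jac_g(x))
        P = np.eye(x.size) - J.T @ np.linalg.pinv(J @ J.T) @ J
        x = x - dt * P @ grad_f(x)
    return x
```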
A Comparison of Climate Feedback Strength between CO2 Doubling and LGM Experiments
NASA Astrophysics Data System (ADS)
Yoshimori, M.; Yokohata, T.; Abe-Ouchi, A.
2008-12-01
Studies of past climate potentially provide a constraint on the uncertainty of climate sensitivity, but previous studies warn against a simple scaling to the future. The climate sensitivity is determined by various feedback processes and they may vary with climate states and forcings. In this study, we investigate similarities and differences of feedbacks for a CO2 doubling, a last glacial maximum (LGM), and LGM greenhouse gas (GHG) forcing experiments, using an atmospheric general circulation model coupled to a slab ocean model. After computing the radiative forcing, the individual feedback strengths: water vapor, lapse rate, albedo, and cloud feedbacks, are evaluated explicitly. For this particular model, the difference in the climate sensitivity among experiments is attributed to the shortwave cloud feedback in which there is a tendency that it becomes weaker or even negative in the cooling experiments. No significant difference is found in the water vapor feedback between warming and cooling experiments by GHGs despite the nonlinear dependence of the Clausius-Clapeyron relation on temperature. The weaker water vapor feedback in the LGM experiment due to a relatively weaker tropical forcing is compensated by the stronger lapse rate feedback due to a relatively stronger extratropical forcing. A hypothesis is proposed which explains the asymmetric cloud response between warming and cooling experiments associated with a displacement of the region of mixed-phase clouds. The difference in the total feedback strength between experiments is, however, relatively small compared to the current intermodel spread, and does not necessarily preclude the use of LGM climate as a future constraint.
A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles
Crawford, Broderick; Paredes, Fernando; Norero, Enrique
2015-01-01
The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n² columns, n² rows, and n² subgrids, each containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus own. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods. PMID:26078751
Leisure activities following a lower limb amputation.
Couture, Mélanie; Caron, Chantal D; Desrosiers, Johanne
2010-01-01
The aim of this study was to describe leisure activities, leisure satisfaction and constraints on participation in leisure following a unilateral lower limb amputation due to vascular disease. This study used a mixed-method approach where 15 individuals with lower limb amputation completed the individual leisure profile 2-3 months post-discharge from rehabilitation. A subsample (n = 8) also participated in semi-structured interviews analysed using the Miles and Huberman analytic method. Results show that participants were involved in 12 different leisure activities on average. Compared to before the amputation, a decrease in participation was observed in all categories of leisure activity, and especially crafts, nature and outdoor activities, mechanics, sports and physical activities. Nonetheless, overall satisfaction was high. The most important constraints on participation in leisure were lack of accessibility, material considerations, functional abilities, affective constraints and social constraints. A decrease in leisure activity participation and the presence of constraints do not automatically translate into low levels of leisure satisfaction.
A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles.
Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Paredes, Fernando; Norero, Enrique
2015-01-01
The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n² × n² grid, composed of n² columns, n² rows, and n² subgrids, each containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus own. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods.
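For illustration, a minimal Python sketch of the alldifferent bookkeeping that such a hybrid can use as the cost function a tabu search minimizes (a tabu search would then swap non-fixed cells and forbid recently reversed moves); the names and cost definition are ours, not the authors'.

```python
def alldifferent_violations(grid, n):
    # Count repeated values in every row, column, and subgrid: the
    # alldifferent constraints a completed Sudoku must satisfy.
    N = n * n
    units = [[(r, c) for c in range(N)] for r in range(N)]            # rows
    units += [[(r, c) for r in range(N)] for c in range(N)]           # columns
    units += [[(br + i, bc + j) for i in range(n) for j in range(n)]  # subgrids
              for br in range(0, N, n) for bc in range(0, N, n)]
    cost = 0
    for unit in units:
        vals = [grid[r][c] for r, c in unit]
        cost += len(vals) - len(set(vals))  # zero iff the unit is alldifferent
    return cost
```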
Reassessing The Fundamentals New Constraints on the Evolution, Ages and Masses of Neutron Stars
NASA Astrophysics Data System (ADS)
Kızıltan, Bülent
2011-09-01
The ages and masses of neutron stars (NSs) are two fundamental threads that make pulsars accessible to other sub-disciplines of astronomy and physics. A realistic and accurate determination of these two derived parameters plays an important role in understanding advanced stages of stellar evolution and the physics that governs the relevant processes. Here I summarize new constraints on the ages and masses of NSs from an evolutionary perspective. I show that the observed P-Ṗ demographics is more diverse than what is theoretically predicted for the standard evolutionary channel. In particular, standard recycling followed by dipole spin-down fails to reproduce the population of millisecond pulsars with higher magnetic fields (B > 4 × 10⁸ G) at rates deduced from observations. A proper inclusion of constraints arising from binary evolution and mass accretion offers a more realistic insight into the age distribution. By analytically implementing these constraints, I propose a "modified" spin-down age (τ̃) for millisecond pulsars that gives estimates closer to the true age. Finally, I independently analyze the peak, skewness and cutoff values of the underlying mass distribution from a comprehensive list of radio pulsars for which secure mass measurements are available. The inferred mass distribution shows clear peaks at 1.35 M⊙ and 1.50 M⊙ for NSs in double neutron star (DNS) and neutron star-white dwarf (NS-WD) systems respectively. I find a mass cutoff at 2 M⊙ for NSs with WD companions, which establishes a firm lower bound for the maximum mass of NSs.
Non-integer expansion embedding techniques for reversible image watermarking
NASA Astrophysics Data System (ADS)
Xiang, Shijun; Wang, Yi
2015-12-01
This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding; the rounding operation places a constraint on the predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors, embedding data into an audio or image file by expanding only the integer element of a prediction error while keeping its fractional element unchanged. The advantage of the NIPE technique is that it can bring a predictor fully into play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method that estimates a pixel from its four immediate neighbours in a single pass is included in the proposed scheme. The proposed noncausal image predictor provides better performance than Sachnev et al.'s noncausal double-set prediction method (where prediction in two passes introduces distortion, because half of the pixels are predicted from already-watermarked pixels). In comparison with several existing state-of-the-art works, experimental results show that the NIPE technique with the new noncausal prediction strategy reduces the embedding distortion for the same embedding payload.
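A toy Python sketch of the expansion step as we read it (the function names and bit-level layout are our illustration, not necessarily the authors' exact scheme): the integer element of the prediction error is expanded as in classical PE, while the fraction rides along unchanged and is recovered exactly.

```python
import math

def nipe_embed(pixel, pred, bit):
    # Split the (generally non-integer) prediction error into integer and
    # fractional elements; expand only the integer element, keep the fraction.
    e = pixel - pred
    e_int = math.floor(e)
    frac = e - e_int
    return pred + (2 * e_int + bit) + frac

def nipe_extract(marked, pred):
    # Invert the expansion: the parity of the integer element carries the bit.
    e = marked - pred
    e_int = math.floor(e)
    bit = e_int & 1
    return bit, pred + (e_int >> 1) + (e - e_int)  # (bit, recovered pixel)
```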
Constrained spectral clustering under a local proximity structure assumption
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie
2005-01-01
This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.
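One common way to fold pairwise constraints into spectral clustering is to overwrite the affinity matrix before the eigendecomposition, in the spirit of Kamvar et al.; the paper's CSC method differs in detail, so the Python sketch below is illustrative only.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def constrained_affinity(X, must_link, cannot_link, sigma=1.0):
    # Gaussian affinity with constrained entries overwritten: must-link
    # pairs get maximal affinity, cannot-link pairs get none.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2.0 * sigma ** 2))
    for i, j in must_link:
        A[i, j] = A[j, i] = 1.0
    for i, j in cannot_link:
        A[i, j] = A[j, i] = 0.0
    return A

# labels = SpectralClustering(n_clusters=2, affinity='precomputed').fit_predict(A)
```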
N-person differential games. Part 2: The penalty method
NASA Technical Reports Server (NTRS)
Chen, G.; Mills, W. H.; Zheng, Q.; Shaw, W. H.
1983-01-01
The equilibrium strategy for N-person differential games can be found by studying a min-max problem subject to differential system constraints. The differential constraints are penalized, and finite elements are used to compute numerical solutions. A convergence proof and error estimates are given. Numerical results are also included and compared with those obtained by the dual method.
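To show the penalty idea in its simplest form, here is a hedged Python sketch of a quadratic-penalty homotopy for an equality-constrained minimization; the paper penalizes differential-system constraints inside a min-max problem and discretizes with finite elements, which this toy omits.

```python
import numpy as np
from scipy.optimize import minimize

def penalty_solve(f, g, x0, mus=(1.0, 10.0, 100.0, 1000.0)):
    # Minimize f(x) + mu * ||g(x)||^2 while driving the penalty weight mu
    # upward, warm-starting each solve from the previous one; as mu grows,
    # the minimizer is pushed toward the constraint set {g(x) = 0}.
    x = np.asarray(x0, dtype=float)
    for mu in mus:
        x = minimize(lambda z, mu=mu: f(z) + mu * np.sum(np.asarray(g(z)) ** 2), x).x
    return x
```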
Optimum structural design with plate bending elements - A survey
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Prasad, B.
1981-01-01
A survey is presented of recently published papers in the field of optimum structural design of plates, largely with respect to the minimum-weight design of plates subject to such constraints as fundamental frequency maximization. It is shown that, due to the availability of powerful computers, the trend in optimum plate design is away from methods tailored to specific geometry and loads and toward methods that can be easily programmed for any kind of plate, such as finite element methods. A corresponding shift is seen in optimization from variational techniques to numerical optimization algorithms. Among the topics covered are fully stressed design and optimality criteria, mathematical programming, smooth and ribbed designs, design against plastic collapse, buckling constraints, and vibration constraints.
Flip-avoiding interpolating surface registration for skull reconstruction.
Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye
2018-03-30
Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even on non-defective parts and a high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface regarding corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers, such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate splines for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implications of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
Real-time inextensible surgical thread simulation.
Xu, Lang; Liu, Qian
2018-03-27
This paper discusses a real-time simulation method for inextensible surgical thread based on the Cosserat rod theory and position-based dynamics (PBD). The method achieves stable twining and knotting of surgical thread while modelling inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of the thread, and the model is solved with PBD to achieve a real-time, extremely stable simulation. Owing to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraints based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response permit a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. In comparisons with existing methods, the surgical thread maintains constant length under large deformation after applying the direct distance constraint in our method. The twining and knotting of multiple threads correspond to stable solutions of the contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
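For context, a minimal Python sketch of the standard iterative PBD distance projection that the paper's direct tridiagonal solve replaces; the interface (particle positions p, rest lengths, inverse masses w) is our assumption.

```python
import numpy as np

def project_distance(p, rest, w, iters=20):
    # Gauss-Seidel PBD projection of the inter-particle distance constraints
    # along a thread. Many iterations are needed for near-inextensibility,
    # which is why a direct (Thomas-algorithm) solve of this chain of
    # constraints enforces constant length far more stiffly per pass.
    for _ in range(iters):
        for i in range(len(p) - 1):
            d = p[i + 1] - p[i]
            L = np.linalg.norm(d)
            if L < 1e-12:
                continue
            corr = (L - rest[i]) / (L * (w[i] + w[i + 1])) * d
            p[i] += w[i] * corr
            p[i + 1] -= w[i + 1] * corr
    return p
```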
TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, H; Gordon, J; Chetty, I
2014-06-15
Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: tumor doubling time Td, half-life of dying cells Tr, and cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, the Chvetsov model (C-model) and the Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.
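As a toy illustration of how the three parameters interact, here is a simple per-fraction bookkeeping in Python; this is our own discrete sketch of a generic birth-and-death regression model, not the paper's analytically solved Z-model.

```python
import numpy as np

def tumor_volume(n_fx, dt, Td, Tr, SFD, v0=1.0):
    # Toy per-fraction bookkeeping with the three Z-model parameters:
    # viable cells grow with doubling time Td, each dose fraction kills a
    # fraction (1 - SFD) of them, and killed cells clear with half-life Tr.
    viable, dying, trace = v0, 0.0, []
    for _ in range(n_fx):
        viable *= 2.0 ** (dt / Td)       # regrowth between fractions
        dying *= 2.0 ** (-dt / Tr)       # clearance of dying cells
        killed = (1.0 - SFD) * viable
        viable -= killed
        dying += killed
        trace.append(viable + dying)     # what imaging would observe
    return np.array(trace)
```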
A continuum of periodic solutions to the planar four-body problem with two pairs of equal masses
NASA Astrophysics Data System (ADS)
Ouyang, Tiancheng; Xie, Zhifu
2018-04-01
In this paper, we apply the variational method with Structural Prescribed Boundary Conditions (SPBC) to prove the existence of periodic and quasi-periodic solutions for the planar four-body problem with two pairs of equal masses m₁ = m₃ and m₂ = m₄. A path q(t) on [0, T] satisfies the SPBC if the boundaries q(0) ∈ A and q(T) ∈ B, where A and B are two structural configuration spaces in (ℝ²)⁴ that depend on a rotation angle θ ∈ (0, 2π) and the mass ratio μ = m₂/m₁ ∈ ℝ⁺. We show that there is a region Ω ⊆ (0, 2π) × ℝ⁺ such that there exists at least one local minimizer of the Lagrangian action functional on the path space satisfying the SPBC {q(t) ∈ H¹([0, T], (ℝ²)⁴) | q(0) ∈ A, q(T) ∈ B} for any (θ, μ) ∈ Ω. The corresponding minimizing path can be extended to a non-homographic periodic solution if θ is commensurable with π, or a quasi-periodic solution if θ is not commensurable with π. In the variational method with the SPBC, we impose constraints only on the boundary and do not impose any symmetry constraint on the solutions. Instead, we prove that our solutions, extended from the initial minimizing paths, possess certain symmetries. The periodic solutions can be further classified as simple choreographic solutions, double choreographic solutions and non-choreographic solutions. Among the many stable simple choreographic orbits, the most extraordinary one is the stable star pentagon choreographic solution at (θ, μ) = (4π/5, 1). Remarkably, the unequal-mass variants of the stable star pentagon are just as stable as the equal-mass choreographies.
Deformation effect simulation and optimization for double front axle steering mechanism
NASA Astrophysics Data System (ADS)
Wu, Jungang; Zhang, Siqin; Yang, Qinglong
2013-03-01
This paper investigates the tire wear problem of heavy vehicles with a double front axle steering mechanism from the perspective of the mechanism's flexibility, and proposes a structural optimization method that combines traditional static structural theory with a dynamic structural approach, the Equivalent Static Load (ESL) method, to optimize key parts. Good agreement between simulation and test results shows that this method has high practical and reference value for addressing tire wear in double front axle steering mechanism design.
Rocha, Sérgio; Silva, Evelyn; Foerster, Águida; Wiesiolek, Carine; Chagas, Anna Paula; Machado, Giselle; Baltar, Adriana; Monte-Silva, Katia
2016-01-01
This pilot double-blind, sham-controlled randomized trial aimed to determine whether the addition of anodal tDCS over the affected hemisphere, or cathodal tDCS over the unaffected hemisphere, to modified constraint-induced movement therapy (mCIMT) would be superior to constraint therapy alone in improving upper limb function in chronic stroke patients. Twenty-one patients with chronic stroke were randomly assigned to receive 12 sessions of either (i) anodal, (ii) cathodal or (iii) sham tDCS combined with mCIMT. The Fugl-Meyer assessment (FMA), motor activity log scale (MAL), and handgrip strength were analyzed before, immediately after, and 1 month (follow-up) after the treatment. The minimal clinically important difference (mCID) was defined as an increase of ≥5.25 in the upper limb FMA. An increase in the FMA scores between baseline and both post-intervention and follow-up was observed for the active tDCS groups, whereas no difference was observed in the sham group. At post-intervention and follow-up, compared with the sham group, only the anodal tDCS group achieved an improvement in FMA scores. ANOVA showed that all groups demonstrated similar improvement over time in MAL and handgrip strength. In the active tDCS groups, 7/7 (anodal tDCS) and 5/7 (cathodal tDCS) patients experienced the mCID, against 3/7 in the sham group. The results support the merit of associating mCIMT with brain stimulation to augment clinical gains in rehabilitation after stroke. However, anodal tDCS seems to have a greater impact than cathodal tDCS in increasing the effects of mCIMT on motor function in chronic stroke patients. The association of mCIMT with brain stimulation improves clinical gains in rehabilitation after stroke. The improvement in motor recovery (assessed by the Fugl-Meyer scale) was only observed after anodal tDCS. Modulation of the damaged hemisphere produced greater improvements than modulation of the unaffected hemisphere.
Fuel Optimal, Finite Thrust Guidance Methods to Circumnavigate with Lighting Constraints
NASA Astrophysics Data System (ADS)
Prince, E. R.; Carr, R. W.; Cobb, R. G.
This paper details improvements made to the authors' most recent work on finding fuel-optimal, finite-thrust guidance to inject an inspector satellite into a prescribed natural motion circumnavigation (NMC) orbit about a resident space object (RSO) in geosynchronous orbit (GEO). Better initial-guess methodologies are developed for the low-fidelity-model nonlinear programming problem (NLP) solver, including Clohessy-Wiltshire (CW) targeting, a modified particle swarm optimization (PSO), and MATLAB's genetic algorithm (GA). These solutions may then be fed as initial guesses into a different NLP solver, IPOPT. Celestial lighting constraints are taken into account in addition to the sunlight constraint, ensuring that the resulting NMC also adheres to Moon and Earth lighting constraints. The guidance is initially calculated for a fixed final time, and solutions are then also calculated for fixed final times before and after the original one, allowing mission planners to choose the lowest-cost solution in the resulting range that satisfies all constraints. The developed algorithms provide computationally fast and highly reliable methods for determining fuel-optimal guidance for NMC injections while adhering to multiple lighting constraints.
Sub-Chandrasekhar-mass White Dwarf Detonations Revisited
NASA Astrophysics Data System (ADS)
Shen, Ken J.; Kasen, Daniel; Miles, Broxton J.; Townsley, Dean M.
2018-02-01
The detonation of a sub-Chandrasekhar-mass white dwarf (WD) has emerged as one of the most promising Type Ia supernova (SN Ia) progenitor scenarios. Recent studies have suggested that the rapid transfer of a very small amount of helium from one WD to another is sufficient to ignite a helium shell detonation that subsequently triggers a carbon core detonation, yielding a “dynamically driven double-degenerate double-detonation” SN Ia. Because the helium shell that surrounds the core explosion is so minimal, this scenario approaches the limiting case of a bare C/O WD detonation. Motivated by discrepancies in previous literature and by a recent need for detailed nucleosynthetic data, we revisit simulations of naked C/O WD detonations in this paper. We disagree to some extent with the nucleosynthetic results of previous work on sub-Chandrasekhar-mass bare C/O WD detonations; for example, we find that a median-brightness SN Ia is produced by the detonation of a 1.0 M⊙ WD instead of a more massive and rarer 1.1 M⊙ WD. The neutron-rich nucleosynthesis in our simulations agrees broadly with some observational constraints, although tensions remain with others. There are also discrepancies related to the velocities of the outer ejecta and light curve shapes, but overall our synthetic light curves and spectra are roughly consistent with observations. We are hopeful that future multidimensional simulations will resolve these issues and further bolster the dynamically driven double-degenerate double-detonation scenario’s potential to explain most SNe Ia.
Dobrev, I.; Furlong, C.; Cheng, J. T.; Rosowski, J. J.
2014-01-01
In this paper, we propose a multi-pulsed double exposure (MPDE) acquisition method to quantify, over the full field of view, the transient (i.e., >10 kHz) acoustically induced nanometer-scale displacements of the human tympanic membrane (TM, or eardrum). The method takes advantage of the geometrical linearity and repeatability of the TM displacements to enable high-speed measurements with a conventional camera (i.e., <20 fps). The MPDE is implemented on a previously developed digital holographic system (DHS) to enhance its measurement capabilities at minimum cost, while avoiding the constraints imposed by the spatial resolutions and dimensions of high-speed (i.e., >50 kfps) cameras. To our knowledge, no existing system provides such capabilities for the study of the human TM. The combination of high temporal (i.e., >50 kHz) and spatial (i.e., >500k data points) resolutions enables measurements of the temporal and frequency response of all points across the surface of the TM simultaneously. The repeatability and accuracy of the MPDE method are verified against a Laser Doppler Vibrometer (LDV) on both artificial membranes and ex-vivo human TMs that are acoustically excited with a sharp (i.e., <100 μs duration) click. The measuring capabilities of the DHS, enhanced by the MPDE acquisition method, allow for quantification of spatially dependent motion parameters of the TM, such as modal frequencies and time constants, as well as inference of local material properties. PMID:25780271
Individuality and universality in the growth-division laws of single E. coli cells
NASA Astrophysics Data System (ADS)
Kennard, Andrew S.; Osella, Matteo; Javer, Avelino; Grilli, Jacopo; Nghe, Philippe; Tans, Sander J.; Cicuta, Pietro; Cosentino Lagomarsino, Marco
2016-01-01
The mean size of exponentially dividing Escherichia coli cells in different nutrient conditions is known to depend on the mean growth rate only. However, the joint fluctuations relating cell size, doubling time, and individual growth rate are only starting to be characterized. Recent studies in bacteria reported a universal trend where the spread in both size and doubling times is a linear function of the population means of these variables. Here we combine experiments and theory and use scaling concepts to elucidate the constraints posed by the second observation on the division control mechanism and on the joint fluctuations of sizes and doubling times. We found that scaling relations based on the means collapse both size and doubling-time distributions across different conditions and explain how the shape of their joint fluctuations deviates from the means. Our data on these joint fluctuations highlight the importance of cell individuality: Single cells do not follow the dependence observed for the means between size and either growth rate or inverse doubling time. Our calculations show that these results emerge from a broad class of division control mechanisms requiring a certain scaling form of the "division hazard rate function," which defines the probability rate of dividing as a function of measurable parameters. This "model free" approach gives a rationale for the universal body-size distributions observed in microbial ecosystems across many microbial species, presumably dividing with multiple mechanisms. Additionally, our experiments show a crossover between fast and slow growth in the relation between individual-cell growth rate and division time, which can be understood in terms of different regimes of genome replication control.
Modeling DNA bubble formation at the atomic scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beleva, V; Rasmussen, K. O.; Garcia, A. E.
We describe the fluctuations of double stranded DNA molecules using a minimalist Go model over a wide range of temperatures. Minimalist models allow us to describe, at the atomic level, the opening and formation of bubbles in DNA double helices. This model includes all the geometrical constraints on helix melting imposed by the 3D structure of the molecule. The DNA forms melted bubbles within double helices; these bubbles form and break as a function of time. The equilibrium average number of broken base pairs shows a sharp change as a function of T. We observe a temperature profile of sequence-dependent bubble formation similar to those measured by Zeng et al. Long nucleic acid molecules melt partially through the formation of bubbles. It is known that CG-rich sequences melt at higher temperatures than AT-rich sequences. The melting temperature, however, is not solely determined by the CG content, but by the sequence through base stacking and solvent interactions. Recently, models that incorporate the sequence and nonlinear dynamics of DNA double strands have shown that DNA exhibits very rich dynamics. Recent extensions of the Bishop-Peyrard model show that fluctuations in the DNA structure lead to opening in localized regions, and that these regions of the DNA are associated with transcription initiation sites. 1D and 2D models of DNA may contain enough information about stacking and base pairing interactions, but lack the coupling between twisting, bending and base pair opening imposed by the double helical structure of DNA that all-atom models easily describe. However, the complexity of the energy function used in all-atom simulations (including solvent, ions, etc.) does not allow for the description of DNA folding/unfolding events that occur on the microsecond time scale.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although the surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent this ill-posedness is to add regularization constraints, in terms of smoothing and/or damping, so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian and an integral term that can be evaluated using recent developments in TMVN probability calculations. The posterior mean and covariance can also be efficiently derived. I show that the maximum a posteriori (MAP) estimate can be obtained using a non-negative least-squares algorithm for the single-truncated case, or the bounded-variable least-squares algorithm for the double-truncated case, and that the case of independent uniform priors can be approximated using TMVN. The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach based on MCMC sampling. First, the required computing power is largely reduced. Second, unlike the Bayesian MCMC-based approach, marginal pdfs, means, variances and covariances are obtained independently of each other. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
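A minimal Python sketch of the MAP step, assuming a design matrix G and data vector d are given; scipy's nnls and lsq_linear are standard implementations of the two least-squares algorithms named above, and data weighting or the Gaussian prior term can be folded in by stacking extra rows onto G and d.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

def map_slip(G, d, upper=None):
    # MAP of a positivity-truncated Gaussian posterior: a least-squares fit
    # of G m = d subject to m >= 0 (single truncation) or 0 <= m <= upper
    # (double truncation, e.g. slip bounded by the plate convergence rate).
    if upper is None:
        m, _ = nnls(G, d)
    else:
        m = lsq_linear(G, d, bounds=(0.0, upper)).x
    return m
```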
Computing group cardinality constraint solutions for logistic regression problems.
Zhang, Yong; Kwon, Dongjin; Pohl, Kilian M
2017-01-01
We derive an algorithm to directly solve logistic regression under a group cardinality (sparsity) constraint and use it to classify intra-subject MRI sequences (e.g. cine MRIs) of healthy versus diseased subjects. Group cardinality constraint models are often applied to medical images in order to avoid overfitting of the classifier to the training data. Solutions within these models are generally determined by relaxing the cardinality constraint to a weighted feature selection scheme. However, these solutions relate to the original sparse problem only under specific assumptions, which generally do not hold for medical image applications. In addition, inferring clinical meaning from features weighted by a classifier is an ongoing topic of discussion. To avoid weighting features, we propose to directly solve the group cardinality constrained logistic regression problem by generalizing the Penalty Decomposition method. To do so, we assume that an intra-subject series of images represents repeated samples of the same disease patterns. We model this assumption by combining the series of measurements created by a feature across time into a single group. Our algorithm then derives a solution within that model by decoupling the minimization of the logistic regression function from the enforcement of the group sparsity constraint. The minimum of the smooth and convex logistic regression problem is determined via gradient descent, while we derive a closed-form solution for finding a sparse approximation of that minimum. We apply our method to cine MRI of 38 healthy controls and 44 adult patients who received reconstructive surgery for Tetralogy of Fallot (TOF) during infancy. Our method correctly identifies regions impacted by TOF and generally obtains statistically significantly higher classification accuracy than alternative solutions to this model, i.e., ones relaxing the group cardinality constraint. Copyright © 2016 Elsevier B.V. All rights reserved.
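The closed-form sparse approximation step mentioned above can be illustrated with a minimal sketch (our gloss, not the authors' code): given a weight vector partitioned into feature groups, keep the k groups of largest Euclidean norm and zero the rest, which is the projection a group cardinality constraint induces.

```python
# Hedged sketch of a group-cardinality projection; the group layout and k
# are illustrative, and the Penalty Decomposition loop around it is omitted.
import numpy as np

def project_group_cardinality(w, groups, k):
    """Zero all but the k groups of w with the largest l2 norm."""
    norms = np.array([np.linalg.norm(w[g]) for g in groups])
    keep = np.argsort(norms)[-k:]                 # indices of surviving groups
    w_proj = np.zeros_like(w)
    for i in keep:
        w_proj[groups[i]] = w[groups[i]]
    return w_proj

groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
w = np.array([0.1, 0.2, 0.1, 2.0, 1.5, 0.3, 0.0, 0.05, 0.0])
print(project_group_cardinality(w, groups, k=1))  # only the middle group survives
```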
Daily, William D.; Laine, Daren L.; Laine, Edwin F.
2001-01-01
Methods are provided for detecting and locating leaks in liners used as barriers in the construction of landfills, surface impoundments, water reservoirs, tanks, and the like. Electrodes are placed in the ground around the periphery of the facility, in the leak detection zone located between two liners if present, and/or within the containment facility. Electrical resistivity data is collected using these electrodes. This data is used to map the electrical resistivity distribution beneath the containment liner or between two liners in a double-lined facility. In an alternative embodiment, an electrode placed within the lined facility is driven to an electrical potential with respect to another electrode placed at a distance from the lined facility (mise-a-la-masse). Voltage differences are then measured between various combinations of additional electrodes placed in the soil on the periphery of the facility, the leak detection zone, or within the facility. A leak of liquid through the liner material will result in an electrical potential distribution that can be measured at the electrodes. The leak position is located by determining the coordinates of an electrical current source pole that best fits the measured potentials with the constraints of the known or assumed resistivity distribution.
Daily, William D.; Laine, Daren L.; Laine, Edwin F.
1997-01-01
Methods are provided for detecting and locating leaks in liners used as barriers in the construction of landfills, surface impoundments, water reservoirs, tanks, and the like. Electrodes are placed in the ground around the periphery of the facility, in the leak detection zone located between two liners if present, and/or within the containment facility. Electrical resistivity data is collected using these electrodes. This data is used to map the electrical resistivity distribution beneath the containment liner or between two liners in a double-lined facility. In an alternative embodiment, an electrode placed within the lined facility is driven to an electrical potential with respect to another electrode placed at a distance from the lined facility (mise-a-la-masse). Voltage differences are then measured between various combinations of additional electrodes placed in the soil on the periphery of the facility, the leak detection zone, or within the facility. A leak of liquid through the liner material will result in an electrical potential distribution that can be measured at the electrodes. The leak position is located by determining the coordinates of an electrical current source pole that best fits the measured potentials with the constraints of the known or assumed resistivity distribution.
Daily, W.D.; Laine, D.L.; Laine, E.F.
1997-08-26
Methods are provided for detecting and locating leaks in liners used as barriers in the construction of landfills, surface impoundments, water reservoirs, tanks, and the like. Electrodes are placed in the ground around the periphery of the facility, in the leak detection zone located between two liners if present, and/or within the containment facility. Electrical resistivity data is collected using these electrodes. This data is used to map the electrical resistivity distribution beneath the containment liner or between two liners in a double-lined facility. In an alternative embodiment, an electrode placed within the lined facility is driven to an electrical potential with respect to another electrode placed at a distance from the lined facility (mise-a-la-masse). Voltage differences are then measured between various combinations of additional electrodes placed in the soil on the periphery of the facility, the leak detection zone, or within the facility. A leak of liquid through the liner material will result in an electrical potential distribution that can be measured at the electrodes. The leak position is located by determining the coordinates of an electrical current source pole that best fits the measured potentials with the constraints of the known or assumed resistivity distribution.
NASA Technical Reports Server (NTRS)
Williams, Robert L., III
1992-01-01
This paper presents three methods to solve the inverse position kinematics problem of the double universal joint attached to a manipulator: (1) an analytical solution for two specific cases; (2) an approximate closed-form solution based on ignoring the wrist offset; and (3) an iterative method which repeats closed-form position and orientation calculations until the solution is achieved. Several manipulators are used to demonstrate the solution methods: Cartesian, cylindrical, spherical, and an anthropomorphic articulated arm, based on the Flight Telerobotic Servicer (FTS) arm. A singularity analysis is presented for the double universal joint wrist attached to the above manipulator arms. While the double universal joint wrist standing alone is singularity-free in orientation, the singularity analysis indicates the presence of coupled position/orientation singularities of the spherical and articulated manipulators with the wrist. The Cartesian and cylindrical manipulators with the double universal joint wrist were found to be singularity-free. The methods of this paper can be implemented in a real-time controller for manipulators with the double universal joint wrist. Such mechanically dextrous systems could be used in telerobotic and industrial applications, but further work is required to avoid the singularities.
Li, Weidong; Gao, Yanfei; Bei, Hongbin
2016-10-10
As a commonly used method to enhance the ductility of bulk metallic glasses (BMGs), the introduction of geometric constraints blocks and confines the propagation of shear bands, reduces the degree of plastic strain on each shear band so that catastrophic failure is prevented or delayed, and promotes the formation of multiple shear bands. The clustering of multiple shear bands near notches is often interpreted as the reason for improved ductility. Experimental work on shear band arrangements in notched metallic glasses has been carried out extensively, but a systematic theoretical study is lacking. Using instability theory that predicts the onset of strain localization and free-volume-based finite element simulations that predict the evolution of shear bands, this work reveals various categories of shear band arrangements in double edge notched BMGs with respect to the mode mixity of the applied stress fields. A mechanistic explanation is thus provided for a number of related experiments and especially for the correlation between various types of shear bands and the stress state.
NASA Astrophysics Data System (ADS)
Wang, Zhi-peng; Zhang, Shuai; Liu, Hong-zhao; Qin, Yi
2014-12-01
Based on a phase retrieval algorithm and the QR code, a new optical encryption technology that only needs to record one intensity distribution is proposed. In this encryption process, a QR code is first generated from the information to be encrypted; the generated QR code is then placed in the input plane of a 4-f system and undergoes double random phase encryption. Because only one intensity distribution in the output plane is recorded as the ciphertext, the encryption process is greatly simplified. In the decryption process, the corresponding QR code is retrieved using a phase retrieval algorithm. A priori information about the QR code is used as a support constraint in the input plane, which helps solve the stagnation problem. The original information can be recovered without distortion by scanning the QR code. The encryption process can be implemented either optically or digitally, and the decryption process uses a digital method. In addition, the security of the proposed optical encryption technology is analyzed. Theoretical analysis and computer simulations show that this optical encryption system is invulnerable to various attacks and suitable for harsh transmission conditions.
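A minimal numerical sketch of the 4-f double random phase encoding step described above, with a random binary array standing in for the QR code; only the output intensity is retained, mirroring the single recorded distribution, and the phase-retrieval decryption loop is omitted.

```python
# Hedged sketch of double random phase encoding in a 4-f geometry; the
# "QR code" here is just a random binary array, not a real symbol.
import numpy as np

rng = np.random.default_rng(1)
qr = rng.integers(0, 2, size=(64, 64)).astype(float)       # stand-in QR code

phase_in = np.exp(2j * np.pi * rng.random(qr.shape))       # RPM, input plane
phase_fourier = np.exp(2j * np.pi * rng.random(qr.shape))  # RPM, Fourier plane

field = np.fft.ifft2(np.fft.fft2(qr * phase_in) * phase_fourier)
ciphertext = np.abs(field) ** 2                            # recorded intensity only
```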
NASA Astrophysics Data System (ADS)
Schunck, N.; Dobaczewski, J.; McDonnell, J.; Satuła, W.; Sheikh, J. A.; Staszczak, A.; Stoitsov, M.; Toivanen, P.
2012-01-01
We describe the new version (v2.49t) of the code HFODD which solves the nuclear Skyrme-Hartree-Fock (HF) or Skyrme-Hartree-Fock-Bogolyubov (HFB) problem by using the Cartesian deformed harmonic-oscillator basis. In the new version, we have implemented the following physics features: (i) the isospin mixing and projection, (ii) the finite-temperature formalism for the HFB and HF + BCS methods, (iii) the Lipkin translational energy correction method, (iv) the calculation of the shell correction. A number of specific numerical methods have also been implemented in order to deal with large-scale multi-constraint calculations and hardware limitations: (i) the two-basis method for the HFB method, (ii) the Augmented Lagrangian Method (ALM) for multi-constraint calculations, (iii) the linear constraint method based on the approximation of the RPA matrix for multi-constraint calculations, (iv) an interface with the axial and parity-conserving Skyrme-HFB code HFBTHO, (v) the mixing of the HF or HFB matrix elements instead of the HF fields. Special care has been paid to using the code on massively parallel leadership-class computers. For this purpose, the following features are now available with this version: (i) the Message Passing Interface (MPI) framework, (ii) scalable input data routines, (iii) multi-threading via OpenMP pragmas, (iv) parallel diagonalization of the HFB matrix in the simplex-breaking case using the ScaLAPACK library. Finally, several minor errors of the previously published version were corrected. New version program summary: Program title: HFODD (v2.49t) Catalogue identifier: ADFL_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADFL_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence v3 No. of lines in distributed program, including test data, etc.: 190 614 No. of bytes in distributed program, including test data, etc.: 985 898 Distribution format: tar.gz Programming language: FORTRAN-90 Computer: Intel Pentium-III, Intel Xeon, AMD-Athlon, AMD-Opteron, Cray XT4, Cray XT5 Operating system: UNIX, LINUX, Windows XP Has the code been vectorized or parallelized?: Yes, parallelized using MPI RAM: 10 Mwords Word size: The code is written in single precision for use on a 64-bit processor. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Classification: 17.22 Catalogue identifier of previous version: ADFL_v2_2 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2361 External routines: The user must have access to the NAGLIB subroutine f02axe, or the LAPACK subroutines zhpev, zhpevx, zheevr, or zheevd, which diagonalize complex hermitian matrices, the LAPACK subroutines dgetri and dgetrf which invert arbitrary real matrices, the LAPACK subroutines dsyevd, dsytrf and dsytri which compute eigenvalues and eigenfunctions of real symmetric matrices, the LINPACK subroutines zgedi and zgeco, which invert arbitrary complex matrices and calculate determinants, the BLAS routines dcopy, dscal, dgemm and dgemv for double-precision linear algebra and zcopy, zdscal, zgemm and zgemv for complex linear algebra, or provide another set of subroutines that can perform such tasks. The BLAS and LAPACK subroutines can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/.
Does the new version supersede the previous version?: Yes Nature of problem: The nuclear mean field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree-Fock equations, even for heavy nuclei, and for various nucleonic (n-particle-n-hole) configurations, deformations, excitation energies, or angular momenta. Similarly, the Local Density Approximation in the particle-particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree-Fock-Bogolyubov method. Solution method: The program uses the Cartesian harmonic oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and a zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean-field Hamiltonians or Routhians which depend non-linearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in: [J. Dobaczewski, J. Dudek, Comput. Phys. Commun. 102 (1997) 166]. Reasons for new version: Version 2.49s of HFODD provides a number of new options such as the isospin mixing and projection of the Skyrme functional, the finite-temperature HF and HFB formalism and optimized methods to perform multi-constrained calculations. It is also the first version of HFODD to contain threading and parallel capabilities. Summary of revisions: Isospin mixing and projection of the HF states has been implemented. The finite-temperature formalism for the HFB equations has been implemented. The Lipkin translational energy correction method has been implemented. Calculation of the shell correction has been implemented. The two-basis method for the solution of the HFB equations has been implemented. The Augmented Lagrangian Method (ALM) for calculations with multiple constraints has been implemented. The linear constraint method based on the cranking approximation of the RPA matrix has been implemented. An interface between HFODD and the axially-symmetric and parity-conserving code HFBTHO has been implemented. The mixing of the matrix elements of the HF or HFB matrix has been implemented. A parallel interface using the MPI library has been implemented. A scalable model for reading input data has been implemented. OpenMP pragmas have been implemented in three subroutines. The diagonalization of the HFB matrix in the simplex-breaking case has been parallelized using the ScaLAPACK library. Several minor errors of the previously published version were corrected. Running time: In serial mode, running 6 HFB iterations for 152Dy for conserved parity and signature symmetries in a full spherical basis of N=14 shells takes approximately 8 min on an AMD Opteron processor at 2.6 GHz, assuming standard BLAS and LAPACK libraries. As a rule of thumb, the runtime of HFB calculations with conserved parity and signature symmetries roughly increases as N, where N is the number of full HO shells.
Using custom-built optimized BLAS and LAPACK libraries (such as the ATLAS implementation) can bring down the execution time by 60%. Using the threaded version of the code with 12 threads and threaded BLAS libraries can bring an additional factor-2 speed-up, so that the same 6 HFB iterations take on the order of 2 min 30 s.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubina, Sean Hyun, E-mail: sdubin2@uic.edu; Wedgewood, Lewis Edward, E-mail: wedge@uic.edu
2016-07-15
Ferrofluids are often favored for their ability to be remotely positioned via external magnetic fields. The behavior of particles in ferromagnetic clusters under uniformly applied magnetic fields has been computationally simulated using the Brownian dynamics, Stokesian dynamics, and Monte Carlo methods. However, few methods have been established that effectively handle the basic principles of magnetic materials, namely, Maxwell’s equations. An iterative constraint method was developed to satisfy Maxwell’s equations when a uniform magnetic field is imposed on ferrofluids in a heterogeneous Brownian dynamics simulation that examines the impact of ferromagnetic clusters in a mesoscale particle collection. This was accomplished by allowing a particulate system in a simple shear flow to advance by a time step under a uniformly applied magnetic field, then adjusting the ferroparticles via an iterative constraint method applied over sub-volume length scales until Maxwell’s equations were satisfied. The resultant ferrofluid model with constraints demonstrates that the magnetoviscosity contribution is not as substantial when compared to homogeneous simulations that assume the material’s magnetism is a direct response to the external magnetic field. This was detected across varying intensities of particle-particle interaction, Brownian motion, and shear flow. Ferroparticle aggregation was still extensively present but less so than typically observed.
Kobayashi, Amane; Sekiguchi, Yuki; Takayama, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi
2014-11-17
Coherent X-ray diffraction imaging (CXDI) is a lensless imaging technique suitable for visualizing the structures of non-crystalline particles with micrometer to sub-micrometer dimensions in materials science and biology. One of the difficulties inherent to CXDI structural analyses is the reconstruction of electron density maps of specimen particles from diffraction patterns, because saturated detector pixels and a beam stopper result in missing data in the small-angle regions. To overcome this difficulty, the dark-field phase-retrieval (DFPR) method has been proposed. The DFPR method reconstructs electron density maps from diffraction data that are modified by multiplying Gaussian masks with an observed diffraction pattern in the high-angle regions. In this paper, we incorporated the Friedel centrosymmetry of diffraction patterns into the DFPR method to provide a constraint for the phase-retrieval calculation. A set of model simulations demonstrated that this constraint dramatically improved the probability of reconstructing correct electron density maps from diffraction patterns missing data in the small-angle region. In addition, the DFPR method with the constraint was applied successfully to experimentally obtained diffraction patterns with significant quantities of missing data. We also discuss the method's limitations with respect to the level of Poisson noise in X-ray detection.
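The Friedel constraint can be made concrete with a small sketch (ours, under the assumption of a pattern centred on the middle pixel of an odd-sized array): the intensity is replaced by the average of itself and its point reflection, and missing small-angle pixels could likewise be filled from their recorded Friedel mates.

```python
# Hedged sketch of Friedel symmetrization, I(q) -> (I(q) + I(-q)) / 2, for a
# diffraction pattern whose zero-frequency origin sits at the centre pixel.
import numpy as np

def friedel_symmetrize(intensity):
    """Average a centred diffraction pattern with its point reflection."""
    return 0.5 * (intensity + intensity[::-1, ::-1])

I = np.random.default_rng(2).random((65, 65))
I_sym = friedel_symmetrize(I)
assert np.allclose(I_sym, I_sym[::-1, ::-1])   # now exactly centrosymmetric
```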
Causality constraints in conformal field theory
Hartman, Thomas; Jain, Sachin; Kundu, Sandipan
2016-05-17
Causality places nontrivial constraints on QFT in Lorentzian signature, for example fixing the signs of certain terms in the low energy Lagrangian. In d-dimensional conformal field theory, we show how such constraints are encoded in crossing symmetry of Euclidean correlators, and derive analogous constraints directly from the conformal bootstrap (analytically). The bootstrap setup is a Lorentzian four-point function corresponding to propagation through a shockwave. Crossing symmetry fixes the signs of certain log terms that appear in the conformal block expansion, which constrains the interactions of low-lying operators. As an application, we use the bootstrap to rederive the well-known sign constraint on the (∂φ)^4 coupling in effective field theory, from a dual CFT. We also find constraints on theories with higher spin conserved currents. Our analysis is restricted to scalar correlators, but we argue that similar methods should also impose nontrivial constraints on the interactions of spinning operators.
A proof for loop-law constraints in stoichiometric metabolic networks
2012-01-01
Background Constraint-based modeling is increasingly employed for metabolic network analysis. Its underlying assumption is that natural metabolic phenotypes can be predicted by adding physicochemical constraints to remove unrealistic metabolic flux solutions. The loopless-COBRA approach provides an additional constraint that eliminates thermodynamically infeasible internal cycles (or loops) from the space of solutions. This allows the prediction of flux solutions that are more consistent with experimental data. However, it is not clear if this approach over-constrains the models by removing non-loop solutions as well. Results Here we apply Gordan’s theorem from linear algebra to prove for the first time that the constraints added in loopless-COBRA do not over-constrain the problem beyond the elimination of the loops themselves. Conclusions The loopless-COBRA constraints can be reliably applied. Furthermore, this proof may be adapted to evaluate the theoretical soundness for other methods in constraint-based modeling. PMID:23146116
A heuristic constraint programmed planner for deep space exploration problems
NASA Astrophysics Data System (ADS)
Jiang, Xiao; Xu, Rui; Cui, Pingyuan
2017-10-01
In recent years, the increasing number of scientific payloads and growing constraints on the probe have made constraint processing technology a hotspot in the deep space planning field. In planning, the ordering of variables and values plays a vital role. In this paper we present two heuristic ordering methods for variables and values, and on this basis propose a graphplan-like constraint-programmed planner. In the planner we convert the traditional constraint satisfaction problem to a time-tagged form with different levels. Inspired by the most-constrained-first principle in constraint satisfaction problems (CSP), the variable heuristic is designed around the number of unassigned variables in the constraint, and the value heuristic is designed around the completion degree of the support set. Simulation experiments show that the proposed planner is effective and that its performance is competitive with other kinds of planners.
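The most-constrained-first principle that inspires these heuristics can be shown in a few lines (a generic illustration, not the planner's actual time-tagged machinery): always branch on the unassigned variable with the fewest remaining legal values.

```python
# Hedged sketch of the generic minimum-remaining-values (most constrained
# first) variable ordering; domains and names are illustrative.
def select_variable(domains, assigned):
    """Return the unassigned variable with the smallest remaining domain."""
    candidates = [v for v in domains if v not in assigned]
    return min(candidates, key=lambda v: len(domains[v]))

domains = {"x": {1, 2, 3}, "y": {1}, "z": {2, 3}}
print(select_variable(domains, assigned=set()))   # -> 'y'
```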
NASA Astrophysics Data System (ADS)
Luo, Lin; Fan, Min; Shen, Mang-zuo
2008-01-01
Atmospheric turbulence severely restricts the spatial resolution of astronomical images obtained by large ground-based telescopes. In order to reduce this effect effectively, we propose a method of blind deconvolution, with a bandwidth constraint determined by the parameters of the telescope's optical system, based on the principle of maximum likelihood estimation, in which the convolution error function is minimized using the conjugate gradient algorithm. A relation between the parameters of the telescope optical system and the image's frequency-domain bandwidth is established, and the speed of convergence of the algorithm is improved by using a positivity constraint on the variables and a limited-bandwidth constraint on the point spread function. To prevent the effective Fourier frequencies from exceeding the cut-off frequency, each image element (e.g., a pixel in CCD imaging) in the sampling focal plane must be smaller than one quarter of the diameter of the diffraction spot. No object-centered constraint is used in the algorithm, so the proposed method is suitable for restoring a whole field of objects. Computer simulations and the restoration of an observed image of α Piscium demonstrate the effectiveness of the proposed method.
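A minimal sketch of the two projections applied inside each iteration, assuming the estimate lives on a regular grid and that 'cutoff' (a fraction of the Nyquist frequency) stands in for the bandwidth derived from the telescope parameters; the conjugate-gradient update itself is omitted.

```python
# Hedged sketch: enforce positivity in the image domain and a band limit in
# the Fourier domain; 'cutoff' is an illustrative parameter.
import numpy as np

def project_constraints(img, cutoff=0.25):
    img = np.clip(img, 0.0, None)                 # positivity constraint
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    F[np.hypot(fy, fx) > cutoff] = 0.0            # limited-bandwidth constraint
    return np.fft.ifft2(F).real

demo = project_constraints(np.random.default_rng(5).random((64, 64)))
```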
Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms
NASA Astrophysics Data System (ADS)
Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid
2017-09-01
The purpose of the work presented in this paper is to describe a new method for 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). We first discuss the combination of bundle adjustment and GAs that we have proposed in order to improve the efficiency and success of 3D reconstruction. We use GAs to improve the fitness of the initial values used in the optimization problem, which reliably increases the convergence rate. Extracted geometric constraints are used first to obtain an estimated value of the focal length, which helps in the initialization step. Matching homologous points and constraints is used to estimate the 3D model. Our new method offers several advantages: it reduces the number of parameters estimated in the optimization step, decreases the number of images required, saves time, and stabilizes the quality of the 3D results. In the end, without any prior information about the 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand, in a straightforward way. Various data and examples are used to highlight the efficiency and competitiveness of the present approach.
Kinetic mechanism of ATP-sulphurylase from rat chondrosarcoma.
Lyle, S; Geller, D H; Ng, K; Westley, J; Schwartz, N B
1994-01-01
ATP-sulphurylase catalyses the production of adenosine 5'-phosphosulphate (APS) from ATP and free sulphate with the release of PPi. APS kinase phosphorylates the APS intermediate to produce adenosine 3'-phosphate 5'-phosphosulphate (PAPS). The kinetic mechanism of rat chondrosarcoma ATP-sulphurylase was investigated by steady-state methods in the physiologically forward direction as well as the reverse direction. The sulphurylase activity was coupled to APS kinase activity in order to overcome the thermodynamic constraints of the sulphurylase reaction in the forward direction. Double-reciprocal initial-velocity plots for the forward sulphurylase reaction intersect to the left of the ordinate. Km(ATP) and Km(sulphate) were found to be 200 and 97 microM respectively. Chlorate, a competitive inhibitor with respect to sulphate, showed uncompetitive inhibition with respect to ATP with an apparent Ki of 1.97 mM. Steady-state data from experiments in the physiologically reverse direction also yielded double-reciprocal initial-velocity patterns that intersect to the left of the ordinate axis, with a Km(APS) of 39 microM and a Km(pyrophosphate) of 18 microM. The results of steady-state experiments in which Mg2+ was varied indicated that the true substrate is the MgPPi complex. An analogue of APS, adenosine 5'-[beta-methylene]phosphosulphate, was a linear inhibitor, competitive with APS and non-competitive with respect to MgPPi. The simplest formal mechanism that agrees with all the data is an ordered steady-state single displacement with MgATP as the leading substrate in the forward direction and APS as the leading substrate in the reverse direction. PMID:8042976
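For reference, the double-reciprocal (Lineweaver-Burk) form of the Michaelis-Menten rate law that underlies these plots is

$$\frac{1}{v} = \frac{K_m}{V_{\max}} \cdot \frac{1}{[S]} + \frac{1}{V_{\max}},$$

so families of initial-velocity lines that intersect to the left of the ordinate, as reported here, are the classical signature of a sequential (ternary-complex) mechanism rather than a ping-pong one.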
Geometric low-energy effective action in a doubled spacetime
NASA Astrophysics Data System (ADS)
Ma, Chen-Te; Pezzella, Franco
2018-05-01
The ten-dimensional supergravity theory is a geometric low-energy effective theory, and the equations of motion for its fields can be obtained from string theory by computing β functions. With d compact dimensions, an O(d, d; Z) geometric structure can be added to it, giving the supergravity theory with T-duality manifest. In this paper, this is constructed through the use of a suitable star product whose role is to implement the weak constraint on the fields and the gauge parameters in order to have a closed gauge symmetry algebra. The consistency of the action proposed here is based on the orthogonality of the momenta associated with fields in their triple star products in the cubic terms defined for d ≥ 1. This orthogonality also holds for an arbitrary number of star products of fields for d = 1. Finally, we extend our analysis to the double sigma model, non-commutative geometry and open string theory.
NASA Astrophysics Data System (ADS)
Rameez-ul-Islam; Ikram, Manzoor; Hasan Mujtaba, Abid; Abbas, Tasawar
2018-01-01
We propose an idea for symmetric measurements through the famous double slit experiment (DSE) in a new detection scenario. The interferometric setup is complemented here with quantum detectors that switch to an arbitrary superposition after interaction with the arms of the DSE. The envisioned scheme covers the full measurement range, i.e. from the weak to the strong projective situation with selectivity being a smoothly tunable open option, and suggests an alternative methodology for weak measurements based on information overlap from DSE paths. The results, though generally in agreement with the quantum paradigm, raise many questions over the nature of probabilities, the absurdity of the common language used to describe phenomena in the theory, the boundary separating projective from non-projective measurements, and the related misconceived interpretations. Further, the results impose certain constraints on hidden variable theories as well as on the repercussions of weak measurements. Although described as a thought experiment, the proposal can equally be implemented experimentally under the prevailing research scenario.
NASA Astrophysics Data System (ADS)
Jang, Kyungmin; Saraya, Takuya; Kobayashi, Masaharu; Hiramoto, Toshiro
2018-02-01
We have investigated the gate stack scalability and energy efficiency of the double-gate negative-capacitance FET (DGNCFET) with a CMOS-compatible ferroelectric HfO2 (FE:HfO2). Analytic model-based simulation is conducted to investigate the impact of the ferroelectric characteristics of FE:HfO2 and the gate stack thickness on the Ion/Ioff ratio of the DGNCFET. The DGNCFET has a wider design window for the gate stack in which a higher Ion/Ioff ratio can be achieved than the classical DG MOSFET. Under a process-induced constraint with sub-10 nm gate length (Lg), the FE:HfO2-based DGNCFET still has a design point for a high Ion/Ioff ratio. With a gate stack thickness optimized for sub-10 nm Lg, the FE:HfO2-based DGNCFET has 2.5× higher energy efficiency than the classical DG MOSFET even at an ultralow operation voltage of sub-0.2 V.
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in an RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance in certain situations, and comparable performance in other cases, relative to the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
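As a rough editorial sketch of the ingredients above (not the authors' optimizer), the following fits a kernel quantile regression by subgradient descent on the check (pinball) loss and then applies a crude coefficient threshold so that only a few kernel functions, i.e. training points, remain; bandwidth, step size and threshold are illustrative.

```python
# Hedged sketch: RBF-kernel quantile regression via pinball-loss subgradient
# descent, followed by thresholding of the coefficients (a crude stand-in
# for the paper's data sparsity constraint).
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(80, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(80)

gamma, tau, lam = 1.0, 0.9, 1e-3
K = np.exp(-gamma * (X - X.T) ** 2)           # RBF Gram matrix (1-D shortcut)
alpha = np.zeros(len(y))

for _ in range(2000):                          # subgradient descent
    r = y - K @ alpha
    g = np.where(r > 0, -tau, 1.0 - tau)       # pinball-loss subgradient
    alpha -= 0.01 * (K.T @ g / len(y) + lam * alpha)

alpha[np.abs(alpha) < 1e-2] = 0.0              # enforce data sparsity
print("kernel functions kept:", np.count_nonzero(alpha))
```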
Rezapour, Ehsan; Pettersen, Kristin Y; Liljebäck, Pål; Gravdahl, Jan T; Kelasidi, Eleni
This paper considers path following control of planar snake robots using virtual holonomic constraints. In order to present a model-based path following control design for the snake robot, we first derive the Euler-Lagrange equations of motion of the system. Subsequently, we define geometric relations among the generalized coordinates of the system, using the method of virtual holonomic constraints. These appropriately defined constraints shape the geometry of a constraint manifold for the system, which is a submanifold of the configuration space of the robot. Furthermore, we show that the constraint manifold can be made invariant by a suitable choice of feedback. In particular, we analytically design a smooth feedback control law to exponentially stabilize the constraint manifold. We show that enforcing the appropriately defined virtual holonomic constraints for the configuration variables implies that the robot converges to and follows a desired geometric path. Numerical simulations and experimental results are presented to validate the theoretical approach.
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
NASA Astrophysics Data System (ADS)
Guo, H., II
2016-12-01
Spatial distribution information on settlement places in mountainous areas is of great significance for earthquake emergency work, because most of the key earthquake-hazard areas of China are located in mountainous terrain. Remote sensing has the advantages of large coverage and low cost, and is an important way to obtain the spatial distribution of settlement places in mountainous areas. At present, most studies apply object-oriented methods that take full account of geometric, spectral, and texture information; in this article, semantic constraints are added on top of the object-oriented method. The experimental data are a single scene from the domestic high-resolution satellite GF-1, with a resolution of 2 meters. The processing consists of three steps: pretreatment, including orthorectification and image fusion; object-oriented information extraction, including image segmentation and information extraction; and removal of erroneous elements under semantic constraints. To formulate these semantic constraints, the distribution characteristics of mountainous settlement places are analyzed and the spatial-logical relations between settlement places and other objects are considered. The extraction accuracy of the object-oriented method alone is 49% and rises to 86% once the semantic constraints are applied, showing that extraction under semantic constraints can effectively improve the accuracy of mountainous settlement-place information extraction. The results show that it is feasible to extract such information from GF-1 imagery, demonstrating the practicality of domestic high-resolution optical remote sensing images for earthquake emergency preparedness.
Double ionization in R-matrix theory using a two-electron outer region
NASA Astrophysics Data System (ADS)
Wragg, Jack; Parker, J. S.; van der Hart, H. W.
2015-08-01
We have developed a two-electron outer region for use within R-matrix theory to describe double ionization processes. The capability of this method is demonstrated for single-photon double ionization of He in the photon energy region between 80 and 180 eV. The cross sections are in agreement with established data. The extended R-matrix with time dependence method also provides information on higher-order processes, as demonstrated by the identification of signatures for sequential double ionization processes involving an intermediate He+ state with n = 2.
Constraints on two accretion disks centered on the equatorial plane of a Kerr SMBH
NASA Astrophysics Data System (ADS)
Pugliese, Daniela; Stuchlík, Zdeněk
2017-12-01
The possibility that two toroidal accretion configurations may be orbiting around a supermassive Kerr black hole is addressed. Such tori may be formed during different stages of the Kerr attractor's accretion history. We consider the relative rotation of the tori and the corotation or counterrotation of a single torus with respect to the Kerr attractor. We give a classification of couples of accreting and non-accreting tori as a function of the dimensionless spin of the Kerr black hole. We demonstrate that a double accretion tori system may be formed only in a few cases, under specific conditions.
Clinical outcomes of arthroscopic single and double row repair in full thickness rotator cuff tears
Ji, Jong-Hun; Shafi, Mohamed; Kim, Weon-Yoo; Kim, Young-Yul
2010-01-01
Background: There has been recent interest in the double row repair method for arthroscopic rotator cuff repair, following favourable biomechanical results reported by some studies. The purpose of this study was to compare the clinical results of the arthroscopic single row and double row repair methods in full-thickness rotator cuff tears. Materials and Methods: 22 patients who underwent arthroscopic single row repair (group I) and 25 patients who underwent double row repair (group II) from March 2003 to March 2005 were retrospectively evaluated and compared for clinical outcomes. The mean age was 58 years and 56 years respectively for groups I and II. The average follow-up in the two groups was 24 months. The evaluation was done using the University of California Los Angeles (UCLA) rating scale and the shoulder index of the American Shoulder and Elbow Surgeons (ASES). Results: The mean ASES score increased from 30.48 to 87.40 in group I and from 32.00 to 91.45 in group II. The mean UCLA score increased from a preoperative 12.23 to 30.82 in group I and from 12.20 to 32.40 in group II. No statistically significant clinical differences were found between the two methods, but based on the UCLA subscores, the double row repair method yields better strength recovery and greater patient satisfaction than the single row repair method. Conclusions: The double row repair group showed better clinical results in strength recovery and greater patient satisfaction, but no statistically significant clinical difference was found between the two methods. PMID:20697485
Charon's Size And Orbit From Double Stellar Occultations
NASA Astrophysics Data System (ADS)
Sicardy, Bruno; Braga-Ribas, F.; Widemann, T.; Jehin, E.; Gillon, M.; Manfroid, J.; Ortiz, J. L.; Morales, N.; Maury, A.; Assafin, M.; Camargo, J. I. B.; Vieira Martins, R.; Dias Oliveira, A.; Ramos Gomes, A., Jr.; Vanzi, L.; Leiva, R.; Young, L. A.; Buie, M. W.; Olkin, C. B.; Young, E. F.; Howell, R. R.; French, R. G.; Bianco, F. B.; Fulton, B. J.; Lister, T. A.; Bode, H. J.; Barnard, B.; Merritt, J. C.; Shoemaker, K.; Vengel, T.; Tholen, D. J.; Hall, T.; Reitsema, H. J.; Wasserman, L. H.; Go, C.
2012-10-01
Stellar occultations of the same star by both Pluto and Charon (double events) yield instantaneous relative positions of the two bodies projected in the plane of the sky, at 10 km-level accuracy. Assuming a given pole orientation for Charon's orbit, double events provide the satellite's plutocentric distance r at a given orbital longitude L (counted from the ascending node on the J2000 mean equator), and thus constraints on its orbit. A double event observed on 22 June 2008 provides r=19,564+/-14 km at L=153.483+/-0.071 deg. (Sicardy et al. 2011), while another double event observed on 4 June 2011 yields r=19,586+/-15 km at L=343.211+/-0.072 deg. (all error bars at the 1-sigma level). These two positions are consistent with a circular orbit for Charon, with a semi-major axis of a=19,575+/-10 km. This can be compared to the circular orbit found by Buie et al. (2012), based on Hubble Space Telescope data, with a=19,573+/-2 km. The 4 June 2011 stellar occultation provides 3 chords across Charon, from which a radius of Rc=602.4+/-1.6 km is derived. This value can be compared to those obtained from the 11 July 2005 occultation: Rc=606.0+/-1.5 km (Person et al. 2006) and Rc=603.6+/-1.4 km (Sicardy et al. 2006). A third double event, observed on 23 June 2011, is under ongoing analysis and will be presented. Buie et al. (2012), AJ 144, 15-34 (2012) Person et al., AJ 132, 1575-1580 (2006) Sicardy et al., Nature 439, 52-54 (2006) Sicardy et al., AJ 141, 67-83 (2011) B.S. thanks ANR "Beyond Neptune II". L.A.Y. acknowledges support by NASA, New Horizons and National Geographic grants. We thank B. Barnard, M.J. Brucker, J. Daily, C. Erikson, W. Fukunaga, C. Harlinten, C. Livermore, C. Nance, J.R. Regester, L. Salas, P. Tamblyn, R. Westhoff for help in the observations.
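As an illustration of the limb geometry behind the quoted radius (our sketch, with synthetic points rather than the 2011 chords), a circle's centre and radius can be fit to the chord endpoints by linear least squares:

```python
# Hedged sketch: fit a circle to limb points from occultation chords using
# the linear form x^2 + y^2 = 2ax + 2by + c, with radius sqrt(c + a^2 + b^2).
import numpy as np

rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 6)
pts = 602.4 * np.column_stack([np.cos(theta), np.sin(theta)])  # synthetic limb points

x, y = pts[:, 0], pts[:, 1]
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x ** 2 + y ** 2
(a0, b0, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
radius = np.sqrt(c0 + a0 ** 2 + b0 ** 2)
print(f"fitted radius: {radius:.1f} km")        # ~602.4 for noiseless points
```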
Bounding the moment deficit rate on crustal faults using geodetic data: Methods
Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael
2017-08-19
Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
Bounding the moment deficit rate on crustal faults using geodetic data: Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael
Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
Constraints for the Trifocal Tensor
NASA Astrophysics Data System (ADS)
Alzati, Alberto; Tortora, Alfonso
In this chapter we give an account of two different methods to find constraints for the trifocal tensor T, used in geometric computer vision. We also show how to single out a set of only eight equations that are generically complete, i.e. for a generic choice of T, they suffice to decide whether T is indeed trifocal. Note that eight is the minimum possible number of constraints.
SPIKE: AI scheduling techniques for Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Johnston, Mark D.
1991-09-01
AI (Artificial Intelligence) scheduling techniques for HST are presented in the form of viewgraphs. The following subject areas are covered: domain; HST constraint timescales; HST scheduling; SPIKE overview; SPIKE architecture; constraint representation and reasoning; use of suitability functions by the scheduling agent; SPIKE screen example; advantages of the suitability function framework; limiting search and constraint propagation; scheduling search; stochastic search; repair methods; implementation; and status.
Decomposition method for zonal resource allocation problems in telecommunication networks
NASA Astrophysics Data System (ADS)
Konnov, I. V.; Kashuba, A. Yu
2016-11-01
We consider problems of optimal resource allocation in telecommunication networks. We first give an optimization formulation for the case where the network manager aims to distribute a homogeneous resource (bandwidth) among the users of one region with quadratic charge and fee functions, and present simple and efficient solution methods. Next, we consider a more general problem for a provider of a wireless communication network divided into zones (clusters) with common capacity constraints. We obtain a convex quadratic optimization problem involving capacity and balance constraints. Using the dual Lagrangian method with respect to the capacity constraint, we propose reducing the initial problem to a one-dimensional optimization problem, where each evaluation of the cost function decomposes into independent zonal problems that coincide with the single-region problem above. Results of computational experiments confirm the applicability of the new methods.
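A minimal sketch of the dual Lagrangian idea for a single zone with quadratic charges (coefficients are illustrative, not the paper's formulation): each user's optimal bandwidth has a closed form given the capacity price, and bisection on that one price enforces the shared capacity constraint.

```python
# Hedged sketch: dual bisection for max sum_i (b_i x_i - a_i x_i^2 / 2)
# subject to sum_i x_i <= C and x_i >= 0; all coefficients are toy values.
import numpy as np

a = np.array([1.0, 2.0, 0.5])      # quadratic charge coefficients
b = np.array([4.0, 5.0, 3.0])      # linear utility coefficients
C = 5.0                            # zone capacity

def demand(lam):
    # per-user optimum of max_x b*x - a*x^2/2 - lam*x, with x >= 0
    return np.maximum(0.0, (b - lam) / a)

lam = 0.0
if demand(lam).sum() > C:          # capacity binds: bisect on the price
    lo, hi = 0.0, b.max()
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if demand(mid).sum() > C else (lo, mid)
    lam = 0.5 * (lo + hi)

x = demand(lam)
print(x, x.sum())                  # allocation meeting the capacity
```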
Petri Net controller synthesis based on decomposed manufacturing models.
Dideban, Abbas; Zeraatkar, Hashem
2018-06-01
Applying supervisory control theory to real systems in modeling tools such as Petri nets (PN) has become challenging in recent years due to the large number of states in the automata models and the presence of uncontrollable events. Uncontrollable events give rise to forbidden states, which can be removed by employing linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them in a large-scale system is difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is broken down into smaller models, which reduces the computational cost significantly. Using this method, it is easy to reduce the constraints and enforce them on a Petri net model. Results on PN models show that the proposed method yields effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Fuzzy Multi-Objective Transportation Planning with Modified S-Curve Membership Function
NASA Astrophysics Data System (ADS)
Peidro, D.; Vasant, P.
2009-08-01
In this paper, the S-Curve membership function methodology is applied to a transportation planning decision (TPD) problem. An interactive method for solving multi-objective TPD problems with fuzzy goals, available supply and forecast demand is developed. The proposed method attempts to simultaneously minimize the total production and transportation costs and the total delivery time, with reference to budget constraints, available supply and machine capacities at each source, as well as forecast demand and warehouse space constraints at each destination. In an industrial case, we compare the performance of S-Curve membership functions, which represent uncertain goals and constraints in TPD problems, with that of linear membership functions.
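A small sketch of a logistic S-curve membership function of the general form used in this literature, mu(x) = B / (1 + C·exp(alpha·x)) on a normalized interval; the values of B, C and the vagueness parameter alpha below are illustrative defaults drawn from this family, not the paper's calibrated ones.

```python
# Hedged sketch of a logistic S-curve membership function; parameter values
# are illustrative assumptions chosen so mu(0) ~ 1 and mu(1) ~ 0.
import numpy as np

def s_curve(x, alpha=13.813, B=1.0, C=0.001001001):
    """Membership degree for normalized x in [0, 1]."""
    return B / (1.0 + C * np.exp(alpha * x))

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(x, round(float(s_curve(x)), 4))
```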
Collective coordinates and constrained hamiltonian systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayi, O.F.
1992-07-01
A general method of incorporating collective coordinates (transformation of fields into an overcomplete basis) with constrained hamiltonian systems is given, where the original phase space variables and collective coordinates can be bosonic and/or fermionic. This method is illustrated by applying it to the SU(2) Yang-Mills-Higgs theory, and its BFV-BRST quantization is discussed. Moreover, this formalism is used to give a systematic way of converting second class constraints into effectively first class ones, by considering second class constraints as first class constraints together with gauge-fixing conditions. This approach is applied to the massive superparticle, the Proca lagrangian, and some topological quantum field theories.
Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing
NASA Astrophysics Data System (ADS)
Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel
The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment and films to sports and medicine. However, captured motions normally serve specific needs. In an effort to adapt and reuse captured human motions in new tasks and environments and to improve the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage of faster deformations and more natural-looking motion results compared to the goal-directed constraint-based methods found in the literature.
Constraints as a destriping tool for Hires images
NASA Technical Reports Server (NTRS)
Cao, YU; Prince, Thomas A.
1994-01-01
Images produced by the Maximum Correlation Method (MCM) sometimes suffer from visible striping artifacts, especially in areas of extended sources. Possible causes are differing baseline levels and calibration errors in the detectors. We incorporated these factors into the MCM algorithm and tested the effects of different constraints on the output image. The result shows significant visual improvement over the standard MCM method. In some areas the new images show intelligible structures that are otherwise corrupted by striping artifacts, and the removal of these artifacts could enhance the performance of object classification algorithms. The constraints were also tested on low surface brightness areas, and were found to be effective in reducing the noise level.
Response of double cropping suitability to climate change in the United States
NASA Astrophysics Data System (ADS)
Seifert, Christopher A.; Lobell, David B.
2015-02-01
In adapting US agriculture to the climate of the 21st century, a key unknown is whether cropping frequency may increase, helping to offset projected negative yield impacts in major production regions. Combining daily weather data and crop phenology models, we find that cultivated area in the US suited to dryland winter wheat-soybeans, the most common double crop (DC) system, increased by up to 28% from 1988 to 2012. Changes in the observed distribution of DC area over the same period agree well with this suitability increase, evidence consistent with climate change playing a role in recent DC expansion in phenologically constrained states. We then apply the model to projections of future climate under the RCP45 and RCP85 scenarios and estimate an additional 126-239% increase, respectively, in DC area. Sensitivity tests reveal that in most instances, increases in mean temperature are more important than delays in fall freeze in driving increased DC suitability. The results suggest that climate change will relieve phenological constraints on wheat-soy DC systems over much of the United States, though it should be recognized that impacts on corn and soybean yields in this region are expected to be negative and larger in magnitude than the 0.4-0.75% per decade benefits we estimate here for double cropping.
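The phenological test implied above can be sketched as follows (our toy version, with illustrative planting dates, thresholds and degree-day requirements, not the paper's calibrated crop models): from daily mean temperatures, accumulate growing degree days for a soybean crop planted at wheat harvest and check that maturity is reached before the first fall freeze.

```python
# Hedged sketch of a growing-degree-day double-crop feasibility check; the
# base temperature, GDD requirement and toy climate are assumptions.
import numpy as np

def double_crop_feasible(tmean, harvest_doy, base=10.0, gdd_required=1300.0,
                         freeze_temp=0.0):
    """tmean: array of daily mean temps (deg C) indexed by day of year."""
    gdd = 0.0
    for doy in range(harvest_doy, len(tmean)):
        if tmean[doy] <= freeze_temp:          # first fall freeze ends the season
            return False
        gdd += max(0.0, tmean[doy] - base)     # degree-day accumulation
        if gdd >= gdd_required:                # soybeans reach maturity in time
            return True
    return False

days = np.arange(365)
tmean = 12.0 + 14.0 * np.sin(2 * np.pi * (days - 105) / 365)  # toy climate
print(double_crop_feasible(tmean, harvest_doy=170))
```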
NASA Astrophysics Data System (ADS)
Gui, Luying; He, Jian; Qiu, Yudong; Yang, Xiaoping
2017-01-01
This paper presents a variational level set approach to segment lesions with compact shapes in medical images. In this study, we address the problem of segmenting hepatocellular carcinoma lesions, which usually have various shapes, variable intensities, and weak boundaries. An efficient constraint, called the isoperimetric constraint, which describes the compactness of shapes, is applied in this method. In addition, in order to ensure precise segmentation and stable movement of the level set, a distance regularization is also implemented in the proposed variational framework. Our method is applied to segment various hepatocellular carcinoma regions in Computed Tomography images with promising results. Comparison results also show that the proposed method is more accurate than two other approaches.
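The compactness notion invoked above can be made concrete with the standard two-dimensional isoperimetric ratio (our gloss, not a formula quoted from the paper): for a region Ω with perimeter P(Ω) and area A(Ω),

$$\mathcal{C}(\Omega) = \frac{P(\Omega)^2}{4\pi A(\Omega)} \ge 1,$$

with equality exactly for a disk, so penalizing large C(Ω) in the level set energy favours compact lesion shapes.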
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1997-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
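A minimal numpy sketch of the double-difference pre-coding idea described above; the function name and synthetic data are hypothetical, for illustration only:

```python
import numpy as np

def double_difference(x1, x2):
    """Compute a double-difference set from two length-M data sets.

    Cross-delta: element-wise difference between the two sets.
    Adjacent-delta: difference between neighbouring elements of one set.
    The two orderings (adjacent-of-cross vs. cross-of-adjacent) commute,
    so a single implementation covers both cases named in the abstract.
    """
    cross = x2 - x1            # cross-delta data set
    return np.diff(cross)      # adjacent-delta of the cross-delta

# Example: two correlated spectral bands sharing an underlying signal.
rng = np.random.default_rng(0)
base = np.cumsum(rng.integers(-2, 3, size=16))
band1 = base + rng.integers(0, 2, size=16)
band2 = base + 5 + rng.integers(0, 2, size=16)

dd = double_difference(band1, band2)
# The double-difference values cluster near zero, so an entropy coder
# can represent them with fewer bits than either original band.
print(dd)
```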
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
NASA Astrophysics Data System (ADS)
Nielsen, N. K.; Quaade, U. J.
1995-07-01
The physical phase space of the relativistic top, as defined by Hansson and Regge, is expressed in terms of canonical coordinates of the Poincaré group manifold. The system is described in the Hamiltonian formalism by the mass-shell condition and constraints that reduce the number of spin degrees of freedom. The constraints are second class and are modified into a set of first class constraints by adding combinations of gauge-fixing functions. The Batalin-Fradkin-Vilkovisky method is then applied to quantize the system in the path integral formalism in Hamiltonian form. It is finally shown that different gauge choices produce different equivalent forms of the constraints.
Gong, Ang; Zhao, Xiubin; Pang, Chunlei; Duan, Rong; Wang, Yong
2015-12-02
For Global Navigation Satellite System (GNSS) single-frequency, single-epoch attitude determination, this paper proposes a new reliable method with a baseline vector constraint. First, prior knowledge of baseline length, heading, and pitch obtained from other navigation equipment or sensors is used to rigorously reconstruct the objective function. Then, the searching strategy is improved: a gradually enlarged ellipsoidal search space is substituted for the non-ellipsoidal search space, ensuring that the correct ambiguity candidates lie within it and allowing the search to be carried out directly by the least-squares ambiguity decorrelation adjustment (LAMBDA) method. Some vector candidates are further eliminated by a derived approximate inequality, which accelerates the search. Experimental results show that, compared to the traditional method with only a baseline length constraint, the new method can use a priori three-dimensional baseline knowledge to fix ambiguities reliably and achieve a high success rate. Experimental tests also verify that it is not very sensitive to baseline vector error and performs robustly when the angular error is not large.
Method and apparatus for creating time-optimal commands for linear systems
NASA Technical Reports Server (NTRS)
Seering, Warren P. (Inventor); Tuttle, Timothy D. (Inventor)
2004-01-01
A system for and method of determining an input command profile for substantially any dynamic system that can be modeled as a linear system, the input command profile for transitioning an output of the dynamic system from one state to another state. The present invention involves identifying characteristics of the dynamic system, selecting a command profile which defines an input to the dynamic system based on the identified characteristics, wherein the command profile comprises one or more pulses which rise and fall at switch times, imposing a plurality of constraints on the dynamic system, at least one of the constraints being defined in terms of the switch times, and determining the switch times for the input to the dynamic system based on the command profile and the plurality of constraints. The characteristics may be related to poles and zeros of the dynamic system, and the plurality of constraints may include a dynamics cancellation constraint which specifies that the input moves the dynamic system from a first state to a second state such that the dynamic system remains substantially at the second state.
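As an illustration of switch-time computation in the simplest case covered by such command profiles, here is a sketch for a rest-to-rest move of a double integrator under a symmetric acceleration bound; this is a textbook instance, not the patented procedure:

```python
import math

def bang_bang_switch_times(distance, a_max):
    """Rest-to-rest, time-optimal move of a double integrator, |u| <= a_max.

    The optimal input is one pulse of +a_max followed by one of -a_max;
    the single switch occurs at the half-way time t_s, total time 2*t_s.
    """
    t_s = math.sqrt(distance / a_max)   # accelerate over half the distance
    return t_s, 2.0 * t_s               # (switch time, final time)

t_switch, t_final = bang_bang_switch_times(distance=1.0, a_max=2.0)
print(f"switch at {t_switch:.4f} s, done at {t_final:.4f} s")
```

For systems with flexible modes, additional switch times and constraints (such as the dynamics cancellation constraint named above) are appended, and the switch times are found by solving the resulting system of equations.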
An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.
Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim
2015-10-01
In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint shows superior performance both in reconstructing image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables greater acceleration in 3D MR imaging.
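One plausible way to write such an integrated objective is sketched below; the operator names and penalty weights are assumptions for illustration, not the authors' exact formulation:

```latex
% Integrated half-Fourier reconstruction with an L1-norm phase constraint
% (schematic):
\hat{x} = \arg\min_{x}\; \tfrac{1}{2}\,\| E x - y \|_2^2
        + \lambda_1 \| \Psi x \|_1
        + \lambda_2 \,\big\| \operatorname{Im}\!\big( e^{-i\hat{\varphi}} \odot x \big) \big\|_1,
% where E is the undersampled half-Fourier (and coil-weighted) encoding operator,
% y the acquired k-space data, \Psi a sparsifying transform (compressed-sensing
% term), \hat{\varphi} a low-resolution phase estimate, and \odot the element-wise
% product. The L2-norm variant replaces the last term with
% \lambda_2 \| \operatorname{Im}( e^{-i\hat{\varphi}} \odot x ) \|_2^2.
```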
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, J.P.; Shapiro, L.G.; Tanimoto, S.L.
1997-04-01
This paper describes a computing environment which supports computer-based scientific research work. Key features include support for automatic distributed scheduling and execution and computer-based scientific experimentation. A new flexible and extensible scheduling technique that is responsive to a user's scheduling constraints, such as the ordering of program results and the specification of task assignments and processor utilization levels, is presented. An easy-to-use constraint language for specifying scheduling constraints, based on the relational database query language SQL, is described along with a search-based algorithm for fulfilling these constraints. A set of performance studies shows that the environment can schedule and execute program graphs on a network of workstations as the user requests. A method for automatically generating computer-based scientific experiments is described. Experiments provide a concise method of specifying a large collection of parameterized program executions. The environment achieved significant speedups when executing experiments; for a large collection of scientific experiments, an average speedup of 3.4 on an average of 5.5 scheduled processors was obtained.
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
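For context, the classical fully stressed resizing rule that FUD-type methods build on can be sketched as follows; the two-bar example and the numbers are hypothetical:

```python
import numpy as np

def fsd_resize(areas, stresses, sigma_allow):
    """Classical fully stressed design update: A_new = A_old * sigma / sigma_allow."""
    return areas * stresses / sigma_allow

# Two parallel bars share an axial load P in proportion to their areas,
# so stress_i = P / (A_1 + A_2) for both members.
P, sigma_allow = 100.0e3, 250.0e6           # N, Pa
areas = np.array([2.0e-4, 6.0e-4])          # initial guesses, m^2

for it in range(10):
    stresses = np.full(2, P / areas.sum())  # member stresses for this design
    new_areas = fsd_resize(areas, stresses, sigma_allow)
    if np.allclose(new_areas, areas, rtol=1e-10):
        break
    areas = new_areas

print(areas, P / areas.sum())  # both members end up fully stressed
```

The MFUD extension additionally scales the design to respect displacement limits, which this minimal sketch does not attempt.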
Constrained coding for the deep-space optical channel
NASA Technical Reports Server (NTRS)
Moision, B. E.; Hamkins, J.
2002-01-01
We investigate methods of coding for a channel subject to a large dead-time constraint, i.e. a constraint on the minimum spacing between transmitted pulses, with the deep-space optical channel as the motivating example.
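Under the standard result that the noiseless capacity of such a constrained channel is log2 of the largest eigenvalue of its constraint-graph transfer matrix, a minimal sketch of the dead-time capacity computation might look like this (illustrative, not the paper's code constructions):

```python
import numpy as np

def dead_time_capacity(d):
    """Capacity (bits/slot) of binary sequences whose pulses ('1's) are
    separated by at least d empty slots, via the constraint graph.

    State s = number of empty slots since the last pulse, capped at d.
    Capacity = log2(largest eigenvalue of the transition count matrix).
    """
    T = np.zeros((d + 1, d + 1))
    for s in range(d + 1):
        T[s, min(s + 1, d)] += 1.0      # emit a 0: one more empty slot
        if s >= d:
            T[s, 0] += 1.0              # emit a pulse: allowed after d empties
    lam = max(abs(np.linalg.eigvals(T)))
    return np.log2(lam)

for d in (1, 2, 4, 8):
    print(d, round(dead_time_capacity(d), 4))
# d=1 gives log2 of the golden ratio, ~0.6942 bits/slot.
```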
Haploids: Constraints and opportunities in plant breeding.
Dwivedi, Sangam L; Britt, Anne B; Tripathi, Leena; Sharma, Shivali; Upadhyaya, Hari D; Ortiz, Rodomiro
2015-11-01
The discovery of haploids in higher plants led to the use of doubled haploid (DH) technology in plant breeding. This article provides the state of the art on DH technology, including the induction and identification of haploids, the factors that influence haploid induction, the molecular basis of microspore embryogenesis, and the genetic underpinnings of haploid induction and its use in plant breeding, particularly to fix traits and unlock genetic variation. Both in vitro and in vivo methods have been used to induce haploids, which are thereafter chromosome-doubled to produce DH. Various heritable factors contribute to the successful induction of haploids, whose genetics is that of a quantitative trait. Genomic regions associated with in vitro and in vivo DH production have been noted in various crops with the aid of DNA markers. It seems that F2 plants are more suitable than F1 plants for the induction of DH lines. Identifying putative haploids is a key issue in haploid breeding. DH technology in Brassicas and cereals, such as barley, maize, rice, rye and wheat, has been improved and used routinely in cultivar development, while in other food staples, such as pulses and root crops, the technology has not reached the stage of application in plant breeding. The centromere-mediated haploid induction system has been used in Arabidopsis, but not yet in crops. Most food staples are derived from genomic-resource-rich crops, including those with sequenced reference genomes. The integration of genomic resources with DH technology provides new opportunities for improving selection methods, maximizing selection gains, and accelerating cultivar development. Marker-aided breeding and DH technology have been used to improve host plant resistance in barley, rice, and wheat. Multinational seed companies are using DH technology in the large-scale production of inbred lines for further development of hybrid cultivars, particularly in maize. The public sector provides support to national programs or small-to-medium private seed companies for the exploitation of DH technology in plant breeding. Copyright © 2015 Elsevier Inc. All rights reserved.
An-Min Zou; Kumar, K D; Zeng-Guang Hou; Xi Liu
2011-08-01
A finite-time attitude tracking control scheme is proposed for spacecraft using terminal sliding mode and a Chebyshev neural network (CNN). The four-parameter representation (quaternion) is used to describe the spacecraft attitude for a global representation without singularities. The attitude state (i.e., attitude and velocity) error dynamics is transformed to double-integrator dynamics with a constraint on the spacecraft attitude. With consideration of this constraint, a novel terminal sliding manifold is proposed for the spacecraft. To guarantee that the output of the neural network (NN) used in the controller is bounded by the corresponding bound of the approximated unknown function, a switch function is applied to generate a switching between the adaptive NN control and the robust controller. Meanwhile, a CNN, whose basis functions are implemented using only desired signals, is introduced to approximate the desired nonlinear function and bounded external disturbances online, and a robust term based on the hyperbolic tangent function is applied to counteract NN approximation errors in the adaptive neural control scheme. Most importantly, finite-time stability in both the reaching phase and the sliding phase can be guaranteed by a Lyapunov-based approach. Finally, numerical simulations on the attitude tracking control of spacecraft in the presence of an unknown mass moment of inertia matrix, bounded external disturbances, and control input constraints are presented to demonstrate the performance of the proposed controller.
Huang, Yan; Bi, Duyan; Wu, Dongpeng
2018-04-11
Many artificial parameters are involved when fusing infrared and visible images. To overcome the lack of detail in the fused image caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The images are decomposed by the NSST into high-frequency and low-frequency bands. After analyzing the characteristics of these bands, the high-frequency bands are fused under a gradient constraint, so the fused image retains more detail; the low-frequency bands are fused under an image-saliency constraint, so the targets are more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective in preserving details and highlighting targets when compared with other state-of-the-art methods.
Image-optimized Coronal Magnetic Field Models
NASA Astrophysics Data System (ADS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-08-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and to the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Huang, Yan; Bi, Duyan; Wu, Dongpeng
2018-01-01
Many artificial parameters are involved when fusing infrared and visible images. To overcome the lack of detail in the fused image caused by artifacts, a novel fusion algorithm for infrared and visible images based on different constraints in the non-subsampled shearlet transform (NSST) domain is proposed. The images are decomposed by the NSST into high-frequency and low-frequency bands. After analyzing the characteristics of these bands, the high-frequency bands are fused under a gradient constraint, so the fused image retains more detail; the low-frequency bands are fused under an image-saliency constraint, so the targets are more salient. Before the inverse NSST, a Nash equilibrium is used to update the coefficients. The fused images and the quantitative results demonstrate that our method is more effective in preserving details and highlighting targets when compared with other state-of-the-art methods. PMID:29641505
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through use of both first and second-order sensitivity derivatives. For each robust optimization, the effect of increasing both input standard deviations and target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
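A generic sketch of the first-order moment propagation described above, with a toy output function standing in for the CFD code; all names and numbers are illustrative:

```python
import numpy as np

def first_order_moments(f, mu, sigma, h=1e-6):
    """First-order statistical moment method for independent normal inputs.

    mean[f] ~ f(mu);  var[f] ~ sum_i (df/dx_i)^2 * sigma_i^2,
    with the sensitivity derivatives taken by central differences here
    (a CFD code would supply them from its forward/adjoint sensitivities).
    """
    mu = np.asarray(mu, dtype=float)
    grad = np.empty_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = h
        grad[i] = (f(mu + e) - f(mu - e)) / (2 * h)
    return f(mu), np.sum((grad * np.asarray(sigma)) ** 2)

# Illustrative output as a function of two uncertain flow parameters
# (stand-ins for, e.g., Mach number and angle of attack).
f = lambda x: x[0] ** 2 + 0.5 * np.sin(x[1])
mean, var = first_order_moments(f, mu=[0.8, 2.0], sigma=[0.01, 0.1])
print(mean, var ** 0.5)
```

A probabilistic constraint P(g <= 0) >= p can then be approximated as mean_g + k*std_g <= 0, with the factor k set by the target probability.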
Image-Optimized Coronal Magnetic Field Models
NASA Technical Reports Server (NTRS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-01-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and to the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
NASA Astrophysics Data System (ADS)
Bartlett, Philip L.; Stelbovics, Andris T.
2010-02-01
The propagating exterior complex scaling (PECS) method is extended to all four-body processes in electron impact on helium in an S-wave model. Total and energy-differential cross sections are presented with benchmark accuracy for double ionization, single ionization with excitation, and double excitation (to autoionizing states) for incident-electron energies from threshold to 500 eV. While the PECS three-body cross sections for this model given in the preceding article [Phys. Rev. A 81, 022715 (2010)] are in good agreement with other methods, there are considerable discrepancies for these four-body processes. With this model we demonstrate the suitability of the PECS method for the complete solution of the electron-helium system.
Clinical outcomes of arthroscopic single and double row repair in full thickness rotator cuff tears.
Ji, Jong-Hun; Shafi, Mohamed; Kim, Weon-Yoo; Kim, Young-Yul
2010-07-01
There has been recent interest in the double row repair method for arthroscopic rotator cuff repair, following favourable biomechanical results reported by some studies. The purpose of this study was to compare the clinical results of the arthroscopic single row and double row repair methods in full-thickness rotator cuff tears. 22 patients who underwent arthroscopic single row repair (group I) and 25 patients who underwent double row repair (group II) from March 2003 to March 2005 were retrospectively evaluated and compared for clinical outcomes. The mean age was 58 years in group I and 56 years in group II. The average follow-up in both groups was 24 months. Evaluation used the University of California Los Angeles (UCLA) rating scale and the shoulder index of the American Shoulder and Elbow Surgeons (ASES). The mean ASES score increased from 30.48 to 87.40 in group I and from 32.00 to 91.45 in group II. The mean UCLA score increased from a preoperative 12.23 to 30.82 in group I and from 12.20 to 32.40 in group II. No statistically significant clinical difference was found between the two methods, but based on the subscores of the UCLA score, the double row repair method yielded better strength and gave more satisfaction to the patients than the single row repair method.
Pareto Tracer: a predictor-corrector method for multi-objective optimization problems
NASA Astrophysics Data System (ADS)
Martín, Adanay; Schütze, Oliver
2018-03-01
This article proposes a novel predictor-corrector (PC) method for the numerical treatment of multi-objective optimization problems (MOPs). The algorithm, Pareto Tracer (PT), is capable of performing a continuation along the set of (local) solutions of a given MOP with k objectives, and can cope with equality and box constraints. Additionally, the first steps towards a method that manages general inequality constraints are also introduced. The properties of PT are first discussed theoretically and later numerically on several examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breton, J.; Berger, G.; Nabedryk, E.
The photoreduction of the secondary quinone acceptor Q_B in reaction centers (RCs) of the photosynthetic bacteria Rhodobacter sphaeroides and Rhodopseudomonas viridis has been investigated by light-induced FTIR difference spectroscopy of RCs reconstituted with several isotopically labeled ubiquinones. The labels used were ¹⁸O on both carbonyls and ¹³C either uniformly or selectively at the 1- or the 4-position, i.e., on either one of the two carbonyls. The Q_B⁻/Q_B spectra of RCs reconstituted with the isotopically labeled and unlabeled quinones, as well as the double differences calculated from these spectra, exhibit distinct isotopic shifts for a number of bands attributed to vibrations of Q_B and Q_B⁻. The vibrational modes of the quinone in the Q_B site are compared to those of ubiquinone in vitro, leading to band assignments for the C=O and C=C vibrations of the neutral Q_B and for the C---O and C---C of the semiquinone. The C=O frequency of each of the carbonyls of the unlabeled quinone is revealed at 1641 cm⁻¹ for both species. This demonstrates symmetrical and weak hydrogen bonding of the two C=O groups to the protein at the Q_B site. In contrast, the C=C vibrations are not equivalent for selective labeling at C1 or at C4, although they both contribute to the ~1611 cm⁻¹ band in the Q_B⁻/Q_B spectra of the two species. Compared to the vibrations of isolated ubiquinone, the C=C mode of Q_B does not involve displacement of the C4 carbon atom, while the motion of C1 is not hindered. Further analysis of the spectra suggests that the protein at the binding site imposes a specific constraint on the methoxy and/or the methyl group proximal to the C4 carbonyl. 49 refs., 5 figs.
Method and apparatus for automated assembly
Jones, Rondall E.; Wilson, Randall H.; Calton, Terri L.
1999-01-01
A process and apparatus generates a sequence of steps for assembly or disassembly of a mechanical system. Each step in the sequence is geometrically feasible, i.e., the part motions required are physically possible. Each step in the sequence is also constraint feasible, i.e., the step satisfies user-definable constraints. Constraints allow process and other such limitations, not usually represented in models of the completed mechanical system, to affect the sequence.
Forces Associated with Nonlinear Nonholonomic Constraint Equations
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Hodges, Dewey H.
2010-01-01
A concise method has been formulated for identifying a set of forces needed to constrain the behavior of a mechanical system, modeled as a set of particles and rigid bodies, when it is subject to motion constraints described by nonholonomic equations that are inherently nonlinear in velocity. An expression in vector form is obtained for each force; a direction is determined, together with the point of application. This result is a consequence of expressing constraint equations in terms of dot products of vectors rather than in the usual way, which is entirely in terms of scalars and matrices. The constraint forces in vector form are used together with two new analytical approaches for deriving equations governing motion of a system subject to such constraints. If constraint forces are of interest they can be brought into evidence in explicit dynamical equations by employing the well-known nonholonomic partial velocities associated with Kane's method; if they are not of interest, equations can be formed instead with the aid of vectors introduced here as nonholonomic partial accelerations. When the analyst requires only the latter, smaller set of equations, they can be formed directly; it is not necessary to expend the labor to form the former, larger set first and subsequently perform matrix multiplications.
Linear Quadratic Tracking Design for a Generic Transport Aircraft with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Burken, John J.; Frost, Susan A.; Taylor, Brian R.
2011-01-01
When designing control laws for systems with constraints added to the tracking performance, control allocation methods can be utilized. Control allocation methods are used when there are more command inputs than controlled variables. Constraints that require allocators include surface saturation limits, structural load limits, drag-reduction constraints, and actuator failures. Most transport aircraft have many actuated surfaces compared to the three controlled variables (such as angle of attack, roll rate, and sideslip angle). To distribute the control effort among the redundant set of actuators, either a fixed mixer approach or online control allocation techniques can be utilized. The benefit of an online allocator is that constraints can be considered in the design, whereas with a fixed mixer they cannot. However, an online control allocator has the disadvantage of not guaranteeing a surface schedule, which can produce ill-defined loads on the aircraft. This load uncertainty and complexity has prevented some controller designs from using advanced allocation techniques. This paper considers actuator redundancy management for a class of over-actuated systems with real-time structural load limits, using linear quadratic tracking applied to the generic transport model. A roll maneuver example with an artificial load limit constraint is shown and compared to the same maneuver without load limitation.
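As an illustration of the kind of constrained allocation problem involved, a bounded least-squares allocator can be sketched with SciPy; the effectiveness matrix and limits below are made-up numbers, not the paper's aircraft model:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Control allocation: find surface deflections u such that B @ u approximates
# the commanded moments v, subject to per-surface deflection limits.
# B maps 5 redundant surfaces to 3 controlled axes (illustrative numbers).
B = np.array([[ 1.0, 0.9, 0.0,  0.0, 0.2],
              [ 0.0, 0.1, 1.0,  1.1, 0.0],
              [ 0.3, 0.0, 0.0, -0.2, 1.0]])
v = np.array([0.5, -0.2, 0.1])            # commanded roll/pitch/yaw moments

lo = np.full(5, -0.35)                    # lower deflection limits (rad)
hi = np.full(5,  0.35)                    # upper deflection limits (rad)
# A structural-load limit can be imposed by tightening a surface's bounds:
hi[1] = 0.05

res = lsq_linear(B, v, bounds=(lo, hi))
print(res.x, B @ res.x)                   # allocated deflections, achieved moments
```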
Framing Young Children's Oral Health: A Participatory Action Research Project
Collins, Chimere C.; Villa-Torres, Laura; Sams, Lattice D.; Zeldin, Leslie P.
2016-01-01
Background and Objectives Despite the widespread acknowledgement of the importance of childhood oral health, little progress has been made in preventing early childhood caries. Limited information exists regarding specific daily-life and community-related factors that impede optimal oral hygiene, diet, care, and ultimately oral health for children. We sought to understand what parents of young children consider important and potentially modifiable factors and resources influencing their children’s oral health, within the contexts of the family and the community. Methods This qualitative study employed Photovoice among 10 English-speaking parents of infants and toddlers who were clients of an urban WIC clinic in North Carolina. The primary research question was: “What do you consider as important behaviors, as well as family and community resources to prevent cavities among young children?” Five group sessions were conducted and they were recorded, transcribed verbatim and analyzed using qualitative research methodology. Inductive analyses were based on analytical summaries, double-coding, and summary matrices and were done using Atlas.ti.7.5.9 software. Findings Good oral health was associated with avoidance of problems or restorations for the participants. Financial constraints affected healthy food and beverage choices, as well as access to oral health care. Time constraints and occasional frustration related to children’s oral hygiene emerged as additional barriers. Establishment of rules/routines and commitment to them was a successful strategy to promote their children’s oral health, as well as modeling of older siblings, cooperation among caregivers and peer support. Community programs and organizations, social hubs including playgrounds, grocery stores and social media emerged as promising avenues for gaining support and sharing resources. Conclusions Low-income parents of young children are faced with daily life struggles that interfere with oral health and care. Financial constraints are pervasive, but parents identified several strategies involving home care and community agents that can be helpful. Future interventions aimed to improve children’s oral health must take into consideration the role of families and the communities in which they live. PMID:27548714
Designing Measurement Studies under Budget Constraints: Controlling Error of Measurement and Power.
ERIC Educational Resources Information Center
Marcoulides, George A.
1995-01-01
A methodology is presented for minimizing the mean error variance-covariance component in studies with resource constraints. The method is illustrated using a one-facet multivariate design. Extensions to other designs are discussed. (SLD)
Acceleration constraints in modeling and control of nonholonomic systems
NASA Astrophysics Data System (ADS)
Bajodah, Abdulrahman H.
2003-10-01
Acceleration constraints are used to enhance modeling techniques for dynamical systems. In particular, Kane's equations of motion subjected to bilateral constraints, unilateral constraints, and servo-constraints are modified by utilizing acceleration constraints for the purpose of simplifying the equations and increasing their applicability. The tangential properties of Kane's method provide relationships between the holonomic and the nonholonomic partial velocities, and hence allow one to describe nonholonomic generalized active and inertia forces in terms of their holonomic counterparts, i.e., those which correspond to the system without constraints. Therefore, based on the modeling process objectives, the holonomic and the nonholonomic vector entities in Kane's approach are used interchangeably to model holonomic and nonholonomic systems. When the holonomic partial velocities are used to model nonholonomic systems, the resulting models are full-order (also called nonminimal or unreduced) and separated in accelerations. As a consequence, they are readily integrable and can be used for generic system analysis. Other related topics are constraint forces, numerical stability of the nonminimal equations of motion, and numerical constraint stabilization. The two types of unilateral constraints considered are impulsive and friction constraints. Impulsive constraints are modeled by means of continuous-in-velocities and impulse-momentum approaches. In controlled motion, the acceleration form of constraints is utilized with the Moore-Penrose generalized inverse of the corresponding constraint matrix to solve for the inverse dynamics of servo-constraints, and for the redundancy resolution of overactuated manipulators. If control variables are involved in the algebraic constraint equations, then these tools are used to modify the controlled equations of motion in order to facilitate control system design. An illustrative example of spacecraft stabilization is presented.
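A minimal sketch of the acceleration-form constraint machinery the abstract mentions, using the Moore-Penrose inverse in the Udwadia-Kalaba style (an assumption about the construction, not the dissertation's exact equations):

```python
import numpy as np

def constrained_acceleration(M, Q, A, b):
    """Acceleration-form constraint A(q, u, t) @ udot = b(q, u, t).

    Returns the constrained generalized acceleration and the constraint
    force, using the Moore-Penrose inverse in the Udwadia-Kalaba form:
        udot = a + M^(-1/2) pinv(A M^(-1/2)) (b - A a),  a = M^(-1) Q.
    """
    a = np.linalg.solve(M, Q)          # unconstrained acceleration
    Mh = np.linalg.cholesky(M)         # M = Mh @ Mh.T (SPD factorization)
    Mih = np.linalg.inv(Mh).T          # one valid choice of M^(-1/2)
    K = np.linalg.pinv(A @ Mih)
    udot = a + Mih @ K @ (b - A @ a)
    Qc = M @ (udot - a)                # generalized constraint force
    return udot, Qc

# Particle of mass 2 kg under gravity, servo-constrained to zero vertical
# acceleration (e.g., altitude hold): A = [0, 1], b = [0].
M = np.diag([2.0, 2.0])
Q = np.array([0.0, -2.0 * 9.81])
udot, Qc = constrained_acceleration(M, Q, A=np.array([[0.0, 1.0]]), b=np.array([0.0]))
print(udot, Qc)   # udot -> [0, 0], Qc -> [0, 19.62]
```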
Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints
NASA Astrophysics Data System (ADS)
Kmet', Tibor; Kmet'ová, Mária
2009-09-01
A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining optimal control with control and state constraints.
Hughes, Eric; Maan, Abid Aslam; Acquistapace, Simone; Burbidge, Adam; Johns, Michael L; Gunes, Deniz Z; Clausen, Pascal; Syrbe, Axel; Hugo, Julien; Schroen, Karin; Miralles, Vincent; Atkins, Tim; Gray, Richard; Homewood, Philip; Zick, Klaus
2013-01-01
Monodisperse water-in-oil-in-water (WOW) double emulsions have been prepared using microfluidic glass devices designed and built primarily from off the shelf components. The systems were easy to assemble and use. They were capable of producing double emulsions with an outer droplet size from 100 to 40 μm. Depending on how the devices were operated, double emulsions containing either single or multiple water droplets could be produced. Pulsed-field gradient self-diffusion NMR experiments have been performed on the monodisperse water-in-oil-in-water double emulsions to obtain information on the inner water droplet diameter and the distribution of the water in the different phases of the double emulsion. This has been achieved by applying regularization methods to the self-diffusion data. Using these methods the stability of the double emulsions to osmotic pressure imbalance has been followed by observing the change in the size of the inner water droplets over time. Copyright © 2012 Elsevier Inc. All rights reserved.
Properties of an eclipsing double white dwarf binary NLTT 11748
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, David L.; Walker, Arielle N.; Marsh, Thomas R.
2014-01-10
We present high-quality ULTRACAM photometry of the eclipsing detached double white dwarf binary NLTT 11748. This system consists of a carbon/oxygen white dwarf and an extremely low mass (<0.2 M_☉) helium-core white dwarf in a 5.6 hr orbit. To date, such extremely low-mass white dwarfs, which can have thin, stably burning outer layers, have been modeled via poorly constrained atmosphere and cooling calculations where uncertainties in the detailed structure can strongly influence the eventual fates of these systems when mass transfer begins. With precise (individual precision ≈1%), high-cadence (≈2 s), multicolor photometry of multiple primary and secondary eclipses spanning >1.5 yr, we constrain the masses and radii of both objects in the NLTT 11748 system to a statistical uncertainty of a few percent. However, we find that overall uncertainty in the thickness of the envelope of the secondary carbon/oxygen white dwarf leads to a larger (≈13%) systematic uncertainty in the primary He WD's mass. Over the full range of possible envelope thicknesses, we find that our primary mass (0.136-0.162 M_☉) and surface gravity (log(g) = 6.32-6.38; radii are 0.0423-0.0433 R_☉) constraints do not agree with previous spectroscopic determinations. We use precise eclipse timing to detect the Rømer delay at 7σ significance, providing an additional weak constraint on the masses and limiting the eccentricity to e cos ω = (−4 ± 5) × 10⁻⁵. Finally, we use multicolor data to constrain the secondary's effective temperature (7600 ± 120 K) and cooling age (1.6-1.7 Gyr).
NASA Astrophysics Data System (ADS)
Murase, Kohta; Toomey, Michael W.; Fang, Ke; Oikonomou, Foteini; Kimura, Shigeo S.; Hotokezaka, Kenta; Kashiyama, Kazumi; Ioka, Kunihito; Mészáros, Peter
2018-02-01
The recent detection of gravitational waves and electromagnetic counterparts from the double neutron star merger event GW+EM170817 supports the standard paradigm of short gamma-ray bursts (SGRBs) and kilonovae/macronovae. It is important to reveal the nature of the compact remnant left after the merger, either a black hole or neutron star, and their physical link to the origin of the long-lasting emission observed in SGRBs. The diversity of the merger remnants may also lead to different kinds of transients that can be detected in future. Here we study the high-energy emission from the long-lasting central engine left after the coalescence, under certain assumptions. In particular, we consider the X-ray emission from a remnant disk and the nonthermal nebular emission from disk-driven outflows or pulsar winds. We demonstrate that late-time X-ray and high-frequency radio emission can provide useful constraints on properties of the hidden compact remnants and their connections to long-lasting SGRB emission, and we discuss the detectability of nearby merger events through late-time observations at ∼30–100 days after the coalescence. We also investigate the GeV–TeV gamma-ray emission that occurs in the presence of long-lasting central engines and show the importance of external inverse Compton radiation due to upscattering of X-ray photons by relativistic electrons in the jet. We also search for high-energy gamma rays from GW170817 in the Fermi-LAT data and report upper limits on such long-lasting emission. Finally, we consider the implications of GW+EM170817 and discuss the constraints placed by X-ray and high-frequency radio observations.
HST Imaging of the Eye of Horus, a Double Source Plane Gravitational Lens
NASA Astrophysics Data System (ADS)
Wong, Kenneth
2017-08-01
Double source plane (DSP) gravitational lenses are extremely rare alignments of a massive lens galaxy with two background sources at distinct redshifts. The presence of two source planes provides important constraints on cosmology and galaxy structure beyond that of typical lens systems by breaking degeneracies between parameters that vary with source redshift. While these systems are extremely valuable, only a handful are known. We have discovered the first DSP lens, the Eye of Horus, in the Hyper Suprime-Cam survey and have confirmed both source redshifts with follow-up spectroscopy, making this the only known DSP lens with both source redshifts measured. Furthermore, the brightest image of the most distant source (S2) is split into a pair of images by a mass component that is undetected in our ground-based data, suggesting the presence of a satellite or line-of-sight galaxy causing this splitting. In order to better understand this system and use it for cosmology and galaxy studies, we must construct an accurate lens model, accounting for the lensing effects of both the main lens galaxy and the intermediate source. Only with deep, high-resolution imaging from HST/ACS can we accurately model this system. Our proposed multiband imaging will clearly separate out the two sources by their distinct colors, allowing us to use their extended surface brightness distributions as constraints on our lens model. These data may also reveal the satellite galaxy responsible for the splitting of the brightest image of S2. With these observations, we will be able to take full advantage of the wealth of information provided by this system.
Projected land photosynthesis constrained by changes in the seasonal cycle of atmospheric CO2.
Wenzel, Sabrina; Cox, Peter M; Eyring, Veronika; Friedlingstein, Pierre
2016-10-27
Uncertainties in the response of vegetation to rising atmospheric CO2 concentrations contribute to the large spread in projections of future climate change. Climate-carbon cycle models generally agree that elevated atmospheric CO2 concentrations will enhance terrestrial gross primary productivity (GPP). However, the magnitude of this CO2 fertilization effect varies from a 20 per cent to a 60 per cent increase in GPP for a doubling of atmospheric CO2 concentrations in model studies. Here we demonstrate emergent constraints on large-scale CO2 fertilization using observed changes in the amplitude of the atmospheric CO2 seasonal cycle that are thought to be the result of increasing terrestrial GPP. Our comparison of atmospheric CO2 measurements from Point Barrow in Alaska and Cape Kumukahi in Hawaii with historical simulations of the latest climate-carbon cycle models demonstrates that the increase in the amplitude of the CO2 seasonal cycle at both measurement sites is consistent with increasing annual mean GPP, driven in part by climate warming, but with differences in CO2 fertilization controlling the spread among the model trends. As a result, the relationship between the amplitude of the CO2 seasonal cycle and the magnitude of CO2 fertilization of GPP is almost linear across the entire ensemble of models. When combined with the observed trends in the seasonal CO2 amplitude, these relationships lead to consistent emergent constraints on the CO2 fertilization of GPP. Overall, we estimate a GPP increase of 37 ± 9 per cent for high-latitude ecosystems and 32 ± 9 per cent for extratropical ecosystems under a doubling of atmospheric CO2 concentrations, on the basis of the Point Barrow and Cape Kumukahi records, respectively.
Retuning Rieske-type Oxygenases to Expand Substrate Range
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammadi, Mahmood; Viger, Jean-François; Kumar, Pravindra
2012-09-17
Rieske-type oxygenases are promising biocatalysts for the destruction of persistent pollutants or for the synthesis of fine chemicals. In this work, we explored pathways through which Rieske-type oxygenases evolve to expand their substrate range. BphAE_p4, a variant biphenyl dioxygenase generated from Burkholderia xenovorans LB400 BphAE_LB400 by the double substitution T335A/F336M, and BphAE_RR41, obtained by changing Asn338, Ile341, and Leu409 of BphAE_p4 to Gln338, Val341, and Phe409, metabolize dibenzofuran two and three times faster than BphAE_LB400, respectively. Steady-state kinetic measurements of single- and multiple-substitution mutants of BphAE_LB400 showed that the single T335A and the double N338Q/L409F substitutions contribute significantly to enhanced catalytic activity toward dibenzofuran. Analysis of crystal structures showed that the T335A substitution relieves constraints on a segment lining the catalytic cavity, allowing a significant displacement in response to dibenzofuran binding. The combined N338Q/L409F substitutions alter substrate-induced conformational changes of protein groups involved in subunit assembly and in the chemical steps of the reaction. This suggests a responsive induced-fit mechanism that retunes the alignment of protein atoms involved in the chemical steps of the reaction. These enzymes can thus expand their substrate range through mutations that alter the constraints or plasticity of the catalytic cavity to accommodate new substrates, or that alter the induced-fit mechanism required to achieve proper alignment of reaction-critical atoms or groups.
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-12-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
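A compact sketch of the Tikhonov-regularized route described above, using a piecewise-constant discretization of the Abel operator; the grid choices and regularization weight are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def abel_matrix(r_edges):
    """Discretize g(y) = 2 * int_y^R f(r) r / sqrt(r^2 - y^2) dr.

    f is piecewise-constant on cells [r_k, r_{k+1}]; y is sampled at the
    cell edges. Each entry integrates the kernel over one cell analytically,
    using int r / sqrt(r^2 - y^2) dr = sqrt(r^2 - y^2).
    """
    n = len(r_edges) - 1
    A = np.zeros((n, n))
    for i in range(n):           # y_i = r_edges[i]
        y = r_edges[i]
        for k in range(i, n):    # only cells with r >= y contribute
            r_lo, r_hi = max(r_edges[k], y), r_edges[k + 1]
            A[i, k] = 2.0 * (np.sqrt(r_hi**2 - y**2) - np.sqrt(r_lo**2 - y**2))
    return A

# Simulated projection of a known radial profile, with noise.
rng = np.random.default_rng(1)
r = np.linspace(0.0, 1.0, 81)                # cell edges
rc = 0.5 * (r[:-1] + r[1:])                  # cell centres
f_true = np.exp(-8.0 * rc**2)                # "unknown" radial profile
A = abel_matrix(r)
g = A @ f_true + 0.01 * rng.standard_normal(A.shape[0])

# Tikhonov regularization with a second-difference smoothing operator L.
n = len(rc)
L = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second differences
lam = 1e-3
f_rec = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ g)
print(np.max(np.abs(f_rec - f_true)))        # reconstruction error stays modest
```

Solving on a compact set (e.g., imposing monotonicity or non-negativity) would replace the linear solve with a bound-constrained quadratic program; the Tikhonov term above can be used with or without such constraints, as the abstract notes.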
New approach to CT pixel-based photon dose calculations in heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, J.W.; Henkelman, R.M.
The effects of small cavities on dose in water and the dose in a homogeneous non-unit-density medium illustrate that inhomogeneities do not act independently in photon dose perturbation, and serve as two constraints which should be satisfied by approximate methods of computed tomography (CT) pixel-based dose calculations. Current methods at best satisfy only one of the two constraints and show inadequacies in some intermediate geometries. We have developed an approximate method that satisfies both constraints and treats much of the synergistic effect of multiple inhomogeneities correctly. The method calculates primary and first-scatter doses by first-order ray tracing, with the first-scatter contribution augmented by a component of second scatter that behaves like first scatter. Multiple-scatter dose perturbation values extracted from small cavity experiments are used in a function which approximates the small residual multiple-scatter dose. For a wide range of geometries tested, our method agrees very well with measurements. The average deviation is less than 2%, with a maximum of 3%. In comparison, calculations based on existing methods can have errors larger than 10%.
BRDF invariant stereo using light transport constancy.
Wang, Liang; Yang, Ruigang; Davis, James E
2007-09-01
Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.
High-Precision Registration of Point Clouds Based on Sphere Feature Constraints.
Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter
2016-12-30
Point cloud registration is a key process in multi-view 3D measurements; its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method.
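For context, once corresponding sphere centres are available, the rigid transformation can be estimated in closed form by the standard Kabsch/SVD method, sketched below; the paper's weighted optimization is more elaborate, and the synthetic data here are hypothetical:

```python
import numpy as np

def rigid_transform_from_spheres(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    from matched sphere-centre coordinates in two scans (Kabsch/SVD).

    P, Q: (n, 3) arrays of corresponding sphere centres, n >= 3.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t

# Synthetic check: four sphere centres seen from two viewpoints.
rng = np.random.default_rng(2)
P = rng.random((4, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_transform_from_spheres(P, Q)
print(np.allclose(R, R_true), t)
```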
High-Precision Registration of Point Clouds Based on Sphere Feature Constraints
Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter
2016-01-01
Point cloud registration is a key process in multi-view 3D measurements; its precision directly affects the measurement precision. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. In this paper, a high-precision registration method based on sphere feature constraints is presented to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method. PMID:28042846
Study on Building Extraction from High-Resolution Images Using MBI
NASA Astrophysics Data System (ADS)
Ding, Z.; Wang, X. Q.; Li, Y. L.; Zhang, S. S.
2018-04-01
Building extraction from high-resolution remote sensing images is a hot research topic in the field of photogrammetry and remote sensing. However, the diversity and complexity of buildings mean that building extraction methods still face challenges in terms of accuracy, efficiency, and so on. In this study, a new building extraction framework based on the morphological building index (MBI), combined with image segmentation techniques and spectral, shadow, and shape constraints, is proposed. To verify the proposed method, WorldView-2, GF-2, and GF-1 remote sensing images covering Xiamen Software Park were used for building extraction experiments. Experimental results indicate that the proposed method improves on the original MBI significantly, with a correctness rate over 86%.
Shaping low-thrust trajectories with thrust-handling feature
NASA Astrophysics Data System (ADS)
Taheri, Ehsan; Kolmanovsky, Ilya; Atkins, Ella
2018-02-01
Shape-based methods are becoming popular in low-thrust trajectory optimization due to their fast computation speeds. In existing shape-based methods constraints are treated at the acceleration level but not at the thrust level. These two constraint types are not equivalent since spacecraft mass decreases over time as fuel is expended. This paper develops a shape-based method based on a Fourier series approximation that is capable of representing trajectories defined in spherical coordinates and that enforces thrust constraints. An objective function can be incorporated to minimize overall mission cost, i.e., achieve minimum ΔV . A representative mission from Earth to Mars is studied. The proposed Fourier series technique is demonstrated capable of generating feasible and near-optimal trajectories. These attributes can facilitate future low-thrust mission designs where different trajectory alternatives must be rapidly constructed and evaluated.
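For reference, the generic form of such a finite Fourier series parameterization, with the thrust-level constraint applied through inverse dynamics, can be written as follows; this is a schematic form under assumed notation, not the paper's exact expansion:

```latex
% Each coordinate is approximated by a truncated Fourier series over t in [0, T]:
r(t) = \frac{a_0}{2} + \sum_{n=1}^{N} \left[ a_n \cos\frac{n\pi t}{T}
       + b_n \sin\frac{n\pi t}{T} \right],
% with analogous series for the angular coordinates. Boundary conditions on
% position and velocity fix a subset of the coefficients; the remainder are free
% design parameters. The thrust acceleration follows from inverse dynamics, and
% the thrust (rather than acceleration) constraint is enforced as
\left\| \mathbf{T}(t) \right\| = m(t)\,\left\| \mathbf{a}_{\mathrm{thrust}}(t) \right\|
\le T_{\max},
\qquad \dot{m} = -\frac{\left\|\mathbf{T}(t)\right\|}{I_{\mathrm{sp}}\, g_0},
% which couples the shape to the decreasing spacecraft mass.
```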
Brain stimulation and constraint for perinatal stroke hemiparesis
Andersen, John; Herrero, Mia; Nettel-Aguirre, Alberto; Carsolio, Lisa; Damji, Omar; Keess, Jamie; Mineyko, Aleksandra; Hodge, Jacquie; Hill, Michael D.
2016-01-01
Objective: To determine whether the addition of repetitive transcranial magnetic stimulation (rTMS) and/or constraint-induced movement therapy (CIMT) to intensive therapy increases motor function in children with perinatal stroke and hemiparesis. Methods: A factorial-design, blinded, randomized controlled trial (clinicaltrials.gov/NCT01189058) assessed rTMS and CIMT effects in hemiparetic children (aged 6–19 years) with MRI-confirmed perinatal stroke. All completed a 2-week, goal-directed, peer-supported motor learning camp randomized to daily rTMS, CIMT, both, or neither. Primary outcomes were the Assisting Hand Assessment and the Canadian Occupational Performance Measure at baseline, and 1 week, 2 and 6 months postintervention. Outcome assessors were blinded to treatment. Interim safety analyses occurred after 12 and 24 participants. Intention-to-treat analysis examined treatment effects over time (linear mixed effects model). Results: All 45 participants completed the trial. Addition of rTMS, CIMT, or both doubled the chances of clinically significant improvement. Assisting Hand Assessment gains at 6 months were additive and largest with rTMS + CIMT (β coefficient = 5.54 [2.57–8.51], p = 0.0004). The camp alone produced large improvements in Canadian Occupational Performance Measure scores, maximal at 6 months (Cohen d = 1.6, p = 0.002). Quality-of-life scores improved. Interventions were well tolerated and safe with no decrease in function of either hand. Conclusions: Hemiparetic children participating in intensive, psychosocial rehabilitation programs can achieve sustained functional gains. Addition of CIMT and rTMS increases the chances of improvement. Classification of evidence: This study provides Class II evidence that combined rTMS and CIMT enhance therapy-induced functional motor gains in children with stroke-induced hemiparetic cerebral palsy. PMID:27029628
NASA Astrophysics Data System (ADS)
Xu, Feng; van Harten, Gerard; Diner, David J.; Kalashnikova, Olga V.; Seidel, Felix C.; Bruegge, Carol J.; Dubovik, Oleg
2017-07-01
The Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) has been flying aboard the NASA ER-2 high-altitude aircraft since October 2010. In step-and-stare operation mode, AirMSPI acquires radiance and polarization data in bands centered at 355, 380, 445, 470*, 555, 660*, 865*, and 935 nm (* denotes polarimetric bands). The imaged area covers about 10 km by 11 km and is typically observed from nine viewing angles between ±66° off nadir. For a simultaneous retrieval of aerosol properties and surface reflection using AirMSPI, an efficient and flexible retrieval algorithm has been developed. It imposes multiple types of physical constraints on spectral and spatial variations of aerosol properties as well as spectral and temporal variations of surface reflection. Retrieval uncertainty is formulated by accounting for both instrumental errors and physical constraints. A hybrid Markov-chain/adding-doubling radiative transfer (RT) model is developed to combine the computational strengths of these two methods in modeling polarized RT in vertically inhomogeneous and homogeneous media, respectively. Our retrieval approach is tested using 27 AirMSPI data sets with low to moderately high aerosol loadings, acquired during four NASA field campaigns plus one AirMSPI preengineering test flight. The retrieval results including aerosol optical depth, single-scattering albedo, aerosol size and refractive index are compared with Aerosol Robotic Network reference data. We identify the best angular combinations for 2, 3, 5, and 7 angle observations from the retrieval quality assessment of various angular combinations. We also explore the benefits of polarimetric and multiangular measurements and target revisits in constraining aerosol property and surface reflection retrieval.
NASA Astrophysics Data System (ADS)
Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed
2016-10-01
The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for constraint potentials, density and temperature, which allows mixing to be taken into account alongside chemical reaction without operator splitting. The RCCE is a dimension reduction method for chemical kinetics based on the laws of thermodynamics. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. The RCCE is applied to a spatially homogeneous constant pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprising 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed the detailed kinetics model (DKM). The RCCE shows accurate prediction of combustion in the PaSR with different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
NASA Astrophysics Data System (ADS)
Winicour, Jeffrey
2017-08-01
An algebraic-hyperbolic method for solving the Hamiltonian and momentum constraints has recently been shown to be well posed for general nonlinear perturbations of the initial data for a Schwarzschild black hole. This is a new approach to solving the constraints of Einstein’s equations which does not involve elliptic equations and has potential importance for the construction of binary black hole data. In order to shed light on the underpinnings of this approach, we consider its application to obtain solutions of the constraints for linearized perturbations of Minkowski space. In that case, we find the surprising result that there are no suitable Cauchy hypersurfaces in Minkowski space for which the linearized algebraic-hyperbolic constraint problem is well posed.
A probability space for quantum models
NASA Astrophysics Data System (ADS)
Lemmens, L. F.
2017-06-01
A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes and probabilities defined for all events. A reformulation in terms of propositions allows the use of the maximum entropy method to assign the probabilities taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, choosing the constraints and making the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
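As a concrete illustration of this maximum-entropy assignment, the sketch below recovers the Gibbs form p_i proportional to exp(-beta E_i) implied by a single mean-value constraint and solves for the multiplier beta numerically; the energies and target average are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy probability assignment for discrete outcomes with energies
# E_i, subject to a mean-energy constraint <E> = Ebar. The solution has the
# Gibbs form p_i ~ exp(-beta * E_i); brentq finds the multiplier beta.
# Energies and the target average below are illustrative.
E = np.array([0.0, 1.0, 2.0, 3.0])
Ebar = 1.2

def mean_energy(beta):
    w = np.exp(-beta * E)
    return (w @ E) / w.sum()

beta = brentq(lambda b: mean_energy(b) - Ebar, -50.0, 50.0)
p = np.exp(-beta * E)
p /= p.sum()
print(p, p @ E)   # the probabilities reproduce the constrained average
```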
NASA Technical Reports Server (NTRS)
Motiwalla, S. K.
1973-01-01
Using the first and the second derivative of flutter velocity with respect to the parameters, the velocity hypersurface is made quadratic. This greatly simplifies the numerical procedure developed for determining the values of the design parameters such that a specified flutter velocity constraint is satisfied and the total structural mass is near a relative minimum. A search procedure is presented utilizing two gradient search methods and a gradient projection method. The procedure is applied to the design of a box beam, using finite-element representation. The results indicate that the procedure developed yields substantial design improvement satisfying the specified constraint and does converge to near a local optimum.
Anaerobic Threshold and Salivary α-amylase during Incremental Exercise.
Akizuki, Kazunori; Yazaki, Syouichirou; Echizenya, Yuki; Ohashi, Yukari
2014-07-01
[Purpose] The purpose of this study was to clarify the validity of salivary α-amylase as a method of quickly estimating anaerobic threshold and to establish the relationship between salivary α-amylase and double-product breakpoint in order to create a way to adjust exercise intensity to a safe and effective range. [Subjects and Methods] Eleven healthy young adults performed an incremental exercise test using a cycle ergometer. During the incremental exercise test, oxygen consumption, carbon dioxide production, and ventilatory equivalent were measured using a breath-by-breath gas analyzer. Systolic blood pressure and heart rate were measured to calculate the double product, from which double-product breakpoint was determined. Salivary α-amylase was measured to calculate the salivary threshold. [Results] One-way ANOVA revealed no significant differences among workloads at the anaerobic threshold, double-product breakpoint, and salivary threshold. Significant correlations were found between anaerobic threshold and salivary threshold and between anaerobic threshold and double-product breakpoint. [Conclusion] As a method for estimating anaerobic threshold, salivary threshold was as good as or better than determination of double-product breakpoint because the correlation between anaerobic threshold and salivary threshold was higher than the correlation between anaerobic threshold and double-product breakpoint. Therefore, salivary threshold is a useful index of anaerobic threshold during an incremental workload.
The free energy of a reaction coordinate at multiple constraints: a concise formulation
NASA Astrophysics Data System (ADS)
Schlitter, Jürgen; Klähn, Marco
The free energy as a function of the reaction coordinate (rc) is the key quantity for the computation of equilibrium and kinetic quantities. When it is considered as the potential of mean force, the problem is the calculation of the mean force for given values of the rc. We reinvestigate the PMCF (potential of mean constraint force) method which applies a constraint to the rc to compute the mean force as the mean negative constraint force and a metric tensor correction. The latter accounts for the constraint imposed on the rc and for possible artefacts due to multiple constraints of other variables which for practical reasons are often used in numerical simulations. Two main results are obtained that are of theoretical and practical interest. First, the correction term is given a very concise and simple form which facilitates its interpretation and evaluation. Secondly, a theorem describes various rcs and possible combinations with constraints that can be used without introducing any correction to the constraint force. The results facilitate the computation of free energy by molecular dynamics simulations.
Deformable image registration with local rigidity constraints for cone-beam CT-guided spine surgery
NASA Astrophysics Data System (ADS)
Reaungamornrat, S.; Wang, A. S.; Uneri, A.; Otake, Y.; Khanna, A. J.; Siewerdsen, J. H.
2014-07-01
Image-guided spine surgery (IGSS) is associated with reduced co-morbidity and improved surgical outcome. However, precise localization of target anatomy and adjacent nerves and vessels relative to planning information (e.g., device trajectories) can be challenged by anatomical deformation. Rigid registration alone fails to account for deformation associated with changes in spine curvature, and conventional deformable registration fails to account for rigidity of the vertebrae, causing unrealistic distortions in the registered image that can confound high-precision surgery. We developed and evaluated a deformable registration method capable of preserving rigidity of bones while resolving the deformation of surrounding soft tissue. The method aligns preoperative CT to intraoperative cone-beam CT (CBCT) using free-form deformation (FFD) with constraints on rigid body motion imposed according to a simple intensity threshold of bone intensities. The constraints enforced three properties of a rigid transformation—namely, constraints on affinity (AC), orthogonality (OC), and properness (PC). The method also incorporated an injectivity constraint (IC) to preserve topology. Physical experiments involving phantoms, an ovine spine, and a human cadaver as well as digital simulations were performed to evaluate the sensitivity to registration parameters, preservation of rigid body morphology, and overall registration accuracy of constrained FFD in comparison to conventional unconstrained FFD (uFFD) and Demons registration. FFD with orthogonality and injectivity constraints (denoted FFD+OC+IC) demonstrated improved performance compared to uFFD and Demons. Affinity and properness constraints offered little or no additional improvement. The FFD+OC+IC method preserved rigid body morphology at near-ideal values of zero dilatation (D = 0.05, compared to 0.39 and 0.56 for uFFD and Demons, respectively) and shear (S = 0.08, compared to 0.36 and 0.44 for uFFD and Demons, respectively). Target registration error (TRE) was similarly improved for FFD+OC+IC (0.7 mm), compared to 1.4 and 1.8 mm for uFFD and Demons. Results were validated in human cadaver studies using CT and CBCT images, with FFD+OC+IC providing excellent preservation of rigid morphology and equivalent or improved TRE. The approach therefore overcomes distortions intrinsic to uFFD and could better facilitate high-precision IGSS.
Kim, Hyung Suk; Lee, Byung Ki; Jung, Jin-Woo; Lee, Jung Keun; Byun, Seok-Soo; Lee, Sang Eun; Jeong, Chang Wook
2014-11-01
Double-J stent insertion has been generally performed during laparoscopic upper urinary tract (UUT) surgical procedures to prevent transient urinary tract obstruction and postoperative flank pain from ureteral edema and blood clots. Several restrictive conditions that make this procedure difficult and time consuming, however, include the coiled distal ends of the flexible Double-J stent and the limited bending angle of the laparoscopic instruments. To overcome these limitations, we devised a Double-J stent insertion method using the new J-tube technique. Between July 2011 and May 2013, Double-J stents were inserted using the J-tube technique in 33 patients who underwent a laparoscopic UUT surgical procedure by a single surgeon. The mean stent placement time was 4.8±2.7 minutes, and there were no intraoperative complications. In conclusion, the J-tube technique is a safe and time-saving method for Double-J stent insertion during laparoscopic surgical procedures.
A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
2008-01-01
task of learning dynamic textures from image sequences as well as to modeling biosurveillance drug-sales data. The constraint generation approach...previous methods in our experiments. One application of LDSs in computer vision is learning dynamic textures from video data [8]. An advantage of...over-the-counter (OTC) drug sales for biosurveillance, and sunspot numbers from the UCR archive [9]. Comparison to the best alternative methods [7, 10
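The constraint-generation idea sketched in these fragments can be written compactly: fit the dynamics matrix by least squares, and whenever the estimate is unstable, add a linear cut built from its top singular vectors and refit. The sketch below follows that recipe under stated simplifications (SciPy's SLSQP stands in for the original quadratic-programming machinery, and the tolerances are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

# Constraint generation for a stable linear dynamical system x_{t+1} = A x_t:
# refit the least-squares estimate under accumulated cuts u^T A v <= 1 built
# from the top singular vectors of each unstable iterate. Constraining the
# largest singular value to <= 1 is sufficient for stability.
def fit_stable_lds(X0, X1, tol=1e-6, max_cuts=50):
    n = X0.shape[0]
    cons = []

    def loss(a):
        A = a.reshape(n, n)
        return np.sum((A @ X0 - X1) ** 2)

    a = np.linalg.lstsq(X0.T, X1.T, rcond=None)[0].T.ravel()
    for _ in range(max_cuts):
        A = a.reshape(n, n)
        u, s, vt = np.linalg.svd(A)
        if s[0] <= 1.0 + tol:          # spectral norm <= 1: done
            return A
        g = np.outer(u[:, 0], vt[0]).ravel()   # u1^T A v1 = g . vec(A)
        cons.append({"type": "ineq", "fun": lambda a, g=g: 1.0 - g @ a})
        a = minimize(loss, a, constraints=cons, method="SLSQP").x
    return a.reshape(n, n)

rng = np.random.default_rng(3)
X = rng.normal(size=(4, 101))          # toy state sequence
A = fit_stable_lds(X[:, :-1], X[:, 1:])
```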
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis has many advantages over pixel-based methods, so it is one of the current research hotspots. It is very important to obtain image objects by multi-scale image segmentation in order to carry out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented segmentation (or over-segmentation) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features deserve more attention than the others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the influence of local versus macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and can give priority control to the salient objects of interest. This method has been used in image quality evaluation, scattered residential area extraction, sparse forest extraction and other applications to verify its validity. All applications showed good results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Q; Stanford University School of Medicine, Stanford, CA; Liu, H
Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which results in strong noise in each channel and affects the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines a dictionary-based sparse representation with a patch-based low-rank constraint to simultaneously improve the reconstruction in each channel and to exploit the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of patch-based sparsity in each energy channel, which is the result of the dictionary-based sparse representation constraint; (2) explicit consideration of the correlations among different energy channels, which is realized by a patch-by-patch nuclear-norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With the averaging operation to reduce noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them together can express all structural information in the current channel. Results: Dictionary learning based methods obtain better results than FBP and the TV-based method. With the low-rank constraint, the image quality can be further improved in the channels with more noise. The final color result by the proposed method has the best visual quality. Conclusion: The proposed method can effectively improve the image quality of low-dose spectral CT. This work is partially supported by the National Natural Science Foundation of China (No. 61302136), and the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2014JQ8317).
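The patch-wise low-rank ingredient is easy to illustrate in isolation. In the hedged sketch below, the same patch extracted from every energy channel forms the columns of a matrix, and singular-value soft thresholding (the proximal operator of the nuclear norm) shrinks it toward low rank; patch extraction, the dictionary step, and the threshold value are illustrative simplifications, not the paper's full algorithm.

```python
import numpy as np

# Patch-wise low-rank shrinkage across energy channels: stack the same patch
# from every channel as columns and soft-threshold the singular values (the
# proximal operator of the nuclear norm). `tau` is an illustrative threshold.
def lowrank_shrink_patch(P, tau):
    """P: (patch_pixels, n_channels) matrix of one patch across channels."""
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

# A rank-1 "clean" patch repeated over 8 channels, plus noise, is pushed
# back toward rank 1 by the shrinkage.
rng = np.random.default_rng(0)
clean = np.outer(rng.normal(size=64), np.ones(8))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = lowrank_shrink_patch(noisy, tau=0.5)
```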
Neighboring extremals of dynamic optimization problems with path equality constraints
NASA Technical Reports Server (NTRS)
Lee, A. Y.
1988-01-01
Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.
Portfolios with nonlinear constraints and spin glasses
NASA Astrophysics Data System (ADS)
Gábor, Adrienn; Kondor, I.
1999-12-01
In a recent paper Galluccio, Bouchaud and Potters demonstrated that a certain portfolio problem with a nonlinear constraint maps exactly onto finding the ground states of a long-range spin glass, with the concomitant nonuniqueness and instability of the optimal portfolios. Here we put forward geometric arguments that lead to qualitatively similar conclusions, without recourse to the methods of spin glass theory, and give two more examples of portfolio problems with convex nonlinear constraints.
Ligand-based virtual screening under partial shape constraints.
von Behren, Mathias M; Rarey, Matthias
2017-04-01
Ligand-based virtual screening has proven to be a viable technology during the search for new lead structures in drug discovery. Despite the rapidly increasing number of published methods, meaningful shape matching as well as ligand and target flexibility still remain open challenges. In this work, we analyze the influence of knowledge-based sterical constraints on the performance of the recently published ligand-based virtual screening method mRAISE. We introduce the concept of partial shape matching enabling a more differentiated view on chemical structure. The new method is integrated into the LBVS tool mRAISE providing multiple options for such constraints. The applied constraints can either be derived automatically from a protein-ligand complex structure or by manual selection of ligand atoms. In this way, the descriptor directly encodes the fit of a ligand into the binding site. Furthermore, the conservation of close contacts between the binding site surface and the query ligand can be enforced. We validated our new method on the DUD and DUD-E datasets. Although the statistical performance remains on the same level, detailed analyses reveal that for certain targets, especially very flexible ones, a significant improvement can be achieved. This is further highlighted by looking at the quality of calculated molecular alignments using the recently introduced mRAISE dataset. The new partial shape constraints improved the overall quality of molecular alignments, especially for difficult targets with highly flexible or differently sized molecules. The software tool mRAISE is freely available on Linux operating systems for evaluation purposes and academic use (see http://www.zbh.uni-hamburg.de/raise).
Acousto-optical modulation of light at a doubled sound frequency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotov, V M; Averin, S V; Shkerdin, G N
2016-02-28
A method of acousto-optical (AO) Bragg diffraction is proposed that provides the amplitude modulation of optical radiation at a doubled acoustic frequency. The method is based on double transmission of the light through an AO modulator made of a gyrotropic crystal and is demonstrated experimentally by modulating light at a wavelength of 0.63 μm with a paratellurite AO cell.
NASA Technical Reports Server (NTRS)
Tsai, C.; Szabo, B. A.
1973-01-01
An approach to the finite element method which utilizes families of conforming finite elements based on complete polynomials is presented. Finite element approximations based on this method converge with respect to progressively reduced element sizes as well as with respect to progressively increasing orders of approximation. Numerical results of static and dynamic applications of plates are presented to demonstrate the efficiency of the method. Comparisons are made with plate elements in NASTRAN and the high-precision plate element developed by Cowper and his co-workers. Some consideration is given to implementation of the constraint method in general purpose computer programs such as NASTRAN.
Mariella, Jr., Raymond P.
2008-11-18
A method of synthesizing a desired double-stranded DNA of a predetermined length and of a predetermined sequence. Preselected sequence segments that will complete the desired double-stranded DNA are determined. Preselected segment sequences of DNA that will be used to complete the desired double-stranded DNA are provided. The preselected segment sequences of DNA are assembled to produce the desired double-stranded DNA.
NASA Astrophysics Data System (ADS)
Nomoto, Ken'ichi; Tolstov, Alexey; Sorokina, Elena; Blinnikov, Sergei; Bersten, Melina; Suzuki, Tomoharu
2017-11-01
The physical origin of Type-I (hydrogen-less) superluminous supernovae (SLSNe-I), whose luminosities are 10 to 500 times higher than those of normal core-collapse supernovae, still remains unknown. Thanks to their brightness, SLSNe-I would be useful probes of the distant Universe. For the power source of the light curves of SLSNe-I, radioactive decays, magnetars, and circumstellar interactions have been proposed, although no definitive conclusions have been reached yet. Since most light curve studies have been based on simplified semi-analytic models, we have constructed multi-color light curve models by means of detailed radiation hydrodynamical calculations for stars of various masses, including very massive ones with large amounts of mass loss. We compare the rising time, peak luminosity, width, and decline rate of the model light curves with observations of SLSNe-I and obtain constraints on their progenitors and explosion mechanisms. We pay particular attention to the recently reported double peaks of the light curves. We discuss how to discriminate among the three models, the relevant model parameters, their evolutionary origins, and implications for the early evolution of the Universe.
Barbancho, Miguel A; Berthier, Marcelo L; Navas-Sánchez, Patricia; Dávila, Guadalupe; Green-Heredia, Cristina; García-Alberca, José M; Ruiz-Cruces, Rafael; López-González, Manuel V; Dawid-Milner, Marc S; Pulvermüller, Friedemann; Lara, J Pablo
2015-01-01
Changes in ERP (P100 and N400) and root mean square (RMS) were obtained during a silent reading task in 28 patients with chronic post-stroke aphasia in a randomized, double-blind, placebo-controlled trial of both memantine and constraint-induced aphasia therapy (CIAT). Participants received memantine/placebo alone (weeks 0-16), followed by drug treatment combined with CIAT (weeks 16-18), and then memantine/placebo alone (weeks 18-20). ERP/RMS values (week 16) decreased more in the memantine group than in the placebo group. During CIAT application (weeks 16-18), improvements in aphasia severity and ERP/RMS values were amplified by memantine, and changes remained stable thereafter (weeks 18-20). Changes in ERP/RMS occurred in left and right hemispheres and correlated with gains in language performance. No changes in ERP/RMS were found in a healthy group in two separate evaluations. Our results show that aphasia recovery induced by memantine both alone and in combination with CIAT is indexed by bilateral cortical potentials.
Concerning the Video Drift Method to Measure Double Stars
NASA Astrophysics Data System (ADS)
Nugent, Richard L.; Iverson, Ernest W.
2015-05-01
Classical methods to measure position angles and separations of double stars rely on just a few measurements either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments from eyepiece/camera/barlow lens/micrometer/focal reducers, systematic errors from uncorrected optical distortions, aberrations from the telescope system, camera tilt, and magnitude and color effects. Conventional video methods rely on calibration doubles and graphically calculating the east-west direction, plus careful choice of select video frames stacked for measurement. Atmospheric motion, which is on the order of 0.5-1.5 arcseconds, is one of the larger sources of error in any exposure/measurement method. Ideally, if a data set from a short video can be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
NASA Astrophysics Data System (ADS)
Wagner, Andreas; Spelsberg-Korspeter, Gottfried
2013-09-01
The finite element method is one of the most common tools for the comprehensive analysis of structures, with applications reaching from static, often nonlinear stress-strain, to transient dynamic analyses. For single calculations the expense of generating an appropriate mesh is often insignificant compared to the analysis time, even for complex geometries, and is therefore negligible. However, this is not the case for certain other applications, most notably structural optimization procedures, where the (re-)meshing effort is very important with respect to the total runtime of the procedure. Thus it is desirable to find methods to efficiently generate mass and stiffness matrices that allow this effort to be reduced, especially for structures with modifications of minor complexity, e.g. panels with cutouts. Therefore, a modeling approach referred to as the Energy Modification Method is proposed in this paper. The underlying idea is to model and discretize the basis structure, e.g. a plate, and the modifications, e.g. holes, separately. The discretized energy expressions of the modifications are then subtracted from (or added to) the energy expressions of the basis structure, and the coordinates are related to each other by kinematical constraints, leading to the mass and stiffness matrices of the complete structure. This approach will be demonstrated by two simple examples, a rod with varying material properties and a rectangular plate with a rectangular or circular hole, using a finite element discretization as a basis. Convergence studies of the method based on the latter example follow, demonstrating the rapid convergence and efficiency of the method. Finally, the Energy Modification Method is successfully used in the structural optimization of a circular plate with holes, with the objective of splitting all its double eigenfrequencies.
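A one-dimensional analogue of the rod example makes the bookkeeping concrete. In the hedged sketch below, the basis rod is discretized once and the separately assembled energy of a softened segment is subtracted from it; because both discretizations share one mesh here, the kinematical constraint equations of the general case reduce to an identity. All numbers are illustrative.

```python
import numpy as np

# Energy Modification Method, 1-D rod flavor: assemble the basis rod once,
# assemble the energy of the modification separately, and subtract it.
# Here the "modification" softens two mid-span elements.
def rod_stiffness(EA, h, n_elem, elems=None):
    n = n_elem + 1
    K = np.zeros((n, n))
    ke = (EA / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e in (range(n_elem) if elems is None else elems):
        K[e:e + 2, e:e + 2] += ke
    return K

n_elem, L, EA = 10, 1.0, 1.0
h = L / n_elem
K_basis = rod_stiffness(EA, h, n_elem)                 # unmodified rod
K_mod = rod_stiffness(0.5 * EA, h, n_elem, [4, 5])     # energy removed
K = K_basis - K_mod    # rod whose elements 4 and 5 keep half their stiffness
```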
NASA Astrophysics Data System (ADS)
Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz
2018-06-01
The joint interpretation of two sets of geophysical data related to the same source is an appropriate method for decreasing the non-uniqueness of the resulting models during the inversion process. Among the available methods, one based on a cross-gradient constraint that combines the two datasets is an efficient approach. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of the situation and extension of the anomaly of interest. In this paper, the first step is to speed up the required calculations by substituting singular value decomposition with the least-squares QR (LSQR) method to solve the large-scale kernel matrix of the 3D inversion more rapidly. Furthermore, to improve the accuracy of the resulting models, a combination of a depth-weighting matrix and a compactness constraint, with automatic selection of the covariance of the initial parameters, is used in the proposed inversion algorithm. The algorithm was developed in the Matlab environment and first implemented on synthetic data. The 3D joint inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of the algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset from the Jalalabad mine in southeastern Iran was tested. The results obtained by the improved joint 3D inversion with cross-gradient and compactness constraints showed a mineralised zone at a depth interval of about 110-300 m, which is in good agreement with the available drilling data. This further confirms the accuracy and progress of the improved inversion algorithm.
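To make the LSQR ingredient concrete, the sketch below solves a depth-weighted linear inverse problem with SciPy's iterative lsqr in place of a full singular value decomposition. The Li-Oldenburg-style weighting, kernel G, data d, and cell depths z are illustrative assumptions, and the cross-gradient and compactness terms are omitted.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Depth-weighted least squares solved iteratively with LSQR instead of a
# full SVD. w penalizes shallow cells more, counteracting kernel decay.
def depth_weighted_lsqr(G, d, z, z0=1.0, beta=1.5, damp=1e-2):
    w = (z + z0) ** (-beta / 2.0)
    mw = lsqr(G / w, d, damp=damp)[0]   # solve in the weighted model space
    return mw / w                       # map back to the physical model

rng = np.random.default_rng(4)
z = np.linspace(10.0, 300.0, 50)                # cell depths (m)
G = rng.normal(size=(80, 50)) / (z + 1.0)       # kernel decaying with depth
d = G @ np.exp(-(z - 150.0) ** 2 / 500.0)       # anomaly near 150 m depth
m = depth_weighted_lsqr(G, d, z)
```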
Choi, Mi-Ri; Jeon, Sang-Wan; Yi, Eun-Surk
2018-04-01
The purpose of this study is to analyze the differences among hospitalized cancer patients in their perception of exercise and physical activity constraints based on their medical history. The study used a questionnaire survey as the measurement tool for 194 cancer patients (male or female, aged 20 or older) living in the Seoul metropolitan area (Seoul, Gyeonggi, Incheon). The collected data were analyzed using frequency analysis, exploratory factor analysis, reliability analysis, t-test, and one-way analysis of variance with the statistical program SPSS 18.0. The following results were obtained. First, there was no statistically significant difference between cancer stage and exercise recognition/physical activity constraint. Second, there was a significant difference between cancer stage and sociocultural constraint/facility constraint/program constraint. Third, there was a significant difference between cancer operation history and physical/sociocultural/facility/program constraint. Fourth, there was a significant difference between cancer operation history and negative perception/facility/program constraint. Fifth, there was a significant difference between ancillary cancer treatment method and negative perception/facility/program constraint. Sixth, there was a significant difference between hospitalization period and positive perception/negative perception/physical constraint/cognitive constraint. In conclusion, this study will provide the information necessary to create a patient-centered healthcare service system by analyzing the exercise recognition of hospitalized cancer patients based on their medical history and by investigating the constraint factors that prevent patients from actually making efforts to exercise.
Briscoe, J; Rankin, P M
2009-01-01
Children with specific language impairment (SLI) often experience difficulties in the recall and repetition of verbal information. Archibald and Gathercole (2006) suggested that children with SLI are vulnerable across two separate components of a tripartite model of working memory (Baddeley and Hitch 1974). However, the hierarchical relationship between the 'slave' systems (temporary storage) and the central executive components poses a particular challenge for interpreting working memory profiles within a tripartite model. This study aimed to examine whether a 'double-jeopardy' assumption is compatible with a hierarchical relationship between the phonological loop and central executive components of the working memory model in children with SLI. If a strong double-jeopardy assumption is valid for children with SLI, it was predicted that raw scores on working memory tests thought to tap the phonological loop and central executive components of tripartite working memory would be lower than the scores of children matched for chronological age and those of children matched for language level, according to independent sources of constraint. In contrast, a hierarchical relationship would imply that a weakness in a slave component of working memory (the phonological loop) would also constrain performance on tests tapping a super-ordinate component (central executive). This locus of constraint would predict that scores of children with SLI on working memory tests that tap the central executive would be weaker relative to the scores of chronological age-matched controls only. Seven subtests of the Working Memory Test Battery for Children (Digit recall, Word recall, Non-word recall, Word matching, Listening recall, Backwards digit recall and Block recall; Pickering and Gathercole 2001) were administered to 14 children with SLI recruited via language resource bases and specialist schools, as well as two control groups matched on chronological age and vocabulary level, respectively. Mean group differences were ascertained by directly comparing raw scores on memory tests linked to different components of the tripartite model using a series of multivariate analyses. The majority of working memory scores of the SLI group were depressed relative to chronological age-matched controls, with the exception of spatial recall (block tapping) and word (order) matching tasks. Marked deficits in serial recall of words and digits were evident, with the SLI group scoring more poorly than the language-ability matched control group on these measures. Impairments of the SLI group on phonological loop tasks were robust, even when covariance with executive working memory scores was accounted for. There was no robust effect of group on complex working memory (central executive) tasks, despite a slight association between listening recall and phonological loop measures. A predominant feature of the working memory profile of SLI was a marked deficit on phonological loop tasks. Although scores on complex working memory tasks were also depressed, there was little evidence for a strong interpretation of double-jeopardy within working memory profiles for these children; rather, these findings were consistent with an interpretation of a constraint on the phonological loop for children with SLI that operated at all levels of a hierarchical tripartite model of working memory (Baddeley and Hitch 1974).
These findings imply that low scores on complex working memory tasks alone do not unequivocally imply an independent deficit in central executive (domain-general) resources of working memory and should therefore be treated cautiously in a clinical context.
Ordering of the O-O stretching vibrational frequencies in ozone
NASA Technical Reports Server (NTRS)
Scuseria, Gustavo E.; Lee, Timothy J.; Scheiner, Andrew C.; Schaefer, Henry F., III
1989-01-01
The ordering of nu1 and nu3 for O3 is incorrectly predicted by most theoretical methods, including some very high level methods. The first systematic single-reference electron correlation method to solve this problem is the coupled-cluster single and double excitation (CCSD) method. However, a relatively large basis set, triple zeta plus double polarization, is required. Comparison with other theoretical methods is made.
Constrained Multi-Level Algorithm for Trajectory Optimization
NASA Astrophysics Data System (ADS)
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low-cost access to space has inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach to optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and atmospheric effects are gradually introduced. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve in the direct and indirect shooting methods. The problem becomes even more complex when different phases of the trajectory have different optimization objectives and also have different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for the global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables. This process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of nonlinear sensitivity inherent in single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in the framework of an augmented Lagrangian. Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref. 5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner-loop optimizations required in the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints and the multiple objectives of different trajectory phases. Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization", The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286. Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile Using a Multitier Approach", Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669. Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs", Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304. Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization", Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207. Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization", Acta Astronautica, Vol. 15, 1987, pp. 845-850.
Carroll, Raymond J; Delaigle, Aurore; Hall, Peter
2011-03-01
In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Belcher, AH; Wiersma, R
Purpose: In radiation therapy optimization the constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently the voxel dose constraints are viewed as soft constraints, included as part of the objective function and approximated as an unconstrained problem. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators were first constructed as applicable to quadratic IMRT constrained optimization and the problem was formulated in the graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including the TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi) and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs the best, and it was 3-5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8-100 times faster than the other solvers. Conclusion: A graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers and it also used significantly less computer memory.
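For flavor, here is a stripped-down ADMM iteration on a toy problem of the constrained-quadratic type discussed above: minimize 0.5*||Dx - p||^2 subject to x >= 0, with the usual x/z/u splitting. D and p are random placeholders, and the graph-form construction with general proximal operators is not reproduced.

```python
import numpy as np

# ADMM for min 0.5*||D x - p||^2 s.t. x >= 0 (x: beamlet weights, D: toy
# dose-influence matrix). The x-update is a cached Cholesky solve, the
# z-update a projection onto the constraint set, u the scaled dual variable.
def admm_nnls(D, p, rho=1.0, iters=200):
    n = D.shape[1]
    Lc = np.linalg.cholesky(D.T @ D + rho * np.eye(n))
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        rhs = D.T @ p + rho * (z - u)
        x = np.linalg.solve(Lc.T, np.linalg.solve(Lc, rhs))
        z = np.maximum(x + u, 0.0)     # hard constraint enforced exactly
        u += x - z
    return z

rng = np.random.default_rng(1)
D = rng.normal(size=(40, 20))
p = rng.normal(size=40)
x = admm_nnls(D, p)
assert x.min() >= 0.0
```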
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tojo, H.; Hatae, T.; Yatsuka, E.
2012-10-15
This paper focuses on a method for measuring the electron temperature (T_e) without knowing the transmissivity using Thomson scattering diagnostic with a double-pass scattering system. Application of this method for measuring the anisotropic T_e, i.e., the T_e in the directions parallel (T_e∥) and perpendicular (T_e⊥) to the magnetic field, is proposed. Simulations based on the designed parameters for JT-60SA indicate the feasibility of the measurements except in certain T_e ranges, e.g., T_e∥ ≈ 3.5 T_e⊥ at a scattering angle of 120°.
A new look at the simultaneous analysis and design of structures
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1994-01-01
The minimum weight optimization of structural systems, subject to strength and displacement constraints as well as size side constraints, was investigated by the Simultaneous ANalysis and Design (SAND) approach. As the optimizer, the code NPSOL, which is based on a sequential quadratic programming (SQP) algorithm, was used. The structures were modeled by the finite element method. The finite element related input to NPSOL was automatically generated from the input decks of such standard FEM/optimization codes as NASTRAN or ASTROS, with the stiffness matrices, at present, extracted from the FEM code ANALYZE. In order to avoid ill-conditioned matrices that can be encountered when the global stiffness equations are used as additional nonlinear equality constraints in the SAND approach (with the displacements as additional variables), the matrix displacement method was applied. In this approach, the element stiffness equations are used as constraints instead of the global stiffness equations, in conjunction with the nodal force equilibrium equations. This approach adds the element forces as variables to the system. Since, for complex structures and the associated large and very sparse matrices, the execution times of the optimization code became excessive due to the large number of required constraint gradient evaluations, the Kreisselmeier-Steinhauser function approach was used to decrease the computational effort by reducing the nonlinear equality constraint system to essentially a single combined constraint equation. As the linear equality and inequality constraints require much less computational effort to evaluate, they were kept in their previous form to limit the complexity of the KS function evaluation. To date, the standard three-bar, ten-bar, and 72-bar trusses have been tested. For the standard SAND approach, correct results were obtained for all three trusses, although convergence became slower for the 72-bar truss. When the matrix displacement method was used, correct results were still obtained, but the execution times became excessive due to the large number of constraint gradient evaluations required. Using the KS function, the computational effort dropped, but the optimization seemed to become less robust. The investigation of this phenomenon is continuing. As an alternate approach, the code MINOS for the optimization of sparse matrices can be applied to the problem in lieu of the Kreisselmeier-Steinhauser function. This investigation is underway.
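For reference, the Kreisselmeier-Steinhauser aggregate mentioned above is conventionally written in the overflow-safe form below (a standard identity, not a formula quoted from this report); as the draw-down parameter ρ grows, KS approaches the maximum constraint value from above.

```latex
% Overflow-safe Kreisselmeier-Steinhauser envelope of constraints g_i(x);
% rho > 0 controls tightness (KS -> max_i g_i as rho -> infinity).
\mathrm{KS}\bigl(g(x)\bigr) = g_{\max}(x) + \frac{1}{\rho}
\ln\!\sum_{i=1}^{m} \exp\!\bigl[\rho\,\bigl(g_i(x) - g_{\max}(x)\bigr)\bigr],
\qquad g_{\max}(x) = \max_{i}\, g_i(x)
```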
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method supports the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and the efficient approximation of the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve the computational efficiency. The cycle comprising SO, reliability assessment and constraint updates is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with the traditional DO and the traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology is demonstrated by two numerical examples.
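The reduced-space idea can be distilled to a few lines. The hedged sketch below computes a null-space basis Z of the constraint Jacobian, takes a restoration step toward feasibility, and then descends along the reduced gradient; it is a plain feasible-direction iteration, not the paper's SQP with condensation and second-order correction, and all callables are illustrative.

```python
import numpy as np
from scipy.linalg import lstsq, null_space

# Reduced-space step for min f(x) s.t. c(x) = 0: move toward feasibility,
# then descend along the reduced gradient Z^T grad(f) inside the tangent
# space of the constraints. f_grad, c, J are illustrative callables.
def reduced_step(x, f_grad, c, J, alpha=0.1):
    Jx = J(x)
    Z = null_space(Jx)                 # basis of the constraint null space
    y = lstsq(Jx, -c(x))[0]            # restoration step toward c = 0
    g_red = Z.T @ f_grad(x + y)        # reduced gradient
    return x + y - alpha * (Z @ g_red)

# Toy usage: minimize x0 + x1 on the unit circle c(x) = x.x - 1 = 0.
f_grad = lambda x: np.array([1.0, 1.0])
c = lambda x: np.array([x @ x - 1.0])
J = lambda x: 2.0 * x[None, :]
x = np.array([1.0, 0.0])
for _ in range(200):
    x = reduced_step(x, f_grad, c, J)
# x approaches (-1/sqrt(2), -1/sqrt(2)), the constrained minimizer
```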
Design Method For Ultra-High Resolution Linear CCD Imagers
NASA Astrophysics Data System (ADS)
Sheu, Larry S.; Truong, Thanh; Yuzuki, Larry; Elhatem, Abdul; Kadekodi, Narayan
1984-11-01
This paper presents a design method to achieve ultra-high resolution linear imagers. The method utilizes advanced design rules and novel staggered bilinear photosensor arrays with quadrilinear shift registers. Design constraints in the detector arrays and shift registers are analyzed. An imager architecture to achieve ultra-high resolution is presented. The characteristics of MTF, aliasing, speed, transfer efficiency and fine photolithography requirements associated with this architecture are also discussed. A CCD imager with an advanced 1.5 μm minimum feature size was fabricated. It is intended as a test vehicle for the next generation small-sampling-pitch ultra-high resolution CCD imager. Standard double-poly, two-phase shift registers were fabricated at an 8 μm pitch using the advanced design rules. A special process step that blocked the source-drain implant from the shift register area was invented. This guaranteed excellent performance of the shift registers regardless of the small poly overlaps. A charge transfer efficiency of better than 0.99995 and a maximum transfer speed of 8 MHz were achieved. The imager showed excellent performance. The dark current was less than 0.2 mV/ms, saturation 250 mV, adjacent photoresponse non-uniformity ±4%, and responsivity 0.7 V/μJ/cm² for the 8 μm × 6 μm photosensor size. The MTF was 0.6 at 62.5 cycles/mm. These results confirm the feasibility of the next generation of ultra-high resolution CCD imagers.
Comparing genomes with rearrangements and segmental duplications.
Shao, Mingfu; Moret, Bernard M E
2015-06-15
Large-scale evolutionary events such as genomic rearrangements and segmental duplications form an important part of the evolution of genomes and are widely studied from both biological and computational perspectives. A basic computational problem is to infer these events in the evolutionary history for given modern genomes, a task for which many algorithms have been proposed under various constraints. Algorithms that can handle both rearrangements and content-modifying events such as duplications and losses remain few and limited in their applicability. We study the comparison of two genomes under a model including general rearrangements (through double-cut-and-join) and segmental duplications. We formulate the comparison as an optimization problem and describe an exact algorithm to solve it by using an integer linear program. We also devise a sufficient condition and an efficient algorithm to identify optimal substructures, which can simplify the problem while preserving optimality. Using the optimal substructures with the integer linear program (ILP) formulation yields a practical and exact algorithm to solve the problem. We then apply our algorithm to assign in-paralogs and orthologs (a necessary step in handling duplications) and compare its performance with that of the state-of-the-art method MSOAR, using both simulations and real data. On simulated datasets, our method outperforms MSOAR by a significant margin, and on five well-annotated species, MSOAR achieves high accuracy, yet our method performs slightly better on each of the 10 pairwise comparisons. http://lcbb.epfl.ch/softwares/coser.
Using LDPC Code Constraints to Aid Recovery of Symbol Timing
NASA Technical Reports Server (NTRS)
Jones, Christopher; Villasenor, John; Lee, Dong-U; Valles, Esteban
2008-01-01
A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset.
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations were compared with those obtained in the ideal case of perfect timing in the receiver.
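As a concrete illustration of the satisfied-checks metric described above, the following minimal Python sketch scores candidate timing offsets by the number of parity-check equations the hard-decided samples satisfy; the parity-check matrix H, the oversampled waveform rx, and all other names are hypothetical stand-ins, not taken from the report.

```python
import numpy as np

def satisfied_checks(H, bits):
    """Number of parity-check equations (rows of H) satisfied by bits."""
    syndrome = H.dot(bits) % 2          # 0 where a check is satisfied
    return int(np.sum(syndrome == 0))

def coarse_timing_search(H, rx, sps, offsets):
    """Pick the sample-phase offset whose hard decisions satisfy the most
    checks (assumes rx holds at least n symbols at every offset)."""
    scores = []
    for off in offsets:
        symbols = rx[off::sps][:H.shape[1]]   # one sample per symbol
        bits = (symbols < 0).astype(int)      # BPSK hard decision: -1 -> 1
        scores.append(satisfied_checks(H, bits))
    return offsets[int(np.argmax(scores))], scores
```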
Restoration of multichannel microwave radiometric images
NASA Technical Reports Server (NTRS)
Chin, R. T.; Yeh, C. L.; Olson, W. S.
1983-01-01
A constrained iterative image restoration method is applied to multichannel diffraction-limited imagery. This method is based on the Gerchberg-Papoulis algorithm utilizing incomplete information and partial constraints. The procedure is described using the orthogonal projection operators which project onto two prescribed subspaces iteratively. Some of its properties and limitations are also presented. The selection of appropriate constraints was emphasized in a practical application. Multichannel microwave images, each having different spatial resolution, were restored to a common highest resolution to demonstrate the effectiveness of the method. Both noise-free and noisy images were used in this investigation.
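The alternating-projection idea underlying the Gerchberg-Papoulis algorithm can be sketched in a few lines of Python; the specific constraints used here (a measured low-frequency band, a spatial support, and nonnegativity) are illustrative assumptions, not the exact constraint set of the paper.

```python
import numpy as np

def restore(measured_spectrum, band_mask, support, n_iter=200):
    """Alternately enforce a frequency-domain constraint and spatial
    constraints, in the spirit of the Gerchberg-Papoulis algorithm."""
    x = np.real(np.fft.ifft2(measured_spectrum))
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X[band_mask] = measured_spectrum[band_mask]  # keep measured low-pass data
        x = np.real(np.fft.ifft2(X))
        x[~support] = 0.0                            # spatial support constraint
        x = np.clip(x, 0.0, None)                    # brightness is nonnegative
    return x
```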
Sparse Covariance Matrix Estimation With Eigenvalue Constraints
LIU, Han; WANG, Lie; ZHAO, Tuo
2014-01-01
We propose a new approach for estimating high-dimensional, positive-definite covariance matrices. Our method extends the generalized thresholding operator by adding an explicit eigenvalue constraint. The estimated covariance matrix simultaneously achieves sparsity and positive definiteness. The estimator is rate optimal in the minimax sense and we develop an efficient iterative soft-thresholding and projection algorithm based on the alternating direction method of multipliers. Empirically, we conduct thorough numerical experiments on simulated datasets as well as real data examples to illustrate the usefulness of our method. Supplementary materials for the article are available online. PMID:25620866
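The two operations named in the abstract, entrywise soft-thresholding and projection of the eigenvalue spectrum, can be sketched as follows; simply alternating them, as done here, is a simplified stand-in for the paper's ADMM algorithm, not a reproduction of it.

```python
import numpy as np

def soft_threshold(A, lam):
    """Entrywise soft-thresholding; promotes sparsity in the estimate."""
    return np.sign(A) * np.maximum(np.abs(A) - lam, 0.0)

def eigenvalue_projection(A, eps=1e-3):
    """Project a symmetric matrix onto {eigenvalues >= eps}."""
    w, V = np.linalg.eigh((A + A.T) / 2.0)   # symmetrize, then decompose
    return (V * np.maximum(w, eps)) @ V.T

def sparse_pd_estimate(S, lam, eps=1e-3, n_iter=50):
    """Alternate the two operations starting from the sample covariance S.
    (A practical estimator would usually leave the diagonal unthresholded.)"""
    Sigma = S.copy()
    for _ in range(n_iter):
        Sigma = eigenvalue_projection(soft_threshold(Sigma, lam), eps)
    return Sigma
```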
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior point linear programming algorithm.
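For reference, Dikin's affine scaling iteration mentioned above admits a very compact statement. The bare-bones Python sketch below, for min c^T x subject to Ax = b, x > 0, is illustrative only; the step rule and tolerances are ad hoc assumptions.

```python
import numpy as np

def affine_scaling(A, b, c, x, n_iter=100, gamma=0.9):
    """Dikin-style affine scaling from a strictly feasible interior point x."""
    for _ in range(n_iter):
        D2 = np.diag(x**2)                               # scaling by current iterate
        lam = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate
        d = -D2 @ (c - A.T @ lam)                        # scaled steepest descent
        if np.all(d >= -1e-12):                          # no descent component left
            break
        step = gamma * np.min(-x[d < 0] / d[d < 0])      # stay strictly feasible
        x = x + step * d
    return x
```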
Bar-Kochva, Irit; Hasselhorn, Marcus
2015-12-01
The attainment of fluency in reading is a major difficulty for reading-disabled people. Manipulations applied to the presentation of texts, leading to "on-line" effects on reading (i.e., while texts are manipulated), are one direction of examination in the search for methods affecting reading. The imposition of time constraints, by deleting one letter after another from texts presented on a computer screen, has been established as such a method. In an attempt to further understand its nature, we tested the relations between time constraints and processes of reading: phonological decoding of small orthographic units and the addressing of orthographic representations from the mental lexicon. We also examined whether the type of orthographic unit deleted (lexical, sublexical, or nonlexical unit) has any additional effect. Participants were German fifth graders with (n = 29) or without (n = 34) reading disability. Time constraints enhanced fluency in reading in both groups, and to a similar extent, across conditions. Comprehension was unimpaired. These results place the very principle of time constraints, regardless of the orthographic unit manipulated, as a critical factor affecting fluency in reading. However, phonological decoding explained a significant amount of variance in fluency in reading across all conditions in reading-disabled children, whereas the addressing of orthographic representations was the consistent predictor of fluency in reading in regular readers. These results indicate a qualitative difference in the processes explaining the variance in fluency in reading in regular and reading-disabled readers and suggest that time constraints might not have an effect on the relations between these processes and reading performance. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, W; Zhang, Y; Ren, L
2014-06-01
Purpose: To investigate the feasibility of using nanoparticle markers to validate liver tumor motion together with a deformation field map-based four-dimensional (4D) cone-beam computed tomography (CBCT) reconstruction method. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In this method, each phase of the 4D-CBCT is considered as a deformation of a prior CT volume. The DFM is solved by a motion modeling and free-form deformation (MM-FD) technique, using a data fidelity constraint and deformation energy minimization. For liver imaging, the contrast of a liver tumor in on-board projections is low. A validation of liver tumor motion using implanted gold nanoparticles, along with the MM-FD deformation technique, is implemented to reconstruct on-board 4D-CBCT liver radiotherapy images. These nanoparticles were placed around the liver tumor to reflect the tumor positions in both CT simulation and on-board image acquisition. When reconstructing each phase of the 4D-CBCT, the migrations of the gold nanoparticles act as a constraint to regularize the deformation field, along with the data fidelity and energy minimization constraints. In this study, multiple tumor diameters and positions were simulated within the liver for on-board 4D-CBCT imaging. The on-board 4D-CBCT reconstructed by the proposed method was compared with the “ground truth” image. Results: The preliminary data, based on reconstruction for lung radiotherapy, suggest that the advanced reconstruction algorithm including the gold nanoparticle constraint will result in volume percentage differences (VPD) between lesions in images reconstructed by MM-FD and “ground truth” on-board images of 11.5% (± 9.4%) and a center-of-mass shift of 1.3 mm (± 1.3 mm) for liver radiotherapy. Conclusion: The advanced MM-FD technique, enforcing the additional constraints from gold nanoparticles, results in improved accuracy for reconstructing on-board 4D-CBCT of liver tumors. Varian Medical Systems research grant.
NASA Technical Reports Server (NTRS)
Tapia, R. A.; Vanrooy, D. L.
1976-01-01
A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
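The squared-variable substitution described above is easy to reproduce. In this sketch, writing x_i = y_i**2 removes the nonnegativity constraints and leaves a single equality constraint; SciPy's SLSQP solver stands in for the quasi-Newton machinery of the report, and the example objective is invented.

```python
import numpy as np
from scipy.optimize import minimize

def solve_on_simplex(f, n, y0=None):
    """Minimize f(x) over x >= 0, sum(x) = 1 via the substitution x = y**2."""
    y0 = np.full(n, 1.0 / np.sqrt(n)) if y0 is None else y0
    g = lambda y: f(y**2)                                  # objective in y-space
    con = {"type": "eq", "fun": lambda y: np.sum(y**2) - 1.0}
    res = minimize(g, y0, constraints=[con])               # SLSQP under the hood
    return res.x**2                                        # map back: x = y**2

# Example: minimize a quadratic over the probability simplex.
x = solve_on_simplex(lambda x: np.sum((x - np.array([0.7, 0.2, 0.1]))**2), 3)
```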
NASA Technical Reports Server (NTRS)
Ly, Uy-Loi; Schoemig, Ewald
1993-01-01
In the past few years, the mixed H(sub 2)/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H(sub 2)/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H(sub 2)-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H(sub 2)-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions and hence multiple performance criteria and H-infinity constraints and incorporates additional constraints such as fixed-order and/or fixed-structure controllers. The defined penalty function is applicable to any constraint that is expressible in the form of a real symmetric matrix inequality.
Constraints in Genetic Programming
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.
1996-01-01
Genetic programming refers to a class of genetic algorithms utilizing a generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements -- one reduces only the number of program instances, the other reduces both the space of program structures as well as their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are no less efficient than the standard genetic programming operators if one preprocesses the constraints; the necessary mechanisms are identified.
A globally convergent LCL method for nonlinear optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedlander, M. P.; Saunders, M. A.; Mathematics and Computer Science
2005-01-01
For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form 'minimize an augmented Lagrangian function subject to linearized constraints.' Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
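The subproblem structure, 'minimize an augmented Lagrangian subject to linearized constraints', can be sketched schematically. This toy Python loop uses SciPy in place of MINOS/SNOPT and omits the safeguards that make the paper's algorithm globally convergent; c and its Jacobian J are user-supplied.

```python
import numpy as np
from scipy.optimize import minimize

def lcl(f, c, J, x, lam, rho=10.0, n_outer=20, tol=1e-8):
    """Schematic LCL outer loop for min f(x) s.t. c(x) = 0."""
    for _ in range(n_outer):
        ck, Jk = c(x), J(x)
        # Augmented Lagrangian keeps the *nonlinear* constraints...
        aug = lambda z: f(z) - lam @ c(z) + 0.5 * rho * np.sum(c(z)**2)
        # ...while the subproblem constraints are their linearization at x.
        lin = {"type": "eq", "fun": lambda z: ck + Jk @ (z - x)}
        x = minimize(aug, x, constraints=[lin]).x
        lam = lam - rho * c(x)                 # first-order multiplier update
        if np.linalg.norm(c(x)) < tol:
            break
    return x, lam
```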
Solution and reasoning reuse in space planning and scheduling applications
NASA Technical Reports Server (NTRS)
Verfaillie, Gerard; Schiex, Thomas
1994-01-01
In the space domain, as in other domains, CSP (constraint satisfaction problem) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs which are composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods which allow a new solution to be rapidly found, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method, which has been developed on the basis of the dynamic backtracking algorithm. This method allows the previous solution and reasoning to be reused in the framework of a CSP which is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
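As a flavor of the kind of surrogate such a method builds, here is a generic cubic RBF interpolant with a linear polynomial tail; the details (kernel choice, tail, dense solve) are standard RBF practice and an assumption on our part, not CONORBIT's specific implementation.

```python
import numpy as np

def rbf_fit(X, f):
    """Fit a cubic RBF interpolant with linear tail to samples (X, f)."""
    n, d = X.shape
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Phi = r**3                                    # cubic radial basis
    P = np.hstack([np.ones((n, 1)), X])           # linear polynomial tail
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([f, np.zeros(d + 1)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]                     # RBF weights, tail coefficients

def rbf_eval(X, w, p, x):
    """Evaluate the fitted surrogate at a new point x."""
    r = np.linalg.norm(X - x, axis=1)
    return w @ r**3 + p[0] + p[1:] @ x
```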
Barba-J, Leiner; Escalante-Ramírez, Boris; Vallejo Venegas, Enrique; Arámbula Cosío, Fernando
2018-05-01
Analysis of cardiac images is a fundamental task to diagnose heart problems. Left ventricle (LV) is one of the most important heart structures used for cardiac evaluation. In this work, we propose a novel 3D hierarchical multiscale segmentation method based on a local active contour (AC) model and the Hermite transform (HT) for LV analysis in cardiac magnetic resonance (MR) and computed tomography (CT) volumes in short axis view. Features such as directional edges, texture, and intensities are analyzed using the multiscale HT space. A local AC model is configured using the HT coefficients and geometrical constraints. The endocardial and epicardial boundaries are used for evaluation. Segmentation of the endocardium is controlled using elliptical shape constraints. The final endocardial shape is used to define the geometrical constraints for segmentation of the epicardium. We follow the assumption that epicardial and endocardial shapes are similar in volumes with short axis view. An initialization scheme based on a fuzzy C-means algorithm and mathematical morphology was designed. The algorithm performance was evaluated using cardiac MR and CT volumes in short axis view demonstrating the feasibility of the proposed method.
Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints
NASA Technical Reports Server (NTRS)
Calise, A. J.; Corban, J. E.
1990-01-01
The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.
Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint
NASA Astrophysics Data System (ADS)
Rodrigues, José Francisco; Santos, Lisa
2012-08-01
We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.
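Schematically, and with notation that is our assumption rather than the authors' (the first order quasilinear operator is written here as a generic transport term b(u)·∇u), the problem reads:

```latex
\text{Find } u(t) \in K(u(t)) \text{ such that, for all } v \in K(u(t)),
\qquad
\big\langle \partial_t u + b(u)\!\cdot\!\nabla u - f,\; v - u \big\rangle \;\ge\; 0,
\qquad
K(w) = \{\, v : |\nabla v| \le g(w) \,\},
\]
with $g$ continuous and positive; the variational case corresponds to a fixed set $K$ independent of $u$.
```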
Bruhn, Peter; Geyer-Schulz, Andreas
2002-01-01
In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.
Prolongation structures of nonlinear evolution equations. II
NASA Technical Reports Server (NTRS)
Estabrook, F. B.; Wahlquist, H. D.
1976-01-01
The prolongation structure of a closed ideal of exterior differential forms is further discussed, and its use illustrated by application to an ideal (in six dimensions) representing the cubically nonlinear Schroedinger equation. The prolongation structure in this case is explicitly given, and recurrence relations derived which support the conjecture that the structure is open - i.e., does not terminate as a set of structure relations of a finite-dimensional Lie group. We introduce the use of multiple pseudopotentials to generate multiple Baecklund transformations, and derive the double Baecklund transformation. This symmetric transformation concisely expresses the (usually conjectured) theorem of permutability, which must consequently apply to all solutions irrespective of asymptotic constraints.
β-decay spectroscopy for the r-process nucleosynthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishimura, Shunji; Collaboration: RIBF Decay Collaborations
2014-05-09
A series of decay spectroscopy experiments, utilizing high-purity Ge detectors and double-sided silicon-strip detectors, has been conducted to harvest the decay properties of very exotic nuclei relevant to the r-process nucleosynthesis at the RIBF. The decay properties such as β-decay half-lives, low-lying states, β-delayed neutron emissions, isomeric states, and possibly Q{sub β} of the very neutron-rich nuclei are to be measured to give significant constraints on the uncertainties of nuclear properties for the r-process nucleosynthesis. Recent results of a βγ spectroscopy study using in-flight fission of a {sup 238}U beam will be presented together with our future perspectives.
Fasoula, S; Zisi, Ch; Gika, H; Pappa-Louisi, A; Nikitas, P
2015-05-22
A package of Excel VBA macros has been developed for modeling multilinear gradient retention data obtained in single or double gradient elution mode by changing organic modifier(s) content and/or eluent pH. For this purpose, ten chromatographic models were used and four methods were adopted for their application. The methods were based on (a) the analytical expression of the retention time, provided that this expression is available, (b) the retention times estimated using the Nikitas-Pappa approach, (c) the stepwise approximation, and (d) a simple numerical approximation involving the trapezoid rule for integration of the fundamental equation for gradient elution. For all these methods, Excel VBA macros have been written and implemented using two different platforms: the fitting platform and the optimization platform. The fitting platform calculates not only the adjustable parameters of the chromatographic models, but also the significance of these parameters, and furthermore predicts the analyte elution times. The optimization platform determines the gradient conditions that lead to the optimum separation of a mixture of analytes by using the Solver evolutionary mode, provided that proper constraints are set in order to obtain the optimum gradient profile in the minimum gradient time. The performance of the two platforms was tested using experimental and artificial data. It was found that using the proposed spreadsheets, fitting, prediction, and optimization can be performed easily and effectively under all conditions. Overall, the best performance is exhibited by the analytical and Nikitas-Pappa methods, although the former cannot be used under all circumstances. Copyright © 2015 Elsevier B.V. All rights reserved.
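Method (d), the trapezoid-rule integration of the fundamental equation for gradient elution, can be sketched in a few lines. The sketch is in Python rather than VBA, and the retention model k_of_phi, the gradient profile phi, and all parameter values are invented for illustration.

```python
import numpy as np

def retention_time(k_of_phi, phi, t0, dt=1e-3, t_max=1e4):
    """Step the integral of 1/k(phi(t)) with the trapezoid rule until it
    reaches the column dead time t0; the elution time is then t + t0."""
    integral, t = 0.0, 0.0
    g_prev = 1.0 / k_of_phi(phi(0.0))
    while integral < t0 and t < t_max:
        t += dt
        g = 1.0 / k_of_phi(phi(t))
        integral += 0.5 * dt * (g_prev + g)   # trapezoid increment
        g_prev = g
    return t + t0

# Example: linear gradient capped at 95% and a log-linear retention model.
tr = retention_time(lambda phi: 10**(2.0 - 4.0 * phi),
                    lambda t: min(0.05 + 0.01 * t, 0.95), t0=1.0)
```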
A robust, efficient equidistribution 2D grid generation method
NASA Astrophysics Data System (ADS)
Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni
2007-11-01
We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be used effectively. We demonstrate that a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded, due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).
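The equidistribution constraint itself is easiest to see in one dimension: place grid points so that every cell carries the same integral of a weight (monitor) function. The Python sketch below illustrates only this constraint, not the paper's smoothness optimization or its Newton-Krylov solver; the weight function in the example is invented.

```python
import numpy as np

def equidistribute(w, a, b, n_cells, n_fine=10001):
    """Return n_cells+1 grid points on [a, b] so each cell holds equal
    integral of the weight w (cumulative trapezoid map, then inversion)."""
    x = np.linspace(a, b, n_fine)
    W = np.concatenate([[0.0],
                        np.cumsum((w(x[1:]) + w(x[:-1])) / 2 * np.diff(x))])
    targets = np.linspace(0.0, W[-1], n_cells + 1)   # equal mass per cell
    return np.interp(targets, W, x)                  # invert the cumulative map

# Example: points cluster where the weight is large.
grid = equidistribute(lambda x: 1.0 + 50.0 * np.exp(-200 * (x - 0.5)**2), 0, 1, 20)
```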
The finite layer method for modelling the sound transmission through double walls
NASA Astrophysics Data System (ADS)
Díaz-Cereceda, Cristina; Poblet-Puig, Jordi; Rodríguez-Ferran, Antonio
2012-10-01
The finite layer method (FLM) is presented as a discretisation technique for the computation of noise transmission through double walls. It combines a finite element method (FEM) discretisation in the direction perpendicular to the wall with trigonometric functions in the two in-plane directions. It is used for solving the Helmholtz equation at the cavity inside the double wall, while the wall leaves are modelled with the thin plate equation and solved with modal analysis. Other approaches to this problem are described here (and adapted where needed) in order to compare them with the FLM. They range from impedance models of the double wall behaviour to different numerical methods for solving the Helmholtz equation in the cavity. For the examples simulated in this work (impact noise and airborne sound transmission), the former are less accurate than the latter at low frequencies. The main advantage of FLM over the other discretisation techniques is the possibility of extending it to multilayered structures without changing the interpolation functions and with an affordable computational cost. This potential is illustrated with a calculation of the noise transmission through a multilayered structure: a double wall partially filled with absorbing material.
Lu, Yuhua; Liu, Qian
2018-01-01
We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870
Xu, Lang; Lu, Yuhua; Liu, Qian
2018-02-01
We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.
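A one-dimensional toy version of the integration scheme described in these two records, an explicit nonlinear spring-damper force step followed by a constraint projection with velocity correction, might look as follows; all coefficients are invented, and the actual method operates on full tissue meshes rather than a single node.

```python
def step(x, v, rest, dt, k=100.0, c=5.0, beta=400.0, x_max=1.5):
    """One explicit step for a unit-mass node on a nonlinear damped spring,
    followed by projection onto the constraint x <= x_max."""
    stretch = x - rest
    f = -k * stretch * (1.0 + beta * stretch**2) - c * v  # cubic spring + damper
    v = v + dt * f                                        # Newton's second law
    x = x + dt * v
    x_proj = min(x, x_max)                                # constraint projection
    v = v + (x_proj - x) / dt                             # velocity correction
    return x_proj, v
```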
NASA Astrophysics Data System (ADS)
Itoh, Masato; Hagimori, Yuki; Nonaka, Kenichiro; Sekiguchi, Kazuma
2016-09-01
In this study, we apply hierarchical model predictive control to an omni-directional mobile vehicle and improve its tracking performance. We deal with an independent four-wheel driving/steering vehicle (IFWDS) equipped with four coaxial steering mechanisms (CSM). The coaxial steering mechanism is a special one composed of two steering joints on the same axis. In our previous study of IFWDS with ideal steering, we proposed a model predictive tracking control. However, this method did not consider the constraints of the coaxial steering mechanism, which cause a delay in steering. We also proposed a model predictive steering control considering the constraints of this mechanism. In this study, we propose a hierarchical system combining the above two control methods for IFWDS. An upper controller, which deals with vehicle kinematics, runs a model predictive tracking control, and a lower controller, which considers the constraints of the coaxial steering mechanism, runs a model predictive steering control tracking the predicted steering angle optimized by the upper controller. We verify the superiority of this method by comparing it with the previous method.
Midstory hardwood species respond differently to chainsaw girdle method and herbicide treatment
Ronald A. Rathfon; Michael R. Saunders
2013-01-01
Foresters in the Central Hardwoods Region commonly fell or girdle interfering trees and apply herbicide to the cut surface when performing intermediate silvicultural treatments. The objective of this study was to compare the use of single and double chainsaw girdle methods in combination with a herbicide treatment and, within the double girdle method, compare herbicide...
Front and backside processed thin film electronic devices
Yuan, Hao-Chih; Wang, Guogong; Eriksson, Mark A.; Evans, Paul G.; Lagally, Max G.; Ma, Zhenqiang
2010-10-12
This invention provides methods for fabricating thin film electronic devices with both front- and backside processing capabilities. Using these methods, high temperature processing steps may be carried out during both frontside and backside processing. The methods are well-suited for fabricating back-gate and double-gate field effect transistors, double-sided bipolar transistors and 3D integrated circuits.
Clustering with Missing Values: No Imputation Required
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri
2004-01-01
Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
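A simplified stand-in for the idea (not the KSC algorithm itself) is a k-means variant in which distances and centroids are computed only over observed dimensions, so missing values never need to be imputed; M is a 0/1 observation mask of the same shape as the data.

```python
import numpy as np

def masked_kmeans(X, M, k, n_iter=100, seed=0):
    """k-means over partially observed data: distances and centroid updates
    use only observed entries (mask M), so no imputation is performed."""
    rng = np.random.default_rng(seed)
    Xz = np.where(M > 0, X, 0.0)                  # zero-fill only for arithmetic
    centers = Xz[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(n_iter):
        d = np.stack([((Xz - c)**2 * M).sum(1) / np.maximum(M.sum(1), 1)
                      for c in centers])          # masked mean squared distance
        labels = d.argmin(axis=0)
        for j in range(k):                        # mask-aware centroid update
            m = M[labels == j]
            if m.size and m.sum() > 0:
                centers[j] = (Xz[labels == j] * m).sum(0) / np.maximum(m.sum(0), 1)
    return labels, centers
```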
Constraints on communication in classrooms for the deaf.
Matthews, T J; Reich, C F
1993-03-01
One explanation for the relatively low scholastic achievement of deaf students is the character of communication in the classroom. Unlike aural communication methods, line-of-sight methods share the limitation that the receiver of the message must look at the sender. To assess the magnitude of this constraint, we measured the amount of time signers were looked at by potential receivers in typical secondary school classes for the deaf. Videotaped segments indicated that on average the messages sent by teachers and students were seen less than half the time. Students frequently engaged in collateral conversations. The constraints of line-of-sight communication are profound and should be addressed by teaching techniques, classroom layout, and possibly, the use of computer-communication technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2017-07-10
We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
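The resolution-matrix diagnostic mentioned above can be illustrated on a generic Tikhonov-regularized linear inverse problem; the formula below assumes unit observation-error covariance and is a stand-in setup, not the DALECv2 system.

```python
import numpy as np

def resolution_matrix(J, B):
    """J: linearized observation operator; B: prior (background) covariance.
    R = (J^T J + B^{-1})^{-1} J^T J maps true parameters to recovered ones;
    rows far from unit vectors flag poorly resolved components."""
    Binv = np.linalg.inv(B)
    return np.linalg.solve(J.T @ J + Binv, J.T @ J)
```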
A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling
NASA Technical Reports Server (NTRS)
Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne
2003-01-01
Ideal cloud-resolving models accumulate little error. When their domain is so large that synoptic large-scale circulations are accommodated, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Alternatively speaking, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.
Image-optimized Coronal Magnetic Field Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.