Luo, Biao; Liu, Derong; Wu, Huai-Ning
2018-06-01
Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a specific performance index and suit only linear or affine nonlinear systems, which is restrictive in practice. To overcome this problem, a system transformation is first introduced with a general performance index, and the constrained optimal control problem is converted to an unconstrained one. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence of the VIQL algorithm is established under an easy-to-realize initial condition. To implement the VIQL algorithm, a critic-only structure is developed in which only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on a gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
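The value-iteration flavor of Q-learning can be sketched in tabular form. The following is a minimal illustration, not the paper's neural-network implementation: the two-state system, stage cost, and discount factor are hypothetical, but the backup Q ← cost + γ·min Q and the zero initialization (an easy-to-realize initial condition) mirror the scheme described above.

```python
# A minimal tabular sketch of value-iteration Q-learning (VIQL); the
# two-state, two-action system, stage cost and discount factor below are
# hypothetical, not the paper's nonaffine system.

GAMMA = 0.9
STATES, ACTIONS = (0, 1), (0, 1)

def step(s, a):
    """Toy deterministic dynamics: action 0 keeps the state, action 1 flips it."""
    return s if a == 0 else 1 - s

def cost(s, a):
    """Stage cost penalising state 1 and the use of action 1."""
    return float(s) + 0.5 * a

def viql(n_iters=200):
    # Zero initialisation plays the role of the easy-to-realise initial condition.
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(n_iters):
        # Value-iteration backup: Q <- cost + gamma * min_b Q(next_state, b)
        Q = {(s, a): cost(s, a) + GAMMA * min(Q[step(s, a), b] for b in ACTIONS)
             for s in STATES for a in ACTIONS}
    return Q

Q = viql()
policy = {s: min(ACTIONS, key=lambda a: Q[s, a]) for s in STATES}
```

The greedy policy extracted from the converged Q-function keeps the system in the cheap state 0 and pays the one-off action cost to leave state 1.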
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε(k)-global minimization of a bound constrained optimization subproblem, where ε(k) → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
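As an illustration of the penalty idea, the sketch below uses one common form of hyperbolic penalty, P(g) = λg + sqrt(λ²g² + τ²), for an inequality constraint g(x) ≤ 0; the article's shifted variant and its augmented Lagrangian updates may differ in detail. The toy problem and the crude grid-search solver are stand-ins for the ε-global subproblem solver.

```python
import math

# Hedged sketch of a hyperbolic penalty for g(x) <= 0: smooth everywhere,
# ~0 when the constraint is satisfied, ~2*lam*g when it is violated.
# The article's shifted form may differ; lam and tau values are illustrative.

def hyperbolic_penalty(g, lam=1.0, tau=1e-2):
    return lam * g + math.sqrt(lam * lam * g * g + tau * tau)

def penalized_objective(f, constraints, lam, tau):
    def phi(x):
        return f(x) + sum(hyperbolic_penalty(g(x), lam, tau) for g in constraints)
    return phi

# Toy problem: minimise f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0,
# solved crudely by grid search over the penalised objective.
f = lambda x: x * x
g = lambda x: 1.0 - x
phi = penalized_objective(f, [g], lam=50.0, tau=1e-3)
xs = [i / 1000.0 for i in range(-2000, 3001)]
x_best = min(xs, key=phi)
```

With a large multiplier the penalised minimizer lands on the constrained optimum x = 1 rather than the unconstrained one x = 0.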
Structural Brain Connectivity Constrains within-a-Day Variability of Direct Functional Connectivity
Park, Bumhee; Eo, Jinseok; Park, Hae-Jeong
2017-01-01
The idea that structural white matter connectivity constrains functional connectivity (interactions among brain regions) has widely been explored in studies of brain networks; studies have mostly focused on the “average” strength of functional connectivity. The question of how structural connectivity constrains the “variability” of functional connectivity remains unresolved. In this study, we investigated the variability of resting state functional connectivity that was acquired every 3 h within a single day from 12 participants (eight time sessions within a 24-h period, 165 scans per session). Three different types of functional connectivity (functional connectivity based on Pearson correlation, direct functional connectivity based on partial correlation, and the pseudo functional connectivity produced by their difference) were estimated from resting state functional magnetic resonance imaging data along with structural connectivity defined using fiber tractography of diffusion tensor imaging. Those types of functional connectivity were evaluated with regard to properties of structural connectivity (fiber streamline counts and lengths) and types of structural connectivity such as intra-/inter-hemispheric edges and topological edge types in the rich club organization. We observed that the structural connectivity constrained the variability of direct functional connectivity more than that of pseudo-functional connectivity and that the constraints depended strongly on structural connectivity types. The structural constraints were greater for intra-hemispheric and heterologous inter-hemispheric edges than for homologous inter-hemispheric edges, and for feeder and local edges than for rich club edges in the rich club architecture. While each edge was highly variable, the multivariate patterns of edge involvement, especially the direct functional connectivity patterns among the rich club brain regions, showed low variability over time.
This study suggests that structural connectivity not only constrains the strength of functional connectivity, but also the within-a-day variability of functional connectivity and connectivity patterns, particularly the direct functional connectivity among brain regions.
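The distinction between marginal ("functional") and partial ("direct functional") connectivity can be made concrete with the first-order partial correlation formula. The toy series below are hypothetical: two signals driven by a common source z correlate strongly, while their partial correlation given z vanishes.

```python
import math

# Illustration of "direct" connectivity: the first-order partial correlation
# between x and y controlling for z, computed from pairwise Pearson
# correlations. The time series are toy values, not fMRI data.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = math.sqrt(sum((u - ma) ** 2 for u in a))
    sb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (sa * sb)

def partial_corr(x, y, z):
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# x and y are both driven by the common source z; their noise patterns are
# chosen orthogonal to z and to each other, so the marginal correlation is
# high while the "direct" (partial) link is zero.
z = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
p1 = [1, -1, -1, 1, 1, -1, -1, 1]
p2 = [1, -1, 1, -1, -1, 1, -1, 1]
x = [zi + 0.1 * a for zi, a in zip(z, p1)]
y = [zi + 0.1 * b for zi, b in zip(z, p2)]
```

Here pearson(x, y) is close to 1 although x and y share no direct link, while partial_corr(x, y, z) is essentially zero.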
Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W
2009-03-01
The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
A multi-frequency receiver function inversion approach for crustal velocity structure
NASA Astrophysics Data System (ADS)
Li, Xuelei; Li, Zhiwei; Hao, Tianyao; Wang, Sheng; Xing, Jian
2017-05-01
In order to better constrain crustal velocity structures, we developed a new nonlinear inversion approach based on multi-frequency receiver function waveforms. With the global optimization algorithm of Differential Evolution (DE), low-frequency receiver function waveforms primarily constrain large-scale velocity structures, while high-frequency receiver function waveforms show advantages in recovering small-scale velocity structures. Based on synthetic tests with multi-frequency receiver function waveforms, the proposed approach can constrain both long- and short-wavelength characteristics of the crustal velocity structure simultaneously. Inversions with real data are also conducted for the seismic stations KMNB in southeast China and HYB on the Indian continent, where crustal structures have been well studied previously. Comparisons of the inverted velocity models from previous studies and ours show good consistency, while our approach achieves a better waveform fit with fewer model parameters. Comprehensive tests with synthetic and real data suggest that the proposed inversion approach with multi-frequency receiver functions is effective and robust in inverting for crustal velocity structures.
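Differential Evolution itself is straightforward to sketch. The following DE/rand/1/bin loop minimizes a toy sphere function in place of the receiver function waveform misfit; the population size, F, and CR are illustrative defaults, not the paper's settings.

```python
import random

# Hedged sketch of the Differential Evolution (DE/rand/1/bin) optimiser;
# the paper applies DE to a receiver-function waveform misfit, while this
# toy version minimises a sphere function.

def de_minimize(f, bounds, pop_size=20, n_gen=200, F=0.6, CR=0.9, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(ind) for ind in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if j == j_rand or rng.random() < CR:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))   # keep bound-constrained
                else:
                    trial.append(pop[i][j])
            f_trial = f(trial)
            if f_trial <= fit[i]:                       # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda k: fit[k])
    return pop[best], fit[best]

sphere = lambda v: sum(u * u for u in v)
x_best, f_best = de_minimize(sphere, [(-5.0, 5.0)] * 3)
```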
NASA Astrophysics Data System (ADS)
Ibraheem; Omveer; Hasan, N.
2010-10-01
A new hybrid stochastic search technique is proposed for the design of a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. In this technique, we propose a regulator based on a hybrid of a Genetic Algorithm (GA) and Simulated Annealing (SA). The hybrid GA-SA (GASA) approach has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time, one that decreases the cost function rather than fully minimizing it.
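The simulated-annealing half of such a hybrid can be sketched as follows. The quadratic toy_cost below is a hypothetical stand-in for the AGC performance index, which in the paper comes from simulating the two-area system; the cooling schedule and move sizes are likewise illustrative.

```python
import math
import random

# Hedged sketch of simulated annealing tuning PI gains (Kp, Ki) against a
# toy quadratic cost surface with its minimum at the hypothetical values
# (2.0, 0.5); the real AGC cost would come from a system simulation.

def toy_cost(kp, ki):
    """Hypothetical stand-in for the AGC performance index J(Kp, Ki)."""
    return (kp - 2.0) ** 2 + 4.0 * (ki - 0.5) ** 2

def anneal(cost, x0, t0=1.0, t_end=1e-4, alpha=0.95, moves=50, seed=1):
    rng = random.Random(seed)
    x, fx = list(x0), cost(*x0)
    t = t0
    while t > t_end:
        for _ in range(moves):
            cand = [xi + rng.gauss(0.0, t) for xi in x]   # step size ~ temperature
            fc = cost(*cand)
            # Metropolis rule: always accept improvements, sometimes accept worse.
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
        t *= alpha
    return x, fx

(kp, ki), j = anneal(toy_cost, [0.0, 0.0])
```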
Comments on "The multisynapse neural network and its application to fuzzy clustering".
Yu, Jian; Hao, Pengwei
2005-05-01
In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so that FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve the constrained optimization problems either.
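For context, the Lagrange multiplier derivation at issue is the one behind the standard fuzzy c-means membership update: minimizing the objective functional Σ u_ik^m d_ik² subject to Σ_i u_ik = 1 for each datum yields the closed form below (shown here for toy one-dimensional data).

```python
# Standard fuzzy c-means membership update, obtained by adjoining the
# constraint sum_i u_ik = 1 with a Lagrange multiplier and setting the
# gradient to zero. Data and centres are toy 1-D values.

def fcm_memberships(xs, centers, m=2.0):
    power = 2.0 / (m - 1.0)
    U = []
    for x in xs:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:
            # A point sitting exactly on a centre gets crisp membership.
            row = [1.0 if di == 0.0 else 0.0 for di in d]
        else:
            row = [1.0 / sum((d[i] / d[j]) ** power for j in range(len(d)))
                   for i in range(len(d))]
        U.append(row)
    return U

U = fcm_memberships([0.0, 1.0, 10.0], centers=[0.0, 10.0])
```

The memberships of each datum sum to one by construction, which is exactly the constraint the comment argues FBACN fails to enforce equivalently.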
Secure Service Proxy: A CoAP(s) Intermediary for a Securer and Smarter Web of Things
Van den Abeele, Floris; Moerman, Ingrid; Demeester, Piet; Hoebeke, Jeroen
2017-01-01
As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything is connected in an open Web of Things. The performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable open standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches its goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things.
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
This paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
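The surrogate-building step can be illustrated with a minimal RBF interpolant. This sketch uses a Gaussian kernel and omits the polynomial tail and trust-region machinery that ORBIT/CONORBIT actually employ; the sampled function is a toy quadratic standing in for an expensive black box.

```python
import math

# Hedged sketch of an RBF surrogate: fit s(x) = sum_i w_i * phi(|x - x_i|)
# to samples of an expensive function, here with a Gaussian kernel. This
# is a simplification of the models used by ORBIT/CONORBIT.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def rbf_fit(xs, ys, shape=0.8):
    phi = lambda r: math.exp(-(r / shape) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x * (2.0 - x) for x in xs]   # toy stand-in for the expensive black box
s = rbf_fit(xs, ys)
```

The interpolant reproduces the sampled values exactly at the nodes, which is the property the trust-region method relies on when it trusts the model locally.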
Working Memory in Children: A Time-Constrained Functioning Similar to Adults
ERIC Educational Resources Information Center
Portrat, Sophie; Camos, Valerie; Barrouillet, Pierre
2009-01-01
Within the time-based resource-sharing (TBRS) model, we tested a new conception of the relationships between processing and storage in which the core mechanisms of working memory (WM) are time constrained. However, our previous studies were restricted to adults. The current study aimed at demonstrating that these mechanisms are present and…
Self-constrained inversion of potential fields
NASA Astrophysics Data System (ADS)
Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.
2013-11-01
We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
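The depth-weighting ingredient mentioned above can be illustrated with the common power-law form w(z) = (z + z0)^(-β/2), which counteracts the natural decay of potential-field sensitivities with depth; the β and z0 values below are hypothetical, whereas the paper ties the weighting to source parameters such as the estimated structural index.

```python
# Minimal illustration of depth weighting in potential-field inversion:
# cell sensitivities decay with depth, so a weighting function of the form
# w(z) = (z + z0)**(-beta / 2) is introduced in the objective function to
# keep recovered sources from collapsing toward the surface. The beta and
# z0 values are hypothetical.

def depth_weight(z, beta=3.0, z0=0.5):
    return (z + z0) ** (-beta / 2.0)

depths = [0.0, 1.0, 2.0, 4.0, 8.0]
weights = [depth_weight(z) for z in depths]
```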
Geometric constrained variational calculus. II: The second variation (Part I)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Bruno, Danilo; Luria, Gianvittorio; Pagani, Enrico
2016-10-01
Within the geometrical framework developed in [Geometric constrained variational calculus. I: Piecewise smooth extremals, Int. J. Geom. Methods Mod. Phys. 12 (2015) 1550061], the problem of minimality for constrained calculus of variations is analyzed among the class of differentiable curves. A fully covariant representation of the second variation of the action functional, based on a suitable gauge transformation of the Lagrangian, is explicitly worked out. Both necessary and sufficient conditions for minimality are proved, and reinterpreted in terms of Jacobi fields.
Extracting electron transfer coupling elements from constrained density functional theory
NASA Astrophysics Data System (ADS)
Wu, Qin; Van Voorhis, Troy
2006-10-01
Constrained density functional theory (DFT) is a useful tool for studying electron transfer (ET) reactions. It can straightforwardly construct the charge-localized diabatic states and give a direct measure of the inner-sphere reorganization energy. In this work, a method is presented for calculating the electronic coupling matrix element (Hab) based on constrained DFT. This method completely avoids the use of ground-state DFT energies because they are known to irrationally predict fractional electron transfer in many cases. Instead it makes use of the constrained DFT energies and the Kohn-Sham wave functions for the diabatic states in a careful way. Test calculations on the Zn₂⁺ and the benzene-Cl atom systems show that the new prescription yields reasonable agreement with the standard generalized Mulliken-Hush method. We then proceed to produce the diabatic and adiabatic potential energy curves along the reaction pathway for intervalence ET in the tetrathiafulvalene-diquinone (Q-TTF-Q) anion. While the unconstrained DFT curve has no reaction barrier and gives Hab ≈ 17 kcal/mol, which qualitatively disagrees with experimental results, the Hab calculated from constrained DFT is about 3 kcal/mol and the generated ground state has a barrier height of 1.70 kcal/mol, successfully predicting (Q-TTF-Q)⁻ to be a class II mixed-valence compound.
Phillips, Jordan J; Peralta, Juan E
2011-11-14
We introduce a method for evaluating magnetic exchange couplings based on the constrained density functional theory (C-DFT) approach of Rudra, Wu, and Van Voorhis [J. Chem. Phys. 124, 024103 (2006)]. Our method shares the same physical principles as C-DFT but makes use of the fact that the electronic energy changes quadratically and bilinearly with respect to the constraints in the range of interest. This allows us to use coupled perturbed Kohn-Sham spin density functional theory to determine approximately the corrections to the energy of the different spin configurations and construct a priori the relevant energy-landscapes obtained by constrained spin density functional theory. We assess this methodology in a set of binuclear transition-metal complexes and show that it reproduces very closely the results of C-DFT. This demonstrates a proof-of-concept for this method as a potential tool for studying a number of other molecular phenomena. Additionally, routes to improving upon the limitations of this method are discussed.
NASA Astrophysics Data System (ADS)
Quan, Lulin; Yang, Zhixin
2010-05-01
To address issues in the area of design customization, this paper describes the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation is a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model and indicates whether this difference is within the allowed limit. According to our theoretical analysis and preliminary experiments, three similarity assessment algorithms are suitable for this domain: a shape histogram-based method, a skeleton-based method, and a U-system moment-based method. We analyze their basic functions and implementation methodologies in detail, and run a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as the industrial example for the experiments. The experiments show that the shape histogram-based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of the constraints during the assessment. Initial, limited experimental results demonstrate that our algorithm outperforms the other three. A clear direction for future development is drawn at the end of the paper.
Phenotypic constraints promote latent versatility and carbon efficiency in metabolic networks.
Bardoscia, Marco; Marsili, Matteo; Samal, Areejit
2015-07-01
System-level properties of metabolic networks may be the direct product of natural selection or arise as a by-product of selection on other properties. Here we study the effect of direct selective pressure for growth or viability in particular environments on two properties of metabolic networks: latent versatility to function in additional environments and carbon usage efficiency. Using Markov chain Monte Carlo (MCMC) sampling based on flux balance analysis (FBA), we sample random viable metabolic networks from a known biochemical universe, with the networks differing in the number of directly constrained environments. We find that the latent versatility of sampled metabolic networks increases with the number of directly constrained environments and with the size of the networks. We then show that the average carbon wastage of sampled metabolic networks across the constrained environments decreases with the number of directly constrained environments and with the size of the networks. Our work expands the growing body of evidence about nonadaptive origins of key functional properties of biological networks.
Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties
NASA Astrophysics Data System (ADS)
Lazzaro, D.; Loli Piccolomini, E.; Zama, F.
2016-10-01
This work addresses the problem of magnetic resonance image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Łojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and of the penalization parameters by means of a continuation technique allows us to obtain good quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
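The reweighting strategy can be illustrated on a simpler surrogate. Below, a nonconvex sparsity penalty is handled by repeatedly solving weighted soft-thresholding problems with weights w_i = 1/(|x_i| + ε) on a toy denoising problem; the paper applies the same idea to image gradients (total variation) with a continuation on the parameters.

```python
# Hedged sketch of iterative reweighting for a nonconvex sparsity penalty:
# solve a sequence of weighted-l1 problems min_x 0.5*(x - y)^2 + lam*w*|x|
# coordinate-wise by soft-thresholding, updating w_i = 1/(|x_i| + eps).
# This is a toy surrogate for the TV-penalised MRI problem.

def soft(v, t):
    """Soft-thresholding operator, the prox of t*|.|."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def reweighted_sparse_denoise(y, lam=0.4, eps=0.1, n_outer=5):
    x = list(y)
    for _ in range(n_outer):
        w = [1.0 / (abs(xi) + eps) for xi in x]        # reweighting step
        x = [soft(yi, lam * wi) for yi, wi in zip(y, w)]
    return x

y = [3.0, 0.05, -2.0, 0.2, 0.0]
x = reweighted_sparse_denoise(y)
```

Small entries are driven exactly to zero while large entries are barely shrunk, which is the qualitative behavior a nonconvex l0-like penalty aims for.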
NASA Astrophysics Data System (ADS)
Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
2004-08-01
An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction, based on a priori-known information about the formation of the electron bunch. Application of the method is illustrated with the practically important example of a bunch formed in a single bunch compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.
Reinforcement learning solution for HJB equation arising in constrained optimal control problem.
Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong
2015-11-01
The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations.
Analyses of deep mammalian sequence alignments and constraint predictions for 1% of the human genome
Margulies, Elliott H.; Cooper, Gregory M.; Asimenos, George; Thomas, Daryl J.; Dewey, Colin N.; Siepel, Adam; Birney, Ewan; Keefe, Damian; Schwartz, Ariel S.; Hou, Minmei; Taylor, James; Nikolaev, Sergey; Montoya-Burgos, Juan I.; Löytynoja, Ari; Whelan, Simon; Pardi, Fabio; Massingham, Tim; Brown, James B.; Bickel, Peter; Holmes, Ian; Mullikin, James C.; Ureta-Vidal, Abel; Paten, Benedict; Stone, Eric A.; Rosenbloom, Kate R.; Kent, W. James; Bouffard, Gerard G.; Guan, Xiaobin; Hansen, Nancy F.; Idol, Jacquelyn R.; Maduro, Valerie V.B.; Maskeri, Baishali; McDowell, Jennifer C.; Park, Morgan; Thomas, Pamela J.; Young, Alice C.; Blakesley, Robert W.; Muzny, Donna M.; Sodergren, Erica; Wheeler, David A.; Worley, Kim C.; Jiang, Huaiyang; Weinstock, George M.; Gibbs, Richard A.; Graves, Tina; Fulton, Robert; Mardis, Elaine R.; Wilson, Richard K.; Clamp, Michele; Cuff, James; Gnerre, Sante; Jaffe, David B.; Chang, Jean L.; Lindblad-Toh, Kerstin; Lander, Eric S.; Hinrichs, Angie; Trumbower, Heather; Clawson, Hiram; Zweig, Ann; Kuhn, Robert M.; Barber, Galt; Harte, Rachel; Karolchik, Donna; Field, Matthew A.; Moore, Richard A.; Matthewson, Carrie A.; Schein, Jacqueline E.; Marra, Marco A.; Antonarakis, Stylianos E.; Batzoglou, Serafim; Goldman, Nick; Hardison, Ross; Haussler, David; Miller, Webb; Pachter, Lior; Green, Eric D.; Sidow, Arend
2007-01-01
A key component of the ongoing ENCODE project involves rigorous comparative sequence analyses for the initially targeted 1% of the human genome. Here, we present orthologous sequence generation, alignment, and evolutionary constraint analyses of 23 mammalian species for all ENCODE targets. Alignments were generated using four different methods; comparisons of these methods reveal large-scale consistency but substantial differences in terms of small genomic rearrangements, sensitivity (sequence coverage), and specificity (alignment accuracy). We describe the quantitative and qualitative trade-offs concomitant with alignment method choice and the levels of technical error that need to be accounted for in applications that require multisequence alignments. Using the generated alignments, we identified constrained regions using three different methods. While the different constraint-detecting methods are in general agreement, there are important discrepancies relating to both the underlying alignments and the specific algorithms. However, by integrating the results across the alignments and constraint-detecting methods, we produced constraint annotations that were found to be robust based on multiple independent measures. Analyses of these annotations illustrate that most classes of experimentally annotated functional elements are enriched for constrained sequences; however, large portions of each class (with the exception of protein-coding sequences) do not overlap constrained regions. The latter elements might not be under primary sequence constraint, might not be constrained across all mammals, or might have expendable molecular functions. Conversely, 40% of the constrained sequences do not overlap any of the functional elements that have been experimentally identified. Together, these findings demonstrate and quantify how many genomic functional elements await basic molecular characterization.
NASA Astrophysics Data System (ADS)
Roychoudhury, Subhayan; O'Regan, David D.; Sanvito, Stefano
2018-05-01
Pulay terms arise in the Hellmann-Feynman forces in electronic-structure calculations when one employs a basis set made of localized orbitals that move with their host atoms. If the total energy of the system depends on a subspace population defined in terms of the localized orbitals across multiple atoms, then unconventional Pulay terms will emerge due to the variation of the orbital nonorthogonality with ionic translation. Here, we derive the required exact expressions for such terms, which cannot be eliminated by orbital orthonormalization. We have implemented these corrected ionic forces within the linear-scaling density functional theory (DFT) package onetep, and we have used constrained DFT to calculate the reorganization energy of a pentacene molecule adsorbed on a graphene flake. The calculations are performed by including ensemble DFT, corrections for periodic boundary conditions, and empirical van der Waals interactions. For this system we find that tensorially invariant population analysis yields an adsorbate subspace population that is very close to integer-valued when based upon nonorthogonal Wannier functions, and also, though less precisely, when using pseudoatomic functions. Thus, orbitals can provide a very effective population analysis for constrained DFT. Our calculations show that the reorganization energy of the adsorbed pentacene is typically lower than that of pentacene in the gas phase. We attribute this effect to steric hindrance.
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
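The truncation step has a closed form in one dimension. For a scalar Gaussian estimate N(μ, σ²) confined to an interval [lo, hi], the constrained estimate is the mean of the truncated density, computable from the standard normal pdf and cdf; the numbers below are illustrative, not from the turbofan model.

```python
import math

# One-dimensional version of the PDF-truncation step: the mean of a
# Gaussian estimate N(mu, sigma^2) after truncating its density to the
# constraint interval [lo, hi]. The filter described above applies this
# idea to the (transformed) state estimate at each update.

def norm_pdf(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def norm_cdf(u):
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def truncated_mean(mu, sigma, lo, hi):
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    zmass = norm_cdf(b) - norm_cdf(a)          # probability mass kept
    return mu + sigma * (norm_pdf(a) - norm_pdf(b)) / zmass

# An unconstrained estimate of -0.3 with unit variance, subject to the
# constraint x >= 0, is pulled to a positive constrained estimate.
x_constrained = truncated_mean(-0.3, 1.0, 0.0, 1e9)
```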
Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan
2013-08-22
Within the framework of constrained density functional theory (CDFT), the diabatic or charge localized states of electron transfer (ET) have been constructed. Based on the diabatic states, inner reorganization energy λin has been directly calculated. For solvent reorganization energy λs, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression of λs has been formulated. It is found that λs is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm⁻¹ for TCNE/TCNE⁻ and 5939 cm⁻¹ for TTF/TTF⁺ reactions, agreeing well with available experimental results of 7250 cm⁻¹ and 5810 cm⁻¹, respectively.
NASA Astrophysics Data System (ADS)
Gilliot, Mickaël; Hadjadj, Aomar; Stchakovsky, Michel
2017-11-01
An original method of ellipsometric data inversion is proposed based on the use of constrained splines. The imaginary part of the dielectric function is represented by a series of splines, constructed with particular constraints on slopes at the node boundaries to avoid well-known oscillations of natural splines. The nodes are used as fit parameters. The real part is calculated using Kramers-Kronig relations. The inversion can be performed in successive inversion steps with increasing resolution. This method is used to characterize thin zinc oxide layers obtained by a sol-gel and spin-coating process, with a particular recipe yielding very thin layers presenting nano-porosity. Such layers have particular optical properties correlated with thickness, morphological and structural properties. The use of the constrained spline method is particularly efficient for such materials which may not be easily represented by standard dielectric function models.
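The slope-constraint idea can be illustrated with Fritsch-Carlson (PCHIP-style) slope limiting, where node derivatives are chosen so the cubic Hermite interpolant cannot overshoot the data. This is a generic sketch of constraining slopes at node boundaries, not the authors' particular constraint scheme:

```python
def pchip_slopes(x, y):
    """Fritsch-Carlson node slopes: limit derivatives so the cubic
    Hermite interpolant preserves monotonicity (no spurious wiggles)."""
    n = len(x)
    d = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]  # secant slopes
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] <= 0:
            m[i] = 0.0  # local extremum: a flat slope avoids overshoot
        else:
            m[i] = 2 * d[i - 1] * d[i] / (d[i - 1] + d[i])  # harmonic mean of secants
    return m

def hermite_eval(x, y, m, t):
    """Evaluate the cubic Hermite interpolant with node slopes m at t."""
    i = 0
    while i < len(x) - 2 and t > x[i + 1]:
        i += 1
    h = x[i + 1] - x[i]
    s = (t - x[i]) / h
    h00 = (1 + 2 * s) * (1 - s) ** 2
    h10 = s * (1 - s) ** 2
    h01 = s * s * (3 - 2 * s)
    h11 = s * s * (s - 1)
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]
```

A natural cubic spline through the same nodes can oscillate outside the data range; the limited slopes above guarantee the interpolant stays between neighboring node values.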
Stretched hydrogen molecule from a constrained-search density-functional perspective
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valone, Steven M; Levy, Mel
2009-01-01
Constrained-search density functional theory gives valuable insights into the fundamentals of density functional theory. It provides exact results and bounds on the ground- and excited-state density functionals. An important advantage of the theory is that it gives guidance in the construction of functionals. Here they engage constrained-search theory to explore issues associated with the functional behavior of 'stretched bonds' in molecular hydrogen. A constrained search is performed with familiar valence bond wavefunctions ordinarily used to describe molecular hydrogen. The effective, one-electron Hamiltonian is computed and compared to the corresponding uncorrelated, Hartree-Fock effective Hamiltonian. Analysis of the functional suggests the need to construct different functionals for the same density and to allow a competition among these functionals. As a result, the correlation energy functional is composed explicitly of energy gaps from the different functionals.
Numerical methods for the inverse problem of density functional theory
Jensen, Daniel S.; Wasserman, Adam
2017-07-17
Here, the inverse problem of Kohn–Sham density functional theory (DFT) is often solved in an effort to benchmark and design approximate exchange-correlation potentials. The forward and inverse problems of DFT rely on the same equations but the numerical methods for solving each problem are substantially different. We examine both problems in this tutorial with a special emphasis on the algorithms and error analysis needed for solving the inverse problem. Two inversion methods based on partial differential equation constrained optimization and constrained variational ideas are introduced. We compare and contrast several different inversion methods applied to one-dimensional finite and periodic model systems.
Sun, Jared H; Shing, Rachel; Twomey, Michele; Wallis, Lee A
2014-01-01
Resource-constrained countries are in extreme need of pre-hospital emergency care systems. However, current popular strategies for providing pre-hospital emergency care are inappropriate for, and beyond the means of, a resource-constrained country, so new strategies are needed: ones that can function in an under-developed area's particular context and be carried out with the area's limited resources. In this study, we used a two-location pilot and consensus approach to develop a strategy to implement and support pre-hospital emergency care in one such developing, resource-constrained area: the Western Cape province of South Africa. Local community members are trained to be emergency first aid responders who can provide immediate, on-scene care until a Transporter can take the patient to the hospital. Management of the system is done through local Community Based Organizations, which can adapt the model to their communities as needed to ensure local appropriateness and feasibility. Within a community, the system is implemented in a graduated manner based on available resources, and is designed to provide partial function without requiring that the whole system be implemented first. The University of Cape Town's Division of Emergency Medicine and the Western Cape's provincial METRO EMS intend to follow this model, along with sharing it with other South African provinces. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping
2018-01-01
An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imam, Neena; Koenig, Gregory A; Machovec, Dylan
2016-01-01
Abstract: The worth of completing parallel tasks is modeled using utility functions, which monotonically decrease with time and represent the importance and urgency of a task. These functions define the utility earned by a task at the time of its completion. The performance of such a system is measured as the total utility earned by all completed tasks over some interval of time (e.g., 24 hours). To maximize system performance when scheduling dynamically arriving parallel tasks onto a high performance computing (HPC) system that is oversubscribed and energy-constrained, we have designed, analyzed, and compared different heuristic techniques. Four utility-aware heuristics (i.e., Max Utility, Max Utility-per-Time, Max Utility-per-Resource, and Max Utility-per-Energy), three FCFS-based heuristics (Conservative Backfilling, EASY Backfilling, and FCFS with Multiple Queues), and a Random heuristic were examined in this study. A technique that is often used with the FCFS-based heuristics is the concept of a permanent reservation. We compare the performance of permanent reservations with temporary place-holders to demonstrate the advantages that place-holders can provide. We also present a novel energy filtering technique that constrains the maximum energy-per-resource used by each task. We conducted a simulation study to evaluate the performance of these heuristics and techniques in an energy-constrained oversubscribed HPC environment. With place-holders, energy filtering, and dropping tasks with low potential utility, our utility-aware heuristics are able to significantly outperform the existing FCFS-based techniques.
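The flavor of a utility-aware heuristic such as Max Utility-per-Resource can be sketched as a greedy pass over waiting tasks; the task names and tuple layout below are illustrative, and the paper's heuristics also account for task runtimes, energy, and reservations:

```python
def max_utility_per_resource(tasks, free_nodes):
    """Greedy sketch of a 'Max Utility-per-Resource' heuristic: repeatedly
    start the waiting task with the highest utility/nodes ratio that fits
    in the currently free nodes.  `tasks` is a list of
    (name, current_utility, nodes_needed) tuples."""
    order = sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)
    started = []
    for name, util, nodes in order:
        if nodes <= free_nodes:
            free_nodes -= nodes
            started.append(name)
    return started

# With 8 free nodes, "b" (ratio 4.5) and "a" (ratio 2.5) start; "c" does not fit:
started = max_utility_per_resource([("a", 10, 4), ("b", 9, 2), ("c", 6, 4)], 8)
```

Because utilities decay with time, a real scheduler would re-evaluate these ratios at every scheduling event rather than once.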
Constrained multiple indicator kriging using sequential quadratic programming
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Erhan Tercan, A.
2012-11-01
Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDF). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF which is ordered and bounded between 0 and 1. In this paper a new method has been presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff under unbiasedness and order relations constraints and solving constrained indicator kriging system by sequential quadratic programming. A computer code is written in the Matlab environment to implement the developed algorithm and the method is applied to the thickness data.
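The order-relation requirement itself is easy to state: the CCDF estimates across cutoffs must be nondecreasing and lie in [0, 1]. As a simple post-processing illustration (the paper instead enforces the constraints inside the kriging system via sequential quadratic programming), the pool-adjacent-violators algorithm projects raw estimates onto the nearest valid sequence in the least-squares sense:

```python
def correct_ccdf(est):
    """Project raw indicator-kriging estimates onto a valid CCDF:
    nondecreasing across cutoffs and bounded in [0, 1].  Uses the
    pool-adjacent-violators algorithm (least-squares isotonic fit),
    shown here as a post-processing alternative to the paper's
    SQP-constrained kriging system."""
    merged = []  # stack of [block mean, block size]
    for v in est:
        merged.append([v, 1])
        # Pool adjacent blocks whose means violate the ordering.
        while len(merged) > 1 and merged[-2][0] >= merged[-1][0]:
            m2, s2 = merged.pop()
            m1, s1 = merged.pop()
            merged.append([(m1 * s1 + m2 * s2) / (s1 + s2), s1 + s2])
    out = []
    for mean, size in merged:
        out.extend([min(1.0, max(0.0, mean))] * size)  # clip to [0, 1]
    return out
```

For example, the invalid sequence [0.2, 0.1, 0.5, 1.2] becomes [0.15, 0.15, 0.5, 1.0].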
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
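The conversion can be illustrated on a toy problem: for min x² subject to x − 1 = 0, the necessary conditions (stationarity of the Lagrangian plus feasibility) are turned into an unconstrained minimization of their squared residual over (x, λ). A crude grid search stands in for the GA here; the problem and variable names are illustrative, not taken from the paper:

```python
def kkt_residual(x, lam):
    """Squared residual of the necessary conditions for
    min x^2  subject to  x - 1 = 0, with Lagrangian
    L(x, lam) = x^2 + lam*(x - 1).  The residual vanishes exactly
    at the constrained minimum."""
    stationarity = 2 * x + lam  # dL/dx = 0 at the solution
    feasibility = x - 1         # constraint g(x) = 0
    return stationarity ** 2 + feasibility ** 2

# Crude global search over a grid (a GA would play this role in the paper):
best = min((kkt_residual(x / 100, l / 100), x / 100, l / 100)
           for x in range(-300, 301) for l in range(-300, 301))
# best = (residual, x*, lambda*): x* = 1.0 with multiplier lambda* = -2.0
```

Minimizing the residual rather than the objective means any unconstrained minimizer, including a GA, can locate the constrained solution.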
NASA Astrophysics Data System (ADS)
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC); the other is the power economic dispatch (ED) for each unit. Accurately solving the ED is important for enhancing the efficiency of the overall SCUC solution once feasible unit states are fixed. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, both based on linear programming and a piecewise linear approximation of the nonlinear convex fuel cost functions, are proposed for solving the ED. Numerical testing results show that the methods are effective and efficient.
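Once convex fuel-cost curves are piecewise-linearized, a simple property holds: loading segments in increasing incremental-cost (merit) order is optimal for the linearized dispatch. The sketch below shows that merit-order filling, as a baseline illustration rather than either of the paper's two LP formulations; the unit names and numbers are made up:

```python
def dispatch(segments, demand):
    """Greedy merit-order dispatch for piecewise-linearized convex costs.
    `segments` is a list of (unit, capacity_MW, incremental_cost).
    Because the original fuel-cost curves are convex, each unit's
    segments appear in increasing cost order, so filling the globally
    cheapest segments first is optimal for the linearized problem."""
    output = {}
    for unit, cap, cost in sorted(segments, key=lambda s: s[2]):
        take = min(cap, demand)          # load this segment as far as needed
        output[unit] = output.get(unit, 0.0) + take
        demand -= take
        if demand <= 0:
            break
    return output

# Two units, three segments, 120 MW of demand:
out = dispatch([("g1", 50, 10), ("g1", 50, 20), ("g2", 60, 15)], 120)
```

Real SCUC subproblems add ramp, reserve, and network constraints, which is why an LP (as in the paper) is needed rather than this greedy pass.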
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied in non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert the constrained problem into an unconstrained one by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses.
A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
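A minimal sketch of the penalty-conversion step, assuming a static quadratic exterior penalty (one of the simplest formulations in the literature; the study compares several, and the toy problem and names below are illustrative):

```python
def penalized(f, constraints, r):
    """Build an unconstrained objective from f and inequality constraints
    g_i(x) <= 0 using a static quadratic exterior penalty: infeasible
    points pay r * sum(max(0, g_i(x))^2), so a GA that minimizes fp is
    steered back toward the feasible region."""
    def fp(x):
        return f(x) + r * sum(max(0.0, g(x)) ** 2 for g in constraints)
    return fp

# min x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0:
fp = penalized(lambda x: x * x, [lambda x: 1.0 - x], r=1000.0)
xs = [i / 1000 for i in range(-2000, 2001)]
x_best = min(xs, key=fp)  # near the constrained optimum x = 1
```

A known weakness of the static form, visible even here, is that the penalized minimizer sits slightly inside the infeasible region unless r grows large, which is one motivation for the adaptive schemes the study analyzes.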
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm.
JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
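The Pareto-optimality criterion at the heart of such a method is just a dominance test over objective vectors. A minimal sketch of that test alone (JuPOETs couples it with simulated annealing; this is not its full search loop, and Python is used here rather than Julia for consistency with the other sketches):

```python
def pareto_front(points):
    """Return the nondominated subset of objective vectors, assuming
    minimization in every objective.  An ensemble technique keeps
    parameter sets whose objective vectors lie on or near this
    tradeoff surface."""
    def dominates(a, b):
        # a dominates b if it is no worse everywhere and strictly better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (2, 3) is dominated by (2, 2); the other three points form the front:
front = pareto_front([(1, 3), (2, 2), (3, 1), (2, 3)])
```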
Neural network-based systems for handprint OCR applications.
Ganis, M D; Wilson, C L; Blue, J L
1998-01-01
Over the last five years or so, neural network (NN)-based approaches have been steadily gaining performance and popularity for a wide range of optical character recognition (OCR) problems, from isolated digit recognition to handprint recognition. We present an NN classification scheme based on an enhanced multilayer perceptron (MLP) and describe an end-to-end system for form-based handprint OCR applications designed by the National Institute of Standards and Technology (NIST) Visual Image Processing Group. The enhancements to the MLP are based on (i) neuron activation functions that reduce the occurrences of singular Jacobians; (ii) successive regularization to constrain the volume of the weight space; and (iii) Boltzmann pruning to constrain the dimension of the weight space. Performance characterization studies of NN systems evaluated at the first OCR systems conference and the NIST form-based handprint recognition system are also summarized.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
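The prediction-correction template can be shown on the simplest possible time-varying problem, f(x; t) = (x − a(t))²: predict by following the drift of the optimizer, then correct with one gradient step on the new cost. This is a toy 1-D illustration of the template only, not the paper's Hessian-based first-order prediction, and the function names are illustrative:

```python
def track(a, x0, dt=0.1, alpha=0.3):
    """Prediction-correction sketch for tracking the minimizer of
    f(x; t) = (x - a(t))^2 as t advances in steps of dt.
    Returns the sequence of tracking errors |x_k - a(t_k)|."""
    errors, x, prev_opt, t = [], x0, a(0.0), dt
    for _ in range(100):
        opt = a(t)
        x = x + (opt - prev_opt)            # prediction: follow optimizer drift
        x = x - alpha * 2.0 * (x - opt)     # correction: gradient step on f(., t)
        errors.append(abs(x - opt))         # tracking error at time t
        prev_opt, t = opt, t + dt
    return errors

# Track a ramping optimum a(t) = 1 + 0.5 t from a cold start:
errors = track(lambda t: 1.0 + 0.5 * t, x0=0.0)
```

Because the prediction cancels the drift, each correction shrinks the error by a constant factor, so the tracking error decays geometrically instead of stalling at a drift-dependent floor.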
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase performance and benefits of the algorithms.
A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System
NASA Astrophysics Data System (ADS)
Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang
2018-01-01
This paper proposes a new decomposition method for the security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is to use Ward equivalencing-based network reduction to reduce the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is to develop a price response function to exchange signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in less iterations and less computational time. The case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
Wang, Yong; Wang, Bing-Chuan; Li, Han-Xiong; Yen, Gary G
2016-12-01
When solving constrained optimization problems by evolutionary algorithms, an important issue is how to balance constraints and objective function. This paper presents a new method to address the above issue. In our method, after generating an offspring for each parent in the population by making use of differential evolution (DE), the well-known feasibility rule is used to compare the offspring and its parent. Since the feasibility rule prefers constraints to objective function, the objective function information has been exploited as follows: if the offspring cannot survive into the next generation and if the objective function value of the offspring is better than that of the parent, then the offspring is stored into a predefined archive. Subsequently, the individuals in the archive are used to replace some individuals in the population according to a replacement mechanism. Moreover, a mutation strategy is proposed to help the population jump out of a local optimum in the infeasible region. Note that, in the replacement mechanism and the mutation strategy, the comparison of individuals is based on objective function. In addition, the information of objective function has also been utilized to generate offspring in DE. By the above processes, this paper achieves an effective balance between constraints and objective function in constrained evolutionary optimization. The performance of our method has been tested on two sets of benchmark test functions, namely, 24 test functions at IEEE CEC2006 and 18 test functions with 10-D and 30-D at IEEE CEC2010. The experimental results have demonstrated that our method shows better or at least competitive performance against other state-of-the-art methods. Furthermore, the advantage of our method increases with the increase of the number of decision variables.
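The feasibility rule the paper builds on can be written as a small comparator; this is the standard three-case rule (feasible beats infeasible; two feasibles compare by objective; two infeasibles compare by total violation), not the paper's archive or replacement mechanism, and the pair representation is illustrative:

```python
def feasibility_rule_better(a, b):
    """The well-known feasibility rule used to compare two individuals:
    (1) a feasible individual beats an infeasible one,
    (2) two feasible individuals compare by objective value,
    (3) two infeasible individuals compare by total constraint violation.
    a and b are (objective_value, constraint_violation) pairs, with
    violation == 0 meaning feasible; minimization is assumed.
    Returns True when a is strictly better than b."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:
        return fa < fb        # both feasible: lower objective wins
    if va == 0 or vb == 0:
        return va == 0        # feasibility beats infeasibility
    return va < vb            # both infeasible: smaller violation wins
```

The rule never consults the objective when either individual is infeasible, which is exactly the bias toward constraints that the paper's archive and mutation strategy are designed to counterbalance.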
NASA Astrophysics Data System (ADS)
Guo, Hong; Yang, Xiaohu; Lu, Yi
2018-05-01
We propose a novel method to constrain the missing fraction of galaxies using galaxy clustering measurements in the galaxy conditional stellar mass function (CSMF) framework, which is applicable to surveys that suffer significantly from sample selection effects. The clustering measurements, which are not sensitive to the random sampling (missing fraction) of galaxies, are widely used to constrain the stellar–halo mass relation (SHMR). By incorporating a missing fraction (incompleteness) component into the CSMF model (ICSMF), we use the incomplete stellar mass function and galaxy clustering to simultaneously constrain the missing fractions and the SHMRs. Tests based on mock galaxy catalogs with a few typical missing fraction models show that this method can accurately recover the missing fraction and the galaxy SHMR, hence providing us with reliable measurements of the galaxy stellar mass functions. We then apply it to the Baryon Oscillation Spectroscopic Survey (BOSS) over the redshift range of 0.1 < z < 0.8 for galaxies of M* > 10^11 M⊙. We find that the sample completeness for BOSS is over 80% at z < 0.6 but decreases at higher redshifts to about 30%. After taking these completeness factors into account, we provide accurate measurements of the stellar mass functions for galaxies with 10^11 M⊙ < M* < 10^12 M⊙, as well as the SHMRs, over the redshift range 0.1 < z < 0.8 in this largest galaxy redshift survey.
Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.
Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo
2017-10-01
This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses the learning automaton scheme to generate the action probability distribution based on their private information to maximize their own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas
2015-01-01
Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength in patient and control groups were observed with Mann-Whitney Tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted seizure onset zone as shown in ROC curves: discriminant analysis (two-dimensional) predicted seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.
Periodic Forced Response of Structures Having Three-Dimensional Frictional Constraints
NASA Astrophysics Data System (ADS)
CHEN, J. J.; YANG, B. D.; MENQ, C. H.
2000-01-01
Many mechanical systems have moving components that are mutually constrained through frictional contacts. When subjected to cyclic excitations, a contact interface may undergo constant changes among sticks, slips and separations, which leads to very complex contact kinematics. In this paper, a 3-D friction contact model is employed to predict the periodic forced response of structures having 3-D frictional constraints. Analytical criteria based on this friction contact model are used to determine the transitions among sticks, slips and separations of the friction contact, and subsequently the constrained force which consists of the induced stick-slip friction force on the contact plane and the contact normal load. The resulting constrained force is often a periodic function and can be considered as a feedback force that influences the response of the constrained structures. By using the Multi-Harmonic Balance Method along with Fast Fourier Transform, the constrained force can be integrated with the receptance of the structures so as to calculate the forced response of the constrained structures. It results in a set of non-linear algebraic equations that can be solved iteratively to yield the relative motion as well as the constrained force at the friction contact. This method is used to predict the periodic response of a frictionally constrained 3-d.o.f. oscillator. The predicted results are compared with those of the direct time integration method so as to validate the proposed method. In addition, the effect of super-harmonic components on the resonant response and jump phenomenon is examined.
Combined radar-radiometer surface soil moisture and roughness estimation
USDA-ARS?s Scientific Manuscript database
A robust physics-based combined radar-radiometer, or Active-Passive, surface soil moisture and roughness estimation methodology is presented. Soil moisture and roughness retrieval is performed via optimization, i.e., minimization, of a joint objective function which constrains similar resolution rad...
Measurement of Residual Flexibility for Substructures Having Prominent Flexible Interfaces
NASA Technical Reports Server (NTRS)
Tinker, Michael L.; Bookout, Paul S.
1994-01-01
Verification of a dynamic model of a constrained structure requires a modal survey test of the physical structure and subsequent modification of the model to obtain the best agreement possible with test data. Constrained-boundary or fixed-base testing has historically been the most common approach for verifying constrained mathematical models, since the boundary conditions of the test article are designed to match the actual constraints in service. However, there are difficulties involved with fixed-base testing, in some cases making the approach impractical. It is not possible to conduct a truly fixed-base test due to coupling between the test article and the fixture. In addition, it is often difficult to accurately simulate the actual boundary constraints, and the cost of designing and constructing the fixture may be prohibitive. For use when fixed-base testing proves impractical or undesirable, alternate free-boundary test methods have been investigated, including the residual flexibility technique. The residual flexibility approach has been treated analytically in considerable detail, but few frequency response measurements have been made to support the method. Concern over such measurements is well-justified for a number of reasons. First, residual flexibilities are very small numbers, typically on the order of 1.0E-6 in/lb for translational diagonal terms, and orders of magnitude smaller for off-diagonal values. This poses difficulty in obtaining accurate and noise-free measurements, especially for points removed from the excitation source. A second difficulty encountered in residual measurements lies in obtaining a clean residual function in the process of subtracting synthesized modal data from a measured response function. Inaccuracies occur since modes are not subtracted exactly, but only to the accuracy of the curve fits for each mode; these errors are compounded with increasing distance from the excitation point.
In this paper, the residual flexibility method is applied to a simple structure in both test and analysis. Measured and predicted residual functions are compared, and regions of poor data in the measured curves are described. It is found that for accurate residual measurements, frequency response functions having prominent stiffness lines in the acceleration/force format are needed. The lack of such stiffness lines increases measurement errors. Interface drive-point frequency response functions for shuttle orbiter payloads exhibit dominant stiffness lines, making the residual test approach a good candidate for payload modal tests when constrained tests are inappropriate. Difficulties in extracting a residual flexibility value from noisy test data are discussed. It is shown that use of a weighted second-order least-squares curve fit of the measured residual function allows identification of residual flexibility that compares very well with predictions for the simple structure. This approach also provides an estimate of second-order residual mass effects.
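The weighted second-order least-squares extraction of residual flexibility described above can be sketched numerically. The snippet below is an illustrative toy, not the paper's procedure: the synthetic residual function, the 5% noise level and the 1/ω² weighting are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measured residual function": R(omega) = R0 + R2*omega^2, where
# R0 is the residual flexibility (~1e-6 in/lb) and R2 carries the
# second-order residual mass effect. 5% multiplicative noise is added.
freq = np.linspace(20.0, 200.0, 200)              # Hz
omega = 2.0 * np.pi * freq
R0_true, R2_true = 1.0e-6, 3.0e-12
measured = (R0_true + R2_true * omega**2) * (1.0 + 0.05 * rng.standard_normal(freq.size))

# Weighted second-order least-squares fit: rows scaled by sqrt(w), with
# w = 1/omega^2 to emphasize the stiffness-line (low-frequency) region.
sw = np.sqrt(1.0 / omega**2)
A = np.column_stack([np.ones_like(omega), omega**2])
coef, *_ = np.linalg.lstsq(A * sw[:, None], measured * sw, rcond=None)
residual_flexibility, residual_mass_term = coef
```

The constant term of the fit recovers the residual flexibility, while the ω² coefficient provides the estimate of second-order residual mass effects mentioned in the abstract.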
Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong
2018-06-04
The characteristics of an estuary are determined by various factors, such as tide, waves and river discharge, which also control the water quality of the estuary. Therefore, detecting changes in these characteristics is critical to managing environmental quality and pollution, and so the monitoring locations should be selected carefully. The present study proposes a framework for deploying monitoring systems based on a graphical method of spatial and temporal optimization. With well-validated numerical simulation results, the monitoring locations are determined to capture the changes in water quality and pollutants depending on the variations of tide, current and freshwater discharge. The deployment strategy to find the appropriate monitoring locations is designed with the constrained optimization method, which finds solutions by constraining the objective function to the feasible regions. The objective and constraint functions are constructed with an interpolation technique such as objective analysis. Even with a smaller number of monitoring locations, the present method performs equivalently to an arbitrarily and evenly deployed monitoring system. Copyright © 2018 Elsevier Ltd. All rights reserved.
Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection
NASA Astrophysics Data System (ADS)
Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan
2017-08-01
Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD and a distance metric based on galaxy number density, the two-point correlation function and the galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
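As a minimal sketch of the likelihood-free idea, the following ABC rejection sampler constrains the mean of a toy Gaussian model. The HOD forward model, the three-part distance metric and the population Monte Carlo refinement used in the paper are not reproduced here; the prior range and tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: "observed data" summarized by its mean; the forward model
# draws data given a parameter theta; the distance compares summaries.
theta_true = 2.0
observed = rng.normal(theta_true, 1.0, size=500)
obs_summary = observed.mean()

def forward_model(theta):
    return rng.normal(theta, 1.0, size=500)

def distance(sim):
    return abs(sim.mean() - obs_summary)

# ABC rejection: draw theta from the prior, keep it if the simulated
# summary lands within epsilon of the observed summary.
epsilon = 0.05
prior_draws = rng.uniform(-5.0, 5.0, size=20000)
accepted = [t for t in prior_draws if distance(forward_model(t)) < epsilon]
posterior_mean = float(np.mean(accepted))
```

The accepted draws approximate the posterior without ever writing down a likelihood, which is the property exploited for the HOD constraints above.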
Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping
2017-01-01
Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained nature of the underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-squares support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the weights of the particles. Thus, the particle effectiveness is enhanced to avoid the “particle degeneracy” problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select a subset of sensor nodes to improve the communication efficiency of the sensor network. PMID:29267252
Ziegler, Tom; Krykunov, Mykhaylo
2010-08-21
It is well known that time-dependent density functional theory (TD-DFT) based on standard gradient-corrected functionals affords a quantitatively and qualitatively incorrect picture of charge transfer transitions between two spatially separated regions. It is shown here that this well-known failure can be traced back to the use of linear response theory. Further, it is demonstrated that the inclusion of higher-order terms readily affords a qualitatively correct picture even for simple functionals based on the local density approximation. The inclusion of these terms is done within the framework of a newly developed variational approach to excitation energies called constrained variational density functional theory (CV-DFT). To second order [CV(2)-DFT] this theory is identical to adiabatic TD-DFT within the Tamm-Dancoff approximation. With inclusion of fourth-order corrections [CV(4)-DFT] it affords a qualitatively correct description of charge transfer transitions. It is finally demonstrated that the relaxation of the ground state Kohn-Sham orbitals to first order in response to the change in density on excitation, together with CV(4)-DFT, affords charge transfer excitations in good agreement with experiment. The new relaxed theory is termed R-CV(4)-DFT. The relaxed scheme represents an effective way in which to introduce double replacements into the description of single electron excitations, something that would otherwise require a frequency dependent kernel.
NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide
NASA Technical Reports Server (NTRS)
Miura, H.; Schmit, L. A., Jr.
1979-01-01
A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear, constrained and unconstrained function minimization problems is presented. The algorithm performs a sequence of unconstrained minimizations, using Newton's method for each unconstrained function minimization. The use of NEWSUMT and the definition of all parameters are described.
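The sequence-of-unconstrained-minimizations idea can be illustrated on a toy problem. This sketch uses a quadratic exterior penalty with Newton inner iterations for brevity; it is not the NEWSUMT FORTRAN code, whose penalty formulation differs.

```python
import numpy as np

# Toy problem: minimize x1^2 + x2^2 subject to g(x) = 1 - x1 - x2 <= 0,
# whose solution is (0.5, 0.5). A sequence of unconstrained minimizations
# is performed with an increasing quadratic exterior penalty parameter r.
def grad_hess(x, r):
    g = 1.0 - x[0] - x[1]
    viol = max(g, 0.0)
    grad = 2.0 * x + (2.0 * r * viol) * np.array([-1.0, -1.0])
    hess = 2.0 * np.eye(2)
    if g > 0.0:  # penalty curvature active only when the constraint is violated
        hess = hess + 2.0 * r * np.outer([-1.0, -1.0], [-1.0, -1.0])
    return grad, hess

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0, 10000.0]:
    for _ in range(50):                      # Newton iterations per subproblem
        grad, hess = grad_hess(x, r)
        x = x - np.linalg.solve(hess, grad)
        if np.linalg.norm(grad_hess(x, r)[0]) < 1e-10:
            break
```

Each unconstrained minimizer tracks the penalized optimum r/(1+2r) per coordinate, approaching the constrained solution from the infeasible side as r grows.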
Polynomial Size Formulations for the Distance and Capacity Constrained Vehicle Routing Problem
NASA Astrophysics Data System (ADS)
Kara, Imdat; Derya, Tusan
2011-09-01
The Distance and Capacity Constrained Vehicle Routing Problem (DCVRP) is an extension of the well-known Traveling Salesman Problem (TSP). DCVRP arises in distribution and logistics problems. Constructing new formulations for it is the main motivation and contribution of this paper. We focus on two-index integer programming formulations for DCVRP. One node-based and one arc (flow)-based formulation for DCVRP are presented. Both formulations have O(n²) binary variables and O(n²) constraints, i.e., the numbers of decision variables and constraints grow as a polynomial function of the number of nodes of the underlying graph. It is shown that the proposed arc-based formulation produces a better lower bound than the existing one (Water's formulation, as referred to in the paper). Finally, various problems from the literature are solved with the node-based and arc-based formulations by using CPLEX 8.0. Preliminary computational analysis shows that the arc-based formulation outperforms the node-based formulation in terms of linear programming relaxation.
International HRD Perspectives.
ERIC Educational Resources Information Center
1999
The first of the four papers in this symposium, "Towards a Meaningful HRD [Human Resource Development] Function in the Post-Command Economies of Central and Eastern Europe" (Devi Jankowicz), examines the existing knowledge-base among managers who are to be trained as HRD practitioners and suggests that efforts may be constrained by…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Q.; Ayers, P.W.; Zhang, Y.
2009-10-28
The first purely density-based energy decomposition analysis (EDA) for intermolecular binding is developed within density functional theory. The most important feature of this scheme is to variationally determine the frozen density energy, based on a constrained search formalism and implemented with the Wu-Yang algorithm [Q. Wu and W. Yang, J. Chem. Phys. 118, 2498 (2003)]. This variational process dispenses with the Heitler-London antisymmetrization of wave functions used in most previous methods and calculates the electrostatic and Pauli repulsion energies together without any distortion of the frozen density, an important fact that enables a clean separation of these two terms from the relaxation (i.e., polarization and charge transfer) terms. The new EDA also employs the constrained density functional theory approach [Q. Wu and T. Van Voorhis, Phys. Rev. A 72, 24502 (2005)] to separate out charge transfer effects. Because the charge transfer energy is based on the density flow in real space, it has a small basis set dependence. Applications of this decomposition to hydrogen bonding in the water dimer and the formamide dimer show that the frozen density energy dominates the binding in these systems, consistent with the noncovalent nature of the interactions. A more detailed examination reveals how the interplay of electrostatics and the Pauli repulsion determines the distance and angular dependence of these hydrogen bonds.
Current-State Constrained Filter Bank for Wald Testing of Spacecraft Conjunctions
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis
2012-01-01
We propose a filter bank consisting of an ordinary current-state extended Kalman filter, and two similar but constrained filters: one is constrained by a null hypothesis that the miss distance between two conjuncting spacecraft is inside their combined hard body radius at the predicted time of closest approach, and one is constrained by an alternative complementary hypothesis. The unconstrained filter is the basis of an initial screening for close approaches of interest. Once the initial screening detects a possibly risky conjunction, the unconstrained filter also governs measurement editing for all three filters, and predicts the time of closest approach. The constrained filters operate only when conjunctions of interest occur. The computed likelihoods of the innovations of the two constrained filters form a ratio for a Wald sequential probability ratio test. The Wald test guides risk mitigation maneuver decisions based on explicit false alarm and missed detection criteria. Since only current-state Kalman filtering is required to compute the innovations for the likelihood ratio, the present approach does not require the mapping of probability density forward to the time of closest approach. Instead, the hard-body constraint manifold is mapped to the filter update time by applying a sigma-point transformation to a projection function. Although many projectors are available, we choose one based on Lambert-style differential correction of the current-state velocity. We have tested our method using a scenario based on the Magnetospheric Multi-Scale mission, scheduled for launch in late 2014. This mission involves formation flight in highly elliptical orbits of four spinning spacecraft equipped with antennas extending 120 meters tip-to-tip. Eccentricities range from 0.82 to 0.91, and close approaches generally occur in the vicinity of perigee, where rapid changes in geometry may occur. 
Testing the method using two 12,000-case Monte Carlo simulations, we found the method achieved a missed detection rate of 0.1%, and a false alarm rate of 2%.
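The Wald sequential probability ratio test at the heart of the decision logic can be sketched in isolation. This toy uses scalar Gaussian observations rather than filter-bank innovation likelihoods; the thresholds follow Wald's classical approximations, seeded here with the 2% false-alarm and 0.1% missed-detection rates reported above.

```python
import math
import random

# Wald SPRT: decide between H0 (mean mu0) and H1 (mean mu1) for Gaussian
# observations, with thresholds set from the target error rates.
alpha, beta = 0.02, 0.001              # false-alarm and missed-detection rates
A = math.log((1.0 - beta) / alpha)     # accept-H1 threshold
B = math.log(beta / (1.0 - alpha))     # accept-H0 threshold

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0):
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Gaussian log-likelihood-ratio increment for one observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= A:
            return "H1", n
        if llr <= B:
            return "H0", n
    return "undecided", len(samples)

random.seed(42)
h1_data = [random.gauss(1.0, 1.0) for _ in range(200)]
h0_data = [random.gauss(0.0, 1.0) for _ in range(200)]
decision_h1, n1 = sprt(h1_data)
decision_h0, n0 = sprt(h0_data)
```

The test typically terminates after a handful of samples, which is what makes it attractive for sequential conjunction screening.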
Gillet, Natacha; Berstis, Laura; Wu, Xiaojing; ...
2016-09-09
In this paper, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. Finally, these four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.
Gillet, Natacha; Berstis, Laura; Wu, Xiaojing; Gajdos, Fruzsina; Heck, Alexander; de la Lande, Aurélien; Blumberger, Jochen; Elstner, Marcus
2016-10-11
In this article, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. These four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.
Zhang, Yongsheng; Wei, Heng; Zheng, Kangning
2017-01-01
Considering that metro network expansion provides more alternative routes, it is attractive to integrate the impacts of the route set and the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of the route, and the unobserved variance, respectively. Given the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes form, and a Metropolis-Hastings (M-H) sampling-based Markov chain Monte Carlo (MCMC) approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows a good forecasting performance for calculating route choice probabilities and a good application performance for transfer flow volume prediction. PMID:28591188
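The Metropolis-Hastings step underlying such MCMC estimation can be sketched for a toy one-dimensional target; this is the generic M-H building block, not the hierarchical Bayes formulation of the CMNP model.

```python
import math
import random

random.seed(7)

# Random-walk Metropolis-Hastings targeting a standard normal density.
def log_target(x):
    return -0.5 * x * x

x = 0.0
samples = []
for _ in range(50000):
    proposal = x + random.gauss(0.0, 1.0)
    # Accept with probability min(1, pi(proposal) / pi(x))
    if math.log(random.random()) < log_target(proposal) - log_target(x):
        x = proposal
    samples.append(x)

burned = samples[5000:]                 # discard burn-in
mean_est = sum(burned) / len(burned)
var_est = sum((s - mean_est) ** 2 for s in burned) / len(burned)
```

The same accept/reject mechanism applies when the target is a posterior known only up to a normalizing constant, which is why M-H sampling sidesteps the multidimensional probit integrals noted above.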
Wang, Sen; Wang, Weihong; Xiong, Shaofeng
2016-09-01
Considering a class of skid-to-turn (STT) missiles with a fixed target and constrained terminal impact angles, a novel three-dimensional (3D) integrated guidance and control (IGC) scheme is proposed in this paper. Based on the Coriolis theorem, the fully nonlinear IGC model is established in three-dimensional space without the assumption that the missile is heading toward the target at the initial time. For this strict-feedback multi-variable system, a dynamic surface control algorithm is implemented in combination with an extended state observer (ESO) to complete the preliminary design. Then, in order to deal with input constraints, a hyperbolic tangent function is introduced to approximate the saturation function, and an auxiliary system including a Nussbaum function is established to compensate for the approximation error. The stability of the closed-loop system is proven based on Lyapunov theory. Numerical simulation results show that the proposed integrated guidance and control algorithm can ensure the accuracy of target interception with initial alignment angle deviation, and that input saturation is suppressed with smooth deflection curves. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
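The hyperbolic tangent approximation of the saturation function mentioned above can be sketched directly; the saturation limit used here is an arbitrary illustrative value.

```python
import math

# Hard saturation with limit u_max, and its smooth approximation
# sat(u) ≈ u_max * tanh(u / u_max), which is differentiable everywhere.
u_max = 5.0

def sat(u):
    return max(-u_max, min(u_max, u))

def tanh_approx(u):
    return u_max * math.tanh(u / u_max)

# The two agree to first order near the origin and share the limits
# +/- u_max; the worst-case gap occurs near |u| = u_max.
max_err = max(abs(sat(u / 10.0) - tanh_approx(u / 10.0)) for u in range(-200, 201))
```

Because tanh is smooth, the bounded approximation error can be absorbed by an auxiliary compensation system, which is the role the Nussbaum-function construction plays in the scheme above.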
Functional coupling constrains craniofacial diversification in Lake Tanganyika cichlids
Tsuboi, Masahito; Gonzalez-Voyer, Alejandro; Kolm, Niclas
2015-01-01
Functional coupling, where a single morphological trait performs multiple functions, is a universal feature of organismal design. Theory suggests that functional coupling may constrain the rate of phenotypic evolution, yet empirical tests of this hypothesis are rare. In fish, the evolutionary transition from guarding the eggs on a sandy/rocky substrate (i.e. substrate guarding) to mouthbrooding introduces a novel function to the craniofacial system and offers an ideal opportunity to test the functional coupling hypothesis. Using a combination of geometric morphometrics and a recently developed phylogenetic comparative method, we found that head morphology evolution was 43% faster in substrate guarding species than in mouthbrooding species. Furthermore, for species in which females were solely responsible for mouthbrooding the males had a higher rate of head morphology evolution than in those with bi-parental mouthbrooding. Our results support the hypothesis that adaptations resulting in functional coupling constrain phenotypic evolution. PMID:25948565
NASA Astrophysics Data System (ADS)
Luo, Jianjun; Wei, Caisheng; Dai, Honghua; Yuan, Jianping
2018-03-01
This paper focuses on robust adaptive control for a class of uncertain nonlinear systems subject to input saturation and external disturbance, with guaranteed predefined tracking performance. To reduce the limitations of the classical predefined performance control method in the presence of unknown initial tracking errors, a novel predefined performance function with time-varying design parameters is first proposed. Then, aiming to reduce the complexity of nonlinear approximations, only two least-squares-support-vector-machine-based (LS-SVM-based) approximators with two design parameters are required, through a norm-form transformation of the original system. Further, a novel LS-SVM-based adaptive constrained control scheme is developed under the time-varying predefined performance using the backstepping technique. Therein, to avoid the tedious analysis and repeated differentiations of virtual control laws in the backstepping technique, a simple and robust finite-time-convergent differentiator is devised to extract only the first-order derivative at each step in the presence of external disturbance. In this sense, the inherent demerit of the backstepping technique, the "explosion of terms" brought by the recursive virtual controller design, is overcome. Moreover, an auxiliary system is designed to compensate for the control saturation. Finally, three groups of numerical simulations are employed to validate the effectiveness of the newly developed differentiator and the proposed adaptive constrained control scheme.
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Bhunia, A. K.; Roy, D.
2009-10-01
In this paper, we consider the problem of constrained redundancy allocation in a series system with interval-valued component reliabilities. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by the penalty function technique and solved by an advanced GA for integer variables with an interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval-valued component reliabilities to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples, and the results of the series redundancy allocation problem with fixed component reliabilities have been compared with existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of the developed GA with respect to different GA parameters.
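The penalty-function transformation for series-system redundancy allocation can be sketched on a tiny fixed-reliability instance, solved here by exhaustive search rather than the paper's GA; the reliabilities, costs and penalty weight are made-up illustrative numbers.

```python
import itertools

# Series system: subsystem i with n_i parallel components of reliability r_i
# has reliability 1 - (1 - r_i)^n_i; the system reliability is the product.
r = [0.80, 0.85, 0.90]          # illustrative component reliabilities
c = [2.0, 3.0, 1.0]             # illustrative component costs
budget = 20.0

def system_reliability(n):
    rel = 1.0
    for ri, ni in zip(r, n):
        rel *= 1.0 - (1.0 - ri) ** ni
    return rel

def penalized(n, mu=10.0):
    # Penalty transformation: subtract mu times the resource violation, so
    # the constrained problem becomes an unconstrained one over integers.
    violation = max(0.0, sum(ci * ni for ci, ni in zip(c, n)) - budget)
    return system_reliability(n) - mu * violation

# Exhaustive search over small integer allocations (the paper uses a GA
# for larger instances and interval-valued reliabilities).
best = max(itertools.product(range(1, 7), repeat=3), key=penalized)
best_rel = system_reliability(best)
```

With the penalty weight large relative to any achievable reliability gain, the unconstrained maximizer coincides with the best feasible allocation.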
NASA Astrophysics Data System (ADS)
Regis, Rommel G.
2014-02-01
This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.
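The core ingredient of such surrogate-based methods, a radial basis function interpolant of expensive black-box samples, can be sketched as follows. A cubic RBF with a linear polynomial tail is one common choice; this is a generic sketch under that assumption, not the COBRA implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def expensive_black_box(x):
    # Stand-in for an expensive simulation (here just a sphere function)
    return np.sum(x**2, axis=-1)

# Sample points, then build s(x) = sum_i w_i * ||x - x_i||^3 + linear tail.
n, d = 40, 2
X = rng.uniform(-2.0, 2.0, size=(n, d))
y = expensive_black_box(X)

def phi(r):
    return r**3

dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
P = np.hstack([np.ones((n, 1)), X])                  # linear polynomial basis
A = np.block([[phi(dists), P], [P.T, np.zeros((d + 1, d + 1))]])
coef = np.linalg.solve(A, np.concatenate([y, np.zeros(d + 1)]))
w, poly = coef[:n], coef[n:]

def surrogate(x):
    r = np.linalg.norm(X - x, axis=-1)
    return phi(r) @ w + poly[0] + x @ poly[1:]

pred = surrogate(np.array([0.5, -0.5]))
```

The interpolant reproduces the sampled values exactly and is cheap to evaluate, so the search for promising (and feasible) candidate points can query it instead of the expensive functions.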
Analysis of polarization in hydrogen bonded complexes: An asymptotic projection approach
NASA Astrophysics Data System (ADS)
Drici, Nedjoua
2018-03-01
The asymptotic projection (AP) technique is used to investigate the polarization effect that arises from the interaction between the relaxed and frozen monomeric charge densities of a set of neutral and charged hydrogen-bonded complexes. The AP technique, based on the resolution of the original Kohn-Sham equations, can give an acceptable qualitative description of the polarization effect in neutral complexes. The significant overlap of the electron densities in charged and π-conjugated complexes imposes the further development of a new functional, describing the coupling between constrained and non-constrained electron densities within the AP technique, to provide an accurate representation of the polarization effect.
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
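The Latin hypercube sampling step used to seed the population can be sketched in plain Python; this is ordinary LHS on the unit hypercube, not the translational propagation variant (TPLHD).

```python
import random

random.seed(11)

def latin_hypercube(n_points, n_dims, rng=random):
    # One sample in each of n_points equal-probability strata per dimension,
    # with the strata randomly permuted across dimensions.
    columns = []
    for _ in range(n_dims):
        perm = list(range(n_points))
        rng.shuffle(perm)
        columns.append([(p + rng.random()) / n_points for p in perm])
    return [[columns[d][i] for d in range(n_dims)] for i in range(n_points)]

pop = latin_hypercube(10, 3)
```

Each dimension of the resulting population covers all ten strata exactly once, which spreads the initial designs more evenly than plain Monte Carlo sampling.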
Wiedmann, Mareike M.; Tan, Yaw Sing; Wu, Yuteng; Aibara, Shintaro; Xu, Wenshu; Sore, Hannah F.; Verma, Chandra S.; Itzhaki, Laura; Stewart, Murray; Brenton, James D.
2016-01-01
Abstract There is a lack of current treatment options for ovarian clear cell carcinoma (CCC) and the cancer is often resistant to platinum‐based chemotherapy. Hence there is an urgent need for novel therapeutics. The transcription factor hepatocyte nuclear factor 1β (HNF1β) is ubiquitously overexpressed in CCC and is seen as an attractive therapeutic target. This was validated through shRNA‐mediated knockdown of the target protein, HNF1β, in five high‐ and low‐HNF1β‐expressing CCC lines. To inhibit the protein function, cell‐permeable, non‐helical constrained proteomimetics to target the HNF1β–importin α protein–protein interaction were designed, guided by X‐ray crystallographic data and molecular dynamics simulations. In this way, we developed the first reported series of constrained peptide nuclear import inhibitors. Importantly, this general approach may be extended to other transcription factors. PMID:27918136
Hirayama, Jun-ichiro; Hyvärinen, Aapo; Kiviniemi, Vesa; Kawanabe, Motoaki; Yamashita, Okito
2016-01-01
Characterizing the variability of resting-state functional brain connectivity across subjects and/or over time has recently attracted much attention. Principal component analysis (PCA) serves as a fundamental statistical technique for such analyses. However, performing PCA on high-dimensional connectivity matrices yields complicated “eigenconnectivity” patterns, for which systematic interpretation is a challenging issue. Here, we overcome this issue with a novel constrained PCA method for connectivity matrices by extending the idea of the previously proposed orthogonal connectivity factorization method. Our new method, modular connectivity factorization (MCF), explicitly introduces the modularity of brain networks as a parametric constraint on eigenconnectivity matrices. In particular, MCF analyzes the variability in both intra- and inter-module connectivities, simultaneously finding network modules in a principled, data-driven manner. The parametric constraint provides a compact module-based visualization scheme with which the result can be intuitively interpreted. We develop an optimization algorithm to solve the constrained PCA problem and validate our method in simulation studies and with a resting-state functional connectivity MRI dataset of 986 subjects. The results show that the proposed MCF method successfully reveals the underlying modular eigenconnectivity patterns in more general situations and is a promising alternative to existing methods. PMID:28002474
Constraint-induced movement therapy after stroke.
Kwakkel, Gert; Veerbeek, Janne M; van Wegen, Erwin E H; Wolf, Steven L
2015-02-01
Constraint-induced movement therapy (CIMT) was developed to overcome upper limb impairments after stroke and is the most investigated intervention for the rehabilitation of patients. Original CIMT includes constraining of the non-paretic arm and task-oriented training. Modified versions also apply constraining of the non-paretic arm, but not as intensive as original CIMT. Behavioural strategies are mostly absent for both modified and original CIMT. With forced use therapy, only constraining of the non-paretic arm is applied. The original and modified types of CIMT have beneficial effects on motor function, arm-hand activities, and self-reported arm-hand functioning in daily life, immediately after treatment and at long-term follow-up, whereas there is no evidence for the efficacy of constraint alone (as used in forced use therapy). The type of CIMT, timing, or intensity of practice do not seem to affect patient outcomes. Although the underlying mechanisms that drive modified and original CIMT are still poorly understood, findings from kinematic studies suggest that improvements are mainly based on adaptations through learning to optimise the use of intact end-effectors in patients with some voluntary motor control of wrist and finger extensors after stroke. Copyright © 2015 Elsevier Ltd. All rights reserved.
Multiple utility constrained multi-objective programs using Bayesian theory
NASA Astrophysics Data System (ADS)
Abbasian, Pooneh; Mahdavi-Amiri, Nezam; Fazlollahtabar, Hamed
2018-03-01
A utility function is an important tool for representing a decision maker's (DM's) preference. We adjoin utility functions to multi-objective optimization problems. In existing studies, usually one utility function is used for each objective function. Situations may arise, however, in which a single goal has multiple utility functions. Here, we consider a constrained multi-objective problem in which each objective has multiple utility functions. We induce the probability of the utilities for each objective function using Bayesian theory. Illustrative examples considering dependence and independence of variables are worked through to demonstrate the usefulness of the proposed model.
Modeling and simulating networks of interdependent protein interactions.
Stöcker, Bianca K; Köster, Johannes; Zamir, Eli; Rahmann, Sven
2018-05-21
Protein interactions are fundamental building blocks of biochemical reaction systems underlying cellular functions. The complexity and functionality of these systems emerge not only from the protein interactions themselves but also from the dependencies between these interactions, as generated by allosteric effects or mutual exclusion due to steric hindrance. Therefore, formal models for integrating and utilizing information about interaction dependencies are of high interest. Here, we describe an approach for endowing protein networks with interaction dependencies using propositional logic, thereby obtaining constrained protein interaction networks ("constrained networks"). The construction of these networks is based on public interaction databases as well as text-mined information about interaction dependencies. We present an efficient data structure and algorithm to simulate protein complex formation in constrained networks. The efficiency of the model allows fast simulation and facilitates the analysis of many proteins in large networks. In addition, this approach enables the simulation of perturbation effects, such as knockout of single or multiple proteins and changes of protein concentrations. We illustrate how our model can be used to analyze a constrained human adhesome protein network, which is responsible for the formation of diverse and dynamic cell-matrix adhesion sites. By comparing protein complex formation under known interaction dependencies versus without dependencies, we investigate how these dependencies shape the resulting repertoire of protein complexes. Furthermore, our model enables investigating how the interplay of network topology with interaction dependencies influences the propagation of perturbation effects across a large biochemical system. 
Our simulation software CPINSim (for Constrained Protein Interaction Network Simulator) is available under the MIT license at http://github.com/BiancaStoecker/cpinsim and as a Bioconda package (https://bioconda.github.io).
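As a toy illustration of encoding an interaction dependency in propositional form (this is not CPINSim's actual API; all names below are hypothetical), mutual exclusion due to steric hindrance can be expressed as a predicate over the set of active interactions:

```python
# Toy sketch: "protein A cannot bind B and C at the same time" encoded as a
# mutual-exclusion constraint over active interactions. Illustrative only;
# CPINSim's data structures and constraint language are richer than this.
def satisfies(active_interactions, mutual_exclusions):
    """True if no mutually exclusive pair of interactions is simultaneously active."""
    return not any(a in active_interactions and b in active_interactions
                   for a, b in mutual_exclusions)

exclusions = [(("A", "B"), ("A", "C"))]
ok = satisfies({("A", "B")}, exclusions)                  # True
clash = satisfies({("A", "B"), ("A", "C")}, exclusions)   # False
```

A simulator can apply such predicates during complex formation to reject states that violate the known dependencies.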
NASA Astrophysics Data System (ADS)
Panda, Satyajit; Ray, M. C.
2008-04-01
In this paper, a geometrically nonlinear dynamic analysis has been presented for functionally graded (FG) plates integrated with a patch of active constrained layer damping (ACLD) treatment and subjected to a temperature field. The constraining layer of the ACLD treatment is considered to be made of the piezoelectric fiber-reinforced composite (PFRC) material. The temperature field is assumed to be spatially uniform over the substrate plate surfaces and varied through the thickness of the host FG plates. The temperature-dependent material properties of the FG substrate plates are assumed to be graded in the thickness direction of the plates according to a power-law distribution while the Poisson's ratio is assumed to be a constant over the domain of the plate. The constrained viscoelastic layer of the ACLD treatment is modeled using the Golla-Hughes-McTavish (GHM) method. Based on the first-order shear deformation theory, a three-dimensional finite element model has been developed to model the open-loop and closed-loop nonlinear dynamics of the overall FG substrate plates under the thermal environment. The analysis suggests the potential use of the ACLD treatment with its constraining layer made of the PFRC material for active control of geometrically nonlinear vibrations of FG plates in the absence or the presence of the temperature gradient across the thickness of the plates. It is found that the ACLD treatment is more effective in controlling the geometrically nonlinear vibrations of FG plates than in controlling their linear vibrations. The analysis also reveals that the ACLD patch is more effective for controlling the nonlinear vibrations of FG plates when it is attached to the softest surface of the FG plates than when it is bonded to the stiffest surface of the plates. The effect of piezoelectric fiber orientation in the active constraining PFRC layer on the damping characteristics of the overall FG plates is also discussed.
NASA Astrophysics Data System (ADS)
Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.
2016-09-01
PI/PID controllers are the most common wind turbine controllers. Normally, a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and extensive aeroelastic simulations are then used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. A constrained optimization setup is then suggested to automatically tune the wind turbine controller subject to robustness constraints. Properties of the system such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2 and then reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
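The IAE cost described above can be sketched in a few lines; the decaying error signal below is a hypothetical stand-in for the simulated rotor-speed response, not output from the HAWCStab2 model:

```python
import numpy as np

# Minimal sketch of the IAE tuning objective: integrate the absolute rotor
# speed error over a simulated step-disturbance response. The exponentially
# decaying error is an assumed placeholder for the turbine model's response.
def iae(t, error):
    """Integral absolute error via the trapezoidal rule."""
    e = np.abs(error)
    return float(np.sum((e[1:] + e[:-1]) * np.diff(t)) / 2.0)

t = np.linspace(0.0, 10.0, 1001)
error = np.exp(-t)        # assumed speed error after a wind-speed step
cost = iae(t, error)      # close to 1 - e^{-10}
```

In a constrained tuning setup, a numerical optimizer would minimize this cost over the controller gains subject to bounds on Ms and Mt.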
NASA Astrophysics Data System (ADS)
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
2016-10-01
The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
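A minimal sketch of an SGP-style iteration on a box-constrained quadratic follows; for simplicity the scaling matrix `D`, steplength `alpha`, and relaxation parameter `lam` are held fixed, whereas the actual method adapts them at every iteration:

```python
import numpy as np

# Sketch of the scaled gradient projection iteration
#   x_{k+1} = x_k + lam * (P(x_k - alpha * D grad f(x_k)) - x_k)
# on a box constraint, with fixed (assumed) alpha, lam, and D.
def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def sgp(grad, x0, lo, hi, D, alpha=0.1, lam=1.0, iters=500):
    x = x0.astype(float)
    for _ in range(iters):
        y = project_box(x - alpha * D @ grad(x), lo, hi)
        x = x + lam * (y - x)
    return x

# minimize f(x) = ||x - c||^2 over the box [0,1]^2, with c outside the box
c = np.array([2.0, -1.0])
grad = lambda x: 2.0 * (x - c)
x_star = sgp(grad, np.zeros(2), 0.0, 1.0, D=np.eye(2))
# the constrained minimizer is the projection of c onto the box, i.e. (1, 0)
```

On this convex example the iterates converge to the constrained minimum point, consistent with the convex-case result quoted above.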
Constrained subsystem density functional theory.
Ramos, Pablo; Pavanello, Michele
2016-08-03
Constrained subsystem density functional theory (CSDFT) makes it possible to compute diabatic states for charge transfer reactions using the machinery of the constrained DFT method, and at the same time to embed such diabatic states in a molecular environment via a subsystem DFT scheme. The CSDFT acronym is chosen to reflect the fact that, on top of the subsystem DFT approach, a constraining potential is applied to each subsystem. We show that CSDFT can successfully tackle systems as complex as single-stranded DNA complete with its backbone, and generate diabatic states as exotic as a hole localized on a phosphate group as well as on the nucleobases. CSDFT will be useful to investigators needing to evaluate the environmental effect on charge transfer couplings for systems in condensed-phase environments.
NASA Astrophysics Data System (ADS)
Lim, Yeunhwan; Holt, Jeremy W.
2017-06-01
We investigate the structure of neutron star crusts, including the crust-core boundary, based on new Skyrme mean field models constrained by the bulk-matter equation of state from chiral effective field theory and the ground-state energies of doubly-magic nuclei. Nuclear pasta phases are studied using both the liquid drop model as well as the Thomas-Fermi approximation. We compare the energy per nucleon for each geometry (spherical nuclei, cylindrical nuclei, nuclear slabs, cylindrical holes, and spherical holes) to obtain the ground state phase as a function of density. We find that the size of the Wigner-Seitz cell depends strongly on the model parameters, especially the coefficients of the density gradient interaction terms. We employ also the thermodynamic instability method to check the validity of the numerical solutions based on energy comparisons.
A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography
NASA Astrophysics Data System (ADS)
Sun, S.; Chen, C.; WANG, H.; Wang, Q.
2014-12-01
The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography requires neither a priori information nor large inversion matrix operations. Moreover, its results can describe the sources entirely and clearly, even those whose distribution is complex and irregular. We therefore use a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their respective directions, and this characteristic is also present in their probability tomography results. We therefore combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y and ∂ΔΤ/∂z into a new result from which the a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples inverted with and without the a priori information extracted from the probability tomography results are compared; the constrained results are more concentrated and resolve the source body edges more sharply. The method is finally applied to field-measured ΔΤ data from an iron mine in China, where it performs well.
Reference: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for the Institute for Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).
First Monte Carlo analysis of fragmentation functions from single-inclusive e + e - annihilation
Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; ...
2016-12-02
Here, we perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits, introduced by fixing parameters not well constrained by the data, and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific differences between the fragmentation functions obtained with the new IMC methodology and those from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the performance of teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted by teleoperation systems require high performance; tele-surgery, for example, needs high speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying a barrier Lyapunov function (BLF). With constrained synchronization errors, high performance measures such as high convergence speed, small overshoot, and an arbitrarily small predefined residual synchronization error can be achieved simultaneously. Nevertheless, as with many classical control schemes, error-constrained control achieves only asymptotic/exponential convergence, i.e., the synchronization errors converge to zero only as time goes to infinity. Finite-time convergence is clearly more desirable. To obtain finite-time synchronization, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for teleoperation systems with position error constraints. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with newly transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with the system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the non-violation of the synchronization error constraints. Finally, comparative simulations and experiments are presented to show the effectiveness of the proposed method.
Rhythmic grouping biases constrain infant statistical learning
Hay, Jessica F.; Saffran, Jenny R.
2012-01-01
Linguistic stress and sequential statistical cues to word boundaries interact during speech segmentation in infancy. However, little is known about how the different acoustic components of stress constrain statistical learning. The current studies were designed to investigate whether intensity and duration each function independently as cues to initial prominence (trochaic-based hypothesis) or whether, as predicted by the Iambic-Trochaic Law (ITL), intensity and duration have characteristic and separable effects on rhythmic grouping (ITL-based hypothesis) in a statistical learning task. Infants were familiarized with an artificial language (Experiments 1 & 3) or a tone stream (Experiment 2) in which there was an alternation in either intensity or duration. In addition to potential acoustic cues, the familiarization sequences also contained statistical cues to word boundaries. In speech (Experiment 1) and non-speech (Experiment 2) conditions, 9-month-old infants demonstrated discrimination patterns consistent with an ITL-based hypothesis: intensity signaled initial prominence and duration signaled final prominence. The results of Experiment 3, in which 6.5-month-old infants were familiarized with the speech streams from Experiment 1, suggest that there is a developmental change in infants’ willingness to treat increased duration as a cue to word offsets in fluent speech. Infants’ perceptual systems interact with linguistic experience to constrain how infants learn from their auditory environment. PMID:23730217
2016-01-01
Abstract When the brain is stimulated, for example, by sensory inputs or goal-oriented tasks, the brain initially responds with activities in specific areas. The subsequent pattern formation of functional networks is constrained by the structural connectivity (SC) of the brain. The extent to which information is processed over short- or long-range SC is unclear. Whole-brain models based on long-range axonal connections, for example, can partly describe measured functional connectivity dynamics at rest. Here, we study the effect of SC on the network response to stimulation. We use a human whole-brain network model comprising long- and short-range connections. We systematically activate each cortical or thalamic area, and investigate the network response as a function of its short- and long-range SC. We show that when the brain is operating at the edge of criticality, stimulation causes a cascade of network recruitments, collapsing onto a smaller space that is partly constrained by SC. We found both short- and long-range SC essential to reproduce experimental results. In particular, the stimulation of specific areas results in the activation of one or more resting-state networks. We suggest that the stimulus-induced brain activity, which may indicate information and cognitive processing, follows specific routes imposed by structural networks explaining the emergence of functional networks. We provide a lookup table linking stimulation targets and functional network activations, which potentially can be useful in diagnostics and treatments with brain stimulation. PMID:27752540
Determination of wave-function functionals: The constrained-search variational method
NASA Astrophysics Data System (ADS)
Pan, Xiao-Yin; Sahni, Viraht; Massa, Lou
2005-09-01
In a recent paper [Phys. Rev. Lett. 93, 130401 (2004)], we proposed the idea of expanding the space of variations in variational calculations of the energy by considering the approximate wave function ψ to be a functional of functions χ, ψ = ψ[χ], rather than a function. A constrained search is first performed over all functions χ such that the wave-function functional ψ[χ] satisfies a physical constraint or leads to the known value of an observable. A rigorous upper bound to the energy is then obtained via the variational principle. In this paper we generalize the constrained-search variational method, applicable to both ground and excited states, to the determination of arbitrary Hermitian single-particle operators as applied to two-electron atomic and ionic systems. We construct analytical three-parameter ground-state functionals for the H⁻ ion and the He atom through the constraint of normalization. We present the results for the total energy E; the expectations of the single-particle operators W = ∑ᵢ rᵢⁿ for n = −2, −1, 1, 2, W = ∑ᵢ δ(rᵢ), and W = ∑ᵢ δ(rᵢ − r); the structure of the nonlocal Coulomb hole charge ρ_c(r, r′); and the expectations of the two-particle operators u², u, 1/u, and 1/u², where u = |rᵢ − rⱼ|. The results for all the expectation values are remarkably accurate when compared with the 1078-parameter wave function of Pekeris, and with other wave functions that are not functionals. We conclude by describing our current work on how the constrained-search variational method, in conjunction with quantal density-functional theory, is being applied to the many-electron case.
Li, Yongming; Tong, Shaocheng
2017-12-01
In this paper, an adaptive fuzzy output-constrained control design approach is addressed for multi-input multi-output uncertain stochastic nonlinear systems in nonstrict-feedback form. The nonlinear systems addressed in this paper possess unstructured uncertainties, unknown gain functions and unknown stochastic disturbances. Fuzzy logic systems are utilized to tackle the problem of unknown nonlinear uncertainties. The barrier Lyapunov function technique is employed to solve the output-constrained problem. In the framework of backstepping design, an adaptive fuzzy control design scheme is constructed. All the signals in the closed-loop system are proved to be bounded in probability, and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.
Constrained Low-Rank Learning Using Least Squares-Based Regularization.
Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong
2017-12-01
Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
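The nuclear-norm subproblem inside the inexact augmented Lagrange multiplier scheme reduces to singular value thresholding (SVT); a minimal sketch of that standard operator follows (the matrix and threshold are illustrative, not from the paper's experiments):

```python
import numpy as np

# Singular value thresholding: the proximal operator of tau * ||.||_*,
# the core step of nuclear norm minimization via inexact ALM.
def svt(M, tau):
    """Shrink the singular values of M by tau (clamping at zero)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt

A = np.diag([3.0, 1.0, 0.2])
L = svt(A, 0.5)   # singular values 3, 1, 0.2 become 2.5, 0.5, 0
```

Shrinking the small singular values to zero is what drives the learned representation toward low rank.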
A probabilistic framework to infer brain functional connectivity from anatomical connections.
Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel
2011-01-01
We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.
NASA Astrophysics Data System (ADS)
Kim, Han Seul; Kim, Yong-Hoon
We have been developing a multi-space-constrained density functional theory approach for first-principles calculations of nanoscale junctions under non-equilibrium conditions and of the charge transport through them. In this presentation, we apply the method to vertically stacked graphene/hexagonal boron nitride (hBN)/graphene van der Waals heterostructures in the context of tunneling transistor applications. Bias-dependent changes in energy level alignment, wavefunction hybridization, and current are extracted. In particular, we compare the quantum transport properties of the single-layer (graphene) and infinite (graphite) electrode limits on an equal footing, which is not possible within the traditional non-equilibrium Green function formalism. The effects of point defects within hBN on the current-voltage characteristics will also be discussed. This work was supported by the Global Frontier Program (2013M3A6B1078881), the Nano-Material Technology Development Programs (2016M3A7B4024133, 2016M3A7B4909944, and 2012M3A7B4049888), and the Pioneer Program (2016M3C1A3906149) of the National Research Foundation.
On the optimal identification of tag sets in time-constrained RFID configurations.
Vales-Alonso, Javier; Bueno-Delgado, María Victoria; Egea-López, Esteban; Alcaraz, Juan José; Pérez-Mañogil, Juan Manuel
2011-01-01
In Radio Frequency Identification facilities, the identification delay of a set of tags is mainly caused by the random access nature of the reading protocol, yielding a random identification time for the set of tags. In this paper, the cumulative distribution function of the identification time is evaluated using a discrete-time Markov chain for single-set time-constrained passive RFID systems, namely those where a single group of tags is assumed to be in the reading area, and only for a bounded time (the sojourn time) before leaving. In these scenarios some tags in a set may leave the reader coverage area unidentified. The probability of this event is obtained from the cumulative distribution function of the identification time as a function of the sojourn time. This result provides a suitable criterion to minimize the probability of losing tags. In addition, an identification strategy based on splitting the set of tags into smaller subsets is also considered. Results demonstrate that there are optimal splitting configurations that reduce the overall identification time while keeping the same probability of losing tags.
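The lost-tag probability can be illustrated with a toy Monte Carlo model; this assumes a framed-slotted-ALOHA-style reading round in which a tag is identified when it is alone in a slot, a deliberate simplification of the paper's Markov-chain computation (frame size, protocol details, and parameters below are assumptions):

```python
import random

# Toy estimate of P(identification time > sojourn time): in each of `sojourn`
# frames, every unidentified tag picks one of `n_slots` slots uniformly; tags
# alone in a slot are identified. If any tag remains after the sojourn time,
# the set is counted as having lost tags.
def p_lose_tags(n_tags, n_slots, sojourn, trials=2000, seed=1):
    rng = random.Random(seed)
    losses = 0
    for _ in range(trials):
        remaining = n_tags
        for _ in range(sojourn):
            slots = [0] * n_slots
            choices = [rng.randrange(n_slots) for _ in range(remaining)]
            for c in choices:
                slots[c] += 1
            remaining -= sum(1 for c in choices if slots[c] == 1)
            if remaining == 0:
                break
        if remaining > 0:
            losses += 1
    return losses / trials
```

Plotting this probability against the sojourn time gives exactly the kind of curve the paper derives analytically from the identification-time CDF.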
Digital robust active control law synthesis for large order systems using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1987-01-01
This paper presents a direct digital control law synthesis procedure for a large order, sampled-data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for the gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Lyapunov equations. The designer can choose the structure of the control law and the design variables; hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low-order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
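For small systems, the discrete Lyapunov equations underlying the gradient computations can be solved by Kronecker vectorization; a minimal sketch (the matrices below are hypothetical, not from the drone aircraft application):

```python
import numpy as np

# Solve the discrete Lyapunov equation  A X A^T - X + Q = 0  by vectorizing:
# with row-major flattening, vec(A X A^T) = (A ⊗ A) vec(X), so
# (I - A ⊗ A) vec(X) = vec(Q). Suitable only for small illustrative systems.
def solve_dlyap(A, Q):
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A, A)
    return np.linalg.solve(M, Q.reshape(-1)).reshape(n, n)

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])   # assumed stable closed loop: spectral radius < 1
Q = np.eye(2)
X = solve_dlyap(A, Q)
# X satisfies A X A^T - X + Q = 0 to numerical precision
```

Production code would use a dedicated solver (e.g. a Schur-based method) rather than the O(n⁶) Kronecker form, but the equation being solved is the same.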
He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min
2013-01-01
Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species and PFT function. Our results indicate that PFT-level simulations may be representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
Stratospheric ethane on Neptune - Comparison of groundbased and Voyager IRIS retrievals
NASA Technical Reports Server (NTRS)
Kostiuk, Theodor; Romani, Paul; Espenak, Fred; Bezard, Bruno
1992-01-01
Near-simultaneous ground and spacecraft measurements of 12-micron ethane emission spectra during the Voyager encounter with Neptune have furnished bases for the determination of stratospheric ethane abundance and the testing and constraining of Neptune methane-photochemistry models. The ethane retrievals were sensitive to the thermal profile used. Contribution functions for warm thermal profiles peaked at higher altitudes, as expected, with the heterodyne functions covering lower-pressure regions. Both constant- and nonconstant-with-height profiles remain candidate distributions for Neptune's stratospheric ethane.
Application of a New Resource-Constrained Triage Method to Military-Age Victims
2009-12-01
evidence based, does not consider resources, and has been shown to be scientifically and operationally flawed.' General P. K. Carlton, former USAF Surgeon…metric that can be used to predict survival probability) P̂(t) = the survival probability of victims with original SCORE s treated in time period t. n…function coefficients were derived on the design set, and validated on the test set. The logistic function has the form: P̂ = 1/(1 + e^(-x)), where P̂ is
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (in effect, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
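The continuous-time feasible point idea can be sketched as a gradient flow of the objective projected onto the tangent space of the constraint. Note the damped projection (JJ^T + eps*I)^-1 below is a common regularized stand-in to avoid singularity when constraint gradients are dependent; it is not the paper's specific projection matrix, and the objective and constraint are toy examples.

```python
import numpy as np

def f(x):             # toy objective
    return x[0]**2 + 2*x[1]**2

def grad_f(x):
    return np.array([2*x[0], 4*x[1]])

def h(x):             # equality constraint h(x) = x0 + x1 - 1 = 0
    return np.array([x[0] + x[1] - 1.0])

def J(x):             # constraint Jacobian
    return np.array([[1.0, 1.0]])

def projected_flow(x, steps=2000, dt=1e-2, eps=1e-8):
    """Euler discretization of x' = -P(x) grad f(x), P projecting onto the
    tangent space of h(x) = 0 (with a small damping eps in the inverse)."""
    for _ in range(steps):
        Jx = J(x)
        P = np.eye(len(x)) - Jx.T @ np.linalg.solve(Jx @ Jx.T + eps*np.eye(1), Jx)
        x = x - dt * (P @ grad_f(x))
    return x

x = projected_flow(np.array([1.0, 0.0]))   # start on the constraint line
print(x, h(x))   # converges to the constrained minimizer (2/3, 1/3)
```

Because the flow direction is always tangent to the constraint, a feasible starting point stays (numerically) feasible throughout, which is the defining property of feasible point methods.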
Wiedmann, Mareike M; Tan, Yaw Sing; Wu, Yuteng; Aibara, Shintaro; Xu, Wenshu; Sore, Hannah F; Verma, Chandra S; Itzhaki, Laura; Stewart, Murray; Brenton, James D; Spring, David R
2017-01-09
There is a lack of current treatment options for ovarian clear cell carcinoma (CCC) and the cancer is often resistant to platinum-based chemotherapy. Hence there is an urgent need for novel therapeutics. The transcription factor hepatocyte nuclear factor 1β (HNF1β) is ubiquitously overexpressed in CCC and is seen as an attractive therapeutic target. This was validated through shRNA-mediated knockdown of the target protein, HNF1β, in five high- and low-HNF1β-expressing CCC lines. To inhibit the protein function, cell-permeable, non-helical constrained proteomimetics to target the HNF1β-importin α protein-protein interaction were designed, guided by X-ray crystallographic data and molecular dynamics simulations. In this way, we developed the first reported series of constrained peptide nuclear import inhibitors. Importantly, this general approach may be extended to other transcription factors. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang
2018-06-01
Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of the sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. An ADP algorithm based on a single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee that the sliding mode dynamics are stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.
ChromA: signal-based retention time alignment for chromatography-mass spectrometry data.
Hoffmann, Nils; Stoye, Jens
2009-08-15
We describe ChromA, a web-based alignment tool for chromatography-mass spectrometry data from the metabolomics and proteomics domains. Users can supply their data in open and standardized file formats for retention time alignment using dynamic time warping with different configurable local distance and similarity functions. Additionally, user-defined anchors can be used to constrain and speed up the alignment. A neighborhood around each anchor can be added to increase the flexibility of the constrained alignment. ChromA offers different visualizations of the alignment for easier qualitative interpretation and comparison of the data. For the multiple alignment of more than two data files, the center-star approximation is applied to select a reference among the input files to align to. ChromA is available at http://bibiserv.techfak.uni-bielefeld.de/chroma. Executables and source code under the L-GPL v3 license are provided for download at the same location.
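The core of anchor-constrained dynamic time warping can be sketched on 1-D intensity traces. ChromA's actual implementation supports configurable distance functions and anchor neighborhoods; this toy sketch shows only the basic DTW recursion with a squared-difference local cost and how an anchor splits the alignment.

```python
import numpy as np

def dtw(a, b):
    """Classic DTW distance between two 1-D signals (squared-difference cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1])**2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_with_anchor(a, b, i, j):
    """An anchor at (i, j) forces the warping path through that cell, so the
    alignment decomposes into two independent sub-alignments."""
    return dtw(a[:i], b[:j]) + dtw(a[i:], b[j:])

a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # time-shifted copy of a
print(dtw(a, b))                      # 0.0: DTW absorbs the shift
print(dtw_with_anchor(a, b, 1, 2))
```

Anchors also speed up the computation: each sub-alignment has a smaller cost matrix, so the quadratic DTW work shrinks.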
Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation
NASA Astrophysics Data System (ADS)
Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito
2014-02-01
A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since such characters have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. Stroke width calculation of the connected components was conducted based on the altitude of the triangles generated with the constrained Delaunay triangulation. The experimental results demonstrated the effectiveness of the proposed method.
Constrained dynamics approach for motion synchronization and consensus
NASA Astrophysics Data System (ADS)
Bhatia, Divya
In this research we propose to develop constrained-dynamics-based stable attitude synchronization, consensus and tracking (SCT) control laws for formations of rigid bodies. The generalized constrained dynamics Equations of Motion (EOM) are developed utilizing constraint potential energy functions that enforce communication constraints. Euler-Lagrange equations are employed to develop the non-linear constrained dynamics of multiple vehicle systems. The constraint potential energy is synthesized based on a graph theoretic formulation of the vehicle-vehicle communication. Constraint stabilization is achieved via Baumgarte's method. The performance of these constrained-dynamics-based formations is evaluated for bounded control authority. The above method has been applied to various cases and the results have been obtained using MATLAB simulations showing stability, synchronization, consensus and tracking of formations. The first case corresponds to an N-pendulum formation without external disturbances, in which the springs and the dampers connected between the pendulums act as the communication constraints. The damper helps in stabilizing the system by damping the motion whereas the spring acts as a communication link relaying relative position information between two connected pendulums. A Lyapunov (energy-based) stabilization technique is employed to depict the attitude stabilization and boundedness. Various scenarios involving different values of springs and dampers are simulated and studied. Motivated by the first case study, we study the formation of N two-link robotic manipulators. The governing EOM for this system is derived using Euler-Lagrange equations. A generalized set of communication constraints is developed for this system using graph theory. The constraints are stabilized using Baumgarte's technique.
The attitude SCT is established for this system and the results are shown for the special case of three 2-link robotic manipulators. These methods are then applied to the formation of N-spacecraft. Modified Rodrigues Parameters (MRP) are used for attitude representation of the spacecraft because of their advantage of being a minimum parameter representation. Constrained non-linear equations of motion for this system are developed and stabilized using a Proportional-Derivative (PD) controller derived based on Baumgarte's method. A system of 3 spacecraft is simulated and the results for SCT are shown and analyzed. Another problem studied in this research is that of maintaining SCT under unknown external disturbances. We use an adaptive control algorithm to derive control laws for the actuator torques and develop an estimation law for the unknown disturbance parameters to achieve SCT. The estimate of the disturbance is added as a feed forward term in the actual control law to obtain the stabilization of a 3-spacecraft formation. The disturbance estimates are generated via a Lyapunov analysis of the closed loop system. In summary, the constrained dynamics method shows a lot of potential in formation control, achieving stabilization, synchronization, consensus and tracking of a set of dynamical systems.
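Baumgarte's constraint stabilization, used throughout the work above, can be sketched on the simplest constrained mechanical system. Instead of enforcing phi'' = 0 on a constraint phi(q) = 0 (which lets numerical drift grow), the constraint force is chosen so that phi'' + 2*alpha*phi' + beta^2*phi = 0, making the constraint error decay. The gains, time step, and initial condition below are illustrative.

```python
import numpy as np

def simulate(q, v, alpha=5.0, beta=5.0, g=9.81, dt=1e-3, steps=5000):
    """Unit point mass constrained to the unit circle, phi = (|q|^2 - 1)/2 = 0,
    under gravity, with Baumgarte-stabilized constraint force lam * grad(phi)."""
    drift = []
    for _ in range(steps):
        J = q                               # gradient of phi at q
        phi = 0.5 * (q @ q - 1.0)
        phidot = J @ v
        f = np.array([0.0, -g])             # applied force (unit mass)
        # phi'' = J @ a + v @ v with a = f + lam * J; impose Baumgarte's law:
        lam = (-2*alpha*phidot - beta**2*phi - J @ f - v @ v) / (J @ J)
        a = f + lam * J
        v = v + dt * a                      # semi-implicit Euler step
        q = q + dt * v
        drift.append(abs(0.5 * (q @ q - 1.0)))
    return q, drift

# start deliberately off the constraint to show the violation being damped out
q, drift = simulate(np.array([1.05, 0.0]), np.array([0.0, 0.0]))
print(drift[0], drift[-1])   # initial violation decays toward zero
```

Without the feedback terms (alpha = beta = 0) the same integrator lets the constraint error accumulate; the proportional-derivative terms act exactly like the PD controller derived from Baumgarte's method in the spacecraft case above.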
Fragment approach to constrained density functional theory calculations using Daubechies wavelets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ratcliff, Laura E.; Genovese, Luigi; Mohr, Stephan
2015-06-21
In a recent paper, we presented a linear scaling Kohn-Sham density functional theory (DFT) code based on Daubechies wavelets, where a minimal set of localized support functions are optimized in situ and therefore adapted to the chemical properties of the molecular system. Thanks to the systematically controllable accuracy of the underlying basis set, this approach is able to provide an optimal contracted basis for a given system: accuracies for ground state energies and atomic forces are of the same quality as an uncontracted, cubic scaling approach. This basis set offers, by construction, a natural subset where the density matrix of the system can be projected. In this paper, we demonstrate the flexibility of this minimal basis formalism in providing a basis set that can be reused as-is, i.e., without reoptimization, for charge-constrained DFT calculations within a fragment approach. Support functions, represented in the underlying wavelet grid, of the template fragments are roto-translated with high numerical precision to the required positions and used as projectors for the charge weight function. We demonstrate the interest of this approach to express highly precise and efficient calculations for preparing diabatic states and for the computational setup of systems in complex environments.
Matrix Transfer Function Design for Flexible Structures: An Application
NASA Technical Reports Server (NTRS)
Brennan, T. J.; Compito, A. V.; Doran, A. L.; Gustafson, C. L.; Wong, C. L.
1985-01-01
The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of a polynomial linear parameter varying (PLPV) system, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter scheduling pre-compensator is achieved, which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance
NASA Astrophysics Data System (ADS)
Cao, Fangfei; Liu, Jinkun
2018-05-01
In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by a hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by LaSalle's Invariance Principle extended to infinite-dimensional systems. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.
NASA Astrophysics Data System (ADS)
Li, Guang
2017-01-01
This paper presents a fast constrained optimization approach, which is tailored for nonlinear model predictive control of wave energy converters (WEC). The advantage of this approach lies in its exploitation of the differential flatness of the WEC model. This can reduce the dimension of the resulting nonlinear programming problem (NLP) derived from the continuous constrained optimal control of the WEC using the pseudospectral method. The alleviation of the computational burden using this approach helps to promote an economic implementation of the nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.
Fan, Quan-Yong; Yang, Guang-Hong
2017-01-01
State inequality constraints have rarely been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without requiring a probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT) using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and prior image constrained total generalized variation of reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from the existing reconstruction methods applied to images with the first-order derivative, the higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as the TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. We have also developed an alternating optimization algorithm and numerically demonstrated the merits of our approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
NASA Astrophysics Data System (ADS)
Zhu, H.
2017-12-01
Recently, seismologists observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, some studies suggested possible links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their mechanisms, we need an accurate 3D crustal wavespeed model for North Texas and Oklahoma. Considering the uneven distribution of earthquakes in this region, seismic tomography with local earthquake records has difficulty achieving good illumination. To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeeds. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions and the adjoint method is used to calculate misfit gradients with respect to wavespeeds. 25 preconditioned conjugate gradient iterations are used to update model parameters and minimize data misfits. Features in the new crustal model M25 correlate with geological units in the study region, including the Llano uplift, the Anadarko basin and the Ouachita orogenic front. In addition, these seismic anomalies correlate with gravity and magnetic observations. This new model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter locations and moment tensor solutions, which are important for investigating potential relations between seismicity and unconventional oil and gas exploration.
A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification
NASA Astrophysics Data System (ADS)
Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.
MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), have demonstrated good performance but not without drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown here illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
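The general pattern of a constrained GA, folding a constraint penalty into the fitness function so selection steers the population toward feasible solutions, can be sketched on a toy problem. The adaptive, model-fitting fitness used for MRS quantification in the paper is replaced here by a hypothetical objective (minimize x^2 subject to x >= 1); the population size, mutation scale, and penalty weight are illustrative.

```python
import random

random.seed(0)   # deterministic toy run

def fitness(x, penalty_weight=100.0):
    """Penalized fitness: the GA maximizes this, so constraint violations
    (here, the amount by which x >= 1 is broken) are heavily discouraged."""
    violation = max(0.0, 1.0 - x)
    return -(x**2 + penalty_weight * violation**2)

def ga(pop_size=40, gens=100, sigma=0.2):
    pop = [random.uniform(-3, 3) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # elitism (parents survive) + Gaussian-mutated offspring
        pop = parents + [random.choice(parents) + random.gauss(0, sigma)
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

best = ga()
print(best)   # near the constrained optimum x = 1
```

Making the penalty weight change over the run (e.g., growing with the generation count) gives a simple form of the adaptively defined fitness the paper investigates.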
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonetto, Andrea; Dall'Anese, Emiliano
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
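The prediction-correction idea can be sketched on a scalar time-varying cost. This is a toy instance, not the paper's first-order constrained method: the cost f(x; t) = 0.5*(x - sin t)^2 has the known optimal trajectory x*(t) = sin t, the prediction step moves along the drift of the optimum, and the correction step is a plain gradient descent step.

```python
import math

def track(predict, T=10.0, dt=0.1, step=0.5):
    """Track argmin of f(x; t) = 0.5*(x - sin t)^2; return the worst tracking
    error over the second half of the run (after the transient)."""
    x, t = 0.0, 0.0
    errs = []
    while t < T:
        if predict:
            x = x + dt * math.cos(t)       # prediction: drift d(sin t)/dt
        t += dt
        x = x - step * (x - math.sin(t))   # correction: gradient step on f(.; t)
        errs.append(abs(x - math.sin(t)))
    return max(errs[len(errs)//2:])

err_pc = track(True)    # prediction + correction
err_c = track(False)    # correction only
print(err_pc, err_c)    # prediction-correction tracks markedly tighter
```

The correction-only scheme lags the moving optimum by an error of order dt, while adding the prediction step shrinks the tracking error to order dt^2, which is the qualitative benefit the paper quantifies for its methods.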
3-D acoustic waveform simulation and inversion at Yasur Volcano, Vanuatu
NASA Astrophysics Data System (ADS)
Iezzi, A. M.; Fee, D.; Matoza, R. S.; Austin, A.; Jolly, A. D.; Kim, K.; Christenson, B. W.; Johnson, R.; Kilgour, G.; Garaebiti, E.; Kennedy, B.; Fitzgerald, R.; Key, N.
2016-12-01
Acoustic waveform inversion shows promise for improved eruption characterization that may inform volcano monitoring. Well-constrained inversion can provide robust estimates of volume and mass flux, increasing our ability to monitor volcanic emissions (potentially in real-time). Previous studies have made assumptions about the multipole source mechanism, which can be thought of as the combination of pressure fluctuations from a volume change, directionality, and turbulence. Until now, this infrasound source could not be well constrained because infrasound sensors have only been deployed on Earth's surface, so a zero vertical dipole component has typically been assumed. In this study we deploy a high-density seismo-acoustic network, including multiple acoustic sensors along a tethered balloon around Yasur Volcano, Vanuatu. Yasur has frequent strombolian eruptions from any one of its three active vents within a 400 m diameter crater. The third dimension (vertical) of pressure sensor coverage allows us to begin constraining the acoustic source components, primarily the horizontal and vertical dipole components and their previously uncharted contributions to volcano infrasound. The deployment also has a geochemical and visual component, including FLIR, FTIR, two scanning FLYSPECs, and a variety of visual imagery. Our analysis employs Finite-Difference Time-Domain (FDTD) modeling to obtain the full 3D Green's functions for each propagation path. This method, following Kim et al. (2015), takes into account realistic topographic scattering based on a digital elevation model created using structure-from-motion techniques. We then invert for the source location and source-time function, constraining the contribution of the vertical sound radiation to the source.
The final outcome of this inversion is an infrasound-derived volume flux as a function of time, which we then compare to those derived independently from geochemical techniques as well as the inversion of seismic data. Kim, K., Fee, D., Yokoo, A., & Lees, J. M. (2015). Acoustic source inversion to estimate volume flux from volcanic explosions. Geophysical Research Letters, 42(13), 5243-5249
Li, Yongming; Ma, Zhiyao; Tong, Shaocheng
2017-09-01
The problem of adaptive fuzzy output-constrained tracking fault-tolerant control (FTC) is investigated for large-scale stochastic nonlinear systems of pure-feedback form. The nonlinear systems considered in this paper possess unstructured uncertainties, unknown interconnected terms and unknown nonaffine nonlinear faults. Fuzzy logic systems are employed to identify the unknown lumped nonlinear functions so that the problem of unstructured uncertainties can be solved. An adaptive fuzzy state observer is designed to solve the nonmeasurable state problem. By combining barrier Lyapunov function theory with adaptive decentralized and stochastic control principles, a novel fuzzy adaptive output-constrained FTC approach is constructed. All the signals in the closed-loop system are proved to be bounded in probability and the system outputs are constrained in a given compact set. Finally, the applicability of the proposed controller is demonstrated by a simulation example.
The in situ transverse lamina strength of composite laminates
NASA Technical Reports Server (NTRS)
Flaggs, D. L.
1983-01-01
The objective of the work reported in this presentation is to determine the in situ transverse strength of a lamina within a composite laminate. From a fracture mechanics standpoint, in situ strength may be viewed as constrained cracking that has been shown to be a function of both lamina thickness and the stiffness of adjacent plies that serve to constrain the cracking process. From an engineering point of view, however, constrained cracking can be perceived as an apparent increase in lamina strength. With the growing need to design more highly loaded composite structures, the concept of in situ strength may prove to be a viable means of increasing the design allowables of current and future composite material systems. A simplified one dimensional analytical model is presented that is used to predict the strain at onset of transverse cracking. While it is accurate only for the most constrained cases, the model is important in that the predicted failure strain is seen to be a function of a lamina's thickness d and of the extensional stiffness bE theta of the adjacent laminae that constrain crack propagation in the 90 deg laminae.
An Anatomically Constrained Model for Path Integration in the Bee Brain.
Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley
2017-10-23
Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task-a celestial-cue-based visual compass and an optic-flow-based visual odometer-but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings-suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.
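The abstract computation at the heart of path integration, continuously accumulating heading and speed into a home vector, can be sketched directly. The neural-circuit model in the paper implements this with compass and speed-encoding neurons in the central complex; the sketch below shows only the underlying vector arithmetic, with an illustrative outward journey.

```python
import math

def integrate_path(samples):
    """Accumulate (heading, speed, duration) samples into a home vector.
    Headings are in radians; returns (heading home, distance home)."""
    x = y = 0.0
    for heading, speed, dt in samples:
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    home_heading = math.atan2(-y, -x)   # direction pointing back to the nest
    home_distance = math.hypot(x, y)
    return home_heading, home_distance

# outward journey: east for 3 s, then north for 4 s, at unit speed
samples = [(0.0, 1.0, 3.0), (math.pi / 2, 1.0, 4.0)]
heading, dist = integrate_path(samples)
print(dist)   # 5.0 (the 3-4-5 triangle): straight-line distance home
```

Comparing the current compass heading against `home_heading` yields a steering signal, mirroring the paper's suggestion that steering arises from comparing current and desired headings. Handling holonomic motion, where ground velocity is not aligned with body orientation, amounts to integrating the velocity vector rather than body heading times speed.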
Using Analysis of Governance to Unpack Community-Based Conservation: A Case Study from Tanzania.
Robinson, Lance W; Makupa, Enock
2015-11-01
Community-based conservation policies and programs are often hollow with little real devolution. But to pass a judgment of community-based or not community-based on such initiatives and programs obscures what is actually a suite of attributes. In this paper, we analyze governance around a specific case of what is nominally community-based conservation-Ikona Wildlife Management Area (WMA) in Tanzania-using two complementary sets of criteria. The first relates to governance "powers": planning powers, regulatory powers, spending powers, revenue-generating powers, and the power to enter into agreements. The second set of criteria derive from the understanding of governance as a set of social functions: social coordination, shaping power, setting direction, and building community. The analysis helps to detail ways in which the Tanzanian state through policy and regulations has constrained the potential for Ikona WMA to empower communities and community actors. Although it has some features of community-based conservation, community input into how the governance social functions would be carried out in the WMA was constrained from the start and is now largely out of community hands. The two governance powers that have any significant community-based flavor-spending powers and revenue-generating powers-relate to the WMA's tourism activities, but even here the picture is equivocal at best. The unpacking of governance that we have done, however, reveals that community empowerment through the processes associated with creating and recognizing indigenous and community-conserved areas is something that can be pursued through multiple channels, some of which might be more strategic than others.
Kim, Ki-Wook; Han, Youn-Hee; Min, Sung-Gi
2017-09-21
Many Internet of Things (IoT) services utilize an IoT access network to connect small devices with remote servers. They can share an access network with standard communication technology, such as IEEE 802.11ah. However, an authentication and key management (AKM) mechanism for resource constrained IoT devices using IEEE 802.11ah has not been proposed as yet. We therefore propose a new AKM mechanism for an IoT access network, which is based on IEEE 802.11 key management with the IEEE 802.1X authentication mechanism. The proposed AKM mechanism does not require any pre-configured security information between the access network domain and the IoT service domain. It considers the resource constraints of IoT devices, allowing IoT devices to delegate the burden of AKM processes to a powerful agent. The agent has sufficient power to support various authentication methods for the access point, and it performs cryptographic functions for the IoT devices. Performance analysis shows that the proposed mechanism greatly reduces computation costs, network costs, and memory usage of the resource-constrained IoT device as compared to the existing IEEE 802.11 Key Management with the IEEE 802.1X authentication mechanism.
NASA Astrophysics Data System (ADS)
Jia, M.; Panning, M. P.; Lekic, V.; Gao, C.
2017-12-01
The InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission will deploy a geophysical station on Mars in 2018. Using seismology to explore the interior structure of Mars is one of the main targets, and as part of the mission, we will use 3-component seismic data to constrain the crust and upper mantle structure, including P and S wave velocities and densities, underneath the station. We will apply a reversible jump Markov chain Monte Carlo algorithm in the transdimensional hierarchical Bayesian inversion framework, in which the number of parameters in the model space and the noise level of the observed data are also treated as unknowns in the inversion process. Bayesian methods produce an ensemble of models which can be analyzed to quantify uncertainties and trade-offs of the model parameters. In order to get better resolution, we will simultaneously invert three different types of seismic data: receiver functions, surface wave dispersion (SWD), and ZH ratios. Because the InSight mission will only deliver a single seismic station to Mars, and both the source location and the interior structure will be unknown, we will jointly invert the ray parameter in our approach. In preparation for this work, we first verify our approach by using a set of synthetic data. We find that SWD can constrain the absolute value of velocities while receiver functions constrain the discontinuities. By joint inversion, the velocity structure in the crust and upper mantle is well recovered. Then, we apply our approach to real data from the Earth-based seismic station BFO, located at the Black Forest Observatory in Germany, as already used in a demonstration study for single-station location methods. From the comparison of the results, our hierarchical treatment shows its advantage over the conventional method, in which the noise level of the observed data is fixed a priori.
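The hierarchical treatment described above, in which the noise level is sampled along with the model parameters rather than fixed a priori, can be illustrated with a minimal Metropolis sampler. This is a sketch only: the one-parameter toy model and function names are illustrative, not the mission's actual reversible jump implementation.

```python
import math, random

def hierarchical_metropolis(data, n_iter=5000, seed=1):
    """Metropolis sampler treating both the model parameter (mu) and the
    data noise level (sigma) as unknowns, mimicking the hierarchical
    Bayesian treatment in which the noise is inferred from the data."""
    rng = random.Random(seed)
    mu, log_sigma = 0.0, 0.0

    def log_post(m, ls):
        s = math.exp(ls)
        # Gaussian likelihood with unknown sigma; flat priors on mu and log_sigma
        return -len(data) * ls - sum((d - m) ** 2 for d in data) / (2 * s * s)

    lp, samples = log_post(mu, log_sigma), []
    for _ in range(n_iter):
        mu_p = mu + rng.gauss(0.0, 0.2)
        ls_p = log_sigma + rng.gauss(0.0, 0.2)
        lp_p = log_post(mu_p, ls_p)
        if math.log(rng.random()) < lp_p - lp:  # Metropolis accept/reject
            mu, log_sigma, lp = mu_p, ls_p, lp_p
        samples.append((mu, math.exp(log_sigma)))
    return samples
```

With data scattered around 3, the marginal posterior for mu concentrates near 3 while sigma adapts to the scatter; the ensemble of samples can then be analyzed for uncertainties and trade-offs exactly as described in the abstract.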
Fast alternating projection methods for constrained tomographic reconstruction
Liu, Li; Han, Yongxin
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints, combined with total variation (TV) minimization (so-called TV-POCS), for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error and nonnegativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. Breaking the constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than by empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance in reconstruction speed, image quality and quantification. PMID:28253298
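The alternating-projection pattern behind POCS can be sketched for two simple convex sets, data consistency and non-negativity. This is illustrative only: the paper's FS-POCS cycles over bounded-TV and bounded-error sets as well and uses PDHG for the TV projection, whereas the sketch below uses a pseudo-inverse projection onto an affine data-consistency set.

```python
import numpy as np

def pocs(A, b, n_iter=200):
    """Alternating projections onto two convex sets:
    (1) the affine data-consistency set {x : A x = b}, and
    (2) the nonnegative orthant {x : x >= 0}.
    A point in the intersection is a fixed point of the iteration."""
    x = np.zeros(A.shape[1])
    A_pinv = np.linalg.pinv(A)  # least-norm projection onto {x : Ax = b}
    for _ in range(n_iter):
        x = x + A_pinv @ (b - A @ x)  # project onto data consistency
        x = np.maximum(x, 0.0)        # project onto nonnegativity
    return x
```

For CT the sets would be bounded-TV, bounded data-fidelity error and nonnegativity instead, but the sequential projection structure is the same.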
Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions
2007-09-01
C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January). Experimental Modal Analysis, A Simple Non... variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a... THEORY: The general problem statement for a nonlinear constrained optimization problem is: minimize f(x), the objective function, subject to constraints.
NASA Astrophysics Data System (ADS)
Shiklomanov, A. N.; Cowdery, E.; Dietze, M.
2016-12-01
Recent syntheses of global trait databases have revealed that although the functional diversity among plant species is immense, this diversity is constrained by trade-offs between plant strategies. However, the use of among-trait and trait-environment correlations at the global scale for both qualitative ecological inference and land surface modeling has several important caveats. An alternative approach is to preserve the existing PFT-based model structure while using statistical analyses to account for uncertainty and variability in model parameters. In this study, we used a hierarchical Bayesian model of foliar traits in the TRY database to test the following hypotheses: (1) Leveraging the covariance between foliar traits will significantly constrain our uncertainty in their distributions; and (2) Among-trait covariance patterns are significantly different among and within PFTs, reflecting differences in trade-offs associated with biome-level evolution, site-level community assembly, and individual-level ecophysiological acclimation. We found that among-trait covariance significantly constrained estimates of trait means, and the additional information provided by across-PFT covariance led to more constraint still, especially for traits and PFTs with low sample sizes. We also found that among-trait correlations were highly variable among PFTs, and were generally inconsistent with correlations within PFTs. The hierarchical multivariate framework developed in our study can readily be enhanced with additional levels of hierarchy to account for geographic, species, and individual-level variability.
Strehl-constrained reconstruction of post-adaptive optics data and the Software Package AIRY, v. 6.1
NASA Astrophysics Data System (ADS)
Carbillet, Marcel; La Camera, Andrea; Deguignet, Jérémy; Prato, Marco; Bertero, Mario; Aristidi, Éric; Boccacci, Patrizia
2014-08-01
We first briefly present the latest version of the Software Package AIRY, version 6.1, a CAOS-based tool which includes various deconvolution methods, accelerations, regularizations, super-resolution, boundary effects reduction, point-spread function extraction/extrapolation, stopping rules, and constraints in the case of iterative blind deconvolution (IBD). Then, we focus on a new formulation of our Strehl-constrained IBD, here quantitatively compared to the original formulation for simulated near-infrared data of an 8-m class telescope equipped with adaptive optics (AO), showing their equivalence. Next, we extend the application of the original method to the visible domain with simulated data of an AO-equipped 1.5-m telescope, testing also the robustness of the method with respect to the Strehl ratio estimation.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Direct reconstruction of dark energy.
Clarkson, Chris; Zunckel, Caroline
2010-05-28
An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot be based only on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z ≲ 1 using just SNAP-quality data.
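The principal-component step can be sketched as an SVD truncation of an ensemble of binned w(z) estimates. This is a toy illustration: the paper additionally uses information criteria to pick the number of retained components and tailors the fitting functions, which the sketch below does not attempt.

```python
import numpy as np

def pca_reconstruct(W, n_keep):
    """Denoise an ensemble of binned w(z) estimates (rows of W) by
    projecting onto the leading principal components. In practice n_keep
    would be chosen by an information criterion rather than by hand."""
    mean = W.mean(axis=0)
    U, s, Vt = np.linalg.svd(W - mean, full_matrices=False)
    s[n_keep:] = 0.0  # discard components dominated by noise
    return mean + U @ np.diag(s) @ Vt
```

Keeping only the components with signal above the noise level recovers smooth trends in w(z) while suppressing bin-to-bin scatter.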
Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.
Mulder, Joris
2014-02-01
Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
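For intuition, the standard encompassing-prior estimator of a Bayes factor for an inequality-constrained hypothesis (the posterior probability that the constraint holds, divided by its prior probability) can be sketched for a toy Gaussian model with known error variance. This is not the paper's transformed-training-sample or limiting g-prior construction, only the basic quantity those methods refine.

```python
import math, random

def bf_inequality(data, prior_sd=10.0, n_draws=100000, seed=0):
    """Monte Carlo Bayes factor for H1: mu > 0 against the encompassing
    model, estimated as posterior/prior probability of the constraint."""
    rng = random.Random(seed)
    n = len(data)
    xbar = sum(data) / n
    sigma2 = 1.0  # known unit error variance, for illustration only
    # Conjugate normal posterior for mu under a N(0, prior_sd^2) prior
    post_var = 1.0 / (n / sigma2 + 1.0 / prior_sd**2)
    post_mean = post_var * (n * xbar / sigma2)
    post_p = sum(rng.gauss(post_mean, math.sqrt(post_var)) > 0
                 for _ in range(n_draws)) / n_draws
    prior_p = 0.5  # the zero-centred prior puts mass 1/2 on mu > 0
    return post_p / prior_p
```

Note the ceiling: as the evidence for mu > 0 accumulates, this Bayes factor converges to 1/prior_p = 2 rather than growing without bound, which is exactly the complexity issue the abstract discusses.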
Imaging learning and memory: classical conditioning.
Schreurs, B G; Alkon, D L
2001-12-15
The search for the biological basis of learning and memory has, until recently, been constrained by the limits of technology to classic anatomic and electrophysiologic studies. With the advent of functional imaging, we have begun to delve into what, for many, was a "black box." We review several different types of imaging experiments, including steady state animal experiments that image the functional labeling of fixed tissues, and dynamic human studies based on functional imaging of the intact brain during learning. The data suggest that learning and memory involve a surprising conservation of mechanisms and the integrated networking of a number of structures and processes. Copyright 2001 Wiley-Liss, Inc.
Section-constrained local geological interface dynamic updating method based on the HRBF surface
NASA Astrophysics Data System (ADS)
Guo, Jiateng; Wu, Lixin; Zhou, Wenhui; Li, Chaoling; Li, Fengdan
2018-02-01
Boundaries, attitudes and sections are the most common data acquired from regional field geological surveys, and they are used for three-dimensional (3D) geological modelling. However, constructing topologically consistent 3D geological models from rapid and automatic regional modelling with convenient local modifications remains unresolved. In previous works, the Hermite radial basis function (HRBF) surface was introduced for the simulation of geological interfaces from geological boundaries and attitudes, which allows 3D geological models to be automatically extracted from the modelling area by the interfaces. However, the reasonability and accuracy of non-supervised subsurface modelling is limited without further modifications generated through explanations and analyses performed by geology experts. In this paper, we provide flexible and convenient manual interactive manipulation tools for geologists to sketch constraint lines, and these tools may help geologists transform and apply their expert knowledge to the models. In the modified modelling workflow, the geological sections were treated as auxiliary constraints to construct more reasonable 3D geological models. The geometric characteristics of section lines were abstracted to coordinates and normal vectors, and along with the transformed coordinates and vectors from boundaries and attitudes, these characteristics were adopted to co-calculate the implicit geological surface function parameters of the HRBF equations and form constrained geological interfaces from topographic (boundaries and attitudes) and subsurface data (sketched sections). Based on this new modelling method, a prototype system was developed, in which the section lines could be imported from databases or interactively sketched, and the models could be immediately updated after the new constraints were added. Experimental comparisons showed that all boundary, attitude and section data are well represented in the constrained models, which are consistent with expert explanations and help improve the quality of the models.
Spectroscopy of reflection-asymmetric nuclei with relativistic energy density functionals
NASA Astrophysics Data System (ADS)
Xia, S. Y.; Tao, H.; Lu, Y.; Li, Z. P.; Nikšić, T.; Vretenar, D.
2017-11-01
Quadrupole and octupole deformation energy surfaces, low-energy excitation spectra, and transition rates in 14 isotopic chains: Xe, Ba, Ce, Nd, Sm, Gd, Rn, Ra, Th, U, Pu, Cm, Cf, and Fm, are systematically analyzed using a theoretical framework based on a quadrupole-octupole collective Hamiltonian (QOCH), with parameters determined by constrained reflection-asymmetric and axially symmetric relativistic mean-field calculations. The microscopic QOCH model based on the PC-PK1 energy density functional and δ -interaction pairing is shown to accurately describe the empirical trend of low-energy quadrupole and octupole collective states, and predicted spectroscopic properties are consistent with recent microscopic calculations based on both relativistic and nonrelativistic energy density functionals. Low-energy negative-parity bands, average octupole deformations, and transition rates show evidence for octupole collectivity in both mass regions, for which a microscopic mechanism is discussed in terms of evolution of single-nucleon orbitals with deformation.
CLFs-based optimization control for a class of constrained visual servoing systems.
Song, Xiulan; Miaomiao, Fu
2017-03-01
In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e. translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal value of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that there is no need to compute online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
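The Fibonacci method used for the one-dimensional parameter search can be sketched as follows. This is a simplified variant that re-evaluates both interior points at every step rather than reusing one evaluation per iteration, as the textbook Fibonacci method does; the interval-reduction ratios are the same.

```python
def fibonacci_search(f, a, b, n=30):
    """Fibonacci method for minimizing a unimodal function f on [a, b]:
    interior points split the interval in Fibonacci ratios, and the
    sub-interval that cannot contain the minimum is discarded."""
    fib = [1, 1]
    while len(fib) <= n + 1:
        fib.append(fib[-1] + fib[-2])
    for k in range(1, n):
        x1 = a + fib[n - k] / fib[n - k + 2] * (b - a)
        x2 = a + fib[n - k + 1] / fib[n - k + 2] * (b - a)
        if f(x1) < f(x2):
            b = x2  # minimum lies in [a, x2]
        else:
            a = x1  # minimum lies in [x1, b]
    return (a + b) / 2
```

After n steps the bracket has shrunk by roughly the n-th Fibonacci number, so a handful of function evaluations suffices for online tuning of a scalar control parameter.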
Fuzzy robust credibility-constrained programming for environmental management and planning.
Zhang, Yimei; Hang, Guohe
2010-06-01
In this study, a fuzzy robust credibility-constrained programming (FRCCP) method is developed and applied to the planning of waste management systems. It incorporates the concepts of credibility-based chance-constrained programming and robust programming within an optimization framework. The developed method can reflect uncertainties presented as possibility densities by fuzzy membership functions. Fuzzy credibility constraints are transformed to their crisp equivalents at different credibility levels, and ordinary fuzzy inclusion constraints are replaced by their robust deterministic counterparts by setting α-cut levels. The FRCCP method can provide different system costs under different credibility levels (λ). The sensitivity analyses show that the operation cost of the landfill is a critical parameter; for management, any factor that could induce cost fluctuation during landfilling operation deserves serious observation and analysis. By FRCCP, useful solutions can be obtained to provide decision-making support for long-term planning of solid waste management systems. It could be further enhanced by incorporating methods of inexact analysis into its framework, and it can also be applied to other environmental management problems.
Protein-mediated loops in supercoiled DNA create large topological domains
Yan, Yan; Ding, Yue; Leng, Fenfei; Dunlap, David; Finzi, Laura
2018-01-01
Supercoiling can alter the form and base pairing of the double helix and directly impact protein binding. More indirectly, changes in protein binding and the stress of supercoiling also influence the thermodynamic stability of regulatory, protein-mediated loops and shift the equilibria of fundamental DNA/chromatin transactions. For example, supercoiling affects the hierarchical organization and function of chromatin in topologically associating domains (TADs) in both eukaryotes and bacteria. On the other hand, a protein-mediated loop in DNA can constrain supercoiling within a plectonemic structure. To characterize the extent of constrained supercoiling, 400 bp, lac repressor-secured loops were formed in extensively over- or under-wound DNA under gentle tension in a magnetic tweezer. The protein-mediated loops constrained variable amounts of supercoiling that often exceeded the maximum writhe expected for a 400 bp plectoneme. Loops with such high levels of supercoiling appear to be entangled with flanking domains. Thus, loop-mediating proteins operating on supercoiled substrates can establish topological domains that may coordinate gene regulation and other DNA transactions across spans in the genome that are larger than the separation between the binding sites. PMID:29538766
A rate-constrained fast full-search algorithm based on block sum pyramid.
Song, Byung Cheol; Chun, Kang-Wook; Ra, Jong Beom
2005-03-01
This paper presents a fast full-search algorithm (FSA) for rate-constrained motion estimation. The proposed algorithm, which is based on the block sum pyramid frame structure, successively eliminates unnecessary search positions according to a rate-constrained criterion. The algorithm provides estimation performance identical to that of a conventional FSA with a rate constraint, while achieving a considerable reduction in computation.
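The elimination principle rests on the bound |sum(B) − sum(C)| ≤ SAD(B, C): a candidate whose block-sum difference already exceeds the best SAD found so far cannot win, so its SAD is never computed. A minimal sketch without the rate term (block sums are recomputed here for clarity; the block sum pyramid precomputes them at several resolutions):

```python
import numpy as np

def sea_full_search(ref_block, frame, best_init=np.inf):
    """Full search with successive elimination: skip candidates whose
    block-sum difference already exceeds the current best SAD, which
    preserves the exact full-search result at reduced cost."""
    h, w = ref_block.shape
    H, W = frame.shape
    block_sum = ref_block.sum()
    best_sad, best_pos, tested = best_init, None, 0
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            cand = frame[i:i + h, j:j + w]
            if abs(block_sum - cand.sum()) >= best_sad:
                continue  # eliminated by the lower bound; SAD not computed
            tested += 1
            sad = np.abs(ref_block - cand).sum()
            if sad < best_sad:
                best_sad, best_pos = sad, (i, j)
    return best_pos, best_sad, tested
```

Because the bound is a true lower bound on SAD, the returned motion vector is identical to exhaustive full search, mirroring the "identical estimation performance" claim.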
A transformation method for constrained-function minimization
NASA Technical Reports Server (NTRS)
Park, S. K.
1975-01-01
A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
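For a single box constraint l ≤ x ≤ u, one such transformation is x(y) = l + (u − l) sin^2(y), which maps all of the real line onto [l, u], so the constrained problem becomes unconstrained in y. The sketch below uses plain finite-difference gradient descent on the transformed variable rather than the Davidon-Fletcher-Powell algorithm of the paper; the function name and step size are illustrative.

```python
import math

def minimize_box_constrained(f, lower, upper, y0=0.5, lr=0.01, n_iter=500):
    """Transformation method for l <= x <= u: substitute
    x(y) = l + (u - l) * sin^2(y), minimize over unconstrained y,
    then map the solution back. Every iterate is feasible by construction."""
    def x_of(y):
        return lower + (upper - lower) * math.sin(y) ** 2
    y, h = y0, 1e-6
    for _ in range(n_iter):
        # central finite difference of F(y) = f(x(y))
        g = (f(x_of(y + h)) - f(x_of(y - h))) / (2 * h)
        y -= lr * g
    return x_of(y)
```

Note that y0 should avoid the stationary points of the transform (y = 0, π/2, ...), where the chain-rule gradient vanishes even off-optimum; the paper's constructions for quadratic and equality constraints follow the same substitution idea.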
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
Design of bearings for rotor systems based on stability
NASA Technical Reports Server (NTRS)
Dhar, D.; Barrett, L. E.; Knospe, C. R.
1992-01-01
Design of rotor systems incorporating stable behavior is of great importance to manufacturers of high-speed centrifugal machinery, since destabilizing mechanisms (from bearings, seals, aerodynamic cross-coupling, noncolocation effects from magnetic bearings, etc.) increase with machine efficiency and power density. A new method of designing bearing parameters (stiffness and damping coefficients, or coefficients of the controller transfer function) is proposed, based on a numerical search in the parameter space. The feedback control law is based on a decentralized low-order controller structure, and the various design requirements are specified as constraints in the specification and parameter spaces. An algorithm is proposed for solving the problem as a sequence of constrained 'minimax' problems, moving more and more eigenvalues into an acceptable region in the complex plane. The algorithm uses the method of feasible directions to solve the nonlinear constrained minimization problem at each stage. This methodology emphasizes the designer's interaction with the algorithm to generate acceptable designs by relaxing various constraints and changing initial guesses interactively. A design-oriented user interface is proposed to facilitate the interaction.
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is recast as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on the six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective for path planning. In the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method solves the dead-point problem effectively.
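The baseline APF scheme that the paper improves on can be sketched as a normalized gradient step on an attractive-plus-repulsive potential. The function names, gains and the 2-D point-mass setting are illustrative; the classic formulation's known weakness, the dead point where forces cancel, is exactly what the optimal-control reformulation above addresses.

```python
import math

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=5.0, d0=2.0, step=0.05):
    """One normalized gradient step on the artificial potential field:
    a quadratic attractive well at the goal plus repulsive barriers that
    act only within distance d0 of each obstacle."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:
            # repulsive magnitude k_rep * (1/d - 1/d0) / d^2, directed away
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    n = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)

def plan_path(start, goal, obstacles, n_max=5000, tol=0.2):
    pos, path = start, [start]
    for _ in range(n_max):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < tol:
            break
        pos = apf_step(pos, goal, obstacles)
        path.append(pos)
    return path
```

With an obstacle offset from the start-goal line the field deflects the path around it; with a colinear obstacle the plain method can stall at a dead point, motivating the additional control force introduced in the paper.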
Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform
NASA Astrophysics Data System (ADS)
Gato-Rivera, B.; Semikhatov, A. M.
1992-08-01
A direct relation between the conformal formalism for 2D quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the W(l)-constrained KP hierarchy to the (p′, p) minimal model, with the tau function being given by the correlator of a product of (dressed) (l, 1) [or (1, l)] operators, provided the Miwa parameter n_i and the free parameter (an abstract bc spin) present in the constraint are expressed through the ratio p′/p and the level l.
Double quick, double click reversible peptide "stapling".
Grison, Claire M; Burslem, George M; Miles, Jennifer A; Pilsl, Ludwig K A; Yeo, David J; Imani, Zeynab; Warriner, Stuart L; Webb, Michael E; Wilson, Andrew J
2017-07-01
The development of constrained peptides for inhibition of protein-protein interactions is an emerging strategy in chemical biology and drug discovery. This manuscript introduces a versatile, rapid and reversible approach to constrain peptides in a bioactive helical conformation, using BID and RNase S peptides as models. Dibromomaleimide is used to constrain BID and RNase S peptide sequence variants bearing cysteine (Cys) or homocysteine (hCys) amino acids spaced at i and i + 4 positions by double substitution. The constraint can be readily removed by displacement of the maleimide using excess thiol. This new constraining methodology results in enhanced α-helical conformation (BID and RNase S peptide) as demonstrated by circular dichroism and molecular dynamics simulations, resistance to proteolysis (BID) as demonstrated by trypsin proteolysis experiments, and retained or enhanced potency of inhibition for Bcl-2 family protein-protein interactions (BID), or greater capability to restore the hydrolytic activity of the RNase S protein (RNase S peptide). Finally, use of a dibromomaleimide functionalized with an alkyne permits further divergent functionalization of the constrained peptide through alkyne-azide cycloaddition chemistry with fluorescein, oligoethylene glycol or biotin groups, to facilitate biophysical and cellular analyses. Hence this methodology may extend the scope and accessibility of peptide stapling.
NASA Astrophysics Data System (ADS)
Zhu, Hejun
2018-04-01
Recently, seismologists have observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, numerous studies have suggested links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their triggering mechanisms, we need an accurate 3D crustal wavespeed model for the study region. Considering the uneven distribution of earthquakes in this area, seismic tomography with local earthquake records has difficulty achieving even illumination. To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeeds. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions, and the adjoint method is used to calculate misfit gradients with respect to wavespeeds. Twenty-five preconditioned conjugate gradient iterations are used to update model parameters and minimize data misfits. Features in the new crustal model TO25 correlate well with geological provinces in the study region, including the Llano uplift, the Anadarko basin and the Ouachita orogenic front. In addition, there are relatively good correlations between the seismic results and gravity and magnetic observations. This new crustal model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter locations and moment tensor solutions, which are important for investigating triggering mechanisms between these induced earthquakes and unconventional oil and gas exploration activities.
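The preconditioned conjugate gradient step used in such inversions can be sketched for the linear model-update subproblem H dm = -g (Hessian times model update equals negative misfit gradient). This is a generic PCG solver under that assumption, not the tomography code itself; M_inv stands for any applied inverse preconditioner.

```python
import numpy as np

def cg_solve(A, b, M_inv=None, n_iter=100, tol=1e-12):
    """Preconditioned conjugate gradients for A x = b, with A symmetric
    positive definite. M_inv, if given, applies the inverse preconditioner
    (e.g. a diagonal scaling of the misfit gradient)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r if M_inv is not None else r
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)       # step length along the search direction
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r if M_inv is not None else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # conjugate update of the direction
        rz = rz_new
    return x
```

A good preconditioner compresses the spectrum of A, which is why a couple of dozen iterations can suffice even for large tomographic systems.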
Dual-TRACER: High resolution fMRI with constrained evolution reconstruction.
Li, Xuesong; Ma, Xiaodong; Li, Lyu; Zhang, Zhe; Zhang, Xue; Tong, Yan; Wang, Lihong; Sen Song; Guo, Hua
2018-01-01
fMRI with high spatial resolution is beneficial for studies in psychology and neuroscience, but is limited by various factors such as prolonged imaging time, low signal-to-noise ratio and scarcity of advanced facilities. Compressed Sensing (CS) based methods for accelerating fMRI data acquisition are promising. Other advanced algorithms like k-t FOCUSS or PICCS have been developed to improve performance. This study aims to investigate a new method, Dual-TRACER, based on Temporal Resolution Acceleration with Constrained Evolution Reconstruction (TRACER), for accelerating fMRI acquisitions using a golden-angle variable density spiral. Both numerical simulations and in vivo experiments at 3T were conducted to evaluate and characterize this method. Results show that Dual-TRACER can provide functional images with a high spatial resolution (1 × 1 mm²) under an acceleration factor of 20 while maintaining hemodynamic signals well. Compared with the other investigated methods, Dual-TRACER provides better signal recovery, higher fMRI sensitivity and more reliable activation detection.
ChromA: signal-based retention time alignment for chromatography–mass spectrometry data
Hoffmann, Nils; Stoye, Jens
2009-01-01
Summary: We describe ChromA, a web-based alignment tool for chromatography–mass spectrometry data from the metabolomics and proteomics domains. Users can supply their data in open and standardized file formats for retention time alignment using dynamic time warping with different configurable local distance and similarity functions. Additionally, user-defined anchors can be used to constrain and speed up the alignment. A neighborhood around each anchor can be added to increase the flexibility of the constrained alignment. ChromA offers different visualizations of the alignment for easier qualitative interpretation and comparison of the data. For the multiple alignment of more than two data files, the center-star approximation is applied to select a reference among the input files to align to. Availability: ChromA is available at http://bibiserv.techfak.uni-bielefeld.de/chroma. Executables and source code under the L-GPL v3 license are provided for download at the same location. Contact: stoye@techfak.uni-bielefeld.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19505941
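The core alignment step ChromA relies on, dynamic time warping, can be sketched in a few lines. This is a generic textbook implementation, not ChromA's actual code: the function name and the absolute-difference local distance are illustrative, and the anchor/neighborhood constraints mentioned in the abstract are omitted.

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic O(len(a)*len(b)) dynamic time warping with a configurable
    local distance, as used for retention-time alignment."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # insertion
                              D[i][j - 1],      # deletion
                              D[i - 1][j - 1])  # match
    return D[n][m]
```

Because DTW permits one-to-many matches, a duplicated sample aligns at zero cost, which is exactly the elasticity needed to absorb retention-time shifts.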
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2015-04-01
A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is an HBV-type model that is specialized for alpine environments, but has been applied over a wide range of basins all over the world (see Kling et al., 2014 for an overview). Within the methodology, available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer-functions (TFs) and subsequent averaging methods, whereby only TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalisation approach was extended in order to allow for a more meta-heuristical handling of the transfer-functions. The two main novelties are: 1. An explicit introduction of constraints into the parameter-estimation scheme: The constraint scheme replaces invalid parts of the transfer-function solution space with valid solutions. It is inspired by applications in evolutionary algorithms and related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller experience into the parameter estimation. 2. Spline-based transfer-functions: Spline-based functions enable arbitrary forms of transfer-functions. This is of importance since in many cases the general relationship between sub-grid information and parameters is known, but not the form of the transfer-function itself. The contribution presents the results and experiences with the adopted method and the introduced extensions. Simulations are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi: 10.1029/2008WR007327. Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956.
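A transfer function of the kind described, mapping sub-grid information to a model parameter while respecting physical bounds, might be sketched as follows. The predictor (clay fraction), control-point values and bounds are entirely hypothetical, and plain piecewise-linear interpolation stands in for the paper's spline-based functions.

```python
import numpy as np

# Hypothetical control points of a transfer function: sub-grid clay
# fraction -> coarse-scale storage parameter (values made up for
# illustration, not taken from the COSERO study).
knots_x = np.array([0.0, 0.2, 0.5, 1.0])    # clay fraction
knots_y = np.array([5.0, 20.0, 60.0, 80.0])  # storage parameter

def transfer(clay, lo=0.0, hi=75.0):
    """Piecewise-linear transfer function with hard physical bounds,
    a crude analogue of the constrained spline-based TFs."""
    return np.clip(np.interp(clay, knots_x, knots_y), lo, hi)
```

The clipping step plays the role of the constraint scheme: parameter values produced outside the valid range are replaced by valid ones.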
A trust region-based approach to optimize triple response systems
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen
2014-05-01
This article presents a new computing procedure for the global optimization of the triple response system (TRS), in which the response functions are non-convex quadratics and the input factors satisfy a radially constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.
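The radial-constraint step, locating a Lagrange multiplier so that the solution respects a bound on ||x||, can be illustrated on a toy separable quadratic. This is a generic trust-region-subproblem sketch, not TRSALG itself; it assumes a diagonal positive-definite Hessian so that simple bisection on the multiplier suffices.

```python
import math

def radial_qp(a, b, delta, tol=1e-10):
    """Minimize sum_i (0.5*a_i*x_i^2 + b_i*x_i) subject to ||x|| <= delta,
    with all a_i > 0, by bisecting on the Lagrange multiplier lam >= 0.
    Stationarity gives x_i(lam) = -b_i / (a_i + lam), and ||x(lam)||
    decreases monotonically as lam grows."""
    def x_of(lam):
        return [-bi / (ai + lam) for ai, bi in zip(a, b)]
    def norm(x):
        return math.sqrt(sum(v * v for v in x))
    if norm(x_of(0.0)) <= delta:       # unconstrained minimizer is interior
        return x_of(0.0)
    lo, hi = 0.0, 1.0
    while norm(x_of(hi)) > delta:      # bracket the multiplier
        hi *= 2.0
    while hi - lo > tol:               # bisect until ||x(lam)|| = delta
        mid = 0.5 * (lo + hi)
        if norm(x_of(mid)) > delta:
            lo = mid
        else:
            hi = mid
    return x_of(hi)
```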
An RBF-based compression method for image-based relighting.
Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung
2006-04-01
In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
2015-03-26
albeit powerful, method available for exploring CAS. As discussed above, there are many useful mathematical tools appropriate for CAS modeling. Agent-based...cells, telephone calls, and sexual contacts approach power-law distributions. [48] Networks in general are robust against random failures, but...targeted failures can have powerful effects – provided the targeter has a good understanding of the network structure. Some argue (convincingly) that all
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
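The POD construction underlying such reduced-order models can be sketched with a small snapshot matrix and an SVD. The snapshot family below is made up for illustration (by construction it spans exactly two modes), and the paper's KKT-based constraint enforcement is not reproduced.

```python
import numpy as np

# Illustrative POD sketch: collect snapshots as columns, take the SVD, and
# keep the leading left singular vectors as the reduced basis.
x = np.linspace(0.0, 1.0, 50)
snapshots = np.stack(
    [(1 + 0.2 * k) * np.sin(np.pi * x) + 0.5 * k * np.sin(2 * np.pi * x)
     for k in range(10)], axis=1)       # 10 snapshots of length 50

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :2]                        # these snapshots span only 2 modes
coeffs = basis.T @ snapshots            # project onto the POD basis
recon = basis @ coeffs                  # reduced-order reconstruction
err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
```

A constrained variant such as C-ROM would additionally enforce user-defined bounds on the reconstructed solution via the KKT conditions rather than accept the raw projection.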
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Minjing; Qian, Wei-jun; Gao, Yuqian
The kinetics of biogeochemical processes in natural and engineered environmental systems are typically described using Monod-type or modified Monod-type models. These models rely on biomass as a surrogate for the functional enzymes in a microbial community that catalyze biogeochemical reactions. A major challenge in applying such models is the difficulty of quantitatively measuring functional biomass for constraining and validating the models. On the other hand, omics-based approaches have been increasingly used to characterize microbial community structure, functions, and metabolites. Here we propose an enzyme-based model that can incorporate omics data to link microbial community functions with biogeochemical process kinetics. The model treats enzymes as time-variable catalysts for biogeochemical reactions and applies a biogeochemical reaction network to incorporate intermediate metabolites. The sequences of genes and proteins from metagenomes, as well as those from the UniProt database, were used for targeted enzyme quantification and to provide insights into the dynamic linkage among functional genes, enzymes, and metabolites that needs to be incorporated in the model. The application of the model is demonstrated using denitrification as an example, by comparing model-simulated with measured functional enzymes, genes, and denitrification substrates and intermediates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Hsin-Fu; Tu, Ming-Hsien
2011-03-15
We derive the bilinear equations of the constrained BKP hierarchy from the calculus of pseudodifferential operators. The full hierarchy equations can be expressed in Hirota's bilinear form characterized by the functions ρ, σ, and τ. Besides, we also give a modification of the original Orlov-Schulman additional symmetry to preserve the constrained form of the Lax operator for this hierarchy. The vector fields associated with the modified additional symmetry turn out to satisfy a truncated centerless Virasoro algebra.
On the optimization of electromagnetic geophysical data: Application of the PSO algorithm
NASA Astrophysics Data System (ADS)
Godio, A.; Santilano, A.
2018-01-01
The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained, according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic, real Audio-Magnetotelluric (AMT) and Long Period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
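A minimal box-constrained PSO of the kind described, requiring no initial model beyond the search bounds, can be sketched as follows. This is a generic textbook variant, not the authors' implementation: the inertia and acceleration coefficients are common defaults, and the Occam-like regularization term is omitted.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the box constraint
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

In an inversion setting, `f` would be the data-misfit objective of a resistivity model rather than the toy sphere function used in the test below.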
Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon
Regression has successfully been incorporated into memetic algorithms (MAs) to build surrogate models for the objective or constraint landscape of optimization problems. This helps to alleviate the need for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classification can alternatively be used to assist an MA in the choice of individuals that would undergo refinement. Support-vector-assisted MAs were recently proposed to reduce the number of function evaluations in inequality-constrained optimization problems by distinguishing regions of feasible solutions from infeasible ones based on past solutions, so that search effort can be focused on a few potential regions only. For problems with equality constraints, however, the feasible space is obviously extremely small. It is thus extremely difficult for the global search component of the MA to produce feasible solutions, and the classification of feasible and infeasible space becomes ineffective. In this paper, a novel strategy to overcome this limitation is proposed, particularly for problems having one and only one equality constraint. The raw constraint value of an individual, instead of its feasibility class, is utilized in this work.
JWST Wavefront Control Toolbox
NASA Technical Reports Server (NTRS)
Shin, Shahram Ron; Aronstein, David L.
2011-01-01
A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing the cost function, and can be used in either constrained or unconstrained optimization. The control process involves 1 to 7 degree-of-freedom perturbations per segment of the primary mirror in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed based on a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.
Fixman compensating potential for general branched molecules
NASA Astrophysics Data System (ADS)
Jain, Abhinandan; Kandel, Saugat; Wagner, Jeffrey; Larsen, Adrien; Vaidehi, Nagarajan
2013-12-01
The technique of constraining high frequency modes of molecular motion is an effective way to increase simulation time scale and improve conformational sampling in molecular dynamics simulations. However, it has been shown that constraints on higher frequency modes such as bond lengths and bond angles stiffen the molecular model, thereby introducing systematic biases in the statistical behavior of the simulations. Fixman proposed a compensating potential to remove such biases in the thermodynamic and kinetic properties calculated from dynamics simulations. Previous implementations of the Fixman potential have been limited to only short serial chain systems. In this paper, we present a spatial operator algebra based algorithm to calculate the Fixman potential and its gradient within constrained dynamics simulations for branched topology molecules of any size. Our numerical studies on molecules of increasing complexity validate our algorithm by demonstrating recovery of the dihedral angle probability distribution function for systems that range in complexity from serial chains to protein molecules. We observe that the Fixman compensating potential recovers the free energy surface of a serial chain polymer, thus annulling the biases caused by constraining the bond lengths and bond angles. The inclusion of Fixman potential entails only a modest increase in the computational cost in these simulations. We believe that this work represents the first instance where the Fixman potential has been used for general branched systems, and establishes the viability for its use in constrained dynamics simulations of proteins and other macromolecules.
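For reference, the compensating potential itself has a compact standard form in the constrained-dynamics literature (this sketch does not reproduce the paper's spatial-operator-algebra evaluation of it):

```latex
U_{\mathrm{Fix}}(\mathbf{q}) \;=\; \frac{k_{\mathrm{B}}T}{2}\,\ln \det \mathbf{G}(\mathbf{q})
```

where G(q) is the mass-metric tensor associated with the constrained (hard) coordinates. Adding U_Fix to the force field during constrained sampling removes the statistical bias relative to the corresponding unconstrained stiff model, which is exactly the recovery of the dihedral-angle distributions demonstrated in the abstract.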
An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
1989-01-01
A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT algorithm. Advantages of this approach are the use of unconstrained optimization methods to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from either the feasible or the infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize each objective function separately.
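The Kreisselmeir-Steinhauser envelope at the heart of this technique is a smooth, conservative approximation of the maximum of a set of function values that tightens as the aggregation parameter rho grows. A minimal sketch (the max-shift for numerical stability is a common implementation device, added here for robustness):

```python
import math

def ks_envelope(g, rho=50.0):
    """Kreisselmeir-Steinhauser envelope of the values g:
    KS(g) = max(g) + (1/rho) * ln(sum_i exp(rho*(g_i - max(g)))).
    Always >= max(g), and approaches max(g) as rho -> infinity."""
    m = max(g)  # shift to avoid overflow in exp
    return m + math.log(sum(math.exp(rho * (gi - m)) for gi in g)) / rho
```

In the paper's setting, g would collect the constraints together with the reduced objective functions, and an unconstrained search is run on the single envelope value.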
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints are equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, J.V.
The published work on exact penalization is indeed vast. Recently this work has indicated an intimate relationship between exact penalization, Lagrange multipliers, and problem stability or calmness. In the present work we chronicle this development within a simple idealized problem framework, wherein we unify, extend, and refine much of the known theory. In particular, most of the foundations for constrained optimization are developed with the aid of exact penalization techniques. Our approach is highly geometric and is based upon the elementary subdifferential theory for distance functions. It is assumed that the reader is familiar with the theory of convex sets and functions. 54 refs.
Resonant Raman spectra of diindenoperylene thin films
NASA Astrophysics Data System (ADS)
Scholz, R.; Gisslén, L.; Schuster, B.-E.; Casu, M. B.; Chassé, T.; Heinemeyer, U.; Schreiber, F.
2011-01-01
Resonant and preresonant Raman spectra obtained on diindenoperylene (DIP) thin films are interpreted with calculations of the deformation of a relaxed excited molecule with density functional theory (DFT). The comparison of excited state geometries based on time-dependent DFT or on a constrained DFT scheme with observed absorption spectra of dissolved DIP reveals that the deformation pattern deduced from constrained DFT is more reliable. Most observed Raman peaks can be assigned to calculated A_g-symmetric breathing modes of DIP or their combinations. As the position of one of the laser lines used falls into a highly structured absorption band, we have carefully analyzed the Raman excitation profile arising from the frequency dependence of the dielectric tensor. This procedure gives Raman cross sections in good agreement with the observed relative intensities, both in the fully resonant and in the preresonant case.
Geometric constrained variational calculus. III: The second variation (Part II)
NASA Astrophysics Data System (ADS)
Massa, Enrico; Luria, Gianvittorio; Pagani, Enrico
2016-03-01
The problem of minimality for constrained variational calculus is analyzed within the class of piecewise differentiable extremaloids. A fully covariant representation of the second variation of the action functional based on a family of local gauge transformations of the original Lagrangian is proposed. The necessity of pursuing a local adaptation process, rather than the global one described in [1] is seen to depend on the value of certain scalar attributes of the extremaloid, here called the corners’ strengths. On this basis, both the necessary and the sufficient conditions for minimality are worked out. In the discussion, a crucial role is played by an analysis of the prolongability of the Jacobi fields across the corners. Eventually, in the appendix, an alternative approach to the concept of strength of a corner, more closely related to Pontryagin’s maximum principle, is presented.
Vibration control of beams using stand-off layer damping: finite element modeling and experiments
NASA Astrophysics Data System (ADS)
Chaudry, A.; Baz, A.
2006-03-01
Damping treatments with stand-off layer (SOL) have been widely accepted as an attractive alternative to conventional constrained layer damping (CLD) treatments. Such acceptance stems from the fact that the SOL, which is simply a slotted spacer layer sandwiched between the viscoelastic layer and the base structure, acts as a strain magnifier that considerably amplifies the shear strain and hence the energy dissipation characteristics of the viscoelastic layer. Accordingly, more effective vibration suppression can be achieved by using SOL as compared to employing CLD. In this paper, a comprehensive finite element model of the stand-off layer constrained damping treatment is developed. The model accounts for the geometrical and physical parameters of the slotted SOL, the viscoelastic layer, the constraining layer, and the base structure. The predictions of the model are validated against the predictions of a distributed transfer function model and a model built using a commercial finite element code (ANSYS). Furthermore, the theoretical predictions are validated experimentally for passive SOL treatments of different configurations. The obtained results indicate a close agreement between theory and experiments and demonstrate the effectiveness of the CLD with SOL in enhancing energy dissipation as compared to the conventional CLD. Extending the proposed one-dimensional CLD with SOL to more complex structures is a natural continuation of the present study.
Implementation and verification of global optimization benchmark problems
NASA Astrophysics Data System (ADS)
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
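The interval estimates mentioned, bounds on a function's range over a box, can be illustrated with naive interval arithmetic. This sketch is illustrative and is not the C++ library's API; note the characteristic overestimation of naive enclosures (x*x over [-1, 2] yields [-2, 4], wider than the true range [0, 4]).

```python
class Interval:
    """Minimal interval arithmetic: each operation returns an interval
    guaranteed to enclose the true range of the result."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the product range is spanned by the four endpoint products
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def square_enclosure(x):
    """Enclosure of f(x) = x*x over a box, evaluated naively."""
    return x * x
```

A deterministic global solver uses such enclosures to discard boxes whose lower bound already exceeds the best value found.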
Dambi, Jermaine M; Jelsma, Jennifer
2014-12-05
Cerebral palsy requires appropriate on-going rehabilitation intervention which should effectively meet the needs of both children and parents/care-givers. The provision of effective support is a challenge, particularly in resource-constrained settings. A quasi-experimental pragmatic research design was used to compare the impact of two models of rehabilitation service delivery currently offered in Harare, Zimbabwe, one an outreach-based programme and the other institution-based. Questionnaires were distributed to 46 caregivers of children with cerebral palsy at baseline and after three months. Twenty children received rehabilitation services in a community setting and 26 received services as outpatients at a central hospital. The Gross Motor Function Measurement was used to assess functional change. The burden of care was measured using the Caregiver Strain Index, satisfaction with physiotherapy was assessed using the modified Medrisk satisfaction with physiotherapy services questionnaire and compliance was measured as the proportion met of the scheduled appointments. Children receiving outreach-based treatment were significantly older than children in the institution-based group. Regression analysis revealed that, once age and level of severity were controlled for, children in the outreach-based treatment group improved their motor function 6% more than children receiving institution-based services. There were no differences detected between the groups with regard to caregiver well-being, and 51% of the caregivers reported signs consistent with clinical distress/depression. Most caregivers (83%) expressed that they were overwhelmed by the caregiving role and this increased with the chronicity of care. The financial burden of caregiving was predictive of caregiver strain. Caregivers in the outreach-based group reported greater satisfaction with services and were more compliant (p < .001) compared with recipients of institution-based services.
Long-term caregiving leads to strain in caregivers and there is a need to design interventions to alleviate the burden. The study was a pragmatic, quasi-experimental study, thus causality cannot be inferred. However, findings from this study suggest that the provision of care within a community setting as part of a well-structured outreach programme may be a preferable method of service delivery within a resource-constrained context. It was associated with a greater improvement in functioning, greater satisfaction with services and better compliance.
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
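One of the applications named, least absolute deviation, is a piecewise-linear nonsmooth problem; a plain subgradient method (not the recurrent network itself, and without its finite-time guarantee) already conveys the flavor. The step-size choices below are illustrative.

```python
def lad_subgradient(a, b, steps=5000, lr=0.02):
    """Minimize f(x) = sum_i |a_i*x - b_i| over a scalar x by subgradient
    descent with a diminishing step size; a simple stand-in for the kind
    of nonsmooth piecewise-linear problem the recurrent network targets."""
    def sign(v):
        return (v > 0) - (v < 0)
    x = 0.0
    for t in range(1, steps + 1):
        # a valid subgradient of the piecewise-linear objective at x
        g = sum(ai * sign(ai * x - bi) for ai, bi in zip(a, b))
        x -= (lr / t ** 0.5) * g
    return x
```

For unit weights the minimizer is the median of the b_i, which the test below checks approximately.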
Monowar, Muhammad Mostafa; Rahman, Md. Obaidur; Hong, Choong Seon; Lee, Sungwon
2010-01-01
Energy conservation is one of the striking research issues nowadays for power-constrained wireless sensor networks (WSNs) and hence, several duty-cycle based MAC protocols have been devised for WSNs in the last few years. However, assimilation of diverse applications with different QoS requirements (i.e., delay and reliability) within the same network also necessitates devising a generic duty-cycle based MAC protocol that can achieve both the delay and reliability guarantee, termed as multi-constrained QoS, while preserving the energy efficiency. To address this, in this paper, we propose a Multi-constrained QoS-aware duty-cycle MAC for heterogeneous traffic in WSNs (MQ-MAC). MQ-MAC classifies the traffic based on their multi-constrained QoS demands. Through extensive simulation using ns-2 we evaluate the performance of MQ-MAC. MQ-MAC provides the desired delay and reliability guarantee according to the nature of the traffic classes as well as achieves energy efficiency. PMID:22163439
Sequential Adaptive Multi-Modality Target Detection and Classification Using Physics Based Models
2006-09-01
estimation," R. Raghuram, R. Raich and A.O. Hero, IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Toulouse France, June 2006, <http...can then be solved using off-the-shelf classifiers such as radial basis functions, SVM, or kNN classifier structures. When applied to mine detection we...stage waveform selection for adaptive resource constrained state estimation," 2006 IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing
NASA Astrophysics Data System (ADS)
Spangelo, S. C.; Cutler, J.; Anderson, L.; Fosse, E.; Cheng, L.; Yntema, R.; Bajaj, M.; Delp, C.; Cole, B.; Soremekum, G.; Kaslow, D.
Small satellites are more highly resource-constrained by mass, power, volume, delivery timelines, and financial cost relative to their larger counterparts. Small satellites are operationally challenging because subsystem functions are coupled and constrained by the limited available commodities (e.g. data, energy, and access times to ground resources). Furthermore, additional operational complexities arise because small satellite components are physically integrated, which may yield thermal or radio frequency interference. In this paper, we extend our initial Model Based Systems Engineering (MBSE) framework developed for a small satellite mission by demonstrating the ability to model different behaviors and scenarios. We integrate several simulation tools to execute SysML-based behavior models, including subsystem functions and internal states of the spacecraft. We demonstrate utility of this approach to drive the system analysis and design process. We demonstrate applicability of the simulation environment to capture realistic satellite operational scenarios, which include energy collection, the data acquisition, and downloading to ground stations. The integrated modeling environment enables users to extract feasibility, performance, and robustness metrics. This enables visualization of both the physical states (e.g. position, attitude) and functional states (e.g. operating points of various subsystems) of the satellite for representative mission scenarios. The modeling approach presented in this paper offers satellite designers and operators the opportunity to assess the feasibility of vehicle and network parameters, as well as the feasibility of operational schedules. This will enable future missions to benefit from using these models throughout the full design, test, and fly cycle. 
In particular, vehicle and network parameters and schedules can be verified prior to being implemented, during mission operations, and can also be updated in near real-time with operational performance feedback.
The generalized quadratic knapsack problem. A neuronal network approach.
Talaván, Pedro M; Yáñez, Javier
2006-05-01
The solution of an optimization problem through the continuous Hopfield network (CHN) is based on an energy, or Lyapunov, function that decreases as the system evolves until a local minimum value is attained. A new energy function is proposed in this paper so that any 0-1 programming problem with linear constraints and a quadratic objective function can be solved. This problem, denoted the generalized quadratic knapsack problem (GQKP), includes as particular cases such well-known problems as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). The new energy function generalizes those proposed by other authors. Through this energy function, any GQKP can be solved with an appropriate parameter-setting procedure, which is detailed in this paper. As a particular case, and in order to test this generalized energy function, computational experiments solving the traveling salesman problem are also included.
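The descent mechanism the abstract relies on can be sketched in a few lines. Below is a minimal discrete Hopfield-style minimizer of an energy E(x) = -1/2 xᵀWx - θᵀx over binary states: with symmetric W and zero diagonal, asynchronous updates never increase E, so the network settles into a local minimum. The tiny problem instance is purely illustrative, not from the paper, and the continuous dynamics and constraint-penalty construction of the CHN are omitted.

```python
# Sketch: asynchronous binary Hopfield descent on E(x) = -1/2 x^T W x - theta^T x.
# W symmetric with zero diagonal guarantees each flip is non-increasing in E.

def energy(W, theta, x):
    n = len(x)
    quad = sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return -0.5 * quad - sum(theta[i] * x[i] for i in range(n))

def hopfield_descend(W, theta, x):
    """Flip units one at a time until no flip lowers the energy."""
    changed = True
    while changed:
        changed = False
        for i in range(len(x)):
            # local field h decides the energy-minimizing state of unit i
            h = sum(W[i][j] * x[j] for j in range(len(x))) + theta[i]
            new = 1 if h > 0 else 0
            if new != x[i]:
                x[i] = new
                changed = True
    return x

W = [[0, 2, -1], [2, 0, -1], [-1, -1, 0]]  # symmetric, zero diagonal (illustrative)
theta = [0.5, 0.5, -0.5]
x = hopfield_descend(W, theta, [0, 0, 1])
```

In the GQKP setting, the linear 0-1 constraints would additionally be folded into W and θ as penalty terms, which is precisely what the paper's generalized energy function organizes.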
Optimization of an exchange-correlation density functional for water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritz, Michelle; Fernández-Serra, Marivi; Institute for Advanced Computational Science, Stony Brook University, Stony Brook, New York 11794-3800
2016-06-14
We describe a method, which we call data projection onto parameter space (DPPS), to optimize an energy functional of the electron density so that it reproduces a dataset of experimental magnitudes. Our scheme, based on Bayes' theorem, constrains the optimized functional not to depart unphysically from existing ab initio functionals. The resulting functional maximizes the probability of being the "correct" parameterization of a given functional form, in the sense of Bayes theory. The application of DPPS to water sheds new light on why density functional theory has performed rather poorly for liquid water, on what improvements are needed, and on the intrinsic limitations of the generalized gradient approximation to electron exchange and correlation. Finally, we present tests of our water-optimized functional, which we call vdW-DF-w, showing that it performs very well for a variety of condensed water systems.
Design and Evaluation of the Terminal Area Precision Scheduling and Spacing System
NASA Technical Reports Server (NTRS)
Swenson, Harry N.; Thipphavong, Jane; Sadovsky, Alex; Chen, Liang; Sullivan, Chris; Martin, Lynne
2011-01-01
This paper describes the design, development and results from a high-fidelity human-in-the-loop simulation of an integrated set of trajectory-based automation tools providing precision scheduling, sequencing and controller merging and spacing functions. These integrated functions are combined into a system called the Terminal Area Precision Scheduling and Spacing (TAPSS) system. It is a strategic and tactical planning tool that provides Traffic Management Coordinators, En Route and Terminal Radar Approach Control air traffic controllers the ability to efficiently optimize the arrival capacity of a demand-impacted airport while simultaneously enabling fuel-efficient descent procedures. The TAPSS system consists of four-dimensional trajectory prediction, arrival runway balancing, aircraft separation constraint-based scheduling, traffic flow visualization and trajectory-based advisories to assist controllers in efficient metering, sequencing and spacing. The TAPSS system was evaluated and compared to today's ATC operation through an extensive series of human-in-the-loop simulations for arrival flows into Los Angeles International Airport. The test conditions included the variation of aircraft demand from a baseline of today's capacity-constrained periods through 5%, 10% and 20% increases. Performance data were collected for engineering and human factors analysis and compared with similar operations both with and without the TAPSS system. The engineering data indicate that operations with TAPSS show up to a 10% increase in airport throughput during capacity-constrained periods while maintaining fuel-efficient aircraft descent profiles from cruise to landing.
Global optimization framework for solar building design
NASA Astrophysics Data System (ADS)
Silva, N.; Alves, N.; Pascoal-Faria, P.
2017-07-01
The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators; the result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are automatically generated according to an objective function. A generative model can be optimized according to its parameters; in this way, the best solution to a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description of these main shapes, and the formulation of the objective function with respect to a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline combining an energy calculation tool with a geometric scripting engine is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and specific constraints). The proposed approach will help in the construction of real buildings that consume less energy, contributing to a more sustainable world.
The dynamics of folding instability in a constrained Cosserat medium
NASA Astrophysics Data System (ADS)
Gourgiotis, Panos A.; Bigoni, Davide
2017-04-01
Different from Cauchy elastic materials, generalized continua, and in particular constrained Cosserat materials, can be designed to possess extreme (near a failure of ellipticity) orthotropy properties and in this way to model folding in a three-dimensional solid. Following this approach, folding, which is a narrow zone of highly localized bending, spontaneously emerges as a deformation pattern occurring in a strongly anisotropic solid. How this peculiar pattern interacts with wave propagation in the time-harmonic domain is revealed through the derivation of an antiplane, infinite-body Green's function, which opens the way to integral techniques for anisotropic constrained Cosserat continua. Viewed as a perturbing agent, the Green's function shows that folding, emerging near a steadily pulsating source in the limit of failure of ellipticity, is transformed into a disturbance with wavefronts parallel to the folding itself. The results of the presented study introduce the possibility of exploiting constrained Cosserat solids for propagating waves in materials displaying origami patterns of deformation. This article is part of the themed issue 'Patterning through instabilities in complex media: theory and applications.'
NASA Astrophysics Data System (ADS)
Zheng, Lixin; Chen, Mohan; Sun, Zhaoru; Ko, Hsin-Yu; Santra, Biswajit; Dhuvad, Pratikkumar; Wu, Xifan
2018-04-01
We perform ab initio molecular dynamics (AIMD) simulation of liquid water in the canonical ensemble at ambient conditions using the strongly constrained and appropriately normed (SCAN) meta-generalized-gradient approximation (GGA) functional approximation and carry out systematic comparisons with the results obtained from the GGA-level Perdew-Burke-Ernzerhof (PBE) functional and Tkatchenko-Scheffler van der Waals (vdW) dispersion correction inclusive PBE functional. We analyze various properties of liquid water including radial distribution functions, oxygen-oxygen-oxygen triplet angular distribution, tetrahedrality, hydrogen bonds, diffusion coefficients, ring statistics, density of states, band gaps, and dipole moments. We find that the SCAN functional is generally more accurate than the other two functionals for liquid water by not only capturing the intermediate-range vdW interactions but also mitigating the overly strong hydrogen bonds prescribed in PBE simulations. We also compare the results of SCAN-based AIMD simulations in the canonical and isothermal-isobaric ensembles. Our results suggest that SCAN provides a reliable description for most structural, electronic, and dynamical properties in liquid water.
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
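The simulation side of this idea can be sketched very compactly: an Euler-Maruyama step for a one-dimensional reflected SDE, where any step that crosses a barrier at x = 0 is folded back into the domain by reflection. The drift and volatility values below are illustrative assumptions; the paper's movement models and its latent-path augmentation for inference are far richer.

```python
# Sketch: Euler-Maruyama simulation of a 1-D SDE reflected at the barrier x = 0.
import random

def reflected_euler(x0, drift, sigma, dt, n_steps, seed=1):
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, dt ** 0.5)       # Brownian increment
        x = x + drift * dt + sigma * dw       # unconstrained Euler step
        if x < 0.0:                           # crossed the barrier: reflect back
            x = -x
        path.append(x)
    return path

# drift pushing toward the barrier, so reflections actually occur
path = reflected_euler(x0=0.5, drift=-0.2, sigma=0.3, dt=0.01, n_steps=1000)
```

The inferential augmentation in the paper runs this logic in reverse: the observed constrained path is paired with a latent unconstrained path, which is what makes standard SDE machinery applicable.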
NASA Astrophysics Data System (ADS)
Oberhofer, Harald; Blumberger, Jochen
2010-12-01
We present a plane wave basis set implementation for the calculation of electronic coupling matrix elements of electron transfer reactions within the framework of constrained density functional theory (CDFT). Following the work of Wu and Van Voorhis [J. Chem. Phys. 125, 164105 (2006)], the diabatic wavefunctions are approximated by the Kohn-Sham determinants obtained from CDFT calculations, and the coupling matrix element calculated by an efficient integration scheme. Our results for intermolecular electron transfer in small systems agree very well with high-level ab initio calculations based on generalized Mulliken-Hush theory, and with previous local basis set CDFT calculations. The effect of thermal fluctuations on the coupling matrix element is demonstrated for intramolecular electron transfer in the tetrathiafulvalene-diquinone (Q-TTF-Q-) anion. Sampling the electronic coupling along density functional based molecular dynamics trajectories, we find that thermal fluctuations, in particular the slow bending motion of the molecule, can lead to changes in the instantaneous electron transfer rate by more than an order of magnitude. The thermal average, ⟨|H_ab|²⟩^(1/2) = 6.7 mH, is significantly higher than the value obtained for the minimum energy structure, |H_ab| = 3.8 mH. While CDFT in combination with generalized gradient approximation (GGA) functionals describes the intermolecular electron transfer in the studied systems well, exact exchange is required for Q-TTF-Q- in order to obtain coupling matrix elements in agreement with experiment (3.9 mH). The implementation presented opens up the possibility to compute electronic coupling matrix elements for extended systems where donor, acceptor, and the environment are treated at the quantum mechanical (QM) level.
A finite-temperature Hartree-Fock code for shell-model Hamiltonians
NASA Astrophysics Data System (ADS)
Bertsch, G. F.; Mehlhaff, J. M.
2016-10-01
The codes HFgradZ.py and HFgradT.py find axially symmetric minima of a Hartree-Fock energy functional for a Hamiltonian supplied in a shell model basis. The functional to be minimized is the Hartree-Fock energy for zero-temperature properties or the Hartree-Fock grand potential for finite-temperature properties (thermal energy, entropy). The minimization may be subjected to additional constraints besides axial symmetry and nucleon numbers. A single-particle operator can be used to constrain the minimization by adding it to the single-particle Hamiltonian with a Lagrange multiplier. One can also constrain its expectation value in the zero-temperature code. Also the orbital filling can be constrained in the zero-temperature code, fixing the number of nucleons having given Kπ quantum numbers. This is particularly useful to resolve near-degeneracies among distinct minima.
NASA Astrophysics Data System (ADS)
Aryal, Saurav; Finn, Susanna C.; Hewawasam, Kuravi; Maguire, Ryan; Geddes, George; Cook, Timothy; Martel, Jason; Baumgardner, Jeffrey L.; Chakrabarti, Supriya
2018-05-01
Energies and fluxes of precipitating electrons in an aurora over Lowell, MA on 22-23 June 2015 were derived based on simultaneous, high-resolution (≈ 0.02 nm) brightness measurements of N2+ (427.8 nm, blue line), OI (557.7 nm, green line), and OI (630.0 nm, red line) emissions. The electron energies and energy fluxes as a function of time and look direction were derived by nonlinear minimization of model predictions with respect to the measurements. Three different methods were compared: in the first two methods, we constrained the modeled brightnesses and brightness ratios, respectively, with measurements to simultaneously derive energies and fluxes. We then used a hybrid method in which we constrained the individual modeled brightness ratios with measurements to derive energies, and then constrained modeled brightnesses with measurements to derive fluxes. The derived energy, assuming a Maxwellian distribution, ranged during this storm from 109 to 262 eV, and the total energy flux ranged from 0.8 to 2.2 erg·cm^-2·s^-1. This approach provides a way to estimate energies and energy fluxes of precipitating electrons using simultaneous multispectral measurements.
A novel approach based on preference-based index for interval bilevel linear programming problem.
Ren, Aihong; Wang, Yuping; Xue, Xingsi
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in the objective functions only, through normal variation of interval numbers and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level at which the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
Remote sensing of plant functional types.
Ustin, Susan L; Gamon, John A
2010-06-01
Conceptually, plant functional types represent a classification scheme between species and broad vegetation types. Historically, these were based on physiological, structural and/or phenological properties, whereas recently, they have reflected plant responses to resources or environmental conditions. Often, an underlying assumption, based on an economic analogy, is that the functional role of vegetation can be identified by linked sets of morphological and physiological traits constrained by resources, based on the hypothesis of functional convergence. Using these concepts, ecologists have defined a variety of functional traits that are often context dependent, and the diversity of proposed traits demonstrates the lack of agreement on universal categories. Historically, remotely sensed data have been interpreted in ways that parallel these observations, often focused on the categorization of vegetation into discrete types, often dependent on the sampling scale. At the same time, current thinking in both ecology and remote sensing has moved towards viewing vegetation as a continuum rather than as discrete classes. The capabilities of new remote sensing instruments have led us to propose a new concept of optically distinguishable functional types ('optical types') as a unique way to address the scale dependence of this problem. This would ensure more direct relationships between ecological information and remote sensing observations.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; unconstrained solution algorithms can then be used as part of the constrained solution algorithms. Structural optimization is an iterative process in which one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, and dynamic analysis alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver, PVSOLVE, into the widely used finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
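The conversion mentioned above, turning one constrained problem into a series of unconstrained ones, is classically done with a growing quadratic penalty. The sketch below is a minimal illustration under assumed toy data (a scalar problem and plain gradient descent), not the ADS/IDESIGN machinery the abstract refers to: minimizing f(x) subject to g(x) = 0 becomes minimizing f(x) + μ·g(x)² for an increasing sequence of μ.

```python
# Sketch: quadratic-penalty conversion of a constrained problem into a
# sequence of unconstrained problems, each solved by gradient descent.

def solve_penalized(f_grad, g, g_grad, x, mu, lr=0.005, iters=2000):
    for _ in range(iters):
        grad = f_grad(x) + 2.0 * mu * g(x) * g_grad(x)  # grad of f + mu*g^2
        x = x - lr * grad
    return x

# toy problem: minimize f(x) = x^2 subject to g(x) = x - 1 = 0 (solution x = 1)
f_grad = lambda x: 2.0 * x
g = lambda x: x - 1.0
g_grad = lambda x: 1.0

x = 0.0
for mu in (1.0, 10.0, 100.0):   # increasing penalty weight, warm-started
    x = solve_penalized(f_grad, g, g_grad, x, mu)
```

Each subproblem's minimizer is μ/(1+μ) here, so the iterates approach the constrained solution x = 1 from below as μ grows; warm-starting each stage from the previous answer is what makes the sequence cheap in practice.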
Uncertainty Analysis via Failure Domain Characterization: Unrestricted Requirement Functions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2011-01-01
This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are comprised of subsets of readily computable probability, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. The methods developed herein, which are based on nonlinear constrained optimization, are applicable to requirement functions whose functional dependency on the uncertainty is arbitrary and whose explicit form may even be unknown. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the assumed uncertainty model (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.
Surface-Constrained Volumetric Brain Registration Using Harmonic Mappings
Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.
2015-01-01
In order to compare anatomical and functional brain imaging data across subjects, the images must first be registered to a common coordinate system in which anatomical features are aligned. Intensity-based volume registration methods can align subcortical structures well, but the variability in sulcal folding patterns typically results in misalignment of the cortical surface. Conversely, surface-based registration using sulcal features can produce excellent cortical alignment but the mapping between brains is restricted to the cortical surface. Here we describe a method for volumetric registration that also produces an accurate one-to-one point correspondence between cortical surfaces. This is achieved by first parameterizing and aligning the cortical surfaces using sulcal landmarks. We then use a constrained harmonic mapping to extend this surface correspondence to the entire cortical volume. Finally, this mapping is refined using an intensity-based warp. We demonstrate the utility of the method by applying it to T1-weighted magnetic resonance images (MRI). We evaluate the performance of our proposed method relative to existing methods that use only intensity information; for this comparison we compute the inter-subject alignment of expert-labeled sub-cortical structures after registration. PMID:18092736
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to identify the uncertainties accurately, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
State-constrained booster trajectory solutions via finite elements and shooting
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.; Seywald, Hans
1993-01-01
This paper presents an extension of a FEM formulation based on variational principles. A general formulation for handling internal boundary conditions and discontinuities in the state equations is presented, and the general formulation is modified for optimal control problems subject to state-variable inequality constraints. Solutions which only touch the state constraint and solutions which have a boundary arc of finite length are considered. Suitable shape and test functions are chosen for a FEM discretization. All element quadrature (equivalent to one-point Gaussian quadrature over each element) may be done in closed form. The final form of the algebraic equations is then derived. A simple state-constrained problem is solved. Then, for a practical application of the use of the FEM formulation, a launch vehicle subject to a dynamic pressure constraint (a first-order state inequality constraint) is solved. The results presented for the launch-vehicle trajectory have some interesting features, including a touch-point solution.
NASA Astrophysics Data System (ADS)
le Graverend, J.-B.
2018-05-01
A lattice-misfit-dependent damage density function is developed to predict the non-linear accumulation of damage when a thermal jump from 1050 °C to 1200 °C is introduced somewhere in the creep life. Furthermore, a phenomenological model aimed at describing the evolution of the constrained lattice misfit during monotonic creep loading is also formulated. The response of the lattice-misfit-dependent plasticity-coupled damage model is compared with the experimental results obtained at 140 and 160 MPa on the first-generation Ni-based single-crystal superalloy MC2. The comparison reveals that the damage model performs well at 160 MPa but less so at 140 MPa, because the transfer of stress to the γ' phase occurs for stresses above 150 MPa, which leads to larger variations and, therefore, larger effects of the constrained lattice misfit on the lifetime during thermo-mechanical loading.
An Unconditionally Monotone C² Quartic Spline Method with Nonoscillation Derivatives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jin; Nelson, Karl E.
Here, a one-dimensional monotone interpolation method is proposed, based on interface reconstruction with partial volumes in slope space using Hermite cubic splines. The new method is only quartic, yet it is C² and unconditionally monotone. A set of control points is employed to constrain the curvature of the interpolation function and to eliminate possible nonphysical oscillations in slope space. An extension of this method to two dimensions is also discussed.
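The classical building block behind such schemes can be sketched directly: a monotone cubic Hermite interpolant whose node slopes are limited (here with the Fritsch-Butland harmonic mean, an assumption of this sketch) so the interpolant cannot overshoot monotone data. The paper's C² quartic construction with control points is more elaborate; this is only the standard baseline it improves on.

```python
# Sketch: monotone cubic Hermite interpolation with harmonic-mean slope limiting.

def limited_slopes(xs, ys):
    n = len(xs)
    d = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]                 # one-sided endpoint slopes
    for i in range(1, n - 1):
        # zero slope at local extrema; otherwise a harmonic mean of the
        # adjacent secants, which keeps the interpolant monotone
        m[i] = 0.0 if d[i - 1] * d[i] <= 0 else 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
    return m

def hermite_eval(xs, ys, m, x):
    i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
    h = xs[i + 1] - xs[i]
    t = (x - xs[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1             # cubic Hermite basis
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * ys[i] + h10 * h * m[i] + h01 * ys[i + 1] + h11 * h * m[i + 1]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.1, 0.9, 1.0]                     # monotone data, steep middle segment
m = limited_slopes(xs, ys)
vals = [hermite_eval(xs, ys, m, 0.01 * k) for k in range(300)]
```

The limited slopes fall inside the Fritsch-Carlson monotonicity region, so the sampled curve is nondecreasing even across the steep middle interval; this interpolant is only C¹, which is exactly the gap the paper's quartic method closes.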
Retrieval of ammonia abundances and cloud opacities on Jupiter from Voyager IRIS spectra
NASA Technical Reports Server (NTRS)
Conrath, B. J.; Gierasch, P. J.
1986-01-01
Gaseous ammonia abundances and cloud opacities are retrieved from Voyager IRIS 5- and 45-micron data on the basis of a simplified atmospheric model and a two-stream radiative transfer approximation, assuming a single cloud layer with 680-mbar base pressure and 0.14 gas scale height. Brightness temperature measurements obtained as a function of emission angle from selected planetary locations are used to verify the model and constrain a number of its parameters.
Wavefield reconstruction inversion with a multiplicative cost function
NASA Astrophysics Data System (ADS)
da Silva, Nuno V.; Yao, Gang
2018-01-01
We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is set by a scaling factor that balances their contributions to the value of the objective function. If this parameter is too large, the wave-equation term dominates and effectively imposes a hard constraint on the inversion. If it is too small, the solution is poorly constrained: the optimisation essentially fits the data misfit without honouring the physics that explains the data. This paper introduces a new approach to the formulation of WRI, recasting it as a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function even when the trade-off parameter in the latter is appropriately scaled and adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. This work thus contributes a framework for a more automated application of WRI.
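The core contrast can be shown with a deliberately tiny toy problem (an assumption of this sketch, not the paper's seismic setup): with an additive objective φ_d + λ·φ_w, the answer depends on the hand-picked trade-off λ, whereas the multiplicative objective φ_d·φ_w needs no such parameter. Both terms here are offset quadratics in a scalar "model" m, minimized by a grid scan.

```python
# Sketch: additive vs multiplicative combination of a data-misfit term and
# a constraint term, on a scalar toy model.

def phi_d(m):          # stand-in data-misfit term, minimized at m = 2
    return (m - 2.0) ** 2 + 0.5

def phi_w(m):          # stand-in wave-equation penalty, minimized at m = 1
    return (m - 1.0) ** 2 + 0.5

grid = [i * 0.001 for i in range(0, 3001)]   # m in [0, 3]

def argmin(objective):
    return min(grid, key=objective)

m_mult = argmin(lambda m: phi_d(m) * phi_w(m))                 # no trade-off needed
m_add_small = argmin(lambda m: phi_d(m) + 0.01 * phi_w(m))     # lam too small
m_add_large = argmin(lambda m: phi_d(m) + 100.0 * phi_w(m))    # lam too large
```

The additive answers swing from one term's minimizer to the other's as λ varies, while the multiplicative objective settles between them with no tuning, which is the behaviour the recast WRI formulation exploits.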
NASA Astrophysics Data System (ADS)
Gao, C.; Lekic, V.
2016-12-01
When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined because of their complementary sensitivities. The transdimensional Bayesian (TB) approach to seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy) without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information and reduce trade-offs in model estimates. We then explore the effect of different types of misfit function for receiver function inversion, which is a highly non-unique problem. We compare synthetic inversion results using L2-norm, cross-correlation-type, and integral-type misfit functions in terms of their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter is inverted (Vs in the case of SWD), assumed scaling relationships are often applied to account for sensitivity to the other model parameters (e.g. Vp, ρ, ξ). Here we show that under a TB framework we can eliminate scaling assumptions while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same parameterization or use independent ones. We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.
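The fixed-dimension sampler that rjMcMC extends with trans-dimensional "birth/death" moves is ordinary random-walk Metropolis. A minimal sketch follows, targeting a standard normal log-density as a stand-in assumption; the geophysical likelihoods and the dimension-changing moves of the abstract are of course much more involved.

```python
# Sketch: random-walk Metropolis sampling of a 1-D target log-density.
import math
import random

def metropolis(logp, x0, step, n, seed=7):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        # accept with probability min(1, p(prop)/p(x)), done in log space
        if math.log(rng.random() + 1e-300) < logp(prop) - logp(x):
            x = prop
        samples.append(x)
    return samples

# target: standard normal, log p(x) = -x^2/2 up to a constant
samples = metropolis(lambda x: -0.5 * x * x, x0=3.0, step=1.0, n=20000)
mean = sum(samples) / len(samples)
```

rjMcMC adds moves that insert or delete model layers, with an acceptance ratio corrected by the proposal densities and a Jacobian, which is what lets the data itself choose the parameterization's dimension.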
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for: 1) the reinforcement learning-based training method converging to the optimal solution; 2) the training error; and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights and provides comprehensive feedback solutions online, though it is trained offline.
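A standard way such schemes handle input bounds |u| ≤ λ through a nonquadratic cost (assumed here to be the familiar atanh-based integrand, which the abstract does not spell out) is W(u) = 2∫₀ᵘ λ·atanh(v/λ) dv: finite inside the bound and divergent as |u| → λ, so the optimizing control never saturates. The sketch evaluates its closed form and checks it against quadrature.

```python
# Sketch: the atanh-based nonquadratic control cost that enforces |u| <= lam.
import math

def W(u, lam):
    # closed form of 2 * int_0^u lam*atanh(v/lam) dv:
    #   2*lam*u*atanh(u/lam) + lam^2 * log(1 - (u/lam)^2)
    r = u / lam
    return 2.0 * lam * u * math.atanh(r) + lam**2 * math.log(1.0 - r * r)

def W_numeric(u, lam, n=20000):
    # midpoint-rule check of the defining integral
    h = u / n
    return 2.0 * sum(lam * math.atanh((i + 0.5) * h / lam) for i in range(n)) * h

lam = 1.0
vals = [W(0.1, lam), W(0.5, lam), W(0.9, lam), W(0.99, lam)]  # grows toward the bound
```

Because dW/du = 2λ·atanh(u/λ), the first-order optimality condition yields a control of the form u = λ·tanh(·), i.e. the bound is satisfied by construction rather than by clipping.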
Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.
2014-07-01
Deep absorption lines with extremely high velocity of ˜0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to solve this is to constrain physical parameters as a function of distance from the source. In order to study the spatial dependence of parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiation transfer in arbitrary geometry. We have developed a new simulation code of X-ray radiation reprocessed in AGN outflow. Our code implements radiative transfer in 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the FeXXV and FeXXVI absorption features seen in the spectra. Also, broad Fe emission lines, which reflect the geometry and viewing angle, are successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on physical parameters. We discuss launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.
Buono, Frank D; Griffiths, Mark D; Sprong, Matthew E; Lloyd, Daniel P; Sullivan, Ryan M; Upton, Thomas D
2017-12-01
Background Internet gaming disorder (IGD) was introduced in the DSM-5 as a way of identifying and diagnosing problematic video game play. However, the use of the diagnosis is constrained, as it shares criteria with other addictive disorders (e.g., pathological gambling). Aims Further work is required to better understand IGD. One potential avenue of investigation is IGD's relationship to the primary reinforcing behavioral functions. This study explores the relationship between duration of video game play and the reinforcing behavioral functions that may motivate or maintain video gaming. Methods A total of 499 video game players began the online survey, and complete data from 453 participants (85% white and 28% female) were analyzed. Individuals were placed into five groups based on self-reported hours of video gaming per week, and completed the Video Game Functional Assessment - Revised (VGFA-R). Results The results demonstrated that the escape and social attention functions were significant in predicting duration of video game play, whereas the sensory and tangible functions were not. Conclusion Future implications of the VGFA-R and behaviorally based research are discussed.
Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions
Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.
2012-01-01
In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661
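To illustrate what a pointwise stochastic-ordering constraint means, the sketch below applies a crude min/max projection to two survivor curves evaluated on a common time grid. This is NOT the likelihood-based estimator of the paper (which maximizes the constrained likelihood at each time t); it is only a minimal, assumption-laden illustration of enforcing S1(t) ≥ S2(t) at every t, with invented survivor values.

```python
# Crude pointwise enforcement of a stochastic ordering S1(t) >= S2(t)
# on two survivor curves given on a shared time grid. Illustration only;
# the paper's estimator instead maximizes the likelihood subject to the
# constraint applied at each time t.
import numpy as np

def order_pointwise(s1, s2):
    upper = np.maximum(s1, s2)   # constrained estimate of the stochastically larger curve
    lower = np.minimum(s1, s2)   # constrained estimate of the smaller curve
    return upper, lower

s1 = np.array([1.0, 0.8, 0.5, 0.3])   # hypothetical survivor estimates, group 1
s2 = np.array([1.0, 0.85, 0.4, 0.2])  # hypothetical group 2; crosses s1 once
c1, c2 = order_pointwise(s1, s2)
print(c1, c2)  # c1 dominates c2 at every time point
```

Note that each projected curve remains nonincreasing here; the paper proves this monotonicity holds for its pointwise constrained NPMLE as well.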
Optimal design of dampers within seismic structures
NASA Astrophysics Data System (ADS)
Ren, Wenjie; Qian, Hui; Song, Wali; Wang, Liqiang
2009-07-01
An improved multi-objective genetic algorithm for structural passive control system optimization is proposed. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. For constrained problems, a dominance-based penalty function method is advanced, containing information on an individual's status (feasible or infeasible), its position in the search space, and its distance from the Pareto optimal set. The proposed approach is used for the optimal design of a six-storey building with shape memory alloy dampers subjected to earthquake excitation. The number and position of dampers are chosen as the design variables. The number of dampers and the peak relative inter-storey drift are considered as the objective functions. Numerical results generate a set of non-dominated solutions.
Construct Validation Theory Applied to the Study of Personality Dysfunction
Zapolski, Tamika C. B.; Guller, Leila; Smith, Gregory T.
2013-01-01
The authors review theory validation and construct validation principles as related to the study of personality dysfunction. Historically, personality disorders have been understood to be syndromes of heterogeneous symptoms. The authors argue that the syndrome approach to description results in diagnoses of unclear meaning and constrained validity. The alternative approach of describing personality dysfunction in terms of homogeneous dimensions of functioning avoids the problems of the syndromal approach and has been shown to provide more valid description and diagnosis. The authors further argue that description based on homogeneous dimensions of personality function/dysfunction is more useful, because it provides direct connections to validated treatments. PMID:22321263
Design and optimization of organic rankine cycle for low temperature geothermal power plant
NASA Astrophysics Data System (ADS)
Barse, Kirtipal A.
Rising oil prices and environmental concerns have increased attention to renewable energy. Geothermal energy is a very attractive source of renewable energy. Although low temperature resources (90°C to 150°C) are the most common and most abundant source of geothermal energy, they were not considered economical and technologically feasible for commercial power generation. Organic Rankine Cycle (ORC) technology makes it feasible to use low temperature resources to generate power by using low boiling temperature organic liquids. The first hypothesis for this research is that using ORC is technologically and economically feasible to generate electricity from low temperature geothermal resources. The second hypothesis is that redesigning the ORC system for the given resource condition will improve efficiency along with improving economics. An ORC model was developed using a process simulator and validated with data obtained from Chena Hot Springs, Alaska. A correlation was observed between the critical temperature of the working fluid and the efficiency of the cycle. Exergy analysis of the cycle revealed that the highest exergy destruction occurs in the evaporator, followed by the condenser, turbine, and working fluid pump for the base case scenarios. ORC performance was studied using twelve working fluids in the base, internal heat exchanger (IHX), and turbine bleeding configurations, each in constrained and non-constrained forms. R601a, R245ca, and R600 showed the highest first and second law efficiencies in the non-constrained IHX configuration. The highest net power was observed for the R245ca, R601a, and R601 working fluids in the non-constrained base configuration. Combined heat exchanger area and size parameter of the turbine showed an increasing trend as the critical temperature of the working fluid decreased. The lowest levelized cost of electricity (LCOE) was observed for R245ca, followed by R601a and R236ea, in the non-constrained base configuration.
The next best candidates in terms of LCOE were R601a, R245ca and R600 in the non-constrained IHX configuration. LCOE depends on net power, and higher net power lowers the cost of electricity. Overall, R245ca, R601, R601a, R600 and R236ea show better performance among the fluids studied. Non-constrained configurations display better performance compared to the constrained configurations. The non-constrained base configuration offered the highest net power and the lowest LCOE.
Galaxy Redshifts from Discrete Optimization of Correlation Functions
NASA Astrophysics Data System (ADS)
Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi
2016-12-01
We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. The formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints, but is also readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.
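The combinatorial assignment described above can be written generically as an ILP over binary assignment variables. The toy sketch below, which does not reproduce the paper's correlation-function objective, assigns each of three sources to one of two hypothetical redshift bins so that an invented linear cost is minimized, using the open `scipy.optimize.milp` interface in place of Gurobi:

```python
# Toy ILP in the spirit of the described approach: binary variables x[i,j]
# assign source i to redshift bin j; the linear cost stands in for the
# paper's correlation-function objective (not reproduced here).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

n_src, n_bin = 3, 2
cost = np.array([[1.0, 2.0],      # hypothetical cost of each (source, bin) pairing
                 [3.0, 0.5],
                 [2.0, 2.5]]).ravel()

# Constraint: each source occupies exactly one bin (row sums equal 1).
A = np.zeros((n_src, n_src * n_bin))
for i in range(n_src):
    A[i, i * n_bin:(i + 1) * n_bin] = 1.0
one_bin_each = LinearConstraint(A, lb=1, ub=1)

res = milp(c=cost,
           constraints=one_bin_each,
           integrality=np.ones(n_src * n_bin),   # all variables integer (binary)
           bounds=Bounds(0, 1))
assignment = res.x.reshape(n_src, n_bin).argmax(axis=1)
print(assignment)  # cheapest bin per source: [0 1 0]
```

The same structure scales to the real problem by replacing the invented costs with terms derived from the target cross- and autocorrelation functions.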
Metabolic flux estimation using particle swarm optimization with penalty function.
Long, Hai-Xia; Xu, Wen-Bo; Sun, Jun
2009-01-01
Metabolic flux estimation through 13C tracer experiments is crucial for quantifying intracellular metabolic fluxes. In fact, it corresponds to a constrained optimization problem that minimizes a weighted distance between measured and simulated results. In this paper, we propose particle swarm optimization (PSO) with a penalty function to solve the 13C-based metabolic flux estimation problem. The constrained problem is transformed into an unconstrained one by penalizing the stoichiometric constraints and building a single objective function, which in turn is minimized using the PSO algorithm for flux quantification. The proposed algorithm is applied to estimate the central metabolic fluxes of Corynebacterium glutamicum. Simulation results show that the proposed algorithm has superior performance and fast convergence when compared to other existing algorithms.
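The general scheme, PSO applied to an objective augmented with a quadratic penalty on constraint violation, can be sketched on a toy problem. The example below is not the paper's 13C flux model; it minimizes x² + y² subject to x + y = 1 (optimum near (0.5, 0.5)), with all PSO hyperparameters chosen as common defaults:

```python
# Minimal particle swarm optimization with a quadratic exterior penalty.
# Toy problem: minimize x^2 + y^2 subject to x + y = 1.
import numpy as np

rng = np.random.default_rng(0)

def objective(p):
    return p[:, 0]**2 + p[:, 1]**2

def penalty(p, mu=1e3):
    # penalize violation of the equality constraint x + y = 1
    return mu * (p[:, 0] + p[:, 1] - 1.0)**2

def pso(n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-2, 2, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    fit = objective(pos) + penalty(pos)
    pbest, pbest_fit = pos.copy(), fit.copy()
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = objective(pos) + penalty(pos)
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()
    return gbest

best = pso()
print(best)  # close to [0.5, 0.5]
```

In the flux-estimation setting, the objective would be the weighted measurement-to-simulation distance and the penalty would cover the stoichiometric balance equations.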
Nuclear parton density functions from dijet photoproduction at the EIC
NASA Astrophysics Data System (ADS)
Klasen, M.; Kovařík, K.
2018-06-01
We study the potential of dijet photoproduction measurements at a future electron-ion collider (EIC) to better constrain our present knowledge of the nuclear parton distribution functions. Based on theoretical calculations at next-to-leading order and approximate next-to-next-to-leading order of perturbative QCD, we establish the kinematic reaches for three different EIC designs, the size of the parton density function modifications for four different light and heavy nuclei from He-4 over C-12 and Fe-56 to Pb-208 with respect to the free proton, and the improvement of EIC measurements with respect to current determinations from deep-inelastic scattering and Drell-Yan data alone as well as when also considering data from existing hadron colliders.
Uniform magnetic fields in density-functional theory
NASA Astrophysics Data System (ADS)
Tellgren, Erik I.; Laestadius, Andre; Helgaker, Trygve; Kvaal, Simen; Teale, Andrew M.
2018-01-01
We construct a density-functional formalism adapted to uniform external magnetic fields that is intermediate between conventional density functional theory and Current-Density Functional Theory (CDFT). In the intermediate theory, which we term linear vector potential-DFT (LDFT), the basic variables are the density, the canonical momentum, and the paramagnetic contribution to the magnetic moment. Both a constrained-search formulation and a convex formulation in terms of Legendre-Fenchel transformations are constructed. Many theoretical issues in CDFT find simplified analogs in LDFT. We prove results concerning N-representability, Hohenberg-Kohn-like mappings, existence of minimizers in the constrained-search expression, and a restricted analog to gauge invariance. The issue of additivity of the energy over non-interacting subsystems, which is qualitatively different in LDFT and CDFT, is also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gudino, N., E-mail: natalia.gudino@nih.gov; Sonmez, M.; Nielles-Vallespin, S.
2015-01-15
Purpose: To provide a rapid method to reduce the radiofrequency (RF) E-field coupling and consequent heating in long conductors in an interventional MRI (iMRI) setup. Methods: A driving function for device heating (W) was defined as the integration of the E-field along the direction of the wire and calculated through a quasistatic approximation. Based on this function, the phases of four independently controlled transmit channels were dynamically changed in a 1.5 T MRI scanner. During the different excitation configurations, the RF induced heating in a nitinol wire immersed in a saline phantom was measured by fiber-optic temperature sensing. Additionally, a minimization of W as a function of the phase and amplitude values of the different channels, constrained by the homogeneity of the RF excitation field (B1) over a region of interest, was proposed and its results tested on the benchtop. To analyze the validity of the proposed method, using a model of the array and phantom setup tested in the scanner, RF fields and SAR maps were calculated through finite-difference time-domain (FDTD) simulations. In addition to phantom experiments, RF induced heating of an active guidewire inserted in a swine was also evaluated. Results: In the phantom experiment, heating at the tip of the device was reduced by 92% when replacing the body coil by an optimized parallel transmit excitation with the same nominal flip angle. On the benchtop, up to 90% heating reduction was measured when implementing the constrained minimization algorithm with the additional degree of freedom given by independent amplitude control. The computation of the optimum phase and amplitude values was executed in just 12 s using a standard CPU. The FDTD simulations showed that the local SAR at the tip of the wire follows a trend similar to the measured temperature, as well as to a quadratic function of W, confirming the validity of the quasistatic approach for the presented problem at 64 MHz.
Imaging and heating reduction of the guidewire were successfully performed in vivo with the proposed hardware and phase control. Conclusions: Phantom and in vivo data demonstrated that additional degrees of freedom in a parallel transmission system can be used to control RF induced heating in long conductors. A novel constrained optimization approach to reduce device heating was also presented that can be run in just a few seconds and therefore could be added to an iMRI protocol to improve RF safety.
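The core of the phase optimization can be illustrated with a toy scalar model: choose the phases of four transmit channels to null a complex coupling sum W(φ) = Σₖ wₖ·exp(iφₖ). The per-channel weights below are invented for illustration; the paper's W is an integral of the E-field along the wire, and its minimization is additionally constrained by B1 homogeneity, which this sketch omits:

```python
# Toy sketch of the phase-control idea: null |W(phi)|^2 for
# W(phi) = sum_k w_k * exp(i*phi_k) over four channel phases.
# Weights w_k are hypothetical; multi-start Nelder-Mead is used to
# avoid collinear saddle points of the phase landscape.
import numpy as np
from scipy.optimize import minimize

w = np.array([1.0, 0.8, 0.6, 0.9])  # hypothetical per-channel coupling weights

def coupling_sq(phi):
    return abs(np.sum(w * np.exp(1j * phi)))**2

rng = np.random.default_rng(1)
results = [minimize(coupling_sq, rng.uniform(0, 2 * np.pi, 4),
                    method="Nelder-Mead") for _ in range(10)]
best = min(results, key=lambda r: r.fun)
print(best.fun)  # near zero: the four weighted channel vectors close a quadrilateral
```

A null exists here because no single weight exceeds the sum of the others; adding the homogeneity constraint, as in the paper, would trade some of this cancellation for field uniformity.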
An historical survey of computational methods in optimal control.
NASA Technical Reports Server (NTRS)
Polak, E.
1973-01-01
Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms is a family of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible-directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
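The exterior penalty idea mentioned above replaces a constrained problem with a sequence of unconstrained ones whose penalty parameter grows, so the iterates approach the solution from outside the feasible set. A minimal finite-dimensional sketch (the survey concerns function-space problems; this toy uses plain gradient descent on min x² subject to x ≥ 1, optimum x* = 1):

```python
# Exterior quadratic penalty on a toy problem:
#   minimize x^2 subject to x >= 1   (optimum x* = 1)
# Each round solves  min_x  x^2 + mu * max(0, 1 - x)^2  by gradient
# descent, then increases mu; the minimizer mu/(1+mu) approaches 1
# from the infeasible side (hence "exterior").
def solve_penalized(mu, x=0.0, iters=2000):
    lr = 0.4 / (1.0 + mu)          # stable step size for this quadratic
    for _ in range(iters):
        grad = 2.0 * x - 2.0 * mu * max(0.0, 1.0 - x)
        x -= lr * grad
    return x

x = 0.0
for mu in [1.0, 10.0, 100.0, 1000.0]:
    x = solve_penalized(mu, x)     # warm-start each round from the last
print(x)  # approximately 1000/1001, i.e. just below 1
```

The warm-starting across increasing mu mirrors the classical continuation strategy used with penalty methods.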
The reactants equation of state for the tri-amino-tri-nitro-benzene (TATB) based explosive PBX 9502
NASA Astrophysics Data System (ADS)
Aslam, Tariq D.
2017-07-01
The response of high explosives (HEs), due to mechanical and/or thermal insults, is of great importance for both safety and performance. A major component of how an HE responds to these stimuli stems from its reactant equation of state (EOS). Here, the tri-amino-tri-nitro-benzene based explosive PBX 9502 is investigated by examining recent experiments. Furthermore, a complete thermal EOS is calibrated based on the functional form devised by Wescott, Stewart, and Davis [J. Appl. Phys. 98, 053514 (2005)]. It is found, by comparing to earlier calibrations, that a variety of thermodynamic data are needed to sufficiently constrain the EOS response over a wide range of thermodynamic state space. Included in the calibration presented here is the specific heat as a function of temperature, isobaric thermal expansion, and shock Hugoniot response. As validation of the resulting model, isothermal compression and isentropic compression are compared with recent experiments.
NASA Astrophysics Data System (ADS)
Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon
2017-06-01
Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
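The risk-allocation step can be illustrated with a small toy: a joint chance constraint P(any stage fails) ≤ Δ is conservatively decomposed, via Boole's inequality, into a sum of per-stage risks bounded by Δ, after which cheap-but-riskier actions can be selected as long as the risk budget permits. The stage costs and failure probabilities below are invented; the paper applies this idea inside a DP over EDL stages, not by enumeration:

```python
# Toy risk allocation for a joint chance constraint over three stages.
# Each stage offers a "fast" action (cost 1, failure prob 0.02) or a
# "safe" action (cost 2, failure prob 0.01); numbers are hypothetical.
# Joint constraint P(any failure) <= 0.05 is replaced by the Boole bound
# sum of per-stage failure probabilities <= 0.05.
from itertools import product

actions = [("fast", 1.0, 0.02), ("safe", 2.0, 0.01)]
risk_bound = 0.05

best = None
for plan in product(actions, repeat=3):
    total_risk = sum(p for _, _, p in plan)      # Boole's inequality bound
    if total_risk <= risk_bound:
        cost = sum(c for _, c, _ in plan)
        if best is None or cost < best[0]:
            best = (cost, [name for name, _, _ in plan])

print(best)  # two "fast" stages fit inside the 5% risk budget
```

In the paper, the same budget-splitting is done implicitly through the Lagrangian dualization, which prices risk rather than enumerating plans.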
Proximity Navigation of Highly Constrained Spacecraft
NASA Technical Reports Server (NTRS)
Scarritt, S.; Swartwout, M.
2007-01-01
Bandit is a 3-kg automated spacecraft in development at Washington University in St. Louis. Bandit's primary mission is to demonstrate proximity navigation, including docking, around a 25-kg student-built host spacecraft. However, because of extreme constraints in mass, power and volume, traditional sensing and actuation methods are not available. In particular, Bandit carries only 8 fixed-magnitude cold-gas thrusters to control its 6 DOF motion. Bandit lacks true inertial sensing, and the ability to sense position relative to the host has error bounds that approach the size of the Bandit itself. Some of the navigation problems are addressed through an extremely robust, error-tolerant soft dock. In addition, we have identified a control methodology that performs well in this constrained environment: behavior-based velocity potential functions, which use a minimum-seeking method similar to Lyapunov functions. We have also adapted the discrete Kalman filter for use on Bandit for position estimation and have developed a similar measurement vs. propagation weighting algorithm for attitude estimation. This paper provides an overview of Bandit and describes the control and estimation approach. Results using our 6DOF flight simulator are provided, demonstrating that these methods show promise for flight use.
Acuity of a Cryptochrome and Vision-Based Magnetoreception System in Birds
Solov'yov, Ilia A.; Mouritsen, Henrik; Schulten, Klaus
2010-01-01
The magnetic compass of birds is embedded in the visual system and it has been hypothesized that the primary sensory mechanism is based on a radical pair reaction. Previous models of magnetoreception have assumed that the radical pair-forming molecules are rigidly fixed in space, and this assumption has been a major objection to the suggested hypothesis. In this article, we investigate theoretically how much disorder is permitted for the radical pair-forming, protein-based magnetic compass in the eye to remain functional. Our study shows that only one rotational degree of freedom of the radical pair-forming protein needs to be partially constrained, while the other two rotational degrees of freedom do not impact the magnetoreceptive properties of the protein. The result implies that any membrane-associated protein is sufficiently restricted in its motion to function as a radical pair-based magnetoreceptor. We relate our theoretical findings to the cryptochromes, currently considered the likeliest candidate to furnish radical pair-based magnetoreception. PMID:20655831
NASA Astrophysics Data System (ADS)
O'Donnell, J. P.; Dunham, C.; Stuart, G. W.; Brisbourne, A.; Nield, G. A.; Whitehouse, P. L.; Hooper, A. J.; Nyblade, A.; Wiens, D.; Aster, R. C.; Anandakrishnan, S.; Huerta, A. D.; Wilson, T. J.; Winberry, J. P.
2017-12-01
Quantifying the geothermal heat flux at the base of ice sheets is necessary to understand their dynamics and evolution. The heat flux is a composite function of concentration of upper crustal radiogenic elements and flow of heat from the mantle into the crust. Radiogenic element concentration varies with tectonothermal age, while heat flow across the crust-mantle boundary depends on crustal and lithospheric thicknesses. Meanwhile, accurately monitoring current ice mass loss via satellite gravimetry or altimetry hinges on knowing the upper mantle viscosity structure needed to account for the superimposed glacial isostatic adjustment (GIA) signal in the satellite data. In early 2016 the UK Antarctic Network (UKANET) of 10 broadband seismometers was deployed for two years across the southern Antarctic Peninsula and Ellsworth Land. Using UKANET data in conjunction with seismic records from our partner US Polar Earth Observing Network (POLENET) and the Antarctic Seismographic Argentinian Italian Network (ASAIN), we have developed a 3D shear wave velocity model of the West Antarctic crust and uppermost mantle based on Rayleigh and Love wave phase velocity dispersion curves extracted from ambient noise cross-correlograms. We combine seismic receiver functions with the shear wave model to help constrain the depth to the crust-mantle boundary across West Antarctica and delineate tectonic domains. The shear wave model is subsequently converted to temperature using a database of densities and elastic properties of minerals common in crustal and mantle rocks, while the various tectonic domains are assigned upper crustal radiogenic element concentrations based on their inferred tectonothermal ages. We combine this information to map the basal geothermal heat flux variation across West Antarctica. Mantle viscosity depends on factors including temperature, grain size, the hydrogen content of olivine and the presence of melt. 
Using published mantle xenolith and magnetotelluric data to constrain grain size and hydrogen content, respectively, we use the temperature model to estimate the regional upper mantle viscosity structure. The viscosity information will be incorporated in a 3D GIA model that will better constrain estimates of current ice loss from the West Antarctic Ice Sheet.
NASA Astrophysics Data System (ADS)
Yang, Y.; Zhao, Y.
2017-12-01
To understand the differences among emission inventories based on various methods and the origins of those differences, emissions of PM10, PM2.5, OC, BC, CH4, VOCs, CO, CO2, NOX, SO2 and NH3 from open biomass burning (OBB) in the Yangtze River Delta (YRD) are calculated for 2005-2012 using three approaches (bottom-up, FRP-based and constraining). The inter-annual trends in emissions with the FRP-based and constraining methods are similar to those of the fire counts in 2005-2012, while the trend with the bottom-up method differs. For most years, emissions of all species estimated with the constraining method are smaller than those with the bottom-up method (except for VOCs), while they are larger than those with the FRP-based method (except for EC, CH4 and NH3). Such discrepancies result mainly from the different masses of crop residues burned in the field (CRBF) estimated in the three methods. Among the three methods, the simulated concentrations from chemistry transport modeling with the constrained emissions are the closest to available observations, implying that the constraining method provides the best estimate of OBB emissions. CO emissions from the three methods are compared with other studies. Similar temporal variations were found for the constrained emissions, FRP-based emissions, GFASv1.0 and GFEDv4.1s, with the largest and smallest emissions estimated for 2012 and 2006, respectively. The constrained CO emissions in this study are smaller than those in other studies based on the bottom-up method and larger than those based on burned area and FRP derived from satellite. The contributions of OBB to two particulate pollution events in 2010 and 2012 are analyzed with the brute-force method. The average contribution of OBB to PM10 mass concentrations in June 8-14, 2012 was estimated at 38.9% (74.8 μg m-3), larger than that in June 17-24, 2010 at 23.6% (38.5 μg m-3).
Influences of diurnal curves and meteorology on air pollution caused by OBB are also evaluated. The results suggest that air pollution caused by OBB becomes heavier when meteorological conditions are unfavorable, and that more attention should be paid to supervision at night. Quantified with Monte Carlo simulation, the uncertainties of OBB emissions with the constraining method are significantly lower than those with the bottom-up or FRP-based methods.
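The brute-force contribution estimate used above amounts to running the chemistry transport model with and without the source category and attributing the difference. A minimal sketch, with concentration values chosen to be consistent with the 38.9% (74.8 μg m⁻³) figure quoted but otherwise hypothetical:

```python
# Brute-force (zero-out) source contribution: simulate pollutant levels
# with and without the source and attribute the difference to it.
# The concentrations below are invented for illustration.
def obb_contribution(conc_with, conc_without):
    """Percent contribution of open biomass burning to a pollutant level."""
    return 100.0 * (conc_with - conc_without) / conc_with

# e.g. simulated PM10 of 192.3 ug/m3 with OBB and 117.5 ug/m3 without
print(round(obb_contribution(192.3, 117.5), 1))  # -> 38.9
```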
nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kusina, A.; Kovarik, Karol; Jezo, T.
2015-09-01
We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792 and concentrates on the comparison with other groups providing nuclear parton distributions.
Dust scattering from the Taurus Molecular Cloud
NASA Astrophysics Data System (ADS)
Narayan, Sathya; Murthy, Jayant; Karuppath, Narayanankutty
2017-04-01
We present an analysis of the diffuse ultraviolet emission near the Taurus Molecular Cloud based on observations made by the Galaxy Evolution Explorer. We used a Monte Carlo dust scattering model to show that about half of the scattered flux originates in the molecular cloud with 25 per cent arising in the foreground and 25 per cent behind the cloud. The best-fitting albedo of the dust grains is 0.3, but the geometry is such that we could not constrain the phase function asymmetry factor (g).
Semismooth Newton method for gradient constrained minimization problem
NASA Astrophysics Data System (ADS)
Anyyeva, Serbiniyaz; Kunisch, Karl
2012-08-01
In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was applied to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
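The nonsmooth ingredient above, Newton differentiation of the pointwise max, can be shown in one dimension. The toy root-finding problem below is invented for illustration (the paper works with variational equations in function space): solve F(x) = x + max(0, x) − 1 = 0, whose root is x* = 0.5, using an element of the generalized derivative where max(0, ·) is nondifferentiable:

```python
# One-dimensional semismooth Newton sketch: solve
#   F(x) = x + max(0, x) - 1 = 0   (root x* = 0.5)
# using a generalized (Newton) derivative of max(0, x).
def F(x):
    return x + max(0.0, x) - 1.0

def dF(x):
    # an element of the generalized derivative of x + max(0, x)
    return 1.0 + (1.0 if x > 0 else 0.0)

x = -2.0
for _ in range(20):
    x = x - F(x) / dF(x)
print(x)  # 0.5
```

Starting at x = -2, the iteration crosses the kink once and then converges in a single further step, reflecting the locally superlinear behavior semismooth Newton methods are used for.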
Topological strings on singular elliptic Calabi-Yau 3-folds and minimal 6d SCFTs
NASA Astrophysics Data System (ADS)
Del Zotto, Michele; Gu, Jie; Huang, Min-xin; Kashani-Poor, Amir-Kian; Klemm, Albrecht; Lockhart, Guglielmo
2018-03-01
We apply the modular approach to computing the topological string partition function on non-compact elliptically fibered Calabi-Yau 3-folds with higher Kodaira singularities in the fiber. The approach consists in making an ansatz for the partition function at given base degree, exact in all fiber classes to arbitrary order and to all genus, in terms of a rational function of weak Jacobi forms. Our results yield, at given base degree, the elliptic genus of the corresponding non-critical 6d string, and thus the associated BPS invariants of the 6d theory. The required elliptic indices are determined from the chiral anomaly 4-form of the 2d worldsheet theories, or the 8-form of the corresponding 6d theories, and completely fix the holomorphic anomaly equation constraining the partition function. We introduce subrings of the known rings of Weyl invariant Jacobi forms which are adapted to the additional symmetries of the partition function, making its computation feasible to low base wrapping number. In contradistinction to the case of simpler singularities, generic vanishing conditions on BPS numbers are no longer sufficient to fix the modular ansatz at arbitrary base wrapping degree. We show that to low degree, imposing exact vanishing conditions does suffice, and conjecture this to be the case generally.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, Ying-Qi; Segall, Paul; Bradley, Andrew
Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.
Planetary Transmission Diagnostics
NASA Technical Reports Server (NTRS)
Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.
2004-01-01
This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. 
However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
The joint fit of the BHMF and ERDF for the BAT AGN Sample
NASA Astrophysics Data System (ADS)
Weigel, Anna K.; Koss, Michael; Ricci, Claudio; Trakhtenbrot, Benny; Oh, Kyuseok; Schawinski, Kevin; Lamperti, Isabella
2018-01-01
A natural product of an AGN survey is the AGN luminosity function. This statistical measure describes the distribution of directly measurable AGN luminosities. Intrinsically, the shape of the luminosity function depends on the distribution of black hole masses and Eddington ratios. To constrain these fundamental AGN properties, the luminosity function thus has to be disentangled into the black hole mass and Eddington ratio distribution function. The BASS survey is unique as it allows such a joint fit for a large number of local AGN, is unbiased in terms of obscuration in the X-rays and provides black hole masses for type-1 and type-2 AGN. The black hole mass function at z ~ 0 represents an essential baseline for simulations and black hole growth models. The normalization of the Eddington ratio distribution function directly constrains the AGN fraction. Together, the BASS AGN luminosity, black hole mass and Eddington ratio distribution functions thus provide a complete picture of the local black hole population.
Constraining the Drag Coefficients of Meteors in Dark Flight
NASA Technical Reports Server (NTRS)
Carter, R. T.; Jandir, P. S.; Kress, M. E.
2011-01-01
Based on data in the aeronautics literature, we have derived functions for the drag coefficients of spheres and cubes as a function of Mach number. Experiments have shown that spheres and cubes exhibit an abrupt factor-of-two decrease in the drag coefficient as the object slows through the transonic regime. Irregularly shaped objects such as meteorites likely exhibit a similar trend. These functions are implemented in an otherwise simple projectile motion model, which is applicable to the non-ablative dark flight of meteors (speeds less than ~3 km/s). We demonstrate how these functions may be used as upper and lower limits on the drag coefficient of meteors whose shape is unknown. A Mach-dependent drag coefficient is potentially important in other planetary and astrophysical situations, for instance, in the core accretion scenario for giant planet formation.
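A Mach-dependent drag coefficient of this kind drops straightforwardly into a simple deceleration model. The Cd(Mach) curve below is an illustrative sigmoid exhibiting the factor-of-two transonic change, not the paper's fitted functions, and the integration neglects gravity and ablation:

```python
import numpy as np

def drag_coefficient(mach):
    """Illustrative Cd(Mach) for a sphere: ~0.5 subsonic rising to ~1.0
    supersonic, i.e. the factor-of-two transonic change described above.
    The sigmoid shape is an assumption, not the paper's fitted curve."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(mach - 1.0) / 0.2))

def dark_flight_speed(v0, diameter, density, dt=1e-3, t_end=30.0,
                      rho_air=1.2, c_sound=340.0):
    """1-D deceleration of a sphere by drag alone, with Mach-dependent Cd."""
    radius = diameter / 2.0
    area = np.pi * radius ** 2
    mass = density * (4.0 / 3.0) * np.pi * radius ** 3
    v = v0
    for _ in range(int(t_end / dt)):
        cd = drag_coefficient(v / c_sound)
        v -= 0.5 * rho_air * cd * area * v ** 2 / mass * dt
    return v
```

Running the model twice, once with the subsonic and once with the supersonic Cd held fixed, would bracket the trajectory of a meteorite of unknown shape, as the abstract suggests.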
Towards anti-causal Green's function for three-dimensional sub-diffraction focusing
NASA Astrophysics Data System (ADS)
Ma, Guancong; Fan, Xiying; Ma, Fuyin; de Rosny, Julien; Sheng, Ping; Fink, Mathias
2018-06-01
In causal physics, the causal Green's function describes the radiation of a point source. Its counterpart, the anti-causal Green's function, depicts a spherically converging wave. However, in free space, any converging wave must be followed by a diverging one. Their interference gives rise to the diffraction limit that constrains the smallest possible dimension of a wave's focal spot in free space, which is half the wavelength. Here, we show with three-dimensional acoustic experiments that we can realize a stand-alone anti-causal Green's function in a large portion of space up to a subwavelength distance from the focus point by introducing a near-perfect absorber for spherical waves at the focus. We build this subwavelength absorber based on membrane-type acoustic metamaterial, and experimentally demonstrate focusing of spherical waves beyond the diffraction limit.
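The half-wavelength limit quoted above follows from the interference of the converging and diverging spherical waves, whose intensity profile around the focus is a squared sinc with first zero at r = wavelength/2. A small numerical check (illustrative only):

```python
import numpy as np

def free_space_focus_profile(r, wavelength):
    """Intensity near the focus from the superposition of the converging
    (anti-causal) and diverging (causal) spherical waves in free space:
    |sin(kr)/(kr)|^2, whose first zero at r = wavelength/2 expresses the
    diffraction limit discussed above."""
    x = 2.0 * np.pi * r / wavelength
    return np.sinc(x / np.pi) ** 2  # np.sinc(t) = sin(pi t)/(pi t)
```

Removing the diverging wave, as the absorber at the focus does, removes this interference pattern and with it the half-wavelength bound.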
Gladysz, Szymon; Yaitskova, Natalia; Christou, Julian C
2010-11-01
This paper is an introduction to the problem of modeling the probability density function of adaptive-optics speckle. We show that with the modified Rician distribution one cannot describe the statistics of light on axis. A dual solution is proposed: the modified Rician distribution for off-axis speckle and gamma-based distribution for the core of the point spread function. From these two distributions we derive optimal statistical discriminators between real sources and quasi-static speckles. In the second part of the paper the morphological difference between the two probability density functions is used to constrain a one-dimensional, "blind," iterative deconvolution at the position of an exoplanet. Separation of the probability density functions of signal and speckle yields accurate differential photometry in our simulations of the SPHERE planet finder instrument.
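A sketch of the two probability density functions and the resulting log-likelihood-ratio discriminator (the parameter values are placeholders; the paper's fitted parameters are not given in the abstract):

```python
import numpy as np
from scipy.special import i0e, gammaln

def modified_rician_pdf(I, Ic, Is):
    """Modified Rician intensity distribution for off-axis speckle:
    p(I) = (1/Is) exp(-(I + Ic)/Is) I0(2 sqrt(I Ic)/Is), with Ic the
    deterministic and Is the speckle intensity."""
    z = 2.0 * np.sqrt(I * Ic) / Is
    # i0e(z) = exp(-z) I0(z) keeps the product numerically stable
    return np.exp(-(I + Ic) / Is + z) * i0e(z) / Is

def gamma_pdf(I, k, theta):
    """Gamma density, the family used here for the PSF core statistics."""
    return np.exp((k - 1.0) * np.log(I) - I / theta
                  - gammaln(k) - k * np.log(theta))

def log_likelihood_ratio(I, Ic, Is, k, theta):
    """Discriminator between source (gamma) and speckle (modified Rician)
    statistics; positive values favour a real source."""
    return np.log(gamma_pdf(I, k, theta)) - np.log(modified_rician_pdf(I, Ic, Is))
```

The morphological difference between the two densities is what carries the discrimination power: a quasi-static speckle and a faint companion of equal mean intensity still produce different intensity histograms.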
Configuration control of seven degree of freedom arms
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1995-01-01
A seven-degree-of-freedom robot arm with a six-degree-of-freedom end effector is controlled by a processor employing a 6-by-7 Jacobian matrix for defining location and orientation of the end effector in terms of the rotation angles of the joints, a 1 (or more)-by-7 Jacobian matrix for defining 1 (or more) user-specified kinematic functions constraining location or movement of selected portions of the arm in terms of the joint angles, the processor combining the two Jacobian matrices to produce an augmented 7 (or more)-by-7 Jacobian matrix, the processor effecting control by computing in accordance with forward kinematics from the augmented 7-by-7 Jacobian matrix and from the seven joint angles of the arm a set of seven desired joint angles for transmittal to the joint servo loops of the arm. One of the kinematic functions constrains the orientation of the elbow plane of the arm. Another one of the kinematic functions minimizes a sum of gravitational torques on the joints. Still another one of the kinematic functions constrains the location of the arm to perform collision avoidance. Generically, one of the kinematic functions minimizes a sum of selected mechanical parameters of at least some of the joints associated with weighting coefficients which may be changed during arm movement. The mechanical parameters may be velocity errors or position errors or gravity torques associated with individual joints.
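The augmented-Jacobian control step described above can be sketched as a square linear solve: the 6-by-7 task Jacobian is stacked with a 1-by-7 kinematic-function Jacobian and the resulting 7-by-7 system is solved for the seven joint increments. The matrices below are random stand-ins, not an actual arm model:

```python
import numpy as np

def augmented_jacobian_step(J_task, J_constraint, dx_task, dx_constraint):
    """Stack the 6x7 end-effector Jacobian with the 1x7 Jacobian of a
    user-specified kinematic function into a square 7x7 system and solve
    for the seven joint-angle increments."""
    J_aug = np.vstack([J_task, J_constraint])      # 7 x 7 augmented Jacobian
    dx = np.concatenate([dx_task, dx_constraint])  # combined task vector
    return np.linalg.solve(J_aug, dx)
```

The extra row is what resolves the one degree of redundancy: with it the system is square, so the arm's self-motion is pinned down by the user's constraint rather than left free.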
Quantum mechanics of a constrained particle
NASA Astrophysics Data System (ADS)
da Costa, R. C. T.
1981-04-01
The motion of a particle rigidly bounded to a surface is discussed, considering the Schrödinger equation of a free particle constrained to move, by the action of an external potential, in an infinitely thin sheet of the ordinary three-dimensional space. Contrary to what seems to be the general belief expressed in the literature, this limiting process gives a perfectly well-defined result, provided that we take some simple precautions in the definition of the potentials and wave functions. It can then be shown that the wave function splits into two parts: the normal part, which contains the infinite energies required by the uncertainty principle, and a tangent part which contains "surface potentials" depending both on the Gaussian and mean curvatures. An immediate consequence of these results is the existence of different quantum mechanical properties for two isometric surfaces, as can be seen from the bound state which appears along the edge of a folded (but not stretched) plane. The fact that this surface potential is not a bending invariant (cannot be expressed as a function of the components of the metric tensor and their derivatives) is also interesting from the more general point of view of the quantum mechanics in curved spaces, since it can never be obtained from the classical Lagrangian of an a priori constrained particle without substantial modifications in the usual quantization procedures. Similar calculations are also presented for the case of a particle bounded to a curve. The properties of the constraining spatial potential, necessary to a meaningful limiting process, are discussed in some detail, and, as expected, the resulting Schrödinger equation contains a "linear potential" which is a function of the curvature.
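The surface potential described above, depending on both curvatures, is da Costa's well-known result; in terms of the mean curvature M and the Gaussian curvature K it reads:

```latex
V_s(q_1, q_2) = -\frac{\hbar^2}{2m}\left(M^2 - K\right)
```

Since M^2 - K = (\kappa_1 - \kappa_2)^2/4 in terms of the principal curvatures, the potential is non-positive and vanishes only where the two principal curvatures coincide; and because M, unlike K, is not determined by the metric alone, the potential is not a bending invariant, which is the point made in the abstract.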
Zhao, Tieshi; Zhao, Yanzhi; Hu, Qiangqiang; Ding, Shixing
2017-01-01
The measurement of large forces and the presence of errors due to dimensional coupling are significant challenges for multi-dimensional force sensors. To address these challenges, this paper proposes an over-constrained six-dimensional force sensor based on a parallel mechanism of steel ball structures as a measurement module. The steel ball structure can be subject to rolling friction instead of sliding friction, thus reducing the influence of friction. However, because the structure can only withstand unidirectional pressure, the application of steel balls in a six-dimensional force sensor is difficult. Accordingly, a new design of the sensor measurement structure was designed in this study. The static equilibrium and displacement compatibility equations of the sensor prototype’s over-constrained structure were established to obtain the transformation function, from which the forces in the measurement branches of the proposed sensor were then analytically derived. The sensor’s measurement characteristics were then analysed through numerical examples. Finally, these measurement characteristics were confirmed through calibration and application experiments. The measurement accuracy of the proposed sensor was determined to be 1.28%, with a maximum coupling error of 1.98%, indicating that the proposed sensor successfully overcomes the issues related to steel ball structures and provides sufficient accuracy. PMID:28867812
Grison, Claire M.; Burslem, George M.; Miles, Jennifer A.; Pilsl, Ludwig K. A.; Yeo, David J.; Imani, Zeynab; Warriner, Stuart L.; Webb, Michael E.
2017-01-01
The development of constrained peptides for inhibition of protein–protein interactions is an emerging strategy in chemical biology and drug discovery. This manuscript introduces a versatile, rapid and reversible approach to constrain peptides in a bioactive helical conformation using BID and RNase S peptides as models. Dibromomaleimide is used to constrain BID and RNase S peptide sequence variants bearing cysteine (Cys) or homocysteine (hCys) amino acids spaced at i and i + 4 positions by double substitution. The constraint can be readily removed by displacement of the maleimide using excess thiol. This new constraining methodology results in enhanced α-helical conformation (BID and RNase S peptide) as demonstrated by circular dichroism and molecular dynamics simulations, resistance to proteolysis (BID) as demonstrated by trypsin proteolysis experiments and retained or enhanced potency of inhibition for Bcl-2 family protein–protein interactions (BID), or greater capability to restore the hydrolytic activity of the RNAse S protein (RNase S peptide). Finally, use of a dibromomaleimide functionalized with an alkyne permits further divergent functionalization through alkyne–azide cycloaddition chemistry on the constrained peptide with fluorescein, oligoethylene glycol or biotin groups to facilitate biophysical and cellular analyses. Hence this methodology may extend the scope and accessibility of peptide stapling. PMID:28970902
A MAP blind image deconvolution algorithm with bandwidth over-constrained
NASA Astrophysics Data System (ADS)
Ren, Zhilei; Liu, Jin; Liang, Yonghui; He, Yulong
2018-03-01
We demonstrate a maximum a posteriori (MAP) blind image deconvolution algorithm with bandwidth over-constrained and total variation (TV) regularization to recover a clear image from the AO corrected images. The point spread functions (PSFs) are estimated by bandwidth limited less than the cutoff frequency of the optical system. Our algorithm performs well in avoiding noise magnification. The performance is demonstrated on simulated data.
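The bandwidth over-constraint on the PSF estimate amounts to projecting each update onto the set of functions whose Fourier support lies below the cutoff frequency of the optical system. A minimal sketch of that projection step (the radial-cutoff parameterization is an assumption; the full MAP iteration with TV regularization is omitted):

```python
import numpy as np

def enforce_bandwidth(psf, cutoff_frac):
    """Project a PSF estimate onto the band-limited set: zero every Fourier
    component beyond a radial cutoff (cutoff_frac = 1 keeps frequencies up
    to the grid Nyquist limit; this parameterization is an assumption),
    then restore nonnegativity and unit sum."""
    otf = np.fft.fft2(psf)
    fx = np.fft.fftfreq(psf.shape[0])
    fy = np.fft.fftfreq(psf.shape[1])
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    mask = np.sqrt(FX ** 2 + FY ** 2) <= 0.5 * cutoff_frac
    psf_bl = np.real(np.fft.ifft2(otf * mask))
    psf_bl = np.clip(psf_bl, 0.0, None)
    total = psf_bl.sum()
    return psf_bl / total if total > 0 else psf_bl
```

Interleaving such a projection with the MAP updates is what keeps the estimated PSF physically plausible and prevents noise beyond the optical cutoff from being amplified.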
Critical transition in the constrained traveling salesman problem.
Andrecut, M; Ali, M K
2001-04-01
We investigate the finite size scaling of the mean optimal tour length as a function of the density of obstacles in a constrained variant of the traveling salesman problem (TSP). Computational experiments reveal a critical transition (at ρ_c ≈ 85%) in the dependence of the excess of the mean optimal tour length over the Held-Karp lower bound on the density of obstacles.
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Pope, Stephen B.
2013-04-01
The Rate-Controlled Constrained-Equilibrium (RCCE) method is a thermodynamics-based dimension reduction method which enables representation of chemistry involving n_s species in terms of fewer n_r constraints. Here we focus on the application of the RCCE method to Lagrangian particle probability density function based computations. In these computations, at every reaction fractional step, given the initial particle composition (represented using RCCE), we need to compute the reaction mapping, i.e., the particle composition at the end of the time step. In this work we study three different implementations of RCCE for computing this reaction mapping, and compare their relative accuracy and efficiency. These implementations include: (1) RCCE/TIFS (Trajectory In Full Space): this involves solving a system of n_s rate-equations for all the species in the full composition space to obtain the reaction mapping. The other two implementations obtain the reaction mapping by solving a reduced system of n_r rate-equations obtained by projecting the n_s rate-equations for species evaluated in the full space onto the constrained subspace. These implementations include (2) RCCE: this is the classical implementation of RCCE which uses a direct projection of the rate-equations for species onto the constrained subspace; and (3) RCCE/RAMP (Reaction-mixing Attracting Manifold Projector): this is a new implementation introduced here which uses an alternative projector obtained using the RAMP approach. We test these three implementations of RCCE for methane/air premixed combustion in the partially-stirred reactor with chemistry represented using the n_s = 31 species GRI-Mech 1.2 mechanism with n_r = 13 to 19 constraints.
We show that: (a) the classical RCCE implementation involves an inaccurate projector which yields large errors (over 50%) in the reaction mapping; (b) both RCCE/RAMP and RCCE/TIFS approaches yield significantly lower errors (less than 2%); and (c) overall the RCCE/TIFS approach is the most accurate, efficient (by orders of magnitude) and robust implementation.
NASA Astrophysics Data System (ADS)
Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng
2009-03-01
The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and position of absorbers to minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for a multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees the better performing individual winning its competition, provides a slight selection pressure toward individuals and maintains diversity in the population. Moreover, due to the evaluation for individuals in each generation being finished in one run, less computational effort is taken. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains necessary information on non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to earthquake.
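Dominance-based selection of the kind described can be sketched with a Pareto-dominance test and a binary tournament (a simplified stand-in for the paper's two-branch tournament and dominance-based penalty handling):

```python
import numpy as np

def dominates(f_a, f_b):
    """Pareto dominance for minimization: a dominates b if it is no worse
    in every objective and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def binary_tournament(pop_objs, rng):
    """Draw two individuals at random; the dominating one wins, with ties
    between mutually non-dominated individuals broken randomly."""
    i, j = rng.choice(len(pop_objs), size=2, replace=False)
    if dominates(pop_objs[i], pop_objs[j]):
        return int(i)
    if dominates(pop_objs[j], pop_objs[i]):
        return int(j)
    return int(rng.choice([i, j]))
```

Because the winner of each pairing is decided by dominance rather than a scalar fitness, selection pressure pushes the population toward the Pareto front while the random tie-breaking preserves diversity among non-dominated designs, as the abstract describes.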
Design Principles of Regulatory Networks: Searching for the Molecular Algorithms of the Cell
Lim, Wendell A.; Lee, Connie M.; Tang, Chao
2013-01-01
A challenge in biology is to understand how complex molecular networks in the cell execute sophisticated regulatory functions. Here we explore the idea that there are common and general principles that link network structures to biological functions, principles that constrain the design solutions that evolution can converge upon for accomplishing a given cellular task. We describe approaches for classifying networks based on abstract architectures and functions, rather than on the specific molecular components of the networks. For any common regulatory task, can we define the space of all possible molecular solutions? Such inverse approaches might ultimately allow the assembly of a design table of core molecular algorithms that could serve as a guide for building synthetic networks and modulating disease networks. PMID:23352241
The gauge transformations of the constrained q-deformed KP hierarchy
NASA Astrophysics Data System (ADS)
Geng, Lumin; Chen, Huizhan; Li, Na; Cheng, Jipeng
2018-06-01
In this paper, we mainly study the gauge transformations of the constrained q-deformed Kadomtsev-Petviashvili (q-KP) hierarchy. Unlike in the usual case, we must consider additional constraints on the Lax operator of the constrained q-deformed KP hierarchy, since the form of the Lax operator must be preserved when constructing the gauge transformations. For this reason, the generating functions in the elementary gauge transformation operators TD and TI must be chosen very carefully, as they are determined by the constraints in the Lax operator. Finally, we consider the successive application of n steps of TD and k steps of TI gauge transformations.
NASA Astrophysics Data System (ADS)
Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.
2018-05-01
A systematic approach for the model building of Generalized Parton Distributions (GPDs), based on their overlap representation within the DGLAP kinematic region and a further covariant extension to the ERBL one, is applied to the valence-quark pion's case, using light-front wave functions inspired by the Nakanishi representation of the pion Bethe-Salpeter amplitudes (BSA). This simple but fruitful pion GPD model illustrates the general model building technique and, in addition, allows for the ambiguities related to the covariant extension, grounded on the Double Distribution (DD) representation, to be constrained by requiring a soft-pion theorem to be properly observed.
NASA Astrophysics Data System (ADS)
Los, S. O.
2015-06-01
A model was developed to simulate spatial, seasonal and interannual variations in vegetation in response to temperature, precipitation and atmospheric CO2 concentrations; the model addresses shortcomings in current implementations. The model uses the minimum of 12 temperature and precipitation constraint functions to simulate NDVI. Functions vary based on the Köppen-Trewartha climate classification to take adaptations of vegetation to climate into account. The simulated NDVI, referred to as the climate constrained vegetation index (CCVI), captured the spatial variability (0.82 < r < 0.87), seasonal variability (median r = 0.83) and interannual variability (median global r = 0.24) in NDVI. The CCVI simulated the effects of adverse climate on vegetation during the 1984 drought in the Sahel and during the dust bowls of the 1930s and 1950s in the Great Plains in North America. A global CO2 fertilisation effect was found in NDVI data, similar in magnitude to that of earlier estimates (8 % for the 20th century). This effect increased linearly with simple ratio, a transformation of the NDVI. Three CCVI scenarios, based on climate simulations using the representative concentration pathway RCP4.5, showed a greater sensitivity of vegetation towards precipitation in Northern Hemisphere mid-latitudes than is currently implemented in climate models. This higher sensitivity is of importance for assessing the impact of climate variability on vegetation, in particular on agricultural productivity.
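The minimum-of-constraint-functions construction can be sketched with piecewise-linear ramps (the thresholds below are invented placeholders, not the Köppen-Trewartha class-specific functions of the paper):

```python
import numpy as np

def ramp(x, lo, hi):
    """Piecewise-linear constraint function: 0 below lo, 1 above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def ccvi(temp_c, precip_mm, t_lo=0.0, t_hi=15.0, p_lo=10.0, p_hi=100.0):
    """NDVI proxy taken as the minimum of a temperature and a precipitation
    constraint function. The thresholds are illustrative assumptions, not
    the paper's climate-class-specific values."""
    return np.minimum(ramp(temp_c, t_lo, t_hi), ramp(precip_mm, p_lo, p_hi))
```

Taking the minimum encodes Liebig's law of the limiting factor: vegetation is constrained by whichever climate variable is most limiting at a given place and time.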
Using artificial neural networks to constrain the halo baryon fraction during reionization
NASA Astrophysics Data System (ADS)
Sullivan, David; Iliev, Ilian T.; Dixon, Keri L.
2018-01-01
Radiative feedback from stars and galaxies has been proposed as a potential solution to many of the tensions with simplistic galaxy formation models based on Λcold dark matter, such as the faint end of the ultraviolet (UV) luminosity function. The total energy budget of radiation could exceed that of galactic winds and supernovae combined, which has driven the development of sophisticated algorithms that evolve both the radiation field and the hydrodynamical response of gas simultaneously, in a cosmological context. We probe self-feedback on galactic scales using the adaptive mesh refinement, radiative transfer, hydrodynamics, and N-body code RAMSES-RT. Unlike previous studies which assume a homogeneous UV background, we self-consistently evolve both the radiation field and gas to constrain the halo baryon fraction during cosmic reionization. We demonstrate that the characteristic halo mass with mean baryon fraction half the cosmic mean, Mc(z), shows very little variation as a function of mass-weighted ionization fraction. Furthermore, we find that the inclusion of metal cooling and the ability to resolve scales small enough for self-shielding to become efficient leads to a significant drop in Mc when compared to recent studies. Finally, we develop an artificial neural network that is capable of predicting the baryon fraction of haloes based on recent tidal interactions, gas temperature, and mass-weighted ionization fraction. Such a model can be applied to any reionization history, and trivially incorporated into semi-analytical models of galaxy formation.
NASA Astrophysics Data System (ADS)
Cull, S. C.; Arvidson, R. E.; Seelos, F.; Wolff, M. J.
2010-03-01
Using data from CRISM's Emission Phase Function observations, we attempt to constrain Phoenix soil scattering properties, including soil grain size, single-scattering albedo, and surface phase function.
Raguideau, Sébastien; Plancade, Sandra; Pons, Nicolas; Leclerc, Marion; Laroche, Béatrice
2016-12-01
Whole Genome Shotgun (WGS) metagenomics is increasingly used to study the structure and functions of complex microbial ecosystems, both from the taxonomic and functional point of view. Gene inventories of otherwise uncultured microbial communities make the direct functional profiling of microbial communities possible. The concept of community aggregated trait has been adapted from environmental and plant functional ecology to the framework of microbial ecology. Community aggregated traits are quantified from WGS data by computing the abundance of relevant marker genes. They can be used to study key processes at the ecosystem level and correlate environmental factors and ecosystem functions. In this paper we propose a novel model based approach to infer combinations of aggregated traits characterizing specific ecosystemic metabolic processes. We formulate a model of these Combined Aggregated Functional Traits (CAFTs) accounting for a hierarchical structure of genes, which are associated on microbial genomes, further linked at the ecosystem level by complex co-occurrences or interactions. The model is completed with constraints specifically designed to exploit available genomic information, in order to favor biologically relevant CAFTs. The CAFTs structure, as well as their intensity in the ecosystem, is obtained by solving a constrained Non-negative Matrix Factorization (NMF) problem. We developed a multicriteria selection procedure for the number of CAFTs. We illustrated our method on the modelling of ecosystemic functional traits of fiber degradation by the human gut microbiota. We used 1408 samples of gene abundances from several high-throughput sequencing projects and found that four CAFTs only were needed to represent the fiber degradation potential. This data reduction highlighted biologically consistent functional patterns while providing a high quality preservation of the original data. 
Our method is generic and can be applied to other metabolic processes in the gut or in other ecosystems.
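The core NMF step can be sketched with plain Lee-Seung multiplicative updates; the genome-informed constraints and the CAFT selection procedure of the paper are omitted, and the matrix sizes and rank below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy gene-abundance matrix: 30 marker genes x 20 samples, generated from
# 4 latent "traits" so a rank-4 factorization is recoverable.
H_true = rng.random((4, 20))
W_true = rng.random((30, 4))
V = W_true @ H_true

k = 4
W = rng.random((30, k)) + 0.1
H = rng.random((k, 20)) + 0.1
for _ in range(500):
    # Lee-Seung multiplicative updates keep W and H non-negative.
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # small relative error
```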
A Constrained-Clustering Approach to the Analysis of Remote Sensing Data.
1983-01-01
One old and two new clustering methods were applied to the constrained-clustering problem of separating different agricultural fields based on multispectral remote sensing satellite data. (Constrained-clustering involves double clustering in multispectral measurement similarity and geographical location.) The results of applying the three methods are provided along with a discussion of their relative strengths and weaknesses and a detailed description of their algorithms.
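One simple way to realize such double clustering is to run k-means on feature vectors that concatenate spectral measurements with weighted geographic coordinates. This toy sketch (synthetic "fields", hypothetical spatial weight) illustrates the idea rather than the article's three algorithms:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic "fields" that differ in both spectra and location.
spec = np.r_[rng.normal(0.2, 0.02, (50, 4)), rng.normal(0.8, 0.02, (50, 4))]
xy = np.r_[rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))]
# Weight the coordinates so spatial proximity also constrains the clusters.
X = np.hstack([spec, 0.2 * xy])

def kmeans(X, k=2, iters=20):
    centers = X[[0, -1]].copy()          # deterministic init, one per field
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X)
print(labels[0] != labels[-1])  # the two fields end up in different clusters
```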
A Simulation-as-a-Service Framework Facilitating WebGIS Based Installation Planning
NASA Astrophysics Data System (ADS)
Zheng, Z.; Chang, Z. Y.; Fei, Y. F.
2017-09-01
Installation planning is constrained by both natural and social conditions, especially for spatially sparse but functionally connected facilities. Simulation is important for properly deploying facilities in space and configuring their functions so that they form a cohesive, mutually supportive system that meets users' operational needs. Based on a requirement analysis, we propose a framework that combines GIS and agent-based simulation to overcome the shortcomings of traditional GIS in temporal analysis and task simulation. In this framework, the agent-based simulation runs as a service on the server and exposes basic simulation functions, such as scenario configuration, simulation control, and simulation data retrieval, to installation planners. At the same time, the simulation service can invoke various geoprocessing services within the agents' process logic to perform sophisticated spatial inference and analysis. This simulation-as-a-service framework has many potential benefits, such as ease of use, on-demand availability, shared understanding, and improved performance. Finally, we present a preliminary implementation of this concept using the ArcGIS JavaScript API 4.0 and ArcGIS for Server, showing how trip planning and driving can be carried out by agents.
Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J
2015-12-01
Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles account for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bouwens, Rychard; Trenti, Michele; Calvi, Valentina; Bernard, Stephanie; Labbe, Ivo; Oesch, Pascal; Coe, Dan; Holwerda, Benne; Bradley, Larry; Mason, Charlotte; Schmidt, Kasper; Illingworth, Garth
2015-10-01
Hubble's WFC3 has been a game changer for studying early galaxy formation in the first 700 Myr after the Big Bang. Reliable samples of sources up to z~10, which can be discovered only from space, are now constraining the evolution of the galaxy luminosity function into the epoch of reionization. Despite these efforts, the size of the highest redshift galaxy samples (z > 9 and especially z > 10) is still very small, particularly at high luminosities (L > L*). To deliver transformational results, much larger numbers of bright z > 9 galaxies are needed both to map out the bright end of the luminosity/mass function and for spectroscopic follow-up (with JWST and otherwise). One especially efficient way of expanding current samples is (1) to leverage the huge amounts of pure-parallel data available with HST to identify large numbers of candidate z ~ 9 - 11 galaxies and (2) to follow up each candidate with shallow Spitzer/IRAC observations to distinguish the bona fide z ~ 9 - 11 galaxies from z ~ 2 old, dusty galaxies. For this program we are requesting shallow Spitzer/IRAC follow-up of 20 candidate z ~ 9 - 11 galaxies we have identified from 130 WFC3/IR pointings obtained from more than 4 separate HST programs with no existing IRAC coverage. Based on our previous CANDELS/GOODS searches, we expect to confirm 5 to 10 sources as L > L* galaxies at z >= 9. Our results will be used to constrain the bright end of the LF at z >= 9, to provide targets for Keck spectroscopy to constrain the ionization state of the z > 8 universe, and to furnish JWST with bright targets for spectroscopic follow-up studies.
Zhou, Rong; Pickup, Stephen; Yankeelov, Thomas E; Springer, Charles S; Glickson, Jerry D
2004-08-01
A noninvasive technique for simultaneous measurement of the arterial input function (AIF) for gadodiamide (Omniscan) and its uptake in tumor was demonstrated in mice. Implantation of a tumor at a suitable location enabled its visualization in a cardiac short-axis image. Sets of gated, low-resolution saturation recovery images were acquired from each of five tumor-bearing mice following intravenous administration of a bolus of contrast agent (CA). The AIF was extracted from the signal intensity changes in left ventricular blood using literature values of the CA relaxivity and a precontrast T1 map. The time-dependent 1H2O relaxation rate constant (R1 = 1/T1) in the tumor was modeled using the BOLus Enhanced Relaxation Overview (BOLERO) method in two modes regarding the equilibrium transcytolemmal water exchange system: 1) constraining it exclusively to the fast exchange limit (FXL) (the conventional assumption), and 2) allowing its transient departure from the FXL and access to the fast exchange regime (FXR), designated FXL/FXR. The FXL/FXR analysis yielded better fits than the FXL-constrained analysis for data from the tumor rims, whereas the results from the two modes were indistinguishable for data from the tumor cores. For the tumor rims, the values of Ktrans (the rate constant for CA transfer from the vasculature to the interstitium) and ve (the volume fraction of the tissue extracellular and extravascular space) returned by the FXL/FXR analysis are consistently greater than those from the FXL-constrained analysis, by a factor of 1.5 or more, at a CA dose of 0.05 mmol/kg.
Contact lens design with slope-constrained Q-type aspheres for myopia correction
NASA Astrophysics Data System (ADS)
Peng, Wei-Jei; Cheng, Yuan-Chieh; Hsu, Wei-Yao; Yu, Zong-Ru; Ho, Cheng-Fang; Abou-El-Hossein, Khaled
2017-08-01
The design of a rigid contact lens (CL) with slope-constrained Q-type aspheres for myopia correction is presented in this paper. The spherical CL is the most common type for myopia correction; however, the spherical aberration (SA) caused by pupil dilation in the dark degrades visual acuity and cannot be corrected by a spherical surface. Spherical and aspheric CLs are designed based on Liou's schematic eye model, with the modulation transfer function (MTF) at 100 line pairs per mm, corresponding to normal vision of one arc-minute, as the design criterion. After optimization, the MTF of the aspheric design is superior to that of the spherical design because the aspheric surface corrects the SA, improving visual acuity in the dark. To avoid the scratches caused by a contact profilometer, the aspheric surface is designed to be measurable with an interferometer. The Q-type aspheric surface is employed to directly constrain the root-mean-square (rms) slope of the departure from a best-fit sphere, because the fringe density is limited by the interferometer. The maximum sag departure from a best-fit sphere is also controlled according to the measurability of the aspheric stitching interferometer (ASI). Inflection points are removed during optimization for measurability and appearance. In this study, the aspheric CL is successfully designed with Q-type aspheres to be measurable with the interferometer. It not only corrects the myopia but also eliminates the SA, improving visual acuity in the dark based on the schematic eye model.
Mantle viscosity structure constrained by joint inversions of seismic velocities and density
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Moulik, P.; Lekic, V.
2017-12-01
The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.
NASA Astrophysics Data System (ADS)
di Stefano, Marco; Paulsen, Jonas; Lien, Tonje G.; Hovig, Eivind; Micheletti, Cristian
2016-10-01
Combining genome-wide structural models with phenomenological data is at the forefront of efforts to understand the organizational principles regulating the human genome. Here, we use chromosome-chromosome contact data as knowledge-based constraints for large-scale three-dimensional models of the human diploid genome. The resulting models remain minimally entangled and acquire several functional features that are observed in vivo and that were never used as input for the model. We find, for instance, that gene-rich, active regions are drawn towards the nuclear center, while gene poor and lamina associated domains are pushed to the periphery. These and other properties persist upon adding local contact constraints, suggesting their compatibility with non-local constraints for the genome organization. The results show that suitable combinations of data analysis and physical modelling can expose the unexpectedly rich functionally-related properties implicit in chromosome-chromosome contact data. Specific directions are suggested for further developments based on combining experimental data analysis and genomic structural modelling.
Affording and Constraining Local Moral Orders in Teacher-Led Ability-Based Mathematics Groups
ERIC Educational Resources Information Center
Tait-McCutcheon, Sandi; Shuker, Mary Jane; Higgins, Joanna; Loveridge, Judith
2015-01-01
How teachers position themselves and their students can influence the development of afforded or constrained local moral orders in ability-based teacher-led mathematics lessons. Local moral orders are the negotiated discursive practices and interactions of participants in the group. In this article, the developing local moral orders of 12 teachers…
Robust media processing on programmable power-constrained systems
NASA Astrophysics Data System (ADS)
McVeigh, Jeff
2005-03-01
To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
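The buffer-driven frequency adjustment can be sketched as a simple threshold policy; the frequency table and thresholds below are hypothetical, not the values from the case study:

```python
# Hypothetical operating points, lowest first (MHz).
FREQS_MHZ = [600, 800, 1000, 1200]

def pick_frequency(buffer_fullness):
    """Map output-buffer fullness in [0, 1] to a processor frequency.

    A full buffer means decoding is running ahead of its deadlines, so the
    clock (and voltage) can drop; a draining buffer forces a higher clock.
    """
    if buffer_fullness > 0.75:
        return FREQS_MHZ[0]
    if buffer_fullness > 0.50:
        return FREQS_MHZ[1]
    if buffer_fullness > 0.25:
        return FREQS_MHZ[2]
    return FREQS_MHZ[3]

print(pick_frequency(0.9), pick_frequency(0.1))  # 600 1200
```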
Cohen, Trevor; Blatter, Brett; Patel, Vimla
2005-01-01
Certain applications require computer systems to approximate intended human meaning. This is achievable in constrained domains with a finite number of concepts. Areas such as psychiatry, however, draw on concepts from the world-at-large. A knowledge structure with broad scope is required to comprehend such domains. Latent Semantic Analysis (LSA) is an unsupervised corpus-based statistical method that derives quantitative estimates of the similarity between words and documents from their contextual usage statistics. The aim of this research was to evaluate the ability of LSA to derive meaningful associations between concepts relevant to the assessment of dangerousness in psychiatry. An expert reference model of dangerousness was used to guide the construction of a relevant corpus. Derived associations between words in the corpus were evaluated qualitatively. A similarity-based scoring function was used to assign dangerousness categories to discharge summaries. LSA was shown to derive intuitive relationships between concepts and correlated significantly better than random with human categorization of psychiatric discharge summaries according to dangerousness. The use of LSA to derive a simulated knowledge structure can extend the scope of computer systems beyond the boundaries of constrained conceptual domains. PMID:16779020
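The LSA pipeline itself is compact: singular value decomposition of a term-document matrix, then cosine similarity in the truncated space. The three-document corpus below is invented for illustration and is unrelated to the psychiatric corpus used in the study:

```python
import numpy as np

docs = ["patient threatened staff with violence",
        "patient made threats of violence to staff",
        "patient slept well and ate breakfast"]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document count matrix.
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                       # truncate to a 2-D latent space
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T      # document vectors in latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Documents 0 and 1 share vocabulary and should be closer than 0 and 2.
print(cos(doc_vecs[0], doc_vecs[1]) > cos(doc_vecs[0], doc_vecs[2]))
```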
Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L
1997-04-25
A statistical approach to the analysis of amplitude fluctuations of postsynaptic responses is described. It includes (1) using an L1 metric in the space of distribution functions, minimised with linear programming methods, to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; and (2) deconvolution of the resulting discrete distribution to determine the release probabilities and the quantal amplitude for cases with a small number (<5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios that mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome over-fitting and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.
Protein secondary structure determination by constrained single-particle cryo-electron tomography.
Bartesaghi, Alberto; Lecumberry, Federico; Sapiro, Guillermo; Subramaniam, Sriram
2012-12-05
Cryo-electron microscopy (cryo-EM) is a powerful technique for 3D structure determination of protein complexes by averaging information from individual molecular images. The resolutions that can be achieved with single-particle cryo-EM are frequently limited by inaccuracies in assigning molecular orientations based solely on 2D projection images. Tomographic data collection schemes, however, provide powerful constraints that can be used to more accurately determine molecular orientations necessary for 3D reconstruction. Here, we propose "constrained single-particle tomography" as a general strategy for 3D structure determination in cryo-EM. A key component of our approach is the effective use of images recorded in tilt series to extract high-resolution information and correct for the contrast transfer function. By incorporating geometric constraints into the refinement to improve orientational accuracy of images, we reduce model bias and overrefinement artifacts and demonstrate that protein structures can be determined at resolutions of ∼8 Å starting from low-dose tomographic tilt series. Copyright © 2012 Elsevier Ltd. All rights reserved.
The tangential velocity of M31: CLUES from constrained simulations
NASA Astrophysics Data System (ADS)
Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Courtois, Hélène; Tully, R. Brent
2016-07-01
Determining the precise value of the tangential component of the velocity of M31 is a non-trivial astrophysical issue that relies on complicated modelling. This has recently led to conflicting estimates, obtained by several groups that used different methodologies and assumptions. This Letter addresses the issue by computing a Bayesian posterior distribution function of this quantity, in order to measure the compatibility of those estimates with Λ cold dark matter (ΛCDM). This is achieved using an ensemble of Local Group (LG) look-alikes collected from a set of constrained simulations (CSs) of the local Universe, and a standard unconstrained ΛCDM simulation. The latter allows us to build a control sample of LG-like pairs and to single out the influence of the environment in our results. We find that neither estimate is at odds with ΛCDM; however, whereas CSs favour higher values of vtan, the reverse is true for estimates based on LG samples gathered from unconstrained simulations, overlooking the environmental element.
Kinematics and constraints associated with swashplate blade pitch control
NASA Technical Reports Server (NTRS)
Leyland, Jane A.
1993-01-01
An important class of techniques to reduce helicopter vibration is based on using a Higher Harmonic controller to optimally define the Higher Harmonic blade pitch. These techniques typically require solution of a general optimization problem requiring the determination of a control vector which minimizes a performance index where functions of the control vector are subject to inequality constraints. Six possible constraint functions associated with swashplate blade pitch control were identified and defined. These functions constrain: (1) blade pitch Fourier Coefficients expressed in the Rotating System, (2) blade pitch Fourier Coefficients expressed in the Nonrotating System, (3) stroke of the individual actuators expressed in the Nonrotating System, (4) blade pitch expressed as a function of blade azimuth and actuator stroke, (5) time rate-of-change of the aforementioned parameters, and (6) required actuator power. The aforementioned constraints and the associated kinematics of swashplate blade pitch control by means of the strokes of the individual actuators are documented.
Deco, Gustavo; Mantini, Dante; Romani, Gian Luca; Hagmann, Patric; Corbetta, Maurizio
2013-01-01
Brain fluctuations at rest are not random but are structured in spatial patterns of correlated activity across different brain areas. The question of how resting-state functional connectivity (FC) emerges from the brain's anatomical connections has motivated several experimental and computational studies to understand structure–function relationships. However, the mechanistic origin of resting state is obscured by large-scale models' complexity, and a close structure–function relation is still an open problem. Thus, a realistic but simple enough description of relevant brain dynamics is needed. Here, we derived a dynamic mean field model that consistently summarizes the realistic dynamics of a detailed spiking and conductance-based synaptic large-scale network, in which connectivity is constrained by diffusion imaging data from human subjects. The dynamic mean field approximates the ensemble dynamics, whose temporal evolution is dominated by the longest time scale of the system. With this reduction, we demonstrated that FC emerges as structured linear fluctuations around a stable low firing activity state close to destabilization. Moreover, the model can be further and crucially simplified into a set of motion equations for statistical moments, providing a direct analytical link between anatomical structure, neural network dynamics, and FC. Our study suggests that FC arises from noise propagation and dynamical slowing down of fluctuations in an anatomically constrained dynamical system. Altogether, the reduction from spiking models to statistical moments presented here provides a new framework to explicitly understand the building up of FC through neuronal dynamics underpinned by anatomical connections and to drive hypotheses in task-evoked studies and for clinical applications. PMID:23825427
Resolving the faint end of the satellite luminosity function for the nearest elliptical Centaurus A
NASA Astrophysics Data System (ADS)
Crnojevic, Denija
2014-10-01
We request HST/ACS imaging to follow up 15 new faint candidate dwarfs around the nearest elliptical Centaurus A (3.8 Mpc). The dwarfs were found via a systematic ground-based (Magellan/Megacam) survey out to ~150 kpc, designed to directly confront the "missing satellites" problem in a wholly new environment. Current Cold Dark Matter models for structure formation fail to reproduce the shallow slope of the satellite luminosity function in spiral-dominated groups for which dwarfs fainter than M_V<-14 have been surveyed (the Local Group and the nearby, interacting M81 group). Clusters of galaxies show a better agreement with cosmological predictions, suggesting an environmental dependence of the (poorly-understood) physical processes acting on the evolution of low mass galaxies (e.g., reionization). However, the luminosity function completeness for these rich environments quickly drops due to the faintness of the satellites and to the difficult cluster membership determination. We target a yet unexplored "intermediate" environment, a nearby group dominated by an elliptical galaxy, ideal due to its proximity: accurate (10%) distance determinations for its members can be derived from resolved stellar populations. The proposed observations of the candidate dwarfs will confirm their nature, group membership, and constrain their luminosities, metallicities, and star formation histories. We will obtain the first complete census of dwarf satellites of an elliptical down to an unprecedented M_V<-9. Our results will crucially constrain cosmological predictions for the faint end of the satellite luminosity function to achieve a more complete picture of the galaxy formation process.
Plessow, Philipp N
2018-02-13
This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach in molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions rely almost exclusively on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths in Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which are highly relevant to industrial processes.
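The constrained-scan idea can be illustrated on a toy two-dimensional surface: hold the "reaction coordinate" fixed, relax the remaining coordinate, and locate the maximum of the resulting energy profile. The potential below is an invented double well, not a real potential energy surface:

```python
import numpy as np

def energy(x, y):
    # Toy surface: double well along the constrained coordinate x,
    # harmonic transverse mode in y. Saddle point at (0, 0) with E = 1.
    return (x**2 - 1) ** 2 + (y - x) ** 2

def relax_y(x, y0=0.0, lr=0.1, steps=200):
    # Minimize over the unconstrained coordinate at fixed x
    # by gradient descent on dE/dy = 2 * (y - x).
    y = y0
    for _ in range(steps):
        y -= lr * 2 * (y - x)
    return y

scan = np.linspace(-1.2, 1.2, 49)                   # constrained values
profile = np.array([energy(x, relax_y(x)) for x in scan])
ts_x = scan[np.argmax(profile)]                     # transition-state guess
print(float(ts_x), float(profile.max()))            # saddle near x = 0, E = 1
```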
Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.
Xia, Youshen; Wang, Jun
2015-07-01
This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically convergent to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
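A stripped-down version of the Kalman filtering stage can be sketched as follows, with the AR coefficients assumed known (whereas the paper estimates them with the recurrent network) and all signal parameters synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy AR(2) "speech" observed in white noise.
a1, a2 = 1.3, -0.4                       # stable: characteristic roots 0.8, 0.5
n = 400
x = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t-1] + a2 * x[t-2] + rng.normal(0.0, 0.1)
y = x + rng.normal(0.0, 0.5, n)          # noisy observation

F = np.array([[a1, a2], [1.0, 0.0]])     # state transition for (x_t, x_{t-1})
H = np.array([[1.0, 0.0]])               # we observe the current sample
Q = np.diag([0.01, 0.0])                 # process noise covariance
R = np.array([[0.25]])                   # observation noise covariance
s = np.zeros((2, 1))
P = np.eye(2)
est = np.zeros(n)
for t in range(n):
    s = F @ s                            # predict
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    s = s + K * (y[t] - (H @ s)[0, 0])   # update with the scalar innovation
    P = (np.eye(2) - K @ H) @ P
    est[t] = s[0, 0]

print(np.mean((y - x) ** 2) > np.mean((est - x) ** 2))  # filter reduces noise
```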
Steininger, H; Schuster, M; Kreuer, K D; Kaltbeitzel, A; Bingöl, B; Meyer, W H; Schauff, S; Brunklaus, G; Maier, J; Spiess, H W
2007-04-21
The melting behaviour and transport properties of straight chain alkanes mono- and difunctionalized with phosphonic acid groups have been investigated as a function of their length. The increase of melting temperature and decrease of proton conductivity with increasing chain length is suggested to be the consequence of an increasing ordering of the alkane segments which constrains the free aggregation of the phosphonic acid groups. However, the proton mobility is reduced to a greater extent than the proton diffusion coefficient indicating an increasing cooperativity of proton transport with increasing length of the alkane segment. The results clearly indicate that the "spacer concept", which had been proven successful in the optimization of the proton conductivity of heterocycle based systems, fails in the case of phosphonic acid functionalized polymers. Instead, a very high concentration of phosphonic acid functional groups forming "bulky" hydrogen bonded aggregates is suggested to be essential for obtaining very high proton conductivity. Aggregation is also suggested to reduce condensation reactions generally observed in phosphonic acid containing systems. On the basis of this understanding, the proton conductivities of poly(vinyl phosphonic acid) and poly(meta-phenylene phosphonic acid) are discussed. Though both polymers exhibit a substantial concentration of phosphonic acid groups, aggregation seems to be constrained to such an extent that intrinsic proton conductivity is limited to values below sigma = 10(-3) S cm(-1) at T = 150 degrees C. The results suggest that different immobilization concepts have to be developed in order to minimize the conductivity reduction compared to the very high intrinsic proton conductivity of neat phosphonic acid under quasi dry conditions. 
In the presence of high water activities, however, (as usually present in PEM fuel cells) the very high ion exchange capacities (IEC) possible for phosphonic acid functionalized ionomers (IEC >10 meq g(-1)) may allow for high proton conductivities in the intermediate temperature range (T approximately 120 -160 degrees C).
Functionalization mediates heat transport in graphene nanoflakes
Han, Haoxue; Zhang, Yong; Wang, Nan; Samani, Majid Kabiri; Ni, Yuxiang; Mijbil, Zainelabideen Y.; Edwards, Michael; Xiong, Shiyun; Sääskilahti, Kimmo; Murugesan, Murali; Fu, Yifeng; Ye, Lilei; Sadeghi, Hatef; Bailey, Steven; Kosevich, Yuriy A.; Lambert, Colin J.; Liu, Johan; Volz, Sebastian
2016-01-01
The high thermal conductivity of graphene and few-layer graphene undergoes severe degradations through contact with the substrate. Here we show experimentally that the thermal management of a micro heater is substantially improved by introducing alternative heat-escaping channels into a graphene-based film bonded to functionalized graphene oxide through amino-silane molecules. Using a resistance temperature probe for in situ monitoring we demonstrate that the hotspot temperature was lowered by ∼28 °C for a chip operating at 1,300 W cm−2. Thermal resistance probed by pulsed photothermal reflectance measurements demonstrated an improved thermal coupling due to functionalization on the graphene–graphene oxide interface. Three functionalization molecules manifest distinct interfacial thermal transport behaviour, corroborating our atomistic calculations in unveiling the role of molecular chain length and functional groups. Molecular dynamics simulations reveal that the functionalization constrains the cross-plane phonon scattering, which in turn enhances in-plane heat conduction of the bonded graphene film by recovering the long flexural phonon lifetime. PMID:27125636
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Logic-based mathematical formalisms are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to the lack of data relative to the size of the signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms that pre- and post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Nonlinear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
NASA Astrophysics Data System (ADS)
Gaztañaga, Enrique; Juszkiewicz, Roman
2001-09-01
We present a new constraint on the biased galaxy formation picture. Gravitational instability theory predicts that the two-point mass density correlation function, ξ(r), has an inflection point at the separation r = r_0, corresponding to the boundary between the linear and nonlinear regimes of clustering, ξ ≈ 1. We show how this feature can be used to constrain the biasing parameter b² ≡ ξ_g(r)/ξ(r) on scales r ≈ r_0, where ξ_g is the galaxy-galaxy correlation function, which is allowed to differ from ξ. We apply our method to real data: the ξ_g(r) estimated from the Automatic Plate Measuring (APM) galaxy survey. Our results suggest that the APM galaxies trace the mass at separations r ≳ 5 h⁻¹ Mpc, where h is the Hubble constant in units of 100 km s⁻¹ Mpc⁻¹. The present results agree with earlier studies, based on comparing higher-order correlations in the APM with weakly nonlinear perturbation theory. Both approaches constrain the b factor to be within 20% of unity. If the existence of the feature that we identified in the APM ξ_g(r), the inflection point near ξ_g = 1, is confirmed by more accurate surveys, we may have discovered gravity's smoking gun: the long-awaited ``shoulder'' in ξ, predicted by Gott and Rees 25 years ago.
Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Logic-based mathematical formalisms are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to the lack of data relative to the size of the signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms that pre- and post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Nonlinear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.
de Vaal, M H; Gee, M W; Stock, U A; Wall, W A
2016-12-01
Because aortic occlusion is arguably one of the most dangerous aortic manipulation maneuvers during cardiac surgery in terms of perioperative ischemic neurological injury, the purpose of this investigation is to assess the structural mechanical impact resulting from the use of existing and newly proposed occluders. Existing (clinically used) occluders considered include different cross-clamps (CCs) and endo-aortic balloon occlusion (EABO). A novel occluder is also introduced, namely, constrained EABO (CEABO), which consists of applying a constrainer externally around the aorta when performing EABO. Computational solid mechanics are employed to investigate each occluder according to a comprehensive list of functional requirements. The potential of a state of occlusion is also considered for the first time. Three different constrainer designs are evaluated for CEABO. Although the CCs were responsible for the highest strains, largest deformation, and most inefficient increase of the occlusion potential, it remains the most stable, simplest, and cheapest occluder. The different CC hinge geometries resulted in poorer performance of CC used for minimally invasive procedures than conventional ones. CEABO with a profiled constrainer successfully addresses the EABO shortcomings of safety, stability, and positioning accuracy, while maintaining its complexities of operation (disadvantage) and yielding additional functionalities (advantage). Moreover, CEABO is able to achieve the previously unattainable potential to provide a clinically determinable state of occlusion. CEABO offers an attractive alternative to the shortcomings of existing occluders, with its design rooted in achieving the highest patient safety. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Engineering calculations for communications satellite systems planning
NASA Technical Reports Server (NTRS)
Martin, C. H.; Gonsalvez, D. J.; Levis, C. A.; Wang, C. W.
1983-01-01
Progress is reported on a computer code to improve the efficiency of spectrum and orbit utilization for the Broadcasting Satellite Service in the 12 GHz band for Region 2. It implements a constrained gradient search procedure using an exponential objective function based on aggregate signal to noise ratio and an extended line search in the gradient direction. The procedure is tested against a manually generated initial scenario and appears to work satisfactorily. In this test it was assumed that alternate channels use orthogonal polarizations at any one satellite location.
Vibration Power Flow In A Constrained Layer Damping Cylindrical Shell
NASA Astrophysics Data System (ADS)
Wang, Yun; Zheng, Gangtie
2012-07-01
In this paper, the vibration power flow in a constrained layer damping (CLD) cylindrical shell is investigated using the wave propagation approach. The dynamic equations of the shell are derived from the Hamilton principle in conjunction with the Donnell shell assumption. With these equations, the dynamic response of the system under a circumferential line cosine harmonic exciting force is obtained by employing the Fourier transform and the residue theorem. Both the vibration power flow input to the system and that transmitted along the shell's axial direction are studied. The results show that the input power flow varies with the driving frequency and the circumferential mode order. The constrained damping layer markedly restricts the exciting force from feeding power into the base shell, especially for a thicker viscoelastic layer, a thicker or stiffer constraining layer (CL), and a higher circumferential mode order, and it rapidly attenuates the vibration power flow transmitted along the axial direction of the base shell.
Child-Care Provider Survey Reveals Cost Constrains Quality. Research Brief. Volume 96, Number 5
ERIC Educational Resources Information Center
Public Policy Forum, 2008
2008-01-01
A survey of 414 child care providers in southeastern Wisconsin reveals that cost as well as low wages and lack of benefits for workers can constrain providers from pursuing improvements to child-care quality. Of survey respondents, approximately half of whom are home-based and half center-based, 13% have at least three of five structural factors…
Online Coregularization for Multiview Semisupervised Learning
Li, Guohui; Huang, Kuihua
2013-01-01
We propose a novel online coregularization framework for multiview semisupervised learning based on the notion of duality in constrained optimization. Using the weak duality theorem, we reduce online coregularization to the task of increasing the dual function. We demonstrate that the existing online coregularization algorithms from previous work can be viewed as approximations of our dual ascending process using gradient ascent. New algorithms are derived based on the idea of ascending the dual function more aggressively. For practical purposes, we also propose two sparse approximation approaches for the kernel representation to reduce the computational complexity. Experiments show that our derived online coregularization algorithms achieve risk and accuracy comparable to offline algorithms while consuming less time and memory. In particular, our online coregularization algorithms are able to deal with concept drift while maintaining a much smaller error rate. This paper paves the way for the design and analysis of online coregularization algorithms. PMID:24194680
Manifold Learning by Preserving Distance Orders.
Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz
2014-03-01
Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.
NASA Technical Reports Server (NTRS)
Tapia, R. A.; Vanrooy, D. L.
1976-01-01
A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
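The squared-variable substitution described above can be sketched as follows; plain projected gradient descent stands in for the report's quasi-Newton iteration, and the step size and iteration count are illustrative assumptions.

```python
# Minimal sketch of the squared-variable substitution for simplex-constrained
# minimization (projected gradient descent stands in for the report's
# quasi-Newton method; lr and steps are illustrative assumptions).

def minimize_on_simplex(grad_f, n, steps=2000, lr=0.05):
    """Minimize f(x) over x >= 0, sum(x) = 1 via x_i = y_i**2.
    The simplex constraints become ||y|| = 1, enforced by renormalizing."""
    y = [1.0 / n ** 0.5] * n             # start at the simplex centre
    for _ in range(steps):
        x = [yi * yi for yi in y]
        g = grad_f(x)                    # gradient of f with respect to x
        # Chain rule: df/dy_i = 2 * y_i * df/dx_i
        gy = [2.0 * yi * gi for yi, gi in zip(y, g)]
        y = [yi - lr * gi for yi, gi in zip(y, gy)]
        norm = sum(yi * yi for yi in y) ** 0.5
        y = [yi / norm for yi in y]      # project back onto the unit sphere
    return [yi * yi for yi in y]
```

For a linear objective f(x) = c·x, the iterate concentrates all mass on the coordinate with the smallest cost, as expected for a vertex solution of the simplex-constrained problem.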
Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun
2016-01-01
In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for the secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to constant, thus is suitable for the the resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, where it has optimal storage overhead while having no backward secrecy. To obtain the backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has the optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for the resource-restrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550
2012-01-01
Background Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC) to assess how well Taiwan constrained hospital-provided medical services in such a system. Methods A custom Excel-VBA routine to record the distances of standard deviations (SDs) from the central line (the mean over the previous 12 months) of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan’s year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. Results ICCs were generated for Taiwan’s year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. Conclusion We recommend using the ICC to annually assess a nation’s year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system. PMID:22587736
Chien, Tsair-Wei; Chou, Ming-Ting; Wang, Wen-Chung; Tsai, Li-Shu; Lin, Weir-Sen
2012-05-15
Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC) to assess how well Taiwan constrained hospital-provided medical services in such a system. A custom Excel-VBA routine to record the distances of standard deviations (SDs) from the central line (the mean over the previous 12 months) of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan's year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. ICCs were generated for Taiwan's year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. We recommend using the ICC to annually assess a nation's year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system.
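A one-way random-effects ICC of the kind used here, ICC(1,1) from a one-way ANOVA, can be sketched as follows; the grouping and numbers in the test are illustrative, not the study's reimbursement data.

```python
# Minimal sketch of a one-way random-effects ICC, ICC(1,1), for gauging
# agreement of repeated measurements across groups (e.g. hospitals).
# The data fed to it here are illustrative, not the study's figures.

def icc_oneway(groups):
    """groups: list of equal-length lists, one list per group."""
    n = len(groups)                      # number of groups
    k = len(groups[0])                   # measurements per group
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    # Between-group and within-group mean squares from one-way ANOVA
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Values near 1 indicate that between-group differences dominate within-group (e.g. year-to-year) variation; values near 0 or below indicate the opposite.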
A constrained registration problem based on Ciarlet-Geymonat stored energy
NASA Astrophysics Data System (ADS)
Derfoul, Ratiba; Le Guyader, Carole
2014-03-01
In this paper, we address the issue of designing a theoretically well-motivated registration model capable of handling large deformations and including geometrical constraints, namely landmark points to be matched, in a variational framework. The theory of linear elasticity is unsuitable in this case, since it assumes small strains and the validity of Hooke's law, so the introduced functional is based on nonlinear elasticity principles. More precisely, the shapes to be matched are viewed as Ciarlet-Geymonat materials. We demonstrate the existence of minimizers of the related functional minimization problem and prove a convergence result when the number of geometric constraints increases. We then describe and analyze a numerical method of resolution based on the introduction of an associated decoupled problem under an inequality constraint, in which an auxiliary variable simulates the Jacobian matrix of the deformation field. A theoretical Γ-convergence result is established. We then provide preliminary 2D results of the proposed matching model for the registration of mouse brain gene expression data to a neuroanatomical mouse atlas.
Estimating the Effects of Damping Treatments on the Vibration of Complex Structures
2012-09-26
[Report front matter garbled in extraction; recoverable section headings: 4.1 Introduction; 4.3 Literature Review; 4.3.1 CLD Theory; 4.3.2 Temperature Profiling; 4.4 Constrained Layer Damping Analysis; 4.5 Results.] Constrained layer damping (CLD) treatment systems are widely used in complex structures to dissipate vibrational energy; the terms constraining layer, viscoelastic layer, and base layer follow the nomenclature used throughout the CLD literature, with coordinate systems and length scales noted.
Thomaz, Ricardo de Lima; Carneiro, Pedro Cunha; Bonin, João Eliton; Macedo, Túlio Augusto Alves; Patrocinio, Ana Claudia; Soares, Alcimar Barbosa
2018-05-01
Detection of early hepatocellular carcinoma (HCC) can increase survival rates by up to 40%. One-class classifiers can be used for modeling early HCC in multidetector computed tomography (MDCT), but they demand specific knowledge of the set of features that best describes the target class. Although the literature outlines several features for characterizing liver lesions, it is unclear which are most relevant for describing early HCC. In this paper, we introduce an unconstrained genetic algorithm (GA) feature selection approach based on a multi-objective Mahalanobis fitness function to improve the classification performance for early HCC. We compared our approach to a constrained Mahalanobis function and two other unconstrained functions using Welch's t-test and Gaussian Data Descriptors. The performance of each fitness function was evaluated by cross-validating a one-class SVM. The results show that the proposed multi-objective Mahalanobis fitness function is capable of significantly reducing data dimensionality (96.4%) and improving one-class classification of early HCC (0.84 AUC). Furthermore, the results provide strong evidence that intensity features extracted at the arterial-to-portal and arterial-to-equilibrium phases are important for classifying early HCC.
Consistent description of kinetic equation with triangle anomaly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu Shi; Gao Jianhua; Wang Qun
2011-05-01
We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge and energy-momentum conservation equations. In general, an anomalous source term is necessary to ensure that the equations for charge and energy-momentum conservation are satisfied and that the correction terms of the distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading-order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in the one-charge and two-charge cases by solving the constraining equations.
An iterative algorithm for L1-TV constrained regularization in image restoration
NASA Astrophysics Data System (ADS)
Chen, K.; Loli Piccolomini, E.; Zama, F.
2015-11-01
We consider the problem of restoring blurred images affected by impulsive noise. The adopted method restores the images by solving a sequence of constrained minimization problems where the data fidelity function is the ℓ1 norm of the residual and the constraint, chosen as the image Total Variation, is automatically adapted to improve the quality of the restored images. Although this approach is general, we report here the case of vectorial images where the blurring model involves contributions from the different image channels (cross channel blur). A computationally convenient extension of the Total Variation function to vectorial images is used and the results reported show that this approach is efficient for recovering nearly optimal images.
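The two ingredients of the constrained model, the ℓ1 data-fidelity term and the Total Variation constraint, can be sketched for a scalar (single-channel) image as follows; the solver that adapts the TV bound is not shown, and the anisotropic discretization is one common choice among several.

```python
# Minimal sketch of the two terms of the L1-TV constrained model for a
# single-channel image stored as a list of rows (the adaptive solver and
# the vectorial TV extension of the paper are not shown).

def l1_residual(u, f):
    """ℓ1 data-fidelity term ||u - f||_1 (identity blur for simplicity)."""
    return sum(abs(a - b) for ru, rf in zip(u, f) for a, b in zip(ru, rf))

def total_variation(u):
    """Anisotropic discrete TV: sum of absolute forward differences."""
    rows, cols = len(u), len(u[0])
    tv = 0.0
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows:
                tv += abs(u[i + 1][j] - u[i][j])   # vertical difference
            if j + 1 < cols:
                tv += abs(u[i][j + 1] - u[i][j])   # horizontal difference
    return tv
```

The restoration method then minimizes the ℓ1 residual of the blurred image subject to a bound on the TV value, with the bound adapted across the sequence of subproblems.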
Learning Quantitative Sequence-Function Relationships from Massively Parallel Experiments
NASA Astrophysics Data System (ADS)
Atwal, Gurinder S.; Kinney, Justin B.
2016-03-01
A fundamental aspect of biological information processing is the ubiquity of sequence-function relationships—functions that map the sequence of DNA, RNA, or protein to a biochemically relevant activity. Most sequence-function relationships in biology are quantitative, but only recently have experimental techniques for effectively measuring these relationships been developed. The advent of such "massively parallel" experiments presents an exciting opportunity for the concepts and methods of statistical physics to inform the study of biological systems. After reviewing these recent experimental advances, we focus on the problem of how to infer parametric models of sequence-function relationships from the data produced by these experiments. Specifically, we retrace and extend recent theoretical work showing that inference based on mutual information, not the standard likelihood-based approach, is often necessary for accurately learning the parameters of these models. Closely connected with this result is the emergence of "diffeomorphic modes"—directions in parameter space that are far less constrained by data than likelihood-based inference would suggest. Analogous to Goldstone modes in physics, diffeomorphic modes arise from an arbitrarily broken symmetry of the inference problem. An analytically tractable model of a massively parallel experiment is then described, providing an explicit demonstration of these fundamental aspects of statistical inference. This paper concludes with an outlook on the theoretical and computational challenges currently facing studies of quantitative sequence-function relationships.
Another procedure for the preliminary ordering of loci based on two point lod scores.
Curtis, D
1994-01-01
Because of the difficulty of performing full likelihood analysis over multiple loci and the large number of possible orders, a number of methods have been proposed for quickly evaluating orders and, to a lesser extent, for generating good orders. A new method is proposed that uses a function that is moderately laborious to compute: the sum of lod scores between all pairs of loci. This function can be smoothly minimized by initially allowing the loci to be placed anywhere in space and only subsequently constraining them to lie along a one-dimensional map. Application of this approach to sample data suggests that it has promise and might usefully be combined with other methods when loci need to be ordered.
Optimal consensus algorithm integrated with obstacle avoidance
NASA Astrophysics Data System (ADS)
Wang, Jianan; Xin, Ming
2013-01-01
This article proposes a new consensus algorithm for the networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.
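The baseline consensus behaviour for networked single-integrator agents can be sketched as follows; this shows only the standard Laplacian-based consensus law, without the article's obstacle-avoidance cost or optimal-control weighting, and the step size is an illustrative assumption.

```python
# Minimal sketch of the standard consensus law for single-integrator agents,
# x_i' = -sum_{j in N(i)} (x_i - x_j), discretized with an Euler step.
# The obstacle-avoidance and control-effort terms of the article are omitted.

def consensus_step(x, neighbors, dt=0.1):
    """One Euler step; x is a list of scalar agent states, neighbors maps
    each agent index to the indices it communicates with."""
    return [xi - dt * sum(xi - x[j] for j in neighbors[i])
            for i, xi in enumerate(x)]
```

For an undirected communication topology, the average state is preserved at every step and all agents converge to it, which is the behaviour the optimal consensus component must guarantee.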
Evolutionary morphology of the Tenrecoidea (Mammalia) hindlimb skeleton.
Salton, Justine A; Sargis, Eric J
2009-03-01
The tenrecs of Central Africa and Madagascar provide an excellent model for exploring adaptive radiation and functional aspects of mammalian hindlimb form. The pelvic girdle, femur, and crus of 13 tenrecoid species, and four species from the families Solenodontidae, Macroscelididae, and Erinaceidae, were examined and measured. Results from qualitative and quantitative analyses demonstrate remarkable diversity in several aspects of knee and hip joint skeletal form that are supportive of function-based hypotheses, and consistent with studies on nontenrecoid eutherian postcranial adaptation. Locomotor specialists within Tenrecoidea exhibit suites of characteristics that are widespread among eutherians with similar locomotor behaviors. Furthermore, several characters that are constrained at the subfamily level were identified. Such characters are more indicative of postural behavior than locomotor behavior. Copyright 2008 Wiley-Liss, Inc.
Probabilistic Modeling of Aircraft Trajectories for Dynamic Separation Volumes
NASA Technical Reports Server (NTRS)
Lewis, Timothy A.
2016-01-01
With a proliferation of new and unconventional vehicles and operations expected in the future, the ab initio airspace design will require new approaches to trajectory prediction for separation assurance and other air traffic management functions. This paper presents an approach to probabilistic modeling of the trajectory of an aircraft when its intent is unknown. The approach uses a set of feature functions to constrain a maximum entropy probability distribution based on a set of observed aircraft trajectories. This model can be used to sample new aircraft trajectories to form an ensemble reflecting the variability in an aircraft's intent. The model learning process ensures that the variability in this ensemble reflects the behavior observed in the original data set. Computational examples are presented.
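The core of such a maximum entropy model, matching model feature expectations to empirical ones by gradient ascent, can be sketched in a toy discrete setting; scalar outcomes stand in for full trajectories, and the feature functions, learning rate, and iteration count are illustrative assumptions.

```python
import math

# Toy sketch of maximum entropy fitting: p(x) ∝ exp(Σ_k λ_k f_k(x)) over a
# discrete outcome set, with λ adjusted by gradient ascent so that model
# feature expectations match the empirical ones. Trajectories would replace
# the scalar outcomes in the paper's setting.

def fit_maxent(outcomes, feats, data, steps=500, lr=0.5):
    """Return the fitted probability of each outcome."""
    lam = [0.0] * len(feats)
    emp = [sum(f(x) for x in data) / len(data) for f in feats]
    for _ in range(steps):
        w = [math.exp(sum(l * f(x) for l, f in zip(lam, feats)))
             for x in outcomes]
        z = sum(w)
        model = [sum(wi * f(x) for wi, x in zip(w, outcomes)) / z
                 for f in feats]
        # Log-likelihood gradient: empirical minus model expectation
        lam = [l + lr * (e - m) for l, e, m in zip(lam, emp, model)]
    w = [math.exp(sum(l * f(x) for l, f in zip(lam, feats)))
         for x in outcomes]
    z = sum(w)
    return [wi / z for wi in w]
```

Sampling from the fitted distribution then yields an ensemble whose variability matches the constraints imposed by the observed feature statistics.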
Tramontano, A; Bianchi, E; Venturini, S; Martin, F; Pessi, A; Sollazzo, M
1994-03-01
Conformationally constraining selectable peptides onto a suitable scaffold that enables their conformation to be predicted or readily determined by experimental techniques would considerably boost the drug discovery process by reducing the gap between the discovery of a peptide lead and the design of a peptidomimetic with a more desirable pharmacological profile. With this in mind, we designed the minibody, a 61-residue beta-protein aimed at retaining some desirable features of immunoglobulin variable domains, such as tolerance to sequence variability in selected regions of the protein and predictability of the main chain conformation of the same regions, based on the 'canonical structures' model. To test the ability of the minibody scaffold to support functional sites we also designed a metal binding version of the protein by suitably choosing the sequences of its loops. The minibody was produced both by chemical synthesis and expression in E. coli and characterized by size exclusion chromatography, UV CD (circular dichroism) spectroscopy and metal binding activity. All our data supported the model, but a more detailed structural characterization of the molecule was impaired by its low solubility. We were able to overcome this problem both by further mutagenesis of the framework and by addition of a solubilizing motif. The minibody is being used to select constrained human IL-6 peptidic ligands from a library displayed on the surface of the f1 bacteriophage.
Nims, Robert J; Cigan, Alexander D; Durney, Krista M; Jones, Brian K; O'Neill, John D; Law, Wing-Sum A; Vunjak-Novakovic, Gordana; Hung, Clark T; Ateshian, Gerard A
2017-08-01
When cultured with sufficient nutrient supply, engineered cartilage synthesizes proteoglycans rapidly, producing an osmotic swelling pressure that destabilizes immature collagen and prevents the development of a robust collagen framework, a hallmark of native cartilage. We hypothesized that mechanically constraining the proteoglycan-induced tissue swelling would enhance construct functional properties through the development of a more stable collagen framework. To test this hypothesis, we developed a novel "cage" growth system to mechanically prevent tissue constructs from swelling while ensuring adequate nutrient supply to the growing construct. The effectiveness of constrained culture was examined by testing constructs embedded within two different scaffolds: agarose and cartilage-derived matrix hydrogel (CDMH). Constructs were seeded with immature bovine chondrocytes and cultured under free swelling (FS) conditions for 14 days with transforming growth factor-β before being placed into a constraining cage for the remainder of culture. Controls were cultured under FS conditions throughout. Agarose constructs cultured in cages did not expand after the day 14 caging while FS constructs expanded to 8 × their day 0 weight after 112 days of culture. In addition to the physical differences in growth, by day 56, caged constructs had higher equilibrium (agarose: 639 ± 179 kPa and CDMH: 608 ± 257 kPa) and dynamic compressive moduli (agarose: 3.4 ± 1.0 MPa and CDMH 2.8 ± 1.0 MPa) than FS constructs (agarose: 193 ± 74 kPa and 1.1 ± 0.5 MPa and CDMH: 317 ± 93 kPa and 1.8 ± 1.0 MPa for equilibrium and dynamic properties, respectively). Interestingly, when normalized to final day wet weight, cage and FS constructs did not exhibit differences in proteoglycan or collagen content. However, caged culture enhanced collagen maturation through the increased formation of pyridinoline crosslinks and improved collagen matrix stability as measured by α-chymotrypsin solubility. 
These findings demonstrate that physically constrained culture of engineered cartilage constructs improves functional properties through improved collagen network maturity and stability. We anticipate that constrained culture may benefit other reported engineered cartilage systems that exhibit a mismatch in proteoglycan and collagen synthesis.
Zhai, Di-Hua; Xia, Yuanqing
2018-02-01
This paper addresses adaptive control for task-space teleoperation systems with a constrained predefined synchronization error, for which a novel switched control framework is investigated. Based on the multiple Lyapunov-Krasovskii functionals method, the stability of the resulting closed-loop system is established in the sense of state-independent input-to-output stability. Compared with previous work, the developed method can simultaneously handle unknown kinematics/dynamics, asymmetric varying time delays, and prescribed performance control in a unified framework. It is shown that the developed controller can guarantee the prescribed transient-state and steady-state synchronization performance between the master and slave robots, which is demonstrated by a simulation study.
Optical study of the DAFT/FADA galaxy cluster survey
NASA Astrophysics Data System (ADS)
Martinet, N.; Durret, F.; Clowe, D.; Adami, C.
2013-11-01
DAFT/FADA (Dark energy American French Team) is a large survey of ˜90 high redshift (0.4
Modeling and vibration control of the flapping-wing robotic aircraft with output constraint
NASA Astrophysics Data System (ADS)
He, Wei; Mu, Xinxing; Chen, Yunan; He, Xiuyu; Yu, Yao
2018-06-01
In this paper, we propose boundary control for suppressing undesired vibrations of the flapping-wing robotic aircraft (FWRA) subject to an output constraint. We also present the dynamics of the flexible wing of the FWRA, whose governing equations and boundary conditions are partial differential equations (PDEs) and ordinary differential equations (ODEs), respectively. An energy-based integral barrier Lyapunov function (IBLF) is introduced to analyze the system stability and prevent violation of the output constraint. Under the proposed boundary controller, the distributed states of the system remain within the constrained spaces. IBLF-based boundary controls are then proposed to guarantee the stability of the FWRA in the presence of the output constraint.
Comment on "Nonuniqueness of algebraic first-order density-matrix functionals"
NASA Astrophysics Data System (ADS)
Gritsenko, O. V.
2018-02-01
Wang and Knowles (WK) [Phys. Rev. A 92, 012520 (2015), 10.1103/PhysRevA.92.012520] have given a counterexample to the representation, conventional in reduced density-matrix functional theory, of the second-order reduced density matrix (2RDM) Γij,kl in the basis of the natural orbitals as a function Γij,kl(n) of the orbital occupation numbers (ONs) ni. The observed nonuniqueness of Γij,kl for prototype systems of different symmetry has been interpreted as the inherent inability of ON functions to reproduce the 2RDM, due to the insufficient information contained in the 1RDM spectrum. In this Comment, it is argued that, rather than totally invalidating Γij,kl(n), the WK example exposes its symmetry dependence, which, like the analogous dependence previously established in density functional theory, is demonstrated with a general formulation based on the Levy constrained search.
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing, and security inspection. Material decomposition is an important issue in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging amplifies noise and causes artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of a general optimization problem, total variation (TV) minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem within the framework of ADMM. Validation is performed on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
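As a concrete illustration of the ADMM machinery this abstract refers to, the sketch below solves a 1-D total-variation-regularized denoising problem by splitting z = Dx and alternating a linear solve, a soft-threshold, and a dual update. This is a toy stand-in for the full spectral-CT decomposition objective (which couples multiple coefficient images and a forward model); the function names, penalty weight, and data are illustrative assumptions.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def tv_admm(y, lam=1.0, rho=2.0, iters=200):
    """Minimize 0.5*||x - y||^2 + lam*||Dx||_1 (1-D total variation) with
    ADMM, splitting z = Dx; a toy analogue of the TV-constrained
    decomposition objective described in the abstract."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # finite-difference operator (n-1, n)
    x = y.copy()
    z = D @ x
    u = np.zeros(n - 1)                     # scaled dual variable
    A = np.eye(n) + rho * D.T @ D           # normal matrix of the x-update (fixed)
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)      # proximal step for the l1 term
        u += D @ x - z                      # dual ascent
    return x

# a noisy step signal is recovered as nearly piecewise-constant
rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.standard_normal(40)
x = tv_admm(y, lam=0.5)
```

The same alternating structure carries over to the imaging case, where the x-update becomes a (weighted) least-squares image solve and the threshold acts on spatial gradients of each coefficient image.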
NASA Astrophysics Data System (ADS)
Anderson, C.; Bond-Lamberty, B. P.; Huang, M.; Xu, Y.; Stegen, J.
2016-12-01
Ecosystem composition is a key attribute of terrestrial ecosystems, influencing the fluxes of carbon, water, and energy between the land surface and the atmosphere. The description of current ecosystem composition has traditionally come from relatively few ground-based inventories of the plant canopy, but these are spatially limited and do not provide a comprehensive picture of ecosystem composition at regional or global scales. In this analysis, imaging spectrometry measurements, collected as part of the HyspIRI Preparatory Mission, are used to provide spatially-resolved estimates of plant functional type composition, providing an important constraint on terrestrial biosphere model predictions of carbon, water and energy fluxes across the heterogeneous landscapes of the Californian Sierras. These landscapes include oak savannas, mid-elevation mixed pines, fir-cedar forests, and high elevation pines. Our results show that imaging spectrometry measurements can be successfully used to estimate regional-scale variation in ecosystem composition and the resulting spatial heterogeneity in patterns of carbon, water and energy fluxes and ecosystem dynamics. Simulations at four flux tower sites within the study region yield patterns of seasonal and inter-annual variation in carbon and water fluxes with accuracy comparable to simulations initialized from ground-based inventory measurements. Finally, results indicate that during the 2012-2015 Californian drought, regional net carbon fluxes fell by 84%, evaporation and transpiration fluxes fell by 53% and 33%, respectively, and sensible heat increased by 51%. This study provides a framework for assimilating near-future global satellite imagery estimates of ecosystem composition with terrestrial biosphere models, constraining and improving their predictions of large-scale ecosystem dynamics and functioning.
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for the analysis of brain images to assist the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. In practice, multiple time-points of data are often available in brain imaging applications, which can be used in longitudinal analysis methods to better uncover disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aimed at longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model using imaging data from multiple time-points, where a group regularization term is first employed to group together the weights for the same brain region across different time-points. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function: a fused smoothness term, which requires that the differences between two successive weight vectors from adjacent time-points be small, and an output smoothness term, which requires that the differences between the outputs of two successive models from adjacent time-points also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on the ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
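The composite objective described in this abstract — a data-fit term per time-point, an L2,1 group penalty tying each brain region's weights across time, a fused smoothness term on successive weight vectors, and an output smoothness term — can be written down compactly. The sketch below is an illustrative reading of the abstract, not the paper's exact formulation: the penalty weights, the choice of squared differences for the smoothness terms, and the function name are assumptions.

```python
import numpy as np

def tgs_objective(W, Xs, ys, lam_g=0.1, lam_f=0.1, lam_o=0.1):
    """W is (d regions) x (T time-points); Xs[t], ys[t] are the design matrix
    and targets at time-point t. Returns: data fit + L2,1 group penalty over
    regions + fused smoothness on adjacent weight vectors + output smoothness
    on adjacent models, following the abstract's description."""
    T = W.shape[1]
    fit = sum(0.5 * np.sum((Xs[t] @ W[:, t] - ys[t]) ** 2) for t in range(T))
    group = lam_g * np.sum(np.linalg.norm(W, axis=1))     # rows = brain regions
    fused = lam_f * sum(np.sum((W[:, t + 1] - W[:, t]) ** 2) for t in range(T - 1))
    output = lam_o * sum(np.sum((Xs[t] @ (W[:, t + 1] - W[:, t])) ** 2)
                         for t in range(T - 1))
    return fit + group + fused + output

# tiny example: 2 time-points, 3 regions, 5 subjects per time-point
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((5, 3)) for _ in range(2)]
ys = [rng.standard_normal(5) for _ in range(2)]
W0 = np.zeros((3, 2))
base = tgs_objective(W0, Xs, ys)   # all penalties vanish at W = 0
```

Any proximal or accelerated gradient scheme that handles the non-smooth L2,1 term could then be used to minimize this objective, in the spirit of the efficient algorithm the paper reports.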
NASA Astrophysics Data System (ADS)
Antonarakis, A. S.; Bogan, S.; Moorcroft, P. R.
2017-12-01
Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya
2018-01-30
Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof-of-concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
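Because both the sensitivity and the rate of positive predictions are monotone in the classification threshold, the constrained choice of threshold for a fixed scoring function reduces to a one-dimensional scan. The sketch below illustrates only that thresholding step (the paper's method also learns the ensemble combination weights); the function name and tie-handling are simplifications.

```python
import numpy as np

def min_ppr_threshold(scores, labels, min_sens=0.9):
    """Scan thresholds in descending score order and return the first (hence
    smallest positive-prediction-rate) threshold whose sensitivity on the
    labeled data reaches min_sens. Assumes at least one positive label and,
    for simplicity, ignores ties in the scores."""
    order = np.argsort(-scores)
    s, y = scores[order], labels[order]
    n_pos = y.sum()
    tp = 0
    for k in range(len(s)):                  # classify the top-(k+1) as positive
        tp += y[k]
        if tp / n_pos >= min_sens:
            return s[k], (k + 1) / len(s)    # threshold, positive-prediction rate
    return s[-1], 1.0                        # constraint unattainable: flag everyone

# toy risk scores and seroconversion labels
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])
labels = np.array([1, 1, 0, 1, 0, 0])
thr, ppr = min_ppr_threshold(scores, labels, min_sens=0.66)
```

The other problems named in the abstract (maximum sensitivity under a PPR cap, Neyman-Pearson) follow the same monotone-scan pattern with the roles of objective and constraint exchanged.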
PREDICTING CME EJECTA AND SHEATH FRONT ARRIVAL AT L1 WITH A DATA-CONSTRAINED PHYSICAL MODEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hess, Phillip; Zhang, Jie, E-mail: phess4@gmu.edu
2015-10-20
We present a method for predicting the in situ arrival of a coronal mass ejection (CME) flux rope, as well as of the sheath of solar wind plasma accumulated ahead of the driver. For faster CMEs, the front of this sheath will be a shock. The method is based upon separate geometrical measurements of the CME ejecta and sheath. These measurements are used to constrain a drag-based model, improved by including both a height dependence and accurate de-projected velocities. We also constrain the geometry of the model to determine the error introduced as a function of the deviation of the CME nose from the Sun-Earth line. The standoff distance of the CME in the heliosphere is also calculated, fit, and combined with the ejecta model to determine the sheath arrival. Combining these factors allows us to create predictions for both fronts at the L1 point and compare them against observations. We demonstrate an ability to predict the sheath arrival with an average error of under 3.5 hr, with an rms error of about 1.58 hr. For the ejecta the error is less than 1.5 hr, with an rms error within 0.76 hr. We also discuss the physical implications of our model for CME expansion and density evolution. We show the power of our method with ideal data and demonstrate the practical implications of having a permanent L5 observer with space weather forecasting capabilities, while also discussing the limitations of the method that will have to be addressed in order to create a real-time forecasting tool.
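A minimal version of the drag-based kernel underlying such arrival predictions integrates dv/dt = -γ(v - w)|v - w|, relaxing the CME front speed toward the ambient solar-wind speed w. The constant γ below is only a typical order of magnitude, and the height-dependent drag and sheath standoff fit of the paper's model are deliberately omitted; all parameter values are illustrative.

```python
def drag_arrival(r0_km, v0, w=400.0, gamma=0.2e-7, r_target=1.496e8, dt=60.0):
    """Integrate the standard drag-based equation dv/dt = -gamma*(v-w)*|v-w|
    (speeds in km/s, gamma in 1/km, time step dt in s) from initial height
    r0_km until the CME front reaches r_target (~1 AU). Simple forward-Euler
    stepping is adequate at this time step."""
    r, v, t = r0_km, v0, 0.0
    while r < r_target:
        a = -gamma * (v - w) * abs(v - w)   # aerodynamic-like drag deceleration
        v += a * dt
        r += v * dt
        t += dt
    return t / 3600.0, v                    # transit time in hours, arrival speed

# a 1000 km/s CME launched from 20 solar radii decelerates toward ~400 km/s wind
t_hr, v_arr = drag_arrival(r0_km=20 * 6.96e5, v0=1000.0)
```

With these illustrative values the transit time comes out in the few-tens-of-hours range typical of fast CMEs; the paper's contribution is constraining γ, the geometry, and the sheath front from the coronagraph measurements.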
Röntsch, Raoul; Schulze, Markus
2015-09-21
We study top quark pair production in association with a Z boson at the Large Hadron Collider (LHC) and investigate the prospects of measuring the couplings of top quarks to the Z boson. To date these couplings have not been constrained in direct measurements. Such a determination will be possible for the first time at the LHC. Our calculation improves previous coupling studies through the inclusion of next-to-leading order (NLO) QCD corrections in production and decays of all unstable particles. We treat top quarks in the narrow-width approximation and retain all NLO spin correlations. To determine the sensitivity of a coupling measurement we perform a binned log-likelihood ratio test based on normalization and shape information of the angle between the leptons from the Z boson decay. The obtained limits account for statistical uncertainties as well as leading theoretical systematics from residual scale dependence and parton distribution functions. We use current CMS data to place the first direct constraints on the ttbZ couplings. We also consider the upcoming high-energy LHC run and find that with 300 inverse fb of data at an energy of 13 TeV the vector and axial ttbZ couplings can be constrained at the 95% confidence level to C_V=0.24^{+0.39}_{-0.85} and C_A=-0.60^{+0.14}_{-0.18}, where the central values are the Standard Model predictions. This is a reduction of uncertainties by 25% and 42%, respectively, compared to an analysis based on leading-order predictions. We also translate these results into limits on dimension-six operators contributing to the ttbZ interactions beyond the Standard Model.
Constrained Multiobjective Biogeography Optimization Algorithm
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of six benchmark problems, and experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591
Moving Forward to Constrain the Shear Viscosity of QCD Matter
Denicol, Gabriel; Monnai, Akihiko; Schenke, Björn
2016-05-26
In this work, we demonstrate that measurements of rapidity differential anisotropic flow in heavy-ion collisions can constrain the temperature dependence of the shear viscosity to entropy density ratio η/s of QCD matter. Comparing results from hydrodynamic calculations with experimental data from the RHIC, we find evidence for a small η/s ≈ 0.04 in the QCD crossover region and a strong temperature dependence in the hadronic phase. A temperature independent η/s is disfavored by the data. We further show that measurements of the event-by-event flow as a function of rapidity can be used to independently constrain the initial state fluctuations in three dimensions and the temperature dependent transport properties of QCD matter.
Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks
Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng
2014-01-01
Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing such effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of the state estimate under a linear constraint is derived to exploit this additional information. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint into the linear case. Comparisons between the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity, as presented in this paper, outperforms the first-order solution, and that the constrained IMM-EKF obtains better estimates than the IMM-EKF without constraints. Another example, on Brownian-motion individual localization, also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraints. PMID:25390408
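The closed-form estimate under a linear constraint that this abstract refers to is the standard minimum-variance projection: the unconstrained estimate is mapped onto the constraint surface D x = d, and nonlinear constraints are first linearized by Taylor expansion about the current estimate. A minimal sketch, with an illustrative two-node spacing constraint (the numbers and node geometry are made up):

```python
import numpy as np

def constrain_estimate(x, P, D, d):
    """Project an unconstrained estimate x with covariance P onto the linear
    constraint D x = d via the minimum-variance projection
    x_c = x - P D^T (D P D^T)^{-1} (D x - d); P_c is the covariance of the
    constrained estimate. A nonlinear constraint c(x) = d would supply
    D = dc/dx evaluated at x (first-order Taylor linearization)."""
    S = D @ P @ D.T
    K = P @ D.T @ np.linalg.inv(S)
    x_c = x - K @ (D @ x - d)
    P_c = P - K @ D @ P
    return x_c, P_c

# example: two on-body nodes whose x-coordinates must stay exactly 1 m apart
x = np.array([0.0, 1.3])         # unconstrained estimates of the two coordinates
P = np.eye(2)
D = np.array([[-1.0, 1.0]])      # constraint: x[1] - x[0] = 1
d = np.array([1.0])
x_c, P_c = constrain_estimate(x, P, D, d)
```

After projection the constraint holds exactly, and the constrained covariance has zero variance along the constraint direction, which is how the geometric prior tightens the filter.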
NASA Astrophysics Data System (ADS)
Liu, Y.; Dedontney, N. L.; Rice, J. R.
2007-12-01
Rate and state friction, as applied to modeling subduction earthquake sequences, routinely predicts postseismic slip. It also predicts spontaneous aseismic slip transients, at least when pore pressure p is highly elevated near and downdip from the stability transition [Liu and Rice, 2007]. Here we address how to make such postseismic and transient predictions more fully compatible with geophysical observations. For example, lab observations can determine the a, b parameters and state-evolution slip distance L of rate and state friction as functions of lithology and temperature and, with the aid of a structural and thermal model of the subduction zone, as functions of downdip distance. Geodetic observations constrain interseismic, postseismic and aseismic transient deformations, which are controlled in the modeling by the distributions of aσ̄ and bσ̄ (parameters which also partly control the seismic rupture phase), where σ̄ = σ - p. Elevated p, controlled by tectonic compression and dehydration, may be constrained by petrologic and seismic observations. The amount of deformation and downdip extent of the slipping zone associated with the spontaneous quasi-periodic transients, as thus far modeled [Liu and Rice, 2007], is generally smaller than that observed during episodes of slow slip events in the northern Cascadia and SW Japan subduction zones. However, the modeling was based on lab data for granite gouge under hydrothermal conditions because data is most complete for that case. We here report modeling based on lab data on dry granite gouge [Stesky, 1975; Lockner et al., 1986], involving no or lessened chemical interaction with water and hence being a possibly closer analog to dehydrated oceanic crust, and limited data on gabbro gouge [He et al., 2007], an expected lithology.
Both data sets show a much less rapid increase of a - b with temperature above the stability transition (~350 °C) than does wet granite gouge; a - b increases to ~0.08 for wet granite at 600 °C, but to only ~0.01 in the dry granite and gabbro cases. We find that the lessened high-T a - b does, for the same σ̄, modestly extend the transient slip episodes further downdip, although a majority of slip is still contributed near and in the updip rate-weakening region. However, postseismic slip, for the same σ̄, propagates much further downdip into the rate-strengthening region. To better constrain the downdip distribution of (a - b)σ̄, and possibly aσ̄ and L, we focus on the geodetically constrained [Hutton et al., 2001] space-time distribution of postseismic slip for the 1995 Mw = 8.0 Colima-Jalisco earthquake. This is a similarly shallow-dipping subduction zone with a thermal profile [Currie et al., 2001] comparable to those that have thus far been shown to exhibit aseismic transients and non-volcanic tremor [Peacock et al., 2002]. We extrapolate the modeled 2-D postseismic slip, following a thrust earthquake with a coseismic slip similar to the 1995 event, to a spatial-temporal 3-D distribution. Surface deformation due to such slips on the thrust fault in an elastic half space is calculated and compared to that observed at western Mexico GPS stations, to constrain the above depth-variable model parameters.
Liu, Aiqin; Jennings, Louise M; Ingham, Eileen; Fisher, John
2015-09-18
The successful development of early-stage cartilage and meniscus repair interventions in the knee requires biomechanical and biotribological understanding of the design of the therapeutic interventions and their tribological function in the natural joint. The aim of this study was to develop and validate a porcine knee model using a whole joint knee simulator for investigation of the tribological function and biomechanical properties of the natural knee, which could then be used to pre-clinically assess the tribological performance of cartilage and meniscal repair interventions prior to in vivo studies. The tribological performance of standard artificial bearings in terms of anterior-posterior (A/P) shear force was determined in a newly developed six degrees of freedom tribological joint simulator. The porcine knee model was then developed and the tribological properties in terms of shear force measurements were determined for the first time for three levels of biomechanical constraint: A/P constrained, spring force semi-constrained and A/P unconstrained conditions. The shear force measurements showed higher values under the A/P constrained condition (predominantly sliding motion) compared to the A/P unconstrained condition (predominantly rolling motion). This indicated that the shear force simulation model was able to differentiate between tribological behaviours when the femoral and tibial bearing was constrained to slide or/and roll. Therefore, this porcine knee model showed the potential capability to investigate the effect of knee structural, biomechanical and kinematic changes, as well as different cartilage substitution therapies, on the tribological function of natural knee joints. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Lunardini, Francesca; Bertucco, Matteo; Casellato, Claudia; Bhanpuri, Nasir; Pedrocchi, Alessandra; Sanger, Terence D.
2015-01-01
Motor speed and accuracy are both affected in childhood dystonia. Thus, deriving a speed-accuracy function is an important metric for assessing motor impairments in dystonia. Previous work in dystonia studied the speed-accuracy trade-off during point-to-point tasks. To achieve a more relevant measurement of functional abilities in dystonia, the present study investigates upper-limb kinematics and electromyographic activity of 8 children with dystonia and 8 healthy children during a trajectory-constrained child-relevant task that emulates self-feeding with a spoon and requires continuous monitoring of accuracy. The speed-accuracy trade-off is examined by changing the spoon size to create different accuracy demands. Results demonstrate that the trajectory-constrained speed-accuracy relation is present in both groups, but it is altered in dystonia in terms of increased slope and offset towards longer movement times. Findings are consistent with the hypothesis of increased signal-dependent noise in dystonia, which may partially explain the slow and variable movements observed in dystonia. PMID:25895910
Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina
2016-01-01
In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints and validation for each model, and compare the models' performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second one, on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation-oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies drinking water to a population of 450,000. Numerical results show that phytoplankton morpho-functional groups more closely represent the growth requirements of each species within the group. Each model's performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for the seasonal dynamics of the phytoplankton community and the main biogeochemical variables over a one-year time horizon are presented and compared for both models, showing the functional group model's enhanced performance. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimization of constrained density functional theory
NASA Astrophysics Data System (ADS)
O'Regan, David D.; Teobaldi, Gilberto
2016-07-01
Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.
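The stationarity result highlighted in this abstract — stable solutions of a linear density constraint occur only at maxima of the energy with respect to the constraint's Lagrange multiplier — can be illustrated with a one-variable toy model. This is purely illustrative and not the cDFT equations themselves: the "energy" E(x) = 0.5 x², the linear "density" constraint x = N, and the ascent scheme are all stand-in assumptions.

```python
def cdft_toy(N_target, lr=0.5, iters=100):
    """For E(x) = 0.5*x**2 with the linear constraint x = N_target, the dual
    W(V) = min_x [E(x) + V*(x - N_target)] is concave in the multiplier V,
    so the constrained solution is found by *maximizing* over V. Since
    dW/dV equals the constraint residual x_min(V) - N_target, gradient
    ascent on V drives the residual to zero."""
    V = 0.0
    for _ in range(iters):
        x_min = -V                      # argmin of 0.5*x**2 + V*x
        V += lr * (x_min - N_target)    # ascend the concave dual
    return V, -V                        # converged multiplier, constrained x

V, x = cdft_toy(2.0)
```

The same logic is what makes automated multiplier optimization in cDFT well posed: the constraint residual is the derivative of the energy with respect to the multiplier, and ascent on a concave dual converges to the unique stationary point when one exists.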
Statistical Issues in Galaxy Cluster Cosmology
NASA Technical Reports Server (NTRS)
Mantz, Adam
2013-01-01
The number and growth of massive galaxy clusters are sensitive probes of cosmological structure formation. Surveys at various wavelengths can detect clusters to high redshift, but the fact that cluster mass is not directly observable complicates matters, requiring us to simultaneously constrain scaling relations of observable signals with mass. The problem can be cast as one of regression, in which the data set is truncated, the (cosmology-dependent) underlying population must be modeled, and strong, complex correlations between measurements often exist. Simulations of cosmological structure formation provide a robust prediction for the number of clusters in the Universe as a function of mass and redshift (the mass function), but they cannot reliably predict the observables used to detect clusters in sky surveys (e.g. X-ray luminosity). Consequently, observers must constrain observable-mass scaling relations using additional data, and use the scaling relation model in conjunction with the mass function to predict the number of clusters as a function of redshift and luminosity.
Retrieving rupture history using waveform inversions in time sequence
NASA Astrophysics Data System (ADS)
Yi, L.; Xu, C.; Zhang, X.
2017-12-01
The rupture history of large earthquakes is generally reconstructed by waveform inversion of seismological records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized with the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green's function. According to the superposition principle, the forward waveforms generated across the fault plane sum to the recorded waveforms after the arrival times are aligned. The slip history is then retrieved by inverting the superposition of all forward waveforms against each corresponding seismological record. Apart from the isolation of the forward waveforms generated by each sub-fault, we also note that these waveforms are gradually and sequentially superimposed in the recorded waveforms. We therefore propose the idea that the rupture model may be separable into sequential rupture times. According to the constrained-waveform-length method emphasized in our previous work, the length of the waveforms used in the inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is the predetermined fault plane, which limits the duration of the rupture, meaning that the waveform inversion is restricted to a pre-set rupture duration. We therefore propose a strategy to invert the rupture process sequentially, progressively shifting the rupture times as the rupture front expands across the fault plane. We have designed a synthetic inversion to test the feasibility of the method. Our test result shows the promise of this idea, which warrants further investigation.
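The forward model described above, where each sub-fault's waveform is the convolution of its source time function with its Green's function and the synthetic seismogram is their aligned sum, can be sketched in a few lines of pure Python (all numbers are invented, not real seismic data):

```python
def convolve(a, b):
    """Discrete linear convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Two sub-faults: triangular STFs (multi-time-window style) and toy
# Green's functions; delays model the rupture-front arrival alignment.
stfs   = [[0.0, 1.0, 0.0], [0.0, 0.5, 1.0, 0.5, 0.0]]
greens = [[1.0, -0.5], [0.5, 0.25]]
delays = [0, 2]          # alignment shifts in samples

n = max(d + len(convolve(s, g)) for s, g, d in zip(stfs, greens, delays))
synthetic = [0.0] * n
for s, g, d in zip(stfs, greens, delays):
    for k, v in enumerate(convolve(s, g)):
        synthetic[d + k] += v   # superposition of aligned sub-fault waveforms
```

The inversion reverses this: given `synthetic` (the data) and `greens`, solve for the STF samples, which enter linearly.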
The Probabilistic Admissible Region with Additional Constraints
NASA Astrophysics Data System (ADS)
Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.
The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. 
The concepts presented are general and can be applied to any measurement scenario; the idea will be illustrated using a short-arc, angles-only observation scenario.
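As a sketch of the hard-constraint case that the PAR generalizes probabilistically, the snippet below filters hypothetical orbit samples with additional constraints, including a perigee-altitude floor; all sample values and thresholds are invented for illustration:

```python
R_EARTH = 6378.0   # Earth radius, km

def admissible(a, e, a_max=45000.0, e_max=0.7, perigee_alt_min=200.0):
    """Hard-constraint admissibility for a (semimajor axis, eccentricity) pair."""
    return (a > 0.0 and                              # bound (negative-energy) orbit
            a <= a_max and e < e_max and
            a * (1.0 - e) - R_EARTH >= perigee_alt_min)  # perigee-altitude floor

# Hypothetical samples (a [km], e) consistent with an observation.
samples = [(7000.0, 0.01), (42164.0, 0.1), (6500.0, 0.05), (50000.0, 0.2)]
car = [s for s in samples if admissible(*s)]   # constrained admissible region
```

In the PAR, the same constraints would instead enter through the joint probability density, weighting samples rather than discarding them outright.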
Robustness of Representative Signals Relative to Data Loss Using Atlas-Based Parcellations.
Gajdoš, Martin; Výtvarová, Eva; Fousek, Jan; Lamoš, Martin; Mikl, Michal
2018-04-24
Parcellation-based approaches are an important part of functional magnetic resonance imaging data analysis. They are a necessary processing step for sorting data in structurally or functionally homogenous regions. Real functional magnetic resonance imaging datasets usually do not cover the atlas template completely; they are often spatially constrained due to the physical limitations of MR sequence settings, the inter-individual variability in brain shape, etc. When using a parcellation template, many regions are not completely covered by actual data. This paper addresses the issue of the area coverage required in real data in order to reliably estimate the representative signal and the influence of this kind of data loss on network analysis metrics. We demonstrate this issue on four datasets using four different widely used parcellation templates. We used two erosion approaches to simulate data loss on the whole-brain level and the ROI-specific level. Our results show that changes in ROI coverage have a systematic influence on network measures. Based on the results of our analysis, we recommend controlling the ROI coverage and retaining at least 60% of the area in order to ensure at least 80% of explained variance of the original signal.
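The coverage experiment above can be sketched as follows: the representative signal is taken as the ROI mean time series (one common choice; the study may use another), erosion is simulated by dropping voxels, and robustness is measured as the variance of the full-coverage signal explained by the eroded one. The voxel time series are synthetic, not real fMRI data:

```python
def mean_signal(voxels):
    """Representative ROI signal: mean across voxel time series."""
    T = len(voxels[0])
    return [sum(v[t] for v in voxels) / len(voxels) for t in range(T)]

def explained_variance(full, eroded):
    """Squared Pearson correlation between two representative signals."""
    n = len(full)
    mf, me = sum(full) / n, sum(eroded) / n
    cov = sum((f - mf) * (e - me) for f, e in zip(full, eroded))
    vf = sum((f - mf) ** 2 for f in full)
    ve = sum((e - me) ** 2 for e in eroded)
    return (cov * cov) / (vf * ve)

voxels = [[1.0, 2.0, 3.0, 2.0],
          [1.1, 2.2, 2.9, 2.1],
          [0.9, 1.8, 3.2, 1.9],
          [5.0, 1.0, 0.5, 4.0]]   # last voxel behaves like noise at the ROI edge
full = mean_signal(voxels)
r2_kept3 = explained_variance(full, mean_signal(voxels[:3]))  # 75% coverage kept
```

Repeating this over erosion levels yields the coverage-versus-explained-variance curve behind the 60%-coverage recommendation.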
On Correspondence of BRST-BFV, Dirac, and Refined Algebraic Quantizations of Constrained Systems
NASA Astrophysics Data System (ADS)
Shvedov, O. Yu.
2002-11-01
The correspondence between BRST-BFV, Dirac, and refined algebraic (group averaging, projection operator) approaches to quantizing constrained systems is analyzed. For the closed-algebra case, it is shown that the component of the BFV wave function corresponding to the maximal (minimal) number of ghosts and antighosts in the Schrödinger representation may be viewed as a wave function in the refined algebraic (Dirac) quantization approach. The Giulini-Marolf group averaging formula for the inner product in the refined algebraic quantization approach is obtained from the Batalin-Marnelius prescription for the BRST-BFV inner product, which should in general be modified due to topological problems. The considered prescription for the correspondence of states is observed to be applicable to the open-algebra case. The refined algebraic quantization approach is then generalized to the case of nontrivial structure functions. A simple example is discussed. The correspondence of observables for different quantization methods is also investigated.
Genetic and Diagnostic Biomarker Development in ASD Toddlers Using Resting-State Functional MRI
by the principal investigators are being mined for ASD relevant biomarkers. Structural and (constrained) functional meta-analyses of previously...ASD and typically developing (TD) individuals. These regions-of-interest will be extended through additional functional meta-analyses, network models will be created, and these models will be applied to primary ASD data .
A BRST formulation for the conic constrained particle
NASA Astrophysics Data System (ADS)
Barbosa, Gabriel D.; Thibes, Ronaldo
2018-04-01
We describe the gauge invariant BRST formulation of a particle constrained to move in a general conic. The model considered constitutes an explicit example of an originally second-class system which can be quantized within the BRST framework. We initially impose the conic constraint by means of a Lagrange multiplier leading to a consistent second-class system which generalizes previous models studied in the literature. After calculating the constraint structure and the corresponding Dirac brackets, we introduce a suitable first-order Lagrangian, the resulting modified system is then shown to be gauge invariant. We proceed to the extended phase space introducing fermionic ghost variables, exhibiting the BRST symmetry transformations and writing the Green’s function generating functional for the BRST quantized model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kuo-Ling; Mehrotra, Sanjay
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
Evidence for hubs in human functional brain networks
Power, Jonathan D; Schlaggar, Bradley L; Lessov-Schlaggar, Christina N; Petersen, Steven E
2013-01-01
Hubs integrate and distribute information in powerful ways due to the number and positioning of their contacts in a network. Several resting state functional connectivity MRI reports have implicated regions of the default mode system as brain hubs; we demonstrate that previous degree-based approaches to hub identification may have identified portions of large brain systems rather than critical nodes of brain networks. We utilize two methods to identify hub-like brain regions: 1) finding network nodes that participate in multiple sub-networks of the brain, and 2) finding spatial locations where several systems are represented within a small volume. These methods converge on a distributed set of regions that differ from previous reports on hubs. This work identifies regions that support multiple systems, leading to spatially constrained predictions about brain function that may be tested in terms of lesions, evoked responses, and dynamic patterns of activity. PMID:23972601
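Method (1) above is commonly quantified with a participation coefficient, P_i = 1 − Σ_s (k_is/k_i)², which is high for nodes whose edges are spread across several sub-networks; whether the authors use exactly this statistic is an assumption here. A toy-network sketch:

```python
# Tiny invented network: two 3-node cliques (modules A and B) plus a
# bridging node 6 whose edges span both modules.
edges = [(0, 1), (0, 2), (1, 2),        # module A clique
         (3, 4), (3, 5), (4, 5),        # module B clique
         (6, 0), (6, 3), (6, 4)]        # node 6 bridges A and B
module = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B', 6: 'A'}

def participation(node):
    """P_i = 1 - sum over modules s of (k_is / k_i)^2."""
    nbrs = [b for a, b in edges if a == node] + [a for a, b in edges if b == node]
    k = len(nbrs)
    per_mod = {}
    for n in nbrs:
        per_mod[module[n]] = per_mod.get(module[n], 0) + 1
    return 1.0 - sum((ks / k) ** 2 for ks in per_mod.values())

p_bridge = participation(6)   # edges split across A and B -> high P
p_within = participation(1)   # edges only inside A -> P = 0
```

A high-degree node buried inside one large system scores low here, which is the paper's point about degree-based hub identification.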
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
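One building block of MRE can be sketched in one dimension: the prior density on a bounded parameter is a truncated exponential whose rate is chosen so that the density's mean equals the prior expected value. The bounds and target mean below are invented, and the full method couples this across all elements of m and the data constraints:

```python
import math

def trunc_exp_mean(beta, lo, hi, n=2000):
    """Mean of p(m) ~ exp(-beta*m) truncated to [lo, hi], via the trapezoid rule."""
    h = (hi - lo) / n
    xs = [lo + i * h for i in range(n + 1)]
    w = [math.exp(-beta * x) for x in xs]
    z = sum(w) - 0.5 * (w[0] + w[-1])                 # normalizer / h
    num = sum(x * wi for x, wi in zip(xs, w)) - 0.5 * (xs[0] * w[0] + xs[-1] * w[-1])
    return num / z

def fit_beta(target_mean, lo=0.0, hi=1.0, b_lo=-60.0, b_hi=60.0):
    """Bisection on beta: the truncated-exponential mean decreases monotonically."""
    for _ in range(80):
        mid = 0.5 * (b_lo + b_hi)
        if trunc_exp_mean(mid, lo, hi) > target_mean:
            b_lo = mid
        else:
            b_hi = mid
    return 0.5 * (b_lo + b_hi)

beta = fit_beta(0.3)   # prior expected value 0.3 on bounds [0, 1]
```

Because the target mean (0.3) lies below the interval midpoint, the fitted rate is positive, concentrating prior mass toward the lower bound.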
Ballet, Steven; Feytens, Debby; Buysse, Koen; Chung, Nga N; Lemieux, Carole; Tumati, Suneeta; Keresztes, Attila; Van Duppen, Joost; Lai, Josephine; Varga, Eva; Porreca, Frank; Schiller, Peter W; Vanden Broeck, Jozef; Tourwé, Dirk
2011-04-14
A screening of conformationally constrained aromatic amino acids as base cores for the preparation of new NK1 receptor antagonists resulted in the discovery of three new NK1 receptor antagonists, 19 [Ac-Aba-Gly-NH-3',5'-(CF(3))(2)-Bn], 20 [Ac-Aba-Gly-NMe-3',5'-(CF(3))(2)-Bn], and 23 [Ac-Tic-NMe-3',5'-(CF(3))(2)-Bn], which were able to counteract the agonist effect of substance P, the endogenous ligand of NK1R. The most active NK1 antagonist of the series, 20 [Ac-Aba-Gly-NMe-3',5'-(CF(3))(2)-Bn], was then used in the design of a novel, potent chimeric opioid agonist-NK1 receptor antagonist, 35 [Dmt-D-Arg-Aba-Gly-NMe-3',5'-(CF(3))(2)-Bn], which combines the N terminus of the established Dmt(1)-DALDA agonist opioid pharmacophore (H-Dmt-D-Arg-Phe-Lys-NH(2)) and 20, the NK1R ligand. The opioid component of the chimeric compound 35, that is, Dmt-D-Arg-Aba-Gly-NH(2) (36), also proved to be an extremely potent and balanced μ and δ opioid receptor agonist with subnanomolar binding and in vitro functional activity.
Validity of strong lensing statistics for constraints on the galaxy evolution model
NASA Astrophysics Data System (ADS)
Matsumoto, Akiko; Futamase, Toshifumi
2008-02-01
We examine the usefulness of strong lensing statistics for constraining the evolution of the number density of lensing galaxies, adopting the values of the cosmological parameters determined by recent Wilkinson Microwave Anisotropy Probe observations. For this purpose, we employ the lens-redshift test proposed by Kochanek and constrain the parameters in two evolution models: a simple power-law model characterized by the power-law indexes νn and νv, and the evolution model of Mitchell et al. based on the cold dark matter structure formation scenario. We use the well-defined lens sample from the Sloan Digital Sky Survey (SDSS), similar in size to the samples used in previous studies. Furthermore, we adopt the velocity dispersion function of early-type galaxies based on SDSS DR1 and DR5. It turns out that the indexes of the power-law model are consistent with previous studies; thus our results indicate mild evolution in the number and velocity dispersion of early-type galaxies out to z = 1. However, we found that the values for p and q used by Mitchell et al. are inconsistent with the presently available observational data. A more complete sample is necessary for a more realistic determination of these parameters.
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features, such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe in detail GATool, the evolutionary algorithm and software package used in this work. We then use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package of the COSY Infinity scientific computing package. We design a model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe in detail REPA, a new constrained optimization method based on repairing, including the properties of its two repairing techniques: REFIND and REPROPT.
We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. We then study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization, which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential. The developed optimization scenarios and tools can be used to approach similar problems.
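Since REPA's repairing techniques are only named here, the sketch below shows a generic repair-based EA rather than REPA itself: infeasible offspring are blended toward a known feasible anchor until the constraint holds. The problem (minimize the squared norm subject to a linear constraint) and all parameters are invented:

```python
import random

random.seed(1)

def objective(x):
    return sum(v * v for v in x)

def feasible(x):
    return sum(x) >= 1.0          # invented linear constraint

def repair(x, anchor=(1.0, 1.0)):
    """Binary search on the blend toward a feasible anchor point."""
    lo, hi = 0.0, 1.0             # fraction of the way toward the anchor
    for _ in range(40):
        t = 0.5 * (lo + hi)
        y = [xi + t * (ai - xi) for xi, ai in zip(x, anchor)]
        if feasible(y):
            hi = t                # feasible: try staying closer to x
        else:
            lo = t
    return [xi + hi * (ai - xi) for xi, ai in zip(x, anchor)]

pop = [[random.uniform(-2.0, 2.0) for _ in range(2)] for _ in range(20)]
pop = [x if feasible(x) else repair(x) for x in pop]
for _ in range(100):
    child = [g + random.gauss(0.0, 0.3) for g in random.choice(pop)]
    if not feasible(child):
        child = repair(child)     # repair instead of penalizing or discarding
    pop.append(child)
    pop.sort(key=objective)
    pop = pop[:20]                # truncation selection

best = pop[0]
```

The appeal of repairing over penalty terms is that every evaluated individual is feasible, so selection pressure acts purely on the objective.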
NASA Astrophysics Data System (ADS)
Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven
2017-04-01
Long-term hydrological data are key to understanding catchment behaviour and to decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, treated as ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not.
The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for an assumed-ungauged basin. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
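The constraint-based rejection idea can be sketched as follows, with a toy linear-reservoir model and invented climate-informed limits on the long-term runoff ratio standing in for the paper's actual model, data and signatures:

```python
import random

random.seed(7)
rain = [random.uniform(0.0, 12.0) for _ in range(365)]   # mm/day, synthetic forcing

def simulate(k, c, precip):
    """Toy linear reservoir: runoff q = k*s, losses up to c*s, storage s."""
    s, q = 0.0, []
    for p in precip:
        s += p
        out = min(s, k * s)                  # runoff release
        s -= out + min(s - out, c * s)       # evaporative/deep losses
        q.append(out)
    return q

lo, hi = 0.35, 0.65   # invented climate-informed limits on the runoff ratio
behavioural = []      # parameter sets that survive the constraint
for _ in range(500):
    k, c = random.uniform(0.05, 0.95), random.uniform(0.05, 0.95)
    ratio = sum(simulate(k, c, rain)) / sum(rain)
    if lo <= ratio <= hi:
        behavioural.append((k, c))
```

The surviving parameter sets form the constrained ensemble; in the paper the same rejection logic is applied with many more signatures and time scales.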
3D deformable image matching: a hierarchical approach over nested subspaces
NASA Astrophysics Data System (ADS)
Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul
2000-06-01
This paper presents a fast hierarchical method to perform dense deformable inter-subject matching of 3D MR images of the brain. To recover the complex morphological variations in neuroanatomy, a hierarchy of 3D deformation fields is estimated by minimizing a global energy function over a sequence of nested subspaces. The nested subspaces, generated from a single scaling function, consist of deformation fields constrained at different scales. The highly nonlinear energy function, describing the interactions between the target and the source images, is minimized using a coarse-to-fine continuation strategy over this hierarchy. The resulting deformable matching method shows low sensitivity to local minima and is able to track large nonlinear deformations with moderate computational load. The performance of the approach is assessed both on simulated 3D transformations and on a real database of 3D brain MR images from different individuals. The method has proven efficient at bringing the principal anatomical structures of the brain into correspondence. An application to atlas-based MRI segmentation, by transporting a labeled segmentation map onto patient data, is also presented.
Li, Da-Peng; Li, Dong-Juan; Liu, Yan-Jun; Tong, Shaocheng; Chen, C L Philip
2017-10-01
This paper deals with the tracking control problem for a class of nonlinear multiple-input multiple-output systems with unknown time-varying delays and full state constraints. To overcome the challenges caused by the simultaneous appearance of unknown time-varying delays and full state constraints, an adaptive control method is presented for such systems for the first time. Appropriate Lyapunov-Krasovskii functionals and a separation technique are employed to eliminate the effect of the unknown time-varying delays. Barrier Lyapunov functions are employed to prevent violation of the full state constraints. Singularity problems are dealt with by introducing a signal function. Finally, it is proven that, with appropriately chosen design parameters, the proposed method guarantees good tracking performance of the system output, keeps all states within the constrained interval, and keeps all closed-loop signals bounded. The practicability of the proposed control technique is demonstrated by a simulation study.
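Barrier Lyapunov functions of the kind mentioned above are typically of log type; a common choice in this literature (an assumption here, as the paper's exact form may differ) for an error coordinate z constrained by |z| < k_b is

```latex
V(z) = \frac{1}{2}\,\ln\frac{k_{b}^{2}}{k_{b}^{2}-z^{2}}, \qquad |z| < k_{b},
```

which is positive definite on the constrained interval and grows without bound as |z| approaches k_b, so keeping V bounded along closed-loop trajectories guarantees the state never violates the constraint.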
ISS observations offer insights into plant function
Stavros, E. Natasha; Schimel, David; Pavlick, Ryan; ...
2017-06-22
Technologies on the International Space Station will provide ~1 year of synchronous observations of ecosystem composition, structure and function, in 2018. Here, we discuss these instruments and how they can be used to constrain global models and improve our understanding of the current state of terrestrial ecosystems.
Dickman, Elizabeth M.; Newell, Jennifer M.; González, María J.; Vanni, Michael J.
2008-01-01
The efficiency of energy transfer through food chains [food chain efficiency (FCE)] is an important ecosystem function. It has been hypothesized that FCE across multiple trophic levels is constrained by the efficiency at which herbivores use plant energy, which depends on plant nutritional quality. Furthermore, the number of trophic levels may also constrain FCE, because herbivores are less efficient in using plant production when they are constrained by carnivores. These hypotheses have not been tested experimentally in food chains with 3 or more trophic levels. In a field experiment manipulating light, nutrients, and food-chain length, we show that FCE is constrained by algal food quality and food-chain length. FCE across 3 trophic levels (phytoplankton to carnivorous fish) was highest under low light and high nutrients, where algal quality was best as indicated by taxonomic composition and nutrient stoichiometry. In 3-level systems, FCE was constrained by the efficiency at which both herbivores and carnivores converted food into production; a strong nutrient effect on carnivore efficiency suggests a carryover effect of algal quality across 3 trophic levels. Energy transfer efficiency from algae to herbivores was also higher in 2-level systems (without carnivores) than in 3-level systems. Our results support the hypothesis that FCE is strongly constrained by light, nutrients, and food-chain length and suggest that carryover effects across multiple trophic levels are important. Because many environmental perturbations affect light, nutrients, and food-chain length, and many ecological services are mediated by FCE, it will be important to apply these findings to various ecosystem types. PMID:19011082
Design of a composite filter realizable on practical spatial light modulators
NASA Technical Reports Server (NTRS)
Rajan, P. K.; Ramakrishnan, Ramachandran
1994-01-01
Hybrid optical correlator systems use two spatial light modulators (SLM's), one at the input plane and the other at the filter plane. Currently available SLM's, such as the deformable mirror device (DMD) and liquid crystal television (LCTV) SLM's, exhibit arbitrarily constrained operating characteristics. Pattern recognition filters designed under the assumption that the SLM's have ideal operating characteristics may not behave as expected when implemented on DMD or LCTV SLM's. It is therefore necessary to incorporate the SLM constraints into the design of the filters. In this report, an iterative method is developed for the design of an unconstrained minimum average correlation energy (MACE) filter. Using this algorithm, a new approach is then developed for the design of an SLM-constrained distortion-invariant filter in the presence of the input SLM. Two different optimization algorithms are used to maximize the objective function during filter synthesis, one based on the simplex method and the other on the Hooke and Jeeves method. The simulated annealing-based filter design algorithm proposed by Khan and Rajan is also refined and improved. The performance of the filter is evaluated in terms of its recognition/discrimination capabilities using computer simulations, and the results are compared with a simulated annealing optimization-based MACE filter. The filters are designed for different LCTV SLM operating characteristics and the correlation responses are compared. The distortion tolerance and false-class image discrimination qualities of the filter are comparable to those of the simulated annealing-based filter, but the new filter design takes about 1/6 of the computer time taken by the simulated annealing filter design.
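The Hooke and Jeeves method mentioned above is a derivative-free pattern search: probe each coordinate from a base point, then attempt a "pattern move" along the direction of improvement. A minimal sketch on a toy quadratic objective (the report's actual filter-synthesis objective is not reproduced here):

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimize f by Hooke-Jeeves pattern search (toy, unconstrained version)."""
    def explore(base, s):
        # Probe +/- s along each coordinate, keeping any improvement.
        x = list(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x, s = list(x0), step
    for _ in range(max_iter):
        y = explore(x, s)
        if f(y) < f(x):
            pattern = [2.0 * yi - xi for xi, yi in zip(x, y)]  # pattern move
            z = explore(pattern, s)
            x = z if f(z) < f(y) else y
        elif s > tol:
            s *= shrink        # no improvement: refine the step size
        else:
            break
    return x

xmin = hooke_jeeves(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2, [0.0, 0.0])
```

Like the simplex method, it needs only objective evaluations, which is why both suit filter synthesis objectives without convenient gradients.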
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based on the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Larger values of this function then have a higher probability of appearing. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time using the proposed quantum-classical hybrid. The result is extended to constrained maxima, with applications to integer programming and the traveling salesman problem (TSP).
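A purely classical caricature of the "objective as probability density" idea is rejection sampling: samples appear with probability proportional to f, and the best sample estimates the global maximum. This is not the report's quantum-classical dynamics, and the objective below is invented:

```python
import math
import random

random.seed(42)

def f(x):
    """Positive objective on [0, 10]; the global maximum is at x = 7.5."""
    return math.exp(-(x - 7.5) ** 2) + 0.5 * math.exp(-(x - 2.0) ** 2)

M = 1.1                         # known upper bound of f on [0, 10]
best_x, best_f = 0.0, f(0.0)
accepted = 0
while accepted < 2000:
    x = random.uniform(0.0, 10.0)
    if random.uniform(0.0, M) < f(x):   # accept with probability f(x) / M
        accepted += 1
        if f(x) > best_f:
            best_x, best_f = x, f(x)
```

Regions where f is large contribute proportionally more accepted samples, so the running best concentrates near the global maximum; the report's claim is that its hybrid dynamics achieves this concentration far more efficiently.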
PANATIKI: A Network Access Control Implementation Based on PANA for IoT Devices
Sanchez, Pedro Moreno; Lopez, Rafa Marin; Gomez Skarmeta, Antonio F.
2013-01-01
Internet of Things (IoT) networks are the pillar of recent novel scenarios, such as smart cities or e-healthcare applications. Among other challenges, these networks cover the deployment and interaction of small devices with constrained capabilities and Internet protocol (IP)-based networking connectivity. These constrained devices usually require connection to the Internet to exchange information (e.g., management or sensing data) or access network services. However, only authenticated and authorized devices can, in general, establish this connection. The so-called authentication, authorization and accounting (AAA) services are in charge of performing these tasks on the Internet. Thus, it is necessary to deploy protocols that allow constrained devices to verify their credentials against AAA infrastructures. The Protocol for Carrying Authentication for Network Access (PANA) has been standardized by the Internet engineering task force (IETF) to carry the Extensible Authentication Protocol (EAP), which provides flexible authentication upon the presence of AAA. To the best of our knowledge, this paper is the first deep study of the feasibility of EAP/PANA for network access control in constrained devices. We provide light-weight versions and implementations of these protocols to fit them into constrained devices. These versions have been designed to reduce the impact in standard specifications. The goal of this work is two-fold: (1) to demonstrate the feasibility of EAP/PANA in IoT devices; (2) to provide the scientific community with the first light-weight interoperable implementation of EAP/PANA for constrained devices in the Contiki operating system (Contiki OS), called PANATIKI. The paper also shows a testbed, simulations and experimental results obtained from real and simulated constrained devices. PMID:24189332
Constraint-Based Local Search for Constrained Optimum Paths Problems
NASA Astrophysics Data System (ADS)
Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal
Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.
De Sá Teixeira, Nuno Alexandre
2016-09-01
The memory for the final position of a moving object which suddenly disappears has been found to be displaced forward, in the direction of motion, and downwards, in the direction of gravity. These phenomena were coined, respectively, Representational Momentum and Representational Gravity. Although both these and similar effects have been systematically linked with the functioning of internal representations of physical variables (e.g. momentum and gravity), serious doubts have been raised about a cognitively based interpretation, favouring instead a major role for oculomotor and perceptual factors which, more often than not, were left uncontrolled or even ignored. The present work aims to determine the degree to which Representational Momentum and Representational Gravity are epiphenomenal to smooth pursuit eye movements. Observers were required to indicate the offset locations of targets moving along systematically varied directions after a variable imposed retention interval. Each participant completed the task twice, under different eye movement instructions: gaze was either constrained or left free to track the targets. A Fourier decomposition analysis of the localization responses was used to disentangle the two phenomena. The results show unambiguously that constraining eye movements significantly eliminates the harmonic components which index Representational Momentum, but has no effect on Representational Gravity or its time course. These findings offer promising prospects for the study of the visual representation of gravity and its neurological substrates.
NASA Astrophysics Data System (ADS)
Pavlick, R.; Schimel, D.
2014-12-01
Dynamic Global Vegetation Models (DGVMs) typically employ only a small set of Plant Functional Types (PFTs) to represent the vast diversity of observed vegetation forms and functioning. There is growing evidence, however, that this abstraction may not adequately represent the observed variation in plant functional traits, which is thought to play an important role for many ecosystem functions and for ecosystem resilience to environmental change. The geographic distribution of PFTs in these models is also often based on empirical relationships between present-day climate and vegetation patterns. Projections of future climate change, however, point toward the possibility of novel regional climates, which could lead to no-analog vegetation compositions incompatible with the PFT paradigm. Here, we present results from the Jena Diversity-DGVM (JeDi-DGVM), a novel traits-based vegetation model, which simulates a large number of hypothetical plant growth strategies constrained by functional tradeoffs, thereby allowing for a more flexible temporal and spatial representation of the terrestrial biosphere. First, we compare simulated present-day geographical patterns of functional traits with empirical trait observations (in-situ and from airborne imaging spectroscopy). The observed trait patterns are then used to improve the tradeoff parameterizations of JeDi-DGVM. Finally, focusing primarily on the simulated leaf traits, we run the model with various amounts of trait diversity. We quantify the effects of these modeled biodiversity manipulations on simulated ecosystem fluxes and stocks for both present-day conditions and transient climate change scenarios. The simulation results reveal that the coarse treatment of plant functional traits by current PFT-based vegetation models may contribute substantial uncertainty regarding carbon-climate feedbacks. 
Further development of trait-based models and further investment in global in-situ and spectroscopic plant trait observations are needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buda, I. G.; Lane, C.; Barbiellini, B.
We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of ’beyond graphene’ compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to the density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.
Phase retrieval using regularization method in intensity correlation imaging
NASA Astrophysics Data System (ADS)
Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin
2014-11-01
Intensity correlation imaging (ICI) can obtain high-resolution images with ground-based low-precision mirrors. In the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image, but the algorithms now used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation. Moreover, the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build the mathematical model of phase retrieval and simplify it into a constrained optimization problem of a multi-dimensional function. A new error function was designed from the noise distribution and prior information using a regularization method. The simulation results show that the regularization method can improve the performance of the phase retrieval algorithm and obtain better images, especially under low-SNR conditions.
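The constrained-optimization view of phase retrieval described above can be illustrated with a minimal sketch (not the authors' code; the 1-D signal, support constraint, and iteration count are invented for illustration): an error-reduction iteration that alternates between enforcing the measured Fourier magnitudes and the object-domain constraints.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D example: recover a nonnegative signal supported on the first
# quarter of the array from its Fourier magnitudes alone.
n = 32
true = np.zeros(n)
true[:8] = rng.uniform(0.5, 1.0, 8)
mag = np.abs(np.fft.fft(true))          # "measured" Fourier magnitudes

x = rng.uniform(0, 1, n)                # random initial guess
support = np.zeros(n, bool)
support[:8] = True

errors = []
for _ in range(200):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))  # enforce measured magnitudes
    x = np.real(np.fft.ifft(X))
    x[~support] = 0.0                   # enforce support constraint
    x = np.maximum(x, 0.0)              # enforce nonnegativity
    errors.append(np.linalg.norm(np.abs(np.fft.fft(x)) - mag))
```

A regularized variant in the spirit of the abstract would add a penalty term reflecting the noise distribution to the object-domain step; stagnation of plain iterations like this one is the failure mode such regularization targets.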
Biologically plausible particulate air pollution mortality concentration-response functions.
Roberts, Steven
2004-01-01
In this article I introduce an alternative method for estimating particulate air pollution mortality concentration-response functions. This method constrains the particulate air pollution mortality concentration-response function to be biologically plausible--that is, a non-decreasing function of the particulate air pollution concentration. Using time-series data from Cook County, Illinois, the proposed method yields more meaningful particulate air pollution mortality concentration-response function estimates with an increase in statistical accuracy. PMID:14998745
Stochastic control system parameter identifiability
NASA Technical Reports Server (NTRS)
Lee, C. H.; Herget, C. J.
1975-01-01
The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.
Averill, Colin; Waring, Bonnie G; Hawkes, Christine V
2016-05-01
Soil moisture constrains the activity of decomposer soil microorganisms, and in turn the rate at which soil carbon returns to the atmosphere. While increases in soil moisture are generally associated with increased microbial activity, historical climate may constrain current microbial responses to moisture. However, it is not known if variation in the shape and magnitude of microbial functional responses to soil moisture can be predicted from historical climate at regional scales. To address this problem, we measured soil enzyme activity at 12 sites across a broad climate gradient spanning 442-887 mm mean annual precipitation. Measurements were made eight times over 21 months to maximize sampling during different moisture conditions. We then fit saturating functions of enzyme activity to soil moisture and extracted half saturation and maximum activity parameter values from model fits. We found that 50% of the variation in maximum activity parameters across sites could be predicted by 30-year mean annual precipitation, an indicator of historical climate, and that the effect is independent of variation in temperature, soil texture, or soil carbon concentration. Based on this finding, we suggest that variation in the shape and magnitude of soil microbial response to soil moisture due to historical climate may be remarkably predictable at regional scales, and this approach may extend to other systems. If historical contingencies on microbial activities prove to be persistent in the face of environmental change, this approach also provides a framework for incorporating historical climate effects into biogeochemical models simulating future global change scenarios. © 2016 John Wiley & Sons Ltd.
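The fitting step described above, a saturating function of enzyme activity versus soil moisture from which half-saturation and maximum-activity parameters are extracted, can be sketched as follows (synthetic data and parameter values are invented for illustration; the study fit field measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Saturating (Michaelis-Menten-style) response of enzyme activity to
# soil moisture, with a maximum-activity and a half-saturation parameter.
def saturating(m, v_max, k_half):
    return v_max * m / (k_half + m)

rng = np.random.default_rng(1)
moisture = np.linspace(0.05, 0.6, 40)    # toy volumetric soil moisture
activity = saturating(moisture, 12.0, 0.15) + rng.normal(0, 0.3, 40)

# Fit the curve and read off the two parameters per site.
(v_max, k_half), _ = curve_fit(saturating, moisture, activity, p0=[10.0, 0.1])
```

Across sites, the study then relates the fitted maximum-activity parameters to 30-year mean annual precipitation.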
Origin and Evolutionary Alteration of the Mitochondrial Import System in Eukaryotic Lineages
Fukasawa, Yoshinori; Oda, Toshiyuki; Tomii, Kentaro
2017-01-01
Protein transport systems are fundamentally important for maintaining mitochondrial function. Nevertheless, mitochondrial protein translocases such as the kinetoplastid ATOM complex have recently been shown to vary in eukaryotic lineages. Various evolutionary hypotheses have been formulated to explain this diversity. To resolve any contradiction, estimating the primitive state and clarifying changes from that state are necessary. Here, we present more likely primitive models of mitochondrial translocases, specifically the translocase of the outer membrane (TOM) and translocase of the inner membrane (TIM) complexes, using scrutinized phylogenetic profiles. We then analyzed the translocases’ evolution in eukaryotic lineages. Based on those results, we propose a novel evolutionary scenario for diversification of the mitochondrial transport system. Our results indicate that presequence transport machinery was mostly established in the last eukaryotic common ancestor, and that primitive translocases already had a pathway for transporting presequence-containing proteins. Moreover, secondary changes including convergent and migrational gains of a presequence receptor in TOM and TIM complexes, respectively, likely resulted from constrained evolution. The nature of a targeting signal can constrain alteration to the protein transport complex. PMID:28369657
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
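After discretizing the unknown measure onto a finite set of atoms, the inner (worst-case-measure) maximization reduces to a linear program over the atom weights. A toy sketch of that reduced problem (illustrative loss values and moment data, not the paper's PDE-constrained setting):

```python
import numpy as np
from scipy.optimize import linprog

# Atoms of the discretized probability measure and the loss that the
# current design incurs at each atom (both invented for illustration).
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
loss = x ** 2

# Worst-case measure: maximize expected loss over all weights p >= 0
# with sum(p) = 1 and a mean that matches the data (mean = 1.0 here).
A_eq = np.vstack([np.ones_like(x), x])
b_eq = np.array([1.0, 1.0])
res = linprog(-loss, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * x.size)
worst_case = -res.fun   # the adversarial expected loss for this design
```

The outer problem would then adjust the control or design to minimize this worst-case value.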
Shock interaction with deformable particles using a constrained interface reinitialization scheme
NASA Astrophysics Data System (ADS)
Sridharan, P.; Jackson, T. L.; Zhang, J.; Balachandar, S.; Thakur, S.
2016-02-01
In this paper, we present axisymmetric numerical simulations of shock propagation in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. We use the Mie-Gruneisen equation of state to describe both the medium and the particle. The numerical method is a finite-volume based solver on a Cartesian grid, that allows for multi-material interfaces and shocks, and uses a novel constrained reinitialization scheme to precisely preserve particle mass and volume. We compute the unsteady inviscid drag coefficient as a function of time, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. We also compute the mass-averaged particle pressure and show that the observed oscillations inside the particle are on the particle-acoustic time scale. Finally, we present simplified point-particle models that can be used for macroscale simulations. In the Appendix, we extend the isothermal or isentropic assumption concerning the point-force models to non-ideal equations of state, thus justifying their use for the current problem.
NASA Astrophysics Data System (ADS)
Jensen, Daniel; Wasserman, Adam; Baczewski, Andrew
The construction of approximations to the exchange-correlation potential for warm dense matter (WDM) is a topic of significant recent interest. In this work, we study the inverse problem of Kohn-Sham (KS) DFT as a means of guiding functional design at zero temperature and in WDM. Whereas the forward problem solves the KS equations to produce a density from a specified exchange-correlation potential, the inverse problem seeks to construct the exchange-correlation potential from specified densities. These two problems require different computational methods and convergence criteria despite sharing the same mathematical equations. We present two new inversion methods based on constrained variational and PDE-constrained optimization methods. We adapt these methods to finite temperature calculations to reveal the exchange-correlation potential's temperature dependence in WDM-relevant conditions. The different inversion methods presented are applied to both non-interacting and interacting model systems for comparison. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94.
Defensive traits exhibit an evolutionary trade-off and drive diversification in ants.
Blanchard, Benjamin D; Moreau, Corrie S
2017-02-01
Evolutionary biologists have long predicted that evolutionary trade-offs among traits should constrain morphological divergence and species diversification. However, this prediction has yet to be tested in a broad evolutionary context in many diverse clades, including ants. Here, we reconstruct an expanded ant phylogeny representing 82% of ant genera, compile a new family-wide trait database, and conduct various trait-based analyses to show that defensive traits in ants do exhibit an evolutionary trade-off. In particular, the use of a functional sting negatively correlates with a suite of other defensive traits including spines, large eye size, and large colony size. Furthermore, we find that several of the defensive traits that trade off with a sting are also positively correlated with each other and drive increased diversification, further suggesting that these traits form a defensive suite. Our results support the hypothesis that trade-offs in defensive traits significantly constrain trait evolution and influence species diversification in ants. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
On the utilization of engineering knowledge in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, P.
1984-01-01
Some current research work conducted at the University of Michigan is described to illustrate efforts to incorporate knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information into a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user in creating configurations that are most desirable from the manufacturing assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis intended for use in the design concept phase, in a way similar to the so-called idea-triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.
Constrained Allocation Flux Balance Analysis
Mori, Matteo; Hwa, Terence; Martin, Olivier C.
2016-01-01
New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, allowing regulation and metabolism to be bridged in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an “ensemble averaging” procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws. PMID:27355325
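The core mechanism, a standard FBA linear program augmented with a single genome-wide allocation constraint, can be sketched on a toy network (all stoichiometry, proteome costs, and the budget below are invented; this is not the paper's E. coli model):

```python
import numpy as np
from scipy.optimize import linprog

# Fluxes v = [uptake, respiration, fermentation, growth], all >= 0.
# Mass balance: uptake = respiration + fermentation;
# toy energy yield: growth = 10*respiration + 2*fermentation.
A_eq = np.array([[1.0, -1.0, -1.0,  0.0],
                 [0.0, 10.0,  2.0, -1.0]])
b_eq = np.zeros(2)

# CAFBA-style addition: one proteome-allocation constraint w.v <= budget
# (respiration carries the largest per-flux cost), plus an uptake bound.
w = np.array([0.01, 0.05, 0.005, 0.0])
A_ub = np.vstack([w, [1.0, 0.0, 0.0, 0.0]])
b_ub = np.array([0.4, 10.0])

res = linprog(c=[0, 0, 0, -1.0], A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
growth = -res.fun   # allocation budget, not uptake, limits growth here
```

With these toy numbers the optimum is purely respiratory with the allocation constraint binding; varying the budget and yields is how crossovers between respiratory and fermentative solutions can be explored.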
Noncoding origins of anthropoid traits and a new null model of transposon functionalization
del Rosario, Ricardo C.H.; Rayan, Nirmala Arul
2014-01-01
Little is known about novel genetic elements that drove the emergence of anthropoid primates. We exploited the sequencing of the marmoset genome to identify 23,849 anthropoid-specific constrained (ASC) regions and confirmed their robust functional signatures. Of the ASC base pairs, 99.7% were noncoding, suggesting that novel anthropoid functional elements were overwhelmingly cis-regulatory. ASCs were highly enriched in loci associated with fetal brain development, motor coordination, neurotransmission, and vision, thus providing a large set of candidate elements for exploring the molecular basis of hallmark primate traits. We validated ASC192 as a primate-specific enhancer in proliferative zones of the developing brain. Unexpectedly, transposable elements (TEs) contributed to >56% of ASCs, and almost all TE families showed functional potential similar to that of nonrepetitive DNA. Three L1PA repeat-derived ASCs displayed coherent eye-enhancer function, thus demonstrating that the “gene-battery” model of TE functionalization applies to enhancers in vivo. Our study provides fundamental insights into genome evolution and the origins of anthropoid phenotypes and supports an elegantly simple new null model of TE exaptation. PMID:25043600
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Q; Zhang, M; Chen, T
Purpose: Variation in the function of different lung regions has so far been ignored in conventional lung cancer treatment planning, which may lead to a higher risk of radiation-induced lung disease. 4DCT-based lung ventilation imaging provides a novel yet convenient approach to lung functional imaging, as 4DCT is routine for lung cancer treatment. Our work aims to evaluate the impact of accounting for spatial heterogeneity in lung function using 4DCT-based lung ventilation imaging for proton and IMRT plans. Methods: Six patients with advanced-stage lung cancer and various tumor locations were retrospectively evaluated for the study. Proton and IMRT plans were designed following identical planning objectives and constraints for each patient. Ventilation images were calculated from the patients' 4DCT using deformable image registration implemented in Velocity AI software based on Jacobian metrics. The lung was delineated into two function-level regions based on ventilation (low and high functional areas). The high functional region was defined as lung ventilation greater than 30%. Dose distributions and statistics in the different lung function areas were calculated for the patients. Results: Variation in the dosimetric statistics of the different function lung regions was observed between the proton and IMRT plans. In all proton plans, high-function lung regions received a lower maximum dose (100.2%–108.9%) than in the IMRT plans (106.4%–119.7%). Interestingly, three of the six proton plans gave a higher mean dose to the high-function lung region than IMRT, by up to 2.2%. Lower mean doses (by up to 14.1%) and maximum doses (by up to 9%) were observed in low-function lung for the proton plans. Conclusion: A systematic approach was developed to generate functional lung ventilation images and use them to evaluate plans. This method holds great promise for functional analysis of the lung during planning. We are currently studying more subjects to evaluate this tool.
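The Jacobian-based ventilation metric mentioned in the Methods can be sketched in 2-D: for a displacement field u(x) from deformable registration, the local fractional volume change is det(I + grad u) - 1, and voxels above the 30% threshold are labeled high-function. The displacement field below is a toy construction, not registration output:

```python
import numpy as np

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx].astype(float)

# Toy displacement: 10% expansion in y on the left, 40% on the right,
# standing in for low- and high-ventilation lung regions.
a = 0.10 + 0.30 * (x > nx / 2)
uy = a * y
ux = np.zeros_like(x)

# Jacobian determinant of the deformation via finite differences.
duy_dy, duy_dx = np.gradient(uy)
dux_dy, dux_dx = np.gradient(ux)
jac = (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx   # det(I + grad u)

ventilation = jac - 1.0              # fractional volume change per voxel
high_function = ventilation > 0.30   # the abstract's 30% threshold
```

In the study this map, computed from 4DCT registration, partitions the lung into the two function-level regions used for dose statistics.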
The origin and dynamic evolution of chemical information transfer
Steiger, Sandra; Schmitt, Thomas; Schaefer, H. Martin
2011-01-01
Although chemical communication is the most widespread form of communication, its evolution and diversity are not well understood. By integrating studies of a wide range of terrestrial plants and animals, we show that many chemicals are emitted, which can unintentionally provide information (cues) and, therefore, act as direct precursors for the evolution of intentional communication (signals). Depending on the content, design and the original function of the cue, there are predictable ways that selection can enhance the communicative function of chemicals. We review recent progress on how efficacy-based selection by receivers leads to distinct evolutionary trajectories of chemical communication. Because the original function of a cue may channel but also constrain the evolution of functional communication, we show that a broad perspective on multiple selective pressures acting upon chemicals provides important insights into the origin and dynamic evolution of chemical information transfer. Finally, we argue that integrating chemical ecology into communication theory may significantly enhance our understanding of the evolution, the design and the content of signals in general. PMID:21177681
The effect of claustrum lesions on human consciousness and recovery of function.
Chau, Aileen; Salazar, Andres M; Krueger, Frank; Cristofori, Irene; Grafman, Jordan
2015-11-01
Crick and Koch proposed that the claustrum plays a crucial role in consciousness. Their proposal was based on the structure and connectivity of the claustrum that suggested it had a role in coordinating a set of diverse brain functions. Given the few human studies investigating this claim, we decided to study the effects of claustrum lesions on consciousness in 171 combat veterans with penetrating traumatic brain injuries. Additionally, we studied the effects of claustrum lesions and loss of consciousness on long-term cognitive abilities. Claustrum damage was associated with the duration, but not frequency, of loss of consciousness, indicating that the claustrum may have an important role in regaining, but not maintaining, consciousness. Total brain volume loss, but not claustrum lesions, was associated with long-term recovery of neurobehavioral functions. Our findings constrain the current understanding of the neurobehavioral functions of the claustrum and its role in maintaining and regaining consciousness. Copyright © 2015 Elsevier Inc. All rights reserved.
Ferraro, Francesco; Kriston-Vizi, Janos; Metcalf, Daniel J.; Martin-Martin, Belen; Freeman, Jamie; Burden, Jemima J.; Westmoreland, David; Dyer, Clare E.; Knight, Alex E.; Ketteler, Robin; Cutler, Daniel F.
2014-01-01
Weibel-Palade bodies (WPBs), endothelial-specific secretory granules that are central to primary hemostasis and inflammation, occur in dimensions ranging between 0.5 and 5 μm. How their size is determined and whether it has a functional relevance are at present unknown. Here, we provide evidence for a dual role of the Golgi apparatus in controlling the size of these secretory carriers. At the ministack level, cisternae constrain the size of nanostructures (“quanta”) of von Willebrand factor (vWF), the main WPB cargo. The ribbon architecture of the Golgi then allows copackaging of a variable number of vWF quanta within the continuous lumen of the trans-Golgi network, thereby generating organelles of different sizes. Reducing the WPB size abates endothelial cell hemostatic function by drastically diminishing platelet recruitment, but, strikingly, the inflammatory response (the endothelial capacity to engage leukocytes) is unaltered. Size can thus confer functional plasticity to an organelle by differentially affecting its activities. PMID:24794632
Constraining Galaxy Evolution With Hubble's Next Generation Spectral Library
NASA Astrophysics Data System (ADS)
Heap, S.; Lindler, D. J.
2009-03-01
We present Hubble's Next Generation Spectral Library, a library of UV-optical spectra (0.2–1.0 μm) of 378 stars. We show that the mid-UV spectrum can be used to constrain the ages and metallicities of high-redshift galaxies presently being observed with large, ground-based telescopes.
Identification of different geologic units using fuzzy constrained resistivity tomography
NASA Astrophysics Data System (ADS)
Singh, Anand; Sharma, S. P.
2018-01-01
Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and geologic separation within the iterative minimization through inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value, using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with those interpreted from borehole information.
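The clustering step, assigning each model cell to a geologic unit by its highest fuzzy membership, can be sketched with a small fuzzy c-means implementation on 1-D toy values (the study clusters cells of a 2-D resistivity model; the data below are invented):

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D data: returns cluster centers and the
    (c x n) membership matrix with columns summing to 1."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(c, x.size))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)                 # weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))                       # standard update
        u /= u.sum(axis=0)
    return centers, u

# Toy log10-resistivity values from two hypothetical geologic units.
rng = np.random.default_rng(3)
res = np.concatenate([rng.normal(1.0, 0.05, 50),
                      rng.normal(3.0, 0.05, 50)])
centers, u = fuzzy_c_means(res)
labels = u.argmax(axis=0)   # geologic unit = highest membership
```

In the tomography setting the same assignment acts as a constraint that nudges each cell's resistivity toward its cluster center during the iterative inversion.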
ERIC Educational Resources Information Center
Bowlin, Melissa S.; McLeer, Dorothy F.; Danielson-Francois, Anne M.
2014-01-01
Evolutionary history and structural considerations constrain all aspects of animal physiology. Constraints on invertebrate locomotion are especially straightforward for students to observe and understand. In this exercise, students use spiders to investigate the concepts of adaptation, structure-function relationships, and trade-offs. Students…
Carroll, Raymond J; Delaigle, Aurore; Hall, Peter
2011-03-01
In many applications we expect that, or wish to know whether, a density function or a regression curve satisfies some specific shape constraint. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y, is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However, in many problems data can be observed only with measurement error, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints that is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.
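The "distance to the constrained fit" idea can be illustrated with a simplified stand-in: instead of reweighting (tilting) the data as the authors do, project a kernel estimate of the regression mean onto the monotone cone with the pool-adjacent-violators algorithm (PAVA) and use the projection distance as the test statistic. This captures the spirit of the method, not the paper's actual tilting estimator; the data below are synthetic.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y."""
    vals, wts = [], []
    for v in y:
        vals.append(float(v)); wts.append(1)
        # merge adjacent blocks while monotonicity is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2 = vals.pop(), wts.pop()
            vals[-1] = (vals[-1] * wts[-1] + v2 * w2) / (wts[-1] + w2)
            wts[-1] += w2
    return np.concatenate([np.full(w, v) for v, w in zip(vals, wts)])

def ksmooth(x, y, h=0.05):
    """Nadaraya-Watson kernel regression estimate on a common grid."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return (K @ y) / K.sum(axis=1)

def monotone_stat(x, y):
    """RMS distance from the smoothed curve to its monotone projection."""
    est = ksmooth(x, y)
    return np.sqrt(np.mean((est - pava(est)) ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
t_mono = monotone_stat(x, x + 0.05 * rng.standard_normal(100))  # monotone truth
t_bump = monotone_stat(x, np.sin(np.pi * x) + 0.05 * rng.standard_normal(100))  # violates
```

A large statistic (as for the rise-and-fall curve) signals a monotonicity violation; in the paper's setting the calibration of "large" would come from the bootstrap.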
NASA Astrophysics Data System (ADS)
Hawkes, A. D.; Horton, B. P.
2007-05-01
Paleoseismologists infer the amount of coseismic subsidence during plate-boundary earthquakes from stratigraphic changes in microfossils across sharp peat-mud and peat-sand contacts. However, the use of lithostratigraphic-based reconstructions is associated with a number of limitations, and these become particularly significant when examining low-amplitude, short-period variations that occur during a plate-boundary earthquake. To address this, paleoecologists working in the coastal zone have recently adopted a transfer-function approach to environmental reconstruction. Continuing subduction of the Juan de Fuca plate beneath the North America plate constitutes a major seismic hazard in the Pacific Northwest. The subduction zone interface presently lacks seismicity. The timing of the last great earthquake along the Cascadia subduction zone (1700 AD) is now well refined by Japanese records of an orphan tsunami (no causal earthquake was felt in Japan) that was generated from an earthquake off the Pacific Northwest on the evening of January 26th, 1700 AD. I will apply the transfer function to modern foraminiferal datasets along coastal Oregon to analyze the fossil record and quantitatively determine the amount of vertical land movement associated with the 1700 AD earthquake event. To date, we have collected 7 modern transects totaling 132 samples from the intertidal zone to the upland. We have also collected 9 cores recording the 1700 AD earthquake. Furthermore, a 4 m vibracore was collected and contains between 3 and 5 potential earthquake horizons. The 1700 AD earthquake in the vibracore shows a distinct litho- and biostratigraphical change representing an instantaneous episode of subsidence of approximately 1 m. However, development and application of the transfer function to such events will provide quantitatively constrained estimates of coseismic land movement.
Measurements that are more accurate are necessary to help modelers develop simulations that are more realistic in order to better assess earthquake and tsunami hazards. This will enable efficient and effective mitigation planning and preparation to minimize the personal and economic costs associated with such hazards.
NASA Astrophysics Data System (ADS)
Mackay, D. S.; Frank, J.; Reed, D.; Whitehouse, F.; Ewers, B. E.; Pendall, E.; Massman, W. J.; Sperry, J. S.
2012-04-01
In woody plant systems transpiration is often the dominant component of total evapotranspiration, and so it is key to understanding water and energy cycles. Moreover, transpiration is tightly coupled to carbon and nutrient fluxes, and so it is also vital to understanding spatial variability of biogeochemical fluxes. However, the spatial variability of transpiration and its links to biogeochemical fluxes, within- and among-ecosystems, has been a challenge to constrain because of complex feedbacks between physical and biological controls. Plant hydraulics provides an emerging theory with the rigor needed to develop testable hypotheses and build useful models for scaling these coupled fluxes from individual plants to regional scales. This theory predicts that vegetative controls over water, energy, carbon, and nutrient fluxes can be determined from the limitation of plant water transport through the soil-xylem-stomata pathway. Limits to plant water transport can be predicted from measurable plant structure and function (e.g., vulnerability to cavitation). We present a next-generation coupled transpiration-biogeochemistry model based on this emerging theory. The model, TREEScav, is capable of predicting transpiration, along with carbon and nutrient flows, constrained by plant structure and function. The model incorporates tightly coupled mechanisms of the demand and supply of water through the soil-xylem-stomata system, with the feedbacks to photosynthesis and utilizable carbohydrates. The model is evaluated by testing it against transpiration and carbon flux data along an elevation gradient of woody plants comprising sagebrush steppe, mid-elevation lodgepole pine forests, and subalpine spruce/fir forests in the Rocky Mountains. The model accurately predicts transpiration and carbon fluxes as measured from gas exchange, sap flux, and eddy covariance towers. 
The results of this work demonstrate that credible spatial predictions of transpiration and related biogeochemical fluxes will be possible at regional scales using relatively easily obtained vegetation structural and functional information.
NASA Astrophysics Data System (ADS)
Ballantyne, David R.
2016-04-01
Deep X-ray surveys have provided a comprehensive and largely unbiased view of AGN evolution stretching back to z˜5. However, it has been challenging to use the survey results to connect this evolution to the cosmological environment that AGNs inhabit. Exploring this connection will be crucial to understanding the triggering mechanisms of AGNs and how these processes manifest in observations at all wavelengths. In anticipation of upcoming wide-field X-ray surveys that will allow quantitative analysis of AGN environments, we present a method to observationally constrain the Conditional Luminosity Function (CLF) of AGNs at a given redshift. Once measured, the CLF allows the calculation of the AGN bias, mean dark matter halo mass, AGN lifetime, halo occupation number, and AGN correlation function, all as a function of luminosity. The CLF can be constrained using a measurement of the X-ray luminosity function and the correlation length at different luminosities. The method is demonstrated at z ≈ 0 and 0.9, and clear luminosity dependence in the AGN bias and mean halo mass is predicted at both redshifts. The results support the idea that there are at least two different modes of AGN triggering: one, at high luminosity, that only occurs in high-mass, highly biased haloes, and one that can occur over a wide range of halo masses and leads to luminosities that are correlated with halo mass. This latter mode dominates at z < 0.9. The CLFs for Type 2 and Type 1 AGNs are also constrained at z ≈ 0, and we find evidence that unobscured quasars are more likely to be found in higher-mass haloes than obscured quasars. Thus, the AGN unification model seems to fail at quasar luminosities.
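The quantities the CLF unlocks follow from weighted averages over the halo population. As a toy illustration (all functional forms and numbers below are hypothetical, not the paper's fits), a luminosity-dependent mean bias can be computed by weighting a halo bias relation by the CLF and the halo mass function:

```python
import numpy as np

logM = np.linspace(11, 15, 400)
M = 10.0 ** logM
n = M ** -1.0                   # toy halo mass function (arbitrary normalisation)
b = 0.5 + (M / 1e13) ** 0.5     # toy halo bias, increasing with mass

def phi(logL, logM, scatter=0.4):
    """Toy CLF: lognormal luminosity at fixed halo mass, with L* ∝ M."""
    logL_star = logM - 2.0      # hypothetical L*-M relation
    return np.exp(-0.5 * ((logL - logL_star) / scatter) ** 2)

def mean_bias(logL):
    """AGN bias at luminosity L: CLF-weighted average of the halo bias."""
    w = phi(logL, logM) * n
    return (w * b).sum() / w.sum()

b_faint, b_bright = mean_bias(10.0), mean_bias(12.5)
```

Because the toy CLF ties brighter AGNs to more massive haloes, the bright-end bias comes out higher, which is the qualitative trend the abstract reports.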
On the nullspace of TLS multi-station adjustment
NASA Astrophysics Data System (ADS)
Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen
2018-07-01
In this article we present an analytic aspect of TLS multi-station least-squares adjustment, with the main focus on the datum problem. Compared with previously published research, the datum problem is theoretically analyzed and solved, with the solution based on a nullspace derivation of the mathematical model. The importance of the datum problem solution lies in a complete description of TLS multi-station adjustment solutions as a set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV norm and the constraint involved in the problem. This characterization of the solution via proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
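PAPA couples two proximity operators with an EM preconditioner for the ECT system matrix, which is beyond a short sketch. As a minimal illustration of the proximity-operator machinery the abstract describes, here is plain proximal-gradient iteration for a nonnegativity-constrained ℓ1-regularized least-squares problem (the problem data are random; this is not the PAPA algorithm itself):

```python
import numpy as np

def prox_nonneg_l1(x, t):
    """Proximity operator of t*||x||_1 restricted to x >= 0: shifted clipping."""
    return np.maximum(x - t, 0.0)

def proximal_gradient(A, y, lam=0.1, step=None, n_iter=500):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 subject to x >= 0."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                # gradient of the smooth term
        x = prox_nonneg_l1(x - step * grad, lam * step)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))               # hypothetical system matrix
x_true = np.zeros(20); x_true[[3, 7]] = [2.0, 1.5]
x_hat = proximal_gradient(A, A @ x_true)
```

The fixed-point structure is the same in spirit: a gradient step followed by a proximity-operator step, iterated to convergence; PAPA additionally preconditions the step with an EM-type matrix.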
NASA Astrophysics Data System (ADS)
Brunner, Philip; Doherty, J.; Simmons, Craig T.
2012-07-01
The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is therefore to be expected that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales have explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation, and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are: (1) Linear analysis provides valuable information on the comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types; its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, vadose zone ET, or soil moisture poorly constrain regional groundwater system forcing functions.
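The linear methodology referenced above rests on the sensitivity (Jacobian) matrix of observations with respect to parameters; a weighted singular value decomposition then ranks which parameter combinations the data constrain. A generic sketch (the sensitivity and noise numbers are invented, not from the study):

```python
import numpy as np

def identifiability(J, noise_sd):
    """Rank parameter combinations by identifiability from a sensitivity matrix.

    J[i, j] = d(observation i)/d(parameter j); rows are scaled by measurement
    accuracy. Large singular values mark well-constrained parameter directions.
    """
    Jw = J / np.asarray(noise_sd, dtype=float)[:, None]  # weight rows by 1/sigma
    _, s, Vt = np.linalg.svd(Jw, full_matrices=False)
    return s, Vt  # singular values and the parameter-space directions

# hypothetical sensitivities: observations inform parameter 2 only weakly
J = np.array([[1.0, 0.2, 0.01],
              [0.9, 0.3, 0.02],
              [0.1, 1.0, 0.05]])
s, Vt = identifiability(J, noise_sd=[0.1, 0.1, 0.5])
```

The direction attached to the smallest singular value (here dominated by the weakly sensed third parameter) is the combination the calibration data cannot resolve, which is exactly the kind of diagnosis conclusion (1) refers to.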
NASA Astrophysics Data System (ADS)
Aumann, T.; Bertulani, C. A.; Schindler, F.; Typel, S.
2017-12-01
An experimentally constrained equation of state of neutron-rich matter is fundamental for the physics of nuclei and the astrophysics of neutron stars, mergers, core-collapse supernova explosions, and the synthesis of heavy elements. To this end, we investigate the potential of constraining the density dependence of the symmetry energy close to saturation density through measurements of neutron-removal cross sections in high-energy nuclear collisions of 0.4 to 1 GeV/nucleon. We show that the sensitivity of the total neutron-removal cross section is high enough that the required accuracy can be reached experimentally with the recent developments of new detection techniques. We quantify two crucial points to minimize the model dependence of the approach and to reach the required accuracy: the contribution to the cross section from inelastic scattering has to be measured separately in order to allow a direct comparison of experimental cross sections to theoretical cross sections based on density functional theory and eikonal theory, and the accuracy of the reaction model should be investigated and quantified by the energy and target dependence of various nucleon-removal cross sections. Our calculations explore the dependence of neutron-removal cross sections on the neutron skin of medium-heavy neutron-rich nuclei, and we demonstrate that the slope parameter L of the symmetry energy could be constrained down to ±10 MeV by such a measurement, given a 2% accuracy of the measured and calculated cross sections.
A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data
NASA Astrophysics Data System (ADS)
MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.
2015-12-01
Predicting the fate of ecosystem carbon (C) stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e., photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is inaccurate parameter values resulting from incomplete model calibration. Solar Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g., Kuppel et al., 2014). The parameters are constrained per Plant Functional Type, as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, carbonyl sulphide (OCS), following the approach of Launois et al. (ACPD, in review).
We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gair, Jonathan R.; Tang, Christopher; Volonteri, Marta
One of the sources of gravitational waves for the proposed space-based gravitational wave detector, the Laser Interferometer Space Antenna (LISA), is the inspiral of compact objects into supermassive black holes in the centers of galaxies: extreme-mass-ratio inspirals (EMRIs). Using LISA observations, we will be able to measure the parameters of each EMRI system detected to very high precision. However, the statistics of the set of EMRI events observed by LISA will be more important in constraining astrophysical models than extremely precise measurements for individual systems. The black holes to which LISA is most sensitive are in a mass range that is difficult to probe using other techniques, so LISA provides an almost unique window onto these objects. In this paper we explore, using Bayesian techniques, the constraints that LISA EMRI observations can place on the mass function of black holes at low redshift. We describe a general framework for approaching inference of this type: using multiple observations in combination to constrain a parametrized source population. Assuming that the scaling of the EMRI rate with the black-hole mass is known and taking a black-hole distribution given by a simple power law, dn/dlnM = A₀(M/M*)^α₀, we find that LISA could measure the parameters to a precision of Δ(ln A₀) ≈ 0.08 and Δ(α₀) ≈ 0.03 for a reference model that predicts ≈1000 events. Even with as few as 10 events, LISA should constrain the slope to a precision ≈0.3, which is the current level of observational uncertainty in the low-mass slope of the black-hole mass function. We also consider a model in which A₀ and α₀ evolve with redshift, but find that EMRI observations alone do not have much power to probe such an evolution.
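The core of this population inference is easy to prototype: given a catalogue of EMRI black-hole masses, the posterior on the power-law slope follows from a normalized power-law likelihood evaluated on a parameter grid. The sketch below uses invented numbers (mass range, event count, true slope) and ignores selection effects and measurement errors, so it is only the skeleton of the paper's framework:

```python
import numpy as np

def sample_powerlaw(alpha, m_lo, m_hi, n, rng):
    """Draw masses from dn/dlnM ∝ M^alpha on [m_lo, m_hi] by inverse CDF (alpha != 0)."""
    x_lo, x_hi = np.log(m_lo), np.log(m_hi)  # work in x = ln M, p(x) ∝ e^{alpha x}
    u = rng.random(n)
    x = np.log(np.exp(alpha * x_lo) + u * (np.exp(alpha * x_hi) - np.exp(alpha * x_lo))) / alpha
    return np.exp(x)

def posterior(alpha_grid, masses, m_lo, m_hi):
    """Grid posterior over the slope with a flat prior and normalized likelihood."""
    x = np.log(masses)
    x_lo, x_hi = np.log(m_lo), np.log(m_hi)
    logp = []
    for a in alpha_grid:
        log_norm = np.log((np.exp(a * x_hi) - np.exp(a * x_lo)) / a)  # ∫ e^{ax} dx
        logp.append(a * x.sum() - len(x) * log_norm)
    logp = np.array(logp)
    post = np.exp(logp - logp.max())
    return post / post.sum()

rng = np.random.default_rng(42)
masses = sample_powerlaw(-0.5, 1e4, 1e7, 500, rng)   # hypothetical EMRI catalogue
grid = np.linspace(-1.5, -0.01, 150)                 # grid avoids alpha = 0 exactly
post = posterior(grid, masses, 1e4, 1e7)
alpha_map = grid[post.argmax()]
```

With a few hundred events the posterior peaks close to the true slope; shrinking the catalogue to ~10 events broadens it substantially, mirroring the precision scaling quoted in the abstract.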
Weighting climate model projections using observational constraints.
Gillett, Nathan P
2015-11-13
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
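The weighting scheme can be sketched directly: each model's projection is weighted by the likelihood of its simulated transient climate response (TCR) under the observationally constrained estimate, and the projection range becomes a weighted quantile. All ensemble values below are invented for illustration, not the IPCC AR5 numbers:

```python
import numpy as np

# hypothetical ensemble: each model's TCR (K) and its projected 2081-2100 warming (K)
tcr = np.array([1.2, 1.5, 1.8, 2.0, 2.3, 2.6])
warming = np.array([1.1, 1.4, 1.7, 1.9, 2.2, 2.5])

# illustrative observationally constrained TCR estimate (mean, std. dev.)
tcr_obs, tcr_sd = 1.6, 0.3

# weight each model by the likelihood of its TCR under the observational constraint
w = np.exp(-0.5 * ((tcr - tcr_obs) / tcr_sd) ** 2)
w /= w.sum()

def weighted_quantile(x, q, w):
    """Quantile of x under weights w (simple interpolated definition)."""
    order = np.argsort(x)
    cw = np.cumsum(w[order])
    return np.interp(q, cw, x[order])

lo, hi = weighted_quantile(warming, 0.05, w), weighted_quantile(warming, 0.95, w)
mean_w = (w * warming).sum()
```

Because the illustrative observational constraint sits below the ensemble-mean TCR, the weighted warming distribution shifts low relative to the raw ensemble, the same direction of adjustment the abstract reports.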
Constraining nuclear photon strength functions by the decay properties of photo-excited states
NASA Astrophysics Data System (ADS)
Isaak, J.; Savran, D.; Krtička, M.; Ahmed, M. W.; Beller, J.; Fiori, E.; Glorius, J.; Kelley, J. H.; Löher, B.; Pietralla, N.; Romig, C.; Rusev, G.; Scheck, M.; Schnorrenberger, L.; Silva, J.; Sonnabend, K.; Tonchev, A. P.; Tornow, W.; Weller, H. R.; Zweidinger, M.
2013-12-01
A new approach for constraining the low-energy part of the electric dipole Photon Strength Function (E1-PSF) is presented. Experiments at the Darmstadt High-Intensity Photon Setup and the High Intensity γ-Ray Source have been performed to investigate the decay properties of 130Te between 5.50 and 8.15 MeV excitation energy. In particular, the average γ-ray branching ratio to the ground state and the population intensity of low-lying excited states have been studied. A comparison to the statistical model shows that the latter is sensitive to the low-energy behavior of the E1-PSF, while the average ground-state branching ratio cannot be described by the statistical model in the energy range between 5.5 and 6.5 MeV.
Ullmann, J. L.; Kawano, T.; Baramsai, B.; ...
2017-08-31
The cross section for neutron capture in the continuum region has been difficult to calculate accurately. Previous results for 238U show that including an M1 scissors-mode contribution to the photon strength function resulted in very good agreement between calculation and measurement. Our paper extends that analysis to 234,236U by using γ-ray spectra measured with the Detector for Advanced Neutron Capture Experiments (DANCE) at the Los Alamos Neutron Science Center to constrain the photon strength function used to calculate the capture cross section. Calculations using a strong scissors-mode contribution reproduced the measured γ-ray spectra and were in excellent agreement with the reported cross sections for all three isotopes.
Goldey, Matthew B.; Brawand, Nicholas P.; Voros, Marton; ...
2017-04-20
The in silico design of novel complex materials for energy conversion requires accurate, ab initio simulation of charge transport. In this work, we present an implementation of constrained density functional theory (CDFT) for the calculation of parameters for charge transport in the hopping regime. We verify our implementation against literature results for molecular systems, and we discuss the dependence of the results on numerical parameters and the choice of localization potentials. In addition, we compare CDFT results with those of other commonly used methods for simulating charge transport between nanoscale building blocks. We show that some of these methods give unphysical results for thermally disordered configurations, while CDFT proves to be a viable and robust approach.
NASA Astrophysics Data System (ADS)
Ullmann, J. L.; Kawano, T.; Baramsai, B.; Bredeweg, T. A.; Couture, A.; Haight, R. C.; Jandel, M.; O'Donnell, J. M.; Rundberg, R. S.; Vieira, D. J.; Wilhelmy, J. B.; Krtička, M.; Becker, J. A.; Chyzh, A.; Wu, C. Y.; Mitchell, G. E.
2017-08-01
The cross section for neutron capture in the continuum region has been difficult to calculate accurately. Previous results for 238U show that including an M1 scissors-mode contribution to the photon strength function resulted in very good agreement between calculation and measurement. This paper extends that analysis to 234,236U by using γ-ray spectra measured with the Detector for Advanced Neutron Capture Experiments (DANCE) at the Los Alamos Neutron Science Center to constrain the photon strength function used to calculate the capture cross section. Calculations using a strong scissors-mode contribution reproduced the measured γ-ray spectra and were in excellent agreement with the reported cross sections for all three isotopes.
Constraints and triggers: situational mechanics of gender in negotiation.
Bowles, Hannah Riley; Babcock, Linda; McGinn, Kathleen L
2005-12-01
The authors propose 2 categories of situational moderators of gender in negotiation: situational ambiguity and gender triggers. Reducing the degree of situational ambiguity constrains the influence of gender on negotiation. Gender triggers prompt divergent behavioral responses as a function of gender. Field and lab studies (1 and 2) demonstrated that decreased ambiguity in the economic structure of a negotiation (structural ambiguity) reduces gender effects on negotiation performance. Study 3 showed that representation role (negotiating for self or other) functions as a gender trigger by producing a greater effect on female than male negotiation performance. Study 4 showed that decreased structural ambiguity constrains gender effects of representation role, suggesting that situational ambiguity and gender triggers work in interaction to moderate gender effects on negotiation performance. Copyright 2006 APA, all rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Shaohua; Hou, Zhiwei
2015-12-15
In this paper, chaos control is proposed for an output-constrained system with uncertain control gain and time delay, and is applied to the brushless DC motor. Using dynamic surface technology, the controller overcomes the repetitive differentiation of backstepping and the boundedness hypothesis on a pre-determined control gain by incorporating a radial basis function neural network and adaptive technology. The tangent barrier Lyapunov function is employed for the time-delay chaotic system to prevent constraint violation. It is proved that the proposed control approach guarantees stability in the sense of uniform ultimate boundedness without constraint violation. Finally, the effectiveness of the proposed approach is demonstrated on the brushless DC motor example.
Bounds on OPE coefficients from interference effects in the conformal collider
NASA Astrophysics Data System (ADS)
Córdova, Clay; Maldacena, Juan; Turiaci, Gustavo J.
2017-11-01
We apply the average null energy condition to obtain upper bounds on the three-point function coefficients of stress tensors and a scalar operator, ⟨TTO_i⟩, in general CFTs. We also constrain the gravitational anomaly of U(1) currents in four-dimensional CFTs, which is encoded in three-point functions of the form ⟨TTJ⟩. In theories with a large-N AdS dual we translate these bounds into constraints on the coefficient of a higher derivative bulk term of the form ∫ϕW². We speculate that these bounds also apply in de Sitter space. In this case our results constrain inflationary observables, such as the amplitude for chiral gravity waves that originate from higher derivative terms in the Lagrangian of the form ϕWW*.
Seismic structure of the European upper mantle based on adjoint tomography
NASA Astrophysics Data System (ADS)
Zhu, Hejun; Bozdağ, Ebru; Tromp, Jeroen
2015-04-01
We use adjoint tomography to iteratively determine seismic models of the crust and upper mantle beneath the European continent and the North Atlantic Ocean. Three-component seismograms from 190 earthquakes recorded by 745 seismographic stations are employed in the inversion. Crustal model EPcrust combined with mantle model S362ANI comprises the 3-D starting model, EU00. Before the structural inversion, earthquake source parameters, for example centroid moment tensors and locations, are reinverted based on global 3-D Green's functions and Fréchet derivatives. This study consists of three stages. In stage one, frequency-dependent phase differences between observed and simulated seismograms are used to constrain radially anisotropic wave speed variations. In stage two, frequency-dependent phase and amplitude measurements are combined to simultaneously constrain elastic wave speeds and anelastic attenuation. In these two stages, long-period surface waves and short-period body waves are combined to simultaneously constrain shallow and deep structures. In stage three, frequency-dependent phase and amplitude anomalies of three-component surface waves are used to simultaneously constrain radial and azimuthal anisotropy. After this three-stage inversion, we obtain a new seismic model of the European crust and upper mantle, named EU60. Improvements in misfits and histograms in both phase and amplitude help us to validate this three-stage inversion strategy. Long-wavelength elastic wave speed variations in model EU60 compare favourably with previous body- and surface-wave tomographic models. Some hitherto unidentified features, such as the Adria microplate, naturally emerge from the smooth starting model. Subducting slabs, slab detachments, ancient suture zones, continental rifts and backarc basins are well resolved in model EU60. We find an anticorrelation between shear wave speed and anelastic attenuation at depths < 100 km.
At greater depths, this anticorrelation becomes relatively weak, in agreement with previous global attenuation studies. Furthermore, enhanced attenuation is observed within the mantle transition zone beneath the North Atlantic Ocean. Consistent with typical radial anisotropy in 1-D reference models, the European continent is dominated by features with a radially anisotropic parameter ξ > 1, indicating predominantly horizontal flow within the upper mantle. In addition, subduction zones, such as the Apennines and Hellenic arcs, are characterized by vertical flow with ξ < 1 at depths greater than 150 km. We find that the direction of the fast anisotropic axis is closely tied to the tectonic evolution of the region. Averaged radial peak-to-peak anisotropic strength profiles identify distinct brittle-ductile deformation in lithospheric strength beneath oceans and continents. Finally, we use the `point-spread function' to assess image quality and analyse trade-offs between different model parameters.
A New Scheme for the Design of Hilbert Transform Pairs of Biorthogonal Wavelet Bases
NASA Astrophysics Data System (ADS)
Shi, Hongli; Luo, Shuqian
2010-12-01
In designing the Hilbert transform pairs of biorthogonal wavelet bases, it has been shown that the requirements of the equal-magnitude responses and the half-sample phase offset on the lowpass filters are the necessary and sufficient condition. In this paper, the relationship between the phase offset and the vanishing moment difference of biorthogonal scaling filters is derived, which implies a simple way to choose the vanishing moments so that the phase response requirement can be satisfied structurally. The magnitude response requirement is approximately achieved by a constrained optimization procedure, where the objective function and constraints are all expressed in terms of the auxiliary filters of scaling filters rather than the scaling filters directly. Generally, the calculation burden in the design implementation will be less than that of the current schemes. The integral of magnitude response difference between the primal and dual scaling filters has been chosen as the objective function, which expresses the magnitude response requirements in the whole frequency range. Two design examples illustrate that the biorthogonal wavelet bases designed by the proposed scheme are very close to Hilbert transform pairs.
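The two filter conditions named above (equal magnitude responses and a half-sample phase offset between the lowpass filters) are easy to verify numerically. The sketch below builds, purely for demonstration, an exact half-sample-delayed partner of a lowpass filter on the DFT grid and checks both conditions; it is a check of the criterion, not the paper's design procedure:

```python
import numpy as np

def pair_criterion(h0, h1, n_fft=256):
    """Check the Hilbert-pair condition on the DFT grid: equal magnitudes
    and a half-sample phase offset between the two lowpass filters."""
    H0 = np.fft.fft(h0, n_fft)
    H1 = np.fft.fft(h1, n_fft)
    k = np.fft.fftfreq(n_fft) * n_fft
    omega = 2 * np.pi * k / n_fft
    mask = (np.abs(H0) > 1e-6) & (np.abs(k) < n_fft // 2)  # skip Nyquist/zeros
    mag_err = np.max(np.abs(np.abs(H1[mask]) - np.abs(H0[mask])))
    phase = np.angle(H0[mask] * np.conj(H1[mask]))         # should equal omega/2
    phase_err = np.max(np.abs(phase - omega[mask] / 2))
    return mag_err, phase_err

# build an exact half-sample-delayed partner on the DFT grid for demonstration
n = 256
h0 = np.zeros(n); h0[:4] = [1.0, 3.0, 3.0, 1.0]      # toy lowpass filter
k = np.fft.fftfreq(n) * n
H1 = np.fft.fft(h0) * np.exp(-1j * np.pi * k / n)    # multiply by e^{-i omega/2}
H1[n // 2] = 0.0                                     # keep the inverse DFT real
h1 = np.fft.ifft(H1).real
mag_err, phase_err = pair_criterion(h0, h1)
```

In a practical design no FIR filter realizes the half-sample delay exactly, which is why the paper enforces the phase requirement structurally (via vanishing moments) and approximates the magnitude requirement by constrained optimization.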
NASA Astrophysics Data System (ADS)
Yung, L. Y. Aaron; Somerville, Rachel S.
2017-06-01
The well-established Santa Cruz semi-analytic galaxy formation framework has been shown to be quite successful at explaining observations of the local Universe and at making predictions for low-redshift observations. Recently, metallicity-based gas partitioning and H2-based star formation recipes have been implemented in our model, replacing the legacy cold-gas-based recipe. We use the revised model to explore the high-redshift Universe and make predictions up to z = 15. Although the model is calibrated only to observations of the local Universe, its predictions agree remarkably well with the mid- to high-redshift observational constraints available to date, including rest-frame UV luminosity functions and the reionization history as constrained by CMB and IGM observations. We provide predictions for individual and statistical galaxy properties over a wide range of redshifts (z = 4-15), including objects that are too distant or too faint to be detected with current facilities. Using our model predictions, we also provide forecast luminosity functions and other observables for upcoming studies with JWST.
Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF.
Duan, Chong; Kallehauge, Jesper F; Pérez-Torres, Carlos J; Bretthorst, G Larry; Beeman, Scott C; Tanderup, Kari; Ackerman, Joseph J H; Garbow, Joel R
2018-02-01
This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. Bayesian probability theory-based parameter estimation and model selection were used to compare tracer kinetic modeling employing either the measured remote-AIF (R-AIF, i.e., the traditional approach) or an inferred cL-AIF against both in silico DCE-MRI data and clinical, cervical cancer DCE-MRI data. When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels of the 16 patients (35,602 voxels in total). Among those voxels, a tracer kinetic model that employed the voxel-specific cL-AIF was preferred (i.e., had a higher posterior probability) in 80 % of the voxels compared to the direct use of a single R-AIF. Maps of spatial variation in voxel-specific AIF bolus amplitude and arrival time for heterogeneous tissues, such as cervical cancer, are accessible with the cL-AIF approach. The cL-AIF method, which estimates unique local-AIF amplitude and arrival time for each voxel within the tissue of interest, provides better modeling of DCE-MRI data than the use of a single, measured R-AIF. The Bayesian-based data analysis described herein affords estimates of uncertainties for each model parameter, via posterior probability density functions, and voxel-wise comparison across methods/models, via model selection in data modeling.
A grid of MHD models for stellar mass loss and spin-down rates of solar analogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, O.; Drake, J. J.
2014-03-01
Stellar winds are believed to be the dominant factor in the spin-down of stars over time. However, stellar winds of solar analogs are poorly constrained due to observational challenges. In this paper, we present a grid of magnetohydrodynamic models to study and quantify the values of stellar mass loss and angular momentum loss rates as a function of the stellar rotation period, magnetic dipole component, and coronal base density. We derive simple scaling laws for the loss rates as a function of these parameters, and constrain the possible mass loss rate of stars with thermally driven winds. Despite the success of our scaling law in matching the results of the model, we find a deviation between the 'solar dipole' case and a real case based on solar observations that overestimates the actual solar mass loss rate by a factor of three. This implies that the model for stellar fields might require further investigation with additional complexity. Mass loss rates in general are largely controlled by the magnetic field strength, with the wind density varying in proportion to the confining magnetic pressure B^2. We also find that the mass loss rates obtained using our grid models drop much faster with increasing rotation period than scaling laws derived using observed stellar activity. For main-sequence solar-like stars, our scaling law for angular momentum loss versus poloidal magnetic field strength recovers the well-known Skumanich decline of angular velocity with time, Ω* ∝ t^(-1/2), if the large-scale poloidal magnetic field scales with rotation rate as B_p ∝ Ω*^2.
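The quoted late-time scaling can be checked in a few lines. A sketch under the assumption that the wind torque gives dΩ/dt ∝ -Ω³ (the constants are arbitrary, not fitted to the paper's model grid):

```python
import numpy as np

# If the magnetized-wind torque gives dOmega/dt = -k * Omega**3 (as implied
# when B_p scales as Omega**2), the closed-form spin evolution is
# Omega(t) = Omega0 / sqrt(1 + 2*k*Omega0**2*t), which approaches the
# Skumanich law Omega ~ t**(-1/2) at late times. k and Omega0 are arbitrary.
def omega(t, omega0=1.0, k=1.0):
    return omega0 / np.sqrt(1.0 + 2.0 * k * omega0**2 * t)

# Late-time logarithmic slope d(ln Omega)/d(ln t):
t1, t2 = 1.0e4, 1.0e5
slope = np.log(omega(t2) / omega(t1)) / np.log(t2 / t1)
print(round(slope, 2))  # -0.5
```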
Augmented Lagrange Hopfield network for solving economic dispatch problem in competitive environment
NASA Astrophysics Data System (ADS)
Vo, Dieu Ngoc; Ongsakul, Weerakorn; Nguyen, Khai Phuc
2012-11-01
This paper proposes an augmented Lagrange Hopfield network (ALHN) for solving the economic dispatch (ED) problem in a competitive environment. The proposed ALHN is a continuous Hopfield network whose energy function is based on an augmented Lagrange function, allowing it to deal efficiently with constrained optimization problems. The ALHN method overcomes drawbacks of the conventional Hopfield network such as local optima, long computational times, and the restriction to linear constraints. The method is applied to the ED problem under two revenue models: payment for power delivered and payment for reserve allocated. The proposed ALHN has been tested on systems of 3 units and 10 units for the two revenue models, and the results are compared with those of the differential evolution (DE) and particle swarm optimization (PSO) methods. The comparison indicates that the proposed method is very efficient for solving the problem; ALHN could therefore be a favorable tool for the ED problem in a competitive environment.
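A minimal augmented Lagrangian treatment of a toy dispatch problem illustrates the mechanism behind the ALHN energy function (all cost coefficients, limits, and step sizes below are invented for illustration; the Hopfield-network dynamics themselves are not reproduced):

```python
import numpy as np

# Toy economic dispatch: minimize sum_i (a_i*p_i^2 + b_i*p_i) subject to
# sum_i p_i = D and p_min <= p_i <= p_max, solved with a plain augmented
# Lagrangian; the continuous Hopfield energy in ALHN plays an analogous role.
a = np.array([0.010, 0.020, 0.015])   # quadratic cost coefficients
b = np.array([2.0, 1.5, 1.8])         # linear cost coefficients
pmin, pmax, D = 10.0, 100.0, 150.0    # unit limits and total demand

p = np.full(3, D / 3)                 # initial guess
lam, rho, lr = 0.0, 1.0, 0.05         # multiplier, penalty weight, step size
for _ in range(50):                   # outer multiplier updates
    for _ in range(200):              # inner minimization of L_rho
        g = p.sum() - D               # equality-constraint violation
        p = np.clip(p - lr * (2 * a * p + b + lam + rho * g), pmin, pmax)
    lam += rho * (p.sum() - D)        # method-of-multipliers update

print(round(p.sum(), 2))  # ~150.0 (demand balance satisfied)
```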
The Function and Organization of the Motor System Controlling Flight Maneuvers in Flies.
Lindsay, Theodore; Sustar, Anne; Dickinson, Michael
2017-02-06
Animals face the daunting task of controlling their limbs using a small set of highly constrained actuators. This problem is particularly demanding for insects such as Drosophila, which must adjust wing motion for both quick voluntary maneuvers and slow compensatory reflexes using only a dozen pairs of muscles. To identify strategies by which animals execute precise actions using sparse motor networks, we imaged the activity of a complete ensemble of wing control muscles in intact, flying flies. Our experiments uncovered a remarkably efficient logic in which each of the four skeletal elements at the base of the wing is equipped with both large, phasically active muscles capable of executing large changes and smaller, tonically active muscles specialized for continuous fine-scaled adjustments. Based on the responses to a broad panel of visual motion stimuli, we have developed a model by which the motor array regulates aerodynamically functional features of wing motion. VIDEO ABSTRACT. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heikkinen, J. A.; Nora, M.
2011-02-15
Gyrokinetic equations of motion, Poisson equation, and energy and momentum conservation laws are derived based on the reduced-phase-space Lagrangian and inverse Kruskal iteration introduced by Pfirsch and Correa-Restrepo [J. Plasma Phys. 70, 719 (2004)]. This formalism, together with the choice of the adiabatic invariant J=
Constrained Metric Learning by Permutation Inducing Isometries.
Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle
2016-01-01
The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.
Charge redistribution in QM:QM ONIOM model systems: a constrained density functional theory approach
NASA Astrophysics Data System (ADS)
Beckett, Daniel; Krukau, Aliaksandr; Raghavachari, Krishnan
2017-11-01
The ONIOM hybrid method has found considerable success in QM:QM studies designed to approximate a high level of theory at a significantly reduced cost. This cost reduction is achieved by treating only a small model system with the target level of theory and the rest of the system with a low, inexpensive level of theory. However, the choice of an appropriate model system is a limiting factor in ONIOM calculations, and effects such as charge redistribution across the model system boundary must be considered as a source of error. In an effort to increase the general applicability of the ONIOM model, a method to treat the charge redistribution effect is developed using constrained density functional theory (CDFT) to constrain the charge experienced by the model system in the full calculation to the link atoms in the truncated model system calculations. Two separate CDFT-ONIOM schemes are developed and tested on a set of 20 reactions with eight combinations of levels of theory. It is shown that a scheme using a scaled Lagrange multiplier term obtained from the low-level CDFT model calculation outperforms ONIOM by 32% to 70% across the combinations of levels of theory.
Obermeyer, Jessica A; Edmonds, Lisa A
2018-03-01
The purpose of this study was to examine the preliminary efficacy of Attentive Reading and Constrained Summarization-Written (ARCS-W) in people with mild aphasia. ARCS-W adapts an existing treatment, ARCS (Rogalski & Edmonds, 2008), to address discourse-level writing in mild aphasia. ARCS-W focuses on the cognitive and linguistic skills required for discourse production. This study used a within-subject pre-post design. Three people with mild aphasia participated. ARCS-W integrates attentive reading or listening with constrained summarization of discourse-level material in spoken and written modalities. Outcomes included macrolinguistic (main concepts) and microlinguistic (correct information units, complete utterances) discourse measures, confrontation naming, aphasia severity, and functional communication. All three participants demonstrated some generalization to untrained spoken and written discourse at the word, sentence, and text levels. Reduced aphasia severity and/or increased functional communication and confrontation naming were also observed in some participants. The findings provide preliminary evidence of the efficacy of ARCS-W for improving spoken and written discourse in mild aphasia. Different generalization patterns suggest different mechanisms of improvement. Further research and replication are required to better understand how ARCS-W can impact discourse abilities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Xuehang; Chen, Xingyuan; Ye, Ming
2015-07-01
This study develops a new framework of facies-based data assimilation for characterizing the spatial distribution of hydrofacies and estimating their associated hydraulic properties. The framework couples ensemble data assimilation with a transition probability-based geostatistical model via a parameterization based on a level set function. The nature of ensemble data assimilation makes the framework efficient and flexible enough to be integrated with various types of observation data, while the transition probability-based geostatistical model keeps the updated hydrofacies distributions under geological constraints. The framework is illustrated with a two-dimensional synthetic study that estimates the hydrofacies spatial distribution and the permeability of each hydrofacies from transient head data. Our results show that the proposed framework can characterize the hydrofacies distribution and associated permeability with adequate accuracy even with limited direct measurements of hydrofacies. Our study provides a promising starting point for hydrofacies delineation in complex real problems.
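The level-set parameterization can be illustrated in one dimension: the sign of a continuous function selects the facies, so ensemble updates of that function move the facies boundary smoothly (all values below are invented; the transition-probability model is not reproduced):

```python
import numpy as np

# Two-facies field parameterized by a level set function phi(x): the sign
# of phi selects the facies, and each facies carries its own permeability.
x = np.linspace(0.0, 1.0, 101)
phi = x - 0.4                       # facies boundary where phi = 0
perm = np.array([1e-14, 1e-11])     # permeability per facies (made up)

facies = (phi > 0).astype(int)      # 0 = low-k facies, 1 = high-k facies
field = perm[facies]                # continuous phi -> categorical field

print(facies[:3], field[-1])  # [0 0 0] 1e-11
```

Because phi is continuous, an ensemble filter can update it like any other state variable, and the induced facies map stays categorical by construction.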
Goudarz Mehdikhani, Kaveh; Morales Moreno, Beatriz; Reid, Jeremy J; de Paz Nieves, Ana; Lee, Yuo-Yu; González Della Valle, Alejandro
2016-07-01
We studied the need to use a constrained insert for residual intraoperative instability and the 1-year results of patients undergoing total knee arthroplasty (TKA) for a varus deformity. In the control group, a "classic" subperiosteal release of the medial soft tissue sleeve was performed as popularized by pioneers of TKA. In the study group, an algorithmic approach that selectively releases and pie-crusts posteromedial structures in extension and anteromedial structures in flexion was used. All surgeries were performed by a single surgeon using a measured resection technique and posterior-stabilized, cemented implants. There were 228 TKAs in the control group and 188 in the study group. Outcome variables included the use of a constrained insert and the Knee Society Score at 6 weeks, 4 months, and 1 year postoperatively. The effects of the release technique on the use of constrained inserts and on clinical outcomes were analyzed in a multivariate model controlling for age, sex, body mass index, and severity of deformity. The use of constrained inserts was significantly lower in study than in control patients (8% vs 18%; P = .002). There was no difference in the Knee Society Score or range of motion between the groups at last follow-up. No patient developed postoperative medial instability. This algorithmic, pie-crusting release technique resulted in a significant reduction in the use of constrained inserts with no detrimental effects on clinical results, joint function, or stability. As constrained TKA implants are more costly than nonconstrained ones, if the adopted technique proves to be safe in the long term, it may produce a positive shift in value for hospitals and cost savings in the health care system. Copyright © 2016 Elsevier Inc. All rights reserved.
Multiply-Constrained Semantic Search in the Remote Associates Test
ERIC Educational Resources Information Center
Smith, Kevin A.; Huber, David E.; Vul, Edward
2013-01-01
Many important problems require consideration of multiple constraints, such as choosing a job based on salary, location, and responsibilities. We used the Remote Associates Test to study how people solve such multiply-constrained problems by asking participants to make guesses as they came to mind. We evaluated how people generated these guesses…
Monowar, Muhammad Mostafa; Hassan, Mohammad Mehedi; Bajaber, Fuad; Al-Hussein, Musaed; Alamri, Atif
2012-01-01
The emergence of heterogeneous applications with diverse requirements for resource-constrained Wireless Body Area Networks (WBANs) poses significant challenges for provisioning Quality of Service (QoS) with multi-constraints (delay and reliability) while preserving energy efficiency. To address such challenges, this paper proposes McMAC, a MAC protocol with multi-constrained QoS provisioning for diverse traffic classes in WBANs. McMAC classifies traffic based on their multi-constrained QoS demands and introduces a novel superframe structure based on the “transmit-whenever-appropriate” principle, which allows diverse periods for diverse traffic classes according to their respective QoS requirements. Furthermore, a novel emergency packet handling mechanism is proposed to ensure packet delivery with the least possible delay and the highest reliability. McMAC is also modeled analytically, and extensive simulations were performed to evaluate its performance. The results reveal that McMAC achieves the desired delay and reliability guarantee according to the requirements of a particular traffic class while achieving energy efficiency. PMID:23202224
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is made by a simple trial-and-error procedure. The proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare the new constrained inversion to similar methods based on image theory and seismic attributes. Successful application of the proposed scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project, using constraints from the envelope attribute of the COSC reflection seismic profile (CSP), helped to reduce the uncertainty in the interpretation of the main décollement. The new model thus supports the proposed location of the future borehole COSC-2, which is planned to penetrate the main décollement and the underlying Precambrian basement.
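A sketch of how an envelope attribute can drive local smoothness weights (the envelope is computed from the analytic signal; the weight formula here is our illustrative choice, not the paper's exact scheme):

```python
import numpy as np

# Seismic envelope via the analytic signal, then a local regularization
# weight that shrinks where the envelope gradient is large, i.e. near
# strong reflectors, so the MT model may vary more sharply there.
def envelope(trace):
    n = len(trace)
    F = np.fft.fft(trace)
    h = np.zeros(n)                 # analytic-signal frequency mask
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(F * h))

def smoothness_weights(trace, beta=1.0):
    de = np.abs(np.gradient(envelope(trace)))
    return 1.0 / (1.0 + beta * de / (de.max() + 1e-12))

t = np.linspace(0.0, 1.0, 500)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)
w = smoothness_weights(trace)
print(w.min() < 0.6, w[0] > 0.99)  # True True (weights drop at the reflector)
```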
Macho, Jorge Berzosa; Montón, Luis Gardeazabal; Rodriguez, Roberto Cortiñas
2017-08-01
The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that might suppose an overload for the resource-constrained devices that are typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted to resource-constrained devices, which allows reducing the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C), and it is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices.
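The shared-context idea behind template-based compression can be illustrated in a few lines (this mimics the general idea only, not the CTC wire format; the field names and values are invented):

```python
import json

# If sender and receiver share a template of the message structure
# (the "context"), only the values need to travel, which already beats
# sending the full standard data format.
template = ["temp", "hum", "ts"]                 # agreed field order
reading = {"temp": 21.5, "hum": 40, "ts": 1700000000}

full = json.dumps(reading).encode()              # verbose standard format
packed = json.dumps([reading[k] for k in template]).encode()  # values only

print(len(packed) < len(full))  # True
```

The receiver rebuilds the full record by zipping the shared template with the received values, so no structural information is lost.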
Free energy from molecular dynamics with multiple constraints
NASA Astrophysics Data System (ADS)
den Otter, W. K.; Briels, W. J.
In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force method (PMCF) for systems in which only the reaction coordinate is constrained was published recently. Here we will consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.
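The thermodynamic-integration step of such a calculation, recovering the free-energy profile from mean constraint forces, can be sketched as follows (synthetic forces; the metric-tensor correction is assumed already applied, and the sign convention is ours):

```python
import numpy as np

def free_energy_profile(xi, mean_force):
    """Free-energy profile A(xi) - A(xi[0]) from the mean constraint
    force sampled at fixed reaction-coordinate values, via cumulative
    trapezoidal integration of dA/dxi = -<f>."""
    increments = 0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi)
    return -np.concatenate(([0.0], np.cumsum(increments)))

# Synthetic check: for A(xi) = (xi - 1)^2 the mean force is -dA/dxi.
xi = np.linspace(0.0, 2.0, 201)
f = -2.0 * (xi - 1.0)
A = free_energy_profile(xi, f)
print(round(A[100], 3))  # -1.0  (A(1) - A(0) for the parabola)
```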
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Belcher, AH; Wiersma, R
Purpose: In radiation therapy optimization, constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but need not be satisfied exactly. Currently, voxel dose constraints are viewed as soft constraints, included as part of the objective function, and the problem is approximated as unconstrained. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and to compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators applicable to quadratic IMRT constrained optimization were first constructed, and the problem was formulated in the graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi), and Mosek solvers. For unconstrained optimization, LBFGS performed best, and it was 3-5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8-100 times faster than the other solvers. Conclusion: Graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers, and it also uses significantly less computer memory.
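The splitting behind graph-form ADMM can be illustrated on a small box-constrained quadratic program (toy data, not the CORT cases; the projection here is onto a box rather than a dose-constraint set):

```python
import numpy as np

# ADMM for a small box-constrained quadratic program,
#   minimize 0.5 x'Qx + c'x  subject to  lb <= x <= ub,
# using the same split-into-prox-steps idea as graph-form ADMM.
def admm_box_qp(Q, c, lb, ub, rho=1.0, iters=300):
    n = len(c)
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = np.linalg.inv(Q + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        x = M @ (rho * (z - u) - c)          # prox of the quadratic
        z = np.clip(x + u, lb, ub)           # projection onto the box
        u += x - z                           # scaled dual update
    return z

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-4.0, -1.0])
x = admm_box_qp(Q, c, lb=0.0, ub=1.5)
print(round(x[0], 3), round(x[1], 3))  # 1.5 0.25
```

The unconstrained minimizer is (2, 0); the box activates the upper bound on the first coordinate, and ADMM recovers the constrained optimum (1.5, 0.25).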
Neural Activation to Emotional Faces in Adolescents with Autism Spectrum Disorders
ERIC Educational Resources Information Center
Weng, Shih-Jen; Carrasco, Melisa; Swartz, Johnna R.; Wiggins, Jillian Lee; Kurapati, Nikhil; Liberzon, Israel; Risi, Susan; Lord, Catherine; Monk, Christopher S.
2011-01-01
Background: Autism spectrum disorders (ASD) involve a core deficit in social functioning and impairments in the ability to recognize face emotions. In an emotional faces task designed to constrain group differences in attention, the present study used functional MRI to characterize activation in the amygdala, ventral prefrontal cortex (vPFC), and…
Small-kernel, constrained least-squares restoration of sampled image data
NASA Technical Reports Server (NTRS)
Hazra, Rajeeb; Park, Stephen K.
1992-01-01
Following the work of Park (1989), who extended a derivation of the Wiener filter based on the incomplete discrete/discrete model to a more comprehensive end-to-end continuous/discrete/continuous model, it is shown that a derivation of the constrained least-squares (CLS) filter based on the discrete/discrete model can also be extended to this more comprehensive continuous/discrete/continuous model. This results in an improved CLS restoration filter, which can be efficiently implemented as a small-kernel convolution in the spatial domain.
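A one-dimensional frequency-domain CLS restoration illustrates the filter structure (toy signal and invented parameters; the paper's end-to-end continuous/discrete/continuous model and small-kernel spatial implementation are not reproduced here):

```python
import numpy as np

# Constrained least-squares restoration in the frequency domain:
#   X_hat = conj(H) * G / (|H|^2 + gamma * |C|^2),
# where H is the blur response and C a Laplacian smoothness constraint.
n = 256
x = np.zeros(n); x[100:130] = 1.0                    # "true" signal
h = np.zeros(n); h[:9] = 1.0 / 9.0                   # 9-tap box blur
c = np.zeros(n); c[0], c[1], c[-1] = -2.0, 1.0, 1.0  # circular Laplacian

rng = np.random.default_rng(0)
g = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
g += rng.normal(0.0, 0.01, n)                        # blurred + noise

H, C, G = np.fft.fft(h), np.fft.fft(c), np.fft.fft(g)
gamma = 0.01                                          # regularization weight
X_hat = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
x_hat = np.real(np.fft.ifft(X_hat))

print(np.linalg.norm(x_hat - x) < np.linalg.norm(g - x))  # True
```

The gamma * |C|^2 term keeps the inverse well behaved near the zeros of the blur response, which is exactly the role of the constraint operator in the CLS derivation.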
NASA Technical Reports Server (NTRS)
Swei, Sean
2014-01-01
We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constraint-based energy-minimum trajectory optimization for guidance development in complex environments.
An infinite set of Ward identities for adiabatic modes in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinterbichler, Kurt; Hui, Lam; Khoury, Justin, E-mail: khinterbichler@perimeterinstitute.ca, E-mail: lh399@columbia.edu, E-mail: jkhoury@sas.upenn.edu
2014-01-01
We show that the correlation functions of any single-field cosmological model with constant growing-modes are constrained by an infinite number of novel consistency relations, which relate N+1-point correlation functions with a soft-momentum scalar or tensor mode to a symmetry transformation on N-point correlation functions of hard-momentum modes. We derive these consistency relations from Ward identities for an infinite tower of non-linearly realized global symmetries governing scalar and tensor perturbations. These symmetries can be labeled by an integer n. At each order n, the consistency relations constrain — completely for n = 0,1, and partially for n ≥ 2 — the q^n behavior of the soft limits. The identities at n = 0 recover Maldacena's original consistency relations for a soft scalar and tensor mode, n = 1 gives the recently-discovered conformal consistency relations, and the identities for n ≥ 2 are new. As a check, we verify directly that the n = 2 identity is satisfied by known correlation functions in slow-roll inflation.
Building a functional multiple intelligences theory to advance educational neuroscience.
Cerruti, Carlo
2013-01-01
A key goal of educational neuroscience is to conduct constrained experimental research that is theory-driven and yet also clearly related to educators' complex set of questions and concerns. However, the fields of education, cognitive psychology, and neuroscience use different levels of description to characterize human ability. An important advance in educational neuroscience research would be the identification of a cognitive and neurocognitive framework at a level of description relatively intuitive to educators. I argue that the theory of multiple intelligences (MI; Gardner, 1983), a conception of the mind that motivated a past generation of teachers, may provide such an opportunity. I criticize MI for doing little to clarify a core misunderstanding for teachers: MI offered only an anatomical map of the mind, not a functional theory detailing how the mind actually processes information. In an attempt to build a "functional MI" theory, I integrate into MI basic principles of cognitive and neural functioning, namely interregional neural facilitation and inhibition. In so doing I hope to forge a path toward constrained experimental research that bears upon teachers' concerns about teaching and learning.
To the horizon and beyond: Weak lensing of the CMB and binary inspirals into horizonless objects
NASA Astrophysics Data System (ADS)
Kesden, Michael
This thesis examines two predictions of general relativity: weak lensing and gravitational waves. The cosmic microwave background (CMB) is gravitationally lensed by the large-scale structure between the observer and the last-scattering surface. This weak lensing induces non-Gaussian correlations that can be used to construct estimators for the deflection field. The error and bias of these estimators are derived and used to analyze the viability of lensing reconstruction for future CMB experiments. Weak lensing also affects the one-point probability distribution function of the CMB. The skewness and kurtosis induced by lensing and the Sunyaev-Zel'dovich (SZ) effect are calculated as functions of the angular smoothing scale of the map. While these functions offer the advantage of easy computability, only the skewness from lensing-SZ correlations can potentially be detected, even in the limit of the largest amplitude fluctuations allowed by observation. Lensing estimators are also essential to constrain inflation, the favored explanation for large-scale isotropy and the origin of primordial perturbations. B-mode polarization is considered a "smoking-gun" signature of inflation, and lensing estimators can be used to recover primordial B-modes from lensing-induced contamination. The ability of future CMB experiments to constrain inflation is assessed as a function of survey size and instrumental sensitivity. A final application of lensing estimators is to constrain a possible cutoff in primordial density perturbations on near-horizon scales. The paucity of independent modes on such scales limits the statistical certainty of such a constraint. Measurements of the deflection field can be used to constrain at the 3σ level the existence of a cutoff large enough to account for current CMB observations. A final chapter of this thesis considers an independent topic: the gravitational-wave (GW) signature of a binary inspiral into a horizonless object.
If the supermassive objects at galactic centers lack the horizons of traditional black holes, inspiraling objects could emit GWs after passing within their surfaces. The GWs produced by such an inspiral are calculated, revealing distinctive features potentially observable by future GW observatories.
Towards weakly constrained double field theory
NASA Astrophysics Data System (ADS)
Lee, Kanghoon
2016-08-01
We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well suited for describing weakly constrained double fields, and that any weakly constrained field can be represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transformation and a gauge invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.
Calculation of primordial abundances of light nuclei including a heavy sterile neutrino
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosquera, M.E.; Civitarese, O., E-mail: mmosquera@fcaglp.unlp.edu.ar, E-mail: osvaldo.civitarese@fisica.unlp.edu.ar
2015-08-01
We include the coupling of a heavy sterile neutrino with active neutrinos in the calculation of primordial abundances of light nuclei. We calculate neutrino distribution functions and primordial abundances as functions of a renormalization of the sterile neutrino distribution function (a), the sterile neutrino mass (m_s) and the mixing angle (φ). Using the observable data, we set constraints on these parameters, which take the values 0 ≤ a < 0.4, sin²φ ≈ 0.12−0.39 and 0 ≤ m_s < 7 keV at the 1σ level, for a fixed value of the baryon-to-photon ratio. When the baryon-to-photon ratio is allowed to vary, its extracted value is in agreement with the values constrained by Planck observations and by the Wilkinson Microwave Anisotropy Probe (WMAP). It is found that the anomaly in the abundance of ⁷Li persists, in spite of the inclusion of a heavy sterile neutrino.
A defect stream function, law of the wall/wake method for compressible turbulent boundary layers
NASA Technical Reports Server (NTRS)
Barnwell, Richard W.; Dejarnette, Fred R.; Wahls, Richard A.
1989-01-01
The application of the defect stream function to the solution of the two-dimensional, compressible boundary layer is examined. A law of the wall/law of the wake formulation for the inner part of the boundary layer is presented which greatly simplifies the computational task near the wall and eliminates the need for an eddy viscosity model in this region. The eddy viscosity model in the outer region is arbitrary. The modified Crocco temperature-velocity relationship is used as a simplification of the differential energy equation. Formulations for both equilibrium and nonequilibrium boundary layers are presented including a constrained zero-order form which significantly reduces the computational workload while retaining the significant physics of the flow. A formulation for primitive variables is also presented. Results are given for the constrained zero-order and second-order equilibrium formulations and are compared with experimental data. A compressible wake function valid near the wall has been developed from the present results.
NASA Astrophysics Data System (ADS)
Nesbet, Robert K.
2018-05-01
Velocities in stable circular orbits about galaxies, a measure of centripetal gravitation, exceed the expected Kepler/Newton velocity as orbital radius increases. Standard Λ cold dark matter (ΛCDM) attributes this anomaly to galactic dark matter. McGaugh et al. have recently shown for 153 disc galaxies that observed radial acceleration is an apparently universal function of classical acceleration computed for observed galactic baryonic mass density. This is consistent with the empirical modified Newtonian dynamics (MOND) model, not requiring dark matter. It is shown here that suitably constrained ΛCDM and conformal gravity (CG) also produce such a universal correlation function. ΛCDM requires a very specific dark matter distribution, while the implied CG non-classical acceleration must be independent of galactic mass. All three constrained radial acceleration functions agree with the empirical baryonic v⁴ Tully-Fisher relation. Accurate rotation data in the nominally flat velocity range could distinguish between MOND, ΛCDM, and CG.
Simulation of X-ray absorption spectra with orthogonality constrained density functional theory.
Derricotte, Wallace D; Evangelista, Francesco A
2015-06-14
Orthogonality constrained density functional theory (OCDFT) [F. A. Evangelista, P. Shushkov and J. C. Tully, J. Phys. Chem. A, 2013, 117, 7378] is a variational time-independent approach for the computation of electronic excited states. In this work we extend OCDFT to compute core-excited states and generalize the original formalism to determine multiple excited states. Benchmark computations on a set of 13 small molecules and 40 excited states show that unshifted OCDFT/B3LYP excitation energies have a mean absolute error of 1.0 eV. Contrary to time-dependent DFT, OCDFT excitation energies for first- and second-row elements are computed with near-uniform accuracy. OCDFT core excitation energies are insensitive to the choice of the functional and the amount of Hartree-Fock exchange. We show that OCDFT is a powerful tool for the assignment of X-ray absorption spectra of large molecules by simulating the gas-phase near-edge spectrum of adenine and thymine.
Evolving phenotypic networks in silico.
François, Paul
2014-11-01
Evolved gene networks are constrained by natural selection. Their structures and functions are consequently far from random, as exemplified by the multiple instances of parallel/convergent evolution. One can thus ask whether features of actual gene networks can be recovered from evolutionary first principles. I review a method for in silico evolution of small models of gene networks aiming at performing predefined biological functions. I summarize the current implementation of the algorithm, emphasizing the construction of a proper "fitness" function. I illustrate the approach on three examples: biochemical adaptation, ligand discrimination and vertebrate segmentation (somitogenesis). While the structure of the evolved networks is variable, their dynamics are usually constrained and present many features similar to actual gene networks, including properties that were not explicitly selected for. In silico evolution can thus be used to predict biological behaviours without a detailed knowledge of the mapping between genotype and phenotype.
Minimal entropy probability paths between genome families.
Ahlbrandt, Calvin; Benson, Gary; Casey, William
2004-05-01
We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors.
These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, does not involve variational theory and does not involve differential equations, but is a better approximation of the minimal entropy path distance than the distance ||b−a||₂. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
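The entropy-weighted path distance described above can be illustrated numerically. The sketch below (pure Python, with hypothetical A, C, G, T frequency vectors) evaluates only the straight-line path between two probability vectors, which gives an upper bound on the true infimum, and compares it with ||b−a||₂; the function names and step count are illustrative and are not the authors' Matlab implementation.

```python
import math

def entropy(p):
    """Shannon entropy H(p) of a probability vector (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def linear_path_entropy_length(a, b, steps=1000):
    """Midpoint-rule approximation of the entropy-weighted length of the
    straight-line path p(t) = (1-t)a + t b, i.e. the integral of H(p) ds.
    This is only an upper bound on the minimal entropy path distance."""
    seg = math.sqrt(sum((bi - ai) ** 2 for ai, bi in zip(a, b)))
    acc = 0.0
    for k in range(steps):
        t = (k + 0.5) / steps
        p = [(1 - t) * ai + t * bi for ai, bi in zip(a, b)]
        acc += entropy(p)
    return acc / steps * seg

# Hypothetical base-frequency profiles for two sequences
a = [0.7, 0.1, 0.1, 0.1]
b = [0.1, 0.1, 0.1, 0.7]
d_entropy = linear_path_entropy_length(a, b)
d_euclid = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```

Because H(p) exceeds 1 nat over most of this path, the entropy-weighted length comes out larger than the plain Euclidean distance, consistent with the two measures being distinct.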
A RSSI-based parameter tracking strategy for constrained position localization
NASA Astrophysics Data System (ADS)
Du, Jinze; Diouris, Jean-François; Wang, Yide
2017-12-01
In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where the positions are constrained on a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show a good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with the existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
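The channel-model and trilateration step underlying such RSSI localization can be sketched as follows. This is a generic illustration, not the authors' LMS tracking strategy: it inverts a log-distance path-loss model with assumed parameters (reference RSSI rssi0 at 1 m, path-loss exponent n) and solves a linearized three-anchor trilateration.

```python
import math

def rssi_to_distance(rssi, rssi0=-40.0, n=2.0, d0=1.0):
    """Invert the log-distance path-loss model
    RSSI(d) = rssi0 - 10*n*log10(d/d0) to recover the range d."""
    return d0 * 10 ** ((rssi0 - rssi) / (10 * n))

def trilaterate(anchors, dists):
    """Position from 3 anchors by linearizing the circle equations
    (subtracting the first equation from the other two)."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Synthetic check: true position (3, 4) m, anchors at corners of a 10 m cell.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
rssis = [-40.0 - 20.0 * math.log10(d) for d in dists]  # n = 2, d0 = 1 m
est = trilaterate(anchors, [rssi_to_distance(r) for r in rssis])
```

With noise-free synthetic RSSI values the estimate recovers the true position exactly; in the paper's setting the LMS loop would additionally track rssi0 and n as the channel varies.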
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
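The idea of computing an approximate error from a subset of rays inside a conjugate-gradient loop can be sketched on a toy consistent linear system, where each row plays the role of a ray. The subset size, periodic restart, and Fletcher-Reeves update below are illustrative choices, not taken from the patent text.

```python
import random

def residual_sq(A, b, x, rows):
    """Squared error over a set of rays (rows of the system A x = b)."""
    return sum((sum(A[i][j] * x[j] for j in range(len(x))) - b[i]) ** 2
               for i in rows)

def cg_subset(A, b, x0, n_iters=300, k=4, seed=0):
    """Conjugate-gradient style minimization where each iteration computes
    the (approximate) error and its gradient on a random subset of k rays."""
    rng = random.Random(seed)
    x, d, g_prev = list(x0), None, None
    for it in range(n_iters):
        rows = rng.sample(range(len(A)), k)
        # gradient of the approximate error on this subset
        g = [0.0] * len(x)
        for i in rows:
            r = sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
            for j in range(len(x)):
                g[j] += 2.0 * r * A[i][j]
        if d is None or it % len(x) == 0:  # periodic steepest-descent restart
            d = [-gi for gi in g]
        else:
            beta = sum(gi * gi for gi in g) / max(sum(p * p for p in g_prev), 1e-12)
            d = [-gi + beta * di for gi, di in zip(g, d)]
        # exact line search for the quadratic restricted to the subset
        Ad = [sum(A[i][j] * d[j] for j in range(len(x))) for i in rows]
        res = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i] for i in rows]
        denom = sum(a * a for a in Ad)
        if denom > 1e-12:
            alpha = -sum(ri * ai for ri, ai in zip(res, Ad)) / denom
            x = [xi + alpha * di for xi, di in zip(x, d)]
        g_prev = g
    return x

# Consistent toy system: 12 'rays', 3 unknowns.
x_true = [1.0, -2.0, 0.5]
rng = random.Random(42)
A = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(12)]
b = [sum(Ai[j] * x_true[j] for j in range(3)) for Ai in A]
x0 = [0.0, 0.0, 0.0]
err0 = residual_sq(A, b, x0, range(12))
x_hat = cg_subset(A, b, x0)
err1 = residual_sq(A, b, x_hat, range(12))
```

The per-iteration cost scales with the subset size k rather than the full ray count, which is the point of the approximate-error formulation; the full-system error is only evaluated here to check progress.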
NASA Technical Reports Server (NTRS)
1992-01-01
Summary charts of the following topics are presented: the Percentage of Critical Questions in Constrained and Robust Programs; the Executive Committee and AMAC Disposition of Critical Questions for Constrained and Robust Programs; and the Requirements for Ground-based Research and Flight Platforms for Constrained and Robust Programs. Data tables are also presented and cover the following: critical questions from all Life Sciences Division Discipline Science Plans; critical questions listed by category and criticality; all critical questions which require ground-based research; critical questions that would utilize Spacelabs listed by category and criticality; critical questions that would utilize Space Station Freedom (SSF) listed by category and criticality; critical questions that would utilize the SSF Centrifuge Facility listed by category and criticality; critical questions that would utilize a Moon base listed by category and criticality; critical questions that would utilize robotic missions listed by category and criticality; critical questions that would utilize free flyers listed by category and criticality; and critical questions by deliverables.
Tests of gravity with future space-based experiments
NASA Astrophysics Data System (ADS)
Sakstein, Jeremy
2018-03-01
Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Chao, M
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancy, the enabling phenomenology in video compression and encoding techniques, is inherent in the dynamic properties of diaphragm motion; it was integrated with the geometric shape of the diaphragm boundary and an associated algebraic constraint that significantly reduces the search space of viable parabolic parameters, which can then be effectively optimized by a constrained linear regression approach on the subsequent projections. The algebraic constraints stipulating the kinetic range of the motion, together with the spatial constraint preventing unphysical deviations, made it possible to obtain the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with SD of 0.64 mm for all enrolled patients. The submillimeter accuracy exhibits the promise of the proposed constrained linear regression approach to track the diaphragm motion on rotational projection images.
Conclusion: The new algorithm provides a potential solution to rendering diaphragm motion and ultimately improving tumor motion management for radiation therapy of cancer patients.
Amundsen, Spencer; Lee, Yuo-Yu; González Della Valle, Alejandro
2017-06-01
Intra-operative sensing technology is an alternative to standard techniques in total knee arthroplasty (TKA) for determining balance by providing quantitative analysis of loads and point of contact throughout a range of motion. We used intra-operative sensing (VERASENSE-OrthoSensor, Inc.) to examine pie-crusting release of the medial collateral ligament in knees with varus deformity (study group) in comparison to a control group in which balance was obtained using a classic release technique and assessed using laminar spreaders, spacer blocks, manual stress, and a ruler. The surgery was performed by a single surgeon utilizing measured resection and posterior-stabilized, cemented implants. Seventy-five study TKAs were matched 1:3 with 225 control TKAs. Outcome variables included the use of a constrained insert and functional and knee-specific Knee Society scores (KSS) at six weeks, four months, and one year post-operatively. Outcomes were analyzed in a multivariate model controlling for age, sex, BMI, and severity of deformity. The use of a constrained insert was significantly lower in the study group (5.3 vs. 13.8%; p = 0.049). The difference in the use of increased constraint between groups was not significant with increasing deformity. There was no difference in functional or knee-specific KSS between groups at any follow-up interval. An algorithmic pie-crusting technique guided by intra-operative sensing is associated with decreased use of constrained inserts in TKA patients with a pre-operative varus deformity. This may yield cost savings and a positive shift in value.
Piai, Vitória; Roelofs, Ardi; Maris, Eric
2014-01-01
Two fundamental factors affecting the speed of spoken word production are lexical frequency and sentential constraint, but little is known about their timing and electrophysiological basis. In the present study, we investigated event-related potentials (ERPs) and oscillatory brain responses induced by these factors, using a task in which participants named pictures after reading sentences. Sentence contexts were either constraining or nonconstraining towards the final word, which was presented as a picture. Picture names varied in their frequency of occurrence in the language. Naming latencies and electrophysiological responses were examined as a function of context and lexical frequency. Lexical frequency is an index of our cumulative learning experience with words, so lexical-frequency effects most likely reflect access to memory representations for words. Pictures were named faster with constraining than nonconstraining contexts. Associated with this effect, starting around 400 ms pre-picture presentation, oscillatory power between 8 and 30 Hz was lower for constraining relative to nonconstraining contexts. Furthermore, pictures were named faster with high-frequency than low-frequency names, but only for nonconstraining contexts, suggesting differential ease of memory access as a function of sentential context. Associated with the lexical-frequency effect, starting around 500 ms pre-picture presentation, oscillatory power between 4 and 10 Hz was higher for high-frequency than for low-frequency names, but only for constraining contexts. Our results characterise electrophysiological responses associated with lexical frequency and sentential constraint in spoken word production, and point to new avenues for studying these fundamental factors in language production.
PSQP: Puzzle Solving by Quadratic Programming.
Andalo, Fernanda A; Taubin, Gabriel; Goldenstein, Siome
2017-02-01
In this article we present the first effective method based on global optimization for the reconstruction of image puzzles comprising rectangular pieces: Puzzle Solving by Quadratic Programming (PSQP). The proposed novel mathematical formulation reduces the problem to the maximization of a constrained quadratic function, which is solved via a gradient ascent approach. The proposed method is deterministic and can deal with arbitrary identical rectangular pieces. We provide experimental results showing its effectiveness when compared to state-of-the-art approaches. Although the method was developed to solve image puzzles, we also show how to apply it to the reconstruction of simulated strip-shredded documents, broadening its applicability.
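The core optimization step, maximizing a constrained quadratic by gradient ascent, can be sketched with a generic projected-gradient variant on the probability simplex. The diagonal matrix Q, step size, and simplex constraint below are illustrative stand-ins and are unrelated to PSQP's actual piece-compatibility formulation.

```python
def project_to_simplex(v):
    """Euclidean projection onto {x : x_i >= 0, sum_i x_i = 1} (sort-based)."""
    u = sorted(v, reverse=True)
    css, rho, css_rho = 0.0, 0, 0.0
    for i, ui in enumerate(u):
        css += ui
        if ui + (1.0 - css) / (i + 1) > 0:
            rho, css_rho = i, css
    theta = (1.0 - css_rho) / (rho + 1)
    return [max(vi + theta, 0.0) for vi in v]

def maximize_quadratic(Q, steps=500, lr=0.05):
    """Projected gradient ascent for f(x) = x^T Q x over the simplex."""
    n = len(Q)
    x = [1.0 / n] * n
    for _ in range(steps):
        grad = [2.0 * sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = project_to_simplex([xi + lr * gi for xi, gi in zip(x, grad)])
    return x

# Toy 'compatibility' matrix: the middle configuration scores highest.
Q = [[1.0, 0.0, 0.0],
     [0.0, 3.0, 0.0],
     [0.0, 0.0, 2.0]]
x = maximize_quadratic(Q)
```

Because the objective is convex, ascent over a polytope drives the iterate toward a vertex, here the coordinate with the largest diagonal entry; PSQP exploits the analogous tendency toward discrete (assignment-like) solutions.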
Stability of micro-Cassie states on rough substrates
NASA Astrophysics Data System (ADS)
Guo, Zhenjiang; Liu, Yawei; Lohse, Detlef; Zhang, Xuehua; Zhang, Xianren
2015-06-01
We numerically study different forms of nanoscale gaseous domains on a model for rough surfaces. Our calculations based on the constrained lattice density functional theory show that the inter-connectivity of pores surrounded by neighboring nanoposts, which model the surface roughness, leads to the formation of stable microscopic Cassie states. We investigate the dependence of the stability of the micro-Cassie states on substrate roughness, fluid-solid interaction, and chemical potential and then address the differences between the origin of the micro-Cassie states and that of surface nanobubbles within similar models. Finally, we show that the micro-Cassie states share some features with experimentally observed micropancakes at solid-water interfaces.
Resource Management in Constrained Dynamic Situations
NASA Astrophysics Data System (ADS)
Seok, Jinwoo
Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Furthermore, many real-world situations involve dynamic environments. Many traditional problems are formulated based on the assumptions of given probabilities or perfect knowledge of future events. However, in many cases, the future is completely unknown, and information on or probabilities about future events are not available. In other words, we operate in unpredictably dynamic situations. Thus, a method is needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Thus, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule by considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. 
We define a recomposable restricted finite state machine to handle limited resource situations and unpredictably dynamic environments for the planning level. To obtain a policy, dynamic programming is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. Then, we use model predictive control for the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited resource situations and unpredictably dynamic environments. The importance of cooperation in the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operations. The importance of considering the system constraints and interactions between the subsystems is indicated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
NASA Technical Reports Server (NTRS)
Blanchard, D. L.; Chan, F. K.
1973-01-01
For a time-dependent, n-dimensional, special diagonal Hamilton-Jacobi equation, a necessary and sufficient condition for separation of variables to yield a complete integral of the assumed form was established by specifying the admissible forms in terms of arbitrary functions. A complete integral was then expressed in terms of these arbitrary functions and the n irreducible constants. As an application of the results obtained for the two-dimensional Hamilton-Jacobi equation, an analysis was made of a comparatively wide class of dynamical problems involving a particle moving in Euclidean three-dimensional space under the action of external forces but constrained on a moving surface. All the possible cases in which this equation has a complete integral of the assumed form were obtained, and these are tabulated for reference.
A distance constrained synaptic plasticity model of C. elegans neuronal network
NASA Astrophysics Data System (ADS)
Badhwar, Rahul; Bagler, Ganesh
2017-03-01
Brain research has been driven by enquiry for principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The identified 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small world nature. The simple 1D ring model is critically poised for the number of feed forward motifs, neuronal clustering and characteristic path-length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains small world nature, saturation of feed forward motifs as well as the observed number of driver neurons. The distance constrained model suggests optimum long distance synaptic connections as a key feature specifying control of the network.
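A toy version of distance-constrained rewiring on a 1D ring can be sketched as follows. The exponential distance penalty, network size, and all parameters are assumptions for illustration; they are not the authors' fitted plasticity model.

```python
import math
import random

def ring_positions(n):
    """Place n neurons uniformly on a unit circle (1D ring model)."""
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

def distance_constrained_rewire(n=60, k=2, beta=0.3, decay=3.0, seed=1):
    """Start from a ring lattice (each node linked to its k nearest
    neighbours on one side); rewire each edge with probability beta to a
    target drawn with probability proportional to exp(-decay * distance),
    i.e. long-range shortcuts are penalized by the distance constraint."""
    rng = random.Random(seed)
    pos = ring_positions(n)
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            edges.add((i, (i + j) % n))
    rewired = set()
    for (a, b) in sorted(edges):
        if rng.random() < beta:
            weights = [math.exp(-decay * math.dist(pos[a], pos[t]))
                       if t != a else 0.0 for t in range(n)]
            total = sum(weights)
            r = rng.random() * total
            for t, w in enumerate(weights):
                if w == 0.0:
                    continue
                r -= w
                if r <= 0:
                    b = t
                    break
        rewired.add((a, b))
    return rewired

net = distance_constrained_rewire()
```

Varying `decay` interpolates between Watts-Strogatz-like random shortcuts (decay near 0) and purely local wiring (large decay), which is the regime comparison the distance-constrained model motivates.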
De Kauwe, Martin G; Medlyn, Belinda E; Zaehle, Sönke; Walker, Anthony P; Dietze, Michael C; Wang, Ying-Ping; Luo, Yiqi; Jain, Atul K; El-Masri, Bassil; Hickler, Thomas; Wårlind, David; Weng, Ensheng; Parton, William J; Thornton, Peter E; Wang, Shusen; Prentice, I Colin; Asao, Shinichi; Smith, Benjamin; McCarthy, Heather R; Iversen, Colleen M; Hanson, Paul J; Warren, Jeffrey M; Oren, Ram; Norby, Richard J
2014-01-01
Elevated atmospheric CO2 concentration (eCO2) has the potential to increase vegetation carbon storage if increased net primary production causes increased long-lived biomass. Model predictions of eCO2 effects on vegetation carbon storage depend on how allocation and turnover processes are represented. We used data from two temperate forest free-air CO2 enrichment (FACE) experiments to evaluate representations of allocation and turnover in 11 ecosystem models. Observed eCO2 effects on allocation were dynamic. Allocation schemes based on functional relationships among biomass fractions that vary with resource availability were best able to capture the general features of the observations. Allocation schemes based on constant fractions or resource limitations performed less well, with some models having unintended outcomes. Few models represent turnover processes mechanistically and there was wide variation in predictions of tissue lifespan. Consequently, models did not perform well at predicting eCO2 effects on vegetation carbon storage. Our recommendations to reduce uncertainty include: use of allocation schemes constrained by biomass fractions; careful testing of allocation schemes; and synthesis of allocation and turnover data in terms of model parameters. Data from intensively studied ecosystem manipulation experiments are invaluable for constraining models and we recommend that such experiments should attempt to fully quantify carbon, water and nutrient budgets.
Ballet, Steven; Feytens, Debby; Buysse, Koen; Chung, Nga N.; Lemieux, Carole; Tumati, Suneeta; Keresztes, Attila; Van Duppen, Joost; Lai, Josephine; Varga, Eva; Porreca, Frank; Schiller, Peter W.; Broeck, Jozef Vanden; Tourwé, Dirk
2011-01-01
A screening of conformationally constrained aromatic amino acids as base cores for the preparation of new NK1 receptor antagonists resulted in the discovery of three new NK1 receptor antagonists, 19 [Ac-Aba-Gly-NH-3′,5′-(CF3)2-Bn], 20 [Ac-Aba-Gly-NMe-3′,5′-(CF3)2-Bn] and 23 [Ac-Tic-NMe-3′,5′-(CF3)2-Bn], which were able to counteract the agonist effect of substance P, the endogenous ligand of NK1R. The most active NK1 antagonist of the series, 20 [Ac-Aba-Gly-NMe-3′,5′-(CF3)2-Bn], was then used in the design of a novel, potent chimeric opioid agonist-NK1 receptor antagonist, 35 [Dmt-D-Arg-Aba-Gly-NMe-3′,5′-(CF3)2-Bn], which combines the N-terminus of the established Dmt1-DALDA agonist opioid pharmacophore (H-Dmt-D-Arg-Phe-Lys-NH2) and 20, the NK1R ligand. The opioid component of the chimeric compound 35, i.e. Dmt-D-Arg-Aba-Gly-NH2 36, also proved to be an extremely potent and balanced μ- and δ opioid receptor agonist with subnanomolar binding and in vitro functional activity.
Evidence for Endothermy in Pterosaurs Based on Flight Capability Analyses
NASA Astrophysics Data System (ADS)
Jenkins, H. S.; Pratson, L. F.
2005-12-01
Previous attempts to constrain flight capability in pterosaurs have relied heavily on the fossil record, using bone articulation and apparent muscle allocation to evaluate flight potential (Frey et al., 1997; Padian, 1983; Bramwell, 1974). However, broad definitions of the physical parameters necessary for flight in pterosaurs remain loosely defined and few systematic approaches to constraining flight capability have been synthesized (Templin, 2000; Padian, 1983). Here we present a new method to assess flight capability in pterosaurs as a function of humerus length and flight velocity. By creating an energy-balance model to evaluate the power required for flight against the power available to the animal, we derive a `U'-shaped power curve and infer optimal flight speeds and maximal wingspan lengths for the pterosaurs Quetzalcoatlus northropi and Pteranodon ingens. Our model corroborates empirically derived power curves for the modern black-billed magpie (Pica pica) and accurately reproduces the mechanical power curve for modern cockatiels (Nymphicus hollandicus) (Tobalske et al., 2003). When we adjust our model to include an endothermic metabolic rate for pterosaurs, we find a maximal wingspan length of 18 meters for Q. northropi. Model runs using an ectothermic metabolism derive maximal wingspans of 6-8 meters. As estimates based on fossil evidence show total wingspan lengths reaching up to 15 meters for Q. northropi, we conclude that large pterosaurs may have been endothermic and therefore more metabolically similar to birds than to reptiles.
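The `U'-shaped power curve arises because induced power falls with airspeed while parasite/profile power grows roughly as the cube of airspeed. A minimal numeric sketch with illustrative coefficients (not the pterosaur model's parameters):

```python
def flight_power(v, A=200.0, B=0.05):
    """Toy power curve P(v) = A/v + B*v**3: induced power (falls with
    speed) plus parasite/profile power (rises as v cubed). Units and the
    coefficients A, B are illustrative only."""
    return A / v + B * v ** 3

def optimal_speed(A=200.0, B=0.05):
    """Analytic minimum: dP/dv = -A/v**2 + 3*B*v**2 = 0, so
    v* = (A / (3*B)) ** 0.25."""
    return (A / (3.0 * B)) ** 0.25

speeds = [1.0 + 0.5 * k for k in range(60)]
powers = [flight_power(v) for v in speeds]
v_star = optimal_speed()
```

Comparing this available-power-vs-required-power balance at the minimum of the curve is what lets the energy-balance model translate a metabolic rate assumption into a maximal wingspan.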
An RBF-based reparameterization method for constrained texture mapping.
Yu, Hongchuan; Lee, Tong-Yee; Yeh, I-Cheng; Yang, Xiaosong; Li, Wenxi; Zhang, Jian J
2012-07-01
Texture mapping has long been used in computer graphics to enhance the realism of virtual scenes. To match the 3D model feature points with the corresponding pixels in a texture image, however, surface parameterization must satisfy specific positional constraints. Despite numerous research efforts, the construction of a mathematically robust, foldover-free parameterization subject to positional constraints continues to be a challenge. In the present paper, this foldover problem is addressed by developing a radial basis function (RBF)-based reparameterization. Given an initial 2D embedding of a 3D surface, the proposed method can reparameterize the 2D embedding into a foldover-free 2D mesh, satisfying a set of user-specified constraint points. In addition, this approach is mesh-free. Therefore, generating smooth texture mapping results is possible without extra smoothing optimization.
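A minimal sketch of the core interpolation step: an RBF warp of the plane (Gaussian kernel plus affine term) that meets user constraints exactly. This simplification omits the paper's foldover-free guarantee; kernel choice and parameters are assumptions.

```python
import numpy as np

def rbf_warp(src, dst, eps=1.0):
    """2-D warp f(x) = sum_i w_i * phi(||x - src_i||) + affine term,
    interpolating f(src_i) = dst_i (Gaussian kernel; illustrative only)."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-eps * d2)                       # n x n kernel matrix
    P = np.hstack([np.ones((n, 1)), src])       # affine (constant + linear)
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.vstack([dst, np.zeros((3, 2))])
    coef = np.linalg.solve(A, rhs)
    w, a = coef[:n], coef[n:]
    def f(x):
        x = np.atleast_2d(x)
        d2x = ((x[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2x) @ w + np.hstack([np.ones((len(x), 1)), x]) @ a
    return f

# four constraint points dragged to new positions
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = np.array([[0.1, 0.0], [1.0, 0.1], [0.0, 0.9], [0.9, 1.0]])
f = rbf_warp(src, dst)
err = np.abs(f(src) - dst).max()    # constraints met to machine precision
```

Because the warp is defined everywhere in the plane, the approach is mesh-free, as the abstract notes.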
Splicing and transcription touch base: co-transcriptional spliceosome assembly and function
Herzel, Lydia; Ottoz, Diana S. M.; Alpert, Tara; Neugebauer, Karla M.
2018-01-01
Several macromolecular machines collaborate to produce eukaryotic messenger RNA. RNA polymerase II (Pol II) translocates along genes that are up to millions of base pairs in length and generates a flexible RNA copy of the DNA template. This nascent RNA harbours introns that are removed by the spliceosome, which is a megadalton ribonucleoprotein complex that positions the distant ends of the intron into its catalytic centre. Emerging evidence that the catalytic spliceosome is physically close to Pol II in vivo implies that transcription and splicing occur on similar timescales and that the transcription and splicing machineries may be spatially constrained. In this Review, we discuss aspects of spliceosome assembly, transcription elongation and other co-transcriptional events that allow the temporal coordination of co-transcriptional splicing. PMID:28792005
Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations
Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha
2015-04-30
Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.
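The row subproblems arising in such alternating schemes are small and independent. Below is a hedged sketch of one bound-constrained Newton solve for the Poisson/KL objective on toy data; it is not the paper's actual solver.

```python
import numpy as np

# One independent row subproblem of a KL (Poisson) factorization:
#   minimize over b >= 0:  f(b) = sum_i [ (M b)_i - x_i * log((M b)_i) ]
# solved with a damped Newton iteration kept strictly positive.
rng = np.random.default_rng(0)
M = rng.uniform(0.5, 1.5, size=(6, 2))
b_true = np.array([1.0, 2.0])
x = M @ b_true                       # noiseless "counts": optimum is b_true

b = np.ones(2)
for _ in range(50):
    mb = M @ b
    g = M.T @ (1.0 - x / mb)                  # gradient of f
    H = M.T @ ((x / mb**2)[:, None] * M)      # Hessian of f (pos. definite)
    step = np.linalg.solve(H, g)
    t = 1.0
    while np.any(b - t * step <= 0.0):        # damp to stay inside b > 0
        t *= 0.5
    b = b - t * step

err = np.abs(b - b_true).max()
```

The convexity of the per-row objective is what makes these small Newton solves reliable building blocks inside the alternating outer loop.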
Advances in knowledge-based software engineering
NASA Technical Reports Server (NTRS)
Truszkowski, Walt
1991-01-01
The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.
Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required
NASA Astrophysics Data System (ADS)
Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.
2017-12-01
A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements using only one measurement technique or phase type.
However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the starting model. We also explore what types of datasets are needed to uniquely constrain the orientation(s) of anisotropic symmetry if the mechanism is assumed.
NASA Astrophysics Data System (ADS)
Mills, A. L.; Ford, R. M.; Vallino, J. J.; Herman, J. S.; Hornberger, G. M.
2001-12-01
Restoration of high-quality groundwater has been an elusive engineering goal. Consequently, natural microbially-mediated reactions are increasingly relied upon to degrade organic contaminants, including hydrocarbons and many synthetic compounds. Of concern is how the introduction of an organic chemical contaminant affects the indigenous microbial communities, the geochemistry of the aquifer, and the function of the ecosystem. The presence of functional redundancy in microbial communities suggests that recovery of the community after a disturbance such as a contamination event could easily result in a community that is similar in function to that which existed prior to the contamination, but which is compositionally quite different. To investigate the relationship between community structure and function, we observed the response of a diverse microbial community obtained from raw sewage to a dynamic redox environment using an aerobic/anaerobic/aerobic cycle. To evaluate changes in community function, CO2, pH, ammonium and nitrate levels were monitored. A phylogenetically-based DNA technique (tRFLP) was used to assess changes in microbial community structure. Principal component analysis of the tRFLP data revealed significant changes in the composition of the microbial community that correlated well with changes in community function. Results from our experiments will be discussed in the context of a metabolic model based on the biogeochemistry of the system. The governing philosophy of this thermodynamically constrained metabolic model is that living systems synthesize and allocate cellular machinery in such a way as to "optimally" utilize available resources in the environment. The robustness of this optimization-based approach provides a powerful tool for studying relationships between microbial diversity and ecosystem function.
NASA Astrophysics Data System (ADS)
van Uitert, Edo; Joachimi, Benjamin; Joudaki, Shahab; Amon, Alexandra; Heymans, Catherine; Köhlinger, Fabian; Asgari, Marika; Blake, Chris; Choi, Ami; Erben, Thomas; Farrow, Daniel J.; Harnois-Déraps, Joachim; Hildebrandt, Hendrik; Hoekstra, Henk; Kitching, Thomas D.; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Miller, Lance; Nakajima, Reiko; Schneider, Peter; Valentijn, Edwin; Viola, Massimo
2018-06-01
We present cosmological parameter constraints from a joint analysis of three cosmological probes: the tomographic cosmic shear signal in ∼450 deg² of data from the Kilo Degree Survey (KiDS), the galaxy-matter cross-correlation signal of galaxies from the Galaxies And Mass Assembly (GAMA) survey determined with KiDS weak lensing, and the angular correlation function of the same GAMA galaxies. We use fast power spectrum estimators that are based on simple integrals over the real-space correlation functions, and show that they are practically unbiased over relevant angular frequency ranges. We test our full pipeline on numerical simulations that are tailored to KiDS and retrieve the input cosmology. By fitting different combinations of power spectra, we demonstrate that the three probes are internally consistent. For all probes combined, we obtain S_8 ≡ σ_8(Ω_m/0.3)^{1/2} = 0.800^{+0.029}_{-0.027}, consistent with Planck and the fiducial KiDS-450 cosmic shear correlation function results. Marginalizing over wide priors on the mean of the tomographic redshift distributions yields consistent results for S_8 with an increase of 28 per cent in the error. The combination of probes results in a 26 per cent reduction in uncertainties of S_8 over using the cosmic shear power spectra alone. The main gain from these additional probes comes through their constraining power on nuisance parameters, such as the galaxy intrinsic alignment amplitude or potential shifts in the redshift distributions, which are up to a factor of 2 better constrained compared to using cosmic shear alone, demonstrating the value of large-scale structure probe combination.
Reionization of Hydrogen and Helium by Early Stars and Quasars
NASA Astrophysics Data System (ADS)
Wyithe, J. Stuart B.; Loeb, Abraham
2003-04-01
We compute the reionization histories of hydrogen and helium caused by the ionizing radiation fields produced by stars and quasars. For the quasars we use a model based on halo-merger rates that reproduces all known properties of the quasar luminosity function at high redshifts. The less constrained properties of the ionizing radiation produced by stars are modeled with two free parameters: (i) a transition redshift, z_tran, above which the stellar population is dominated by massive, zero-metallicity stars and below which it is dominated by a Scalo mass function; and (ii) the product of the escape fraction of stellar ionizing photons from their host galaxies and the star formation efficiency, f_esc f_*. We constrain the allowed range of these free parameters at high redshifts on the basis of the lack of the H I Gunn-Peterson trough at z ≲ 6 and the upper limit on the total intergalactic optical depth for electron scattering, τ_es < 0.18, from recent cosmic microwave background (CMB) experiments. We find that quasars ionize helium by a redshift z ∼ 4, but cannot reionize hydrogen by themselves before z ∼ 6. A major fraction of the allowed combinations of f_esc f_* and z_tran leads to an early peak in the ionized fraction because of the presence of metal-free stars at high redshifts. This sometimes results in two reionization epochs, namely, an early H II or He III overlap phase followed by recombination and a second overlap phase. Even if early overlap is not achieved, the peak in the visibility function for scattering of the CMB often coincides with the early ionization phase rather than with the actual reionization epoch. Consequently, τ_es does not correspond directly to the reionization redshift. We generically find values of τ_es ≳ 7%, which should be detectable by the MAP satellite.
Mixed Integer Programming and Heuristic Scheduling for Space Communication
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2013-01-01
Optimal planning and scheduling were developed for a communication network in which the nodes communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, which was then solved using heuristic optimization. The communication network consists of space and ground assets with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, while at other times communication is impossible because the asset is inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the problem dimension under the traditional method to that of the proposed formulation is approximately N (single antenna) to 2N (arraying), where N is the number of receiving antennas of a node. By introducing the special penalty function, the MIP problem with a non-differentiable cost function and nonlinear constraints can be converted into a continuous-variable problem, whose solution is possible.
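The integrality-penalty idea can be illustrated on a toy link-selection problem: the term x(1-x) vanishes exactly at binary values, so at feasible 0/1 points the continuous surrogate coincides with the MIP objective. The paper's actual penalty function is not reproduced here; the rates and capacity are invented.

```python
import numpy as np
from itertools import product

def penalized(x, rate, cap, mu=50.0):
    """Continuous surrogate: maximize total rate, penalize non-integrality
    and exceeding the antenna capacity `cap`."""
    integrality = np.sum(x * (1.0 - x))       # zero iff every x_i is 0 or 1
    overuse = max(0.0, float(np.sum(x)) - cap)
    return -float(rate @ x) + mu * integrality + mu * overuse**2

rate = np.array([3.0, 1.0, 2.0])              # data rate of each candidate link
# sanity check by enumerating the binary vertices of the relaxed cube
vertices = [np.array(v, dtype=float) for v in product([0, 1], repeat=3)]
best = min((penalized(v, rate, cap=2), tuple(v)) for v in vertices)
```

At the binary optimum the penalty contributes nothing, so the surrogate recovers the MIP value; a continuous heuristic optimizer searches the cube for exactly these points.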
THE LOCAL [C ii] 158 μm EMISSION LINE LUMINOSITY FUNCTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemmati, Shoubaneh; Yan, Lin; Capak, Peter
We present, for the first time, the local [C ii] 158 μm emission line luminosity function measured using a sample of more than 500 galaxies from the Revised Bright Galaxy Sample. [C ii] luminosities are measured from the Herschel PACS observations of the Luminous Infrared Galaxies (LIRGs) in the Great Observatories All-sky LIRG Survey and estimated for the rest of the sample based on the far-infrared (far-IR) luminosity and color. The sample covers 91.3% of the sky and is complete at S_{60μm} > 5.24 Jy. We calculate the completeness as a function of [C ii] line luminosity and distance, based on the far-IR color and flux densities. The [C ii] luminosity function is constrained in the range ∼10^{7-9} L_⊙ using both the 1/V_max and maximum likelihood methods. The shape of our derived [C ii] emission line luminosity function agrees well with the IR luminosity function. For the CO(1-0) and [C ii] luminosity functions to agree, we propose a varying ratio of [C ii]/CO(1-0) as a function of CO luminosity, with larger ratios for fainter CO luminosities. Limited [C ii] high-redshift observations as well as estimates based on the IR and UV luminosity functions are suggestive of an evolution in the [C ii] luminosity function similar to the evolution trend of the cosmic star formation rate density. Deep surveys using the Atacama Large Millimeter Array with full capability will be able to confirm this prediction.
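The 1/V_max estimator referenced above weights each galaxy by the inverse of the volume within which it would remain above the flux limit. A minimal Euclidean sketch (toy luminosities and an arbitrary-unit flux limit; the paper additionally applies completeness corrections in far-IR colour and flux):

```python
import numpy as np

# Toy 1/Vmax luminosity function: each object contributes 1/V_max(L)
# to the number density of its luminosity bin.
flux_limit = 5.24e-15                       # survey flux limit (arbitrary units)
L = np.array([1e7, 1e8, 1e8, 1e9])          # line luminosities (arbitrary units)

# max distance at which flux L/(4 pi d^2) still exceeds the limit
d_max = np.sqrt(L / (4 * np.pi * flux_limit))
v_max = (4.0 / 3.0) * np.pi * d_max**3      # Euclidean volume probed

bins = np.array([5e6, 5e7, 5e8, 5e9])       # luminosity bin edges
phi, _ = np.histogram(L, bins=bins, weights=1.0 / v_max)
```

Brighter objects are visible over larger volumes, so they receive smaller weights; this is what corrects the raw counts into a space density.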
Influence function based variance estimation and missing data issues in case-cohort studies.
Mark, S D; Katki, H
2001-12-01
Recognizing that the efficiency in relative risk estimation for the Cox proportional hazards model is largely constrained by the total number of cases, Prentice (1986) proposed the case-cohort design in which covariates are measured on all cases and on a random sample of the cohort. Subsequent to Prentice, other methods of estimation and sampling have been proposed for these designs. We formalize an approach to variance estimation suggested by Barlow (1994), and derive a robust variance estimator based on the influence function. We consider the applicability of the variance estimator to all the proposed case-cohort estimators, and derive the influence function when known sampling probabilities in the estimators are replaced by observed sampling fractions. We discuss the modifications required when cases are missing covariate information. The missingness may occur by chance, and be completely at random; or may occur as part of the sampling design, and depend upon other observed covariates. We provide an adaptation of S-plus code that allows estimating influence function variances in the presence of such missing covariates. Using examples from our current case-cohort studies on esophageal and gastric cancer, we illustrate how our results are useful in solving design and analytic issues that arise in practice.
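The influence-function route to variance estimation is easiest to see for the sample mean, where IF_i = x_i − x̄ and the IF-based variance reproduces the classical one exactly; the paper derives the analogous (much more involved) influence function for case-cohort Cox estimators.

```python
import numpy as np

# Influence-function variance of the simplest estimator, the sample mean:
#   IF_i = x_i - mean(x),   var_hat = sum_i IF_i^2 / n^2
rng = np.random.default_rng(3)
x = rng.normal(10.0, 2.0, size=5000)

inf_fn = x - x.mean()                      # empirical influence contributions
var_if = np.sum(inf_fn**2) / len(x) ** 2   # IF-based variance of the mean
var_classic = x.var(ddof=0) / len(x)       # classical variance of the mean
```

The appeal of the IF formulation is that the same recipe (sum of squared influence contributions) extends to estimators with no simple closed-form variance.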
ERIC Educational Resources Information Center
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
Standards as a Tool for Teaching and Assessing Cross-Curricular Writing
ERIC Educational Resources Information Center
Evensen, Lars Sigfred; Berge, Kjell Lars; Thygesen, Ragnar; Matre, Synnove; Solheim, Randi
2016-01-01
The Berge et al. article in this volume presents the functional construct of writing that underlies summative and formative assessment of writing as a key competency in Norway. A functional construct implies that specific acts of writing and their purposes constrain what is a relevant selection among the semiotic resources that writing generally…
ERIC Educational Resources Information Center
Tay, Louis; Vermunt, Jeroen K.; Wang, Chun
2013-01-01
We evaluate the item response theory with covariates (IRT-C) procedure for assessing differential item functioning (DIF) without preknowledge of anchor items (Tay, Newman, & Vermunt, 2011). This procedure begins with a fully constrained baseline model, and candidate items are tested for uniform and/or nonuniform DIF using the Wald statistic.…
Hydrologic and hydraulic flood forecasting constrained by remote sensing data
NASA Astrophysics Data System (ADS)
Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.
2017-12-01
Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real time. The hydraulic model is LISFLOOD-FP, which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space will be discussed.
NASA Astrophysics Data System (ADS)
Ma, Yin-Zhe; Gong, Guo-Dong; Sui, Ning; He, Ping
2018-03-01
We calculate the cross-correlation function ⟨(ΔT/T)(v·n̂/σ_v)⟩ between the kinetic Sunyaev-Zeldovich (kSZ) effect and the reconstructed peculiar velocity field using linear perturbation theory, with the aim of constraining the optical depth τ and the peculiar velocity bias of central galaxies with Planck data. We vary the optical depth τ and the velocity bias function b_v(k) = 1 + b(k/k_0)^n, and fit the model to the data, with and without varying the calibration parameter y_0 that controls the vertical shift of the correlation function. By constructing a likelihood function and constraining the τ, b and n parameters, we find that the quadratic power-law model of velocity bias, b_v(k) = 1 + b(k/k_0)², provides the best fit to the data. The best-fit values are τ = (1.18 ± 0.24) × 10⁻⁴, b = -0.84^{+0.16}_{-0.20} and y_0 = (12.39^{+3.65}_{-3.66}) × 10⁻⁹ (68 per cent confidence level). The probability of b > 0 is only 3.12 × 10⁻⁸, which clearly suggests a detection of scale-dependent velocity bias. The fitting results indicate that the large-scale (k ≤ 0.1 h Mpc⁻¹) velocity bias is unity, while on small scales the bias tends to become negative. The value of τ is consistent with the stellar mass-halo mass and optical depth relation proposed in the literature, and the negative velocity bias on small scales is consistent with the peak-background split theory. Our method provides a direct tool for studying the gaseous and kinematic properties of galaxies.
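For the quadratic model b_v(k) = 1 + b(k/k_0)², the bias amplitude has a closed-form least-squares solution. A toy fit on synthetic, noiseless data (the pivot k_0 and the mock measurements are invented; the paper fits τ, b and n jointly to the kSZ cross-correlation):

```python
import numpy as np

# Least-squares estimate of b in  b_v(k) = 1 + b * (k/k0)^2
k0 = 1.0                                   # assumed pivot scale, h/Mpc
k = np.linspace(0.05, 1.0, 20)             # wavenumbers of mock measurements
b_true = -0.84                             # the paper's best-fit amplitude
bv_obs = 1.0 + b_true * (k / k0) ** 2      # noiseless mock velocity bias

# linear model (bv - 1) = b * basis  =>  closed-form least squares
basis = (k / k0) ** 2
b_fit = np.sum((bv_obs - 1.0) * basis) / np.sum(basis**2)
```

On large scales (small k) the model tends to b_v → 1, matching the paper's finding of unit bias for k ≤ 0.1 h Mpc⁻¹.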
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis of the demonstrative example is compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
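The "predicted flow" device, replacing an expensive solve with a first-order Taylor expansion during the one-dimensional search, can be sketched with a stand-in algebraic model. The functions below are invented for illustration; the paper's flow solution comes from the Euler equations.

```python
import numpy as np

# First-order "predicted flow": during the 1-D search, approximate the
# expensive solve Q(x0 + dx) by Q(x0) + dQ/dx(x0) @ dx.
def flow_solution(x):         # toy stand-in for an expensive flow solve
    return np.array([x[0]**2 + x[1], np.sin(x[1])])

def flow_jacobian(x):         # "quasi-analytical" sensitivity dQ/dx
    return np.array([[2 * x[0], 1.0],
                     [0.0, np.cos(x[1])]])

x0 = np.array([1.0, 0.5])                 # current design point
dx = np.array([0.02, -0.01])              # small step along the search line
q_pred = flow_solution(x0) + flow_jacobian(x0) @ dx   # cheap prediction
q_true = flow_solution(x0 + dx)                       # expensive re-solve
err = np.abs(q_pred - q_true).max()       # O(||dx||^2) truncation error
```

The prediction error shrinks quadratically with the step size, which is why the approximation is trustworthy inside a short line search but not for large design changes.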
Quantum diffusion during inflation and primordial black holes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pattison, Chris; Assadullahi, Hooshyar; Wands, David
We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ∼1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.
Quantum diffusion during inflation and primordial black holes
NASA Astrophysics Data System (ADS)
Pattison, Chris; Vennin, Vincent; Assadullahi, Hooshyar; Wands, David
2017-10-01
We calculate the full probability density function (PDF) of inflationary curvature perturbations, even in the presence of large quantum backreaction. Making use of the stochastic-δ N formalism, two complementary methods are developed, one based on solving an ordinary differential equation for the characteristic function of the PDF, and the other based on solving a heat equation for the PDF directly. In the classical limit where quantum diffusion is small, we develop an expansion scheme that not only recovers the standard Gaussian PDF at leading order, but also allows us to calculate the first non-Gaussian corrections to the usual result. In the opposite limit where quantum diffusion is large, we find that the PDF is given by an elliptic theta function, which is fully characterised by the ratio between the squared width and height (in Planck mass units) of the region where stochastic effects dominate. We then apply these results to the calculation of the mass fraction of primordial black holes from inflation, and show that no more than ~ 1 e-fold can be spent in regions of the potential dominated by quantum diffusion. We explain how this requirement constrains inflationary potentials with two examples.
Communication: CDFT-CI couplings can be unreliable when there is fractional charge transfer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu
2015-12-21
Constrained density functional theory with configuration interaction (CDFT-CI) is a useful, low-cost tool for the computational prediction of electronic couplings between pseudo-diabatic constrained electronic states. Such couplings are of paramount importance in electron transfer theory and transition state theory, among other areas of chemistry. Unfortunately, CDFT-CI occasionally fails significantly, predicting a coupling that does not decay exponentially with distance and/or overestimating the expected coupling by an order of magnitude or more. In this communication, we show that the eigenvalues of the difference density matrix between the two constrained states can be used as an a priori metric to determine when CDFT-CI is likely to be reliable: when the eigenvalues are near 0 or ±1, transfer of a whole electron is occurring, and CDFT-CI can be trusted. We demonstrate the utility of this metric with several illustrative examples.
Constrained Low-Interference Relay Node Deployment for Underwater Acoustic Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Wenping
An Underwater Acoustic Wireless Sensor Network (UA-WSN) consists of many resource-constrained Underwater Sensor Nodes (USNs), which are deployed to perform collaborative monitoring tasks over a given region. One way to preserve network connectivity while guaranteeing other network QoS is to deploy some Relay Nodes (RNs) in the network; RNs are more capable than USNs but also more expensive. This paper addresses the Constrained Low-interference Relay Node Deployment (C-LRND) problem for 3-D UA-WSNs, in which the RNs are placed at a subset of candidate locations to ensure connectivity between the USNs, under constraints on both the number of RNs deployed and the total incremental interference. We first prove that the problem is NP-hard, then present a general approximation algorithm framework and obtain two polynomial-time O(1)-approximation algorithms.
Communication: CDFT-CI couplings can be unreliable when there is fractional charge transfer
NASA Astrophysics Data System (ADS)
Mavros, Michael G.; Van Voorhis, Troy
2015-12-01
Constrained density functional theory with configuration interaction (CDFT-CI) is a useful, low-cost tool for the computational prediction of electronic couplings between pseudo-diabatic constrained electronic states. Such couplings are of paramount importance in electron transfer theory and transition state theory, among other areas of chemistry. Unfortunately, CDFT-CI occasionally fails significantly, predicting a coupling that does not decay exponentially with distance and/or overestimating the expected coupling by an order of magnitude or more. In this communication, we show that the eigenvalues of the difference density matrix between the two constrained states can be used as an a priori metric to determine when CDFT-CI is likely to be reliable: when the eigenvalues are near 0 or ±1, transfer of a whole electron is occurring, and CDFT-CI can be trusted. We demonstrate the utility of this metric with several illustrative examples.
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
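A minimal sketch of an ODE-constrained mixture: each subpopulation's mean follows dx/dt = −k·x (the closed-form solution is used directly), and a population snapshot is modelled as a Gaussian mixture around those ODE trajectories. The rates, weights and noise level below are invented for illustration.

```python
import numpy as np

# ODE constrained mixture, minimal sketch: subpopulation s has mean
# trajectory x_s(t) = exp(-k_s * t), and the snapshot at time t is a
# Gaussian mixture centred on those ODE solutions.
def mixture_pdf(y, t, ks, weights, sigma=0.05):
    means = np.exp(-np.asarray(ks) * t)          # ODE solutions x_s(t)
    comp = np.exp(-(y[:, None] - means) ** 2 / (2 * sigma**2))
    comp /= np.sqrt(2 * np.pi) * sigma           # per-component Gaussian pdf
    return comp @ np.asarray(weights)            # weighted mixture density

rng = np.random.default_rng(2)
ks, weights, t = [0.5, 3.0], [0.4, 0.6], 1.0     # two kinetic subpopulations
# draw a snapshot: 40% slow responders, 60% fast responders, plus noise
labels = rng.random(400) < weights[1]
y = np.exp(-np.where(labels, ks[1], ks[0]) * t) + rng.normal(0, 0.05, 400)
dens = mixture_pdf(y, t, ks, weights)            # mixture likelihood per cell
```

In the full method, maximizing this mixture likelihood over the kinetic rates and weights (across several time points and conditions) is what recovers the subpopulation structure.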
Use of constrained optimization in the conceptual design of a medium-range subsonic transport
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1980-01-01
Constrained parameter optimization was used to perform the optimal conceptual design of a medium-range transport configuration. The impact of choosing a given performance index was studied, and the required income for a 15 percent return on investment was proposed as a figure of merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. A comparison was made for each of the parameter variations between the baseline configuration and the optimally redesigned configuration.
Noncoding origins of anthropoid traits and a new null model of transposon functionalization.
del Rosario, Ricardo C H; Rayan, Nirmala Arul; Prabhakar, Shyam
2014-09-01
Little is known about novel genetic elements that drove the emergence of anthropoid primates. We exploited the sequencing of the marmoset genome to identify 23,849 anthropoid-specific constrained (ASC) regions and confirmed their robust functional signatures. Of the ASC base pairs, 99.7% were noncoding, suggesting that novel anthropoid functional elements were overwhelmingly cis-regulatory. ASCs were highly enriched in loci associated with fetal brain development, motor coordination, neurotransmission, and vision, thus providing a large set of candidate elements for exploring the molecular basis of hallmark primate traits. We validated ASC192 as a primate-specific enhancer in proliferative zones of the developing brain. Unexpectedly, transposable elements (TEs) contributed to >56% of ASCs, and almost all TE families showed functional potential similar to that of nonrepetitive DNA. Three L1PA repeat-derived ASCs displayed coherent eye-enhancer function, thus demonstrating that the "gene-battery" model of TE functionalization applies to enhancers in vivo. Our study provides fundamental insights into genome evolution and the origins of anthropoid phenotypes and supports an elegantly simple new null model of TE exaptation.
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts; in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
NASA Astrophysics Data System (ADS)
Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.
2009-04-01
Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models, whose representation in global scale simulations has been severely hampered by a lack of systematic analyses using a sufficiently broad database. Due to its coupling to stomatal conductance, changes in the parameterisation of photosynthetic capacity may potentially influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen content. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH and modelled gross primary production (GPP) is compared to a compilation of independent observations on stand scale. Compared to the current standard parameterisation, the root-mean-squared difference between modelled and observed GPP is substantially reduced for almost all PFTs by the new parameterisation of photosynthetic capacity. We find a systematic depression of NUE (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. Photosynthetic capacity of tropical trees derived by this study is substantially lower than standard estimates currently used in terrestrial biosphere models. This causes a decrease of modelled GPP while it significantly increases modelled tropical vegetation surface temperatures, up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.
NASA Astrophysics Data System (ADS)
Proistosescu, C.; Donohoe, A.; Armour, K.; Roe, G.; Stuecker, M. F.; Bitz, C. M.
2017-12-01
Joint observations of global surface temperature and energy imbalance provide a unique opportunity to empirically constrain radiative feedbacks. However, the satellite record of Earth's radiative imbalance is relatively short and dominated by stochastic fluctuations. Estimates of radiative feedbacks obtained by regressing energy imbalance against surface temperature depend strongly on sampling choices and on assumptions about whether the stochastic fluctuations are primarily forced by atmospheric or oceanic variability (e.g. Murphy and Forster 2010, Dessler 2011, Spencer and Braswell 2011, Forster 2016). We develop a framework around a stochastic energy balance model that allows us to parse the different contributions of atmospheric and oceanic forcing based on their differing impacts on the covariance structure (or lagged regression) of temperature and radiative imbalance. We validate the framework in a hierarchy of general circulation models: the impact of atmospheric forcing is examined in unforced control simulations of fixed sea-surface temperature and slab ocean model versions; the impact of oceanic forcing is examined in coupled simulations with prescribed ENSO variability. With the impact of atmospheric and oceanic forcing constrained, we are able to predict the relationship between temperature and radiative imbalance in a fully coupled control simulation, finding that both forcing sources are needed to explain the structure of the lagged regression. We further model the dependence of feedback estimates on sampling interval by considering the effects of a finite equilibration time for the atmosphere, and issues of smoothing and aliasing. Finally, we develop a method to fit the stochastic model to the short time series of temperature and radiative imbalance by performing a Bayesian inference based on a modified version of the spectral Whittle likelihood.
We are thus able to place realistic joint uncertainty estimates on both stochastic forcing and radiative feedbacks derived from observational records. We find that these records are, as yet, too short to be useful in constraining radiative feedbacks, and we provide estimates of how the uncertainty narrows as a function of record length.
NASA Astrophysics Data System (ADS)
Otake, Y.; Leonard, S.; Reiter, A.; Rajan, P.; Siewerdsen, J. H.; Ishii, M.; Taylor, R. H.; Hager, G. D.
2015-03-01
We present a system for registering the coordinate frame of an endoscope to pre- or intraoperatively acquired CT data based on optimizing the similarity metric between an endoscopic image and an image predicted via rendering of CT. Our method is robust and semi-automatic because it takes into account physical constraints, specifically, collisions between the endoscope and the anatomy, to initialize and constrain the search. The proposed optimization method is based on a stochastic optimization algorithm that evaluates a large number of similarity metric functions in parallel on a graphics processing unit. Images from a cadaver and a patient were used for evaluation. The registration error was 0.83 mm and 1.97 mm for cadaver and patient images, respectively. The average registration time for 60 trials was 4.4 seconds. The patient study demonstrated robustness of the proposed algorithm against a moderate anatomical deformation.
Huang, Kuo -Ling; Mehrotra, Sanjay
2016-11-08
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
Sun, Pengzhan; Wang, Yanlei; Liu, He; Wang, Kunlin; Wu, Dehai; Xu, Zhiping; Zhu, Hongwei
2014-01-01
A mild annealing procedure was recently proposed for the scalable enhancement of graphene oxide (GO) properties with the oxygen content preserved, which was demonstrated to be attributed to the thermally driven phase separation. In this work, the structure evolution of GO with mild annealing is closely investigated. It reveals that in addition to phase separation, the transformation of oxygen functionalities also occurs, which leads to the slight reduction of GO membranes and furthers the enhancement of GO properties. These results are further supported by the density functional theory based calculations. The results also show that the amount of chemically bonded oxygen atoms on graphene decreases gradually and we propose that the strongly physisorbed oxygen species constrained in the holes and vacancies on GO lattice might be responsible for the preserved oxygen content during the mild annealing procedure. The present experimental results and calculations indicate that both the diffusion and transformation of oxygen functional groups might play important roles in the scalable enhancement of GO properties.
Teipel, Stefan; König, Alexandra; Hoey, Jesse; Kaye, Jeff; Krüger, Frank; Robillard, Julie M; Kirste, Thomas; Babiloni, Claudio
2018-06-21
Cognitive function is an important end point of treatments in dementia clinical trials. Measuring cognitive function by standardized tests, however, is biased toward highly constrained environments (such as hospitals) in selected samples. Patient-powered real-world evidence using information and communication technology devices, including environmental and wearable sensors, may help to overcome these limitations. This position paper describes current and novel information and communication technology devices and algorithms to monitor behavior and function in people with prodromal and manifest stages of dementia continuously, and discusses clinical, technological, ethical, regulatory, and user-centered requirements for collecting real-world evidence in future randomized controlled trials. Challenges of data safety, quality, and privacy and regulatory requirements need to be addressed by future smart sensor technologies. When these requirements are satisfied, these technologies will provide access to truly user relevant outcomes and broader cohorts of participants than currently sampled in clinical trials.
Nuclear PDF for neutrino and charged lepton data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovarik, K.
2011-10-06
Neutrino Deep Inelastic Scattering (DIS) on nuclei is an essential process to constrain the strange quark parton distribution functions (PDF) in the proton. The critical component on the way to using the neutrino DIS data in a proton PDF analysis is understanding the nuclear effects in parton distribution functions. We parametrize these effects by nuclear parton distribution functions (NPDF). Here we compare results from two analyses of NPDF, both done at next-to-leading order in QCD. The first uses neutral current charged-lepton (ℓ±A) DIS and Drell-Yan data for several nuclear targets and the second uses neutrino-nucleon DIS data. We compare the nuclear correction factors (F2^Fe/F2^D) for the charged-lepton data with other results from the literature. In particular, we compare and contrast fits based upon the charged-lepton DIS data with those using neutrino-nucleon DIS data.
General probability-matched relations between radar reflectivity and rain rate
NASA Technical Reports Server (NTRS)
Rosenfeld, Daniel; Wolff, David B.; Atlas, David
1993-01-01
An improved method for transforming radar-observed reflectivities Ze into rain rate R is presented. The method is based on a formulation of a Ze-R function constrained such that (1) the radar-retrieved pdf of R and all of its moments are identical to those determined from the gauges over a sufficiently large domain, and (2) the fraction of the time that it is raining above a low, but still accurately measurable, rain intensity is identical on average for the radar and for simultaneous measurements of collocated gauges. Data measured by a 1.65-deg beamwidth C-band radar and 22 gauges located in the vicinity of Darwin, Australia, are used. The resultant Ze-R functions show a strong range dependence, especially for the rain regimes characterized by strong reflectivity gradients and substantial attenuation. The application of these novel Ze-R functions to the radar data produces excellent matches to the gauge measurements without any systematic bias.
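The probability-matching idea, pairing equal quantiles of the reflectivity and gauge rain-rate distributions so that the retrieved rain-rate pdf reproduces the gauge pdf, can be sketched as follows (a simplified illustration, not the paper's range-dependent procedure; the synthetic Z-R relation in the example is invented):

```python
import numpy as np

# Illustrative probability matching: pair equal quantiles of the radar
# reflectivity distribution with equal quantiles of the gauge rain-rate
# distribution, so that by construction the retrieved rain rates reproduce
# the gauge pdf.

def probability_matched_zr(ze_samples, r_samples, n_quantiles=101):
    q = np.linspace(0.0, 1.0, n_quantiles)
    ze_q = np.quantile(ze_samples, q)  # reflectivity quantiles (dBZ)
    r_q = np.quantile(r_samples, q)    # rain-rate quantiles (mm/h)
    # The resulting Ze-R function: interpolate between matched quantile pairs.
    return lambda ze: np.interp(ze, ze_q, r_q)
```

By construction the mapping is monotone and sends the median reflectivity to the median gauge rain rate.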
Flexure Based Linear and Rotary Bearings
NASA Technical Reports Server (NTRS)
Voellmer, George M. (Inventor)
2016-01-01
A flexure based linear bearing includes top and bottom parallel rigid plates; first and second flexures connecting the top and bottom plates and constraining exactly four degrees of freedom of relative motion of the plates, the four degrees of freedom being X and Y axis translation and rotation about the X and Y axes; and a strut connecting the top and bottom plates and further constraining exactly one degree of freedom of the plates, the one degree of freedom being one of Z axis translation and rotation about the Z axis.
An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
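The inequality constraint can be illustrated outside the stabilized finite element setting with a minimal finite-difference sketch (not the paper's method): projected Gauss-Seidel for a one-dimensional model problem with the cavitation condition p ≥ 0; the load and grid size are invented for illustration.

```python
# Minimal finite-difference sketch: solve -p'' = f on (0, 1) with
# p(0) = p(1) = 0, subject to the cavitation condition p >= 0, using
# projected Gauss-Seidel on a uniform grid.

def projected_gauss_seidel(f, n=50, sweeps=5000):
    """f is a list of length n + 2 sampling the load on the grid nodes."""
    h = 1.0 / (n + 1)
    p = [0.0] * (n + 2)  # includes the two boundary values, fixed at 0
    for _ in range(sweeps):
        for i in range(1, n + 1):
            # Unconstrained Gauss-Seidel update for the -p'' = f stencil...
            p[i] = 0.5 * (p[i - 1] + p[i + 1] + h * h * f[i])
            # ...projected onto the constraint set {p >= 0}.
            if p[i] < 0.0:
                p[i] = 0.0
    return p
```

With a load that is positive on the left half and strongly negative on the right half, the solution develops a cavitated region where the pressure is identically zero, the discrete analogue of the complementarity condition the paper treats with a stabilized saddle-point formulation.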
Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model
NASA Astrophysics Data System (ADS)
Li, Jun
In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model for investors, taking into account three criteria: return, risk and liquidity. The cardinality constraint, the buy-in threshold constraint and the round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromised solution for the proposed constrained multiobjective portfolio selection model.
Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej
2015-01-01
Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.
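The least-significant-bits concatenation step and the Shannon-entropy score can be sketched as follows (helper names are illustrative; real sensor readings would replace the integers used in the example):

```python
import math
from collections import Counter

# Sketch of least-significant-bit entropy harvesting: keep only the LSB of
# each raw integer sensor reading, concatenate the bits into bytes, and
# score the output with Shannon entropy in bits per byte.

def harvest_lsb_bytes(readings):
    """Concatenate the LSBs of integer sensor readings into bytes."""
    bits = [r & 1 for r in readings]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

def shannon_entropy_bits_per_byte(data):
    """Shannon entropy of a byte string, in bits per byte (maximum 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Feeding a plain counter through the harvester exposes the weakness of a predictable source: its LSBs alternate, every harvested byte is 0x55, and the entropy collapses to zero, whereas the paper reports around 7.9 bits per byte from genuine sensor noise.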
NASA Technical Reports Server (NTRS)
Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.
2011-01-01
Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
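The decision rule underlying the filter bank, Wald's sequential probability ratio test, can be sketched for the simplified case of two fixed Gaussian hypotheses (an illustrative stand-in for the two constrained-filter likelihoods; the means, noise level, and risk values below are invented):

```python
import math

# Wald's SPRT between H0: x ~ N(mu0, sigma) and H1: x ~ N(mu1, sigma).
# alpha and beta are the false alarm and missed detection risks the
# abstract refers to; they set the two decision thresholds.

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Return 'H0', 'H1', or 'continue' after processing the samples."""
    a = math.log(beta / (1.0 - alpha))   # lower threshold: accept H0
    b = math.log((1.0 - beta) / alpha)   # upper threshold: accept H1
    llr = 0.0
    for x in samples:
        # Log-likelihood ratio increment, N(mu1, sigma) versus N(mu0, sigma).
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr <= a:
            return "H0"
        if llr >= b:
            return "H1"
    return "continue"
```

In the filter-bank setting, the two Gaussian likelihoods would be replaced by the innovation likelihoods of the two constrained epoch-state filters.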
Havas, David A; Chapp, Christopher B
2016-01-01
How does language influence the emotions and actions of large audiences? Functionally, emotions help address environmental uncertainty by constraining the body to support adaptive responses and social coordination. We propose emotions provide a similar function in language processing by constraining the mental simulation of language content to facilitate comprehension, and to foster alignment of mental states in message recipients. Consequently, we predicted that emotion-inducing language should be found in speeches specifically designed to create audience alignment: stump speeches of United States presidential candidates. We focused on phrases in the past imperfective verb aspect ("a bad economy was burdening us") that leave a mental simulation of the language content open-ended, and thus unconstrained, relative to past perfective sentences ("we were burdened by a bad economy"). As predicted, imperfective phrases appeared more frequently in stump versus comparison speeches, relative to perfective phrases. In a subsequent experiment, participants rated phrases from presidential speeches as more emotionally intense when written in the imperfective aspect compared to the same phrases written in the perfective aspect, particularly for sentences perceived as negative in valence. These findings are consistent with the notion that emotions have a role in constraining the comprehension of language, a role that may be used in communication with large audiences.
Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations
Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad
2013-01-01
Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput and the network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated.
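The parametric reformulation admits a Dinkelbach-type iteration: solve the subproblem max f(p) − λ·g(p) and update λ = f(p)/g(p) until the subproblem's optimal value vanishes, at which point λ is the optimal ratio. A hedged sketch with an invented toy "throughput/power" ratio (not the paper's cognitive-radio model):

```python
import math

# Dinkelbach-type iteration for max f(p)/g(p): solve the parametric
# subproblem max f(p) - lam * g(p), update lam to the achieved ratio, and
# stop when the subproblem value is (numerically) zero.

def dinkelbach(f, g, solve_subproblem, lam=0.0, eps=1e-9, max_iter=100):
    p = None
    for _ in range(max_iter):
        p = solve_subproblem(lam)      # argmax of f(p) - lam * g(p)
        val = f(p) - lam * g(p)
        if abs(val) < eps:
            break                      # lam is the (eps-)optimal ratio
        lam = f(p) / g(p)
    return lam, p

# Toy example: maximize log(1 + p) / (1 + p) over p in [0, 5] by grid
# search; the true optimum is p = e - 1 with ratio 1/e.
grid = [i * 1e-3 for i in range(5001)]
f = lambda p: math.log(1.0 + p)
g = lambda p: 1.0 + p
sub = lambda lam: max(grid, key=lambda p: f(p) - lam * g(p))
```

In the paper's setting the subproblem solver would be the water-filling power allocation rather than a grid search.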
Origin and Evolutionary Alteration of the Mitochondrial Import System in Eukaryotic Lineages.
Fukasawa, Yoshinori; Oda, Toshiyuki; Tomii, Kentaro; Imai, Kenichiro
2017-07-01
Protein transport systems are fundamentally important for maintaining mitochondrial function. Nevertheless, mitochondrial protein translocases such as the kinetoplastid ATOM complex have recently been shown to vary in eukaryotic lineages. Various evolutionary hypotheses have been formulated to explain this diversity. To resolve any contradiction, estimating the primitive state and clarifying changes from that state are necessary. Here, we present more likely primitive models of mitochondrial translocases, specifically the translocase of the outer membrane (TOM) and translocase of the inner membrane (TIM) complexes, using scrutinized phylogenetic profiles. We then analyzed the translocases' evolution in eukaryotic lineages. Based on those results, we propose a novel evolutionary scenario for diversification of the mitochondrial transport system. Our results indicate that presequence transport machinery was mostly established in the last eukaryotic common ancestor, and that primitive translocases already had a pathway for transporting presequence-containing proteins. Moreover, secondary changes including convergent and migrational gains of a presequence receptor in TOM and TIM complexes, respectively, likely resulted from constrained evolution. The nature of a targeting signal can constrain alteration to the protein transport complex.
The PHAT and SPLASH Surveys: Rigorous Structural Decomposition of the Andromeda Galaxy
NASA Astrophysics Data System (ADS)
Dorman, Claire; Guhathakurta, P.; Widrow, L.; Foreman-Mackey, D.; Seth, A.; Dalcanton, J.; Gilbert, K.; Lang, D.; Williams, B. F.; SPLASH Team; PHAT Team
2013-01-01
Traditional surface brightness profile (SBP) based structural decompositions of late-type galaxies into Sersic bulge, exponential disk, and power-law halo are often degenerate in the best-fit profiles. The Andromeda galaxy (M31) is the only large spiral close enough that the relative contributions of the subcomponents can be further constrained via their distinct signatures in resolved stellar population surveys. We make use of two such surveys. The SPLASH program has used the Keck/DEIMOS multiobject spectrograph to measure radial velocities of over 10,000 individual red giant branch stars in the inner 20 kpc of M31. The PHAT survey, an ongoing Hubble Space Telescope Multicycle Treasury program, has so far obtained six-filter photometry of over 90 million stars in the same region. We use an MCMC algorithm to simultaneously fit a simple bulge/disk/halo structural model to the SBP, the disk fraction as measured from kinematics, and the PHAT luminosity function. We find that the additional constraints favor a larger bulge than expected from a pure SBP fit. Comparison to galaxy formation models will constrain the formation histories of large spiral galaxies such as the Milky Way and Andromeda.
NASA Astrophysics Data System (ADS)
Polcari, Marco; Fernández, José; Albano, Matteo; Bignami, Christian; Palano, Mimmo; Stramondo, Salvatore
2017-12-01
In this work, we propose an improved algorithm to constrain the 3D ground displacement field induced by fast surface deformations due to earthquakes or landslides. Based on the integration of different data, we estimate the three displacement components by solving a function minimization problem derived from Bayes theory. We exploit the outcomes from SAR Interferometry (InSAR), Global Navigation Satellite System (GNSS) and Multiple Aperture Interferometry (MAI) measurements to retrieve the 3D surface displacement field. Any other source of information can be added to the processing chain in a simple way, since the algorithm is computationally efficient. Furthermore, we use intensity Pixel Offset Tracking (POT) to locate the discontinuity produced on the surface by a sudden deformation phenomenon and then improve the GNSS data interpolation. This approach allows the estimate to be independent of other information such as in-situ investigations, tectonic studies or knowledge of the data covariance matrix. We applied this method to investigate the ground deformation field related to the 2014 Mw 6.0 Napa Valley earthquake, which occurred a few kilometers from the San Andreas fault system.
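The core data-integration step can be sketched as weighted least squares: each geodetic observation is a known linear projection of the unknown 3D displacement, so under Gaussian errors the Bayesian estimate reduces to the normal equations (the look vectors and displacement below are invented for illustration, not real acquisition geometry):

```python
import numpy as np

# Each observation (InSAR line-of-sight, MAI along-track, GNSS component)
# is a linear projection of the unknown displacement d = (east, north, up):
# b = A d + noise. The weighted least-squares estimate solves the normal
# equations (A^T W A) d = A^T W b.

def solve_3d_displacement(A, b, weights):
    """Weighted least squares: minimize (A d - b)^T W (A d - b)."""
    W = np.diag(weights)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

d_true = np.array([0.10, -0.05, 0.30])  # metres: east, north, up
A = np.array([
    [0.38, -0.09, 0.92],   # ascending InSAR line-of-sight (hypothetical)
    [-0.38, -0.09, 0.92],  # descending InSAR line-of-sight (hypothetical)
    [0.0, 1.0, 0.0],       # MAI / along-track (north) component
])
b = A @ d_true             # noise-free synthetic observations
```

With noise-free synthetic data and three independent projections, the estimate recovers the true displacement exactly; the paper's Bayesian formulation additionally weighs real, noisy observations by their uncertainties.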
Disk mass and disk heating in the spiral galaxy NGC 3223
NASA Astrophysics Data System (ADS)
Gentile, G.; Tydtgat, C.; Baes, M.; De Geyter, G.; Koleva, M.; Angus, G. W.; de Blok, W. J. G.; Saftly, W.; Viaene, S.
2015-04-01
We present the stellar and gaseous kinematics of an Sb galaxy, NGC 3223, with the aim of determining the vertical and radial stellar velocity dispersion as a function of radius, which can help to constrain disk heating theories. Together with the observed NIR photometry, the vertical velocity dispersion is also used to determine the stellar mass-to-light (M/L) ratio, typically one of the largest uncertainties when deriving the dark matter distribution from the observed rotation curve. We find a vertical-to-radial velocity dispersion ratio of σz/σR = 1.21 ± 0.14, significantly higher than expectations from known correlations, and a weakly-constrained Ks-band stellar M/L ratio in the range 0.5-1.7, which is at the high end of (but consistent with) the predictions of stellar population synthesis models. Such a weak constraint on the stellar M/L ratio, however, does not allow us to securely determine the dark matter density distribution. To achieve this, either a statistical approach or additional data (e.g. integral-field unit) are needed. Based on observations collected at the European Southern Observatory, Chile, under proposal 68.B-0588.
NASA Astrophysics Data System (ADS)
Paksi, A. B. N.; Ma'ruf, A.
2016-02-01
In general, both machines and human resources are needed to process a job on the production floor. However, most classical scheduling problems ignore the constraint imposed by worker availability and consider only machines as a limited resource. In addition, along with the development of production technology, routing flexibility has emerged as a consequence of high product variety and medium demand for each product. Routing flexibility arises from machines capable of performing more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a Dual-Resource Constrained shop is an NP-hard problem that requires long computational times. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical applicability in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform a chromosome into a Gantt chart. Genetic operators, namely selection, elitism, crossover, and mutation, are applied to search for the best fitness value until a steady-state condition is reached. A case study in a manufacturing SME is used, with tardiness minimization as the objective function. The algorithm achieved a 25.6% reduction in tardiness, equal to 43.5 hours.
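The genetic operators named above can be sketched on a deliberately simplified problem: a single-machine schedule minimizing total tardiness. The paper's dual-resource, routing-flexible shop and its indirect chromosome encoding are much richer; the job data, population size, and rates below are made up:

```python
import random

# Toy GA: order six jobs (processing time, due date) on one machine to
# minimize total tardiness, using tournament selection, elitism,
# order crossover (OX), and swap mutation.
jobs = [(4, 9), (7, 12), (2, 5), (5, 8), (6, 20), (3, 7)]  # (proc, due)

def tardiness(order):
    t = total = 0
    for j in order:
        p, d = jobs[j]
        t += p
        total += max(0, t - d)
    return total

def crossover(a, b):
    # OX: keep a slice of parent a, fill the remainder in parent b's order
    i, k = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:k] = a[i:k]
    rest = [g for g in b if g not in child]
    for idx in range(len(a)):
        if child[idx] is None:
            child[idx] = rest.pop(0)
    return child

def mutate(c):
    i, k = random.sample(range(len(c)), 2)
    c[i], c[k] = c[k], c[i]

random.seed(1)
pop = [random.sample(range(len(jobs)), len(jobs)) for _ in range(30)]
for gen in range(60):
    pop.sort(key=tardiness)
    elite = pop[:4]                      # elitism: carry the best forward
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = (min(random.sample(pop, 3), key=tardiness) for _ in range(2))
        c = crossover(a, b)              # tournament-selected parents
        if random.random() < 0.2:
            mutate(c)
        children.append(c)
    pop = elite + children
best = min(pop, key=tardiness)
```

A real dual-resource chromosome would also encode worker assignments and route choices, decoded into a Gantt chart before evaluation.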
Calculation of the Curie temperature of Ni using first principles based Wang-Landau Monte-Carlo
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Yin, Junqi; Li, Ying Wai; Nicholson, Don
2015-03-01
We combine constrained first-principles density functional theory with a Wang-Landau Monte Carlo algorithm to calculate the Curie temperature of Ni. Mapping the magnetic interactions in Ni onto a Heisenberg-like model underestimates the Curie temperature. Using a model, we show that including the magnitude of the local magnetic moments can account for the difference in the calculated Curie temperature. For the ab initio calculations, we have extended our Locally Selfconsistent Multiple Scattering (LSMS) code to constrain the magnitude of the local moments in addition to their direction, and we apply the Replica Exchange Wang-Landau method to sample the larger phase space efficiently, since in Ni the fluctuations in the magnitude of the local magnetic moments are of equal importance to their directional fluctuations. We present results for Ni comparing calculations that consider only the moment directions with those that also include fluctuations of the magnetic moment magnitude. This research was sponsored by the Department of Energy, Offices of Basic Energy Science and Advanced Computing. We used Oak Ridge Leadership Computing Facility resources at Oak Ridge National Laboratory, supported by US DOE under contract DE-AC05-00OR22725.
Reconstruction of a yeast cell from x-ray diffraction data
Thibault, Pierre; Elser, Veit; Jacobsen, Chris; ...
2006-06-21
We provide details of the algorithm used for the reconstruction of yeast cell images in the recent demonstration of diffraction microscopy by Shapiro, Thibault, Beetz, Elser, Howells, Jacobsen, Kirz, Lima, Miao, Nieman & Sayre. Two refinements of the iterative constraint-based scheme are developed to address the current experimental realities of this imaging technique, which include missing central data and noise. A constrained power operator is defined whose eigenmodes allow the identification of a small number of degrees of freedom in the reconstruction that are negligibly constrained as a result of the missing data. To achieve reproducibility in the algorithm's output, a special intervention is required for these modes. Weak incompatibility of the constraints caused by noise in both direct and Fourier space leads to residual phase fluctuations. This problem is addressed by supplementing the algorithm with an averaging method. The effect of averaging may be interpreted in terms of an effective modulation transfer function, as used in optics, to quantify the resolution. The reconstruction details are prefaced with simulations of wave propagation through a model yeast cell. These show that the yeast cell is a strong-phase-contrast object for the conditions in the experiment.
Benameur, S.; Mignotte, M.; Meunier, J.; Soucy, J. -P.
2009-01-01
Image restoration is usually viewed as an ill-posed problem in image processing, since there is no unique solution associated with it. The quality of the restored image depends closely on the constraints imposed on the characteristics of the solution. In this paper, we propose an original extension of the NAS-RIF restoration technique that uses information fusion as prior information, with application in SPECT medical imaging. This extension allows the restoration process to be constrained by efficiently incorporating, within the NAS-RIF method, a regularization term which stabilizes the inverse solution. Our restoration method is constrained by anatomical information extracted from a high-resolution anatomical procedure such as magnetic resonance imaging (MRI). This structural anatomy-based regularization term uses the result of an unsupervised Markovian segmentation obtained after a preliminary registration step between the MRI and SPECT data volumes from each patient. This method was successfully tested on 30 pairs of brain MRI and SPECT acquisitions from different subjects and on Hoffman and Jaszczak SPECT phantoms. The experiments demonstrated that the method performs better, in terms of signal-to-noise ratio, than a classical supervised restoration approach using a Metz filter. PMID:19812704
Al-Shaikhli, Saif Dawood Salman; Yang, Michael Ying; Rosenhahn, Bodo
2016-12-01
This paper presents a novel method for Alzheimer's disease classification via an automatic 3D caudate nucleus segmentation. The proposed method consists of segmentation and classification steps. In the segmentation step, we propose a novel level set cost function. The proposed cost function is constrained by a sparse representation of local image features using a dictionary learning method. We present coupled dictionaries: a feature dictionary of a grayscale brain image and a label dictionary of a caudate nucleus label image. Using online dictionary learning, the coupled dictionaries are learned from the training data. The learned coupled dictionaries are embedded into a level set function. In the classification step, a region-based feature dictionary is built. The region-based feature dictionary is learned from shape features of the caudate nucleus in the training data. The classification is based on the measure of the similarity between the sparse representation of region-based shape features of the segmented caudate in the test image and the region-based feature dictionary. The experimental results demonstrate the superiority of our method over the state-of-the-art methods by achieving a high segmentation (91.5%) and classification (92.5%) accuracy. In this paper, we find that studying caudate nucleus atrophy offers an advantage over studying whole-brain atrophy for detecting Alzheimer's disease. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bai, Bing
2012-03-01
There has been much recent work on total variation (TV) regularized tomographic image reconstruction. Much of it uses gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization to Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. We then use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region is found by solving a sequence of subproblems characterized by an increasing positive parameter. We use the preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by a bent line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges quickly and that convergence is insensitive to the values of the regularization and reconstruction parameters.
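The barrier idea can be illustrated on a toy problem: a nonnegativity-constrained least-squares objective solved through a sequence of subproblems with an increasing barrier parameter, each handled here by feasibility-damped Newton steps (the paper solves its Poisson/TV subproblems with PCG). All numbers are made up:

```python
import numpy as np

# Toy barrier scheme: minimize ||Ax - b||^2 subject to x >= 0 via
#   min  ||Ax - b||^2 - (1/t) * sum(log x)
# for increasing t; iterates stay strictly inside the feasible region.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, 1.0]])
b = np.array([1.0, -2.0, 0.0])   # pulls the unconstrained optimum negative

x = np.array([1.0, 1.0])         # strictly feasible starting point
t = 1.0
for _ in range(8):               # outer loop: sharpen the barrier
    for _ in range(50):          # inner Newton iterations
        grad = 2 * A.T @ (A @ x - b) - 1 / (t * x)
        hess = 2 * A.T @ A + np.diag(1 / (t * x**2))
        step = np.linalg.solve(hess, grad)
        alpha = 1.0              # backtrack to keep x strictly positive
        while np.any(x - alpha * step <= 0):
            alpha *= 0.5
        x = x - alpha * step
    t *= 10.0
```

Here the unconstrained minimizer is (1, -1), while the nonnegativity-constrained minimum sits at the origin with objective value 5; the iterates approach it from the interior as t grows.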
NASA Astrophysics Data System (ADS)
Zhu, Dechao; Deng, Zhongmin; Wang, Xingwei
2001-08-01
In the present paper, a series of hierarchical warping functions is developed to analyze the static and dynamic problems of thin-walled composite laminated helicopter rotors composed of several layers with a single closed cell. This method is a development and extension of the traditional constrained warping theory of thin-walled metallic beams, which has proved very successful since the 1940s. The warping distribution along the perimeter of each layer is expanded into a series of successively corrective warping functions, with the traditional warping function caused by free torsion or free bending as the first term, and is assumed to be piecewise linear along the thickness direction of the layers. The governing equations are derived from the variational principle of minimum potential energy for static analysis and the Rayleigh quotient for free vibration analysis. The hierarchical finite element method is then introduced to form a numerical algorithm. Both static and natural vibration problems of sample box beams are analyzed with the present method to show the main mechanical behavior of the thin-walled composite laminated helicopter rotor.
Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes
NASA Astrophysics Data System (ADS)
van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.
2017-12-01
Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
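The Bayesian constraint machinery can be sketched with a bare-bones Metropolis sampler fitting an assumed power-law process rate to synthetic observations. The functional form, noise level, and proposal scale below are all illustrative assumptions, not part of BOSS itself:

```python
import numpy as np

# Metropolis sampling of the posterior over (a, b) for an assumed
# process-rate law r(M) = a * M**b, given synthetic noisy observations.
rng = np.random.default_rng(3)
M = np.linspace(0.5, 2.0, 20)            # prognostic moment values
a_true, b_true, noise = 2.0, 1.5, 0.05
obs = a_true * M**b_true * np.exp(rng.normal(0, noise, M.size))

def log_post(theta):
    a, b = theta
    if a <= 0:                           # flat prior restricted to a > 0
        return -np.inf
    resid = np.log(obs) - np.log(a * M**b)
    return -0.5 * np.sum(resid**2) / noise**2

theta = np.array([1.0, 1.0])
lp = log_post(theta)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.02, 2)     # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[2000:])            # discard burn-in
a_hat, b_hat = post.mean(axis=0)
```

Structural uncertainty enters when the sampler is also allowed to move between rate formulations of different complexity, rather than holding one power law fixed as here.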
NASA Astrophysics Data System (ADS)
Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu
2017-09-01
A constrained optimization approach with faster convergence is proposed to recover the complex object field from near on-axis digital holography (DH). We subtract the DC term from the hologram after recording the object-beam and reference-beam intensities separately. The DC-subtracted hologram is used to recover the complex object information through a constrained optimization approach with faster convergence. The recovered complex object field is back-propagated to the image plane using the Fresnel back-propagation method. This approach provides higher-resolution images than the conventional Fourier filtering approach, and it is 25% faster than the previously reported constrained optimization approach owing to the subtraction of the two DC terms in the cost function. We demonstrate this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object, retrieving a high-resolution image free of DC and twin-image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass by reconstructing a high-resolution quantitative phase microscope image. We further demonstrate the technique by imaging yeast cells.
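Fresnel propagation by the transfer-function (FFT) method, the operation behind the back-propagation step, can be sketched as follows. The grid, wavelength, and distance are illustrative, not the reported experimental parameters, and the round trip merely checks self-consistency:

```python
import numpy as np

# Fresnel propagation via the transfer-function method:
# field_z = IFFT( FFT(field_0) * H ), with H a pure-phase filter.
N, dx = 256, 10e-6               # grid size and pixel pitch (m)
wl, z = 632.8e-9, 0.05           # He-Ne wavelength (m), distance (m)

fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
# H = exp(i 2 pi z / wl) * exp(-i pi wl z (fx^2 + fy^2))
H = np.exp(1j * 2 * np.pi * z / wl) * \
    np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))

def propagate(field, H):
    return np.fft.ifft2(np.fft.fft2(field) * H)

x = (np.arange(N) - N / 2) * dx
X, Y = np.meshgrid(x, x)
field = np.exp(-(X**2 + Y**2) / (2 * (200e-6) ** 2))  # Gaussian test field

hologram_plane = propagate(field, H)             # propagate forward by z
recovered = propagate(hologram_plane, H.conj())  # back-propagate by -z
```

Since H is a pure phase, back-propagation with `H.conj()` inverts the forward step exactly on the grid; in DH the field at the hologram plane comes from the DC-subtracted measured data rather than a forward simulation.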
The Swift AGN and Cluster Survey
NASA Astrophysics Data System (ADS)
Dai, Xinyu
A key question in astrophysics is to constrain the evolution of the largest gravitationally bound structures in the universe. The serendipitous observations of Swift-XRT form an excellent medium-deep and wide soft X-ray survey, with a sky area of 160 square degrees at the flux limit of 5e-15 erg/s/cm^2. This survey is about an order of magnitude deeper than previous surveys of similar areas, and an order of magnitude wider than previous surveys of similar depth. It is comparable to the planned eROSITA deep survey, but with the data already in hand several years ahead. The unique combination of survey area and depth enables it to fill in the gap between the deep, pencil-beam surveys (such as the Chandra Deep Fields) and the shallow, wide-area surveys measured with ROSAT. With it, we will place independent and complementary measurements on the number counts and luminosity functions of X-ray sources. Our initial analysis of 1/4 of the fields, along with other independent studies, has shown that this survey is excellent for X-ray selected galaxy cluster searches. The highest priority goal is to produce the largest, uniformly selected catalog of X-ray selected clusters and increase the sample of intermediate- to high-redshift clusters (z > 0.5) by an order of magnitude. From this catalog, we will study the evolution of cluster number counts, luminosity function, scaling relations, and eventually the mass function. For example, various smaller-scale surveys have reached divergent conclusions about the evolution of a key scaling relation between cluster temperature and luminosity. With the statistical power from this large sample, we will resolve the debate whether clusters evolve self-similarly. This is a crucial step in mapping cluster evolution and constraining cosmological models. First, we propose to extract the complete serendipitous extended source list for all Swift-XRT data to 2015. Second, we will use optical/IR observations to further identify galaxy clusters.
These optical/IR observations include data from the SDSS, WISE, and deep optical follow-up observations from the APO, MDM, Magellan, and NOAO telescopes. WISE will confirm all z < 0.5 clusters. We will use ground-based observations to measure redshifts for z > 0.5 clusters, with the goal of measuring spectroscopic redshifts for 1/10 of the z > 0.5 clusters within the budget period. Third, we will analyze our deep Suzaku X-ray follow-up observations of a sample of medium-redshift clusters, and the 1/10 of Swift clusters bright enough for spectral analysis. We will also perform a stacking analysis using the Swift data for clusters in different redshift bins to constrain the evolution of cluster properties.
NASA Astrophysics Data System (ADS)
Pan, Xiao-Yin; Slamet, Marlina; Sahni, Viraht
2010-04-01
We extend our prior work on the construction of variational wave functions ψ that are functionals of functions χ, ψ = ψ[χ], rather than simply being functions. In this manner, the space of variations is expanded over that of traditional variational wave functions. In this article we perform the constrained search over the functions χ chosen such that the functional ψ[χ] simultaneously satisfies the constraints of normalization and the exact expectation value of an arbitrary single- or two-particle Hermitian operator, while also leading to a rigorous upper bound to the energy. As such, the wave function functional is accurate not only in the region of space in which the principal contributions to the energy arise but also in the other region of space represented by the Hermitian operator. To demonstrate the efficacy of these ideas, we apply such a constrained search to the ground state of the negative ion of atomic hydrogen H-, the helium atom He, and its positive ions Li+ and Be2+. The operators W whose expectations are obtained exactly are the single-particle sums W = Σ_i r_i^n with n = -2, -1, 1, 2, W = Σ_i δ(r_i), and W = -(1/2) Σ_i ∇_i², and the two-particle sums W = Σ_{i<j} u_{ij}^n with n = -2, -1, 1, 2, where u_{ij} = |r_i - r_j|. Comparisons with the method of Lagrange multipliers and with other constructions of wave-function functionals are made. Finally, we present further insights into the construction of wave-function functionals by studying a previously proposed construction of functionals ψ[χ] that lead to the exact expectation of arbitrary Hermitian operators. We discover that, analogously to the solutions of the Schrödinger equation, there exist ψ[χ] that are unphysical in that they lead to singular values for the expectations. We also explain the origin of the singularity.
Does perturbative quantum chromodynamics imply a Regge singularity above unity?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishari, M.
1982-07-15
It is investigated whether perturbative quantum chromodynamics has implications for the Regge behavior of deep-inelastic structure functions. The possible indirect but important role of unitarity in constraining the theory is pointed out.
Post-Traumatic Stress Constrains the Dynamic Repertoire of Neural Activity.
Mišić, Bratislav; Dunkley, Benjamin T; Sedge, Paul A; Da Costa, Leodante; Fatima, Zainab; Berman, Marc G; Doesburg, Sam M; McIntosh, Anthony R; Grodecki, Richard; Jetly, Rakesh; Pang, Elizabeth W; Taylor, Margot J
2016-01-13
Post-traumatic stress disorder (PTSD) is an anxiety disorder arising from exposure to a traumatic event. Although primarily defined in terms of behavioral symptoms, the global neurophysiological effects of traumatic stress are increasingly recognized as a critical facet of the human PTSD phenotype. Here we use magnetoencephalographic recordings to investigate two aspects of information processing: inter-regional communication (measured by functional connectivity) and the dynamic range of neural activity (measured in terms of local signal variability). We find that both measures differentiate soldiers diagnosed with PTSD from soldiers without PTSD, from healthy civilians, and from civilians with mild traumatic brain injury, which is commonly comorbid with PTSD. Specifically, soldiers with PTSD display inter-regional hypersynchrony at high frequencies (80-150 Hz), as well as a concomitant decrease in signal variability. The two patterns are spatially correlated and most pronounced in a left temporal subnetwork, including the hippocampus and amygdala. We hypothesize that the observed hypersynchrony may effectively constrain the expression of local dynamics, resulting in less variable activity and a reduced dynamic repertoire. Thus, the re-experiencing phenomena and affective sequelae in combat-related PTSD may result from functional networks becoming "stuck" in configurations reflecting memories, emotions, and thoughts originating from the traumatizing experience. The present study investigates the effects of post-traumatic stress disorder (PTSD) in combat-exposed soldiers. We find that soldiers with PTSD exhibit hypersynchrony in a circuit of temporal lobe areas associated with learning and memory function. This rigid functional architecture is associated with a decrease in signal variability in the same areas, suggesting that the observed hypersynchrony may constrain the expression of local dynamics, resulting in a reduced dynamic range. 
Our findings suggest that the re-experiencing of traumatic events in PTSD may result from functional networks becoming locked in configurations that reflect memories, emotions, and thoughts associated with the traumatic experience. Copyright © 2016 the authors.
Constrained variation in Jastrow method at high density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, J.C.; Bishop, R.F.; Irvine, J.M.
1976-11-01
A method is derived for constraining the correlation function in a Jastrow variational calculation which permits the truncation of the cluster expansion after two-body terms, and which permits exact minimization of the two-body cluster by functional variation. This method is compared with one previously proposed by Pandharipande and is found to be superior both theoretically and practically. The method is tested both on liquid ³He, using the Lennard-Jones potential, and on the model system of neutrons treated as Boltzmann particles ("homework" problem). Good agreement is found both with experiment and with other calculations involving the explicit evaluation of higher-order terms in the cluster expansion. The method is then applied to a more realistic model of a neutron gas up to a density of 4 neutrons per fm³, and is found to give ground-state energies considerably lower than those of Pandharipande.
Fission barriers from multidimensionally-constrained covariant density functional theories
NASA Astrophysics Data System (ADS)
Lu, Bing-Nan; Zhao, Jie; Zhao, En-Guang; Zhou, Shan-Gui
2017-11-01
In recent years, we have developed the multidimensionally-constrained covariant density functional theories (MDC-CDFTs) in which both axial and spatial reflection symmetries are broken and all shape degrees of freedom described by βλμ with even μ, such as β20, β22, β30, β32, β40, etc., are included self-consistently. The MDC-CDFTs have been applied to the investigation of potential energy surfaces and fission barriers of actinide nuclei, third minima in potential energy surfaces of light actinides, shapes and potential energy surfaces of superheavy nuclei, octupole correlations between multiple chiral doublet bands in 78Br, octupole correlations in Ba isotopes, the Y32 correlations in N = 150 isotones and Zr isotopes, the spontaneous fission of Fm isotopes, and shapes of hypernuclei. In this contribution we present the formalism of MDC-CDFTs and the application of these theories to the study of fission barriers and potential energy surfaces of actinide nuclei.
Czakó, Gábor; Kaledin, Alexey L; Bowman, Joel M
2010-04-28
We report the implementation of a previously suggested method to constrain a molecular system to have mode-specific vibrational energy greater than or equal to the zero-point energy in quasiclassical trajectory calculations [J. M. Bowman et al., J. Chem. Phys. 91, 2859 (1989); W. H. Miller et al., J. Chem. Phys. 91, 2863 (1989)]. The implementation is made practical by using a technique described recently [G. Czako and J. M. Bowman, J. Chem. Phys. 131, 244302 (2009)], where a normal-mode analysis is performed during the course of a trajectory and which gives only real-valued frequencies. The method is applied to the water dimer, where its effectiveness is shown by computing mode energies as a function of integration time. Radial distribution functions are also calculated using constrained quasiclassical and standard classical molecular dynamics at low temperature and at 300 K and compared to rigorous quantum path integral calculations.
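The per-mode energy bookkeeping behind such a constraint can be sketched for a toy two-degree-of-freedom system (ħ = 1, mass-weighted coordinates; the Hessian and phase-space point below are invented): diagonalize the Hessian, project displacement and velocity onto the normal modes, and flag modes below their zero-point energy:

```python
import numpy as np

# Per-mode harmonic energies from a normal-mode analysis, compared
# against each mode's zero-point energy (toy 2-DOF system, hbar = 1).
K = np.array([[2.0, -0.5], [-0.5, 1.0]])   # mass-weighted Hessian
w2, L = np.linalg.eigh(K)                  # squared frequencies, mode vectors
omega = np.sqrt(w2)                        # all real: K is positive definite

def mode_energies(dq, v):
    q = L.T @ dq                           # normal-mode displacements
    p = L.T @ v                            # normal-mode momenta
    return 0.5 * p**2 + 0.5 * omega**2 * q**2

zpe = 0.5 * omega                          # per-mode zero-point energies
dq = np.array([0.4, -0.1])                 # instantaneous displacement
v = np.array([0.2, 0.6])                   # instantaneous velocity
E = mode_energies(dq, v)
violating = E < zpe                        # modes the constraint would adjust
```

In the method described above this analysis is repeated along the trajectory, and momenta are adjusted whenever a mode's energy drops below its zero-point value; here we only show the detection step.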
Infinite horizon problems on stratifiable state-constraints sets
NASA Astrophysics Data System (ADS)
Hermosilla, C.; Zidani, H.
2015-02-01
This paper deals with a state-constrained control problem. It is well known that, unless some compatibility condition between the constraints and the dynamics holds, the Value Function may lack sufficient regularity, or may fail to be the unique constrained viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. Here, we consider the case of a set of constraints having a stratified structure. In this setting, the interior of the constraint set may be empty or disconnected, and admissible trajectories may have no option but to stay on the boundary, without any possible approximation from the interior of the constraints. In such situations, the classical pointing qualification hypothesis is not relevant. The discontinuous Value Function is then characterized by means of a system of HJB equations on each stratum composing the state constraints. This result is obtained under a local controllability assumption which is required only on the strata where chattering phenomena could occur.
Genome Informed Trait-Based Models
NASA Astrophysics Data System (ADS)
Karaoz, U.; Cheng, Y.; Bouskill, N.; Tang, J.; Beller, H. R.; Brodie, E.; Riley, W. J.
2013-12-01
Trait-based approaches are powerful tools for representing microbial communities across both spatial and temporal scales within ecosystem models. Trait-based models (TBMs) represent the diversity of microbial taxa as stochastic assemblages with a distribution of traits constrained by trade-offs between these traits. Such representation with its built-in stochasticity allows the elucidation of the interactions between the microbes and their environment by reducing the complexity of microbial community diversity into a limited number of functional 'guilds' and letting them emerge across spatio-temporal scales. From the biogeochemical/ecosystem modeling perspective, the emergent properties of the microbial community could be directly translated into predictions of biogeochemical reaction rates and microbial biomass. The accuracy of TBMs depends on the identification of key traits of the microbial community members and on the parameterization of these traits. Current approaches to inform TBM parameterization are empirical (i.e., based on literature surveys). Advances in omic technologies (such as genomics, metagenomics, metatranscriptomics, and metaproteomics) pave the way to better initialize models that can be constrained in a generic or site-specific fashion. Here we describe the coupling of metagenomic data to the development of a TBM representing the dynamics of metabolic guilds from an organic carbon stimulated groundwater microbial community. Illumina paired-end metagenomic data were collected from the community as it transitioned successively through electron-accepting conditions (nitrate-, sulfate-, and Fe(III)-reducing), and used to inform estimates of growth rates and the distribution of metabolic pathways (i.e., aerobic and anaerobic oxidation, fermentation) across a spatially resolved TBM. We use this model to evaluate the emergence of different metabolisms and predict rates of biogeochemical processes over time.
We compare our results to observational outputs.
Building cancer nursing skills in a resource-constrained government hospital.
Strother, R M; Fitch, Margaret; Kamau, Peter; Beattie, Kathy; Boudreau, Angela; Busakhalla, N; Loehrer, P J
2012-09-01
Cancer is a rising cause of morbidity and mortality in resource-constrained settings. Few places in the developing world have cancer care experts and infrastructure for caring for cancer patients; therefore, it is imperative to develop this infrastructure and expertise. A critical component of cancer care, rarely addressed in the published literature, is cancer nursing. This report describes an effort to develop cancer nursing subspecialty knowledge and skills in support of a growing resource-constrained comprehensive cancer care program in Western Kenya. The report highlights the context of cancer care delivery in a resource-constrained setting and describes one targeted intervention to further develop the skill set and knowledge of cancer care providers, as part of a collaboration between developed-world academic institutions and a medical school and governmental hospital in Western Kenya. Based on observations of current practice, the practice setting, and resource limitations, a pragmatic curriculum for cancer care nursing was developed and implemented.
NASA Technical Reports Server (NTRS)
Pawson, Steven; Ott, Lesley E.; Zhu, Zhengxin; Bowman, Kevin; Brix, Holger; Collatz, G. James; Dutkiewicz, Stephanie; Fisher, Joshua B.; Gregg, Watson W.; Hill, Chris;
2011-01-01
Forward GEOS-5 AGCM simulations of CO2, with transport constrained by analyzed meteorology for 2009-2010, are examined. The CO2 distributions are evaluated using AIRS upper tropospheric CO2 and ACOS-GOSAT total column CO2 observations. Different combinations of surface CO2 fluxes are used to generate ensembles of runs that span some uncertainty in surface emissions and uptake. The fluxes are specified in GEOS-5 from different inventories (fossil and biofuel), different data-constrained estimates of land biological emissions, and different data-constrained ocean-biology estimates. One set of fluxes is based on the established "Transcom" database and others are constructed using contemporary satellite observations to constrain land and ocean process models. Likewise, different approximations to sub-grid transport are employed, to construct an ensemble of CO2 distributions related to transport variability. This work is part of NASA's "Carbon Monitoring System Flux Pilot Project."
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saide, Pablo E.; Peterson, David A.; de Silva, Arlindo
We couple airborne, ground-based, and satellite observations; conduct regional simulations; and develop and apply an inversion technique to constrain hourly smoke emissions from the Rim Fire, the third largest observed in California, USA. Emissions constrained with multiplatform data show notable nocturnal enhancements (sometimes over a factor of 20), correlate better with daily burned area data, and are a factor of 2–4 higher than a priori estimates, highlighting the need for improved characterization of diurnal profiles and day-to-day variability when modeling extreme fires. Constraining only with satellite data results in smaller enhancements mainly due to missing retrievals near the emissions source, suggesting that top-down emission estimates for these events could be underestimated and a multiplatform approach is required to resolve them. Predictions driven by emissions constrained with multiplatform data present significant variations in downwind air quality and in aerosol feedback on meteorology, emphasizing the need for improved emissions estimates during exceptional events.
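The top-down constraint can be caricatured as a regularized linear inversion: given a source-receptor matrix mapping hourly emissions to observed concentrations, recover the emissions, including a prescribed nocturnal enhancement. The real study uses regional model sensitivities and an iterative scheme; all names and numbers below are synthetic:

```python
import numpy as np

# Regularized least-squares recovery of hourly emissions e from
# observations y = K e + noise, with a synthetic sensitivity matrix K.
rng = np.random.default_rng(11)
hours, n_obs = 24, 120
K = rng.random((n_obs, hours)) * 0.1       # synthetic source-receptor matrix
e_true = np.where(np.arange(hours) >= 20, 20.0, 1.0)  # nocturnal spike
y = K @ e_true + rng.normal(0, 0.05, n_obs)

lam = 1e-4                                 # Tikhonov regularization strength
e_hat = np.linalg.solve(K.T @ K + lam * np.eye(hours), K.T @ y)
```

The sparse-retrieval effect described above corresponds to deleting rows of `K` near the source hours, which weakens the constraint on exactly the enhanced emissions.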
Active/Passive Control of Sound Radiation from Panels using Constrained Layer Damping
NASA Technical Reports Server (NTRS)
Gibbs, Gary P.; Cabell, Randolph H.
2003-01-01
A hybrid passive/active noise control system utilizing constrained layer damping and model predictive feedback control is presented. This system is used to control the sound radiation of panels due to broadband disturbances. To facilitate the hybrid system design, a methodology for placement of constrained layer damping which targets selected modes based on their relative radiated sound power is developed. The placement methodology is utilized to determine two constrained layer damping configurations for experimental evaluation of a hybrid system. The first configuration targets the (4,1) panel mode which is not controllable by the piezoelectric control actuator, and the (2,3) and (5,2) panel modes. The second configuration targets the (1,1) and (3,1) modes. The experimental results demonstrate the improved reduction of radiated sound power using the hybrid passive/active control system as compared to the active control system alone.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
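A minimal sketch of the preemptive-constraining idea, assuming a generic extended Kalman filter with box bounds on the state; all models and values below are illustrative placeholders, not the IGCC plant model:

```python
import numpy as np

def constrain(x, P, lo, hi):
    # Preemptively clip state estimates to their physical bounds so the
    # filter never propagates a constraint-violating estimate. (A fuller
    # scheme would also project the covariance; P is passed through here.)
    return np.clip(x, lo, hi), P

def ekf_step(x, P, u, z, f, F, h, H, Q, R, lo, hi):
    # Prediction with the dynamic model f and its Jacobian F.
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    x_pred, P_pred = constrain(x_pred, P_pred, lo, hi)
    # Measurement correction with output model h and Jacobian H.
    y = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    # Constrain the corrected estimate as well before returning it.
    return constrain(x_new, P_new, lo, hi)

# One step on a toy two-state linear system (all values hypothetical).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
lo, hi = np.array([0.0, -1.0]), np.array([1.0, 1.0])
x1, P1 = ekf_step(
    np.array([0.5, -0.2]), np.eye(2), None, np.array([0.4]),
    f=lambda x, u: A @ x, F=A,
    h=lambda x: x[:1], H=np.array([[1.0, 0.0]]),
    Q=0.01 * np.eye(2), R=np.array([[0.1]]),
    lo=lo, hi=hi,
)
```

The design point is the ordering: the constraint is enforced on the prediction before the measurement correction runs, so the correction never starts from an infeasible prior.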
ERIC Educational Resources Information Center
Simos, Panagiotis G.; Rezaie, Roozbeh; Fletcher, Jack M.; Papanicolaou, Andrew C.
2013-01-01
The study investigated functional associations between left hemisphere occipitotemporal, temporoparietal, and inferior frontal regions during oral pseudoword reading in 58 school-aged children with typical reading skills (aged 10.4 [plus or minus] 1.6, range 7.5-12.5 years). Event-related neuromagnetic data were used to compute source-current…
Distributed Constrained Optimization with Semicoordinate Transformations
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2006-01-01
Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
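The annealing of a product distribution toward the optimizing joint move can be sketched on a toy satisfiability instance. The update rule below is a hedged simplification of the framework: each agent plays a Boltzmann response to its expected cost under the other agents' current distributions, and the temperature is lowered over the iterations; the instance and schedule are invented for illustration.

```python
import itertools
import math

# Toy 2-SAT instance (hypothetical): clause (1, -2) means x1 OR NOT x2.
clauses = [(1, 2), (1, -2), (-1, 2)]   # unique satisfying move: x = (1, 1)
n = 2

def cost(x):
    # Number of unsatisfied clauses for the joint move x (tuple of 0/1).
    return sum(
        1 for cl in clauses
        if not any(x[abs(l) - 1] == (1 if l > 0 else 0) for l in cl)
    )

# Each agent i controls one variable and holds a distribution q[i] over {0, 1};
# the product of these distributions gives the expected objective value.
q = [[0.5, 0.5] for _ in range(n)]

def expected_cost(i, v):
    # Expected cost when agent i plays v and the others sample independently.
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        if x[i] != v:
            continue
        p = 1.0
        for j in range(n):
            if j != i:
                p *= q[j][x[j]]
        total += p * cost(x)
    return total

T = 1.0
for _ in range(150):
    for i in range(n):
        # Boltzmann response: descend the maxent Lagrangian E[cost] - T * entropy.
        w = [math.exp(-expected_cost(i, v) / T) for v in (0, 1)]
        q[i] = [w[0] / (w[0] + w[1]), w[1] / (w[0] + w[1])]
    T *= 0.98   # annealing: focus the joint distribution more and more tightly

best = tuple(max((0, 1), key=lambda v: q[i][v]) for i in range(n))
```

Lowering T plays the role of the automated Lagrange-parameter annealing described above: at high T the product distribution stays diffuse, and as T shrinks it concentrates on the satisfying joint move.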
Building a functional multiple intelligences theory to advance educational neuroscience
Cerruti, Carlo
2013-01-01
A key goal of educational neuroscience is to conduct constrained experimental research that is theory-driven and yet also clearly related to educators’ complex set of questions and concerns. However, the fields of education, cognitive psychology, and neuroscience use different levels of description to characterize human ability. An important advance in research in educational neuroscience would be the identification of a cognitive and neurocognitive framework at a level of description relatively intuitive to educators. I argue that the theory of multiple intelligences (MI; Gardner, 1983), a conception of the mind that motivated a past generation of teachers, may provide such an opportunity. I criticize MI for doing little to clarify for teachers a core misunderstanding, specifically that MI was only an anatomical map of the mind but not a functional theory that detailed how the mind actually processes information. In an attempt to build a “functional MI” theory, I integrate into MI basic principles of cognitive and neural functioning, namely interregional neural facilitation and inhibition. In so doing I hope to forge a path toward constrained experimental research that bears upon teachers’ concerns about teaching and learning. PMID:24391613
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
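The key property of the shifted, multiplier-scaled (modified) barrier, that the barrier parameter can stay fixed rather than be driven to zero, can be sketched on a one-dimensional toy problem. The problem, tolerances, and iteration counts below are illustrative choices, not from the paper.

```python
# Toy problem: minimize f(x) = (x + 1)^2 subject to x >= 0.
# KKT solution: x* = 0 with optimal multiplier lambda* = f'(0) = 2.
def df(x):
    return 2.0 * (x + 1.0)

def d2f(x):
    return 2.0

mu = 0.1    # shift/scale parameter: stays FIXED, never driven to zero
lam = 1.0   # Lagrange multiplier estimate
x = 1.0

for _ in range(30):
    # Inner loop: Newton's method on the modified barrier function
    #   M(x) = f(x) - mu * lam * log(1 + x / mu)
    for _ in range(50):
        g = df(x) - lam * mu / (mu + x)
        h = d2f(x) + lam * mu / (mu + x) ** 2
        step = g / h
        # Damp the step so the shifted constraint 1 + x/mu stays positive.
        t = 1.0
        while x - t * step <= -0.95 * mu:
            t *= 0.5
        x -= t * step
        if abs(t * step) < 1e-12:
            break
    # Multiplier update: scale by the shifted constraint value. As lam
    # approaches lambda*, the minimizer of M approaches x* with mu fixed,
    # so the Hessian's condition number stays bounded.
    lam = lam * mu / (mu + x)
```

By contrast, a classical log barrier on the same problem would need mu -> 0, and its Hessian term mu / x^2 grows without bound along that path, which is the ill-conditioning the abstract describes.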
Cluster/Peace Electrons Velocity Distribution Function: Modeling the Strahl in the Solar Wind
NASA Technical Reports Server (NTRS)
Figueroa-Vinas, Adolfo; Gurgiolo, Chris; Goldstein, Melvyn L.
2008-01-01
We present a study of kinetic properties of the strahl electron velocity distribution functions (VDFs) in the solar wind. These are used to investigate the pitch-angle scattering and stability of the population to interactions with electromagnetic (whistler) fluctuations. The study is based on high time resolution data from the Cluster/PEACE electron spectrometer. Our study focuses on the mechanisms that control and regulate the pitch-angle and stability of strahl electrons in the solar wind; mechanisms that are not yet well understood. Various parameters are investigated such as the electron heat-flux and temperature anisotropy. The goal is to check whether the strahl electrons are constrained by some instability (e.g., the whistler instability), or are maintained by other types of processes. The electron heat-flux and temperature anisotropy are determined by fitting the VDFs to a spectral spherical harmonic model from which the moments are derived directly from the model coefficients.
NASA Astrophysics Data System (ADS)
Shah, Shishir
This paper presents a segmentation method for detecting cells in immunohistochemically stained cytological images. A two-phase approach to segmentation is used where an unsupervised clustering approach coupled with cluster merging based on a fitness function is used as the first phase to obtain a first approximation of the cell locations. A joint segmentation-classification approach incorporating an ellipse as a shape model is used as the second phase to detect the final cell contour. The segmentation model estimates a multivariate density function of low-level image features from training samples and uses it as a measure of how likely each image pixel is to be a cell. This estimate is constrained by the zero level set, which is obtained as a solution to an implicit representation of an ellipse. Results of segmentation are presented and compared to ground truth measurements.
Issues of convection in insect respiration: Insights from synchrotron X-ray imaging and beyond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Socha, John J.; Förster, Thomas D.; Greenlee, Kendra J.
2010-11-01
While it has long been known that in small animals, such as insects, sufficient gas transport could be provided by diffusion, it is now recognized that animals generate and control convective flows to improve oxygen delivery across a range of body sizes and taxa. However, size-based methodological limitations have constrained our understanding of the mechanisms that underlie the production of these convective flows. Recently, new techniques have enabled the elucidation of the anatomical structures and physiological processes that contribute to creating and maintaining bulk flow in small animals. In particular, synchrotron X-ray imaging provides unprecedented spatial and temporal resolution of internal functional morphology and is changing the way we understand gas exchange in insects. This symposium highlights recent efforts towards understanding the relationship between form, function, and control in the insect respiratory system.
NASA Technical Reports Server (NTRS)
Bauschlicher, C. W., Jr.; Yarkony, D. R.
1980-01-01
A previously reported multi-configuration self-consistent field (MCSCF) algorithm based on the generalized Brillouin theorem is extended in order to treat the excited states of polar molecules. In particular, the algorithm takes into account the proper treatment of nonorthogonality in the space of single excitations and invokes, when necessary, a constrained optimization procedure to prevent the variational collapse of excited states. In addition, a configuration selection scheme (suitable for use in conjunction with extended configuration interaction methods) is proposed for the MCSCF procedure. The algorithm is used to study the low-lying singlet states of BeO, a system which has not previously been studied using an MCSCF procedure. MCSCF wave functions are obtained for three 1Sigma+ and two 1Pi states. The 1Sigma+ results are juxtaposed with comparable results for MgO in order to assess the generality of the description presented here.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vörös, Márton; Brawand, Nicholas P.; Galli, Giulia
Lead chalcogenide (PbX) nanoparticles are promising materials for solar energy conversion. However, the presence of trap states in their electronic gap limits their usability, and developing a universal strategy to remove trap states is a persistent challenge. Using calculations based on density functional theory, we show that hydrogen acts as an amphoteric impurity on PbX nanoparticle surfaces; hydrogen atoms may passivate defects arising from ligand imbalance or off-stoichiometric surface terminations irrespective of whether they originate from cation or anion excess. In addition, we show, using constrained density functional theory calculations, that hydrogen treatment of defective nanoparticles is also beneficial formore » charge transport in films. We also find that hydrogen adsorption on stoichiometric nanoparticles leads to electronic doping, preferentially n-type. Lastly, our findings suggest that postsynthesis hydrogen treatment of lead chalcogenide nanoparticle films is a viable approach to reduce electronic trap states or to dope well-passivated films.« less