Insight and search in Katona's five-square problem.
Ollinger, Michael; Jones, Gary; Knoblich, Günther
2014-01-01
Insights are often productive outcomes of human thinking. We provide a cognitive model that explains insight problem solving by the interplay of problem space search and representational change, whereby the problem space is constrained or relaxed based on the problem representation. By introducing different experimental conditions that either constrained the initial search space or helped solvers to initiate a representational change, we investigated the interplay of problem space search and representational change in Katona's five-square problem. Testing 168 participants, we demonstrated that independent hints relating to the initial search space and to representational change had little effect on solution rates. However, providing both hints caused a significant increase in solution rates. Our results show the interplay between problem space search and representational change in insight problem solving: The initial problem space can be so large that people fail to encounter impasse, but even when representational change is achieved the resulting problem space can still provide a major obstacle to finding the solution.
Heuristics in Problem Solving: The Role of Direction in Controlling Search Space
ERIC Educational Resources Information Center
Chu, Yun; Li, Zheng; Su, Yong; Pizlo, Zygmunt
2010-01-01
Isomorphs of a puzzle called m+m resulted in faster solution times and an easily reproduced solution path in a labeled version of the problem compared to a more difficult binary version. We conjecture that performance is related to a type of heuristic called direction that not only constrains search space in the labeled version, but also…
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
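The null-space step at the heart of NSMC can be sketched in a few lines of Python (a minimal first-order illustration, not the code used in the study; J, p_cal, and all sizes below are hypothetical): the sensitivity Jacobian is factored by SVD, directions with near-zero singular values span the null space, and perturbations confined to that subspace leave the calibrated fit unchanged to first order.

    import numpy as np

    def null_space_samples(J, p_cal, n_samples=100, tol=1e-8, scale=1.0, seed=0):
        """Perturb calibrated parameters p_cal only along the null space of the
        Jacobian J (observation sensitivities): a first-order NSMC sketch."""
        rng = np.random.default_rng(seed)
        _, s, Vt = np.linalg.svd(J, full_matrices=True)
        # Rows of Vt beyond the rank span null(J); moving along them changes
        # the simulated observations only at second order.
        null_dim = J.shape[1] - int(np.sum(s > tol * s.max()))
        V_null = Vt[-null_dim:, :].T if null_dim > 0 else np.zeros((J.shape[1], 0))
        xi = rng.normal(scale=scale, size=(n_samples, V_null.shape[1]))
        return p_cal + xi @ V_null.T

    # Toy case: 3 observations, 5 parameters -> a 2-dimensional null space.
    J = np.random.default_rng(1).normal(size=(3, 5))
    ensemble = null_space_samples(J, p_cal=np.ones(5))
    print(np.linalg.norm(J @ (ensemble[0] - np.ones(5))))  # ~0: fit preserved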
Constrained orbital intercept-evasion
NASA Astrophysics Data System (ADS)
Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh
2014-06-01
An effective characterization of intercept-evasion confrontations in various space environments, and the derivation of corresponding solutions under a variety of real-world constraints, are daunting theoretical and practical challenges. Current and future space-based platforms have to operate as components of satellite formations and/or systems while at the same time retaining a capability to evade potential collisions with other maneuver-constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on the Liapunov methodology that has been successfully implemented in a number of air- and ground-based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient, orbital-propagator-independent methods previously developed for Space Situational Awareness (SSA). This game-theoretic yet robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and the Simplified General Perturbation-4 (SGP-4) propagator.
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-05-01
A multi-arm space robot is more effective than a single-arm one, especially when the target is tumbling. This paper investigates the application of a particle swarm optimization (PSO) strategy to coordinated trajectory planning of a dual-arm space robot in free-floating mode. In order to overcome the dynamic-singularity issue, the direct kinematics equations in conjunction with constrained PSO are employed for coordinated trajectory planning of the dual-arm space robot. The joint trajectories are parametrized with Bézier curves to simplify the calculation. A constrained PSO scheme with adaptive inertia weight is implemented to find the optimal joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue owing to the use of the forward kinematic equations. Simulation results are presented for coordinated trajectory planning of two kinematically redundant manipulators mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
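A minimal sketch of the optimization layer described here, assuming the joint trajectories have already been reduced to a vector of Bézier control points and the imposed constraints folded into the cost as quadratic penalties (the cost function and bounds below are hypothetical stand-ins, and the linearly decreasing inertia weight is one simple form of adaptation):

    import numpy as np

    def pso_minimize(f, bounds, n_particles=30, n_iter=200, seed=0):
        """Minimal PSO with a linearly decreasing (adaptive) inertia weight.
        Constraints are assumed to be folded into f as penalty terms."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds[:, 0], bounds[:, 1]
        x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
        v = np.zeros_like(x)
        p_best, p_val = x.copy(), np.array([f(xi) for xi in x])
        g_best = p_best[p_val.argmin()].copy()
        for k in range(n_iter):
            w = 0.9 - 0.5 * k / n_iter          # inertia decays from 0.9 to 0.4
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + 2.0 * r1 * (p_best - x) + 2.0 * r2 * (g_best - x)
            x = np.clip(x + v, lo, hi)
            val = np.array([f(xi) for xi in x])
            improved = val < p_val
            p_best[improved], p_val[improved] = x[improved], val[improved]
            g_best = p_best[p_val.argmin()].copy()
        return g_best, p_val.min()

    # Toy stand-in for a trajectory cost: control points of a joint-space curve,
    # with a quadratic penalty for a (hypothetical) end-state constraint.
    def cost(ctrl):
        smoothness = np.sum(np.diff(ctrl) ** 2)
        end_error = (ctrl[-1] - 1.0) ** 2       # want the last control point at 1.0
        return smoothness + 1e3 * end_error

    best, val = pso_minimize(cost, np.tile([-2.0, 2.0], (6, 1)))
    print(best, val)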
Semismooth Newton method for gradient constrained minimization problem
NASA Astrophysics Data System (ADS)
Anyyeva, Serbiniyaz; Kunisch, Karl
2012-08-01
In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was applied to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2012-01-01
In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as the ant colony method, particle swarm optimization, and genetic algorithms to seek a near-optimal solution among a list of feasible initial populations. The final optimal solution can be found by using the solution of the first phase as the initial condition for the SQP algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
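The two-phase scheme can be sketched with SciPy, using differential evolution as a stand-in for the ant colony/PSO/genetic phase and SLSQP as the SQP solver; the toy objective and constraint are illustrative only:

    import numpy as np
    from scipy.optimize import differential_evolution, minimize, NonlinearConstraint

    # Phase 1: a population heuristic finds a near-optimal, feasible-ish seed.
    # Phase 2: SQP (SLSQP) polishes it under the exact constraints.
    def objective(x):
        return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

    con = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, -np.inf, 1.0)

    seed = differential_evolution(objective, bounds=[(-2, 2), (-2, 2)],
                                  constraints=(con,), seed=0, tol=1e-3).x

    refined = minimize(objective, seed, method="SLSQP",
                       constraints={"type": "ineq",
                                    "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2})
    print(seed, refined.x)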
A Perspective of the Science and Mission Challenges in Aeronomy
NASA Technical Reports Server (NTRS)
Spann, James F.
2010-01-01
There are significant fundamental problems for which aeronomy can provide solutions, and a critical role in applied science and space weather that only aeronomy can fill. Examples of unresolved problems include the interaction of neutral and charged species, the role of mass and energy transfer across Earth's interface with space, and the predictability of ionospheric density and composition variability. These and other problems impact the productivity of space assets and thus have a tangible applied dimension. This talk will explore open science problems and barriers to potential mission solutions in an era of constrained resources.
Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon
Regression has successfully been incorporated into memetic algorithms (MAs) to build surrogate models of the objective or constraint landscape of optimization problems. This helps to alleviate the need for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classification can alternatively be used to assist an MA in choosing the individuals that will undergo refinement. A support-vector-assisted MA was recently proposed to reduce the number of function evaluations in inequality-constrained optimization problems by distinguishing regions of feasible solutions from infeasible ones based on past solutions, so that search effort can be focused on the most promising regions. For problems having equality constraints, however, the feasible space is extremely small. It is thus extremely difficult for the global search component of the MA to produce feasible solutions, and the classification of feasible and infeasible space becomes ineffective. In this paper, a novel strategy to overcome this limitation is proposed, particularly for problems having one and only one equality constraint. The raw constraint value of an individual, instead of its feasibility class, is utilized in this work.
A proof for loop-law constraints in stoichiometric metabolic networks
2012-01-01
Background Constraint-based modeling is increasingly employed for metabolic network analysis. Its underlying assumption is that natural metabolic phenotypes can be predicted by adding physicochemical constraints to remove unrealistic metabolic flux solutions. The loopless-COBRA approach provides an additional constraint that eliminates thermodynamically infeasible internal cycles (or loops) from the space of solutions. This allows the prediction of flux solutions that are more consistent with experimental data. However, it is not clear if this approach over-constrains the models by removing non-loop solutions as well. Results Here we apply Gordan’s theorem from linear algebra to prove for the first time that the constraints added in loopless-COBRA do not over-constrain the problem beyond the elimination of the loops themselves. Conclusions The loopless-COBRA constraints can be reliably applied. Furthermore, this proof may be adapted to evaluate the theoretical soundness for other methods in constraint-based modeling. PMID:23146116
Constrained multibody system dynamics: An automated approach
NASA Technical Reports Server (NTRS)
Kamman, J. W.; Huston, R. L.
1982-01-01
The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. The closed-loop problem of multibody chain systems is addressed. The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. The modification, based upon a solution of the constraint equations obtained through a zero-eigenvalue theorem, is a contraction of the dynamical equations. For a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as constraint vectors in n-dimensional space. In this setting the system itself is free to move in the n-m directions which are orthogonal to the constraint vectors.
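The contraction onto the n-m directions orthogonal to the constraint vectors can be illustrated at a single instant (a minimal sketch of the null-space reduction, not the automated formulation itself):

    import numpy as np
    from scipy.linalg import null_space

    def constrained_accelerations(M, f, A, b):
        """Accelerations of a constrained multibody system at one instant:
        M qdd = f + A.T lam,  A qdd = b.
        The dynamical equations are contracted onto the n-m directions
        orthogonal to the constraint rows, mirroring the zero-eigenvalue
        reduction."""
        qdd_p = np.linalg.lstsq(A, b, rcond=None)[0]   # satisfies A qdd_p = b
        C = null_space(A)                              # basis of free directions
        z = np.linalg.solve(C.T @ M @ C, C.T @ (f - M @ qdd_p))
        return qdd_p + C @ z

    # Two unit masses forced unequally but rigidly linked: qdd1 = qdd2 enforced.
    M = np.eye(2)
    f = np.array([1.0, 3.0])
    A = np.array([[1.0, -1.0]])   # constraint qdd1 - qdd2 = 0
    print(constrained_accelerations(M, f, A, b=np.zeros(1)))  # [2., 2.]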
An efficient and practical approach to obtain a better optimum solution for structural optimization
NASA Astrophysics Data System (ADS)
Chen, Ting-Yu; Huang, Jyun-Hao
2013-08-01
For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.
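A minimal sketch of the idea, with k-means clustering standing in for the data mining activities and a toy multimodal objective (all names and bounds below are illustrative):

    import numpy as np
    from scipy.optimize import minimize
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    def f(x):                           # multimodal toy objective
        return np.sum(x ** 2) + 2.0 * np.sin(5.0 * x).sum()

    # 1) Mine a cheap random sample of the original design space [-3, 3]^2.
    X = rng.uniform(-3, 3, size=(500, 2))
    y = np.array([f(x) for x in X])

    # 2) Cluster the best 20% of samples; the largest cluster's bounding box
    #    defines a reduced search region likely to contain a good optimum.
    elite = X[np.argsort(y)[:100]]
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(elite)
    best_cluster = elite[labels == np.bincount(labels).argmax()]
    lo, hi = best_cluster.min(axis=0), best_cluster.max(axis=0)

    # 3) Gradient-based search (SQP) inside the reduced box only.
    res = minimize(f, x0=(lo + hi) / 2, method="SLSQP", bounds=list(zip(lo, hi)))
    print(res.x, res.fun)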
NASA Astrophysics Data System (ADS)
Quan, Zhe; Wu, Lei
2017-09-01
This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.
Necessary optimality conditions for infinite dimensional state constrained control problems
NASA Astrophysics Data System (ADS)
Frankowska, H.; Marchini, E. M.; Mazzola, M.
2018-06-01
This paper is concerned with first-order necessary optimality conditions for state-constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of the Pontryagin maximum principle, relying on the infinite-dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (such as positivity of the solutions) are allowed.
Space Radiation and the Challenges Towards Effective Shielding Solutions
NASA Technical Reports Server (NTRS)
Barghouty, Abdulnasser
2014-01-01
The hazards of space radiation and their effective mitigation strategies continue to pose special science and technology challenges to NASA. It is widely accepted now that shielding space vehicles and structures will have to rely on new and innovative materials, since aluminum, like all high-Z materials, is a poor shield against the particulate and highly ionizing nature of space radiation. Shielding solutions, motivated and constrained by power and mass limitations, couple this realization with "multifunctionality," both in design concept as well as in material function and composition. Materials endowed with effective shielding properties as well as with some degree of multifunctionality may be the kernel of the so-called "radiation-smart" structures and designs. This talk will present some of the challenges and potential mitigation ideas towards the realization of such structures and designs.
On the BV formalism of open superstring field theory in the large Hilbert space
NASA Astrophysics Data System (ADS)
Matsunaga, Hiroaki; Nomura, Mitsuru
2018-05-01
We construct several BV master actions for open superstring field theory in the large Hilbert space. First, we show that a naive use of the conventional BV approach breaks down at the third order of the antifield number expansion, although it enables us to define a simple "string antibracket" taking the Darboux form as spacetime antibrackets. This fact implies that in the large Hilbert space, "string fields-antifields" should be reassembled to obtain master actions in a simple manner. We determine the assembly of the string anti-fields on the basis of Berkovits' constrained BV approach, and give solutions to the master equation defined by Dirac antibrackets on the constrained string field-antifield space. It is expected that partial gauge-fixing enables us to relate superstring field theories based on the large and small Hilbert spaces directly: reassembling string fields-antifields is rather natural from this point of view. Finally, inspired by these results, we revisit the conventional BV approach and construct a BV master action based on the minimal set of string fields-antifields.
An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
Matter coupling in partially constrained vielbein formulation of massive gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felice, Antonio De; Mukohyama, Shinji; Gümrükçüoğlu, A. Emir
2016-01-01
We consider a linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display importantly different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields a slightly different allowed parameter space.
Matrix Transfer Function Design for Flexible Structures: An Application
NASA Technical Reports Server (NTRS)
Brennan, T. J.; Compito, A. V.; Doran, A. L.; Gustafson, C. L.; Wong, C. L.
1985-01-01
The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.
3-D model-based Bayesian classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soenneland, L.; Tenneboe, P.; Gehrmann, T.
1994-12-31
The challenging task of the interpreter is to integrate different pieces of information and combine them into an earth model. The sophistication level of this earth model might vary from the simplest geometrical description to the most complex set of reservoir parameters related to the geometrical description. Obviously, the sophistication level also depends on the completeness of the available information. The authors describe the interpreter's task as a mapping between the observation space and the model space. The information available to the interpreter exists in observation space, and the task is to infer a model in model space. It is well known that this inversion problem is non-unique. Therefore, any attempt to find a solution depends on constraints being added in some manner. The solution will obviously depend on which constraints are introduced, and it would be desirable to allow the interpreter to modify the constraints in a problem-dependent manner. The authors present a probabilistic framework that gives the interpreter the tools to integrate the different types of information and produce constrained solutions. The constraints can be adapted to the problem at hand.
Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C
2017-08-01
The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain, from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of the fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response of the same configuration of scatterers to different incident pulses. The precision that can be attained using a smaller number of Fourier components is also quantified.
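The frequency-to-time synthesis step can be sketched as follows; a damped-resonator transfer function stands in for the per-frequency boundary-integral Helmholtz solutions, which in the actual method are the expensive part reused across different incident pulses:

    import numpy as np

    # Time grid and an incident Gaussian pulse.
    n, dt = 1024, 1e-3
    t = np.arange(n) * dt
    pulse = np.exp(-0.5 * ((t - 0.1) / 0.01) ** 2)

    # Frequency-domain solve: one (here: toy) Helmholtz solution per frequency.
    # In the actual method each H(f) comes from the non-singular boundary
    # integral equation; a damped-resonator transfer function stands in.
    freqs = np.fft.rfftfreq(n, dt)
    f0, Q = 80.0, 10.0
    H = 1.0 / (1.0 - (freqs / f0) ** 2 + 1j * freqs / (f0 * Q))

    # Space-time response via inverse FFT; a new pulse reuses the same H(f).
    response = np.fft.irfft(np.fft.rfft(pulse) * H, n=n)
    print(response.shape)  # (1024,)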
NASA Technical Reports Server (NTRS)
Hanks, Brantley R.; Skelton, Robert E.
1991-01-01
This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.
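For reference, the unconstrained LQR baseline that such a design space restricts can be computed directly with SciPy (a sketch on a toy spring-mass-damper; the passivity restriction itself is not implemented here):

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Spring-mass-damper: x = [position, velocity], control force u.
    m, k, c = 1.0, 4.0, 0.1
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    B = np.array([[0.0], [1.0 / m]])
    Q = np.diag([10.0, 1.0])   # weighting: a constrained choice among parameters
    R = np.array([[1.0]])

    P = solve_continuous_are(A, B, Q, R)      # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x
    print(K)

The single gain row multiplies position and velocity, i.e. it adds stiffness and damping, which is what makes a passive spring/damper realization of such feedback conceivable.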
A transformation method for constrained-function minimization
NASA Technical Reports Server (NTRS)
Park, S. K.
1975-01-01
A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
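A minimal sketch of the transformation idea for simple box constraints, with BFGS standing in for the Davidon-Fletcher-Powell algorithm (the tanh mapping and objective are illustrative, not the constructions of the report):

    import numpy as np
    from scipy.optimize import minimize

    lo, hi = np.array([0.0, -1.0]), np.array([2.0, 1.0])

    def to_box(y):
        """Smooth surjection of all of R^n onto the box [lo, hi]."""
        return lo + (hi - lo) * 0.5 * (np.tanh(y) + 1.0)

    def f(x):                   # constrained objective, minimum at (1.5, 0.2)
        return (x[0] - 1.5) ** 2 + (x[1] - 0.2) ** 2

    # Unconstrained minimization in y-space; BFGS here stands in for the
    # Davidon-Fletcher-Powell update used in the report.
    res = minimize(lambda y: f(to_box(y)), x0=np.zeros(2), method="BFGS")
    print(to_box(res.x))        # ~ [1.5, 0.2], inside the box by construction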
Cosmological constraints on extended Galileon models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felice, Antonio De; Tsujikawa, Shinji, E-mail: antoniod@nu.ac.th, E-mail: shinji@rs.kagu.tus.ac.jp
2012-03-01
The extended Galileon models possess tracker solutions with de Sitter attractors along which the dark energy equation of state is constant during the matter-dominated epoch, i.e. w_DE = −1 − s, where s is a positive constant. Even with this phantom equation of state there are viable parameter spaces in which the ghosts and Laplacian instabilities are absent. Using the observational data of type Ia supernovae, the cosmic microwave background (CMB), and baryon acoustic oscillations, we place constraints on the tracker solutions at the background level and find that the parameter s is constrained to be s = 0.034^{+0.327}_{−0.034} (95% CL) in the flat Universe. In order to break the degeneracy between the models we also study the evolution of cosmological density perturbations relevant to the large-scale structure (LSS) and the Integrated Sachs-Wolfe (ISW) effect in the CMB. We show that, depending on the model parameters, the LSS and the ISW effect are either positively or negatively correlated. It is then possible to constrain viable parameter spaces further from the observational data of the ISW-LSS cross-correlation as well as from the matter power spectrum.
Reliability of Source Mechanisms for a Hydraulic Fracturing Dataset
NASA Astrophysics Data System (ADS)
Eyre, T.; Van der Baan, M.
2016-12-01
Non-double-couple components have been inferred for induced seismicity due to fluid injection, yet these components are often poorly constrained due to the acquisition geometry. Likewise, non-double-couple components in microseismic recordings are not uncommon. Microseismic source mechanisms provide an insight into the fracturing behaviour of a hydraulically stimulated reservoir. However, source inversion in a hydraulic fracturing environment is complicated by the likelihood of volumetric contributions to the source due to the presence of high-pressure fluids, which greatly increases the possible solution space and therefore the non-uniqueness of the solutions. Microseismic data are usually recorded on either 2D surface or borehole arrays of sensors. In many cases, surface arrays appear to constrain source mechanisms with high shear components, whereas borehole arrays tend to constrain more variable mechanisms including those with high tensile components. The ability of each geometry to constrain the true source mechanisms is therefore called into question. The ability to distinguish between shear and tensile source mechanisms with different acquisition geometries is investigated using synthetic data. For both inversions, both P- and S-wave amplitudes recorded on three-component sensors need to be included to obtain reliable solutions. Surface arrays appear to give more reliable solutions due to a greater sampling of the focal sphere, but in reality tend to record signals with a low signal-to-noise ratio. Borehole arrays can produce acceptable results; however, the reliability is much more affected by relative source-receiver locations and source orientation, with biases produced in many of the solutions. Therefore, more care must be taken when interpreting results. These findings are taken into account when interpreting a microseismic dataset of 470 events recorded by two vertical borehole arrays monitoring a horizontal treatment well. Source locations and mechanisms are calculated and the results discussed, including the biases caused by the array geometry. The majority of the events are located within the target reservoir; however, a small, seemingly disconnected cluster of events appears 100 m above the reservoir.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints, as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
On The Computation Of The Best-fit Okada-type Tsunami Source
NASA Astrophysics Data System (ADS)
Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.
2017-12-01
The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (a rectangular fault with constant slip in a homogeneous elastic half space). This approach is highly effective, in particular in far-field conditions. With this assumption, and a given set of tsunami waveforms recorded by deep-sea pressure sensors and (or) coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits a set of sea level observations. To do this, we build a space of possible tsunami sources (a "solution space"). Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth, and angles (strike, rake, and dip). To constrain the number of possible solutions we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparing the results of direct tsunami modeling over the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of Empirical Green Functions to compute the tsunami waveforms resulting from unit water sources and search for the one that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013, caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data. This publication received funding from FCT project UID/GEO/50019/2013-Instituto Dom Luiz.
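The Green-function shortcut can be sketched as linear synthesis plus a search (sizes and the candidate-to-weight mapping below are hypothetical placeholders for the fault-parameter grid):

    import numpy as np

    rng = np.random.default_rng(0)

    n_time, n_gauges, n_unit_sources = 200, 6, 40
    # G[k] : waveforms of unit source k at all gauges (precomputed once).
    G = rng.normal(size=(n_unit_sources, n_gauges, n_time))

    # Each candidate Okada-type source maps to weights over the unit sources
    # (random weights stand in for the fault-geometry -> weight mapping).
    candidates = rng.random(size=(500, n_unit_sources))

    observed = np.tensordot(candidates[123], G, axes=1)   # "true" source: #123

    def misfit(w):
        synth = np.tensordot(w, G, axes=1)                # no new tsunami run
        return np.sum((synth - observed) ** 2)

    best = min(range(len(candidates)), key=lambda i: misfit(candidates[i]))
    print(best)   # recovers 123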
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After the inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior point linear programming algorithm.
Reducing the complexity of NASA's space communications infrastructure
NASA Technical Reports Server (NTRS)
Miller, Raymond E.; Liu, Hong; Song, Junehwa
1995-01-01
This report describes the range of activities performed during the annual reporting period in support of the NASA Code O Success Team - Lifecycle Effectiveness for Strategic Success (COST LESS) team. The overall goal of the COST LESS team is to redefine success in a constrained fiscal environment and reduce the cost of success for end-to-end mission operations. This goal is more encompassing than the original proposal made to NASA for reducing the complexity of NASA's space communications infrastructure. The COST LESS team approach to reengineering the space operations infrastructure focuses on reversing the trend of engineering special solutions to similar problems.
On the nullspace of TLS multi-station adjustment
NASA Astrophysics Data System (ADS)
Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen
2018-07-01
In this article we present an analytic treatment of TLS multi-station least-squares adjustment, with the main focus on the datum problem. The datum problem is, in contrast to previously published research, theoretically analyzed and solved, where the solution is based on the nullspace derivation of the mathematical model. The importance of the datum problem solution lies in a complete description of TLS multi-station adjustment solutions from the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
CHUGH, Devesh; Gluesenkamp, Kyle R; Abdelaziz, Omar
In this study, the development of a novel system for combined water heating, dehumidification, and space evaporative cooling is discussed. Ambient water vapor is used as a working fluid in an open system. First, water vapor is absorbed from an air stream into an absorbent solution. The latent heat of absorption is transferred into the process water that cools the absorber. The solution is then regenerated in the desorber, where it is heated by a heating fluid. The water vapor generated in the desorber is condensed and its heat of phase change is transferred to the process water in the condenser. The condensed water can then be used in an evaporative cooling process to cool the dehumidified air exiting the absorber, or it can be drained if primarily dehumidification is desired. Essentially, this open absorption cycle collects space heat and transfers it to process water. This technology is enabled by a membrane-based absorption/desorption process in which the absorbent is constrained by hydrophobic vapor-permeable membranes. Constraining the absorbent film has enabled fabrication of the absorber and desorber in a plate-and-frame configuration. An air stream can flow against the membrane at high speed without entraining the absorbent, which is a challenge in conventional dehumidifiers. Furthermore, the absorption and desorption rates of an absorbent constrained by a membrane are greatly enhanced. Isfahani and Moghaddam (Int. J. Heat Mass Transfer, 2013) demonstrated absorption rates of up to 0.008 kg/m2s in a membrane-based absorber, and Isfahani et al. (Int. J. Multiphase Flow, 2013) have reported a desorption rate of 0.01 kg/m2s in a membrane-based desorber. The membrane-based architecture also enables economical small-scale systems, novel cycle configurations, and high efficiencies. The absorber, solution heat exchanger, and desorber are fabricated on a single metal sheet. In addition to the open arrangement and membrane-based architecture, another novel feature of the cycle is recovery of the heat of the solution exiting the desorber by the process water (a "process-solution heat exchanger") rather than by the solution exiting the absorber (the conventional "solution heat exchanger"). This approach has enabled heating the process water from an inlet temperature of 15 C to 57 C (conforming to the DOE water heater test standard) and interfacing the process water with the absorbent on the opposite side of a single metal sheet encompassing the absorber, process-solution heat exchanger, and desorber. The system under development has a 3.2 kW water heating capacity and a target thermal coefficient of performance (COP) of 1.6.
Mixed Integer Programming and Heuristic Scheduling for Space Communication
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2013-01-01
An optimal planning and scheduling capability was created for a communication network in which the nodes communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, and the resulting constrained optimization problem was solved using heuristic optimization. The communication network consists of space and ground assets, with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, while at other times communication is impossible, as the asset could be inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method to that of the proposed formulation is approximately of order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem with a non-differentiable cost function and nonlinear constraints can be converted into a continuous-variable problem, whose solution is possible.
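The penalty conversion can be illustrated as follows; the abstract does not state the special penalty function, so the common sin^2(pi*x) integrality penalty, tightened by continuation, is used as a stand-in:

    import numpy as np
    from scipy.optimize import minimize

    target = np.array([1.3, 2.6])     # continuous optimum of the relaxed problem

    def penalized(x, mu):
        cost = np.sum((x - target) ** 2)               # smooth relaxed objective
        integrality = np.sum(np.sin(np.pi * x) ** 2)   # zero exactly at integers
        return cost + mu * integrality

    x = target.copy()
    for mu in [0.1, 1.0, 10.0, 100.0]:   # continuation: tighten the penalty
        x = minimize(penalized, x, args=(mu,), method="BFGS").x
    print(np.round(x, 3))                # ~ [1., 3.]: integer-feasible solution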
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GAs) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may still yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements and personnel, tooling, and fixture (or, more generally, resource) requirements.
Hierarchically partitioned nonlinear equation solvers
NASA Technical Reports Server (NTRS)
Padovan, Joseph
1987-01-01
By partitioning the solution space into a number of subspaces, a new multiply constrained partitioned Newton-Raphson nonlinear equation solver is developed. Specifically, for a given iteration, each of the various separate partitions is individually and simultaneously controlled. Due to the generality of the scheme, a hierarchy of partition levels can be employed. For finite-element-type applications, this includes the possibility of degree-of-freedom, nodal, elemental, geometric substructural, material, and kinematically nonlinear group controls. It is noted that such partitioning can be continuously updated, depending on solution conditioning. In this context, convergence is ascertained at the individual partition level.
Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space
2015-05-01
ARL-TR-7294 • MAY 2015 • US Army Research Laboratory • Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space • by Berend Christopher...
NASA Astrophysics Data System (ADS)
Wagner, L.
2007-12-01
There have been a number of recent papers (e.g. Lee (2003), James et al. (2004), Hacker and Abers (2004), Schutt and Lesher (2006)) that calculate predicted velocities for xenolith compositions at mantle pressures and temperatures. It is tempting, therefore, to attempt to go the other way: to use tomographically determined absolute velocities to constrain mantle composition. However, in order to do this, it is vital that one is able to accurately constrain not only the polarity of the determined velocity deviations (i.e. fast vs. slow) but also their amplitude (how much faster, how much slower relative to the starting model), if absolute velocities are to be so closely analyzed. While much attention has been given to issues concerning spatial resolution in seismic tomography (i.e. what areas are fast, what areas are slow), little attention has been directed at the issue of amplitude resolution (how fast, how slow). Velocity deviation amplitudes in seismic tomography are heavily influenced by the amount of regularization used and the number of iterations performed. Determining these two parameters is a difficult and little-discussed problem. I explore the effect of these two parameters on the amplitudes obtained from the tomographic inversion of the Chile Argentina Geophysical Experiment (CHARGE) dataset, and attempt to determine a reasonable solution space for the low-Vp, high-Vs, low-Vp/Vs anomaly found above the flat slab in central Chile. I then compare this solution space to the range of experimentally determined velocities for peridotite end-members to evaluate our ability to constrain composition using tomographically determined seismic velocities. I find that, in general, it will be difficult to constrain the compositions of normal mantle peridotites using tomographically determined velocities, but that in the unusual case of the anomaly above the flat slab, the observed velocity structure still has an anomalously high S-wave velocity and low Vp/Vs ratio that is most consistent with enstatite, but inconsistent with the predicted velocities of known mantle xenoliths.
Constrained Burn Optimization for the International Space Station
NASA Technical Reports Server (NTRS)
Brown, Aaron J.; Jones, Brandon A.
2017-01-01
In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints, do not employ gradient-based search techniques, and are not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
Chi, Baofang; Tao, Shiheng; Liu, Yanlin
2015-01-01
Sampling the solution space of genome-scale models is generally conducted to determine the feasible region for metabolic flux distribution. Because the region for actual metabolic states resides only in a small fraction of the entire space, it is necessary to shrink the solution space to improve the predictive power of a model. A common strategy is to constrain models by integrating extra datasets such as high-throughput datasets and C13-labeled flux datasets. However, studies refining these approaches by performing a meta-analysis of massive experimental metabolic flux measurements, which are closely linked to cellular phenotypes, are limited. In the present study, experimentally identified metabolic flux data from 96 published reports were systematically reviewed. Several strong associations among metabolic flux phenotypes were observed. These phenotype-phenotype associations at the flux level were quantified and integrated into a Saccharomyces cerevisiae genome-scale model as extra physiological constraints. By sampling the shrunken solution space of the model, the metabolic flux fluctuation level, which is an intrinsic trait of metabolic reactions determined by the network, was estimated and utilized to explore its relationship to gene expression noise. Although no correlation was observed in all enzyme-coding genes, a relationship between metabolic flux fluctuation and expression noise of genes associated with enzyme-dosage sensitive reactions was detected, suggesting that the metabolic network plays a role in shaping gene expression noise. Such correlation was mainly attributed to the genes corresponding to non-essential reactions, rather than essential ones. This was at least partially, due to regulations underlying the flux phenotype-phenotype associations. Altogether, this study proposes a new approach in shrinking the solution space of a genome-scale model, of which sampling provides new insights into gene expression noise.
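Sampling a constrained solution space can be sketched with a hit-and-run random walk over the polytope {x : Ax <= b}; integrating extra phenotype-phenotype constraints simply appends rows to A and b, shrinking the sampled region (illustrative 2-D box below):

    import numpy as np

    def hit_and_run(A, b, x0, n_samples=1000, seed=0):
        """Samples of the polytope {x : A x <= b} by hit-and-run.
        Tighter constraints (extra rows in A, b) shrink the sampled space."""
        rng = np.random.default_rng(seed)
        x, out = x0.astype(float), []
        for _ in range(n_samples):
            d = rng.normal(size=x.size)
            d /= np.linalg.norm(d)
            # Feasible chord: A(x + t d) <= b  =>  t * (A d) <= b - A x
            Ad, slack = A @ d, b - A @ x
            t_hi = np.min(slack[Ad > 0] / Ad[Ad > 0])
            t_lo = np.max(slack[Ad < 0] / Ad[Ad < 0])
            x = x + rng.uniform(t_lo, t_hi) * d
            out.append(x.copy())
        return np.array(out)

    # Unit box in 2-D as the "solution space": A x <= b encodes 0 <= x <= 1.
    A = np.vstack([np.eye(2), -np.eye(2)])
    b = np.array([1.0, 1.0, 0.0, 0.0])
    samples = hit_and_run(A, b, x0=np.full(2, 0.5))
    print(samples.mean(axis=0))   # ~ [0.5, 0.5]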
PLUTO'S SEASONS: NEW PREDICTIONS FOR NEW HORIZONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, L. A.
Since the last Pluto volatile transport models were published in 1996, we have (1) new stellar occultation data from 2002 and 2006-2012 that show roughly twice the pressure of the first definitive occultation from 1988, (2) new information about the surface properties of Pluto, (3) a spacecraft due to arrive at Pluto in 2015, and (4) a new volatile transport model that is rapid enough to allow a large parameter-space search. Such a parameter-space search, coarsely constrained by occultation results, reveals three broad solutions: a high thermal-inertia, large volatile-inventory solution with permanent northern volatiles (PNV; using the rotational north pole convention); a lower thermal-inertia, smaller volatile-inventory solution with exchanges of volatiles between hemispheres and a pressure plateau beyond 2015 (exchange with pressure plateau, EPP); and solutions with still smaller volatile inventories, with exchanges of volatiles between hemispheres and an early collapse of the atmosphere prior to 2015 (exchange with early collapse, EEC). PNV and EPP are favored by stellar occultation data, but EEC cannot yet be definitively ruled out without more atmospheric modeling or additional occultation observations and analysis.
NASA Astrophysics Data System (ADS)
Malinowski, Zbigniew; Cebo-Rudnicka, Agnieszka; Hadała, Beata; Szajding, Artur; Telejko, Tadeusz
2017-10-01
The cooling rate affects the mechanical properties of steel, which strongly depend on microstructure evolution processes. The heat transfer boundary condition for the numerical simulation of steel cooling by water jets can be determined from local one-dimensional or from three-dimensional inverse solutions in space and time. In the present study an Inconel plate was heated to about 900 °C and then cooled by six circular water jets. The plate temperature was measured by 30 thermocouples. The heat transfer coefficient and the heat flux distributions at the plate surface were determined in time and space. The one-dimensional solutions gave a local error in the heat transfer coefficient of about 35%. The three-dimensional inverse solution reduced the local error to about 20%. The uncertainty test confirmed that a better approximation of the heat transfer coefficient distribution over the cooled surface can be obtained even for a limited number of thermocouples. In such a case it was necessary to constrain the inverse solution with interpolated sensor temperatures.
Öllinger, Michael; Jones, Gary; Knoblich, Günther
2014-03-01
The nine-dot problem is often used to demonstrate and explain mental impasse, creativity, and out-of-the-box thinking. The present study investigated the interplay of a restricted initial search space, the likelihood of invoking a representational change, and the subsequent constraining of an unrestricted search space. In three experimental conditions, participants worked on different versions of the nine-dot problem that hinted at removing particular sources of difficulty from the standard problem. The hints were incremental, such that the first suggested a possible route for a solution attempt; the second additionally indicated the dot at which lines meet on the solution path; and the final condition also provided non-dot locations that appear in the solution path. The results showed that in the experimental conditions, representational change was encountered more quickly and problems were solved more often than in the control group. We propose a cognitive model that focuses on general problem-solving heuristics and representational change to explain problem difficulty.
AI techniques for a space application scheduling problem
NASA Technical Reports Server (NTRS)
Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.
1991-01-01
Scheduling is a very complex optimization problem that can be categorized as NP-complete. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. We examine some of the factors which make space application scheduling problems difficult and present a fairly new AI-based technique called tabu search as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment that produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (EOS).
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV-penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails first finding the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here, uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent the linear attenuation coefficient in the patient body. Our results are simple yet appear to be unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
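The sequential solution is essentially two steps; in this sketch, skimage's isotropic Chambolle TV denoiser stands in for the paper's anisotropic TV problem, and clipping enforces a uniform interval constraint such as non-negativity:

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(0)
    clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
    noisy = clean + 0.2 * rng.normal(size=clean.shape)

    # Step 1: solve the *unconstrained* TV denoising problem.
    # (The isotropic Chambolle solver stands in for the anisotropic TV of the
    #  paper; the sequential argument is what is being illustrated.)
    denoised = denoise_tv_chambolle(noisy, weight=0.1)

    # Step 2: enforce the uniform interval constraint by simple clipping,
    # e.g. non-negativity of attenuation coefficients in CT.
    constrained = np.clip(denoised, 0.0, 1.0)
    print(constrained.min(), constrained.max())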
Modeling aspects of the surface reconstruction problem
NASA Astrophysics Data System (ADS)
Toth, Charles K.; Melykuti, Gabor
1994-08-01
The ultimate goal of digital photogrammetry is to automatically produce digital maps which may in turn form the basis of a GIS. Virtually all work in surface reconstruction deals with various kinds of approximations and the constraints that are applied. In this paper we extend these concepts in several ways. For one, matching is performed in object space; thus, matching and densification (modeling) are performed in the same reference system. Another extension concerns the solution of the second sub-problem: rather than simply densifying (interpolating) the surface, we propose to model it. This combined top-down and bottom-up approach is performed in scale space, whereby the model is refined until compatibility between the data and expectations is reached. The paper focuses on the modeling aspects of the surface reconstruction problem. Obviously, the top-down and bottom-up model descriptions ought to be in a form which allows the generation and verification of hypotheses. Another crucial question is the degree of a priori scene knowledge necessary to constrain the solution space.
Genetic algorithms as global random search methods
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
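A minimal sketch of the two operators characterized above, with proportional (fitness-proportionate) selection as the global search operator and one-point recombination exploiting similarities between parents; the toy bit-string fitness is an illustrative assumption.

    import numpy as np

    rng = np.random.default_rng(0)

    def proportional_selection(pop, fitness):
        # sample parents with probability proportional to fitness
        p = fitness / fitness.sum()
        return pop[rng.choice(len(pop), size=len(pop), p=p)]

    def recombine(pop):
        # one-point crossover between consecutive parent pairs
        out = pop.copy()
        for i in range(0, len(pop) - 1, 2):
            cut = rng.integers(1, pop.shape[1])
            out[i, cut:] = pop[i + 1, cut:]
            out[i + 1, cut:] = pop[i, cut:]
        return out

    # toy run: maximize the number of ones in a 16-bit string
    pop = rng.integers(0, 2, size=(20, 16))
    for _ in range(50):
        fitness = pop.sum(axis=1).astype(float) + 1e-9  # avoid all-zero weights
        pop = recombine(proportional_selection(pop, fitness))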
A format for the interchange of scheduling models
NASA Technical Reports Server (NTRS)
Jaap, John P.; Davis, Elizabeth K.
1994-01-01
In recent years a variety of space-activity schedulers have been developed within the aerospace community. Space-activity schedulers are characterized by their need to handle large numbers of activities which are time-window constrained and make high demands on many scarce resources, but are minimally constrained by predecessor/successor requirements or critical paths. Two needs to exchange data between these schedulers have materialized. First, there is significant interest in comparing and evaluating the different scheduling engines to ensure that the best technology is applied to each scheduling endeavor. Second, there is a developing requirement to divide a single scheduling task among different sites, each using a different scheduler. In fact, the scheduling task for International Space Station Alpha (ISSA) will be distributed among NASA centers and among the international partners. The format used to interchange scheduling data for ISSA will likely use a growth version of the format discussed in this paper. The model interchange format (or MIF, pronounced as one syllable) discussed in this paper is a robust solution to the need to interchange scheduling requirements for space activities. It is highly extensible, human-readable, and can be generated or edited with common text editors. It also serves well the need to support a 'benchmark' data case which can be delivered on any computer platform.
Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.
Heikal, A A; Wachowicz, K; Fallone, B G
2016-10-01
To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist sampled (NS) 32 × 32 MRSI, and four-times undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm where the sampling ratios are constrained to adhere to the PDFs. Strong visual correlation as well as a high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF, and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.
Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints
NASA Technical Reports Server (NTRS)
Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren
2015-01-01
Interplanetary missions are often subject to difficult constraints, like solar phase angle upon arrival at the destination, velocity at arrival, and altitudes for flybys. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions which do not naturally satisfy the constraints. However, this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work two stochastic global search methods are developed which are well suited to the constrained global interplanetary trajectory optimization problem.
The covariance matrix for the solution vector of an equality-constrained least-squares problem
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1976-01-01
Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
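The abstract does not reproduce the algorithms, but one standard construction consistent with it is the null-space method sketched below, assuming independent noise of variance sigma2 in b; all names are illustrative.

    import numpy as np
    from scipy.linalg import null_space

    def eqls_with_covariance(A, b, C, d, sigma2=1.0):
        # minimize ||Ax - b|| subject to Cx = d (null-space method)
        x0 = np.linalg.lstsq(C, d, rcond=None)[0]  # particular solution of Cx = d
        N = null_space(C)                          # basis for the null space of C
        AN = A @ N
        z = np.linalg.lstsq(AN, b - A @ x0, rcond=None)[0]
        x = x0 + N @ z
        # only the null-space directions propagate noise from b into x
        G = np.linalg.pinv(AN)
        cov = sigma2 * N @ G @ G.T @ N.T
        return x, cov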
Wiback, Sharon J; Mahadevan, Radhakrishnan; Palsson, Bernhard Ø
2004-05-05
Constraint-based metabolic modeling has been used to capture the genome-scale, systems properties of an organism's metabolism. The first generation of these models has been built on annotated gene sequence. To further this field, we now need to develop methods to incorporate additional "omic" data types including transcriptomics, metabolomics, and fluxomics to further facilitate the construction, validation, and predictive capabilities of these models. The work herein combines metabolic flux data with an in silico model of central metabolism of Escherichia coli for model-centric integration of the flux data. The extreme pathways for this network, which define the allowable solution space for all possible flux distributions, are analyzed using the alpha-spectrum. The alpha-spectrum determines which extreme pathways can and cannot contribute to the metabolic flux distribution for a given condition and gives the allowable range of weightings on each extreme pathway that can contribute. Since many extreme pathways cannot be used under certain conditions, the result is a "condition-specific" solution space that is a subset of the original solution space. The alpha-spectrum results are used to create a "condition-specific" extreme pathway matrix that can be analyzed using singular value decomposition (SVD). The first mode of the SVD analysis characterizes the solution space for a given condition. We show that SVD analysis of the alpha-spectrum extreme pathway matrix that incorporates measured uptake and byproduct secretion rates can predict internal flux trends for different experimental conditions. These predicted internal flux trends are, in general, consistent with the flux trends measured using experimental metabolic flux analysis techniques. Copyright 2004 Wiley Periodicals, Inc.
Solution of a Complex Least Squares Problem with Constrained Phase.
Bydder, Mark
2010-12-30
The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
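The paper describes a direct method; as a hedged stand-in that makes the constraint concrete, the brute-force sketch below scans the common phase and solves a real least-squares problem for each candidate phase (all names are illustrative).

    import numpy as np

    def phase_constrained_lstsq(A, b, n_phi=360):
        # find x = u * exp(i*phi) with u real, minimizing ||Ax - b||
        best_res, best_x = np.inf, None
        for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):
            Aphi = A * np.exp(1j * phi)
            Ar = np.vstack([Aphi.real, Aphi.imag])   # stack real/imag parts
            br = np.concatenate([b.real, b.imag])
            u = np.linalg.lstsq(Ar, br, rcond=None)[0]
            res = np.linalg.norm(Ar @ u - br)
            if res < best_res:
                best_res, best_x = res, u * np.exp(1j * phi)
        return best_x

Scanning [0, pi) suffices because a sign flip of u supplies the other half of the phase circle.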
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP), and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied in non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach, which is then compared with the existing penalty functions.
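A minimal sketch of the penalty transformation described above, using one common exterior quadratic form; the study compares several penalty forms, and this particular choice and the fixed factor r are assumptions.

    def penalized_objective(f, constraints, r):
        # constraints are functions g with feasibility meaning g(x) <= 0;
        # infeasible points pay a quadratic price scaled by r
        def f_pen(x):
            violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
            return f(x) + r * violation
        return f_pen

    # usage: minimize x0 + x1 subject to x0**2 + x1**2 - 1 <= 0
    f_pen = penalized_objective(lambda x: x[0] + x[1],
                                [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0],
                                r=100.0)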
Designing an optimal software intensive system acquisition: A game theoretic approach
NASA Astrophysics Data System (ADS)
Buettner, Douglas John
The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of the strong desire by contractors to skip or severely reduce software development design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality, schedule and cost-driven strategies demonstrate that the higher cost and effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. Game theory reasoning for schedule-driven engineers cutting corners on inspections and unit testing is based on the case study evidence and Austin's agency model to describe the observed phenomena. Game theory concepts are then used to argue that the source of the problem and hence the solution to developers cutting corners on quality for schedule-driven system acquisitions ultimately lies with the government. The game theory arguments also lead to the suggestion that the use of a multi-player dynamic Nash bargaining game provides a solution for our observed lack of quality game between the government (the acquirer) and "large-corporation" software developers. A note is provided that argues this multi-player dynamic Nash bargaining game also provides the solution to Freeman Dyson's problem, for a way to place a label of good or bad on systems.
Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2013-01-01
In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and opposition-based approach is applied. During the initialization, randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter one is selected as a priori guess. In harmony memory, each such solution passes through memory consideration rule, pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
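A minimal sketch of the opposition-based initialization step: generate random solutions, form their opposites within the search box, and keep the fitter of each pair; the bounds and the row-wise error function are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def opposition_init(n_pop, lo, hi, error):
        # error(X) returns one fitness-error value per row of X (lower is fitter)
        X = lo + (hi - lo) * rng.random((n_pop, lo.size))
        X_opp = lo + hi - X               # opposite point within [lo, hi]
        keep = error(X) <= error(X_opp)   # keep the fitter of each pair
        return np.where(keep[:, None], X, X_opp)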
Lovelock vacua with a recurrent null vector field
NASA Astrophysics Data System (ADS)
Ortaggio, Marcello
2018-02-01
Vacuum solutions of Lovelock gravity in the presence of a recurrent null vector field (a subset of Kundt spacetimes) are studied. We first discuss the general field equations, which constrain both the base space and the profile functions. While choosing a "generic" base space puts stronger constraints on the profile, in special cases there also exist solutions containing arbitrary functions (at least for certain values of the coupling constants). These and other properties (such as the pp-waves subclass and the overlap with VSI, CSI and universal spacetimes) are subsequently analyzed in more detail in lower dimensions n = 5, 6 as well as for particular choices of the base manifold. The obtained solutions describe various classes of nonexpanding gravitational waves propagating, e.g., in Nariai-like backgrounds M_2 × Σ_{n-2}. An Appendix contains some results about general (i.e., not necessarily Kundt) Lovelock vacua of Riemann type III/N and of Weyl and traceless-Ricci type III/N. For example, it is pointed out that for theories admitting a triply degenerate maximally symmetric vacuum, all the (reduced) field equations are satisfied identically, giving rise to large classes of exact solutions.
Optimization of Low-Thrust Spiral Trajectories by Collocation
NASA Technical Reports Server (NTRS)
Falck, Robert D.; Dankanich, John W.
2012-01-01
As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.
BFV-BRST analysis of equivalence between noncommutative and ordinary gauge theories
NASA Astrophysics Data System (ADS)
Dayi, O. F.
2000-05-01
The constrained Hamiltonian structure of noncommutative gauge theory for the gauge group U(1) is discussed. Constraints are shown to be first class, although they do not give an Abelian algebra in terms of Poisson brackets. The related BFV-BRST charge gives a vanishing generalized Poisson bracket by itself due to the associativity of the *-product. Equivalence of noncommutative and ordinary gauge theories is formulated in generalized phase space by using the BFV-BRST charge, and a solution is obtained. Gauge fixing is discussed.
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which will enable the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their shortcomings and of the development of constrained time stepping algorithms, as well as illustrate the results of several numerical experiments which benchmark the new procedure.
NASA Astrophysics Data System (ADS)
Chandran, A.; Schulz, Marc D.; Burnell, F. J.
2016-12-01
Many phases of matter, including superconductors, fractional quantum Hall fluids, and spin liquids, are described by gauge theories with constrained Hilbert spaces. However, thermalization and the applicability of quantum statistical mechanics has primarily been studied in unconstrained Hilbert spaces. In this paper, we investigate whether constrained Hilbert spaces permit local thermalization. Specifically, we explore whether the eigenstate thermalization hypothesis (ETH) holds in a pinned Fibonacci anyon chain, which serves as a representative case study. We first establish that the constrained Hilbert space admits a notion of locality by showing that the influence of a measurement decays exponentially in space. This suggests that the constraints are no impediment to thermalization. We then provide numerical evidence that ETH holds for the diagonal and off-diagonal matrix elements of various local observables in a generic disorder-free nonintegrable model. We also find that certain nonlocal observables obey ETH.
Enhanced Communication Network Solution for Positive Train Control Implementation
NASA Technical Reports Server (NTRS)
Fatehi, M. T.; Simon, J.; Chang, W.; Chow, E. T.; Burleigh, S. C.
2011-01-01
The commuter and freight railroad industry is required to implement Positive Train Control (PTC) by 2015 (2012 for Metrolink), a challenging network communications problem. This paper will discuss present technologies developed by the National Aeronautics and Space Administration (NASA) to overcome comparable communication challenges encountered in deep space mission operations. PTC will be based on a new cellular wireless packet Internet Protocol (IP) network. However, ensuring reliability in such a network is difficult due to the "dead zones" and transient disruptions we commonly experience when we lose calls in commercial cellular networks. These disruptions make it difficult to meet PTC's stringent reliability (99.999%) and safety requirements, deployment deadlines, and budget. This paper proposes innovative solutions based on space-proven technologies that would help meet these challenges: (1) Delay Tolerant Networking (DTN) technology, designed for use in resource-constrained, embedded systems and currently in use on the International Space Station, enables reliable communication over networks in which timely data acknowledgments might not be possible due to transient link outages. (2) Policy-Based Management (PBM) provides dynamic management capabilities, allowing vital data to be exchanged selectively (with priority) by utilizing alternative communication resources. The resulting network may help railroads implement PTC faster, cheaper, and more reliably.
NASA Astrophysics Data System (ADS)
Zahr, M. J.; Persson, P.-O.
2018-07-01
This work introduces a novel discontinuity-tracking framework for resolving discontinuous solutions of conservation laws with high-order numerical discretizations that support inter-element solution discontinuities, such as discontinuous Galerkin or finite volume methods. The proposed method aims to align inter-element boundaries with discontinuities in the solution by deforming the computational mesh. A discontinuity-aligned mesh ensures the discontinuity is represented through inter-element jumps while smooth basis functions interior to elements are only used to approximate smooth regions of the solution, thereby avoiding Gibbs' phenomena that create well-known stability issues. Therefore, very coarse high-order discretizations accurately resolve the piecewise smooth solution throughout the domain, provided the discontinuity is tracked. Central to the proposed discontinuity-tracking framework is a discrete PDE-constrained optimization formulation that simultaneously aligns the computational mesh with discontinuities in the solution and solves the discretized conservation law on this mesh. The optimization objective is taken as a combination of the deviation of the finite-dimensional solution from its element-wise average and a mesh distortion metric to simultaneously penalize Gibbs' phenomena and distorted meshes. It will be shown that our objective function satisfies two critical properties that are required for this discontinuity-tracking framework to be practical: (1) it possesses a local minimum at a discontinuity-aligned mesh and (2) it decreases monotonically to this minimum in a neighborhood of radius approximately h/2, whereas other popular discontinuity indicators fail to satisfy the latter. Another important contribution of this work is the observation that traditional reduced-space PDE-constrained optimization solvers that repeatedly solve the conservation law at various mesh configurations are not viable in this context, since severe overshoot and undershoot in the solution, i.e., Gibbs' phenomena, may make it impossible to solve the discrete conservation law on non-aligned meshes. Therefore, we advocate a gradient-based, full-space solver where the mesh and conservation law solution converge to their optimal values simultaneously and therefore never require the solution of the discrete conservation law on a non-aligned mesh. The merit of the proposed method is demonstrated on a number of one- and two-dimensional model problems including the L2 projection of discontinuous functions, Burgers' equation with a discontinuous source term, transonic flow through a nozzle, and supersonic flow around a bluff body. We demonstrate optimal O(h^{p+1}) convergence rates in the L1 norm for up to polynomial order p = 6 and show that accurate solutions can be obtained on extremely coarse meshes.
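A hedged sketch of the objective structure described above, combining the deviation of the discrete solution from its element-wise average with a mesh-distortion term; the distortion metric and the weight kappa are illustrative assumptions, not the paper's exact functional.

    import numpy as np

    def tracking_objective(u_by_element, mesh_distortion, kappa):
        # u_by_element: iterable of per-element solution coefficient arrays;
        # mesh_distortion: scalar distortion metric for the current mesh
        deviation = sum(float(((u - u.mean()) ** 2).sum()) for u in u_by_element)
        return deviation + kappa * mesh_distortion  # penalize Gibbs + bad meshes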
The Large Benefits of Small-Satellite Missions
NASA Astrophysics Data System (ADS)
Baker, Daniel N.; Worden, S. Pete
2008-08-01
Small-spacecraft missions play a key and compelling role in space-based scientific and engineering programs [Moretto and Robinson, 2008]. Compared with larger satellites, which can be in excess of 2000 kilograms, small satellites range from 750 kilograms (roughly the size of a golf cart) to less than 1 kilogram (about the size of a softball). They have been responsible for greatly reducing the time needed to obtain science and technology results. The shorter development times for smaller missions can reduce overall costs and can thus provide welcome budgetary options for highly constrained space programs. In many cases, we contend that 80% (or more) of program goals can be achieved for 20% of the cost by using small-spacecraft solutions.
Inflationary solutions in the brane world and their geometrical interpretation
NASA Astrophysics Data System (ADS)
Khoury, Justin; Steinhardt, Paul J.; Waldram, Daniel
2001-05-01
We consider the cosmology of a pair of domain walls bounding a five-dimensional bulk space-time with a negative cosmological constant, in which the distance between the branes is not fixed in time. Although there are strong arguments to suggest that this distance should be stabilized in the present epoch, no such constraints exist for the early universe and thus non-static solutions might provide relevant inflationary scenarios. We find the general solution for the standard ansatz where the bulk is foliated by planar-symmetric hypersurfaces. We show that in all cases the bulk geometry is that of anti-de Sitter (AdS5) space. We then present a geometrical interpretation for the solutions as embeddings of two de Sitter (dS4) surfaces in AdS5, which provide a simple interpretation of the physical properties of the solutions. A notable feature explained in the analysis is that two-way communication between branes expanding away from one another is possible for a finite amount of time, after which communication can proceed in one direction only. The geometrical picture also shows that our class of solutions (and related solutions in the literature) is not completely general, contrary to some claims. We then derive the most general solution for two walls in AdS5. This includes novel cosmologies where the brane tensions are not constrained to have opposite signs. The construction naturally generalizes to arbitrary FRW cosmologies on the branes.
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumatopias jubatus) in southeast Alaska.
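A minimal sketch of simulating a one-dimensional reflected diffusion with Euler-Maruyama time stepping and mirror reflection at the barriers; the reflection scheme and all parameters are illustrative assumptions, not the authors' data-augmentation algorithm.

    import numpy as np

    def reflected_em(x0, drift, sigma, lo, hi, dt, n_steps, seed=0):
        rng = np.random.default_rng(seed)
        x = np.empty(n_steps + 1)
        x[0] = x0
        for k in range(n_steps):
            y = x[k] + drift(x[k]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            while y < lo or y > hi:      # mirror the path back inside the barriers
                if y < lo:
                    y = 2 * lo - y
                if y > hi:
                    y = 2 * hi - y
            x[k + 1] = y
        return x

    # usage: mean-reverting movement confined to a shoreline corridor [0, 1]
    path = reflected_em(0.5, lambda x: -0.8 * (x - 0.5), 0.4, 0.0, 1.0, 0.01, 1000)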
On Efficient Deployment of Wireless Sensors for Coverage and Connectivity in Constrained 3D Space.
Wu, Chase Q; Wang, Li
2017-10-10
Sensor networks have been used in a rapidly increasing number of applications in many fields. This work generalizes a sensor deployment problem to place a minimum set of wireless sensors at candidate locations in constrained 3D space to k-cover a given set of target objects. By exhausting the combinations of discreteness/continuousness constraints on either sensor locations or target objects, we formulate four classes of sensor deployment problems in 3D space: deploy sensors at Discrete/Continuous Locations (D/CL) to cover Discrete/Continuous Targets (D/CT). We begin with the design of an approximate algorithm for DLDT and then reduce DLCT, CLDT, and CLCT to DLDT by discretizing continuous sensor locations or target objects into a set of divisions without sacrificing sensing precision. Furthermore, we consider a connected version of each problem where the deployed sensors must form a connected network, and design an approximation algorithm to minimize the number of deployed sensors with connectivity guarantee. For performance comparison, we design and implement an optimal solution and a genetic algorithm (GA)-based approach. Extensive simulation results show that the proposed deployment algorithms consistently outperform the GA-based heuristic and achieve a close-to-optimal performance in small-scale problem instances and a significantly superior overall performance than the theoretical upper bound.
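As a hedged illustration of the DLDT case, the greedy sketch below repeatedly places a sensor at the candidate location covering the most still-under-covered targets; the paper's actual approximation algorithm and its connectivity extension may differ.

    def greedy_k_cover(candidates, targets, covers, k):
        # covers(c, t) -> True if a sensor at location c senses target t
        need = {t: k for t in targets}           # remaining coverage per target
        pool, chosen = list(candidates), []
        while any(v > 0 for v in need.values()):
            gain = lambda c: sum(1 for t in targets if need[t] > 0 and covers(c, t))
            best = max(pool, key=gain)
            if gain(best) == 0:
                raise ValueError("remaining targets cannot be k-covered")
            chosen.append(best)
            pool.remove(best)
            for t in targets:
                if need[t] > 0 and covers(best, t):
                    need[t] -= 1
        return chosen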
Griffith, Caitlin A
2014-04-28
Infrared transmission and emission spectroscopy of exoplanets, recorded from primary transit and secondary eclipse measurements, indicate the presence of the most abundant carbon and oxygen molecular species (H2O, CH4, CO and CO2) in a few exoplanets. However, efforts to constrain the molecular abundances to within several orders of magnitude are thwarted by the broad range of degenerate solutions that fit the data. Here, we explore, with radiative transfer models and analytical approximations, the nature of the degenerate solution sets resulting from the sparse measurements of 'hot Jupiter' exoplanets. As demonstrated with simple analytical expressions, primary transit measurements probe roughly four atmospheric scale heights at each wavelength band. Derived mixing ratios from these data are highly sensitive to errors in the radius of the planet at a reference pressure. For example, an uncertainty of 1% in the radius of a 1000 K and H2-based exoplanet with Jupiter's radius and mass causes an uncertainty of a factor of approximately 100-10,000 in the derived gas mixing ratios. The degree of sensitivity depends on how the line strength increases with the optical depth (i.e. the curve of growth) and the atmospheric scale height. Temperature degeneracies in the solutions of the primary transit data, which manifest their effects through the scale height and absorption coefficients, are smaller. We argue that these challenges can be partially surmounted by a combination of selected wavelength sampling of optical and infrared measurements and, when possible, the joint analysis of transit and secondary eclipse data of exoplanets. However, additional work is needed to constrain other effects, such as those owing to planetary clouds and star spots. Given the current range of open questions in the field, in both observations and theory, there is a need for detailed measurements with space-based large mirror platforms (e.g. the James Webb Space Telescope) and smaller broad survey telescopes as well as ground-based efforts.
Terrestrial Sagnac delay constraining modified gravity models
NASA Astrophysics Data System (ADS)
Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.
2018-04-01
Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disk around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap, when the beams re-unite. We obtain the exact time gap called Sagnac delay in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitude of corrections are of the order of residual uncertainties in the delay measurement, we derive the allowed intervals for Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.
NASA Astrophysics Data System (ADS)
Wells, J. R.; Kim, J. B.
2011-12-01
Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from published literature, and where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET, and runoff, and thereby identify a highly important source of DGVM uncertainty.
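A minimal sketch of simulated annealing over a bounded parameter box of the kind described here; the Gaussian proposal, geometric cooling schedule, and cost function are illustrative assumptions, not the BIOMAP configuration.

    import numpy as np

    def simulated_annealing(cost, lo, hi, n_iter=10_000, T0=1.0, alpha=0.999, seed=0):
        rng = np.random.default_rng(seed)
        x = lo + (hi - lo) * rng.random(lo.size)   # random start inside the box
        c = cost(x)
        best, best_c, T = x, c, T0
        for _ in range(n_iter):
            prop = np.clip(x + 0.05 * (hi - lo) * rng.standard_normal(lo.size), lo, hi)
            c_new = cost(prop)
            # accept downhill moves always, uphill moves with Boltzmann probability
            if c_new < c or rng.random() < np.exp(-(c_new - c) / T):
                x, c = prop, c_new
                if c < best_c:
                    best, best_c = x, c
            T *= alpha  # geometric cooling
        return best, best_c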
A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement
NASA Astrophysics Data System (ADS)
Koner, P.; Battaglia, A.; Simmer, C.
2009-04-01
The retrieval of rain rate from the attenuated radar (e.g. the Cloud Profiling Radar on board CloudSAT, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need of additional information (like path-integrated attenuations (PIA) derived from surface reference techniques, or precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. It is generally argued on the basis of optimal estimation theory that there are no solutions without constraining the problem in a case of visible attenuation, because there is not enough information content to solve the problem. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the natural question: Is all information enclosed in this additional measurement? This also appears to contradict information theory, because one measurement can introduce only one degree of freedom in the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be explained using the estimation and information theories of OEM. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, where the a-priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a-priori and error covariance matrices. The regularization is required for the reduction of the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces to the state space in an ill-posed inversion. In this work, the above mentioned question will be discussed based on regularization theory, error mitigation and eigenvalue mathematics. References 1. L'Ecuyer TS and Stephens G. An estimation based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.
Constrained reduced-order models based on proper orthogonal decomposition
Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...
2017-04-09
A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
Anisotropic hydrodynamics for conformal Gubser flow
NASA Astrophysics Data System (ADS)
Nopoush, Mohammad; Ryblewski, Radoslaw; Strickland, Michael
2015-02-01
We derive the equations of motion for a system undergoing boost-invariant longitudinal and azimuthally symmetric transverse "Gubser flow" using leading-order anisotropic hydrodynamics. This is accomplished by assuming that the one-particle distribution function is ellipsoidally symmetric in the momenta conjugate to the de Sitter coordinates used to parametrize the Gubser flow. We then demonstrate that the SO(3)_q symmetry in de Sitter space further constrains the anisotropy tensor to be of spheroidal form. The resulting system of two coupled ordinary differential equations for the de Sitter-space momentum scale and anisotropy parameter are solved numerically and compared to a recently obtained exact solution of the relaxation-time-approximation Boltzmann equation subject to the same flow. We show that anisotropic hydrodynamics describes the spatiotemporal evolution of the system better than all currently known dissipative hydrodynamics approaches. In addition, we prove that anisotropic hydrodynamics gives the exact solution of the relaxation-time approximation Boltzmann equation in the ideal, η/s → 0, and free-streaming, η/s → ∞, limits.
Finite dimensional approximation of a class of constrained nonlinear optimal control problems
NASA Technical Reports Server (NTRS)
Gunzburger, Max D.; Hou, L. S.
1994-01-01
An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces and an approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.
Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David
2016-01-01
Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.
Exploration versus exploitation in space, mind, and society
Hills, Thomas T.; Todd, Peter M.; Lazer, David; Redish, A. David; Couzin, Iain D.
2015-01-01
Search is a ubiquitous property of life. Although diverse domains have worked on search problems largely in isolation, recent trends across disciplines indicate that the formal properties of these problems share similar structures and, often, similar solutions. Moreover, internal search (e.g., memory search) shows similar characteristics to external search (e.g., spatial foraging), including shared neural mechanisms consistent with a common evolutionary origin across species. Search problems and their solutions also scale from individuals to societies, underlying and constraining problem solving, memory, information search, and scientific and cultural innovation. In summary, search represents a core feature of cognition, with a vast influence on its evolution and processes across contexts and requiring input from multiple domains to understand its implications and scope. PMID:25487706
NASA Astrophysics Data System (ADS)
Leys, Antoine; Hull, Tony; Westerhoff, Thomas
2015-09-01
We address the problem that larger spaceborne mirrors require greater sectional thickness to achieve a first eigenfrequency sufficient to be resilient to launch loads and to be stable during optical telescope assembly integration and test; this added thickness results in unacceptable added mass if we simply scale up solutions for smaller mirrors. Special features, like cathedral ribs, arches, chamfers, and a back side following the contour of the mirror face, have been considered for these studies. For computational efficiency, we have conducted detailed analysis on various configurations of an 800 mm hexagonal segment and of a 1.2-m mirror, in a manner that they can be constrained by manufacturing parameters as would be a 4-m mirror. Furthermore, each model considered has also been constrained by cost-effective machining practice as defined in the SCHOTT Mainz factory. Analysis on variants of this 1.2-m mirror has shown a favorable configuration. We have then scaled this optimal configuration to 4-m aperture. We discuss resulting parameters of cost-optimized 4-m mirrors. We also discuss the advantages and disadvantages this analysis reveals of going to a cathedral rib architecture on 1-m class mirror substrates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk
2016-10-15
Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
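The linear-algebra core of this step, extracting the stationary behavior of the constrained fast subsystem from the null space of its generator, can be sketched on a toy three-state continuous-time Markov chain; the generator matrix is an illustrative assumption, not the paper's chemical system.

    import numpy as np
    from scipy.linalg import null_space

    # toy generator Q of a 3-state fast subsystem (each row sums to zero)
    Q = np.array([[-2.0,  1.5,  0.5],
                  [ 3.0, -4.0,  1.0],
                  [ 0.5,  0.5, -1.0]])

    pi = null_space(Q.T)[:, 0]   # stationary distribution solves pi @ Q = 0
    pi = pi / pi.sum()           # normalize to a probability vector
    # effective slow dynamics would then be built from expectations under pi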
NASA Astrophysics Data System (ADS)
Chandra, Rishabh
Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems which have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
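A QUBO instance is simply minimization of x^T Q x over binary vectors x; the exhaustive reference solver below (practical only for tiny n, unlike the adiabatic hardware the mapping targets) is an illustrative sketch.

    import itertools
    import numpy as np

    def qubo_bruteforce(Q):
        n = Q.shape[0]
        best_x, best_E = None, np.inf
        for bits in itertools.product((0, 1), repeat=n):
            x = np.array(bits)
            E = x @ Q @ x         # QUBO energy
            if E < best_E:
                best_x, best_E = x, E
        return best_x, best_E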
Results of two multichord stellar occultations by dwarf planet (1) Ceres
NASA Astrophysics Data System (ADS)
Gomes-Júnior, A. R.; Giacchini, B. L.; Braga-Ribas, F.; Assafin, M.; Vieira-Martins, R.; Camargo, J. I. B.; Sicardy, B.; Timerson, B.; George, T.; Broughton, J.; Blank, T.; Benedetti-Rossi, G.; Brooks, J.; Dantowitz, R. F.; Dunham, D. W.; Dunham, J. B.; Ellington, C. K.; Emilio, M.; Herpich, F. R.; Jacques, C.; Maley, P. D.; Mehret, L.; Mello, A. J. T.; Milone, A. C.; Pimentel, E.; Schoenell, W.; Weber, N. S.
2015-08-01
We report the results of two multichord stellar occultations by the dwarf planet (1) Ceres that were observed from Brazil on 2010 August 17, and from the USA on 2013 October 25. Four positive detections were obtained for the 2010 occultation, and nine for the 2013 occultation. Elliptical models were adjusted to the observed chords to obtain Ceres' size and shape. Two limb-fitting solutions were studied for each event. The first one is a nominal solution with an indeterminate polar aspect angle. The second one was constrained by the pole coordinates as given by Drummond et al. Assuming a Maclaurin spheroid, we determine an equatorial diameter of 972 ± 6 km and an apparent oblateness of 0.08 ± 0.03 as our best solution. These results are compared to all available size and shape determinations for Ceres made so far, and shall be confirmed by NASA's Dawn space mission.
Adaptable Constrained Genetic Programming: Extensions and Applications
NASA Technical Reports Server (NTRS)
Janikow, Cezary Z.
2005-01-01
An evolutionary algorithm applies evolution-based principles to problem solving. To solve a problem, the user defines the space of potential solutions, the representation space. Sample solutions are encoded in a chromosome-like structure. The algorithm maintains a population of such samples, which undergo simulated evolution by means of mutation, crossover, and survival of the fittest principles. Genetic Programming (GP) uses tree-like chromosomes, providing very rich representation suitable for many problems of interest. GP has been successfully applied to a number of practical problems such as learning Boolean functions and designing hardware circuits. To apply GP to a problem, the user needs to define the actual representation space, by defining the atomic functions and terminals labeling the actual trees. The sufficiency principle requires that the label set be sufficient to build the desired solution trees. The closure principle allows the labels to mix in any arity-consistent manner. To satisfy both principles, the user is often forced to provide a large label set, with ad hoc interpretations or penalties to deal with undesired local contexts. This unfortunately enlarges the actual representation space, and thus usually slows down the search. In the past few years, three different methodologies have been proposed to allow the user to alleviate the closure principle by providing means to define, and to process, constraints on mixing the labels in the trees. Last summer we proposed a new methodology to further alleviate the problem by discovering local heuristics for building quality solution trees. A pilot system was implemented last summer and tested throughout the year. This summer we have implemented a new revision, and produced a User's Manual so that the pilot system can be made available to other practitioners and researchers. We have also designed, and partly implemented, a larger system capable of dealing with much more powerful heuristics.
Scientific and Engineering Studies: Spectral Estimation
1989-08-11
PROBLEM SOLUTION: Four different constrained problems will be addressed in this section: constrained window duration L; constrained equivalent ... [the remainder of this record, including equations (B-11), (B-18), and (B-19), is garbled in the source scan and is not recoverable].
A Bitter Pill: The Cosmic Lithium Problem
NASA Astrophysics Data System (ADS)
Fields, Brian
2014-03-01
Primordial nucleosynthesis describes the production of the lightest nuclides in the first three minutes of cosmic time. We will discuss the transformative influence of the WMAP and Planck determinations of the cosmic baryon density. Coupled with nucleosynthesis theory, these measurements make tight predictions for the primordial light element abundances: deuterium observations agree spectacularly with these predictions, helium observations are in good agreement, but lithium observations (in ancient halo stars) are significantly discrepant; this is the "lithium problem." Over the past decade, the lithium discrepancy has become more severe, and very recently the solution space has shrunk. A solution due to new nuclear resonances has now been essentially ruled out experimentally. Stellar evolution solutions remain viable but must be finely tuned. Observational systematics are now being probed by qualitatively new methods of lithium observation. Finally, new physics solutions are now strongly constrained by the combination of the precision baryon determination by Planck, and the need to match the D/H abundances now measured to unprecedented precision at high redshift. Supported in part by NSF grant PHY-1214082.
Computation in Dynamically Bounded Asymmetric Systems
Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney
2015-01-01
Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable ‘expansion’ dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to reduce until an acceptable solution manifold is reached. Then the system contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. Therefore we propose that very simple organizational constraints that combine these motifs can lead to spontaneous computation and so to the spontaneous modification of entropy that is characteristic of living systems. PMID:25617645
An object correlation and maneuver detection approach for space surveillance
NASA Astrophysics Data System (ADS)
Huang, Jian; Hu, Wei-Dong; Xin, Qin; Du, Xiao-Yong
2012-10-01
Object correlation and maneuver detection are persistent problems in space surveillance and maintenance of a space object catalog. We integrate these two problems into one interrelated problem, and consider them simultaneously under a scenario where space objects only perform a single in-track orbital maneuver during the time intervals between observations. We mathematically formulate this integrated scenario as a maximum a posteriori (MAP) estimation. In this work, we propose a novel approach to solve the MAP estimation. More precisely, the corresponding posterior probability of an orbital maneuver and a joint association event can be approximated by the Joint Probabilistic Data Association (JPDA) algorithm. Subsequently, the maneuvering parameters are estimated by optimally solving the constrained non-linear least squares iterative process based on the second-order cone programming (SOCP) algorithm. The desired solution is derived according to the MAP criteria. The performance and advantages of the proposed approach have been shown by both theoretical analysis and simulation results. We hope that our work will stimulate future work on space surveillance and maintenance of a space object catalog.
Constraining the Physical Properties of Near-Earth Object 2009 BD
NASA Astrophysics Data System (ADS)
Mommert, M.; Hora, J. L.; Farnocchia, D.; Chesley, S. R.; Vokrouhlický, D.; Trilling, D. E.; Mueller, M.; Harris, A. W.; Smith, H. A.; Fazio, G. G.
2014-05-01
We report on Spitzer Space Telescope Infrared Array Camera observations of near-Earth object 2009 BD that were carried out in support of the NASA Asteroid Robotic Retrieval Mission concept. We did not detect 2009 BD in 25 hr of integration at 4.5 μm. Based on an upper-limit flux density determination from our data, we present a probabilistic derivation of the physical properties of this object. The analysis is based on the combination of a thermophysical model with an orbital model accounting for the non-gravitational forces acting upon the body. We find two physically possible solutions. The first solution shows 2009 BD as a 2.9 ± 0.3 m diameter rocky body (ρ = 2.9 ± 0.5 g cm-3) with an extremely high albedo of 0.85_{-0.10}^{+0.20} that is covered with regolith-like material, causing it to exhibit a low thermal inertia (Γ = 30_{-10}^{+20} SI units). The second solution suggests 2009 BD to be a 4 ± 1 m diameter asteroid with p_V = 0.45_{-0.15}^{+0.35} that consists of a collection of individual bare rock slabs (Γ = 2000 ± 1000 SI units, ρ = 1.7_{-0.4}^{+0.7} g cm-3). We are unable to rule out either solution based on physical reasoning. 2009 BD is the smallest asteroid for which physical properties have been constrained, in this case using an indirect method and based on a detection limit, providing unique information on the physical properties of objects in the size range smaller than 10 m.
Design Principles of Regulatory Networks: Searching for the Molecular Algorithms of the Cell
Lim, Wendell A.; Lee, Connie M.; Tang, Chao
2013-01-01
A challenge in biology is to understand how complex molecular networks in the cell execute sophisticated regulatory functions. Here we explore the idea that there are common and general principles that link network structures to biological functions, principles that constrain the design solutions that evolution can converge upon for accomplishing a given cellular task. We describe approaches for classifying networks based on abstract architectures and functions, rather than on the specific molecular components of the networks. For any common regulatory task, can we define the space of all possible molecular solutions? Such inverse approaches might ultimately allow the assembly of a design table of core molecular algorithms that could serve as a guide for building synthetic networks and modulating disease networks. PMID:23352241
NASA Technical Reports Server (NTRS)
Hanks, Brantley R.; Skelton, Robert E.
1991-01-01
Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (of the algebraic Riccati equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
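For reference, the unconstrained LQR baseline discussed above can be computed by solving the algebraic Riccati equation directly; the following sketch uses SciPy on an illustrative mass-spring-damper model (matrices are invented, and the paper's restriction to passive, structurally realizable gains is not imposed):

```python
# Minimal sketch of the unconstrained LQR baseline: solve the algebraic
# Riccati equation and form the optimal gain. The structural constraint to
# passive members/dampers addressed in the paper would further restrict K.
import numpy as np
from scipy.linalg import solve_continuous_are

# 1-DOF mass-spring-damper in first-order form: x = [position, velocity]
A = np.array([[0.0, 1.0], [-4.0, -0.1]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[1.0]])      # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal feedback u = -K x
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```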
Competition Between Transients in the Rate of Approach to a Fixed Point
NASA Astrophysics Data System (ADS)
Day, Judy; Rubin, Jonathan E.; Chow, Carson C.
2009-01-01
The goal of this paper is to provide and apply tools for analyzing a specific aspect of transient dynamics not covered by previous theory. The question we address is whether one component of a perturbed solution to a system of differential equations can overtake the corresponding component of a reference solution as both converge to a stable node at the origin, given that the perturbed solution was initially farther away and that both solutions are nonnegative for all time. We call this phenomenon tolerance, for its relation to a biological effect. We show using geometric arguments that tolerance will exist in generic linear systems with a complete set of eigenvectors and in excitable nonlinear systems. We also define a notion of inhibition that may constrain the regions in phase space where the possibility of tolerance arises in general systems. However, these general existence theorems do not yield an assessment of tolerance for specific initial conditions. To address that issue, we develop some analytical tools for determining if particular perturbed and reference solution initial conditions will exhibit tolerance.
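A quick numerical check of the tolerance phenomenon in a linear system with a stable node (the matrix and initial conditions below are illustrative choices, not taken from the paper):

```python
# Does a component of the initially farther (perturbed) solution dip below
# the corresponding component of the reference solution as both decay?
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0, -0.5], [0.0, -3.0]])       # stable node (eigenvalues -1, -3)
f = lambda t, x: A @ x
t_eval = np.linspace(0.0, 6.0, 400)

ref = solve_ivp(f, (0, 6), [1.0, 0.2], t_eval=t_eval).y
per = solve_ivp(f, (0, 6), [1.1, 1.0], t_eval=t_eval).y   # initially farther

# both solutions stay nonnegative here; tolerance = component 1 overtaking
overtakes = np.any(per[0] < ref[0])
print("component 1 of perturbed solution overtakes reference:", overtakes)
```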
Trajectory planning of free-floating space robot using Particle Swarm Optimization (PSO)
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Walter, Ulrich
2015-07-01
This paper investigates the application of the Particle Swarm Optimization (PSO) strategy to trajectory planning of kinematically redundant space robots in free-floating mode. Due to path-dependent dynamic singularities, the volume of available workspace of the space robot is limited, and enormous joint velocities are required when such singularities are met. In order to overcome this effect, the direct kinematics equations in conjunction with PSO are employed for trajectory planning of the free-floating space robot. The joint trajectories are parametrized with Bézier curves to simplify the calculation. A constrained PSO scheme with adaptive inertia weight is implemented to find the optimal solution of joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue due to the application of forward kinematic equations. Simulation results are presented for trajectory planning of a 7 degree-of-freedom (DOF) redundant manipulator mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
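A compact sketch of the general idea, with a single joint, a hypothetical joint-rate limit handled by a penalty, and a bare-bones global-best PSO with fixed (not adaptive) inertia standing in for the paper's constrained scheme:

```python
# Toy sketch: parametrize a joint trajectory with a Bezier curve and let a
# simplified PSO choose the interior control points. Cost and limits invented.
import numpy as np

rng = np.random.default_rng(1)

def bezier(ctrl, t):
    # de Casteljau evaluation of a 1-D Bezier curve at times t
    pts = np.repeat(ctrl[None, :], len(t), axis=0)
    while pts.shape[1] > 1:
        pts = (1 - t)[:, None] * pts[:, :-1] + t[:, None] * pts[:, 1:]
    return pts[:, 0]

t = np.linspace(0.0, 1.0, 100)
q0, qf, vmax = 0.0, 1.0, 2.5                     # boundary joints, rate cap

def cost(interior):
    ctrl = np.concatenate(([q0], interior, [qf]))
    q = bezier(ctrl, t)
    qdot = np.gradient(q, t)
    penalty = 1e3 * np.maximum(np.abs(qdot) - vmax, 0.0).sum()
    return (qdot ** 2).mean() + penalty          # smoothness + limit penalty

n_particles, dim, iters = 30, 3, 200
x = rng.uniform(-1, 2, (n_particles, dim)); v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
g = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
    x = x + v
    f = np.array([cost(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[pbest_f.argmin()].copy()
print("best interior control points:", np.round(g, 3))
```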
The Wronskian solution of the constrained discrete Kadomtsev-Petviashvili hierarchy
NASA Astrophysics Data System (ADS)
Li, Maohua; He, Jingsong
2016-05-01
From the constrained discrete Kadomtsev-Petviashvili (cdKP) hierarchy, the discrete nonlinear Schrödinger (DNLS) equations have been derived. By means of the gauge transformation, the Wronskian solution of the DNLS equations has been given. The u1 of the cdKP hierarchy is a Y-type soliton solution for odd iterations of the gauge transformation, but it becomes a dark-bright soliton solution for even iterations. The role of the discrete variable n in the profile of u1 is discussed.
Integrated manufacturing flow for selective-etching SADP/SAQP
NASA Astrophysics Data System (ADS)
Ali, Rehab Kotb; Fatehy, Ahmed Hamed; Word, James
2018-03-01
Printing the cut mask in SAMP (Self-Aligned Multi-Patterning) is very challenging at advanced nodes. One of the proposed solutions is to print the cut shapes selectively, meaning that the design is decomposed into mandrel tracks, mandrel cuts, and non-mandrel cuts. The mandrel and non-mandrel cuts are mutually independent, which relaxes spacing constraints and consequently allows denser metal lines. In this paper, we propose a manufacturing flow for the selective-etching process. The results are quantified by measuring PVBand, EPE, and the number of hard bridging and pinching instances across the layout.
Barvinsky, A O
2007-08-17
The density matrix of the Universe for the microcanonical ensemble in quantum cosmology describes an equipartition in the physical phase space of the theory (sum over everything), but in terms of the observable spacetime geometry this ensemble is peaked about the set of recently obtained cosmological instantons limited to a bounded range of the cosmological constant. This suggests a mechanism for constraining the landscape of string vacua and a possible solution to the dark energy problem in the form of the quasiequilibrium decay of the microcanonical state of the Universe.
NASA Astrophysics Data System (ADS)
Pasyanos, Michael E.; Franz, Gregory A.; Ramirez, Abelardo L.
2006-03-01
In an effort to build seismic models that are the most consistent with multiple data sets, we have applied a new probabilistic inverse technique. This method uses a Markov chain Monte Carlo (MCMC) algorithm to sample models from a prior distribution and test them against multiple data types to generate a posterior distribution. While computationally expensive, this approach has several advantages over deterministic models, notably the seamless reconciliation of different data types that constrain the model, the proper handling of both data and model uncertainties, and the ability to easily incorporate a variety of prior information, all in a straightforward, natural fashion. A real advantage of the technique is that it provides a more complete picture of the solution space. By mapping out the posterior probability density function, we can avoid simplistic assumptions about the model space and allow alternative solutions to be identified, compared, and ranked. Here we use this method to determine the crust and upper mantle structure of the Yellow Sea and Korean Peninsula region. The model is parameterized as a series of seven layers in a regular latitude-longitude grid, each of which is characterized by thickness and seismic parameters (Vp, Vs, and density). We use surface wave dispersion and body wave traveltime data to drive the model. We find that when properly tuned (i.e., the Markov chains have had adequate time to fully sample the model space and the inversion has converged), the technique behaves as expected. The posterior model reflects the prior information at the edges of the model, where there is little or no data to constrain adjustments, but the range of acceptable models is significantly reduced in data-rich regions, producing values of sediment thickness, crustal thickness, and upper mantle velocities consistent with expectations based on knowledge of the regional tectonic setting.
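A schematic Metropolis-style MCMC loop of the kind described, with a placeholder two-parameter model and misfit standing in for the real surface-wave and body-wave forward problems:

```python
# Schematic Metropolis sampler: draw model parameters, accept or reject
# against a multi-data-set misfit. Forward models and data are placeholders.
import numpy as np

rng = np.random.default_rng(2)

def misfit(m):
    # stand-in for surface-wave + body-wave misfit of a 2-parameter model
    pred_disp, pred_tt = 3.0 * m[0], 1.5 * m[0] + 0.5 * m[1]
    return ((pred_disp - 10.2) / 0.3) ** 2 + ((pred_tt - 6.1) / 0.2) ** 2

m = np.array([3.0, 2.0])                 # e.g. crustal Vs and thickness scale
samples = []
for _ in range(20000):
    m_new = m + rng.normal(0.0, 0.05, 2)        # random-walk proposal
    log_a = -0.5 * (misfit(m_new) - misfit(m))  # Gaussian likelihood ratio
    if np.log(rng.random()) < log_a:
        m = m_new
    samples.append(m.copy())

post = np.array(samples[5000:])                 # drop burn-in
print("posterior mean:", post.mean(axis=0), " std:", post.std(axis=0))
```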
NASA Technical Reports Server (NTRS)
Seyffert, A. S.; Venter, C.; Johnson, T. J.; Harding, A. K.
2012-01-01
Since the launch of the Large Area Telescope (LAT) on board the Fermi spacecraft in June 2008, the number of observed gamma-ray pulsars has increased dramatically. A large number of these are also observed at radio frequencies. Constraints on the viewing geometries of 5 of 6 gamma-ray pulsars exhibiting single-peaked gamma-ray profiles were derived using high-quality radio polarization data [1]. We obtain independent constraints on the viewing geometries of all 6 by using a geometric emission code to model the Fermi LAT and radio light curves (LCs). We find fits for the magnetic inclination and observer angles by searching the solution space by eye. Our results are generally consistent with those previously obtained [1], although we do find small differences in some cases. We will indicate how the gamma-ray and radio pulse shapes, as well as their relative phase lags, lead to constraints in the solution space. Values for the flux correction factor (f(omega)) corresponding to the fits are also derived (with errors).
NASA Astrophysics Data System (ADS)
Li, Zheng-Yan; Xie, Zheng-Wei; Chen, Tong; Ouyang, Qi
2009-12-01
Constraint-based models such as flux balance analysis (FBA) are a powerful tool for studying biological metabolic networks. Under the hypothesis that cells operate at an optimal growth rate as the result of evolution and natural selection, this model successfully predicts most cellular behaviours related to growth rate. However, the model ignores the fact that cells can change their cellular metabolic states during evolution, leaving optimal metabolic states unstable. Here, we lump all the cellular processes that change metabolic states into a single term, 'noise', and assume that cells change metabolic states by randomly walking in the feasible solution space. By simulating the state of a cell randomly walking in the constrained solution space of metabolic networks, we found that in a noisy environment cells in optimal states tend to travel away from these optimal points. On considering the competition between the noise effect and the growth effect in cell evolution, we found that there exists a trade-off between these two effects. As a result, the population of cells contains different cellular metabolic states, and the population growth rate is at suboptimal states.
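The random-walk picture can be sketched as follows; the stoichiometry, bounds, and step size are toy values, and the null-space parametrization is one simple way to keep every step on the steady-state constraint S v = 0:

```python
# Sketch of a metabolic state random-walking inside the feasible solution
# space {v : S v = 0, lb <= v <= ub} of a toy network.
import numpy as np
from scipy.linalg import null_space

S = np.array([[1.0, -1.0, 0.0, 0.0],     # toy stoichiometry, 2 metabolites
              [0.0, 1.0, -1.0, -1.0]])   # 4 reactions
lb, ub = np.zeros(4), np.full(4, 10.0)

N = null_space(S)                        # feasible directions: dv = N @ dt
rng = np.random.default_rng(3)
v = np.array([1.0, 1.0, 0.5, 0.5])       # a feasible starting flux state

trajectory = [v]
for _ in range(1000):
    step = N @ rng.normal(0.0, 0.2, N.shape[1])      # "noise" in state space
    v_new = v + step
    if np.all(v_new >= lb) and np.all(v_new <= ub):  # reject bound violations
        v = v_new
    trajectory.append(v)
print("mean growth-proxy flux v[2]:", np.mean([x[2] for x in trajectory]))
```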
A High Performance COTS Based Computer Architecture
NASA Astrophysics Data System (ADS)
Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland
2014-08-01
Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
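For scale, the baseline this algorithm accelerates looks like the following: a nonnegativity-constrained least-squares problem solved independently for each observation vector. The combinatorial speedup (grouping columns that share an active set) is omitted here, and the matrices are random placeholders.

```python
# Baseline sketch: solve NNLS independently for many observation vectors.
# The cited fast combinatorial algorithm reorganizes these solves to share
# work across columns of B; that bookkeeping is not shown.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
A = rng.random((50, 5))                  # common design matrix
X_true = rng.random((5, 200))            # 200 observation vectors
B = A @ X_true + 0.01 * rng.normal(size=(50, 200))

X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
print("max reconstruction error:", np.abs(X - X_true).max())
```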
Pseudo-updated constrained solution algorithm for nonlinear heat conduction
NASA Technical Reports Server (NTRS)
Tovichakchaikul, S.; Padovan, J.
1983-01-01
This paper develops efficiency and stability improvements in the incremental successive substitution (ISS) procedure commonly used to generate the solution to nonlinear heat conduction problems. This is achieved by employing the pseudo-update scheme of Broyden, Fletcher, Goldfarb and Shanno in conjunction with the constrained version of the ISS. The resulting algorithm retains the formulational simplicity associated with ISS schemes while incorporating the enhanced convergence properties of slope driven procedures as well as the stability of constrained approaches. To illustrate the enhanced operating characteristics of the new scheme, the results of several benchmark comparisons are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Chao, M
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies (the enabling phenomenology in video compression and encoding techniques) inherent in the dynamic properties of the diaphragm motion were integrated with the geometric shape of the diaphragm boundary and an associated algebraic constraint that significantly reduces the search space of viable parabolic parameters; the resulting problem can be optimized effectively by a constrained linear regression approach on the subsequent projections. The algebraic constraints stipulating the kinetic range of the motion, together with the spatial constraint preventing unphysical deviations, made it possible to obtain the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm for all enrolled patients. The submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images. Conclusion: The new algorithm will provide a potential solution for rendering diaphragm motion and ultimately improving tumor motion management for radiation therapy of cancer patients.
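A simplified stand-in for the constrained regression step is a bounded linear least-squares fit of the parabola coefficients, with the bounds playing the role of the paper's kinetic and spatial constraints (bound values and data below are invented):

```python
# Fit a parabola y = a x^2 + b x + c to candidate diaphragm edge points,
# with bounds on (a, b, c) standing in for the kinetic/spatial constraints.
import numpy as np
from scipy.optimize import lsq_linear

x = np.linspace(-40, 40, 81)                       # pixel columns
y_true = -0.02 * x ** 2 + 0.1 * x + 120.0          # "diaphragm" boundary
y = y_true + np.random.default_rng(5).normal(0, 0.5, x.size)

A = np.column_stack([x ** 2, x, np.ones_like(x)])
res = lsq_linear(A, y,
                 bounds=([-0.05, -1.0, 100.0],     # lower bounds on (a, b, c)
                         [0.0, 1.0, 140.0]))       # upper bounds
print("fitted (a, b, c):", np.round(res.x, 3))
```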
Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L
1997-04-25
A statistical approach to the analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation, with application of linear programming methods to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; and (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting phenomena' and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.
WaterNet:The NASA Water Cycle Solutions Network
NASA Astrophysics Data System (ADS)
Belvedere, D. R.; Houser, P. R.; Pozzi, W.; Imam, B.; Schiffer, R.; Schlosser, C. A.; Gupta, H.; Martinez, G.; Lopez, V.; Vorosmarty, C.; Fekete, B.; Matthews, D.; Lawford, R.; Welty, C.; Seck, A.
2008-12-01
Water is essential to life and directly impacts and constrains society's welfare, progress, and sustainable growth, and is continuously being transformed by climate change, erosion, pollution, and engineering. Projections of the effects of such factors will remain speculative until more effective global prediction systems and applications are implemented. NASA's unique role is to use its view from space to improve water and energy cycle monitoring and prediction, and has taken steps to collaborate and improve interoperability with existing networks and nodes of research organizations, operational agencies, science communities, and private industry. WaterNet is a Solutions Network, devoted to the identification and recommendation of candidate solutions that propose ways in which water-cycle related NASA research results can be skillfully applied by partner agencies, international organizations, state, and local governments. It is designed to improve and optimize the sustained ability of water cycle researchers, stakeholders, organizations and networks to interact, identify, harness, and extend NASA research results to augment Decision Support Tools that address national needs.
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on remedial efficiency predictions. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which would otherwise lead to a poor quantification of predictive uncertainty. Application of the proposed approach to the bioremediation of groundwater at a real site shows that it effectively supports management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for utilizing model predictive uncertainty methods in environmental management.
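A conceptual sketch of the null-space Monte Carlo idea: perturb a calibrated parameter set only along directions the observations cannot see, so each sample remains (nearly) calibrated. The Jacobian below is a random stand-in for a real model's sensitivity matrix.

```python
# Perturb calibrated parameters within the null space of the sensitivity
# (Jacobian) matrix, leaving the fit to the calibration data unchanged.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(6)
n_obs, n_par = 48, 60                      # more parameters than observations
J = rng.normal(size=(n_obs, n_par))        # sensitivities at calibration
p_cal = rng.normal(size=n_par)             # calibrated parameter set

N = null_space(J)                          # basis of unobservable directions
ensemble = [p_cal + N @ rng.normal(0.0, 1.0, N.shape[1]) for _ in range(1000)]

# residual change stays ~0 because J @ (p - p_cal) = 0 for null-space moves
print("max |J @ dp|:", max(np.abs(J @ (p - p_cal)).max() for p in ensemble))
```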
Spitzer observations of two mission-accessible, tiny asteroids
NASA Astrophysics Data System (ADS)
Mommert, M.; Hora, J.; Farnocchia, D.; Chesley, S.; Vokrouhlicky, D.; Trilling, D.; Mueller, M.; Harris, A.; Smith, H.; Fazio, G.
2014-07-01
Small asteroids are most likely collisional fragments of larger objects and make up a large fraction of the near-Earth-object (NEO) population. Despite their abundance, little is known about the physical properties of these objects, mainly because of their faintness, which also impedes their discovery. We report on Spitzer Space Telescope observations of two small NEOs, both of which are of interest as potential spacecraft targets. We observed NEO 2009 BD using 25 hrs and 2011 MD using ˜20 hrs of Spitzer Infrared Array Camera Channel 2 time. For each target, we have combined the data into maps in the moving frame of the target, minimizing the background confusion. We did not detect 2009 BD and place an upper limit on its flux density, but we detected 2011 MD at the 2.2σ level. We have analyzed the data on both objects in a combined model approach, using an asteroid thermophysical model and a model of non-gravitational forces acting on the object. As a result, we are able to constrain the physical properties of both objects. In the case of 2009 BD (Mommert et al. 2014), a wealth of existing astrometry data significantly constrains the physical properties of the object. We find two physically possible solutions. The first solution shows 2009 BD as a 2.9±0.3 m-sized rocky body (bulk density ρ=2.9±0.5 g cm^{-3}) with an extremely high albedo of 0.85_{-0.10}^{+0.20} that is covered with regolith-like material, causing it to exhibit a low thermal inertia (Γ=30_{-10}^{+20} SI units). The second solution suggests 2009 BD to be a 4±1 m-sized asteroid with p_{V}=0.45_{-0.15}^{+0.35} that consists of a collection of individual bare rock slabs (Γ = 2000±1000 SI units, ρ = 1.7_{-0.4}^{+0.7} g cm^{-3}). We are unable to rule out either solution based on physical reasoning. The preliminary analysis of 2011 MD shows this object as a ˜6 m-sized asteroid with an albedo of ˜0.3. Additional constraints on the physical properties of these objects will be available at the time of the conference (Mommert et al., in preparation). 2009 BD and 2011 MD are the smallest asteroids for which physical properties have been constrained, providing unique insights into a population of asteroids that gives rise to frequent impacts on the Earth and the Moon. Furthermore, both asteroids are among the most easily accessible objects in space.
Multiple positive normalized solutions for nonlinear Schrödinger systems
NASA Astrophysics Data System (ADS)
Gou, Tianxiang; Jeanjean, Louis
2018-05-01
We consider the existence of multiple positive solutions to nonlinear Schrödinger systems set on the whole space, under mass constraints: the masses are prescribed, and the frequencies are unknown and will appear as Lagrange multipliers. Two parameter regimes are studied; in both cases, assuming that the relevant coupling is sufficiently small, we prove the existence of two positive solutions. The first one is a local minimizer, for which we establish the compactness of the minimizing sequences and also discuss the orbital stability of the associated standing waves. The second solution is obtained through a constrained mountain pass and a constrained linking, respectively.
NASA Astrophysics Data System (ADS)
Gervás, Pablo
2016-04-01
Most poetry-generation systems take opportunistic approaches in which algorithmic procedures explore the conceptual space defined by a given knowledge resource in search of solutions that might be aesthetically valuable. Aesthetic value is assumed to arise from compliance with a given poetic form - such as rhyme or metrical regularity - or from evidence of semantic relations between the words in the resulting poems that can be interpreted as rhetorical tropes - such as similes, analogies, or metaphors. This approach tends to fix a priori the aesthetic parameters of the results and imposes no constraints on the message to be conveyed. The present paper describes an attempt to initiate a shift in this balance, introducing means for constraining the output to certain topics and allowing a looser mechanism for constraining form. This goal arose from the need to produce poems for a themed collection commissioned for inclusion in a book. The solution adopted explores an approach to creativity where the goals are not solely aesthetic and where the results may be surprising in their poetic form. An existing computer poet, originally developed to produce poems in a given form but with no specific constraints on their content, is put to the task of producing a set of poems with explicit restrictions on content, while allowing for an exploration of poetic form. Alternative generation methods are devised to overcome the difficulties, and the various insights arising from these new methods and their impact on the set of resulting poems are discussed in terms of their potential contribution to better poetry-generation systems.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system whose solutions always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update law (or, equivalently, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solutions. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
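The flavor of such a method can be sketched with a projected gradient flow on a constraint manifold, using a pseudoinverse-based projector that remains defined when constraint gradients lose rank; this is a generic illustration, not the specific projection matrix proposed in the paper:

```python
# Feasible-point gradient flow on {x : g(x) = 0}: project the gradient of f
# onto the constraint's tangent space; the pseudoinverse keeps the projector
# defined even when constraint gradients become linearly dependent.
import numpy as np

def f(x):        return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
def grad_f(x):   return np.array([2 * (x[0] - 2.0), 2 * (x[1] - 1.0)])
def g(x):        return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])  # unit circle
def J(x):        return np.array([[2 * x[0], 2 * x[1]]])

x = np.array([0.0, 1.0])                      # feasible start
dt = 0.01
for _ in range(2000):
    Jx = J(x)
    P = np.eye(2) - np.linalg.pinv(Jx) @ Jx   # tangent-space projector
    x = x + dt * (-P @ grad_f(x))             # decrease f along the manifold
    x = x - np.linalg.pinv(J(x)) @ g(x)       # Newton pull-back onto g(x)=0

print("x:", np.round(x, 4), " g(x):", np.round(g(x), 6))
```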
Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.
Mulder, Joris
2014-02-01
Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
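For orientation, the posterior-probability-based quantity mentioned first can be estimated by simple Monte Carlo: the encompassing-prior Bayes factor of an inequality-constrained hypothesis is the ratio of posterior to prior mass satisfying the constraint. The normal prior and posterior below are illustrative stand-ins, not the specific priors analyzed in the paper.

```python
# Monte Carlo sketch of a Bayes factor for H1: theta1 > theta2 against the
# unconstrained Hu, as (posterior mass of constraint) / (prior mass).
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# illustrative unconstrained prior and (stylized) posterior for (theta1, theta2)
prior = rng.normal(0.0, 10.0, (n, 2))
post = rng.normal([0.8, 0.2], [0.3, 0.3], (n, 2))

fit = np.mean(post[:, 0] > post[:, 1])           # posterior mass of constraint
complexity = np.mean(prior[:, 0] > prior[:, 1])  # prior mass (about 0.5 here)
print("BF(H1 vs Hu) = fit / complexity =", fit / complexity)
```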
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Fang, Jing; Yuan, Jianping
2018-03-01
The existence of path-dependent dynamic singularities limits the volume of the available workspace of a free-floating space robot and induces enormous joint velocities when such singularities are met. To overcome this drawback, this paper presents an optimal joint trajectory planning method using the forward kinematics equations of the free-floating space robot, while joint motion laws are delineated by applying the concept of the reaction null space. Bézier curves, in conjunction with the null-space column vectors, are applied to describe the joint trajectories. Considering the forward kinematics equations of the free-floating space robot, the trajectory planning issue is consequently transformed into an optimization issue in which the control points constructing the Bézier curve are the design variables. A constrained differential evolution (DE) scheme with a premature-handling strategy is implemented to find the optimal solution of the design variables while specific objectives and imposed constraints are satisfied. Differing from traditional methods, we synthesize the null space and a specialized curve to provide a novel viewpoint on trajectory planning for free-floating space robots. Simulation results are presented for trajectory planning of a 7 degree-of-freedom (DOF) kinematically redundant manipulator mounted on a free-floating spacecraft and demonstrate the feasibility and effectiveness of the proposed method.
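A minimal stand-in for the search step, using SciPy's differential evolution over the interior control points of a cubic Bézier joint trajectory with a penalized joint-rate limit (the reaction null-space dynamics and the paper's premature-handling strategy are not modeled; all numbers are illustrative):

```python
# Differential evolution over Bezier control points with a penalty constraint.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0.0, 1.0, 100)

def bezier4(c, t):   # cubic Bezier with fixed endpoints 0 and 1
    p = np.array([0.0, c[0], c[1], 1.0])
    return ((1-t)**3*p[0] + 3*(1-t)**2*t*p[1] + 3*(1-t)*t**2*p[2] + t**3*p[3])

def cost(c):
    q = bezier4(c, t)
    qdot = np.gradient(q, t)
    # smoothness objective plus penalty for exceeding a toy joint-rate limit
    return (qdot**2).mean() + 1e3 * np.maximum(np.abs(qdot) - 1.8, 0).sum()

res = differential_evolution(cost, bounds=[(-1, 2), (-1, 2)], seed=8, tol=1e-8)
print("optimal interior control points:", np.round(res.x, 3))
```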
NASA Technical Reports Server (NTRS)
Kaufman, Yoram; Mattoo, Shana; Tanre, Didier; Kleidman, Richard; Lau, William K. M. (Technical Monitor)
2001-01-01
The ESSP3-CENA space mission (formerly PICASSO-CENA) will provide continuous global observations with a two-wavelength lidar. The attenuated backscattering coefficients measured by the lidar contain valuable information about the vertical distribution of aerosol particles and their sizes. However, the information cannot be mapped into unique aerosol physical properties: an infinite number of physical solutions with different attenuations through the atmosphere can reconstruct the same two-wavelength backscattered profile measured from space. Spectral radiance measured by MODIS simultaneously with the ESSP3 data can constrain the problem and resolve this ambiguity to a large extent. A sensitivity study shows that inversion of the integrated MODIS+ESSP3 data can derive the vertical profiles of the fine and coarse modes mixed in the same atmospheric column in the presence of moderate calibration uncertainties and electronic noise (approx. 10%). We shall present the sensitivity study and results from application of the technique to measurements in the SAFARI-2000 and SHADE experiments.
An optimization tool for satellite equipment layout
NASA Astrophysics Data System (ADS)
Qin, Zheng; Liang, Yan-gang; Zhou, Jian-ping
2018-01-01
Selection of a satellite equipment layout under performance constraints is a complex task that can be viewed as a constrained multi-objective optimization and multiple-criteria decision-making problem. The layout design of a satellite cabin involves locating the required equipment in a limited space while satisfying various behavioral constraints of the interior and exterior environments. The layout optimization of the satellite cabin in this paper accounts for the C.G. offset, the moments of inertia, and the space debris impact risk of the system, for which an impact risk index is developed to quantify the risk of the cabin coming into contact with space debris. In this paper, an optimization tool integrating CAD software with optimization algorithms is presented, developed to automatically find solutions for the three-dimensional layout of equipment in a satellite. The effectiveness of the tool is demonstrated by applying it to the layout optimization of a satellite platform.
Low-lying excited states by constrained DFT
NASA Astrophysics Data System (ADS)
Ramos, Pablo; Pavanello, Michele
2018-04-01
Exploiting the machinery of Constrained Density Functional Theory (CDFT), we propose a variational method for calculating low-lying excited states of molecular systems. We dub this method eXcited CDFT (XCDFT). Excited states are obtained by self-consistently constraining a user-defined population of electrons, Nc, in the virtual space of a reference set of occupied orbitals. By imposing this population to be Nc = 1.0, we computed the first excited state of 15 molecules from a test set. Our results show that XCDFT achieves an accuracy in the predicted excitation energy only slightly worse than linear-response time-dependent DFT (TDDFT), but without incurring the problems of variational collapse typical of the more commonly adopted ΔSCF method. In addition, we selected a few challenging processes to test the limits of applicability of XCDFT. We find that, in contrast to TDDFT, XCDFT is capable of reproducing energy surfaces featuring conical intersections (azobenzene and H3) with correct topology and correct overall energetics also away from the intersection. Venturing to condensed-phase systems, XCDFT reproduces the TDDFT solvatochromic shift of benzaldehyde when it is embedded in a cluster of water molecules. Thus, we find XCDFT to be a competitive method among single-reference methods for computations of excited states in terms of time to solution, rate of convergence, and accuracy of the result.
NASA Astrophysics Data System (ADS)
Adavi, Zohre; Mashhadi-Hossainali, Masoud
2015-04-01
Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, which is due to atmospheric phenomena above the surface of the earth, depends on both space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. By analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals, inversion techniques are used to model water vapor in this approach. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used to compute a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques. This method benefits from the advantages of both iterative and direct techniques. Moreover, it is independent of initial values. Based on this property and using an appropriate model resolution, the number of model elements that are not constrained by GPS measurements is first minimized; water vapor density is then estimated only at the voxels that are constrained by these measurements. In other words, no additional constraints are added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
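The core regularized solve can be sketched with SciPy's LSQR, whose damp argument implements standard-form Tikhonov regularization; the sparse ray-path geometry below is a random stand-in for real slant-path observations:

```python
# Tikhonov-damped LSQR for a tomographic system: rows are rays, columns are
# voxels, entries are path lengths. Geometry and noise level are invented.
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(9)
n_rays, n_voxels = 300, 500
A = sprandom(n_rays, n_voxels, density=0.05, random_state=9).tocsr()
x_true = rng.uniform(0.0, 10.0, n_voxels)        # water vapor density field
b = A @ x_true + 0.01 * rng.normal(size=n_rays)  # slant delays + noise

x_est = lsqr(A, b, damp=0.1)[0]                  # damp = Tikhonov parameter
hit = np.asarray(np.abs(A).sum(axis=0)).ravel() > 0   # voxels crossed by rays
print("RMS error over ray-constrained voxels:",
      np.sqrt(np.mean((x_est[hit] - x_true[hit]) ** 2)))
```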
A new class of asymptotically non-chaotic vacuum singularities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klinger, Paul, E-mail: paul.klinger@univie.ac.at
2015-12-15
The BKL conjecture, stated in the 1960s and early 1970s by Belinski, Khalatnikov and Lifschitz, proposes a detailed description of the generic asymptotic dynamics of spacetimes as they approach a spacelike singularity. It predicts complicated chaotic behaviour in the generic case, but simpler non-chaotic behaviour in cases with symmetry assumptions or certain kinds of matter fields. Here we construct a new class of four-dimensional vacuum spacetimes containing spacelike singularities which show non-chaotic behaviour. In contrast with previous constructions, no symmetry assumptions are made. Rather, the metric is decomposed in Iwasawa variables and conditions on the asymptotic evolution of some of them are imposed. The constructed solutions contain five free functions of all space coordinates, two of which are constrained by inequalities. We investigate continuous and discrete isometries and compare the solutions to previous constructions. Finally, we give the asymptotic behaviour of the metric components and curvature.
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
Band connectivity for topological quantum chemistry: Band structures as a graph theory problem
NASA Astrophysics Data System (ADS)
Bradlyn, Barry; Elcoro, L.; Vergniory, M. G.; Cano, Jennifer; Wang, Zhijun; Felser, C.; Aroyo, M. I.; Bernevig, B. Andrei
2018-01-01
The conventional theory of solids is well suited to describing band structures locally near isolated points in momentum space, but struggles to capture the full, global picture necessary for understanding topological phenomena. In part of a recent paper [B. Bradlyn et al., Nature (London) 547, 298 (2017), 10.1038/nature23268], we introduced a way to overcome this difficulty by formulating the problem of sewing together many disconnected local k·p band structures across the Brillouin zone in terms of graph theory. In this paper, we give the details of our full theoretical construction. We show that crystal symmetries strongly constrain the allowed connectivities of energy bands, and we employ graph-theoretic techniques such as graph connectivity to enumerate all the solutions to these constraints. The tools of graph theory allow us to identify disconnected groups of bands in these solutions, and so identify topologically distinct insulating phases.
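As a toy illustration of the graph-theoretic step (with invented irrep labels and compatibility edges, not a real space group's data), connected components of the connectivity graph identify groups of bands that can be disconnected from each other:

```python
# Nodes are (k-point, irrep) pieces of bands; edges are symmetry-allowed
# connections; connected components are candidate isolated band groupings.
import networkx as nx

G = nx.Graph()
# hypothetical little-group irreps at high-symmetry points Gamma, X, M
G.add_nodes_from(["G1", "G2", "X1", "X2", "M1", "M2"])
# compatibility-relation edges along the lines Gamma-X and X-M
G.add_edges_from([("G1", "X1"), ("X1", "M1"),     # one isolated band group
                  ("G2", "X2"), ("X2", "M2")])    # a second, disconnected one

groups = list(nx.connected_components(G))
print(f"{len(groups)} disconnected band groupings:", groups)
```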
Advanced Design Methodology for Robust Aircraft Sizing and Synthesis
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.
1997-01-01
Contract efforts are focused on refining the Robust Design Methodology for Conceptual Aircraft Design. Robust Design Simulation (RDS) was developed earlier as a potential solution to the need to perform rapid trade-offs while accounting for risk, conflict, and uncertainty. The core of the simulation revolved around Response Surface Equations as approximations of bounded design spaces. An ongoing investigation is concerned with the advantages of using Neural Networks in conceptual design. Thought was also given to the development of a systematic way to choose or create a baseline configuration based on specific mission requirements. An expert system was developed that selects aerodynamics, performance, and weights models from several configurations based on the user's mission requirements for a subsonic civil transport. The research has also resulted in a step-by-step illustration of how to use the AMV method for distribution generation and the search for robust design solutions to multivariate constrained problems.
Pareto joint inversion of 2D magnetotelluric and gravity data
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description with a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a method of stochastic global optimization, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages to the proposed treatment of joint inversion problems. First of all, the Pareto scheme eliminates cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data that are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where the interesting density distributions are relatively shallow and resistivity changes are related to deeper parts. Such conditions are well suited to joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests on real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13.
Stress-Constrained Structural Topology Optimization with Design-Dependent Loads
NASA Astrophysics Data System (ADS)
Lee, Edmund
Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.
Homotopy approach to optimal, linear quadratic, fixed architecture compensation
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1991-01-01
Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian problem are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point, and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitations of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy with an example of an optimal decentralized compensator.
Magnetohydrodynamic Models of Molecular Tornadoes
NASA Astrophysics Data System (ADS)
Au, Kelvin; Fiege, Jason D.
2017-07-01
Recent observations near the Galactic Center (GC) have found several molecular filaments displaying striking helically wound morphology that are collectively known as molecular tornadoes. We investigate the equilibrium structure of these molecular tornadoes by formulating a magnetohydrodynamic model of a rotating, helically magnetized filament. A special analytical solution is derived where centrifugal forces balance exactly with toroidal magnetic stress. From the physics of torsional Alfvén waves we derive a constraint that links the toroidal flux-to-mass ratio and the pitch angle of the helical field to the rotation laws, which we find to be an important component in describing the molecular tornado structure. The models are compared to the Ostriker solution for isothermal, nonmagnetic, nonrotating filaments. We find that neither the analytic model nor the Alfvén wave model suffer from the unphysical density inversions noted by other authors. A Monte Carlo exploration of our parameter space is constrained by observational measurements of the Pigtail Molecular Cloud, the Double Helix Nebula, and the GC Molecular Tornado. Observable properties such as the velocity dispersion, filament radius, linear mass, and surface pressure can be used to derive three dimensionless constraints for our dimensionless models of these three objects. A virial analysis of these constrained models is studied for these three molecular tornadoes. We find that self-gravity is relatively unimportant, whereas magnetic fields and external pressure play a dominant role in the confinement and equilibrium radial structure of these objects.
NASA Technical Reports Server (NTRS)
Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray
2004-01-01
With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.
NASA Technical Reports Server (NTRS)
Madden, Michael G.; Wyrick, Roberta; O'Neill, Dale E.
2005-01-01
Space Shuttle processing is a complicated and highly variable project. The planning and scheduling problem, categorized as a Resource-Constrained Stochastic Project Scheduling Problem (RC-SPSP), has a great deal of variability in the Orbiter Processing Facility (OPF) process flow from one flight to the next. Simulation modeling is a useful tool for estimating the makespan of the overall process. However, simulation requires a model to be developed, which is itself a labor- and time-consuming effort. With such a dynamic process, the model would often be out of synchronization with the actual process, limiting the applicability of the simulation answers to the actual estimation problem. The basis of our solution is the integration of TEAMS model-enabling software with our existing scheduling software. This paper explains the approach used to develop an auto-generated simulation model from planning and scheduling efforts and available data.
NASA Technical Reports Server (NTRS)
Demerdash, N. A.; Wang, R.
1990-01-01
This paper describes the results of applying three well-known 3D magnetic vector potential (MVP) based finite element formulations to the computation of magnetostatic fields in electrical devices. The three methods were identically applied to three practical examples, the first of which contains only one medium (free space), while the second and third examples contained a mix of free space and iron. The first of these methods is based on the unconstrained curl-curl of the MVP, while the second and third methods are predicated upon constraining the divergence of the MVP to zero (the Coulomb gauge). It was found that the latter two methods cease to give useful and meaningful results when the global solution region contains a mix of media of high and low permeabilities. Furthermore, it was found that their results do not achieve the intended zero constraint on the divergence of the MVP.
Modeling chain folding in protein-constrained circular DNA.
Martino, J A; Olson, W K
1998-01-01
An efficient method for sampling equilibrium configurations of DNA chains binding one or more DNA-bending proteins is presented. The technique is applied to obtain the tertiary structures of minimal bending energy for a selection of dinucleosomal minichromosomes that differ in degree of protein-DNA interaction, protein spacing along the DNA chain contour, and ring size. The protein-bound portions of the DNA chains are represented by tight, left-handed supercoils of fixed geometry. The protein-free regions are modeled individually as elastic rods. For each random spatial arrangement of the two nucleosomes assumed during a stochastic search for the global minimum, the paths of the flexible connecting DNA segments are determined through a numerical solution of the equations of equilibrium for torsionally relaxed elastic rods. The minimal energy forms reveal how protein binding and spacing and plasmid size differentially affect folding and offer new insights into experimental minichromosome systems. PMID:9591675
H2, fixed architecture, control design for large scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1990-01-01
The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
Space Weathering of Itokawa Particles: Implications for Regolith Evolution
NASA Technical Reports Server (NTRS)
Berger, Eve L.; Keller, Lindsay P.
2015-01-01
Space weathering processes such as solar wind irradiation and micrometeorite impacts are known to alter the properties of regolith materials exposed on airless bodies. The rates of space weathering processes, however, are poorly constrained for asteroid regoliths, with recent estimates ranging over many orders of magnitude. The return of surface samples by JAXA's Hayabusa mission to asteroid 25143 Itokawa, and their laboratory analysis, provides "ground truth" to anchor the timescales for space weathering processes on airless bodies. Here, we use the effects of solar wind irradiation and the accumulation of solar flare tracks recorded in Itokawa grains to constrain the rates of space weathering and yield information about regolith dynamics on these timescales.
Self-interacting inelastic dark matter: a viable solution to the small scale structure problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blennow, Mattias; Clementz, Stefan; Herrero-Garcia, Juan, E-mail: emb@kth.se, E-mail: scl@kth.se, E-mail: juan.herrero-garcia@adelaide.edu.au
2017-03-01
Self-interacting dark matter has been proposed as a solution to the small-scale structure problems, such as the observed flat cores in dwarf and low surface brightness galaxies. If scattering takes place through light mediators, the scattering cross section relevant to solve these problems may fall into the non-perturbative regime, leading to a non-trivial velocity dependence, which allows compatibility with limits stemming from cluster-size objects. However, these models are strongly constrained by different observations, in particular from the requirements that the decay of the light mediator is sufficiently rapid (before Big Bang Nucleosynthesis) and from direct detection. A natural solution to reconcile both requirements is inelastic endothermic interactions, such that scatterings in direct detection experiments are suppressed or even kinematically forbidden if the mass splitting between the two states is sufficiently large. Using an exact solution when numerically solving the Schrödinger equation, we study such scenarios and find regions in the parameter space of dark matter and mediator masses, and the mass splitting of the states, where the small-scale structure problems can be solved, the dark matter has the correct relic abundance and direct detection limits can be evaded.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is then performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role since it is needed for static, eigenvalue, and dynamic analyses alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver, PVSOLVE, into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
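To make the conversion concrete, here is a minimal sketch of the exterior-penalty idea referred to above: an inequality-constrained problem is replaced by a sequence of unconstrained minimizations with a growing penalty weight. The toy objective, constraint, and penalty schedule are illustrative assumptions, not taken from the report.

    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

    def constraint(x):            # g(x) <= 0
        return x[0] + x[1] - 2.0

    x = np.array([0.0, 0.0])
    for rho in [1.0, 10.0, 100.0, 1000.0]:   # increasing penalty weight
        penalized = lambda x, r=rho: objective(x) + r * max(0.0, constraint(x))**2
        x = minimize(penalized, x, method="Nelder-Mead").x
    print(x)   # approaches the constrained minimizer (1.5, 0.5) as rho grows

Each unconstrained solve is warm-started from the previous one, so the iterates track the constrained optimum as the penalty tightens.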
NASA Technical Reports Server (NTRS)
Morgenthaler, George W.; Glover, Fred W.; Woodcock, Gordon R.; Laguna, Manuel
2005-01-01
The 1/14/04 USA Space Exploration/Utilization Initiative invites all Space-faring Nations, all Space User Groups in Science, Space Entrepreneuring, Advocates of Robotic and Human Space Exploration, Space Tourism and Colonization Promoters, etc., to join an International Space Partnership. With more Space-faring Nations and Space User Groups each year, such a Partnership would require Multi-year (35 yr.-45 yr.) Space Mission Planning. With each Nation and Space User Group demanding priority for its missions, one needs a methodology for objectively selecting the best mission sequences to be added annually to this 45 yr. Moving Space Mission Plan. How can this be done? Planners have suggested building a Reusable, Sustainable, Space Transportation Infrastructure (RSSTI) to increase Mission synergism, reduce cost, and increase scientific and societal returns from this Space Initiative. Morgenthaler and Woodcock presented a Paper at the 55th IAC, Vancouver B.C., Canada, entitled Constrained Optimization Models For Optimizing Multi-Year Space Programs. This Paper showed that a Binary Integer Programming (BIP) Constrained Optimization Model combined with the NASA ATLAS Cost and Space System Operational Parameter Estimating Model has the theoretical capability to solve such problems. IAA Commission III, Space Technology and Space System Development, in its ACADEMY DAY meeting at Vancouver, requested that the Authors and NASA experts find several Space Exploration Architectures (SEAs), apply the combined BIP/ATLAS Models, and report the results at the 56th Fukuoka IAC. While the mathematical Model is in Ref. [2], this Paper presents the application saga of that effort.
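A toy-scale illustration of the kind of BIP selection problem described here: enumerate binary mission plans and keep the plan with the best scientific return under a budget cap. All mission data are invented, and the real BIP/ATLAS model involves multi-year sequencing at far larger dimensions, solved with a proper integer-programming solver rather than enumeration.

    from itertools import product

    returns = [60, 40, 35, 25, 20]          # scientific value per mission (invented)
    costs   = [5.0, 3.5, 3.0, 2.0, 1.5]     # cost per mission, $B (invented)
    budget  = 8.0                           # annual budget cap

    best_value, best_plan = -1, None
    for plan in product([0, 1], repeat=len(returns)):   # enumerate all 2^n plans
        cost  = sum(c * x for c, x in zip(costs, plan))
        value = sum(r * x for r, x in zip(returns, plan))
        if cost <= budget and value > best_value:
            best_value, best_plan = value, plan
    print(best_plan, best_value)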
Constrained Least Squares Estimators of Oblique Common Factors.
ERIC Educational Resources Information Center
McDonald, Roderick P.
1981-01-01
An expression is given for weighted least squares estimators of oblique common factors of factor analyses, constrained to have the same covariance matrix as the factors they estimate. A proof of the uniqueness of the solution is given. (Author/JKS)
Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach
NASA Astrophysics Data System (ADS)
Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto
2017-12-01
In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space.
Sparsest representations and approximations of an underdetermined linear system
NASA Astrophysics Data System (ADS)
Tardivel, Patrick J. C.; Servien, Rémi; Concordet, Didier
2018-05-01
In an underdetermined linear system of equations, constrained l1 minimization methods such as the basis pursuit or the lasso are often used to recover one of the sparsest representations or approximations of the system. The null space property is a sufficient and ‘almost’ necessary condition for recovering a sparsest representation with the basis pursuit. Unfortunately, this property cannot be easily checked. On the other hand, the mutual coherence is an easily checkable sufficient condition ensuring that the basis pursuit recovers one of the sparsest representations. Because the mutual coherence condition is too strong, it is hardly met in practice. Even if one of these conditions holds, to our knowledge, there is no theoretical result ensuring that the lasso solution is one of the sparsest approximations. In this article, we study a novel constrained problem that gives, without any condition, one of the sparsest representations or approximations. To solve this problem, we provide a numerical method and we prove its convergence. Numerical experiments show that this approach gives better results than both the basis pursuit problem and the reweighted l1 minimization problem.
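For concreteness, a small sketch (on assumed random data) of how the basis pursuit is recast as a linear program: x is split into nonnegative parts u and v, so min ||x||_1 subject to Ax = b becomes min Σ(u + v) subject to A(u - v) = b.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    m, n = 10, 30
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n); x_true[[3, 17]] = [1.5, -2.0]   # sparse signal
    b = A @ x_true

    c = np.ones(2 * n)                     # minimize sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])              # enforce A(u - v) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    x_hat = res.x[:n] - res.x[n:]
    print(np.round(x_hat[[3, 17]], 3))     # typically recovers the sparse entries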
Astrophysical Model Selection in Gravitational Wave Astronomy
NASA Technical Reports Server (NTRS)
Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.
2012-01-01
Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
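A minimal sketch of the conventional SA loop described above: a random start, Metropolis acceptance of worse moves with probability set by a temperature, and a neighborhood that shrinks as annealing proceeds. The objective and cooling schedule are illustrative; the recursive-branching variant would run many such walkers over multiple branched regions.

    import math, random

    def objective(x):
        return x[0]**2 + x[1]**2 + 10 * math.sin(3 * x[0])   # multimodal toy function

    random.seed(1)
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]       # random starting configuration
    f, T, radius = objective(x), 10.0, 5.0
    for step in range(5000):
        cand = [xi + random.uniform(-radius, radius) for xi in x]
        fc = objective(cand)
        if fc < f or random.random() < math.exp((f - fc) / T):
            x, f = cand, fc               # accept better moves, or worse ones w.p. e^(-delta/T)
        T *= 0.999                        # cool the temperature
        radius *= 0.999                   # shrink the search neighborhood
    print(x, f)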
Low-lying excited states by constrained DFT.
Ramos, Pablo; Pavanello, Michele
2018-04-14
Exploiting the machinery of Constrained Density Functional Theory (CDFT), we propose a variational method for calculating low-lying excited states of molecular systems. We dub this method eXcited CDFT (XCDFT). Excited states are obtained by self-consistently constraining a user-defined population of electrons, Nc, in the virtual space of a reference set of occupied orbitals. By imposing this population to be Nc = 1.0, we computed the first excited state of 15 molecules from a test set. Our results show that XCDFT achieves an accuracy in the predicted excitation energy only slightly worse than linear-response time-dependent DFT (TDDFT), but without incurring the problems of variational collapse typical of the more commonly adopted ΔSCF method. In addition, we selected a few challenging processes to test the limits of applicability of XCDFT. We find that, in contrast to TDDFT, XCDFT is capable of reproducing energy surfaces featuring conical intersections (azobenzene and H3) with correct topology and correct overall energetics also away from the intersection. Venturing to condensed-phase systems, XCDFT reproduces the TDDFT solvatochromic shift of benzaldehyde when it is embedded in a cluster of water molecules. Thus, we find XCDFT to be a competitive method among single-reference methods for computations of excited states in terms of time to solution, rate of convergence, and accuracy of the result.
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
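As a sketch of the conversion described, consider min x1² + x2² subject to x1 + x2 = 1: the Lagrangian stationarity conditions and the constraint are folded into one nonnegative residual, which a bare-bones GA then minimizes without constraints. The GA settings and the toy problem are illustrative assumptions, not those of the paper.

    import random

    def residual(z):
        x1, x2, lam = z
        r1 = 2 * x1 + lam          # dL/dx1 = 0
        r2 = 2 * x2 + lam          # dL/dx2 = 0
        r3 = x1 + x2 - 1           # constraint h(x) = 0
        return r1 * r1 + r2 * r2 + r3 * r3

    random.seed(0)
    pop = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(40)]
    for gen in range(200):
        pop.sort(key=residual)
        elite = pop[:10]                                   # keep the fittest
        pop = elite + [[p + random.gauss(0, 0.1) for p in random.choice(elite)]
                       for _ in range(30)]                 # mutated offspring
    pop.sort(key=residual)
    print([round(v, 3) for v in pop[0]])   # approaches x1 = x2 = 0.5, lambda = -1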
Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
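For orientation, a plain nonnegativity-constrained alternating least squares sketch of the kind the invention modifies: the data matrix is alternately regressed on each factor with NNLS, the standard physical constraint in spectral unmixing. This shows the conventional constrained ALS baseline only; the patented bias-offsetting penalty is not reproduced here, and the data are synthetic.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)
    C_true = rng.random((50, 3)); S_true = rng.random((40, 3))
    D = C_true @ S_true.T + 0.01 * rng.standard_normal((50, 40))   # noisy data

    C = rng.random((50, 3))                       # random initial factor
    for _ in range(50):
        S = np.array([nnls(C, D[:, j])[0] for j in range(D.shape[1])])  # spectra
        C = np.array([nnls(S, D[i, :])[0] for i in range(D.shape[0])])  # concentrations
    print(np.linalg.norm(D - C @ S.T))            # residual drops toward noise level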
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
NASA Astrophysics Data System (ADS)
Chen, Qiujie; Chen, Wu; Shen, Yunzhong; Zhang, Xingfu; Hsu, Houze
2016-04-01
The existing unconstrained Gravity Recovery and Climate Experiment (GRACE) monthly solutions, i.e., CSR RL05 from the Center for Space Research (CSR), GFZ RL05a from GeoForschungsZentrum (GFZ), JPL RL05 from the Jet Propulsion Laboratory (JPL), DMT-1 from the Delft Institute of Earth Observation and Space Systems (DEOS), AIUB from Bern University, and Tongji-GRACE01 as well as Tongji-GRACE02 from Tongji University, are dominated by correlated noise (such as north-south stripe errors) in the high-degree coefficients. To suppress the correlated noise of the unconstrained GRACE solutions, one typical option is to use post-processing filters such as decorrelation filtering and Gaussian smoothing, which are quite effective at reducing the noise and convenient to implement. Unlike these post-processing methods, the CNES/GRGS monthly GRACE solutions from the Centre National d'Etudes Spatiales (CNES) were developed using regularization with the Kaula rule, whose correlated noise is reduced to such an extent that no decorrelation filtering is required. Previous studies demonstrated that the north-south stripes in the GRACE solutions are due to the poor sensitivity of gravity variation in the east-west direction. In other words, the longitudinal sampling of the GRACE mission is very sparse while the latitudinal sampling is quite dense, so the recoverability of the longitudinal gravity variation is poor or unstable, leading to ill-conditioned monthly GRACE solutions. To stabilize the monthly solutions, we constructed regularization matrices by minimizing the difference between the longitudinal and latitudinal gravity variations and applied them to derive a time series of regularized GRACE monthly solutions, named RegTongji RL01, for the period Jan. 2003 to Aug. 2011. The signal powers and noise level of RegTongji RL01 were analyzed, showing that: (1) no smoothing or decorrelation filtering is required for RegTongji RL01; (2) the signal powers of RegTongji RL01 are clearly stronger than those of the filtered solutions while the noise levels of the regularized and filtered solutions are consistent, indicating that RegTongji RL01 has a higher signal-to-noise ratio.
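A generic sketch of the Tikhonov-style regularization used to stabilize such ill-conditioned estimates: a penalty matrix L is added to the normal equations, x = argmin ||Ax - b||² + α||Lx||². The actual GRACE regularization matrix, built from longitudinal-minus-latitudinal variation differences, is replaced here by a simple first-difference smoothness operator, and the data are synthetic.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 50
    A = rng.standard_normal((60, n)) @ np.diag(1.0 / (1 + np.arange(n))**2)  # ill-conditioned
    x_true = np.sin(np.linspace(0, 3, n))
    b = A @ x_true + 1e-4 * rng.standard_normal(60)

    L = (np.eye(n) - np.eye(n, k=1))[:-1]     # first-difference smoothness penalty
    alpha = 1e-6                              # regularization weight (tunable)
    x_reg = np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ b)
    print(np.linalg.norm(x_reg - x_true))     # regularized misfit

In the GRACE case the weight and penalty structure are chosen so that the stripe-prone (longitudinal) components are damped without attenuating the well-observed latitudinal signal.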
Particle Swarm Optimization Toolbox
NASA Technical Reports Server (NTRS)
Grant, Michael J.
2010-01-01
The Particle Swarm Optimization Toolbox is a library of evolutionary optimization tools developed in the MATLAB environment. The algorithms contained in the library include a genetic algorithm (GA), a single-objective particle swarm optimizer (SOPSO), and a multi-objective particle swarm optimizer (MOPSO). Development focused on both the SOPSO and MOPSO. A GA was included mainly for comparison purposes, and the particle swarm optimizers appeared to perform better for a wide variety of optimization problems. All algorithms are capable of performing unconstrained and constrained optimization. The particle swarm optimizers are capable of performing single and multi-objective optimization. The SOPSO and MOPSO algorithms are based on swarming theory and bird-flocking patterns to search the trade space for the optimal solution or optimal trade in competing objectives. The MOPSO generates Pareto fronts for objectives that are in competition. A GA, based on Darwinian evolutionary theory, is also included in the library. The GA consists of individuals that form a population in the design space. The population mates to form offspring at new locations in the design space. These offspring contain traits from both of the parents. The algorithm is based on this combination of traits from parents in the hope of producing a solution improved over either of the original parents. As the algorithm progresses, individuals that hold these optimal traits will emerge as the optimal solutions. Due to the generic design of all optimization algorithms, each algorithm interfaces with a user-supplied objective function. This function serves as a "black box" to the optimizers, whose only purpose is to evaluate solutions provided by the optimizers. Hence, the user-supplied function can be a numerical simulation, an analytical function, etc., since the specific detail of this function is of no concern to the optimizer. These algorithms were originally developed to support entry trajectory and guidance design for the Mars Science Laboratory mission but may be applied to any optimization problem.
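A compact single-objective PSO sketch consistent with the description above: particles track personal and global bests and treat the user-supplied objective as a black box. The inertia and acceleration coefficients are common textbook defaults, not necessarily the toolbox's, and the objective is a placeholder.

    import random

    def objective(x):                       # black-box function to minimize
        return sum(xi * xi for xi in x)

    random.seed(4)
    dim, n_particles = 5, 20
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in X]            # personal bests
    gbest = min(X, key=objective)           # global best

    for it in range(200):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (0.7 * V[i][d]                       # inertia
                           + 1.5 * r1 * (pbest[i][d] - X[i][d])  # cognitive pull
                           + 1.5 * r2 * (gbest[d] - X[i][d]))    # social pull
                X[i][d] += V[i][d]
            if objective(X[i]) < objective(pbest[i]):
                pbest[i] = list(X[i])
        gbest = min(pbest, key=objective)
    print(objective(gbest))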
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space restricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element methods. Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose time stepsizes sufficiently small that the numerical approximation captures correctly the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
Constraining the noncommutative spectral action via astrophysical observations.
Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi
2010-09-03
The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.
Magnetohydrodynamic Models of Molecular Tornadoes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Au, Kelvin; Fiege, Jason D., E-mail: fiege@physics.umanitoba.ca
Recent observations near the Galactic Center (GC) have found several molecular filaments displaying striking helically wound morphology that are collectively known as molecular tornadoes. We investigate the equilibrium structure of these molecular tornadoes by formulating a magnetohydrodynamic model of a rotating, helically magnetized filament. A special analytical solution is derived where centrifugal forces balance exactly with toroidal magnetic stress. From the physics of torsional Alfvén waves we derive a constraint that links the toroidal flux-to-mass ratio and the pitch angle of the helical field to the rotation laws, which we find to be an important component in describing the molecular tornado structure. The models are compared to the Ostriker solution for isothermal, nonmagnetic, nonrotating filaments. We find that neither the analytic model nor the Alfvén wave model suffer from the unphysical density inversions noted by other authors. A Monte Carlo exploration of our parameter space is constrained by observational measurements of the Pigtail Molecular Cloud, the Double Helix Nebula, and the GC Molecular Tornado. Observable properties such as the velocity dispersion, filament radius, linear mass, and surface pressure can be used to derive three dimensionless constraints for our dimensionless models of these three objects. A virial analysis of these constrained models is studied for these three molecular tornadoes. We find that self-gravity is relatively unimportant, whereas magnetic fields and external pressure play a dominant role in the confinement and equilibrium radial structure of these objects.
NASA Astrophysics Data System (ADS)
Pan, Xiao-Yin; Slamet, Marlina; Sahni, Viraht
2010-04-01
We extend our prior work on the construction of variational wave functions ψ that are functionals of functions χ: ψ = ψ[χ], rather than simply being functions. In this manner, the space of variations is expanded over those of traditional variational wave functions. In this article we perform the constrained search over the functions χ chosen such that the functional ψ[χ] satisfies simultaneously the constraints of normalization and the exact expectation value of an arbitrary single- or two-particle Hermitian operator, while also leading to a rigorous upper bound to the energy. As such the wave function functional is accurate not only in the region of space in which the principal contributions to the energy arise but also in the other region of the space represented by the Hermitian operator. To demonstrate the efficacy of these ideas, we apply such a constrained search to the ground state of the negative ion of atomic hydrogen H-, the helium atom He, and its positive ions Li+ and Be2+. The operators W whose expectations are obtained exactly are the sums of single-particle operators W = Σ_i r_i^n (n = -2, -1, 1, 2), W = Σ_i δ(r_i), and W = -(1/2) Σ_i ∇_i^2, and the two-particle operators W = Σ u^n (n = -2, -1, 1, 2), where u = |r_i - r_j|. Comparisons with the method of Lagrangian multipliers and with other constructions of wave-function functionals are made. Finally, we present further insights into the construction of wave-function functionals by studying a previously proposed construction of functionals ψ[χ] that lead to the exact expectation of arbitrary Hermitian operators. We discover that, analogous to the solutions of the Schrödinger equation, there exist ψ[χ] that are unphysical in that they lead to singular values for the expectations. We also explain the origin of the singularity.
Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks
Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng
2014-01-01
Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing effective location-aware applications such as these. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of the state estimate under a linear constraint, which exploits this additional information, is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity, as presented in this paper, outperforms the first-order solution, and that the constrained IMM-EKF obtains superior estimation over the IMM-EKF without constraints. Another Brownian-motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraints. PMID:25390408
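The linear-constraint case admits a closed-form projection of the unconstrained estimate, which the following sketch illustrates with invented numbers: the standard covariance-weighted projection x_c = x - P D^T (D P D^T)^{-1} (D x - d) for a constraint D x = d. In the paper's setting, nonlinear geometric constraints would first be Taylor-linearized into this form.

    import numpy as np

    x = np.array([1.2, 2.9, 0.4])            # unconstrained state estimate
    P = np.diag([0.5, 0.8, 0.3])             # its covariance
    D = np.array([[1.0, 1.0, 0.0]])          # linear constraint: x0 + x1 = 4
    d = np.array([4.0])

    K = P @ D.T @ np.linalg.inv(D @ P @ D.T) # projection gain
    x_c = x - K @ (D @ x - d)                # projected (constrained) estimate
    P_c = P - K @ D @ P                      # reduced covariance
    print(x_c, D @ x_c)                      # constraint now holds exactly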
Time and Energy, Exploring Trajectory Options Between Nodes in Earth-Moon Space
NASA Technical Reports Server (NTRS)
Martinez, Roland; Condon, Gerald; Williams, Jacob
2012-01-01
The Global Exploration Roadmap (GER) was released by the International Space Exploration Coordination Group (ISECG) in September of 2011. It describes mission scenarios that begin with the International Space Station and utilize it to demonstrate necessary technologies and capabilities prior to deployment of systems into Earth-Moon space. Deployment of these systems is an intermediate step in preparation for more complex deep space missions to near-Earth asteroids and eventually Mars. In one of the scenarios described in the GER, "Asteroid Next", there are activities that occur in Earth-Moon space at one of the Earth-Moon Lagrange (libration) points. In this regard, the authors examine the possible role of an intermediate staging point in an effort to illuminate potential trajectory options for conducting missions in Earth-Moon space of increasing duration, ultimately leading to deep space missions. This paper will describe several options for transits between Low Earth Orbit (LEO) and the libration points, transits between libration points, and transits between the libration points and interplanetary trajectories. The solution space provided will be constrained by selected orbital mechanics design techniques and physical characteristics of hardware to be used in both crewed missions and uncrewed missions. The relationships between time and energy required to transfer hardware between these locations will provide a better understanding of the potential trade-offs mission planners could consider in the development of capabilities, individual missions, and mission series in the context of the ISECG GER.
Performance Analysis of Constrained Loosely Coupled GPS/INS Integration Solutions
Falco, Gianluca; Einicke, Garry A.; Malos, John T.; Dovis, Fabio
2012-01-01
The paper investigates approaches for loosely coupled GPS/INS integration. Error performance is calculated using a reference trajectory. A performance improvement can be obtained by exploiting additional map information (for example, a road boundary). A constrained solution has been developed and its performance compared with an unconstrained one. The case of GPS outages is also investigated showing how a Kalman filter that operates on the last received GPS position and velocity measurements provides a performance benefit. Results are obtained by means of simulation studies and real data. PMID:23202241
Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model
NASA Astrophysics Data System (ADS)
Li, Jun
In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model for investors, taking into account three criteria: return, risk and liquidity. The cardinality constraint, the buy-in threshold constraint and the round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromised solution for the proposed constrained multiobjective portfolio selection model.
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints comprises equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution and is written in ANSI Standard FORTRAN 77.
Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin
2007-02-01
This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fraction of each web site that is successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found--even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
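As a loose stand-in for the LA team's controlled random walk (not the authors' automaton updates), the sketch below discretizes the polling capacity into units and repeatedly shifts one unit between two randomly chosen pages, keeping moves that improve a noisy reward. The page-change rates and the reward model are invented for illustration.

    import random

    random.seed(5)
    rates = [0.9, 0.5, 0.2, 0.05]            # unknown page-change rates (invented)
    units = 20                                # total polling capacity, discretized
    alloc = [units // len(rates)] * len(rates)

    def reward(alloc):                        # noisy detected-change payoff
        return sum(min(a, 10) * r + random.gauss(0, 0.05)
                   for a, r in zip(alloc, rates))

    for step in range(3000):
        i, j = random.sample(range(len(alloc)), 2)
        if alloc[i] == 0:
            continue
        trial = list(alloc); trial[i] -= 1; trial[j] += 1
        if reward(trial) > reward(alloc):     # informed guess toward the optimum
            alloc = trial
    print(alloc)    # capacity drifts toward the fast-changing pages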
Global Assessment of New GRACE Mascons Solutions for Hydrologic Applications
NASA Astrophysics Data System (ADS)
Save, H.; Zhang, Z.; Scanlon, B. R.; Wiese, D. N.; Landerer, F. W.; Long, D.; Longuevergne, L.; Chen, J.
2016-12-01
Advances in GRACE (Gravity Recovery and Climate Experiment) satellite data processing using new mass concentration (mascon) solutions have greatly increased the spatial localization and amplitude of recovered total Terrestrial Water Storage (TWS) signals; however, limited testing has been conducted on land hydrologic applications. In this study we compared TWS anomalies from (1) the Center for Space Research mascons (CSR-M) solution, (2) the NASA JPL mascon (JPL-M) solution, and (3) a CSR gridded spherical harmonic rescaled (sf) solution from Tellus (CSRT-GSH.sf) in 176 river basins covering 80% of the global land area. There is good correspondence in TWS anomalies from mascons (CSR-M and JPL-M) and SH solutions based on high correlations between time series (rank correlation coefficients mostly >0.9). The long-term trends in basin TWS anomalies represent a relatively small signal (up to ±20 mm/yr), with differences among GRACE solutions and inter-basin variability increasing with decreasing basin size. Long-term TWS declines are greatest in (semi)arid and irrigated basins. Annual and semiannual signals have much larger amplitudes (up to ±250 mm). There is generally good agreement among GRACE solutions, increasing confidence in seasonal fluctuations from GRACE data. Rescaling spherical harmonics to restore lost signal increases agreement with mascon solutions for long-term trends and seasonal fluctuations. There are many advantages to using GRACE mascon solutions relative to SH solutions, such as reduced leakage from land to ocean increasing signal amplitude, and constraining results by applying geophysical data during processing with little or no post-processing requirement, making mascons more user friendly for non-geodetic users. This inter-comparison of various GRACE solutions should allow hydrologists to better select suitable GRACE products for hydrologic applications.
Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement
Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...
2013-12-10
A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second-order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
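The conservation-constrained least-squares subproblem mentioned here can be sketched with a KKT system: fit interpolation coefficients in the least-squares sense while forcing them to satisfy a linear conservation condition exactly. The matrices below are illustrative stand-ins for the actual interface stencils.

    import numpy as np

    rng = np.random.default_rng(6)
    A = rng.standard_normal((12, 4)); b = rng.standard_normal(12)   # fit data
    C = np.ones((1, 4)); d = np.array([1.0])   # conservation: coefficients sum to 1

    n, m = A.shape[1], C.shape[0]
    KKT = np.block([[A.T @ A, C.T],            # stationarity block
                    [C, np.zeros((m, m))]])    # constraint block
    rhs = np.concatenate([A.T @ b, d])
    x = np.linalg.solve(KKT, rhs)[:n]
    print(x, x.sum())                          # sums to 1 up to round-off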
Software Agents Applications Using Real-Time CORBA
NASA Astrophysics Data System (ADS)
Fowell, S.; Ward, R.; Nielsen, M.
This paper describes current projects being performed by SciSys in the area of the use of software agents, built using CORBA middleware, to improve operations within autonomous satellite/ground systems. These concepts have been developed and demonstrated in a series of experiments variously funded by ESA's Technology Flight Opportunity Initiative (TFO) and Leading Edge Technology for SMEs (LET-SME), and the British National Space Centre's (BNSC) National Technology Programme. Some of this earlier work has already been reported in [1]. This paper will address the trends, issues and solutions associated with this software agent architecture concept, together with its implementation using CORBA within an on-board environment, that is to say taking account of its real-time and resource-constrained nature.
Qualitative simulation for process modeling and control
NASA Technical Reports Server (NTRS)
Dalle Molle, D. T.; Edgar, T. F.
1989-01-01
A qualitative model is developed for a first-order system with a proportional-integral controller without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space behavior of the oscillatory behavior is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.
Aiding the search: Examining individual differences in multiply-constrained problem solving.
Ellis, Derek M; Brewer, Gene A
2018-07-01
Understanding and resolving complex problems is of vital importance in daily life. Problems can be defined by the limitations they place on the problem solver. Multiply-constrained problems are traditionally examined with the compound remote associates task (CRAT). Performance on the CRAT is partially dependent on an individual's working memory capacity (WMC). These findings suggest that executive processes are critical for problem solving and that there are reliable individual differences in multiply-constrained problem solving abilities. The goals of the current study are to replicate and further elucidate the relation between WMC and CRAT performance. To achieve these goals, we manipulated preexposure to CRAT solutions and measured WMC with complex-span tasks. In Experiment 1, we report evidence that preexposure to CRAT solutions improved problem solving accuracy, WMC was correlated with problem solving accuracy, and that WMC did not moderate the effect of preexposure on problem solving accuracy. In Experiment 2, we preexposed participants to correct and incorrect solutions. We replicated Experiment 1 and found that WMC moderates the effect of exposure to CRAT solutions such that high WMC participants benefit more from preexposure to correct solutions than low WMC (although low WMC participants have preexposure benefits as well). Broadly, these results are consistent with theories of working memory and problem solving that suggest a mediating role of attention control processes. Published by Elsevier Inc.
Binns, Michael; de Atauri, Pedro; Vlysidis, Anestis; Cascante, Marta; Theodoropoulos, Constantinos
2015-02-18
Flux balance analysis is traditionally implemented to identify the maximum theoretical flux for some specified reaction, together with a single distribution of flux values for all the reactions present that achieves this maximum value. However, it is well known that the uncertainty in reaction networks due to branches, cycles and experimental errors results in a large number of combinations of internal reaction fluxes which can achieve the same optimal flux value. In this work, we have modified the applied linear objective of flux balance analysis to include a poling penalty function, which pushes each new set of reaction fluxes away from previous solutions generated. Repeated poling-based flux balance analysis generates a sample of different solutions (a characteristic set), which represents all the possible functionality of the reaction network. Compared to existing sampling methods, for the purpose of generating a relatively "small" characteristic set, our new method is shown to obtain a higher coverage than competing methods under most conditions. The influence of the linear objective function on the sampling (the linear bias) constrains optimisation results to a subspace of optimal solutions all producing the same maximal fluxes. Visualisation of reaction fluxes plotted against each other in two dimensions, with and without the linear bias, indicates the existence of correlations between fluxes. This method of sampling is applied to the organism Actinobacillus succinogenes for the production of succinic acid from glycerol. A new method of sampling for the generation of different flux distributions (sets of individual fluxes satisfying constraints on the steady-state mass balances of intermediates) has been developed using a relatively simple modification of flux balance analysis to include a poling penalty function inside the resulting optimisation objective function. This new methodology can achieve a high coverage of the possible flux space and can be used with and without linear bias to show optimal versus sub-optimal solution spaces. Basic analysis of the Actinobacillus succinogenes system using sampling shows that in order to achieve the maximal succinic acid production CO₂ must be taken into the system. Solutions involving release of CO₂ all give sub-optimal succinic acid production.
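A toy illustration of mapping out alternate optima in flux balance analysis. The authors' poling penalty itself is not reproduced here; as a simplified stand-in, the target flux is pinned at its maximum and random secondary objectives generate a characteristic set of distinct optimal flux distributions. The stoichiometry is invented.

    import numpy as np
    from scipy.optimize import linprog

    S = np.array([[ 1, -1, -1,  0],        # node A: v0 -> v1 + v2
                  [ 0,  1,  1, -1]])       # node B: v1 + v2 -> v3
    bounds = [(0, 10)] * 4
    c = np.zeros(4); c[3] = -1             # maximize v3 (linprog minimizes)

    opt = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    v_max = -opt.fun                       # maximum achievable target flux

    rng = np.random.default_rng(7)
    S_fix = np.vstack([S, c])              # extra equality row pins v3 at v_max
    b_fix = np.concatenate([np.zeros(2), [-v_max]])
    samples = [linprog(rng.standard_normal(4), A_eq=S_fix, b_eq=b_fix,
                       bounds=bounds).x for _ in range(5)]
    for v in samples:
        print(np.round(v, 2))              # the v1/v2 split varies, v3 stays maximal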
Configuration-constrained cranking Hartree-Fock pairing calculations for sidebands of nuclei
NASA Astrophysics Data System (ADS)
Liang, W. Y.; Jiao, C. F.; Wu, Q.; Fu, X. M.; Xu, F. R.
2015-12-01
Background: Nuclear collective rotations have been successfully described by the cranking Hartree-Fock-Bogoliubov (HFB) model. However, for rotational sidebands which are built on intrinsic excited configurations, it may not be easy to find converged cranking HFB solutions. The nonconservation of the particle number in the BCS pairing is another shortcoming. To improve the pairing treatment, a particle-number-conserving (PNC) pairing method was suggested. But the existing PNC calculations were performed within a phenomenological one-body potential (e.g., Nilsson or Woods-Saxon) in which one has to deal the double-counting problem. Purpose: The present work aims at an improved description of nuclear rotations, particularly for the rotations of excited configurations, i.e., sidebands. Methods: We developed a configuration-constrained cranking Skyrme Hartree-Fock (SHF) calculation with the pairing correlation treated by the PNC method. The PNC pairing takes the philosophy of the shell model which diagonalizes the Hamiltonian in a truncated model space. The cranked deformed SHF basis provides a small but efficient model space for the PNC diagonalization. Results: We have applied the present method to the calculations of collective rotations of hafnium isotopes for both ground-state bands and sidebands, reproducing well experimental observations. The first up-bendings observed in the yrast bands of the hafnium isotopes are reproduced, and the second up-bendings are predicted. Calculations for rotational bands built on broken-pair excited configurations agree well with experimental data. The band-mixing between two Kπ=6+ bands observed in 176Hf and the K purity of the 178Hf rotational state built on the famous 31 yr Kπ=16+ isomer are discussed. Conclusions: The developed configuration-constrained cranking calculation has been proved to be a powerful tool to describe both the yrast bands and sidebands of deformed nuclei. The analyses of rotational moments of inertia help to understand the structures of nuclei, including rotational alignments, configurations, and competitions between collective and single-particle excitations.
Xia, Yangkun; Fu, Zhuo; Pan, Lijun; Duan, Fenghua
2018-01-01
The vehicle routing problem (VRP) has a wide range of applications in the field of logistics distribution. In order to reduce the cost of logistics distribution, the distance-constrained and capacitated VRP with split deliveries by order (DCVRPSDO) was studied. We show that customer demand, which cannot be split in the classical VRP model, can be split only into discrete deliveries by order. A model of double-objective programming is constructed by taking the minimum number of vehicles used and the minimum vehicle traveling cost as the first and second objectives, respectively. This approach contains a series of constraints, such as single depot, single vehicle type, distance constraint, load capacity limit, split delivery by order, etc. DCVRPSDO is a new type of VRP. A new tabu search algorithm is designed to solve the problem, and testing on examples shows the efficiency of the proposed algorithm. This paper focuses on constructing a double-objective mathematical programming model for DCVRPSDO and designing an adaptive tabu search algorithm (ATSA) with good performance for solving the problem; a bare-bones sketch of the underlying tabu search appears below. The performance of the ATSA is improved by adding some strategies into the search process, including: (a) a strategy of discrete split deliveries by order is used to split the customer demand; (b) a multi-neighborhood structure is designed to enhance the ability of global optimization; (c) two levels of evaluation objectives are set to select the current solution and the best solution; (d) a discriminating strategy, whereby the best solution must be feasible while the current solution may be somewhat infeasible, helps to balance the quality of the solution and the diversity of the neighborhood solutions; (e) an adaptive penalty mechanism helps the candidate solution stay close to the neighborhood of feasible solutions; (f) a strategy of tabu releasing is used to transfer the current solution into a new neighborhood of a better solution.
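The sketch promised above: swap moves on a delivery sequence, a short-term tabu list barring recently used swaps, and acceptance of the best non-tabu neighbor. Distances are invented, and none of the six ATSA refinements (splitting, multi-neighborhoods, adaptive penalties, tabu releasing) are included.

    import random

    random.seed(8)
    n = 8
    dist = [[abs(i - j) + random.random() for j in range(n)] for i in range(n)]

    def cost(tour):
        return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

    tour, tabu, best = list(range(n)), {}, None
    for it in range(300):
        moves = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if tabu.get((i, j), -1) < it]           # skip tabu swaps
        def trial(m):
            t = list(tour); t[m[0]], t[m[1]] = t[m[1]], t[m[0]]; return t
        move = min(moves, key=lambda m: cost(trial(m)))  # best non-tabu neighbor
        tour = trial(move)
        tabu[move] = it + 15                             # tabu tenure of 15 iterations
        if best is None or cost(tour) < cost(best):
            best = list(tour)
    print(best, round(cost(best), 2))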
NASA Astrophysics Data System (ADS)
Elkhateeb, Esraa
2018-01-01
We consider a cosmological model based on a generalization of the equation of state proposed by Nojiri and Odintsov (2004) and Štefančić (2005, 2006). We argue that this model works as a dark fluid model which can interpolate between the dust equation of state and the dark energy equation of state. We show how the asymptotic behavior of the equation of state constrains the parameters of the model. The causality condition for the model is also studied to constrain the parameters, and the fixed points are tested to determine different solution classes. Observations of the Hubble diagram of SNe Ia are used to further constrain the model. We present an exact solution of the model and calculate the luminosity distance and the energy density evolution. We also calculate the deceleration parameter to test the state of the universe expansion.
Formation Flying Design and Applications in Weak Stability Boundary Regions
NASA Technical Reports Server (NTRS)
Folta, David
2003-01-01
Weak Stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observing efficiency. Design of formations in these regions is becoming ever more challenging as more complex missions are envisioned. Formation design algorithms must be further developed to incorporate a better understanding of the WSB solution space; this development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in WSB regions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numerical methods for attaining constrained formation geometries and controlling their dynamical evolution. This paper presents a survey of formation missions in the WSB regions and a brief description of the formation design using numerical and dynamical techniques.
Formation flying design and applications in weak stability boundary regions.
Folta, David
2004-05-01
Weak stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observation efficiency. Designs of formations in these regions are becoming ever more challenging as more complex missions are envisioned. Formation design algorithms must be further developed to incorporate a better understanding of the weak stability boundary solution space; this development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in weak stability boundary regions. This end-to-end support consists of mission operations, trajectory design, and control, and includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numeric methods to attain constrained formation geometries and control their dynamical evolution. This paper presents a survey of formation missions in the weak stability boundary regions and a brief description of formation design using numerical and dynamical techniques.
The benefits of adaptive parametrization in multi-objective Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John
2010-10-01
In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components' Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective - higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
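The re-alignment step can be sketched in a few lines: take the archive of good designs, extract principal directions by SVD, and keep only the components needed to explain most of the variance. The function name and variance threshold are illustrative assumptions.

```python
# Sketch of PCA-based re-alignment of search directions: the pattern
# search then perturbs along +/- each principal direction instead of
# the original coordinate axes, and low-variance directions are dropped
# to shrink the effective design space.
import numpy as np

def realigned_directions(archive, var_keep=0.99):
    X = np.asarray(archive, float)             # rows: design vectors of good solutions
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s**2 / np.sum(s**2)                  # fraction of variance per direction
    k = int(np.searchsorted(np.cumsum(var), var_keep)) + 1
    return Vt[:k]                              # new (reduced) move directions
```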
Concurrent evolution of feature extractors and modular artificial neural networks
NASA Astrophysics Data System (ADS)
Hannak, Victor; Savakis, Andreas; Yang, Shanchieh Jay; Anderson, Peter
2009-05-01
This paper presents a new approach for the design of feature-extracting recognition networks that do not require expert knowledge in the application domain. Feature-Extracting Recognition Networks (FERNs) are composed of interconnected functional nodes (feurons), which serve as feature extractors, and are followed by a subnetwork of traditional neural nodes (neurons) that act as classifiers. A concurrent evolutionary process (CEP) is used to search the space of feature extractors and neural networks in order to obtain an optimal recognition network that simultaneously performs feature extraction and recognition. By constraining the hill-climbing search functionality of the CEP on specific parts of the solution space, i.e., individually limiting the evolution of feature extractors and neural networks, it was demonstrated that concurrent evolution is a necessary component of the system. Application of this approach to a handwritten digit recognition task illustrates that the proposed methodology is capable of producing recognition networks that perform in line with other methods without the need for expert knowledge in image processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heikkinen, J. A.; Nora, M.
2011-02-15
Gyrokinetic equations of motion, Poisson equation, and energy and momentum conservation laws are derived based on the reduced-phase-space Lagrangian and inverse Kruskal iteration introduced by Pfirsch and Correa-Restrepo [J. Plasma Phys. 70, 719 (2004)]. This formalism, together with the choice of the adiabatic invariant J=
Resource Constrained Planning of Multiple Projects with Separable Activities
NASA Astrophysics Data System (ADS)
Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya
In this study we consider a resource-constrained planning problem for multiple projects with separable activities. The problem asks for a plan that processes the activities subject to time-windowed resource availability. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples; in particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.
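To make the three ingredients concrete (a greedy initial incumbent, a lower bound combining load and duration, and equivalence-based pruning), here is a generic branch-and-bound sketch on a deliberately simpler stand-in problem, assigning jobs to identical resources to minimize completion time. It is not the paper's multi-project algorithm.

```python
# Generic branch-and-bound skeleton on a stand-in problem: minimum
# makespan assignment of integer-duration jobs to m identical resources.
# Illustrates incumbent initialization, bounding, and symmetry pruning.
def branch_and_bound(durations, m):
    durations = sorted(durations, reverse=True)
    loads = [0] * m                       # greedy initial incumbent
    for d in durations:
        loads[loads.index(min(loads))] += d
    best = [max(loads)]

    def lb(loads, i):                     # bound from both load and duration
        rem = sum(durations[i:])
        return max(max(loads), (sum(loads) + rem + m - 1) // m)

    def dfs(loads, i):
        if i == len(durations):
            best[0] = min(best[0], max(loads))
            return
        if lb(loads, i) >= best[0]:       # prune by lower bound
            return
        seen = set()
        for k in range(m):
            if loads[k] in seen:          # equivalence: identical loads are symmetric
                continue
            seen.add(loads[k])
            loads[k] += durations[i]
            dfs(loads, i + 1)
            loads[k] -= durations[i]

    dfs([0] * m, 0)
    return best[0]

print(branch_and_bound([4, 7, 2, 9, 5, 6], m=2))   # -> 17
```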
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Constrained coding for the deep-space optical channel
NASA Technical Reports Server (NTRS)
Moision, B. E.; Hamkins, J.
2002-01-01
We investigate methods of coding for a channel subject to a large dead-time constraint, i.e. a constraint on the minimum spacing between transmitted pulses, with the deep-space optical channel as the motivating example.
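For intuition, the noiseless capacity of such a dead-time constraint follows from a standard constrained-coding result: it is the base-2 logarithm of the largest eigenvalue of the constraint graph's adjacency matrix. A short sketch, where the state encoding (slots elapsed since the last pulse) is one common choice:

```python
# Capacity (bits/slot) of a binary channel whose pulses must be separated
# by at least d empty slots: the (d, infinity) runlength constraint.
import numpy as np

def dead_time_capacity(d):
    n = d + 1                            # states 0..d: slots since last pulse (capped)
    A = np.zeros((n, n))
    for s in range(n):
        A[s][min(s + 1, d)] = 1          # emit a 0: move one slot closer
        if s == d:
            A[s][0] = 1                  # a pulse is allowed only after d empty slots
    return np.log2(np.max(np.abs(np.linalg.eigvals(A))))

for d in (1, 2, 4):
    print(d, dead_time_capacity(d))      # d=1 gives log2(golden ratio) ~ 0.694
```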
Quantum Model of a Charged Black Hole
NASA Astrophysics Data System (ADS)
Gladush, V. D.
A canonical approach to constructing the classical and quantum descriptions of a spherically symmetric configuration of gravitational and electromagnetic fields is considered. According to the sign of the square of the Kodama vector, space-time is divided into R- and T-regions. By virtue of the generalized Birkhoff theorem, one can choose coordinate systems such that the desired metric functions depend on time in the T-region and on the space coordinate in the R-region. The initial action for the configuration then breaks up into terms describing the fields in the T- and R-regions, with time and space as the respective evolutionary variables. For these regions, Lagrangians of the configuration are constructed which contain dynamic and non-dynamic degrees of freedom, leading to constraints. We concentrate our attention on dynamic T-regions. There are two additional conserved physical quantities: the charge and the total mass of the system. The Poisson bracket of the total mass with the Hamiltonian function vanishes in the weak sense. A classical solution of the field equations in the configuration space (minisuperspace) is constructed without fixing the non-dynamic variable. In the framework of the canonical approach to the quantum mechanics of the system under consideration, physical states are found by solving the Hamiltonian constraint in operator form (the DeWitt equation) for the system wave function Ψ. It is also required that Ψ be an eigenfunction of the operators of charge and total mass. To make the mass operator symmetric, the corresponding ordering of operators is carried out. Since the total mass operator commutes with the Hamiltonian in the weak sense, its eigenfunctions must be constructed in conjunction with the solution of the DeWitt equation. The consistency condition leads to an ansatz with whose help the solution of the DeWitt equation for the state Ψem with a defined total mass and charge is constructed, taking into account the regularity condition on the horizon. The mass and charge spectra of the configuration in this approach turn out to be continuous. It is interesting that formal quantization in the R-region with a space evolutionary coordinate leads to a similar result.
The Tractable Cognition Thesis
ERIC Educational Resources Information Center
van Rooij, Iris
2008-01-01
The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the "Tractable Cognition thesis": Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories…
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide
NASA Technical Reports Server (NTRS)
Miura, H.; Schmit, L. A., Jr.
1979-01-01
A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is the sequence of unconstrained minimizations technique, using Newton's method for the unconstrained function minimizations. The use of NEWSUMT and the definition of all parameters are described.
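The sequence-of-unconstrained-minimizations idea can be sketched compactly. The exterior quadratic penalty and the BFGS inner solver below are illustrative simplifications; NEWSUMT's actual penalty forms and Newton implementation differ.

```python
# SUMT-style sketch: repeatedly minimize a penalized objective with an
# increasing penalty parameter r, warm-starting each inner solve.
import numpy as np
from scipy.optimize import minimize

def f(x):                    # objective
    return x[0]**2 + x[1]**2

def g(x):                    # inequality constraint, g(x) >= 0
    return x[0] + x[1] - 1.0

def sumt(x0, r=1.0, growth=10.0, outer=6):
    x = np.asarray(x0, float)
    for _ in range(outer):
        phi = lambda x: f(x) + r * max(0.0, -g(x))**2   # exterior penalty
        x = minimize(phi, x, method="BFGS").x
        r *= growth
    return x

print(sumt([0.0, 0.0]))      # -> approximately [0.5, 0.5]
```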
Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal
ERIC Educational Resources Information Center
Steinley, Douglas; Hubert, Lawrence
2008-01-01
This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…
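The dynamic-programming recursion for optimally subdividing an ordered sequence into K clusters is short enough to sketch directly; the quadratic-assignment step that produces the object order is assumed already done, and the data here are placeholders.

```python
# Order-constrained clustering by dynamic programming: D[k][j] is the
# best within-cluster sum of squares for the first j objects in k
# contiguous clusters (Fisher-style optimal partitioning).
import numpy as np

def order_constrained_kmeans(x, K):
    x = np.asarray(x, float)
    n = len(x)
    csum, csq = np.cumsum(x), np.cumsum(x**2)
    def sse(i, j):                     # within-cluster SSE for x[i..j]
        s = csum[j] - (csum[i-1] if i > 0 else 0.0)
        q = csq[j] - (csq[i-1] if i > 0 else 0.0)
        return q - s * s / (j - i + 1)
    D = np.full((K + 1, n + 1), np.inf)
    D[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            D[k][j] = min(D[k-1][i] + sse(i, j - 1) for i in range(k - 1, j))
    return D[K][n]

print(order_constrained_kmeans([1, 2, 2, 8, 9, 30], K=3))   # -> ~1.167
```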
The Core Flight System (cFS) Community: Providing Low Cost Solutions for Small Spacecraft
NASA Technical Reports Server (NTRS)
McComas, David; Wilmot, Jonathan; Cudmore, Alan
2016-01-01
In February 2015 the NASA Goddard Space Flight Center (GSFC) completed the open source release of the entire Core Flight Software (cFS) suite. After the open source release a multi-NASA center Configuration Control Board (CCB) was established that has managed multiple cFS product releases. The cFS was developed and is being maintained in compliance with the NASA Class B software development process requirements and the open source release includes all Class B artifacts. The cFS is currently running on three operational science spacecraft and is being used on multiple spacecraft and instrument development efforts. While the cFS itself is a viable flight software (FSW) solution, we have discovered that the cFS community is a continuous source of innovation and growth that provides products and tools that serve the entire FSW lifecycle and future mission needs. This paper summarizes the current state of the cFS community, the key FSW technologies being pursued, the development/verification tools and opportunities for the small satellite community to become engaged. The cFS is a proven high quality and cost-effective solution for small satellites with constrained budgets.
Minimal norm constrained interpolation. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Irvine, L. D.
1985-01-01
In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
State-constrained booster trajectory solutions via finite elements and shooting
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.; Seywald, Hans
1993-01-01
This paper presents an extension of a FEM formulation based on variational principles. A general formulation for handling internal boundary conditions and discontinuities in the state equations is presented, and the general formulation is modified for optimal control problems subject to state-variable inequality constraints. Solutions which only touch the state constraint and solutions which have a boundary arc of finite length are considered. Suitable shape and test functions are chosen for a FEM discretization. All element quadrature (equivalent to one-point Gaussian quadrature over each element) may be done in closed form. The final form of the algebraic equations is then derived. A simple state-constrained problem is solved. Then, for a practical application of the use of the FEM formulation, a launch vehicle subject to a dynamic pressure constraint (a first-order state inequality constraint) is solved. The results presented for the launch-vehicle trajectory have some interesting features, including a touch-point solution.
Tests of gravity with future space-based experiments
NASA Astrophysics Data System (ADS)
Sakstein, Jeremy
2018-03-01
Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.
Phase transition solutions in geometrically constrained magnetic domain wall models
NASA Astrophysics Data System (ADS)
Chen, Shouxin; Yang, Yisong
2010-02-01
Recent work on magnetic phase transition in nanoscale systems indicates that new physical phenomena, in particular the Bloch wall width narrowing, arise as a consequence of geometrical confinement of magnetization, and this has led to the introduction of geometrically constrained domain wall models. In this paper, we present a systematic mathematical analysis of the existence of solutions of the basic governing equations in such domain wall models. We show that, when the cross section of the geometric constriction is a simple step function, the solutions may be obtained by minimizing the domain wall energy over the constriction and solving the Bogomol'nyi equation outside the constriction. When the cross section and potential density are both even, we establish the existence of an odd domain wall solution realizing the phase transition process between two adjacent domain phases. When the cross section satisfies a certain integrability condition, we prove that a domain wall solution always exists which links two arbitrarily designated domain phases.
Precise regional baseline estimation using a priori orbital information
NASA Technical Reports Server (NTRS)
Lindqwister, Ulf J.; Lichten, Stephen M.; Blewitt, Geoffrey
1990-01-01
A solution using GPS measurements acquired during the CASA Uno campaign has resulted in 3-4 mm horizontal daily baseline repeatability and 13 mm vertical repeatability for a 729 km baseline located in North America. The agreement with VLBI is at the level of 10-20 mm for all components. The results were obtained with the GIPSY orbit determination and baseline estimation software and are based on five single-day data arcs spanning January 20, 21, 25, 26, and 27, 1988. The estimation strategy included resolving the carrier phase integer ambiguities, utilizing an optimal set of fixed reference stations, and constraining GPS orbit parameters by applying a priori information. A multiday GPS orbit and baseline solution has yielded similar 2-4 mm horizontal daily repeatabilities for the same baseline, consistent with the constrained single-day arc solutions. The application of weak constraints to the orbital state for single-day data arcs produces solutions which approach the precise orbits obtained with unconstrained multiday arc solutions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bigham, S; Yu, DZ; Chugh, D
2014-02-01
The slow diffusion of an absorbate molecule into an absorbent often makes the absorption process a rate-limiting step in many applications. In cases involving an absorbate with a high heat of phase change, such as water absorption into a LiBr (lithium bromide) solution, the absorption rate is further slowed due to significant heating of the absorbent. Recently, it has been demonstrated that constraining a LiBr solution film by a hydrophobic porous structure enables manipulation of the solution flow thermohydraulic characteristics. Here, it is shown that the mass transport mode in a constrained laminar solution flow can be changed from diffusive to advective. This change in mode is accomplished through stretching and folding the laminar streamlines within the solution film via the implementation of micro-scale features on the flow channel surface. The process induces vortices within the solution film, which continuously bring concentrated solution from the bottom and middle of the solution channel to its interface with the vapor phase, thus leading to a significant enhancement in the absorption rate. The detailed physics of the involved transport processes is elucidated using the LBM (Lattice Boltzmann Method). Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Calcaferro, Leila M.; Córsico, Alejandro H.; Althaus, Leandro G.
2017-11-01
Context. Many pulsating low-mass white dwarf stars have been detected in recent years in the field of our Galaxy. Some of them exhibit multiperiodic brightness variations; it is therefore possible to probe their interiors through asteroseismology. Aims: We present a detailed asteroseismological study of all the known low-mass variable white dwarf stars based on a complete set of fully evolutionary models that are representative of low-mass He-core white dwarf stars. Methods: We employed adiabatic radial and nonradial pulsation periods for low-mass white dwarf models with stellar masses ranging from 0.1554 to 0.4352 M⊙ that were derived by simulating the nonconservative evolution of a binary system consisting of an initially 1 M⊙ zero-age main-sequence (ZAMS) star and a 1.4 M⊙ neutron star companion. We estimated the mean period spacing for the stars under study (where this was possible), and then we constrained the stellar mass by comparing the observed period spacing with the average of the computed period spacings for our grid of models. We also employed the individual observed periods of every known pulsating low-mass white dwarf star to search for a representative seismological model. Results: We found that even though the stars under analysis exhibit few periods and the period fits show a multiplicity of solutions, it is possible to find seismological models whose mass and effective temperature are in agreement with the values given by spectroscopy in most cases. Unfortunately, we were not able to constrain the stellar masses by employing the observed period spacing because, in general, only a few periods are exhibited by these stars. In the two cases where we were able to extract the period spacing from the set of observed periods, this method led to stellar mass values that were substantially higher than expected for this type of star. Conclusions: The results presented in this work show the need for further photometric searches on the one hand, and that some improvements of the theoretical models are required on the other hand, in order to place the asteroseismological results on firmer ground.
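The two fitting procedures described, matching the mean period spacing and scoring models against individual periods, reduce to a few lines; the periods and model grid below are made-up placeholders, not real observations.

```python
# Sketch of the two asteroseismic fitting steps: (1) mean period spacing
# from observed periods, (2) model scoring by the usual quality function
# chi^2 = (1/N) * sum_i min_k (P_obs_i - P_model_k)^2.
import numpy as np

P_obs = np.array([1200.0, 1295.0, 1410.0, 1500.0])       # seconds (placeholder)
mean_spacing = np.mean(np.diff(np.sort(P_obs)))           # ~100 s here

def period_fit(P_obs, P_model):
    return np.mean([np.min((p - P_model)**2) for p in P_obs])

models = {"M=0.18": np.arange(1150, 1650, 98.0),          # placeholder grids
          "M=0.24": np.arange(1180, 1650, 103.0)}
best = min(models, key=lambda m: period_fit(P_obs, models[m]))
print(mean_spacing, best)
```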
A Path Algorithm for Constrained Estimation
Zhou, Hua; Lange, Kenneth
2013-01-01
Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
NASA Astrophysics Data System (ADS)
Biess, Armin
2013-01-01
The study of the kinematic and dynamic features of human arm movements provides insights into the computational strategies underlying human motor control. In this paper a differential geometric approach to movement control is taken by endowing arm configuration space with different non-Euclidean metric structures to study the predictions of the generalized minimum-jerk (MJ) model in the resulting Riemannian manifold for different types of human arm movements. For each metric space the solution of the generalized MJ model is given by reparametrized geodesic paths. This geodesic model is applied to a variety of motor tasks ranging from three-dimensional unconstrained movements of a four degree of freedom arm between pointlike targets to constrained movements where the hand location is confined to a surface (e.g., a sphere) or a curve (e.g., an ellipse). For the latter speed-curvature relations are derived depending on the boundary conditions imposed (periodic or nonperiodic) and the compatibility with the empirical one-third power law is shown. Based on these theoretical studies and recent experimental findings, I argue that geodesics may be an emergent property of the motor system and that the sensorimotor system may shape arm configuration space by learning metric structures through sensorimotor feedback.
NASA Technical Reports Server (NTRS)
Postma, Barry Dirk
2005-01-01
This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows for both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.
Impact Flash Monitoring Facility on the Deep Space Gateway
NASA Astrophysics Data System (ADS)
Needham, D. H.; Moser, D. E.; Suggs, R. M.; Cooke, W. J.; Kring, D. A.; Neal, C. R.; Fassett, C. I.
2018-02-01
Cameras mounted to the Deep Space Gateway exterior will detect flashes caused by impacts on the lunar surface. Observed flashes will help constrain the current lunar impact flux and assess hazards faced by crews living and working in cislunar space.
NASA Astrophysics Data System (ADS)
Kaven, J. Ole; Barbour, Andrew J.; Ali, Tabrez
2017-04-01
Continual production of geothermal energy at times leads to significant surface displacement that can be observed in high spatial resolution using InSAR imagery. The surface displacement can be analyzed to resolve volume change within the reservoir, revealing the often-complicated patterns of reservoir deformation. Simple point source models of reservoir deformation in a homogeneous elastic or poro-elastic medium can be superimposed to provide spatially varying, kinematic representations of reservoir deformation. In many cases, injection and production data are known in insufficient detail; but, when these are available, the same Green functions can be used to constrain the reservoir deformation. Here we outline how the injection and production data can be used to constrain bounds on the solution by posing the inversion as a quadratic program with inequality constraints and regularization rather than a conventional least squares solution with regularization. We apply this method to InSAR-derived surface displacements at the Coso and Salton Sea Geothermal Fields in California, using publicly available injection and production data. At both geothermal fields the available surface deformation in conjunction with the injection and production data permit robust solutions for the spatially varying reservoir deformation. The reservoir deformation pattern resulting from the constrained quadratic programming solution is more heterogeneous when compared to a conventional least squares solution. The increased heterogeneity is consistent with the known structural controls on heat and fluid transport in each geothermal reservoir.
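A minimal sketch of the constrained inversion idea, assuming a placeholder Green-function matrix and bounds standing in for the injection/production totals; a bounded least-squares solver plays the role of the quadratic program with inequality constraints, and Tikhonov rows supply the regularization.

```python
# Sketch: solve d = G x for reservoir volume changes x, with sign/bound
# constraints from production data and Tikhonov regularization. G, d,
# and the bounds are synthetic placeholders, not real InSAR data.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
G = rng.normal(size=(200, 50))           # Green functions (placeholder)
x_true = -np.abs(rng.normal(size=50))    # net contraction (production > injection)
d = G @ x_true + 0.01 * rng.normal(size=200)

lam = 0.1                                # regularization weight (tuned in practice)
A = np.vstack([G, lam * np.eye(50)])     # append Tikhonov rows
b = np.concatenate([d, np.zeros(50)])

# bounds: each cell may only contract, up to a total implied by the data
res = lsq_linear(A, b, bounds=(-5.0, 0.0))
print(res.x[:5])
```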
A hybrid model for computing nonthermal ion distributions in a long mean-free-path plasma
NASA Astrophysics Data System (ADS)
Tang, Xianzhu; McDevitt, Chris; Guo, Zehua; Berk, Herb
2014-10-01
Non-thermal ions, especially the suprathermal ones, are known to make a dominant contribution to a number of important physical quantities, such as the fusion reactivity in controlled fusion, the ion heat flux, and, in the case of a tokamak, the ion bootstrap current. Evaluating the deviation from a local Maxwellian distribution of these non-thermal ions can be a challenging task in the context of a global plasma fluid model that evolves the plasma density, flow, and temperature. Here we describe a hybrid model for coupling such constrained kinetic calculations to global plasma fluid models. The key ingredient is a non-perturbative treatment of the tail ions where the ion Knudsen number approaches or surpasses order unity. This can be sharply contrasted with the standard Chapman-Enskog approach, which relies on a perturbative treatment that is frequently invalidated. The accuracy of our coupling scheme is controlled by the precise criteria for matching the non-perturbative kinetic model to perturbative solutions in both configuration space and velocity space. Although our specific application examples will be drawn from laboratory controlled fusion experiments, the general approach is applicable to space and astrophysical plasmas as well. Work supported by DOE.
Chern-Simons improved Hamiltonians for strings in three space dimensions
NASA Astrophysics Data System (ADS)
Gordeli, Ivan; Melnikov, Dmitry; Niemi, Antti J.; Sedrakyan, Ara
2016-07-01
In the case of a structureless string the extrinsic curvature and torsion determine uniquely its shape in three-dimensional ambient space, by way of solution of the Frenet equation. In many physical scenarios there are in addition symmetries that constrain the functional form of the ensuing energy function. For example, the energy of a structureless string should be independent of the way the string is framed in the Frenet equation. Thus the energy should only involve the curvature and torsion as dynamical variables, in a manner that resembles the Hamiltonian of the Abelian Higgs model. Here we investigate the effect of symmetry principles in the construction of Hamiltonians for structureless strings. We deduce from the concept of frame independence that in addition to extrinsic curvature and torsion, the string can also engage a three-dimensional Abelian bulk gauge field as a dynamical variable. We find that the presence of a bulk gauge field gives rise to a long-range interaction between different strings. Moreover, when this gauge field is subject to Chern-Simons self-interaction, it becomes plausible that interacting strings are subject to fractional statistics in three space dimensions.
Maximizing photovoltaic power generation of a space-dart configured satellite
NASA Astrophysics Data System (ADS)
Lee, Dae Young; Cutler, James W.; Mancewicz, Joe; Ridley, Aaron J.
2015-06-01
Many small satellites are power constrained due to their minimal solar panel area and the eclipse environment of low-Earth orbit. As with larger satellites, these small satellites, including CubeSats, use deployable power arrays to increase power production. This presents a design opportunity to develop various objective functions related to energy management and methods for optimizing these functions over a satellite design. A novel power generation model was created, and a simulation system was developed to evaluate various objective functions describing energy management for complex satellite designs. The model uses a spacecraft-body-fixed spherical coordinate system to analyze the complex geometry of a satellite's self-induced shadowing with computation provided by the Open Graphics Library. As an example design problem, a CubeSat configured as a space-dart with four deployable panels is optimized. Due to the fast computation speed of the solution, an exhaustive search over the design space is used to find the solar panel deployment angles which maximize total power generation. Simulation results are presented for a variety of orbit scenarios. The method is extendable to a variety of complex satellite geometries and power generation systems.
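As a toy version of the exhaustive design search, the sketch below sweeps one deployment angle for four symmetric panels under a bare cosine-incidence model; the self-shadowing computation that the paper performs with the Open Graphics Library is deliberately omitted, and the orbit/eclipse model is crude.

```python
# Exhaustive search over a single deployment angle maximizing a crude
# orbit-averaged power figure for four symmetric deployed panels.
import numpy as np

def orbit_avg_power(theta, n_sun=256):
    power = 0.0
    for phi in np.linspace(0, np.pi, n_sun):           # sunlit half only (crude)
        s = np.array([np.cos(phi), np.sin(phi), 0.0])  # sun unit vector
        for k in range(4):                             # four deployed panels
            az = k * np.pi / 2
            n = np.array([np.sin(theta) * np.cos(az),  # panel normal
                          np.sin(theta) * np.sin(az),
                          np.cos(theta)])
            power += max(0.0, n @ s)                   # cosine incidence, no shadowing
    return power / n_sun

angles = np.radians(np.arange(0, 181, 2))
best = max(angles, key=orbit_avg_power)
print(np.degrees(best))
```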
Phase space flows for non-Hamiltonian systems with constraints
NASA Astrophysics Data System (ADS)
Sergi, Alessandro
2005-09-01
In this paper, non-Hamiltonian systems with holonomic constraints are treated by a generalization of Dirac’s formalism. Non-Hamiltonian phase space flows can be described by generalized antisymmetric brackets or by general Liouville operators which cannot be derived from brackets. Both situations are treated. In the first case, a Nosé-Dirac bracket is introduced as an example. In the second one, Dirac’s recipe for projecting out constrained variables from time translation operators is generalized and then applied to non-Hamiltonian linear response. Dirac’s formalism avoids spurious terms in the response function of constrained systems. However, corrections coming from phase space measure must be considered for general perturbations.
Genetic Algorithm for Optimization: Preprocessor and Algorithm
NASA Technical Reports Server (NTRS)
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
Genetic algorithms (GAs), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, use an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor, both for quality (error) and for cost (complexity), to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of the nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we would consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
Steady-state kinetic modeling constrains cellular resting states and dynamic behavior.
Purvis, Jeremy E; Radhakrishnan, Ravi; Diamond, Scott L
2009-03-01
A defining characteristic of living cells is the ability to respond dynamically to external stimuli while maintaining homeostasis under resting conditions. Capturing both of these features in a single kinetic model is difficult because the model must be able to reproduce both behaviors using the same set of molecular components. Here, we show how combining small, well-defined steady-state networks provides an efficient means of constructing large-scale kinetic models that exhibit realistic resting and dynamic behaviors. By requiring each kinetic module to be homeostatic (at steady state under resting conditions), the method proceeds by (i) computing steady-state solutions to a system of ordinary differential equations for each module, (ii) applying principal component analysis to each set of solutions to capture the steady-state solution space of each module network, and (iii) combining optimal search directions from all modules to form a global steady-state space that is searched for accurate simulation of the time-dependent behavior of the whole system upon perturbation. Importantly, this stepwise approach retains the nonlinear rate expressions that govern each reaction in the system and enforces constraints on the range of allowable concentration states for the full-scale model. These constraints not only reduce the computational cost of fitting experimental time-series data but can also provide insight into limitations on system concentrations and architecture. To demonstrate application of the method, we show how small kinetic perturbations in a modular model of platelet P2Y(1) signaling can cause widespread compensatory effects on cellular resting states.
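Steps (i) and (ii) can be illustrated on a toy module, a reversible reaction A <-> B with a conserved total, whose steady state is analytic; sampling rate constants and running PCA (via SVD) on the resulting steady states yields the directions that span the module's steady-state space. The module and parameter ranges are stand-ins, not the platelet P2Y(1) network.

```python
# Sketch of steps (i)-(ii): sample kinetic parameters, compute the
# module's steady states, then PCA the solution set.
import numpy as np

rng = np.random.default_rng(1)
T = 1.0                                       # conserved total A + B
solutions = []
for _ in range(500):
    kf, kr = rng.uniform(0.1, 10.0, size=2)   # sampled rate constants
    B = T * kf / (kf + kr)                    # analytic steady state of A <-> B
    solutions.append([T - B, B])

X = np.array(solutions)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
print(Vt[0], s**2 / np.sum(s**2))             # principal direction, variance shares
```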
Species richness and morphological diversity of passerine birds
Ricklefs, Robert E.
2012-01-01
The relationship between species richness and the occupation of niche space can provide insight into the processes that shape patterns of biodiversity. For example, if species interactions constrained coexistence, one might expect tendencies toward even spacing within niche space and positive relationships between diversity and total niche volume. I use morphological diversity of passerine birds as a proxy for diet, foraging maneuvers, and foraging substrates and examine the morphological space occupied by regional and local passerine avifaunas. Although independently diversified regional faunas exhibit convergent morphology, species are clustered rather than evenly distributed, the volume of the morphological space is weakly related to number of species per taxonomic family, and morphological volume is unrelated to number of species within both regional avifaunas and local assemblages. These results seemingly contradict patterns expected when species interactions constrain regional or local diversity, and they suggest a larger role for diversification, extinction, and dispersal limitation in shaping species richness. PMID:22908271
NASA Technical Reports Server (NTRS)
Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.
1997-01-01
Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling by Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large, and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and that do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
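The on-the-fly idea is easiest to see in a Gauss-Seidel sweep where a callback generates each matrix row on demand, so the transition-rate matrix never needs to be stored. The tridiagonal example row generator is only a stand-in for rows derived from a high-level model.

```python
# Gauss-Seidel with rows generated on the fly: row(i) returns the
# (indices, values) of the nonzeros of row i of A; the full matrix
# never exists in memory.
import numpy as np

def gauss_seidel_otf(row, b, n, sweeps=100, tol=1e-10):
    x = np.zeros(n)
    for _ in range(sweeps):
        delta = 0.0
        for i in range(n):
            idx, val = row(i)                 # generate row i on demand
            diag = val[idx.index(i)]
            rest = sum(v * x[j] for j, v in zip(idx, val) if j != i)
            new = (b[i] - rest) / diag
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:
            break
    return x

def row(i, n=6):                              # stand-in: tridiagonal, diag-dominant
    idx = [j for j in (i - 1, i, i + 1) if 0 <= j < n]
    return idx, [4.0 if j == i else -1.0 for j in idx]

print(gauss_seidel_otf(row, b=np.ones(6), n=6))
```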
Estimating free-body modal parameters from tests of a constrained structure
NASA Technical Reports Server (NTRS)
Cooley, Victor M.
1993-01-01
Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.
Towards high-speed autonomous navigation of unknown environments
NASA Astrophysics Data System (ADS)
Richter, Charles; Roy, Nicholas
2015-05-01
In this paper, we summarize recent research enabling high-speed navigation in unknown environments for dynamic robots that perceive the world through onboard sensors. Many existing solutions to this problem guarantee safety by making the conservative assumption that any unknown portion of the map may contain an obstacle, and therefore constrain planned motions to lie entirely within known free space. In this work, we observe that safety constraints may significantly limit performance and that faster navigation is possible if the planner reasons about collision with unobserved obstacles probabilistically. Our overall approach is to use machine learning to approximate the expected costs of collision using the current state of the map and the planned trajectory. Our contribution is to demonstrate fast but safe planning using a learned function to predict future collision probabilities.
Optimal design of dampers within seismic structures
NASA Astrophysics Data System (ADS)
Ren, Wenjie; Qian, Hui; Song, Wali; Wang, Liqiang
2009-07-01
An improved multi-objective genetic algorithm for structural passive control system optimization is proposed. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. For a constrained problem, the dominance-based penalty function method is advanced, containing information on an individual's status (feasible or infeasible), position in a search space, and distance from a Pareto optimal set. The proposed approach is used for the optimal designs of a six-storey building with shape memory alloy dampers subjected to earthquake. The number and position of dampers are chosen as the design variables. The number of dampers and peak relative inter-storey drift are considered as the objective functions. Numerical results generate a set of non-dominated solutions.
Researching Children's Understanding of Safety: An Auto-Driven Visual Approach
ERIC Educational Resources Information Center
Agbenyega, Joseph S.
2011-01-01
Safe learning spaces allow children to explore their environment in an open and inquiring way, whereas unsafe spaces constrain, frustrate and disengage children from experiencing the fullness of their learning spaces. This study explores how children make sense of safe and unsafe learning spaces, and how this understanding affects the ways they…
NASA Astrophysics Data System (ADS)
Ibraheem, Omveer, Hasan, N.
2010-10-01
A new hybrid stochastic search technique is proposed for the design of a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. In this technique, we propose a hybrid of a Genetic Algorithm (GA) and simulated annealing (SA) based regulator. GASA has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time. The feasible solution decreases the cost function rather than minimizing it.
NASA Astrophysics Data System (ADS)
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC); the other is the power economic dispatch (ED) for each unit. An accurate solution of ED is the more important for enhancing the efficiency of the solution to SCUC once the feasible unit states are fixed. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, based on linear programming and a piecewise linear approximation to the nonlinear convex fuel cost functions, are proposed for solving ED. Numerical testing results show that the methods are effective and efficient.
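A sketch of piecewise-linear ED as a plain LP, using the standard convex-combination (lambda) formulation rather than the paper's two named methods; because the fuel costs are convex, the LP needs no extra adjacency constraints. Unit data and demand are illustrative.

```python
# Economic dispatch via LP: each unit's output is a convex combination
# of cost-curve breakpoints; total unit power must meet demand.
import numpy as np
from scipy.optimize import linprog

units = [  # (breakpoint powers in MW, quadratic cost coefficients a, b, c)
    (np.linspace(50, 200, 6), (0.004, 8.0, 100.0)),
    (np.linspace(40, 150, 6), (0.006, 9.5, 80.0)),
]
demand = 260.0

cost, power_row = [], []
for P, (a, b, c0) in units:
    cost.extend(a * P**2 + b * P + c0)    # cost at each breakpoint
    power_row.extend(P)
ncol = len(cost)

A_eq, b_eq = [power_row], [demand]        # sum of unit powers = demand
off = 0
for P, _ in units:                        # lambdas of each unit sum to 1
    row = [0.0] * ncol
    row[off:off + len(P)] = [1.0] * len(P)
    A_eq.append(row)
    b_eq.append(1.0)
    off += len(P)

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.fun)                            # minimum total fuel cost
```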
Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.
Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming
2016-09-01
People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expression of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance was constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges. However, the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution on 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face for localizing facial landmarks automatically. In FER, a novel action unit (AU) space-based FER method has been proposed. Facial features are extracted using landmarks and further represented as coordinates in the AU space, which are classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods have achieved satisfactory results. Possible real-world applications using our algorithms have also been discussed.
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2012-01-01
The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error plots are provided for both the simulations and actual data analyses.
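The label-switching fix can be mirrored in a small fit: parameterize the second exponential scale as mu2 = mu1 + delta with delta > 0, so the two mixture components cannot swap roles during the evolutionary search. The data below are simulated, and the setup is a generic two-component fit, not the operational GoFFRA.

```python
# Constrained differential-evolution fit of a two-component exponential
# mixture; the (mu1, delta) parameterization removes label switching.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(2)
data = np.concatenate([rng.exponential(5.0, 700),     # short-scale component
                       rng.exponential(40.0, 300)])   # long-scale component

def neg_log_like(theta):
    alpha, mu1, delta = theta
    mu2 = mu1 + delta                                 # enforces mu1 < mu2
    pdf = (alpha * np.exp(-data / mu1) / mu1
           + (1 - alpha) * np.exp(-data / mu2) / mu2)
    return -np.sum(np.log(pdf + 1e-300))              # guard against underflow

bounds = [(0.01, 0.99), (0.1, 50.0), (0.1, 100.0)]    # alpha, mu1, delta
res = differential_evolution(neg_log_like, bounds, seed=0)
print(res.x[0])    # estimated fraction of the short-scale component (~0.7 here)
```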
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
Capacity-constrained traffic assignment in networks with residual queues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, W.H.K.; Zhang, Y.
2000-04-01
This paper proposes a capacity-constrained traffic assignment model for strategic transport planning in which the steady-state user equilibrium principle is extended for road networks with residual queues. Therefore, the road-exit capacity and the queuing effects can be incorporated into the strategic transport model for traffic forecasting. The proposed model is applicable to congested networks, particularly when the traffic demand exceeds the capacity of the network during the peak period. An efficient solution method is proposed for solving the steady-state traffic assignment problem with residual queues. A simple numerical example is then employed to demonstrate the application of the proposed model and solution method, while an example of a medium-sized arterial highway network in Sioux Falls, South Dakota, is used to test the applicability of the proposed solution to real problems.
Catalogue Creation for Space Situational Awareness with Optical Sensors
NASA Astrophysics Data System (ADS)
Hobson, T.; Clarkson, I.; Bessell, T.; Rutten, M.; Gordon, N.; Moretti, N.; Morreale, B.
2016-09-01
In order to safeguard the continued use of space-based technologies, effective monitoring and tracking of man-made resident space objects (RSOs) is paramount. The diverse characteristics, behaviours and trajectories of RSOs make space surveillance a challenging application of the discipline that is tracking and surveillance. When surveillance systems are faced with non-canonical scenarios, it is common for human operators to intervene while researchers adapt and extend traditional tracking techniques in search of a solution. A complementary strategy for improving the robustness of space surveillance systems is to place greater emphasis on the anticipation of uncertainty. Namely, give the system the intelligence necessary to autonomously react to unforeseen events and to intelligently and appropriately act on tenuous information rather than discard it. In this paper we build from our 2015 campaign and describe the progression of a low-cost intelligent space surveillance system capable of autonomously cataloguing and maintaining track of RSOs. It currently exploits robotic electro-optical sensors, high-fidelity state-estimation and propagation as well as constrained initial orbit determination (IOD) to intelligently and adaptively manage its sensors in order to maintain an accurate catalogue of RSOs. In a step towards fully autonomous cataloguing, the system has been tasked with maintaining surveillance of a portion of the geosynchronous (GEO) belt. Using a combination of survey and track-refinement modes, the system is capable of maintaining a track of known RSOs and initiating tracks on previously unknown objects. Uniquely, due to the use of high-fidelity representations of a target's state uncertainty, as few as two images of previously unknown RSOs may be used to subsequently initiate autonomous search and reacquisition. To achieve this capability, particularly within the congested environment of the GEO-belt, we use a constrained admissible region (CAR) to generate a plausible estimate of the unknown RSO's state probability density function and disambiguate measurements using a particle-based joint probability data association (JPDA) method. Additionally, the use of alternative CAR generation methods, incorporating catalogue-based priors, is explored and tested. We also present the findings of two field trials of an experimental system that incorporates these techniques. The results demonstrate that such a system is capable of autonomously searching for an RSO that was briefly observed days prior in a GEO-survey and discriminating it from the measurements of other previously catalogued RSOs.
Towards improving searches for optimal phylogenies.
Ford, Eric; St John, Katherine; Wheeler, Ward C
2015-01-01
Finding the optimal evolutionary history for a set of taxa is a challenging computational problem, even when restricting possible solutions to be "tree-like" and focusing on the maximum-parsimony optimality criterion. This has led to much work on using heuristic tree searches to find approximate solutions. We present an approach for finding exact optimal solutions that employs and complements the current heuristic methods for finding optimal trees. Given a set of taxa and a set of aligned sequences of characters, there may be subsets of characters that are compatible, and for each such subset there is an associated (possibly partially resolved) phylogeny with edges corresponding to each character state change. These perfect phylogenies serve as anchor trees for our constrained search space. We show that, for sequences with compatible sites, the parsimony score of any tree T is at least the parsimony score of the anchor trees plus the number of inferred changes between T and the anchor trees. As the maximum-parsimony optimality score is additive, the sum of the lower bounds on compatible character partitions provides a lower bound on the complete alignment of characters. This yields a region in the space of trees within which the best tree is guaranteed to be found; limiting the search for the optimal tree to this region can significantly reduce the number of trees that must be examined in a search of the space of trees. We analyze this method empirically using four different biological data sets as well as surveying 400 data sets from the TreeBASE repository, demonstrating the effectiveness of our technique in reducing the number of steps in exact heuristic searches for trees under the maximum-parsimony optimality criterion.
NASA Astrophysics Data System (ADS)
Miller, Steven
1998-03-01
A generic stochastic method is presented that rapidly evaluates numerical bulk flux solutions to the one-dimensional integrodifferential radiative transport equation, for coherent irradiance of optically anisotropic suspensions of nonspheroidal bioparticles, such as blood. As Fermat rays or geodesics enter the suspension, they evolve into a bundle of random paths or trajectories due to scattering by the suspended bioparticles. Overall, this can be interpreted as a bundle of Markov trajectories traced out by a "gas" of Brownian-like point photons being scattered and absorbed by the homogeneous distribution of uncorrelated cells in suspension. By considering the cumulative vectorial intersections of a statistical bundle of random trajectories through sets of interior data planes in the space containing the medium, the effective equivalent information content and behavior of the (generally unknown) analytical flux solutions of the radiative transfer equation rapidly emerge. The fluxes match the analytical diffuse flux solutions in the diffusion limit, which verifies the accuracy of the algorithm. The method is not constrained by the diffusion limit and gives correct solutions for conditions where diffuse solutions are not viable. Unlike conventional Monte Carlo and numerical techniques adapted from neutron transport or nuclear reactor problems that compute scalar quantities, this vectorial technique is fast, easily implemented, adaptable, and viable for a wide class of biophotonic scenarios. By comparison, other analytical or numerical techniques generally become unwieldy, lack viability, or are more difficult to utilize and adapt. Illustrative calculations are presented for blood media at monochromatic wavelengths in the visible spectrum.
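A stripped-down illustration of the trajectory-bundle idea, assuming a 1-D slab, isotropic scattering and illustrative coefficients (not the paper's vectorial algorithm): photons random-walk through the medium while signed crossings of interior data planes are tallied, giving a net-flux estimate at each plane.

```python
import numpy as np

def plane_fluxes(n_photons=5000, mu_s=10.0, mu_a=0.1, slab=1.0,
                 planes=(0.25, 0.5, 0.75), seed=1):
    """Random-walk estimate of the net forward flux through interior data
    planes of a slab; isotropic scattering, arbitrary units."""
    rng = np.random.default_rng(seed)
    mu_t = mu_s + mu_a
    net = np.zeros(len(planes))
    for _ in range(n_photons):
        z, cz = 0.0, 1.0                        # launch at z=0 moving inward
        while 0.0 <= z <= slab:
            z_new = z + cz * (-np.log(rng.random()) / mu_t)  # free path
            for i, p in enumerate(planes):      # signed crossings = net flux
                if (z - p) * (z_new - p) < 0.0:
                    net[i] += np.sign(cz)
            z = z_new
            if rng.random() < mu_a / mu_t:      # absorption event
                break
            cz = rng.uniform(-1.0, 1.0)         # isotropic: cos(theta) uniform
    return net / n_photons

print(plane_fluxes())   # net flux decays with depth, as diffusion predicts
```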
Optimal synchronization in space
NASA Astrophysics Data System (ADS)
Brede, Markus
2010-02-01
In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of "wire" available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.
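A minimal sketch of the trade-off optimization, assuming a simulated-annealing rewiring scheme, the Laplacian eigenratio as the synchronizability measure, and a wire-cost weight of our choosing (the abstract does not specify these details):

```python
import numpy as np

def eigenratio(A):
    """Synchronizability measure: lambda_N / lambda_2 of the graph Laplacian."""
    lam = np.linalg.eigvalsh(np.diag(A.sum(1)) - A)
    return lam[-1] / lam[1] if lam[1] > 1e-9 else np.inf  # inf if disconnected

def wire_length(A, pos):
    i, j = np.triu_indices_from(A, 1)
    return (A[i, j] * np.linalg.norm(pos[i] - pos[j], axis=1)).sum()

def anneal(n=20, n_links=40, beta=0.5, steps=3000, seed=0):
    """Rewire single links under simulated annealing to minimize
    eigenratio + beta * wire length for nodes scattered in a unit square."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))
    A = np.zeros((n, n))
    for i in range(n):                                  # ring: stay connected
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    while A.sum() < 2 * n_links:                        # add random links
        i, j = rng.integers(n, size=2)
        if i != j:
            A[i, j] = A[j, i] = 1.0
    cost = eigenratio(A) + beta * wire_length(A, pos)
    for step in range(steps):
        T = 1.0 - step / steps + 1e-3                   # linear cooling
        B = A.copy()
        on = np.argwhere(np.triu(B, 1) == 1)            # delete one link...
        i, j = on[rng.integers(len(on))]
        B[i, j] = B[j, i] = 0.0
        off = np.argwhere(np.triu(1 - B, 1) == 1)       # ...and add another
        k, l = off[rng.integers(len(off))]
        B[k, l] = B[l, k] = 1.0
        c = eigenratio(B) + beta * wire_length(B, pos)
        if c < cost or rng.random() < np.exp((cost - c) / T):
            A, cost = B, c
    return A, cost

A, cost = anneal()
print(cost)   # larger beta -> shorter links, heavier-tailed link-length statistics
```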
Carneiro, Gustavo; Georgescu, Bogdan; Good, Sara; Comaniciu, Dorin
2008-09-01
We propose a novel method for the automatic detection and measurement of fetal anatomical structures in ultrasound images. This problem offers a myriad of challenges, including: difficulty of modeling the appearance variations of the visual object of interest, robustness to speckle noise and signal dropout, and the large search space of the detection procedure. Previous solutions typically rely on the explicit encoding of prior knowledge and formulation of the problem as a perceptual grouping task solved through clustering or variational approaches. These methods are constrained by the validity of the underlying assumptions and are usually insufficient to capture the complex appearances of fetal anatomies. We propose a novel system for fast automatic detection and measurement of fetal anatomies that directly exploits a large database of expert-annotated fetal anatomical structures in ultrasound images. Our method learns automatically to distinguish between the appearance of the object of interest and the background by training a constrained probabilistic boosting tree classifier. This system is able to produce the automatic segmentation of several fetal anatomies using the same basic detection algorithm. We show results on fully automatic measurement of biparietal diameter (BPD), head circumference (HC), abdominal circumference (AC), femur length (FL), humerus length (HL), and crown rump length (CRL). Our approach is the first in the literature to deal with the HL and CRL measurements. Extensive experiments (with clinical validation) show that our system is, on average, close to the accuracy of experts in terms of segmentation and obstetric measurements. Finally, this system runs in under half a second on a standard dual-core PC.
Simulation of Constrained Musculoskeletal Systems in Task Space.
Stanev, Dimitar; Moustakas, Konstantinos
2018-02-01
This paper proposes an operational task space formalization of constrained musculoskeletal systems, motivated by its promising results in the field of robotics. The change of representation requires different algorithms for solving the inverse and forward dynamics simulation in the task space domain. We propose an extension to the direct marker control and an adaptation of the computed muscle control algorithms for solving the inverse kinematics and muscle redundancy problems, respectively. Experimental evaluation demonstrates that this framework is not only successful in dealing with the inverse dynamics problem, but also provides an intuitive way of studying and designing simulations, facilitating assessment prior to any experimental data collection. The incorporation of constraints in the derivation unveils an important extension of this framework toward addressing systems that use absolute coordinates and topologies that contain closed kinematic chains. Task space projection reveals a more intuitive encoding of the motion planning problem, allows for better correspondence between observed and estimated variables, provides the means to effectively study the role of kinematic redundancy, and most importantly, offers an abstract point of view and control, which can be advantageous toward further integration with high level models of the precommand level. Task-based approaches could be adopted in the design of simulation related to the study of constrained musculoskeletal systems.
Trajectory Design Strategies for the NGST L2 Libration Point Mission
NASA Technical Reports Server (NTRS)
Folta, David; Cooley, Steven; Howell, Kathleen; Bauer, Frank H.
2001-01-01
The Origins' Next Generation Space Telescope (NGST) trajectory design is addressed in light of improved methods for attaining constrained orbit parameters and their control at the exterior collinear libration point, L2. The use of a dynamical systems approach, state-space equations for initial libration orbit control, and optimization to achieve constrained orbit parameters are emphasized. The NGST trajectory design encompasses a direct transfer and orbit maintenance under a constant acceleration. A dynamical systems approach can be used to provide a biased orbit and stationkeeping maintenance method that incorporates the constraint of a single axis correction scheme.
NASA Astrophysics Data System (ADS)
Jones, A. G.; Afonso, J. C.
2015-12-01
The Earth comprises a single physico-chemical system that we interrogate from its surface and/or from space, making observations related to various physical and chemical parameters. A change in one of those parameters affects many of the others; for example, a change in velocity is almost always indicative of a concomitant change in density, which results in changes to elevation, gravity and geoid observations. Similarly, a change in oxide chemistry affects almost all physical parameters to a greater or lesser extent. We have now developed sophisticated tools to model/invert data in our individual disciplines to such an extent that we are obtaining high-resolution, robust models from our datasets. However, in the vast majority of cases the different datasets are modelled/inverted independently of each other, and often even without considering other data in a qualitative sense. The LitMod framework of Afonso and colleagues presents integrated inversion of geoscientific data to yield thermo-chemical models that are petrologically consistent and constrained. Input data can comprise any combination of elevation, geoid, surface heat flow, seismic surface-wave (Rayleigh and Love) data, receiver function data, and MT data. The basis of LitMod is characterization of the upper mantle in terms of five oxides in the CFMAS system and a thermal structure that is conductive to the LAB and convective along the adiabat below the LAB to the 410 km discontinuity. Candidate solutions are chosen from prior distributions of the oxides. For the crust, candidate solutions are chosen from distributions of crustal layering, velocity and density parameters. Those candidate solutions that fit the data within prescribed error limits are kept, and are used to establish broad posterior distributions from which new candidate solutions are chosen. Examples will be shown of applying this approach to data from the Kaapvaal Craton in South Africa and the Rae Craton in northern Canada. I will show that the MT data are the most discriminatory, requiring many millions of candidate solutions to be tested in order to sufficiently establish posterior distributions. In particular, the MT data require a layered lithosphere, whereas the other data can be fit with a single lithospheric layer, and the MT data are particularly sensitive to the depth to the LAB.
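The candidate accept/reject loop can be sketched as follows; `prior_draw` and `forward` are hypothetical stand-ins for the oxide/crustal priors and the LitMod forward models, and the toy demo simply recovers a mean from noisy data.

```python
import numpy as np

def rejection_sample(prior_draw, forward, data, sigma, n_keep=1000, tol=2.0):
    """Keep candidate models whose simulated data fit every observation to
    within `tol` standard deviations (schematic of the LitMod-style loop)."""
    kept = []
    while len(kept) < n_keep:
        m = prior_draw()                      # candidate oxide/thermal model
        pred = forward(m)                     # elevation, geoid, SHF, MT, ...
        if np.all(np.abs(pred - data) / sigma < tol):
            kept.append(m)
    return np.array(kept)

# Toy demo: recover a scalar from noisy data with a uniform prior.
rng = np.random.default_rng(0)
data = 3.0 + 0.1 * rng.standard_normal(5)
post = rejection_sample(lambda: rng.uniform(0, 10),
                        lambda m: np.full(5, m), data, sigma=0.1)
print(post.mean(), post.std())   # posterior concentrates near 3.0
```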
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvartatskyi, O. I., E-mail: alex.chvartatskyy@gmail.com; Sydorenko, Yu. M., E-mail: y-sydorenko@franko.lviv.ua
We introduce a new bidirectional generalization of the (2+1)-dimensional k-constrained Kadomtsev-Petviashvili (KP) hierarchy ((2+1)-BDk-cKPH). This new hierarchy generalizes the (2+1)-dimensional k-cKP hierarchy and the (t_A, τ_B) and (γ_A, σ_B) matrix hierarchies. The (2+1)-BDk-cKPH contains a new matrix (1+1)-k-constrained KP hierarchy. Some members of the (2+1)-BDk-cKPH are also listed. In particular, it contains matrix generalizations of Davey-Stewartson (DS) systems, the (2+1)-dimensional modified Korteweg-de Vries equation and the Nizhnik equation. The (2+1)-BDk-cKPH also includes new matrix (2+1)-dimensional generalizations of the Yajima-Oikawa and Melnikov systems. A binary Darboux transformation dressing method is also proposed for the construction of exact solutions of equations from the (2+1)-BDk-cKPH. As an example, the exact form of multi-soliton solutions for a vector generalization of the DS system is given.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2018-05-01
In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for addressing the non-point source pollution caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution, so that deviations in solutions caused by unrealistic parameter assumptions are avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
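The key step, converting a chance constraint with a log-normal random right-hand side into a deterministic equivalent via its quantile, can be written as follows; the numbers are illustrative, not the paper's Erhai Lake parameters.

```python
import numpy as np
from scipy.stats import norm

def lognormal_rhs(mu, sigma, p):
    """p-quantile of a log-normal right-hand side: the constraint
    a.x <= b with b ~ LogNormal(mu, sigma) holds with probability
    at least 1 - p  iff  a.x <= exp(mu + sigma * Phi^{-1}(p))."""
    return np.exp(mu + sigma * norm.ppf(p))

# Illustration: allowable load at several constraint-violation levels p_i.
for p in (0.01, 0.05, 0.10):
    print(p, lognormal_rhs(mu=2.0, sigma=0.5, p=p))
# Smaller p (higher reliability) tightens the constraint, trading system
# economy against the risk of violating water-quality limits.
```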
Vehicle routing problem with time windows using natural inspired algorithms
NASA Astrophysics Data System (ADS)
Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.
2018-03-01
The distribution of goods needs a strategy that minimizes the total cost of operational activities while satisfying several constraints, namely the capacity of the vehicles and the service time windows of the customers. The resulting Vehicle Routing Problem with Time Windows (VRPTW) is a complex constrained problem. This paper proposes nature-inspired algorithms for handling the constraints of the VRPTW, based on the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing, with the worst Bat Algorithm solution replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. Computational results show that these algorithms perform well in minimizing the total distance, and that larger populations yield better computational performance. The improved Cat Swarm Optimization with Crow Search performs better than the hybridization of Bat Algorithm and Simulated Annealing when dealing with larger data sets.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
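A minimal prediction-correction tracker for a time-varying problem, assuming a toy nonnegatively constrained quadratic and step sizes of our own choosing (the paper's methods are more general):

```python
import numpy as np

def track(T=200, h=0.05, alpha=0.5, n_corr=3):
    """Track argmin_{x >= 0} 0.5 * ||x - c(t)||^2 as c(t) drifts in time."""
    c = lambda t: 1.5 + np.array([np.sin(t), np.cos(t)])   # moving optimum
    x = np.zeros(2)
    err = []
    for k in range(T):
        t = k * h
        # Prediction: grad f(x, t) = x - c(t), so its time drift is -c'(t);
        # with Hessian = I the predictor is a plain Euler step along c'(t).
        c_dot = (c(t + h) - c(t)) / h
        x = np.maximum(x + h * c_dot, 0.0)                 # stay feasible
        # Correction: a few projected gradient steps at the new time.
        for _ in range(n_corr):
            x = np.maximum(x - alpha * (x - c(t + h)), 0.0)
        err.append(np.linalg.norm(x - c(t + h)))           # optimum is interior
    return err

print(max(track()[10:]))   # small, bounded tracking error
```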
Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties
NASA Astrophysics Data System (ADS)
Lazzaro, D.; Loli Piccolomini, E.; Zama, F.
2016-10-01
This work addresses the problem of Magnetic Resonance Image Reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), where the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum, because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the nonconvex l0 approximation and penalization parameters, by means of a continuation technique, allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Some numerical experiments performed on MRI sub-sampled data show the efficiency of the algorithm and the accuracy of the solution.
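The reweighting idea can be illustrated in 1-D: each outer pass re-linearizes the nonconvex penalty around the current iterate and solves a quadratic majorizer exactly. A simplified IRLS sketch with illustrative parameters, not the FNCR algorithm itself:

```python
import numpy as np

def reweighted_tv(y, lam=1.0, eps=1e-3, outer=10):
    """IRLS sketch of nonconvex TV denoising: each pass solves the quadratic
    majorizer (I + 2*lam*D^T W D) x = y with weights W = 1/(|D x_old| + eps),
    which re-linearizes the nonconvex penalty around the current iterate."""
    n = y.size
    D = np.diff(np.eye(n), axis=0)           # discrete gradient operator
    x = y.copy()
    for _ in range(outer):
        w = 1.0 / (np.abs(D @ x) + eps)      # reweighting from current iterate
        x = np.linalg.solve(np.eye(n) + 2 * lam * D.T @ (w[:, None] * D), y)
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.3], 50)       # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)
print(np.abs(reweighted_tv(noisy) - clean).mean())  # well below the noise level
```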
Normal Modes of a Lagrangian System Constrained in a Potential Well.
1983-12-01
Report AD-A137 948, Mathematics Research Center, University of Wisconsin-Madison, December 1983. The scanned abstract is garbled beyond reliable recovery; the surviving fragments concern periodic solutions and normal modes of a Lagrangian system constrained in a potential well.
Thunder-induced ground motions: 2. Site characterization
NASA Astrophysics Data System (ADS)
Lin, Ting-L.; Langston, Charles A.
2009-04-01
Thunder-induced ground motion, near-surface refraction, and Rayleigh wave dispersion measurements were used to constrain near-surface velocity structure at an unconsolidated sediment site. We employed near-surface seismic refraction measurements to first define ranges for site structure parameters. Air-coupled and hammer-generated Rayleigh wave dispersion curves were used to further constrain the site structure by a grid search technique. The acoustic-to-seismic coupling is modeled as an incident plane P wave in a fluid half-space impinging into a solid layered half-space. We found that the infrasound-induced ground motions constrained substrate velocities and the average thickness and velocities of the near-surface layer. The addition of higher-frequency near-surface Rayleigh waves produced tighter constraints on the near-surface velocities. This suggests that natural or controlled airborne pressure sources can be used to investigate the near-surface site structures for earthquake shaking hazard studies.
Imparting Desired Attributes by Optimization in Structural Design
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2003-01-01
Commonly available optimization methods typically produce a single optimal design as a constrained minimum of a particular objective function. However, in engineering design practice it is quite often important to explore as much of the design space as possible with respect to many attributes to find out what behaviors are possible and not possible within the initially adopted design concept. The paper shows that the very simple method of the sum of objectives is useful for such exploration. By geometrical argument it is demonstrated that if every weighting coefficient is allowed to change its magnitude and its sign, then the method returns a set of designs that are all feasible, diverse in their attributes, and include the Pareto and non-Pareto solutions, at least for convex cases. Numerical examples in the paper include a case of an aircraft wing structural box with thousands of degrees of freedom and constraints, and over 100 design variables, whose attributes are structural mass, volume, displacement, and frequency. The method is inherently suitable for parallel, coarse-grained implementation that enables exploration of the design space in the elapsed time of a single structural optimization.
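A toy version of the signed-weighted-sum exploration, assuming two stand-in attributes and a linear feasibility constraint of our own invention:

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in attributes: "mass" f1 and "displacement" f2 of a sized member,
# with a feasibility constraint g(x) >= 0. Each run minimizes a weighted
# sum whose coefficients may take either sign, sweeping out feasible
# designs beyond the Pareto front.
f1 = lambda x: x[0] ** 2 + 0.5 * x[1] ** 2
f2 = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
cons = {"type": "ineq", "fun": lambda x: 4 - x[0] - x[1]}  # feasibility

rng = np.random.default_rng(0)
designs = []
for _ in range(50):
    w = rng.uniform(-1, 1, size=2)                  # signed weights
    res = minimize(lambda x: w[0] * f1(x) + w[1] * f2(x),
                   x0=rng.uniform(0, 2, size=2), constraints=cons,
                   bounds=[(0, 3), (0, 3)])         # bounds keep it well-posed
    if res.success:
        designs.append((f1(res.x), f2(res.x)))
print(len(designs), "feasible designs spanning the attribute space")
```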
Mission and Implementation of an Affordable Lunar Return
NASA Technical Reports Server (NTRS)
Spudis, Paul; Lavoie, Anthony
2010-01-01
We present an architecture that establishes the infrastructure for routine space travel by taking advantage of the Moon's resources, proximity and accessibility. We use robotic assets on the Moon that are teleoperated from Earth to prospect, test, demonstrate and produce water from lunar resources before human arrival. This plan is affordable, flexible and not tied to any specific launch vehicle solution. Individual surface pieces are small, permitting them to be deployed separately on small launchers or combined together on single large launchers. Schedule is our free variable; even under highly constrained budgets, the architecture permits this program to be continuously pursued using small, incremental, cumulative steps. The end stage is a fully functional, human-tended lunar outpost capable of producing 150 metric tonnes of water per year, enough to export water from the Moon and create a transportation system that allows routine access to all of cislunar space. This cost-effective lunar architecture advances technology and builds a sustainable transportation infrastructure. By eliminating the need to launch everything from the surface of the Earth, we fundamentally change the paradigm of spaceflight.
Signal decomposition for surrogate modeling of a constrained ultrasonic design space
NASA Astrophysics Data System (ADS)
Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.
2018-04-01
The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model used to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model from a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
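A sketch of the dimensionality-reduction step: a Gaussian chirplet dictionary and greedy matching pursuit compress a waveform to a handful of (amplitude, center, frequency, chirp rate) parameters, in whose low-dimensional space a surrogate can interpolate. The dictionary granularity and test signal are illustrative.

```python
import numpy as np

def chirplet(t, tc, f, c, s):
    """Unit-norm Gaussian chirplet: envelope at tc, width s, start
    frequency f, chirp rate c."""
    g = (np.exp(-0.5 * ((t - tc) / s) ** 2)
         * np.cos(2 * np.pi * (f + 0.5 * c * (t - tc)) * (t - tc)))
    return g / np.linalg.norm(g)

def matching_pursuit(y, t, dictionary, n_atoms=3):
    """Greedily extract chirplet parameters from a waveform."""
    r, atoms = y.copy(), []
    for _ in range(n_atoms):
        _, best = max((abs(np.dot(r, chirplet(t, *p))), p) for p in dictionary)
        a = np.dot(r, chirplet(t, *best))
        r = r - a * chirplet(t, *best)          # peel off the best atom
        atoms.append((a, *best))
    return atoms, r

t = np.linspace(0, 1, 1000)
y = 2.0 * chirplet(t, 0.4, 30.0, 20.0, 0.05)
grid = [(tc, f, c, 0.05) for tc in np.arange(0.1, 0.9, 0.1)
        for f in (20.0, 30.0, 40.0) for c in (0.0, 20.0)]
atoms, resid = matching_pursuit(y, t, grid, n_atoms=1)
print(atoms[0][:2], np.linalg.norm(resid))  # amplitude ~2 recovered at tc=0.4
```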
Aircraft wake vortex measurements at Denver International Airport
DOT National Transportation Integrated Search
2004-05-10
Airport capacity is constrained, in part, by spacing requirements associated with the wake vortex hazard. NASA's Wake Vortex Avoidance Project has a goal to establish the feasibility of reducing this spacing while maintaining safety. Passive acoustic...
System Synthesis in Preliminary Aircraft Design using Statistical Methods
NASA Technical Reports Server (NTRS)
DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.
1996-01-01
This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than those generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point-design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions that are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high-speed civil transport. Fundamental goals of the methodology, then, are to introduce higher-fidelity disciplinary analyses to conceptual aircraft synthesis and to provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
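The response-surface step amounts to fitting a full quadratic polynomial to DOE samples by least squares; `expensive_analysis` below is a toy stand-in for the aerodynamic analyses, and the DOE is a simple 3x3 factorial.

```python
import numpy as np

def quad_features(X):
    """Design matrix for a full quadratic response surface in two variables:
    [1, x1, x2, x1^2, x2^2, x1*x2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x2**2, x1 * x2])

# DOE: a 3x3 full factorial over the (normalized) design ranges.
expensive_analysis = lambda x: 1 + 2*x[0] - x[1] + 0.5*x[0]*x[1] + x[1]**2
grid = np.array([[a, b] for a in (-1, 0, 1) for b in (-1, 0, 1)], float)
y = np.array([expensive_analysis(x) for x in grid])

beta, *_ = np.linalg.lstsq(quad_features(grid), y, rcond=None)
print(np.round(beta, 3))   # recovers [1, 2, -1, 0, 1, 0.5]

# The cheap polynomial now stands in for the analysis during system-level
# constrained optimization.
predict = lambda x: quad_features(np.atleast_2d(np.asarray(x, float))) @ beta
print(predict([0.5, 0.5]))
```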
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
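The core NSMC step can be sketched with an SVD of the sensitivity (Jacobian) matrix: perturbations confined to its null space leave the simulated observations unchanged to first order, so each random null-space vector yields another calibration-constrained parameter field. Matrix sizes and the rank threshold below are illustrative.

```python
import numpy as np

def nsmc_fields(J, p_cal, n_real=1000, scale=1.0, tol=1e-8, seed=0):
    """Generate calibration-constrained parameter sets by perturbing the
    calibrated parameters p_cal only within the null space of the
    sensitivity matrix J (rows: observations, cols: parameters)."""
    U, s, Vt = np.linalg.svd(J, full_matrices=True)
    rank = int(np.sum(s > tol * s[0]))       # dimensionality of solution space
    V_null = Vt[rank:].T                     # basis for the null space
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_real, V_null.shape[1])) * scale
    return p_cal + xi @ V_null.T             # each row: one parameter field

# Toy check: 3 observations, 8 parameters -> 5-dimensional null space.
rng = np.random.default_rng(1)
J = rng.standard_normal((3, 8))
p_cal = rng.standard_normal(8)
P = nsmc_fields(J, p_cal, n_real=5)
print(np.abs(J @ (P - p_cal).T).max())       # ~0: model fit preserved to 1st order
```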
Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee
2015-01-01
In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
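A convex stand-in for the constrained estimation described above: least-squares mixture coefficients under nonnegativity and sum-to-one constraints, solved by projected gradient with a Euclidean simplex projection (the paper's measurement-model weighting is not reproduced here).

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def mixture_weights(B, y, iters=2000):
    """min_w ||B w - y||^2 s.t. w in the simplex; columns of B are the basis
    SAXS intensity profiles of the candidate conformations."""
    step = 1.0 / np.linalg.norm(B, 2) ** 2    # 1/Lipschitz constant
    w = np.full(B.shape[1], 1.0 / B.shape[1])
    for _ in range(iters):
        w = project_simplex(w - step * (B.T @ (B @ w - y)))
    return w

rng = np.random.default_rng(0)
B = rng.random((100, 5))                      # 5 candidate conformations
w_true = np.array([0.6, 0.3, 0.1, 0.0, 0.0])
y = B @ w_true + 0.001 * rng.standard_normal(100)
print(np.round(mixture_weights(B, y), 2))     # close to w_true
```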
Mang, Andreas; Biros, George
2017-01-01
We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraints are a scalar transport equation. We use a pseudospectral discretization in space and a second-order accurate semi-Lagrangian time-stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20× speedup for a two-dimensional, real-world multi-subject medical image registration problem.
Reinforcement learning solution for HJB equation arising in constrained optimal control problem.
Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong
2015-11-01
The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zhang, Xiaodong; Huang, Guo H; Nie, Xianghui
2009-12-20
Nonpoint source (NPS) water pollution is one of the most serious environmental issues, especially within agricultural systems. This study proposes a robust chance-constrained fuzzy possibilistic programming (RCFPP) model for water quality management within an agricultural system, where solutions for farming area, manure/fertilizer application amount, and livestock husbandry size under different scenarios are obtained and interpreted. Through improving upon the existing fuzzy possibilistic programming, fuzzy robust programming and chance-constrained programming approaches, the RCFPP can effectively reflect the complex system features under uncertainty, where the implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original fuzzy constraints, the RCFPP enhances the robustness of the optimization processes and resulting solutions. The results of the case study indicate that useful information can be obtained through the proposed RCFPP model for providing feasible decision schemes for different agricultural activities under different scenarios (combinations of different p-necessity and p(i) levels). A p-necessity level represents the certainty or necessity degree of the imprecise objective function, while a p(i) level means the probability at which the constraints will be violated. A desire to acquire high agricultural income would decrease the certainty degree of the event that maximization of the objective be satisfied, and potentially violate water management standards; a willingness to accept low agricultural income runs the risk of potential system failure. The decision variables under combined p-necessity and p(i) levels are useful for decision makers to justify and/or adjust decision schemes for the agricultural activities through incorporation of their implicit knowledge. The results also suggest that this developed approach is applicable to many practical problems where fuzzy and probabilistic distribution information simultaneously exist.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jae-Min; Irwin, Patrick G. J.; Fletcher, Leigh N.
A number of observations have shown that Rayleigh scattering by aerosols dominates the transmission spectrum of HD 189733b at wavelengths shortward of 1 μm. In this study, we retrieve a range of aerosol distributions consistent with transmission spectroscopy between 0.3-24 μm that were recently re-analyzed by Pont et al. To constrain the particle size and the optical depth of the aerosol layer, we investigate the degeneracies between aerosol composition, temperature, planetary radius, and molecular abundances that prevent unique solutions for transit spectroscopy. Assuming that the aerosol is composed of MgSiO3, we suggest that a vertically uniform aerosol layer over all pressures with a monodisperse particle size smaller than about 0.1 μm and an optical depth in the range 0.002-0.02 at 1 μm provides statistically meaningful solutions for the day/night terminator regions of HD 189733b. Generally, we find that a uniform aerosol layer provides adequate fits to the data if the optical depth is less than 0.1 and the particle size is smaller than 0.1 μm, irrespective of the atmospheric temperature, planetary radius, aerosol composition, and gaseous molecules. Strong constraints on the aerosol properties are provided by spectra at wavelengths shortward of 1 μm as well as longward of 8 μm, if the aerosol material has absorption features in this region. We show that these are the optimal wavelengths for quantifying the effects of aerosols, which may guide the design of future space observations. The present investigation indicates that the current data offer sufficient information to constrain some of the aerosol properties of HD 189733b, but the chemistry in the terminator regions remains uncertain.
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not included in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., earthquakes for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
The post-buckling behavior of a beam constrained by springy walls
NASA Astrophysics Data System (ADS)
Katz, Shmuel; Givli, Sefi
2015-05-01
The post-buckling behavior of a beam subjected to lateral constraints is of practical importance in a variety of applications, such as stent procedures, filopodia growth in living cells, endoscopic examination of internal organs, and deep drilling. Even though in reality the constraining surfaces are often deformable, the literature has focused mainly on rigid and fixed constraints. In this paper, we make a first step to bridge this gap through a theoretical and experimental examination of the post-buckling behavior of a beam constrained by a fixed wall and a springy wall, i.e. one that moves laterally against a spring. The response exhibited by the proposed system is much richer compared to that of the fixed-wall system, and can be tuned by choosing the spring stiffness. Based on small-deformation analysis, we obtained closed-form analytical solutions and quantitative insights. The accuracy of these results was examined by comparison to large-deformation analysis. We concluded that the closed-form solution of the small-deformation analysis provides an excellent approximation, except in the highest attainable mode. There, the system exhibits non-intuitive behavior and non-monotonous force-displacement relations that can only be captured by large-deformation theories. Although closed-form solutions cannot be derived for the large-deformation analysis, we were able to reveal general properties of the solution. In the last part of the paper, we present experimental results that demonstrate various features obtained from the theoretical analysis.
ERIC Educational Resources Information Center
Jolin, Michele; Schmitz, Paul; Seldon, Willa
2012-01-01
Communities face powerful challenges--a high-school dropout epidemic, youth unemployment, teen pregnancy--that require powerful solutions. In a climate of increasingly constrained resources, those solutions must help communities to achieve more with less. A new kind of community collaborative--an approach that aspires to significant,…
JWST Wavefront Control Toolbox
NASA Technical Reports Server (NTRS)
Shin, Shahram Ron; Aronstein, David L.
2011-01-01
A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing a cost function, and can be used for either constrained or unconstrained optimization. The control process involves 1 to 7 degrees of freedom of perturbation per segment of the primary mirror, in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed based on a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.
Space Science for the Third Millennium
NASA Technical Reports Server (NTRS)
Frewing, Kent
1996-01-01
As NASA approaches the beginning of its fifth decade in 1998, and as the calendar approaches the beginning of its third millennium, America's civilian space agency is changing its historic ideas about conducting space science so that it will still be able to perform the desired scientific studies in an era of constrained NASA budgets.
Solving LP Relaxations of Large-Scale Precedence Constrained Problems
NASA Astrophysics Data System (ADS)
Bienstock, Daniel; Zuckerberg, Mark
We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
The design of multirate digital control systems
NASA Technical Reports Server (NTRS)
Berg, M. C.
1986-01-01
The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are then compared in the context of the two design problems.
Pumping tests in non-uniform aquifers - the linear strip case
Butler, J.J.; Liu, W.Z.
1991-01-01
Many pumping tests are performed in geologic settings that can be conceptualized as a linear infinite strip of one material embedded in a matrix of differing flow properties. A semi-analytical solution is presented to aid the analysis of drawdown data obtained from pumping tests performed in settings that can be represented by such a conceptual model. Integral transform techniques are employed to obtain a solution in transform space that can be numerically inverted to real space. Examination of the numerically transformed solution reveals several interesting features of flow in this configuration. If the transmissivity of the strip is much higher than that of the matrix, linear and bilinear flow are the primary flow regimes during a pumping test. If the contrast between matrix and strip properties is not as extreme, then radial flow should be the primary flow mechanism. Sensitivity analysis is employed to develop insight into the controls on drawdown in this conceptual model and to demonstrate the importance of temporal and spatial placement of observations. Changes in drawdown are sensitive to the transmissivity of the strip for a limited time duration. After that time, only the total drawdown remains a function of strip transmissivity. In the case of storativity, both the total drawdown and changes in drawdown are sensitive to the storativity of the strip for a time of quite limited duration. After that time, essentially no information can be gained about the storage properties of the strip from drawdown data. An example analysis is performed using data previously presented in the literature to demonstrate the viability of the semi-analytical solution and to illustrate a general procedure for analysis of drawdown data in complex geologic settings. This example reinforces the importance of observation well placement and the time of data collection in constraining parameter correlation, a major source of the uncertainty that arises in the parameter estimation procedure. © 1991.
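The numerical inversion from transform (Laplace) space to real time can be done with the Gaver-Stehfest algorithm, a common choice for such semi-analytical drawdown solutions (the abstract does not state which inversion scheme the authors use); the sketch below is verified on a transform with a known inverse.

```python
from math import factorial, log, exp

def stehfest(F, t, N=12):
    """Gaver-Stehfest inversion of a Laplace-space solution F(s) at time t
    (N even; 10-16 is typical for smooth, well-behaved curves)."""
    ln2_t = log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        V = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            V += (j ** (N // 2) * factorial(2 * j)) / (
                factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        total += (-1) ** (k + N // 2) * V * F(k * ln2_t)
    return ln2_t * total

# Sanity check on a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t).
for t in (0.5, 1.0, 2.0):
    print(t, stehfest(lambda s: 1.0 / (s + 1.0), t), exp(-t))
```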
Local and average structure of Mn- and La-substituted BiFeO3
NASA Astrophysics Data System (ADS)
Jiang, Bo; Selbach, Sverre M.
2017-06-01
The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.
Control of the constrained planar simple inverted pendulum
NASA Technical Reports Server (NTRS)
Bavarian, B.; Wyman, B. F.; Hemami, H.
1983-01-01
Control of a constrained planar inverted pendulum by eigenstructure assignment is considered. Linear feedback is used to stabilize and decouple the system in such a way that specified subspaces of the state space are invariant for the closed-loop system. The effectiveness of the feedback law is tested by digital computer simulation. Pre-compensation by an inverse plant is used to improve performance.
How to Test the SME with Space Missions?
NASA Technical Reports Server (NTRS)
Hees, A.; Lamine, B.; Le Poncin-Lafitte, C.; Wolf, P.
2013-01-01
In this communication, we focus on possibilities to constrain SME coefficients using Cassini and Messenger data. We present simulations of radio science observables within the framework of the SME, identify the linear combinations of SME coefficients the observations depend on and determine the sensitivity of these measurements to the SME coefficients. We show that these datasets are very powerful for constraining SME coefficients.
Forward modeling of gravity data using geostatistically generated subsurface density variations
Phelps, Geoffrey
2016-01-01
Using geostatistical models of density variations in the subsurface, constrained by geologic data, forward models of gravity anomalies can be generated by discretizing the subsurface and calculating the cumulative effect of each cell (pixel). The results of such stochastically generated forward gravity anomalies can be compared with the observed gravity anomalies to find density models that match the observed data. These models have an advantage over forward gravity anomalies generated using polygonal bodies of homogeneous density because generating numerous realizations explores a larger region of the solution space. The stochastic modeling can be thought of as dividing the forward model into two components: that due to the shape of each geologic unit and that due to the heterogeneous distribution of density within each geologic unit. The modeling demonstrates that the internally heterogeneous distribution of density within each geologic unit can contribute significantly to the resulting calculated forward gravity anomaly. Furthermore, the stochastic models match observed statistical properties of geologic units, the solution space is more broadly explored by producing a suite of successful models, and the likelihood of a particular conceptual geologic model can be compared. The Vaca Fault near Travis Air Force Base, California, can be successfully modeled as a normal or strike-slip fault, with the normal fault model being slightly more probable. It can also be modeled as a reverse fault, although this structural geologic configuration is highly unlikely given the realizations we explored.
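The forward calculation, accumulating the vertical gravity effect of each discretized cell, can be sketched by treating cells as point masses; the slab geometry and the stand-in "geostatistical" densities below are illustrative, not the Vaca Fault model.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_anomaly(stations, cells, rho, cell_vol):
    """Vertical gravity at surface stations from subsurface cells treated as
    point masses. stations: (ns,3); cells: (nc,3) centers (z positive down);
    rho: (nc,) density contrasts in kg/m^3. Returns microGal."""
    gz = np.zeros(len(stations))
    for c, r in zip(cells, rho):
        d = c - stations                        # vectors station -> cell
        dist = np.linalg.norm(d, axis=1)
        gz += G * r * cell_vol * d[:, 2] / dist**3
    return gz * 1e8                             # m/s^2 -> microGal

# One stochastic realization: a buried slab with heterogeneous density.
rng = np.random.default_rng(0)
xs = np.arange(0.0, 1000.0, 50.0)
stations = np.column_stack([xs, np.zeros_like(xs), np.zeros_like(xs)])
cx, cz = np.meshgrid(np.arange(400, 600, 20), np.arange(100, 200, 20))
cells = np.column_stack([cx.ravel(), np.zeros(cx.size), cz.ravel()])
rho = 300.0 + 50.0 * rng.standard_normal(cells.shape[0])  # heterogeneous contrast
print(gravity_anomaly(stations, cells, rho, cell_vol=20.0**3).max())
```

Repeating the last three lines with new density realizations gives the suite of forward anomalies to compare against the observed data.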
Back-illuminated large area frame transfer CCDs for space-based hyper-spectral imaging applications
NASA Astrophysics Data System (ADS)
Philbrick, Robert H.; Gilmore, Angelo S.; Schrein, Ronald J.
2016-07-01
Standard offerings of large area, back-illuminated full frame CCD sensors are available from multiple suppliers and they continue to be commonly deployed in ground- and space-based applications. By comparison, the availability of large area frame transfer CCDs is sparse, with the accompanying 2x increase in die area no doubt being a contributing factor. Modern back-illuminated CCDs yield very high quantum efficiency in the 290 to 400 nm band, a wavelength region of great interest in space-based instruments studying atmospheric phenomena. In fast-framing (e.g., 10-20 Hz), space-based applications such as hyper-spectral imaging, the use of a mechanical shutter to block incident photons during readout can prove costly and lower instrument reliability. Large area, all-digital visible CMOS sensors with integrate-while-read functionality have emerged as an alternative to CCDs; but, even after factoring in the reduced complexity and cost of support electronics, the present cost to implement such novel sensors is prohibitive for cost-constrained missions. Hence, there continues to be a niche set of applications where large area, back-illuminated frame transfer CCDs with high UV quantum efficiency, high frame rate, high full well, and low noise provide an advantageous solution. To address this need, a family of large area frame transfer CCDs has been developed that includes 2048 (columns) x 256 (rows) (FT4), 2048 x 512 (FT5), and 2048 x 1024 (FT6) full frame transfer CCDs, and a 2048 x 1024 (FT7) split-frame transfer CCD. Each wafer contains 4 FT4, 2 FT5, 2 FT6, and 2 FT7 die. The designs have undergone radiation and accelerated life qualification, and the electro-optical performance of these CCDs over the wavelength range of 290 to 900 nm is discussed.
Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data
NASA Astrophysics Data System (ADS)
Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam
2018-04-01
Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models, e.g., by k-SVD sparse learning or the traditional Principal Component Analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
Fast alternating projection methods for constrained tomographic reconstruction
Liu, Li; Han, Yongxin
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegative constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections or POCS (FS-POCS) to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than through empirical trial-and-error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of bounded TV. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance on reconstruction speed, image quality and quantification. PMID:28253298
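A two-set POCS illustration: alternating projections onto the nonnegativity set and an exact data-fidelity set (FS-POCS adds bounded-TV and bounded-error sets in the same spirit). Problem sizes are toy, not CT-scale.

```python
import numpy as np

def pocs(A, b, iters=300):
    """Alternate projections onto C1 = {x >= 0} and the data-fidelity set
    C2 = {x : A x = b}; the iterates converge into the intersection."""
    Apinv = np.linalg.pinv(A)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = np.maximum(x, 0.0)           # project onto C1 (nonnegativity)
        x = x - Apinv @ (A @ x - b)      # project onto C2 (exact data fit)
    return x

rng = np.random.default_rng(0)
x_true = np.abs(rng.standard_normal(50))       # a nonnegative "image"
A = rng.standard_normal((30, 50))              # underdetermined measurements
x = pocs(A, A @ x_true)
print(np.linalg.norm(A @ x - A @ x_true), x.min())  # ~0 residual, tiny negatives
```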
Network sensitivity solutions for regional moment-tensor inversions
Ford, Sean R.; Dreger, Douglas S.; Walter, William R.
2010-09-20
Well-resolved moment-tensor solutions reveal information about the sources of seismic waves. In this paper, we introduce a new way of assessing confidence in the regional full moment-tensor inversion via the introduction of the network sensitivity solution (NSS). The NSS takes into account the unique station distribution, frequency band, and signal-to-noise ratio of a given event scenario. The NSS compares both a hypothetical pure source (for example, an explosion or an earthquake) and the actual data with several thousand sets of synthetic data from a uniform distribution of all possible sources. The comparison with a hypothetical pure source provides the theoretically best-constrained source-type distribution for a given set of stations; with it, one can determine whether further analysis with the data is warranted. The NSS that employs the actual data gives a direct comparison of all other source types with the best-fit source. In this way, one can choose a threshold level of fit within which the solution is comfortably constrained. The method is tested on the well-recorded nuclear test, JUNCTION, at the Nevada Test Site. Sources that fit comparably well to a hypothetical pure explosion recorded with no noise at the JUNCTION data stations have a large volumetric component and are not described well by a double-couple (DC) source. The NSS using the real data from JUNCTION is even more tightly constrained to an explosion because the data contain some energy that precludes fitting with any type of deviatoric source. We also calculate the NSS for the October 2006 North Korea test and a nearby earthquake, where the station coverage is poor and the event magnitude is small. As a result, the earthquake solution is very well fit by a DC source, and the best-fit solution to the nuclear test (Mw 4.1) is dominantly explosive.
Rapid Slewing of Flexible Space Structures
2015-09-01
…axis gimbal with elastic joints. The performance of the system can be enhanced by designing antenna maneuvers in which the flexible effects are properly constrained, thus… the effects of the nonlinearities so the vibrational motion can be constrained for a time-optimal slew. It is shown that by constructing an…
Experimental Evaluation of Unicast and Multicast CoAP Group Communication
Ishaq, Isam; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet
2016-01-01
The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. PMID:27455262
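For readers unfamiliar with CoAP, a minimal unicast GET might look like the sketch below, assuming the Python aiocoap library; the address and resource path are hypothetical, and the multicast variant discussed in the article would instead target an IPv6 multicast group address.

import asyncio
from aiocoap import Context, Message, GET

async def main():
    # Unicast CoAP GET to a hypothetical sensor resource.
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://[2001:db8::1]/sensors/temperature")
    response = await ctx.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(main())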
Lunar Heat Flux Measurements Enabled by a Microwave Radiometer Aboard the Deep Space Gateway
NASA Astrophysics Data System (ADS)
Siegler, M.; Ruf, C.; Putzig, N.; Morgan, G.; Hayne, P.; Paige, D.; Nagihara, S.; Weber, R.
2018-02-01
We would like to present a concept to use the Deep Space Gateway as a platform for constraining the geothermal heat production, surface and near-surface rocks, and dielectric properties of the Moon from orbit with passive microwave radiometry.
NASA Technical Reports Server (NTRS)
1992-01-01
Summary charts of the following topics are presented: the Percentage of Critical Questions in Constrained and Robust Programs; the Executive Committee and AMAC Disposition of Critical Questions for Constrained and Robust Programs; and the Requirements for Ground-based Research and Flight Platforms for Constrained and Robust Programs. Data tables are also presented and cover the following: critical questions from all Life Sciences Division Discipline Science Plans; critical questions listed by category and criticality; all critical questions which require ground-based research; critical questions that would utilize spacelabs, listed by category and criticality; critical questions that would utilize Space Station Freedom (SSF), listed by category and criticality; critical questions that would utilize the SSF Centrifuge facility, listed by category and criticality; critical questions that would utilize a Moon base, listed by category and criticality; critical questions that would utilize robotic missions, listed by category and criticality; critical questions that would utilize free flyers, listed by category and criticality; and critical questions by deliverables.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.
2012-01-01
We developed a framework and the mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a system that is much smaller in search space size when compared to the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer programming into the more manageable problem of searching in a continuous space. We proposed to solve the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with larger numbers of constraints and networks can be easily adapted and solved.
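The two-stage strategy (a global heuristic seeding a gradient-based polisher) can be sketched as follows; this is a toy stand-in, not the authors' scheduler: the cost function, bounds, constraint and all constants are invented, with a bare-bones particle swarm feeding scipy's SLSQP.

import numpy as np
from scipy.optimize import minimize

def cost(x):
    # Toy multimodal stand-in for the scheduling cost.
    return np.sum(x**2) + 10 * np.sum(np.sin(3 * x) ** 2)

def pso(f, dim, n=30, iters=100, lo=-5.0, hi=5.0, seed=0):
    # Bare-bones particle swarm with a global-best topology.
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
    p, pf = x.copy(), np.apply_along_axis(f, 1, x)
    g = p[pf.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (p - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pf
        p[better], pf[better] = x[better], fx[better]
        g = p[pf.argmin()]
    return g

x0 = pso(cost, dim=4)                       # stage 1: global search
res = minimize(cost, x0, method="SLSQP",    # stage 2: local polish
               constraints=[{"type": "ineq", "fun": lambda x: 5 - np.sum(x)}])
print(res.x, res.fun)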
Constrained maximum consistency multi-path mitigation
NASA Astrophysics Data System (ADS)
Smith, George B.
2003-10-01
Blind deconvolution algorithms can be useful as pre-processors for signal classification algorithms in shallow water. These algorithms remove the distortion of the signal caused by multipath propagation when no knowledge of the environment is available. A framework has been presented in which filters produce signal estimates from each data channel that are as consistent with each other as possible in a least-squares sense [Smith, J. Acoust. Soc. Am. 107 (2000)]. This framework provides a solution to the blind deconvolution problem. One implementation of this framework yields the cross-relation on which EVAM [Gurelli and Nikias, IEEE Trans. Signal Process. 43 (1995)] and Rietsch [Rietsch, Geophysics 62(6) (1997)] processing are based. In this presentation, partially blind implementations that have good noise stability properties are compared using Classification Operating Characteristics (CLOC) analysis. [Work supported by ONR under Program Element 62747N and NRL, Stennis Space Center, MS.]
Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual
NASA Technical Reports Server (NTRS)
Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.
1992-01-01
The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensors/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
On the vortices for the nonlinear Schrödinger equation in higher dimensions.
Feng, Wen; Stanislavova, Milena
2018-04-13
We consider the nonlinear Schrödinger equation in n space dimensions [Formula: see text] and study the existence and stability of standing wave solutions of the form [Formula: see text] and [Formula: see text]. For n = 2k, (r_j, θ_j) are polar coordinates in [Formula: see text], j = 1, 2, …, k; for n = 2k+1, (r_j, θ_j) are polar coordinates in [Formula: see text] and (r_k, θ_k, z) are cylindrical coordinates in [Formula: see text], j = 1, 2, …, k-1. We show the existence of functions ϕ_w, which are constructed variationally as minimizers of appropriate constrained functionals. These waves are shown to be spectrally stable (with respect to perturbations of the same type) if 1 < p < 1 + 4/n. This article is part of the theme issue 'Stability of nonlinear waves and patterns and related topics'. © 2018 The Author(s).
Weinman, J A
1988-10-01
A simulated analysis is presented that shows that returns from a single-frequency space-borne lidar can be combined with data from conventional visible satellite imagery to yield profiles of aerosol extinction coefficients and the wind speed at the ocean surface. The optical thickness of the aerosols in the atmosphere can be derived from visible imagery. That measurement of the total optical thickness can constrain the solution to the lidar equation to yield a robust estimate of the extinction profile. The specular reflection of the lidar beam from the ocean can be used to determine the wind speed at the sea surface once the transmission of the atmosphere is known. The impact on the retrieved aerosol profiles and surface wind speed produced by errors in the input parameters and noise in the lidar measurements is also considered.
Constraining sterile neutrinos with AMANDA and IceCube atmospheric neutrino data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esmaili, Arman; Peres, O.L.G.; Halzen, Francis, E-mail: aesmaili@ifi.unicamp.br, E-mail: halzen@icecube.wisc.edu, E-mail: orlando@ifi.unicamp.br
2012-11-01
We demonstrate that atmospheric neutrino data accumulated with the AMANDA and the partially deployed IceCube experiments constrain the allowed parameter space for a hypothesized fourth, sterile neutrino beyond the reach of a combined analysis of all other experiments, for Δm²₄₁ ≲ 1 eV². Although the IceCube data dominate the statistics of the analysis, the advantage of a combined analysis of AMANDA and IceCube data is that it partially remedies as-yet-unknown instrumental systematic uncertainties. We also illustrate the sensitivity of the completed IceCube detector, which is now taking data, to the parameter space of the 3+1 model.
Cournot games with network effects for electric power markets
NASA Astrophysics Data System (ADS)
Spezia, Carl John
The electric utility industry is moving from regulated monopolies with protected service areas to an open market with many wholesale suppliers competing for consumer load. This market is typically modeled by a Cournot game oligopoly where suppliers compete by selecting profit maximizing quantities. The classical Cournot model can produce multiple solutions when the problem includes typical power system constraints. This work presents a mathematical programming formulation of oligopoly that produces unique solutions when constraints limit the supplier outputs. The formulation casts the game as a supply maximization problem with power system physical limits and supplier incremental profit functions as constraints. The formulation gives Cournot solutions identical to other commonly used algorithms when suppliers operate within the constraints. Numerical examples demonstrate the feasibility of the theory. The results show that the maximization formulation will give system operators more transmission capacity when compared to the actions of suppliers in a classical constrained Cournot game. The results also show that the profitability of suppliers in constrained networks depends on their location relative to the consumers' load concentration.
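As a toy illustration of a capacity-constrained Cournot game (not the paper's supply-maximization formulation), best-response iteration for a duopoly with linear inverse demand can be written as below; all market parameters are invented, and supplier 2's capacity is chosen so that it binds at equilibrium.

import numpy as np

def cournot(a=100.0, b=1.0, c=(10.0, 20.0), cap=(60.0, 20.0), iters=200):
    # Best-response iteration for a capacity-constrained duopoly with
    # inverse demand P = a - b*(q1 + q2) and marginal costs c_i.
    q = np.zeros(2)
    for _ in range(iters):
        for i in range(2):
            j = 1 - i
            br = (a - c[i] - b * q[j]) / (2.0 * b)  # unconstrained best response
            q[i] = min(max(br, 0.0), cap[i])        # project onto capacity limits
    return q

q = cournot()
print("quantities:", q, "price:", 100.0 - q.sum())  # supplier 2's cap binds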
A seismological overview of the induced earthquakes in the Duvernay play near Fox Creek, Alberta
NASA Astrophysics Data System (ADS)
Schultz, Ryan; Wang, Ruijia; Gu, Yu Jeffrey; Haug, Kristine; Atkinson, Gail
2017-01-01
This paper summarizes the current state of understanding regarding the induced seismicity in connection with hydraulic fracturing operations targeting the Duvernay Formation in central Alberta, near the town of Fox Creek. We demonstrate that earthquakes in this region cluster into distinct sequences in time, space, and focal mechanism using (i) cross-correlation detection methods to delineate transient temporal relationships, (ii) double-difference relocations to confirm spatial clustering, and (iii) moment tensor solutions to assess fault motion consistency. The spatiotemporal clustering of the earthquake sequences is strongly related to the nearby hydraulic fracturing operations. In addition, we identify a preference for strike-slip motions on subvertical faults with an approximate 45° P axis orientation, consistent with expectation from the ambient stress field. The hypocentral geometries for two of the largest-magnitude (M 4) sequences that are robustly constrained by local array data provide compelling evidence for planar features starting at Duvernay Formation depths and extending into the shallow Precambrian basement. We interpret these lineaments as subvertical faults orientated approximately north-south, consistent with the regional moment tensor solutions. Finally, we conclude that the sequences were triggered by pore pressure increases in response to hydraulic fracturing stimulations along previously existing faults.
Multiobjective GAs, quantitative indices, and pattern classification.
Bandyopadhyay, Sanghamitra; Pal, Sankar K; Aruna, B
2004-10-01
The concept of multiobjective optimization (MOO) has been integrated with variable-length chromosomes to develop a nonparametric genetic classifier which can overcome problems faced by single-objective classifiers, such as overfitting/overlearning and ignoring smaller classes. The classifier can efficiently approximate any kind of linear and/or nonlinear class boundaries of a data set using an appropriate number of hyperplanes. In designing the classifier, the aim is to simultaneously minimize the number of misclassified training points and the number of hyperplanes, and to maximize the product of class-wise recognition scores. The concepts of a validation set (in addition to training and test sets) and a validation functional are introduced in the multiobjective classifier for selecting a solution from the set of nondominated solutions provided by the MOO algorithm. This genetic classifier incorporates elitism and some domain-specific constraints in the search process, and is called the CEMOGA-Classifier (constrained elitist multiobjective genetic algorithm based classifier). Two new quantitative indices, namely purity and minimal spacing, are developed for evaluating the performance of different MOO techniques. These are used, along with classification accuracy, the required number of hyperplanes and the computation time, to compare the CEMOGA-Classifier with other related ones.
Bayesian inversion of the global present-day GIA signal uncertainty from RSL data
NASA Astrophysics Data System (ADS)
Caron, Lambert; Ivins, Erik R.; Adhikari, Surendra; Larour, Eric
2017-04-01
Various geophysical signals measured in the study of present-day climate change (such as changes in the Earth's gravitational potential, ocean altimetry or GPS data) include a secular Glacial Isostatic Adjustment (GIA) contribution that has to be corrected for. Yet one of the major challenges that GIA modelling currently faces is to accurately determine the uncertainty of the predicted present-day GIA signal. This is especially true at the global scale, where coupling between ice history and mantle rheology greatly contributes to the non-uniqueness of the solutions. Here we propose to use more than 11,000 paleo sea level records to constrain a set of GIA Bayesian inversions and thoroughly explore the parameter space. We include two linearly relaxing models to represent the mantle rheology and couple them with a scalable ice history model in order to better assess the non-uniqueness of the solutions. From the resulting estimates of the probability density function, we then extract maps of the uncertainty affecting the present-day vertical land motion and geoid due to GIA at the global scale, and their associated expectation of the signal.
NASA Technical Reports Server (NTRS)
Padovan, J.; Lackney, J.
1986-01-01
The current paper develops a constrained hierarchical least square nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of degree-of-freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least square methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost. PMID:29438389
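For intuition, a single chance constraint of the kind used here has a simple deterministic equivalent when the stochastic time is modeled as Gaussian; the assumption and all numbers below are made only for this sketch.

from scipy.stats import norm

def on_time_ok(mu, sigma, deadline, alpha=0.95):
    # Deterministic equivalent of the chance constraint
    # Pr(travel_time <= deadline) >= alpha for Gaussian times:
    # mu + z_alpha * sigma <= deadline.
    z = norm.ppf(alpha)
    return mu + z * sigma <= deadline

# Rail leg: mean 40 h, sd 5 h; must arrive within 50 h with 95% confidence.
print(on_time_ok(40.0, 5.0, 50.0))  # True: 40 + 1.645*5 = 48.2 <= 50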
Disentangling Redshift-Space Distortions and Nonlinear Bias using the 2D Power Spectrum
Jennings, Elise; Wechsler, Risa H.
2015-08-07
We present the nonlinear 2D galaxy power spectrum, P(k, µ), in redshift space, measured from the Dark Sky simulations, using galaxy catalogs constructed with both halo occupation distribution and subhalo abundance matching methods, chosen to represent an intermediate-redshift sample of luminous red galaxies. We find that the information content in individual µ (cosine of the angle to the line of sight) bins is substantially richer than in multipole moments, and show that this can be used to isolate the impact of nonlinear growth and redshift space distortion (RSD) effects. Using the µ < 0.2 simulation data, which we show is not impacted by RSD effects, we can successfully measure the nonlinear bias to an accuracy of ~5% at k < 0.6 h Mpc⁻¹. This use of individual µ bins to extract the nonlinear bias removes a large parameter degeneracy when constraining the linear growth rate of structure. We carry out a joint parameter estimation, using the low-µ simulation data to constrain the nonlinear bias and µ > 0.2 to constrain the growth rate, and show that f can be constrained to ~26 (22)% to k_max < 0.4 (0.6) h Mpc⁻¹ from clustering alone using a simple dispersion model, for a range of galaxy models. Our analysis of individual µ bins also reveals interesting physical effects which arise simply from different methods of populating halos with galaxies. We also find a prominent turnaround scale, at which RSD damping effects are greater than the nonlinear growth, which differs not only for each µ bin but also for each galaxy model. These features may provide unique signatures which could be used to shed light on the galaxy-dark matter connection. Furthermore, separating nonlinear growth and RSD effects by making use of the full information in the 2D galaxy power spectrum yields significant improvements in constraining cosmological parameters and may be a promising probe of galaxy formation models.
Quantifying the isotopic composition of NOx emission sources: An analysis of collection methods
NASA Astrophysics Data System (ADS)
Fibiger, D.; Hastings, M.
2012-04-01
We analyze various collection methods for nitrogen oxides, NOx (NO2 and NO), used to evaluate the nitrogen isotopic composition (δ15N). Atmospheric NOx is a major contributor to acid rain deposition upon its conversion to nitric acid; it also plays a significant role in determining air quality through the production of tropospheric ozone. NOx is released by both anthropogenic (fossil fuel combustion, biomass burning, aircraft emissions) and natural (lightning, biogenic production in soils) sources. Global concentrations of NOx are rising because of increased anthropogenic emissions, while natural source emissions also contribute significantly to the global NOx burden. The contributions of both natural and anthropogenic sources and their considerable variability in space and time make it difficult to attribute local NOx concentrations (and, thus, nitric acid) to a particular source. Several recent studies suggest that variability in the isotopic composition of nitric acid deposition is related to variability in the isotopic signatures of NOx emission sources. Nevertheless, the isotopic composition of most NOx sources has not been thoroughly constrained. Ultimately, the direct capture and quantification of the nitrogen isotopic signatures of NOx sources will allow for the tracing of NOx emissions sources and their impact on environmental quality. Moreover, this will provide a new means by which to verify emissions estimates and atmospheric models. We present laboratory results of methods used for capturing NOx from air into solution. A variety of methods have been used in field studies, but no independent laboratory verification of the efficiencies of these methods has been performed. When analyzing isotopic composition, it is important that NOx be collected quantitatively or the possibility of fractionation must be constrained. We have found that collection efficiency can vary widely under different conditions in the laboratory and fractionation does not vary predictably with collection efficiency. For example, prior measurements frequently utilized triethanolamine solution for collecting NOx, but the collection efficiency was found to drop quickly as the solution aged. The most promising method tested is a NaOH/KMnO4 solution (Margeson and Knoll, Anal. Chem., 1985) which can collect NOx quantitatively from the air. Laboratory tests of previously used methods, along with progress toward creating a suitable and verifiable field deployable collection method will be presented.
Non-Linear Cosmological Power Spectra in Real and Redshift Space
NASA Technical Reports Server (NTRS)
Taylor, A. N.; Hamilton, A. J. S.
1996-01-01
We present an expression for the non-linear evolution of the cosmological power spectrum based on Lagrangian trajectories. This is simplified using the Zel'dovich approximation to trace particle displacements, assuming Gaussian initial conditions. The model is found to exhibit the transfer of power from large to small scales expected in self-gravitating fields. Some exact solutions are found for power-law initial spectra. We have extended this analysis into redshift space and found a solution for the non-linear, anisotropic redshift-space power spectrum in the limit of plane-parallel redshift distortions. The quadrupole-to-monopole ratio is calculated for the case of power-law initial spectra. We find that the shape of this ratio depends on the shape of the initial spectrum, but when scaled to linear theory depends only weakly on the redshift-space distortion parameter, β. The point of zero-crossing of the quadrupole, k₀, is found to obey a simple scaling relation, and we calculate this scale in the Zel'dovich approximation. This model is found to be in good agreement with a series of N-body simulations on scales down to the zero-crossing of the quadrupole, although the wavenumber at zero-crossing is underestimated. These results are applied to the quadrupole-to-monopole ratio found in the merged QDOT plus 1.2-Jy IRAS redshift survey. Using a likelihood technique we have estimated that the distortion parameter is constrained to be β > 0.5 at the 95 percent level. Our results are fairly insensitive to the local primordial spectral slope, but the likelihood analysis suggests n = -2 in the translinear regime. The zero-crossing scale of the quadrupole is k₀ = 0.5 ± 0.1 h Mpc⁻¹, and from this we infer that the amplitude of clustering is σ₈ = 0.7 ± 0.05. We suggest that the success of this model is due to non-linear redshift-space effects arising from infall onto caustics and is not dominated by virialized cluster cores. The latter should start to dominate on scales below the zero-crossing of the quadrupole, where our model breaks down.
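For background (standard linear theory, not a result of this paper), the Kaiser formula P_s(k, µ) = (1 + βµ²)² P(k) implies the scale-independent quadrupole-to-monopole ratio against which such measurements are usually scaled:

\[
\frac{P_2}{P_0} = \frac{\frac{4}{3}\beta + \frac{4}{7}\beta^{2}}{1 + \frac{2}{3}\beta + \frac{1}{5}\beta^{2}}
\]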
Path Following in the Exact Penalty Method of Convex Programming.
Zhou, Hua; Lange, Kenneth
2015-07-01
Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044
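The mechanics of the exact penalty can be seen in a small sketch (an invented toy problem, with a simple grid of penalty constants rather than the authors' ODE-based path following): once the penalty constant exceeds the relevant Lagrange multiplier, the minimizer of the penalized problem sits exactly at the constrained optimum.

import numpy as np
from scipy.optimize import minimize

def f(x):                       # objective: distance to (2, 2)
    return (x[0] - 2) ** 2 + (x[1] - 2) ** 2

def g(x):                       # constraint g(x) <= 0: x0 + x1 <= 1
    return x[0] + x[1] - 1

for rho in [0.5, 1.0, 2.0, 4.0, 8.0]:
    # Exact (absolute-value/hinge) penalty; the solution hits the
    # constrained optimum (0.5, 0.5) once rho exceeds the multiplier (3).
    pen = lambda x, r=rho: f(x) + r * max(0.0, g(x))
    x = minimize(pen, x0=np.zeros(2), method="Nelder-Mead").x
    print(f"rho={rho:4.1f}  x={x.round(3)}  g={g(x):+.3f}")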
Electric dipole moments in natural supersymmetry
NASA Astrophysics Data System (ADS)
Nakai, Yuichiro; Reece, Matthew
2017-08-01
We discuss electric dipole moments (EDMs) in the framework of CP-violating natural supersymmetry (SUSY). Recent experimental results have significantly tightened constraints on the EDMs of electrons and of mercury, and substantial further progress is expected in the near future. We assess how these results constrain the parameter space of natural SUSY. In addition to our discussion of SUSY, we provide a set of general formulas for two-loop fermion EDMs, which can be applied to a wide range of models of new physics. In the SUSY context, the two-loop effects of stops and charginos respectively constrain the phases of A_t µ and M_2 µ to be small in the natural part of parameter space. If the Higgs mass is lifted to 125 GeV by a new tree-level superpotential interaction and soft term with CP-violating phases, significant EDMs can arise from the two-loop effects of W bosons and tops. We compare the bounds arising from EDMs to those from other probes of new physics, including colliders, b → sγ, and dark matter searches. Importantly, improvements in reach not only constrain higher masses, but require the phases to be significantly smaller in the natural parameter space at low mass. The required smallness of phases sharpens the CP problem of natural SUSY model building.
Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays
Trucco, Andrea; Traverso, Federico; Crocco, Marco
2015-01-01
For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
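A rough numerical sketch of the quantities involved (not the authors' optimization method): the array factor of an equally spaced line array with a real-valued progressive phase shift, and its directivity computed by numerical integration. The element count, spacing, and oversteer factors are arbitrary assumptions.

import numpy as np

def directivity(N=8, d_over_lambda=0.4, oversteer=1.1):
    # Equally spaced line array with progressive phase shift psi per element;
    # oversteer > 1 pushes the phasing beyond ordinary end-fire (psi = k*d).
    kd = 2 * np.pi * d_over_lambda
    psi = oversteer * kd
    theta = np.linspace(0.0, np.pi, 4001)
    n = np.arange(N)[:, None]
    af = np.abs(np.exp(1j * n * (kd * np.cos(theta) - psi)).sum(axis=0)) ** 2
    # D = 2*max|AF|^2 / integral(|AF|^2 sin(theta) dtheta), azimuthal symmetry.
    denom = np.trapz(af * np.sin(theta), theta)
    return 2 * af.max() / denom

for s in (1.0, 1.1, 1.2):
    print(f"oversteer={s:.1f}  D={directivity(oversteer=s):.2f}")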
Augmenting Parametric Optimal Ascent Trajectory Modeling with Graph Theory
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Matthew R.; Edwards, Stephen; Steffens, Michael
2016-01-01
It has been well documented that decisions made in the early stages of Conceptual and Pre-Conceptual design commit up to 80% of total Life-Cycle Cost (LCC) while engineers know the least about the product they are designing [1]. Once within Preliminary and Detailed design, however, changes to the design become far more difficult to enact in both cost and schedule. Primarily this has been due to a lack of detailed data usually uncovered later during the Preliminary and Detailed design phases. In our current budget-constrained environment, making decisions within Conceptual and Pre-Conceptual design which minimize LCC while meeting requirements is paramount to a program's success. Within the arena of launch vehicle design, optimizing the ascent trajectory is critical for minimizing the costs present within such concerns as propellant, aerodynamic, aeroheating, and acceleration loads while meeting requirements such as payload delivered to a desired orbit. In order to optimize the vehicle design, its constraints and requirements must be known; however, as the design cycle proceeds it is all but inevitable that the conditions will change. Upon that change, the previously optimized trajectory may no longer be optimal, or may no longer meet design requirements. The current paradigm for adjusting to these updates is generating point solutions for every change in the design's requirements [2]. This can be a tedious, time-consuming task, as changes in virtually any piece of a launch vehicle's design can have a disproportionately large effect on the ascent trajectory: the solution space of the trajectory optimization problem is both non-linear and multimodal [3]. In addition, an industry-standard tool, Program to Optimize Simulated Trajectories (POST), requires an expert analyst to produce simulated trajectories that are feasible and optimal [4]. In a previous publication the authors presented a method for combatting these challenges [5]. In order to bring more detailed information into Conceptual and Pre-Conceptual design, knowledge of the effects originating from changes to the vehicle must be calculated. To do this, a model capable of quantitatively describing any vehicle within the entire design space under consideration must be constructed. This model must be based upon analysis of acceptable fidelity, which in this work comes from POST. Design space interrogation can be achieved with surrogate modeling, in which a parametric polynomial equation stands in for the tool. A surrogate model must be informed by data from the tool, with enough points to represent the solution space for the chosen number of variables at an acceptable level of error. Therefore, Design Of Experiments (DOE) is used to select points within the design space that maximize the information gained while minimizing the number of data points required. To represent a design space with a non-trivial number of variable parameters, the number of points required still represents an amount of work that would take an inordinate amount of time under the current paradigm of manual analysis, and so an automated method was developed. The best practices of expert trajectory analysts working within NASA Marshall's Advanced Concepts Office (ACO) were implemented within a tool called multiPOST. These practices include how to use the output data from a previous run of POST to inform the next, how to determine whether a trajectory solution is feasible from a real-world perspective, and how to handle program execution errors. The tool was then augmented with multiprocessing capability to enable analysis of multiple trajectories simultaneously, allowing throughput to scale with available computational resources. In this update to the previous work, the authors discuss issues with the method and their solutions.
NASA Astrophysics Data System (ADS)
Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Hsu, Michael; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Chen, Fung; Socha, Robert; Park, JungChul; Gronlund, Keith
2005-11-01
Illumination optimization, often combined with optical proximity corrections (OPC) to the mask, is becoming one of the critical components of a production-worthy lithography process for 55nm-node DRAM/Flash memory devices and beyond. At low k1, e.g. k1 < 0.31, both resolution and imaging contrast can be severely limited by current imaging tools when using standard illumination sources. Illumination optimization is a process in which the source shape is varied, in both profile and intensity distribution, to enhance the final image contrast compared to non-optimized sources. The optimization can be done efficiently for repetitive patterns such as DRAM/Flash memory cores. However, illumination optimization often produces source shapes that are "free-form"-like; they can be too complex to be directly applicable for production and lack the radial and annular symmetries desirable for the diffractive optical element (DOE) based illumination systems in today's leading lithography tools. As a result, post-optimization rendering and verification of the optimized source shape are often necessary to meet production-ready or manufacturability requirements and ensure optimal performance gains. In this work, we describe our approach to illumination optimization for k1 < 0.31 DRAM/Flash memory patterns, using an ASML XT:1400i at NA 0.93, where all necessary manufacturability requirements are fully accounted for during the optimization. The imaging contrast in the resist is optimized in a reduced solution space constrained by the manufacturability requirements, which include minimum distance between poles, minimum opening pole angles, minimum ring width and minimum source filling factor in the sigma space. For additional performance gains, the intensity within the optimized source can vary in a gray-tone fashion (eight shades are used in this work). Although this new optimization approach can sometimes produce closely spaced solutions as gauged by NILS-based metrics, we show that the optimal, production-ready source shape solution can be easily determined by comparing the best solutions to the "free-form" solution and, more importantly, by their respective imaging fidelity and process latitude ranking. Imaging fidelity and process latitude simulations are performed to analyze the impact and sensitivity of the manufacturability requirements on pattern-specific illumination optimizations using the ASML XT:1400i and other recent imaging systems. Mask model based OPC (MOPC) is applied and optimized sequentially to ensure that the CD uniformity requirements are met.
Visual Tracking via Sparse and Local Linear Coding.
Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan
2015-11-01
The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably among the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is modeled by an objective function which can be efficiently solved by either convex sparse coding or locality-constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient searching mechanism for the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods in dynamic scenes.
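Of the two coding schemes named, locality-constrained linear coding has a closed-form solution (Wang et al., CVPR 2010); a sketch with invented data follows, where the locality weighting and regularization constants are illustrative choices.

import numpy as np

def llc_code(x, B, lam=1e-4, sigma=1.0):
    # Locality-constrained linear coding:
    # min_c ||x - B^T c||^2 + lam * ||d * c||^2  s.t.  sum(c) = 1,
    # where d penalizes bases far from x. B: (M, D) bases, x: (D,) query.
    d = np.exp(np.linalg.norm(B - x, axis=1) / sigma)
    d /= d.max()
    Z = B - x                      # shift bases to the query point
    C = Z @ Z.T + lam * np.diag(d ** 2)
    c = np.linalg.solve(C, np.ones(len(B)))
    return c / c.sum()             # enforce the shift-invariance constraint

rng = np.random.default_rng(1)
B = rng.normal(size=(10, 5))       # 10 basis vectors in R^5
x = rng.normal(size=5)
c = llc_code(x, B)
print(c.round(3), "reconstruction error:", np.linalg.norm(x - B.T @ c))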
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
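The ask/tell loop that collects low-misfit models might look like the sketch below, assuming the Python cma package and a toy quadratic misfit in place of the PDE-constrained one; the dimension, initial step size and threshold are arbitrary.

import numpy as np
import cma

def misfit(m):
    # Toy quadratic stand-in for the PDE-constrained data misfit.
    return float(np.sum((m - 1.0) ** 2))

es = cma.CMAEvolutionStrategy(np.zeros(8), 0.5)
low_misfit = []                      # candidate equivalent models
while not es.stop():
    models = es.ask()                # sample from the adapted Gaussian
    fits = [misfit(m) for m in models]
    es.tell(models, fits)            # update mean, covariance, step size
    low_misfit += [m for m, f in zip(models, fits) if f < 0.1]

print(len(low_misfit), "models below the misfit threshold")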
Future Visions for Scientific Human Exploration
NASA Astrophysics Data System (ADS)
Garvin, James
2002-01-01
Human exploration has always played a vital role within NASA, in spite of current perceptions that today it is adrift as a consequence of the resource challenges associated with construction and operation of the International Space Station (ISS). On the basis of the significance of human spaceflight within NASA's overall mission, periodic evaluation of its strategic position has been conducted by various groups, most recently exemplified by the recent Human Exploration and Development of Space Enterprise Strategic Plan. While such reports paint one potential future pathway, they are necessarily constrained by the ground rules and assumptions under which they are developed. An alternate approach, involving a small team of individuals selected as "brainstormers," has been ongoing within NASA for the past two years in an effort to capture a vision of a long-term future for human spaceflight not limited by nearer-term "point design" solutions. This paper describes the guiding principles and concepts developed by this team. It is not intended to represent an implementation plan, but rather one perspective on what could result as human beings extend their range of experience in spaceflight beyond today's beach-head of Low-Earth Orbit (LEO).
Examining Mathematics Teacher Educators' Emerging Practices in Online Environments
ERIC Educational Resources Information Center
Kastberg, Signe; Lynch-Davis, Kathleen; D'Ambrosio, Beatriz
2014-01-01
Teacher professional development and course work using asynchronous online environments seems promising, yet little is known about how mathematics teacher educators (MTEs) develop practices for such spaces. Research has shown that views of learning impact design of online learning spaces, enabling and constraining particular student action.
Agency, Language Learning, and Multilingual Spaces
ERIC Educational Resources Information Center
Miller, Elizabeth R.
2012-01-01
This article explores the notion of agency in language learning and use as discursively, historically, and socially mediated. It further explores how agency can be understood as variously enabled and constrained as individuals move from one cultural, linguistic, and/or geographical space to another. These explorations focus on how agency is…
Some solutions of the general three body problem in form space
NASA Astrophysics Data System (ADS)
Titov, Vladimir
2018-05-01
Some solutions of the three-body problem with equal masses are first considered in form space. The solutions in the usual Euclidean space may be restored from these form-space solutions. If the constant energy h < 0, the trajectories are located inside the Hill surface. Without loss of generality, due to scale symmetry, we can set h = -1. Such a surface has a simple form in form space. Solutions of the isosceles and rectilinear three-body problems lie within the Hill curve; periodic solutions of the free-fall three-body problem start at one point of this curve and finish at another. The solutions are illustrated by a number of figures.
Department of Defense Spacelift in a Fiscally Constrained Environment
2011-12-16
…or space operations. Space weather may impact spacecraft and ground-based systems. Space weather is influenced by phenomena such as solar flare… shareholders included Rocket and Space Corporation Energia (a Russian-based company), a Norwegian shipbuilder, and two Ukrainian rocket firms (Hennigan… Hennigan 2011b). In October 2010, Sea Launch AG emerged from Chapter 11 bankruptcy protection as a result of Rocket and Space Corporation Energia…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehghani, M.H.; Department of Physics, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1; Perimeter Institute for Theoretical Physics, 35 Caroline Street North, Waterloo, Ontario
We investigate the existence of Taub-NUT (Newman-Unti-Tamburino) and Taub-bolt solutions in Gauss-Bonnet gravity and obtain the general form of these solutions in d dimensions. We find that for all nonextremal NUT solutions of Einstein gravity having no curvature singularity at r=N, there exist NUT solutions in Gauss-Bonnet gravity that contain these solutions in the limit that the Gauss-Bonnet parameter α goes to zero. Furthermore, there are no NUT solutions in Gauss-Bonnet gravity that yield nonextremal NUT solutions to Einstein gravity having a curvature singularity at r=N in the limit α→0. Indeed, we have nonextreme NUT solutions in 2+2k dimensions with nontrivial fibration only when the 2k-dimensional base space is chosen to be CP^{2k}. We also find that Gauss-Bonnet gravity has extremal NUT solutions whenever the base space is a product of 2-tori with at most a two-dimensional factor space of positive curvature. Indeed, when the base space has at most one positively curved two-dimensional space as one of its factor spaces, Gauss-Bonnet gravity admits extreme NUT solutions, even though a curvature singularity exists at r=N. We also find that one can have bolt solutions in Gauss-Bonnet gravity with any base space with factor spaces of zero or positive constant curvature. The only case for which one does not have bolt solutions is in the absence of a cosmological term with a zero-curvature base space.
Coriolis effects on nonlinear oscillations of rotating cylinders and rings
NASA Technical Reports Server (NTRS)
Padovan, J.
1976-01-01
The effects which moderately large deflections have on the frequency spectrum of rotating rings and cylinders are considered. To develop the requisite solution, a variationally constrained version of the Lindstedt-Poincaré procedure is employed. Based on the solution developed, in addition to the effects of displacement-induced nonlinearity, the role of Coriolis forces is given special consideration.
NASA Technical Reports Server (NTRS)
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage, both to limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
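A minimal sketch of the idea, on an invented bi-objective problem: each stage minimizes one objective subject to a threshold-of-acceptability constraint on the other, tracing out Pareto-optimal points as the threshold varies.

import numpy as np
from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2          # objective 1
f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2          # objective 2

pareto = []
for t in np.linspace(0.05, 1.8, 8):
    # Minimize f1 with a threshold-of-acceptability constraint f2 <= t.
    res = minimize(f1, x0=[0.5, 0.5], method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x, t=t: t - f2(x)}])
    pareto.append((f1(res.x), f2(res.x)))

for p in pareto:
    print(f"f1={p[0]:.3f}  f2={p[1]:.3f}")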
Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations
Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad
2013-01-01
Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper Transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
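The parametric transformation admits a compact iterative sketch (a Dinkelbach-type loop on a toy single-link problem; the rate and power models and all constants are invented for illustration):

import numpy as np
from scipy.optimize import minimize_scalar

# Toy fractional program: maximize throughput(p) / power(p) over 0 <= p <= pmax.
B, g, n0, pc, pmax = 1.0, 2.0, 0.1, 0.5, 4.0
throughput = lambda p: B * np.log2(1 + g * p / n0)
power = lambda p: pc + p

q = 0.0
for _ in range(30):                      # parametric (Dinkelbach) iteration
    # Solve the parametric subproblem max_p N(p) - q*D(p).
    res = minimize_scalar(lambda p: -(throughput(p) - q * power(p)),
                          bounds=(0.0, pmax), method="bounded")
    p = res.x
    q_new = throughput(p) / power(p)     # update the parameter
    if abs(q_new - q) < 1e-9:
        break
    q = q_new

print(f"optimal power {p:.4f}, energy efficiency {q:.4f}")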
An open-source model and solution method to predict co-contraction in the finger.
MacIntosh, Alexander R; Keir, Peter J
2017-10-01
A novel open-source biomechanical model of the index finger and an electromyography (EMG)-constrained static optimization solution method are developed with the goal of improving co-contraction estimates and providing a means to assess tendon tension distribution through the finger. The Intrinsic model has four degrees of freedom and seven muscles (with a 14-component extensor mechanism). A novel plugin developed for the OpenSim modelling software applied the EMG-constrained static optimization solution method. Ten participants performed static pressing in three finger postures and five dynamic free-motion tasks. Index finger 3D kinematics, force (5, 15, 30 N), and EMG (four extrinsic muscles and the first dorsal interosseous) were used in the analysis. The Intrinsic model's predicted co-contraction increased by 29% during static pressing over the existing model. Further, tendon tension distribution patterns and forces, known to be essential to produce finger action, were determined by the model across all postures. The Intrinsic model and custom solution method improved co-contraction estimates to facilitate force propagation through the finger. These tools improve our interpretation of loads in the finger to develop better rehabilitation and workplace injury risk reduction strategies.
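The flavor of EMG-constrained static optimization can be sketched as below (a toy single-joint system, not the authors' OpenSim plugin): measured EMG sets lower bounds on muscle activations while static optimization distributes the remaining effort. Moment arms, maximum forces, target moment and EMG floors are all invented.

import numpy as np
from scipy.optimize import minimize

r = np.array([0.02, 0.03, 0.015])         # moment arms (m)
Fmax = np.array([400.0, 250.0, 300.0])    # maximum muscle forces (N)
target_moment = 6.0                        # joint moment to balance (N*m)
emg_floor = np.array([0.10, 0.05, 0.20])  # activations implied by measured EMG

cost = lambda a: np.sum(a ** 2)            # classic effort criterion
cons = [{"type": "eq", "fun": lambda a: r @ (Fmax * a) - target_moment}]
bnds = [(lo, 1.0) for lo in emg_floor]     # EMG-constrained lower bounds

res = minimize(cost, x0=np.full(3, 0.3), method="SLSQP",
               bounds=bnds, constraints=cons)
print("activations:", res.x.round(3),
      "moment:", round(float(r @ (Fmax * res.x)), 3))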
Aircraft conceptual design - an adaptable parametric sizing methodology
NASA Astrophysics Data System (ADS)
Coleman, Gary John, Jr.
Aerospace is a maturing industry with successful and refined baselines which work well for traditional baseline missions, markets and technologies. However, when new markets (space tourism), new constraints (environmental) or new technologies (composites, natural laminar flow) emerge, the conventional solution is not necessarily best for the new situation. This begs the question: how does a design team quickly screen and compare novel solutions to conventional solutions for new aerospace challenges? The answer is rapid and flexible conceptual design Parametric Sizing. In the product design life-cycle, parametric sizing is the first step in screening the total vehicle in terms of mission, configuration and technology to quickly assess first-order design and mission sensitivities. During this phase, various missions and technologies are assessed, and the designer identifies design solutions of concepts and configurations to meet combinations of mission and technology. This research undertaking contributes to the state-of-the-art in aircraft parametric sizing through (1) development of a dedicated conceptual design process and disciplinary methods library, (2) development of a novel and robust parametric sizing process based on 'best-practice' approaches found in the process and disciplinary methods library, and (3) application of the parametric sizing process to a variety of design missions (transonic, supersonic and hypersonic transports), different configurations (tail-aft, blended wing body, strut-braced wing, hypersonic blended bodies, etc.), and different technologies (composites, natural laminar flow, thrust-vectored control, etc.), in order to demonstrate the robustness of the methodology and unearth first-order design sensitivities to current and future aerospace design problems. This research undertaking demonstrates the importance of this early design step in selecting the correct combination of mission, technologies and configuration to meet current aerospace challenges. The overarching goal is to avoid the recurring situation of optimizing an already ill-fated solution.
Nature vs. nurture debate on TNO carbons: constraints from Raman spectroscopy
NASA Astrophysics Data System (ADS)
Brunetto, R.
2012-02-01
We compare spectroscopic data of irradiated laboratory analogs with those of an interplanetary dust particle of cometary origin. We investigate whether this comparison can help constrain the origin of carbonaceous materials on small icy bodies in the outer Solar System (TNOs, Centaurs, etc.). We suggest that Raman spectroscopy can help in interpreting the observed heterogeneity of the extraterrestrial carbonaceous component and in constraining the irradiation dose accumulated in space.
The Use of Non-Standard Devices in Finite Element Analysis
NASA Technical Reports Server (NTRS)
Schur, Willi W.; Broduer, Steve (Technical Monitor)
2001-01-01
A general mathematical description of the response behavior of thin-skin pneumatic envelopes and many other membrane and cable structures produces under-constrained systems that pose severe difficulties for analysis. These systems are mobile, and the general mathematical description exposes the mobility. Yet the response behavior of special under-constrained structures under special loadings can be accurately predicted using a constrained mathematical description. The static response behavior of systems that are infinitesimally mobile, such as a non-slack membrane subtended from a rigid or elastic boundary frame, can be easily analyzed using such a general mathematical description as afforded by the non-linear finite element method with an implicit solution scheme, provided the incremental loading is guided through a suitable path. Similarly, if such structures are assembled with structural lack of fit that provides suitable self-stress, then dynamic response behavior can be predicted by the non-linear finite element method and an implicit solution scheme. An explicit solution scheme is available for evolution problems. Such a scheme can be used, via the method of dynamic relaxation, to obtain the solution to a static problem. In some sense, pneumatic envelopes and many other compliant structures can be said to have destiny under a specified loading system. What that means to the analyst is that what happens on the evolution path of the solution is irrelevant as long as equilibrium is achieved at destiny under full load and that equilibrium is stable in the vicinity of that load. The purpose of this paper is to alert practitioners to the fact that non-standard procedures in finite element analysis are useful and can be legitimate, although they burden their users with the requirement to use special caution. Some interesting findings that are useful to the US Scientific Balloon Program and that could not be obtained without non-standard techniques are presented.
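The dynamic relaxation idea mentioned above can be sketched on a toy under-constrained system: damped explicit pseudo-time stepping of a spring chain with fixed ends settles to the static equilibrium shape; all constants below are illustrative assumptions.

import numpy as np

# Dynamic relaxation: damped explicit pseudo-time stepping drives a
# spring chain (fixed ends) toward its static equilibrium under gravity.
n, k, m, g = 20, 500.0, 0.1, 9.81      # free nodes, stiffness, mass, gravity
y = np.zeros(n); vy = np.zeros(n)      # vertical dof only, for brevity
dt, damping = 1e-3, 0.98

for _ in range(20000):
    yy = np.concatenate(([0.0], y, [0.0]))          # fixed supports at y = 0
    f = k * (yy[2:] - 2.0 * y + yy[:-2]) - m * g    # elastic + weight forces
    vy = damping * (vy + dt * f / m)                # viscous damping
    y += dt * vy                                     # explicit update

print("midspan sag:", y[n // 2])       # only the converged equilibrium matters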
Analytical Solution for the Free Vibration Analysis of Delaminated Timoshenko Beams
Abedi, Maryam
2014-01-01
This work presents a method to find exact solutions for the free vibration analysis of delaminated Timoshenko beams with different boundary conditions. The solutions are obtained by the method of Lagrange multipliers, in which the free vibration problem is posed as a constrained variational problem. Legendre orthogonal polynomials are used as the beam eigenfunctions. Natural frequencies and mode shapes of various Timoshenko beams are presented to demonstrate the efficiency of the methodology. PMID:24574879
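The Lagrange-multiplier treatment of a constrained variational eigenproblem amounts to solving a bordered generalized eigenvalue system. A minimal sketch, using a hinged Euler-Bernoulli beam as a simplified stand-in for the paper's delaminated Timoshenko formulation, with Legendre polynomials as trial functions (basis size and quadrature order are arbitrary choices):

```python
import numpy as np
from scipy.linalg import eig

# Hinged beam on [-1, 1], unit properties; end conditions w(+-1)=0 imposed
# with Lagrange multipliers in a bordered generalized eigenproblem.
nb = 12                                          # Legendre trial functions
xg, wg = np.polynomial.legendre.leggauss(40)     # quadrature nodes/weights

def basis(x, deriv):
    """Rows are the first nb Legendre polynomials (or derivatives) at x."""
    rows = []
    for i in range(nb):
        c = np.zeros(nb)
        c[i] = 1.0
        if deriv:
            c = np.polynomial.legendre.legder(c, deriv)
        rows.append(np.polynomial.legendre.legval(x, c))
    return np.array(rows)

P = basis(xg, 0)
P2 = basis(xg, 2)
K = (P2 * wg) @ P2.T                             # stiffness: int w'' v'' dx
M = (P * wg) @ P.T                               # mass:      int w  v  dx
C = basis(np.array([-1.0, 1.0]), 0).T            # constraints w(-1)=w(1)=0

A = np.block([[K, C.T], [C, np.zeros((2, 2))]])  # bordered (KKT) matrices
B = np.block([[M, np.zeros((nb, 2))], [np.zeros((2, nb + 2))]])
lam = eig(A, B, right=False)                     # singular B gives inf modes
lam = np.sort(lam[np.isfinite(lam)].real)
print(np.sqrt(lam[:4]))                          # ~ (n*pi/2)**2, n = 1, 2, ...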
NASA Technical Reports Server (NTRS)
Rizk, Magdi H.
1988-01-01
A scheme is developed for solving constrained optimization problems in which the objective function and the constraint function are dependent on the solution of the nonlinear flow equations. The scheme updates the design parameter iterative solutions and the flow variable iterative solutions simultaneously. It is applied to an advanced propeller design problem with the Euler equations used as the flow governing equations. The scheme's accuracy, efficiency and sensitivity to the computational parameters are tested.
Spherical cows in the sky with fab four
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaloper, Nemanja; Sandora, McCullen, E-mail: kaloper@physics.ucdavis.edu, E-mail: mesandora@ucdavis.edu
2014-05-01
We explore spherically symmetric static solutions in a subclass of unitary scalar-tensor theories of gravity, called the 'Fab Four' models. The weak field large distance solutions may be phenomenologically viable, but only if the Gauss-Bonnet term is negligible. Only in this limit will the Vainshtein mechanism work consistently. Further, classical constraints and unitarity bounds constrain the models quite tightly. Nevertheless, in the limits where the individual terms that remain long-ranged are, respectively, Kinetic Braiding, Horndeski, and Gauss-Bonnet, horizon scale effects may occur while the theory satisfies Solar system constraints and, marginally, unitarity bounds. On the other hand, bringing the cutoff down to below a millimeter constrains all the coupling scales such that the 'Fab Fours' can't be heard outside of the Solar system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vargas, L.S.; Quintana, V.H.; Vannelli, A.
This paper deals with the use of Successive Linear Programming (SLP) for the solution of the Security-Constrained Economic Dispatch (SCED) problem. The authors give a tutorial description of an Interior Point Method (IPM) for the solution of Linear Programming (LP) problems, discussing important implementation issues that make this method far superior to the simplex method. A study of the convergence of the SLP technique and a practical criterion to avoid oscillatory behavior in the iteration process are also proposed. A comparison of the proposed method with an efficient simplex code (MINOS) is carried out by solving SCED problems on two standard IEEE systems. The results show that the interior point technique is reliable, accurate and more than two times faster than the simplex algorithm.
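The SLP pattern (solve a linearized subproblem, damp the step to avoid oscillation, repeat) can be sketched generically. A toy economic dispatch, using SciPy's HiGHS LP solver in place of the authors' interior point code and a shrinking trust region as the anti-oscillation criterion; cost coefficients, limits and demand are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Toy dispatch: quadratic generator costs, demand balance, generator limits.
a = np.array([0.010, 0.012, 0.008])    # quadratic cost coefficients
b = np.array([10.0, 9.0, 11.0])        # linear cost coefficients
lo, hi, demand = 50.0, 300.0, 450.0

def cost(p):
    return np.sum(a * p**2 + b * p)

def grad(p):
    return 2 * a * p + b

p = np.full(3, demand / 3)             # feasible starting dispatch
radius = 50.0                          # trust region damps oscillation
for it in range(50):
    res = linprog(grad(p),             # linearized cost at current point
                  A_eq=np.ones((1, 3)), b_eq=[demand],
                  bounds=[(max(lo, pi - radius), min(hi, pi + radius))
                          for pi in p],
                  method="highs")
    step = res.x - p
    if cost(res.x) < cost(p):
        p = res.x                      # accept improving step
    else:
        radius *= 0.5                  # reject and shrink trust region
    if np.linalg.norm(step) < 1e-6:
        break
print(p, cost(p))
```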
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
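The role of proximity operators and projections in TV-regularized recovery can be conveyed by a much smaller cousin of this problem: 1D TV denoising solved on the dual by projected gradient (a Chambolle-style iteration, not the PAPA algorithm itself). The clip onto the box |p_i| <= mu is exactly a proximity/projection step; signal and parameters are invented.

```python
import numpy as np

# 1D TV denoising: min_x 0.5*||x - y||^2 + mu*TV(x), solved on the dual.
rng = np.random.default_rng(0)
truth = np.repeat([0.0, 2.0, 1.0, 3.0], 50)         # piecewise-constant signal
y = truth + 0.3 * rng.standard_normal(truth.size)
mu, tau = 0.5, 0.25                                  # tau <= 1/||D||^2

D = lambda x: np.diff(x)                             # forward difference
Dt = lambda p: np.concatenate([[-p[0]], -np.diff(p), [p[-1]]])  # adjoint D^T

p = np.zeros(y.size - 1)
for it in range(500):
    # gradient ascent on the dual, then projection onto the box |p_i| <= mu
    p = np.clip(p + tau * D(y - Dt(p)), -mu, mu)
x = y - Dt(p)                                        # recovered primal signal
print(np.round(x[::50], 2))                          # denoised plateau levels
```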
Sci—Sat AM: Stereo — 02: Implementation of a VMAT class solution for kidney SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonier, M; Lalani, N; Korol, R
An emerging treatment option for inoperable primary renal cell carcinoma and oligometastatic adrenal lesions is stereotactic body radiation therapy (SBRT). At our center, kidney SBRT treatments were originally planned with IMRT. The goal was to plan future patients using VMAT to improve treatment delivery efficiency. The purpose of this work was twofold: 1) to develop a VMAT class solution for the treatment of kidney SBRT; and 2) to assess VMAT plan quality when compared to IMRT plans. Five patients treated with IMRT for kidney SBRT were reviewed and replanned in Pinnacle using a single VMAT arc with a 15° collimator rotation, constrained leaf motion and 4° gantry spacing. In comparison, IMRT plans utilized 7-9 6MV beams, with various collimator rotations and up to 2 non-coplanar beams for maximum organ-at-risk (OAR) sparing. Comparisons were made concerning target volume conformity, homogeneity, dose to OARs, treatment time and monitor units (MUs). There was no difference in MUs; however, VMAT reduced the treatment time from 13.0±2.6 min, for IMRT, to 4.0±0.9 min. The collection of target and OAR constraints and SmartArc parameters produced a class solution that generated VMAT plans with increased target homogeneity and a 95% conformity index below 1.2. In general, the VMAT plans displayed a reduced maximum point dose to nearby OARs with increased intermediate dose to distant OARs. Overall, the introduction of a VMAT class solution for kidney SBRT improves efficiency by reducing treatment planning and delivery time.
Long-term and seasonal Caspian Sea level change from satellite gravity and altimeter measurements
NASA Astrophysics Data System (ADS)
Chen, J. L.; Wilson, C. R.; Tapley, B. D.; Save, H.; Cretaux, Jean-Francois
2017-03-01
We examine recent Caspian Sea level change by using both satellite radar altimetry and satellite gravity data. The altimetry record for 2002-2015 shows a declining level at a rate that is approximately 20 times greater than the rate of global sea level rise. Seasonal fluctuations are also much larger than in the world oceans. With a clearly defined geographic region and dominant signal magnitude, variations in the sea level and associated mass changes provide an excellent way to compare various approaches for processing satellite gravity data. An altimeter time series derived from several successive satellite missions is compared with mass measurements inferred from Gravity Recovery and Climate Experiment (GRACE) data in the form of both spherical harmonic (SH) and mass concentration (mascon) solutions. After correcting for spatial leakage in GRACE SH estimates by constrained forward modeling and accounting for steric and terrestrial water processes, GRACE and altimeter observations are in complete agreement at seasonal and longer time scales, including linear trends. This demonstrates that removal of spatial leakage error in GRACE SH estimates is both possible and critical to improving their accuracy and spatial resolution. Excellent agreement between GRACE and altimeter estimates also provides confirmation of steric Caspian Sea level change estimates. GRACE mascon estimates (both the Jet Propulsion Laboratory (JPL) coastline resolution improvement version 2 solution and the Center for Space Research (CSR) regularized) are also affected by leakage error. After leakage corrections, both JPL and CSR mascon solutions also agree well with altimeter observations. However, accurate quantification of leakage bias in GRACE mascon solutions is a more challenging problem.
2003-05-01
[Record fragment] Assured access to space requires both contractors, at least until sustainable performance is demonstrated; the EELV program has occurred in a highly cost-constrained environment, and necessary actions should be taken to assure that both contractors remain viable.
Recovering a Probabilistic Knowledge Structure by Constraining Its Parameter Space
ERIC Educational Resources Information Center
Stefanutti, Luca; Robusto, Egidio
2009-01-01
In the Basic Local Independence Model (BLIM) of Doignon and Falmagne ("Knowledge Spaces," Springer, Berlin, 1999), the probabilistic relationship between the latent knowledge states and the observable response patterns is established by the introduction of a pair of parameters for each of the problems: a lucky guess probability and a careless…
NASA Technical Reports Server (NTRS)
Johnson, M.; Label, K.; McCabe, J.; Powell, W.; Bolotin, G.; Kolawa, E.; Ng, T.; Hyde, D.
2007-01-01
Implementation of challenging Exploration Systems Missions Directorate objectives and strategies can be constrained by onboard computing capabilities and power efficiencies. The Radiation Hardened Electronics for Space Environments (RHESE) High Performance Processors for Space Environments project will address this challenge by significantly advancing the sustained throughput and processing efficiency of high-performance radiation-hardened processors, targeting delivery of products by the end of FY12.
Surface Exposure Ages of Space-Weathered Grains from Asteroid 25143 Itokawa
NASA Technical Reports Server (NTRS)
Keller, L. P.; Berger, E. L.; Christoffersen, R.
2015-01-01
Space weathering processes such as solar wind ion irradiation and micrometeorite impacts are widely known to alter the properties of regolith materials exposed on airless bodies. The rates of space weathering processes, however, are poorly constrained for asteroid regoliths, with recent estimates ranging over many orders of magnitude. The return of surface samples by JAXA's Hayabusa mission to asteroid 25143 Itokawa and their laboratory analysis provide "ground truth" to anchor the timescales for space weathering processes on airless bodies.
Space ventures and society long-term perspectives
NASA Technical Reports Server (NTRS)
Brown, W. M.
1985-01-01
A futuristic evaluation of mankind's potential long-term future in space is presented. Progress in space will not be inhibited by shortages of the Earth's physical resources; long-term economic growth is more likely to be constrained by changing social values, management styles, or government competence than by resource limits. Future technological progress is likely to accelerate with an emphasis on international cooperation, making possible such large joint projects as lunar colonies or space stations on Mars. The long-term future in space looks exceedingly bright even in relatively pessimistic scenarios. The principal driving forces will be technological progress, commercial and public-oriented satellites, space industrialization, space travel, and eventually space colonization.
Constraining new physics models with isotope shift spectroscopy
NASA Astrophysics Data System (ADS)
Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias
2017-07-01
Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B -L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights and provides comprehensive feedback solutions online, though it is trained offline.
Trajectory optimization for the National aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1993-01-01
While continuing the application of the inverse dynamics approach in obtaining the optimal numerical solutions, the research during the past six months has been focused on the formulation and derivation of closed-form solutions for constrained hypersonic flight trajectories. Since it was found in the research of the first year that a dominant portion of the optimal ascent trajectory of the aerospace plane is constrained by dynamic pressure and heating constraints, the application of the analytical solutions significantly enhances the efficiency of trajectory optimization, provides better insight into the trajectory, and conceivably has great potential for guidance of the vehicle. Work of this period has been reported in four technical papers. Two of the papers were presented at the AIAA Guidance, Navigation, and Control Conference (Hilton Head, SC, August, 1992) and the Fourth International Aerospace Planes Conference (Orlando, FL, December, 1992). The other two papers have been accepted for publication by the Journal of Guidance, Control, and Dynamics, and will appear in 1993. This report briefly summarizes the work done in the past six months and work currently underway.
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
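The alternating-approximation strategy (truncated, non-negativity-projected least-squares descents on the object and the PSF in turn, with the two inner iteration counts acting as regularization parameters) can be sketched on a toy 1D problem with periodic convolution. Everything below (signal, PSF widths, iteration counts) is invented, and the simple projected gradient descent stands in for the paper's +SOR scheme.

```python
import numpy as np

# Toy 1D blind deconvolution with circular convolution via the FFT.
n = 128
x_true = np.zeros(n); x_true[[30, 60, 61, 95]] = [1.0, 2.0, 1.5, 1.0]

def gauss_psf(width):
    h = np.exp(-0.5 * ((np.arange(n) - n // 2) / width) ** 2)
    return np.roll(h / h.sum(), -n // 2)            # center kernel at index 0

F, Fi = np.fft.rfft, np.fft.irfft
d = Fi(F(x_true) * F(gauss_psf(2.0)), n)            # blur with the true PSF

def steps(u, V, n_iter):
    """Truncated projected-gradient descent on ||Fi(F(u)*V) - d||^2, u >= 0."""
    lr = 0.5 / np.abs(V).max() ** 2                 # spectral step-size bound
    for _ in range(n_iter):
        r = Fi(F(u) * V, n) - d                     # residual
        u = np.maximum(u - lr * Fi(np.conj(V) * F(r), n), 0.0)
    return u

x, h = np.full(n, d.mean()), gauss_psf(4.0)         # rough initial guesses
for outer in range(40):
    x = steps(x, F(h), n_iter=10)                   # object update, PSF fixed
    h = steps(h, F(x), n_iter=5)                    # PSF update, object fixed
    h /= h.sum()                                    # keep the PSF normalized
print(np.round(x[[30, 60, 61, 95]], 2))             # compare with 1, 2, 1.5, 1
```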
Kandel, Saugat; Salomon-Ferrer, Romelia; Larsen, Adrien B; Jain, Abhinandan; Vaidehi, Nagarajan
2016-01-28
The Internal Coordinate Molecular Dynamics (ICMD) method is an attractive molecular dynamics (MD) method for studying the dynamics of bonded systems such as proteins and polymers. It offers a simple venue for coarsening the dynamics model of a system at multiple hierarchical levels. For example, large scale protein dynamics can be studied using torsional dynamics, where large domains or helical structures can be treated as rigid bodies and the loops connecting them as flexible torsions. ICMD with such a dynamic model of the protein, combined with enhanced conformational sampling method such as temperature replica exchange, allows the sampling of large scale domain motion involving high energy barrier transitions. Once these large scale conformational transitions are sampled, all-torsion, or even all-atom, MD simulations can be carried out for the low energy conformations sampled via coarse grained ICMD to calculate the energetics of distinct conformations. Such hierarchical MD simulations can be carried out with standard all-atom forcefields without the need for compromising on the accuracy of the forces. Using constraints to treat bond lengths and bond angles as rigid can, however, distort the potential energy landscape of the system and reduce the number of dihedral transitions as well as conformational sampling. We present here a two-part solution to overcome such distortions of the potential energy landscape with ICMD models. To alleviate the intrinsic distortion that stems from the reduced phase space in torsional MD, we use the Fixman compensating potential. To additionally alleviate the extrinsic distortion that arises from the coupling between the dihedral angles and bond angles within a force field, we propose a hybrid ICMD method that allows the selective relaxing of bond angles. This hybrid ICMD method bridges the gap between all-atom MD and torsional MD. We demonstrate with examples that these methods together offer a solution to eliminate the potential energy distortions encountered in constrained ICMD simulations of peptide molecules.
Quadratic Optimization in the Problems of Active Control of Sound
NASA Technical Reports Server (NTRS)
Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L(sub 1). By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L(sub 2) norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L(sub 2) minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions with the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L(sub 2) differ drastically from those obtained in the sense of L(sub 1).
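In discrete form, the L(sub 2) problem reduces to linear least squares: with fewer cancellation points than control sources the cancellation system is underdetermined, and numpy's lstsq returns the minimum-norm amplitudes. A minimal sketch with a hypothetical geometry, where G is a simplified stand-in for the true 2D free-space Green's function (i/4)H0^(1)(kr):

```python
import numpy as np

# Choose complex monopole amplitudes on the perimeter of the protected region
# so the anti-sound cancels the noise field at interior points, with minimum
# L2 source norm. All geometry and the Green's-function stand-in are invented.
k = 2 * np.pi
G = lambda r: 0.25j * np.exp(1j * k * r)

rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
ctrl = np.column_stack([np.cos(theta), np.sin(theta)])   # 24 controls, radius 1
obs = rng.uniform(-0.6, 0.6, size=(12, 2))               # 12 protected points
src = np.array([3.0, 0.0])                               # exterior noise source

f = G(np.linalg.norm(obs - src, axis=1))                 # noise field at obs
A = G(np.linalg.norm(obs[:, None, :] - ctrl[None, :, :], axis=2))
g, *_ = np.linalg.lstsq(A, -f, rcond=None)               # min-norm amplitudes
print(np.abs(A @ g + f).max(), np.linalg.norm(g))        # residual, source norm
```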
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
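The report's graphical two-patient-type example maps directly onto a linear program. A hypothetical rendering with invented benefits, resource usages and limits:

```python
from scipy.optimize import linprog

# Choose how many "regular" (r) and "severe" (s) patients to treat to maximize
# total health benefit subject to clinic time and budget limits. All numbers
# are invented for illustration.
benefit = [-2.0, -5.0]          # linprog minimizes, so negate benefits
A_ub = [[1.0, 2.0],             # hours per patient        <= 100 hours
        [1.0, 4.0]]             # cost per patient (x100)  <= 160 units
b_ub = [100.0, 160.0]
res = linprog(benefit, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)          # optimal mix (40 regular, 30 severe), benefit
```

At the optimum both constraints bind, which is the "corner of the feasible region" intuition the graphical model conveys.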
Solution techniques for transient stability-constrained optimal power flow – Part II
Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu; ...
2017-06-28
Transient stability-constrained optimal power flow is an important emerging problem, with power systems pushed to their limits for economic benefits, denser and larger interconnected systems, and reduced inertia due to the expected proliferation of renewable energy resources. In this study, two more approaches are presented: single machine equivalent and computational intelligence. Also discussed are various application areas and future directions in this research area. Finally, a comprehensive resource for the available literature, publicly available test systems, and relevant numerical libraries is provided.
Phase-field model of domain structures in ferroelectric thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y. L.; Hu, S. Y.; Liu, Z. K.
A phase-field model for predicting the coherent microstructure evolution in constrained thin films is developed. It employs an analytical elastic solution derived for a constrained film with arbitrary eigenstrain distributions. The domain structure evolution during a cubic → tetragonal proper ferroelectric phase transition is studied. It is shown that the model is able to simultaneously predict the effects of substrate constraint and temperature on the volume fractions of domain variants, domain-wall orientations, domain shapes, and their temporal evolution. © 2001 American Institute of Physics.
An adaptive finite element method for the inequality-constrained Reynolds equation
NASA Astrophysics Data System (ADS)
Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha
2018-07-01
We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.
NASA Technical Reports Server (NTRS)
Hargrove, A.
1982-01-01
Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.
Configuration of the thermal landscape determines thermoregulatory performance of ectotherms
Sears, Michael W.; Angilletta, Michael J.; Schuler, Matthew S.; Borchert, Jason; Dilliplane, Katherine F.; Stegman, Monica; Rusch, Travis W.; Mitchell, William A.
2016-01-01
Although most organisms thermoregulate behaviorally, biologists still cannot easily predict whether mobile animals will thermoregulate in natural environments. Current models fail because they ignore how the spatial distribution of thermal resources constrains thermoregulatory performance over space and time. To overcome this limitation, we modeled the spatially explicit movements of animals constrained by access to thermal resources. Our models predict that ectotherms thermoregulate more accurately when thermal resources are dispersed throughout space than when these resources are clumped. This prediction was supported by thermoregulatory behaviors of lizards in outdoor arenas with known distributions of environmental temperatures. Further, simulations showed how the spatial structure of the landscape qualitatively affects responses of animals to climate. Biologists will need spatially explicit models to predict impacts of climate change on local scales. PMID:27601639
Redshift Space Distortion on the Small Scale Clustering of Structure
NASA Astrophysics Data System (ADS)
Park, Hyunbae; Sabiu, Cristiano; Li, Xiao-dong; Park, Changbom; Kim, Juhan
2018-01-01
The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. The shape of the two-point correlation of galaxies exhibits a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. In our previous works, we made use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. The current work continues this strategy of constraining cosmological parameters using redshift-invariant physical quantities. We now aim to understand the redshift evolution of the full shape of the small-scale, anisotropic galaxy clustering and give a firmer theoretical footing to our previous works.
Direct reconstruction of dark energy.
Clarkson, Chris; Zunckel, Caroline
2010-05-28
An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot only be based on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z≲1 using just SNAP-quality data.
Circuit design advances for ultra-low power sensing platforms
NASA Astrophysics Data System (ADS)
Wieckowski, Michael; Dreslinski, Ronald G.; Mudge, Trevor; Blaauw, David; Sylvester, Dennis
2010-04-01
This paper explores the recent advances in circuit structures and design methodologies that have enabled ultra-low power sensing platforms and opened up a host of new applications. Central to this theme is the development of Near Threshold Computing (NTC) as a viable design space for low power sensing platforms. In this paradigm, the system's supply voltage is approximately equal to the threshold voltage of its transistors. Operating in this "near-threshold" region provides much of the energy savings previously demonstrated for subthreshold operation while offering more favorable performance and variability characteristics. This makes NTC applicable to a broad range of power-constrained computing segments including energy constrained sensing platforms. This paper explores the barriers to the adoption of NTC and describes current work aimed at overcoming these obstacles in the circuit design space.
Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.
Cabrera, M E; Casas, J A; Delgado, A
2012-01-13
The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, and therefore an upper bound on the Higgs mass translates into an upper bound on the masses of superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^11 GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.
NASA Astrophysics Data System (ADS)
Wang, Yong-Long; Lai, Meng-Yun; Wang, Fan; Zong, Hong-Shi; Chen, Yan-Feng
2018-04-01
Investigating the geometric effects resulting from the detailed behaviors of the confining potential, we consider square and circular confinements to constrain a particle to a space curve. We find a torsion-induced geometric potential and a curvature-induced geometric momentum only in the square case, and a geometric gauge potential only in the circular case. In the presence of an electromagnetic field, a geometrically induced magnetic moment couples with the magnetic field as an induced Zeeman coupling, again only for the circular confinement. When spin-orbit interaction is considered, we find some additional terms for the spin-orbit coupling, induced not only by torsion but also by curvature. Moreover, in the circular case, the spin also couples with an intrinsic angular momentum, which describes the azimuthal motions mapped on the space curve. As an important conclusion for the thin-layer quantization approach, some substantial geometric effects result from the confinement boundaries. Finally, these results are proved on a helical wire.
Photogrammetry on glaciers: Old and new knowledge
NASA Astrophysics Data System (ADS)
Pfeffer, W. T.; Welty, E.; O'Neel, S.
2014-12-01
In the past few decades terrestrial photogrammetry has become a widely used tool for glaciological research, brought about in part by the proliferation of high-quality, low-cost digital cameras, dramatic increases in the image-processing power of computers, and very innovative progress in image processing, much of which has come from computer vision research and from the computer gaming industry. At present, glaciologists have developed their capacity to gather images much further than their ability to process them. Many researchers have accumulated vast inventories of imagery but have no efficient means to extract the data they desire from them. In many cases these are single-image time series where the processing limitation lies in the paucity of methods to obtain 3-dimensional object-space information from measurements in 2-dimensional image space; in other cases camera pairs have been operated but no automated means is at hand for conventional stereometric analysis of many thousands of image pairs. Often the processing task is further complicated by weak camera geometry or ground control distribution, either of which will compromise the quality of 3-dimensional object-space solutions. Solutions exist for many of these problems, found sometimes among the latest computer vision results, and sometimes buried in decades-old pre-digital terrestrial photogrammetric literature. Other problems, particularly those arising from poorly constrained or underdetermined camera and ground control geometry, may be unsolvable. Small-scale, ground-based photography and photogrammetry of glaciers has grown over the past few decades in an organic and disorganized fashion, with much duplication of effort and little coordination or sharing of knowledge among researchers. Given the utility of terrestrial photogrammetry, its low cost (if properly developed and implemented), and the substantial value of the information to be had from it, further effort to share knowledge and methods would be a great benefit for the community. We consider some of the main problems to be solved, and aspects of how optimal knowledge sharing might be accomplished.
Advanced Passive Microwave Radiometer Technology for GPM Mission
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Im, Eastwood; Kummerow, Christian; Principe, Caleb; Ruf, Christoper; Wilheit, Thomas; Starr, David (Technical Monitor)
2002-01-01
An interferometer-type passive microwave radiometer based on MMIC receiver technology and a thinned-array antenna design is being developed under the Instrument Incubator Program (IIP) on a project entitled the Lightweight Rainfall Radiometer (LRR). The prototype single-channel aircraft instrument will be ready for first testing in the 2nd quarter of 2003, for deployment on the NASA DC-8 aircraft and in a ground configuration; this version measures at 10.7 GHz in a crosstrack imaging mode. The design for a two (2) frequency preliminary space flight model at 19 and 35 GHz (also in crosstrack imaging mode) has also been completed, with design features that would enable it to fly in a bore-sighted configuration with a new dual-frequency space radar (DPR) under development at the Communications Research Laboratory (CRL) in Tokyo, Japan. The DPR will be flown as one of two primary instruments on the Global Precipitation Measurement (GPM) mission's core satellite in the 2007 time frame. The dual-frequency space flight design of the LRR matches the APR frequencies and will be proposed as an ancillary instrument on the GPM core satellite to advance space-based precipitation measurement by enabling better microphysical characterization and coincident volume data gathering for exercising combined algorithm techniques which make use of both radar backscatter and radiometer attenuation information to constrain rain-rate solutions within a physical algorithm context. This talk will discuss the design features, performance capabilities, applications plans, and conical/polarimetric imaging possibilities for the LRR, as well as briefly summarize the project status and schedule.
Rigidity of Major Plates and Microplates Estimated From GPS Solution GPS2006.0
NASA Astrophysics Data System (ADS)
Kogan, M. G.; Steblov, G. M.
2006-05-01
Here we analyze the rigidity of eight major lithospheric plates using our global GPS solution GPS2006.0. We included all daily observations in the interval 1995.0 to 2006.0 collected at IGS stations, as well as observations at many important stations not included in IGS. The loose multiyear solution GPS2006.0 is based on daily solutions by the GAMIT software, performed at SOPAC and at Columbia University; those daily solutions were combined by a Kalman filter (GLOBK software) into a loose multiyear solution. The constrained solution for station positions and velocities was obtained without a conventional reference frame; instead, we applied translation and rotation in order to best fit the zero velocities of 76 stations in stable plate cores, excluding the regions of postglacial rebound. Simultaneously, we estimated relative plate rotation vectors (RV) and the origin translation rate (OTR), and then corrected station velocities for it. Therefore, the velocities in GPS2006.0 are unaffected by the OTR error of ITRF2000 conventionally used to constrain a loose solution. The 1-sigma plate-residual velocity in a stable plate core is less than 1 mm/yr for the plates Eurasia, Pacific, North and South America, Nubia, Australia, and Antarctica; it is 1.4 mm/yr for the Indian plate, most probably because of poorer data quality. Plate residuals at other established plates (Arabia, Nazca, Caribbean, Philippine) were not assessed for lack of observations. From our analysis, an upper bound for the mobility of the plate inner area is 1 mm/yr. Plate-residual GPS velocities for several hypothesized microplates in east Asia, such as Okhotsk, Amuria, and South China, are 3-4 times higher; corresponding strain rates for these microplates are an order of magnitude higher than for Eurasia, North America, and other large plates.
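The rigid-plate estimation step admits a compact sketch: for a station at position r the plate model predicts v = ω × r, which is linear in the rotation (Euler) vector ω, so ω follows from least squares over all stations. Synthetic example data, not the GPS2006.0 solution:

```python
import numpy as np

# Estimate a plate rotation vector omega from station velocities v = omega x r.
R = 6.371e6                                       # Earth radius, m
rng = np.random.default_rng(5)
lon = np.deg2rad(rng.uniform(0, 60, 10))
lat = np.deg2rad(rng.uniform(20, 60, 10))
r = R * np.column_stack([np.cos(lat) * np.cos(lon),
                         np.cos(lat) * np.sin(lon),
                         np.sin(lat)])             # station positions (ECEF)

omega_true = np.array([0.1, -0.2, 0.3]) * 1e-15   # rad/s (~ deg/Myr scale)
v = np.cross(omega_true, r)                       # noise-free velocities

# Cross product as a matrix: v = (-[r]_x) omega, stacked over stations.
A = np.vstack([np.array([[0.0, ri[2], -ri[1]],
                         [-ri[2], 0.0, ri[0]],
                         [ri[1], -ri[0], 0.0]]) for ri in r])
omega_hat, *_ = np.linalg.lstsq(A, v.ravel(), rcond=None)
print(omega_hat / omega_true)                     # ~ [1, 1, 1]
```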
Controllability of switched singular mix-valued logical control networks with constraints
NASA Astrophysics Data System (ADS)
Deng, Lei; Gong, Mengmeng; Zhu, Peiyong
2018-03-01
The present paper investigates the controllability problem of switched singular mix-valued logical control networks (SSMLCNs) with constraints on states and controls. First, using the semi-tensor product (STP) of matrices, the SSMLCN is expressed in an algebraic form, based on which a necessary and sufficient condition is given for the uniqueness of the solution of SSMLCNs. Second, a necessary and sufficient criterion is derived for the controllability of constrained SSMLCNs, by converting a constrained SSMLCN into a parallel constrained switched mix-valued logical control network. Third, an algorithm is presented to design a proper switching sequence and a control scheme that force a state to a reachable state. Finally, a numerical example is given to demonstrate the efficiency of the results obtained in this paper.
Improving Service Management in the Internet of Things
Sammarco, Chiara; Iera, Antonio
2012-01-01
In the Internet of Things (IoT) research arena, many efforts are devoted to adapting existing IP standards to emerging IoT nodes. This is the direction followed by three Internet Engineering Task Force (IETF) Working Groups, which paved the way for research on IP-based constrained networks. Through a simplification of the whole TCP/IP stack, resource-constrained nodes become direct interlocutors of application-level entities at every point of the network. In this paper we analyze some side effects of this solution when large amounts of data must be transmitted. In particular, we conduct a performance analysis of the Constrained Application Protocol (CoAP), a widely accepted web transfer protocol for the Internet of Things, and propose a service management enhancement that improves the exploitation of network and node resources. It is specifically designed for constrained nodes in the abovementioned conditions and proves able to significantly improve node energy performance in the presence of large resource representations (hence, large data transmissions).
Development and application of a unified balancing approach with multiple constraints
NASA Technical Reports Server (NTRS)
Zorzi, E. S.; Lee, C. C.; Giordano, J. C.
1985-01-01
The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least-squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least-squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and the application of modal trial weight ratios is also described.
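The "single solution form" of the Lagrange-multiplier formulation is the bordered (KKT) version of the least-squares normal equations. A minimal complex-valued sketch with invented influence coefficients, constraining one balance plane to receive no correction weight:

```python
import numpy as np

# Constrained least-squares balancing: minimize ||e0 + C w||^2 over correction
# weights w subject to A w = b, via the bordered normal equations. C, e0, A, b
# are hypothetical example data.
rng = np.random.default_rng(2)
m, n = 6, 3                                   # vibration probes, balance planes
C = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))  # influence coeffs
e0 = rng.normal(size=m) + 1j * rng.normal(size=m)           # initial vibration
A = np.array([[1.0 + 0j, 0, 0]])              # constraint: plane 1 untouched
b = np.array([0.0 + 0j])

# Stationarity 2 C^H (e0 + C w) + A^H lam = 0 together with A w = b:
KKT = np.block([[2 * C.conj().T @ C, A.conj().T],
                [A, np.zeros((1, 1), complex)]])
rhs = np.concatenate([-2 * C.conj().T @ e0, b])
sol = np.linalg.solve(KKT, rhs)
w = sol[:n]                                   # correction weights (w[0] = 0)
print(np.linalg.norm(e0 + C @ w), w)          # residual vibration, weights
```

Dropping the constraint rows recovers the ordinary (weighted) least-squares influence coefficient solution, mirroring the reduction noted in the abstract.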
Constraining generalized non-local cosmology from Noether symmetries.
Bahamonde, Sebastian; Capozziello, Salvatore; Dialektopoulos, Konstantinos F
2017-01-01
We study a generalized non-local theory of gravity which, in specific limits, can become either the curvature non-local or teleparallel non-local theory. Using the Noether symmetry approach, we find that the coupling functions coming from the non-local terms are constrained to be either exponential or linear in form. It is well known that in some non-local theories, a certain kind of exponential non-local couplings is needed in order to achieve a renormalizable theory. In this paper, we explicitly show that this kind of coupling does not need to be introduced by hand, instead, it appears naturally from the symmetries of the Lagrangian in flat Friedmann-Robertson-Walker cosmology. Finally, we find de Sitter and power-law cosmological solutions for different non-local theories. The symmetries for the generalized non-local theory are also found and some cosmological solutions are also achieved using the full theory.
ENVIRONMENTAL SCIENCE. Profiling risk and sustainability in coastal deltas of the world.
Tessler, Z D; Vörösmarty, C J; Grossberg, M; Gladkova, I; Aizenman, H; Syvitski, J P M; Foufoula-Georgiou, E
2015-08-07
Deltas are highly sensitive to increasing risks arising from local human activities, land subsidence, regional water management, global sea-level rise, and climate extremes. We quantified changing flood risk due to extreme events using an integrated set of global environmental, geophysical, and social indicators. Although risks are distributed across all levels of economic development, wealthy countries effectively limit their present-day threat by gross domestic product-enabled infrastructure and coastal defense investments. In an energy-constrained future, such protections will probably prove to be unsustainable, raising relative risks by four to eight times in the Mississippi and Rhine deltas and by one-and-a-half to four times in the Chao Phraya and Yangtze deltas. The current emphasis on short-term solutions for the world's deltas will greatly constrain options for designing sustainable solutions in the long term. Copyright © 2015, American Association for the Advancement of Science.
Constraining generalized non-local cosmology from Noether symmetries
NASA Astrophysics Data System (ADS)
Bahamonde, Sebastian; Capozziello, Salvatore; Dialektopoulos, Konstantinos F.
2017-11-01
We study a generalized non-local theory of gravity which, in specific limits, can become either the curvature non-local or teleparallel non-local theory. Using the Noether symmetry approach, we find that the coupling functions coming from the non-local terms are constrained to be either exponential or linear in form. It is well known that in some non-local theories, a certain kind of exponential non-local couplings is needed in order to achieve a renormalizable theory. In this paper, we explicitly show that this kind of coupling does not need to be introduced by hand, instead, it appears naturally from the symmetries of the Lagrangian in flat Friedmann-Robertson-Walker cosmology. Finally, we find de Sitter and power-law cosmological solutions for different non-local theories. The symmetries for the generalized non-local theory are also found and some cosmological solutions are also achieved using the full theory.
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of RMP is obtained by applying the trace minimization technique and the singular value decomposition with matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performances of proposed STAP approaches with respect to both the ideal and practical scenarios, involving Doppler-ambiguous clutter ridges, spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain a near-ideal STAP performance; whereas the spatio-temporal sparsity based approach needs a considerably small number of secondary data.
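The low-rank-plus-diagonal idea can be illustrated with eigenvalue shrinkage on a synthetic sample covariance, a simple stand-in for the paper's trace-minimization and matrix-shrinkage solution of the RMP; dimensions, clutter spectrum and the thresholding rule below are invented.

```python
import numpy as np

# Recover the low-rank "clutter" part of a sample covariance R = low-rank + noise.
rng = np.random.default_rng(3)
N, r, K = 32, 4, 200                      # DOFs, clutter rank, training snapshots
U = np.linalg.qr(rng.normal(size=(N, r)))[0]
clutter_cov = U @ np.diag([100.0, 60.0, 30.0, 10.0]) @ U.T
L_chol = np.linalg.cholesky(clutter_cov + 1e-9 * np.eye(N))
snaps = L_chol @ rng.normal(size=(N, K)) + rng.normal(size=(N, K))  # + noise
R = snaps @ snaps.T / K                   # sample covariance

w, V = np.linalg.eigh(R)
sigma2 = np.median(w)                     # crude noise-floor estimate
edge = sigma2 * (1 + np.sqrt(N / K))**2   # Marchenko-Pastur bulk edge
w_shrunk = np.where(w > 1.1 * edge, w - sigma2, 0.0)  # shrink/threshold
R_lowrank = (V * w_shrunk) @ V.T
print(int(np.sum(w_shrunk > 0)), "estimated clutter rank (true rank:", r, ")")
```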
Parsec's astrometry direct approaches .
NASA Astrophysics Data System (ADS)
Andrei, A. H.
Parallaxes - and hence the fundamental establishment of stellar distances - rank among the oldest, most central, and hardest of astronomical determinations. They are arguably amongst the most essential too. The direct approach to obtaining trigonometric parallaxes, using a constrained set of equations to derive positions, proper motions, and parallaxes, has been labeled as risky. Properly so, because the axis of the parallactic apparent ellipse is smaller than one arcsec even for the nearest stars, and just a fraction of its perimeter can be followed. Thus the classical approach is to linearize the description by locking the solution to a set of precise positions of the Earth at the instants of observation, rather than to the dynamics of its orbit, and to examine closely the never numerous points available. The PARSEC program targets the parallaxes of 143 brown dwarfs. Five years of observations of the fields were taken with the WFI camera at the ESO 2.2m telescope in Chile. The goal is to provide a statistically significant number of trigonometric parallaxes to BD sub-classes from L0 to T7. Taking advantage of the large, regularly spaced set of observations, here we take the risky approach of fitting an ellipse in observed ecliptic coordinates to derive the parallaxes. We also combine the solutions from different centroiding methods, widely proven in prior astrometric investigations. As each of those methods assesses diverse properties of the PSFs, they are taken as independent measurements and combined into a weighted least-squares general solution.
Gas in the protoplanetary disc of HD 169142: Herschel's view
NASA Astrophysics Data System (ADS)
Meeus, G.; Pinte, C.; Woitke, P.; Montesinos, B.; Mendigutía, I.; Riviere-Marichalar, P.; Eiroa, C.; Mathews, G. S.; Vandenbussche, B.; Howard, C. D.; Roberge, A.; Sandell, G.; Duchêne, G.; Ménard, F.; Grady, C. A.; Dent, W. R. F.; Kamp, I.; Augereau, J. C.; Thi, W. F.; Tilling, I.; Alacid, J. M.; Andrews, S.; Ardila, D. R.; Aresu, G.; Barrado, D.; Brittain, S.; Ciardi, D. R.; Danchi, W.; Fedele, D.; de Gregorio-Monsalvo, I.; Heras, A.; Huelamo, N.; Krivov, A.; Lebreton, J.; Liseau, R.; Martin-Zaidi, C.; Mora, A.; Morales-Calderon, M.; Nomura, H.; Pantin, E.; Pascucci, I.; Phillips, N.; Podio, L.; Poelman, D. R.; Ramsay, S.; Riaz, B.; Rice, K.; Solano, E.; Walker, H.; White, G. J.; Williams, J. P.; Wright, G.
2010-07-01
In an effort to simultaneously study the gas and dust components of the disc surrounding the young Herbig Ae star HD 169142, we present far-IR observations obtained with the PACS instrument onboard the Herschel Space Observatory. This work is part of the open time key program GASPS, which is aimed at studying the evolution of protoplanetary discs. To constrain the gas properties in the outer disc, we observed the star at several key gas-lines, including [OI] 63.2 and 145.5 μm, [CII] 157.7 μm, CO 72.8 and 90.2 μm, and o-H2O 78.7 and 179.5 μm. We only detect the [OI] 63.2 μm line in our spectra, and derive upper limits for the other lines. We complement our data set with PACS photometry and 12/13CO data obtained with the Submillimeter Array. Furthermore, we derive accurate stellar parameters from optical spectra and UV to mm photometry. We model the dust continuum with the 3D radiative transfer code MCFOST and use this model as an input to analyse the gas lines with the thermo-chemical code ProDiMo. Our dataset is consistent with a simple model in which the gas and dust are well-mixed in a disc with a continuous structure between 20 and 200 AU, but this is not a unique solution. Our modelling effort allows us to constrain the gas-to-dust mass ratio as well as the relative abundance of the PAHs in the disc by simultaneously fitting the lines of several species that originate in different regions. Our results are inconsistent with a gas-poor disc with a large UV excess; a gas mass of 5.0 ± 2.0 × 10-3 M⊙ is still present in this disc, in agreement with earlier CO observations. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cembranos, Jose A. R.; Diaz-Cruz, J. Lorenzo; Prado, Lilian
Dark Matter direct detection experiments are able to exclude interesting parameter space regions of particle models which predict an important amount of thermal relics. We use recent data to constrain the branon model and to compute the region that is favored by CDMS measurements. Within this work, we also update present colliders constraints with new studies coming from the LHC. Despite the present low luminosity, it is remarkable that for heavy branons, CMS and ATLAS measurements are already more constraining than previous analyses performed with TEVATRON and LEP data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Sweta; Nelemans, Gijs, E-mail: s.shah@astro.ru.nl
The space-based gravitational wave (GW) detector, evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints on the physical parameters of white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements, such as the inclination, radial velocities, distances, and/or individual masses, with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance to within factors of two to ∼40%. The single-line spectroscopic data complemented with distance constrain the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that EM information on distance and/or radial velocity is the most useful for improving the estimates of the secondary mass, inclination, and/or distance.
Sequence-specific unusual (1→2)-type helical turns in alpha/beta-hybrid peptides.
Prabhakaran, Panchami; Kale, Sangram S; Puranik, Vedavati G; Rajamohanan, P R; Chetina, Olga; Howard, Judith A K; Hofmann, Hans-Jörg; Sanjayan, Gangadhar J
2008-12-31
This article describes novel conformationally ordered alpha/beta-hybrid peptides consisting of repeating L-proline-anthranilic acid building blocks. These oligomers adopt a compact, right-handed helical architecture determined by the intrinsic conformational preferences of the individual amino acid residues. The striking feature of these oligomers is their ability to display an unusual periodic pseudo beta-turn network of nine-membered hydrogen-bonded rings formed in the forward direction of the sequence by 1→2 amino acid interactions, both in the solid state and in solution. Conformational investigations of several of these oligomers by single-crystal X-ray diffraction, solution-state NMR, and ab initio MO theory suggest that the characteristic steric and dihedral angle restraints exerted by proline are essential for stabilizing the unusual pseudo beta-turn network found in these oligomers. Replacing proline by the conformationally flexible analogue alanine (Ala) or by the conformationally more constrained alpha-amino isobutyric acid (Aib) had an adverse effect on the stabilization of this structural architecture. These findings increase the potential to design novel secondary structure elements profiting from the steric and dihedral angle constraints of the amino acid constituents and help to augment the conformational space available for synthetic oligomer design with diverse backbone structures.
Tests of the Gröbner Basis Solution for Lightning Ground Flash Fraction Retrieval
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard; Attele, Rohan
2011-01-01
Satellite lightning imagers such as the NASA Tropical Rainfall Measuring Mission Lightning Imaging Sensor (TRMM/LIS) and the future GOES-R Geostationary Lightning Mapper (GLM) are designed to detect total lightning (ground flashes + cloud flashes). However, there is a desire to discriminate ground flashes from cloud flashes from the vantage point of space, since this would enhance the overall information content of the satellite lightning data and likely improve its operational and scientific applications (e.g., in severe weather warning, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters (one of which is the ground flash fraction), a scalar function was minimized by a numerical method. In order to improve this optimization, a Gröbner basis solution was introduced to obtain analytic representations of the model parameters that serve as a refined initialization scheme for the numerical optimization. In this study, we test the efficacy of the Gröbner basis initialization using actual lightning imager measurements and ground flash truth derived from the national lightning network.
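For readers unfamiliar with the technique, the toy sketch below (Python with sympy) shows how a lex-order Gröbner basis triangularizes a small polynomial system so that its roots can be read off analytically and handed to a numerical optimizer as starting points. The system here is hypothetical, standing in for the paper's constrained mixed-exponential model equations.

    import sympy as sp

    a, b = sp.symbols('a b')
    # Hypothetical moment-matching equations standing in for the model equations
    eqs = [a + b - 3, a*b - 2]
    G = sp.groebner(eqs, a, b, order='lex')
    print(list(G))                 # [a + b - 3, b**2 - 3*b + 2]: triangular, solve b then a
    print(sp.solve(eqs, [a, b]))   # [(1, 2), (2, 1)]: analytic starting points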
Traversable geometric dark energy wormholes constrained by astrophysical observations
NASA Astrophysics Data System (ADS)
Wang, Deng; Meng, Xin-he
2016-09-01
In this paper, we introduce astrophysical observations into wormhole research. We investigate the evolution of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist, using the condition ω < -1. As a concrete instance, we study Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find (see Fig. 5 of this work) that when the effective equation of state parameter ω_X < -1 (i.e., z < 0.109), so that the null energy condition (NEC) is clearly violated, wormholes can exist (open). Subsequently, six specific solutions of static, spherically symmetric traversable wormholes supported by the RDE fluids are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; therefore, the exotic matter from the RDE fluids is spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of available models for wormhole research, theoretically reduce the number of wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology.
Mantle viscosity structure constrained by joint inversions of seismic velocities and density
NASA Astrophysics Data System (ADS)
Rudolph, M. L.; Moulik, P.; Lekic, V.
2017-12-01
The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.
NASA Astrophysics Data System (ADS)
Guner, Ozkan; Korkmaz, Alper; Bekir, Ahmet
2017-02-01
Dark soliton solutions for the space-time fractional Sharma-Tasso-Olver and space-time fractional potential Kadomtsev-Petviashvili equations are determined by using the properties of the modified Riemann-Liouville derivative and the fractional complex transform. After reducing both equations to nonlinear ODEs with constant coefficients, the tanh ansatz is substituted into the resultant nonlinear ODEs. The coefficients of the solutions in the ansatz are calculated by computer algebra. Two different solutions are obtained for the Sharma-Tasso-Olver equation, while only one is found for the potential Kadomtsev-Petviashvili equation. The solution profiles are demonstrated in 3D plots over finite domains of time and space.
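A minimal sketch of the ansatz-substitution step (Python with sympy), applied to the toy reduced ODE u' = u² − 1 rather than to the actual reduced STO/KP equations: collecting powers of tanh and forcing each coefficient to vanish yields the ansatz coefficients algebraically, exactly as described above.

    import sympy as sp

    xi, a0, a1 = sp.symbols('xi a0 a1')
    T = sp.tanh(xi)
    u = a0 + a1*T                                 # first-order tanh ansatz
    ode = sp.expand(sp.diff(u, xi) - u**2 + 1)    # toy reduced ODE: u' = u**2 - 1

    # Collect powers of tanh and require every coefficient to vanish
    coeffs = sp.Poly(ode, T).coeffs()
    print(sp.solve(coeffs, [a0, a1], dict=True))
    # -> a0 = 0, a1 = -1 (the kink u = -tanh(xi)); a1 = 0, a0 = ±1 (constant states)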
Robust fuel- and time-optimal control of uncertain flexible space structures
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken
1993-01-01
The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.
Going Boldly Beyond: Progress on NASA's Space Launch System
NASA Technical Reports Server (NTRS)
Singer, Jody; Crumbly, Chris
2013-01-01
NASA's Space Launch System is implementing an evolvable configuration approach to system development in a resource-constrained era. Legacy systems enable non-traditional development funding and contribute to sustainability and affordability. Limited simultaneous developments reduce cost and schedule risk. Phased approach to advanced booster development enables innovation and competition, incrementally demonstrating affordability and performance enhancements. Advanced boosters will provide performance for the most capable heavy lift launcher in history, enabling unprecedented space exploration benefiting all of humanity.
NASA Astrophysics Data System (ADS)
Varène, Thibaut; Hillereau, Paul; Simonnet, Thierry
An increasing number of people are in need of help at home (elderly, isolated and/or disabled persons; people with mild cognitive impairment). Several solutions can be considered to maintain a social link while providing tele-care to these people. Many proposals suggest the use of a robot acting as a companion. In this paper we will look at an environment-constrained solution, its drawbacks (such as latency) and its advantages (flexibility, integration…). A key design choice is to control the robot using a unified Voice over Internet Protocol (VoIP) solution, while addressing bandwidth limitations, providing good communication quality and reducing transmission latency.
Plasma interactions with large spacecraft
NASA Technical Reports Server (NTRS)
Sagalyn, Rita C.; Maynard, Nelson C.
1986-01-01
Space is playing a rapidly expanding role in the conduct of the Air Force mission. Larger, more complex, high-power space platforms are planned and military astronauts will provide a new capability in spacecraft servicing. Interactions of operational satellites with the environment have been shown to degrade space sensors and electronics and to constrain systems operations. The environmental interaction effects grow nonlinearly with increasing size and power. Quantification of the interactions and development of mitigation techniques for systems-limiting interactions is essential to the success of future Air Force space operations.
Insights into Regolith Dynamics from the Irradiation Record Preserved in Hayabusa Samples
NASA Technical Reports Server (NTRS)
Keller, Lindsay P.; Berger, E. L.
2014-01-01
The rates of space weathering processes are poorly constrained for asteroid surfaces, with recent estimates ranging over 5 orders of magnitude. The return of the first surface samples from a space-weathered asteroid by the Hayabusa mission and their laboratory analysis provides "ground truth" to anchor the timescales for space weathering. We determine the rates of space weathering on Itokawa by measuring solar flare track densities and the widths of solar wind damaged rims on grains. These measurements are made possible through novel focused ion beam (FIB) sample preparation methods.
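The rate arithmetic is simple once a track production rate has been calibrated; the sketch below (Python) shows the form of the calculation with placeholder numbers, not the calibrated Itokawa values.

    # Exposure age from a solar flare track density (illustrative numbers only)
    track_density = 1.0e8      # tracks per cm^2, hypothetical measurement
    production_rate = 4.0e4    # tracks per cm^2 per yr at ~1 AU, assumed calibration
    exposure_age_yr = track_density / production_rate
    print(f"surface exposure age ~ {exposure_age_yr:.1e} yr")   # ~2.5e3 yr here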
The general Lie group and similarity solutions for the one-dimensional Vlasov-Maxwell equations
NASA Technical Reports Server (NTRS)
Roberts, D.
1985-01-01
The general Lie point transformation group and the associated reduced differential equations and similarity forms for the solutions are derived here for the coupled (nonlinear) Vlasov-Maxwell equations in one spatial dimension. The case of one species in a background is shown to admit a larger group than the multispecies case. Previous exact solutions are shown to be special cases of the above solutions, and many of the new solutions are found to constrain the form of the distribution function much more than, for example, the BGK solutions do. The individual generators of the Lie group are used to find the possible subgroups. Finally, a simple physical argument is given to show that the asymptotic solution for a one-species, one-dimensional plasma is one of the general similarity solutions.
Toppino, Thomas C; Fearnow-Kenney, Melodie D; Kiepert, Marissa H; Teremula, Amanda C
2009-04-01
Preschoolers, elementary school children, and college students exhibited a spacing effect in the free recall of pictures when learning was intentional. When learning was incidental and a shallow processing task requiring little semantic processing was used during list presentation, young adults still exhibited a spacing effect, but children consistently failed to do so. Children, however, did manifest a spacing effect in incidental learning when an elaborate semantic processing task was used. These results limit the hypothesis that the spacing effect in free recall occurs automatically and constrain theoretical accounts of why the spacing between repetitions affects recall performance.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
The vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions under different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency, compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
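For the LFP core of the model, the sketch below (Python with scipy) shows how a deterministic linear fractional program reduces to an ordinary LP via the classical Charnes-Cooper transformation; the fuzzy credibility constraints of the GFCCFP model are omitted here, and the numbers are illustrative, not the Heihe case-study data.

    import numpy as np
    from scipy.optimize import linprog

    c, alpha = np.array([3.0, 1.0]), 0.0       # numerator: benefit coefficients
    d, beta = np.array([1.0, 1.0]), 1.0        # denominator: resource-use coefficients
    A = np.array([[1.0, 2.0]]); b = np.array([10.0])   # A x <= b, x >= 0

    # Charnes-Cooper: y = t*x, t = 1/(d.x + beta); maximize c.y + alpha*t
    obj = np.concatenate([-c, [-alpha]])               # linprog minimizes
    A_ub = np.hstack([A, -b[:, None]])                 # A y - b t <= 0
    A_eq = np.concatenate([d, [beta]])[None, :]        # d.y + beta*t = 1
    res = linprog(obj, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=[1.0])
    y, t = res.x[:-1], res.x[-1]
    x = y / t
    print("x* =", x, " ratio =", (c @ x + alpha) / (d @ x + beta))   # x* = [10, 0]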
Exploring constrained quantum control landscapes
NASA Astrophysics Data System (ADS)
Moore, Katharine W.; Rabitz, Herschel
2012-10-01
The broad success of optimally controlling quantum systems with external fields has been attributed to the favorable topology of the underlying control landscape, where the landscape is the physical observable as a function of the controls. The control landscape can be shown to contain no suboptimal trapping extrema upon satisfaction of reasonable physical assumptions, but this topological analysis does not hold when significant constraints are placed on the control resources. This work employs simulations to explore the topology and features of the control landscape for pure-state population transfer with a constrained class of control fields. The fields are parameterized in terms of a set of uniformly spaced spectral frequencies, with the associated phases acting as the controls. This restricted family of fields provides a simple illustration for assessing the impact of constraints upon seeking optimal control. Optimization results reveal that the minimum number of phase controls necessary to assure a high yield in the target state has a special dependence on the number of accessible energy levels in the quantum system, revealed from an analysis of the first- and second-order variation of the yield with respect to the controls. When an insufficient number of controls and/or a weak control fluence are employed, trapping extrema and saddle points are observed on the landscape. When the control resources are sufficiently flexible, solutions producing the globally maximal yield are found to form connected "level sets" of continuously variable control fields that preserve the yield. These optimal yield level sets are found to shrink to isolated points on the top of the landscape as the control field fluence is decreased, and further reduction of the fluence turns these points into suboptimal trapping extrema on the landscape. Although constrained control fields can come in many forms beyond the cases explored here, the behavior found in this paper is illustrative of the impacts that constraints can introduce.
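A minimal sketch of this field parameterization (Python; the amplitude, frequency grid, and time window are illustrative): the spectral frequencies sit on a uniform grid and only the phases are varied by the optimizer, while the fluence is set mainly by the amplitude.

    import numpy as np

    def control_field(t, phases, omega0=1.0, domega=0.1, amp=1.0):
        """E(t) = amp * sum_n cos((omega0 + n*domega)*t + phi_n)."""
        omegas = omega0 + domega * np.arange(len(phases))
        return amp * np.cos(np.outer(t, omegas) + phases).sum(axis=1)

    t = np.linspace(0.0, 200.0, 4001)
    phases = np.random.default_rng(0).uniform(0.0, 2*np.pi, size=8)  # the controls
    E = control_field(t, phases)
    print("fluence =", np.trapz(E**2, t))   # phases shape E(t); amp sets the fluence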
Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion
NASA Technical Reports Server (NTRS)
Camarena, Ernesto; Vu, Bruce T.
2011-01-01
The Design Analysis Branch (NE-M1) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver that has been developed by NASA to have the capability to analyze and simulate dynamic motions with up to six Degrees of Freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6-DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation that was prepared and computed was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during the development of the simulation, only half of the physical domain with respect to the symmetry plane was simulated. Then a full solution was prepared and computed. The second simulation was a model of the SLS as it departs from a launch pad under a 20 knot crosswind. This simulation was reduced to Two Dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation. The simulation was then constrained to only allow translations.
Learning Spatially-Smooth Mappings in Non-Rigid Structure from Motion
Hamsici, Onur C.; Gotardo, Paulo F.U.; Martinez, Aleix M.
2013-01-01
Non-rigid structure from motion (NRSFM) is a classical underconstrained problem in computer vision. A common approach to make NRSFM more tractable is to constrain 3D shape deformation to be smooth over time. This constraint has been used to compress the deformation model and reduce the number of unknowns that are estimated. However, temporal smoothness cannot be enforced when the data lacks temporal ordering and its benefits are less evident when objects undergo abrupt deformations. This paper proposes a new NRSFM method that addresses these problems by considering deformations as spatial variations in shape space and then enforcing spatial, rather than temporal, smoothness. This is done by modeling each 3D shape coefficient as a function of its input 2D shape. This mapping is learned in the feature space of a rotation invariant kernel, where spatial smoothness is intrinsically defined by the mapping function. As a result, our model represents shape variations compactly using custom-built coefficient bases learned from the input data, rather than a pre-specified set such as the Discrete Cosine Transform. The resulting kernel-based mapping is a by-product of the NRSFM solution and leads to another fundamental advantage of our approach: for a newly observed 2D shape, its 3D shape is recovered by simply evaluating the learned function. PMID:23946937
Discrimination of coherent features in turbulent boundary layers by the entropy method
NASA Technical Reports Server (NTRS)
Corke, T. C.; Guezennec, Y. G.
1984-01-01
Entropy in information theory is defined as the expected or mean value of the measure of the amount of self-information contained in the i-th point of a distribution series x_i, based on its probability of occurrence p(x_i). If p(x_i) is the probability of the i-th state of the system in probability space, then the entropy, E(X) = -Σ_i p(x_i) log p(x_i), is a measure of the disorder in the system. Based on this concept, a method was devised which sought to minimize the entropy in a time series in order to construct the signature of the most coherent motions. The constrained minimization was performed using a Lagrange multiplier approach which resulted in the solution of a simultaneous set of non-linear coupled equations to obtain the coherent time series. The application of the method to space-time data taken by a rake of sensors in the near-wall region of a turbulent boundary layer was presented. The results yielded coherent velocity motions made up of locally decelerated or accelerated fluid having a streamwise scale of approximately 100 ν/u_τ, which is in qualitative agreement with the results from other less objective discrimination methods.
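A minimal numerical sketch of this entropy measure (Python; the binning and test signals are illustrative). The paper's constrained Lagrange-multiplier minimization would then search for the time-series shape that minimizes this quantity:

    import numpy as np

    def entropy(x, bins=32):
        counts, _ = np.histogram(x, bins=bins)
        p = counts[counts > 0] / counts.sum()     # p(x_i), empty bins dropped
        return -np.sum(p * np.log(p))             # E(X) = -sum_i p(x_i) log p(x_i)

    rng = np.random.default_rng(1)
    ordered = np.sin(np.linspace(0.0, 4*np.pi, 1000))   # highly structured series
    random = rng.normal(size=1000)                      # disordered series
    print(entropy(ordered), entropy(random))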
Jo, Min-Jeong; Jung, Hyung-Sup; Yun, Sang-Ho
2017-07-14
We reconstructed the three-dimensional (3D) surface displacement field of the 24 August 2014 M6.0 South Napa earthquake using SAR data from the Italian Space Agency's COSMO-SkyMed and the European Space Agency's Sentinel-1A satellites. Along-track and cross-track displacements produced with conventional SAR interferometry (InSAR) and multiple-aperture SAR interferometry (MAI) techniques were integrated to retrieve the east, north, and up components of surface deformation. The resulting 3D displacement maps clearly delineated the right-lateral shear motion of the fault rupture with a maximum surface displacement of approximately 45 cm along the fault's strike, showing the east and north components of the trace particularly clearly. These maps also suggested a better-constrained model for the South Napa earthquake. We determined a strike of approximately 338° and dip of 85° by applying the Okada dislocation model considering a single patch with a homogeneous slip motion. Using the distributed slip model obtained by a linear solution, we estimated that a peak slip of approximately 1.7 m occurred around 4 km depth from the surface. 3D modelling using the retrieved 3D maps helps clarify the fault's nature and thus characterize its behaviour.
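The integration step amounts to a small weighted least-squares problem per pixel, d = G m, with one row of G per line-of-sight (InSAR) or along-track (MAI) measurement. The sketch below (Python) uses one common sign convention and made-up geometry and data, not the actual COSMO-SkyMed/Sentinel-1A parameters:

    import numpy as np

    def los_row(inc_deg, heading_deg):
        """Unit row mapping (east, north, up) into a right-looking LOS change
        (one common convention; signs vary between processors)."""
        inc, hdg = np.radians([inc_deg, heading_deg])
        return np.array([-np.sin(inc)*np.cos(hdg), np.sin(inc)*np.sin(hdg), np.cos(inc)])

    def azi_row(heading_deg):
        """Unit row for an along-track (MAI) measurement: ground heading direction."""
        hdg = np.radians(heading_deg)
        return np.array([np.sin(hdg), np.cos(hdg), 0.0])

    G = np.vstack([los_row(34.0, -12.0),      # ascending InSAR
                   los_row(41.0, -168.0),     # descending InSAR
                   azi_row(-12.0),            # ascending MAI
                   azi_row(-168.0)])          # descending MAI
    d = np.array([0.11, -0.05, 0.32, -0.30])  # observed displacements (m), made up
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    print("east, north, up (m):", m)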
Spatial and Spin Symmetry Breaking in Semidefinite-Programming-Based Hartree-Fock Theory.
Nascimento, Daniel R; DePrince, A Eugene
2018-05-08
The Hartree-Fock problem was recently recast as a semidefinite optimization over the space of rank-constrained two-body reduced-density matrices (RDMs) [Phys. Rev. A 2014, 89, 010502(R)]. This formulation of the problem transfers the nonconvexity of the Hartree-Fock energy functional to the rank constraint on the two-body RDM. We consider an equivalent optimization over the space of positive semidefinite one-electron RDMs (1-RDMs) that retains the nonconvexity of the Hartree-Fock energy expression. The optimized 1-RDM satisfies ensemble N-representability conditions, and ensemble spin-state conditions may be imposed as well. The spin-state conditions place additional linear and nonlinear constraints on the 1-RDM. We apply this RDM-based approach to several molecular systems and explore its spatial (point group) and spin (Ŝ² and Ŝ₃) symmetry breaking properties. When imposing Ŝ² and Ŝ₃ symmetry but relaxing point group symmetry, the procedure often locates spatial-symmetry-broken solutions that are difficult to identify using standard algorithms. For example, the RDM-based approach yields a smooth, spatial-symmetry-broken potential energy curve for the well-known Be–H₂ insertion pathway. We also demonstrate numerically that, upon relaxation of Ŝ² and Ŝ₃ symmetry constraints, the RDM-based approach is equivalent to real-valued generalized Hartree-Fock theory.
Moving Towards a Common Ground and Flight Data Systems Architecture for NASA's Exploration Missions
NASA Technical Reports Server (NTRS)
Rader, Steve; Kearney, Mike; McVittie, Thom; Smith, Dan
2006-01-01
The National Aeronautics and Space Administration has embarked on an ambitious effort to return man to the moon and then on to Mars. The Exploration Vision requires development of major new space and ground assets and poses challenges well beyond those faced by many of NASA's recent programs. New crewed vehicles must be developed. Compatible supply vehicles, surface mobility modules and robotic exploration capabilities will supplement the manned exploration vehicle. New launch systems will be developed as well as a new ground communications and control infrastructure. The development must take place in a cost-constrained environment and must advance along an aggressive schedule. Common solutions and system interoperability will be critical to the successful development of the Exploration data systems for this wide variety of flight and ground elements. To this end, NASA has assembled a team of engineers from across the agency to identify the key challenges for Exploration data systems and to establish the most beneficial strategic approach to be followed. Key challenges and the planned NASA approach for flight and ground systems will be discussed in the paper. The described approaches will capitalize on new technologies, and will result in cross-program interoperability between spacecraft and ground systems, from multiple suppliers and agencies.
ERIC Educational Resources Information Center
Duckworth, Vicky; Lord, Janet; Dunne, Linda; Atkins, Liz; Watmore, Sue
2016-01-01
The experiences of five female lecturers working in higher education in the UK are explored as they engage in the search for a feminised critical space as a refuge from the masculinised culture of performativity in which they feel constrained and devalued. Email exchanges were used as a form of narrative enquiry that provided opportunity and space…
Optimal lifting ascent trajectories for the space shuttle
NASA Technical Reports Server (NTRS)
Rau, T. R.; Elliott, J. R.
1972-01-01
The performance gains which are possible through the use of optimal trajectories for a particular space shuttle configuration are discussed. The spacecraft configurations and aerodynamic characteristics are described. Shuttle mission payload capability is examined with respect to the optimal orbit inclination for unconstrained, constrained, and nonlifting conditions. The effects of velocity loss and heating rate on the optimal ascent trajectory are investigated.
Configurational entropy as a tool to select a physical thick brane model
NASA Astrophysics Data System (ADS)
Chinaglia, M.; Cruz, W. T.; Correa, R. A. C.; de Paula, W.; Moraes, P. H. R. S.
2018-04-01
We analyze braneworld scenarios via a configurational entropy (CE) formalism. Braneworld scenarios have drawn attention mainly due to the fact that they can explain the hierarchy problem and unify the fundamental forces through a symmetry breaking procedure. Those scenarios localize matter in a (3 + 1) hypersurface, the brane, which is inserted in a higher dimensional space, the bulk. Novel analytical braneworld models, in which the warp factor depends on a free parameter n, were recently presented in the literature. In this article we provide a way to constrain this parameter through the relation between information and dynamics of a system described by the CE. We demonstrate that in some cases the CE is an important tool for identifying the most probable physical system among all the possibilities. In addition, we show that the highest CE is correlated to a tachyonic sector of the configuration, where the solutions for the corresponding model are dynamically unstable.
Hydroelastic analysis of ice shelves under long wave excitation
NASA Astrophysics Data System (ADS)
Papathanasiou, T. K.; Karperaki, A. E.; Theotokoglou, E. E.; Belibassakis, K. A.
2015-05-01
The transient hydroelastic response of an ice shelf under long wave excitation is analysed by means of the finite element method. The simple model, presented in this work, is used for the simulation of the generated kinematic and stress fields in an ice shelf, when the latter interacts with a tsunami wave. The ice shelf, being of large length compared to its thickness, is modelled as an elastic Euler-Bernoulli beam, constrained at the grounding line. The hydrodynamic field is represented by the linearised shallow water equations. The numerical solution is based on the development of a special hydroelastic finite element for the system of governing equations. Motivated by the 2011 Sulzberger Ice Shelf (SIS) calving event and its correlation with the Honshu Tsunami, the SIS stable configuration is studied. The extreme values of the bending moment distribution in both space and time are examined. Finally, the location of these extrema is investigated for different values of ice shelf thickness and tsunami wave length.
Open clusters in the Kepler field. II. NGC 6866
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janes, Kenneth; Hoq, Sadia; Barnes, Sydney A.
We have developed a maximum-likelihood procedure to fit theoretical isochrones to the observed cluster color-magnitude diagrams of NGC 6866, an open cluster in the Kepler spacecraft field of view. The Markov chain Monte Carlo algorithm permits exploration of the entire parameter space of a set of isochrones to find both the best solution and the statistical uncertainties. For clusters in the age range of NGC 6866 with few, if any, red giant members, a purely photometric determination of the cluster properties is not well-constrained. Nevertheless, based on our UBVRI photometry alone, we have derived the distance, reddening, age, and metallicity of the cluster and established estimates for the binary nature and membership probability of individual stars. We derive the following values for the cluster properties: (m – M)_V = 10.98 ± 0.24, E(B – V) = 0.16 ± 0.04 (so the distance = 1250 pc), age = 705 ± 170 Myr, and Z = 0.014 ± 0.005.
Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.
Costa, Marcelo Azevedo; Braga, Antonio Padua; de Menezes, Benjamin Rodrigues
2012-09-01
The Pareto-optimality concept is used in this paper in order to represent a constrained set of solutions that are able to trade off the two main objective functions involved in neural network supervised learning: data-set error and network complexity. The neural network is described as a dynamic system having error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding mode gains within their convergence intervals. Formal proofs of the convergence conditions are presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and states along the trajectory can be assessed individually against an additional objective function.
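As a concrete illustration of the Pareto-optimality concept invoked here (Python; the candidate solutions are made up): a solution is kept only if no other candidate is at least as good in both data-set error and network complexity.

    def pareto_front(points):
        """Keep (error, complexity) pairs not weakly dominated by another point."""
        return sorted(p for p in points
                      if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                                 for q in points))

    candidates = [(0.90, 1.0), (0.50, 2.0), (0.52, 2.1),
                  (0.20, 6.0), (0.21, 7.0), (0.05, 30.0)]
    print(pareto_front(candidates))
    # -> [(0.05, 30.0), (0.2, 6.0), (0.5, 2.0), (0.9, 1.0)]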
Advancing Public Health on the Changing Global Trade and Investment Agenda
Thow, Anne Marie; Gleeson, Deborah
2017-01-01
Concerns regarding the Trans-Pacific Partnership (TPP) have raised awareness about the negative public health impacts of trade and investment agreements. In the past decade, we have learned much about the implications of trade agreements for public health: reduced equity in access to health services; increased flows of unhealthy commodities; limits on access to medicines; and constrained policy space for health. Getting health on the trade agenda continues to prove challenging, despite some progress in moving towards policy coherence. Recent changes in trade and investment agendas highlight an opportunity for public health researchers and practitioners to engage in highly politicized debates about how future economic policy can protect and support equitable public health outcomes. To fulfil this opportunity, public health attention now needs to turn to strengthening policy coherence between trade and health, and identifying how solutions can be implemented. Key strategies include research agendas that address politics and power, and capacity building for both trade and health officials. PMID:28812819
Revilla-López, Guillem; Torras, Juan; Jiménez, Ana I.; Cativiela, Carlos; Nussinov, Ruth; Alemán, Carlos
2009-01-01
The intrinsic conformational preferences of the non-proteinogenic amino acids constructed by incorporating the arginine side chain in the β position of 1-aminocyclopentane-1-carboxylic acid (either in a cis or a trans orientation relative to the amino group) have been investigated using computational methods. These compounds may be considered as constrained analogues of arginine (denoted as c5Arg) in which the orientation of the side chain is fixed by the cyclopentane moiety. Specifically, the N-acetyl-N′-methylamide derivatives of cis and trans-c5Arg have been examined in the gas phase and in solution using B3LYP/6-311+G(d,p) calculations and Molecular Dynamics simulations. Results indicate that the conformational space available to these compounds is highly restricted, their conformational preferences being dictated by the ability of the guanidinium group in the side chain to establish hydrogen-bond interactions with the backbone. A comparison with the behavior previously described for the analogous phenylalanine derivatives is presented. PMID:19236034
A Search for Pulsations From Geminga Above 100 GeV With Veritas
Aliu, E.; Archambault, S.; Archer, A.; ...
2015-02-09
Here, we present the results of 71.6 hr of observations of the Geminga pulsar (PSR J0633+1746) with the VERITAS very-high-energy gamma-ray telescope array. Data taken with VERITAS between 2007 November and 2013 February were phase-folded using a Geminga pulsar timing solution derived from data recorded by the XMM-Newton and Fermi-LAT space telescopes. No significant pulsed emission above 100 GeV is observed, and we report upper limits at the 95% confidence level on the integral flux above 135 GeV (spectral analysis threshold) of 4.0 × 10⁻¹³ s⁻¹ cm⁻² and 1.7 × 10⁻¹³ s⁻¹ cm⁻² for the two principal peaks in the emission profile. These upper limits, placed in context with phase-resolved spectral energy distributions determined from 5 yr of data from the Fermi Large Area Telescope (LAT), constrain possible hardening of the Geminga pulsar emission spectra above ~50 GeV.
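Phase-folding with a timing solution reduces to evaluating a Taylor expansion of the pulsar rotation phase at each photon arrival time. A minimal sketch (Python) with spin values merely of the right order for Geminga; a real analysis would apply the full ephemeris, including barycentric and higher-order corrections:

    import numpy as np

    def fold(t, pepoch, f0, f1):
        """Rotation phase in [0, 1): phi = f0*dt + f1*dt**2/2, dt in seconds."""
        dt = t - pepoch
        return (f0*dt + 0.5*f1*dt**2) % 1.0

    toa = np.sort(np.random.default_rng(2).uniform(0.0, 1.0e4, size=5))
    print(fold(toa, pepoch=0.0, f0=4.218, f1=-1.95e-13))   # illustrative spin values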
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tracy, Cameron L.; Park, Sulgiye; Rittman, Dylan R.
High-entropy alloys, near-equiatomic solid solutions of five or more elements, represent a new strategy for the design of materials with properties superior to those of conventional alloys. However, their phase space remains constrained, with transition metal high-entropy alloys exhibiting only face- or body-centered cubic structures. Here, we report the high-pressure synthesis of a hexagonal close-packed phase of the prototypical high-entropy alloy CrMnFeCoNi. This martensitic transformation begins at 14 GPa and is attributed to suppression of the local magnetic moments, destabilizing the initial fcc structure. Similar to fcc-to-hcp transformations in Al and the noble gases, the transformation is sluggish, occurring over a range of >40 GPa. However, the behaviour of CrMnFeCoNi is unique in that the hcp phase is retained following decompression to ambient pressure, yielding metastable fcc-hcp mixtures. This demonstrates a means of tuning the structures and properties of high-entropy alloys in a manner not achievable by conventional processing techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-07-01
UTCHEM IMPLICIT is a three-dimensional chemical flooding simulator. The solution scheme is fully implicit. The pressure equation and the mass conservation equations are solved simultaneously for the aqueous phase pressure and the total concentrations of each component. A third-order-in-space, second-order-in-time finite-difference method and a new total-variation-diminishing (TVD) third-order flux limiter are used to reduce numerical dispersion effects. Saturations and phase concentrations are solved in a flash routine. The major physical phenomena modeled in the simulator are: dispersion, adsorption, aqueous-oleic-microemulsion phase behavior, interfacial tension, relative permeability, capillary trapping, compositional phase viscosity, capillary pressure, phase density, and polymer properties (shear-thinning viscosity, inaccessible pore volume, permeability reduction, and adsorption). The following options are available in the simulator: constant or variable time-step sizes, uniform or nonuniform grid, pressure- or rate-constrained wells, and horizontal and vertical wells.
Investigating the Nature of and Methods for Managing Metroplex Operations
NASA Technical Reports Server (NTRS)
Atkins, Stephen; Capozzi, Brian; Hinkey, Jim; Idris, Husni; Kaiser, Kent
2011-01-01
A combination of traffic demand growth, Next Generation Air Transportation System (NextGen) technologies and operational concepts, and increased utilization of regional airports is expected to increase the occurrence and severity of coupling between operations at proximate airports. These metroplex phenomena constrain the efficiency and/or capacity of airport operations and, in NextGen, have the potential to reduce safety and prevent environmental benefits. Without understanding the nature of metroplexes and developing solutions that provide efficient coordination of operations between closely-spaced airports, the use of NextGen technologies and distribution of demand to regional airports may provide little increase in the overall metroplex capacity. However, the characteristics and control of metroplex operations have not received significant study. This project advanced the state of knowledge about metroplexes by completing three objectives: 1. developed a foundational understanding of the nature of metroplexes; 2. provided a framework for discussing metroplexes; 3. suggested and studied an approach for optimally managing metroplexes that is consistent with other NextGen concepts.
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Two statistics for evaluating parameter identifiability and error reduction
Doherty, John; Hunt, Randall J.
2009-01-01
Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level, where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics.
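In code, the solution-/null-space split and the identifiability statistic both follow from a single SVD of the weighted sensitivity matrix. A compact sketch (Python; the Jacobian is a rank-deficient random stand-in, and the truncation rule is one simple choice among several):

    import numpy as np

    rng = np.random.default_rng(3)
    # Stand-in weighted sensitivity matrix (obs x params), deliberately rank-deficient
    J = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 5))
    U, s, Vt = np.linalg.svd(J, full_matrices=False)

    k = int((s > 1e-8 * s[0]).sum())      # solution-space dimension (here 3)
    V_sol = Vt[:k].T                      # columns span the calibration solution space
    identifiability = np.sqrt((V_sol**2).sum(axis=1))   # direction cosine per parameter
    print(identifiability)                # 1 = fully identifiable, 0 = pure null space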
Constraining the atmosphere of GJ 1214b using an optimal estimation technique
NASA Astrophysics Data System (ADS)
Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.; Fletcher, L. N.; Lee, J.-M.
2013-09-01
We explore cloudy, extended H2-He atmosphere scenarios for the warm super-Earth GJ 1214b using an optimal estimation retrieval technique. This planet, orbiting an M4.5 star only 13 pc from the Earth, is of particular interest because it lies between the Earth and Neptune in size and may be a member of a new class of planet that is neither terrestrial nor gas giant. Its relatively flat transmission spectrum has so far made atmospheric characterization difficult. The Non-linear optimal Estimator for MultivariatE spectral analySIS (NEMESIS) algorithm is used to explore the degenerate model parameter space for a cloudy, H2-He-dominated atmosphere scenario. Optimal estimation is a data-led approach that allows solutions beyond the range permitted by ab initio equilibrium model atmosphere calculations, and as such prevents restriction from prior expectations. We show that optimal estimation retrieval is a powerful tool for this kind of study, and present an exploration of the degenerate atmospheric scenarios for GJ 1214b. Whilst we find a family of solutions that provide a very good fit to the data, the quality and coverage of these data are insufficient for us to more precisely determine the abundances of cloud and trace gases given an H2-He atmosphere, and we also cannot rule out the possibility of a high molecular weight atmosphere. Future ground- and space-based observations will provide the opportunity to confirm or rule out an extended H2-He atmosphere, but more precise constraints will be limited by intrinsic degeneracies in the retrieval problem, such as variations in cloud top pressure and temperature.
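The optimal-estimation machinery underlying such retrievals is compactly expressed as a Gauss-Newton update balancing measurement misfit against a prior. The sketch below (Python) follows the standard Rodgers-style update with a toy linear forward model in place of a radiative transfer code; it is an illustration of the technique, not the NEMESIS implementation:

    import numpy as np

    def oe_step(x, x_a, y, F, K, Se_inv, Sa_inv):
        """One Gauss-Newton optimal-estimation update (Rodgers-style n-form)."""
        lhs = K.T @ Se_inv @ K + Sa_inv
        rhs = K.T @ Se_inv @ (y - F(x) + K @ (x - x_a))
        return x_a + np.linalg.solve(lhs, rhs)

    F = lambda x: np.array([x[0] + 0.5*x[1], 0.2*x[0]])  # toy forward model
    K = np.array([[1.0, 0.5], [0.2, 0.0]])               # its Jacobian
    x_a = np.zeros(2)                                    # prior state
    y = np.array([1.0, 0.1])                             # measurements
    print(oe_step(x_a, x_a, y, F, K, 100.0*np.eye(2), np.eye(2)))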
NASA Astrophysics Data System (ADS)
Saria, E.; Calais, E.; Altamimi, Z.; Willis, P.; Farah, H.
2013-04-01
We analyzed 16 years of GPS and 17 years of Doppler orbitography and radiopositioning integrated by satellite (DORIS) data at continuously operating geodetic sites in Africa and surroundings to describe the present-day kinematics of the Nubian and Somalian plates and constrain relative motions across the East African Rift. The resulting velocity field describes horizontal and vertical motion at 133 GPS sites and 9 DORIS sites. Horizontal velocities at sites located on stable Nubia fit a single plate model with a weighted root mean square residual of 0.6 mm/yr (maximum residual 1 mm/yr), an upper bound for plate-wide motions and for regional-scale deformation in the seismically active southern Africa and Cameroon volcanic line. We confirm significant southward motion (∼1.5 mm/yr) in Morocco with respect to Nubia, consistent with earlier findings. We propose an updated angular velocity for the divergence between Nubia and Somalia, which provides the kinematic boundary conditions to rifting in East Africa. We update a plate motion model for the East African Rift and revise the counterclockwise rotation of the Victoria plate and clockwise rotation of the Rovuma plate with respect to Nubia. Vertical velocities range from −2 to +2 mm/yr, close to their uncertainties, with no clear geographic pattern. This study provides the first continent-wide position/velocity solution for Africa, expressed in the International Terrestrial Reference Frame (ITRF2008), a contribution to the upcoming African Reference Frame (AFREF). Except for a few regions, the African continent remains largely under-sampled by continuous space geodetic data. Efforts are needed to augment the geodetic infrastructure and openly share existing data sets so that the objectives of AFREF can be fully reached.
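Plate-model predictions of the kind fitted here follow from rigid rotation on a sphere, v = ω × r. A small sketch (Python) with an illustrative Euler pole and site, not the Nubia-Somalia angular velocity estimated in the paper:

    import numpy as np

    R_EARTH = 6.371e6   # mean radius, m (spherical approximation)

    def unit_ecef(lat_deg, lon_deg):
        lat, lon = np.radians([lat_deg, lon_deg])
        return np.array([np.cos(lat)*np.cos(lon), np.cos(lat)*np.sin(lon), np.sin(lat)])

    def plate_velocity_mm_yr(pole_lat, pole_lon, rate_deg_myr, site_lat, site_lon):
        """v = omega x r for a rigid plate; returns ECEF velocity in mm/yr."""
        omega = np.radians(rate_deg_myr) * 1e-6 * unit_ecef(pole_lat, pole_lon)  # rad/yr
        r = R_EARTH * unit_ecef(site_lat, site_lon)                              # m
        return 1e3 * np.cross(omega, r)                                          # mm/yr

    print(plate_velocity_mm_yr(-30.0, 35.0, 0.09, -5.0, 36.0))   # hypothetical pole/site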
CONSTRAINTS ON THE SYNCHROTRON EMISSION MECHANISM IN GAMMA-RAY BURSTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beniamini, Paz; Piran, Tsvi, E-mail: paz.beniamini@mail.huji.ac.il, E-mail: tsvi.piran@mail.huji.ac.il
2013-05-20
We reexamine the general synchrotron model for gamma-ray bursts' (GRBs') prompt emission and determine the regime in the parameter phase space in which it is viable. We characterize a typical GRB pulse in terms of its peak energy, peak flux, and duration and use the latest Fermi observations to constrain the high-energy part of the spectrum. We solve for the intrinsic parameters at the emission region and find the possible parameter phase space for synchrotron emission. Our approach is general and does not depend on a specific energy dissipation mechanism. Reasonable synchrotron solutions are found with energy ratios of 10⁻⁴ < ε_B/ε_e < 10, bulk Lorentz factor values of 300 < Γ < 3000, typical electron Lorentz factor values of 3 × 10³ < γ_e < 10⁵, and emission radii of the order of 10¹⁵ cm < R < 10¹⁷ cm. Most remarkable among these are the rather large values of the emission radius and of the electron Lorentz factor. We find that soft (with peak energy less than 100 keV) but luminous (isotropic luminosity of 1.5 × 10⁵³) pulses are inefficient. This may explain the lack of strong soft bursts. In cases when most of the energy is carried by the kinetic energy of the flow, such as in internal shocks, the synchrotron solution requires that only a small fraction of the electrons are accelerated to relativistic velocities by the shocks. We show that future observations of very-high-energy photons from GRBs by CTA could possibly determine all parameters of the synchrotron model or rule it out altogether.
Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew
2009-01-01
Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) of the algorithm is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
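To indicate the flavor of adaptive predictive lossless coding — residuals, not raw samples, go to the entropy coder — here is a heavily simplified sketch (Python) of a normalized-LMS linear predictor with a zig-zag residual mapping. It illustrates the general technique only, not the JPL Fast Lossless algorithm or its FPGA pipeline:

    import numpy as np

    def prediction_residuals(samples, order=3, mu=0.5):
        w = np.zeros(order)                    # adaptive predictor weights
        mapped = []
        for i in range(order, len(samples)):
            ctx = samples[i-order:i].astype(float)
            err = int(samples[i]) - int(round(w @ ctx))
            mapped.append(2*err if err >= 0 else -2*err - 1)   # zig-zag to 0,1,2,...
            w += mu * err * ctx / (ctx @ ctx + 1e-9)           # normalized LMS update
        return mapped                          # small ints -> cheap to entropy-code

    rng = np.random.default_rng(4)
    data = 1000 + 5*np.arange(200) + rng.integers(-2, 3, size=200)
    res = prediction_residuals(data)
    print(res[:8], "...", res[-8:])            # residuals shrink as the predictor adapts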
NASA Astrophysics Data System (ADS)
Baturin, A. P.
2014-12-01
We consider the detection of regions of impact orbits for near-Earth objects (NEOs). The regions are detected in the space of initial motion parameters by constrained minimization of a so-called "confidence coefficient", which determines the position of an orbit inside the confidence ellipsoid obtained from a least-squares orbit fit. The asteroid-Earth distance at the considered encounter is used as the constraint. By randomly varying the initial approximations for the minimization and the parameter constraining the asteroid-Earth distance, we demonstrate that impact regions usually take the form of long tubes in the space of initial motion parameters. This is demonstrated for the asteroids 2009 FD, 2011 TO and 2012 PB20 at their anticipated closest encounters with the Earth.
Earthquake focal mechanism forecasting in Italy for PSHA purposes
NASA Astrophysics Data System (ADS)
Roselli, Pamela; Marzocchi, Warner; Mariucci, Maria Teresa; Montone, Paola
2018-01-01
In this paper, we put forward a procedure that aims to forecast the focal mechanisms of future earthquakes. One of the primary uses of such forecasts is in probabilistic seismic hazard analysis (PSHA); in fact, aiming at reducing the epistemic uncertainty, most of the newer ground motion prediction equations consider, besides the seismicity rates, the forecast focal mechanism of the next large earthquakes as input data. The data set used for this purpose consists of focal mechanisms taken from the latest stress map release for Italy, containing 392 well-constrained solutions for events from 1908 to 2015 with Mw ≥ 4 and depths from 0 down to 40 km. The data set includes polarity focal mechanism solutions up to 1975 (23 events), whereas for 1976-2015 it includes only Centroid Moment Tensor (CMT)-like focal solutions, for data homogeneity. The forecasting model is rooted in the Total Weighted Moment Tensor concept, which weights the information of past focal mechanisms evenly distributed in space according to their distance from the spatial cells and their magnitude. Specifically, for each cell of a regular 0.1° × 0.1° spatial grid, the model estimates the probability of observing a normal, reverse, or strike-slip fault plane solution for the next large earthquakes, the expected moment tensor, and the related maximum horizontal stress orientation. These results will be available for the new PSHA model for Italy under development. Finally, to evaluate the reliability of the forecasts, we test them against an independent data set consisting of some of the strongest earthquakes (Mw ≥ 3.9) that occurred during 2016 in different Italian tectonic provinces.
1982-10-01
"Initial Boundary Value of Gun Dynamics Solved by Finite Element Unconstrained Variational Formulations," Innovative Numerical Analysis for the Applied Engineering Science, R. P. Shaw, et al., Editors, University Press of Virginia, Charlottesville, pp. 733-741, 1980. 2. J. J. Wu, "Solutions to Initial
Neural network-based systems for handprint OCR applications.
Ganis, M D; Wilson, C L; Blue, J L
1998-01-01
Over the last five years or so, neural network (NN)-based approaches have been steadily gaining performance and popularity for a wide range of optical character recognition (OCR) problems, from isolated digit recognition to handprint recognition. We present an NN classification scheme based on an enhanced multilayer perceptron (MLP) and describe an end-to-end system for form-based handprint OCR applications designed by the National Institute of Standards and Technology (NIST) Visual Image Processing Group. The enhancements to the MLP are based on (i) neuron activation functions that reduce the occurrences of singular Jacobians; (ii) successive regularization to constrain the volume of the weight space; and (iii) Boltzmann pruning to constrain the dimension of the weight space. Performance characterization studies of NN systems evaluated at the first OCR systems conference and the NIST form-based handprint recognition system are also summarized.
A simple, space constrained NIRIM type reactor for chemical vapour deposition of diamond
NASA Astrophysics Data System (ADS)
Thomas, Evan L. H.; Ginés, Laia; Mandal, Soumen; Klemencic, Georgina M.; Williams, Oliver A.
2018-03-01
In this paper the design of a simple, space constrained chemical vapour deposition reactor for diamond growth is detailed. Based on the NIRIM design, the reactor is composed of a quartz discharge tube placed within a 2.45 GHz waveguide to create the conditions required for metastable growth of diamond. Utilising largely off-the-shelf components and a modular design, the reactor allows for easy modification, repair, and cleaning between growth runs. The elements of the reactor design are laid out, with the CAD files, parts list, and control files made easily available to enable replication. Finally, the quality of the nanocrystalline diamond films produced is studied with SEM and Raman spectroscopy; the observation of clear faceting and a large diamond fraction suggests that the design offers deposition of diamond with minimal complexity.
An Onsager Singularity Theorem for Turbulent Solutions of Compressible Euler Equations
NASA Astrophysics Data System (ADS)
Drivas, Theodore D.; Eyink, Gregory L.
2017-12-01
We prove that bounded weak solutions of the compressible Euler equations will conserve thermodynamic entropy unless the solution fields have sufficiently low space-time Besov regularity. A quantity measuring kinetic energy cascade will also vanish for such Euler solutions, unless the same singularity conditions are satisfied. It is shown furthermore that strong limits of solutions of compressible Navier-Stokes equations that are bounded and exhibit anomalous dissipation are weak Euler solutions. These inviscid limit solutions have non-negative anomalous entropy production and kinetic energy dissipation, with both vanishing when solutions are above the critical degree of Besov regularity. Stationary, planar shocks in Euclidean space with an ideal-gas equation of state provide simple examples that satisfy the conditions of our theorems and which demonstrate sharpness of our L^3-based conditions. These conditions involve space-time Besov regularity, but we show that they are satisfied by Euler solutions that possess similar space regularity uniformly in time.
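For orientation, the incompressible Onsager-type threshold has the following schematic form in LaTeX (the paper's compressible, L^3-based space-time conditions are analogous but not identical):

    u \in L^{3}\!\left([0,T];\, B^{\sigma,\infty}_{3}\right), \quad \sigma > \tfrac{1}{3}
    \;\Longrightarrow\; \text{no anomalous dissipation.}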
Bryan, Stephen; Lilien, Steven
2003-10-01
Regulators are trying to clear up the muddle created by earnings-report adjustments called "pro formas" that companies issue. Constraining such reporting, as the regulators seem bent on doing, isn't the solution. Firms should increase alternative reporting--and fully account for their accounting.
Active Solution Space and Search on Job-shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo
In this paper we propose a new search method based on a Genetic Algorithm for the Job-shop Scheduling Problem (JSP). In JSP, a coding method that represents job numbers is used to decide the priority with which jobs are arranged on the Gantt chart (called the ordinal representation with a priority), and an active schedule is created by using left shifts. We first define an active solution: a solution from which an active schedule can be created without using left shifts; the set of such solutions defines the active solution space. Next, we propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create an active solution while the solution is being evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems and compared it with other methods. The experimental results show good performance.
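A minimal Python sketch of decoding a job-sequence chromosome into a schedule (a common permutation-with-repetition encoding used here for brevity; it is not the paper's exact ordinal representation, and it produces a semi-active schedule, which is exactly where the paper's left-shift/active-solution distinction enters):

    def decode(jobs, chromosome):
        # jobs: list of [(machine, duration), ...] per job.
        # chromosome: sequence of job ids; the k-th occurrence of job j
        # means "schedule the k-th operation of job j", so each job id
        # must appear once per operation of that job.
        next_op = [0] * len(jobs)      # next operation index per job
        job_free = [0] * len(jobs)     # time each job becomes free
        mach_free = {}                 # time each machine becomes free
        schedule = []
        for j in chromosome:
            m, dur = jobs[j][next_op[j]]
            start = max(job_free[j], mach_free.get(m, 0))
            schedule.append((j, next_op[j], m, start, start + dur))
            job_free[j] = mach_free[m] = start + dur
            next_op[j] += 1
        return schedule

For example, decode([[(0, 3), (1, 2)], [(1, 2), (0, 4)]], [0, 1, 0, 1]) schedules the two operations of each of two jobs alternately.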
Natural Constraints to Species Diversification.
Lewitus, Eric; Morlon, Hélène
2016-08-01
Identifying modes of species diversification is fundamental to our understanding of how biodiversity changes over evolutionary time. Diversification modes are captured in species phylogenies, but characterizing the landscape of diversification has been limited by the analytical tools available for directly comparing phylogenetic trees of groups of organisms. Here, we use a novel, non-parametric approach and 214 family-level phylogenies of vertebrates representing over 500 million years of evolution to identify major diversification modes, to characterize phylogenetic space, and to evaluate the bounds and central tendencies of species diversification. We identify five principal patterns of diversification to which all vertebrate families hold. These patterns, mapped onto multidimensional space, constitute a phylogenetic space with distinct properties. Firstly, phylogenetic space occupies only a portion of all possible tree space, showing family-level phylogenies to be constrained to a limited range of diversification patterns. Secondly, the geometry of phylogenetic space is delimited by quantifiable trade-offs in tree size and the heterogeneity and stem-to-tip distribution of branching events. These trade-offs are indicative of the instability of certain diversification patterns and effectively bound speciation rates (for successful clades) within upper and lower limits. Finally, both the constrained range and geometry of phylogenetic space are established by the differential effects of macroevolutionary processes on patterns of diversification. Given these properties, we show that the average path through phylogenetic space over evolutionary time traverses several diversification stages, each of which is defined by a different principal pattern of diversification and directed by a different macroevolutionary process. The identification of universal patterns and natural constraints to diversification provides a foundation for understanding the deep-time evolution of biodiversity.
Numerical Estimation of Balanced and Falling States for Constrained Legged Systems
NASA Astrophysics Data System (ADS)
Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H.
2017-08-01
Instability and risk of fall during standing and walking are common challenges for biped robots. While existing criteria from state-space dynamical systems approach or ground reference points are useful in some applications, complete system models and constraints have not been taken into account for prediction and indication of fall for general legged robots. In this study, a general numerical framework that estimates the balanced and falling states of legged systems is introduced. The overall approach is based on the integration of joint-space and Cartesian-space dynamics of a legged system model. The full-body constrained joint-space dynamics includes the contact forces and moments term due to current foot (or feet) support and another term due to altered contact configuration. According to the refined notions of balanced, falling, and fallen, the system parameters, physical constraints, and initial/final/boundary conditions for balancing are incorporated into constrained nonlinear optimization problems to solve for the velocity extrema (representing the maximum perturbation allowed to maintain balance without changing contacts) in the Cartesian space at each center-of-mass (COM) position within its workspace. The iterative algorithm constructs the stability boundary as a COM state-space partition between balanced and falling states. Inclusion in the resulting six-dimensional manifold is a necessary condition for a state of the given system to be balanced under the given contact configuration, while exclusion is a sufficient condition for falling. The framework is used to analyze the balance stability of example systems with various degrees of complexities. The manifold for a 1-degree-of-freedom (DOF) legged system is consistent with the experimental and simulation results in the existing studies for specific controller designs. The results for a 2-DOF system demonstrate the dependency of the COM state-space partition upon joint-space configuration (elbow-up vs. elbow-down). For both 1- and 2-DOF systems, the results are validated in simulation environments. Finally, the manifold for a biped walking robot is constructed and illustrated against its single-support walking trajectories. The manifold identified by the proposed framework for any given legged system can be evaluated beforehand as a system property and serves as a map for either a specified state or a specific controller's performance.
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Pope, Stephen B.
2013-04-01
The Rate-Controlled Constrained-Equilibrium (RCCE) method is a thermodynamics-based dimension-reduction method which enables the representation of chemistry involving n_s species in terms of a smaller number n_r of constraints. Here we focus on the application of the RCCE method to Lagrangian particle probability density function based computations. In these computations, at every reaction fractional step, given the initial particle composition (represented using RCCE), we need to compute the reaction mapping, i.e., the particle composition at the end of the time step. In this work we study three different implementations of RCCE for computing this reaction mapping, and compare their relative accuracy and efficiency. These implementations include: (1) RCCE/TIFS (Trajectory In Full Space): this involves solving a system of n_s rate equations for all the species in the full composition space to obtain the reaction mapping. The other two implementations obtain the reaction mapping by solving a reduced system of n_r rate equations obtained by projecting the n_s rate equations for species evaluated in the full space onto the constrained subspace. These implementations include (2) RCCE: this is the classical implementation of RCCE, which uses a direct projection of the rate equations for species onto the constrained subspace; and (3) RCCE/RAMP (Reaction-mixing Attracting Manifold Projector): this is a new implementation introduced here which uses an alternative projector obtained using the RAMP approach. We test these three implementations of RCCE for methane/air premixed combustion in the partially-stirred reactor with chemistry represented using the n_s = 31 species GRI-Mech 1.2 mechanism with n_r = 13 to 19 constraints. We show that: (a) the classical RCCE implementation involves an inaccurate projector which yields large errors (over 50%) in the reaction mapping; (b) both the RCCE/RAMP and RCCE/TIFS approaches yield significantly lower errors (less than 2%); and (c) overall the RCCE/TIFS approach is the most accurate, efficient (by orders of magnitude) and robust implementation.
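In standard RCCE notation (assumed here, not quoted from the paper), with z the vector of n_s species specific moles, S(z) the chemical source term, and B the n_s × n_r constraint matrix:

    \frac{dz}{dt} = S(z), \qquad r = B^{T} z, \qquad
    \frac{dr}{dt} = B^{T} S\!\left(z^{\mathrm{CE}}(r)\right),

where z^CE(r) is the constrained-equilibrium composition. RCCE/TIFS integrates the full n_s-dimensional system, while the classical RCCE and RCCE/RAMP implementations integrate reduced n_r-dimensional systems that differ in the choice of projector.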
Experimental Investigations of the Weathering of Suspended Sediment by Alpine Glacial Meltwater
NASA Astrophysics Data System (ADS)
Brown, Giles H.; Tranter, M.; Sharp, M. J.
1996-04-01
The magnitude and processes of solute acquisition by dilute meltwater in contact with suspended sediment in the channelized component of the hydroglacial system have been investigated through a suite of controlled laboratory experiments. Constrained by field data from Haut Glacier d'Arolla, Valais, Switzerland, the effects of the water-to-rock ratio, particle size, crushing, repeated wetting and the availability of protons on the rate of solute acquisition are demonstrated. These free-drift experiments suggest that the rock flour is extremely geochemically reactive and that dilute quickflow waters are certain to acquire solute from suspended sediment. These data have important implications for hydrological interpretations based on the solute content of glacial meltwater, mixing model calculations, geochemical denudation rates and solute provenance studies.
NASA Technical Reports Server (NTRS)
McCubbin, F. M.; Barnes, J. J.; Vander Kaaden, K. E.; Boyce, J. W.
2017-01-01
Apatite [Ca5(PO4)3(F,Cl,OH)] is present in a wide range of planetary materials. Due to the presence of volatiles within its crystal structure (X site), many recent studies have attempted to use apatite to constrain the volatile contents of planetary magmas and mantle sources. In order to use the volatile contents of apatite to accurately determine the abundances of volatiles in coexisting silicate melt or fluids, thermodynamic models for the apatite solid solution and for the apatite components in multicomponent silicate melts and fluids are required. Although some thermodynamic models for apatite have been developed, they are incomplete. Furthermore, no mixing model is available for all of the apatite components in silicate melts or fluids, especially for the F and Cl components. Several experimental studies have investigated the apatite-melt and apatite-fluid partitioning behavior of F, Cl, and OH in terrestrial and planetary systems, and have determined that apatite-melt partitioning of volatiles is best described as exchange equilibria similar to Fe-Mg partitioning between olivine and silicate melt. However, McCubbin et al. recently reported that the exchange coefficients vary in portions of apatite compositional space where F, Cl, and OH do not mix ideally in apatite. In particular, solution calorimetry data for apatite compositions along the F-Cl join exhibit substantial excess enthalpies of mixing, and McCubbin et al. reported substantial deviations in the Cl-F exchange Kd along the F-Cl apatite join that could be explained by the preferential incorporation of F into apatite. In the present study, we assess the effect of apatite crystal chemistry on F-Cl exchange equilibria between apatite and melt at 4 GPa over the temperature range 1300-1500 °C. The goal of these experiments is to assess the variation in the apatite-melt Cl-F exchange Kd over a broad range of F:Cl ratios in apatite. The results of these experiments could be used to understand at what composition apatite shifts from a hexagonal unit cell with space group P63/m to a unit cell with monoclinic symmetry within space group P21/b. We anticipate that this transition occurs at >70% chlorapatite based on solution calorimetry data.
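The exchange coefficient in question has the usual form (a standard definition, assumed here rather than quoted from the abstract):

    K_{d}^{\mathrm{Cl-F}} =
    \frac{\left( X_{\mathrm{Cl}} / X_{\mathrm{F}} \right)^{\mathrm{apatite}}}
         {\left( X_{\mathrm{Cl}} / X_{\mathrm{F}} \right)^{\mathrm{melt}}}

A composition-independent Kd corresponds to ideal F-Cl mixing in apatite; the deviations reported along the F-Cl join are the signature of nonideality.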
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
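Schematically, for inequality constraints g_i(x) ≥ 0, the two functions compared above have the standard forms (notation assumed):

    B(x;\mu) = f(x) - \mu \sum_{i} \ln g_{i}(x), \qquad
    M(x;\lambda,\mu) = f(x) - \mu \sum_{i} \lambda_{i} \ln\!\left(1 + g_{i}(x)/\mu\right).

As μ → 0 the Hessian of the classical barrier B becomes ill-conditioned near the active constraints, whereas with λ near the optimal multipliers the Hessian of the modified barrier M keeps a bounded condition number.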
Benameur, S.; Mignotte, M.; Meunier, J.; Soucy, J. -P.
2009-01-01
Image restoration is usually viewed as an ill-posed problem in image processing, since there is no unique solution associated with it. The quality of the restored image depends closely on the constraints imposed on the characteristics of the solution. In this paper, we propose an original extension of the NAS-RIF restoration technique that uses information fusion as prior information, with an application in SPECT medical imaging. That extension allows the restoration process to be constrained by efficiently incorporating, within the NAS-RIF method, a regularization term which stabilizes the inverse solution. Our restoration method is constrained by anatomical information extracted from a high-resolution anatomical procedure such as magnetic resonance imaging (MRI). This structural anatomy-based regularization term uses the result of an unsupervised Markovian segmentation obtained after a preliminary registration step between the MRI and SPECT data volumes from each patient. The method was successfully tested on 30 pairs of brain MRI and SPECT acquisitions from different subjects and on Hoffman and Jaszczak SPECT phantoms. The experiments demonstrated that the method performs better, in terms of signal-to-noise ratio, than a classical supervised restoration approach using a Metz filter. PMID:19812704
Constraining axion dark matter with Big Bang Nucleosynthesis
Blum, Kfir; D'Agnolo, Raffaele Tito; Lisanti, Mariangela; ...
2014-08-04
We show that Big Bang Nucleosynthesis (BBN) significantly constrains axion-like dark matter. The axion acts like an oscillating QCD θ angle that redshifts in the early Universe, increasing the neutron-proton mass difference at neutron freeze-out. An axion-like particle that couples too strongly to QCD results in the underproduction of ⁴He during BBN and is thus excluded. The BBN bound overlaps with much of the parameter space that would be covered by proposed searches for a time-varying neutron EDM. The QCD axion does not couple strongly enough to affect BBN.
NASA Technical Reports Server (NTRS)
Alberts, Thomas E.; Xia, Houchun; Chen, Yung
1992-01-01
The effectiveness of constrained viscoelastic layer damping treatment designs is evaluated separately as a passive control measure for low frequency joint-dominated modes and higher frequency boom flexure-dominated modes using a NASTRAN finite element analysis. Passive damping augmentation is proposed which is based on a constrained viscoelastic layer damping treatment applied to the surface of the manipulator's flexible booms. It is pointed out that even the joint compliance dominated modes can be damped to some degree through appropriate design of the treatment.
Characterizing Space Weather Effects in the Post-DMSP Era
NASA Astrophysics Data System (ADS)
Groves, K. M.
2015-12-01
Space weather generally refers to heliophysical phenomena or events that produce a negative impact on manmade systems. While many space weather events originate with impulsive disturbances on the sun, others result from complex internal interactions in the ionosphere-thermosphere system. Mankind's reliance on satellite-based services continues to increase rapidly, yet the global capacity for sensing space weather in the ionosphere seems headed towards decline. A number of recent ionospheric-focused space-based missions are either presently, or soon-to-be, no longer available, and the end of the multi-decade Defense Meteorological Satellite Program is now in sight. The challenge facing the space weather community is how to maintain or increase sensing capabilities in an operational environment constrained by a decreasing number of sensors. The upcoming launch of COSMIC-2 in 2016/2018 represents the most significant new capability planned for the future. GNSS RO data has some benefit for background ionospheric models, particularly over regions where ground-based GNSS TEC measurements are unavailable, but the space weather community has a dire need to leverage such missions for far more knowledge of the ionosphere, and specifically for information related to space weather impacts. Meanwhile, the number of ground-based GNSS sensors worldwide has increased substantially, yet progress in instrumenting some vastly undersampled regions, such as Africa, remains slow. In fact, the recent loss of support for many existing ground stations in such areas under the former Scintillation Network Decision Aid (SCINDA) program may actually result in a decrease in sensing sites over the next 1-2 years, abruptly reversing a positive trend established over the last decade. Here we present potential solutions to the challenges these developments pose to the space weather enterprise. Specific topics include the modeling advances required to detect and accurately characterize irregularities and associated scintillations from GNSS RO measurements, the exploitation of existing and planned radio beacons for improved bottomside definition and scintillation measurements, and an affordable approach to leveraging existing ground stations to expand sensing capacity at critical locations in otherwise data-sparse regions.
NASA Technical Reports Server (NTRS)
Chen, Guanrong
1991-01-01
An optimal trajectory planning problem for a single-link, flexible joint manipulator is studied. A global feedback-linearization is first applied to formulate the nonlinear inequality-constrained optimization problem in a suitable way. Then, an exact and explicit structural formula for the optimal solution of the problem is derived and the solution is shown to be unique. It turns out that the optimal trajectory planning and control can be done off-line, so that the proposed method is applicable to both theoretical analysis and real time tele-robotics control engineering.
Surface Exposure Ages of Space-Weathered Grains from Asteroid 25143 Itokawa
NASA Technical Reports Server (NTRS)
Keller, L. P.; Berger, E. L.; Christoffersen, R.
2015-01-01
We use the observed effects of solar wind ion irradiation and the accumulation of solar flare particle tracks recorded in Itokawa grains to constrain the rates of space weathering and yield information about regolith dynamics. The track densities are consistent with exposure at mm depths for 10^4-10^5 years. The solar wind damaged rims form on a much faster timescale, <10^3 years.
From the Frozen Wilderness to the Moody Sea: Rural Space, Girlhood and Popular Pedagogy
ERIC Educational Resources Information Center
Gottschall, Kristina
2014-01-01
This paper turns to debates in post-critical public pedagogy to focus on how a small body of films might potentially work as vehicles for teaching and learning about youth, gender and space. It is argued that representations of the rural shape what is possible for girlhood, being both enabling and constraining for the subject. Framed by discourses…
ERIC Educational Resources Information Center
Salazar, Maria del Carmen; Martinez, Lisa M.; Ortega, Debora
2016-01-01
The purpose of this study is to address how spaces in school and out of school support or constrain undocumented Latina/o youths' development as critical multicultural citizens. We draw on data from a multi-phase, qualitative study to present findings indicating that the youths persevered through academic and civic engagement. Ultimately, the…
Including geological information in the inverse problem of palaeothermal reconstruction
NASA Astrophysics Data System (ADS)
Trautner, S.; Nielsen, S. B.
2003-04-01
A reliable reconstruction of sediment thermal history is of central importance to the assessment of hydrocarbon potential and the understanding of basin evolution. However, only rarely do sedimentation history and borehole data in the form of present-day temperatures and vitrinite reflectance constrain the past thermal evolution to a useful level of accuracy (Gallagher and Sambridge, 1992; Nielsen, 1998; Trautner and Nielsen, 2003). This is reflected in the inverse solutions to the problem of determining heat flow history from borehole data: the recent heat flow is constrained by data, while older values are governed by the chosen a priori heat flow. In this paper we reduce this problem by including geological information in the inverse problem. Through a careful analysis of geological and geophysical data, the timing of the tectonic processes which may influence heat flow can be inferred. The heat flow history is then parameterised to allow for the temporal variations characteristic of the different tectonic events. The inversion scheme applies a Markov chain Monte Carlo (MCMC) approach (Nielsen and Gallagher, 1999; Ferrero and Gallagher, 2002), which efficiently explores the model space and furthermore samples the posterior probability distribution of the model. The technique is demonstrated on wells in the northern North Sea with emphasis on the stretching event in the Late Jurassic. The wells are characterised by maximum sediment temperature at the present day, which is the worst case for resolution of the past thermal history because vitrinite reflectance is determined mainly by the maximum temperature. Including geological information significantly improves the thermal resolution. References: Ferrero, C. and Gallagher, K., 2002. Stochastic thermal history modelling. 1. Constraining heat flow histories and their uncertainty. Marine and Petroleum Geology, 19, 633-648. Gallagher, K. and Sambridge, M., 1992. The resolution of past heat flow in sedimentary basins from non-linear inversion of geochemical data: the smoothest model approach, with synthetic examples. Geophysical Journal International, 109, 78-95. Nielsen, S.B., 1998. Inversion and sensitivity analysis in basin modelling. Geoscience 98, Keele University, UK, Abstract Volume, 56. Nielsen, S.B. and Gallagher, K., 1999. Efficient sampling of 3-D basin modelling scenarios. Extended Abstracts Volume, 1999 AAPG International Conference & Exhibition, Birmingham, England, September 12-15, 1999, pp. 369-372. Trautner, S. and Nielsen, S.B., 2003. 2-D inverse thermal modelling in the Norwegian shelf using Fast Approximate Forward (FAF) solutions. In R. Marzi and S. Duppenbecker (Eds.), Multi-Dimensional Basin Modeling, AAPG, in press.
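A generic random-walk Metropolis sketch of such a sampler in Python, assuming a hypothetical log_post(q) that combines the borehole-data misfit with the geologically parameterised prior on the heat flow history q:

    import numpy as np

    def metropolis(log_post, q0, n_steps, step, rng):
        # Random-walk Metropolis over heat-flow-history parameters q.
        q = np.asarray(q0, dtype=float)
        lp = log_post(q)
        samples = []
        for _ in range(n_steps):
            prop = q + rng.normal(scale=step, size=q.shape)
            lp_prop = log_post(prop)
            # Accept with probability min(1, exp(lp_prop - lp)).
            if np.log(rng.uniform()) < lp_prop - lp:
                q, lp = prop, lp_prop
            samples.append(q.copy())
        return np.array(samples)   # draws from the posterior

The chain both explores the model space and, after burn-in, samples the posterior distribution of heat flow histories, which is the role the MCMC scheme plays above.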
NASA Technical Reports Server (NTRS)
Probst, D.; Jensen, L.
1991-01-01
Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We address the question: is it possible to control the state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully-implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.
NASA Astrophysics Data System (ADS)
Lamy, P. L.; Toth, I.; Weaver, H. A.; A'Hearn, M. F.; Jorda, L.
2011-04-01
We report on our on-going effort to detect and characterize cometary nuclei with the Hubble Space Telescope (HST). During cycle 9 (2000 July to 2001 June), we performed multi-orbit observations of 10 ecliptic comets with the Wide Field Planetary Camera 2. Nominally, eight contiguous orbits covering a time interval of ∼11 h were devoted to each comet, but a few orbits were occasionally lost. In addition to the standard R band, we could additionally observe four of them in the V band and the two brightest ones in the B band. Time series photometry was used to constrain the size, shape and rotational period of the 10 nuclei. Assuming a geometric albedo of 0.04 for the R band, a linear phase law with a coefficient of 0.04 mag deg^-1 and an opposition effect similar to that of comet 19P/Borrelly, we determined the following mean values of the effective radii: 47P/Ashbrook-Jackson: 2.86±0.08 km, 61P/Shajn-Schaldach: 0.62±0.02 km, 70P/Kojima: 1.83±0.05 km, 74P/Smirnova-Chernykh: 2.23±0.04 km, 76P/West-Kohoutek-Ikemura: 0.30±0.02 km, 82P/Gehrels 3: 0.69±0.02 km, 86P/Wild 3: 0.41±0.03 km, 87P/Bus: 0.27±0.01 km, 110P/Hartley 3: 2.15±0.04 km and 147P/Kushida-Muramatsu: 0.21±0.01 km. Because of the limited time coverage (∼11 h), the rotational periods could not be accurately determined, multiple solutions were sometimes found and three periods were not constrained at all. Our estimates range from ∼5 to ∼32 h. The lower limits for the ratio a/b of the semi-axes of the equivalent spheroids range from 1.10 (70P) to 2.20 (87P). The four nuclei for which we could measure (V-R) are all significantly redder than the Sun, with 86P/Wild 3, (V-R) = 0.86 ± 0.10, appearing as an ultrared object. We finally determined the dust activity parameter Afρ of their coma in the R band, and the colour indices and reflectivity spectra of four of them. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA contract NAS 5-26555.
SIRGAS: ITRF densification in Latin America and the Caribbean
NASA Astrophysics Data System (ADS)
Brunini, C.; Costa, S.; Mackern, V.; Martínez, W.; Sánchez, L.; Seemüller, W.; da Silva, A.
2009-04-01
The continental reference frame of SIRGAS (Sistema de Referencia Geocéntrico para las Américas) is at present realized by the SIRGAS Continuously Operating Network (SIRGAS-CON), composed of about 200 stations distributed over Latin America and the Caribbean. SIRGAS member countries are upgrading their national reference frames by installing continuously operating GNSS stations, which have to be consistently integrated into the continental network. As the number of these stations is rapidly increasing, the processing strategy for the SIRGAS-CON network was redefined during the SIRGAS 2008 General Meeting in May 2008. The new strategy relies upon the definition of two hierarchy levels: a) A core network (SIRGAS-CON-C) with homogeneous continental coverage and stable site locations ensures the long-term stability of the reference frame and provides the primary link to the ITRS. Stations belonging to this network have been selected so that each country contributes a number of stations defined according to its surface area, guaranteeing that the selected stations are the best in operability, continuity, reliability, and geographical coverage. b) Several densification sub-networks (SIRGAS-CON-D) improve the accessibility of the reference frame. The SIRGAS-CON-D sub-networks shall correspond to the national reference frames, i.e., as an optimum there shall be as many sub-networks as countries in the region. The goal is that each country processes its own continuously operating stations following the SIRGAS processing guidelines, which are defined in accordance with the IERS and IGS standards and conventions. Since at present not all of the countries operate a processing centre, the existing stations are classified into three densification networks (a Northern, a Middle, and a Southern one), which are processed by three local processing centres until new ones are installed. As SIRGAS is defined as a densification of the ITRS, stations included in the core network, as well as in the densification sub-networks, match the requirements, characteristics, and processing performance of the ITRF. The SIRGAS-CON-C network is processed by DGFI (Deutsches Geodätisches Forschungsinstitut, Germany) as the IGS-RNAAC-SIR. The Local Processing Centres are, for the Northern sub-network, IGAC (Instituto Geográfico Agustín Codazzi, Colombia); for the Middle sub-network, IBGE (Instituto Brasileiro de Geografia e Estatística, Brazil); and for the Southern sub-network, IGG-CIMA (Instituto de Geodesia y Geodinámica, Universidad Nacional de Cuyo, Argentina). These four Processing Centres deliver loosely constrained weekly solutions for station coordinates (i.e., satellite orbits, satellite clock offsets, and Earth orientation parameters are fixed to the final weekly IGS solutions, and coordinates for all sites are constrained to 1 m). The individual contributions are integrated into a unified solution by the SIRGAS Combination Centres (DGFI and IBGE) according to the following strategy: 1) Individual solutions are reviewed/corrected for possible format problems, data inconsistencies, etc. 2) Constraints imposed in the delivered normal equations are removed. 3) Sub-networks are individually aligned to the IGS05 reference frame by applying the No Net Rotation (NNR) and No Net Translation (NNT) conditions. 4) Coordinates obtained in (3) for each sub-network are compared to IGS05 values and to each other in order to identify possible outliers.
5) Stations with large residuals (more than 10 mm in the N-E components, and more than 20 mm in the Up component) are removed from the normal equations. Steps (3), (4), and (5) are done iteratively. 6) Since at present the four Analysis Centres process GPS observations only, and all of them use the Bernese software for computing the weekly solutions, relative weighting factors are not applied in the combination. 7) Individual normal equations are accumulated and solved to compute a loosely constrained weekly solution for station coordinates (i.e., coordinates for all stations are constrained to 1 m). This solution, in SINEX format, is submitted to the IGS for the global polyhedron. 8) The combination obtained in (7) is constrained by applying NNR+NNT conditions with respect to the IGS05 stations included in the SIRGAS region, to provide constrained coordinates for all SIRGAS-CON (core + densification) stations. The applied IGS05 reference coordinates correspond to the weekly IGS solution for the global network, i.e., the coordinates included in the igsYYPwwww.snx files. This constrained solution provides the final weekly SIRGAS-CON coordinates for practical applications. The DGFI (i.e., IGS RNAAC SIR) weekly combinations are delivered to the IGS Data Centres for combination in the global polyhedron and made available to users as official SIRGAS products. The IBGE weekly combinations provide control and back-up. The analysis strategy described above has been applied since GPS week 1495; before that (from June 1996 to August 2008), the SIRGAS-CON network was processed entirely by DGFI. Until now, results show a very good agreement with previous computations; however, the present sub-network distribution has two main disadvantages: 1) not all SIRGAS-CON stations are included in the same number of individual solutions, i.e., they are unequally weighted in the weekly combinations, and 2) since there are not enough Local Processing Centres, the required redundancy (each station processed by at least three processing centres) is not fulfilled. Therefore, efforts are being made to install additional Local Processing Centres in Latin American countries such as Argentina, Ecuador, Mexico, Peru, Uruguay, and Venezuela.
Double quick, double click reversible peptide "stapling".
Grison, Claire M; Burslem, George M; Miles, Jennifer A; Pilsl, Ludwig K A; Yeo, David J; Imani, Zeynab; Warriner, Stuart L; Webb, Michael E; Wilson, Andrew J
2017-07-01
The development of constrained peptides for inhibition of protein-protein interactions is an emerging strategy in chemical biology and drug discovery. This manuscript introduces a versatile, rapid and reversible approach to constrain peptides in a bioactive helical conformation using BID and RNase S peptides as models. Dibromomaleimide is used to constrain BID and RNase S peptide sequence variants bearing cysteine (Cys) or homocysteine (hCys) amino acids spaced at the i and i+4 positions by double substitution. The constraint can be readily removed by displacement of the maleimide using excess thiol. This new constraining methodology results in enhanced α-helical conformation (BID and RNase S peptide) as demonstrated by circular dichroism and molecular dynamics simulations, resistance to proteolysis (BID) as demonstrated by trypsin proteolysis experiments, and retained or enhanced potency of inhibition for Bcl-2 family protein-protein interactions (BID), or greater capability to restore the hydrolytic activity of the RNase S protein (RNase S peptide). Finally, use of a dibromomaleimide functionalized with an alkyne permits further divergent functionalization of the constrained peptide through alkyne-azide cycloaddition chemistry with fluorescein, oligoethylene glycol or biotin groups to facilitate biophysical and cellular analyses. Hence this methodology may extend the scope and accessibility of peptide stapling.
Exploring Lovelock theory moduli space for Schrödinger solutions
NASA Astrophysics Data System (ADS)
Jatkar, Dileep P.; Kundu, Nilay
2016-09-01
We look for Schrödinger solutions in Lovelock gravity in D > 4. We span the entire parameter space and determine parametric relations under which the Schrödinger solution exists. We find that in arbitrary dimensions pure Lovelock theories have Schrödinger solutions of arbitrary radius, on a co-dimension one locus in the Lovelock parameter space. This co-dimension one locus contains the subspace over which the Lovelock gravity can be written in the Chern-Simons form. Schrödinger solutions do not exist outside this locus and on this locus they exist for arbitrary dynamical exponent z. This freedom in z is due to the degeneracy in the configuration space. We show that this degeneracy survives certain deformation away from the Lovelock moduli space.
NASA Astrophysics Data System (ADS)
Arenberg, Jonathan; Conti, Alberto; Atkinson, Charles
2017-01-01
Pursuing groundbreaking science in a highly cost- and funding-constrained environment presents new challenges to the development of future space astrophysics missions. Within the conventional cost models for large observatories, executing a flagship “mission after next” appears to be unsustainable. Achieving our nation’s space astrophysics ambitions requires new paradigms in system design, development and manufacture. Implementation of this new paradigm requires that the space astrophysics community adopt new answers to a new set of questions. This paper will discuss the origins of these new questions and the steps to their answers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, C.; Udalski, A.; Szymański, M. K.
2016-09-01
We present a combined analysis of the observations of the gravitational microlensing event OGLE-2015-BLG-0479 taken both from the ground and by the Spitzer Space Telescope. The light curves seen from the ground and from space exhibit a time offset of ∼13 days between the caustic spikes, indicating that the relative lens-source positions seen from the two places are displaced by parallax effects. From modeling the light curves, we measure the space-based microlens parallax. Combined with the angular Einstein radius measured by analyzing the caustic crossings, we determine the mass and distance of the lens. We find that the lens is a binary composed of two G-type stars with masses of ∼1.0 M_⊙ and ∼0.9 M_⊙ located at a distance of ∼3 kpc. In addition, we are able to constrain the complete orbital parameters of the lens thanks to the precise measurement of the microlens parallax derived from the joint analysis. In contrast to the binary event OGLE-2014-BLG-1050, which was also observed by Spitzer, we find that the interpretation of OGLE-2015-BLG-0479 does not suffer from the degeneracy between (±, ±) and (±, ∓) solutions, confirming that the four-fold parallax degeneracy in single-lens events collapses into the two-fold degeneracy for the general case of binary-lens events. The location of the blend in the color-magnitude diagram is consistent with the lens properties, suggesting that the blend is the lens itself. The blend is bright enough for spectroscopy and thus this possibility can be checked from future follow-up observations.
Some exact solutions for maximally symmetric topological defects in Anti de Sitter space
NASA Astrophysics Data System (ADS)
Alvarez, Orlando; Haddad, Matthew
2018-03-01
We obtain exact analytical solutions for a class of SO(l) Higgs field theories in a non-dynamic background n-dimensional anti de Sitter space. These finite transverse energy solutions are maximally symmetric p-dimensional topological defects where n = (p + 1) + l. The radius of curvature of anti de Sitter space provides an extra length scale that allows us to study the equations of motion in a limit where the masses of the Higgs field and the massive vector bosons are both vanishing. We call this the double BPS limit. In anti de Sitter space, the equations of motion depend on both p and l. The exact analytical solutions are expressed in terms of standard special functions. The known exact analytical solutions are for kink-like defects (p = 0, 1, 2, ...; l = 1), vortex-like defects (p = 1, 2, 3; l = 2), and the 't Hooft-Polyakov monopole (p = 0; l = 3). A bonus is that the double BPS limit automatically gives a maximally symmetric classical glueball type solution. In certain cases where we did not find an analytic solution, we present numerical solutions to the equations of motion. The asymptotically exponentially increasing volume with distance of anti de Sitter space imposes different constraints than those found in the study of defects in Minkowski space.
NASA Astrophysics Data System (ADS)
Sahraei, S.; Asadzadeh, M.
2017-12-01
Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and the crowding distance calculated in the objective space guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, beside the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space, and the ones in less crowded clusters are given a greater chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a more sparsely distributed set of high-quality solutions increases, and therefore the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
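An illustrative Python sketch of the selection rule, where KMeans stands in for whatever clustering is used and the inverse-cluster-size selection probability is an assumption:

    import numpy as np
    from sklearn.cluster import KMeans

    def select_parent(archive_X, n_clusters, rng):
        # Cluster the archived solutions in the decision space.
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(archive_X)
        # Favour members of less crowded clusters: pick a cluster with
        # probability inversely proportional to its size, then pick a
        # member of that cluster uniformly at random.
        sizes = np.maximum(np.bincount(labels, minlength=n_clusters), 1)
        p = (1.0 / sizes) / (1.0 / sizes).sum()
        c = rng.choice(n_clusters, p=p)
        members = np.flatnonzero(labels == c)
        return archive_X[rng.choice(members)]

Selecting parents this way biases new solutions toward sparsely populated regions of the decision space while the dominance check continues to drive objective-space quality.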
A Hamiltonian approach to Thermodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldiotti, M.C., E-mail: baldiotti@uel.br; Fresneda, R., E-mail: rodrigo.fresneda@ufabc.edu.br; Molina, C., E-mail: cmolina@usp.br
In the present work we develop a strictly Hamiltonian approach to Thermodynamics. A thermodynamic description based on symplectic geometry is introduced, where all thermodynamic processes can be described within the framework of Analytic Mechanics. Our proposal is constructed on top of a usual symplectic manifold, where phase space is even dimensional and one has well-defined Poisson brackets. The main idea is the introduction of an extended phase space where thermodynamic equations of state are realized as constraints. We are then able to apply the canonical transformation toolkit to thermodynamic problems. Throughout this development, Dirac’s theory of constrained systems is extensively used. To illustrate the formalism, we consider paradigmatic examples, namely, the ideal, van der Waals and Clausius gases. Highlights: • A strictly Hamiltonian approach to Thermodynamics is proposed. • Dirac’s theory of constrained systems is extensively used. • Thermodynamic equations of state are realized as constraints. • Thermodynamic potentials are related by canonical transformations.
Two-Channel Transparency-Optimized Control Architectures in Bilateral Teleoperation With Time Delay
Kim, Jonghyun; Chang, Pyung Hun; Park, Hyung-Soon
2013-01-01
This paper introduces transparency-optimized control architectures (TOCAs) using two communication channels. Two classes of two-channel TOCAs are found, thereby showing that two channels are sufficient to achieve transparency. These TOCAs achieve a greater level of transparency but poorer stability than three-channel TOCAs and four-channel TOCAs. Stability of the two-channel TOCAs has been enhanced while minimizing transparency degradation by adding a filter; and a combined use of the two classes of two-channel TOCAs is proposed for both free space and constrained motion, which involve switching between two TOCAs for transition between free space and constrained motions. The stability condition of the switched teleoperation system is derived for practical applications. Through the one degree-of-freedom (DOF) experiment, the proposed two-channel TOCAs were shown to operate stably, while achieving better transparency under time delay than the other TOCAs. PMID:23833548
Constrained space camera assembly
Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.
1999-01-01
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.
Constraining storm-scale forecasts of deep convective initiation with surface weather observations
NASA Astrophysics Data System (ADS)
Madaus, Luke
Successfully forecasting when and where individual convective storms will form remains an elusive goal for short-term numerical weather prediction. In this dissertation, the convective initiation (CI) challenge is considered as a problem of insufficiently resolved initial conditions, and dense surface weather observations are explored as a possible solution. To better quantify convective-scale surface variability in numerical simulations of discrete convective initiation, idealized ensemble simulations of a variety of environments where CI occurs in response to boundary-layer processes are examined. Coherent features 1-2 hours prior to CI are found in all surface fields examined. While some features were broadly expected, such as positive temperature anomalies and convergent winds, negative temperature anomalies due to cloud shadowing are the largest surface anomaly seen prior to CI. Based on these simulations, several hypotheses about the required characteristics of a surface observing network to constrain CI forecasts are developed. Principally, these suggest that observation spacings of less than 4-5 km would be required, based on correlation length scales. Furthermore, it is anticipated that 2-m temperature and 10-m wind observations would likely be more relevant for effectively constraining variability than surface pressure or 2-m moisture observations, based on the magnitudes of observed anomalies relative to observation error. These hypotheses are tested with a series of observing system simulation experiments (OSSEs) using a single CI-capable environment. The OSSE results largely confirm the hypotheses, and with 4-km and particularly 1-km surface observation spacing, skillful forecasts of CI are possible, but only within two hours of CI time. Several facets of convective-scale assimilation, including the need for properly calibrated localization and problems from non-Gaussian ensemble estimates of the cloud field, are discussed. Finally, the characteristics of one candidate dense surface observing network are examined: smartphone pressure observations. Available smartphone pressure observations (and 1-hr pressure tendency observations) are tested by assimilating them into convection-allowing ensemble forecasts for a three-day active convective period in the eastern United States. Although smartphone observations contain noise and internal disagreement, they are effective at reducing short-term forecast errors in surface pressure, wind and precipitation. The results suggest that smartphone pressure observations could become a viable mesoscale observation platform, but more work is needed to enhance their density and reduce error. This work concludes by reviewing and suggesting other novel candidate observation platforms with the potential to improve convective-scale forecasts of CI.
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
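For equality constraints c(x) = 0, the augmented Lagrangian minimized at each outer iteration has one standard form (notation assumed, not quoted from the paper):

    \Phi(x;\lambda,\mu) = f(x) + \lambda^{T} c(x) + \frac{1}{2\mu}\,\lVert c(x) \rVert_{2}^{2},

and the pattern search adaptation replaces the derivative-based stopping test for this bound-constrained subproblem with one tied to the pattern size.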
Constraint-Based Local Search for Constrained Optimum Paths Problems
NASA Astrophysics Data System (ADS)
Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal
Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have traditionally been approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility that are at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees for finding high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.
Chance-constrained economic dispatch with renewable energy and storage
Cheng, Jianqiang; Chen, Richard Li-Yang; Najm, Habib N.; ...
2018-04-19
Increased penetration of renewables, along with the uncertainties associated with them, has transformed how power systems are operated. High levels of uncertainty mean that it is no longer possible to guarantee operational feasibility with certainty; instead, constraints are required to be satisfied with high probability. We present a chance-constrained economic dispatch model that efficiently integrates energy storage and high renewable penetration to satisfy renewable portfolio requirements. Specifically, it is required that wind energy contributes at least a prespecified ratio of the total demand and that the scheduled wind energy is dispatchable with high probability. We develop an approximated partial sample average approximation (PSAA) framework to enable efficient solution of large-scale chance-constrained economic dispatch problems. Computational experiments on the IEEE-24 bus system show that the proposed PSAA approach is more accurate, closer to the prescribed tolerance, and about 100 times faster than sample average approximation. The improved efficiency of our PSAA approach enables solution of the WECC-240 system in minutes.
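A toy Python illustration of the chance constraint on scheduled wind (not the PSAA method itself; the quantile rule, cost data and capacities are made up): scheduling wind at the empirical ε-quantile of the wind samples makes P(W ≥ w_s) ≥ 1 − ε hold on the samples, after which the residual demand is dispatched by a linear program.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    demand, eps = 100.0, 0.05
    W = rng.gamma(shape=4.0, scale=10.0, size=1000)   # wind samples (MW)

    # Chance constraint P(W >= w_s) >= 1 - eps: take the eps-quantile.
    w_s = np.quantile(W, eps)

    # Dispatch two thermal units (illustrative costs of 20 and 35 $/MWh,
    # 60 MW capacity each) to cover the residual demand.
    c = [20.0, 35.0]
    res = linprog(c, A_eq=[[1.0, 1.0]], b_eq=[demand - w_s],
                  bounds=[(0, 60), (0, 60)])
    print(w_s, res.x, res.fun)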
NASA Astrophysics Data System (ADS)
Cho, Won Sang; Gainer, James S.; Kim, Doojin; Matchev, Konstantin T.; Moortgat, Filip; Pape, Luc; Park, Myeonghun
2014-08-01
We consider a class of on-shell constrained mass variables that are 3+1 dimensional generalizations of the Cambridge M_T2 variable and that automatically incorporate various assumptions about the underlying event topology. The presence of additional on-shell constraints causes their kinematic distributions to exhibit sharper endpoints than the usual M_T2 distribution. We study the mathematical properties of these new variables, e.g., the uniqueness of the solution selected by the minimization over the invisible particle 4-momenta. We then use this solution to reconstruct the masses of various particles along the decay chain. We propose several tests for validating the assumed event topology in missing energy events from new physics. The tests are able to determine: 1) whether the decays in the event are two-body or three-body, 2) if the decay is two-body, whether the intermediate resonances in the two decay chains are the same, and 3) the exact sequence in which the visible particles are emitted from each decay chain.
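For reference, the Cambridge M_T2 variable that these variables generalize is (standard definition):

    M_{T2} = \min_{\vec{q}_{T}^{\,(1)} + \vec{q}_{T}^{\,(2)} = \vec{p}_{T}^{\,\mathrm{miss}}}
    \max\left\{ M_{T}\!\left(p^{(1)}, q^{(1)}\right),\; M_{T}\!\left(p^{(2)}, q^{(2)}\right) \right\},

with the minimization running over all splittings of the missing transverse momentum between the two invisible particles; the 3+1 dimensional variables discussed above instead minimize over full invisible 4-momenta subject to additional on-shell constraints.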
Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares
NASA Technical Reports Server (NTRS)
Orr, Jeb S.
2012-01-01
A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2017-06-01
Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms
Roald, Line Alnaes; Andersson, Goran
2017-08-29
Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. First, we present an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonetto, Andrea; Dall'Anese, Emiliano
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
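A toy prediction-correction tracker, offered as a hedged sketch of this class of methods rather than the authors' algorithm: it follows the optimizer x*(t) = r(t) of the time-varying cost f(x; t) = 0.5*||x - r(t)||^2, with all parameter values made up.

```python
# Prediction-correction tracking of a drifting optimum (illustrative only).
import numpy as np

def r(t):
    """Drifting optimum: a point moving on the unit circle."""
    return np.array([np.cos(t), np.sin(t)])

h, alpha = 0.1, 0.5          # sampling interval and correction step size
x = r(0.0).copy()
for k in range(1, 200):
    t = k * h
    # Prediction: extrapolate the optimizer's drift from the two most
    # recent samples (a first-order stand-in for the paper's Hessian-based,
    # inverse-free prediction step).
    x = x + (r(t) - r(t - h))
    # Correction: one gradient step on the newly revealed cost f(.; t).
    x = x - alpha * (x - r(t))
print("final tracking error:", np.linalg.norm(x - r(199 * h)))
```

The prediction step keeps the iterate near the moving solution so that a single cheap correction per sampling interval suffices, which is the core idea the abstract describes.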
Nonnegative least-squares image deblurring: improved gradient projection approaches
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.
2010-02-01
The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, although, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has yet to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Although they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears to be definitely the most efficient one.
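A minimal sketch of the baseline projected Landweber iteration that the paper sets out to accelerate, under toy assumptions (random operator, synthetic data); only the fixed step length and problem sizes below are invented for illustration.

```python
# Projected Landweber for nonnegative least squares: min_{x>=0} ||Ax - b||^2.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.maximum(rng.standard_normal(50), 0.0)
b = A @ x_true + 0.01 * rng.standard_normal(200)

tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step length below 2/||A||^2
x = np.zeros(50)
for k in range(500):                         # early stopping acts as regularizer
    grad = A.T @ (A @ x - b)                 # only the gradient is needed
    x = np.maximum(x - tau * grad, 0.0)      # gradient step + projection onto x>=0
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The acceleration techniques discussed in the paper (step-length selection rules, line searches, scaling) plug into exactly this gradient-plus-projection loop.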
PARSEC's Astrometry - The Risky Approach
NASA Astrophysics Data System (ADS)
Andrei, A. H.
2015-10-01
Parallaxes - and hence the fundamental establishment of stellar distances - rank among the oldest, most direct, and hardest of astronomical determinations. Arguably amongst the most essential too. The direct approach to obtaining trigonometric parallaxes, using a constrained set of equations to derive positions, proper motions, and parallaxes, has been labelled as risky. Properly so, because the axis of the parallactic apparent ellipse is smaller than one arcsec even for the nearest stars, and just a fraction of its perimeter can be followed. Thus the classical approach is to linearize the description by locking the solution to a set of precise positions of the Earth at the instants of observation, rather than to the dynamics of its orbit, and to adopt a close examination of the few observations available. In the PARSEC program the parallaxes of 143 brown dwarfs were planned. Five years of observations of the fields were taken with the WFI camera at the ESO 2.2m telescope in Chile. The goal is to provide a statistically significant number of trigonometric parallaxes for BD sub-classes from L0 to T7. Taking advantage of the large, regularly spaced quantity of observations, here we take the risky approach of fitting an ellipse to the observed ecliptic coordinates to derive the parallaxes. We also combine the solutions from different centroiding methods, widely proven in prior astrometric investigations. As each of these methods assesses diverse properties of the PSFs, they are taken as independent measurements and combined into a weighted least-squares general solution. The results obtained compare well with the literature and with the classical approach.
Re-engineering NASA's space communications to remain viable in a constrained fiscal environment
NASA Astrophysics Data System (ADS)
Hornstein, Rhoda Shaller; Hei, Donald J., Jr.; Kelly, Angelita C.; Lightfoot, Patricia C.; Bell, Holland T.; Cureton-Snead, Izeller E.; Hurd, William J.; Scales, Charles H.
1994-11-01
Along with the Red and Blue Teams commissioned by the NASA Administrator in 1992, NASA's Associate Administrator for Space Communications commissioned a Blue Team to review the Office of Space Communications (Code O) Core Program and determine how the program could be conducted faster, better, and cheaper. Since there was no corresponding Red Team for the Code O Blue Team, the Blue Team assumed a Red Team independent attitude and challenged the status quo, including current work processes, functional distinctions, interfaces, and information flow, as well as traditional management and system development practices. The Blue Team's unconstrained, non-parochial, and imaginative look at NASA's space communications program produced a simplified representation of the space communications infrastructure that transcends organizational and functional boundaries, in addition to existing systems and facilities. Further, the Blue Team adapted the 'faster, better, cheaper' charter to be relevant to the multi-mission, continuous nature of the space communications program and to serve as a gauge for improving customer services concurrent with achieving more efficient operations and infrastructure life cycle economies. This simplified representation, together with the adapted metrics, offers a future view and process model for reengineering NASA's space communications to remain viable in a constrained fiscal environment. Code O remains firm in its commitment to improve productivity, effectiveness, and efficiency. In October 1992, the Associate Administrator reconstituted the Blue Team as the Code O Success Team (COST) to serve as a catalyst for change. In this paper, the COST presents the chronicle and significance of the simplified representation and adapted metrics, and their application during the FY 1993-1994 activities.
NASA Astrophysics Data System (ADS)
Prástaro, Agostino
2008-02-01
Following our previous results on this subject [R.P. Agarwal, A. Prástaro, Geometry of PDE's. III(I): Webs on PDE's and integral bordism groups. The general theory, Adv. Math. Sci. Appl. 17 (2007) 239-266; R.P. Agarwal, A. Prástaro, Geometry of PDE's. III(II): Webs on PDE's and integral bordism groups. Applications to Riemannian geometry PDE's, Adv. Math. Sci. Appl. 17 (2007) 267-285; A. Prástaro, Geometry of PDE's and Mechanics, World Scientific, Singapore, 1996; A. Prástaro, Quantum and integral (co)bordism in partial differential equations, Acta Appl. Math. (5) (3) (1998) 243-302; A. Prástaro, (Co)bordism groups in PDE's, Acta Appl. Math. 59 (2) (1999) 111-201; A. Prástaro, Quantized Partial Differential Equations, World Scientific Publishing Co, Singapore, 2004, 500 pp.; A. Prástaro, Geometry of PDE's. I: Integral bordism groups in PDE's, J. Math. Anal. Appl. 319 (2006) 547-566; A. Prástaro, Geometry of PDE's. II: Variational PDE's and integral bordism groups, J. Math. Anal. Appl. 321 (2006) 930-948; A. Prástaro, Th.M. Rassias, Ulam stability in geometry of PDE's, Nonlinear Funct. Anal. Appl. 8 (2) (2003) 259-278; I. Stakgold, Boundary Value Problems of Mathematical Physics, I, The MacMillan Company, New York, 1967; I. Stakgold, Boundary Value Problems of Mathematical Physics, II, Collier-MacMillan, Canada, Ltd, Toronto, Ontario, 1968], integral bordism groups of the Navier-Stokes equation are calculated for smooth, singular and weak solutions, respectively. Then a characterization of global solutions is made on this ground. Sufficient conditions to assure existence of global smooth solutions are given and related to nullity of integral characteristic numbers of the boundaries. Stability of global solutions is related to some characteristic numbers of the space-like Cauchy data. Global solutions of variational problems constrained by (NS) are classified by means of suitable integral bordism groups too.
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Krider, E. P.; Murray, N.; Boccippio, D. J.
2007-01-01
A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes (DELTAEs) whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight (x, y, z, Q, x', y', z', Q') to just four (x, y, z, Q). The four unknowns (x, y, z, Q) are found by performing a numerical minimization of a chi-square function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source" (x', y', z', Q'), given the values of (x, y, z, Q). In this way, all 8 parameters (x, y, z, Q, x', y', z', Q') are found, yet a numerical search of only 4 parameters (x, y, z, Q) is required. The DR method has been used to analyze lightning-caused DeltaEs derived from multiple ground-based electric field measurements at the NASA Kennedy Space Center (KSC) and USAF Eastern Range (ER). The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) system at the KSC-ER, and from least squares error estimation theory, and the method is shown to be a useful "stand-alone" charge retrieval tool. Since more than one charge distribution describes a finite set of DELTAEs (i.e., solutions are non-unique), and since there can exist appreciable differences in the physical characteristics of these solutions, not all DR solutions are physically acceptable. Hence, an alternative and more accurate method of analysis is introduced that uses LDAR data to constrain the geometry of the charge solutions, thereby removing physically unacceptable retrievals. The charge solutions derived from this method are shown to compare well with independent satellite- and ground-based observations of lightning in several Florida storms.
Method and apparatus for measuring volatile compounds in an aqueous solution
Gilmore, Tyler J [Pasco, WA; Cantrell, Kirk J [West Richland, WA
2002-07-16
The present invention is an improvement to the method and apparatus for measuring volatile compounds in an aqueous solution. The apparatus is a chamber with sides and two ends, where the first end is closed. The chamber contains a solution volume of the aqueous solution and a gas that is trapped within the first end of the chamber above the solution volume. The gas defines a head space within the chamber above the solution volume. The chamber may also be a cup with the second end open and facing down and submerged in the aqueous solution so that the gas defines the head space within the cup above the solution volume. The cup can also be entirely submerged in the aqueous solution. The second end of the chamber may be closed such that the chamber can be used while resting on a flat surface such as a bench. The improvement is a sparger for mixing the gas with the solution volume. The sparger can be a rotating element such as a propeller on a shaft or a cavitating impeller. The sparger can also be a pump and nozzle, where the pump is a liquid pump and the nozzle is a liquid spray nozzle open to the head space for spraying the solution volume into the head space of gas. The pump could also be a gas pump and the nozzle a gas nozzle submerged in the solution volume for spraying the head space gas into the solution volume.
NASA Astrophysics Data System (ADS)
Murad, Mohammad Hassan; Fatema, Saba
2013-02-01
This paper presents a new family of interior solutions of the Einstein-Maxwell field equations in general relativity for a static spherically symmetric distribution of a charged perfect fluid with a particular form of charge distribution. This solution gives a wide range of the parameter K for which the solution is well behaved and hence suitable for modeling a superdense star. For this solution the gravitational mass of a star is maximized with all degrees of suitability by assuming the surface density equal to the normal nuclear density, ρ_nm = 2.5×10^17 kg m^-3. By this model we obtain the mass of the Crab pulsar, M_Crab = 1.36 M_⊙, with radius 13.21 km, constraining the moment of inertia to > 1.61×10^38 kg m^2 for the conservative estimate of the Crab nebula mass, 2 M_⊙; and M_Crab = 1.96 M_⊙ with radius R_Crab = 14.38 km, constraining the moment of inertia to > 3.04×10^38 kg m^2 for the newest estimate of the Crab nebula mass, 4.6 M_⊙. These results agree quite well with the possible values of mass and radius of the Crab pulsar. Besides this, our model yields moments of inertia for PSR J0737-3039A and PSR J0737-3039B of I_A = 1.4285×10^38 kg m^2 and I_B = 1.3647×10^38 kg m^2, respectively. It has been observed that under well-behaved conditions this class of solutions gives an overall maximum gravitational mass of a superdense object of M_G(max) = 4.7487 M_⊙, with radius R_{M_max} = 15.24 km, surface redshift 0.9878, charge 7.47×10^20 C, and central density 4.31 ρ_nm.
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Djilali, Ned
This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.
NASA Astrophysics Data System (ADS)
Liu, Qiao
2015-06-01
In a recent paper [7], Y. Du and K. Wang (2013) proved that the global-in-time Koch-Tataru type solution (u, d) to the n-dimensional incompressible nematic liquid crystal flow with small initial data (u0, d0) in BMO^{-1} × BMO has arbitrary space-time derivative estimates in the so-called Koch-Tataru space norms. The purpose of this paper is to show that the Koch-Tataru type solution satisfies decay estimates for any space-time derivative involving some borderline Besov space norms.
Complicated asymptotic behavior of solutions for porous medium equation in unbounded space
NASA Astrophysics Data System (ADS)
Wang, Liangwei; Yin, Jingxue; Zhou, Yong
2018-05-01
In this paper, we find that the unbounded spaces Y_σ(R^N) (0 < σ < 2/(m-1)) can provide the work spaces where complicated asymptotic behavior appears in solutions of the Cauchy problem for the porous medium equation. To overcome the difficulties caused by the nonlinearity of the equation and the unbounded solutions, we establish propagation estimates, growth estimates and weighted L^1-L^∞ estimates for the solutions.
An improved cosmic crystallography method to detect holonomies in flat spaces
NASA Astrophysics Data System (ADS)
Fujii, H.; Yoshii, Y.
2011-05-01
A new, improved version of a cosmic crystallography method for constraining cosmic topology is introduced. Like the circles-in-the-sky method using CMB data, we work in a thin, shell-like region containing plenty of objects. Two pairs of objects (a quadruplet) linked by a holonomy show a specific distribution pattern, and three filters of separation, vectorial condition, and lifetime of objects extract these quadruplets. Each object P_i is assigned an integer s_i, which is the number of candidate quadruplets including P_i as a member. Then an additional device, the s_i-histogram, is used to extract topological ghosts, which tend to have high values of s_i. In this paper we consider flat spaces with Euclidean geometry, and the filters are designed to constrain their holonomies. As the second filter, we prepared five types that are specialized for constraining specific holonomies: one for translation, one for half-turn corkscrew motion and glide reflection, and three for nth-turn corkscrew motion for n = 4, 3, and 6. Every multiconnected space has holonomies that are detected by at least one of these five filters. Our method is applied to catalogs of toy quasars in flat Λ-CDM universes whose typical sizes correspond to z ~ 5. With these simulations our method is found to work quite well. These are the situations in which type-II pair crystallography methods are insensitive because of the tiny number of ghosts. Moreover, in the flat cases, our method should be more sensitive than the type-I pair (or, in general, n-tuplet) methods because of its multifilter construction and its independence from n.
Natural Constraints to Species Diversification
Lewitus, Eric; Morlon, Hélène
2016-01-01
Identifying modes of species diversification is fundamental to our understanding of how biodiversity changes over evolutionary time. Diversification modes are captured in species phylogenies, but characterizing the landscape of diversification has been limited by the analytical tools available for directly comparing phylogenetic trees of groups of organisms. Here, we use a novel, non-parametric approach and 214 family-level phylogenies of vertebrates representing over 500 million years of evolution to identify major diversification modes, to characterize phylogenetic space, and to evaluate the bounds and central tendencies of species diversification. We identify five principal patterns of diversification to which all vertebrate families conform. These patterns, mapped onto multidimensional space, constitute a phylogenetic space with distinct properties. Firstly, phylogenetic space occupies only a portion of all possible tree space, showing family-level phylogenies to be constrained to a limited range of diversification patterns. Secondly, the geometry of phylogenetic space is delimited by quantifiable trade-offs in tree size and the heterogeneity and stem-to-tip distribution of branching events. These trade-offs are indicative of the instability of certain diversification patterns and effectively bound speciation rates (for successful clades) within upper and lower limits. Finally, both the constrained range and geometry of phylogenetic space are established by the differential effects of macroevolutionary processes on patterns of diversification. Given these properties, we show that the average path through phylogenetic space over evolutionary time traverses several diversification stages, each of which is defined by a different principal pattern of diversification and directed by a different macroevolutionary process. The identification of universal patterns and natural constraints to diversification provides a foundation for understanding the deep-time evolution of biodiversity. PMID:27505866
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto
Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred of those events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency between all the possible hosts detected within the measurement error box of a few dozen low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a ΛCDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to a 4%-8% level (2σ error), competitive with current uncertainties obtained by type Ia supernovae measurements, providing an independent test of our cosmological model.
Thermodynamic Constraints Improve Metabolic Networks.
Krumholz, Elias W; Libourel, Igor G L
2017-08-08
In pursuit of establishing a realistic metabolic phenotypic space, the reversibility of reactions is thermodynamically constrained in modern metabolic networks. The reversibility constraints follow from heuristic thermodynamic poise approximations that take anticipated cellular metabolite concentration ranges into account. Because constraints reduce the feasible space, draft metabolic network reconstructions may need more extensive reconciliation, and a larger number of genes may become essential. Notwithstanding ubiquitous application, the effect of reversibility constraints on the predictive capabilities of metabolic networks has not been investigated in detail. Instead, work has focused on the implementation and validation of the thermodynamic poise calculation itself. With the advance of fast linear programming-based network reconciliation, assessing the effects of reversibility constraints on network reconciliation and gene essentiality predictions has become feasible and is the subject of this study. Networks with thermodynamically informed reversibility constraints outperformed networks constrained with randomly shuffled constraints in gene essentiality predictions. Unconstrained networks predicted gene essentiality as accurately as thermodynamically constrained networks, but predicted substantially fewer essential genes. Networks that were reconciled with sequence similarity data and strongly enforced reversibility constraints outperformed all other networks. We conclude that metabolic network analysis confirmed the validity of the thermodynamic constraints, and that thermodynamic poise information is actionable during network reconciliation.
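A hedged sketch of how reversibility constraints enter this kind of analysis: in flux balance analysis, thermodynamic irreversibility is imposed simply as a lower flux bound of zero on the affected reactions. The stoichiometry, bounds, and objective below are made-up toy values, not from the study.

```python
# Toy flux balance analysis with reversibility constraints as flux bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions); steady state: S v = 0.
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]])
reversible = [True, False, False]   # thermodynamic poise marks r2, r3 irreversible
vmax = 10.0
bounds = [(-vmax if rev else 0.0, vmax) for rev in reversible]

c = np.array([0.0, 0.0, -1.0])      # maximize flux through reaction 3
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)
```

Shuffling which reactions get the zero lower bound, as the control experiment above does, changes the feasible space and hence which gene deletions the model predicts to be lethal.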
Dipole and quadrupole synthesis of electric potential fields. M.S. Thesis
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1979-01-01
A general technique for expanding an unknown potential field in terms of a linear summation of weighted dipole or quadrupole fields is described. Computational methods were developed for the iterative addition of dipole fields. Various solution potentials were compared inside the boundary with a more precise calculation of the potential to derive optimal schemes for locating the singularities of the dipole fields. Then, the problem of determining solutions to Laplace's equation on an unbounded domain as constrained by pertinent electron trajectory data was considered.
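A minimal sketch of the expansion idea under stated assumptions (a 2D toy domain and made-up source positions, not the thesis setup): express the unknown potential as a weighted sum of dipole fields whose singularities lie outside the boundary, and fit the weights to boundary data by linear least squares.

```python
# Fit dipole-field weights to boundary samples of a harmonic function.
import numpy as np

def dipole_potential(p, x0, m):
    """Potential at points p (n x 2) of a 2D dipole at x0 with moment m."""
    d = p - x0
    return np.sum(d * m, axis=1) / np.sum(d * d, axis=1)

theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
boundary = np.column_stack([np.cos(theta), np.sin(theta)])   # unit circle
target = np.exp(boundary[:, 0]) * np.cos(boundary[:, 1])     # harmonic: Re(e^z)

# Singularities placed outside the domain, as the text prescribes.
sources = np.array([[2.0, 0.0], [0.0, 2.0], [-2.0, 0.0], [0.0, -2.0]])
basis = np.column_stack([
    dipole_potential(boundary, s, m)
    for s in sources
    for m in (np.array([1.0, 0.0]), np.array([0.0, 1.0]))
])
weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
print("boundary residual:", np.linalg.norm(basis @ weights - target))
```

Iteratively adding dipoles and re-solving this least-squares fit, while monitoring the interior error, mirrors the scheme the abstract describes for choosing singularity locations.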
Optimality conditions for the numerical solution of optimization problems with PDE constraints :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Ridzal, Denis
2014-03-01
A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.
Constrained Profile Retrieval Applied to MIPAS Observation Mode
NASA Technical Reports Server (NTRS)
Steck, T.; Clarmann, T. von
2000-01-01
To investigate the atmosphere of the Earth, and to detect changes in our environment, the Environmental Satellite (ENVISAT) will be launched by the European Space Agency (ESA) into a polar orbit in mid-2001.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stavros, E. Natasha; Schimel, David; Pavlick, Ryan
Technologies on the International Space Station will provide ~1 year of synchronous observations of ecosystem composition, structure and function, in 2018. Here, we discuss these instruments and how they can be used to constrain global models and improve our understanding of the current state of terrestrial ecosystems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jennings, Elise; Wechsler, Risa H.
We present the nonlinear 2D galaxy power spectrum, P(k, µ), in redshift space, measured from the Dark Sky simulations, using galaxy catalogs constructed with both halo occupation distribution and subhalo abundance matching methods, chosen to represent an intermediate-redshift sample of luminous red galaxies. We find that the information content in individual µ (cosine of the angle to the line of sight) bins is substantially richer than in multipole moments, and show that this can be used to isolate the impact of nonlinear growth and redshift space distortion (RSD) effects. Using the µ < 0.2 simulation data, which we show is not impacted by RSD effects, we can successfully measure the nonlinear bias to an accuracy of ~5% at k < 0.6 h Mpc^-1. This use of individual µ bins to extract the nonlinear bias successfully removes a large parameter degeneracy when constraining the linear growth rate of structure. We carry out a joint parameter estimation, using the low-µ simulation data to constrain the nonlinear bias and µ > 0.2 to constrain the growth rate, and show that f can be constrained to ~26(22)% to k_max < 0.4(0.6) h Mpc^-1 from clustering alone using a simple dispersion model, for a range of galaxy models. Our analysis of individual µ bins also reveals interesting physical effects which arise simply from different methods of populating halos with galaxies. We also find a prominent turnaround scale, at which RSD damping effects are greater than the nonlinear growth, which differs not only for each µ bin but also for each galaxy model. These features may provide unique signatures which could be used to shed light on the galaxy-dark matter connection. Furthermore, the idea of separating nonlinear growth and RSD effects making use of the full information in the 2D galaxy power spectrum yields significant improvements in constraining cosmological parameters and may be a promising probe of galaxy formation models.
Effects of the oceans on polar motion: Extended investigations
NASA Technical Reports Server (NTRS)
Dickman, Steven R.
1986-01-01
A method was found for expressing the tide current velocities in terms of the tide height (with all variables expanded in spherical harmonics). All time equations were then combined into a single, nondifferential matrix equation involving only the unknown tide height. The pole tide was constrained so that no tidewater flows across continental boundaries. The constraint was derived for the case of turbulent oceans, with the tide velocities expressed in terms of the tide height. The two matrix equations were combined. Simple matrix inversion then yielded the constrained solution. Programs to construct and invert the matrix equations were written. Preliminary results were obtained and are discussed.
Motion Planning and Synthesis of Human-Like Characters in Constrained Environments
NASA Astrophysics Data System (ADS)
Zhang, Liangjun; Pan, Jia; Manocha, Dinesh
We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high-DOF human-like characters. The planning problem is decomposed into a sequence of low-dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40-DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.
Initial conditions for cosmological perturbations
NASA Astrophysics Data System (ADS)
Ashtekar, Abhay; Gupt, Brajesh
2017-02-01
Penrose proposed that the big bang singularity should be constrained by requiring that the Weyl curvature vanishes there. The idea behind this past hypothesis is attractive because it constrains the initial conditions for the universe in geometric terms and is not confined to a specific early universe paradigm. However, the precise statement of Penrose’s hypothesis is tied to classical space-times and furthermore restricts only the gravitational degrees of freedom. These are encapsulated only in the tensor modes of the commonly used cosmological perturbation theory. Drawing inspiration from the underlying idea, we propose a quantum generalization of Penrose’s hypothesis using the Planck regime in place of the big bang, and simultaneously incorporating tensor as well as scalar modes. Initial conditions selected by this generalization constrain the universe to be as homogeneous and isotropic in the Planck regime as permitted by the Heisenberg uncertainty relations.
Microbial Fuel Cell Performance with a Pressurized Cathode Chamber
USDA-ARS?s Scientific Manuscript database
Microbial fuel cell (MFC) power densities are often constrained by the oxygen reduction reaction rate on the cathode electrode. One important factor for this is the normally low solubility of oxygen in the aqueous cathode solution creating mass transport limitations, which hinder oxygen reduction a...
Optimal Control Strategies for Constrained Relative Orbits
2007-09-01
the chief. The work assumes the Clohessy-Wiltshire closeness assumption between the deputy and chief is valid; however, elliptical chief orbits are... [Remainder of the record is front-matter residue: the table-of-contents entry "Appendix G. A Closed-Form Solution of the Linear Clohessy-Wiltshire Equations" and an acronym list (CW: Clohessy-Wiltshire; DARPA: Defense Advanced Research...).]
Stability of Internal Space in Kaluza-Klein Theory
NASA Astrophysics Data System (ADS)
Maeda, K.; Soda, J.
1998-12-01
We extend a model studied by Li and Gott III to investigate the stability of internal space in Kaluza-Klein theory. Our model is a four-dimensional de Sitter space plus an n-dimensional compactified internal space. We introduce a solution of the semi-classical Einstein equation which shows that an n-dimensional compactified internal space can be stabilized by the Casimir effect. The self-consistency of this solution is checked. One may apply this solution to study the issue of the black hole singularity.
Doppelgänger dark energy: modified gravity with non-universal couplings after GW170817
NASA Astrophysics Data System (ADS)
Amendola, Luca; Bettoni, Dario; Domènech, Guillem; Gomes, Adalto R.
2018-06-01
Gravitational wave (GW) astronomy has severely narrowed down the theoretical space for scalar-tensor theories. We propose a new class of attractor models for the Horndeski action in which GWs propagate at the speed of light in the nearby universe but not in the past. To do so we derive new solutions to the interacting dark sector in which the ratio of dark energy and dark matter remains constant, which we refer to as doppelgänger dark energy (DDE). We then remove the interaction between dark matter and dark energy by a suitable change of variables. The accelerated expansion that (we) baryons observe is due to a conformal coupling to the dark energy scalar field. We show how in this context it is possible to find a non-trivial subset of solutions in which GWs propagate at the speed of light only at low redshifts. The model is an attractor, thus reaching the limit c_T → 1 relatively fast. However, the effect of baryons turns out to be non-negligible and severely constrains the form of the Lagrangian. In passing, we find that in the simplest DDE models the no-ghost conditions for perturbations require a non-universal coupling to gravity. In the end, we comment on possible ways to solve the lack of a matter domination stage for DDE models.
A CityGML Extension for Handling Very Large TINs
NASA Astrophysics Data System (ADS)
Kumar, K.; Ledoux, H.; Stoter, J.
2016-10-01
In addition to buildings, the terrain forms an important part of a 3D city model. Although in GIS terrains are usually represented with 2D grids, TINs are also increasingly being used in practice. One example is 3DTOP10NL, the 3D city model covering the whole of the Netherlands, which stores the relief with a constrained TIN containing more than 1 billion triangles. Due to the massive size of such datasets, the main problem that arises is how to efficiently store and maintain them. While CityGML supports the storage of TINs, we argue in this paper that the current solution is not adequate. For instance, the 1 billion+ triangles of 3DTOP10NL require 686 GB of storage space with CityGML. Furthermore, the current solution does not store the topological relationships of the triangles, and there are no clear mechanisms to handle several LODs. We propose in this paper a CityGML extension for the compact representation of terrains. We describe our abstract and implementation specifications (modelled in UML), and our prototype implementation to convert TINs to our CityGML structure. It increases the number of topological relationships that are explicitly represented, and allows us to compress up to a factor of ~25 in our experiments with massive real-world terrains (more than 1 billion triangles).
Dynamics of Numerics & Spurious Behaviors in CFD Computations. Revised
NASA Technical Reports Server (NTRS)
Yee, Helen C.; Sweby, Peter K.
1997-01-01
The global nonlinear behavior of finite discretizations for constant time steps and fixed or adaptive grid spacings is studied using tools from dynamical systems theory. Detailed analysis of commonly used temporal and spatial discretizations for simple model problems is presented. The role of dynamics in the understanding of long time behavior of numerical integration and the nonlinear stability, convergence, and reliability of using time-marching approaches for obtaining steady-state numerical solutions in computational fluid dynamics (CFD) is explored. The study is complemented with examples of spurious behavior observed in steady and unsteady CFD computations. The CFD examples were chosen to illustrate non-apparent spurious behavior that was difficult to detect without extensive grid and temporal refinement studies and some knowledge from dynamical systems theory. Studies revealed the various possible dangers of misinterpreting numerical simulation of realistic complex flows that are constrained by available computing power. In large scale computations where the physics of the problem under study is not well understood and numerical simulations are the only viable means of solution, extreme care must be taken in both computation and interpretation of the numerical data. The goal of this paper is to explore the important role that dynamical systems theory can play in the understanding of the global nonlinear behavior of numerical algorithms and to aid the identification of the sources of numerical uncertainties in CFD.
Vetting, Matthew W.; Al-Obaidi, Nawar; Zhao, Suwen; ...
2014-12-25
The rate at which genome sequencing data is accruing demands enhanced methods for functional annotation and metabolism discovery. Solute binding proteins (SBPs) facilitate the transport of the first reactant in a metabolic pathway, thereby constraining the regions of chemical space and the chemistries that must be considered for pathway reconstruction. In this paper, we describe high-throughput protein production and differential scanning fluorimetry platforms, which enabled the screening of 158 SBPs against a 189-component library specifically tailored for this class of proteins. Like all screening efforts, this approach is limited by the practical constraints imposed by construction of the library, i.e., we can study only those metabolites that are known to exist and which can be made in sufficient quantities for experimentation. To move beyond these inherent limitations, we illustrate the promise of crystallographic- and mass spectrometric-based approaches for the unbiased use of entire metabolomes as screening libraries. Together, our approaches identified 40 new SBP ligands, generated experiment-based annotations for 2084 SBPs in 71 isofunctional clusters, and defined numerous metabolic pathways, including novel catabolic pathways for the utilization of ethanolamine as sole nitrogen source and the use of D-Ala-D-Ala as sole carbon source. These efforts begin to define an integrated strategy for realizing the full value of amassing genome sequence data.
Hamiltonian Monte Carlo Inversion of Seismic Sources in Complex Media
NASA Astrophysics Data System (ADS)
Fichtner, A.; Simutė, S.
2017-12-01
We present a probabilistic seismic source inversion method that properly accounts for 3D heterogeneous Earth structure and provides full uncertainty information on the timing, location and mechanism of the event. Our method rests on two essential elements: (1) reciprocity and spectral-element simulations in complex media, and (2) Hamiltonian Monte Carlo sampling that requires only a small amount of test models. Using spectral-element simulations of 3D, visco-elastic, anisotropic wave propagation, we precompute a data base of the strain tensor in time and space by placing sources at the positions of receivers. Exploiting reciprocity, this receiver-side strain data base can be used to promptly compute synthetic seismograms at the receiver locations for any hypothetical source within the volume of interest. The rapid solution of the forward problem enables a Bayesian solution of the inverse problem. For this, we developed a variant of Hamiltonian Monte Carlo (HMC) sampling. Taking advantage of easily computable derivatives, HMC converges to the posterior probability density with orders of magnitude less samples than derivative-free Monte Carlo methods. (Exact numbers depend on observational errors and the quality of the prior). We apply our method to the Japanese Islands region where we previously constrained 3D structure of the crust and upper mantle using full-waveform inversion with a minimum period of around 15 s.
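As a concrete illustration of the sampling machinery, here is a minimal, self-contained Hamiltonian Monte Carlo sketch on a toy 2D Gaussian posterior; it is generic HMC with leapfrog integration, not the authors' variant, and all names and numbers are illustrative.

```python
# Generic HMC with leapfrog integration on a toy 2D Gaussian posterior.
import numpy as np

rng = np.random.default_rng(3)
Cinv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))  # toy precision matrix

def neg_log_post(q):
    return 0.5 * q @ Cinv @ q        # -log posterior up to a constant

def grad(q):
    return Cinv @ q                  # its gradient (cheap, which is what HMC exploits)

def hmc_step(q, eps=0.15, n_leap=20):
    p0 = rng.standard_normal(q.shape)               # refresh momentum
    qn, p = q.copy(), p0 - 0.5 * eps * grad(q)      # initial half kick
    for i in range(n_leap):
        qn = qn + eps * p                           # full position drift
        if i < n_leap - 1:
            p = p - eps * grad(qn)                  # full momentum kick
    p = p - 0.5 * eps * grad(qn)                    # final half kick
    dH = neg_log_post(qn) - neg_log_post(q) + 0.5 * (p @ p - p0 @ p0)
    return qn if np.log(rng.random()) < -dH else q  # Metropolis accept/reject

q, samples = np.zeros(2), []
for _ in range(2000):
    q = hmc_step(q)
    samples.append(q)
print("sample covariance:\n", np.cov(np.array(samples).T))
```

Because each proposal uses gradient information to travel far in parameter space, far fewer posterior evaluations are needed than with derivative-free samplers, which is what makes the precomputed strain database strategy above affordable.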
Initiation and propagation of a PKN hydraulic fracture in permeable rock: Toughness dominated regime
NASA Astrophysics Data System (ADS)
Sarvaramini, E.; Garagash, D.
2011-12-01
The present work investigates the injection of a low-viscosity fluid into a pre-existing fracture with constrained height (PKN), as in waterflooding or supercritical CO2 injection. Contrary to conventional hydraulic fracturing, where 'cake build up' limits diffusion to a small zone, the low-viscosity fluid allows for diffusion over a wider range of scales. Over large injection times the pattern becomes 2- or 3-D, necessitating full-space diffusion modeling. In addition, the dissipation of energy associated with fracturing of rock dominates the energy needed for the low-viscosity fluid flow into the propagating crack. As a result, the fracture toughness is important in evaluating both the initiation and the ensuing propagation of these fractures. The classical PKN hydraulic fracturing model, amended to account for full-space leak-off and the toughness [Garagash, unpublished 2009], is used to evaluate the pressure history and fluid leak-off volume during the injection of a low-viscosity fluid into a pre-existing and initially stationary fracture. In order to find the pressure history, the stationary crack is first subjected to a step pressure increase. The response of the porous medium to the step pressure increase, in terms of fluid leak-off volume, provides the fundamental solution, which can then be used to find the transient pressurization using Duhamel's theorem [Detournay & Cheng, IJSS 1991]. For the step pressure increase an integral equation technique is used to find the leak-off rate history. For small time the solution must converge to the short-time asymptote, which corresponds to a 1-D diffusion pattern. However, as the diffusion length in the zone around the fracture increases, the assumption of a 1-D pattern is no longer valid and the diffusion follows a 2-D pattern. The solution of the corresponding integral equation gives the leak-off rate history, which is used to find the cumulative leak-off volume. The transient pressurization solution is obtained using global conservation of the fluid injected into the fracture. With increasing pressure in the fracture due to the fluid injection, the energy release rate eventually becomes equal to the toughness and the fracture propagates. The evolution of the fracture length is established using a method similar to the one employed for the stationary crack.
On the mass concentration of L^2-constrained minimizers for a class of Schrödinger-Poisson equations
NASA Astrophysics Data System (ADS)
Ye, Hongyu; Luo, Tingjian
2018-06-01
In this paper, we study the mass concentration behavior of positive solutions with prescribed L^2-norm for a class of Schrödinger-Poisson equations in R^3: -Δu - μu + φ_u u - |u|^{p-2}u = 0, x ∈ R^3, μ ∈ R, with -Δφ_u = |u|^2, where p ∈ (2,6). We show that positive solutions whose prescribed L^2-norm tends to 0 (in some cases) or to +∞ (in others) behave like the positive solution of the Schrödinger equation -Δu + u = |u|^{p-2}u in R^3.
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
The Linear Transportation Problem (LTP) is a case of constrained optimization where we want to minimize cost subject to a balance between the amount of supply and the amount of demand. Exact methods such as northwest corner, Vogel, Russell, and minimal cost have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem at any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve on the optimal solution produced by PSO.
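A hedged sketch of the PSOGA idea on a toy instance: standard PSO velocity and position updates plus a GA-style mutation that randomly perturbs particles to escape local optima. The cost matrix, supply/demand vectors, penalty weight, and swarm parameters below are all made up for illustration, not taken from the paper.

```python
# PSO with a GA mutation operator on a penalized transportation problem.
import numpy as np

rng = np.random.default_rng(4)
cost = rng.uniform(1.0, 9.0, size=(3, 4))          # 3 supply nodes x 4 demand nodes

def objective(x):
    """Penalized transportation cost for flattened nonnegative shipments x."""
    X = np.abs(x).reshape(3, 4)
    supply_gap = np.abs(X.sum(axis=1) - np.array([30, 40, 30])).sum()
    demand_gap = np.abs(X.sum(axis=0) - np.array([20, 30, 25, 25])).sum()
    return np.sum(cost * X) + 1e3 * (supply_gap + demand_gap)

n, dim, w, c1, c2, pmut = 40, 12, 0.7, 1.5, 1.5, 0.1
pos = rng.uniform(0, 40, (n, dim)); vel = np.zeros((n, dim))
pbest = pos.copy(); pbest_f = np.array([objective(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for it in range(500):
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    mutate = rng.random(n) < pmut                  # GA mutation operator
    pos[mutate] += rng.normal(0, 5.0, (mutate.sum(), dim))
    f = np.array([objective(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()
print("best penalized cost:", pbest_f.min())
```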
Zeb, Salman; Yousaf, Muhammad
2017-01-01
In this article, we present a QR updating procedure as a solution approach for the linear least squares problem with equality constraints. We reduce the constrained problem to an unconstrained linear least squares problem and partition it into a small subproblem. The QR factorization of the subproblem is calculated, and we then apply updating techniques to its upper triangular factor R to obtain the solution. We carry out an error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments, with particular emphasis on dense problems.
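For orientation, here is one standard route to the same problem, the null-space method for equality-constrained least squares, sketched under toy assumptions; the authors' approach instead updates the QR factor R of a reduced subproblem, which this sketch does not reproduce.

```python
# Null-space method for min ||A x - b||_2 subject to B x = d.
import numpy as np

rng = np.random.default_rng(5)
A, b = rng.standard_normal((20, 6)), rng.standard_normal(20)
B, d = rng.standard_normal((2, 6)), rng.standard_normal(2)

Q, _ = np.linalg.qr(B.T, mode="complete")   # columns of Q span R^6
Z = Q[:, B.shape[0]:]                       # trailing columns span null(B)
x_p = np.linalg.lstsq(B, d, rcond=None)[0]  # particular solution: B x_p = d
y = np.linalg.lstsq(A @ Z, b - A @ x_p, rcond=None)[0]  # unconstrained LS in null space
x = x_p + Z @ y                             # any feasible x has this form

print("constraint residual:", np.linalg.norm(B @ x - d))
print("objective:", np.linalg.norm(A @ x - b))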
Design Optimization Toolkit: Users' Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods that are suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.
Design of an efficient space constrained diffuser for supercritical CO2 turbines
NASA Astrophysics Data System (ADS)
Keep, Joshua A.; Head, Adam J.; Jahn, Ingo H.
2017-03-01
Radial inflow turbines are an arguably relevant architecture for energy extraction from ORC and supercritical CO2 power cycles. At small scale, design constraints can prescribe high exit velocities for such turbines, which lead to high kinetic energy in the turbine exhaust stream. The inclusion of a suitable diffuser in a radial turbine system allows some exhaust kinetic energy to be recovered as static pressure, thereby ensuring efficient operation of the overall turbine system. In supercritical CO2 Brayton cycles, the high turbine inlet pressure can lead to a sealing challenge if the rotor is supported from the rotor rear side, due to the seal operating at rotor inlet pressure. An alternative to this is a cantilevered layout with the rotor exit facing the bearing system. While such a layout is attractive for the sealing system, it limits the axial space claim of any diffuser. Previous studies of conical diffuser geometries for supercritical CO2 have shown that, in order to achieve optimal static pressure recovery, longer geometries of a shallower cone angle are necessitated when compared to air. A diffuser with a combined annular-radial arrangement is investigated as a means to package the aforementioned geometric characteristics into a limited space claim for a 100 kW radial inflow turbine. Simulation results show that a diffuser of this design can attain static pressure rise coefficients greater than 0.88. This confirms that annular-radial diffusers are a viable design solution for supercritical CO2 radial inflow turbines, thus enabling an alternative cantilevered rotor layout.
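For context, the static pressure rise (recovery) coefficient quoted above is conventionally defined as follows (standard diffuser nomenclature, not a definition taken from the paper):

```latex
C_p \;=\; \frac{p_{\text{exit}} - p_{\text{inlet}}}{p_{0,\text{inlet}} - p_{\text{inlet}}}
```

where p is static pressure and p_0 stagnation pressure at the diffuser inlet, so C_p > 0.88 indicates that more than 88% of the inlet dynamic head is recovered as static pressure.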
NASA Astrophysics Data System (ADS)
Denicol, Gabriel; Heinz, Ulrich; Martinez, Mauricio; Noronha, Jorge; Strickland, Michael
2014-12-01
We present an exact solution to the Boltzmann equation which describes a system undergoing boost-invariant longitudinal and azimuthally symmetric radial expansion for arbitrary shear viscosity to entropy density ratio. This new solution is constructed by considering the conformal map between Minkowski space and the direct product of three-dimensional de Sitter space with a line. The resulting solution respects SO(3)_q ⊗ SO(1,1) ⊗ Z_2 symmetry. We compare the exact kinetic solution with exact solutions of the corresponding macroscopic equations that were obtained from the kinetic theory in ideal and second-order viscous hydrodynamic approximations. The macroscopic solutions are obtained in de Sitter space and are subject to the same symmetries used to obtain the exact kinetic solution.
Constrained coding for the deep-space optical channel
NASA Technical Reports Server (NTRS)
Moision, B.; Hamkins, J.
2002-01-01
In this paper, we demonstrate a class of low-complexity modulation codes satisfying the (d,k) constraint that offer throughput gains over M-PPM on the order of 10-15%, which translate into SNR gains of 0.4 to 0.6 dB.
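As a reader's aid (an illustration, not material from the paper), a (d,k) run-length constraint requires every run of zeros between consecutive ones to have length at least d and at most k; the minimal checker below makes that concrete:

```python
def satisfies_dk(bits, d, k):
    """True if every run of 0s between consecutive 1s has length in [d, k]."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    gaps = [j - i - 1 for i, j in zip(ones, ones[1:])]
    return all(d <= g <= k for g in gaps)

assert satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], d=2, k=3)
assert not satisfies_dk([1, 1], d=1, k=3)   # adjacent ones violate d = 1
```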
Robotic influence in the conceptual design of mechanical systems in space and vice versa - A survey
NASA Technical Reports Server (NTRS)
Sanger, George F.
1988-01-01
A survey of methods using robotic devices to construct structural elements in space is presented. Two approaches to robotic construction are considered: one in which the structural elements are designed using conventional aerospace techniques, which tend to constrain the functional aspects of robotics, and one in which the structural elements are designed from the conceptual stage with built-in robotic features. Examples are presented of structural building concepts using robotics, including the construction of the SP-100 nuclear reactor power system, a multimirror large-aperture IR space telescope concept, retrieval and repair in space, and the Flight Telerobotic Servicer.
New Paradigms for Ensuring the Enduring Viability of the Space Science Enterprise
NASA Astrophysics Data System (ADS)
Arenberg, Jonathan; Conti, Alberto
2018-01-01
Pursuing groundbreaking science in a highly cost- and funding-constrained environment presents new challenges to the development of future large space astrophysics missions. Within the conventional cost models for large observatories, executing a flagship "mission after next" appears to be unsustainable. Achieving our nation's space astrophysics ambitions requires new paradigms in system design, development and manufacture. Implementation of this new paradigm requires that the space astrophysics community adopt new answers to a new set of questions. This poster will present our recent results on the origins of these new questions and the steps to their answers.
Concepts for a Space-Based Gravitational-Wave Observatory (SGO)
NASA Technical Reports Server (NTRS)
Stebbins, Robin T.
2012-01-01
The low-frequency band (0.0001 - 1 Hz) of the gravitational wave spectrum has the most interesting astrophysical sources. It is only accessible from space. The Laser Interferometer Space Antenna (LISA) concept has been the leading contender for a space-based detector in this band. Despite a strong recommendation from Astro2010, constrained budgets motivate the search for a less expensive concept, even at the loss of some science. We have explored the range of lower cost mission concepts derived from two decades of studying the LISA concept. We describe LISA-like concepts that span the range of affordable and scientifically worthwhile missions, and summarize the analyses behind them.
Dynamic Steering for Improved Sensor Autonomy and Catalogue Maintenance
NASA Astrophysics Data System (ADS)
Hobson, T.; Gordon, N.; Clarkson, I.; Rutten, M.; Bessell, T.
A number of international agencies endeavour to maintain catalogues of the man-made resident space objects (RSOs) currently orbiting the Earth. Such catalogues are primarily created to anticipate and avoid destructive collisions involving important space assets such as manned missions and active satellites. An agency's ability to achieve this objective depends on the accuracy, reliability and timeliness of the information used to update its catalogue. A primary means of gathering this information is regular direct observation of the tens of thousands of currently detectable RSOs via networks of space surveillance sensors. But operational constraints sometimes prevent accurate and timely reacquisition of all known RSOs, which can cause them to become lost to the tracking system. Furthermore, when comprehensive acquisition of new objects does not occur, these objects, in addition to the lost RSOs, produce uncorrelated detections when next observed. Due to the rising number of space missions and the introduction of newer, more capable space sensors, the number of uncorrelated targets is at an all-time high. The process of differentiating uncorrelated detections caused by once-acquired, now-lost RSOs from newly detected RSOs is a difficult and often labour-intensive task. Current methods for overcoming this challenge focus on advances in orbit propagation and object characterisation to improve prediction accuracy and target identification. In this paper, we describe a complementary approach that incorporates increased awareness of error and failed observations into the RSO tracking solution. Our methodology employs a technique called dynamic steering to improve the autonomy and capability of a space surveillance network's steerable sensors. By co-situating each sensor with a low-cost high-performance computer, the steerable sensor can quickly and intelligently decide how to steer itself. The sensor system uses a dedicated parallel-processing architecture to compute a high-fidelity estimate of the target's prior state error distribution in real time. Negative information, such as when an RSO is targeted for observation but not observed, is incorporated to improve the likelihood of reacquiring the target in future observation attempts. The sensor is consequently capable of improving its utility by planning each observation using a sensor steering solution informed by all prior attempts at observing the target. We describe the practical implementation of a single experimental sensor and offer the results of recent field trials. These trials involved reacquisition and constrained Initial Orbit Determination of RSOs a number of months after prior observation and initial detection. Using the proposed approach, the system is capable of using targeting information that would be unusable by existing space surveillance networks. The system consequently offers a means of enhancing space surveillance for SSA via increased system capacity, a higher degree of autonomy and the ability to reacquire objects whose dynamics are insufficiently modelled to cue a conventional space surveillance system for observation and tracking.
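A minimal sketch of the negative-information idea, assuming a generic particle representation of the target's state (the detection probability p_detect and the field-of-view mask are illustrative values, not the paper's): particles falling inside the searched field of view are down-weighted after a failed observation, concentrating probability where the target could still be.

```python
import numpy as np

def negative_information_update(weights, in_fov, p_detect=0.9):
    """Particle-filter style update after a failed observation attempt:
    particles inside the searched field of view are down-weighted by the
    probability that a present target would have been detected."""
    w = weights * np.where(in_fov, 1.0 - p_detect, 1.0)
    return w / w.sum()

# Toy usage: 5 particles, the first three fell inside the searched region.
w = negative_information_update(np.full(5, 0.2), np.array([1, 1, 1, 0, 0], bool))
print(w)   # probability mass shifts toward the unsearched region
```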
NASA Astrophysics Data System (ADS)
Khode, Urmi B.
High Altitude Long Endurance (HALE) airships are platforms of interest due to their persistent observation and persistent communication capabilities. A novel HALE airship design configuration incorporates a composite sandwich propulsive hull duct between the front and the back of the hull for significant drag reduction via blown-wake effects. The sandwich composite shell duct is subjected to hull pressure on its outer walls and flow suction on its inner walls, which results in in-plane compressive wall stress that may cause duct buckling. An approach based upon finite element stability analysis combined with a ply layup and foam thickness determination weight minimization search algorithm is utilized. Its goal is to achieve an optimized configuration of the sandwich composite as the solution to a constrained minimum-weight design problem, for which the shell duct remains stable with a prescribed margin of safety under prescribed loading. The stability analysis methodology is first verified by comparing published analytical results for a number of simple cylindrical shell configurations with FEM counterpart solutions obtained using the commercially available code ABAQUS. Results show that the approach is effective in identifying minimum-weight composite duct configurations for a number of representative combinations of duct geometry, composite material and foam properties, and propulsive duct applied pressure loading.
NASA Astrophysics Data System (ADS)
Provencher, Stephen W.
1982-09-01
CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be chosen automatically on the basis of an F-test and confidence region. The interpretation of the latter, and of error estimates based on the covariance matrix of the constrained regularized solution, are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
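As a hedged, modern analogue of CONTIN's strategy (not CONTIN itself), the sketch below combines a quadratic regularizor with non-negativity constraints by augmenting the least squares system and solving it with SciPy's nnls; the toy Laplace-inversion data are invented for the example:

```python
import numpy as np
from scipy.optimize import nnls

def regularized_nnls(A, b, L, alpha):
    """min ||A x - b||^2 + alpha ||L x||^2  subject to  x >= 0,
    solved by stacking the regularizor onto the design matrix."""
    A_aug = np.vstack([A, np.sqrt(alpha) * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, _ = nnls(A_aug, b_aug)
    return x

# Toy Laplace inversion: b(t) = sum_j x_j exp(-t s_j) + noise.
t = np.linspace(0.01, 5, 60)
s = np.linspace(0.1, 10, 40)
A = np.exp(-np.outer(t, s))
x_true = np.exp(-0.5 * (s - 3.0) ** 2)
b = A @ x_true + 1e-3 * np.random.default_rng(0).normal(size=t.size)
L = np.eye(s.size)          # identity regularizor; a smoothness operator also works
x_hat = regularized_nnls(A, b, L, alpha=1e-2)
```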
Zeolite crystal growth in space - What has been learned
NASA Technical Reports Server (NTRS)
Sacco, A., Jr.; Thompson, R. W.; Dixon, A. G.
1993-01-01
Three zeolite crystal growth experiments developed at WPI have been performed in space in the last twelve months. One experiment, GAS-1, illustrated that to grow large, crystallographically uniform crystals in space, the precursor solutions should be mixed in microgravity. Another experiment evaluated the optimum mixing protocol for solutions that chemically interact ('gel') on contact. These results were utilized in setting the protocol for mixing nineteen zeolite solutions that were then processed and yielded zeolites A, X and mordenite. All solutions in which the nucleation event was influenced produced larger, more 'uniform' crystals than did identical solutions processed on Earth.
All symmetric space solutions of eleven-dimensional supergravity
NASA Astrophysics Data System (ADS)
Wulff, Linus
2017-06-01
We find all symmetric space solutions of eleven-dimensional supergravity completing an earlier classification by Figueroa-O’Farrill. They come in two types: AdS solutions and pp-wave solutions. We analyze the supersymmetry conditions and show that out of the 99 AdS geometries the only supersymmetric ones are the well known backgrounds arising as near-horizon limits of (intersecting) branes and preserving 32, 16 or 8 supersymmetries. The general form of the superisometry algebra for symmetric space backgrounds is also derived.
Teaching IP Networking Fundamentals in Resource Constrained Educational Environments
ERIC Educational Resources Information Center
Armitage, Grenville; Harrop, Warren
2005-01-01
Many educational institutions suffer from a lack of funding to keep telecommunications laboratory classes up to date and flexible. This paper describes our Remote Unix Lab Environment (RULE), a solution for exposing students to the latest Internet based telecommunications software tools in a Unix like environment. RULE leverages existing PC…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hempling, Scott; Elefant, Carolyn; Cory, Karlynn
2010-01-01
This report details how state feed-in tariff (FIT) programs can be legally implemented and how they can comply with federal requirements. The report describes the federal constraints on FIT programs and identifies legal methods that are free of those constraints.
Free energy from molecular dynamics with multiple constraints
NASA Astrophysics Data System (ADS)
den Otter, W. K.; Briels, W. J.
In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force method (PMCF) for systems in which only the reaction coordinate is constrained was published recently. Here we will consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.
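For orientation, the widely quoted single-constraint form of the corrected free energy derivative is sketched below; this is offered as a hedged reminder of the standard blue-moon result, with the exact multiple-constraint generalization left to the paper itself:

```latex
\frac{dA}{d\xi^{*}}
  = \frac{\bigl\langle Z^{-1/2}\,(\lambda + k_{B}T\,G)\bigr\rangle_{\xi^{*}}}
         {\bigl\langle Z^{-1/2}\bigr\rangle_{\xi^{*}}},
\qquad
Z = \sum_{i} \frac{1}{m_{i}}
    \left|\frac{\partial \xi}{\partial \mathbf{r}_{i}}\right|^{2}
```

Here λ is the Lagrange multiplier of the constraint force and G is a correction term built from second derivatives of the reaction coordinate ξ; its precise form, and the generalization to several simultaneous constraints, is the subject of the paper.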
Mechanical Design of Spacecraft
NASA Technical Reports Server (NTRS)
1962-01-01
In the spring of 1962, engineers from the Engineering Mechanics Division of the Jet Propulsion Laboratory gave a series of lectures on spacecraft design at the Engineering Design seminars conducted at the California Institute of Technology. Several of these lectures were subsequently given at Stanford University as part of the Space Technology seminar series sponsored by the Department of Aeronautics and Astronautics. Presented here are notes taken from these lectures. The lectures were conceived with the intent of providing the audience with a glimpse of the activities of a few mechanical engineers who are involved in designing, building, and testing spacecraft. Engineering courses generally consist of heavily idealized problems in order to allow the more efficient teaching of mathematical technique. Students, therefore, receive a somewhat limited exposure to actual engineering problems, which are typified by more unknowns than equations. For this reason it was considered valuable to demonstrate some of the problems faced by spacecraft designers, the processes used to arrive at solutions, and the interactions between the engineer and the remainder of the organization in which he is constrained to operate. These lecture notes are not so much a compilation of sophisticated techniques of analysis as they are a collection of examples of spacecraft hardware and associated problems. They will be of interest not so much to the experienced spacecraft designer as to those who wonder what part the mechanical engineer plays in an effort such as the exploration of space.
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Scandura, Danila; Palano, Mimmo; Musumeci, Carla
2014-05-01
Seismicity and ground deformation represent the principal geophysical methods for volcano monitoring and provide important constraints on subsurface magma movements. The occurrence of migrating seismic swarms, as observed at several volcanoes worldwide, is commonly associated with dike intrusions. In addition, on active volcanoes, (de)pressurization and/or intrusion of magmatic bodies stresses and deforms the surrounding crustal rocks, often causing earthquakes randomly distributed in time within a volume extending about 5-10 km from the walls of the magmatic bodies. Although advances in space-based geodetic and seismic networks have significantly improved volcano monitoring at a growing number of volcanoes worldwide in recent decades, quantitative models relating deformation and seismicity remain uncommon. The observation of several episodes of volcanic unrest throughout the world, in which the movement of magma through the shallow crust was able to produce local rotation of the ambient stress field, introduces an opportunity to improve the estimation of deformation source parameters. In particular, during these episodes of volcanic unrest, a radial pattern of P-axes of the focal mechanism solutions, similar to that of the ground deformation, has been observed. Therefore, taking into account additional information from focal mechanism data, we propose a novel approach to volcanic source modeling based on the joint inversion of deformation and focal plane solutions, assuming that both observations are due to the same source. The methodology is first verified against a synthetic dataset of surface deformation and strain within the medium, and then applied to real data from an unrest episode that occurred before the 13 May 2008 eruption at Mt. Etna (Italy). The main results clearly indicate that the joint inversion improves the accuracy of the estimated source parameters by about 70%. Statistical tests indicate that the source depth is the parameter with the largest gain in accuracy. In addition, a sensitivity analysis confirms that displacement data are more useful for constraining the pressure and the horizontal location of the source than its depth, while the P-axes better constrain the depth estimate.
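A minimal sketch of what a joint objective of this kind can look like (the forward models fwd_disp and fwd_paxes, the weight w, and the angular treatment of the P-axes are all illustrative assumptions, not the authors' formulation):

```python
import numpy as np

def joint_misfit(m, d_obs, Cd_inv, p_obs, fwd_disp, fwd_paxes, w=1.0):
    """Joint objective for a deformation-plus-focal-mechanism inversion.
    fwd_disp(m): predicted surface displacements for source parameters m.
    fwd_paxes(m): predicted unit P-axis vectors, shape (N, 3).
    Both forward models are hypothetical user-supplied callables."""
    r_d = fwd_disp(m) - d_obs                       # displacement residuals
    cosang = np.clip(np.sum(fwd_paxes(m) * p_obs, axis=1), -1.0, 1.0)
    r_p = np.arccos(np.abs(cosang))                 # P-axes are unsigned directions
    return r_d @ Cd_inv @ r_d + w * np.sum(r_p ** 2)
```

Minimizing such a combined misfit over the source parameters m is one simple way to impose the assumption that both data sets are produced by the same source.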
NASA Astrophysics Data System (ADS)
Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten
2015-04-01
A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is an HBV-type model that is specialized for alpine environments but has been applied over a wide range of basins all over the world (see Kling et al., 2014 for an overview). Within the methodology, available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer functions (TFs) and subsequent averaging methods, whereby only TF hyper-parameters are optimized against available observations (e.g., runoff data). The parameter regionalisation approach was extended to allow a more meta-heuristic handling of the transfer functions. The two main novelties are: 1. An explicit introduction of constraints into the parameter estimation scheme: the constraint scheme replaces invalid parts of the transfer-function solution space with valid solutions. It is inspired by applications in evolutionary algorithms and related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller experience into the parameter estimation. 2. Spline-based transfer functions: spline-based functions enable arbitrary forms of transfer function. This is important since in many cases the general relationship between sub-grid information and parameters is known, but not the form of the transfer function itself. The contribution presents the results of and experiences with the adopted method and the introduced extensions. Simulations are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi: 10.1029/2008WR007327. Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956.
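A minimal sketch of a spline-based transfer function with an explicit constraint step, under invented names and bounds (the predictor, knot positions, and clipping limits are illustrative only, not COSERO's):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_transfer(knot_y, predictor, knot_x=(0.0, 0.25, 0.5, 0.75, 1.0),
                    bounds=(0.01, 0.95)):
    """Spline transfer function: the optimizer tunes only the knot ordinates
    (the hyper-parameters); the constraint step clips to physical bounds."""
    tf = CubicSpline(knot_x, knot_y)
    return np.clip(tf(predictor), *bounds)

# Fine-grid predictor (e.g., sand fraction) mapped to a model parameter,
# then upscaled to coarse cells by simple averaging.
sand_fraction = np.random.default_rng(0).random(1000)
porosity_fine = spline_transfer([0.05, 0.2, 0.4, 0.5, 0.45], sand_fraction)
cell = np.arange(1000) // 100
porosity_coarse = np.array([porosity_fine[cell == c].mean() for c in range(10)])
```

Because only the five knot ordinates are calibrated, the dimensionality of the optimization stays fixed no matter how fine the sub-grid information is.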
NASA Astrophysics Data System (ADS)
Shallal, Muhannad A.; Jabbar, Hawraz N.; Ali, Khalid K.
2018-03-01
In this paper, we constructed a travelling wave solution for space-time fractional nonlinear partial differential equations by using the modified extended Tanh method with Riccati equation. The method is used to obtain analytic solutions for the space-time fractional Klein-Gordon and coupled conformable space-time fractional Boussinesq equations. The fractional complex transforms and the properties of modified Riemann-Liouville derivative have been used to convert these equations into nonlinear ordinary differential equations.
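As a hedged reminder of the method's standard machinery (our notation, not necessarily the paper's), the modified extended tanh method expands the solution in a function φ that solves a Riccati equation:

```latex
u(\xi) = a_{0} + \sum_{i=1}^{N}\left( a_{i}\,\varphi^{i}(\xi)
        + b_{i}\,\varphi^{-i}(\xi) \right),
\qquad
\varphi'(\xi) = b + \varphi^{2}(\xi)
```

For b < 0 the Riccati equation admits, for example, φ(ξ) = −√(−b) tanh(√(−b) ξ); balancing the highest-order terms of the transformed ordinary differential equation fixes N, and substitution yields algebraic equations for the coefficients a_i and b_i.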
An exact solution of the Currie-Hill equations in 1 + 1 dimensional Minkowski space
NASA Astrophysics Data System (ADS)
Balog, János
2014-11-01
We present an exact two-particle solution of the Currie-Hill equations of Predictive Relativistic Mechanics in 1 + 1 dimensional Minkowski space. The instantaneous accelerations are given in terms of elementary functions depending on the relative particle position and velocities. The general solution of the equations of motion is given and by studying the global phase space of this system it is shown that this is a subspace of the full kinematic phase space.
Atmospheric Variability of CO2 impact on space observation Requirements
NASA Astrophysics Data System (ADS)
Swanson, A. L.; Sen, B.; Newhart, L.; Segal, G.
2009-12-01
If international governments are to reduce GHG levels by 80% by 2050, as recommended by most scientific bodies concerned with avoiding the most hazardous changes in climate, then massive investments in infrastructure and new technology will be required over the coming decades. Such an investment will be a huge commitment by governments and corporations, and while it will offer long-term dividends in lower energy costs, a healthier environment and averted additional global warming, the sheer magnitude of the upfront costs will drive a call for a monitoring and verification system. Such a system will be required to offer accountability to signatories of governing bodies as well as to the global public. Measuring the average global distribution of CO2 is straightforward, as exemplified by the long-running station measurements managed by NOAA's Global Monitoring Division, which include the long-term Keeling record. However, quantifying anthropogenic and natural source/sink distributions and atmospheric mixing has been much more difficult. And yet an accurate accounting of all anthropogenic source strengths is required for global treaty verification. The only way to accurately assess global GHG emissions is to construct an integrated system of ground-, air- and space-based observations with extensive chemical modeling capabilities. We look at the measurement requirements for the space-based component of the solution. To determine the space-sensor performance requirements for ground resolution, coverage, and revisit, we have analyzed regional CO2 distributions and variability using NASA and NOAA aircraft flight campaigns. The results of our analysis are presented as variograms showing average spatial variability over several Northern Hemispheric regions. There are distinct regional differences, with the starkest contrasts between urban and rural areas and between coastal Asia and the coastal US. The results suggest specific consequences for the spatial and temporal requirements of space-based observations.
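As an illustration of the kind of analysis described (invented along-track data, not the campaign measurements), an empirical semivariogram can be computed directly from 1D transect samples:

```python
import numpy as np

def empirical_variogram(x, z, lags, tol):
    """Isotropic empirical semivariogram gamma(h) = 0.5 E[(z_i - z_j)^2]
    over pairs whose separation falls within tol of each lag h."""
    d = np.abs(x[:, None] - x[None, :])
    dz2 = (z[:, None] - z[None, :]) ** 2
    gamma = np.full(len(lags), np.nan)
    for k, h in enumerate(lags):
        m = (d > h - tol) & (d <= h + tol)
        if m.any():
            gamma[k] = 0.5 * dz2[m].mean()
    return gamma

# Toy along-track CO2 series (ppm) sampled every 1 km.
rng = np.random.default_rng(0)
x = np.arange(500.0)
z = 390 + np.cumsum(rng.normal(0, 0.05, 500))   # spatially correlated fluctuations
print(empirical_variogram(x, z, lags=np.arange(5, 100, 5), tol=2.5))
```

The lag at which such a curve flattens indicates the spatial scale of variability, which is what drives the ground-resolution and revisit requirements discussed above.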
Urban agriculture: a global analysis of the space constraint to meet urban vegetable demand
NASA Astrophysics Data System (ADS)
Martellozzo, F.; Landry, J.-S.; Plouffe, D.; Seufert, V.; Rowhani, P.; Ramankutty, N.
2014-05-01
Urban agriculture (UA) has been drawing a lot of attention recently for several reasons: the majority of the world population has shifted from living in rural to urban areas; the environmental impact of agriculture is a matter of rising concern; and food insecurity, especially the accessibility of food, remains a major challenge. UA has often been proposed as a solution to some of these issues, for example by producing food in places where population density is highest, reducing transportation costs, connecting people directly to food systems and using urban areas efficiently. However, to date no study has examined how much food could actually be produced in urban areas at the global scale. Here we use a simple approach, based on different global-scale datasets, to assess to what extent UA is constrained by the existing amount of urban space. Our results suggest that UA would require roughly one third of the total global urban area to meet the global vegetable consumption of urban dwellers. This estimate does not consider how much urban area may actually be suitable and available for UA, which likely varies substantially around the world and according to the type of UA performed. Further, this global average value masks variations of more than two orders of magnitude among individual countries. The variations in the space required across countries derive mostly from variations in urban population density, and much less from variations in yields or per capita consumption. Overall, the space required is regrettably the highest where UA is most needed, i.e., in more food insecure countries. We also show that smaller urban clusters (i.e., <100 km2 each) together represent about two thirds of the global urban extent; thus UA discourse and policies should not focus on large cities exclusively, but should also target smaller urban areas that offer the greatest potential in terms of physical space.
Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.
Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin
2009-01-01
Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.
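A self-contained toy of the MSL idea (the scoring function and search grids are stand-ins for the trained classifiers in the paper): each stage searches only one marginal space, and only the top candidates survive to be extended with the next group of pose parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
true_pose = np.array([12.0, 7.0, 30.0, 1.5])   # x, y, angle, scale (toy 2D case)

def score(pose_part):
    # Toy "classifier": higher score the closer the partial hypothesis is
    # to the truth; a real system would use a learned detector here.
    ref = true_pose[:len(pose_part)]
    return -np.sum((np.asarray(pose_part) - ref) ** 2)

def top_k(cands, k):
    return sorted(cands, key=score, reverse=True)[:k]

# Stage 1: search the position marginal space only.
positions = top_k([(x, y) for x in range(20) for y in range(20)], 50)
# Stage 2: extend the survivors with orientation hypotheses.
pos_ori = top_k([p + (a,) for p in positions for a in range(0, 180, 10)], 50)
# Stage 3: extend with scale; the best full hypothesis is the detection.
full = top_k([p + (s,) for p in pos_ori for s in np.linspace(0.5, 3.0, 11)], 1)
print(full[0])
```

The saving comes from never enumerating the full pose space: each stage scores only a grid over the new parameters times the surviving candidates.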
Analytical and exact solutions of the spherical and cylindrical diodes of Langmuir-Blodgett law
NASA Astrophysics Data System (ADS)
Torres-Cordoba, Rafael; Martinez-Garcia, Edgar
2017-10-01
This paper discloses the exact solutions of a mathematical model that describes the cylindrical and spherical electron current emissions within the context of a physics approximation method. The solution involves analyzing the 1D nonlinear Poisson equation for the radial component. Although an asymptotic solution has been previously obtained, we present a theoretical solution that satisfies arbitrary boundary conditions. The solution is found in its parametric form (i.e., φ(r) = φ(r(τ))) and is valid when the electric field at the cathode surface is non-zero. Furthermore, the non-stationary spatial solution of the electric potential between the anode and the cathode is also presented. In this work, the particle-beam interface is considered to be at the end of the plasma sheath, as described by Sutherland et al. [Phys. Plasmas 12, 033103 (2005)]. Three regimes of space-charge effects are also considered: no space-charge saturation, space-charge limited, and space-charge saturation.
NASA Astrophysics Data System (ADS)
Katayama, Soichiro
We consider the Cauchy problem for systems of nonlinear wave equations with multiple propagation speeds in three space dimensions. Under the null condition for such systems, the global existence of small amplitude solutions is known. In this paper, we will show that the global solution is asymptotically free in the energy sense, by obtaining the asymptotic pointwise behavior of the derivatives of the solution. Nonetheless we can also show that the pointwise behavior of the solution itself may be quite different from that of the free solution. In connection with the above results, a theorem is also developed to characterize asymptotically free solutions for wave equations in arbitrary space dimensions.
On supersymmetric AdS6 solutions in 10 and 11 dimensions
NASA Astrophysics Data System (ADS)
Gutowski, J.; Papadopoulos, G.
2017-12-01
We prove a non-existence theorem for smooth, supersymmetric, warped AdS6 solutions with connected, compact without boundary internal space in D = 11 and (massive) IIA supergravities. In IIB supergravity we show that if such AdS6 solutions exist, then the NSNS and RR 3-form fluxes must be linearly independent and certain spinor bilinears must be appropriately restricted. Moreover we demonstrate that the internal space admits an so(3) action which leaves all the fields invariant, and for smooth solutions the principal orbits must have co-dimension two. We also describe the topology and geometry of internal spaces that admit such an so(3) action and show that there are no solutions for which the internal space has topology F × S^2, where F is an oriented surface.
The eigenvalue problem in phase space.
Cohen, Leon
2018-06-30
We formulate the standard quantum mechanical eigenvalue problem in quantum phase space. The equation obtained involves the c-function that corresponds to the quantum operator. We use the Wigner distribution for the phase space function. We argue that the phase space eigenvalue equation obtained has, in addition to the proper solutions, improper solutions: that is, solutions for which no wave function exists that could generate the distribution. We discuss the conditions for ascertaining whether a position-momentum function is a proper phase space distribution. We call these conditions psi-representability conditions, and show that if these conditions are imposed, one extracts the correct phase space eigenfunctions. We also derive the phase space eigenvalue equation for arbitrary phase space distribution functions. © 2017 Wiley Periodicals, Inc.
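For orientation (standard phase-space quantum mechanics background rather than a result specific to this paper), the eigenvalue problem for the Wigner function W is often written as the star-genvalue equation:

```latex
H(q,p) \star W(q,p) = E\, W(q,p),
\qquad
\star \;=\; \exp\!\left[ \frac{i\hbar}{2}
  \left( \overleftarrow{\partial_{q}}\,\overrightarrow{\partial_{p}}
       - \overleftarrow{\partial_{p}}\,\overrightarrow{\partial_{q}} \right) \right]
```

where H(q,p) is the c-function (Weyl symbol) corresponding to the Hamiltonian operator; the improper solutions discussed in the abstract satisfy this equation without arising from any wave function.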
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete. Thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ), for some δ < 1/3.
Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej
2015-10-22
Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.
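A minimal sketch of least-significant-bits concatenation and the entropy measurement (simulated ADC readings stand in for the temperature/humidity sensors; the bit depth and sample count are illustrative assumptions):

```python
import math
import random
from collections import Counter

def harvest_bits(samples, n_lsb=2):
    """Concatenate the n least-significant bits of raw sensor readings."""
    bits = []
    for s in samples:
        for i in range(n_lsb):
            bits.append((s >> i) & 1)
    return bits

def shannon_entropy_per_byte(bits):
    """Shannon entropy (bits per byte) of the harvested bitstream."""
    nbytes = len(bits) // 8
    byts = [int(''.join(map(str, bits[8 * i:8 * i + 8])), 2)
            for i in range(nbytes)]
    counts = Counter(byts)
    n = len(byts)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy 12-bit ADC readings standing in for temperature/humidity samples.
random.seed(0)
samples = [random.randint(0, 4095) for _ in range(4096)]
print(round(shannon_entropy_per_byte(harvest_bits(samples)), 2))
```

The least-significant bits are used because sensor noise dominates there, while the higher-order bits track the slowly varying physical quantity and are far more predictable.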
Homoclinic accretion solutions in the Schwarzschild-anti-de Sitter space-time
NASA Astrophysics Data System (ADS)
Mach, Patryk
2015-04-01
The aim of this paper is to clarify the distinction between homoclinic and standard (global) Bondi-type accretion solutions in the Schwarzschild-anti-de Sitter space-time. The homoclinic solutions have recently been discovered numerically for polytropic equations of state. Here I show that they also exist for certain isothermal (linear) equations of state, and an analytic solution of this type is obtained. It is argued that the existence of such solutions is generic, although for sufficiently relativistic matter models (photon gas, ultrahard equation of state) there exist global solutions that can be continued to infinity, similarly to the standard Michel solutions in the Schwarzschild space-time. In contrast, global solutions should not exist for matter models with a nonvanishing rest-mass component, and this is demonstrated for polytropes. For homoclinic isothermal solutions I derive an upper bound on the mass of the black hole for which stationary transonic accretion is allowed.
NASA Astrophysics Data System (ADS)
Prasetyo, I.; Ramadhan, H. S.
2017-07-01
Here we present some solutions with a noncanonical global monopole in the nonlinear sigma model in 4-dimensional spacetime. We discuss some black hole solutions and their horizons. We also obtain some compactification solutions. We list some possible compactification channels from 4-space to 2 × 2-spaces of constant curvature.
Advances in locally constrained k-space-based parallel MRI.
Samsonov, Alexey A; Block, Walter F; Arunachalam, Arjun; Field, Aaron S
2006-02-01
In this article, several theoretical and methodological developments regarding k-space-based, locally constrained parallel MRI (pMRI) reconstruction are presented. A connection between Parallel MRI with Adaptive Radius in k-Space (PARS) and GRAPPA methods is demonstrated. The analysis provides a basis for unified treatment of both methods. Additionally, a weighted PARS reconstruction is proposed, which may absorb different weighting strategies for improved image reconstruction. Next, a fast and efficient method for pMRI reconstruction of data sampled on non-Cartesian trajectories is described. In the new technique, the computational burden associated with the numerous matrix inversions in the original PARS method is drastically reduced by limiting direct calculation of reconstruction coefficients to only a few reference points. The rest of the coefficients are found by interpolating between the reference sets, which is possible due to the similar configuration of points participating in reconstruction for highly symmetric trajectories, such as radials and spirals. As a result, the time requirements are greatly reduced, which makes it practical to use pMRI with non-Cartesian trajectories in many applications. The new technique was demonstrated with simulated and actual data sampled on radial trajectories. Copyright 2006 Wiley-Liss, Inc.
O'Halloran, Rafael L; Holmes, James H; Wu, Yu-Chien; Alexander, Andrew; Fain, Sean B
2010-01-01
An undersampled diffusion-weighted stack-of-stars acquisition is combined with iterative highly constrained back-projection to perform hyperpolarized helium-3 MR q-space imaging with combined regional correction of radiofrequency- and T1-related signal loss in a single breath-held scan. The technique is tested in computer simulations and phantom experiments and demonstrated in a healthy human volunteer with whole-lung coverage in a 13-sec breath-hold. Measures of lung microstructure at three different lung volumes are evaluated using inhaled gas volumes of 500 mL, 1000 mL, and 1500 mL to demonstrate feasibility. Phantom results demonstrate that the proposed technique is in agreement with theoretical values, as well as with a fully sampled two-dimensional Cartesian acquisition. Results from the volunteer study demonstrate that the root mean squared diffusion distance increased significantly from the 500-mL volume to the 1000-mL volume. This technique represents the first demonstration of a spatially resolved hyperpolarized helium-3 q-space imaging technique and shows promise for microstructural evaluation of lung disease in three dimensions. Copyright (c) 2009 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Zhang, Yongjing; Chen, Zhe; Yao, Lei; Wang, Xiao; Fu, Ping; Lin, Zhidong
2018-04-01
The interlayer spacing of graphene oxide (GO) is a key property of a GO membrane. To probe the variation of the interlayer spacing of a GO membrane immersed in KCl aqueous solution, electrochemical impedance spectroscopy (EIS), X-ray diffraction (XRD) and computational calculations were utilized in this study. The XRD patterns show that soaking in KCl aqueous solution leads to an increase of the interlayer spacing of the GO membrane. The EIS results indicate that during the immersion process, the charge transfer resistance of the GO membrane decreases first and then increases. Computational calculation confirms that intercalated water molecules can result in an increase of the interlayer spacing of the GO membrane, while the permeation of K+ ions leads to a decrease of the interlayer spacing. All the results are in agreement with each other and suggest that during immersion, the interlayer spacing of GO enlarges first and then decreases. EIS can thus be a promising online method for examining the interlayer spacing of GO in aqueous solution.
NASA Technical Reports Server (NTRS)
Easterly, Jill
1993-01-01
This software package performs ergonomic human modeling for maintenance tasks. The modeled technician's capabilities can be adjusted to represent the actual work situation: the work environment, the strengths and capabilities of the individual, particular limitations (such as the constraining characteristics of a particular space suit), the tools required, and the procedures or tasks to be performed.
Low-Cost Virtual Laboratory Workbench for Electronic Engineering
ERIC Educational Resources Information Center
Achumba, Ifeyinwa E.; Azzi, Djamel; Stocker, James
2010-01-01
The laboratory component of undergraduate engineering education poses challenges in resource constrained engineering faculties. The cost, time, space and physical presence requirements of the traditional (real) laboratory approach are the contributory factors. These resource constraints may mitigate the acquisition of meaningful laboratory…
NASA Astrophysics Data System (ADS)
Swanson, Ryan David
The advection-dispersion equation (ADE) fails to describe non-Fickian solute transport breakthrough curves (BTCs) in saturated porous media in both laboratory and field experiments, necessitating the use of other models. The dual-domain mass transfer (DDMT) model partitions the total porosity into mobile and less-mobile domains with an exchange of mass between the two domains, and this model can reproduce better fits to BTCs in many systems than ADE-based models. However, direct experimental estimation of DDMT model parameters remains elusive, and model parameters are often calculated a posteriori by an optimization procedure. Here, we investigate the use of geophysical tools (direct-current resistivity, nuclear magnetic resonance, and complex conductivity) to estimate these model parameters directly. We use two different samples of the zeolite clinoptilolite, a material shown to demonstrate solute mass transfer due to a significant internal porosity, and provide the first evidence that direct-current electrical methods can track solute movement into and out of a less-mobile pore space in controlled laboratory experiments. We quantify the effects of assuming single-rate DDMT for multirate mass transfer systems. We analyze pore structures using material characterization methods (mercury porosimetry, scanning electron microscopy, and X-ray computer tomography), and compare these observations to geophysical measurements. Nuclear magnetic resonance in conjunction with direct-current resistivity measurements can constrain mobile and less-mobile porosities, but complex conductivity may have little value in relation to mass transfer, despite the hypothesis that mass transfer and complex conductivity length scales are related. Finally, we conduct a geoelectrically monitored tracer test at the Macrodispersion Experiment (MADE) site in Columbus, MS. We relate hydraulic and electrical conductivity measurements to generate a 3D hydraulic conductivity field, and compare it to hydraulic conductivity fields estimated through ordinary kriging and sequential Gaussian simulation. Time-lapse electrical measurements are used to verify or dismiss aspects of breakthrough curves for different hydraulic conductivity fields. Our results quantify the potential for geophysical measurements to constrain single-rate DDMT parameters, show site-specific relations between hydraulic and electrical conductivity, and track solute exchange into and out of less-mobile domains.
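A minimal sketch of single-rate DDMT transport in one dimension (explicit finite differences with invented parameters; a real analysis would use calibrated parameters and more careful numerics): the mobile domain advects and disperses while exchanging mass with the immobile domain at a single rate α.

```python
import numpy as np

# Single-rate DDMT, 1D column, explicit time stepping (toy sketch).
nx, dx, dt, nt = 200, 0.01, 0.5, 4000
v, D = 1e-4, 1e-6                    # pore velocity (m/s), dispersion (m^2/s)
theta_m, theta_im, alpha = 0.25, 0.15, 1e-5   # porosities, exchange rate (1/s)

cm = np.zeros(nx)                    # mobile-domain concentration
cim = np.zeros(nx)                   # less-mobile (immobile) concentration
cm[0] = 1.0                          # constant-concentration inlet
for _ in range(nt):
    adv = -v * np.gradient(cm, dx)
    disp = D * np.gradient(np.gradient(cm, dx), dx)
    xfer = alpha * (cm - cim)        # first-order mass exchange
    cm = cm + dt * (adv + disp - xfer / theta_m)
    cim = cim + dt * xfer / theta_im
    cm[0], cm[-1] = 1.0, cm[-2]      # inlet and free-outflow boundaries
print(round(cm[nx // 2], 3))         # mid-column mobile concentration
```

The late-time tailing of the breakthrough curve, absent from a pure ADE run (alpha = 0), is the non-Fickian signature the DDMT model is invoked to capture.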
Quantum mechanics of a constrained particle
NASA Astrophysics Data System (ADS)
da Costa, R. C. T.
1981-04-01
The motion of a particle rigidly bounded to a surface is discussed, considering the Schrödinger equation of a free particle constrained to move, by the action of an external potential, in an infinitely thin sheet of the ordinary three-dimensional space. Contrary to what seems to be the general belief expressed in the literature, this limiting process gives a perfectly well-defined result, provided that we take some simple precautions in the definition of the potentials and wave functions. It can then be shown that the wave function splits into two parts: the normal part, which contains the infinite energies required by the uncertainty principle, and a tangent part which contains "surface potentials" depending both on the Gaussian and mean curvatures. An immediate consequence of these results is the existence of different quantum mechanical properties for two isometric surfaces, as can be seen from the bound state which appears along the edge of a folded (but not stretched) plane. The fact that this surface potential is not a bending invariant (cannot be expressed as a function of the components of the metric tensor and their derivatives) is also interesting from the more general point of view of the quantum mechanics in curved spaces, since it can never be obtained from the classical Lagrangian of an a priori constrained particle without substantial modifications in the usual quantization procedures. Similar calculations are also presented for the case of a particle bounded to a curve. The properties of the constraining spatial potential, necessary to a meaningful limiting process, are discussed in some detail, and, as expected, the resulting Schrödinger equation contains a "linear potential" which is a function of the curvature.
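For reference, the surface potential discussed here is the well-known geometric potential (sign conventions vary across the literature), written in terms of the mean curvature M and the Gaussian curvature K:

```latex
V_{s} \;=\; -\,\frac{\hbar^{2}}{2m}\left(M^{2} - K\right),
\qquad
M^{2} - K \;=\; \left(\frac{\kappa_{1}-\kappa_{2}}{2}\right)^{2} \;\ge\; 0
```

Since M² − K is the squared half-difference of the principal curvatures, the potential is attractive wherever the two principal curvatures differ. Along the edge of a folded plane K = 0 but M ≠ 0, which produces the bound state mentioned above; and because M is not determined by the metric alone, the potential is not a bending invariant.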