Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods
NASA Astrophysics Data System (ADS)
Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong
2008-12-01
Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel, a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of the proposed algorithm under low-SNR conditions. Simulation results show the superior performance of our proposed methods.
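To make the CMOV principle concrete, the sketch below minimizes the equalizer output variance w^T R w subject to a single linear constraint c^T w = 1, which admits the closed-form solution w = R^{-1} c / (c^T R^{-1} c). This is a minimal illustration of the general principle only; the constraint vector, tap count, and data are invented, and the paper's contribution is precisely a different choice of constraint for colored sources.

```python
# Minimal sketch of the constrained-minimum-output-variance idea (not the
# authors' exact constraint): minimize w^T R w subject to c^T w = 1.
import numpy as np

def cmov_weights(R, c):
    """R: (N, N) covariance of the received signal; c: (N,) constraint vector."""
    Rinv_c = np.linalg.solve(R, c)          # R^{-1} c without forming R^{-1}
    return Rinv_c / (c @ Rinv_c)            # normalize so that c^T w = 1

# Toy example: covariance estimated from random "received" samples.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 1000))
R = x @ x.T / 1000.0
c = np.zeros(5); c[2] = 1.0                  # anchor the center tap (illustrative)
w = cmov_weights(R, c)
print(w, c @ w)                              # c @ w == 1 up to rounding
```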
A methodology based on reduced complexity algorithm for system applications using microprocessors
NASA Technical Reports Server (NTRS)
Yan, T. Y.; Yao, K.
1988-01-01
The paper considers a methodology for the analysis and design of a minimum mean-square-error linear system incorporating a tapped delay line (TDL), where all the full-precision multiplications in the TDL are constrained to be powers of two. A linear equalizer for a dispersive channel with additive noise is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
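As a hedged illustration of the coefficient constraint (not the paper's optimization procedure), the snippet below rounds each tapped-delay-line coefficient to the nearest signed power of two, so that every multiplication can be implemented as a bit shift:

```python
# Round each TDL coefficient to the nearest signed power of two.
import numpy as np

def to_power_of_two(h, min_exp=-8):
    h = np.asarray(h, dtype=float)
    sign = np.sign(h)
    mag = np.abs(h)
    exp = np.clip(np.round(np.log2(np.where(mag > 0, mag, 2.0**min_exp))),
                  min_exp, 0)
    return sign * 2.0**exp

taps = np.array([0.031, -0.12, 0.47, 1.0, 0.47, -0.12, 0.031])
print(to_power_of_two(taps))   # e.g. 0.03125, -0.125, 0.5, 1.0, ...
```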
An historical survey of computational methods in optimal control.
NASA Technical Reports Server (NTRS)
Polak, E.
1973-01-01
A review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
NASA Astrophysics Data System (ADS)
Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li
2017-01-01
In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for systems with pre-specified trajectories of the output and the control input, and additionally with both an input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional-plus-integral state-feedback LQDT design for non-square non-minimum-phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.
Guidance strategies and analysis for low thrust navigation
NASA Technical Reports Server (NTRS)
Jacobson, R. A.
1973-01-01
A low-thrust guidance algorithm suitable for operational use was formulated. A constrained linear feedback control law was obtained using a minimum terminal miss criterion and restricting control corrections to constant changes for specified time periods. Both fixed- and variable-time-of-arrival guidance were considered. The performance of the guidance law was evaluated by applying it to the approach phase of the 1980 rendezvous mission with the comet Encke.
A MATLAB implementation of the minimum relative entropy method for linear inverse problems
NASA Astrophysics Data System (ADS)
Neupauer, Roseanna M.; Borchers, Brian
2001-08-01
The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
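The sketch below is a simplified, deterministic analogue of the MRE idea, not the paper's MATLAB implementation: it minimizes the relative entropy of the model vector m against a prior expected value s, subject to the data equations Gm = d and bound constraints. The matrices and bounds are invented for illustration.

```python
# Simplified deterministic analogue of MRE (the paper derives a full
# multivariate pdf; here we just minimize relative entropy subject to Gm = d).
import numpy as np
from scipy.optimize import minimize

def mre_like(G, d, s, lower, upper):
    obj = lambda m: np.sum(m * np.log(m / s))            # relative entropy vs. prior
    cons = {"type": "eq", "fun": lambda m: G @ m - d}    # fit the measured data
    res = minimize(obj, x0=np.array(s), method="SLSQP",
                   bounds=list(zip(lower, upper)), constraints=cons)
    return res.x

G = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
d = np.array([1.0, 1.2])
m = mre_like(G, d, s=[0.5, 0.5, 0.5], lower=[1e-6]*3, upper=[2.0]*3)
print(m, G @ m)                                           # G @ m matches d
```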
De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas
2015-03-01
Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food; doing so would also help in implementation and in ensuring the feasibility of the suggested recommendations. The objective was to extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints, and to exemplify its usability with the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on the swelling of starch in soft porridges. The new method was exemplified by formulating a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenge of formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including zinc and calcium salt supplements as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
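A hedged sketch of the formulation step follows. The ingredient costs, nutrient contents, and the linear starch limit standing in for the consistency constraint are all invented numbers, but the structure (minimize cost subject to a nutrient minimum, a consistency limit, and a fixed total mass) mirrors the extended methodology described above.

```python
# Toy cost-minimizing formulation; all ingredient data are invented.
import numpy as np
from scipy.optimize import linprog

cost    = np.array([0.8, 2.5, 6.0])    # $/kg: maize, beans, dried fish (made up)
protein = np.array([90., 220., 600.])  # g protein per kg (made up)
starch  = np.array([700., 400., 0.])   # g starch per kg (made up)

# minimize cost @ x  s.t.  protein @ x >= 250,  starch @ x <= 450 (consistency),
# total mass = 1 kg, x >= 0. linprog wants A_ub @ x <= b_ub, so negate protein.
res = linprog(c=cost,
              A_ub=np.array([-protein, starch]), b_ub=np.array([-250., 450.]),
              A_eq=np.array([[1., 1., 1.]]), b_eq=np.array([1.]),
              bounds=[(0, None)]*3)
print(res.x, cost @ res.x)             # cheapest mix meeting all constraints
```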
ERIC Educational Resources Information Center
Hsu, Chun-Hsien; Lee, Chia-Ying; Marantz, Alec
2011-01-01
We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170…
A method of minimum volume simplex analysis constrained unmixing for hyperspectral image
NASA Astrophysics Data System (ADS)
Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao
2017-07-01
The signal recorded by a low-resolution hyperspectral remote sensor for a given pixel, even setting aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) has become a frontier research area in remote sensing. Geometry-based unmixing algorithms have become popular because hyperspectral images possess abundant spectral information and the underlying mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the non-linear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By taking the abundance fractions into account, we can obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm on simulated data and real hyperspectral data, and the results indicate that the proposed method can recover the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
Linear-constraint wavefront control for exoplanet coronagraphic imaging systems
NASA Astrophysics Data System (ADS)
Sun, He; Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Vanderbei, Robert J.; Groff, Tyler Dean
2017-01-01
A coronagraph is a leading technology for achieving high-contrast imaging of exoplanets with a space telescope. It uses a system of several masks to modify the diffraction and achieve extremely high contrast in the image plane around target stars. However, coronagraphic imaging systems are very sensitive to optical aberrations, so wavefront correction using deformable mirrors (DMs) is necessary to avoid contrast degradation in the image plane. Electric field conjugation (EFC) and stroke minimization (SM) are the two primary high-contrast wavefront controllers explored in the past decade. EFC minimizes the average contrast in the search areas while regularizing the strength of the control inputs. Stroke minimization calculates the minimum DM commands under the constraint that a target average contrast is achieved. Recently, a new linear-constraint wavefront controller based on stroke minimization was developed and demonstrated using numerical simulation at the High Contrast Imaging Lab at Princeton University (HCIL). Instead of constraining only the average contrast over the entire search area, the new controller constrains the electric field of each single pixel using linear programming, which could lead to significant increases in the speed of wavefront correction and create more uniform dark holes. As a follow-up to this work, another linear-constraint controller modified from EFC is demonstrated theoretically and numerically, and lab verification of the linear-constraint controllers is reported. Based on the simulation and lab results, the pros and cons of linear-constraint controllers are carefully compared with EFC and stroke minimization.
Digital robust active control law synthesis for large order systems using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1987-01-01
This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system, using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for the gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full- or reduced-order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of being lumped into the cost function. This feature can be used to modify a control law to meet individual root-mean-square response limitations as well as minimum singular value restrictions. Low-order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
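The conversion can be sketched as follows, with SciPy's differential evolution standing in as a generic population-based optimizer for the paper's genetic algorithm: the constrained problem min f(x) subject to g(x) <= 0 becomes the unconstrained minimization of f plus a quadratic penalty on the violation. The objective, constraint, and penalty weight are illustrative.

```python
# Penalty conversion of a constrained minimum into an unconstrained one,
# solved by a population-based (evolutionary) optimizer.
import numpy as np
from scipy.optimize import differential_evolution

f = lambda x: (x[0] - 1)**2 + (x[1] - 2)**2          # smooth objective
g = lambda x: x[0] + x[1] - 2.0                       # constraint g(x) <= 0
mu = 1e3                                              # penalty weight

penalized = lambda x: f(x) + mu * max(g(x), 0.0)**2
res = differential_evolution(penalized, bounds=[(-5, 5), (-5, 5)], seed=1)
print(res.x)   # near the constrained minimum (0.5, 1.5)
```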
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
NASA Astrophysics Data System (ADS)
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE, such as X-ray computed tomography (CT), ultrasonic and eddy current flaw characterization, and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the minimum (L^2)^2 norm yields an image that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using the full nonlinear forward model, a linearized acoustic inversion was developed to increase the speed and efficiency of the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback of this type of inversion is its computational cost. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the non-linear inversion. (Abstract shortened by UMI.)
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s-t cut problems and size-constrained dense subgraph problems. We introduce the minimum s-t cut with at-least-k vertices problem, the minimum s-t cut with at-most-k vertices problem, and the minimum s-t cut with exactly k vertices problem. We prove that they are NP-complete. Thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ) for some δ < 1/3.
Minimum-Cost Aircraft Descent Trajectories with a Constrained Altitude Profile
NASA Technical Reports Server (NTRS)
Wu, Minghong G.; Sadovsky, Alexander V.
2015-01-01
An analytical formula for solving the speed profile that accrues minimum cost during an aircraft descent with a constrained altitude profile is derived. The optimal speed profile first reaches a certain speed, called the minimum-cost speed, as quickly as possible using an appropriate extreme value of thrust. The speed profile then stays on the minimum-cost speed as long as possible, before switching to an extreme value of thrust for the rest of the descent. The formula is applied to an actual arrival route and its sensitivity to winds and airlines' business objectives is analyzed.
NASA Astrophysics Data System (ADS)
Chandra, Rishabh
Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm exists which guarantees a global minimum for PDECCO problems. A new mapping has been developed that transforms PDECCO problems which have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
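The penalty construction behind such mappings can be illustrated on a toy problem (this is not the paper's PDE-based mapping): encode "minimize c^T x subject to sum(x) = k" for binary x as an unconstrained quadratic form by adding P(sum(x) - k)^2, then read off the QUBO matrix Q using x_i^2 = x_i.

```python
# Toy QUBO encoding of a constrained binary problem, solved by brute force
# (an adiabatic quantum optimizer would sample Q instead).
import itertools
import numpy as np

c, k, P = np.array([3.0, 1.0, 2.0, 4.0]), 2, 10.0
n = len(c)

# For binary x: (sum x - k)^2 = (1 - 2k) * sum_i x_i + 2 * sum_{i<j} x_i x_j + k^2.
Q = np.diag(c + P * (1 - 2 * k))             # linear terms on the diagonal
Q += P * 2 * np.triu(np.ones((n, n)), 1)     # pairwise penalty couplings

best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)   # (0, 1, 1, 0): the two cheapest entries, satisfying sum(x) = 2
```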
On the functional optimization of a certain class of nonstationary spatial functions
Christakos, G.; Paraskevopoulos, P.N.
1987-01-01
Procedures are developed in order to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria, and are applicable to multidimensional phenomena which are characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function, leading to stationary quantities, and it also generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.
Bertrand, Alexander; Seo, Dongjin; Maksimovic, Filip; Carmena, Jose M; Maharbiz, Michel M; Alon, Elad; Rabaey, Jan M
2014-01-01
In this paper, we examine the use of beamforming techniques to interrogate a multitude of neural implants in a distributed, ultrasound-based intra-cortical recording platform known as Neural Dust. We propose a general framework to analyze system design tradeoffs in the ultrasonic beamformer that extracts neural signals from modulated ultrasound waves that are backscattered by free-floating neural dust (ND) motes. Simulations indicate that high-resolution linearly-constrained minimum variance beamforming sufficiently suppresses interference from unselected ND motes and can be incorporated into the ND-based cortical recording system.
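For reference, the generic LCMV weight computation used in such beamformers is sketched below; the array geometry, covariance, and steering vectors are toy values, not the Neural Dust system model. The weights minimize the output power w^H R w subject to the linear constraints C^H w = f.

```python
# Generic LCMV beamformer: w = R^{-1} C (C^H R^{-1} C)^{-1} f.
import numpy as np

def lcmv(R, C, f):
    RinvC = np.linalg.solve(R, C)
    return RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)

M = 8                                             # sensors (toy uniform line array)
angles = lambda t: np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(t))
C = np.column_stack([angles(0.3), angles(-0.5)])  # steering vectors of two sources
f = np.array([1.0, 0.0])                          # pass the first, null the second
R = np.eye(M) + 0.5 * np.outer(angles(-0.5), angles(-0.5).conj())
w = lcmv(R, C, f)
print(np.abs(C.conj().T @ w))                     # ~[1, 0]: constraints satisfied
```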
Two-point method uncertainty during control and measurement of cylindrical element diameters
NASA Astrophysics Data System (ADS)
Glukhov, V. I.; Shalay, V. V.; Radev, H.
2018-04-01
The topic of the article is devoted to the urgent problem of the reliability of technical products geometric specifications measurements. The purpose of the article is to improve the quality of parts linear sizes control by the two-point measurement method. The article task is to investigate methodical extended uncertainties in measuring cylindrical element linear sizes. The investigation method is a geometric modeling of the element surfaces shape and location deviations in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pairs classes in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativity of 4, 2, 1 and θ (zero) were investigated. The uncertainties estimation of in two-point measurements was made by comparing the results of of linear dimensions measurements with the functional diameters maximum and minimum of the element material. Methodical uncertainty is formed when cylindrical elements with maximum informativeness have shape deviations of the cut and the curvature types. Methodical uncertainty is formed by measuring the element average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and medium linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of the cylindrical elements by limiting two-point gauges.
MM Algorithms for Geometric and Signomial Programming
Lange, Kenneth; Zhou, Hua
2013-01-01
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
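The surrogate idea can be illustrated on a toy problem outside the signomial setting: majorizing |r| by the quadratic r^2/(2|r_k|) + |r_k|/2 at the current iterate turns least-absolute-deviations regression into a sequence of weighted least-squares solves. This is a minimal MM example, not the paper's geometric-arithmetic mean construction.

```python
# MM on a toy problem: least absolute deviations via quadratic majorization.
import numpy as np

def lad_mm(A, b, iters=50, eps=1e-8):
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)   # majorizer weights 1/|r_k|
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)        # weighted least-squares update
    return x

rng = np.random.default_rng(0)
A = np.column_stack([np.ones(50), rng.standard_normal(50)])
b = A @ np.array([1.0, 2.0]) + rng.standard_normal(50) * 0.1
b[::10] += 5.0                                         # outliers
print(lad_mm(A, b))                                    # close to [1, 2] despite outliers
```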
Diameter-Constrained Steiner Tree
NASA Astrophysics Data System (ADS)
Ding, Wei; Lin, Guohui; Xue, Guoliang
Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D_0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D_0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem. The problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.
Digital image restoration under a regression model: the unconstrained, linear equality and inequality constrained approaches
Mascarenhas, Nelson Delfino d'Avila
1974-01-01
A two-dimensional form adequately describes the linear model. A discretization is performed by using quadrature methods.
Application of multivariable search techniques to structural design optimization
NASA Technical Reports Server (NTRS)
Jones, R. T.; Hague, D. S.
1972-01-01
Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.
Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li
2018-06-01
State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative for studying secure estimation problems, because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth-constrained communication channels. A new mathematical model with a compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived under which the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.
Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya
2018-01-30
Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof-of-concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
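The thresholding step of such a constrained classifier can be sketched in isolation (this omits the Super Learner ensemble entirely): scan candidate thresholds and keep the one that minimizes the rate of positive predictions while meeting a minimum sensitivity. The data below are synthetic.

```python
# Pick the threshold minimizing the positive-prediction rate subject to a
# minimum sensitivity; both quantities are monotone in the threshold.
import numpy as np

def pick_threshold(scores, labels, min_sens=0.9):
    best_t, best_rate = None, np.inf
    for t in np.unique(scores):
        pred = scores >= t
        sens = pred[labels == 1].mean()        # sensitivity at threshold t
        rate = pred.mean()                     # rate of positive predictions
        if sens >= min_sens and rate < best_rate:
            best_t, best_rate = t, rate
    return best_t, best_rate

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
scores = labels + rng.normal(0, 0.8, 500)      # noisy synthetic risk scores
print(pick_threshold(scores, labels))
```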
Resolving Mixed Algal Species in Hyperspectral Images
Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.
2014-01-01
We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two-species) algal cultures containing known algae types and volumetric combinations, in order to characterize the system's performance. The spectral response to volumetric changes in single algal suspensions and in combinations of algal mixtures with known ratios was tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures, based on the abundances that produced the lowest root mean square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments; the worst were 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, Beer-Lambert's law was used to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for optical property measurements. PMID:24451451
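A minimal sketch of fully constrained linear unmixing of the kind described (nonnegative abundances summing to one) is given below, using the standard trick of appending a heavily weighted sum-to-one row and solving with nonnegative least squares; the endmember spectra are toy values, not the measured algal signatures.

```python
# Fully constrained linear unmixing: nonnegative abundances with sum-to-one
# enforced through an extra, heavily weighted equation.
import numpy as np
from scipy.optimize import nnls

def unmix(E, y, delta=1e3):
    bands, p = E.shape
    E_aug = np.vstack([E, delta * np.ones((1, p))])   # enforce sum(a) ~ 1
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

E = np.array([[0.9, 0.1], [0.5, 0.4], [0.2, 0.8]])    # toy endmember spectra
y = 0.3 * E[:, 0] + 0.7 * E[:, 1]                     # mixed pixel, 30/70 blend
print(unmix(E, y))                                     # ~[0.3, 0.7]
```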
Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm
NASA Technical Reports Server (NTRS)
Lee, Allan Y.
1991-01-01
Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.
NASA Technical Reports Server (NTRS)
Goodrich, Kenneth H.; Sliwa, Steven M.; Lallman, Frederick J.
1989-01-01
Airplane designs are currently being proposed with a multitude of lifting and control devices. Because of the redundancy in ways to generate moments and forces, there are a variety of strategies for trimming each airplane. A linear optimum trim solution (LOTS) is derived using a Lagrange formulation. LOTS enables the rapid calculation of the longitudinal load distribution resulting in the minimum trim drag in level, steady-state flight for airplanes with a mixture of three or more aerodynamic surfaces and propulsive control effectors. Comparisons of the trim drags obtained using LOTS, a direct constrained optimization method, and several ad hoc methods are presented for vortex-lattice representations of a three-surface airplane and two-surface airplane with thrust vectoring. These comparisons show that LOTS accurately predicts the results obtained from the nonlinear optimization and that the optimum methods result in trim drag reductions of up to 80 percent compared to the ad hoc methods.
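The Lagrange structure of a LOTS-like solution can be sketched as an equality-constrained quadratic program: minimize a quadratic trim-drag model x^T D x subject to linear force and moment balance A x = b, solved exactly through the KKT system. D, A, and b below are toy values, not an aircraft model.

```python
# Equality-constrained quadratic trim via the KKT system.
import numpy as np

def trim(D, A, b):
    n, m = D.shape[0], A.shape[0]
    K = np.block([[2 * D, A.T], [A, np.zeros((m, m))]])   # KKT matrix
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                 # surface loads; sol[n:] are the multipliers

D = np.diag([1.0, 2.0, 4.0])        # induced-drag weights per surface (toy)
A = np.array([[1.0, 1.0, 1.0],      # total lift balance
              [1.0, -0.5, -2.0]])   # pitching-moment balance
b = np.array([1.0, 0.0])            # required lift, zero net moment
x = trim(D, A, b)
print(x, A @ x)                     # loads satisfy the trim constraints exactly
```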
Observations of non-linear plasmon damping in dense plasmas
NASA Astrophysics Data System (ADS)
Witte, B. B. L.; Sperling, P.; French, M.; Recoules, V.; Glenzer, S. H.; Redmer, R.
2018-05-01
We present simulations using finite-temperature density-functional-theory molecular dynamics to calculate dynamic dielectric properties in warm dense aluminum. The comparison between the Perdew-Burke-Ernzerhof (PBE), Strongly Constrained and Appropriately Normed (SCAN) semilocal, and Heyd-Scuseria-Ernzerhof (HSE) exchange-correlation functionals indicates evident differences in the electron transition energies, dc conductivity, and Lorenz number. The HSE calculations show excellent agreement with x-ray scattering data [Witte et al., Phys. Rev. Lett. 118, 225001 (2017)] as well as dc conductivity and absorption measurements. These findings demonstrate non-Drude behavior of the dynamic conductivity above the Cooper minimum that needs to be taken into account to determine optical properties in the warm dense matter regime.
Design of an ignition target for the laser megajoule, mitigating parametric instabilities
NASA Astrophysics Data System (ADS)
Laffite, S.; Loiseau, P.
2010-10-01
Laser plasma interaction (LPI) is a critical issue in ignition target design. Based on both scaling laws and two-dimensional calculations, this article describes how we can constrain a laser megajoule (LMJ) [J. Ebrardt and J. M. Chaput, J. Phys.: Conf. Ser. 112, 032005 (2008)] target design by mitigating LPI. An ignition indirect-drive target has been designed for the 2/3 LMJ step. It requires 0.9 MJ of laser energy and 260 TW of laser power to achieve a temperature of 300 eV in a rugby-shaped Hohlraum, and gives a yield of about 20 MJ. The study focuses on the analysis of linear gain for stimulated Raman and Brillouin scattering. Enlarging the focal spot is an obvious way to reduce the linear gains. We show that this reduction is nonlinear with the focal spot size. For a relatively small focal spot area, the linear gains are significantly reduced by enlarging the focal spot. However, there is no benefit in making the focal spots too large, because the larger laser entrance holes they require demand more laser energy. Furthermore, this implies that, for a given design, there is a minimum value of the linear gains below which one cannot go.
Survival of primary condylar-constrained total knee arthroplasty at a minimum of 7 years.
Maynard, Lance M; Sauber, Timothy J; Kostopoulos, Vasileios K; Lavigne, Gregory S; Sewecke, Jeffrey J; Sotereanos, Nicholas G
2014-06-01
The purpose of the present study is to retrospectively analyze clinical and radiographic outcomes in primary constrained condylar knee arthroplasty at a minimum follow-up of 7 years. Given the concern for early aseptic loosening in constrained implants, we focused on this outcome. Our cohort consists of 127 constrained condylar knees. The mean age of patients in the study was 68.3 years, with a mean follow-up of 110.7 months. The diagnosis was primary osteoarthritis in 92%. There were four periprosthetic distal femur fractures, with a rate of revision of 0.8%. No implants were revised for aseptic loosening. Kaplan-Meier survivorship analysis with removal of any component as the end point revealed that the 10-year rate of survival of the primary CCK was 97.6% (95% CI, 94%-100%). Copyright © 2014. Published by Elsevier Inc.
A feasible DY conjugate gradient method for linear equality constraints
NASA Astrophysics Data System (ADS)
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan conjugate gradient method to linear equality constrained optimization problems. It can be applied to large problems of this type owing to its low storage requirements. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments which show the efficiency of the method are also given.
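A hedged sketch of the feasible-direction idea (not the paper's exact algorithm or line search) follows: for min f(x) subject to Ax = b, gradients are projected onto the null space of A so every iterate stays feasible, directions are recombined with the Dai-Yuan beta, and a quadratic f permits an exact line search.

```python
# Feasible conjugate gradient sketch with the Dai-Yuan beta for
# min 0.5 x^T Q x + c^T x  subject to  A x = b.
import numpy as np

def feasible_dy_cg(Q, c, A, b, iters=50):
    x = np.linalg.lstsq(A, b, rcond=None)[0]                 # feasible start
    P = np.eye(len(c)) - A.T @ np.linalg.solve(A @ A.T, A)   # null-space projector
    g = P @ (Q @ x + c)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        alpha = -(g @ d) / (d @ Q @ d)                       # exact line search
        x = x + alpha * d
        g_new = P @ (Q @ x + c)
        beta = (g_new @ g_new) / (d @ (g_new - g))           # Dai-Yuan beta
        d = -g_new + beta * d
        g = g_new
    return x

Q = np.diag([1.0, 3.0, 5.0]); c = np.array([-1.0, 0.0, 1.0])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
x = feasible_dy_cg(Q, c, A, b)
print(x, A @ x)    # minimizer; every iterate satisfies A x = b
```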
Scheduling Aircraft Landings under Constrained Position Shifting
NASA Technical Reports Server (NTRS)
Balakrishnan, Hamsa; Chandran, Bala
2006-01-01
Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
Toward Overcoming the Local Minimum Trap in MFBD
2015-07-14
Publications during the first two years of this grant: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Variable Projection Method for Blind Deconvolution; A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1.
Biyikli, Emre; To, Albert C.
2015-01-01
A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849
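The proportional update at the heart of PTO can be sketched on its own (the finite element analysis producing the element stresses, and the inner redistribution loop of the published MATLAB programs, are omitted): material is distributed across elements in proportion to a power of their stresses and scaled to the target volume fraction.

```python
# One proportional-distribution step of a PTO-like update (illustrative only).
import numpy as np

def pto_update(stress, vol_frac, p=1.0, x_min=1e-3):
    share = stress**p / np.sum(stress**p)          # proportional shares
    x = vol_frac * stress.size * share             # distribute target material
    return np.clip(x, x_min, 1.0)                  # enforce density bounds

stress = np.array([0.2, 1.5, 0.9, 3.0, 0.1])       # per-element stresses (toy)
print(pto_update(stress, vol_frac=0.5))            # more material where stress is high
```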
NASA Astrophysics Data System (ADS)
Sorini, D.
2017-04-01
Measuring the clustering of galaxies from surveys allows us to estimate the power spectrum of matter density fluctuations, thus constraining cosmological models. This requires careful modelling of observational effects to avoid misinterpretation of data. In particular, signals coming from different distances encode information from different epochs. This is known as ``light-cone effect'' and is going to have a higher impact as upcoming galaxy surveys probe larger redshift ranges. Generalising the method by Feldman, Kaiser and Peacock (1994) [1], I define a minimum-variance estimator of the linear power spectrum at a fixed time, properly taking into account the light-cone effect. An analytic expression for the estimator is provided, and that is consistent with the findings of previous works in the literature. I test the method within the context of the Halofit model, assuming Planck 2014 cosmological parameters [2]. I show that the estimator presented recovers the fiducial linear power spectrum at present time within 5% accuracy up to k ~ 0.80 h Mpc-1 and within 10% up to k ~ 0.94 h Mpc-1, well into the non-linear regime of the growth of density perturbations. As such, the method could be useful in the analysis of the data from future large-scale surveys, like Euclid.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
Minimum energy control and optimal-satisfactory control of Boolean control network
NASA Astrophysics Data System (ADS)
Li, Fangfei; Lu, Xiwen
2013-12-01
In the literature, the expenditure of energy required to transfer a Boolean control network from an initial state to a desired state has rarely been considered. Motivated by this, this Letter investigates minimum energy control and optimal-satisfactory control of Boolean control networks. Based on the semi-tensor product of matrices and Floyd's algorithm, minimum energy, constrained minimum energy, and optimal-satisfactory control designs for Boolean control networks are given, respectively. A numerical example is presented to illustrate the efficiency of the obtained results.
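The graph view underlying the Floyd's-algorithm step can be sketched on a toy state graph (this omits the semi-tensor product machinery): states of the finite network are nodes, admissible one-step controls are edges weighted by their energy cost, and Floyd-Warshall relaxation yields the minimum transfer energy between every pair of states.

```python
# Minimum-energy state transfer as all-pairs shortest paths (toy weights).
import numpy as np

INF = np.inf
# energy[i][j] = cost of driving state i to state j in one step
energy = np.array([[0.0, 1.0, INF, INF],
                   [INF, 0.0, 2.0, 5.0],
                   [1.0, INF, 0.0, 1.0],
                   [INF, INF, INF, 0.0]])

dist = energy.copy()
n = len(dist)
for k in range(n):                     # Floyd-Warshall relaxation
    for i in range(n):
        for j in range(n):
            dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j])

print(dist[0, 3])                      # minimum energy to move state 0 -> 3: 4.0
```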
Constrained dynamics approach for motion synchronization and consensus
NASA Astrophysics Data System (ADS)
Bhatia, Divya
In this research we propose to develop constrained-dynamics-based stable attitude synchronization, consensus and tracking (SCT) control laws for formations of rigid bodies. The generalized constrained dynamics equations of motion (EOM) are developed utilizing constraint potential energy functions that enforce communication constraints. Euler-Lagrange equations are employed to develop the non-linear constrained dynamics of multiple-vehicle systems. The constraint potential energy is synthesized based on a graph-theoretic formulation of the vehicle-to-vehicle communication. Constraint stabilization is achieved via Baumgarte's method. The performance of these constrained-dynamics formations is evaluated for bounded control authority. The method has been applied to various cases, and the results, obtained using MATLAB simulations, show stability, synchronization, consensus and tracking of formations. The first case corresponds to an N-pendulum formation without external disturbances, in which the springs and dampers connected between the pendulums act as the communication constraints. The damper helps stabilize the system by damping the motion, whereas the spring acts as a communication link relaying relative position information between two connected pendulums. The Lyapunov (energy-based) stabilization technique is employed to establish attitude stabilization and boundedness. Various scenarios involving different values of the springs and dampers are simulated and studied. Motivated by the first case study, we study a formation of N 2-link robotic manipulators. The governing EOM for this system are derived using Euler-Lagrange equations. A generalized set of communication constraints is developed for this system using graph theory. The constraints are stabilized using Baumgarte's technique. Attitude SCT is established for this system, and results are shown for the special case of three 2-link robotic manipulators. These methods are then applied to a formation of N spacecraft. Modified Rodrigues Parameters (MRP) are used for attitude representation of the spacecraft because of their advantage of being a minimum-parameter representation. Constrained non-linear equations of motion for this system are developed and stabilized using a proportional-derivative (PD) controller derived based on Baumgarte's method. A system of 3 spacecraft is simulated, and the results for SCT are shown and analyzed. Another problem studied in this research is that of maintaining SCT under unknown external disturbances. We use an adaptive control algorithm to derive control laws for the actuator torques and develop an estimation law for the unknown disturbance parameters to achieve SCT. The estimate of the disturbance is added as a feed-forward term in the actual control law to obtain stabilization of a 3-spacecraft formation. The disturbance estimates are generated via a Lyapunov analysis of the closed-loop system. In summary, the constrained dynamics method shows considerable potential in formation control, achieving stabilization, synchronization, consensus and tracking of a set of dynamical systems.
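A minimal sketch of Baumgarte's constraint stabilization, the technique used throughout the above, is given below for a planar pendulum written in Cartesian coordinates: the holonomic constraint c = x^2 + y^2 - L^2 = 0 is enforced by requiring c_ddot + 2*alpha*c_dot + beta^2*c = 0 when solving for the constraint multiplier. The model and gains are illustrative.

```python
# Baumgarte stabilization of a planar pendulum's length constraint.
import numpy as np
from scipy.integrate import solve_ivp

L, g_acc, alpha, beta = 1.0, 9.81, 5.0, 5.0

def rhs(t, s):
    x, y, vx, vy = s
    c = x*x + y*y - L*L
    cdot = 2*(x*vx + y*vy)
    # With unit mass, ax = 2*lam*x and ay = -g + 2*lam*y; substituting into
    # c_ddot + 2*alpha*c_dot + beta^2*c = 0 gives the multiplier lam:
    lam = -(2*(vx*vx + vy*vy) - 2*y*g_acc
            + 2*alpha*cdot + beta**2 * c) / (4*(x*x + y*y))
    return [vx, vy, 2*lam*x, -g_acc + 2*lam*y]

sol = solve_ivp(rhs, (0, 10), [L, 0, 0, 0], rtol=1e-8)
drift = sol.y[0]**2 + sol.y[1]**2 - L**2
print(np.max(np.abs(drift)))           # constraint drift stays small
```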
Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.
Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong
2014-09-01
A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
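The inversion-free route can be sketched with a textbook conjugate gradient loop (this is not the MICCG/SICCG adaptation itself): the MVDR weights w = R^{-1} d / (d^H R^{-1} d) require only the solution of R z = d, which CG supplies without forming R^{-1}.

```python
# MVDR weights via conjugate gradient, avoiding explicit matrix inversion.
import numpy as np

def cg_solve(R, d, iters=50, tol=1e-10):
    z = np.zeros_like(d)
    r = d - R @ z
    p = r.copy()
    rs = np.real(r.conj() @ r)
    for _ in range(iters):
        Rp = R @ p
        a = rs / np.real(p.conj() @ Rp)
        z, r = z + a * p, r - a * Rp
        rs_new = np.real(r.conj() @ r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return z

M = 6
d = np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(0.4))  # steering vector (toy)
R = np.eye(M) + 0.1 * np.ones((M, M))                       # toy covariance
z = cg_solve(R, d)
w = z / (d.conj() @ z)                                      # MVDR weights
print(np.abs(d.conj() @ w))                                 # distortionless: 1
```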
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
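For orientation, a generic statistically constrained linear MMSE estimator of the kind MiMSEE builds on is sketched below, with toy dimensions rather than the thorax model: given a prior mean mu and covariance Cx, measurements y = Hx + n, and noise covariance Cn, the estimate is x_hat = mu + Cx H^T (H Cx H^T + Cn)^{-1} (y - H mu).

```python
# Generic linear MMSE estimator (toy dimensions, not the thorax model).
import numpy as np

def lmmse(y, H, mu, Cx, Cn):
    S = H @ Cx @ H.T + Cn                              # measurement covariance
    K = Cx @ H.T @ np.linalg.solve(S, np.eye(len(y)))  # optimal linear gain
    return mu + K @ (y - H @ mu)

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
x_true = np.array([1.0, 2.0, 0.5])
y = H @ x_true + 0.05 * rng.standard_normal(4)
x_hat = lmmse(y, H, mu=np.ones(3), Cx=np.eye(3), Cn=0.05**2 * np.eye(4))
print(x_hat)                                           # close to x_true
```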
An indirect method for numerical optimization using the Kreisselmeir-Steinhauser function
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
1989-01-01
A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeir-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT algorithm. Advantages of this approach are the use of unconstrained optimization methods to find a constrained minimum without the draw-down factor typical of penalty function methods, and the ability to start from either the feasible or the infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize each individual objective function separately.
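A sketch of the envelope step (omitting the full SUMT driver that updates the goal between cycles) follows: the reduced objective f(x) - f_star and the constraints are folded into a single smooth Kreisselmeir-Steinhauser envelope, whose unconstrained minimum approaches the constrained optimum as the goal f_star approaches the optimal objective value. The objective, constraint, and goal below are illustrative.

```python
# Kreisselmeir-Steinhauser envelope of a reduced objective and a constraint.
import numpy as np
from scipy.optimize import minimize

def ks(g_values, rho=50.0):
    g = np.asarray(g_values)
    gmax = g.max()
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho  # stable form

# minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2 subject to g(x) = x0 + x1 - 2 <= 0.
f_star = 0.0                                    # goal value for the reduced objective
envelope = lambda x: ks([(x[0]-2)**2 + (x[1]-1)**2 - f_star,   # reduced objective
                         x[0] + x[1] - 2.0])                    # constraint
res = minimize(envelope, x0=np.zeros(2))
print(res.x)   # about (1.63, 0.63) for this fixed goal; updating f_star between
               # cycles drives the minimum toward the constrained optimum (1.5, 0.5)
```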
Reversed magnetic shear suppression of electron-scale turbulence on NSTX
NASA Astrophysics Data System (ADS)
Yuh, Howard Y.; Levinton, F. M.; Bell, R. E.; Hosea, J. C.; Kaye, S. M.; Leblanc, B. P.; Mazzucato, E.; Smith, D. R.; Domier, C. W.; Luhmann, N. C.; Park, H. K.
2009-11-01
Electron thermal internal transport barriers (e-ITBs) are observed in reversed (negative) magnetic shear NSTX discharges^1. These e-ITBs can be created with either neutral beam heating or High Harmonic Fast Wave (HHFW) RF heating. The e-ITB occurs at the location of minimum magnetic shear determined by Motional Stark Effect (MSE) constrained equilibria. Statistical studies show a threshold condition in magnetic shear for e-ITB formation. High-k fluctuation measurements at electron turbulence wavenumbers^3 have been made under several different transport regimes, including a bursty regime that limits temperature gradients at intermediate magnetic shear. The growth rate of fluctuations has been calculated immediately following a change in the local magnetic shear, resulting in electron temperature gradient relaxation. Linear gyrokinetic simulation results for NSTX show that while measured electron temperature gradients exceed critical linear thresholds for ETG instability, growth rates can remain low under reversed shear conditions up to high electron temperature gradients. ^1H. Yuh, et al., PoP 16, 056120 ^2D.R. Smith, E. Mazzucato et al., RSI 75, 3840 ^3E. Mazzucato, D.R. Smith et al., PRL 101, 075001
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
The generalized quadratic knapsack problem. A neuronal network approach.
Talaván, Pedro M; Yáñez, Javier
2006-05-01
The solution of an optimization problem through the continuous Hopfield network (CHN) is based on some energy or Lyapunov function, which decreases as the system evolves until a local minimum value is attained. A new energy function is proposed in this paper so that any 0-1 programming problem with linear constraints and a quadratic objective function can be solved. This problem, denoted the generalized quadratic knapsack problem (GQKP), includes as particular cases such well-known problems as the traveling salesman problem (TSP) and the quadratic assignment problem (QAP). This new energy function generalizes those proposed by other authors. Through this energy function, any GQKP can be solved with an appropriate parameter-setting procedure, which is detailed in this paper. As a particular case, and in order to test this generalized energy function, some computational experiments solving the traveling salesman problem are also included.
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and postprocessing. First, the head script launches COMSOL with MATLAB, defines initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment via the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, it returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. Cooperation with MATLAB via LiveLink couples a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in chosen case studies in the field of technical cybernetics and bioengineering.
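The workflow described is MATLAB's fmincon driving a COMSOL model through the LiveLink API. A schematic stand-in (in Python, with scipy.optimize.minimize playing the role of fmincon and a cheap surrogate replacing the COMSOL call; all names here are hypothetical) shows the shape of the loop:

```python
import numpy as np
from scipy.optimize import minimize

def comsol_objective(p):
    """Stand-in for the COMSOL model evaluation called each iteration
    (in the paper this is a LiveLink/API call from MATLAB); here a
    cheap surrogate J(p) so the sketch is runnable."""
    return (p[0] - 1.2) ** 2 + 10 * (p[1] - p[0] ** 2) ** 2

res = minimize(
    comsol_objective,
    x0=np.array([0.5, 0.5]),                  # initial parameter values
    method="SLSQP",                           # analogous role to fmincon
    bounds=[(0.0, 2.0), (0.0, 2.0)],
    constraints=[{"type": "ineq", "fun": lambda p: 1.8 - p.sum()}],
)
print(res.x, res.fun, res.status)             # optimized parameters, exit flag
```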
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem was tackled in previous research via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the source and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
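The MVDR and LCMV weight formulas mentioned are standard; a minimal NumPy sketch (toy line array and steering model assumed, not the paper's loudspeaker geometry):

```python
import numpy as np

def mvdr(R, a):
    """MVDR: minimize w^H R w subject to w^H a = 1 (distortionless focus)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

def lcmv(R, C, f):
    """LCMV: minimize w^H R w subject to C^H w = f (multiple linear
    constraints; columns of C are steering vectors)."""
    Ri_C = np.linalg.solve(R, C)
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

def steer(n, theta):
    """Steering vector of an n-element half-wavelength line array."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta))

n = 8
R = np.eye(n) + 10 * np.outer(steer(n, 0.5), steer(n, 0.5).conj())
w = lcmv(R, np.column_stack([steer(n, 0.0), steer(n, 0.5)]),
         f=np.array([1.0, 0.0]))     # unit gain at focus, null at 0.5 rad
print(abs(w.conj() @ steer(n, 0.0)), abs(w.conj() @ steer(n, 0.5)))
```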
Homotopy approach to optimal, linear quadratic, fixed architecture compensation
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1991-01-01
Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.
Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks
Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng
2014-01-01
Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health-care, security, sports and entertainment. Accurate localization is a fundamental problem in realizing the effective location-aware applications above. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of state estimation with a linear constraint to exploit the additional information is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity, as presented in this paper, outperforms the first-order solution, and that the constrained IMM-EKF obtains superior estimation over the IMM-EKF without constraint. Another Brownian-motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraint. PMID:25390408
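One standard analytical form for linearly constrained state estimation is the estimate-projection update x̃ = x̂ − PDᵀ(DPDᵀ)⁻¹(Dx̂ − d); a small sketch under that assumption (the toy geometry constraint below is hypothetical, not from the paper):

```python
import numpy as np

def project_estimate(x_hat, P, D, d):
    """Project an unconstrained estimate onto the linear constraint
    D x = d (minimum-variance estimate projection):
      x~ = x_hat - P D^T (D P D^T)^{-1} (D x_hat - d)."""
    S = D @ P @ D.T
    K = P @ D.T @ np.linalg.inv(S)
    return x_hat - K @ (D @ x_hat - d)

# toy example: two on-body nodes whose x-coordinates must stay 0.3 m apart
x_hat = np.array([0.00, 0.35])          # unconstrained filter estimate
P = np.diag([0.04, 0.01])               # its covariance
D = np.array([[-1.0, 1.0]])             # constraint: x2 - x1 = 0.3
d = np.array([0.3])
print(project_estimate(x_hat, P, D, d)) # [0.04, 0.34], satisfies D x = d
```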
NASA Astrophysics Data System (ADS)
Lesmana, E.; Chaerani, D.; Khansa, H. N.
2018-03-01
Energy-Saving Generation Dispatch (ESGD) is a scheme made by the Chinese Government in an attempt to minimize CO2 emission produced by power plants. The scheme responds to global warming, which is primarily caused by excess CO2 in Earth's atmosphere: while the need for electricity is absolute, the power plants producing it are mostly thermal plants, which emit large amounts of CO2. Many approaches to fulfilling this scheme have been made; one of them comes through Minimum Cost Flow, which results in a Quadratically Constrained Quadratic Programming (QCQP) form. In this paper, the ESGD problem with Minimum Cost Flow in QCQP form is solved using Lagrange's multiplier method.
A method for fitting regression splines with varying polynomial order in the linear mixed model.
Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W
2006-02-15
The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
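As a sketch of the implicit-constraint idea, a truncated-power spline basis builds the continuity and smoothness side conditions into the design matrix itself (illustrative NumPy code; the basis choice and numbers are assumptions, not the authors' SAS/S-plus implementation):

```python
import numpy as np

def truncated_power_basis(t, knots, degree=2):
    """Design matrix [1, t, ..., t^p, (t-k1)_+^p, ...]; the truncated
    power form implicitly enforces continuity of the spline and its
    first p-1 derivatives at the knots."""
    cols = [t ** j for j in range(degree + 1)]
    cols += [np.clip(t - k, 0, None) ** degree for k in knots]
    return np.column_stack(cols)

t = np.linspace(0, 10, 101)
X = truncated_power_basis(t, knots=[3.0, 7.0], degree=2)
beta = np.array([1.0, 0.5, -0.1, 0.3, -0.2])   # hypothetical fixed effects
y = X @ beta                                    # piecewise quadratic, C^1
print(X.shape, y[:3])
```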
ERIC Educational Resources Information Center
Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.
2009-01-01
The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…
Cutting planes for the multistage stochastic unit commitment problem
Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul
2016-04-20
As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.
Analysis of Formation Flying in Eccentric Orbits Using Linearized Equations of Relative Motion
NASA Technical Reports Server (NTRS)
Lane, Christopher; Axelrad, Penina
2004-01-01
Geometrical methods for formation flying design based on the analytical solution to Hill's equations have been previously developed and used to specify desired relative motions in near-circular orbits. By generating relationships between the vehicles that are intuitive, these approaches offer valuable insight into the relative motion and allow for the rapid design of satellite configurations to achieve mission-specific requirements, such as vehicle separation at perigee or apogee, minimum separation, or a specific geometrical shape. Furthermore, the results obtained using geometrical approaches can be used to better constrain numerical optimization methods, allowing those methods to converge to optimal satellite configurations faster. This paper presents a set of geometrical relationships for formations in eccentric orbits, where Hill's equations are not valid, and shows how these relationships can be used to investigate formation designs and how they evolve with time.
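For reference, the near-circular baseline that these geometric methods extend is the closed-form Clohessy-Wiltshire (Hill) solution; a minimal sketch follows (the paper's eccentric-orbit relationships are not reproduced here):

```python
import numpy as np

def cw_state(t, n, r0, v0):
    """Closed-form Clohessy-Wiltshire (Hill) solution for relative motion
    about a circular reference orbit; x radial, y along-track, z cross-track,
    n = mean motion [rad/s]."""
    s, c = np.sin(n * t), np.cos(n * t)
    x0, y0, z0 = r0
    vx, vy, vz = v0
    x = (4 - 3 * c) * x0 + s / n * vx + 2 / n * (1 - c) * vy
    y = (6 * (s - n * t) * x0 + y0
         - 2 / n * (1 - c) * vx + (4 * s - 3 * n * t) / n * vy)
    z = z0 * c + vz / n * s
    return np.array([x, y, z])

n = 0.0011                                        # roughly a LEO mean motion
r0, v0 = np.array([0.0, 100.0, 0.0]), np.zeros(3)
print(cw_state(2800.0, n, r0, v0))                # along-track offset persists
```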
NASA Technical Reports Server (NTRS)
Vandervelde, W. E.; Carignan, C. R.
1982-01-01
The degree of controllability of a large space structure is found by a four-step procedure: (1) finding the minimum control energy for driving the system from a given initial state to the origin in the prescribed time; (2) finding the region of initial states which can be driven to the origin with constrained control energy and time using an optimal control strategy; (3) scaling the axes so that a unit displacement in every direction is equally important to control; and (4) finding the linear measure of the weighted "volume" of the ellipsoid in the equicontrol space. For observability, the error covariance must be reduced toward zero using measurements optimally, and the criterion must be standardized by the magnitude of tolerable errors. The results obtained using these methods are applied to the vibration modes of a free-free beam.
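Steps (1) and (4) can be sketched with the finite-horizon controllability Gramian, since the minimum energy to drive x₀ to the origin is x₀ᵀW(T)⁻¹x₀ and the constrained-energy region is an ellipsoid; the code below is an illustrative reading of the procedure (the system matrices and horizon are toy assumptions, and the axis-scaling step is omitted):

```python
import numpy as np
from scipy.linalg import expm

def controllability_gramian(A, B, T, n_steps=400):
    """Finite-horizon Gramian W(T) = int_0^T e^{At} B B^T e^{A^T t} dt.
    The minimum energy driving x0 to the origin in time T is
    x0^T W(T)^{-1} x0, so {x0 : energy <= E} is an ellipsoid; a weighted
    'volume' of that ellipsoid gives a scalar controllability measure."""
    ts = np.linspace(0.0, T, n_steps)
    Phi = np.array([expm(A * t) @ B for t in ts])
    return np.trapz(np.einsum('tij,tkj->tik', Phi, Phi), ts, axis=0)

A = np.array([[0.0, 1.0], [-1.0, -0.02]])   # one lightly damped mode
B = np.array([[0.0], [1.0]])
W = controllability_gramian(A, B, T=10.0)
print(np.sqrt(np.linalg.det(W)))            # ellipsoid-volume measure
```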
Can re-regulation reservoirs and batteries cost-effectively mitigate sub-daily hydropeaking?
NASA Astrophysics Data System (ADS)
Haas, J.; Nowak, W.; Anindito, Y.; Olivares, M. A.
2017-12-01
To compensate for mismatches between generation and load, hydropower plants frequently operate in strong hydropeaking schemes, which is harmful to the downstream ecosystem. Furthermore, new power market structures and variable renewable systems may exacerbate this behavior. Ecological constraints (minimum flows, maximum ramps) are frequently used to mitigate hydropeaking, but these stand in direct tradeoff with the operational flexibility required for integrating renewable technologies. Fortunately, there are also physical mitigation options (i.e., re-regulation reservoirs and batteries), but to date there have been no studies of their cost-effectiveness for hydropeaking mitigation. This study aims to fill that gap. We formulate an hourly mixed-integer linear optimization model to plan the weekly operation of a hydro-thermal-renewable power system from southern Chile. The opportunity cost of water (needed for this weekly scheduling) is obtained from a mid-term scheduling problem solved with dynamic programming. We compare the current (unconstrained) hydropower operation with an ecologically constrained operation. The resulting cost increase is then contrasted with the annual payments necessary for the physical hydropeaking mitigation options. For highly constrained operations, both re-regulation reservoirs and batteries prove to be economically attractive for hydropeaking mitigation. For intermediately constrained scenarios, re-regulation reservoirs are still economic, whereas batteries can be a viable solution only if they become cheaper in the future. Given current cost projections, their break-even point (for hydropeaking mitigation) is expected within the next ten years. Finally, less stringent hydropeaking constraints do not justify physical mitigation measures, as the necessary flexibility can be provided by other power plants of the system.
Coding for Communication Channels with Dead-Time Constraints
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon
2004-01-01
Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes, decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d, k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d, ∞) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
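The achievable code rate under a (d, k) constraint is the Shannon capacity of the constraint graph, i.e., log₂ of the spectral radius of its adjacency matrix; a small sketch for finite k (the (d, ∞) case needs a different state space and is omitted):

```python
import numpy as np

def dk_capacity(d, k):
    """Shannon capacity (bits/slot) of a (d,k) runlength constraint:
    log2 of the spectral radius of the constraint-graph adjacency matrix.
    State i = number of zeros emitted since the last one (0 <= i <= k);
    emit '0': i -> i+1 (if i < k); emit '1': i -> 0 (if i >= d)."""
    A = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        if i < k:
            A[i, i + 1] = 1.0
        if i >= d:
            A[i, 0] = 1.0
    return np.log2(max(abs(np.linalg.eigvals(A))))

print(dk_capacity(1, 3))   # classic (1,3) RLL: about 0.5515 bits/slot
```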
Time and frequency constrained sonar signal design for optimal detection of elastic objects.
Hamschin, Brandon; Loughlin, Patrick J
2013-04-01
In this paper, the task of model-based transmit signal design for optimizing detection is considered. Building on past work that designs the spectral magnitude for optimizing detection, two methods for synthesizing minimum duration signals with this spectral magnitude are developed. The methods are applied to the design of signals that are optimal for detecting elastic objects in the presence of additive noise and self-noise. Elastic objects are modeled as linear time-invariant systems with known impulse responses, while additive noise (e.g., ocean noise or receiver noise) and acoustic self-noise (e.g., reverberation or clutter) are modeled as stationary Gaussian random processes with known power spectral densities. The first approach finds the waveform that preserves the optimal spectral magnitude while achieving the minimum temporal duration. The second approach yields a finite-length time-domain sequence by maximizing temporal energy concentration, subject to the constraint that the spectral magnitude is close (in a least-squares sense) to the optimal spectral magnitude. The two approaches are then connected analytically, showing the former is a limiting case of the latter. Simulation examples that illustrate the theory are accompanied by discussions that address practical applicability and how one might satisfy the need for target and environmental models in the real world.
Multi-objective trajectory optimization for the space exploration vehicle
NASA Astrophysics Data System (ADS)
Qin, Xiaoli; Xiao, Zhen
2016-07-01
The research determines temperature-constrained optimal trajectories for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include the environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.
Constraining new physics models with isotope shift spectroscopy
NASA Astrophysics Data System (ADS)
Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias
2017-07-01
Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B -L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of a polynomial linear parameter-varying (PLPV) system, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved, which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Plessow, Philipp N
2018-02-13
This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach for molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions almost exclusively rely on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths with Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance in industrial processes.
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all having the linear minimum distance property, and structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds, and families of such codes of different rates can be decoded efficiently using a common decoding architecture.
NASA Astrophysics Data System (ADS)
Guo, Sangang
2017-09-01
There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible units' states (UC), the other is power economic dispatch (ED) for each unit. The accurate solution of ED is the more important for enhancing the efficiency of the solution to SCUC for fixed feasible units' states. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, respectively, are proposed for solving ED; both are based on linear programming via a piecewise linear approximation to the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
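A minimal sketch of the piecewise-linear-approximation idea (not the paper's two specific methods): each unit's quadratic fuel cost is cut into segments priced at their average incremental cost, and the dispatch becomes a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def dispatch(costs, pmin, pmax, demand, n_seg=4):
    """LP economic dispatch with piecewise-linear approximation of convex
    quadratic fuel costs C_i(p) = a_i p^2 + b_i p. Output above pmin is
    split into n_seg segments priced at their average incremental cost;
    convexity makes cheaper segments fill first. Returns the incremental
    cost above the always-on pmin base, plus the segment outputs."""
    c, bounds = [], []
    for (a, b), lo, hi in zip(costs, pmin, pmax):
        edges = np.linspace(lo, hi, n_seg + 1)
        for l, h in zip(edges[:-1], edges[1:]):
            c.append(a * (l + h) + b)        # slope of C_i over [l, h]
            bounds.append((0.0, h - l))
    res = linprog(c, A_eq=[np.ones(len(c))],
                  b_eq=[demand - sum(pmin)], bounds=bounds)
    return res.fun, res.x

f, x = dispatch(costs=[(0.01, 2.0), (0.02, 1.5)],
                pmin=[10, 10], pmax=[100, 80], demand=120)
print(f, x)
```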
Estimating Stresses, Fault Friction and Fluid Pressure from Topography and Coseismic Slip Models
NASA Astrophysics Data System (ADS)
Styron, R. H.; Hetland, E. A.
2014-12-01
Stress is a first-order control on the deformation state of the earth. However, stress is notoriously hard to measure, and researchers typically only estimate the directions and relative magnitudes of principal stresses, with little quantification of the uncertainties or absolute magnitude. To improve upon this, we have developed methods to constrain the full stress tensor field in a region surrounding a fault, including tectonic, topographic, and lithostatic components, as well as static friction and pore fluid pressure on the fault. Our methods are based on elastic halfspace techniques for estimating topographic stresses from a DEM, and we use a Bayesian approach to estimate accumulated tectonic stress, fluid pressure, and friction from fault geometry and slip rake, assuming Mohr-Coulomb fault mechanics. The nature of the tectonic stress inversion is such that either the stress maximum or minimum is better constrained, depending on the topography and fault deformation style. Our results from the 2008 Wenchuan event yield shear stresses from topography up to 20 MPa (normal-sinistral shear sense) and topographic normal stresses up to 80 MPa on the faults; tectonic stress had to be large enough to overcome topography to produce the observed reverse-dextral slip. Maximum tectonic stress is constrained to be >0.3 * lithostatic stress (depth-increasing), with a most likely value around 0.8, trending 90-110°E. Minimum tectonic stress is about half of maximum. Static fault friction is constrained at 0.1-0.4, and fluid pressure at 0-0.6 * total pressure on the fault. Additionally, the patterns of topographic stress and slip suggest that topographic normal stress may limit fault slip once failure has occurred. Preliminary results from the 2013 Balochistan earthquake are similar, but yield stronger constraints on the upper limits of maximum tectonic stress, as well as tight constraints on the magnitude of minimum tectonic stress and stress orientation. Work in progress on the Wasatch fault suggests that maximum tectonic stress may also be able to be constrained, and that some of the shallow rupture segmentation may be due in part to localized topographic loading. Future directions of this work include regions where high relief influences fault kinematics (such as Tibet).
Directional constraint of endpoint force emerges from hindlimb anatomy.
Bunderson, Nathan E; McKay, J Lucas; Ting, Lena H; Burkholder, Thomas J
2010-06-15
Postural control requires the coordination of force production at the limb endpoints to apply an appropriate force to the body. Subjected to horizontal plane perturbations, quadruped limbs stereotypically produce force constrained along a line that passes near the center of mass. This phenomenon, referred to as the force constraint strategy, may reflect mechanical constraints on the limb or body, a specific neural control strategy or an interaction among neural controls and mechanical constraints. We used a neuromuscular model of the cat hindlimb to test the hypothesis that the anatomical constraints restrict the mechanical action of individual muscles during stance and constrain the response to perturbations to a line independent of perturbation direction. In a linearized neuromuscular model of the cat hindlimb, muscle lengthening directions were highly conserved across 10,000 different muscle activation patterns, each of which produced an identical, stance-like endpoint force. These lengthening directions were closely aligned with the sagittal plane and reveal an anatomical structure for directionally constrained force responses. Each of the 10,000 activation patterns was predicted to produce stable stance based on Lyapunov stability analysis. In forward simulations of the nonlinear, seven degree of freedom model under the action of 200 random muscle activation patterns, displacement of the endpoint from its equilibrium position produced restoring forces, which were also biased toward the sagittal plane. The single exception was an activation pattern based on minimum muscle stress optimization, which produced destabilizing force responses in some perturbation directions. The sagittal force constraint increased during simulations as the system shifted from an inertial response during the acceleration phase to a viscoelastic response as peak velocity was obtained. These results qualitatively match similar experimental observations and suggest that the force constraint phenomenon may result from the anatomical arrangement of the limb.
Directional constraint of endpoint force emerges from hindlimb anatomy
Bunderson, Nathan E.; McKay, J. Lucas; Ting, Lena H.; Burkholder, Thomas J.
2010-01-01
Postural control requires the coordination of force production at the limb endpoints to apply an appropriate force to the body. Subjected to horizontal plane perturbations, quadruped limbs stereotypically produce force constrained along a line that passes near the center of mass. This phenomenon, referred to as the force constraint strategy, may reflect mechanical constraints on the limb or body, a specific neural control strategy or an interaction among neural controls and mechanical constraints. We used a neuromuscular model of the cat hindlimb to test the hypothesis that the anatomical constraints restrict the mechanical action of individual muscles during stance and constrain the response to perturbations to a line independent of perturbation direction. In a linearized neuromuscular model of the cat hindlimb, muscle lengthening directions were highly conserved across 10,000 different muscle activation patterns, each of which produced an identical, stance-like endpoint force. These lengthening directions were closely aligned with the sagittal plane and reveal an anatomical structure for directionally constrained force responses. Each of the 10,000 activation patterns was predicted to produce stable stance based on Lyapunov stability analysis. In forward simulations of the nonlinear, seven degree of freedom model under the action of 200 random muscle activation patterns, displacement of the endpoint from its equilibrium position produced restoring forces, which were also biased toward the sagittal plane. The single exception was an activation pattern based on minimum muscle stress optimization, which produced destabilizing force responses in some perturbation directions. The sagittal force constraint increased during simulations as the system shifted from an inertial response during the acceleration phase to a viscoelastic response as peak velocity was obtained. These results qualitatively match similar experimental observations and suggest that the force constraint phenomenon may result from the anatomical arrangement of the limb. PMID:20511528
Simultaneous elastic parameter inversion in 2-D/3-D TTI medium combined later arrival times
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; Wang, Tao; Yang, Shang-bei; Li, Xing-wang; Huang, Guo-jiao
2016-04-01
Traditional traveltime inversion for anisotropic media is, in general, based on a "weak" assumption on the anisotropic property, which simplifies both the forward part (ray tracing is performed only once) and the inversion part (a linear inversion solver is possible). But for some real applications, a general (both "weak" and "strong") anisotropic medium should be considered. In such cases, one has to develop a ray tracing algorithm that can handle a general (including "strong") anisotropic medium and also design a non-linear inversion solver for the subsequent tomography. Meanwhile, it is constructive to investigate how much the tomographic resolution can be improved by introducing the later arrivals. With this motivation, we incorporated our newly developed ray tracing algorithm (the multistage irregular shortest-path method) for general anisotropic media with a non-linear inversion solver (a damped minimum-norm, constrained least-squares problem with a conjugate gradient approach) to formulate a non-linear inversion procedure for anisotropic media. This anisotropic traveltime inversion procedure is able to incorporate the later (reflected) arrival times. Both 2-D/3-D synthetic inversion experiments and comparison tests show that (1) the proposed anisotropic traveltime inversion scheme is able to recover high-contrast anomalies and (2) it is possible to improve the tomographic resolution by introducing the later (reflected) arrivals, though not as much as expected in the isotropic medium, because the different velocity (qP, qSV and qSH) sensitivities (or derivatives) with respect to the different elastic parameters are not the same and are also dependent on the inclination angle.
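The damped minimum-norm least-squares step can be sketched with a conjugate-gradient-type solver such as SciPy's LSQR; the kernel below is a random stand-in for the ray-path sensitivity matrix, not the authors' anisotropic kernels:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Damped minimum-norm least squares, min ||G dm - dt||^2 + damp^2 ||dm||^2,
# solved with a CG-type method (LSQR), the same family of solver as the
# damped constrained least-squares step; G would hold ray-path
# sensitivities and dt the traveltime residuals (toy stand-ins here).
rng = np.random.default_rng(1)
G = rng.standard_normal((200, 50))      # kernel: rays x model cells
m_true = np.zeros(50)
m_true[20:25] = 0.1                     # slowness perturbation to recover
dt = G @ m_true + 0.01 * rng.standard_normal(200)
dm = lsqr(G, dt, damp=0.5)[0]           # damped CG-type solve
print(np.linalg.norm(dm - m_true))
```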
Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W
2009-03-01
The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-08-28
Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birge, J. R.; Qi, L.; Wei, Z.
In this paper we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC), and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that, for any FJ point y ∈ N(z)\{z}, f₀(y) ≠ f₀(z), where f₀ is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The result (i) implies that the entire iteration point sequence generated by the method converges to an FJ point. We also show that if the parameters are chosen large enough, a unit step length can be accepted by the proposed algorithm.
NASA Astrophysics Data System (ADS)
Akmaev, R. a.
1999-04-01
In Part 1 of this work (Akmaev, 1999), an overview of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995) is presented. The approach implies the use of additional statistical information, in the form of statistical moments such as the mean and covariance (correlation), in data analysis. The a priori statistical characteristics, if available, make it possible to constrain expected errors and obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
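The basic OI analysis update has the familiar minimum-variance form; a compact sketch with a toy Gaussian background covariance (all numbers illustrative):

```python
import numpy as np

def oi_analysis(xb, B, y, H, R):
    """Optimal interpolation / minimum-variance analysis:
    x_a = x_b + B H^T (H B H^T + R)^{-1} (y - H x_b)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)

# 5-point grid with observations at points 1 and 3
n = 5
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = np.exp(-0.5 * (dist / 1.5) ** 2)    # Gaussian background covariance
H = np.zeros((2, n))
H[0, 1] = H[1, 3] = 1.0
xa = oi_analysis(np.zeros(n), B, y=np.array([1.0, -0.5]),
                 H=H, R=0.1 * np.eye(2))
print(xa)                                # obs spread to neighbors through B
```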
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
NASA Technical Reports Server (NTRS)
Swei, Sean
2014-01-01
We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constraint-based energy-minimum trajectory optimization for guidance development operating in complex environments.
Constrained coding for the deep-space optical channel
NASA Technical Reports Server (NTRS)
Moision, B. E.; Hamkins, J.
2002-01-01
We investigate methods of coding for a channel subject to a large dead-time constraint, i.e. a constraint on the minimum spacing between transmitted pulses, with the deep-space optical channel as the motivating example.
Ensemble Weight Enumerators for Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush
2006-01-01
Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.
Constrained optimization of image restoration filters
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1973-01-01
A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.
Explicit reference governor for linear systems
NASA Astrophysics Data System (ADS)
Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo
2018-06-01
The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored for the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set which is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve the performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performances that are comparable to optimisation-based reference governors.
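For the Lyapunov-based variant, the invariant set is the largest level set of V(x) = xᵀPx contained in the linear constraints; a small sketch under that assumption (the system and constraints below are toy choices, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def max_level_set(P, A_c, b_c):
    """Largest c such that {x : x^T P x <= c} lies inside every half-space
    a_i^T x <= b_i, using c_i = b_i^2 / (a_i^T P^{-1} a_i)."""
    Pinv = np.linalg.inv(P)
    return min(b ** 2 / (a @ Pinv @ a) for a, b in zip(A_c, b_c))

A = np.array([[0.0, 1.0], [-2.0, -1.0]])            # stable LTI dynamics
P = solve_continuous_lyapunov(A.T, -np.eye(2))      # A^T P + P A = -I
A_c = np.array([[1.0, 0.0], [0.0, 1.0]])            # x1 <= 1, x2 <= 2
b_c = np.array([1.0, 2.0])
print(max_level_set(P, A_c, b_c))
```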
Hawking-Moss instanton in nonlinear massive gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Ying-li; Saito, Ryo; Sasaki, Misao, E-mail: yingli@yukawa.kyoto-u.ac.jp, E-mail: rsaito@yukawa.kyoto-u.ac.jp, E-mail: misao@yukawa.kyoto-u.ac.jp
2013-02-01
As a first step toward understanding a landscape of vacua in a theory of non-linear massive gravity, we consider a landscape of a single scalar field and study tunneling between a pair of adjacent vacua. We study the Hawking-Moss (HM) instanton that sits at a local maximum of the potential, and evaluate the dependence of the tunneling rate on the parameters of the theory. It is found that, provided with the same physical HM Hubble parameter H_HM, depending on the values of parameters α₃ and α₄ in the action (2.2), the corresponding tunneling rate can be either enhanced or suppressed when compared to the one in the context of General Relativity (GR). Furthermore, we find a constraint on the ratio of the physical Hubble parameter to the fiducial one, which constrains the form of the potential. This result is in sharp contrast to GR, where there is no bound on the minimum value of the potential.
Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics
Petrov, Yury
2012-01-01
EEG/MEG source localization based on a "distributed solution" is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF), and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; He, Lei-yu; Li, Xing-wang; Sun, Jia-yu
2018-05-01
To conduct forward modeling and simultaneous inversion in a complex geological model, including an irregular topography (or irregular reflector or velocity anomaly), we combine our previous multiphase arrival tracking method (referred to as the triangular shortest-path method, TSPM) in triangular (2D) or tetrahedral (3D) cell models with a linearized inversion solver (a damped minimum-norm, constrained least-squares problem solved using the conjugate gradient method, DMNCLS-CG) to formulate a simultaneous traveltime inversion method for updating both velocity and reflector geometry by using multiphase arrival times. In the triangular/tetrahedral cells, we deduce the partial derivative of the velocity variation with respect to the depth change of the reflector. The numerical simulation results show that the computational accuracy can be tuned to a high precision in forward modeling and that the irregular velocity anomaly and reflector geometry can be accurately captured in the simultaneous inversion, because the triangular/tetrahedral cells can easily stitch the irregular topography or subsurface interfaces.
NASA Astrophysics Data System (ADS)
Anderson, Monica; David, Phillip
2007-04-01
Implementation of an intelligent, automated target acquisition and tracking system alleviates the need for operators to monitor video continuously, and such a system could identify situations that fatigued operators could easily miss. If an automated acquisition and tracking system plans motions to maximize a coverage metric, how does the performance of that system change when the user intervenes and manually moves the camera? How can the operator give input to the system about what is important and understand how that relates to the overall task balance between surveillance and coverage? In this paper, we address these issues by introducing a new formulation of the average linear uncovered length (ALUL) metric, specially designed for use in surveilling urban environments. This metric coordinates the often competing goals of acquiring new targets and tracking existing targets. In addition, it provides current system performance feedback to system users in terms of the system's theoretical maximum and minimum performance. We show the successful integration of the algorithm via simulation.
Protograph LDPC Codes with Node Degrees at Least 3
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher
2006-01-01
In this paper we present protograph codes with a small number of degree-3 nodes and one high-degree node. The iterative decoding thresholds for the proposed rate-1/2 codes are lower, by about 0.2 dB, than those of the best known irregular LDPC codes with degree at least 3. The main motivation is to gain linear minimum distance to achieve a low error floor, and to construct rate-compatible protograph-based LDPC codes for fixed block length that simultaneously achieve a low iterative decoding threshold and linear minimum distance. We start with a rate-1/2 protograph LDPC code with degree-3 nodes and one high-degree node. Higher-rate codes are obtained by connecting check nodes with degree-2 non-transmitted nodes. This is equivalent to constraint combining in the protograph. The case where all constraints are combined corresponds to the highest-rate code. This constraint must be connected to nodes of degree at least three for the graph to have linear minimum distance. Thus having node degree at least 3 at rate 1/2 guarantees that the linear-minimum-distance property is preserved for higher rates. Through examples we show that an iterative decoding threshold as low as 0.544 dB can be achieved for small protographs with node degrees at least three. A family of low- to high-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
Flexure Based Linear and Rotary Bearings
NASA Technical Reports Server (NTRS)
Voellmer, George M. (Inventor)
2016-01-01
A flexure based linear bearing includes top and bottom parallel rigid plates; first and second flexures connecting the top and bottom plates and constraining exactly four degrees of freedom of relative motion of the plates, the four degrees of freedom being X and Y axis translation and rotation about the X and Y axes; and a strut connecting the top and bottom plates and further constraining exactly one degree of freedom of the plates, the one degree of freedom being one of Z axis translation and rotation about the Z axis.
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
Automated design of minimum drag light aircraft fuselages and nacelles
NASA Technical Reports Server (NTRS)
Smetana, F. O.; Fox, S. R.; Karlin, B. E.
1982-01-01
The constrained minimization algorithm of Vanderplaats is applied to the problem of designing minimum drag faired bodies such as fuselages and nacelles. Body drag is computed by a variation of the Hess-Smith code. This variation includes a boundary layer computation. The encased payload provides arbitrary geometric constraints, specified a priori by the designer, below which the fairing cannot shrink. The optimization may include engine cooling air flows entering and exhausting through specific port locations on the body.
Source-space ICA for MEG source imaging.
Jonmohamadi, Yaqub; Jones, Richard D
2016-02-01
One of the most widely used approaches in electroencephalography (EEG)/magnetoencephalography (MEG) source imaging is application of an inverse technique (such as dipole modelling or sLORETA) on the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, the minimum-variance beamformers offer a high spatial resolution. However, in order to have both the high spatial resolution of the beamformer and the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA both in simulation and in real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally from a temporal perspective. Real MEG from two healthy subjects with visual stimuli was also used to compare the performance of sensor-space ICA and source-space ICA. We have also proposed a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space-ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
Suppression of turbulent transport in NSTX internal transport barriers
NASA Astrophysics Data System (ADS)
Yuh, Howard
2008-11-01
Electron transport will be important for ITER, where fusion alphas and high-energy beam ions will primarily heat electrons. In NSTX, internal transport barriers (ITBs) are observed in reversed (negative) shear discharges, where diffusivities for the electron and ion thermal channels and momentum are reduced. While neutral beam heating can produce ITBs in both electron and ion channels, High Harmonic Fast Wave (HHFW) heating can produce electron thermal ITBs under reversed magnetic shear conditions without momentum input. Interestingly, the location of the electron ITB does not necessarily match that of the ion ITB: the electron ITB correlates well with the minimum in the magnetic shear determined by Motional Stark Effect (MSE) [1] constrained equilibria, whereas the ion ITB correlates better with the maximum ExB shearing rate. Measured electron temperature gradients can exceed critical linear thresholds for ETG instability calculated by linear gyrokinetic codes in the ITB confinement region. The high-k microwave scattering diagnostic [2] shows reduced local density fluctuations at wavenumbers characteristic of electron turbulence for discharges with strongly negative magnetic shear versus weakly negative or positive magnetic shear. Fluctuation reductions are found to be spatially and temporally correlated with the local magnetic shear. These results are consistent with nonlinear gyrokinetic simulation predictions showing reduced electron transport under negative magnetic shear conditions despite linear instability [3]. Electron transport improvement via negative magnetic shear rather than ExB shear highlights the importance of current profile control in ITER and future devices. [1] F.M. Levinton, H. Yuh et al., PoP 14, 056119 [2] D.R. Smith, E. Mazzucato et al., RSI 75, 3840 [3] Jenko, F. and Dorland, W., PRL 89, 225001
NASA Astrophysics Data System (ADS)
Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu
2009-03-01
A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. The LCAF is placed before an interpolated timing recovery (ITR) PLL unit to improve the quality of the phase error calculation by using an adaptively equalized partial response (PR) signal. The coefficient update of an asynchronously sampled adaptive FIR filter with a least-mean-square (LMS) algorithm is constrained by a projection matrix in order to suppress phase shift in the tap coefficients of the adaptive filter. We developed projection matrices suitable for Blu-ray disc (BD) drive systems by numerical simulation and characterized their properties. We then implemented the read channel system of the ITR PLL with an LCAF model on an FPGA board for experiments. Results show that the LCAF improves the tilt margins of 30-gigabyte (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) with sufficient LMS adaptation stability.
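As a rough illustration of the constrained coefficient update described above (not the authors' BD read-channel implementation), the sketch below projects each LMS step onto a designer-chosen subspace; the symmetric-tap (linear-phase) projection used here is one simple choice of projection matrix for suppressing tap phase shift:

```python
# Toy linearly constrained LMS: each update is projected so the taps remain
# symmetric (linear phase), one simple way to suppress tap phase shift.
import numpy as np

def constrained_lms(x, d, n_taps=11, mu=0.01):
    w = np.zeros(n_taps)
    J = np.fliplr(np.eye(n_taps))
    P = 0.5 * (np.eye(n_taps) + J)          # projection onto symmetric taps
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]           # regressor, newest sample first
        e = d[n] - w @ u                    # equalizer error
        w = P @ (w + mu * e * u)            # LMS step followed by projection
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
d = np.concatenate([np.zeros(6), x[:-6]])   # target: pure delay to center tap
w = constrained_lms(x, d)                   # converges to a symmetric impulse
```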
Wang, Cong; Du, Hua-qiang; Zhou, Guo-mo; Xu, Xiao-jun; Sun, Shao-bo; Gao, Guo-long
2015-05-01
This research focused on the application of remotely sensed imagery from an unmanned aerial vehicle (UAV) with high spatial resolution to the estimation of crown closure of moso bamboo forest based on a geometric-optical model, and analyzed the influence of unconstrained and fully constrained linear spectral mixture analysis (SMA) on the accuracy of the estimates. The results demonstrated that the combination of UAV remotely sensed imagery and the geometric-optical model could, to some degree, achieve the estimation of crown closure. However, the different SMA methods led to significant differences in estimation accuracy. Compared with unconstrained SMA, the fully constrained linear SMA method resulted in higher accuracy of the estimated values against the measured values acquired during the field survey, with a coefficient of determination (R2) of 0.63, significant at the 0.01 level. The root mean square error (RMSE) of approximately 0.04 was low, indicating that fully constrained linear SMA yields crown closure estimates closer to the actual conditions in moso bamboo forest.
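A small sketch of the fully constrained linear SMA step referred to above, i.e., least squares with non-negativity and sum-to-one abundance constraints; the endmember matrix and pixel values are toy data:

```python
# Fully constrained linear unmixing: abundances are non-negative and sum to 1.
import numpy as np
from scipy.optimize import minimize

def fcls(E, r):
    """Fully constrained least squares: min ||E a - r||^2, a >= 0, sum(a) = 1."""
    p = E.shape[1]
    cons = ({'type': 'eq', 'fun': lambda a: np.sum(a) - 1.0},)
    res = minimize(lambda a: np.sum((E @ a - r) ** 2), np.full(p, 1.0 / p),
                   bounds=[(0.0, 1.0)] * p, constraints=cons, method='SLSQP')
    return res.x

E = np.array([[0.1, 0.8], [0.4, 0.3], [0.9, 0.1]])   # two endmembers, three bands
r = 0.6 * E[:, 0] + 0.4 * E[:, 1]                     # synthetic mixed pixel
print(fcls(E, r))                                     # -> approx [0.6, 0.4]
```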
Protograph based LDPC codes with minimum distance linearly growing with block size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Sam; Thorpe, Jeremy
2005-01-01
We propose several LDPC code constructions that simultaneously achieve good threshold and error floor performance. Minimum distance is shown to grow linearly with block size (similar to regular codes of variable degree at least 3) by considering ensemble average weight enumerators. Our constructions are based on projected graph, or protograph, structures that support high-speed decoder implementations. As with irregular ensembles, our constructions are sensitive to the proportion of degree-2 variable nodes. A code with too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code with too many such nodes tends not to exhibit a minimum distance that grows linearly in block length. In this paper we also show that precoding can be used to lower the threshold of regular LDPC codes. The decoding thresholds of the proposed codes, which have minimum distance increasing linearly with block size, outperform those of regular LDPC codes. Furthermore, a family of low- to high-rate codes, with thresholds that adhere closely to their respective channel capacity thresholds, is presented. Simulation results for a few example codes show that the proposed codes have low error floors as well as good threshold SNR performance.
NASA Astrophysics Data System (ADS)
Los, S. O.
2015-06-01
A model was developed to simulate spatial, seasonal and interannual variations in vegetation in response to temperature, precipitation and atmospheric CO2 concentrations; the model addresses shortcomings in current implementations. The model uses the minimum of 12 temperature and precipitation constraint functions to simulate NDVI. Functions vary based on the Köppen-Trewartha climate classification to take adaptations of vegetation to climate into account. The simulated NDVI, referred to as the climate constrained vegetation index (CCVI), captured the spatial variability (0.82 < r < 0.87), seasonal variability (median r = 0.83) and interannual variability (median global r = 0.24) in NDVI. The CCVI simulated the effects of adverse climate on vegetation during the 1984 drought in the Sahel and during the dust bowls of the 1930s and 1950s in the Great Plains of North America. A global CO2 fertilisation effect was found in NDVI data, similar in magnitude to earlier estimates (8% for the 20th century). This effect increased linearly with the simple ratio, a transformation of the NDVI. Three CCVI scenarios, based on climate simulations using the representative concentration pathway RCP4.5, showed a greater sensitivity of vegetation to precipitation in Northern Hemisphere mid-latitudes than is currently implemented in climate models. This higher sensitivity is important for assessing the impact of climate variability on vegetation, in particular on agricultural productivity.
Investigation on Constrained Matrix Factorization for Hyperspectral Image Analysis
2005-07-25
Keywords: matrix factorization; nonnegative matrix factorization; linear mixture model; unsupervised linear unmixing; hyperspectral imagery. Hyperspectral imagery's spatial resolution permits different materials to be present in the area covered by a single pixel. The linear mixture model treats a pixel reflectance r as a linear mixture of the endmember signatures m1, m2, ..., mP, i.e., r = Mα + n, where the term n is included to account for noise.
NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide
NASA Technical Reports Server (NTRS)
Miura, H.; Schmit, L. A., Jr.
1979-01-01
A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear, constrained and unconstrained function minimization problems is presented. The algorithm is the sequence of unconstrained minimizations technique (SUMT), using Newton's method for the unconstrained function minimizations. The use of NEWSUMT and the definitions of all parameters are described.
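NEWSUMT itself is a Fortran code; the toy sketch below only illustrates the SUMT idea of solving a sequence of unconstrained minimizations with a progressively tightened penalty (an exterior quadratic penalty is used here for simplicity, whereas NEWSUMT uses interior penalties with Newton's method):

```python
# Toy SUMT: a sequence of unconstrained problems with a growing penalty weight.
import numpy as np
from scipy.optimize import minimize

def sumt(f, g, x0, w=1.0, grow=10.0, rounds=6):
    """Exterior-penalty SUMT sketch: the constraint g(x) >= 0 is enforced by
    penalizing violations ever more heavily each round."""
    x = np.asarray(x0, dtype=float)
    for _ in range(rounds):
        phi = lambda x, w=w: f(x) + w * min(g(x), 0.0) ** 2   # penalized objective
        x = minimize(phi, x, method='BFGS').x                 # unconstrained stage
        w *= grow
    return x

# minimize x0^2 + x1^2 subject to x0 + x1 >= 1
x = sumt(lambda x: x @ x, lambda x: x[0] + x[1] - 1.0, np.zeros(2))
print(x)   # -> approx [0.5, 0.5]
```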
Magnetic Footpoint Velocities: A Combination Of Minimum Energy Fit AndLocal Correlation Tracking
NASA Astrophysics Data System (ADS)
Belur, Ravindra; Longcope, D.
2006-06-01
Many numerical and time-dependent MHD simulations of the solar atmosphere require underlying velocity fields that are consistent with the induction equation. Recently, Longcope (2004) introduced a new technique to infer the photospheric velocity field from a sequence of vector magnetograms in agreement with the induction equation. The method, the Minimum Energy Fit (MEF), determines a set of consistent velocities and selects the one with the smallest overall flow speed by minimizing an energy functional. The inferred velocity can be further constrained by velocity information obtained from other techniques. With this adopted technique we expect the inferred velocity to be close to the photospheric velocity of magnetic footpoints. Here, we demonstrate that horizontal velocities inferred from local correlation tracking (LCT) can be used to constrain the MEF velocities. We also apply this technique to actual vector magnetogram sequences and compare these velocities with velocities from LCT alone. This work is supported by the DoD MURI and NSF SHINE programs.
Planning maximally smooth hand movements constrained to nonplanar workspaces.
Liebermann, Dario G; Krasovsky, Tal; Berman, Sigal
2008-11-01
The article characterizes hand paths and speed profiles for movements performed in a nonplanar, 2-dimensional workspace (a hemisphere of constant curvature). The authors assessed endpoint kinematics (i.e., paths and speeds) under the minimum-jerk model assumptions and calculated minimal-amplitude paths (geodesics) and the corresponding speed profiles. The authors also calculated hand speeds using the 2/3 power law. They then compared modeled results with the empirical observations. In all, 10 participants moved their hands forward and backward from a common starting position toward 3 targets located within a hemispheric workspace of small or large curvature. Comparisons of modeled-observed differences using 2-way RM-ANOVAs showed that movement direction had no clear influence on hand kinetics (p < .05). Workspace curvature affected the hand paths, which seldom followed geodesic lines. Constraining the paths to different curvatures did not affect the hand speed profiles. Minimum-jerk speed profiles closely matched the observations and were superior to those predicted by the 2/3 power law (p < .001). The authors conclude that speed and path cannot be unambiguously linked under the minimum-jerk assumption when individuals move the hand in a nonplanar 2-dimensional workspace. In such a case, the hands do not follow geodesic paths, but they preserve the speed profile, regardless of the geometric features of the workspace.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
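For finite (toy) state and action spaces, the linear programming formulation referred to above can be written in discounted occupation measures x(s, a); the sketch below uses made-up transition, cost, and budget data purely for illustration:

```python
# Toy LP formulation of a constrained discounted MDP in occupation measures.
import numpy as np
from scipy.optimize import linprog

nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))    # P[s, a, s'] transition kernel
c = rng.random((nS, nA))                          # cost to minimize
d = rng.random((nS, nA))                          # constrained auxiliary cost
mu = np.full(nS, 1.0 / nS)                        # initial distribution
budget = 0.8 * d.max() / (1 - gamma)              # illustrative d-budget

# Flow constraints: sum_a x(s',a) - gamma * sum_{s,a} P(s'|s,a) x(s,a) = mu(s')
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = float(sp == s) - gamma * P[s, a, sp]

res = linprog(c.ravel(), A_ub=[d.ravel()], b_ub=[budget],
              A_eq=A_eq, b_eq=mu, bounds=(0, None))
x = res.x.reshape(nS, nA)
policy = x / x.sum(axis=1, keepdims=True)         # optimal stationary policy
```

The existence of a constrained optimal stationary policy established in the paper corresponds here to the LP having an optimal solution whose normalized occupation measure defines the policy.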
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well-known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
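A minimal numerical example of the Tikhonov-regularized minimum norm solution discussed above, with toy dimensions and a heuristic choice of the regularization parameter:

```python
# Tikhonov-regularized minimum norm inverse: x = L^T (L L^T + lam I)^{-1} b.
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((32, 500))       # many more sources than sensors
x_true = np.zeros(500); x_true[42] = 1.0 # a single active source
b = L @ x_true + 0.01 * rng.standard_normal(32)

lam = 1e-2 * np.trace(L @ L.T) / 32      # heuristic regularization level
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(32), b)
```

Without the `lam` term this reduces to the bare minimum norm solution, which amplifies measurement noise; the regularization trades a small bias for stability.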
Testing the Einstein's equivalence principle with polarized gamma-ray bursts
NASA Astrophysics Data System (ADS)
Yang, Chao; Zou, Yuan-Chuan; Zhang, Yue-Yang; Liao, Bin; Lei, Wei-Hua
2017-07-01
Einstein's equivalence principle can be tested using parametrized post-Newtonian parameters, of which the parameter γ has been constrained by comparing the arrival times of photons with different energies. It has been constrained by a variety of astronomical transient events, such as gamma-ray bursts (GRBs), fast radio bursts, and pulses of pulsars, with the most stringent constraint being Δγ ≲ 10⁻¹⁵. In this Letter, we consider the arrival times of light with different circular polarizations. Linearly polarized light is the combination of two circularly polarized components. If the arrival time difference between the two circularly polarized components is too large, their combination may lose its linear polarization. We constrain the value of Δγ_p < 1.6 × 10⁻²⁷ by the measurement of the polarization of GRB 110721A, which is the most stringent constraint ever achieved.
NASA Astrophysics Data System (ADS)
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
2016-10-01
The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
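A simplified sketch of the SGP iteration for a box-constrained problem; the diagonal scaling, fixed steplength, and Euclidean (rather than scaled-metric) projection below are illustrative simplifications of the scheme analyzed in the paper:

```python
# Schematic scaled gradient projection (SGP) for box-constrained minimization.
import numpy as np

def sgp(grad, x0, lo, hi, alpha=0.1, lam=1.0, iters=200):
    x = np.clip(np.asarray(x0, float), lo, hi)
    for _ in range(iters):
        D = 1.0 / (1.0 + np.abs(x))                  # diagonal scaling (illustrative)
        y = np.clip(x - alpha * D * grad(x), lo, hi) # scaled step + projection
        x = x + lam * (y - x)                        # move along feasible direction
    return x

# minimize ||x - c||^2 over the box [0, 1]^3
c = np.array([1.5, -0.3, 0.4])
print(sgp(lambda x: 2 * (x - c), np.zeros(3), 0.0, 1.0))   # -> [1.0, 0.0, 0.4]
```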
Solution of a Complex Least Squares Problem with Constrained Phase.
Bydder, Mark
2010-12-30
The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
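The sketch below illustrates the problem setup by brute force rather than by the paper's direct method: the common phase is scanned over a grid, and for each candidate phase the remaining real-valued least squares problem is solved by stacking real and imaginary parts:

```python
# Phase-constrained least squares: find real u and one phase phi minimizing
# ||A (u e^{i phi}) - b|| for complex A, b. Grid search over phi, for clarity.
import numpy as np

def phase_constrained_lstsq(A, b, n_phi=360):
    best = None
    for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):  # signed u covers the rest
        Ar = A * np.exp(1j * phi)
        M = np.vstack([Ar.real, Ar.imag])       # real LS via stacked parts
        y = np.concatenate([b.real, b.imag])
        u, *_ = np.linalg.lstsq(M, y, rcond=None)
        r = np.linalg.norm(M @ u - y)
        if best is None or r < best[0]:
            best = (r, u, phi)
    return best[1], best[2]                      # real amplitudes, common phase

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
b = A @ (np.array([1.0, 2.0, 0.5]) * np.exp(1j * 0.7))
u, phi = phase_constrained_lstsq(A, b)           # recovers phase ~0.7
```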
Papoutsi, Athanasia; Sidiropoulou, Kyriaki; Poirazi, Panayiota
2014-07-01
Technological advances have unraveled the existence of small clusters of co-active neurons in the neocortex. The functional implications of these microcircuits are in large part unexplored. Using a heavily constrained biophysical model of a L5 PFC microcircuit, we recently showed that these structures act as tunable modules of persistent activity, the cellular correlate of working memory. Here, we investigate the mechanisms that underlie the emergence (ON) and termination (OFF) of persistent activity and search for the minimum network size required for expressing these states within physiological regimes. We show that (a) NMDA-mediated dendritic spikes gate the induction of persistent firing in the microcircuit. (b) The minimum network size required for persistent activity induction is inversely proportional to the synaptic drive of each excitatory neuron. (c) Relaxation of connectivity and synaptic delay constraints eliminates the gating effect of NMDA spikes, albeit at the cost of much larger networks. (d) Persistent activity termination by increased inhibition depends on the strength of the synaptic input and is negatively modulated by dADP. (e) Slow synaptic mechanisms and network activity contain predictive information regarding the ability of a given stimulus to turn persistent firing ON and/or OFF in the microcircuit model. Overall, this study zooms out from dendrites to cell assemblies and suggests a tight interaction between dendritic non-linearities and network properties (size/connectivity) that may facilitate the short-term memory function of the PFC.
Constrained signal reconstruction from wavelet transform coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brislawn, C.M.
1991-12-31
A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT.
2013-08-14
Keywords: Connectivity Graph; Graph Search; Bounded Disturbances; Linear Time-Varying (LTV); Clohessy-Wiltshire-Hill (CWH). The linearization of the relative motion model given by the Hill-Clohessy-Wiltshire (HCW) equations is used [14]. Nonlinear equations of motion describe the motion of the debris; for δr << R, the linearized HCW equations are used in discrete time.
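For reference, the linearized HCW equations mentioned in the excerpt take the standard form for a circular reference orbit with mean motion n (radial x, along-track y, cross-track z):

```latex
\begin{aligned}
\ddot{x} - 2n\dot{y} - 3n^{2}x &= 0,\\
\ddot{y} + 2n\dot{x} &= 0,\\
\ddot{z} + n^{2}z &= 0.
\end{aligned}
```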
Competitive learning with pairwise constraints.
Covões, Thiago F; Hruschka, Eduardo R; Ghosh, Joydeep
2013-01-01
Constrained clustering has been an active research topic over the last decade. Most studies focus on batch-mode algorithms. This brief introduces two algorithms for on-line constrained learning, named on-line linear constrained vector quantization error (O-LCVQE) and constrained rival penalized competitive learning (C-RPCL). The former is a variant of the LCVQE algorithm for on-line settings, whereas the latter is an adaptation of the (on-line) RPCL algorithm to deal with constrained clustering. The accuracy results--in terms of the normalized mutual information (NMI)--from experiments with nine datasets show that the partitions induced by O-LCVQE are competitive with those found by the (batch-mode) LCVQE. Compared with this formidable baseline algorithm, it is surprising that C-RPCL can provide better partitions (in terms of the NMI) for most of the datasets. Also, experiments on a large dataset show that on-line algorithms for constrained clustering can significantly reduce the computational time.
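A schematic of the plain RPCL update underlying C-RPCL (the pairwise-constraint handling of the paper is omitted here); the learning and de-learning rates are illustrative:

```python
# Rival penalized competitive learning: the winning prototype moves toward the
# sample while its closest rival is pushed slightly away (de-learned).
import numpy as np

def rpcl(X, k=3, lr=0.05, delearn=0.005, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), k, replace=False)].astype(float)  # prototypes
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            dists = np.linalg.norm(W - x, axis=1)
            win, rival = np.argsort(dists)[:2]
            W[win] += lr * (x - W[win])            # learn
            W[rival] -= delearn * (x - W[rival])   # de-learn the rival
    return W

X = np.vstack([np.random.default_rng(1).normal(c, 0.1, (50, 2))
               for c in ([0, 0], [3, 0], [0, 3])])
print(rpcl(X))   # prototypes near the three cluster centers
```

The de-learning step is what lets RPCL drive surplus prototypes away, in contrast to plain competitive learning.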
A linear programming approach to characterizing norm bounded uncertainty from experimental data
NASA Technical Reports Server (NTRS)
Scheid, R. E.; Bayard, D. S.; Yam, Y.
1991-01-01
The linear programming spectral overbounding and factorization (LPSOF) algorithm, an algorithm for finding a minimum phase transfer function of specified order whose magnitude tightly overbounds a specified nonparametric function of frequency, is introduced. This method has direct application to transforming nonparametric uncertainty bounds (available from system identification experiments) into parametric representations required for modern robust control design software (i.e., a minimum-phase transfer function multiplied by a norm-bounded perturbation).
NASA Technical Reports Server (NTRS)
Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.
1995-01-01
Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) today must face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating the gridded TCO2 fields needed to constrain geochemical models.
EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing.
Cohen, Michael X; Ridderinkhof, K Richard
2013-01-01
Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30-50 Hz), followed by a later alpha-band (8-12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4-8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions.
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine the in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to instrumentation noise.
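A toy linear-Gaussian stand-in for the statistically constrained MiMSEE described above: prior knowledge of the parameter ranges enters through the signal covariance R_x and the a priori noise information through R_n; the forward matrix H and all numbers are made up:

```python
# Linear MMSE estimate with prior (signal) and noise covariance constraints.
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((20, 4))       # forward model: resistivities -> potentials
R_x = np.diag([0.5, 0.5, 0.8, 0.8])    # prior covariance from physiological ranges
R_n = 0.01 * np.eye(20)                # instrumentation/linearization noise
x = rng.multivariate_normal(np.zeros(4), R_x)
y = H @ x + rng.multivariate_normal(np.zeros(20), R_n)

G = R_x @ H.T @ np.linalg.inv(H @ R_x @ H.T + R_n)   # MMSE gain matrix
x_hat = G @ y                                        # constrained estimate
```

Setting R_x very large recovers the unconstrained least-squares behavior of the LSEE; the statistical constraint shrinks the estimate toward physiologically plausible values.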
NASA Astrophysics Data System (ADS)
Zhu, Dechao; Deng, Zhongmin; Wang, Xingwei
2001-08-01
In the present paper, a series of hierarchical warping functions is developed to analyze the static and dynamic problems of thin-walled composite laminated helicopter rotors composed of several layers with a single closed cell. This method is a development and extension of the traditional constrained warping theory of thin-walled metallic beams, which has proved very successful since the 1940s. The warping distribution along the perimeter of each layer is expanded into a series of successively corrective warping functions, with the traditional warping function caused by free torsion or free bending as the first term, and is assumed to be piecewise linear along the thickness direction of the layers. The governing equations are derived based upon the variational principle of minimum potential energy for static analysis and the Rayleigh quotient for free vibration analysis. The hierarchical finite element method is then introduced to form a numerical algorithm. Both static and natural vibration problems of sample box beams are analyzed with the present method to show the main mechanical behavior of the thin-walled composite laminated helicopter rotor.
Control of the constrained planar simple inverted pendulum
NASA Technical Reports Server (NTRS)
Bavarian, B.; Wyman, B. F.; Hemami, H.
1983-01-01
Control of a constrained planar inverted pendulum by eigenstructure assignment is considered. Linear feedback is used to stabilize and decouple the system in such a way that specified subspaces of the state space are invariant for the closed-loop system. The effectiveness of the feedback law is tested by digital computer simulation. Pre-compensation by an inverse plant is used to improve performance.
How to Test the SME with Space Missions?
NASA Technical Reports Server (NTRS)
Hees, A.; Lamine, B.; Le Poncin-Lafitte, C.; Wolf, P.
2013-01-01
In this communication, we focus on possibilities to constrain SME coefficients using Cassini and Messenger data. We present simulations of radio science observables within the framework of the SME, identify the linear combinations of SME coefficients the observations depend on and determine the sensitivity of these measurements to the SME coefficients. We show that these datasets are very powerful for constraining SME coefficients.
Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions
2007-09-01
References cited include Anton and Rorres (2005), Elementary Linear Algebra, New York: John Wiley and Sons, and Avitabile (2001, January), Experimental Modal Analysis: A Simple Non-Mathematical Presentation. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies with respect to the variables under consideration. Theory: the general problem statement for a non-linear constrained optimization problem is to minimize an objective function f(x) subject to constraints.
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Energy Efficiency Building Code for Commercial Buildings in Sri Lanka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, John; Greenberg, Steve; Rubinstein, Francis
2000-09-30
1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or productivity of the occupants, and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and to provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.
Unifying Rules for Aquatic Locomotion
NASA Astrophysics Data System (ADS)
Saadat, Mehdi; Domel, August; di Santo, Valentina; Lauder, George; Haj-Hariri, Hossein
2016-11-01
Strouhal number, St (= fA/U), a scaling parameter that relates speed, U, to the tail-beat frequency, f, and tail-beat amplitude, A, has been used many times to describe animal locomotion. It has been observed that swimming animals cruise at 0.2 ≤ St ≤ 0.4. Using simple dimensional and scaling analyses supported by new experimental evidence from a self-propelled fish-like swimmer, we show that when cruising at minimum hydrodynamic input power, St is predetermined and is only a function of the shape, i.e., drag coefficient and area. The narrow range for St, 0.2-0.4, has previously been associated with optimal propulsive efficiency. However, St alone is insufficient for determining optimal motion. We show that hydrodynamic input power (energy usage to propel over a unit distance) in fish locomotion is minimized at all cruising speeds when A* (= A/L), a scaling parameter that relates tail-beat amplitude, A, to the length of the swimmer, L, is constrained to the narrow range 0.15-0.25. Our analysis proposes a constraint on A*, in addition to the previously found constraint on St, to fully describe the optimal swimming gait of fast swimmers. A survey of kinematics for dolphins, as well as new data for trout, shows that the ranges of St and A* for fast swimmers are indeed constrained to 0.2-0.4 and 0.15-0.25, respectively. Our findings provide a physical explanation as to why fast aquatic swimmers cruise with a relatively constant tail-beat amplitude of approximately 20 percent of body length, while their swimming speed is linearly correlated with tail-beat frequency.
A globally convergent LCL method for nonlinear optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedlander, M. P.; Saunders, M. A.; Mathematics and Computer Science
2005-01-01
For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form 'minimize an augmented Lagrangian function subject to linearized constraints.' Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. Its constrained nature results from safety concerns, while its multiple objectives originate from the requirement that non-targeted regions remain unaffected. In this paper, we propose a mixed integer linear programming formulation that successfully addresses these challenges. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals, helping researchers avoid wasting time on goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
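A generic toy MILP of this flavor (not the authors' formulation), sketched with SciPy's milp solver: continuous electrode currents, binary on/off choices, a minimum activation at the target, and a cap on an off-target region; all coefficients are invented:

```python
# Toy MILP: minimize injected current with binary electrode on/off choices.
# Requires SciPy >= 1.9 for scipy.optimize.milp.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

activation = np.array([[0.9, 0.2, 0.4],     # effect on the target region
                       [0.1, 0.8, 0.3]])    # effect on an off-target region
c = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])   # variables: currents x, then z

cons = [
    LinearConstraint(np.hstack([activation[0], np.zeros(3)])[None, :], lb=1.0),
    LinearConstraint(np.hstack([activation[1], np.zeros(3)])[None, :], ub=0.4),
    # x_j - 2 z_j <= 0: current flows only through switched-on electrodes
    LinearConstraint(np.hstack([np.eye(3), -2.0 * np.eye(3)]), ub=np.zeros(3)),
    LinearConstraint(np.hstack([np.zeros(3), np.ones(3)])[None, :], ub=2.0),
]
res = milp(c, constraints=cons,
           integrality=np.array([0, 0, 0, 1, 1, 1]),
           bounds=Bounds(np.zeros(6), np.array([2.0, 2.0, 2.0, 1.0, 1.0, 1.0])))
print(res.x)      # electrode currents followed by their on/off flags
```

An infeasible status from the solver is exactly the "conclusive feasibility check" the abstract highlights: the stimulation goals cannot be met under the chosen setup.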
Constraint elimination in dynamical systems
NASA Technical Reports Server (NTRS)
Singh, R. P.; Likins, P. W.
1989-01-01
Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
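A minimal numerical illustration of the SVD-based constraint elimination idea: an orthonormal null-space basis of the constraint matrix parameterizes the independent velocities, so the reduced equations of motion have minimum dimension; the constraint below is a toy example:

```python
# Constraint elimination via SVD: for constraints A qdot = 0, a null-space
# basis N gives independent velocities u with qdot = N u.
import numpy as np

A = np.array([[1.0, -1.0, 0.0]])       # one constraint among three coordinates
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
N = Vt[rank:].T                        # orthonormal null-space basis, shape (3, 2)

u = np.array([0.3, -1.2])              # arbitrary independent velocities
qdot = N @ u                           # automatically satisfies the constraint
print(A @ qdot)                        # -> approx [0.]
```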
NASA Technical Reports Server (NTRS)
Kaula, W. M.
1993-01-01
The geoid and topography heights of Atla Regio and Beta Regio, both peaks and slopes, appear explicable as steady-state plumes if non-linear viscosity η(τ, ε) is taken into account. Strongly constrained by the data are an effective plume depth of about 700 km, with a temperature anomaly thereat of about 30 degrees, leading to more than 400 degrees at the plume head. Also well constrained is the combination Qη/s₀⁴ = (volume flow rate) × (viscosity)/(plume radius)⁴: about 11 Pa/m/sec. The topographic slopes dh/ds constrain the combination Q/A, where A is the thickness of the spreading layer, since the slope varies inversely with velocity. The geoid slopes dN/ds require enhancement of the deeper flow, as expected from non-linear viscosity. The Beta data are best fit by Q = 500 m³/s and A = 140 km; the Atla, by Q = 440 m³/s and A = 260 km. The dynamic contribution to the topographic slope is minor.
NASA Technical Reports Server (NTRS)
Abercromby, Kira J.; Rapp, Jason; Bedard, Donald; Seitzer, Patrick; Cardona, Tommaso; Cowardin, Heather; Barker, Ed; Lederer, Susan
2013-01-01
The Constrained Linear Least Squares model is generally more accurate than the "human-in-the-loop" approach. However, a human in the loop can remove materials that make no sense. The speed of the model in determining a first cut at the material ID makes it a viable option for spectral unmixing of debris objects.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions under different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency, compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
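The LFP core of such a model can be illustrated by the classical Charnes-Cooper transformation, which turns a linear fractional program into an ordinary LP; the sketch below uses toy data and is not the GFCCFP model itself:

```python
# Charnes-Cooper: max (c'x + a)/(d'x + b) s.t. Gx <= h, x >= 0 becomes an LP
# in (y, t) with x = y / t, provided the denominator is positive.
import numpy as np
from scipy.optimize import linprog

c, a = np.array([2.0, 1.0]), 0.0
d, b = np.array([1.0, 3.0]), 1.0
G, h = np.array([[1.0, 1.0]]), np.array([4.0])

obj = np.concatenate([-c, [-a]])           # maximize c'y + a t
A_ub = np.hstack([G, -h[:, None]])         # G y - h t <= 0
A_eq = np.concatenate([d, [b]])[None, :]   # d'y + b t = 1 (normalization)
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(len(h)), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
x = y / t                                  # optimal solution of the ratio problem
print(x)                                   # -> approx [4, 0], ratio 1.6
```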
Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.
Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian
2018-05-23
Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
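For illustration only (this is not the paper's algorithm), the sketch below shows how a segment's least-squares line can be maintained from five running sums, giving O(1) time and O(1) memory per incoming sample; the segment-splitting logic and the per-segment maximum error guarantee are omitted:

```python
# Constant-time, constant-memory on-line linear fit from running sums.
class OnlineSegmentFit:
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        # O(1) update: no sample buffer is kept
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def line(self):
        denom = self.n * self.sxx - self.sx ** 2
        slope = (self.n * self.sxy - self.sx * self.sy) / denom if denom else 0.0
        intercept = (self.sy - slope * self.sx) / self.n
        return slope, intercept

fit = OnlineSegmentFit()
for t, v in enumerate([0.0, 1.1, 1.9, 3.2]):
    fit.add(float(t), v)
print(fit.line())    # -> approx (1.04, -0.01)
```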
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Chao, M
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies (the enabling phenomenology in video compression and encoding techniques) inherent in the dynamic properties of diaphragm motion, together with the geometric shape of the diaphragm boundary and the associated algebraic constraint, significantly reduced the search space of viable parabolic parameters, which could then be effectively optimized by a constrained linear regression approach on the subsequent projections. The innovative algebraic constraints, stipulating the kinetic range of the motion, and the spatial constraint, preventing unphysical deviations, yielded the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired in a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. Automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm across all enrolled patients. This submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images. Conclusion: The new algorithm provides a potential solution for rendering diaphragm motion and ultimately improving tumor motion management in radiation therapy of cancer patients.
Stress-Constrained Structural Topology Optimization with Design-Dependent Loads
NASA Astrophysics Data System (ADS)
Lee, Edmund
Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed that iteratively connects points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of the two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported here results from the joint optimization of variable rate encoding and RVQ direct-sum codebooks. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
Constraining Basin Depth and Fault Displacement in the Malombe Basin Using Potential Field Methods
NASA Astrophysics Data System (ADS)
Beresh, S. C. M.; Elifritz, E. A.; Méndez, K.; Johnson, S.; Mynatt, W. G.; Mayle, M.; Atekwana, E. A.; Laó-Dávila, D. A.; Chindandali, P. R. N.; Chisenga, C.; Gondwe, S.; Mkumbwa, M.; Kalaguluka, D.; Kalindekafe, L.; Salima, J.
2017-12-01
The Malombe Basin is part of the Malawi Rift which forms the southern part of the Western Branch of the East African Rift System. At its southern end, the Malawi Rift bifurcates into the Bilila-Mtakataka and Chirobwe-Ntcheu fault systems and the Lake Malombe Rift Basin around the Shire Horst, a competent block under the Nankumba Peninsula. The Malombe Basin is approximately 70 km from north to south and 35 km at its widest point from east to west, bounded by reversing-polarity border faults. We aim to constrain the depth of the basin to better understand the displacement of each border fault. Our work utilizes two east-west gravity profiles across the basin coupled with Source Parameter Imaging (SPI) derived from a high-resolution aeromagnetic survey. The first gravity profile was done across the northern portion of the basin and the second across the southern portion. Gravity and magnetic data will be used to constrain basement depths and the thickness of the sedimentary cover. Additionally, Shuttle Radar Topography Mission (SRTM) data is used to understand the topographic expression of the fault scarps. Estimates for the minimum displacement of the border faults on either side of the basin were made by adding the elevation of the scarps to the deepest SPI basement estimates at the basin borders. Our preliminary results using SPI and SRTM data show a minimum displacement of approximately 1.3 km for the western border fault; the minimum displacement for the eastern border fault is 740 m. However, SPI merely shows the depth to the first significantly magnetic layer in the subsurface, which may or may not be the actual basement layer. Gravimetric readings are based on subsurface density and thus circumvent issues arising from magnetic layers located above the basement; therefore, we expect to constrain a more accurate basin depth by integrating the gravity profiles. Through more accurate basement depth estimates we also gain more accurate displacement estimates for the basin's faults. Not only do the improved depth estimates serve as a proxy for the viability of hydrocarbon exploration efforts in the region, but the improved displacement estimates also provide a better understanding of extension accommodation within the Malawi Rift.
Number of minimum-weight code words in a product code
NASA Technical Reports Server (NTRS)
Miller, R. L.
1978-01-01
Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
String theory origin of constrained multiplets
NASA Astrophysics Data System (ADS)
Kallosh, Renata; Vercnocke, Bert; Wrase, Timm
2016-09-01
We study the non-linearly realized spontaneously broken supersymmetry of the (anti-)D3-brane action in type IIB string theory. The worldvolume fields are one vector A_μ, three complex scalars φ^i and four 4d fermions λ^0, λ^i. These transform, in addition to the more familiar N = 4 linear supersymmetry, also under 16 spontaneously broken, non-linearly realized supersymmetries. We argue that the worldvolume fields can be packaged into the following constrained 4d non-linear N = 1 multiplets: four chiral multiplets S, Y^i that satisfy S^2 = S Y^i = 0 and contain the worldvolume fermions λ^0 and λ^i; and four chiral multiplets W_α, H^i that satisfy S W_α = S D̄_α̇ H̄^ī = 0 and contain the vector A_μ and the scalars φ^i. We also discuss how placing an anti-D3-brane on top of intersecting O7-planes can lead to an orthogonal multiplet Φ that satisfies S(Φ − Φ̄) = 0, which is particularly interesting for inflationary cosmology.
On the minimum quantum requirement of photosynthesis.
Zeinalov, Yuzeir
2009-01-01
An analysis of the shape of photosynthetic light curves is presented, and the existence of the initial non-linear part is shown to be a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of nonlinearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated for by using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves, or of the minima of the quantum requirement curves, cannot be used to estimate the exact values of the maximum quantum efficiency and the minimum quantum requirement. The maximum quantum efficiency or the minimum quantum requirement should be estimated only after extrapolating the linear part of the quantum requirement curves at higher light intensities to zero light intensity.
Solving LP Relaxations of Large-Scale Precedence Constrained Problems
NASA Astrophysics Data System (ADS)
Bienstock, Daniel; Zuckerberg, Mark
We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
ERIC Educational Resources Information Center
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A
2011-05-01
Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz, and first-order Wiener kernels were computed by cross-correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters whose transfer functions include zeros located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus used for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms differed from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing that incorporate minimum-phase behavior.
NASA Astrophysics Data System (ADS)
Masternak, Tadeusz J.
This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable-order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft with specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by varying angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous-time optimal control problems into NLP problems. In addition, since a mission profile can have varying vehicle dynamics and en-route constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories meeting specified mission parameters and constraints can be determined quickly and used for larger-scale operational and campaign planning and execution.
Design optimization and probabilistic analysis of a hydrodynamic journal bearing
NASA Technical Reports Server (NTRS)
Liniecki, Alexander G.
1990-01-01
A nonlinear constrained optimization of a hydrodynamic bearing was performed, yielding three main design variables: radial clearance, bearing length-to-diameter ratio, and lubricating oil viscosity. As the objective function, a combined model of temperature rise and oil supply was adopted. The optimized model of the bearing was then simulated for a population of 1000 cases using the Monte Carlo statistical method. It appeared that the so-called 'optimal solution' generated more than 50 percent failed bearings, because their minimum oil film thickness violated the stipulated minimum constraint value. As a remedy, a change of oil viscosity is suggested after several sensitivities of variables have been investigated.
A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem
NASA Technical Reports Server (NTRS)
Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.
2002-01-01
In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including circular-to-circular coplanar orbits, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX
2010-08-25
Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, the operation of large-scale public works projects, and the volume of the published literature on this topic clearly indicate the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.) to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the outputs of the SVM (i.e. the parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we developed a problem formulation that avoids the linear increase in the number of the constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers for our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as the International Linear Collider (ILC), as well as for free electron lasers, such as the Linac Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters.
We strongly believe that, beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. in 2007 altered Pavilion's small business status so that it no longer qualifies for Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable model-based fault estimation and correction system for particle accelerators and industrial plants feasible.
Density and lithospheric structure at Tyrrhena Patera, Mars, from gravity and topography data
NASA Astrophysics Data System (ADS)
Grott, M.; Wieczorek, M. A.
2012-09-01
The Tyrrhena Patera highland volcano, Mars, is associated with a relatively well localized gravity anomaly and we have carried out a localized admittance analysis in the region to constrain the density of the volcanic load, the load thickness, and the elastic thickness at the time of load emplacement. The employed admittance model considers loading of an initially spherical surface, and surface as well as subsurface loading is taken into account. Our results indicate that the gravity and topography data available at Tyrrhena Patera is consistent with the absence of subsurface loading, but the presence of a small subsurface load cannot be ruled out. We obtain minimum load densities of 2960 kg m-3, minimum load thicknesses of 5 km, and minimum load volumes of 0.6 × 106 km3. Photogeological evidence suggests that pyroclastic deposits make up at most 30% of this volume, such that the bulk of Tyrrhena Patera is likely composed of competent basalt. Best fitting model parameters are a load density of 3343 kg m-3, a load thickness of 10.8 km, and a load volume of 1.7 × 106 km3. These relatively large load densities indicate that lava compositions are comparable to those at other martian volcanoes, and densities are comparable to those of the martian meteorites. The elastic thickness in the region is constrained to be smaller than 27.5 km at the time of loading, indicating surface heat flows in excess of 24 mW m-2.
CONSTRAINTS ON HYBRID METRIC-PALATINI GRAVITY FROM BACKGROUND EVOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lima, N. A.; Barreto, V. S., E-mail: ndal@roe.ac.uk, E-mail: vsm@roe.ac.uk
2016-02-20
In this work, we introduce two models of the hybrid metric-Palatini theory of gravitation. We explore their background evolution, showing explicitly that one recovers standard General Relativity with an effective cosmological constant at late times. This happens because the Palatini Ricci scalar evolves toward and asymptotically settles at the minimum of its effective potential during cosmological evolution. We then use a combination of cosmic microwave background, supernovae, and baryon acoustic oscillation background data to constrain the models' free parameters. For both models, we are able to constrain the maximum deviation from the gravitational constant G one can have at early times to be around 1%.
Consistent description of kinetic equation with triangle anomaly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pu Shi; Gao Jianhua; Wang Qun
2011-05-01
We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for charge and energy-momentum conservation are satisfied and that the correction terms of the distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading-order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in the one-charge and two-charge cases by solving the constraining equations.
On the formation of granulites
Bohlen, S.R.
1991-01-01
The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35 °C km-1, implying minimum heat flow in excess of 100 mW m-2. Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots, and the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling.
Constrained Surface-Level Gateway Placement for Underwater Acoustic Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Hong
One approach to guaranteeing the performance of underwater acoustic sensor networks is to deploy multiple Surface-level Gateways (SGs) at the surface. This paper addresses the connected (or survivable) Constrained Surface-level Gateway Placement (C-SGP) problem for 3-D underwater acoustic sensor networks. Given a set of candidate locations where SGs can be placed, our objective is to place a minimum number of SGs at a subset of candidate locations such that the network is connected (or 2-connected) from any underwater sensor node to the base station. We propose polynomial-time approximation algorithms for the connected C-SGP problem and the survivable C-SGP problem, respectively. Simulations are conducted to verify our algorithms' efficiency.
Effects of temperature and salinity on light scattering by water
NASA Astrophysics Data System (ADS)
Zhang, Xiaodong; Hu, Lianbo
2010-04-01
A theoretical model of light scattering by water was developed from thermodynamic principles and used to evaluate the effects of temperature and salinity. The results agree with the measurements by Morel to within 1%. Scattering increases with salinity in a non-linear manner, and the empirical linear model underestimates the scattering by seawater for S < 40 psu. Seawater also exhibits an 'anomalous' scattering behavior with a minimum occurring at 24.64 °C for pure water; the temperature of this minimum increases with salinity, reaching 27.49 °C at 40 psu.
Effect of leading-edge load constraints on the design and performance of supersonic wings
NASA Technical Reports Server (NTRS)
Darden, C. M.
1985-01-01
A theoretical and experimental investigation was conducted to assess the effect of leading-edge load constraints on supersonic wing design and performance. In the effort to delay flow separation and the formation of leading-edge vortices, two constrained, linear-theory optimization approaches were used to limit the loadings on the leading edge of a variable-sweep planform design. Experimental force and moment tests were made on two constrained camber wings, a flat uncambered wing, and an optimum design with no constraints. Results indicate that vortex strength and separation regions were mildest on the severely and moderately constrained wings.
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
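This LMI-constrained least-squares formulation can be reproduced with an off-the-shelf semidefinite solver. Below is a minimal sketch (not the authors' code) using CVXPY; the torque model, the simulated data, and the bound values are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

def skew(w):
    # Matrix form of the cross product: skew(w) @ v == np.cross(w, v)
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

rng = np.random.default_rng(0)
J_true = np.array([[10.0, 1.0, 0.5], [1.0, 8.0, 0.2], [0.5, 0.2, 6.0]])

# Simulated test data for Euler's equation: tau = J wdot + w x (J w)
n = 200
w, wdot = rng.normal(size=(n, 3)), rng.normal(size=(n, 3))
tau = np.array([J_true @ wdot[i] + np.cross(w[i], J_true @ w[i])
                for i in range(n)]) + 0.01 * rng.normal(size=(n, 3))

J = cp.Variable((3, 3), symmetric=True)
residuals = [J @ wdot[i] + skew(w[i]) @ (J @ w[i]) - tau[i] for i in range(n)]
objective = cp.Minimize(sum(cp.sum_squares(r) for r in residuals))
# Explicit LMI bounds on the inertia matrix (illustrative values)
constraints = [J >> 0.1 * np.eye(3), J << 20.0 * np.eye(3)]
cp.Problem(objective, constraints).solve()  # needs an SDP solver, e.g. SCS
print(np.round(J.value, 2))
```

Because the residual is linear in J, the objective stays convex and the PSD bounds make the whole problem a semidefinite program, which matches the guaranteed-global-optimum claim above.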
Bhosle, Govind S; Fernandes, Moneesha
2017-11-08
Arginine-rich peptides having the (R-X-R) n motif are among the most effective cell-penetrating peptides (CPPs). Herein we report a several-fold increase in the efficacy of such CPPs if the linear flexible spacer (-X-) in the (R-X-R) motif is replaced by constrained cyclic 1,4-substituted-cyclohexane-derived spacers. Internalization of these oligomers in mammalian cell lines was found to be an energy-dependent process. Incorporation of these constrained, non-proteinogenic amino acid spacers in the CPPs is shown to enhance their proteolytic stability. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2017-07-10
We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
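Recasting constraints as a Gaussian prior in the variational cost can be illustrated with a toy problem (this is not DALECv2; the model h, covariances, and all numbers are illustrative stand-ins):

```python
import numpy as np
from scipy.optimize import minimize

def h(x):
    # Toy nonlinear observation operator
    return np.array([x[0] + 0.5 * x[1], x[1] ** 2])

x_b = np.array([1.0, 2.0])          # background (prior mean)
B = np.diag([0.5, 0.1])             # tight variance on x[1]: the "constraint"
y = np.array([2.2, 4.5])            # observations
R = 0.05 * np.eye(2)                # observation-error covariance
Bi, Ri = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    # Variational cost: Gaussian constraint term + data misfit term
    db, dy = x - x_b, h(x) - y
    return 0.5 * db @ Bi @ db + 0.5 * dy @ Ri @ dy

x_a = minimize(cost, x_b).x         # analysis (posterior mode)
print(x_a)
```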
Large deformation image classification using generalized locality-constrained linear coding.
Zhang, Pei; Wee, Chong-Yaw; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian
2013-01-01
Magnetic resonance (MR) imaging has been demonstrated to be very useful for clinical diagnosis of Alzheimer's disease (AD). A common approach to using MR images for AD detection is to spatially normalize the images by non-rigid image registration, and then perform statistical analysis on the resulting deformation fields. Due to the high nonlinearity of the deformation field, recent studies suggest using the initial momentum instead, as it lies in a linear space and fully encodes the deformation field. In this paper we explore the use of initial momentum for image classification by focusing on the problem of AD detection. Experiments on the public ADNI dataset show that the initial momentum, together with a simple sparse coding technique, locality-constrained linear coding (LLC), can achieve a classification accuracy that is comparable to or even better than the state of the art. We also show that the performance of LLC can be greatly improved by introducing proper weights to the codebook.
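The approximated (k-nearest-neighbor) LLC encoder of Wang et al. (2010) admits a closed-form solution per descriptor. A minimal sketch follows; the codebook and parameters are illustrative, and its application to initial momenta per this paper is not reproduced here.

```python
import numpy as np

def llc_encode(X, B, k=5, lam=1e-4):
    """X: (n, d) descriptors; B: (m, d) codebook. Returns (n, m) codes."""
    codes = np.zeros((X.shape[0], B.shape[0]))
    for i, x in enumerate(X):
        idx = np.argsort(np.linalg.norm(B - x, axis=1))[:k]  # k nearest atoms
        z = B[idx] - x                        # shift the local basis to x
        C = z @ z.T                           # local covariance
        C += lam * np.trace(C) * np.eye(k)    # regularize for stability
        wgt = np.linalg.solve(C, np.ones(k))
        codes[i, idx] = wgt / wgt.sum()       # enforce the sum-to-one constraint
    return codes
```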
Analysis of 20 magnetic clouds at 1 AU during a solar minimum
NASA Astrophysics Data System (ADS)
Gulisano, A. M.; Dasso, S.; Mandrini, C. H.; Démoulin, P.
We study 20 magnetic clouds observed in situ by the spacecraft Wind, at the Lagrangian point L1, from 22 August 1995 to 7 November 1997. In previous works, assuming a cylindrical symmetry for the local magnetic configuration and a satellite trajectory crossing the axis of the cloud, we obtained their orientations using a minimum variance analysis. In this work we compute the orientations and magnetic configurations using a non-linear simultaneous fit of the geometric and physical parameters for a linear force-free model, including the possibility of a non-zero impact parameter. We quantify global magnitudes such as the relative magnetic helicity per unit length and compare the values found with both methods (minimum variance and the simultaneous fit). FULL TEXT IN SPANISH
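The minimum variance analysis used in the earlier works reduces to an eigen-decomposition of the magnetic variance matrix; a minimal sketch:

```python
import numpy as np

def minimum_variance_axes(B):
    """Classical minimum variance analysis of a magnetic field time series.
    B: (n, 3) array of field vectors."""
    M = np.cov(B, rowvar=False)     # M_ij = <B_i B_j> - <B_i><B_j>
    evals, evecs = np.linalg.eigh(M)
    # Columns of evecs: minimum-, intermediate-, maximum-variance directions;
    # the minimum-variance direction estimates the cloud axis normal frame.
    return evals, evecs
```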
33 CFR 207.460 - Fox River, Wis.
Code of Federal Regulations, 2012 CFR
2012-07-01
... desiring to use the Kaukauna drydock will give notice to the U.S. Assistant Engineer in local charge at... per linear foot; $25 minimum charge. Barges, dump scows, and derrick boats, 65 cents per linear foot... made on such Sundays and holidays): For all vessels, 20 cents per linear foot per calendar day or part...
33 CFR 207.460 - Fox River, Wis.
Code of Federal Regulations, 2013 CFR
2013-07-01
... desiring to use the Kaukauna drydock will give notice to the U.S. Assistant Engineer in local charge at... per linear foot; $25 minimum charge. Barges, dump scows, and derrick boats, 65 cents per linear foot... made on such Sundays and holidays): For all vessels, 20 cents per linear foot per calendar day or part...
33 CFR 207.460 - Fox River, Wis.
Code of Federal Regulations, 2014 CFR
2014-07-01
... desiring to use the Kaukauna drydock will give notice to the U.S. Assistant Engineer in local charge at... per linear foot; $25 minimum charge. Barges, dump scows, and derrick boats, 65 cents per linear foot... made on such Sundays and holidays): For all vessels, 20 cents per linear foot per calendar day or part...
Barnes-Davis, Maria E; Merhar, Stephanie L; Holland, Scott K; Kadis, Darren S
2018-04-16
Children born extremely preterm are at significant risk for cognitive impairment, including language deficits. The relationship between preterm birth and neurological changes that underlie cognitive deficits is poorly understood. We use a stories-listening task in fMRI and MEG to characterize language network representation and connectivity in children born extremely preterm (n = 15, <28 weeks gestation, ages 4-6 years), and in a group of typically developing control participants (n = 15, term birth, 4-6 years). Participants completed a brief neuropsychological assessment. Conventional fMRI analyses revealed no significant differences in language network representation across groups (p > .05, corrected). The whole-group fMRI activation map was parcellated to define the language network as a set of discrete nodes, and the timecourse of neuronal activity at each position was estimated using linearly constrained minimum variance beamformer in MEG. Virtual timecourses were subjected to connectivity and network-based analyses. We observed significantly increased beta-band functional connectivity in extremely preterm compared to controls (p < .05). Specifically, we observed an increase in connectivity between left and right perisylvian cortex. Subsequent effective connectivity analyses revealed that hyperconnectivity in preterms was due to significantly increased information flux originating from the right hemisphere (p < 0.05). The total strength and density of the language network were not related to language or nonverbal performance, suggesting that the observed hyperconnectivity is a "pure" effect of prematurity. Although our extremely preterm children exhibited typical language network architecture, we observed significantly altered network dynamics, indicating reliance on an alternative neural strategy for the language task. © 2018 The Authors. Developmental Science Published by John Wiley & Sons Ltd.
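The virtual timecourses here come from a linearly constrained minimum variance beamformer. A minimal sketch of the LCMV weights is below; the regularization style is an assumption, and full pipelines live in toolboxes such as MNE-Python or FieldTrip.

```python
import numpy as np

def lcmv_weights(L, C, reg=0.05):
    """L: (n_sensors, n_orientations) forward field of one network node;
    C: (n_sensors, n_sensors) data covariance."""
    n = C.shape[0]
    C_reg = C + reg * (np.trace(C) / n) * np.eye(n)   # diagonal loading
    Ci = np.linalg.inv(C_reg)
    # W = (L^T C^-1 L)^-1 L^T C^-1 satisfies the unit-gain constraint W L = I
    return np.linalg.solve(L.T @ Ci @ L, L.T @ Ci)

# Virtual timecourse at a node: s(t) = lcmv_weights(L, C) @ b(t),
# where b(t) is the sensor data vector at time t.
```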
NASA Technical Reports Server (NTRS)
Ippolito, Corey; Nguyen, Nhan; Totah, Joe; Trinh, Khanh; Ting, Eric
2013-01-01
In this paper, we describe an initial optimization study of a Variable-Camber Continuous Trailing-Edge Flap (VCCTEF) system. The VCCTEF provides a light-weight control system for aircraft with long flexible wings, providing efficient high-lift capability for takeoff and landing, and greater efficiency with reduced drag at cruising flight by considering the effects of aeroelastic wing deformations in the control law. The VCCTEF system comprises a large number of distributed and individually actuatable control surfaces that are constrained in movement relative to neighboring surfaces, and are non-trivially coupled through structural aeroelastic dynamics. Minimization of drag results in a constrained, coupled, non-linear optimization over a high-dimensional search space. In this paper, we describe the modeling, analysis, and optimization of the VCCTEF system control inputs for minimum drag in cruise. The purpose of this initial study is to quantify the expected benefits of the system concept. The scope of this analysis is limited to consideration of a rigid wing without structural flexibility in a steady-state cruise condition at various fuel weights. For analysis, we developed an optimization engine that couples geometric synthesis with vortex-lattice analysis to automate the optimization procedure. In this paper, we present and describe the VCCTEF system concept, optimization approach and tools, run-time performance, and results of the optimization at 20%, 50%, and 80% fuel load. This initial limited-scope study finds that the VCCTEF system can potentially gain nearly a 10% reduction in cruise drag, that it provides greater drag savings at lower operating weight, and that efficiency is negatively impacted by the severity of relative constraints between control surfaces.
NASA Astrophysics Data System (ADS)
Veiga-Pires, C. C.; Hillaire-Marcel, C.
1999-04-01
The duration and sequence of events recorded in Heinrich layers at sites near the Hudson Strait source area for ice-rafted material are still poorly constrained, notably because of the limits and uncertainties of the 14C chronology. Here we use high-resolution 230Th-excess measurements, in a 6 m sequence raised from Orphan Knoll (southern Labrador Sea), to constrain the duration of the deposition of the five most recent Heinrich (H) layers. On the basis of maximum/minimum estimates for the mean glacial 230Th-excess flux at the studied site, a minimum/maximum duration of 1.0/0.6, 1.4/0.8, 1.3/0.8, 1.5/0.9, and 2.1/1.3 kyr is obtained for H0 (˜Younger Dryas), H1, H2, H3, and H4, respectively. Thorium-230-excess inventories and other sedimentological features indicate a reduced but still significant lateral sedimentary supply by the Western Boundary Undercurrent during the glacial interval. U and Th series systematics also provide insights into the source rocks of H layer sediments (i.e., into distal Irminger Basin/local Labrador Sea supplies).
Cosmicflows Constrained Local UniversE Simulations
NASA Astrophysics Data System (ADS)
Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo
2016-01-01
This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h-1 Mpc scale with respect to that found for random simulations. The one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s-1, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodigas, Timothy J.; Hinz, Philip M.; Malhotra, Renu, E-mail: rodigas@as.arizona.edu
Planets can affect debris disk structure by creating gaps, sharp edges, warps, and other potentially observable signatures. However, there is currently no simple way for observers to deduce a disk-shepherding planet's properties from the observed features of the disk. Here we present a single equation that relates a shepherding planet's maximum mass to the debris ring's observed width in scattered light, along with a procedure to estimate the planet's eccentricity and minimum semimajor axis. We accomplish this by performing dynamical N-body simulations of model systems containing a star, a single planet, and an exterior disk of parent bodies and dust grains to determine the resulting debris disk properties over a wide range of input parameters. We find that the relationship between planet mass and debris disk width is linear, with increasing planet mass producing broader debris rings. We apply our methods to five imaged debris rings to constrain the putative planet masses and orbits in each system. Observers can use our empirically derived equation as a guide for future direct imaging searches for planets in debris disk systems. In the fortuitous case of an imaged planet orbiting interior to an imaged disk, the planet's maximum mass can be estimated independent of atmospheric models.
Non-Gaussian probabilistic MEG source localisation based on kernel density estimation☆
Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny
2014-01-01
There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.
Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong
2016-06-01
Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be directly applied to very large datasets because they have high computational and space complexity. In order to tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in the hidden feature space of the FNN is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm penalty-based ε-insensitive function is formulated, and the parameters in the hidden nodes are generated randomly, independent of the training sets. Moreover, the learning of the parameters in the output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. As with most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: the maximal training time is linear in the size of the training dataset, and the maximal space consumption is independent of the size of the training dataset. Experiments on regression tasks confirm these conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.
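The enclosing-ball machinery behind CCMEB-style solvers can be illustrated with the classic Badoiu-Clarkson core-set iteration, whose iteration count depends only on the desired accuracy, not on the dataset size. This generic sketch is not the HFSR-GCVM algorithm itself.

```python
import numpy as np

def meb_approx(X, eps=0.01):
    """(1+eps)-approximate minimum enclosing ball of the rows of X.
    Runs on the order of 1/eps**2 iterations regardless of len(X)."""
    c = X.mean(axis=0).copy()
    for t in range(1, int(np.ceil(1.0 / eps**2)) + 1):
        far = np.argmax(np.linalg.norm(X - c, axis=1))  # farthest point
        c += (X[far] - c) / (t + 1)                     # step toward it
    r = np.linalg.norm(X - c, axis=1).max()
    return c, r
```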
EEG Source Reconstruction Reveals Frontal-Parietal Dynamics of Spatial Conflict Processing
Cohen, Michael X; Ridderinkhof, K. Richard
2013-01-01
Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30–50 Hz), followed by a later alpha-band (8–12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4–8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions. PMID:23451201
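The reported power-power cross-frequency coupling can be sketched as a correlation between band-limited power envelopes. The filter design and band edges below are generic choices, not necessarily the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(x, fs, lo, hi, order=4):
    """Zero-phase band-pass filtering followed by a Hilbert power envelope."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x))) ** 2

def power_power_coupling(x_frontal, x_parietal, fs):
    """Correlate frontal theta power with parietal gamma power over time."""
    theta = band_power(x_frontal, fs, 4, 8)
    gamma = band_power(x_parietal, fs, 30, 50)
    return np.corrcoef(theta, gamma)[0, 1]
```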
Contraction fracture: From 90° to 120° crack intersections
NASA Astrophysics Data System (ADS)
Lazarus, V.; Gauthier, G.; Pauchard, L.
2009-12-01
Giant's Causeway, the Port Arthur tessellated pavement, Bimini Road, Mars polygons (whose presence indicates the past occurrence of water), fracture networks in permafrost, and septarias are some more or less well-known examples of self-organized crack patterns that have intrigued people throughout history. Even now, they are sometimes attributed to legendary figures: giants, or the mythical citizens of Atlantis. These pavements are in fact formed by constrained shrinking of the medium due, for instance, to cooling or drying leading to fracture. The crack networks form mostly 90° or 120° angles. Here, we report experiments allowing us to control the transition between 90° and 120°. We show that the transition is governed by the linear elastic fracture mechanics energy minimization principle, hence by two parameters: the cell size and the Griffith length (the minimum crack length beyond which the bulk energy is not sufficient to allow propagation). This was achieved by measuring the Griffith length directly on the same type of experiments by changing the cell geometry. [Figure: examples of 90° and 120° crack intersections. Top left: Giant's Causeway hexagonal tessellated pavement, Ireland (courtesy A. Davaille). Top right: Port Arthur rectangular tessellated pavement, Tasmania (courtesy Wayne Bentley). Bottom: septarias (courtesy A. Rifki and M. Toussaint).]
Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning
NASA Astrophysics Data System (ADS)
Zuberi, M. AH; Pratt, R. G.
2018-04-01
The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known `banana-donut' and the `rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example, by Gaussian smoothing of the update vectors, or some other approach for separation by model wavenumbers. However such model domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account. This can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
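One common member of the Sobolev-preconditioning family applies (I - λΔ)⁻¹ to the raw gradient in the Fourier domain. The generic smoother below is a sketch of that family, not the paper's exact SSIP weighting; the value of λ is illustrative.

```python
import numpy as np

def sobolev_smooth(grad, lam=25.0):
    """Apply (I - lam * Laplacian)^{-1} to a 2-D gradient image via FFT.
    Larger lam damps high model wavenumbers more strongly, steering early
    updates toward the background (low-wavenumber) components."""
    ny, nx = grad.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    return np.fft.ifft2(np.fft.fft2(grad) / (1.0 + lam * k2)).real
```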
Emadi Andani, Mehran; Bahrami, Fariba
2012-10-01
Flash and Hogan (1985) suggested that the central nervous system (CNS) employs a minimum jerk strategy when planning any given movement. Later, Nakano et al. (1999) showed that minimum angle jerk predicts the actual arm trajectory curvature better than the minimum jerk model. Friedman and Flash (2009) confirmed this claim. Besides the behavioral support that we will discuss, we will show that this model allows simplicity in planning any given movement. In particular, we prove mathematically that each movement that satisfies the minimum joint angle jerk condition is reproducible by a linear combination of six functions. These functions are calculated independently of the type of the movement and are normalized in the time domain. Hence, we call these six universal functions the Movement Elements (MEs). We also show that the kinematic information at the beginning and end of the movement determines the coefficients of the linear combination. On the other hand, in analyzing recorded data from sit-to-stand (STS) transfer, arm-reaching movement (ARM) and gait, we observed that the minimum joint angle jerk condition is satisfied only during different successive phases of these movements and not for the entire movement. Driven by these observations, we assumed that any given ballistic movement may be decomposed into several successive phases without overlap, such that for each phase the minimum joint angle jerk condition is satisfied. At the boundaries of each phase the angular acceleration of each joint should obtain its extremum (zero third derivative). As a consequence, joint angles at each phase will be linear combinations of the introduced MEs. Coefficients of the linear combination at each phase are the values of the joint kinematics at the boundaries of that phase. Finally, we conclude that these observations may constitute the basis of a computational interpretation of the strategy used by the CNS for motor planning. We call this possible interpretation the "Coordinated Minimum Angle Jerk Policy" or COMAP. Based on this policy, the function of the CNS in generating the desired pattern of any given task (like STS, ARM or gait) can be described computationally using three factors: (1) the kinematics of the motor system at given body states, i.e., at certain movement events/instances, (2) the time length of each phase, and (3) the proposed MEs. From a computational point of view, this model significantly simplifies the processes of movement planning as well as feature abstraction for saving characterizing information of any given movement in memory. Copyright © 2012 Elsevier B.V. All rights reserved.
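The six movement elements are consistent with the fact that a minimum-jerk trajectory is a quintic polynomial in normalized time, so each phase is spanned by six basis functions. A sketch of the classic rest-to-rest minimum-jerk profile (Flash & Hogan, 1985):

```python
import numpy as np

def min_jerk(q0, qf, n=101):
    """Rest-to-rest minimum-jerk joint trajectory; zero velocity and
    acceleration at both ends fix all six polynomial coefficients."""
    tau = np.linspace(0.0, 1.0, n)             # normalized time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return q0 + (qf - q0) * s
```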
Bit error rate tester using fast parallel generation of linear recurring sequences
Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.
2003-05-06
A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k)th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with a minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
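The decimation step, raising the companion matrix to the (n*k)th power over GF(2), can be sketched as follows (tap values and sizes are illustrative):

```python
import numpy as np

def companion_gf2(taps):
    """Companion matrix over GF(2) for an LFSR whose feedback taps are
    the coefficients of the chosen primitive polynomial."""
    n = len(taps)
    C = np.zeros((n, n), dtype=np.int64)
    C[0, :] = taps                               # feedback row
    C[1:, :-1] = np.eye(n - 1, dtype=np.int64)   # shift-register part
    return C

def matpow_gf2(M, e):
    """M**e over GF(2) by square-and-multiply."""
    R = np.eye(M.shape[0], dtype=np.int64)
    while e:
        if e & 1:
            R = (R @ M) % 2
        M = (M @ M) % 2
        e >>= 1
    return R

# Decimation matrix for k parallel LRSGs producing n bits each:
# D = matpow_gf2(companion_gf2(taps), n * k). A sparse D keeps the XOR
# feedback network, and hence the propagation delay, small.
```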
Angular velocity discrimination
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.
1990-01-01
Three experiments designed to investigate the ability of naive observers to discriminate rotational velocities of two simultaneously viewed objects are described. Rotations are constrained to occur about the x and y axes, resulting in linear two-dimensional image trajectories. The results indicate that observers can discriminate angular velocities with a competence near that for linear velocities. However, perceived angular rate is influenced by structural aspects of the stimuli.
Influence of central set on anticipatory and triggered grip-force adjustments
NASA Technical Reports Server (NTRS)
Winstein, C. J.; Horak, F. B.; Fisher, B. E.; Peterson, B. W. (Principal Investigator)
2000-01-01
The effects of predictability of load magnitude on anticipatory and triggered grip-force adjustments were studied as nine normal subjects used a precision grip to lift, hold, and replace an instrumented test object. Experience with a predictable stimulus has been shown to enhance magnitude scaling of triggered postural responses to different amplitudes of perturbations. However, this phenomenon, known as a central-set effect, has not been tested systematically for grip-force responses in the hand. In our study, predictability was manipulated by applying load perturbations of different magnitudes to the test object under conditions in which the upcoming load magnitude was presented repeatedly or under conditions in which the load magnitudes were presented randomly, each with two different pre-load grip conditions (unconstrained and constrained). In constrained conditions, initial grip forces were maintained near the minimum level necessary to prevent pre-loaded object slippage, while in unconstrained conditions, no initial grip force restrictions were imposed. The effect of predictable (blocked) and unpredictable (random) load presentations on scaling of anticipatory and triggered grip responses was tested by comparing the slopes of linear regressions between the imposed load and grip response magnitude. Anticipatory and triggered grip force responses were scaled to load magnitude in all conditions. However, regardless of pre-load grip force constraint, the gains (slopes) of grip responses relative to load magnitudes were greater when the magnitude of the upcoming load was predictable than when the load increase was unpredictable. In addition, a central-set effect was evidenced by the fewer number of drop trials in the predictable relative to unpredictable load conditions. Pre-load grip forces showed the greatest set effects. However, grip responses showed larger set effects, based on prediction, when pre-load grip force was constrained to lower levels. These results suggest that anticipatory processes pertaining to load magnitude permit the response gain of both voluntary and triggered rapid grip force adjustments to be set, at least partially, prior to perturbation onset. Comparison of anticipatory set effects for reactive torque and lower extremity EMG postural responses triggered by surface translation perturbations suggests a more general rule governing anticipatory processes.
Mars Observer trajectory and orbit design
NASA Technical Reports Server (NTRS)
Beerer, Joseph G.; Roncoli, Ralph B.
1991-01-01
The Mars Observer launch, interplanetary, Mars orbit insertion, and mapping orbit designs are described. The design objective is to enable a near-maximum spacecraft mass to be placed in orbit about Mars. This is accomplished by keeping spacecraft propellant requirements to a minimum, selecting a minimum acceptable launch period, equalizing the spacecraft velocity change requirement at the beginning and end of the launch period, and constraining the orbit insertion maneuvers to be coplanar. The mapping orbit design objective is to provide the opportunity for global observation of the planet by the science instruments while facilitating the spacecraft design. This is realized with a sun-synchronous near-polar orbit whose ground-track pattern covers the planet at progressively finer resolution.
Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties
NASA Astrophysics Data System (ADS)
Lazzaro, D.; Loli Piccolomini, E.; Zama, F.
2016-10-01
This work addresses the problem of Magnetic Resonance Image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and of the penalization parameter, by means of a continuation technique, allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on MRI sub-sampled data show the efficiency of the algorithm and the accuracy of the solution.
NASA Technical Reports Server (NTRS)
Chapman, Dean R
1952-01-01
A theoretical investigation is made of the airfoil profile for minimum pressure drag at zero lift in supersonic flow. In the first part of the report a general method is developed for calculating the profile having the least pressure drag for a given auxiliary condition, such as a given structural requirement or a given thickness ratio. The various structural requirements considered include bending strength, bending stiffness, torsional strength, and torsional stiffness. No assumption is made regarding the trailing-edge thickness; the optimum value is determined in the calculations as a function of the base pressure. To illustrate the general method, the optimum airfoil, defined as the airfoil having minimum pressure drag for a given auxiliary condition, is calculated in a second part of the report using the equations of linearized supersonic flow.
NASA Astrophysics Data System (ADS)
Gusriani, N.; Firdaniza
2018-03-01
The existence of outliers in multiple linear regression analysis causes the Gaussian assumption to be unfulfilled. If the least squares method is nonetheless applied to such data, it will produce a model that cannot represent most of the data. This calls for a regression method that is robust against outliers. This paper compares the Minimum Covariance Determinant (MCD) method and the TELBS method on secondary data on the productivity of phytoplankton, which contains outliers. Based on the robust coefficient of determination, the MCD method produces a better model than the TELBS method.
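A plug-in way to build an MCD-based robust regression is to fit a robust location and scatter with MCD and read the coefficients off the partitioned covariance; whether the paper uses exactly this plug-in estimator is an assumption. A sketch using scikit-learn:

```python
import numpy as np
from sklearn.covariance import MinCovDet

def mcd_regression(X, y):
    """Robust linear regression from the MCD location/scatter estimate."""
    Z = np.column_stack([X, y])
    mcd = MinCovDet(random_state=0).fit(Z)
    S, mu = mcd.covariance_, mcd.location_
    beta = np.linalg.solve(S[:-1, :-1], S[:-1, -1])  # Sxx^-1 Sxy
    intercept = mu[-1] - mu[:-1] @ beta
    return intercept, beta
```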
Resource Constrained Planning of Multiple Projects with Separable Activities
NASA Astrophysics Data System (ADS)
Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya
In this study we consider a resource-constrained planning problem for multiple projects with separable activities. The problem is to schedule the activities subject to resource availability with time windows. We propose a solution algorithm based on the branch and bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with the minimum slack time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. In particular, as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.
Molecularly imprinted cavities template the macrocyclization of tetrapeptides.
Tai, Dar-Fu; Lin, Yee-Fung
2008-11-21
Cavities formed using cyclic tetrapeptides (CTPs) or heat-induced conformers act as templates for cyclization; the cavities bind to linear tetrapeptides and enforce turn conformations to enhance cyclization to constrained CTPs.
Experimental joint quantum measurements with minimum uncertainty.
Ringbauer, Martin; Biggerstaff, Devon N; Broome, Matthew A; Fedrizzi, Alessandro; Branciard, Cyril; White, Andrew G
2014-01-17
Quantum physics constrains the accuracy of joint measurements of incompatible observables. Here we test tight measurement-uncertainty relations using single photons. We implement two independent, idealized uncertainty-estimation methods, the three-state method and the weak-measurement method, and adapt them to realistic experimental conditions. Exceptional quantum state fidelities of up to 0.999 98(6) allow us to verge upon the fundamental limits of measurement uncertainty.
Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.
2010-01-01
Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity; 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
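For contrast, the naive baseline solves one non-negative least squares problem per observation vector; the combinatorial algorithm gains its speed by grouping columns that share the same passive (unconstrained) variable set and factorizing once per group. A sketch of the baseline only:

```python
import numpy as np
from scipy.optimize import nnls

def nnls_multiple_rhs(A, B):
    """Solve min ||A X - B||_F subject to X >= 0, column by column.
    The fast combinatorial algorithm reorganizes exactly this calculation
    to share work across columns of B."""
    return np.column_stack([nnls(A, b)[0] for b in B.T])
```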
Linear diffusion model dating of cinder cones in Central Anatolia, Turkey
NASA Astrophysics Data System (ADS)
O'Sadnick, L. G.; Reid, M. R.; Cline, M. L.; Cosca, M. A.; Kuscu, G.
2013-12-01
The progressive decrease in slope angle, cone height and cone height/width ratio over time provides the basis for geomorphic dating of cinder cones using linear diffusion models. Previous research using diffusion models to date cinder cones has focused on the cone height/width ratio as the basis for dating cones of unknown age [1,2]. Here we apply linear diffusion models to dating cinder cones. A suite of 16 cinder cones from the Hasandağ volcano area of the Neogene-Quaternary Central Anatolian Volcanic Zone, for which samples are available, were selected for morphologic dating analysis. New 40Ar/39Ar dates for five of these cones range from 62 ± 4 to 517 ± 9 ka. Linear diffusion models were used to model the erosional degradation of each cone. Diffusion coefficients (κ) for the 5 cinder cones with known ages were constrained by comparing various modeled slope profiles to the current slope profile. The resulting κ is 7.5 ± 0.5 m2 kyr-1. Using this κ value, eruption ages were modeled for the remaining 11 cinder cones and range from 53 ± 3 to 455 ± 30 ka. These ages are within the range of ages previously reported for cinder cones in the Hasandağ region. The linear diffusion model-derived ages are being compared to additional new 40Ar/39Ar dates in order to further assess the applicability of morphological dating to constrain the ages of cinder cones. The relatively well-constrained κ value we obtained by applying the linear diffusion model to cinder cones that range in age by nearly 500 ka suggests that this model can be used to date cinder cones. This κ value is higher than the well-established value of κ = 3.9 m2 kyr-1 for a cinder cone in a similar climate [3]. Therefore our work confirms the importance of determining appropriate κ values from nearby cones with known ages. References 1. C.A. Wood, J. Volcanol. Geotherm. Res. 8, 137 (1980) 2. D.M. Wood, M.F. Sheridan, J. Volcanol. Geotherm. Res. 83, 241 (1998) 3. J.D. Pelletier, M.L. Cline, Geology 35, 1067 (2007)
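The forward model behind this dating approach is plain linear diffusion of topography. A minimal explicit finite-difference sketch follows; the grid spacing, κ value, and stability factor are illustrative.

```python
import numpy as np

def diffuse_profile(h, dx, kappa, t_kyr):
    """Evolve a 1-D cone profile h (m) by dh/dt = kappa * d2h/dx2,
    with kappa in m^2/kyr, dx in m, and fixed profile ends."""
    h = h.astype(float).copy()
    dt = 0.4 * dx**2 / kappa     # explicit stability: dt < dx^2 / (2 kappa)
    for _ in range(int(t_kyr / dt)):
        h[1:-1] += kappa * dt / dx**2 * (h[2:] - 2 * h[1:-1] + h[:-2])
    return h

# Dating: diffuse an idealized initial cone for trial ages and keep the
# age whose modeled profile best matches the surveyed slope profile.
```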
Fitting and forecasting coupled dark energy in the non-linear regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casas, Santiago; Amendola, Luca; Pettorino, Valeria
2016-01-01
We consider cosmological models in which dark matter feels a fifth force mediated by the dark energy scalar field, also known as coupled dark energy. Our interest resides in estimating forecasts for future surveys like Euclid when we take into account non-linear effects, relying on new fitting functions that reproduce the non-linear matter power spectrum obtained from N-body simulations. We obtain fitting functions for models in which the dark matter-dark energy coupling is constant. Their validity is demonstrated for all available simulations in the redshift range z = 0 to 1.6 and for wave modes below k = 1 h/Mpc. These fitting formulas can be used to test the predictions of the model in the non-linear regime without the need for additional computing-intensive N-body simulations. We then use these fitting functions to perform forecasts on the constraining power that future galaxy-redshift surveys like Euclid will have on the coupling parameter, using the Fisher matrix method for galaxy clustering (GC) and weak lensing (WL). We find that by using information in the non-linear power spectrum, and combining the GC and WL probes, we can constrain the dark matter-dark energy coupling constant squared, β², with precision smaller than 4% and all other cosmological parameters better than 1%, which is a considerable improvement of more than an order of magnitude compared to corresponding linear power spectrum forecasts with the same survey specifications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vargas, L.S.; Quintana, V.H.; Vannelli, A.
This paper deals with the use of Successive Linear Programming (SLP) for the solution of the Security-Constrained Economic Dispatch (SCED) problem. The authors tutorially describe an Interior Point Method (IPM) for the solution of Linear Programming (LP) problems, discussing important implementation issues that make this method far superior to the simplex method. A study of the convergence of the SLP technique and a practical criterion to avoid oscillatory behavior in the iteration process are also proposed. A comparison of the proposed method with an efficient simplex code (MINOS) is carried out by solving SCED problems on two standard IEEE systems. The results show that the interior point technique is reliable, accurate, and more than two times faster than the simplex algorithm.
Improving the Nulling Beamformer Using Subspace Suppression.
Rana, Kunjan D; Hämäläinen, Matti S; Vaina, Lucia M
2018-01-01
Magnetoencephalography (MEG) captures the magnetic fields generated by neuronal current sources with sensors outside the head. In MEG analysis these current sources are estimated from the measured data to identify the locations and time courses of neural activity. Since there is no unique solution to this so-called inverse problem, multiple source estimation techniques have been developed. The nulling beamformer (NB), a modified form of the linearly constrained minimum variance (LCMV) beamformer, is specifically used in the process of inferring interregional interactions and is designed to eliminate shared signal contributions, or cross-talk, between regions of interest (ROIs) that would otherwise interfere with the connectivity analyses. The nulling beamformer applies the truncated singular value decomposition (TSVD) to remove small signal contributions from a ROI to the sensor signals. However, ROIs with strong crosstalk will have high separating power in the weaker components, which may be removed by the TSVD operation. To address this issue we propose a new method, the nulling beamformer with subspace suppression (NBSS). This method, controlled by a tuning parameter, reweights the singular values of the gain matrix mapping from source to sensor space such that components with high overlap are reduced. By doing so, we are able to measure signals between nearby source locations with limited cross-talk interference, allowing for reliable cortical connectivity analysis between them. In two simulations, we demonstrated that NBSS reduces cross-talk while retaining ROIs' signal power, and has higher separating power than both the minimum norm estimate (MNE) and the nulling beamformer without subspace suppression. We also showed that NBSS successfully localized the auditory M100 event-related field in primary auditory cortex, measured from a subject undergoing an auditory localizer task, and suppressed cross-talk in a nearby region in the superior temporal sulcus.
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
A new approach for minimum phase output definition
NASA Astrophysics Data System (ADS)
Jahangiri, Fatemeh; Talebi, Heidar Ali; Menhaj, Mohammad Bagher; Ebenbauer, Christian
2017-01-01
This paper presents a novel method for output redefinition for linear systems. The approach also determines the possible relative degrees of the system corresponding to any new output vector. To guarantee the minimum phase property with a prescribed relative degree, a set of new conditions is introduced. A key feature of these conditions is that no transformations of any kind are required, which makes the scheme suitable for optimisation problems in control that must ensure the minimum phase property. Moreover, the results are useful for sensor placement problems and for obtaining minimum phase approximations of non-minimum phase systems. Numerical examples, including an example of an unmanned aerial vehicle system, are given to demonstrate the effectiveness of the methodology.
40 CFR 86.316-79 - Carbon monoxide and carbon dioxide analyzer specifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND...) The use of linearizing circuits is permitted. (c) The minimum water rejection ratio (maximum CO2...) The minimum CO2 rejection ratio (maximum CO2 interference) as measured by § 86.322 for CO analyzers...
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection problems conventionally mean ‘minimizing the risk, given a certain level of return’ from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches that consider minimum transaction lots were developed based on the linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses, which makes it preferable when working with non-symmetric return distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
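A sketch of the two ingredients a GA needs here: an empirical CVaR evaluator and a fitness that scores an integer vector of lots (the function names and the lot-size convention are hypothetical; the abstract does not specify the paper's exact encoding).

```python
import numpy as np

def cvar(losses, alpha=0.95):
    # Empirical CVaR: mean of the worst (1 - alpha) tail of scenario losses.
    losses = np.sort(np.asarray(losses, dtype=float))
    k = int(np.ceil(alpha * len(losses)))
    return losses[k:].mean() if k < len(losses) else losses[-1]

def ga_fitness(lots, prices, scenario_returns, lot_size=100, alpha=0.95):
    # lots: integer counts of minimum transaction lots (assumes >= 1 lot held).
    value = lots * lot_size * prices
    w = value / value.sum()                 # portfolio weights implied by lots
    losses = -(scenario_returns @ w)        # per-scenario portfolio losses
    return cvar(losses, alpha)              # the GA minimizes this fitness
```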
Yu, Huapeng; Zhu, Hai; Gao, Dayuan; Yu, Meng; Wu, Wenqi
2015-01-01
The Kalman filter (KF) has always been used to improve north-finding performance under practical conditions. By analyzing the characteristics of the azimuth rotational inertial measurement unit (ARIMU) on a stationary base, a linear state equality constraint for the conventional KF used in the fine north-finding filtering phase is derived. Then, a constrained KF using the state equality constraint is proposed and studied in depth. Estimation behaviors of the concerned navigation errors when implementing the conventional KF scheme and the constrained KF scheme during stationary north-finding are investigated analytically by the stochastic observability approach, which can provide explicit formulations of the navigation errors with influencing variables. Finally, multiple practical experimental tests at a fixed position were performed on a prototype system to compare the stationary north-finding performance of the two filtering schemes. In conclusion, this study has successfully extended the utilization of the stochastic observability approach for analytic descriptions of estimation behaviors of the concerned navigation errors, and the constrained KF scheme has demonstrated its superiority over the conventional KF scheme for ARIMU stationary north-finding both theoretically and practically. PMID:25688588
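One standard way to impose a linear state equality constraint D x = d on a KF estimate is the estimate-projection form (a widely used textbook construction; whether this is the exact mechanism of the paper is not stated in the abstract):

```python
import numpy as np

def project_state(x, P, D, d):
    """Project an unconstrained Kalman estimate x (covariance P) onto the
    linear equality constraint D x = d, assuming D P D^T is invertible."""
    K = P @ D.T @ np.linalg.inv(D @ P @ D.T)
    x_c = x - K @ (D @ x - d)    # constrained state estimate
    P_c = P - K @ D @ P          # covariance reduced by the constraint
    return x_c, P_c
```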
NASA Astrophysics Data System (ADS)
Alemadi, Nasser Ahmed
Deregulation has brought opportunities for increasing efficiency of production and delivery and reduced costs to customers. Deregulation has also brought great challenges to providing the reliability and security customers have come to expect and demand from the electrical delivery system. One of the challenges in the deregulated power system is voltage instability. Voltage instability has become the principal constraint on power system operation for many utilities. Voltage instability is a unique problem because it can produce an uncontrollable, cascading instability that results in blackout for a large region or an entire country. In this work we define a system of advanced analytical methods and tools for secure and efficient operation of the power system in the deregulated environment. The work consists of two modules: (a) a contingency selection module and (b) a security constrained optimization module. The contingency selection module to be used for voltage instability is the Voltage Stability Security Assessment and Diagnosis (VSSAD). VSSAD shows that each voltage control area and its reactive reserve basin describe a subsystem or agent that has a unique voltage instability problem. VSSAD identifies each such agent, assesses proximity to voltage instability for each agent, and ranks voltage instability agents for each contingency simulated. Contingency selection and ranking for each agent is also performed. Diagnosis of where, why, when, and what can be done to cure voltage instability for each equipment outage and transaction change combination that has no load flow solution is also performed. The security constrained optimization module solves a minimum control solvability problem, which obtains the reactive reserves, through action of voltage control devices, that VSSAD determines are needed in each agent to obtain solution of the load flow. VSSAD makes a physically impossible recommendation of adding reactive generation capability to specific generators to allow a load flow solution to be obtained. The minimum control solvability problem can also obtain solution of the load flow without curtailing transactions that shed load and generation as recommended by VSSAD. The minimum control solvability problem is implemented as a corrective control that achieves the above objectives using minimum control changes. The control includes: (1) voltage setpoints on generator bus voltage terminals; (2) under load tap changer tap positions and switchable shunt capacitors; and (3) active generation at generator buses. The minimum control solvability problem uses the VSSAD recommendation to obtain a feasible stable starting point but completely eliminates the impossible or onerous recommendation made by VSSAD. This thesis reviews the capabilities of Voltage Stability Security Assessment and Diagnosis and how it can be used to implement a contingency selection module for the Open Access System Dispatch (OASYDIS). The OASYDIS will also use the corrective control computed by security constrained dispatch. The corrective control would be computed off line and stored for each contingency that produces voltage instability. The control is triggered and implemented to correct the voltage instability in the agent experiencing voltage instability only after the equipment outage or operating changes predicted to produce voltage instability have occurred. The advantages and the requirements to implement the corrective control are also discussed.
NASA Astrophysics Data System (ADS)
Sun, Jingliang; Liu, Chunsheng
2018-01-01
In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via an adaptive dynamic programming technique. In addition, a suitable non-quadratic functional is utilised to encode the control constraints into the differential game problem. A single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of the associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the estimation weight error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.
Fast secant methods for the iterative solution of large nonsymmetric linear systems
NASA Technical Reports Server (NTRS)
Deuflhard, Peter; Freund, Roland; Walter, Artur
1990-01-01
A family of secant methods based on general rank-1 updates was revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, thus making it theoretically comparable with a series of well known algorithms like GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection diffusion type in 2-D with interior layers give a first impression of the possible power of the derived good Broyden variant.
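SciPy ships Broyden's second ("bad") method as a general nonlinear solver, and applying it to the linear residual F(x) = Ax - b turns it into the kind of secant-based iterative linear solver the abstract discusses (the test matrix below is an arbitrary well-conditioned nonsymmetric example of our own, not from the paper):

```python
import numpy as np
from scipy.optimize import broyden2  # Broyden's "bad" (rank-1 inverse) update

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # nonsymmetric test system
b = rng.standard_normal(n)

# For a linear residual, the secant iteration acts as an iterative linear
# solver, the family the paper compares theoretically with GMRES.
x = broyden2(lambda x: A @ x - b, np.zeros(n), f_tol=1e-10)
print(np.linalg.norm(A @ x - b))
```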
NASA Astrophysics Data System (ADS)
Kim, Seunggyu; Lee, Seokhun; Jeon, Jessie S.
2017-11-01
To determine the most effective antimicrobial treatment of an infectious pathogen, high-throughput antibiotic susceptibility testing (AST) is critically required. However, conventional AST requires at least 16 hours to reach the minimum observable population. Therefore, we developed a microfluidic system that allows maintenance of a linear antibiotic concentration gradient and measurement of local bacterial density. Based on the Stokes-Einstein equation, the flow rate in the microchannel was optimized so that linearization was achieved within 10 minutes, taking into account the diffusion coefficient of each antibiotic in the agar gel. As a result, the minimum inhibitory concentration (MIC) of each antibiotic against P. aeruginosa could be determined 6 hours after treatment with the linear antibiotic concentration gradient. In conclusion, our system demonstrated its efficacy as a high-throughput AST platform through MIC comparison with the Clinical and Laboratory Standards Institute (CLSI) ranges for the antibiotics. This work was supported by the Climate Change Research Hub (Grant No. N11170060) of the KAIST and by the Brain Korea 21 Plus project.
Impact of longitudinal flying qualities upon the design of a transport with active controls
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1980-01-01
Direct constrained parameter optimization was used to optimally size a medium range transport for minimum direct operating cost. Several stability and control constraints were varied to study the sensitivity of the configuration to specifying the unaugmented flying qualities of transports designed with relaxed static stability. Additionally, a number of handling-quality-related design constants were studied with respect to their impact on the design.
EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter
Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.
2012-01-01
A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably well. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and that of the Wiener cascade filter were comparable. PMID:21659018
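The optimization view of the KF update makes the non-negativity constraint easy to state: each measurement update becomes a bound-constrained least-squares problem. The sketch below is a simplified Gaussian-observation stand-in for the paper's GLM point-process likelihood (function and variable names are ours):

```python
import numpy as np
from scipy.optimize import lsq_linear

def nonneg_kf_update(x_pred, P, y, H, R):
    """One KF measurement update recast as non-negatively constrained
    least squares: min ||y - H x||_{R^-1}^2 + ||x - x_pred||_{P^-1}^2, x >= 0."""
    Wr = np.linalg.cholesky(np.linalg.inv(R)).T  # whitener: Wr.T @ Wr = R^-1
    Wp = np.linalg.cholesky(np.linalg.inv(P)).T  # whitener: Wp.T @ Wp = P^-1
    A = np.vstack([Wr @ H, Wp])
    b = np.concatenate([Wr @ y, Wp @ x_pred])
    res = lsq_linear(A, b, bounds=(0.0, np.inf))  # EMG envelopes are >= 0
    return res.x
```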
Concurrent schedules: Effects of time- and response-allocation constraints
Davison, Michael
1991-01-01
Five pigeons were trained on concurrent variable-interval schedules arranged on two keys. In Part 1 of the experiment, the subjects responded under no constraints, and the ratios of reinforcers obtainable were varied over five levels. In Part 2, the conditions of the experiment were changed such that the time spent responding on the left key before a subsequent changeover to the right key determined the minimum time that must be spent responding on the right key before a changeover to the left key could occur. When the left key provided a higher reinforcer rate than the right key, this procedure ensured that the time allocated to the two keys was approximately equal. The data showed that such a time-allocation constraint only marginally constrained response allocation. In Part 3, the numbers of responses emitted on the left key before a changeover to the right key determined the minimum number of responses that had to be emitted on the right key before a changeover to the left key could occur. This response constraint completely constrained time allocation. These data are consistent with the view that response allocation is a fundamental process (and time allocation a derivative process), or that response and time allocation are independently controlled, in concurrent-schedule performance. PMID:16812632
NASA Astrophysics Data System (ADS)
Khode, Urmi B.
High Altitude Long Endurance (HALE) airships are platforms of interest due to their persistent observation and persistent communication capabilities. A novel HALE airship design configuration incorporates a composite sandwich propulsive hull duct between the front and the back of the hull for significant drag reduction via blown wake effects. The sandwich composite shell duct is subjected to hull pressure on its outer walls and flow suction on its inner walls, which results in in-plane wall compressive stress that may cause duct buckling. An approach based upon finite element stability analysis combined with a ply layup and foam thickness determination weight minimization search algorithm is utilized. Its goal is to achieve an optimized configuration of the sandwich composite as a solution to a constrained minimum weight design problem, for which the shell duct remains stable with a prescribed margin of safety under prescribed loading. The stability analysis methodology is first verified by comparing published analytical results for a number of simple cylindrical shell configurations with FEM counterpart solutions obtained using the commercially available code ABAQUS. Results show that the approach is effective in identifying minimum weight composite duct configurations for a number of representative combinations of duct geometry, composite material and foam properties, and propulsive duct applied pressure loading.
Evidence for ultrafast outflows in radio-quiet AGNs - III. Location and energetics
NASA Astrophysics Data System (ADS)
Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.
2012-05-01
Using the results of a previous X-ray photoionization modelling of blueshifted Fe K absorption lines on a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this Letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval ~0.0003-0.03 pc (~10²-10⁴ r_s) from the central black hole, consistent with what is expected for accretion disc winds/outflows. The mass outflow rates are constrained between ~0.01 and 1 M⊙ yr⁻¹, corresponding to ≳5-10 per cent of the accretion rates. The average lower/upper limits on the mechanical power are log Ė ≈ 42.6-44.6 erg s⁻¹. However, the minimum possible value of the ratio between the mechanical power and bolometric luminosity is constrained to be comparable to or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable of providing a significant contribution to the AGN cosmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyfert galaxies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu Bingnan; Zhao Enguang; Center of Theoretical Nuclear Physics, National Laboratory of Heavy Ion Accelerator, Lanzhou 730000
2011-07-15
The shapes of light normal nuclei and Λ hypernuclei are investigated in the (β, γ) deformation plane by using a newly developed constrained relativistic mean field (RMF) model. As examples, the results of some C, Mg, and Si nuclei are presented and discussed in detail. We found that for normal nuclei the present RMF calculations and previous Skyrme-Hartree-Fock models predict similar trends of the shape evolution with increasing neutron number, but some quantitative aspects of the two approaches, such as the depth of the minimum and the softness in the γ direction, differ considerably for several nuclei. For Λ hypernuclei, in most cases, the addition of a Λ hyperon slightly shifts the location of the ground state minimum toward smaller β and softer γ in the potential energy surface E(β, γ). There are three exceptions, namely ¹³ΛC, ²³ΛC, and ³¹ΛSi, in which the polarization effect of the additional Λ is so strong that the shapes of these three hypernuclei are drastically different from those of their corresponding core nuclei.
On a Minimum Problem in Smectic Elastomers
NASA Astrophysics Data System (ADS)
Buonsanti, Michele; Giovine, Pasquale
2008-07-01
Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials in which the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We develop the energy functional as the sum of two terms, the first nematic and the second accounting for the tilting phenomenon; then, working within the rubber-elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress.
Uncertainty relation for the discrete Fourier transform.
Massar, Serge; Spindel, Philippe
2008-05-16
We derive an uncertainty relation for two unitary operators which obey a commutation relation of the form UV = e^{iφ}VU. Its most important application is to constrain how much a quantum state can be localized simultaneously in two mutually unbiased bases related by a discrete Fourier transform. It provides an uncertainty relation which smoothly interpolates between the well-known cases of the Pauli operators in two dimensions and the continuous variables position and momentum. This work also provides an uncertainty relation for modular variables, and could find applications in signal processing. In the finite dimensional case the minimum uncertainty states, discrete analogues of coherent and squeezed states, are minimum energy solutions of Harper's equation, a discrete version of the harmonic oscillator equation.
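As a quick illustration of the last sentence, one can diagonalize a discrete Harper operator numerically and read off its lowest-energy eigenvector; the periodic tridiagonal form and sign convention below are one common choice, assumed here rather than taken from the paper:

```python
import numpy as np

N = 64
n = np.arange(N)
H = np.zeros((N, N))
H[n, (n + 1) % N] = 1.0                      # psi_{n+1} hopping (periodic)
H[n, (n - 1) % N] = 1.0                      # psi_{n-1} hopping
H[n, n] = 2.0 * np.cos(2.0 * np.pi * n / N)  # cosine potential term

E, V = np.linalg.eigh(H)  # eigenvalues in ascending order
ground = V[:, 0]          # lowest-energy solution: a discrete analogue of a
                          # coherent, minimum-uncertainty state (sign
                          # conventions for "minimum energy" vary)
```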
Reversible gelling co-polymer and method of making
Gutowska, Anna
2005-12-27
The present invention is a therapeutic agent carrier comprising a thermally reversible gel or gelling copolymer that is a linear random copolymer of an [meth-]acrylamide derivative and a hydrophilic comonomer, wherein the linear random copolymer is in the form of a plurality of linear chains having a plurality of molecular weights greater than or equal to a minimum gelling molecular weight cutoff, and a therapeutic agent.
Representing Lumped Markov Chains by Minimal Polynomials over Field GF(q)
NASA Astrophysics Data System (ADS)
Zakharov, V. M.; Shalagin, S. V.; Eminov, B. F.
2018-05-01
A method has been proposed to represent lumped Markov chains by minimal polynomials over a finite field. The accuracy of representing lumped stochastic matrices, i.e., the law of lumped Markov chains, depends linearly on the minimum degree of the polynomials over the field GF(q). The method allows constructing realizations of lumped Markov chains on linear shift registers with a pre-defined “linear complexity”.
Optimal mistuning for enhanced aeroelastic stability of transonic fans
NASA Technical Reports Server (NTRS)
Hall, K. C.; Crawley, E. F.
1983-01-01
An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each of the blades was modelled with a single torsional degree of freedom.
ERIC Educational Resources Information Center
Rule, David L.
Several regression methods were examined within the framework of weighted structural regression (WSR), comparing their regression weight stability and score estimation accuracy in the presence of outlier contamination. The methods compared are: (1) ordinary least squares; (2) WSR ridge regression; (3) minimum risk regression; (4) minimum risk 2;…
NASA Astrophysics Data System (ADS)
le Graverend, J.-B.
2018-05-01
A lattice-misfit-dependent damage density function is developed to predict the non-linear accumulation of damage when a thermal jump from 1050 °C to 1200 °C is introduced somewhere in the creep life. Furthermore, a phenomenological model aimed at describing the evolution of the constrained lattice misfit during monotonic creep loading is also formulated. The response of the lattice-misfit-dependent plasticity-coupled damage model is compared with the experimental results obtained at 140 and 160 MPa on the first generation Ni-based single crystal superalloy MC2. The comparison reveals that the damage model performs well at 160 MPa but less so at 140 MPa, because the transfer of stress to the γ' phase occurs for stresses above 150 MPa, which leads to larger variations and, therefore, larger effects of the constrained lattice misfit on the lifetime during thermo-mechanical loading.
Locality-constrained anomaly detection for hyperspectral imagery
NASA Astrophysics Data System (ADS)
Liu, Jiabin; Li, Wei; Du, Qian; Liu, Kui
2015-12-01
Detecting a target with low occurrence probability from an unknown background in a hyperspectral image, namely anomaly detection, is of practical significance. The Reed-Xiaoli (RX) algorithm is considered a classic anomaly detector; it calculates the Mahalanobis distance between the local background and the pixel under test. Local RX, an adaptive RX detector, employs a dual-window strategy that treats the pixels within the frame between the inner and outer windows as the local background. However, the detector is sensitive if this local region contains anomalous pixels (i.e., outliers). In this paper, a locality-constrained anomaly detector is proposed to remove outliers from the local background region before employing the RX algorithm. Specifically, a local linear representation is designed to exploit the internal relationship between linearly correlated pixels in the local background region and the pixel under test and its neighbors. Experimental results demonstrate that the proposed detector improves the original local RX algorithm.
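For reference, the RX statistic itself is just a Mahalanobis distance. A minimal global-RX sketch follows; the local, dual-window variant would replace the global mean and covariance with ones estimated from the window frame:

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector for a (rows, cols, bands) hyperspectral cube:
    Mahalanobis distance of each pixel spectrum from the background mean."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = X.mean(axis=0)
    Sinv = np.linalg.pinv(np.cov(X, rowvar=False))  # pseudo-inverse for stability
    d = X - mu
    scores = np.einsum('ij,jk,ik->i', d, Sinv, d)   # per-pixel Mahalanobis^2
    return scores.reshape(cube.shape[:2])
```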
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
Linear and non-linear Modified Gravity forecasts with future surveys
NASA Astrophysics Data System (ADS)
Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria
2017-12-01
Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way, that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime, with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross correlation of these observables, and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, DESI: we find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.
State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.
1978-12-01
The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case where the incident intensities of the Poisson processes are low and the system is observed in additive white Gaussian noise. The minimum mean squared
Energy efficient LED layout optimization for near-uniform illumination
NASA Astrophysics Data System (ADS)
Ali, Ramy E.; Elgala, Hany
2016-09-01
In this paper, we consider the problem of designing an energy efficient light emitting diode (LED) layout while satisfying illumination constraints. Towards this objective, we present a simple approach to the illumination design problem based on the concept of the virtual LED. We formulate a constrained optimization problem for minimizing the power consumption while maintaining a near-uniform illumination throughout the room. By solving the resulting constrained linear program, we obtain the number of required LEDs and the optimal output luminous intensities that achieve the desired illumination constraints.
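A sketch of such a constrained linear program (all names, the random gain matrix, and the single lower-bound illumination constraint are our assumptions; the paper's virtual-LED formulation will differ in detail): choose intensities p_j >= 0 for candidate LED positions so that every grid point receives at least E_min, minimizing total intensity as a proxy for power.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_points, n_leds = 200, 16
# G[i, j]: illuminance at grid point i per unit intensity of LED j (stand-in)
G = rng.uniform(0.01, 1.0, size=(n_points, n_leds))
E_min, p_max = 1.0, 10.0

res = linprog(
    c=np.ones(n_leds),                        # total intensity ~ power
    A_ub=-G, b_ub=-E_min * np.ones(n_points), # enforce G @ p >= E_min
    bounds=[(0.0, p_max)] * n_leds,
)
print(res.x)  # optimal per-LED intensities
# Near-uniformity would add upper-bound rows G @ p <= E_max as well.
```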
Maximum principle for a stochastic delayed system involving terminal state constraints.
Wen, Jiaqiang; Shi, Yufeng
2017-01-01
We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation and, at the terminal time, the state is constrained in a convex set. We first introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.
Mars double-aeroflyby free returns
NASA Astrophysics Data System (ADS)
Jesick, Mark
2017-09-01
Mars double-flyby free-return trajectories that pass twice through the Martian atmosphere are documented. This class of trajectories is advantageous for potential Mars atmospheric sample return missions because of its low geocentric energy at departure and arrival, because it would enable two sample collections at unique locations during different Martian seasons, and because of its lack of deterministic maneuvers. Free return opportunities are documented over Earth departure dates ranging from 2015 through 2100, with viable missions available every Earth-Mars synodic period. After constraining the maximum lift-to-drag ratio to be less than one, the minimum observed Earth departure hyperbolic excess speed is 3.23 km/s, the minimum Earth atmospheric entry speed is 11.42 km/s, and the minimum round-trip flight time is 805 days. An algorithm using simplified dynamics is developed along with a method to derive an initial estimate for trajectories in a more realistic dynamic model. Multiple examples are presented, including free returns that pass outside and inside of Mars's appreciable atmosphere.
Price, Stephen F.; Payne, Antony J.; Howat, Ian M.; Smith, Benjamin E.
2011-01-01
We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland’s three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing. PMID:21576500
Periodic Forced Response of Structures Having Three-Dimensional Frictional Constraints
NASA Astrophysics Data System (ADS)
CHEN, J. J.; YANG, B. D.; MENQ, C. H.
2000-01-01
Many mechanical systems have moving components that are mutually constrained through frictional contacts. When subjected to cyclic excitations, a contact interface may undergo constant changes among sticks, slips and separations, which leads to very complex contact kinematics. In this paper, a 3-D friction contact model is employed to predict the periodic forced response of structures having 3-D frictional constraints. Analytical criteria based on this friction contact model are used to determine the transitions among sticks, slips and separations of the friction contact, and subsequently the constrained force which consists of the induced stick-slip friction force on the contact plane and the contact normal load. The resulting constrained force is often a periodic function and can be considered as a feedback force that influences the response of the constrained structures. By using the Multi-Harmonic Balance Method along with Fast Fourier Transform, the constrained force can be integrated with the receptance of the structures so as to calculate the forced response of the constrained structures. It results in a set of non-linear algebraic equations that can be solved iteratively to yield the relative motion as well as the constrained force at the friction contact. This method is used to predict the periodic response of a frictionally constrained 3-d.o.f. oscillator. The predicted results are compared with those of the direct time integration method so as to validate the proposed method. In addition, the effect of super-harmonic components on the resonant response and jump phenomenon is examined.
2016-11-22
... structure of the graph, we replace the ℓ1-norm by the nonconvex capped-ℓ1 norm, and obtain the generalized capped-ℓ1 regularized logistic regression ... X. M. Yuan. Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization. Mathematics of Computation, 82(281):301 ... better approximations of the ℓ0-norm theoretically and computationally beyond the ℓ1-norm, for example, the compressive sensing (Xiao et al., 2011).
A robust approach to chance constrained optimal power flow with renewable generation
Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.
2016-09-01
Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to be within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
Order-constrained linear optimization.
Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P
2017-11-01
Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
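A toy two-predictor illustration of the two-stage idea (emphatically not the published OCLO algorithm, whose estimator and search are more sophisticated): first find the direction(s) maximizing Kendall's τ between the linear predictor and the outcome, then resolve the remaining scale and intercept by ordinary least squares.

```python
import numpy as np
from scipy.stats import kendalltau

def oclo_toy(X, y, n_grid=360):
    """Order-constrained toy fit for X with two columns.
    Stage 1: maximize Kendall's tau of X @ w vs. y over directions w.
    Stage 2: among tau-maximizing directions, pick scale/intercept by OLS."""
    scored = []
    for th in np.linspace(0.0, np.pi, n_grid, endpoint=False):
        w = np.array([np.cos(th), np.sin(th)])
        for s in (w, -w):
            tau, _ = kendalltau(X @ s, y)
            scored.append((tau, s))
    t_max = max(t for t, _ in scored)
    fits = []
    for t, s in scored:
        if t < t_max - 1e-12:
            continue                       # keep only ordinally best directions
        z = X @ s
        A = np.column_stack([z, np.ones_like(z)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = float(np.sum((A @ coef - y) ** 2))
        fits.append((sse, coef[0] * s, coef[1]))
    sse, w, b = min(fits, key=lambda f: f[0])
    return w, b                            # model: y_hat = X @ w + b
```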
Matter coupling in partially constrained vielbein formulation of massive gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felice, Antonio De; Mukohyama, Shinji; Gümrükçüoğlu, A. Emir
2016-01-01
We consider a linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of ghost at all scales. Next, we investigate the cosmological application of this coupling in the new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display important different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields a slightly different allowed parameter space.
NASA Astrophysics Data System (ADS)
Castro, Marcelo A.; Pham, Dzung L.; Butman, John
2016-03-01
Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum intensity in a given projection within a thick slab, allowing different connectivity patterns to be easily revealed. Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect limits the ability to properly diagnose and follow up patients. In order to overcome this limitation, we developed a method that allows minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based on two brain masks, the larger of which includes extracerebral voxels. The analysis of the rind within both masks containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask. Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection, enabling superior visualization of cortical hemorrhages and vessels.
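A minimal sketch of the projection plus the mean-intensity replacement step described above (axial slab, names ours; the paper's two-mask rind analysis is omitted):

```python
import numpy as np

def min_ip(volume, z0, z1, brain_mask=None):
    """Minimum intensity projection over a thick slab of a (slices, rows,
    cols) SWI volume. Voxels outside the brain mask are replaced by the mean
    intensity inside the mask, so low-intensity bone cannot dominate the
    projection near the brain surface."""
    v = volume.astype(float).copy()
    if brain_mask is not None:
        v[~brain_mask] = v[brain_mask].mean()
    return v[z0:z1].min(axis=0)
```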
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
Minimum time accelerations of aircraft turbofan engines are presented. These accelerations were calculated using a piecewise linear engine model and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.
Spacecraft Mission Design for the Mitigation of the 2017 PDC Hypothetical Asteroid Threat
NASA Technical Reports Server (NTRS)
Barbee, Brent W.; Sarli, Bruno V.; Lyzhoft, Josh; Chodas, Paul W.; Englander, Jacob A.
2017-01-01
This paper presents detailed mission design analysis results for the 2017 Planetary Defense Conference (PDC) Hypothetical Asteroid Impact Scenario, documented at https://cneos.jpl.nasa.gov/pd/cs/pdc17. The mission design includes campaigns for both reconnaissance (flyby or rendezvous) of the asteroid (to characterize it and the nature of the threat it poses to Earth) and mitigation of the asteroid, via kinetic impactor deflection, nuclear explosive device (NED) deflection, or NED disruption. Relevant scenario parameters are varied to assess the sensitivity of the design outcome, such as asteroid bulk density, asteroid diameter, momentum enhancement factor, spacecraft launch vehicle, and mitigation system type. Different trajectory types are evaluated in the mission design process, from purely ballistic to those involving optimal midcourse maneuvers, planetary gravity assists, and/or low-thrust solar electric propulsion. The trajectory optimization is targeted around peak deflection points that were found through a novel linear numerical technique. The optimization process includes constraint parameters such as Earth departure date, launch declination, spacecraft-asteroid relative velocity and solar phase angle, spacecraft dry mass, minimum/maximum spacecraft distances from the Sun and Earth, and Earth-spacecraft communications line of sight. Results show that one of the best options for the 2017 PDC deflection is a solar electric propelled rendezvous mission with a single spacecraft using an NED for the deflection.
NASA Astrophysics Data System (ADS)
Markenscoff, Xanthippi; Ni, Luqun
2010-01-01
In the context of the linear theory of elasticity with eigenstrains, the radiated field, including inertia effects, of a spherical inclusion with dilatational eigenstrain expanding radially is obtained on the basis of the dynamic Green's function. The field of a half-space inclusion boundary (with dilatational eigenstrain) moving from rest in general subsonic motion is then obtained by a limiting process from the spherically expanding inclusion as the radius tends to infinity while the eigenstrain remains constrained; this is the minimum energy solution. The global energy-release rate required to move the plane inclusion boundary and to create an incremental region of eigenstrain is defined analogously to the one for moving cracks and dislocations, and represents the mechanical rate of work that must be provided for the expansion of the inclusion. The calculated value, which is the "self-force" of the expanding inclusion, has a static component plus a dynamic one depending only on the current value of the velocity, while in the case of the spherical boundary there is an additional contribution accounting for the jump in the strain at the farthest part at the back of the inclusion having had time to reach the front boundary, thus making the dynamic "self-force" history dependent.
Reconstructing matter profiles of spherically compensated cosmic regions in ΛCDM cosmology
NASA Astrophysics Data System (ADS)
de Fromont, Paul; Alimi, Jean-Michel
2018-02-01
The absence of a physically motivated model for large-scale profiles of cosmic voids limits our ability to extract valuable cosmological information from their study. In this paper, we address this problem by introducing the spherically compensated cosmic regions, named CoSpheres. Such cosmic regions are identified around local extrema in the density field and admit a unique compensation radius R1 where the internal spherical mass is exactly compensated. Their origin is studied by extending the standard peak model and implementing the compensation condition. Since the compensation radius evolves as the Universe itself, R1(t) ∝ a(t), CoSpheres behave as bubble Universes with fixed comoving volume. Using the spherical collapse model, we reconstruct their profiles with a very high accuracy until z = 0 in N-body simulations. CoSpheres are symmetrically defined and reconstructed for both central maximum (seeding haloes and galaxies) and minimum (identified with cosmic voids). We show that the full non-linear dynamics can be solved analytically around this particular compensation radius, providing useful predictions for cosmology. This formalism highlights original correlations between local extremum and their large-scale cosmic environment. The statistical properties of these spherically compensated cosmic regions and the possibilities to constrain efficiently both cosmology and gravity will be investigated in companion papers.
Halford, Keith J.
2006-01-01
MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
Minimum relative entropy, Bayes and Kapur
NASA Astrophysics Data System (ADS)
Woodbury, Allan D.
2011-04-01
The focus of this paper is to illustrate important philosophies on inversion and the similarities and differences between Bayesian and minimum relative entropy (MRE) methods. The development of each approach is illustrated through the general discrete linear inverse problem. MRE differs from both Bayes and classical statistical methods in that knowledge of moments is used as ‘data’ rather than sample values. MRE, like Bayes, presumes knowledge of a prior probability distribution and produces the posterior pdf itself. MRE attempts to produce this pdf based on the information provided by new moments. It will use moments of the prior distribution only if new data on these moments is not available. It is important to note that MRE makes a strong statement that the imposed constraints are exact and complete. In this way, MRE is maximally uncommitted with respect to unknown information. In general, since input data are known only to within a certain accuracy, it is important that any inversion method should allow for errors in the measured data. The MRE approach can accommodate such uncertainty and, in new work described here, previous results are modified to include a Gaussian prior. A variety of MRE solutions are reproduced under a number of assumed moments, including second-order central moments. Various solutions of Jacobs & van der Geest are repeated and clarified. Menke's weighted minimum length solution is shown to have a basis in information theory, and the classic least-squares estimate is shown to be a solution to MRE under the conditions of more data than unknowns, where we utilize the observed data and their associated noise. An example inverse problem involving a gravity survey over a layered and faulted zone is shown. In all cases the inverse results match quite closely the actual density profile, at least in the upper portions of the profile. The similarity to the Bayes results reflects the fact that the MRE posterior pdf, and its mean, are constrained not by d = Gm but by its first moment E(d = Gm), a weakened form of the constraints. If there is no error in the data then one should expect complete agreement between Bayes and MRE, and this is what is shown. Similar results are shown when second moment data are available (e.g. posterior covariance equal to zero), but dissimilar results are noted when we attempt to derive a Bayesian-like result from MRE. In the various examples given in this paper, the problems look similar but are, in the final analysis, not equal. The methods of attack are different and so are the results, even though we have used the linear inverse problem as a common template.
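Since the abstract leans on Menke's weighted minimum length solution, here is the (unweighted) minimum-length estimate in code form for an underdetermined problem d = Gm; this is a standard textbook formula, not something specific to this paper:

```python
import numpy as np

def minimum_length(G, d, m0=None):
    """Minimum-length solution m = m0 + G^T (G G^T)^{-1} (d - G m0) for an
    underdetermined linear inverse d = G m (G assumed full row rank)."""
    if m0 is None:
        m0 = np.zeros(G.shape[1])
    return m0 + G.T @ np.linalg.solve(G @ G.T, d - G @ m0)
```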
Code of Federal Regulations, 2013 CFR
2013-10-01
... to owners or operators of motor vehicles and any person who regrooves his own tires for use on motor... tread groove which is at or below the new regrooved depth shall have a minimum of 90 linear inches of tread edges per linear foot of the circumference; (iv) After regrooving, the new groove width generated...
40 CFR 91.321 - NDIR analyzer calibration.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of full-scale concentration. A minimum of six evenly spaced points covering at least 80 percent of..., a linear calibration may be used. To determine if this criterion is met: (1) Perform a linear least-square regression on the data generated. Use an equation of the form y=mx, where x is the actual chart...
Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.
Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao
2017-06-21
In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; only the inner product of the discriminative feature with the kernel matrix embedded is required, which is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.
Good initialization model with constrained body structure for scene text recognition
NASA Astrophysics Data System (ADS)
Zhu, Anna; Wang, Guoyou; Dong, Yangbo
2016-09-01
Scene text recognition has gained significant attention in the computer vision community. Character detection and recognition are the foundation of text recognition and largely determine overall performance. We propose a good initialization model for scene character recognition from cropped text regions. We use constrained character body structures with deformable part-based models to detect and recognize characters against varied backgrounds. The character body structures are obtained by an unsupervised discriminative clustering approach followed by a statistical model and a self-built minimum spanning tree model. Our method utilizes part appearance and location information, and combines character detection and recognition in cropped text regions. Evaluation results on benchmark datasets demonstrate that the proposed scheme outperforms state-of-the-art methods on both scene character recognition and word recognition.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: good initialization schemes can improve convergence speed and affect whether or not a global minimum is found and whether or not spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
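As a rough illustration of the penalty flavour of cPMF, the sketch below runs Lee-Seung-style multiplicative updates on a nonnegative factorization Y ≈ EA with a sum-to-one penalty on the abundance columns; it is a generic stand-in, not the authors' Gauss-Seidel or penalty algorithms, and all parameter values are arbitrary.

```python
import numpy as np

def constrained_pmf(Y, p, iters=500, lam=0.1, seed=0):
    """Sketch of a constrained positive matrix factorization Y ~ E @ A:
    E (bands x p endmembers) and A (p x pixels abundances) stay nonnegative
    under multiplicative updates; lam*||1^T A - 1||^2 penalizes abundance
    columns that do not sum to one."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    E = rng.random((m, p)) + 1e-3
    A = rng.random((p, n)) + 1e-3
    ones = np.ones((p, n))
    for _ in range(iters):
        E *= (Y @ A.T) / (E @ A @ A.T + 1e-12)
        num = E.T @ Y + lam * ones
        den = E.T @ E @ A + lam * ones * A.sum(axis=0, keepdims=True) + 1e-12
        A *= num / den
    return E, A

Y = np.random.default_rng(1).random((50, 200))   # synthetic "image", 50 bands
E, A = constrained_pmf(Y, p=3)
print(A.sum(axis=0)[:5])                         # column sums pulled toward 1
```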
Discrete Methods and their Applications
1993-02-03
problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman (1952) about...relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in...same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n-graph. A linear time algorithm to compute
Discovery of a suspected giant radio galaxy with the KAT-7 array
NASA Astrophysics Data System (ADS)
Colafrancesco, S.; Mhlahlo, N.; Jarrett, T.; Oozeer, N.; Marchegiani, P.
2016-02-01
We detect a new suspected giant radio galaxy (GRG) discovered by KAT-7. The GRG core is identified with the Wide-field Infrared Survey Explorer source J013313.50-130330.5, an extragalactic source based on its infrared colours and consistent with a misaligned active galactic nuclei-type spectrum at z ≈ 0.3. The multi-ν spectral energy distribution (SED) of the object associated with the GRG core shows a synchrotron peak at ν ≈ 10^14 Hz, consistent with the SED of a radio galaxy blazar-like core. The angular sizes of the lobes are ˜4 arcmin for the NW lobe and ˜1.2 arcmin for the SE lobe, corresponding to projected linear distances of ˜1078 kpc and ˜324 kpc, respectively. The best-fitting parameters for the SED of the GRG core and the value of the jet boosting parameter δ = 2 indicate that the GRG jet has a maximum inclination θ ≈ 30 deg with respect to the line of sight, a value obtained for δ = Γ, while the minimum value of θ is not constrained due to the degeneracy with the value of the Lorentz factor Γ. Given the photometric redshift z ≈ 0.3, this GRG shows a core luminosity of P_1.4 GHz ≈ 5.52 × 10^24 W Hz^-1, and luminosities P_1.4 GHz ≈ 1.29 × 10^25 W Hz^-1 for the NW lobe and P_1.4 GHz ≈ 0.46 × 10^25 W Hz^-1 for the SE lobe, consistent with typical GRG luminosities. The radio lobes show a fractional linear polarization of ≈9 per cent, consistent with typical values found in other GRG lobes.
Travel time tomography with local image regularization by sparsity constrained dictionary learning
NASA Astrophysics Data System (ADS)
Bianco, M.; Gerstoft, P.
2017-12-01
We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or `global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or `atoms' describing the slowness in each patch. These functions could for example be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
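Step 2 of the iteration is a patch-level sparse coding problem. A minimal greedy solver of that type is sketched below (orthogonal matching pursuit in Python with numpy); the dictionary D and patch vector y are hypothetical stand-ins, and the authors' actual solver may differ.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of y in dictionary D
    (columns assumed unit-norm). At each step, pick the atom most correlated
    with the residual, then re-fit all selected atoms by least squares."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
        residual = y - Ds @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
y = 1.0 * D[:, 5] - 0.5 * D[:, 40]       # a 2-sparse synthetic "patch"
print(np.nonzero(omp(D, y, k=2))[0])     # recovers atoms 5 and 40
```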
Biophysical constraints on leaf expansion in a tall conifer.
Meinzer, Frederick C; Bond, Barbara J; Karanian, Jennifer A
2008-02-01
The physiological mechanisms responsible for reduced extension growth as trees increase in height remain elusive. We evaluated biophysical constraints on leaf expansion in old-growth Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) trees. Needle elongation rates, plastic and elastic extensibility, bulk leaf water potential (Psi(L)), osmotic potential (Psi(pi)), bulk tissue yield threshold and final needle length were characterized along a height gradient in crowns of > 50-m-tall trees during the period between bud break and full expansion (May to June). Although needle length decreased with increasing height, there was no height-related trend in leaf plastic extensibility, which was highest immediately after bud break (2.9%) and declined rapidly to a stable minimum value (0.3%) over a 3-week period during which leaf expansion was completed. There was a significant positive linear relationship between needle elongation rates and plastic extensibility. Yield thresholds were consistently lower at the upper and middle crown sampling heights. The mean yield threshold across all sampling heights was 0.12 +/- 0.03 MPa on June 8, rising to 0.34 +/- 0.03 MPa on June 15 and 0.45 +/- 0.05 MPa on June 24. Bulk leaf Psi(pi) decreased linearly with increasing height at a rate of 0.004 MPa m(-1) during the period of most rapid needle elongation, but the vertical osmotic gradient was not sufficient to fully compensate for the 0.015 MPa m(-1) vertical gradient in Psi(L), implying that bulk leaf turgor declined with increasing height at a rate of about 0.011 MPa m(-1). Although height-dependent reductions in turgor appeared to constrain leaf expansion, it is possible that the impact of reduced turgor was mitigated by delayed phenological development with increasing height, which resulted in an increase with height in the temperature during leaf expansion.
1991-11-08
only simple bounds on delays but also relate the delays in linear inequalities so that tradeoffs are apparent. We model circuits as communicating...set of linear inequalities constraining the variables. These relations provide synthesis tools with information about tradeoffs between circuit delays...available to express the original circuit as a graph of elementary gates and then cover the graph’s fanout-free trees with collections of three-input
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Greg F.; Cooley, Scott K.; Vienna, John D.
This article presents a case study of developing an experimental design for a constrained mixture experiment when the experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this article. The case study involves a 15-component nuclear waste glass example in which SO3 is one of the components. SO3 has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture (PQM) model expressed in the relative proportions of the 14 other components. The PQM model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This article discusses the waste glass example and how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study.
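A simple way to appreciate the layered-design difficulty is to screen candidate mixtures against all three constraint types at once. The sketch below does exactly that with placeholder bounds and a hypothetical stand-in for the fitted PQM solubility model; none of the numbers come from the study.

```python
import numpy as np

def feasible(x, so3_limit_model):
    """Check a 15-component mixture x (fractions summing to 1) against
    illustrative constraints: single-component bounds (SCCs), one linear
    multiple-component constraint (MCC), and a nonlinear SO3 constraint.
    All bounds and the limit model are hypothetical placeholders."""
    lo, hi = np.full(15, 0.0), np.full(15, 0.6)      # SCCs (placeholder)
    if np.any(x < lo) or np.any(x > hi):
        return False
    if not (0.05 <= x[0] + x[1] <= 0.5):             # linear MCC (placeholder)
        return False
    so3 = x[14]
    balance = x[:14] / x[:14].sum()                  # proportions of other 14
    return so3 <= so3_limit_model(balance)           # nonlinear MCC

# Hypothetical stand-in for the fitted PQM solubility model.
limit = lambda b: 0.05 + 0.10 * b[0] + 0.05 * b[1] * b[2]

rng = np.random.default_rng(1)
cands = rng.dirichlet(np.ones(15), size=5000)        # random candidate mixtures
design_pool = np.array([x for x in cands if feasible(x, limit)])
print(design_pool.shape)                             # feasible candidate pool
```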
NASA Astrophysics Data System (ADS)
Yang, B. D.; Chu, M. L.; Menq, C. H.
1998-03-01
Mechanical systems in which moving components are mutually constrained through contacts often lead to complex contact kinematics involving tangential and normal relative motions. A friction contact model is proposed to characterize this type of contact kinematics, which imposes both friction non-linearity and intermittent separation non-linearity on the system. The stick-slip friction phenomenon is analyzed by establishing analytical criteria that predict the transition between stick, slip, and separation of the interface. The established analytical transition criteria are particularly important to the proposed friction contact model because the transition conditions of the contact kinematics are complicated by the effect of normal load variation and possible interface separation. With these transition criteria, the induced friction force on the contact plane and the variable normal load perpendicular to the contact plane can be predicted for any given cyclic relative motion at the contact interface, and hysteresis loops can be produced so as to characterize the equivalent damping and stiffness of the friction contact. These non-linear damping and stiffness methods, along with the harmonic balance method, are then used to predict the resonant response of a frictionally constrained two-degree-of-freedom oscillator. The predicted results are compared with those of the time integration method, and the damping effect, the resonant frequency shift, and the jump phenomenon are examined.
NASA Astrophysics Data System (ADS)
Jerousek, Richard Gregory; Colwell, Josh; Hedman, Matthew M.; French, Richard G.; Marouf, Essam A.; Esposito, Larry; Nicholson, Philip D.
2017-10-01
The Cassini Ultraviolet Imaging Spectrograph (UVIS) and Visual and Infrared Mapping Spectrometer (VIMS) have measured ring optical depths over a wide range of viewing geometries at effective wavelengths of 0.15 μm and 2.9 μm, respectively. Using Voyager S and X band radio occultations and the direct inversion of the forward-scattered S band signal, Marouf et al. (1982), (1983) and Zebker et al. (1985) determined the power-law size distribution parameters assuming a minimum particle radius of 1 mm. Many further studies have also constrained aspects of the particle size distribution throughout the main rings. Marouf et al. (2008a) determined the smallest ring particles to have radii of 4-5 mm using Cassini RSS data. Harbison et al. (2013) used VIMS solar occultations and also found minimum particle sizes of 4-5 mm in the C ring with q ~ 3.1, where n(a)da = Ca^(-q)da is the assumed differential power-law size distribution for particles of radius a. Recent studies of excess variance in the stellar signal by Colwell et al. (2017, submitted) constrain the cross-section-weighted effective particle radius to 1 m to several meters. Using the wide range of viewing geometries available to VIMS and UVIS stellar occultations, we find that normal optical depth does not strongly depend on viewing geometry at 10 km resolution (which would be the case if self-gravity wakes were present). Throughout the C ring, we fit power-law derived optical depths to those measured by UVIS, VIMS, and the Cassini Radio Science Subsystem (RSS) at 0.94 and 3.6 cm wavelengths to constrain the four parameters of the size distribution at 10 km radial resolution. We find significant particle size sorting throughout the region, with a positive correlation between maximum particle size (amax) and normal optical depth, with a mean value of amax ~ 3 m in the background C ring. This correlation is negative in the C ring plateaus. We find an inverse correlation between minimum particle radius and normal optical depth, with a mean value of amin ~ 4 mm in the background C ring and slightly larger smallest particles in the C ring plateaus.
Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.
Mulder, Joris
2014-02-01
Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
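For a single inequality-constrained hypothesis, the encompassing-prior Bayes factor reduces to a ratio of posterior to prior mass on the constrained region. The sketch below illustrates this for H1: θ > 0 under a conjugate normal model with synthetic summary statistics; it is illustrative only and is not the paper's proposed partial-Bayes-factor or g-prior methods.

```python
import numpy as np
from scipy import stats

# Encompassing-prior Bayes factor for H1: theta > 0 against the unconstrained
# encompassing model He:  BF_1e = Pr(theta > 0 | data) / Pr(theta > 0 | prior).
ybar, n, sigma = 0.4, 25, 1.0            # synthetic data summary (assumed known variance)
prior_mean, prior_sd = 0.0, 10.0         # vague encompassing prior

# Conjugate normal posterior for theta.
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + n * ybar / sigma**2)

fit = 1 - stats.norm.cdf(0, post_mean, np.sqrt(post_var))   # posterior mass on theta > 0
complexity = 1 - stats.norm.cdf(0, prior_mean, prior_sd)    # prior mass on theta > 0 (= 0.5)
print(f"BF_1e = {fit / complexity:.2f}")
```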
NASA Technical Reports Server (NTRS)
Karpel, M.
1994-01-01
Various control analysis, design, and simulation techniques of aeroservoelastic systems require the equations of motion to be cast in a linear, time-invariant state-space form. In order to account for unsteady aerodynamics, rational function approximations must be obtained to represent them in the first order equations of the state-space formulation. A computer program, MIST, has been developed which determines minimum-state approximations of the coefficient matrices of the unsteady aerodynamic forces. The Minimum-State Method facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena such as the outboard-wing acceleration response to gust velocity. Engineers using this program will be able to calculate minimum-state rational approximations of the generalized unsteady aerodynamic forces. Using the Minimum-State formulation of the state-space equations, they will be able to obtain state-space models with good open-loop characteristics while reducing the number of aerodynamic equations by an order of magnitude more than traditional approaches. These low-order state-space mathematical models are good for design and simulation of aeroservoelastic systems. The computer program, MIST, accepts tabular values of the generalized aerodynamic forces over a set of reduced frequencies. It then determines approximations to these tabular data in the Laplace domain using rational functions. MIST provides the capability to select the denominator coefficients in the rational approximations, to selectably constrain the approximations without increasing the problem size, and to determine and emphasize critical frequency ranges in determining the approximations. MIST has been written to allow two types of data weighting options. The first weighting is a traditional normalization of the aerodynamic data to the maximum unit value of each aerodynamic coefficient. The second allows weighting the importance of different tabular values in determining the approximations based upon physical characteristics of the system. Specifically, the physical weighting capability is such that each tabulated aerodynamic coefficient, at each reduced frequency value, is weighted according to the effect of an incremental error of this coefficient on aeroelastic characteristics of the system. In both cases, the resulting approximations yield a relatively low number of aerodynamic lag states in the subsequent state-space model. MIST is written in ANSI FORTRAN 77 for DEC VAX series computers running VMS. It requires approximately 1 Mb of RAM for execution. The standard distribution medium for this package is a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. MIST was developed in 1991. DEC VAX and VMS are trademarks of Digital Equipment Corporation. FORTRAN 77 is a registered trademark of Lahey Computer Systems, Inc.
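MIST itself is a distributed FORTRAN program, but the underlying idea of fitting tabular aerodynamic data with rational functions can be sketched compactly. The Python fragment below fits a Roger-type rational approximation with fixed lag roots by a single linear least-squares solve; it is a simplified cousin of the minimum-state method, with synthetic data and arbitrary lag values.

```python
import numpy as np

def rfa_fit(k, Q, lags):
    """Least-squares rational-function approximation of tabular data Q(ik)
    (complex, one value per reduced frequency k) with fixed lag roots b_j:
        Q(ik) ~ A0 + A1*(ik) + A2*(ik)^2 + sum_j Bj * (ik)/(ik + bj).
    Stacking real and imaginary parts keeps the unknown coefficients real."""
    s = 1j * np.asarray(k)
    cols = [np.ones_like(s), s, s**2] + [s / (s + b) for b in lags]
    M = np.column_stack(cols)
    A = np.vstack([M.real, M.imag])
    b = np.concatenate([Q.real, Q.imag])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef  # [A0, A1, A2, B1, ..., Bn]

# Synthetic tabular "aerodynamic coefficient" at a set of reduced frequencies.
k = np.linspace(0.05, 1.0, 12)
truth = 1.0 + 0.5j * k + (1j * k) / (1j * k + 0.3)
print(rfa_fit(k, truth, lags=[0.2, 0.5]).round(3))
```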
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1980-01-01
Direct constrained parameter optimization was used to optimally size a medium range transport for minimum direct operating cost. Several stability and control constraints were varied to study the sensitivity of the configuration to specifying the unaugmented flying qualities of transports designed to take maximum advantage of relaxed static stability augmentation systems. Additionally, a number of handling qualities related design constants were studied with respect to their impact on the design.
2015-03-26
Turkish Airborne Early Warning and Control (AEW& C ) aircraft in the combat arena. He examines three combat scenarios Turkey might encounter to cover and...to limited SAR assets, constrained budgets, logistic- maintenance problems, and high risk level of military flights. In recent years, the Turkish Air...model, Set Covering Location Problem (SCLP), defines the minimum number of SAR DPs to cover all fighter aircraft training areas (TAs). The second
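The Set Covering Location Problem mentioned in this fragment is typically solved as an integer program; a common quick approximation is the greedy heuristic sketched below, here on hypothetical deployment points and training areas (the names DP_A, TA1, etc. are invented for illustration).

```python
def greedy_set_cover(universe, subsets):
    """Greedy approximation for set covering: repeatedly pick the subset
    covering the most still-uncovered elements (O(log n)-approximate)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Hypothetical deployment points (DPs) covering training areas TA1..TA6.
areas = {"TA1", "TA2", "TA3", "TA4", "TA5", "TA6"}
dps = {"DP_A": {"TA1", "TA2", "TA3"}, "DP_B": {"TA3", "TA4"},
       "DP_C": {"TA4", "TA5", "TA6"}, "DP_D": {"TA1", "TA6"}}
print(greedy_set_cover(areas, dps))   # minimal-looking cover, e.g. DP_A + DP_C
```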
The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey
2004-05-10
aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring... papers, we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation...minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well
Evidence for Ultra-Fast Outflows in Radio-Quiet AGNs: III - Location and Energetics
NASA Technical Reports Server (NTRS)
Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.
2012-01-01
Using the results of a previous X-ray photo-ionization modelling of blue-shifted Fe K absorption lines on a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval approx. 0.0003-0.03 pc (approx. 10^2-10^4 tau_s) from the central black hole, consistent with what is expected for accretion disk winds/outflows. The mass outflow rates are constrained between approx. 0.01-1 solar masses/yr, corresponding to approx. or > 5-10% of the accretion rates. The average lower-upper limits on the mechanical power are log E_K approx. 42.6-44.6 erg/s. However, the minimum possible value of the ratio between the mechanical power and bolometric luminosity is constrained to be comparable to or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable of providing a significant contribution to the AGN cosmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyfert galaxies.
Wing morphology and flight development in the short-nosed fruit bat Cynopterus sphinx.
Elangovan, Vadamalai; Yuvana Satya Priya, Elangovan; Raghuram, Hanumanth; Marimuthu, Ganapathy
2007-01-01
Postnatal changes in wing morphology, flight development and aerodynamics were studied in captive free-flying short-nosed fruit bats, Cynopterus sphinx. Pups were reluctant to move until 25 days of age and started fluttering at the mean age of 40 days. The wingspan and wing area increased linearly until 45 days of age by which time the young bats exhibited clumsy flight with gentle turns. At birth, C. sphinx had less-developed handwings compared to armwings; however, the handwing developed faster than the armwing during the postnatal period. Young bats achieved sustained flight at 55 days of age. Wing loading decreased linearly until 35 days of age and thereafter increased to a maximum of 12.82 Nm(-2) at 125 days of age. The logistic equation fitted the postnatal changes in wingspan and wing area better than the Gompertz and von Bertalanffy equations. The predicted minimum power speed (V(mp)) and maximum range speed (V(mr)) decreased until the onset of flight and thereafter the V(mp) and V(mr) increased linearly and approached 96.2% and 96.4%, respectively, of the speed of postpartum females at the age of 125 days. The requirement of minimum flight power (P(mp)) and maximum range power (P(mr)) increased until 85 days of age and thereafter stabilised. The minimum theoretical radius of banked turn (r(min)) decreased until 35 days of age and thereafter increased linearly and attained 86.5% of the r(min) of postpartum females at the age of 125 days.
Encapsulated Ball Bearings for Rotary Micro Machines
2007-01-01
maintaining fabrication simplicity and stability. Although ball bearings have been demonstrated in devices such as linear micromotors [6, 7] and rotary... micromotors [8], they have yet to be integrated into the microfabrication process to fully constrain the dynamic element. In the cases of both Modafe et
A method to stabilize linear systems using eigenvalue gradient information
NASA Technical Reports Server (NTRS)
Wieseman, C. D.
1985-01-01
Formal optimization methods and eigenvalue gradient information are used to develop a stabilizing control law for a closed loop linear system that is initially unstable. The method was originally formulated by using direct, constrained optimization methods with the constraints being the real parts of the eigenvalues. However, because of problems in trying to achieve stabilizing control laws, the problem was reformulated to be solved differently. The method described uses the Davidon-Fletcher-Powell minimization technique to solve an indirect, constrained minimization problem in which the performance index is the Kreisselmeier-Steinhauser function of the real parts of all the eigenvalues. The method is applied successfully to solve two different problems: the determination of a fourth-order control law that stabilizes a single-input single-output active flutter suppression system, and the determination of a second-order control law for a multi-input multi-output lateral-directional flight control system. Various sets of design variables and initial starting points were chosen to show the robustness of the method.
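The Kreisselmeier-Steinhauser function mentioned here aggregates many eigenvalue constraints into one smooth objective. The sketch below applies the same idea to a toy two-state plant with a quasi-Newton optimizer (BFGS, a relative of Davidon-Fletcher-Powell); the plant matrices, stability margin, and gain penalty are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def ks(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth surrogate for max(g)."""
    return g.max() + np.log(np.sum(np.exp(rho * (g - g.max())))) / rho

# Toy unstable plant xdot = A x + B u with output feedback u = -K y, y = C x.
A = np.array([[0.0, 1.0], [2.0, -0.5]])    # one right-half-plane eigenvalue
B = np.array([[0.0], [1.0]])
C = np.eye(2)

def objective(k):
    K = k.reshape(1, 2)
    re = np.linalg.eigvals(A - B @ K @ C).real
    # KS envelope of (real part + margin): negative once every eigenvalue
    # sits left of -0.2; a small gain penalty keeps the problem bounded.
    return ks(re + 0.2) + 1e-3 * k @ k

res = minimize(objective, x0=np.zeros(2), method="BFGS")
K = res.x.reshape(1, 2)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K @ C))
```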
Programmable motion of DNA origami mechanisms.
Marras, Alexander E; Zhou, Lifeng; Su, Hai-Jun; Castro, Carlos E
2015-01-20
DNA origami enables the precise fabrication of nanoscale geometries. We demonstrate an approach to engineer complex and reversible motion of nanoscale DNA origami machine elements. We first design, fabricate, and characterize the mechanical behavior of flexible DNA origami rotational and linear joints that integrate stiff double-stranded DNA components and flexible single-stranded DNA components to constrain motion along a single degree of freedom and demonstrate the ability to tune the flexibility and range of motion. Multiple joints with simple 1D motion were then integrated into higher order mechanisms. One mechanism is a crank-slider that couples rotational and linear motion, and the other is a Bennett linkage that moves between a compacted bundle and an expanded frame configuration with a constrained 3D motion path. Finally, we demonstrate distributed actuation of the linkage using DNA input strands to achieve reversible conformational changes of the entire structure on ∼ minute timescales. Our results demonstrate programmable motion of 2D and 3D DNA origami mechanisms constructed following a macroscopic machine design approach.
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
The Linear Transportation Problem (LTP) is a constrained optimization problem in which we want to minimize shipping cost subject to the balance between total supply and total demand. Exact methods such as the northwest-corner, Vogel, Russell, and minimal-cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem with any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). Simulations show that PSOGA improves on the solutions produced by PSO alone.
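A compact version of the PSOGA idea, penalized for supply/demand imbalance and with a GA-style Gaussian mutation injected into the swarm, might look as follows; the cost matrix and hyperparameters are arbitrary, and the paper's exact operators may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
cost = np.array([[4., 6., 8.], [5., 3., 7.]])   # unit shipping costs (hypothetical)
supply = np.array([40., 60.])
demand = np.array([30., 50., 20.])

def fitness(X):
    """Shipping cost plus a heavy penalty for supply/demand imbalance."""
    pen = np.abs(X.sum(1) - supply).sum() + np.abs(X.sum(0) - demand).sum()
    return (cost * X).sum() + 1e3 * pen

n, w, c1, c2, pm = 40, 0.7, 1.5, 1.5, 0.1       # swarm size, inertia, accel, mutation rate
pos = rng.random((n, 2, 3)) * 40                # particles = candidate flow matrices
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(x) for x in pos])
g = pbest[pbest_f.argmin()].copy()

for _ in range(500):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0, None)           # keep shipments nonnegative
    mask = rng.random(pos.shape) < pm           # GA-style mutation operator
    pos[mask] += rng.normal(0, 2.0, mask.sum())
    pos = np.clip(pos, 0, None)
    f = np.array([fitness(x) for x in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print(np.round(g, 1), fitness(g))               # best flow matrix found
```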
Zeb, Salman; Yousaf, Muhammad
2017-01-01
In this article, we present a QR updating procedure as a solution approach for the linear least squares problem with equality constraints. We reduce the constrained problem to an unconstrained linear least squares problem and partition it into a small subproblem. The QR factorization of the subproblem is calculated, and we then apply updating techniques to its upper triangular factor R to obtain the solution. We carry out an error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments, with particular emphasis on dense problems.
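For orientation, a standard baseline for the equality-constrained least squares problem min ||Ax - b|| s.t. Cx = d is the null-space method sketched below; this is not the paper's QR-updating scheme, just the classical direct approach it competes with.

```python
import numpy as np
from scipy.linalg import qr, lstsq

def lse_nullspace(A, b, C, d):
    """Solve min ||Ax - b|| subject to Cx = d by the null-space method:
    QR-factor C^T, write x = Q1 y1 + Q2 z with Q2 spanning the null space
    of C, then solve a smaller unconstrained least squares for z."""
    m, n = C.shape
    Q, R = qr(C.T)                               # C^T = Q R, Q is n x n
    Q1, Q2 = Q[:, :m], Q[:, m:]                  # range / null space of C^T
    y1 = np.linalg.solve(R[:m, :m].T, d)         # C x = d  =>  R^T y1 = d
    z, *_ = lstsq(A @ Q2, b - A @ (Q1 @ y1))     # free coordinates
    return Q1 @ y1 + Q2 @ z

rng = np.random.default_rng(0)
A, b = rng.random((8, 5)), rng.random(8)
C, d = rng.random((2, 5)), rng.random(2)
x = lse_nullspace(A, b, C, d)
print(np.allclose(C @ x, d))                     # constraints hold exactly
```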
Spin noise spectroscopy beyond thermal equilibrium and linear response.
Glasenapp, P; Sinitsyn, N A; Yang, Luyi; Rickel, D G; Roy, D; Greilich, A; Bayer, M; Crooker, S A
2014-10-10
Per the fluctuation-dissipation theorem, the information obtained from spin fluctuation studies in thermal equilibrium is necessarily constrained by the system's linear response functions. However, by including weak radio frequency magnetic fields, we demonstrate that intrinsic and random spin fluctuations even in strictly unpolarized ensembles can reveal underlying patterns of correlation and coupling beyond linear response, and can be used to study nonequilibrium and even multiphoton coherent spin phenomena. We demonstrate this capability in a classical vapor of (41)K alkali atoms, where spin fluctuations alone directly reveal Rabi splittings, the formation of Mollow triplets and Autler-Townes doublets, ac Zeeman shifts, and even nonlinear multiphoton coherences.
Wu, Xiaocheng; Lang, Lingling; Ma, Wenjun; Song, Tie; Kang, Min; He, Jianfeng; Zhang, Yonghui; Lu, Liang; Lin, Hualiang; Ling, Li
2018-07-01
Dengue fever is an important infectious disease in Guangzhou, China; previous studies on the effects of weather factors on the incidence of dengue fever did not consider the linearity of the associations. This study evaluated the effects of daily mean temperature, relative humidity and rainfall on the incidence of dengue fever. A generalized additive model with a spline smoothing function was used to examine the effects of daily mean, minimum and maximum temperatures, relative humidity and rainfall on the incidence of dengue fever during 2006-2014. Our analysis detected a non-linear effect of mean, minimum and maximum temperatures and relative humidity on dengue fever, with thresholds at 28°C, 23°C and 32°C for daily mean, minimum and maximum temperatures, and 76% for relative humidity, respectively. Below the thresholds there was a significant positive effect: the excess risk of dengue fever for each 1°C increase in mean temperature at lag 7-14 days was 10.21% (95% CI: 6.62% to 13.92%), 7.10% (95% CI: 4.99% to 9.26%) for each 1°C increase in daily minimum temperature at lag 11 days, and 2.27% (95% CI: 0.84% to 3.72%) for each 1°C increase in daily maximum temperature at lag 10 days; and each 1% increase in relative humidity at lag 7-14 days was associated with a 1.95% (95% CI: 1.21% to 2.69%) increase in the risk of dengue fever. Future prevention and control measures and epidemiological studies on dengue fever should consider these weather factors based on their exposure-response relationships. Copyright © 2018. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Jafari, S.; Hojjati, M. H.
2011-12-01
Rotating disks operate mostly at high angular velocity, which produces a large centrifugal force and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower cost. This paper aims at finding an optimal disk thickness profile for minimum weight design using simulated annealing (SA) and particle swarm optimization (PSO), two modern optimization techniques. In the semi-analytical approach, the radial domain of the disk is divided into virtual sub-domains (rings), and the weight of each ring is minimized. The inequality constraint used in the optimization ensures that the maximum von Mises stress is always less than the yield strength of the disk material, so the rotating disk does not fail. The results show that the minimum weights obtained by the two methods are almost identical. The PSO method gives a profile with slightly less weight (6.9% less than SA), while both PSO and SA are easy to implement and provide more flexibility than classical methods.
2dFLenS and KiDS: determining source redshift distributions with cross-correlations
NASA Astrophysics Data System (ADS)
Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian
2017-03-01
We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability-constrained geometrically nonlinear shallow trusses with snap-through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap-through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in literature are presented to verify the method.
Computational strategies in the dynamic simulation of constrained flexible MBS
NASA Technical Reports Server (NTRS)
Amirouche, F. M. L.; Xie, M.
1993-01-01
This research focuses on the computational dynamics of flexible constrained multibody systems. First, a recursive mapping formulation of the kinematical expressions in a minimum dimension as well as the matrix representation of the equations of motion are presented. The method employs Kane's equation, FEM, and concepts of continuum mechanics. The generalized active forces are extended to include the effects of high temperature conditions, such as creep, thermal stress, and elastic-plastic deformation. The time variant constraint relations for rolling/contact conditions between two flexible bodies are also studied. The constraints for validation of MBS simulation of gear meshing contact using a modified Timoshenko beam theory are also presented. The last part deals with minimization of vibration/deformation of the elastic beam in multibody systems making use of time variant boundary conditions. The above methodologies and computational procedures developed are being implemented in a program called DYAMUS.
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the fields of 3d-GIS, computer aided design and computer graphics (CAD & CG), virtual simulation and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud using curvature extrema and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationships among sample points are established. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that a correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods; meanwhile, it avoids improper propagation of normals across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Moreover, an appropriate k-neighborhood helps to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional interference.
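The minimum-spanning-tree step in the feature extraction can be prototyped quickly on a k-nearest-neighbour graph, as in the sketch below (scipy; the point cloud and k are arbitrary). The authors' curvature-based point selection would run before this step.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def knn_mst(points, k=8):
    """Build a k-nearest-neighbour graph over the point cloud and extract
    its Euclidean minimum spanning tree, a common backbone for chaining
    feature points (e.g., curvature extrema) into feature lines."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)      # first hit is the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), k)
    graph = csr_matrix((dist[:, 1:].ravel(), (rows, idx[:, 1:].ravel())),
                       shape=(n, n))
    return minimum_spanning_tree(graph)          # sparse matrix of MST edges

pts = np.random.default_rng(0).random((200, 3))  # synthetic point cloud
mst = knn_mst(pts)
print(mst.nnz, "edges in the spanning tree")     # n-1 when the graph is connected
```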
Natural migration rates of trees: Global terrestrial carbon cycle implications. Book chapter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solomon, A.M.
The paper discusses the forest-ecological processes which constrain the rate of response by forests to rapid future environmental change. It establishes a minimum response time by natural tree populations which invade alien landscapes and reach the status of a mature, closed canopy forest when maximum carbon storage is realized. It considers rare long-distance and frequent short-distance seed transport, seedling and tree establishment, sequential tree and stand maturation, and spread between newly established colonies.
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of order N and the cost function (energy) has generically two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of order unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
Optimal Control Strategies for Constrained Relative Orbits
2007-09-01
the chief. The work assumes the Clohessy-Wiltshire closeness assumption between the deputy and chief is valid; however, elliptical chief orbits are... Appendix G. A Closed-Form Solution of the Linear Clohessy-Wiltshire Equations... Counterspace... CW Clohessy-Wiltshire... DARPA Defense Advanced Research
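The closed-form solution of the linear Clohessy-Wiltshire equations referenced in Appendix G is a standard textbook result and can be written directly as a state propagation, as sketched below (x radial, y along-track, z cross-track; the example state and mean motion are arbitrary).

```python
import numpy as np

def cw_propagate(state0, n, t):
    """Closed-form solution of the linear Clohessy-Wiltshire equations.
    state0 = [x, y, z, vx, vy, vz] in the chief's LVLH frame (x radial,
    y along-track, z cross-track); n = chief mean motion [rad/s]."""
    x0, y0, z0, vx0, vy0, vz0 = state0
    s, c = np.sin(n * t), np.cos(n * t)
    x  = (4 - 3*c)*x0 + (s/n)*vx0 + (2/n)*(1 - c)*vy0
    y  = 6*(s - n*t)*x0 + y0 + (2/n)*(c - 1)*vx0 + (1/n)*(4*s - 3*n*t)*vy0
    z  = z0*c + (vz0/n)*s
    vx = 3*n*s*x0 + c*vx0 + 2*s*vy0
    vy = 6*n*(c - 1)*x0 - 2*s*vx0 + (4*c - 3)*vy0
    vz = -n*z0*s + vz0*c
    return np.array([x, y, z, vx, vy, vz])

# Example: a deputy 100 m behind the chief in LEO (n ~ 0.0011 rad/s) is an
# equilibrium of the linear equations, so the state stays put.
print(cw_propagate([0.0, -100.0, 0.0, 0.0, 0.0, 0.0], 0.0011, 600.0))
```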
NASA Astrophysics Data System (ADS)
Sagar, M. W.; Seward, D.; Norton, K. P.
2016-12-01
The 650 km-long Australian-Pacific plate boundary Alpine Fault is remarkably straight at a regional scale, except for a prominent S-shaped bend in the northern South Island. This is a restraining bend and has been referred to as the `Big Bend' due to similarities with the Transverse Ranges section of the San Andreas Fault. The Alpine Fault is the main source of seismic hazard in the South Island, yet there are no constraints on slip rates at the Big Bend. Furthermore, the timing of Big Bend development is poorly constrained to the Miocene. To address these issues we are using the fission-track (FT) and 40Ar/39Ar thermochronometers, together with basin-averaged cosmogenic nuclide 10Be concentrations to constrain the onset and rate of Neogene-Quaternary exhumation of the Australian and Pacific plates at the Big Bend. Exhumation rates at the Big Bend are expected to be greater than those for adjoining sections of the Alpine Fault due to locally enhanced shortening. Apatite FT ages and modelled thermal histories indicate that exhumation of the Australian Plate had begun by 13 Ma and 3 km of exhumation has occurred since that time, requiring a minimum exhumation rate of 0.2 mm/year. In contrast, on the Pacific Plate, zircon FT cooling ages suggest ≥7 km of exhumation in the past 2-3 Ma, corresponding to a minimum exhumation rate of 2 mm/year. Preliminary assessment of stream channel gradients either side of the Big Bend suggests equilibrium between uplift and erosion. The implication of this is that Quaternary erosion rates estimated from 10Be concentrations will approximate uplift rates. These uplift rates will help to better constrain the dip-slip rate of the Alpine Fault, which will allow the National Seismic Hazard Model to be updated.
Xia, Yangkun; Fu, Zhuo; Pan, Lijun; Duan, Fenghua
2018-01-01
The vehicle routing problem (VRP) has a wide range of applications in the field of logistics distribution. In order to reduce the cost of logistics distribution, the distance-constrained and capacitated VRP with split deliveries by order (DCVRPSDO) was studied. We show that customer demand, which cannot be split in the classical VRP model, can be split only into discrete deliveries by order. A double-objective programming model is constructed by taking the minimum number of vehicles used and the minimum vehicle traveling cost as the first and second objectives, respectively. The approach contains a series of constraints, such as single depot, single vehicle type, distance constraint, load capacity limit, split delivery by order, etc. DCVRPSDO is a new type of VRP. A new tabu search algorithm is designed to solve the problem, and the test examples show the efficiency of the proposed algorithm. This paper focuses on constructing a double-objective mathematical programming model for DCVRPSDO and designing an adaptive tabu search algorithm (ATSA) with good performance for solving it. The performance of the ATSA is improved by adding several strategies to the search process: (a) a strategy of discrete split deliveries by order is used to split the customer demand; (b) a multi-neighborhood structure is designed to enhance the ability of global optimization; (c) two levels of evaluation objectives are set to select the current solution and the best solution; (d) a discriminating strategy, whereby the best solution must be feasible while the current solution may be infeasible, helps to balance the quality of the solution and the diversity of the neighborhood solutions; (e) an adaptive penalty mechanism helps the candidate solution move closer to the neighborhood of feasible solutions; (f) a strategy of tabu releasing is used to transfer the current solution into a new neighborhood of a better solution.
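The essential tabu-search loop behind an algorithm like ATSA can be illustrated on a stripped-down, single-route problem: a swap neighborhood, a fixed tabu tenure, and an aspiration rule. The sketch below is only that skeleton, on random points; none of the paper's six ATSA strategies are reproduced.

```python
import itertools, random

def route_cost(route, dist):
    path = [0] + route + [0]                     # route starts/ends at depot 0
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def tabu_search(dist, n_customers, iters=200, tenure=7, seed=1):
    """Bare-bones tabu search: neighborhood = all pairwise customer swaps;
    recent swaps are tabu for `tenure` iterations; a tabu move is allowed
    only if it beats the best solution found so far (aspiration)."""
    rnd = random.Random(seed)
    cur = list(range(1, n_customers + 1))
    rnd.shuffle(cur)
    best, best_c = cur[:], route_cost(cur, dist)
    tabu = {}                                    # move -> iteration it stays tabu until
    for it in range(iters):
        cand = None
        for i, j in itertools.combinations(range(n_customers), 2):
            nb = cur[:]
            nb[i], nb[j] = nb[j], nb[i]
            c = route_cost(nb, dist)
            move = (min(nb[i], nb[j]), max(nb[i], nb[j]))
            if tabu.get(move, -1) >= it and c >= best_c:
                continue                         # tabu, and no aspiration
            if cand is None or c < cand[0]:
                cand = (c, nb, move)
        if cand is None:                         # every move tabu: stop early
            break
        cur, move = cand[1], cand[2]
        tabu[move] = it + tenure
        if cand[0] < best_c:
            best, best_c = cur[:], cand[0]
    return best, best_c

rnd = random.Random(0)
pts = [(rnd.random(), rnd.random()) for _ in range(8)]     # index 0 = depot
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
print(tabu_search(dist, n_customers=7))
```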
Modeling hardwood crown radii using circular data analysis
Paul F. Doruska; Hal O. Liechty; Douglas J. Marshall
2003-01-01
Cylindrical data are bivariate data composed of a linear and an angular component. One can use uniform, first-order (one maximum and one minimum) or second-order (two maxima and two minima) models to relate the linear component to the angular component. Crown radii can be treated as cylindrical data when the azimuths at which the radii are measured are also recorded....
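A first-order cylindrical model (one maximum, one minimum per cycle) is linear in cos θ and sin θ, so it can be fitted by ordinary least squares, as the sketch below shows on synthetic crown-radius data; amplitude and phase are recovered from the two harmonic coefficients.

```python
import numpy as np

# First-order cylindrical-data model for crown radius vs azimuth:
# r(theta) = a0 + a1*cos(theta) + a2*sin(theta) = a0 + A*cos(theta - phi),
# giving one maximum and one minimum per cycle.
theta = np.deg2rad(np.arange(0, 360, 45))                  # measurement azimuths
r = (3.0 + 0.8 * np.cos(theta - np.deg2rad(30))            # synthetic radii [m]
     + np.random.default_rng(0).normal(0, 0.05, theta.size))

X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
a0, a1, a2 = np.linalg.lstsq(X, r, rcond=None)[0]
amp, phase = np.hypot(a1, a2), np.arctan2(a2, a1)          # A and phi
print(f"mean radius {a0:.2f} m, amplitude {amp:.2f} m, "
      f"azimuth of max {np.rad2deg(phase):.1f} deg")
```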
Single-Grain (U-Th)/He Ages of Phosphates from St. Severin Chondrite
NASA Astrophysics Data System (ADS)
Min, K. K.; Reiners, P. W.; Shuster, D. L.
2010-12-01
Thermal evolution of chondrites provides valuable information on the heat budget, internal structure and dimensions of their parent bodies that existed before disruption. The St. Severin LL6 ordinary chondrite is known to have experienced relatively slow cooling compared to H chondrites. The timings of primary cooling and subsequent thermal metamorphism were constrained by U/Pb (4.55 Ga), Sm/Nd (4.55 Ga), Rb/Sr (4.51 Ga) and K/Ar (4.4 Ga) systems. However, the cooling history after the thermal metamorphism in a low temperature range (<200 °C) is poorly understood. In order to constrain the low-T thermal history of this meteorite, we performed (1) single-grain (U-Th)/He dating of five chlorapatite and fourteen merrillite aggregates from St. Severin, (2) examination of textural and chemical features of the phosphate aggregates using a scanning electron microscope (SEM), and (3) proton irradiation followed by 4He and 3He diffusion experiments on single grains of chlorapatite and merrillite from the Guarena meteorite, for general characterization of He diffusivity in these major U-Th reservoirs in meteorites. The α-recoil-uncorrected ages from St. Severin span a wide range, from 333 ± 6 Ma to 4620 ± 1307 Ma. The probability density plot of these data shows a typical younging-skewed age distribution with a prominent peak at ~4.3 Ga. The weighted mean of the nine oldest samples is 4.284 ± 0.130 Ga, which is consistent with the peak of the probability plot. The linear dimensions of the phosphates are generally in the range of ~50 µm to 200 µm. The α-recoil correction factor (FT) based on the morphology of the phosphates yields improbably old ages (>4.6 Ga), suggesting that within the sample aggregates significant amounts of the α particles ejected from phosphates were implanted into adjacent phases, and therefore that this correction may not be appropriate in this case. The minimum FT value of 0.95 is calculated based on the peak (U-Th)/He age and 40Ar/39Ar data, which provide the upper limit of the α-recoil-corrected (U-Th)/He ages. From these data, we conclude that St. Severin cooled through the closure temperatures of chlorapatite and merrillite during ~4.3-4.4 Ga. The radiogenic 4He and proton-induced 3He diffusion experiments yield two well-defined linear trends in the Arrhenius plot for the chlorapatite (r = 43 µm) and merrillite (r = 59 µm) grains. The linear regression of the 3He data for chlorapatite yields Ea = 128.1 ± 2.4 kJ/mol and ln(Do/a2) = 11.6 ± 0.5 ln(s-1), generally consistent with terrestrial Durango apatite and meteoritic Acapulco apatite. Linear regression of the merrillite data corresponds to Ea = 135.1 ± 2.5 kJ/mol and ln(Do/a2) = 5.73 ± 0.37 ln(s-1). The new data indicate that the diffusive retentivity of He within merrillite is significantly higher than that of chlorapatite, which has implications for quantitative interpretation of He ages measured in meteoritic phosphates.
NASA Astrophysics Data System (ADS)
Suzuki, Masuo
2013-01-01
A new variational principle of steady states is found by introducing an integrated type of energy dissipation (or entropy production) instead of instantaneous energy dissipation. This new principle is valid both in linear and nonlinear transport phenomena. Prigogine’s dream has now been realized by this new general principle of minimum “integrated” entropy production (or energy dissipation). This new principle does not contradict with the Onsager-Prigogine principle of minimum instantaneous entropy production in the linear regime, but it is conceptually different from the latter which does not hold in the nonlinear regime. Applications of this theory to electric conduction, heat conduction, particle diffusion and chemical reactions are presented. The irreversibility (or positive entropy production) and long time tail problem in Kubo’s formula are also discussed in the Introduction and last section. This constitutes the complementary explanation of our theory of entropy production given in the previous papers (M. Suzuki, Physica A 390 (2011) 1904 and M. Suzuki, Physica A 391 (2012) 1074) and has given the motivation of the present investigation of variational principle.
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-05-01
Typical behavior of the linear programming (LP) problem is studied as a relaxation of the minimum vertex cover (min-VC), a type of integer programming (IP) problem. A lattice-gas model on the Erdös-Rényi random graphs of α-uniform hyperedges is proposed to express both the LP and IP problems of the min-VC in a common statistical mechanical model with a one-parameter family. Statistical mechanical analyses reveal for α = 2 that the LP optimal solution is typically equal to that given by the IP below the critical average degree c = e in the thermodynamic limit. The critical threshold for good accuracy of the relaxation extends the mathematical result c = 1 and coincides with the replica symmetry-breaking threshold of the IP. The LP relaxation for the minimum hitting sets with α ≥ 3, minimum vertex covers on α-uniform random graphs, is also studied. Analytic and numerical results strongly suggest that the LP relaxation fails to estimate optimal values above the critical average degree c = e/(α - 1), where the replica symmetry is broken.
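The LP relaxation in question replaces the binary cover variables with the box 0 ≤ x ≤ 1. A direct way to experiment with it is sketched below using scipy's linprog on a toy graph (the triangle forces the typical fractional half-integral solution).

```python
import numpy as np
from scipy.optimize import linprog

def vc_lp_relaxation(edges, n):
    """LP relaxation of minimum vertex cover: minimize sum(x) subject to
    x_u + x_v >= 1 for every edge and 0 <= x <= 1. The IP restricts x to
    {0,1}; the LP may return fractional (often 1/2-valued) solutions."""
    A = np.zeros((len(edges), n))
    for r, (u, v) in enumerate(edges):
        A[r, u] = A[r, v] = -1.0                 # encode -(x_u + x_v) <= -1
    res = linprog(c=np.ones(n), A_ub=A, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x, res.fun

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]         # a triangle plus a pendant edge
x, val = vc_lp_relaxation(edges, 4)
print(np.round(x, 2), val)                       # fractional 1/2's on the triangle
```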
Rate-compatible protograph LDPC code families with linear minimum distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush (Inventor); Dolinar, Jr., Samuel J. (Inventor); Jones, Christopher R. (Inventor)
2012-01-01
Digital communication coding methods are shown, which generate certain types of low-density parity-check (LDPC) codes built from protographs. A first method creates protographs having the linear minimum distance property and comprising at least one variable node with degree less than 3. A second method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of certain variable nodes as transmitted or non-transmitted. A third method creates families of protographs of different rates, all structurally identical for all rates except for a rate-dependent designation of the status of certain variable nodes as non-transmitted or set to zero. LDPC codes built from the protographs created by these methods can simultaneously have low error floors and low iterative decoding thresholds.
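A generic flavour of the protograph construction, lifting a small protomatrix into a full parity-check matrix with circulant permutations, is sketched below; it illustrates the mechanics only and is not the patented rate-compatible family described here.

```python
import numpy as np

def lift_protograph(B, Z, seed=0):
    """Lift protomatrix B (entry = number of parallel edges between check
    type i and variable type j) into an (mZ x nZ) LDPC parity-check matrix
    by replacing each edge with a Z x Z circulant permutation."""
    rng = np.random.default_rng(seed)
    m, n = B.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            # Distinct circulant shifts keep parallel edges separate.
            for s in rng.choice(Z, size=int(B[i, j]), replace=False):
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] |= np.roll(I, s, axis=1)
    return H

B = np.array([[1, 2, 1], [2, 1, 1]])   # toy protograph, not a designed code
H = lift_protograph(B, Z=8)
print(H.shape, H.sum(axis=1)[:4])      # row weights match the protograph degrees
```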
NASA Technical Reports Server (NTRS)
Hauser, F. D.; Szollosi, G. D.; Lakin, W. S.
1972-01-01
COEBRA, the Computerized Optimization of Elastic Booster Autopilots, is an autopilot design program. The bulk of the design criteria is presented in the form of minimum allowed gain/phase stability margins. COEBRA has two optimization phases: (1) a phase to maximize stability margins; and (2) a phase to optimize structural bending moment load relief capability in the presence of minimum requirements on gain/phase stability margins.
Optimal vibration control of a rotating plate with self-sensing active constrained layer damping
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng
2012-04-01
This paper proposes a finite element model for optimally controlled constrained layer damped (CLD) rotating plate with self-sensing technique and frequency-dependent material property in both the time and frequency domain. Constrained layer damping with viscoelastic material can effectively reduce the vibration in rotating structures. However, most existing research models use complex modulus approach to model viscoelastic material, and an additional iterative approach which is only available in frequency domain has to be used to include the material's frequency dependency. It is meaningful to model the viscoelastic damping layer in rotating part by using the anelastic displacement fields (ADF) in order to include the frequency dependency in both the time and frequency domain. Also, unlike previous ones, this finite element model treats all three layers as having the both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and actuator under an linear quadratic regulation (LQR) controller. After being compared with verified data, this newly proposed finite element model is validated and could be used for future research.
A transformation method for constrained-function minimization
NASA Technical Reports Server (NTRS)
Park, S. K.
1975-01-01
A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
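The idea of the method can be illustrated with a box-constrained toy problem: a smooth surjection maps all of R^n onto the constraint region, and an unconstrained quasi-Newton method (BFGS here, standing in for Davidon-Fletcher-Powell) minimizes the composed function.

```python
# A minimal sketch of the transformation idea: map all of R^n onto a
# box-constrained region with a smooth surjection, then run an
# unconstrained quasi-Newton method on the transformed problem.
import numpy as np
from scipy.optimize import minimize

lo, hi = np.array([-1.5, -0.5]), np.array([1.5, 2.5])   # box constraints

def to_box(u):                 # R^n -> open box (lo, hi), surjective
    return lo + (hi - lo) / (1.0 + np.exp(-u))

def rosenbrock(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

res = minimize(lambda u: rosenbrock(to_box(u)), x0=np.zeros(2), method="BFGS")
print("constrained minimizer:", to_box(res.x))   # ~ (1, 1), inside the box
```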
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Prado, K
Purpose: Building a TG-71 based electron monitor-unit (MU) calculation protocol usually involves extensive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm, up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs, which were then converted to air-gap factors for SSDs of 99-110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The "missing" data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MUs using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those obtained with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
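A minimal sketch of the fitting step, with hypothetical numbers rather than clinic data: a low-order polynomial fitted to a sparse subset of measured cutout factors approximates the "missing" values.

```python
# Approximate "missing" cutout factors from a sparse measured subset with
# a low-order polynomial in cutout side length (illustrative values only).
import numpy as np

side = np.array([3.0, 4.0, 6.0, 10.0])         # measured cutout sizes (cm)
cof  = np.array([0.962, 0.981, 0.995, 1.000])  # measured cutout factors

coeffs = np.polyfit(side, cof, deg=2)          # quadratic fit of the trend
missing = np.array([2.0, 5.0, 8.0])            # sizes excluded from the set
print(dict(zip(missing, np.polyval(coeffs, missing).round(4))))
```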
Women, Education and Empowerment in Asia.
ERIC Educational Resources Information Center
Jayaweera, Swarna
1997-01-01
Examines the relationship between education and economic, political, and social status for empowering women in Asia. Using macro statistics from each country, no positive linear relationship is revealed. Further examined are factors that surface in each area, including gender relations within the family, that constrain the role of education as an…
Weed Diversity Affects Soybean and Maize Yield in a Long Term Experiment in Michigan, USA.
Ferrero, Rosana; Lima, Mauricio; Davis, Adam S; Gonzalez-Andujar, Jose L
2017-01-01
Managing production environments in ways that promote weed community diversity may enhance both crop production and the development of a more sustainable agriculture. This study analyzed data of productivity of maize (corn) and soybean in plots in the Main Cropping System Experiment (MCSE) at the W. K. Kellogg Biological Station Long-Term Ecological Research (KBS-LTER) in Michigan, USA, from 1996 to 2011. We used models derived from population ecology to explore how weed diversity, temperature, and precipitation interact with crop yields. Using three types of models that considered internal and external (climate and weeds) factors, with additive or non-linear variants, we found that changes in weed diversity were associated with changes in rates of crop yield increase over time for both maize and soybeans. The intrinsic capacity for soybean yield increase in response to the environment was greater under more diverse weed communities. Soybean production risks were greatest in the least weed diverse systems, in which each weed species lost was associated with progressively greater crop yield losses. Managing for weed community diversity, while suppressing dominant, highly competitive weeds, may be a helpful strategy for supporting long term increases in soybean productivity. In maize, there was a negative and non-additive response of yields to the interaction between weed diversity and minimum air temperatures. When cold temperatures constrained potential maize productivity through limited resources, negative interactions with weed diversity became more pronounced. We suggest that: (1) maize was less competitive in cold years allowing higher weed diversity and the dominance of some weed species; or (2) that cold years resulted in increased weed richness and prevalence of competitive weeds, thus reducing crop yields. Therefore, we propose to control dominant weed species especially in the years of low yield and extreme minimum temperatures to improve maize yields. Results of our study indicate that through the proactive management of weed diversity, it may be possible to promote both high productivity of crops and environmental sustainability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, D; Salmon, H; Pavan, G
2014-06-01
Purpose: To evaluate and compare retrospective prostate treatment plans using the Volumetric Modulated Arc Therapy (RapidArc™, Varian) technique with single or double arcs at the COI Group. Methods: Ten patients with prostate and seminal vesicle neoplasia were replanned with a defined target treatment volume and a prescribed dose of 78 Gy. A baseline plan using a single arc was developed for each case, seeking the best result on the PTV while minimizing the dose to the organs at risk (OAR). Maintaining the same optimization objectives used in the baseline plan, two copies, optimized with single and double arcs, were developed. The plans were performed with a 10 MV photon beam on Eclipse software, version 11.0, using a Trilogy linear accelerator with a Millennium HD120 multileaf collimator. Comparisons on the PTV were performed for the maximum, minimum and mean dose, dose gradient, number of monitor units, treatment time, and homogeneity and conformity indices. OAR dose constraints were evaluated for both optimizations. Results: Regarding PTV coverage, the minimum, maximum and mean doses were 1.28%, 0.7% and 0.2% higher, respectively, for the single arc. The homogeneity index was 0.99% higher than with double arcs, while the conformity index was on average 0.97% lower with the single arc. The doses to the OARs in both cases complied with the limits recommended by RTOG 0415. With the single arc, the number of monitor units was 10.1% lower and the beam-on time 41.78% shorter than with double arcs. Conclusion: For the optimization of patients with prostate and seminal vesicle neoplasia, the use of a single arc reaches objectives similar to double arcs while decreasing the treatment time and the number of monitor units.
Adikaram, K K L B; Hussein, M A; Effenberger, M; Becker, T
2015-01-01
Data processing requires a robust linear fit identification method. In this paper, we introduce a non-parametric robust linear fit identification method for time series. The method uses an indicator 2/n to identify linear fit, where n is the number of terms in a series. The ratio Rmax = (amax - amin)/(Sn - amin·n) and the ratio Rmin = (amax - amin)/(amax·n - Sn) are both equal to 2/n for an exactly linear series, where amax is the maximum element, amin is the minimum element and Sn is the sum of all elements. If a series expected to follow y = c contains data that do not agree with the form y = c, then Rmax > 2/n and Rmin > 2/n imply that the maximum and minimum elements, respectively, do not agree with the linear fit. We define threshold values for outlier and noise detection as (2/n)(1 + k1) and (2/n)(1 + k2), respectively, where k1 > k2 and 0 ≤ k1 ≤ n/2 - 1. Given this relation and a transformation technique that transforms data into the form y = c, we show that removing all data that do not agree with the linear fit is possible. Furthermore, the method is independent of the number of data points, missing data, removed data points and the nature of the distribution (Gaussian or non-Gaussian) of outliers, noise and clean data. These are major advantages over existing linear fit methods. Since a perfect linear relation between two variables is impossible in the real world, we used artificial data sets with extreme conditions to verify the method. The method detects the correct linear fit even when the percentage of data agreeing with the linear fit is less than 50% and the deviation of data that do not agree with the linear fit is very small, of the order of ±10⁻⁴%. The method results in incorrect detections only when numerical accuracy is insufficient in the calculation process.
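A minimal sketch of the indicator, under the abstract's definitions: for an exactly linear series both ratios equal 2/n, and an injected outlier pushes the corresponding ratio above the threshold.

```python
# The 2/n indicator for a series expected to follow y = c: for an exactly
# linear series both ratios equal 2/n, so values above (2/n)*(1 + k) flag
# the extreme elements as outliers or noise.
import numpy as np

def linear_fit_ratios(a):
    a = np.asarray(a, dtype=float)
    n, s = len(a), a.sum()
    spread = a.max() - a.min()
    r_max = spread / (s - a.min() * n)      # sensitive to the maximum element
    r_min = spread / (a.max() * n - s)      # sensitive to the minimum element
    return r_max, r_min, 2.0 / n

clean = np.arange(1.0, 11.0)                # exactly linear: ratios == 2/n
dirty = clean.copy(); dirty[7] = 40.0       # inject one outlier
for series in (clean, dirty):
    r_max, r_min, ref = linear_fit_ratios(series)
    print(f"Rmax={r_max:.3f} Rmin={r_min:.3f} 2/n={ref:.3f}")
```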
Energy Requirements of Hydrogen-utilizing Microbes: A Boundary Condition for Subsurface Life
NASA Technical Reports Server (NTRS)
Hoehler, Tori M.; Alperin, Marc J.; Albert, Daniel B.; Martens, Christopher S.
2003-01-01
Microbial ecosystems based on the energy supplied by water-rock chemistry carry particular significance in the context of geo- and astrobiology. With no direct dependence on solar energy, lithotrophic microbes could conceivably penetrate a planetary crust to a depth limited only by temperature or pressure constraints (several kilometers or more). The deep lithospheric habitat is thereby potentially much greater in volume than its surface counterpart, and in addition offers a stable refuge against inhospitable surface conditions related to climatic or atmospheric evolution (e.g., Mars) or even high-energy impacts (e.g., early in Earth's history). The possibilities for a deep microbial biosphere are, however, greatly constrained by life's need to obtain energy at a certain minimum rate (the maintenance energy requirement) and of a certain minimum magnitude (the energy quantum requirement). The mere existence of these requirements implies that a significant fraction of the chemical free energy available in the subsurface environment cannot be exploited by life. Similar limits may also apply to the usefulness of light energy at very low intensities or long wavelengths. Quantification of these minimum energy requirements in terrestrial microbial ecosystems will help to establish a criterion of energetic habitability that can significantly constrain the prospects for life in Earth's subsurface, or on other bodies in the solar system. Our early work has focused on quantifying the biological energy quantum requirement for methanogenic archaea, as representatives of a plausible subsurface metabolism, in anoxic sediments (where energy availability is among the most limiting factors in microbial population growth). In both field and laboratory experiments utilizing these sediments, methanogens retain a remarkably consistent free energy intake, in the face of fluctuating environmental conditions that affect energy availability. The energy yields apparently required by methanogens in these sediment systems for sustained metabolism are about half that previously thought necessary. Lowered energy requirements would imply that a correspondingly greater proportion of the planetary subsurface could represent viable habitat for microorganisms.
Alarcón, Diego; Cavieres, Lohengrin A
2015-01-01
In order to assess the effects of climate change on temperate rainforest plants in southern South America in terms of habitat size and representation in protected areas, and to determine whether the expected impacts are similar for dominant trees and understory plant species, we used niche modeling constrained by species migration on 118 plant species, considering two groups of dominant trees and two groups of understory ferns. Representation in protected areas included Chilean national protected areas, private protected areas, and priority areas planned for future reserves, with two thresholds for minimum representation at the country level: 10% and 17%. With a 10% representation threshold, national protected areas currently represent only 50% of the assessed species. Private reserves are important since they increase the species representation level up to 66%. Moreover, 97% of the evaluated species may achieve the minimum representation target only if the proposed priority areas are included. Under the climate change scenario, representation levels increase slightly to 53%, 69%, and 99%, respectively, for the categories mentioned above. Thus, the current location of all the representation categories is useful for overcoming climate change by 2050. Climate change impacts on habitat size and representation of dominant trees in protected areas are not applicable to understory plants, highlighting the importance of assessing these effects with a larger number of species. Although climate change will modify the habitat size of plant species in South American temperate rainforests, it will have no significant impact in terms of the number of species adequately represented in Chile, where the implementation of the proposed reserves is vital to accomplish the present and future minimum representation. Our results also show the importance of using migration dispersal constraints to develop more realistic future habitat maps from climate change predictions.
Wahab, M Farooq; Patel, Darshan C; Armstrong, Daniel W
2017-08-04
Most peak shapes obtained in separation science depart from linearity for various reasons such as thermodynamic, kinetic, or flow based effects. An indication of the nature of asymmetry often helps in problem solving e.g. in column overloading, slurry packing, buffer mismatch, and extra-column band broadening. However, existing tests for symmetry/asymmetry only indicate the skewness in excess (tail or front) and not the presence of both. Two simple graphical approaches are presented to analyze peak shapes typically observed in gas, liquid, and supercritical fluid chromatography as well as capillary electrophoresis. The derivative test relies on the symmetry of the inflection points and the maximum and minimum values of the derivative. The Gaussian test is a constrained curve fitting approach and determines the residuals. The residual pattern graphically allows the user to assess the problematic regions in a given peak, e.g., concurrent tailing or fronting, something which cannot be easily done with other current methods. The template provided in MS Excel automates this process. The total peak shape analysis extracts the peak parameters from the upper sections (>80% height) of the peak rather than the half height as is done conventionally. A number of situations are presented and the utility of this approach in solving practical problems is demonstrated. Copyright © 2017 Elsevier B.V. All rights reserved.
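A minimal sketch of the Gaussian-test idea (synthetic data and an ordinary least-squares fit, rather than the authors' constrained fit and Excel template): fit a Gaussian to a peak and inspect where the residuals concentrate.

```python
# Fit a Gaussian to a chromatographic peak and inspect the residual
# pattern, whose signed structure reveals fronting and/or tailing.
import numpy as np
from scipy.optimize import curve_fit

def gauss(t, a, t0, sigma):
    return a * np.exp(-(t - t0)**2 / (2 * sigma**2))

t = np.linspace(0, 10, 400)
# synthetic peak: a Gaussian with an added exponential-like tail
peak = gauss(t, 1.0, 4.0, 0.5) + 0.15 * np.exp(-(t - 4.6)**2 / 1.2) * (t > 4.0)

popt, _ = curve_fit(gauss, t, peak, p0=[1.0, 4.0, 0.5])
residuals = peak - gauss(t, *popt)
print("max +residual (tailing side):", residuals[t > popt[1]].max().round(3))
print("max +residual (fronting side):", residuals[t < popt[1]].max().round(3))
```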
Spacecraft Mission Design for the Mitigation of the 2017 PDC Hypothetical Asteroid Threat
NASA Technical Reports Server (NTRS)
Barbee, Brent W.; Sarli, Bruno V.; Lyzhoft, Joshua; Chodas, Paul W.; Englander, Jacob A.
2017-01-01
This paper presents detailed mission design analysis results for the 2017 Planetary Defense Conference (PDC) Hypothetical Asteroid Impact Scenario, documented at https://cneos.jpl.nasa.gov/pd/cs/pdc17/. The mission design includes campaigns for both reconnaissance (flyby or rendezvous) of the asteroid (to characterize it and the nature of the threat it poses to Earth) and mitigation of the asteroid, via kinetic impactor deflection, nuclear explosive device (NED) deflection, or NED disruption. Relevant scenario parameters are varied to assess the sensitivity of the design outcome, such as asteroid bulk density, asteroid diameter, momentum enhancement factor, spacecraft launch vehicle, and mitigation system type. Different trajectory types are evaluated in the mission design process, from purely ballistic to those involving optimal midcourse maneuvers, planetary gravity assists, and/or low-thrust solar electric propulsion. The trajectory optimization is targeted around peak deflection points that were found through a novel linear numerical technique. The optimization process includes constraint parameters such as Earth departure date, launch declination, spacecraft/asteroid relative velocity and solar phase angle, spacecraft dry mass, minimum/maximum spacecraft distances from the Sun and Earth, and Earth/spacecraft communications line of sight. Results show that one of the best options for the 2017 PDC deflection is a solar electric propelled rendezvous mission with a single spacecraft using an NED for the deflection.
Visual communication with retinex coding.
Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R
2000-04-10
Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
NASA Astrophysics Data System (ADS)
Hanada, Masaki; Nakazato, Hidenori; Watanabe, Hitoshi
Multimedia applications such as music or video streaming, video teleconferencing and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. In order to guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and a leaky bucket traffic regulator is the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky bucket constrained sessions. However, the delay bounds for leaky bucket constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. In order to solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is a rate-based scheduling policy like GPS, and controls the service rate in order to lower the delay bounds for leaky bucket constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS for leaky-bucket constrained sessions with deterministic delay requirements. This CAC algorithm determines the optimal values of the ORC-GPS parameters from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with one for GPS in terms of schedulable region and computational complexity.
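For orientation, the snippet below computes the textbook GPS delay bound for a leaky-bucket constrained session, sigma/g_i with g_i the guaranteed rate; this is the classical bound that ORC-GPS is designed to tighten, not the ORC-GPS bound itself.

```python
# Textbook GPS delay bound for a leaky-bucket (sigma, rho) session: with
# guaranteed rate g_i = phi_i / sum(phi) * C >= rho, the worst-case delay
# is bounded by sigma / g_i. Constant weights make this bound loose.
def gps_delay_bound(sigma, rho, phi, phis, capacity):
    g = phi / sum(phis) * capacity          # minimum guaranteed service rate
    assert g >= rho, "session not stable under GPS"
    return sigma / g

# three sessions sharing a 10 Mb/s link; session 0 is bursty but slow
phis = [0.5, 0.3, 0.2]
print(gps_delay_bound(sigma=100_000, rho=1e6, phi=phis[0],
                      phis=phis, capacity=10e6))   # delay bound in seconds
```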
On the linear relation between the mean and the standard deviation of a response time distribution.
Wagenmakers, Eric-Jan; Brown, Scott
2007-07-01
Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different experimental paradigms support a linear relation between RT mean and RT standard deviation. Both R. Ratcliff's (1978) diffusion model and G. D. Logan's (1988) instance theory of automatization provide explanations for this linear relation. The authors identify and discuss 3 specific boundary conditions for the linear law to hold. The law constrains RT models and supports the use of the coefficient of variation to (a) compare variability while controlling for differences in baseline speed of processing and (b) assess whether changes in performance with practice are due to quantitative speedup or qualitative reorganization. Copyright 2007 APA.
Staging optics considerations for a plasma wakefield acceleration linear collider
NASA Astrophysics Data System (ADS)
Lindstrøm, C. A.; Adli, E.; Allen, J. M.; Delahaye, J. P.; Hogan, M. J.; Joshi, C.; Muggli, P.; Raubenheimer, T. O.; Yakimenko, V.
2016-09-01
Plasma wakefield acceleration offers acceleration gradients of several GeV/m, ideal for a next-generation linear collider. The beam optics requirements between plasma cells include injection and extraction of drive beams, matching the main beam beta functions into the next cell, canceling dispersion as well as constraining bunch lengthening and chromaticity. To maintain a high effective acceleration gradient, this must be accomplished in the shortest distance possible. A working example is presented, using novel methods to correct chromaticity, as well as scaling laws for a high energy regime.
A linearized theory method of constrained optimization for supersonic cruise wing design
NASA Technical Reports Server (NTRS)
Miller, D. S.; Carlson, H. W.; Middleton, W. D.
1976-01-01
A linearized theory wing design and optimization procedure which allows physical realism and practical considerations to be imposed as constraints on the optimum (least drag due to lift) solution is discussed and examples of application are presented. In addition to the usual constraints on lift and pitching moment, constraints are imposed on wing surface ordinates and wing upper surface pressure levels and gradients. The design procedure also provides the capability of including directly in the optimization process the effects of other aircraft components such as a fuselage, canards, and nacelles.
Control of linear uncertain systems utilizing mismatched state observers
NASA Technical Reports Server (NTRS)
Goldstein, B.
1972-01-01
The control of linear continuous dynamical systems is investigated as a problem of limited state feedback control. The equations which describe the structure of an observer are developed, constrained to time-invariant systems. The optimal control problem is formulated, accounting for the uncertainty in the design parameters. Expressions for bounds on closed-loop stability are also developed. The results indicate that very little uncertainty may be tolerated before divergence occurs in the recursive computation algorithms, and the derived stability bound yields extremely conservative estimates of the regions of allowable parameter variations.
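A minimal sketch of the observer ingredient, for an assumed toy system rather than the paper's formulation: a full-order Luenberger observer gain obtained by pole placement on the dual system.

```python
# Full-order Luenberger observer for a linear time-invariant system:
# choose L so that A - L C has fast, stable eigenvalues; the estimate
# converges when the model matches the plant.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0., 1.], [-2., -0.5]])
C = np.array([[1., 0.]])

# pole placement on the dual system (A^T, C^T) gives the observer gain L
L = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T
print("observer eigenvalues:", np.linalg.eigvals(A - L @ C))
# with mismatched parameters in A, these eigenvalues shift and the
# estimate degrades -- the uncertainty issue studied in the paper
```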
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using a surjective mapping, the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm and a generalized primal-dual interior point linear programming algorithm.
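A minimal sketch of the affine-scaling (Dikin) step recovered in the linear programming case, on an assumed toy LP: scale by the current interior point, project the cost onto the null space of the scaled constraints, and step inside the Dikin ellipsoid.

```python
# Dikin's affine-scaling iteration for min c^T x, Ax = b, x > 0; the
# start x must be strictly feasible, and the step stays in null(A).
import numpy as np

def affine_scaling(A, c, x, alpha=0.9, iters=50):
    for _ in range(iters):
        D = np.diag(x)                       # scaling by current interior point
        Ad = A @ D
        # projected (scaled) steepest-descent direction
        w = np.linalg.solve(Ad @ Ad.T, Ad @ (D @ c))
        dz = D @ c - Ad.T @ w
        if np.linalg.norm(dz) < 1e-10:
            break
        x = x - alpha * (D @ dz) / np.linalg.norm(dz)
    return x

# toy LP: min x1 + 2 x2  s.t.  x1 + x2 + s = 4 (slack s), all vars > 0
A = np.array([[1., 1., 1.]])
c = np.array([1., 2., 0.])
x0 = np.array([1., 1., 2.])                  # strictly feasible: sums to 4
print(affine_scaling(A, c, x0))              # -> approaches (0, 0, 4)
```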
Oscillations in stellar atmospheres
NASA Technical Reports Server (NTRS)
Costa, A.; Ringuelet, A. E.; Fontenla, J. M.
1989-01-01
Atmospheric excitation and propagation of oscillations are analyzed for typical pulsating stars. The linear, plane-parallel approach for the pulsating atmosphere gives a local description of the phenomenon. From the local analysis of oscillations, the minimum frequencies are obtained for radially propagating waves. The comparison of the minimum frequencies obtained for a variety of stellar types is in good agreement with the observed periods of the oscillations. The role of the atmosphere in global stellar pulsations is thus emphasized.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severely frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with a full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.
Application of quadratic optimization to supersonic inlet control.
NASA Technical Reports Server (NTRS)
Lehtinen, B.; Zeller, J. R.
1972-01-01
This paper describes the application of linear stochastic optimal control theory to the design of the control system for the air intake (inlet) of a supersonic air-breathing propulsion system. The controls must maintain a stable inlet shock position in the presence of random airflow disturbances and prevent inlet unstart. Two different linear time-invariant controllers are developed. One is designed to minimize a nonquadratic index, the expected frequency of inlet unstart, and the other is designed to minimize the mean square value of inlet shock motion. The quadratic equivalence principle is used to obtain a linear controller that minimizes the nonquadratic index. The two controllers are compared on the basis of unstart prevention, control effort requirements, and frequency response. It is concluded that while controls designed to minimize unstarts are desirable in that the index minimized is physically meaningful, the computation time required is longer than for the minimum mean square shock position approach. The simpler minimum mean square shock position solution produced expected unstart frequency values which were not significantly larger than those of the nonquadratic solution.
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
NASA Astrophysics Data System (ADS)
Brunner, Philip; Doherty, J.; Simmons, Craig T.
2012-07-01
The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, measurements of vadose ET or soil moisture poorly constrain regional groundwater system forcing functions.
H2, fixed architecture, control design for large scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1990-01-01
The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
A comparison of optimization algorithms for localized in vivo B0 shimming.
Nassirpour, Sahar; Chang, Paul; Fillmer, Ariane; Henning, Anke
2018-02-01
To compare several different optimization algorithms currently used for localized in vivo B0 shimming, and to introduce a novel, fast, and robust constrained regularized algorithm (ConsTru) for this purpose. Ten different optimization algorithms (including samples from both generic and dedicated least-squares solvers, and a novel constrained regularized inversion method) were implemented and compared for shimming in five different shimming volumes on 66 in vivo data sets from both 7 T and 9.4 T. The best algorithm was chosen to perform single-voxel spectroscopy at 9.4 T in the frontal cortex of the brain on 10 volunteers. The performance tests showed that a shimming algorithm is prone to unstable solutions if it depends on the value of a starting point and is not regularized to handle ill-conditioned problems. The ConsTru algorithm proved to be the most robust, fast, and efficient of the chosen algorithms. It enabled acquisition of spectra of reproducibly high quality in the frontal cortex at 9.4 T. For localized in vivo B0 shimming, the use of a dedicated linear least-squares solver instead of a generic nonlinear one is highly recommended. Among the linear solvers, the constrained regularized method (ConsTru) was found to be both fast and most robust. Magn Reson Med 79:1145-1156, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
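In the spirit of a constrained regularized solve (not the published ConsTru implementation), the sketch below combines Tikhonov damping with hard coil-current bounds via scipy.optimize.lsq_linear on synthetic field maps; the regularization weight and current limits are assumptions.

```python
# Constrained, regularized shim solve: Tikhonov-damped least squares with
# hard coil-current bounds. Field map and coil basis are synthetic.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
n_vox, n_coils = 500, 8
S = rng.normal(size=(n_vox, n_coils))      # coil field maps (Hz per unit current)
b0 = rng.normal(scale=30.0, size=n_vox)    # measured B0 offset to cancel (Hz)

lam = 1.0                                  # regularization weight (assumed)
A = np.vstack([S, lam * np.eye(n_coils)])  # augment for Tikhonov damping
y = np.concatenate([-b0, np.zeros(n_coils)])

res = lsq_linear(A, y, bounds=(-5.0, 5.0)) # amplifier current limits (assumed)
print("residual std (Hz):", np.std(S @ res.x + b0))
```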
Sparsest representations and approximations of an underdetermined linear system
NASA Astrophysics Data System (ADS)
Tardivel, Patrick J. C.; Servien, Rémi; Concordet, Didier
2018-05-01
In an underdetermined linear system of equations, constrained l1 minimization methods such as the basis pursuit or the lasso are often used to recover one of the sparsest representations or approximations of the system. The null space property is a sufficient and ‘almost’ necessary condition to recover a sparsest representation with the basis pursuit. Unfortunately, this property cannot be easily checked. On the other hand, the mutual coherence is an easily checkable sufficient condition ensuring that the basis pursuit recovers one of the sparsest representations. Because the mutual coherence condition is too strong, it is hardly met in practice. Even if one of these conditions holds, to our knowledge, there is no theoretical result ensuring that the lasso solution is one of the sparsest approximations. In this article, we study a novel constrained problem that gives, without any condition, one of the sparsest representations or approximations. To solve this problem, we provide a numerical method and we prove its convergence. Numerical experiments show that this approach gives better results than both the basis pursuit problem and the reweighted l1 minimization problem.
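For context, a minimal basis-pursuit sketch (the standard LP formulation, not the authors' new constrained problem): recover a sparse representation of an underdetermined system by l1 minimization, with the usual positive/negative split of the variables.

```python
# Basis pursuit: min ||x||_1 s.t. Ax = y, cast as a linear program with
# x = u - v, u, v >= 0, and solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n, k = 20, 60, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = [1.5, -2.0, 0.7]
y = A @ x_true

c = np.ones(2 * n)                          # minimize sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * 2 * n)
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```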
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elbert, Stephen T.; Kalsi, Karanjit; Vlachopoulou, Maria
Financial Transmission Rights (FTRs) help power market participants reduce price risks associated with transmission congestion. FTRs are issued through a process of solving a constrained optimization problem with the objective of maximizing the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, a novel non-linear dynamical system (NDS) approach is proposed to solve the optimization problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers such as CPLEX™ and tested on large-scale systems using data from the Western Electricity Coordinating Council (WECC). The NDS is demonstrated to outperform the widely used CPLEX algorithms while exhibiting superior scalability. Furthermore, the NDS-based solver can be easily parallelized, which results in significant computational improvement.
The Efficiency of Split Panel Designs in an Analysis of Variance Model
Wang, Wei-Guo; Liu, Hai-Jun
2016-01-01
We consider split panel design efficiency in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the full sample that minimizes the variances of best linear unbiased estimators of linear combinations of parameters. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of split panel design given a budget, and transform the problem into a constrained nonlinear integer program. Specifically, an efficient algorithm is designed to solve the constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm's efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of optimization problems which can be efficiently handled.
Microgrid Optimal Scheduling With Chance-Constrained Islanding Capability
Liu, Guodong; Starke, Michael R.; Xiao, B.; ...
2017-01-13
To facilitate the integration of variable renewable generation and improve the resilience of electricity supply in a microgrid, this paper proposes an optimal scheduling strategy for microgrid operation considering constraints of islanding capability. A new concept, the probability of successful islanding (PSI), indicating the probability that a microgrid maintains enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation after instantaneously islanding from the main grid, is developed. The PSI is formulated as a mixed-integer linear program using a multi-interval approximation, taking into account the probability distributions of the forecast errors of wind, PV and load. With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of microgrids and solved by mixed-integer linear programming (MILP). Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling strategy. Lastly, we verify the relationship between PSI and various factors.
Optimal apodization design for medical ultrasound using constrained least squares part I: theory.
Guenther, Drake A; Walker, William F
2007-02-01
Aperture weighting functions are critical design parameters in the development of ultrasound systems because beam characteristics affect the contrast and point resolution of the final output image. In previous work by our group, we developed a metric that quantifies a broadband imaging system's contrast resolution performance. We now use this metric to formulate a novel general ultrasound beamformer design method. In our algorithm, we use constrained least squares (CLS) techniques and a linear algebra formulation to describe the system point spread function (PSF) as a function of the aperture weightings. In one approach, we minimize the energy of the PSF outside a certain boundary and impose a linear constraint on the aperture weights. In a second approach, we minimize the energy of the PSF outside a certain boundary while imposing a quadratic constraint on the energy of the PSF inside the boundary. We present detailed analysis for an arbitrary ultrasound imaging system and discuss several possible applications of the CLS techniques, such as designing aperture weightings to maximize contrast resolution and improve the system depth of field.
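The first CLS approach has a closed-form core that is easy to sketch for a narrowband far-field array (an assumption; the paper treats broadband pulse-echo imaging): minimize the beam-pattern energy outside a mainlobe boundary subject to a unit-gain linear constraint on the weights.

```python
# CLS beamforming sketch: minimize PSF energy outside the mainlobe
# subject to c^T w = 1 (unit response at broadside); the Lagrange
# solution is w = R^{-1} c / (c^T R^{-1} c).
import numpy as np

n = 32                                       # array elements
theta = np.linspace(-np.pi/2, np.pi/2, 721)  # beam pattern angles
d = 0.5                                      # element pitch in wavelengths
A = np.exp(2j*np.pi*d*np.outer(np.arange(n), np.sin(theta)))

outside = np.abs(theta) > np.radians(6)      # sidelobe energy to suppress
R = (A[:, outside] @ A[:, outside].conj().T).real
c = np.ones(n)                               # unit gain at theta = 0

w = np.linalg.solve(R, c); w /= c @ w        # closed-form CLS weights
pattern = 20*np.log10(np.abs(w @ A) / np.abs(w @ A).max())
print("peak sidelobe (dB):", pattern[outside].max().round(1))
```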
A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data
NASA Astrophysics Data System (ADS)
MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.
2015-12-01
Predicting the fate of the ecosystem carbon, C, stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e. photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is from inaccurate parameter values resulting from incomplete model calibration. Solar Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g. Kuppel et al., 2014). The parameters are constrained per Plant Functional type as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, Carbonyl sulphide (OCS) following the approach of Launois et al. (ACPD, in review). We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.
A Unique Technique to get Kaprekar Iteration in Linear Programming Problem
NASA Astrophysics Data System (ADS)
Sumathi, P.; Preethy, V.
2018-04-01
This paper explores a frivolous number popularly known as the Kaprekar constant, and the Kaprekar numbers. A large number of courses and different classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. An approach to obtaining the minimum and maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers is presented through linear programming techniques.
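The iteration itself is easy to state in code; the sketch below verifies the well-known facts that 6174 is the fixed point and that at most 7 iterations are needed for any four-digit input with at least two distinct digits (this is the plain iteration, not the paper's LP formulation).

```python
# Kaprekar iteration for four-digit numbers: repeatedly subtract the
# ascending-digit arrangement from the descending one; every valid input
# reaches the constant 6174 in at most 7 steps.
def kaprekar_steps(n):
    steps = 0
    while n != 6174:
        digits = f"{n:04d}"
        if len(set(digits)) == 1:
            raise ValueError("repdigits never reach 6174")
        hi = int("".join(sorted(digits, reverse=True)))
        lo = int("".join(sorted(digits)))
        n = hi - lo
        steps += 1
    return steps

print(kaprekar_steps(3524))                    # -> 3
print(max(kaprekar_steps(n) for n in range(1, 10000)
          if len(set(f"{n:04d}")) > 1))        # -> 7
```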
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
NASA Astrophysics Data System (ADS)
Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo
2017-12-01
A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulted mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
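A minimal sketch of density-graded point seeding followed by Delaunay triangulation: rejection sampling against a size function, in the spirit of, but much simpler than, Persson's force-equilibrium algorithm; the domain and size function are assumptions, and scipy's plain (unconstrained) Delaunay stands in for the constrained triangulation.

```python
# Density-graded point seeding plus Delaunay triangulation on a unit-square
# "fracture" plane: points are accepted with probability ~ 1/h(x)^2, so the
# resulting edge lengths scale with the size function h.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(4)

def h(p):                                    # target edge length: fine near x=0
    return 0.02 + 0.15 * p[:, 0]

cand = rng.random((20000, 2))
keep = rng.random(len(cand)) < (h(cand).min() / h(cand))**2
pts = cand[keep]

tri = Delaunay(pts)                          # triangulate accepted points
print(len(pts), "points,", len(tri.simplices), "triangles")
```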
The Apollo 16 regolith - A petrographically-constrained chemical mixing model
NASA Technical Reports Server (NTRS)
Kempa, M. J.; Papike, J. J.; White, C.
1980-01-01
A mixing model for Apollo 16 regolith samples has been developed, which differs from other A-16 mixing models in that it is both petrographically constrained and statistically sound. The model was developed using three components representative of rock types present at the A-16 site, plus a representative mare basalt. A linear least-squares fitting program employing the chi-squared test and sum of components was used to determine goodness of fit. Results for surface soils indicate that either there are no significant differences between Cayley and Descartes material at the A-16 site or, if differences do exist, they have been obscured by meteoritic reworking and mixing of the lithologies.
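A minimal sketch of the mixing-model computation with illustrative (not Apollo 16) compositions: nonnegative least squares recovers component proportions from oxide abundances, the kind of constrained linear fit the abstract describes.

```python
# Petrographically constrained mixing model sketch: nonnegative least
# squares finds component proportions whose weighted sum best reproduces
# a soil composition. All numbers are hypothetical.
import numpy as np
from scipy.optimize import nnls

# columns: hypothetical end members (anorthosite, impact melt, mare basalt)
oxides = np.array([[31.0, 21.0, 10.5],      # Al2O3 (wt%)
                   [ 3.0, 10.0, 19.0],      # FeO
                   [16.0, 13.0, 10.5],      # CaO
                   [ 0.5,  9.0,  8.0]])     # MgO
soil = np.array([27.0, 5.5, 15.0, 2.5])     # measured regolith composition

x, rnorm = nnls(oxides, soil)               # proportions >= 0
print("mixing proportions:", x.round(3), "sum:", x.sum().round(3),
      "misfit:", rnorm.round(3))
```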
Phase space flows for non-Hamiltonian systems with constraints
NASA Astrophysics Data System (ADS)
Sergi, Alessandro
2005-09-01
In this paper, non-Hamiltonian systems with holonomic constraints are treated by a generalization of Dirac’s formalism. Non-Hamiltonian phase space flows can be described by generalized antisymmetric brackets or by general Liouville operators which cannot be derived from brackets. Both situations are treated. In the first case, a Nosé-Dirac bracket is introduced as an example. In the second one, Dirac’s recipe for projecting out constrained variables from time translation operators is generalized and then applied to non-Hamiltonian linear response. Dirac’s formalism avoids spurious terms in the response function of constrained systems. However, corrections coming from phase space measure must be considered for general perturbations.
Low authority-threshold control for large flexible structures
NASA Technical Reports Server (NTRS)
Zimmerman, D. C.; Inman, D. J.; Juang, J.-N.
1988-01-01
An improved active control strategy for the vibration control of large flexible structures is presented. A minimum force, low authority-threshold controller is developed to bring a system with or without known external disturbances back into an 'allowable' state manifold over a finite time interval. The concept of a constrained, or allowable feedback form of the controller is introduced that reflects practical hardware implementation concerns. The robustness properties of the control strategy are then assessed. Finally, examples are presented which highlight the key points made within the paper.
Detection of Ionospheric Alfven Resonator Signatures in the Equatorial Ionosphere
NASA Technical Reports Server (NTRS)
Simoes, Fernando; Klenzing, Jeffrey; Ivanov, Stoyan; Pfaff, Robert; Freudenreich, Henry; Bilitza, Dieter; Rowland, Douglas; Bromund, Kenneth; Liebrecht, Maria Carmen; Martin, Steven;
2012-01-01
The ionospheric response to the unusually deep solar minimum of cycle 23/24 offered unique opportunities for investigating space weather in the near-Earth environment. We report ultra low frequency electric field signatures related to the ionospheric Alfven resonator detected by the Communications/Navigation Outage Forecasting System (C/NOFS) satellite in the equatorial region. These signatures are used to constrain ionospheric empirical models and offer a new approach for monitoring ionosphere dynamics and space weather phenomena, namely aeronomy processes, Alfven wave propagation, and troposphere-ionosphere-magnetosphere coupling mechanisms.
NASA Astrophysics Data System (ADS)
Epishin, V. A.; Maslov, Vyacheslav A.; Ryabykh, V. N.; Svich, V. A.; Topkov, A. N.
1990-04-01
Theoretical and experimental investigations are reported of the propagation of axisymmetric linearly polarized laser radiation beams along hollow-core dielectric waveguides. The conditions for transmission with minimum distortion of the complex amplitude and minimum excitation losses are established for beams in the form of Gaussian-Laguerre modes. A scaling relationship is obtained for the attenuation constant of the EH11 mode in glass waveguides acting as transmission lines and for laser cells handling submillimeter wavelengths.
Analysis and application of minimum variance discrete time system identification
NASA Technical Reports Server (NTRS)
Kaufman, H.; Kotob, S.
1975-01-01
An on-line minimum variance parameter identifier is developed which embodies both accuracy and computational efficiency. The formulation results in a linear estimation problem with both additive and multiplicative noise. The resulting filter which utilizes both the covariance of the parameter vector itself and the covariance of the error in identification is proven to be mean square convergent and mean square consistent. The MV parameter identification scheme is then used to construct a stable state and parameter estimation algorithm.
An interleukin 13 receptor α2–specific peptide homes to human glioblastoma multiforme xenografts
Pandya, Hetal; Gibo, Denise M.; Garg, Shivank; Kridel, Steven; Debinski, Waldemar
2012-01-01
Interleukin 13 receptor α2 (IL-13Rα2) is a plasma membrane receptor associated with glioblastoma multiforme (GBM), a brain tumor of dismal prognosis. Here, we isolated peptide ligands for IL-13Rα2 with use of a cyclic disulphide-constrained heptapeptide phage display library and 2 in vitro biopanning schemes with GBM cells that do (G26-H2 and SnB19-pcDNA cells) or do not (G26-V2 and SnB19-asIL-13Rα2 cells) over-express IL-13Rα2. We identified 3 peptide phages that bind to IL-13Rα2 in cellular and protein assays. One of the 3 peptide phages, termed Pep-1, bound to IL-13Rα2 with the highest specificity, surprisingly also in a reducing environment. Pep-1 was thus synthesized and further analyzed in both linear and disulphide-constrained forms. The linear peptide bound to IL-13Rα2 more avidly than did the disulphide-constrained form and was efficiently internalized by IL-13Rα2–expressing GBM cells. The native ligand, IL-13, did not compete for Pep-1 binding to the receptor, and vice versa, in any of the assays, indicating that the peptide might bind a site on the receptor different from that of the native ligand. Furthermore, we demonstrated by noninvasive near-infrared fluorescence imaging in nude mice that Pep-1 binds and homes to both subcutaneous and orthotopic human GBM xenografts expressing IL-13Rα2 when injected by an intravenous route. Thus, we identified a linear heptapeptide specific for IL-13Rα2 that is capable of crossing the blood-brain tumor barrier and homing to tumors. Pep-1 can be further developed for various applications in cancer and/or inflammatory diseases. PMID:21946118
Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D
2002-07-01
Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
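The bias/variance bookkeeping described above can be illustrated with a toy linear reconstruction. The sketch below uses a Tikhonov-regularized inverse as a stand-in for the diffusion-model solver; the forward matrix, noise level, and regularization values are illustrative.

```python
# Sketch of the MSE decomposition above: reconstruct a "test image" many
# times with fresh noise, then split image MSE into squared bias plus
# variance. The "reconstruction" is a toy Tikhonov-regularized inverse,
# not a diffusion-model solver.
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = rng.random(n)                            # test image (flattened)
A = rng.standard_normal((n, n)) / np.sqrt(n)      # toy forward model
y_clean = A @ x_true

def reconstruct(y, alpha):
    # Tikhonov: argmin ||A x - y||^2 + alpha ||x||^2
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

for alpha in (1e-3, 1e-1, 1.0):
    recs = np.array([reconstruct(y_clean + 0.05 * rng.standard_normal(n), alpha)
                     for _ in range(100)])
    bias2 = np.mean((recs.mean(axis=0) - x_true) ** 2)
    var = np.mean(recs.var(axis=0))
    print(f"alpha={alpha:g}  bias^2={bias2:.4f}  var={var:.4f}  "
          f"MSE={bias2 + var:.4f}")
```

As the abstract describes, bias dominates at strong regularization and variance dominates as the regularization parameter shrinks.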
Trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Mease, Kenneth D.; Vanburen, Mark A.
1989-01-01
The first step in the approach to developing guidance laws for a horizontal take-off, air breathing single-stage-to-orbit vehicle is to characterize the minimum-fuel ascent trajectories. The capability to generate constrained, minimum fuel ascent trajectories for a single-stage-to-orbit vehicle was developed. A key component of this capability is the general purpose trajectory optimization program OTIS. The pre-production version, OTIS 0.96 was installed and run on a Convex C-1. A propulsion model was developed covering the entire flight envelope of a single-stage-to-orbit vehicle. Three separate propulsion modes, corresponding to an after burning turbojet, a ramjet and a scramjet, are used in the air breathing propulsion phase. The Generic Hypersonic Aerodynamic Model Example aerodynamic model of a hypersonic air breathing single-stage-to-orbit vehicle was obtained and implemented. Preliminary results pertaining to the effects of variations in acceleration constraints, available thrust level and fuel specific impulse on the shape of the minimum-fuel ascent trajectories were obtained. The results show that, if the air breathing engines are sized for acceleration to orbital velocity, it is the acceleration constraint rather than the dynamic pressure constraint that is active during ascent.
Method for hue plane preserving color correction.
Mackiewicz, Michal; Andersen, Casper F; Finlayson, Graham
2016-11-01
Hue plane preserving color correction (HPPCC), introduced by Andersen and Hardeberg [Proceedings of the 13th Color and Imaging Conference (CIC) (2005), pp. 141-146], maps device-dependent color values (RGB) to colorimetric color values (XYZ) using a set of linear transforms, realized by white point preserving 3×3 matrices, where each transform is learned and applied in a subregion of color space, defined by two adjacent hue planes. The hue plane delimited subregions of camera RGB values are mapped to corresponding hue plane delimited subregions of estimated colorimetric XYZ values. Hue planes are geometrical half-planes, where each is defined by the neutral axis and a chromatic color in a linear color space. The key advantage of the HPPCC method is that, while offering an estimation accuracy of higher order methods, it maintains the linear colorimetric relations of colors in hue planes. As a significant result, it therefore also renders the colorimetric estimates invariant to exposure and shading of object reflection. In this paper, we present a new flexible and robust version of HPPCC using constrained least squares in the optimization, where the subregions can be chosen freely in number and position in order to optimize the results while constraining transform continuity at the subregion boundaries. The method is compared to a selection of other state-of-the-art characterization methods, and the results show that it outperforms the original HPPCC method.
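One ingredient of HPPCC, the white-point-preserving least-squares fit of a 3×3 matrix, can be sketched as an equality-constrained least-squares problem solved row by row through its KKT system. The training colors and "true" matrix below are synthetic stand-ins, and this is only a fragment of the full hue-plane machinery.

```python
# Sketch: fit a 3x3 RGB->XYZ matrix by least squares subject to exact
# white point preservation (M @ rgb_w == xyz_w), one row at a time via
# the KKT system of the equality-constrained problem.
import numpy as np

rng = np.random.default_rng(2)
M_true = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.2, 0.8]])
RGB = rng.random((50, 3))                               # camera responses
XYZ = RGB @ M_true.T + 0.01 * rng.standard_normal((50, 3))  # noisy targets
rgb_w = np.ones(3)                                      # camera white
xyz_w = M_true @ rgb_w                                  # colorimetric white

# Each row m of M: min ||RGB m - XYZ[:, k]||^2  s.t.  rgb_w . m = xyz_w[k]
M = np.zeros((3, 3))
G = RGB.T @ RGB
for k in range(3):
    KKT = np.block([[2 * G, rgb_w[:, None]],
                    [rgb_w[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([2 * RGB.T @ XYZ[:, k], [xyz_w[k]]])
    M[k] = np.linalg.solve(KKT, rhs)[:3]

print(np.round(M, 3))                    # close to M_true
print(np.round(M @ rgb_w - xyz_w, 6))    # white point preserved exactly
```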
Control of constraint forces and trajectories in a rich sensory and actuation environment.
Hemami, Hooshang; Dariush, Behzad
2010-12-01
A simple control strategy is proposed and applied to a class of non-linear systems that have abundant sensory and actuation channels, as in living systems. The main objective is the independent control of constrained trajectories of motion and control of the corresponding constraint forces. The peripheral controller is a proportional, derivative and integral (PID) controller. A central controller produces, via pattern generators, reference signals that are the desired constrained position and velocity trajectories and the desired constraint forces. The basic tenet of this hybrid control strategy is the use of two mechanisms: 1. linear state and force feedback, and 2. non-linear constraint velocity feedback (sliding mode feedback). The first mechanism can be envisioned as a high gain feedback system. The high gain attribute imitates the agonist-antagonist co-activation in natural systems. The strategy is applied to the control of the force and trajectory of a two-segment thigh-leg planar biped leg with a mass-less foot cranking a pedal analogous to a bicycle pedal. Five computational experiments are presented to show the effectiveness of the strategy and the performance of the controller. The findings of this paper are applicable to the design of orthoses and prostheses to supplement functional electrical stimulation for support purposes in spinally injured cases.
The impact of raising the minimum drinking age on driver fatalities.
MacKinnon, D P; Woodward, J A
1986-12-01
Time series analysis was used to obtain statistical tests of the impact of raising the drinking age on monthly driver fatalities in Illinois, Michigan, and Massachusetts. A control series design permitted comparison between younger drivers (21 years or younger) and older drivers (25 and older) within states where the minimum drinking age was raised. Since the two groups share the same driving conditions, it was important to demonstrate that any reduction in fatalities was limited to the young age group within which the drinking age change occurred. In addition, control states were selected to permit a comparison between driver fatalities of the young age group (21 or younger) in states with the law change and young drivers in states without the law change. Significant immediate reductions in fatalities among drivers 21 and younger in Illinois and Michigan were observed after these states raised their minimum drinking age. No significant reductions in any control series were observed. A linear decrease in young driver fatalities was observed after the drinking age was raised in Massachusetts. There was also a significant linear decrease in young driver fatalities in the Connecticut control series, perhaps due to increasing awareness among young drivers of the dangers of drinking and driving.
Viscoelastic properties of dendrimers in the melt from nonequilibrium molecular dynamics
NASA Astrophysics Data System (ADS)
Bosko, Jaroslaw T.; Todd, B. D.; Sadus, Richard J.
2004-12-01
The viscoelastic properties of dendrimers of generations 1-4 are studied using nonequilibrium molecular dynamics. Flow properties of dendrimer melts under shear are compared to systems composed of linear chain polymers of the same molecular weight, and the influence of molecular architecture is discussed. Rheological material properties, such as the shear viscosity and normal stress coefficients, are calculated and compared for both systems. We also calculate and compare the microscopic properties of both linear chain and dendrimer molecules, such as their molecular alignment, order parameters and rotational velocities. We find that the highly symmetric shape of dendrimers and their highly constrained geometry allow for substantial differences in their material properties compared to traditional linear polymers of equivalent molecular weight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngeow, Chow-Choong; Kanbur, Shashi M.; Schrecengost, Zachariah
Investigation of period–color (PC) and amplitude–color (AC) relations at maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V − R)_MACHO or (V − I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u − g)_0, (g − r)_0, (r − i)_0, and (i − z)_0, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g − r)_0 and (r − i)_0 colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first overtone RR Lyraes.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or, say, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
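For orientation, the standard feasible-point flow that the paper generalizes can be sketched as Euler integration of the projected negative gradient; the projector below is the classical one, whose inverse of JJᵀ is precisely what fails at a singularity, and the objective and constraint are illustrative.

```python
# Sketch of a continuous-time feasible-point flow: move along the negative
# gradient projected onto the tangent space of the constraint h(x) = 0,
#   dx/dt = -(I - J^T (J J^T)^{-1} J) grad f(x).
# This uses the *standard* projection, which is exactly what becomes
# singular when J loses rank -- the case the paper's new projector avoids.
import numpy as np

def f_grad(x):            # objective f(x) = x1^2 + 2*x2^2
    return np.array([2 * x[0], 4 * x[1]])

def h(x):                 # constraint: unit circle
    return x[0] ** 2 + x[1] ** 2 - 1.0

def h_jac(x):
    return np.array([[2 * x[0], 2 * x[1]]])

x = np.array([0.8, 0.6])  # feasible start (on the circle)
dt = 0.01
for _ in range(2000):
    J = h_jac(x)
    P = np.eye(2) - J.T @ np.linalg.solve(J @ J.T, J)   # tangent projector
    x = x - dt * P @ f_grad(x)
# Converges to (+-1, 0); h(x) stays near 0 up to Euler discretization drift.
print(np.round(x, 3), "h(x) =", round(h(x), 6))
```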
Vibration of a spatial elastica constrained inside a straight tube
NASA Astrophysics Data System (ADS)
Chen, Jen-San; Fang, Joyce
2014-04-01
In this paper we study the dynamic behavior of a clamped-clamped spatial elastica under edge thrust constrained inside a straight cylindrical tube. Attention is focused on the calculation of the natural frequencies and mode shapes of the planar and spatial one-point-contact deformations. The main issue in determining the natural frequencies of a constrained rod is the movement of the contact point during vibration. In order to capture the physical essence of the contact-point movement, an Eulerian description of the equations of motion based on director theory is formulated. After proper linearization of the equations of motion, boundary conditions, and contact conditions, the natural frequencies and mode shapes of the elastica can be obtained by solving a system of eighteen first-order differential equations with the shooting method. It is concluded that the planar one-point-contact deformation becomes unstable and evolves to a spatial deformation at a bifurcation point in both displacement and force control procedures.
Constraining dark sector perturbations I: cosmic shear and CMB lensing
NASA Astrophysics Data System (ADS)
Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.
2015-04-01
We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism-invariant ℒ(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz-violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remain compatible with w = -1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focusing on the potential discriminatory power of Euclid on mildly non-linear scales.
Geochronological constraints on the evolution of El Hierro (Canary Islands)
NASA Astrophysics Data System (ADS)
Becerril, Laura; Ubide, Teresa; Sudo, Masafumi; Martí, Joan; Galindo, Inés; Galé, Carlos; Morales, Jose María; Yepes, Jorge; Lago, Marceliano
2016-01-01
New age data have been obtained to time-constrain the recent Quaternary volcanism of El Hierro (Canary Islands) and to estimate its recurrence rate. We have carried out 40Ar/39Ar geochronology on samples spanning the entire volcanostratigraphic sequence of the island and 14C geochronology on the most recent eruption on the northeast rift of the island: 2280 ± 30 yr BP. We combine the new absolute data with a revision of published ages onshore, some of which were identified through geomorphological criteria (relative data). We present a revised and updated chronology of volcanism for the last 33 ka that we use to estimate the maximum eruptive recurrence of the island. The number of events per year determined is 9.7 × 10⁻⁴ for the emerged part of the island, which means that, as a minimum, one eruption has occurred approximately every 1000 years. This highlights the need for more geochronological data to better constrain the eruptive recurrence of El Hierro.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saylor, David M.; Jawahery, Sudi; Silverstein, Joshua S.
2016-07-21
We investigate the link between dynamic localization, characterized by the Debye–Waller factor ⟨u²⟩, and solute self-diffusivity, D, in a polymer system using atomistic molecular dynamics simulations and vapor sorption experiments. We find a linear relationship between ln D and 1/⟨u²⟩ over more than four decades of D, encompassing most of the glass formation regime. The observed linearity is consistent with Langevin dynamics in a periodically varying potential field and may offer a means to rapidly assess diffusion based on the characterization of dynamic localization.
Venus Chasmata: A Lithospheric Stretching Model
NASA Technical Reports Server (NTRS)
Solomon, S. C.; Head, J. W.
1985-01-01
An outstanding problem for Venus is the characterization of its style of global tectonics, an issue intimately related to the dominant mechanism of lithospheric heat loss. Among the most spectacular and extensive of the major tectonic features on Venus are the chasmata, deep linear valleys generally interpreted to be the products of lithospheric extension and rifting. Systems of chasmata and related features can be traced along several tectonic zones up to 20,000 km in linear extent. A lithospheric stretching model was developed to explain the topographic characteristics of Venus chasmata and to constrain the physical properties of the Venus crust and lithosphere.
Sampling Based Influence Maximization on Linear Threshold Model
NASA Astrophysics Data System (ADS)
Jia, Su; Chen, Ling
2018-04-01
A sampling-based influence maximization method for the linear threshold (LT) model is presented. The method samples routes in the possible worlds of the social network and uses a Chernoff bound to estimate the number of samples needed so that the error is constrained within a given bound. The activation probabilities of the routes in the possible worlds are then calculated and used to compute the influence spread of each node in the network. Our experimental results show that our method can effectively select an appropriate seed node set that spreads larger influence than other similar methods.
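The flavor of the sample-size estimate can be shown with a standard additive-error Hoeffding/Chernoff bound; the paper's own bound may differ in its constants.

```python
# Sketch: how many sampled possible worlds are needed so that an estimated
# activation probability is within eps of the truth with probability
# 1 - delta? A standard Hoeffding/Chernoff-style additive bound (the
# paper's exact bound may differ): n >= ln(2/delta) / (2 eps^2).
import math

def num_samples(eps, delta):
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

print(num_samples(0.01, 0.05))   # ~18445 sampled worlds for 1% error
```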
Implementation of projective measurements with linear optics and continuous photon counting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeoka, Masahiro; Sasaki, Masahide; Loock, Peter van
2005-02-01
We investigate the possibility of implementing a given projection measurement using linear optics and arbitrarily fast feedforward based on the continuous detection of photons. In particular, we systematically derive the so-called Dolinar scheme that achieves the minimum-error discrimination of binary coherent states. Moreover, we show that the Dolinar-type approach can also be applied to projection measurements in the regime of photonic-qubit signals. Our results demonstrate that for implementing a projection measurement with linear optics, in principle, unit success probability may be approached even without the use of expensive entangled auxiliary states, as they are needed in all known (near-)deterministic linear-optics proposals.
Combined linear theory/impact theory method for analysis and design of high speed configurations
NASA Technical Reports Server (NTRS)
Brooke, D.; Vondrasek, D. V.
1980-01-01
Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicate that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.
40 CFR 86.316-79 - Carbon monoxide and carbon dioxide analyzer specifications.
Code of Federal Regulations, 2012 CFR
2012-07-01
...) The use of linearizing circuits is permitted. (c) The minimum water rejection ratio (maximum CO 2... shall be 5000:1. (e) Zero suppression. Various techniques of zero suppression may be used to increase...
Optimization of self-study room open problem based on green and low-carbon campus construction
NASA Astrophysics Data System (ADS)
Liu, Baoyou
2017-04-01
The optimization of the self-study room opening arrangement in colleges and universities is conducive to accelerating the fine management of the campus and promoting green and low-carbon campus construction. Firstly, combined with the actual survey data, the self-study and living areas were divided into different blocks, and the electricity consumption of each self-study room and the distances between the living and studying areas were normalized. Secondly, the minimum of the total satisfaction index and the minimum of the total electricity consumption were selected as the optimization targets. Mathematical models of linear programming were established and solved with the LINGO software. The results showed that the minimum total satisfaction index was 4055.533 and the minimum total electricity consumption was 137216 W. Finally, advice is put forward on how to achieve highly efficient administration of the self-study rooms.
NASA Astrophysics Data System (ADS)
Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi
2018-02-01
Delay and sum (DAS) is the most common beamforming algorithm in linear-array photoacoustic imaging (PAI) as a result of its simple implementation. However, it leads to low resolution and high sidelobes. Delay multiply and sum (DMAS) was introduced to address the shortcomings of DAS, providing higher image quality, but its resolution improvement falls short of eigenspace-based minimum variance (EIBMV). In this paper, the EIBMV beamformer has been combined with DMAS algebra, called EIBMV-DMAS, using the expansion of the DMAS algorithm. The proposed method is used as the reconstruction algorithm in linear-array PAI. EIBMV-DMAS is evaluated experimentally; the quantitative and qualitative results show that it outperforms DAS, DMAS and EIBMV. The proposed method reduces the sidelobes by about 365%, 221% and 40% compared to DAS, DMAS and EIBMV, respectively. Moreover, EIBMV-DMAS improves the SNR by about 158%, 63% and 20%, respectively.
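For reference, plain DAS, the baseline the proposed method improves on, can be sketched for one image pixel as follows; the array geometry and sampling parameters are illustrative, and the DMAS/EIBMV modifications are only described in the comments.

```python
# Sketch of plain delay-and-sum (DAS) for one pixel of a linear-array
# photoacoustic image: delay each channel by its one-way time of flight
# and sum. DMAS would instead sum pairwise products of the delayed
# signals; EIBMV adds adaptive eigenspace weighting. Geometry is illustrative.
import numpy as np

fs, c = 40e6, 1540.0                      # sampling rate (Hz), sound speed (m/s)
n_elem, pitch = 64, 0.3e-3
elem_x = (np.arange(n_elem) - n_elem / 2) * pitch

def das_pixel(rf, px, pz):
    """rf: (n_elem, n_samples) channel data; (px, pz): pixel position (m)."""
    dist = np.sqrt((elem_x - px) ** 2 + pz ** 2)
    idx = np.clip(np.round(dist / c * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(n_elem), idx].sum()

# Point source at (0, 10 mm): synthesize channel data, then beamform.
rf = np.zeros((n_elem, 2048))
d = np.sqrt(elem_x ** 2 + 0.01 ** 2)
rf[np.arange(n_elem), np.round(d / c * fs).astype(int)] = 1.0
print(das_pixel(rf, 0.0, 0.01))           # large at the true source...
print(das_pixel(rf, 0.0, 0.012))          # ...small off-source
```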
Thermoluminescence response of flat optical fiber subjected to 9 MeV electron irradiations
NASA Astrophysics Data System (ADS)
Hashim, S.; Omar, S. S. Che; Ibrahim, S. A.; Hassan, W. M. S. Wan; Ung, N. M.; Mahdiraji, G. A.; Bradley, D. A.; Alzimami, K.
2015-01-01
We describe our efforts to find a new thermoluminescent (TL) medium using pure silica flat optical fiber (FF). The present study investigates the dose response, sensitivity, minimum detectable dose and glow curve of FF subjected to 9 MeV electron irradiation at doses ranging from 0 Gy to 2.5 Gy. These TL properties of the FF are compared with commercially available TLD-100 rods. The TL measurements exhibit a linear dose response over the doses delivered using a linear accelerator. We found that the sensitivity of TLD-100 is markedly greater, about 6 times that of the FF optical fiber. The minimum detectable dose was found to be 0.09 mGy for TLD-100 and 8.22 mGy for FF. Our work may contribute towards the development of a new dosimeter for personal monitoring purposes.
A rapid method for optimization of the rocket propulsion system for single-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
Eldred, C. H.; Gordon, S. V.
1976-01-01
A rapid analytical method for the optimization of rocket propulsion systems is presented for a vertical take-off, horizontal landing, single-stage-to-orbit launch vehicle. This method utilizes trade-offs between propulsion characteristics affecting flight performance and engine system mass. The performance results from a point-mass trajectory optimization program are combined with a linearized sizing program to establish vehicle sizing trends caused by propulsion system variations. The linearized sizing technique was developed for the class of vehicle systems studied herein. The specific examples treated are the optimization of nozzle expansion ratio and lift-off thrust-to-weight ratio to achieve either minimum gross mass or minimum dry mass. Assumed propulsion system characteristics are high chamber pressure, liquid oxygen and liquid hydrogen propellants, conventional bell nozzles, and the same fixed nozzle expansion ratio for all engines on a vehicle.
The all-fiber cladding-pumped Yb-doped gain-switched laser.
Larsen, C; Hansen, K P; Mattsson, K E; Bang, O
2014-01-27
Gain-switching is an alternative pulsing technique for fiber lasers that is power scalable and has low complexity. From a linear stability analysis of the rate equations, the relaxation oscillation period is derived, and from it the pulse duration is defined. Good agreement between the measured pulse duration and the theoretical prediction is found over a wide range of parameters. In particular, we investigate the influence of an often-present length of passive fiber in the cavity and show that it introduces a finite minimum in the achievable pulse duration. This minimum pulse duration is shown to occur at longer active fiber lengths as the length of passive fiber in the cavity increases. The peak power is observed to depend linearly on the absorbed pump power and to be independent of the passive fiber length. Given these conclusions, the pulse energy, duration, and peak power can be estimated with good precision.
A multifunctional force microscope for soft matter with in situ imaging
NASA Astrophysics Data System (ADS)
Roberts, Paul; Pilkington, Georgia A.; Wang, Yumo; Frechette, Joelle
2018-04-01
We present the multifunctional force microscope (MFM), a normal and lateral force-measuring instrument with in situ imaging. In the MFM, forces are calculated from the normal and lateral deflection of a cantilever as measured via fiber optic sensors. The motion of the cantilever is controlled normally by a linear micro-translation stage and a piezoelectric actuator, while the lateral motion of the sample is controlled by another linear micro-translation stage. The micro-translation stages allow for travel distances that span 25 mm with a minimum step size of 50 nm, while the piezo has a minimum step size of 0.2 nm, but a 100 μm maximum range. Custom-designed cantilevers allow for the forces to be measured over 4 orders of magnitude (from 50 μN to 1 N). We perform probe tack, friction, and hydrodynamic drainage experiments to demonstrate the sensitivity, versatility, and measurable force range of the instrument.
Daniel J. Isaak; Charles H. Luce; Bruce E. Rieman; David E. Nagel; Erin E. Peterson; Dona L. Horan; Sharon Parkes; Gwynne L. Chandler
2010-01-01
Mountain streams provide important habitats for many species, but their faunas are especially vulnerable to climate change because of ectothermic physiologies and movements that are constrained to linear networks that are easily fragmented. Effectively conserving biodiversity in these systems requires accurate downscaling of climatic trends to local habitat conditions...
MOFA Software for the COBRA Toolbox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griesemer, Marc; Navid, Ali
MOFA-COBRA is a software code for Matlab that performs Multi-Objective Flux Analysis (MOFA), i.e., the solving of linear programming problems. The leading software package for conducting different types of analyses using constraint-based models is the COBRA Toolbox for Matlab. MOFA-COBRA is an added tool for COBRA that solves multi-objective problems using a novel algorithm.
Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models
ERIC Educational Resources Information Center
Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai
2011-01-01
Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…
Dynamical Analysis of the Circumprimary Planet in the Eccentric Binary System HD 59686
NASA Astrophysics Data System (ADS)
Trifonov, Trifon; Lee, Man Hoi; Reffert, Sabine; Quirrenbach, Andreas
2018-04-01
We present a detailed orbital and stability analysis of the HD 59686 binary-star planet system. HD 59686 is a single-lined, moderately close (a_B = 13.6 au) eccentric (e_B = 0.73) binary, where the primary is an evolved K giant with mass M = 1.9 M_⊙ and the secondary is a star with a minimum mass of m_B = 0.53 M_⊙. Additionally, on the basis of precise radial velocity (RV) data, a Jovian planet with a minimum mass of m_p = 7 M_Jup, orbiting the primary on a nearly circular S-type orbit with e_p = 0.05 and a_p = 1.09 au, has recently been announced. We investigate large sets of orbital fits consistent with HD 59686's RV data by applying bootstrap and systematic grid-search techniques coupled with self-consistent dynamical fitting. We perform long-term dynamical integrations of these fits to constrain the permitted orbital configurations. We find that if the binary and the planet in this system have prograde and aligned coplanar orbits, there are narrow regions of stable orbital solutions locked in a secular apsidal alignment with the angle between the periapses, Δω, librating about 0°. We also test a large number of mutually inclined dynamical models in an attempt to constrain the three-dimensional orbital architecture. We find that for nearly coplanar and retrograde orbits with mutual inclination 145° ≲ Δi ≤ 180°, the system is fully stable for a large range of orbital solutions.
Two algorithms for neural-network design and training with application to channel equalization.
Sweatman, C Z; Mulgrew, B; Gibson, G J
1998-01-01
We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
Temperature fine-tunes Mediterranean Arabidopsis thaliana life-cycle phenology geographically.
Marcer, A; Vidigal, D S; James, P M A; Fortin, M-J; Méndez-Vigo, B; Hilhorst, H W M; Bentsink, L; Alonso-Blanco, C; Picó, F X
2018-01-01
To understand how adaptive evolution in life-cycle phenology operates in plants, we need to unravel the effects of geographic variation in putative agents of natural selection on life-cycle phenology by considering all key developmental transitions and their co-variation patterns. We address this goal by quantifying the temperature-driven and geographically varying relationship between seed dormancy and flowering time in the annual Arabidopsis thaliana across the Iberian Peninsula. We used data on genetic variation in two major life-cycle traits, seed dormancy (DSDS50) and flowering time (FT), in a collection of 300 A. thaliana accessions from the Iberian Peninsula. The geographically varying relationship between life-cycle traits and minimum temperature, a major driver of variation in DSDS50 and FT, was explored with geographically weighted regressions (GWR). The environmentally varying correlation between DSDS50 and FT was analysed by means of sliding window analysis across a minimum temperature gradient. Maximum local adjustments between minimum temperature and life-cycle traits were obtained in the southwest Iberian Peninsula, an area with the highest minimum temperatures. In contrast, in off-southwest locations, the effects of minimum temperature on DSDS50 were rather constant across the region, whereas those of minimum temperature on FT were more variable, with peaks of strong local adjustments of GWR models in central and northwest Spain. Sliding window analysis identified a minimum temperature turning point in the relationship between DSDS50 and FT around a minimum temperature of 7.2 °C. Above this minimum temperature turning point, the variation in the FT/DSDS50 ratio became rapidly constrained and the negative correlation between FT and DSDS50 did not increase any further with increasing minimum temperatures. The southwest Iberian Peninsula emerges as an area where variation in life-cycle phenology appears to be restricted by the duration and severity of the hot summer drought. The temperature-driven varying relationship between DSDS50 and FT detected environmental boundaries for the co-evolution between FT and DSDS50 in A. thaliana. In the context of global warming, we conclude that A. thaliana phenology from the southwest Iberian Peninsula, determined by early flowering and deep seed dormancy, might become the most common life-cycle phenotype for this annual plant in the region.
NASA Technical Reports Server (NTRS)
Molusis, J. A.; Mookerjee, P.; Bar-Shalom, Y.
1983-01-01
The effect of nonlinearity on the convergence of the local linear and global linear adaptive controllers is evaluated. A nonlinear helicopter vibration model is selected for the evaluation which has sufficient nonlinearity, including multiple minima, to assess the vibration reduction capability of the adaptive controllers. The adaptive control algorithms are based upon a linear transfer matrix assumption, and the presence of nonlinearity has a significant effect on algorithm behavior. Simulation results are presented which demonstrate the importance of the caution property in the global linear controller. Caution is represented by a time-varying rate weighting term in the local linear controller, and this improves the algorithm convergence. Nonlinearity in some cases causes Kalman filter divergence. Two forms of the Kalman filter covariance equation are investigated.
Diesel-Powered Heavy-Duty Refrigeration Unit Noise
DOT National Transportation Integrated Search
1976-01-01
A series of noise measurements were performed on a diesel-powered heavy-duty refrigeration unit. Noise survey information collected included: polar plots of the 'A Weighted' noise levels of the unit under maximum and minimum load conditions; a linear...
Information dynamics in carcinogenesis and tumor growth.
Gatenby, Robert A; Frieden, B Roy
2004-12-21
The storage and transmission of information is vital to the function of normal and transformed cells. We use methods from information theory and Monte Carlo theory to analyze the role of information in carcinogenesis. Our analysis demonstrates that, during somatic evolution of the malignant phenotype, the accumulation of genomic mutations degrades intracellular information. However, the degradation is constrained by the Darwinian somatic ecology in which mutant clones proliferate only when the mutation confers a selective growth advantage. In that environment, genes that normally decrease cellular proliferation, such as tumor suppressor or differentiation genes, suffer maximum information degradation. Conversely, those that increase proliferation, such as oncogenes, are conserved or exhibit only gain-of-function mutations. These constraints shield most cellular populations from catastrophic mutator-induced loss of the transmembrane entropy gradient and, therefore, cell death. The dynamics of constrained information degradation during carcinogenesis cause the tumor genome to asymptotically approach a minimum information state that is manifested clinically as dedifferentiation and unconstrained proliferation. Extreme physical information (EPI) theory demonstrates that altered information flow from cancer cells to their environment will manifest in vivo as power law tumor growth with an exponent of 1.62. This prediction is based only on the assumption that tumor cells are at an absolute information minimum and are capable of "free field" growth, that is, growth unconstrained by external biological parameters. The prediction agrees remarkably well with several studies demonstrating power law growth in small human breast cancers with an exponent of 1.72 ± 0.24. This successful derivation of an analytic expression for cancer growth from EPI alone supports the conceptual model that carcinogenesis is a process of constrained information degradation and that malignant cells are minimum information systems. EPI theory also predicts that the estimated age of a clinically observed tumor is subject to a root-mean-square error of about 30%. This is due to information loss and tissue disorganization and probably manifests as a randomly variable lag phase in the growth pattern that has been observed experimentally. This difference between tumor size and age may impose a fundamental limit on the efficacy of screening based on early detection of small tumors. Independent of the EPI analysis, Monte Carlo methods are applied to predict statistical tumor growth due to perturbed information flow from the environment into transformed cells. A "simplest" Monte Carlo model is suggested by the findings in the EPI approach that tumor growth arises out of a minimally complex mechanism. The outputs of large numbers of simulations show that (a) about 40% of the populations do not survive the first two generations due to mutations in critical gene segments; but (b) those that do survive will experience power law growth identical to the predicted rate obtained from the independent EPI approach. The agreement between these two very different approaches to the problem strongly supports the idea that tumor cells regress to a state of minimum information during carcinogenesis, and that information dynamics are integrally related to tumor development and growth.
Criteria for equality in two entropic inequalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirokov, M. E., E-mail: msh@mi.ras.ru
2014-07-31
We obtain a simple criterion for local equality between the constrained Holevo capacity and the quantum mutual information of a quantum channel. This shows that the set of all states for which this equality holds is determined by the kernel of the channel (as a linear map). Applications to Bosonic Gaussian channels are considered. It is shown that for a Gaussian channel having no completely depolarizing components the above characteristics may coincide only at non-Gaussian mixed states, and a criterion for the existence of such states is given. All the obtained results may be reformulated as conditions for equality between the constrained Holevo capacity of a quantum channel and the input von Neumann entropy. Bibliography: 20 titles.
Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.
Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei
2015-08-01
In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed by incorporating the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.
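The constrained-QP core of such a guidance law can be sketched with a generic solver in place of the brief's primal-dual neural network; the toy dynamics, horizon, and acceleration bound below are illustrative.

```python
# Sketch of the receding-horizon QP at the heart of the guidance law:
# minimize a quadratic tracking cost over a short horizon subject to
# bound constraints on the commanded acceleration. A generic solver
# stands in for the brief's primal-dual neural network.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # toy double-integrator dynamics
B = np.array([0.005, 0.1])
N, u_max = 10, 2.0                         # horizon, acceleration limit

def cost(u, x0):
    x, J = x0.copy(), 0.0
    for k in range(N):
        x = A @ x + B * u[k]
        J += x @ x + 0.1 * u[k] ** 2       # penalize miss distance and effort
    return J

x0 = np.array([5.0, 0.0])                  # initial relative position/velocity
res = minimize(cost, np.zeros(N), args=(x0,),
               bounds=[(-u_max, u_max)] * N, method="L-BFGS-B")
print(np.round(res.x, 2))                  # first entry is the applied command
```

In a true receding-horizon loop, only the first optimized command is applied before the QP is re-solved at the next sampling instant.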
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts; in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
A dual polarized antenna system using a meanderline polarizer
NASA Technical Reports Server (NTRS)
Burger, H. A.
1978-01-01
Certain applications of synthetic aperture radars require transmitting on one linear polarization and receiving on two orthogonal linear polarizations for adequate characterization of the surface. To meet the current need at minimum cost, it was desirable to use two identical horizontally polarized shaped beam antennas and to change the polarization of one of them by a polarization conversion plate. The plate was realized as a four-layer meanderline polarizer designed to convert horizontal polarization to vertical.
Application of genetic algorithms in nonlinear heat conduction problems.
Kadri, Muhammad Bilal; Khan, Waqar A
2014-01-01
Genetic algorithms are employed to optimize dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that GA gives the minimum dimensionless temperature in each selected geometry.
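A minimal real-coded genetic algorithm of the kind employed above looks as follows; the objective is a stand-in quadratic rather than the paper's entropy-generation model, and the population size, mutation scale, and generation count are illustrative.

```python
# Minimal real-coded genetic algorithm sketch: tournament selection,
# arithmetic crossover, Gaussian mutation, and elitism, evolving a
# population toward the minimum of an objective function.
import numpy as np

rng = np.random.default_rng(3)

def objective(x):                         # stand-in objective, minimum at (1, -0.5)
    return (x[:, 0] - 1.0) ** 2 + (x[:, 1] + 0.5) ** 2

pop = rng.uniform(-5, 5, size=(40, 2))
for gen in range(100):
    fit = objective(pop)
    # tournament selection: each child gets the fitter of two random parents
    i, j = rng.integers(0, 40, (2, 40))
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    # arithmetic crossover and Gaussian mutation
    alpha = rng.random((40, 1))
    children = alpha * parents + (1 - alpha) * parents[::-1]
    children += 0.1 * rng.standard_normal((40, 2))
    # elitism: carry the best individual found so far into the next generation
    children[0] = pop[np.argmin(fit)]
    pop = children
print(np.round(pop[np.argmin(objective(pop))], 2))   # ~ (1.0, -0.5)
```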
Rate-Compatible Protograph LDPC Codes
NASA Technical Reports Server (NTRS)
Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)
2014-01-01
Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
Testing for nonlinearity in non-stationary physiological time series.
Guarín, Diego; Delgado, Edilson; Orozco, Álvaro
2011-01-01
Testing for nonlinearity is one of the most important preprocessing steps in nonlinear time series analysis. Typically, this is done by means of the linear surrogate data methods. But it is a known fact that the validity of the results heavily depends on the stationarity of the time series. Since most physiological signals are non-stationary, it is easy to falsely detect nonlinearity using the linear surrogate data methods. In this document, we propose a methodology to extend the procedure for generating constrained surrogate time series in order to assess nonlinearity in non-stationary data. The method is based on band-phase-randomized surrogates, which consist (contrary to the linear surrogate data methods) of randomizing only a portion of the Fourier phases in the high-frequency domain. Analysis of simulated time series showed that, in comparison to the linear surrogate data method, our method is able to discriminate between linear stationary, linear non-stationary, and nonlinear time series. Applying our methodology to heart rate variability (HRV) records of five healthy patients, we found that nonlinear correlations are present in these non-stationary physiological signals.
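The band-phase-randomized surrogate itself is a short FFT manipulation; a sketch follows, with an illustrative cutoff frequency.

```python
# Sketch of a band-phase-randomized surrogate: keep the Fourier amplitudes
# everywhere and the phases below a cutoff frequency, randomize only the
# high-frequency phases, and invert. The cutoff choice is illustrative.
import numpy as np

def band_phase_surrogate(x, f_cut, fs=1.0, rng=None):
    rng = rng or np.random.default_rng()
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    hi = freqs > f_cut
    phases = np.angle(X)
    phases[hi] = rng.uniform(0, 2 * np.pi, hi.sum())   # shuffle the high band
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

# Example: a low-frequency trend (the non-stationarity) survives intact,
# while the high-frequency structure is randomized.
t = np.arange(2048)
x = np.sin(2 * np.pi * 0.001 * t) + 0.3 * np.sin(2 * np.pi * 0.2 * t)
s = band_phase_surrogate(x, f_cut=0.05)
print(np.round(np.corrcoef(x, s)[0, 1], 2))   # still correlated via the trend
```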
Luo, Biao; Liu, Derong; Wu, Huai-Ning
2018-06-01
Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem for nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, a system transformation is first introduced with a general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
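A tabular toy conveys the VIQL iteration; the paper's setting is neural-network approximation for nonaffine systems, so the four-state table, random transition data, and discount factor below are purely illustrative.

```python
# Tabular sketch of value-iteration Q-learning (VIQL): iterate
#   Q <- cost + gamma * min_a' Q(s', a')
# on collected transition data only. This toy uses a 4-state, 2-action
# table and a stage cost to be minimized.
import numpy as np

n_s, n_a, gamma = 4, 2, 0.9
rng = np.random.default_rng(4)
# "Data": for each (s, a), one observed next state and stage cost.
s_next = rng.integers(0, n_s, (n_s, n_a))
cost = rng.random((n_s, n_a))

Q = np.zeros((n_s, n_a))          # easy-to-realize initial condition Q = 0
for _ in range(200):              # value iteration on the data
    Q = cost + gamma * Q[s_next].min(axis=2)
print(Q.argmin(axis=1))           # greedy (cost-minimizing) policy
```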
Nonlinear vs. linear biasing in Trp-cage folding simulations
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Oborský, Pavel; Pazúriková, Jana; Křenek, Aleš; Králová, Blanka
2015-03-01
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the mini-protein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
Design for minimum energy in interstellar communication
NASA Astrophysics Data System (ADS)
Messerschmitt, David G.
2015-02-01
Microwave digital communication at interstellar distances is the foundation of extraterrestrial civilization (SETI and METI) communication of information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reducing the energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and of motion effects, and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Whereas terrestrial design adds phases and amplitudes to increase information capacity while minimizing bandwidth, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic, and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
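The noise-limited minimum mentioned above has a simple closed form: with the cosmic microwave background as the only noise, the received energy per bit for reliable communication cannot fall below kT ln 2.

```python
# Sketch of the fundamental limit: minimum received energy per bit
# against thermal noise of temperature T is N0 * ln 2 = k * T * ln 2,
# evaluated here at the cosmic microwave background temperature.
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T_cmb = 2.73              # cosmic microwave background temperature, K
E_bit = k * T_cmb * math.log(2)
print(f"{E_bit:.3e} J per bit")   # ~2.6e-23 J, i.e. a handful of microwave photons
```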
NASA Astrophysics Data System (ADS)
Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.
2018-01-01
To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, marginal probability density function, and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and makes it possible to assess trade-offs between them, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods, so as to provide a good base for comparative performance. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and correlate remarkably with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to the linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of the layered resistivity structure.
NASA Astrophysics Data System (ADS)
Alton, K. B.
2018-06-01
TYC 2058-753-1 (NSVS 7903497; ASAS 165139+2255.7) is a W UMa binary system (P = 0.353205 d) which has not been rigorously studied since first being detected nearly 15 years ago by the ROTSE-I telescope. Other than the unfiltered ROTSE-I and monochromatic All Sky Automated Survey (ASAS) survey data, no multi-colored light curves (LC) have been published. Photometric data collected in three bandpasses (B, V, and Ic) at Desert Bloom Observatory in June 2017 produced six times-of-minimum for TYC 2058-753-1, which were used to establish a linear ephemeris from the first directly measured Min I epoch (HJD0). No published radial velocity data are available for this system; however, since this W UMa binary undergoes a very obvious total eclipse, Roche modeling produced a well-constrained photometric value for the mass ratio (qph = 0.103 ± 0.001). This low-mass-ratio binary star system also exhibits a high degree of contact (f > 56%). There is a suggestion from the ROTSE-I and ASAS survey data, as well as from the new LCs reported herein, that maximum light during quadrature (Max I and Max II) is often not equal. As a result, Roche modeling of the TYC 2058-753-1 LCs was investigated with and without surface spots to address this asymmetry, as well as a diagonally-aligned flat bottom during Min I that was observed in 2017.
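A linear ephemeris of the kind described is a straight-line fit of eclipse timings against cycle number, T_E = HJD0 + P·E. The sketch below uses invented timings; only the period echoes the abstract's value.

```python
# Illustrative linear-ephemeris fit: slope = period, intercept = epoch.
import numpy as np

E = np.array([0, 1, 2, 17, 18, 31])        # cycle counts (hypothetical)
T = 2457900.0 + 0.353205 * E + np.random.default_rng(1).normal(0, 2e-4, E.size)

P, HJD0 = np.polyfit(E, T, 1)              # least-squares line through (E, T)
print(f"P = {P:.6f} d, HJD0 = {HJD0:.5f}")
```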
MR PROSTATE SEGMENTATION VIA DISTRIBUTED DISCRIMINATIVE DICTIONARY (DDD) LEARNING.
Guo, Yanrong; Zhan, Yiqiang; Gao, Yaozong; Jiang, Jianguo; Shen, Dinggang
2013-01-01
Segmenting prostate from MR images is important yet challenging. Due to non-Gaussian distribution of prostate appearances in MR images, the popular active appearance model (AAM) has its limited performance. Although the newly developed sparse dictionary learning method[1, 2] can model the image appearance in a non-parametric fashion, the learned dictionaries still lack the discriminative power between prostate and non-prostate tissues, which is critical for accurate prostate segmentation. In this paper, we propose to integrate deformable model with a novel learning scheme, namely the Distributed Discriminative Dictionary ( DDD ) learning, which can capture image appearance in a non-parametric and discriminative fashion. In particular, three strategies are designed to boost the tissue discriminative power of DDD. First , minimum Redundancy Maximum Relevance (mRMR) feature selection is performed to constrain the dictionary learning in a discriminative feature space. Second , linear discriminant analysis (LDA) is employed to assemble residuals from different dictionaries for optimal separation between prostate and non-prostate tissues. Third , instead of learning the global dictionaries, we learn a set of local dictionaries for the local regions (each with small appearance variations) along prostate boundary, thus achieving better tissue differentiation locally. In the application stage, DDDs will provide the appearance cues to robustly drive the deformable model onto the prostate boundary. Experiments on 50 MR prostate images show that our method can yield a Dice Ratio of 88% compared to the manual segmentations, and have 7% improvement over the conventional AAM.
Edgelist phase unwrapping algorithm for time series InSAR analysis.
Shanker, A Piyush; Zebker, Howard
2010-03-01
We present here a new integer programming formulation for phase unwrapping of multidimensional data. Phase unwrapping is a key problem in many coherent imaging systems, including time series synthetic aperture radar interferometry (InSAR), with two spatial and one temporal data dimensions. The minimum cost flow (MCF) [IEEE Trans. Geosci. Remote Sens. 36, 813 (1998)] phase unwrapping algorithm describes a global cost minimization problem involving flow between phase residues computed over closed loops. Here we replace closed loops by reliable edges as the basic construct, thus leading to the name "edgelist." Our algorithm has several advantages over current methods: it simplifies the representation of multidimensional phase unwrapping, it incorporates data from external sources, such as GPS, where available to better constrain the unwrapped solution, and it treats regularly sampled or sparsely sampled data alike. It thus is particularly applicable to time series InSAR, where data are often irregularly spaced in time and individual interferograms can be corrupted with large decorrelated regions. We show that, similar to the MCF network problem, the edgelist formulation also exhibits total unimodularity, which enables us to solve the integer program problem by using efficient linear programming tools. We apply our method to a persistent scatterer-InSAR data set from the creeping section of the Central San Andreas Fault and find that the average creep rate of 22 mm/yr is constant within 3 mm/yr over 1992-2004 but varies systematically with ground location, with a slightly higher rate in 1992-1998 than in 1999-2003.
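The practical payoff of total unimodularity is that the integer program can be handed to an ordinary LP solver and still return integral solutions. The toy min-cost-flow below, on an invented four-node graph rather than the paper's network, illustrates this.

```python
# A min-cost-flow LP whose constraint matrix is totally unimodular:
# a generic LP solver returns integral flows without integer programming.
import numpy as np
from scipy.optimize import linprog

# Edges: 0->1, 0->2, 1->3, 2->3, each with capacity 2.
A_eq = np.array([[-1, -1,  0,  0],    # node 0: net outflow of 2 (supply)
                 [ 1,  0, -1,  0],    # node 1: balance
                 [ 0,  1,  0, -1],    # node 2: balance
                 [ 0,  0,  1,  1]])   # node 3: net inflow of 2 (demand)
b_eq = np.array([-2, 0, 0, 2])
cost = np.array([1.0, 3.0, 1.0, 1.0])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 2)] * 4)
print(res.x)   # integral flows, e.g. [2, 0, 2, 0]
```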
Minimization of transmission cost in decentralized control systems
NASA Technical Reports Server (NTRS)
Wang, S.-H.; Davison, E. J.
1978-01-01
This paper considers the problem of stabilizing a linear time-invariant multivariable system by using local feedback controllers and some limited information exchange among local stations. The problem of achieving a given degree of stability with minimum transmission cost is solved.
2013-08-14
[Report documentation fragment; keywords: Linear Time-Varying (LTV); Clohessy-Wiltshire-Hill (CWH).] The linearized Hill-Clohessy-Wiltshire (HCW) equations [15] approximate the relative motion of a spacecraft on a circular orbit as ẍ − 3n²x − 2nẏ = Fx/mc.
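For context, a minimal integration of the unforced in-plane HCW equations (ẍ − 3n²x − 2nẏ = 0, ÿ + 2nẋ = 0) might look like the following sketch; the mean motion and initial state are arbitrary illustrative values.

```python
# Integrate the unforced in-plane HCW relative-motion equations.
import numpy as np
from scipy.integrate import solve_ivp

n = 0.0011                                  # rad/s, roughly a LEO mean motion (assumed)

def hcw(t, s):
    x, y, vx, vy = s
    ax = 3 * n**2 * x + 2 * n * vy          # radial equation, Fx = 0
    ay = -2 * n * vx                        # along-track equation, Fy = 0
    return [vx, vy, ax, ay]

# vy0 = -2*n*x0 gives a bounded (drift-free) relative orbit.
sol = solve_ivp(hcw, (0, 5400), [100.0, 0.0, 0.0, -2 * n * 100.0])
print(sol.y[:2, -1])                        # relative position after ~one orbit
```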
NASA Astrophysics Data System (ADS)
Goh, Shu Ting
Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. The implementation of the extended Kalman filter for the spacecraft formation navigation problem results in high estimation errors and, at times, instabilities in state estimation, due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation aiming at increasing estimation stability and improving estimation accuracy. A differential geometric filter is implemented for spacecraft position estimation. The differential geometric filter avoids the linearization step (which is always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear one. A linear estimator is designed in the linear domain and then transformed back to the physical domain. This approach demonstrated better estimation stability for spacecraft formation position estimation, as detailed in this dissertation. The constrained Kalman filter is also implemented for spacecraft formation flying absolute position estimation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee); at an extremum, the rate of change of the spacecraft's range vanishes. This motion constraint can be used to improve the position estimation accuracy. Applying the constrained Kalman filter at only two points in the orbit causes filter instability, so two variables are introduced into the constrained Kalman filter to maintain stability and improve estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter. Simulation results show that the constrained Kalman filter provides better estimation accuracy than the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is also proposed in this dissertation. In wireless localizing sensors, measurement error is proportional to the distance the signal travels and to sensor noise. In the proposed WMFKF, the signal traveling time delay is not modeled; instead, each measurement is weighted based on the measured signal travel distance. The obtained estimation performance is compared to the standard Kalman filter in two scenarios: the first assumes a wireless local positioning system (WLPS) in a GPS-denied environment, while the second assumes the availability of both WLPS and GPS measurements. The simulation results show that the WMFKF has accuracy similar to the standard Kalman filter (KF) in the GPS-denied environment; however, the WMFKF maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF has better accuracy and stability when GPS is available. A computational cost analysis shows that the WMFKF has lower computational cost than the standard KF, and the WMFKF has a higher ellipsoid error probable percentage than the standard measurement fusion method. Finally, a method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft.
The simulation results and covariance analysis show that the method's error falls within a three sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
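The distance-weighting idea behind the WMFKF can be sketched as a Kalman measurement update whose noise covariance grows with the measured range, so long-range (noisier) measurements are trusted less. Everything below, including the noise model parameters, is an illustrative stand-in rather than the dissertation's filter.

```python
# One Kalman measurement update with a distance-dependent noise covariance.
import numpy as np

def weighted_update(x, P, z, H, sigma0, alpha, meas_range):
    """KF update where R scales with the measured signal travel distance."""
    R = (sigma0 + alpha * meas_range) ** 2 * np.eye(len(z))  # distance-based weight
    S = H @ P @ H.T + R                                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
x, P = weighted_update(x, P, np.array([10.0]), np.array([[1.0, 0.0]]),
                       sigma0=0.5, alpha=0.01, meas_range=10.0)
print(x)
```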
Information dynamics in living systems: prokaryotes, eukaryotes, and cancer.
Frieden, B Roy; Gatenby, Robert A
2011-01-01
Living systems use information and energy to maintain stable entropy while far from thermodynamic equilibrium. The underlying first principles have not been established. We propose that stable entropy in living systems, in the absence of thermodynamic equilibrium, requires an information extremum (maximum or minimum), which is invariant to first order perturbations. Proliferation and death represent key feedback mechanisms that promote stability even in a non-equilibrium state. A system moves to low or high information depending on its energy status, as the benefit of information in maintaining and increasing order is balanced against its energy cost. Prokaryotes, which lack specialized energy-producing organelles (mitochondria), are energy-limited and constrained to an information minimum. Acquisition of mitochondria is viewed as a critical evolutionary step that, by allowing eukaryotes to achieve a sufficiently high energy state, permitted a phase transition to an information maximum. This state, in contrast to the prokaryote minima, allowed evolution of complex, multicellular organisms. A special case is a malignant cell, which is modeled as a phase transition from a maximum to minimum information state. The minimum leads to a predicted power-law governing the in situ growth that is confirmed by studies measuring growth of small breast cancers. We find living systems achieve a stable entropic state by maintaining an extreme level of information. The evolutionary divergence of prokaryotes and eukaryotes resulted from acquisition of specialized energy organelles that allowed transition from information minima to maxima, respectively. Carcinogenesis represents a reverse transition: of an information maximum to minimum. The progressive information loss is evident in accumulating mutations, disordered morphology, and functional decline characteristics of human cancers. The findings suggest energy restriction is a critical first step that triggers the genetic mutations that drive somatic evolution of the malignant phenotype.
Effect of load introduction on graphite epoxy compression specimens
NASA Technical Reports Server (NTRS)
Reiss, R.; Yao, T. M.
1981-01-01
Compression testing of modern composite materials is affected by the manner in which the compressive load is introduced. Two such effects are investigated: (1) the constrained edge effect, which prevents transverse expansion and is common to all compression testing in which the specimen is gripped in the fixture; and (2) nonuniform gripping, which induces bending in the specimen. An analytical model capable of quantifying the foregoing effects, based upon the principle of minimum complementary energy, was developed. For pure compression, the stresses are approximated by Fourier series. For pure bending, the stresses are approximated by Legendre polynomials.
Constrained Burn Optimization for the International Space Station
NASA Technical Reports Server (NTRS)
Brown, Aaron J.; Jones, Brandon A.
2017-01-01
In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints, do not employ gradient-based search techniques, and are not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
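A toy analog of simultaneous, gradient-based burn targeting is shown below: total Δv of two burns is minimized subject to a net-change constraint, with the objective and constraint gradients supplied analytically to an off-the-shelf SQP solver. The scalar "dynamics" are invented for illustration.

```python
# Minimize total delta-v subject to a linear stand-in trajectory constraint,
# providing analytic gradients (jac=) rather than finite differences.
import numpy as np
from scipy.optimize import minimize

def constraint(dv):               # hypothetical net-effect requirement: dv1 + 2*dv2 = 3
    return dv[0] + 2.0 * dv[1] - 3.0

res = minimize(lambda dv: dv.sum(), x0=[1.0, 1.0], method="SLSQP",
               jac=lambda dv: np.ones_like(dv),            # analytic objective gradient
               bounds=[(0, None)] * 2,                     # burn magnitudes >= 0
               constraints=[{"type": "eq", "fun": constraint,
                             "jac": lambda dv: np.array([1.0, 2.0])}])
print(res.x, res.fun)             # puts the burn where it is cheapest per unit effect
```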
Minimum impulse transfers to rotate the line of apsides
NASA Technical Reports Server (NTRS)
Phong, Connie; Sweetser, Theodore H.
2005-01-01
Transfer between two coplanar orbits can be accomplished via a single impulse if the two orbits intersect. Optimization of a single-impulse transfer, however, is not possible since the transfer orbit is completely constrained by the initial and final orbits. On the other hand, two-impulse transfers are possible between any two terminal orbits. While optimal scenarios are not known for the general two-impulse case, there are various approximate solutions to many special cases. We consider the problem of an in-plane rotation of the line of apsides, leaving the size and shape of the orbit unaffected.
Sigurdson, Kris; Cooray, Asantha
2005-11-18
We propose a new method for removing gravitational lensing from maps of cosmic microwave background (CMB) polarization anisotropies. Using observations of anisotropies or structures in the cosmic 21 cm radiation, emitted or absorbed by neutral hydrogen atoms at redshifts 10 to 200, the CMB can be delensed. We find this method could allow CMB experiments to have increased sensitivity to a background of inflationary gravitational waves (IGWs) compared to methods relying on the CMB alone and may constrain models of inflation which were heretofore considered to have undetectable IGW amplitudes.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular implementation of the SQP method in FORTRAN (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
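For readers unfamiliar with this class of solvers, the snippet below poses a small nonlinearly constrained problem to SciPy's SLSQP, which implements the same Kraft SLSQP algorithm that NLOPT wraps; the test problem and starting point are arbitrary.

```python
# A nonlinearly constrained problem of the kind SQP solvers handle:
# minimize the Rosenbrock function subject to staying inside the unit disk.
import numpy as np
from scipy.optimize import minimize

rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
con = {"type": "ineq", "fun": lambda x: 1.0 - x[0]**2 - x[1]**2}  # unit-disk constraint

res = minimize(rosen, x0=[0.0, 0.0], method="SLSQP", constraints=[con])
print(res.x, res.fun)   # constrained minimum on the disk boundary
```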
NASA Astrophysics Data System (ADS)
Kuzmanoski, M.; Box, M.; Box, G. P.; Schmidt, B.; Russell, P. B.; Redemann, J.; Livingston, J. M.; Wang, J.; Flagan, R. C.; Seinfeld, J. H.
2002-12-01
As part of the ACE-Asia experiment, conducted off the coast of China, Korea, and Japan in spring 2001, measurements of aerosol physical, chemical, and radiative characteristics were performed aboard the Twin Otter aircraft. Of particular importance for this paper were spectral measurements of aerosol optical thickness obtained at 13 discrete wavelengths, within the 354-1558 nm wavelength range, using the AATS-14 sunphotometer. Spectral aerosol optical thickness can be used to obtain information about particle size distribution. In this paper, we use sunphotometer measurements to retrieve the size distribution of aerosols during ACE-Asia. We focus on four cases in which layers influenced by different air masses were identified. The aerosol optical thickness of each layer was inverted using two different techniques: constrained linear inversion and a multimodal method. In the constrained linear inversion algorithm, no assumption about the mathematical form of the distribution to be retrieved is made. Conversely, the multimodal technique assumes that the aerosol size distribution is represented as a linear combination of a few lognormal modes with predefined values of mode radii and geometric standard deviations. The amplitudes of the modes are varied to obtain the best fit of the sum of optical thicknesses due to individual modes to the sunphotometer measurements. In this paper we compare the results of these two retrieval methods. In addition, we present comparisons of retrieved size distributions with in situ measurements taken using an aerodynamic particle sizer and differential mobility analyzer system aboard the Twin Otter aircraft.
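The multimodal step reduces to a non-negative least-squares fit of per-mode optical-thickness kernels to the measured spectrum. In the sketch below the kernels, which would really come from Mie theory for lognormal modes, are replaced by made-up basis functions.

```python
# Fit measured optical thickness by a non-negative combination of mode kernels.
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(0.354, 1.558, 13)                   # AATS-14-like wavelengths, µm
kernels = np.column_stack([wl**-4, wl**-1.5, np.ones_like(wl)])   # stand-in kernels
tau_meas = (0.8 * kernels[:, 0] + 0.3 * kernels[:, 2]
            + 0.005 * np.random.default_rng(2).normal(size=13))

amps, resid = nnls(kernels, tau_meas)                # mode amplitudes >= 0
print(amps, resid)
```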
Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera
Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin
2016-01-01
This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting the minimum diameter of the equatorial section and the volume, and to establish corresponding estimation models. Results show that length, maximum diameter of the equatorial section, and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which is in good agreement with the measured one, with a coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape classification by adding a single camera. PMID:27376292
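The volume model amounts to an ordinary least-squares regression on the two selected predictors. The sketch below uses synthetic fruit data, so the coefficients are illustrative rather than those of the study.

```python
# OLS regression of volume on weight and length, the two selected predictors.
import numpy as np

rng = np.random.default_rng(3)
weight = rng.uniform(70, 130, 50)                    # g (synthetic)
length = rng.uniform(55, 80, 50)                     # mm (synthetic)
volume = 0.9 * weight + 0.4 * length + rng.normal(0, 2, 50)   # cm^3 (synthetic)

X = np.column_stack([weight, length, np.ones_like(weight)])   # design matrix
coef, *_ = np.linalg.lstsq(X, volume, rcond=None)
print(coef)                                          # [~0.9, ~0.4, intercept]
```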
Economic optimization of the energy transport component of a large distributed solar power plant
NASA Technical Reports Server (NTRS)
Turner, R. H.
1976-01-01
A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter and pipe thickness distribution, as well as the insulation thickness distribution, associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of the diseconomy of scale.
Global solar wind variations over the last four centuries.
Owens, M J; Lockwood, M; Riley, P
2017-01-31
The most recent "grand minimum" of solar activity, the Maunder minimum (MM, 1650-1710), is of great interest both for understanding the solar dynamo and providing insight into possible future heliospheric conditions. Here, we use nearly 30 years of output from a data-constrained magnetohydrodynamic model of the solar corona to calibrate heliospheric reconstructions based solely on sunspot observations. Using these empirical relations, we produce the first quantitative estimate of global solar wind variations over the last 400 years. Relative to the modern era, the MM shows a factor 2 reduction in near-Earth heliospheric magnetic field strength and solar wind speed, and up to a factor 4 increase in solar wind Mach number. Thus solar wind energy input into the Earth's magnetosphere was reduced, resulting in a more Jupiter-like system, in agreement with the dearth of auroral reports from the time. The global heliosphere was both smaller and more symmetric under MM conditions, which has implications for the interpretation of cosmogenic radionuclide data and resulting total solar irradiance estimates during grand minima.
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. In the core of CCOMP exist three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability, and validity are demonstrated on a variety of microwave applications.
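The two-stage structure (a coarse candidate scan followed by bound-constrained polishing) can be illustrated on an invented 2x2 determinantal system; nothing below reproduces CCOMP itself.

```python
# Scan the complex domain for points where the minimum-modulus eigenvalue of
# M(z) is small, then polish the best candidate with a bounded minimizer.
import numpy as np
from scipy.optimize import minimize

def min_mod_eig(p):
    z = complex(p[0], p[1])
    M = np.array([[z**2 + 1.0, 0.3], [0.3, z - 0.5j]])   # invented system matrix
    return np.abs(np.linalg.eigvals(M)).min()

# Stage 1: coarse grid scan for candidate points.
xs, ys = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
vals = np.vectorize(lambda a, b: min_mod_eig((a, b)))(xs, ys)
i, j = np.unravel_index(vals.argmin(), vals.shape)

# Stage 2: bound-constrained minimization from the best candidate.
res = minimize(min_mod_eig, x0=[xs[i, j], ys[i, j]],
               method="Powell", bounds=[(-2, 2), (-2, 2)])
print(res.x, res.fun)   # approximate root location (Re, Im) and residual
```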
Multidimensionally constrained relativistic mean-field study of triple-humped barriers in actinides
NASA Astrophysics Data System (ADS)
Zhao, Jie; Lu, Bing-Nan; Vretenar, Dario; Zhao, En-Guang; Zhou, Shan-Gui
2015-01-01
Background: Potential energy surfaces (PES's) of actinide nuclei are characterized by a two-humped barrier structure. At large deformations beyond the second barrier, the occurrence of a third barrier was predicted by macroscopic-microscopic model calculations in the 1970s, but contradictory results were later reported by a number of studies that used different methods. Purpose: Triple-humped barriers in actinide nuclei are investigated in the framework of covariant density functional theory (CDFT). Methods: Calculations are performed using the multidimensionally constrained relativistic mean field (MDC-RMF) model, with the nonlinear point-coupling functional PC-PK1 and the density-dependent meson exchange functional DD-ME2 in the particle-hole channel. Pairing correlations are treated in the BCS approximation with a separable pairing force of finite range. Results: Two-dimensional PES's of 226,228,230,232Th and 232,235,236,238U are mapped and the third minima on these surfaces are located. Then one-dimensional potential energy curves along the fission path are analyzed in detail and the energies of the second barrier, the third minimum, and the third barrier are determined. The functional DD-ME2 predicts the occurrence of a third barrier in all Th nuclei and 238U. The third minima in 230,232Th are very shallow, whereas those in 226,228Th and 238U are quite prominent. With the functional PC-PK1 a third barrier is found only in 226,228,230Th. Single-nucleon levels around the Fermi surface are analyzed in 226Th, and it is found that the formation of the third minimum is mainly due to the Z = 90 proton energy gap at β20 ≈ 1.5 and β30 ≈ 0.7. Conclusions: The possible occurrence of a third barrier on the PES's of actinide nuclei depends on the effective interaction used in multidimensional CDFT calculations. More pronounced minima are predicted by the DD-ME2 functional, as compared to the functional PC-PK1. The depth of the third well in Th isotopes decreases with increasing neutron number. The origin of the third minimum is due to the proton Z = 90 shell gap at relevant deformations.
Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.
Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews
2015-03-01
This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil
2015-01-01
Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to the thermal hyperspectral data and eight pixel-based classifiers are tested: constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. The LWIR data contain emissivity and temperature information about an object. A highest overall accuracy of 90.99% was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: the image is segmented into meaningful objects based on properties such as geometry and length, with pixels grouped into objects using a watershed algorithm, and a supervised classification algorithm, i.e., a support vector machine (SVM), is then applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
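A minimal spectral angle mapper, the best-performing pixel-based classifier above, assigns each spectrum to the reference with the smallest subtended angle; the reference and pixel spectra below are invented.

```python
# Spectral angle mapper: classify by the smallest angle between a pixel
# spectrum and each class reference spectrum.
import numpy as np

def sam_classify(pixels, refs):
    """pixels: (n, bands); refs: (k, bands) class reference spectra."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))   # (n, k) spectral angles
    return angles.argmin(axis=1)

refs = np.array([[0.2, 0.5, 0.9], [0.8, 0.4, 0.1]])
pixels = np.array([[0.25, 0.55, 0.8], [0.9, 0.35, 0.15]])
print(sam_classify(pixels, refs))                     # [0 1]
```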
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
Packing C60 in Boron Nitride Nanotubes
NASA Astrophysics Data System (ADS)
Mickelson, W.; Aloni, S.; Han, Wei-Qiang; Cumings, John; Zettl, A.
2003-04-01
We have created insulated C60 nanowire by packing C60 molecules into the interior of insulating boron nitride nanotubes (BNNTs). For small-diameter BNNTs, the wire consists of a linear chain of C60 molecules. With increasing BNNT inner diameter, unusual C60 stacking configurations are obtained (including helical, hollow core, and incommensurate) that are unknown for bulk or thin-film forms of C60. C60 in BNNTs thus presents a model system for studying the properties of dimensionally constrained "silo" crystal structures. For the linear-chain case, we have fused the C60 molecules to form a single-walled carbon nanotube inside the insulating BNNT.
An accelerated proximal augmented Lagrangian method and its application in compressive sensing.
Sun, Min; Liu, Jing
2017-01-01
As a first-order method, the augmented Lagrangian method (ALM) is a benchmark solver for linearly constrained convex programming, and in practice some semi-definite proximal terms are often added to its primal variable's subproblem to make it more implementable. In this paper, we propose an accelerated PALM with indefinite proximal regularization (PALM-IPR) for convex programming with linear constraints, which generalizes the proximal terms from semi-definite to indefinite. Under mild assumptions, we establish the worst-case [Formula: see text] convergence rate of PALM-IPR in a non-ergodic sense. Finally, numerical results show that our new method is feasible and efficient for solving compressive sensing.
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
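The ℓ2 minimum-norm reconstruction referred to above has a closed form, ŝ = Lᵀ(LLᵀ + λI)⁻¹b for leadfield L and sensor data b. The sketch below applies it to a random toy leadfield; dimensions and regularization are arbitrary.

```python
# Closed-form minimum-norm source estimate on a toy sensor/source geometry.
import numpy as np

rng = np.random.default_rng(4)
L = rng.normal(size=(32, 200))          # leadfield: 32 sensors, 200 sources (toy)
s_true = np.zeros(200); s_true[17] = 1.0
b = L @ s_true + 0.01 * rng.normal(size=32)

lam = 0.1                               # Tikhonov regularization
s_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(32), b)
print(s_hat.argmax())                   # peaks at (or near) source 17
```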
Bi-directional thruster development and test report
NASA Technical Reports Server (NTRS)
Jacot, A. D.; Bushnell, G. S.; Anderson, T. M.
1990-01-01
The design, calibration, and testing of a cold gas, bi-directional, throttleable thruster are discussed. The thruster consists of an electro-pneumatic servovalve exhausting through opposite nozzles, with a high-gain pressure feedback loop to optimize performance. The thruster force was measured to determine hysteresis and linearity. Integral gain was used to maximize performance for linearity, hysteresis, and minimum thrust requirements. Proportional gain provided high dynamic response (bandwidth and phase lag). Thruster performance is very important since the thrusters are intended to be used for active control.
Detection of Bioaerosols Using Single Particle Thermal Emission Spectroscopy (First-year Report)
2012-02-01
A cooled MCT detector with a noise equivalent power (NEP) of 7×10⁻¹³ W/√Hz yields a detection S/N > 13 (assuming a sufficiently cooled background). [...] dispersively resolved using a 190-mm Horiba spectrometer that houses a time-gated 32-element mercury cadmium telluride (MCT) linear array. [...] to 10.0 ms. Minimum integration (and readout) periods for the time-gated 32-element MCT linear array are 10 µs.
Modular design attitude control system
NASA Technical Reports Server (NTRS)
Chichester, F. D.
1984-01-01
A sequence of single-axis models and a series of reduced-state linear observers of minimum order are used to reconstruct inaccessible variables pertaining to the modular attitude control of a rigid-body flexible-suspension model of a flexible spacecraft. The single-axis models consist of two, three, four, and five rigid bodies, each interconnected by a flexible shaft passing through the mass centers of the bodies. Modal damping is added to each model. Reduced-state linear observers are developed for synthesizing the inaccessible modal state variables for each model.
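A single-axis flavor of the idea is sketched below: an unmeasured rate is reconstructed with a linear observer (full-order here for brevity, where the report uses minimum-order reduced-state observers). The model and gains are illustrative, not those of the report.

```python
# Reconstruct an inaccessible state (the rate) from a position measurement
# with a Luenberger observer on a toy two-state torsional model.
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # toy stiffness/damping dynamics
C = np.array([[1.0, 0.0]])                 # only the angle is measured
Lg = np.array([[2.0], [5.0]])              # observer gain; A - Lg C is stable

dt, x, xh = 0.01, np.array([1.0, 0.0]), np.zeros(2)
for _ in range(1000):                      # forward-Euler plant + observer
    y = C @ x
    x = x + dt * (A @ x)
    xh = xh + dt * (A @ xh + (Lg @ (y - C @ xh)).ravel())
print(x, xh)                               # estimate converges to the true state
```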
NASA Technical Reports Server (NTRS)
Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.;
2016-01-01
Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO + NO2) by a factor of 2, resulting in OHCOL approximately 30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.
Preliminary paleoseismic observations along the western Denali fault, Alaska
NASA Astrophysics Data System (ADS)
Koehler, R. D.; Schwartz, D. P.; Rood, D. H.; Reger, R.; Wolken, G. J.
2013-12-01
The Denali fault in south-central Alaska, from Mt. McKinley to the Denali-Totschunda fault branch point, accommodates ~9-12 mm/yr of the right-lateral component of oblique convergence between the Pacific/Yakutat and North American plates. The eastern 226 km of this fault reach was part of the source of the 2002 M7.9 Denali fault earthquake. West of the 2002 rupture there is evidence of two large earthquakes on the Denali fault during the past ~550-700 years, but the paleoearthquake chronology prior to this time is largely unknown. To better constrain fault rupture parameters for the western Denali fault and contribute to improved seismic hazard assessment, we performed helicopter and ground reconnaissance along the southern flank of the Alaska Range between the Nenana Glacier and Pyramid Peak, a distance of ~35 km, and conducted a site-specific paleoseismic study. We present a Quaternary geologic strip map along the western Denali fault and our preliminary paleoseismic results, which include a differential-GPS survey of a displaced debris flow fan, cosmogenic 10Be surface exposure ages for boulders on this fan, and an interpretation of a trench across the main trace of the fault at the same site. Between the Nenana Glacier and Pyramid Peak, the Denali fault is characterized by prominent tectonic geomorphic features that include linear side-hill troughs, mole tracks, anastomosing composite scarps, and open left-stepping fissures. Measurements of offset rills and gullies indicate that slip during the most recent earthquake was between ~3 and 5 meters, similar to the average displacement in the 2002 earthquake. At our trench site, ~25 km east of the Parks Highway, a steep debris fan is displaced along a series of well-defined left-stepping linear fault traces. Multi-event displacements of debris-flow and snow-avalanche channels incised into the fan range from 8 to 43 m, the latter of which serves as a minimum cumulative fan offset estimate. The trench, excavated into the fan across the main fault scarp and adjacent graben, exposed sheared debris fan parent material at its north and south ends, separated by a central zone of stacked scarp-derived colluvium and weakly developed peaty soils. Stratigraphic relations and upward fault terminations clearly record the occurrence of the past three surface-faulting earthquakes and suggest four or more such events. Results of pending 14C analyses are expected to provide new information on earthquake timing and recurrence. A Holocene slip rate for this section of the fault will be developed using back-slip models and an estimate of the age of the fan constrained by our detailed surveys of channel offsets and pending cosmogenic 10Be exposure ages for surface boulders, respectively.
Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima
2014-01-01
We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of the space extent entropy by the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158
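Restating the abstract's comparison numerically, with invented fitted spreads and the joint quantity formed as the abstract defines it:

```python
# Compare a joint discrimination entropy against the unidimensional Gabor
# bound 1/(4π) ≈ 0.0796. The fitted spreads are hypothetical values.
import math

space_entropy = 0.11      # from the spatial-extent psychometric fit (hypothetical)
freq_entropy = 0.55       # from the spatial-frequency fit (hypothetical)

joint = math.sqrt(space_entropy) * freq_entropy   # as defined in the abstract
print(joint, 1 / (4 * math.pi))                   # compare with 0.0796
```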
Estimating health state utility values for comorbid health conditions using SF-6D data.
Ara, Roberta; Brazier, John
2011-01-01
When health state utility values for comorbid health conditions are not available, data from cohorts with single conditions are used to estimate scores. The methods used can produce very different results and there is currently no consensus on which is the most appropriate approach. The objective of the current study was to compare the accuracy of five different methods within the same dataset. Data collected during five Welsh Health Surveys were subgrouped by health status. Mean short-form 6 dimension (SF-6D) scores for cohorts with a specific health condition were used to estimate mean SF-6D scores for cohorts with comorbid conditions using the additive, multiplicative, and minimum methods, the adjusted decrement estimator (ADE), and a linear regression model. The mean SF-6D for subgroups with comorbid health conditions ranged from 0.4648 to 0.6068. The linear model produced the most accurate scores for the comorbid health conditions with 88% of values accurate to within the minimum important difference for the SF-6D. The additive and minimum methods underestimated or overestimated the actual SF-6D scores respectively. The multiplicative and ADE methods both underestimated the majority of scores. However, both methods performed better when estimating scores smaller than 0.50. Although the range in actual health state utility values (HSUVs) was relatively small, our data covered the lower end of the index and the majority of previous research has involved actual HSUVs at the upper end of possible ranges. Although the linear model gave the most accurate results in our data, additional research is required to validate our findings. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
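The competing combination rules are easy to state in miniature; the utilities below are invented and full health is taken as the baseline.

```python
# Estimating a comorbid-state utility from two single-condition utilities.
u_a, u_b = 0.75, 0.60

additive = 1 - ((1 - u_a) + (1 - u_b))   # subtract both decrements: 0.35
multiplicative = u_a * u_b               # multiply utilities: 0.45
minimum = min(u_a, u_b)                  # take the worse condition: 0.60

print(additive, multiplicative, minimum)
```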
Andean surface uplift constrained by radiogenic isotopes of arc lavas.
Scott, Erin M; Allen, Mark B; Macpherson, Colin G; McCaffrey, Ken J W; Davidson, Jon P; Saville, Christopher; Ducea, Mihai N
2018-03-06
Climate and tectonics have complex feedback systems which are difficult to resolve and remain controversial. Here we propose a new climate-independent approach to constrain regional Andean surface uplift. 87Sr/86Sr and 143Nd/144Nd ratios of Quaternary frontal-arc lavas from the Andean Plateau are distinctly crustal (>0.705 and <0.5125, respectively) compared to non-plateau arc lavas, which we identify as a plateau discriminant. Strong linear correlations exist between smoothed elevation and the 87Sr/86Sr (R² = 0.858, n = 17) and 143Nd/144Nd (R² = 0.919, n = 16) ratios of non-plateau arc lavas. These relationships are used to constrain 200 Myr of surface uplift history for the Western Cordillera (present elevation 4200 ± 516 m). Between 16 and 26°S, Miocene to recent arc lavas have comparable isotopic signatures, which we infer indicates that current elevations were attained in the Western Cordillera from 23 Ma. From 23-10 Ma, surface uplift gradually propagated southwards by ~400 km.
NASA Technical Reports Server (NTRS)
Downie, John D.
1995-01-01
Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.
Helicopter Control Energy Reduction Using Moving Horizontal Tail
Oktay, Tugrul; Sal, Firat
2015-01-01
The helicopter moving horizontal tail (i.e., MHT) strategy is applied in order to save helicopter flight control system (i.e., FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models, which are then linearized around a straight level flight condition. A specific variance-constrained control strategy, namely output variance constrained control (i.e., OVC), is utilized for the helicopter FCS. Control energy savings due to the MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (i.e., SPSA). In order to observe the improvement in behavior over classical controls, closed-loop analyses are performed. PMID:26180841
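A minimal SPSA loop of the kind used for the simultaneous FCS/MHT optimization is sketched below; the quadratic objective stands in for the control-energy cost, and the gain schedules use the standard exponents.

```python
# SPSA: approximate the gradient from two objective evaluations with a
# random simultaneous perturbation, then take a gradient step.
import numpy as np

def spsa(f, theta, iters=200, a=0.1, c=0.1, seed=0):
    rng = np.random.default_rng(seed)
    for k in range(1, iters + 1):
        ak, ck = a / k**0.602, c / k**0.101                # standard gain schedules
        delta = rng.choice([-1.0, 1.0], size=theta.size)   # Bernoulli ±1 perturbation
        ghat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * ghat
    return theta

f = lambda th: ((th - np.array([1.0, -2.0]))**2).sum()     # toy cost function
print(spsa(f, np.zeros(2)))                                # ≈ [1, -2]
```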
Uniform magnetic fields in density-functional theory
NASA Astrophysics Data System (ADS)
Tellgren, Erik I.; Laestadius, Andre; Helgaker, Trygve; Kvaal, Simen; Teale, Andrew M.
2018-01-01
We construct a density-functional formalism adapted to uniform external magnetic fields that is intermediate between conventional density functional theory and Current-Density Functional Theory (CDFT). In the intermediate theory, which we term linear vector potential-DFT (LDFT), the basic variables are the density, the canonical momentum, and the paramagnetic contribution to the magnetic moment. Both a constrained-search formulation and a convex formulation in terms of Legendre-Fenchel transformations are constructed. Many theoretical issues in CDFT find simplified analogs in LDFT. We prove results concerning N-representability, Hohenberg-Kohn-like mappings, existence of minimizers in the constrained-search expression, and a restricted analog to gauge invariance. The issue of additivity of the energy over non-interacting subsystems, which is qualitatively different in LDFT and CDFT, is also discussed.
Constrained ℋ∞ control for low bandwidth active suspensions
NASA Astrophysics Data System (ADS)
Wasiwitono, Unggul; Sutantra, I. Nyoman
2017-08-01
Low Bandwidth Active Suspension (LBAS) is shown to be more competitive with High Bandwidth Active Suspension (HBAS) when energy and cost aspects are taken into account. In this paper, a constrained ℋ∞ control scheme is applied to the LBAS system. The ℋ∞ performance is used to measure ride comfort, while the concept of a reachable set in a state-space ellipsoid defined by a quadratic storage function is used to capture the time-domain constraints representing the requirements for road holding, suspension deflection limitation, and actuator saturation. The control problem is then derived in the framework of Linear Matrix Inequality (LMI) optimization. The simulation is conducted considering the road disturbance as a stationary random process. The achievable performance of LBAS is analyzed for different values of bandwidth and damping ratio.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Robust model predictive control for constrained continuous-time nonlinear systems
NASA Astrophysics Data System (ADS)
Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong
2018-02-01
In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees the actual trajectory being contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
NASA Astrophysics Data System (ADS)
Ickert, R. B.; Mundil, R.
2012-12-01
Dateable minerals (especially zircon dated by U-Pb) that crystallized at high temperatures but have been redeposited pose both unique opportunities and challenges for geochronology. Although they have the potential to provide useful information on the depositional age of their host rocks, their relationship to the host is not always well constrained. For example, primary volcanic deposits will often have a lag time (the time between eruption and deposition) that is smaller than can be resolved using radiometric techniques, so the ages of eruption and deposition will be coincident within uncertainty. Alternatively, ordinary clastic sedimentary rocks will usually have a long and variable lag time, even for the youngest minerals. Intermediate cases, for example moderately reworked volcanogenic material, will have a short but unknown lag time. A compounding problem with U-Pb zircon is that the residence time of crystals in their host magma chamber (the time between crystallization and eruption) can be long and variable, even within the products of a single eruption. In cases where the lag and/or residence time is suspected to be large relative to the precision of the date, a common objective is to determine the minimum age of a sample of dates, in order to constrain the maximum age of the deposition of the host rock. However, neither the extraction of that age nor the assignment of a meaningful uncertainty is straightforward. A number of ad hoc techniques have been employed in the literature; these may be appropriate for particular data sets or specific problems, but they may yield biased or misleading results. Ludwig (2012) has developed an objective, statistically justified method for determining the distribution of the minimum age, but it has not been widely adopted. Here we extend this algorithm with a bootstrap (which can show the effect - if any - of the sampling distribution itself). This method has a number of desirable characteristics: it can incorporate all data points while being resistant to outliers, it utilizes the measurement uncertainties, and it does not require the assumption that any given cluster of data represents a single geological event. In brief, the technique generates a synthetic distribution from the input data by resampling with replacement (a bootstrap). Each resample is a random selection from a Gaussian distribution defined by the mean and uncertainty of the data point. For this distribution, the minimum value is calculated. This procedure is repeated many times (>1000), and a distribution of minimum values is generated, from which a confidence interval can be constructed. We demonstrate the application of this technique using natural and synthetic datasets, show its advantages and limitations, and relate it to other methods. We emphasize that this estimate remains strictly a minimum age - as with any other estimate that does not explicitly incorporate lag or residence time, it will not reflect a depositional age if the lag/residence time is larger than the uncertainty of the estimate. We recommend that this or similar techniques be considered by geochronologists. Ludwig, K.R., 2012. Isoplot 3.75, a geochronological toolkit for Microsoft Excel; Berkeley Geochronology Center Special Publication no. 5.
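The resampling scheme described above is straightforward to prototype. The sketch below is a simplified stand-in for the bootstrap extension (it does not reproduce Ludwig's algorithm itself); the dates, uncertainties, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_minimum_age(ages, sigmas, n_boot=5000):
    """Distribution of the minimum age by parametric bootstrap.

    ages, sigmas: 1-D arrays of dates and their 1-sigma uncertainties.
    Each replicate resamples the dates with replacement, perturbs each
    draw by its Gaussian uncertainty, and records the minimum value.
    """
    ages = np.asarray(ages, float)
    sigmas = np.asarray(sigmas, float)
    n = ages.size
    mins = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample with replacement
        draws = rng.normal(ages[idx], sigmas[idx])   # Gaussian per data point
        mins[i] = draws.min()
    return mins

# Hypothetical zircon dates (Ma) with 1-sigma errors
dates = [252.1, 252.4, 252.3, 253.0, 252.2]
errs = [0.3, 0.2, 0.4, 0.5, 0.3]
dist = bootstrap_minimum_age(dates, errs)
lo, hi = np.percentile(dist, [2.5, 97.5])
print(f"minimum age: {dist.mean():.2f} Ma, 95% CI [{lo:.2f}, {hi:.2f}]")
```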
Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance
NASA Technical Reports Server (NTRS)
Markopoulos, Nikos; Calise, Anthony J.
1995-01-01
A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed 2-time-scale behavior of a/c during energy climbs is identified and explained. With regard to constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs that result from the traditional formulations of such problems, a complete reformulation is proposed which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, even in the presence of 2-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of 2-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises, under the assumption of tangential thrust, from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. The thrust program can be used not only with power-limited propulsion systems but with any propulsion system capable of generating continuous thrust of controllable magnitude. For propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation and is thus uniquely suitable for real-time feedback guidance implementations.
Pleistocene Thermocline Reconstruction and Oxygen Minimum Zone Evolution in the Maldives
NASA Astrophysics Data System (ADS)
Yu, S. M.; Wright, J.
2017-12-01
Drift deposits on the southern flank of the Kardiva Channel in the eastern Inner Sea of the Maldives provide a complete record of Pleistocene water column changes in conjunction with monsoon cyclicity and fluctuations in the current system. We sampled IODP Site 359-U1467 to reconstruct water column conditions using foraminiferal stable isotope records. This unlithified lithostratigraphic unit is rich in well-preserved microfossils and has an average sedimentation rate of 3.4 cm/kyr. Marine Isotope Stages 1-6 were identified and show higher sedimentation rates during the interglacial sections, approaching 6 cm/kyr. We present the δ13C and δ18O records of planktonic and benthic foraminiferal species sampled at intervals of 3 cm. Globigerinoides ruber was used to constrain surface conditions. The thermocline-dwelling species Globorotalia menardii was chosen to monitor fluctuations in the thermocline relative to the mixed layer. Lastly, the δ13C of the benthic species Cibicidoides subhaidingerii and Planulina renzi reveals changes in bottom water ventilation and the expansion of oxygen minimum zones over time. All three taxa recorded similar changes in δ18O over the glacial/interglacial cycles, which is remarkable given the large sea level change (~120 m) and the relatively shallow water depth (~450 m). There is a small increase in the δ13C gradient during the glacial intervals, which might reflect less ventilated bottom waters in the Inner Sea. This multispecies approach allows us to better constrain the thermocline hydrography and suggests that changes in OMZ thickness are driven by the intensification of the monsoon cycles, while painting a more cohesive picture of the changes in the water column structure.
NASA Astrophysics Data System (ADS)
Alakent, Burak; Camurdan, Mehmet C.; Doruker, Pemra
2005-10-01
Time series models, which are constructed from the projections of the molecular-dynamics (MD) runs on principal components (modes), are used to mimic the dynamics of two proteins: tendamistat and the immunity protein of colicin E7 (ImmE7). Four independent MD runs of tendamistat and three independent runs of the ImmE7 protein in vacuum are used to investigate the energy landscapes of these proteins. It is found that mean-square displacements of residues along the modes on different time scales can be mimicked by time series models, which are utilized in dividing protein dynamics into different regimes with respect to the dominating motion type. The first two regimes constitute the dominance of intraminimum motions during the first 5 ps and the random walk motion in a hierarchically higher-level energy minimum, which comprise the initial time period of the trajectories up to 20-40 ps for tendamistat and 80-120 ps for ImmE7. These are also the time ranges within which linear nonstationary time series are completely satisfactory in explaining protein dynamics. Encountering energy barriers enclosing higher-level energy minima constrains the random walk motion of the proteins, and pseudorelaxation processes at different levels of minima are detected in tendamistat, depending on the sampling window size. Correlation (relaxation) times of 30-40 ps and 150-200 ps are detected for two energy envelopes of successive levels in tendamistat, which gives an overall idea of the hierarchical structure of the energy landscape. However, it should be stressed that the correlation times of the modes are highly variable with respect to conformational subspaces and sampling window sizes, indicating the absence of an actual relaxation. The random-walk step sizes and the time length of the second regime are used to illuminate an important difference between the dynamics of the two proteins, which cannot be clarified by the investigation of relaxation times alone: ImmE7 has lower energy barriers enclosing the higher-level energy minimum, preventing the protein from relaxing and letting it move in a random-walk fashion for a longer period of time.
Robust linear discriminant models to solve financial crisis in banking sectors
NASA Astrophysics Data System (ADS)
Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni
2014-12-01
Linear discriminant analysis (LDA) is a widely used technique in pattern classification via an equation which minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends strongly on the assumptions of normality and homoscedasticity. Several robust estimators in LDA, such as the Minimum Covariance Determinant (MCD), S-estimators and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of the Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
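For illustration, a two-class linear discriminant with MCD-based location and scatter can be assembled from scikit-learn's MinCovDet estimator. This is a minimal sketch of the general idea, not the authors' exact procedure; the synthetic "distress"/"non-distress" data is a placeholder.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def robust_lda_fit(X0, X1):
    """Two-class linear discriminant using MCD location/scatter estimates."""
    mcd0 = MinCovDet(random_state=0).fit(X0)
    mcd1 = MinCovDet(random_state=0).fit(X1)
    n0, n1 = len(X0), len(X1)
    # Pooled robust covariance replaces the classical pooled sample covariance.
    S = ((n0 - 1) * mcd0.covariance_ + (n1 - 1) * mcd1.covariance_) / (n0 + n1 - 2)
    w = np.linalg.solve(S, mcd1.location_ - mcd0.location_)  # discriminant direction
    c = w @ (mcd0.location_ + mcd1.location_) / 2            # midpoint cutoff
    return w, c

rng = np.random.default_rng(1)
X_nd = rng.normal(0.0, 1.0, size=(60, 4))   # synthetic "non-distress" banks
X_d = rng.normal(1.5, 1.0, size=(40, 4))    # synthetic "distress" banks
w, c = robust_lda_fit(X_nd, X_d)
pred = (np.vstack([X_nd, X_d]) @ w > c).astype(int)
truth = np.r_[np.zeros(60, int), np.ones(40, int)]
print("hit ratio:", (pred == truth).mean())  # fraction correctly classified
```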
NASA Technical Reports Server (NTRS)
Martin, J. A.
1974-01-01
A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.
Electronic torsional sound in linear atomic chains: Chemical energy transport at 1000 km/s
NASA Astrophysics Data System (ADS)
Kurnosov, Arkady A.; Rubtsov, Igor V.; Maksymov, Andrii O.; Burin, Alexander L.
2016-07-01
We investigate entirely electronic torsional vibrational modes in linear cumulene chains. The carbon nuclei of a cumulene are positioned along the primary axis so that they can participate only in the transverse and longitudinal motions. However, the interatomic electronic clouds behave as a torsion spring with remarkable torsional stiffness. The collective dynamics of these clouds can be described in terms of electronic vibrational quanta, which we name torsitons. It is shown that the group velocity of the wavepacket of torsitons is much higher than the typical speed of sound, because of the small mass of participating electrons compared to the atomic mass. For the same reason, the maximum energy of the torsitons in cumulenes is as high as a few electronvolts, while the minimum possible energy is evaluated as a few hundred wavenumbers and this minimum is associated with asymmetry of zero point atomic vibrations. Theory predictions are consistent with the time-dependent density functional theory calculations. Molecular systems for experimental evaluation of the predictions are proposed.
Characterization of the International Linear Collider damping ring optics
NASA Astrophysics Data System (ADS)
Shanks, J.; Rubin, D. L.; Sagan, D.
2014-10-01
A method is presented for characterizing the emittance dilution and dynamic aperture for an arbitrary closed lattice that includes guide field magnet errors, multipole errors and misalignments. This method, developed and tested at the Cornell Electron Storage Ring Test Accelerator (CesrTA), has been applied to the damping ring lattice for the International Linear Collider (ILC). The effectiveness of beam based emittance tuning is limited by beam position monitor (BPM) measurement errors, number of corrector magnets and their placement, and correction algorithm. The specifications for damping ring magnet alignment, multipole errors, number of BPMs, and precision in BPM measurements are shown to be consistent with the required emittances and dynamic aperture. The methodology is then used to determine the minimum number of position monitors that is required to achieve the emittance targets, and how that minimum depends on the location of the BPMs. Similarly, the maximum tolerable multipole errors are evaluated. Finally, the robustness of each BPM configuration with respect to random failures is explored.
Dispersive effects from a comparison of electron and positron scattering from 12C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul Gueye; M. Bernheim; J. F. Danel
1998-05-01
Dispersive effects have been investigated by comparing elastic scattering of electrons and positrons from ¹²C at the Saclay Linear Accelerator. The results demonstrate that dispersive effects at energies of 262 MeV and 450 MeV are less than 2% below the first diffraction minimum [0.95 < q_eff (fm⁻¹) < 1.66], in agreement with the prediction of Friar and Rosen. At the position of this minimum (q_eff = 1.84 fm⁻¹), the deviation between the positron scattering cross section and the cross section derived from the electron results is -44% ± 30%.
Estimation of transformation parameters for microarray data.
Durbin, Blythe; Rocke, David M
2003-07-22
Durbin et al. (2002), Huber et al. (2002) and Munson (2001) independently introduced a family of transformations (the generalized-log family) which stabilizes the variance of microarray data up to the first order. We introduce a method for estimating the transformation parameter in tandem with a linear model based on the procedure outlined in Box and Cox (1964). We also discuss means of finding transformations within the generalized-log family which are optimal under other criteria, such as minimum residual skewness and minimum mean-variance dependency. R and Matlab code and test data are available from the authors on request.
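For reference, a common parameterization of the generalized-log family is g(y) = ln(y - α + sqrt((y - α)² + λ)). The sketch below uses this form with made-up parameter values; in practice λ (and any offset α) would be estimated from the data in tandem with the linear model, as described.

```python
import numpy as np

def glog(y, lam, alpha=0.0):
    """Generalized-log transform; stabilizes variance to first order.

    lam is the transformation parameter (estimated, e.g., by maximum
    likelihood in a Box-Cox-style procedure); alpha is an optional
    background offset. Parameter values below are illustrative only.
    """
    z = y - alpha
    return np.log(z + np.sqrt(z**2 + lam))

y = np.array([10.0, 100.0, 1000.0, 10000.0])
print(glog(y, lam=250.0))   # approaches log(2y) for large y
```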
Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost
NASA Technical Reports Server (NTRS)
Martin, J. A.
1973-01-01
An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.
Rice, Karen C.; Hirsch, Robert M.
2012-01-01
Long-term streamflow data within the Chesapeake Bay watershed and surrounding area were analyzed in an attempt to identify trends in streamflow. Data from 30 streamgages near and within the Chesapeake Bay watershed were selected from 1930 through 2010 for analysis. Streamflow data were converted to runoff, and trend slopes in percent change per decade were calculated. Trend slopes for three runoff statistics (the 7-day minimum, the mean, and the 1-day maximum) were analyzed annually and seasonally. The slopes also were analyzed both spatially and temporally. The spatial results indicated that trend slopes in the northern half of the watershed were generally greater than those in the southern half. The temporal analysis was done by splitting the 80-year flow record into two subsets; records for 28 streamgages were analyzed for 1930 through 1969 and records for 30 streamgages were analyzed for 1970 through 2010. The mean of the data for all sites for each year was plotted so that the following datasets were analyzed: the 7-day minimum runoff for the north, the 7-day minimum runoff for the south, the mean runoff for the north, the mean runoff for the south, the 1-day maximum runoff for the north, and the 1-day maximum runoff for the south. Results indicated that the period 1930 through 1969 was statistically different from the period 1970 through 2010. For the 7-day minimum runoff and the mean runoff, the latter period had significantly higher streamflow than the earlier period, although within those two periods no significant linear trends were identified. For the 1-day maximum runoff, no step trend or linear trend could be shown to be statistically significant for the north, although the south showed a mixture of an upward step trend accompanied by linear downtrends within the periods. In no case was a change identified that indicated an increasing rate of change over time, and no general pattern was identified of hydrologic conditions becoming "more extreme" over time.
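The three runoff statistics and a percent-per-decade slope can be computed along the following lines. This is a hedged sketch with our own function names and synthetic data, not the authors' code, and it uses a simple OLS slope rather than their exact trend tests.

```python
import numpy as np
import pandas as pd

def annual_stats(q):
    """Annual 7-day minimum, mean, and 1-day maximum from daily runoff q
    (a pandas Series indexed by date)."""
    year = q.index.year
    return pd.DataFrame({
        "min7": q.rolling(7).mean().groupby(year).min(),  # 7-day minimum
        "mean": q.groupby(year).mean(),
        "max1": q.groupby(year).max(),                    # 1-day maximum
    })

def trend_pct_per_decade(stat):
    """OLS slope expressed as percent change per decade of the series mean."""
    years = stat.index.values.astype(float)
    slope = np.polyfit(years, stat.values, 1)[0]   # units per year
    return 100.0 * 10.0 * slope / stat.mean()

# Synthetic example: 30 years of daily runoff
idx = pd.date_range("1981-01-01", "2010-12-31", freq="D")
q = pd.Series(np.random.default_rng(0).gamma(2.0, 1.0, len(idx)), index=idx)
stats = annual_stats(q)
print({c: round(trend_pct_per_decade(stats[c]), 2) for c in stats})
```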
Structural Properties and Estimation of Delay Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kwong, R. H. S.
1975-01-01
Two areas in the theory of delay systems were studied: structural properties and their applications to feedback control, and optimal linear and nonlinear estimation. The concepts of controllability, stabilizability, observability, and detectability were investigated. The property of pointwise degeneracy of linear time-invariant delay systems was considered. Necessary and sufficient conditions for three-dimensional linear systems to be made pointwise degenerate by delay feedback were obtained, while sufficient conditions for this to be possible were given for higher-dimensional linear systems. These results were applied to obtain solvability conditions for the minimum-time output zeroing control problem by delay feedback. A representation theorem was given for conditional moment functionals of general nonlinear stochastic delay systems, and stochastic differential equations were derived for conditional moment functionals satisfying certain smoothness properties.
On the design of classifiers for crop inventories
NASA Technical Reports Server (NTRS)
Heydorn, R. P.; Takacs, H. C.
1986-01-01
Crop proportion estimators that use classifications of satellite data to correct, in an additive way, a given estimate acquired from ground observations are discussed. A linear version of these estimators is optimal, in terms of minimum variance, when the regression of the ground observations onto the satellite observations is linear. When this regression is not linear, but the reverse regression (satellite observations onto ground observations) is linear, the estimator is suboptimal but still has certain appealing variance properties. In this paper, expressions are derived for those regressions which relate the intercepts and slopes to conditional classification probabilities. These expressions are then used to discuss the question of classifier designs that can lead to low-variance crop proportion estimates. Variance expressions for these estimates in terms of classifier omission and commission errors are also derived.
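A minimal sketch of such an additive (regression-type) correction is given below; all numbers are hypothetical and the variable names are ours, standing in for segment-level ground proportions and classifier-derived proportions.

```python
import numpy as np

# Additive correction: adjust a ground-based proportion estimate using
# classified satellite pixels over the full region (regression estimator).
y_seg = np.array([0.32, 0.41, 0.28, 0.36])  # ground proportions, sampled segments
x_seg = np.array([0.30, 0.45, 0.25, 0.40])  # classifier proportions, same segments
x_all = 0.37                                # classifier proportion, whole region

b = np.polyfit(x_seg, y_seg, 1)[0]          # slope of ground-on-satellite regression
p_hat = y_seg.mean() + b * (x_all - x_seg.mean())
print(f"corrected crop proportion: {p_hat:.3f}")
```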
Effects of regulated river flows on habitat suitability for the robust redhorse
Fisk, J. M.; Kwak, Thomas J.; Heise, R. J.
2015-01-01
The Robust Redhorse Moxostoma robustum is a rare and imperiled fish, with wild populations occurring in three drainages from North Carolina to Georgia. Hydroelectric dams have altered the species' habitat and restricted its range. An augmented minimum-flow regime that will affect Robust Redhorse habitat was recently prescribed for Blewett Falls Dam, a hydroelectric facility on the Pee Dee River, North Carolina. Our objective was to quantify suitable spawning and nonspawning habitat under current and proposed minimum-flow regimes. We implanted radio transmitters into 27 adult Robust Redhorses and relocated the fish from spring 2008 to summer 2009, and we described habitat at 15 spawning capture locations. Nonspawning habitat consisted of deep, slow-moving pools (mean depth = 2.3 m; mean velocity = 0.23 m/s), bedrock and sand substrates, and boulders or coarse woody debris as cover. Spawning habitat was characterized as shallower, faster-moving water (mean depth = 0.84 m; mean velocity = 0.61 m/s) with gravel and cobble as substrates and boulders as cover associated with shoals. Telemetry relocations revealed two behavioral subgroups: a resident subgroup (linear range [mean ± SE] = 7.9 ± 3.7 river kilometers [rkm]) that remained near spawning areas in the Piedmont region throughout the year, and a migratory subgroup (linear range = 64.3 ± 8.4 rkm) that migrated extensively downstream into the Coastal Plain region. Spawning and nonspawning habitat suitability indices were developed based on field microhabitat measurements and were applied to model suitable available habitat (weighted usable area) for current and proposed augmented minimum flows. Suitable habitat (both spawning and nonspawning) increased for each proposed seasonal minimum flow relative to former minimum flows, with substantial increases for spawning sites. Our results contribute to an understanding of how regulated flows affect available habitats for imperiled species. Flow managers can use these findings to regulate discharge more effectively and to create and maintain important habitats during critical periods for priority species.
Linear variability of gait according to socioeconomic status in elderly
2016-01-01
Aim: To evaluate the linear variability of comfortable gait according to socioeconomic status in community-dwelling elderly. Method: For this cross-sectional observational study, 63 self-functioning elderly were categorized by socioeconomic level into medium-low (n = 33, age 69.0 ± 5.0 years) and medium-high (n = 30, age 71.0 ± 6.0 years) groups. Each participant was asked to walk at a comfortable speed for 3 min on a 40-meter elliptical circuit; five strides were recorded on video and transformed into frames to determine the minimum foot clearance, maximum foot clearance, and stride length. The intra-group linear variability was calculated as the coefficient of variation in percent. Results: The variability of the trajectory parameters did not differ by socioeconomic status, at 30% (range = 15-55%) for minimum foot clearance and 6% (range = 3-8%) for maximum foot clearance. Meanwhile, stride length was consistently more variable in the medium-low socioeconomic group for the overall sample (p = 0.004), females (p = 0.041), and males (p = 0.007), with values near 4% (range = 2.5-5.0%) in the medium-low group and 2% (range = 1.5-3.5%) in the medium-high group. Conclusions: The intra-group linear variability of stride length during comfortable gait is consistently higher, while remaining within reference parameters, for elderly belonging to the medium-low socioeconomic status. This might be indicative of greater complexity and consequent motor adaptability. PMID:27546931
Hossein-Zadeh, Navid Ghavi
2016-08-01
The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes, collected from 523 dairy herds between 1996 and 2012 by the Animal Breeding Center of Iran. Each model was fitted to the monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS, and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of the lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively, whereas the Wood, Dhanoa and Sikka mixed models provided the best fit for third-parity buffaloes. Evaluation of the first-, second- and third-lactation features showed that all models, except the Dijkstra model in the third lactation, under-predicted the test time at which daily FPR was at its minimum. On the other hand, the minimum FPR was over-predicted by all equations. Overall, the evaluation indicated that non-linear mixed models were sufficient for fitting the test-day FPR records of Iranian buffaloes.
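As an illustration, the Wood model y(t) = a·t^b·exp(-c·t) can be fitted to test-day records by nonlinear least squares. The FPR values below are made-up, and this plain curve fit stands in for the mixed-model fit the authors performed in PROC NLMIXED.

```python
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation-curve form: y = a * t**b * exp(-c * t)."""
    return a * t**b * np.exp(-c * t)

# Hypothetical monthly test-day FPR records (months 1-10)
t = np.arange(1, 11, dtype=float)
y = np.array([1.35, 1.22, 1.12, 1.06, 1.04, 1.05, 1.09, 1.15, 1.22, 1.30])

(a, b, c), _ = curve_fit(wood, t, y, p0=(1.4, -0.2, -0.04))
t_min = b / c                        # stationary point; a minimum when b < 0
rss = np.sum((y - wood(t, a, b, c))**2)
n, k = len(y), 3
aic = n * np.log(rss / n) + 2 * k    # for comparing goodness of fit across models
print(f"minimum FPR near month {t_min:.1f}, AIC = {aic:.1f}")
```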
NASA Astrophysics Data System (ADS)
Lopez-Sanchez, Marco A.; Marcos, Alberto; Martínez, Francisco J.; Iriondo, Alexander; Llana-Fúnez, Sergio
2015-06-01
The Vivero fault is a crustal-scale extensional shear zone parallel to the Variscan orogen in the Iberian massif, with an associated dip-slip movement toward the hinterland. To constrain the timing of the extension accommodated by this structure, we performed zircon U-Pb LA-ICP-MS geochronology on several deformed plutons, some of them emplaced syntectonically. The different crystallization ages obtained indicate that the fault was active at least between 303 ± 2 and 287 ± 3 Ma, implying a minimum of 16 ± 5 Myr of tectonic activity along the fault. The onset of faulting is established to have occurred later than 314 ± 2 Ma. The geochronological data confirm that the Vivero fault postdates the main Variscan deformation events in the NW Iberian massif and that the extension direction of the Late Carboniferous-Early Permian crustal-scale extensional shear zones along the Ibero-Armorican Arc was consistently perpendicular to the general arcuate trend of the belt in SW Europe.
An approximation function for frequency constrained structural optimization
NASA Technical Reports Server (NTRS)
Canfield, R. A.
1989-01-01
The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations of frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency, and perhaps subject to other constraints such as stress, displacement, and gage size as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisfy requirements of an aircraft or spacecraft's control law. Whatever the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations or others may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
Reduced probability of ice-free summers for 1.5 °C compared to 2 °C warming
NASA Astrophysics Data System (ADS)
Jahn, Alexandra
2018-05-01
Arctic sea ice has declined rapidly with increasing global temperatures. However, it is largely unknown how Arctic summer sea-ice impacts would vary under the 1.5 °C Paris target compared to scenarios with greater warming. Using the Community Earth System Model, I show that constraining warming to 1.5 °C rather than 2.0 °C reduces the probability of any summer ice-free conditions by 2100 from 100% to 30%. It also reduces the late-century probability of an ice cover below the 2012 record minimum from 98% to 55%. For warming above 2 °C, frequent ice-free conditions can be expected, potentially for several months per year. Although sea-ice loss is generally reversible for decreasing temperatures, sea ice will only recover to current conditions if atmospheric CO2 is reduced below present-day concentrations. Due to model biases, these results provide a lower bound on summer sea-ice impacts, but clearly demonstrate the benefits of constraining warming to 1.5 °C.
Constrained dictionary learning and probabilistic hypergraph ranking for person re-identification
NASA Astrophysics Data System (ADS)
He, You; Wu, Song; Pu, Nan; Qian, Li; Xiao, Guoqiang
2018-04-01
Person re-identification is a fundamental and inevitable task in public security. In this paper, we propose a novel framework to improve the performance of this task. First, two different types of descriptors are extracted to represent a pedestrian: (1) appearance-based superpixel features, which consist mainly of conventional color features and are extracted from superpixels rather than the whole picture, and (2) owing to the limited discrimination of appearance features, deep features extracted by a feature-fusion network. Second, a view-invariant subspace is learned by dictionary learning constrained by the minimum negative sample (termed DL-cMN) to reduce the noise in the appearance-based superpixel feature domain. Then, we use the deep features and the sparse codes transformed from the appearance-based features to establish hyperedges, respectively, by k-nearest neighbors, rather than simply joining the different features. Finally, a final ranking is performed by a probabilistic hypergraph ranking algorithm. Extensive experiments on three challenging datasets (VIPeR, PRID450S and CUHK01) demonstrate the advantages and effectiveness of our proposed algorithm.
Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints
NASA Technical Reports Server (NTRS)
Orr, Jeb S.; Wall, John W.
2011-01-01
In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.
A mixing-model approach to quantifying sources of organic matter to salt marsh sediments
NASA Astrophysics Data System (ADS)
Bowles, K. M.; Meile, C. D.
2010-12-01
Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include (1) autochthonous production of vascular plant matter, (2) import of allochthonous plant material, and (3) phytoplankton biomass. Quantifying the relative contribution of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantifying organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data utilized in such mixing models, the uncertainties in endmember compositions, and the temporal dynamics of non-conservative entities can have varying effects on the results. Making use of a comprehensive data set that encompasses several endmember characteristics - including a yearlong degradation experiment - we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e., endmember characteristics such as δ13COC, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations and accounting for the uncertainty in endmember characteristics. Finally, as biogeochemical processes can alter endmember characteristics over time, we investigate the effect of early diagenesis on the chosen parameters, an analysis that entails an assessment of the organic matter age distribution. Thus, estimates of the relative contributions of phytoplankton, C3 and C4 plants to bulk sediment organic matter depend not only on environmental characteristics that impact reactivity, but also on sediment mixing processes.
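A minimal version of such a constrained mixing inversion can be written with non-negative least squares, with the sum-to-one constraint enforced by a heavily weighted extra row. The endmember values below are rough illustrative numbers, not the study's data, and in practice the tracer rows should be scaled to comparable magnitudes.

```python
import numpy as np
from scipy.optimize import nnls

# Endmember signatures (rows: tracers; columns: phytoplankton, C3, C4).
# Values are assumed for illustration only.
A = np.array([
    [-21.0, -27.0, -13.0],   # d13C (permil)
    [  7.0,  25.0,  35.0],   # C/N ratio
    [  0.1,   6.0,   9.0],   # lignin content
])
b = np.array([-19.5, 22.0, 5.5])   # measured sediment mixture

# Append a heavily weighted sum-to-one row; nnls enforces non-negativity.
w = 1e4
A_aug = np.vstack([A, w * np.ones(A.shape[1])])
b_aug = np.append(b, w * 1.0)
f, resid = nnls(A_aug, b_aug)
print("source fractions:", f.round(3))   # phytoplankton, C3, C4
```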
NASA Technical Reports Server (NTRS)
Arneson, Heather M.; Dousse, Nicholas; Langbort, Cedric
2014-01-01
We consider control design for positive compartmental systems in which each compartment's outflow rate is described by a concave function of the amount of material in the compartment. We address the problem of determining the routing of material between compartments to satisfy time-varying state constraints while ensuring that material reaches its intended destination over a finite time horizon. We give sufficient conditions for the existence of a time-varying state-dependent routing strategy which ensures that the closed-loop system satisfies basic network properties of positivity, conservation and interconnection while ensuring that capacity constraints are satisfied, when possible, or adjusted if a solution cannot be found. These conditions are formulated as a linear programming problem. Instances of this linear programming problem can be solved iteratively to generate a solution to the finite horizon routing problem. Results are given for the application of this control design method to an example problem. Key words: linear programming; control of networks; positive systems; controller constraints and structure.
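An instance of such a routing LP can be posed directly with scipy. The three-compartment network below is a toy illustration of conservation and capacity constraints, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Route one unit of material from compartment 0 to compartment 2, optionally
# through compartment 1, minimizing total routed flow.
# Decision variables: x01, x02, x12 (flows on the three allowed links).
c = np.ones(3)                      # cost: total material routed over links
A_eq = np.array([
    [1, 1, 0],    # conservation at source: x01 + x02 = supply
    [-1, 0, 1],   # conservation at compartment 1: inflow = outflow
])
b_eq = np.array([1.0, 0.0])
bounds = [(0, 0.6), (0, 0.5), (0, 0.6)]   # per-link capacity constraints

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)   # feasible routing split across both paths, e.g. [0.5, 0.5, 0.5]
```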
Analyzing systemic risk using non-linear marginal expected shortfall and its minimum spanning tree
NASA Astrophysics Data System (ADS)
Song, Jae Wook; Ko, Bonggyun; Chang, Woojin
2018-02-01
The aim of this paper is to propose a new theoretical framework for analyzing systemic risk using the marginal expected shortfall (MES) and its correlation-based minimum spanning tree (MST). First, we develop two parametric models of MES with closed-form solutions based on the Capital Asset Pricing Model. Our models are derived from a non-symmetric quadratic form, which allows them to capture the non-linear relationship between stock and market returns. Second, we present evidence for the utility of our models and for a possible association between the non-linear relationship and the emergence of severe systemic risk, using the US financial system as a benchmark. In this context, the evolution of MES can also be regarded as a reasonable proxy for systemic risk. Lastly, we analyze the structural properties of systemic risk using the MST based on the computed series of MES. The topology of the MST conveys the presence of sectoral clustering and strong co-movements of systemic risk led by a few hubs during the crisis. Specifically, we find that the Depositories are the majority sector leading the connections during the non-crisis period, whereas the Broker-Dealers are the majority during the crisis period.
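The correlation-based MST step can be reproduced in a few lines. The MES series below are random placeholders standing in for the computed firm-level MES, and the distance is the standard Mantegna transform d = sqrt(2(1 - ρ)).

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# mes: (T x N) array of marginal-expected-shortfall series for N firms;
# random placeholders here in lieu of the computed MES.
rng = np.random.default_rng(0)
mes = rng.normal(size=(500, 8))

rho = np.corrcoef(mes, rowvar=False)     # pairwise correlations between firms
dist = np.sqrt(2.0 * (1.0 - rho))        # correlation-to-distance transform
np.fill_diagonal(dist, 0.0)
mst = minimum_spanning_tree(dist)        # sparse (N x N) tree of N-1 links
edges = np.transpose(mst.nonzero())
print(edges)                             # hub structure of the systemic-risk tree
```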
NASA Astrophysics Data System (ADS)
Steinacher, M.; Joos, F.
2016-02-01
Information on the relationship between cumulative fossil CO2 emissions and multiple climate targets is essential to design emission mitigation and climate adaptation strategies. In this study, the transient response of a climate or environmental variable per trillion tonnes of CO2 emissions, termed TRE, is quantified for a set of impact-relevant climate variables and from a large set of multi-forcing scenarios extended to year 2300 towards stabilization. A ~1000-member ensemble of the Bern3D-LPJ carbon-climate model is applied and model outcomes are constrained by 26 physical and biogeochemical observational data sets in a Bayesian, Monte Carlo-type framework. Uncertainties in TRE estimates include both scenario uncertainty and model response uncertainty. Cumulative fossil emissions of 1000 Gt C result in a global mean surface air temperature change of 1.9 °C (68 % confidence interval (c.i.): 1.3 to 2.7 °C), a decrease in surface ocean pH of 0.19 (0.18 to 0.22), and a steric sea level rise of 20 cm (13 to 27 cm until 2300). Linearity between cumulative emissions and transient response is high for pH and reasonably high for surface air and sea surface temperatures, but less pronounced for changes in Atlantic meridional overturning, Southern Ocean and tropical surface water saturation with respect to biogenic structures of calcium carbonate, and carbon stocks in soils. The constrained model ensemble is also applied to determine the response to a pulse-like emission and in idealized CO2-only simulations. The transient climate response is constrained, primarily by long-term ocean heat observations, to 1.7 °C (68 % c.i.: 1.3 to 2.2 °C) and the equilibrium climate sensitivity to 2.9 °C (2.0 to 4.2 °C). This is consistent with results by CMIP5 models but inconsistent with recent studies that relied on short-term air temperature data affected by natural climate variability.
Thermodynamic geometry of minimum-dissipation driven barrier crossing
NASA Astrophysics Data System (ADS)
Sivak, David A.; Crooks, Gavin E.
2016-11-01
We explore the thermodynamic geometry of a simple system that models the bistable dynamics of nucleic acid hairpins in single molecule force-extension experiments. Near equilibrium, optimal (minimum-dissipation) driving protocols are governed by a generalized linear response friction coefficient. Our analysis demonstrates that the friction coefficient of the driving protocols is sharply peaked at the interface between metastable regions, which leads to minimum-dissipation protocols that drive rapidly within a metastable basin, but then linger longest at the interface, giving thermal fluctuations maximal time to kick the system over the barrier. Intuitively, the same principle applies generically in free energy estimation (both in steered molecular dynamics simulations and in single-molecule experiments), provides a design principle for the construction of thermodynamically efficient coupling between stochastic objects, and makes a prediction regarding the construction of evolved biomolecular motors.
Weighting climate model projections using observational constraints.
Gillett, Nathan P
2015-11-13
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5.
Aircraft flight test trajectory control
NASA Technical Reports Server (NTRS)
Menon, P. K. A.; Walker, R. A.
1988-01-01
Two design techniques for linear flight test trajectory controllers (FTTCs) are described: Eigenstructure assignment and the minimum error excitation technique. The two techniques are used to design FTTCs for an F-15 aircraft model for eight different maneuvers at thirty different flight conditions. An evaluation of the FTTCs is presented.
Ground and excited states of vanadium hydroxide isomers and their cations, VOH0,+ and HVO0,+
NASA Astrophysics Data System (ADS)
Miliordos, Evangelos; Harrison, James F.; Hunt, Katharine L. C.
2013-03-01
Employing correlation consistent basis sets of quadruple-zeta quality and applying both multireference configuration interaction and single-reference coupled cluster methodologies, we studied the electronic and geometrical structure of the [V,O,H]0,+ species. The electronic structure of HVO0,+ is explained by considering a hydrogen atom approaching VO0,+, while VOH0,+ molecules are viewed in terms of the interaction of V+,2+ with OH-. The potential energy curves for H-VO0,+ and V0,+-OH have been constructed as functions of the distance between the interacting subunits, and the potential energy curves have also been determined as functions of the H-V-O angle. For the stationary points that we have located, we report energies, geometries, harmonic frequencies, and dipole moments. We find that the most stable bent HVO0,+ structure is lower in energy than any of the linear HVO0,+ structures. Similarly, the most stable state of bent VOH is lower in energy than the linear structures, but linear VOH+ is lower in energy than bent VOH+. The global minimum on the potential energy surface for the neutral species is the X̃ ³A″ state of bent HVO, although the X̃ ⁵A″ state of bent VOH is less than 5 kcal/mol higher in energy. The global minimum on the potential surface for the cation is the X̃ ⁴Σ⁻ state of linear VOH+, with bent VOH+ and bent HVO+ both more than 10 kcal/mol higher in energy. For the neutral species, the bent geometries exhibit significantly higher dipole moments than the linear structures.
Analysis of the PLL phase error in presence of simulated ionospheric scintillation events
NASA Astrophysics Data System (ADS)
Forte, B.
2012-01-01
The functioning of standard phase locked loops (PLL), including those used to track radio signals from Global Navigation Satellite Systems (GNSS), is based on a linear approximation which holds in presence of small phase errors. Such an approximation represents a reasonable assumption in most of the propagation channels. However, in presence of a fading channel the phase error may become large, making the linear approximation no longer valid. The PLL is then expected to operate in a non-linear regime. As PLLs are generally designed and expected to operate in their linear regime, whenever the non-linear regime comes into play, they will experience a serious limitation in their capability to track the corresponding signals. The phase error and the performance of a typical PLL embedded into a commercial multiconstellation GNSS receiver were analyzed in presence of simulated ionospheric scintillation. Large phase errors occurred during scintillation-induced signal fluctuations although cycle slips only occurred during the signal re-acquisition after a loss of lock. Losses of lock occurred whenever the signal faded below the minimum C/N0 threshold allowed for tracking. The simulations were performed for different signals (GPS L1C/A, GPS L2C, GPS L5 and Galileo L1). L5 and L2C proved to be weaker than L1. It appeared evident that the conditions driving the PLL phase error in the specific case of GPS receivers in presence of scintillation-induced signal perturbations need to be evaluated in terms of the combination of the minimum C/N0 tracking threshold, lock detector thresholds, possible cycle slips in the tracking PLL and accuracy of the observables (i.e. the error propagation onto the observables stage).
A non-linear model of economic production processes
NASA Astrophysics Data System (ADS)
Ponzi, A.; Yasutomi, A.; Kaneko, K.
2003-06-01
We present a new two phase model of economic production processes which is a non-linear dynamical version of von Neumann's neoclassical model of production, including a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.
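The key nonlinearity here - a production rate set by the scarcest input - is essentially a Leontief minimum. A tiny sketch in our own notation, with made-up numbers:

```python
import numpy as np

# A production process consumes several inputs in fixed proportions;
# its rate is limited by the scarcest input (a Leontief-style minimum).
stocks = np.array([5.0, 3.0, 4.0])   # current supplies of the three inputs
coeff = np.array([1.0, 2.0, 1.5])    # units of each input needed per unit output
rate = np.min(stocks / coeff)        # the rate-limiting input sets the pace
print(rate)                          # -> 1.5, limited by the second input
```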
Nonlinear vs. linear biasing in Trp-cage folding simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spiwok, Vojtěch, E-mail: spiwokv@vscht.cz; Oborský, Pavel; Králová, Blanka
2015-03-21
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in a slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, the two methods are comparable.
NASA Astrophysics Data System (ADS)
Ikeda, Sho; Lee, Sang-Yeop; Ito, Hiroyuki; Ishihara, Noboru; Masu, Kazuya
2015-04-01
In this paper, we present a voltage-controlled oscillator (VCO) that achieves highly linear frequency tuning under a low supply voltage of 0.5 V. To obtain linear frequency tuning, the high linearity of the threshold voltage of a varactor versus its back-gate voltage is utilized. This enables linear capacitance tuning of the varactor; thus, a highly linear VCO can be achieved. In addition, to decrease the power consumption of the VCO, a current-reuse structure is employed as the cross-coupled pair. The proposed VCO was fabricated using a 65 nm Si complementary metal oxide semiconductor (CMOS) process. The measured ratio of the maximum VCO gain (KVCO) to the minimum is 1.28. The dc power consumption is 0.33 mW at a supply voltage of 0.5 V. The measured phase noise at 10 MHz offset is -123 dBc/Hz at an output frequency of 5.8 GHz.
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant-parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple objective decision making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
NASA Astrophysics Data System (ADS)
Pepi, John W.
2017-08-01
Thermally induced stress is readily calculated for linear elastic material properties using Hooke's law, in which, for situations where expansion is constrained, stress is proportional to the product of the material elastic modulus and its thermal strain. When material behavior is nonlinear, one needs to make use of nonlinear theory. However, we can avoid that complexity in some situations. For situations in which both the elastic modulus and the coefficient of thermal expansion vary with temperature, solutions can be formulated using secant properties. A theoretical approach is thus presented to calculate stresses for nonlinear, neo-Hookean materials. This is important for high-acuity optical systems undergoing large temperature extremes.
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2012-01-01
When a neuronal spike train is observed, what can we say about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then to choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate and fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that its unique global minimum can thus be found by gradient descent techniques. The global minimum property requires independence of spike time intervals. Lack of history dependence is, however, an important constraint that is not fulfilled in many biological neurons which are known to generate a rich repertoire of spiking behaviors that are incompatible with history independence. Therefore, we expanded the integrate and fire model by including one additional variable, a variable threshold (Mihalas & Niebur, 2009) allowing for history-dependent firing patterns. This neuronal model produces a large number of spiking behaviors while still being linear. Linearity is important as it maintains the distribution of the random variables and still allows for maximum likelihood methods to be used. In this study we show that, although convexity of the negative log-likelihood is not guaranteed for this model, the minimum of the negative log-likelihood function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) frequently reaches the global minimum. PMID:21851282
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
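The robustness measure itself is easy to evaluate numerically for a given loop. A hedged sketch follows: the state-space matrices and the gain K are illustrative placeholders, and the return difference is formed with the loop broken at the plant input.

```python
import numpy as np

def min_sv_return_difference(A, B, C, K, omegas):
    """Minimum singular value of I + K G(jw) across frequency, where
    G(s) = C (sI - A)^{-1} B and the loop is broken at the plant input."""
    n, m = A.shape[0], B.shape[1]
    out = []
    for w in omegas:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B)   # plant response
        L = K @ G                                            # loop transfer (m x m)
        out.append(np.linalg.svd(np.eye(m) + L, compute_uv=False).min())
    return np.array(out)

# Placeholder two-input/two-output example
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.eye(2)
C = np.eye(2)
K = 0.8 * np.eye(2)                  # illustrative static output feedback gain
w = np.logspace(-2, 2, 200)
print(min_sv_return_difference(A, B, C, K, w).min())  # worst-case robustness index
```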
NASA Astrophysics Data System (ADS)
Kang, Fei; Li, Junjie; Ma, Zhenyue
2013-02-01
Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.
Spectrum and orbit conservation as a factor in future mobile satellite system design
NASA Technical Reports Server (NTRS)
Bowen, Robert R.
1990-01-01
Access to the radio spectrum and geostationary orbit is essential to current and future mobile satellite systems. This access is difficult to obtain for current systems, and may be even more so for larger future systems. In this environment, satellite systems that minimize the amount of spectrum orbit resource required to meet a specific traffic requirement are essential. Several spectrum conservation techniques are discussed, some of which are complementary to designing the system at minimum cost. All may need to be implemented to the limits of technological feasibility if network growth is not to be constrained because of the lack of available spectrum-orbit resource.
Pulsar statistics and their interpretations
NASA Technical Reports Server (NTRS)
Arnett, W. D.; Lerche, I.
1981-01-01
It is shown that a lack of knowledge concerning interstellar electron density, the true spatial distribution of pulsars, the radio luminosity source distribution of pulsars, the real ages and real aging rates of pulsars, the beaming factor (and other unknown factors causing the known sample of about 350 pulsars to be incomplete to an unknown degree) is sufficient to cause a minimum uncertainty of a factor of 20 in any attempt to determine pulsar birth or death rates in the Galaxy. It is suggested that this uncertainty must impact on suggestions that the pulsar rates can be used to constrain possible scenarios for neutron star formation and stellar evolution in general.
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
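A 1-D sketch of the kernel design: once the kernel is constrained to a few taps, minimizing the expected frequency-domain mean-square error is a linear least-squares problem in the tap values. The PSF and the signal/noise power spectra below are assumptions:

```python
import numpy as np

# Design a small (5-tap) restoration kernel minimizing the expected MSE
# sum_u Pf|K(u)H(u) - 1|^2 + Pn|K(u)|^2, with K(u) linear in the taps k.
N = 256
u = 2 * np.pi * np.fft.fftfreq(N)                  # frequency grid
offsets = np.arange(-2, 3)                         # kernel support constraint
h = np.array([0.05, 0.25, 0.4, 0.25, 0.05])        # assumed blur PSF
H = np.sum(h[:, None] * np.exp(-1j * offsets[:, None] * u[None, :]), axis=0)
Pf = 1.0 / (1.0 + (u / 0.5) ** 2)                  # assumed signal power spectrum
Pn = 0.01 * np.ones(N)                             # assumed flat noise spectrum

E = np.exp(-1j * np.outer(u, offsets))             # K(u) = E @ k
rows = np.vstack([np.sqrt(Pf)[:, None] * (E * H[:, None]),
                  np.sqrt(Pn)[:, None] * E])
b = np.concatenate([np.sqrt(Pf), np.zeros(N)])
A_ri = np.vstack([rows.real, rows.imag])           # stack real/imag parts
b_ri = np.concatenate([b, np.zeros_like(b)])
k, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
print("optimal 5-tap restoration kernel:", np.round(k, 4))
```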
Design of optimally normal minimum gain controllers by continuation method
NASA Technical Reports Server (NTRS)
Lim, K. B.; Juang, J.-N.; Kim, Z. C.
1989-01-01
A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm imbedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.
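A toy version of the trade-off (dropping the paper's pole-placement constraints): minimize a weighted sum of the gain norm and a departure-from-normality index of the closed-loop matrix. The commutator-norm index used here is one common choice and an assumption, not necessarily the paper's:

```python
import numpy as np
from scipy.optimize import minimize

# ||M M^T - M^T M||_F vanishes iff M is normal, so it serves as a
# departure-from-normality index for the closed loop M = A - B G.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
w = 5.0  # weight trading robustness (normality) against control effort

def cost(g):
    G = g.reshape(1, 2)
    M = A - B @ G
    dep = np.linalg.norm(M @ M.T - M.T @ M, "fro") ** 2
    return np.linalg.norm(G, "fro") ** 2 + w * dep

res = minimize(cost, x0=np.zeros(2), method="BFGS")
print("gain:", res.x, "cost:", res.fun)
```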
Ni-Mn-Ga shape memory nanoactuation
NASA Astrophysics Data System (ADS)
Kohl, M.; Schmitt, M.; Backen, A.; Schultz, L.; Krevet, B.; Fähler, S.
2014-01-01
To probe finite size effects in ferromagnetic shape memory nanoactuators, double-beam structures with minimum dimensions down to 100 nm are designed, fabricated, and characterized in-situ in a scanning electron microscope with respect to their coupled thermo-elastic and electro-thermal properties. Electrical resistance and mechanical beam bending tests demonstrate a reversible thermal shape memory effect down to 100 nm. Electro-thermal actuation involves large temperature gradients along the nanobeam in the order of 100 K/μm. We discuss the influence of surface and twin boundary energies and explain why free-standing nanoactuators behave differently compared to constrained geometries like films and nanocrystalline shape memory alloys.
Hydrogen Burning in Low Mass Stars Constrains Scalar-Tensor Theories of Gravity.
Sakstein, Jeremy
2015-11-13
The most general scalar-tensor theories of gravity predict a weakening of the gravitational force inside astrophysical bodies. There is a minimum mass for hydrogen burning in stars that is set by the interplay of plasma physics and the theory of gravity. We calculate this for alternative theories of gravity and find that it is always significantly larger than the general relativity prediction. The observation of several low mass red dwarf stars therefore rules out a large class of scalar-tensor gravity theories and places strong constraints on the cosmological parameters appearing in the effective field theory of dark energy.
NASA Astrophysics Data System (ADS)
Amengonu, Yawo H.; Kakad, Yogendra P.
2014-07-01
Quasivelocity techniques were applied to derive the dynamics of a Differential Wheeled Mobile Robot (DWMR) in the companion paper. The present paper formulates a control system design for trajectory tracking of this class of robots. The method develops a feedback linearization technique for the nonlinear system using a dynamic extension algorithm. The effectiveness of the nonlinear controller is illustrated with a simulation example.
Mechanics of Composite Materials for Spacecraft
1992-08-01
Equations of this kind lead to a system of linear algebraic equations which involve certain eigenstrain influence coefficients and the given instantaneous quantities. The remaining overall strain is that caused by the eigenstrains, and the corresponding overall stress is that caused by the eigenstrains in a fully constrained medium. In the presence of both mechanical overall stress or strain and uniform phase eigenstrains, the local fields in the ...
Apparatus Tests Peeling Of Bonded Rubbery Material
NASA Technical Reports Server (NTRS)
Crook, Russell A.; Graham, Robert
1996-01-01
Instrumented hydraulic constrained blister-peel apparatus obtains data on degree of bonding between specimen of rubbery material and rigid plate. Growth of blister tracked by video camera, digital clock, pressure transducer, and piston-displacement sensor. Cylinder pressure controlled by hydraulic actuator system. Linear variable-differential transformer (LVDT) and float provide second, independent measure of change in blister volume used as more precise volume feedback in low-growth-rate test.
Extensions of output variance constrained controllers to hard constraints
NASA Technical Reports Server (NTRS)
Skelton, R.; Zhu, G.
1989-01-01
Covariance controllers assign specified matrix values to the state covariance. A number of robustness results are directly related to the covariance matrix. The conservatism in known upper bounds on the H-infinity, L-infinity, and L2 norms for stability and disturbance robustness of linear uncertain systems using covariance controllers is illustrated with examples for both continuous- and discrete-time systems.
NASA Astrophysics Data System (ADS)
Iyyappan, I.; Ponmurugan, M.
2018-03-01
A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When the heat engine works at the maximum Ω̇ criterion, its efficiency increases significantly over the efficiency at maximum power. We derive the general relations between the power, the efficiency at the maximum Ω̇ criterion, and the minimum dissipation for the linear irreversible heat engine. The efficiency at the maximum Ω̇ criterion has the lower bound ...
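A numerical sketch of the criterion, taking Ω̇ = 2P − η_C·Q̇_h and a toy low-dissipation engine model (an assumption made for concreteness; the paper treats the general linear irreversible case analytically):

```python
import numpy as np
from scipy.optimize import minimize

# Toy low-dissipation cycle: Q_h = T_h*(DS - S_h/t_h), Q_c = T_c*(DS + S_c/t_c),
# power P = (Q_h - Q_c)/(t_h + t_c), and Omega_dot = 2*P - eta_C*Q_h_dot.
Th, Tc, DS, Sh, Sc = 500.0, 300.0, 1.0, 1.0, 1.0
eta_C = 1.0 - Tc / Th

def cycle(x):
    th, tc = np.exp(x)                      # positive contact times
    Qh = Th * (DS - Sh / th)                # heat absorbed per cycle
    Qc = Tc * (DS + Sc / tc)                # heat rejected per cycle
    P = (Qh - Qc) / (th + tc)               # power output
    return P, Qh / (th + tc)

def neg_omega(x):
    P, Qh_dot = cycle(x)
    return -(2.0 * P - eta_C * Qh_dot)

res = minimize(neg_omega, x0=np.log([10.0, 10.0]), method="Nelder-Mead")
P, Qh_dot = cycle(res.x)
print("efficiency at max Omega_dot:", P / Qh_dot, " (eta_C =", eta_C, ")")
```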
A constrained robust least squares approach for contaminant release history identification
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS gives much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
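A simplified stand-in for CRLS: nonnegativity-constrained, Tikhonov-regularized least squares on a toy release-history problem (the full RLS estimator also models structured uncertainty in the response matrix, which is omitted here):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy source-history recovery: Gaussian transport kernel G, nonnegative
# release history s, noisy observations d = G s + noise.
rng = np.random.default_rng(2)
n_obs, n_steps = 40, 25
G = np.exp(-0.5 * ((np.linspace(0, 5, n_obs)[:, None]
                    - np.linspace(0, 5, n_steps)[None, :]) / 0.7) ** 2)
s_true = np.maximum(0, np.sin(np.linspace(0, np.pi, n_steps)))
d = G @ s_true + 0.01 * rng.standard_normal(n_obs)

lam = 0.05                                         # Tikhonov weight (tuning assumption)
A = np.vstack([G, np.sqrt(lam) * np.eye(n_steps)])
b = np.concatenate([d, np.zeros(n_steps)])
sol = lsq_linear(A, b, bounds=(0.0, np.inf))       # nonnegativity constraint
print("recovered history:", np.round(sol.x, 3))
```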
Phase field benchmark problems for dendritic growth and linear elasticity
Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...
2018-03-26
We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
Advanced Computational Methods for Security Constrained Financial Transmission Rights
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria
Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective to maximize the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and performance of the NDS solver is benchmarked against widely used linear programming (LP) solvers like CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable and in some cases is shown to outperform the widely used CPLEX algorithms. The proposed formulation and NDS based solver is also easily parallelizable enabling further computational improvement.
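A toy LP version of the FTR award problem using scipy's linprog; the 3-bus PTDF matrix, bids, and line limits are hypothetical, and the paper's NDS solver targets far larger coupled problems:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize bid-weighted FTR awards subject to DC power-flow security limits.
bids = np.array([30.0, 22.0, 18.0])          # $/MW for three FTR requests
ptdf = np.array([[0.6, -0.2, 0.4],           # line-flow sensitivities (MW/MW)
                 [0.3,  0.5, -0.1],
                 [-0.4, 0.3,  0.2]])
line_limit = np.array([100.0, 80.0, 90.0])

# maximize bids @ x  <=>  minimize -bids @ x, with |PTDF x| <= limits
A_ub = np.vstack([ptdf, -ptdf])
b_ub = np.concatenate([line_limit, line_limit])
res = linprog(-bids, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 150)] * 3, method="highs")
print("awarded MW:", res.x, "social welfare:", -res.fun)
```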
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models to cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to the lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
The Trend Odds Model for Ordinal Data‡
Capuano, Ana W.; Dawson, Jeffrey D.
2013-01-01
Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520
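A Python maximum-likelihood sketch of the trend odds model (the paper fits it with SAS Proc Nlmixed); the parametrization logit P(Y ≤ j | x) = a_j − x(β + jγ) is one common form and an assumption here, with γ = 0 recovering proportional odds:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
n, J = 800, 4                                            # J ordered categories
x = rng.standard_normal(n)
a_true = np.array([-1.0, 0.2, 1.4]); b_true, g_true = 0.8, 0.3

def cell_probs(a, beta, gamma):
    j = np.arange(J - 1)
    F = expit(a[None, :] - x[:, None] * (beta + j[None, :] * gamma))
    P = np.hstack([F, np.ones((n, 1))])                  # cumulative probabilities
    p = np.diff(np.hstack([np.zeros((n, 1)), P]), axis=1)
    return np.clip(p, 1e-10, 1.0)                        # guard against non-monotone eta

p_true = cell_probs(a_true, b_true, g_true)
p_true /= p_true.sum(axis=1, keepdims=True)
y = np.array([rng.choice(J, p=row) for row in p_true])   # simulated ordinal outcomes

def nll(theta):
    # ordered cut-points via a first cut plus positive increments
    a = np.cumsum(np.concatenate([[theta[0]], np.exp(theta[1:J-1])]))
    p = cell_probs(a, theta[J-1], theta[J])
    return -np.sum(np.log(p[np.arange(n), y]))

res = minimize(nll, x0=np.zeros(J + 1), method="Nelder-Mead",
               options={"maxiter": 5000})
print("beta, gamma estimates:", res.x[J-1], res.x[J])
```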
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS), and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights into designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel full Newton-type algorithms for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent relative error in estimating the trace and a lower reduced χ² value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when the signal-to-noise ratio (SNR) is low.
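The LLS member of the family above is short enough to sketch: taking logs makes the signal model ln S_i = ln S0 − b_i g_iᵀ D g_i linear in ln S0 and the six tensor components. The gradient scheme and tensor values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
g = rng.standard_normal((30, 3)); g /= np.linalg.norm(g, axis=1, keepdims=True)
b = 1000.0 * np.ones(30)                                # s/mm^2
D_true = np.diag([1.7e-3, 0.4e-3, 0.4e-3])              # prolate tensor
S0 = 100.0
S = S0 * np.exp(-b * np.einsum("ij,jk,ik->i", g, D_true, g))
S *= np.exp(0.01 * rng.standard_normal(30))             # mild log-domain noise

# Design matrix rows: [1, -b gx^2, -b gy^2, -b gz^2, -2b gx gy, -2b gx gz, -2b gy gz]
X = np.column_stack([np.ones(30),
                     -b * g[:, 0]**2, -b * g[:, 1]**2, -b * g[:, 2]**2,
                     -2*b * g[:, 0]*g[:, 1], -2*b * g[:, 0]*g[:, 2],
                     -2*b * g[:, 1]*g[:, 2]])
coef, *_ = np.linalg.lstsq(X, np.log(S), rcond=None)
D = np.array([[coef[1], coef[4], coef[5]],
              [coef[4], coef[2], coef[6]],
              [coef[5], coef[6], coef[3]]])
print("estimated trace:", np.trace(D), "true:", np.trace(D_true))
```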
NASA Astrophysics Data System (ADS)
Provencher, Stephen W.
1982-09-01
CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizer, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizers, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
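A CONTIN-style sketch: nonnegative, smoothness-regularized inversion of a Laplace kernel. The grids and the fixed regularization weight are assumptions; CONTIN itself chooses the regularization parameter via an F-test:

```python
import numpy as np
from scipy.optimize import nnls

# Invert y(t) = sum_j exp(-s_j t) x_j with a second-difference smoothness
# regularizer and nonnegativity as absolute prior knowledge.
rng = np.random.default_rng(5)
t = np.linspace(0.01, 5.0, 60)
s = np.logspace(-1, 1, 40)                      # decay-rate grid
K = np.exp(-np.outer(t, s))                     # Laplace kernel
x_true = np.exp(-0.5 * ((np.log(s) - 0.5) / 0.3) ** 2)   # smooth spectrum
y = K @ x_true + 0.001 * rng.standard_normal(len(t))

alpha = 0.1                                     # fixed regularization weight
L = np.diff(np.eye(len(s)), 2, axis=0)          # second-difference operator
A = np.vstack([K, np.sqrt(alpha) * L])
b = np.concatenate([y, np.zeros(L.shape[0])])
x_hat, _ = nnls(A, b)
print("recovered spectrum peak at s =", s[np.argmax(x_hat)])
```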
DOT National Transportation Integrated Search
2016-06-01
The purpose of this project is to study the optimal scheduling of work zones so that they have minimum negative impact (e.g., travel delay, gas consumption, accidents, etc.) on transport service vehicle flows. In this project, a mixed integer linear ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giovannetti, Vittorio; Lloyd, Seth; Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
The Amosov-Holevo-Werner conjecture implies the additivity of the minimum Renyi entropies at the output of a channel. The conjecture is proven true for all Renyi entropies of integer order greater than two in a class of Gaussian bosonic channels where the input signal is randomly displaced or coupled linearly to an external environment.
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... with conducting performance tests under § 63.7. Verification of operational status shall, at a minimum... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... with conducting performance tests under § 63.7. Verification of operational status shall, at a minimum... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...
Research and Development Services: Methods Development
1982-07-23
At an applied potential of -1.15 volts, the minimum detectable amount was 500 ng, which was not very sensitive. From the Hammett linear free energy relationship (Equation 1), the value of N was optimized by using two columns. The other factors which can influence resolution are the capacity factor, k, and the ...
Dynamical Scaling and Phase Coexistence in Topologically Constrained DNA Melting.
Fosado, Y A G; Michieletto, D; Marenduzzo, D
2017-09-15
There is a long-standing experimental observation that the melting of topologically constrained DNA, such as circular closed plasmids, is less abrupt than that of linear molecules. This finding points to an important role of topology in the physics of DNA denaturation, which is, however, poorly understood. Here, we shed light on this issue by combining large-scale Brownian dynamics simulations with an analytically solvable phenomenological Landau mean field theory. We find that the competition between melting and supercoiling leads to phase coexistence of denatured and intact phases at the single-molecule level. This coexistence occurs in a wide temperature range, thereby accounting for the broadening of the transition. Finally, our simulations show an intriguing topology-dependent scaling law governing the growth of denaturation bubbles in supercoiled plasmids, which can be understood within the proposed mean field theory.
NASA Astrophysics Data System (ADS)
Lu, Jianbo; Li, Dewei; Xi, Yugeng
2013-07-01
This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The input delay process is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of the performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
NASA Astrophysics Data System (ADS)
Virgili-Llop, Josep; Zagaris, Costantinos; Park, Hyeongjun; Zappulla, Richard; Romano, Marcello
2018-03-01
An experimental campaign has been conducted to evaluate the performance of two different guidance and control algorithms on a multi-constrained docking maneuver. The evaluated algorithms are model predictive control (MPC) and inverse dynamics in the virtual domain (IDVD). A linear-quadratic approach with a quadratic programming solver is used for the MPC approach. A nonconvex optimization problem results from the IDVD approach, and a nonlinear programming solver is used. The docking scenario is constrained by the presence of a keep-out zone, an entry cone, and by the chaser's maximum actuation level. The performance metrics for the experiments and numerical simulations include the required control effort and time to dock. The experiments have been conducted in a ground-based air-bearing test bed, using spacecraft simulators that float over a granite table.
Constraining external reverse shock physics of gamma-ray bursts from ROTSE-III limits
NASA Astrophysics Data System (ADS)
Cui, Xiao-Hong; Zou, Yuan-Chuan; Wei, Jun-Jie; Zheng, Wei-Kang; Wu, Xue-Feng
2018-02-01
Assuming that early optical emission is dominated by external reverse shock (RS) in the standard model of gamma-ray bursts (GRBs), we intend to constrain RS models with an initial Lorentz factor Γ0 of the outflows based on the ROTSE-III observations. We consider two cases of RS behaviour: relativistic shock and non-relativistic shock. For a homogeneous interstellar medium (ISM) and the wind circum-burst environment, constraints can be achieved by the fact that the peak flux Fν at the RS crossing time should be lower than the observed upper limit Fν, limit. We consider the different spectral regimes in which the observed optical frequency νopt may locate, which are divided by the orders for the minimum synchrotron frequency νm and the cooling frequency νc. Considering the homogeneous and wind environments around GRBs, we find that the relativistic RS case can be constrained by the (upper and lower) limits of Γ0 in a large range from about hundreds to thousands for 36 GRBs reported by ROTSE-III. Constraints on the non-relativistic RS case are achieved with limits of Γ0 ranging from ∼30 to ∼350 for 26 bursts. The lower limits of Γ0 achieved for the relativistic RS model are disfavored based on the previously discovered correlation between the initial Lorentz factor Γ0 and the isotropic gamma-ray energy Eγ, iso released in the prompt phase.
How CMB and large-scale structure constrain chameleon interacting dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au
2015-07-01
We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold start fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (130 variables, 184 constraints).
NASA Technical Reports Server (NTRS)
Tiffany, S. H.; Adams, W. M., Jr.
1984-01-01
A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
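A scalar sketch of the multilevel structure: for fixed lag roots the coefficient fit is a linear least-squares problem, and a nongradient optimizer searches over the lags. The Roger-type rational form and the synthetic tabular data are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Fit Q(ik) ~ a0 + a1*(ik) + a2*(ik)^2 + sum_m a_{2+m} * ik/(ik + b_m)
# to tabular oscillatory data: inner linear LS for the a's, outer
# Nelder-Mead (nongradient) search over the lag roots b_m.
k_tab = np.linspace(0.05, 2.0, 25)                   # reduced frequencies
ik = 1j * k_tab
Q_tab = 1.0 + 0.8*ik + 0.1*ik**2 + 0.5*ik/(ik + 0.3) + 0.2*ik/(ik + 1.2)

def lls_fit(b_lags):
    cols = [np.ones_like(ik), ik, ik**2] + [ik / (ik + b) for b in b_lags]
    M = np.array(cols).T
    M_ri = np.vstack([M.real, M.imag])               # real LS over complex samples
    rhs = np.concatenate([Q_tab.real, Q_tab.imag])
    a, *_ = np.linalg.lstsq(M_ri, rhs, rcond=None)
    return a, np.linalg.norm(M_ri @ a - rhs)

outer = minimize(lambda b: lls_fit(np.exp(b))[1],    # positive lags via exp
                 x0=np.log([0.5, 1.0]), method="Nelder-Mead")
a_opt, err = lls_fit(np.exp(outer.x))
print("lags:", np.exp(outer.x), "fit error:", err)
```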
Analytical optimal pulse shapes obtained with the aid of genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co; Arango, Carlos A.; Reyes, Andrés
2015-09-28
We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.
Precision Magnetic Bearing Six Degree of Freedom Stage
NASA Technical Reports Server (NTRS)
Williams, M. E.; Trumper, David L.
1996-01-01
Magnetic bearings are capable of applying force and torque to a suspended object without rigidly constraining any degrees of freedom. Additionally, the resolution of magnetic bearings is limited only by sensors and control, and not by the finish of a bearing surface. For these reasons, magnetic bearings appear to be ideal for precision wafer positioning in lithography systems. To demonstrate this capability a linear magnetic bearing has been constructed which uses variable reluctance actuators to control the motion of a 14.5 kg suspended platen in five degrees of freedom. A Lorentz type linear motor of our own design and construction is used to provide motion and position control in the sixth degree of freedom. The stage performance results verify that the positioning requirements of photolithography can be met with a system of this type. This paper describes the design, control, and performance of the linear magnetic bearing.
SLFP: a stochastic linear fractional programming approach for sustainable waste management.
Zhu, H; Huang, G H
2011-12-01
A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk. Copyright © 2011 Elsevier Ltd. All rights reserved.
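The deterministic core of SLFP, a linear fractional program, can be reduced to an ordinary LP by the Charnes-Cooper transformation; chance constraints would first be replaced by their deterministic equivalents. The data below are toy values:

```python
import numpy as np
from scipy.optimize import linprog

# max (c^T x + c0)/(d^T x + d0)  s.t.  A x <= b, x >= 0, via y = t*x,
# t = 1/(d^T x + d0):  max c^T y + c0*t  s.t.  A y - b t <= 0,
#                      d^T y + d0*t = 1, y >= 0, t >= 0.
c, c0 = np.array([3.0, 1.0]), 0.0        # system benefit (numerator)
d, d0 = np.array([1.0, 2.0]), 1.0        # system cost (denominator)
A = np.array([[1.0, 1.0], [2.0, 0.5]]); b = np.array([10.0, 8.0])

obj = -np.concatenate([c, [c0]])         # variables z = (y1, y2, t); maximize
A_ub = np.hstack([A, -b[:, None]])
A_eq = np.concatenate([d, [d0]])[None, :]
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(2), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * 3, method="highs")
y, t = res.x[:2], res.x[2]
print("optimal ratio:", -res.fun, "x =", y / t)
```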
New nonlinear control algorithms for multiple robot arms
NASA Technical Reports Server (NTRS)
Tarn, T. J.; Bejczy, A. K.; Yun, X.
1988-01-01
Multiple coordinated robot arms are modeled by considering the arms as closed kinematic chains and as a force-constrained mechanical system working on the same object simultaneously. In both formulations, a novel dynamic control method is discussed. It is based on feedback linearization and simultaneous output decoupling technique. By applying a nonlinear feedback and a nonlinear coordinate transformation, the complicated model of the multiple robot arms in either formulation is converted into a linear and output decoupled system. The linear system control theory and optimal control theory are used to design robust controllers in the task space. The first formulation has the advantage of automatically handling the coordination and load distribution among the robot arms. In the second formulation, it was found that by choosing a general output equation it became possible simultaneously to superimpose the position and velocity error feedback with the force-torque error feedback in the task space.
Time-response shaping using output to input saturation transformation
NASA Astrophysics Data System (ADS)
Chambon, E.; Burlion, L.; Apkarian, P.
2018-03-01
For linear systems, the control law design is often performed so that the resulting closed loop meets specific frequency-domain requirements. However, in many cases, it may be observed that the obtained controller does not enforce time-domain requirements amongst which the objective of keeping a scalar output variable in a given interval. In this article, a transformation is proposed to convert prescribed bounds on an output variable into time-varying saturations on the synthesised linear scalar control law. This transformation uses some well-chosen time-varying coefficients so that the resulting time-varying saturation bounds do not overlap in the presence of disturbances. Using an anti-windup approach, it is obtained that the origin of the resulting closed loop is globally asymptotically stable and that the constrained output variable satisfies the time-domain constraints in the presence of an unknown finite-energy-bounded disturbance. An application to a linear ball and beam model is presented.
Linear monogamy of entanglement in three-qubit systems
NASA Astrophysics Data System (ADS)
Liu, Feng; Gao, Fei; Wen, Qiao-Yan
2015-11-01
For any three-qubit quantum systems ABC, Oliveira et al. numerically found that both the concurrence and the entanglement of formation (EoF) obey the linear monogamy relations in pure states. They also conjectured that the linear monogamy relations can be saturated when the focus qubit A is maximally entangled with the joint qubits BC. In this work, we prove analytically that both the concurrence and EoF obey linear monogamy relations in an arbitrary three-qubit state. Furthermore, we verify that all three-qubit pure states are maximally entangled in the bipartition A|BC when they saturate the linear monogamy relations. We also study the distribution of the concurrence and EoF. More specifically, when the amount of entanglement between A and B equals to that of A and C, we show that the sum of EoF itself saturates the linear monogamy relation, while the sum of the squared EoF is minimum. Different from EoF, the concurrence and the squared concurrence both saturate the linear monogamy relations when the entanglement between A and B equals to that of A and C.
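A quick numerical check of these quantities on a random three-qubit pure state, using the Wootters concurrence of the reduced states and the pure-state tangle 4·det(ρ_A) for the A|BC cut:

```python
import numpy as np

rng = np.random.default_rng(7)
psi = rng.standard_normal(8) + 1j * rng.standard_normal(8)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)  # axes A,B,C,A',B',C'

def concurrence(r2):
    # Wootters formula for a two-qubit density matrix
    sy = np.array([[0, -1j], [1j, 0]])
    r_tilde = np.kron(sy, sy) @ r2.conj() @ np.kron(sy, sy)
    ev = np.sort(np.sqrt(np.abs(np.linalg.eigvals(r2 @ r_tilde))))[::-1]
    return max(0.0, ev[0] - ev[1] - ev[2] - ev[3])

rho_AB = np.trace(rho, axis1=2, axis2=5).reshape(4, 4)     # trace out C
rho_AC = np.trace(rho, axis1=1, axis2=4).reshape(4, 4)     # trace out B
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
C_ABC_sq = 4 * np.linalg.det(rho_A).real                   # tangle of A with BC
c_ab, c_ac = concurrence(rho_AB), concurrence(rho_AC)
print("C_AB + C_AC =", c_ab + c_ac)
print("C_AB^2 + C_AC^2 =", c_ab**2 + c_ac**2, "<= C_A|BC^2 =", C_ABC_sq)
```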
The conformational preferences of γ-lactam and its role in constraining peptide structure
NASA Astrophysics Data System (ADS)
Paul, P. K. C.; Burney, P. A.; Campbell, M. M.; Osguthorpe, D. J.
1990-09-01
The conformational constraints imposed by γ-lactams in peptides have been studied using valence force field energy calculations and flexible geometry maps. It has been found that while cyclisation restrains the Ψ of the lactam, non-bonded interactions contribute to the constraints on ϕ of the lactam. The γ-lactam also affects the (ϕ,Ψ) of the residue after it in a peptide sequence. For an l-lactam, the ring geometry restricts Ψ to about −120°, and ϕ has two minima, the lowest energy around −140° and a higher minimum (5 kcal/mol higher) at 60°, making an l-γ-lactam more favourably accommodated in a near extended conformation than in position 2 of a type II' β-turn. The energy of the ϕ ≈ +60° minimum can be lowered substantially, until it is more favoured than the −140° minimum, by progressive substitution of bulkier groups on the amide N of the l-γ-lactam. The (ϕ,Ψ) maps of the residue succeeding a γ-lactam show subtle differences from those of standard N-methylated residues. The dependence of the constraints on the chirality of γ-lactams and N-substituted γ-lactams, in terms of the formation of secondary structures like β-turns, is discussed and the comparison of the theoretical conformations with experimental results is highlighted.
Constrained Laboratory vs. Unconstrained Steering-Induced Rollover Crash Tests.
Kerrigan, Jason R; Toczyski, Jacek; Roberts, Carolyn; Zhang, Qi; Clauser, Mark
2015-01-01
The goal of this study was to evaluate how well an in-laboratory rollover crash test methodology that constrains vehicle motion can reproduce the dynamics of unconstrained full-scale steering-induced rollover crash tests in sand. Data from previously-published unconstrained steering-induced rollover crash tests using a full-size pickup and mid-sized sedan were analyzed to determine vehicle-to-ground impact conditions and kinematic response of the vehicles throughout the tests. Then, a pair of replicate vehicles were prepared to match the inertial properties of the steering-induced test vehicles and configured to record dynamic roof structure deformations and kinematic response. Both vehicles experienced greater increases in roll-axis angular velocities in the unconstrained tests than in the constrained tests; however, the increases that occurred during the trailing side roof interaction were nearly identical between tests for both vehicles. Both vehicles experienced linear accelerations in the constrained tests that were similar to those in the unconstrained tests, but the pickup, in particular, had accelerations that were matched in magnitude, timing, and duration very closely between the two test types. Deformations in the truck test were higher in the constrained than the unconstrained, and deformations in the sedan were greater in the unconstrained than the constrained as a result of constraints of the test fixture, and differences in impact velocity for the trailing side. The results of the current study suggest that in-laboratory rollover tests can be used to simulate the injury-causing portions of unconstrained rollover crashes. To date, such a demonstration has not yet been published in the open literature. This study did, however, show that road surface can affect vehicle response in a way that may not be able to be mimicked in the laboratory. Lastly, this study showed that configuring the in-laboratory tests to match the leading-side touchdown conditions could result in differences in the trailing side impact conditions.
NASA Astrophysics Data System (ADS)
Panda, Satyajit; Ray, M. C.
2008-04-01
In this paper, a geometrically nonlinear dynamic analysis has been presented for functionally graded (FG) plates integrated with a patch of active constrained layer damping (ACLD) treatment and subjected to a temperature field. The constraining layer of the ACLD treatment is considered to be made of the piezoelectric fiber-reinforced composite (PFRC) material. The temperature field is assumed to be spatially uniform over the substrate plate surfaces and varied through the thickness of the host FG plates. The temperature-dependent material properties of the FG substrate plates are assumed to be graded in the thickness direction of the plates according to a power-law distribution while the Poisson's ratio is assumed to be a constant over the domain of the plate. The constrained viscoelastic layer of the ACLD treatment is modeled using the Golla-Hughes-McTavish (GHM) method. Based on the first-order shear deformation theory, a three-dimensional finite element model has been developed to model the open-loop and closed-loop nonlinear dynamics of the overall FG substrate plates under the thermal environment. The analysis suggests the potential use of the ACLD treatment with its constraining layer made of the PFRC material for active control of geometrically nonlinear vibrations of FG plates in the absence or the presence of the temperature gradient across the thickness of the plates. It is found that the ACLD treatment is more effective in controlling the geometrically nonlinear vibrations of FG plates than in controlling their linear vibrations. The analysis also reveals that the ACLD patch is more effective for controlling the nonlinear vibrations of FG plates when it is attached to the softest surface of the FG plates than when it is bonded to the stiffest surface of the plates. The effect of piezoelectric fiber orientation in the active constraining PFRC layer on the damping characteristics of the overall FG plates is also discussed.
Constrained model predictive control, state estimation and coordination
NASA Astrophysics Data System (ADS)
Yan, Jun
In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. Formation stability theorems are derived for the subsystems and conditions on the local constraint set are derived in order to guarantee local stability or convergence to a target state. If these conditions are met for all subsystems, then this stability is inherited by the overall system. For the case when each subsystem suffers from disturbances in the dynamics, own self-measurement noises, and quantization errors on neighbors' information due to the finite-bit-rate channels, the constrained MPC strategy developed in Part (i) is appropriate to apply. In Part (iii), we discuss the local predictor design and bandwidth assignment problem in a coordinated vehicle formation context. The MPC controller used in Part (ii) relates the formation control performance and the information quality in the way that large standoff implies conservative performance. We first develop an LMI (Linear Matrix Inequality) formulation for cross-estimator design in a simple two-vehicle scenario with non-standard information: one vehicle does not have access to the other's exact control value applied at each sampling time, but to its known, pre-computed, coupling linear feedback control law. Then a similar LMI problem is formulated for the bandwidth assignment problem that minimizes the total number of bits by adjusting the prediction gain matrices and the number of bits assigned to each variable. (Abstract shortened by UMI.)
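A one-horizon sketch of part (i): the state constraint is tightened by a Gaussian quantile of the assumed estimation-error spread, so the certainty-equivalent MPC satisfies the probabilistic constraint. The double-integrator model and error statistics are assumptions, and the error spread is held constant over the horizon for simplicity:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Chance constraint P(x1 <= x_max) >= 1 - eps becomes the hard constraint
# x1_hat <= x_max - z_{1-eps} * sigma1 on the predicted state estimate.
A = np.array([[1.0, 0.1], [0.0, 1.0]])     # double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
H, eps = 10, 0.05
x_hat = np.array([0.0, 1.0])               # current state estimate
sigma1 = 0.05                              # assumed std of position-estimate error
x_max = 0.45
margin = norm.ppf(1 - eps) * sigma1        # constraint tightening

def rollout(u):
    x, traj = x_hat.copy(), []
    for uk in u:
        x = A @ x + B.flatten() * uk
        traj.append(x.copy())
    return np.array(traj)

cost = lambda u: np.sum(rollout(u) ** 2) + 0.1 * np.sum(np.asarray(u) ** 2)
cons = {"type": "ineq", "fun": lambda u: (x_max - margin) - rollout(u)[:, 0]}
res = minimize(cost, x0=np.zeros(H), method="SLSQP",
               constraints=[cons], bounds=[(-2.0, 2.0)] * H)
print("first control move:", res.x[0], "tightened bound:", x_max - margin)
```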
Transcranial Electrical Neuromodulation Based on the Reciprocity Principle
Fernández-Corazza, Mariano; Turovets, Sergei; Luu, Phan; Anderson, Erik; Tucker, Don
2016-01-01
A key challenge in multi-electrode transcranial electrical stimulation (TES) or transcranial direct current stimulation (tDCS) is to find a current injection pattern that delivers the necessary current density at a target and minimizes it in the rest of the head, which is mathematically modeled as an optimization problem. Such an optimization with the Least Squares (LS) or Linearly Constrained Minimum Variance (LCMV) algorithms is generally computationally expensive and requires multiple independent current sources. Based on the reciprocity principle in electroencephalography (EEG) and TES, it could be possible to find the optimal TES patterns quickly whenever the solution of the forward EEG problem is available for a brain region of interest. Here, we investigate the reciprocity principle as a guideline for finding optimal current injection patterns in TES that comply with safety constraints. We define four different trial cortical targets in a detailed seven-tissue finite element head model, and analyze the performance of the reciprocity family of TES methods in terms of electrode density, targeting error, focality, intensity, and directionality using the LS and LCMV solutions as the reference standards. It is found that the reciprocity algorithms show good performance comparable to the LCMV and LS solutions. Comparing the 128 and 256 electrode cases, we found that use of greater electrode density improves focality, directionality, and intensity parameters. The results show that reciprocity principle can be used to quickly determine optimal current injection patterns in TES and help to simplify TES protocols that are consistent with hardware and software availability and with safety constraints. PMID:27303311
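A small constrained least-squares sketch of the TES pattern problem: match a target current density subject to the zero-net-current condition and per-electrode safety bounds. The lead field here is random; in practice it comes from the forward head model, with reciprocity supplying a fast near-optimal starting pattern:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n_elec, n_vox = 32, 300
L = rng.standard_normal((n_vox, n_elec)) / np.sqrt(n_elec)  # hypothetical lead field
target = np.zeros(n_vox); target[:3] = 1.0                  # desired density at a small ROI

i_max = 2.0   # mA per electrode (safety bound)
obj = lambda c: np.sum((L @ c - target) ** 2)
cons = {"type": "eq", "fun": lambda c: np.sum(c)}           # currents sum to zero
res = minimize(obj, x0=np.zeros(n_elec), method="SLSQP",
               constraints=[cons], bounds=[(-i_max, i_max)] * n_elec)
print("injected pattern range:", res.x.min(), res.x.max(), "residual:", obj(res.x))
```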