Sample records for constrained gradient descent

  1. A feasible DY conjugate gradient method for linear equality constraints

    NASA Astrophysics Data System (ADS)

    LI, Can

    2017-09-01

    In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan conjugate gradient method to linear equality constrained problems and can be applied to large problems of this type because of its low storage requirement. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
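
    As a rough illustration of the two ingredients the method combines (a feasible null-space search direction and the Dai-Yuan update), here is a minimal Python sketch on a quadratic test objective; the projection-based handling of Ax = b and the quadratic test function are assumptions of the sketch, not details taken from the paper:

      import numpy as np

      def feasible_dy_cg(Q, c, A, b, x0, iters=50, tol=1e-10):
          """Sketch: DY conjugate gradient restricted to the affine set {x : Ax = b}.
          Feasibility is kept by projecting gradients onto the null space of A; the
          quadratic objective f(x) = 0.5*x'Qx - c'x admits an exact line search."""
          P = np.eye(len(x0)) - A.T @ np.linalg.solve(A @ A.T, A)  # projector onto null(A)
          x = x0.copy()                       # x0 must satisfy A @ x0 = b
          g = Q @ x - c
          pg = P @ g                          # projected (feasible) gradient
          d = -pg                             # initial feasible descent direction
          for _ in range(iters):
              if np.linalg.norm(pg) < tol:
                  break
              alpha = -(g @ d) / (d @ Q @ d)  # exact line search for the quadratic
              x = x + alpha * d               # stays feasible since A @ d = 0
              g_new = Q @ x - c
              pg_new = P @ g_new
              beta = (pg_new @ pg_new) / (d @ (pg_new - pg))  # Dai-Yuan coefficient
              d = -pg_new + beta * d
              g, pg = g_new, pg_new
          return x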

  2. Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.

    PubMed

    Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S

    2004-01-01

    MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along orthogonal directions (r, phi, z), and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three axis torque-balanced gradient coil for rat imaging was developed based on this method, with an efficiency of 2.13, 2.08, and 4.12 mT.m(-1).A(-1) along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. This method has also been applied to the design of a gradient coil for the human brain, employing remote current return paths. The benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths. Copyright 2003 Wiley-Liss, Inc.

  3. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computation cost and the tendency to produce solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). Source encoding dramatically reduces the cost of FWI, but it is subject to a fixed-spread acquisition requirement and converges slowly because cross-talk must be suppressed. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to the gradient generally gives non-geological inversion results and can also introduce artifacts. In this work, we address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces the computation time by choosing a subset of the shots for each iteration. By applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
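
    As a generic illustration of the mini-batch strategy described above, the sketch below performs gradient descent on a random subset of shots per iteration and smooths the stacked gradient before the update; the 1-D Gaussian filter and the grad_per_shot routine are stand-ins assumed for the sketch, not the paper's structure-oriented operator or adjoint-state gradient:

      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def minibatch_gd_filtered(grad_per_shot, model, shots, batch_size=8,
                                step=1e-2, sigma=3.0, iters=100, rng=None):
          """Sketch of mini-batch FWI-style updates with gradient smoothing.
          model and gradients are assumed 1-D arrays here; grad_per_shot(model, shot)
          is a user-supplied per-shot gradient routine (an assumption of the sketch)."""
          rng = rng or np.random.default_rng(0)
          m = model.copy()
          for _ in range(iters):
              batch = rng.choice(len(shots), size=batch_size, replace=False)
              g = np.mean([grad_per_shot(m, shots[i]) for i in batch], axis=0)
              g = gaussian_filter1d(g, sigma=sigma)  # stand-in for structure-oriented smoothing
              m -= step * g
          return m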

  4. 14 CFR 23.69 - Enroute climb/descent.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... climb/descent. (a) All engines operating. The steady gradient and rate of climb must be determined at.... The steady gradient and rate of climb/descent must be determined at each weight, altitude, and ambient...

  5. Algorithms for accelerated convergence of adaptive PCA.

    PubMed

    Chatterjee, C; Kang, Z; Roychowdhury, V P

    2000-01-01

    We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
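
    For context, the traditional gradient-descent baseline that the paper accelerates can be written in a few lines; the sketch below shows Oja's classical adaptive rule for the first principal component, which is one of the traditional algorithms named above, not the paper's new methods:

      import numpy as np

      def oja_first_pc(samples, eta=0.01):
          """Baseline sketch: Oja's adaptive gradient-descent rule for the first
          principal component. samples is an (n_samples, dim) array processed online."""
          rng = np.random.default_rng(0)
          w = rng.standard_normal(samples.shape[1])
          w /= np.linalg.norm(w)
          for x in samples:
              y = w @ x                       # projection onto the current estimate
              w += eta * y * (x - y * w)      # Hebbian update with Oja's normalization term
          return w / np.linalg.norm(w)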

  6. Gradient optimization and nonlinear control

    NASA Technical Reports Server (NTRS)

    Hasdorff, L.

    1976-01-01

    The book represents an introduction to computation in control by an iterative, gradient, numerical method, where linearity is not assumed. The general language and approach used are those of elementary functional analysis. The particular gradient method that is emphasized and used is conjugate gradient descent, a well known method exhibiting quadratic convergence while requiring very little more computation than simple steepest descent. Constraints are not dealt with directly, but rather the approach is to introduce them as penalty terms in the criterion. General conjugate gradient descent methods are developed and applied to problems in control.
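
    A minimal sketch of the penalty approach mentioned above, folding an equality constraint into the criterion and minimizing the result with a conjugate gradient routine, is given below; the Rosenbrock criterion, the example constraint, and the use of SciPy's nonlinear CG are assumptions of the sketch, not material from the book:

      import numpy as np
      from scipy.optimize import minimize

      def f(x):                 # example criterion (Rosenbrock), assumed for the sketch
          return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

      def c(x):                 # example equality constraint x0 + x1 = 1
          return x[0] + x[1] - 1.0

      def penalized(x, mu):     # constraint folded into the criterion as a quadratic penalty
          return f(x) + 0.5 * mu * c(x)**2

      x = np.array([0.0, 0.0])
      for mu in [1.0, 10.0, 100.0, 1000.0]:   # increase the penalty weight gradually
          x = minimize(penalized, x, args=(mu,), method="CG").x
      print(x)                  # approaches the constrained minimizer as mu grows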

  7. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computing Hessian inverses of the objective function incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
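
    The sketch below illustrates the flavour of the approach: quasi-Newton curvature pairs built from stochastic gradients evaluated on the same mini-batch at consecutive iterates, with an identity-shift regularization of the step. It is a simplified stand-in; the paper's exact regularized BFGS update, which also regularizes the curvature estimate itself, is not reproduced:

      import numpy as np

      def stochastic_bfgs_like(stoch_grad, x0, n_data, eps=0.05, Gamma=0.1,
                               batch=16, iters=500, rng=None):
          """Simplified sketch of a regularized stochastic quasi-Newton step.
          stoch_grad(x, idx) returns the mini-batch gradient (assumed user-supplied)."""
          rng = rng or np.random.default_rng(0)
          n = len(x0)
          x = np.asarray(x0, float).copy()
          H = np.eye(n)                                   # inverse-Hessian estimate
          for _ in range(iters):
              idx = rng.integers(0, n_data, size=batch)   # mini-batch indices
              g = stoch_grad(x, idx)
              x_new = x - eps * (H + Gamma * np.eye(n)) @ g   # regularized quasi-Newton step
              g_new = stoch_grad(x_new, idx)              # same batch -> meaningful curvature pair
              s, y = x_new - x, g_new - g
              sy = s @ y
              if sy > 1e-12:                              # standard BFGS inverse update
                  rho = 1.0 / sy
                  V = np.eye(n) - rho * np.outer(s, y)
                  H = V @ H @ V.T + rho * np.outer(s, s)
              x = x_new
          return x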

  8. Error measure comparison of currently employed dose-modulation schemes for e-beam proximity effect control

    NASA Astrophysics Data System (ADS)

    Peckerar, Martin C.; Marrian, Christie R.

    1995-05-01

    Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.

  9. Convergence Rates of Finite Difference Stochastic Approximation Algorithms

    DTIC Science & Technology

    2016-06-01

    The convergence rates of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm are studied under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite difference gradient approximations.

  10. The q-G method: A q-version of the Steepest Descent method for global optimization.

    PubMed

    Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M

    2015-01-01

    In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and it is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions, and compared its performance with 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has a great potential for solving multimodal optimization problems.
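
    A hedged sketch of a q-gradient step is given below: each component uses Jackson's q-derivative, and the dilation parameters q are drawn around 1 with a spread that shrinks over time so the search moves from exploration towards classical steepest descent. The Gaussian sampling of q and the fallback near x_i = 0 are assumptions of the sketch rather than the paper's exact strategy:

      import numpy as np

      def q_gradient(f, x, q):
          """Jackson-derivative-based q-gradient: component i uses
          (f(..., q_i*x_i, ...) - f(x)) / (q_i*x_i - x_i), with a small central
          difference as a fallback when x_i is near zero (sketch assumption)."""
          g = np.empty_like(x, dtype=float)
          fx = f(x)
          for i in range(len(x)):
              xi = x.copy()
              if abs(x[i]) > 1e-12:
                  xi[i] = q[i] * x[i]
                  g[i] = (f(xi) - fx) / (xi[i] - x[i])
              else:
                  h = 1e-6
                  xi[i] = x[i] + h
                  g[i] = (f(xi) - fx) / h
          return g

      def q_g_descent(f, x0, iters=200, step=0.05, sigma0=0.5, decay=0.98, rng=None):
          """q-G style search sketch: q is drawn around 1 with shrinking spread,
          so the method approaches steepest descent (q -> 1) as iterations proceed."""
          rng = rng or np.random.default_rng(0)
          x, sigma = np.asarray(x0, float).copy(), sigma0
          for _ in range(iters):
              q = 1.0 + sigma * rng.standard_normal(len(x))
              x = x - step * q_gradient(f, x, q)
              sigma *= decay                  # gradual shift to local exploitation
          return x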

  11. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    NASA Astrophysics Data System (ADS)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    With the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can now be acquired on airborne and marine platforms. Large-scale geophysical data sets can be obtained with these methods, placing them in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method; however, the conventional conjugate gradient method takes a long time to complete data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. The inversion is formulated with regularizing constraints, and a non-monotone gradient-descent method is introduced to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest descent gradient algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data are used to show the practical value of this new fast inversion method.
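
    The abstract does not spell out the non-monotone scheme, so the sketch below shows one standard realization of the idea: Barzilai-Borwein step lengths accepted by a non-monotone (Grippo-type) test against the largest of the last M objective values. It illustrates the class of method referred to, not the paper's specific FTG inversion code:

      import numpy as np

      def nonmonotone_bb_descent(f, grad, x0, iters=200, M=10, c=1e-4):
          """Sketch: BB gradient descent with a non-monotone backtracking test."""
          x = np.asarray(x0, float).copy()
          g = grad(x)
          alpha = 1.0
          hist = [f(x)]
          for _ in range(iters):
              d = -g
              fmax = max(hist[-M:])           # non-monotone reference value
              t = alpha
              while f(x + t * d) > fmax + c * t * (g @ d):   # backtracking
                  t *= 0.5
                  if t < 1e-12:
                      break
              x_new = x + t * d
              g_new = grad(x_new)
              s, y = x_new - x, g_new - g
              alpha = (s @ s) / (s @ y) if abs(s @ y) > 1e-12 else 1.0  # BB1 step length
              alpha = float(np.clip(alpha, 1e-8, 1e8))
              x, g = x_new, g_new
              hist.append(f(x))
          return x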

  12. Gradient descent learning algorithm overview: a general dynamical systems perspective.

    PubMed

    Baldi, P

    1995-01-01

    Gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results which have been originally derived for different problems (fixed point/trajectory learning), for different models (discrete/continuous), for different architectures (forward/recurrent), and using different techniques (backpropagation, variational calculus, adjoint methods, etc.). The general approach can also be applied to derive new algorithms. The author then briefly examines some of the complexity issues and limitations intrinsic to gradient descent learning. Throughout the paper, the author focuses on the problem of trajectory learning.

  13. Minimum-Cost Aircraft Descent Trajectories with a Constrained Altitude Profile

    NASA Technical Reports Server (NTRS)

    Wu, Minghong G.; Sadovsky, Alexander V.

    2015-01-01

    An analytical formula for solving the speed profile that accrues minimum cost during an aircraft descent with a constrained altitude profile is derived. The optimal speed profile first reaches a certain speed, called the minimum-cost speed, as quickly as possible using an appropriate extreme value of thrust. The speed profile then stays on the minimum-cost speed as long as possible, before switching to an extreme value of thrust for the rest of the descent. The formula is applied to an actual arrival route and its sensitivity to winds and airlines' business objectives is analyzed.

  14. Accelerating deep neural network training with inconsistent stochastic gradient descent.

    PubMed

    Wang, Linnan; Yang, Yi; Min, Renqiang; Chakradhar, Srimat

    2017-09-01

    Stochastic Gradient Descent (SGD) updates a Convolutional Neural Network (CNN) with a noisy gradient computed from a random batch, and each batch updates the network exactly once per epoch. This model applies the same training effort to each batch, but it overlooks the fact that the gradient variance, induced by Sampling Bias and Intrinsic Image Difference, produces different training dynamics across batches. In this paper, we develop a new training strategy for SGD, referred to as Inconsistent Stochastic Gradient Descent (ISGD), to address this problem. The core concept of ISGD is inconsistent training, which dynamically adjusts the training effort with respect to the loss. ISGD models training as a stochastic process that gradually reduces the mean of the batch loss, and it uses a dynamic upper control limit to identify large-loss batches on the fly. ISGD stays on an identified batch to accelerate training with additional gradient updates, and it also applies a constraint that penalizes drastic parameter changes. ISGD is straightforward, computationally efficient, and requires no auxiliary memory. A series of empirical evaluations on real-world datasets and networks demonstrates the promising performance of inconsistent training. Copyright © 2017 Elsevier Ltd. All rights reserved.
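
    A minimal sketch of the inconsistent-training loop is given below, assuming user-supplied loss_fn and grad_fn callables; the control limit is a simple mean-plus-k-standard-deviations rule, and the paper's penalty on drastic parameter changes is omitted:

      import numpy as np

      def isgd(w, batches, loss_fn, grad_fn, lr=0.01, k=3.0, max_extra=5, epochs=1):
          """Sketch of inconsistent SGD: batches whose loss exceeds a dynamic upper
          control limit receive extra gradient updates before training moves on."""
          history = []
          for _ in range(epochs):
              for batch in batches:
                  w = w - lr * grad_fn(w, batch)
                  loss = loss_fn(w, batch)
                  history.append(loss)
                  limit = np.mean(history) + k * np.std(history)  # dynamic control limit
                  extra = 0
                  while loss > limit and extra < max_extra:       # stay on the hard batch
                      w = w - lr * grad_fn(w, batch)
                      loss = loss_fn(w, batch)
                      extra += 1
          return w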

  15. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Application of the stochastic parallel gradient descent algorithm for numerical simulation and analysis of the coherent summation of radiation from fibre amplifiers

    NASA Astrophysics Data System (ADS)

    Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin

    2009-10-01

    Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed, and its optimisation parameters and bandwidth limitations are studied.
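
    SPGD itself is compact enough to sketch: all channel phases are dithered in parallel with random signs, the combining metric is measured for both perturbation signs, and the phases are updated in proportion to the metric difference. The toy on-axis intensity metric below is an assumption for illustration, not the paper's fibre-amplifier model:

      import numpy as np

      def spgd_phase_lock(measure, n, iters=2000, gain=0.3, amp=0.1, rng=None):
          """SPGD sketch: measure(phases) returns the combining metric to maximize."""
          rng = rng or np.random.default_rng(0)
          phases = rng.uniform(0, 2 * np.pi, n)
          for _ in range(iters):
              delta = amp * rng.choice([-1.0, 1.0], size=n)   # parallel random dither
              dJ = measure(phases + delta) - measure(phases - delta)
              phases += gain * dJ * delta                     # SPGD update (metric ascent)
          return phases

      # toy usage: maximize the coherent sum of 8 unit-amplitude channels
      J = lambda p: abs(np.exp(1j * p).sum()) ** 2
      locked = spgd_phase_lock(J, n=8)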

  16. Programmed gradient descent biosorption of strontium ions by Saccharomyces cerevisiae and ashing analysis: A decrement solution for nuclide and heavy metal disposal.

    PubMed

    Liu, Mingxue; Dong, Faqin; Zhang, Wei; Nie, Xiaoqin; Sun, Shiyong; Wei, Hongfu; Luo, Lang; Xiang, Sha; Zhang, Gege

    2016-08-15

    One of the principles of waste disposal is decrement. The programmed gradient descent biosorption of strontium ions by Saccharomyces cerevisiae, with regard to bioremoval and an ashing process for decrement, was studied in the present research. The results indicated that S. cerevisiae cells showed effective biosorption of strontium ions, with greater than 90% bioremoval efficiency for high-concentration strontium ions under batch culture conditions. The S. cerevisiae cells bioaccumulated approximately 10% of the strontium ions in the cytoplasm besides adsorbing 90% of the strontium ions on the cell wall. The programmed gradient descent biosorption performed well, with a nearly 100% bioremoval ratio for low-concentration strontium ions after 3 cycles. The ashing process resulted in a large volume and weight reduction ratio as well as enrichment of strontium in the ash. XRD results showed that SrSO4 existed in the ash. Simulated experiments proved that sulfate could adjust the precipitation of strontium ions. Finally, we propose a technological flow process that combines the programmed gradient descent biosorption and ashing, which can yield great decrement and allow the supernatant to meet the discharge standard. This technological flow process may be beneficial for nuclide and heavy metal disposal treatment in many fields. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.

  18. Gradient descent algorithm applied to wavefront retrieval from through-focus images by an extreme ultraviolet microscope with partially coherent source

    DOE PAGES

    Yamazoe, Kenji; Mochi, Iacopo; Goldberg, Kenneth A.

    2014-12-01

    The wavefront retrieval by gradient descent algorithm that is typically applied to coherent or incoherent imaging is extended to retrieve a wavefront from a series of through-focus images by partially coherent illumination. For accurate retrieval, we modeled partial coherence as well as object transmittance into the gradient descent algorithm. However, this modeling increases the computation time due to the complexity of partially coherent imaging simulation that is repeatedly used in the optimization loop. To accelerate the computation, we incorporate not only the Fourier transform but also an eigenfunction decomposition of the image. As a demonstration, the extended algorithm is applied to retrieve a field-dependent wavefront of a microscope operated at extreme ultraviolet wavelength (13.4 nm). The retrieved wavefront qualitatively matches the expected characteristics of the lens design.

  19. Gradient descent algorithm applied to wavefront retrieval from through-focus images by an extreme ultraviolet microscope with partially coherent source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazoe, Kenji; Mochi, Iacopo; Goldberg, Kenneth A.

    The wavefront retrieval by gradient descent algorithm that is typically applied to coherent or incoherent imaging is extended to retrieve a wavefront from a series of through-focus images by partially coherent illumination. For accurate retrieval, we modeled partial coherence as well as object transmittance into the gradient descent algorithm. However, this modeling increases the computation time due to the complexity of partially coherent imaging simulation that is repeatedly used in the optimization loop. To accelerate the computation, we incorporate not only the Fourier transform but also an eigenfunction decomposition of the image. As a demonstration, the extended algorithm is applied to retrieve a field-dependent wavefront of a microscope operated at extreme ultraviolet wavelength (13.4 nm). The retrieved wavefront qualitatively matches the expected characteristics of the lens design.

  20. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem for nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, the system transformation is first introduced with the general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.

  1. Sparse decomposition of seismic data and migration using Gaussian beams with nonzero initial curvature

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Yanfei

    2018-04-01

    We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.

  2. Split Bregman multicoil accelerated reconstruction technique: A new framework for rapid reconstruction of cardiac perfusion MRI

    PubMed Central

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Likhite, Devavrat; DiBella, Edward

    2016-01-01

    Purpose: Rapid reconstruction of undersampled multicoil MRI data with iterative constrained reconstruction method is a challenge. The authors sought to develop a new substitution based variable splitting algorithm for faster reconstruction of multicoil cardiac perfusion MRI data. Methods: The new method, split Bregman multicoil accelerated reconstruction technique (SMART), uses a combination of split Bregman based variable splitting and iterative reweighting techniques to achieve fast convergence. Total variation constraints are used along the spatial and temporal dimensions. The method is tested on nine ECG-gated dog perfusion datasets, acquired with a 30-ray golden ratio radial sampling pattern and ten ungated human perfusion datasets, acquired with a 24-ray golden ratio radial sampling pattern. Image quality and reconstruction speed are evaluated and compared to a gradient descent (GD) implementation and to multicoil k-t SLR, a reconstruction technique that uses a combination of sparsity and low rank constraints. Results: Comparisons based on blur metric and visual inspection showed that SMART images had lower blur and better texture as compared to the GD implementation. On average, the GD based images had an ∼18% higher blur metric as compared to SMART images. Reconstruction of dynamic contrast enhanced (DCE) cardiac perfusion images using the SMART method was ∼6 times faster than standard gradient descent methods. k-t SLR and SMART produced images with comparable image quality, though SMART was ∼6.8 times faster than k-t SLR. Conclusions: The SMART method is a promising approach to reconstruct good quality multicoil images from undersampled DCE cardiac perfusion data rapidly. PMID:27036592

  3. Noise-shaping gradient descent-based online adaptation algorithms for digital calibration of analog circuits.

    PubMed

    Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji

    2013-04-01

    Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient-methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
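
    The noise-shaping idea can be illustrated generically: quantize each parameter update to the DAC resolution and feed the quantization error back into the next update, as in a first-order sigma-delta modulator. The sketch below is this generic loop, not the paper's circuit-level ΣΔ algorithm:

      import numpy as np

      def sigma_delta_gd(grad, w0, lr=0.05, lsb=0.01, iters=500):
          """Sketch of first-order noise-shaped quantized gradient descent:
          updates are quantized to the DAC resolution `lsb`, and the quantization
          error is accumulated and added back to the next update."""
          w = np.asarray(w0, float).copy()
          err = np.zeros_like(w)                  # sigma-delta error accumulator
          for _ in range(iters):
              u = -lr * grad(w) + err             # ideal update plus fed-back error
              q = lsb * np.round(u / lsb)         # quantize to DAC levels
              err = u - q                         # error shaped into the next step
              w += q                              # apply only quantized increments
          return w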

  4. Analysis of Online Composite Mirror Descent Algorithm.

    PubMed

    Lei, Yunwen; Zhou, Ding-Xuan

    2017-03-01

    We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
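
    For readers unfamiliar with the algorithm being analysed, the sketch below shows the special case with the Euclidean mirror map and an l1 regularizer, where each composite mirror descent step reduces to a soft-thresholding (proximal) update; the step-size schedule eta_t = t^(-theta) matches the polynomially decaying setting mentioned above, while the loss gradient is left as a user-supplied callable:

      import numpy as np

      def soft_threshold(v, tau):
          return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

      def online_composite_md(data, loss_grad, dim, lam=0.01, theta=0.75):
          """Sketch: online composite mirror descent with psi(x) = 0.5*||x||^2 and an
          l1 regularizer, i.e. online proximal (soft-thresholding) gradient steps.
          loss_grad(w, z) is the (sub)gradient of the loss at sample z."""
          w = np.zeros(dim)
          for t, z in enumerate(data, start=1):
              eta = t ** (-theta)                       # polynomially decaying step size
              w = soft_threshold(w - eta * loss_grad(w, z), eta * lam)
          return w                                      # last iterate, no averaging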

  5. Gradient descent for robust kernel-based regression

    NASA Astrophysics Data System (ADS)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, which can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such losses: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that, with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the mini-max sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
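
    A minimal sketch of the kind of iteration analysed above is given below: gradient descent on the kernel expansion coefficients with a Welsch-type robust loss (one example of the windowing-function form) and early stopping as the regularizer. The specific loss, step size, and stopping time are assumptions of the sketch:

      import numpy as np

      def robust_kernel_gd(K, y, sigma=1.0, eta=0.5, t_max=200):
          """Sketch: functional gradient descent in an RKHS with a robust loss.
          K is the kernel Gram matrix on the training inputs; the fitted function
          is f(x) = sum_i alpha_i * k(x_i, x); t_max realizes early stopping."""
          n = len(y)
          alpha = np.zeros(n)
          for _ in range(t_max):
              r = y - K @ alpha                          # residuals
              psi = r * np.exp(-(r ** 2) / sigma ** 2)   # derivative of the robust loss
              alpha += (eta / n) * psi                   # functional gradient step
          return alpha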

  6. Automation for Accommodating Fuel-Efficient Descents in Constrained Airspace

    NASA Technical Reports Server (NTRS)

    Coppenbarger, Richard A.

    2010-01-01

    Continuous descents at low engine power are desired to reduce fuel consumption, emissions, and noise during arrival operations. The challenge is to allow airplanes to fly these efficient descents without interruption during busy traffic conditions. During busy conditions today, airplanes are commonly forced to fly inefficient, step-down descents as air traffic controllers work to ensure separation and maximize throughput. NASA, in collaboration with government and industry partners, is developing new automation to help controllers accommodate continuous descents in the presence of complex traffic and airspace constraints. This automation relies on accurate trajectory predictions to compute strategic maneuver advisories. The talk will describe the concept behind this new automation and provide an overview of the simulations and flight testing used to develop and refine its underlying technology.

  7. A pipeline leakage locating method based on the gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    Li, Yulong; Yang, Fan; Ni, Na

    2018-04-01

    A pipeline leakage locating method based on the gradient descent algorithm is proposed in this paper. The method has low computational complexity, which makes it suitable for practical application. We built an experimental environment in a real underground pipeline network, and a large amount of real data was gathered over the past three months. Every leak point was confirmed by excavation. Results show that the positioning error is within 0.4 m, the false-alarm and missed-alarm rates are both under 20%, and the computation time is no more than 5 seconds.

  8. 14 CFR 23.69 - Enroute climb/descent.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... inoperative and its propeller in the minimum drag position; (2) The remaining engine(s) at not more than... climb/descent. (a) All engines operating. The steady gradient and rate of climb must be determined at... applicant with— (1) Not more than maximum continuous power on each engine; (2) The landing gear retracted...

  9. 14 CFR 23.69 - Enroute climb/descent.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... inoperative and its propeller in the minimum drag position; (2) The remaining engine(s) at not more than... climb/descent. (a) All engines operating. The steady gradient and rate of climb must be determined at... applicant with— (1) Not more than maximum continuous power on each engine; (2) The landing gear retracted...

  10. 14 CFR 23.69 - Enroute climb/descent.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... inoperative and its propeller in the minimum drag position; (2) The remaining engine(s) at not more than... climb/descent. (a) All engines operating. The steady gradient and rate of climb must be determined at... applicant with— (1) Not more than maximum continuous power on each engine; (2) The landing gear retracted...

  11. 14 CFR 23.69 - Enroute climb/descent.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... inoperative and its propeller in the minimum drag position; (2) The remaining engine(s) at not more than... climb/descent. (a) All engines operating. The steady gradient and rate of climb must be determined at... applicant with— (1) Not more than maximum continuous power on each engine; (2) The landing gear retracted...

  12. Planning fuel-conservative descents in an airline environment using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Vicroy, D. D.; Simmon, D. A.

    1985-01-01

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  13. Planning fuel-conservative descents in an airline environment using a small programmable calculator: algorithm development and flight test results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knox, C.E.; Vicroy, D.D.; Simmon, D.A.

    A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.

  14. Fractional-order gradient descent learning of BP neural networks with Caputo derivative.

    PubMed

    Wang, Jian; Wen, Yanqing; Gou, Yida; Ye, Zhenyun; Chen, Hua

    2017-05-01

    Fractional calculus has been found to be a promising area of research for information processing and the modeling of some physical systems. In this paper, we propose a fractional gradient descent method for the backpropagation (BP) training of neural networks. In particular, the Caputo derivative is employed to evaluate the fractional-order gradient of the error defined as the traditional quadratic energy function. The monotonicity and weak (strong) convergence of the proposed approach are proved in detail. Two simulations have been implemented to illustrate the performance of the presented fractional-order BP algorithm on three small datasets and one large dataset. The numerical simulations effectively verify the theoretical observations of this paper as well. Copyright © 2017 Elsevier Ltd. All rights reserved.
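
    The sketch below illustrates one common way such a fractional-order update is realized in practice: the Caputo derivative of order nu in (0,1) is approximated by its leading term, grad(w) * |w - c|^(1 - nu) / Gamma(2 - nu), with the lower terminal c taken as the previous iterate. This is an assumed, simplified realization, not necessarily the exact scheme proved convergent in the paper:

      import numpy as np
      from scipy.special import gamma

      def caputo_fractional_gd(grad, w0, nu=0.9, lr=0.1, iters=200, eps=1e-8):
          """Sketch of a fractional-order gradient step using the leading-term
          approximation of the Caputo derivative with the previous iterate as
          the lower terminal (a practical choice, assumed for this sketch)."""
          w = np.asarray(w0, float).copy()
          w_prev = w + eps                               # avoid a zero terminal at start
          for _ in range(iters):
              frac = np.abs(w - w_prev) ** (1.0 - nu) / gamma(2.0 - nu)
              w_new = w - lr * grad(w) * (frac + eps)    # fractional-order update
              w_prev, w = w, w_new
          return w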

  15. Atrioventricular nonuniformity of pericardial constraint.

    PubMed

    Hamilton, Douglas R; Sas, Rozsa; Tyberg, John V

    2004-10-01

    Physiologists and clinicians commonly refer to "pressure" as a measure of the constraining effects of the pericardium; however, "pericardial pressure" is really a local measurement of epicardial radial stress. During diastole, from the bottom of the y descent to the beginning of the a wave, pericardial pressure over the right atrium (P(pRA)) is approximately equal to that over the right ventricle (P(pRV)). However, in systole, during the interval between the bottom of the x descent and the peak of the v wave, these two pericardial pressures appear to be completely decoupled in that P(pRV) decreases, whereas P(pRA) remains constant or increases. This decoupling indicates considerable mechanical independence between the RA and RV during systole. That is, RV systolic emptying lowers P(pRV), but P(pRA) continues to increase, suggesting that the relation of the pericardium to the RA must allow effective constraint, even though the pericardium over the RV is simultaneously slack. In conclusion, we measured the pericardial pressure responsible for the previously reported nonuniformity of pericardial strain. P(pRA) and P(pRV) are closely coupled during diastole, but during systole they become decoupled. Systolic nonuniformity of pericardial constraint may augment the atrioventricular valve-opening pressure gradient in early diastole and, so, affect ventricular filling.

  16. Stochastic Spectral Descent for Discrete Graphical Models

    DOE PAGES

    Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...

    2015-12-14

    Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten-∞ norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.

  17. 14 CFR 23.75 - Landing distance.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... to the 50 foot height and— (1) The steady approach must be at a gradient of descent not greater than... tests that a maximum steady approach gradient steeper than 5.2 percent, down to the 50-foot height, is safe. The gradient must be established as an operating limitation and the information necessary to...

  18. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data

    PubMed Central

    2017-01-01

    In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations were coded as a set of real-number m-dimensional vectors as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is performed with more epochs and the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We constructed several experiments on hand-written characters and biological activity prediction datasets to show that the DNN classifiers trained by the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifier, outperform the random approach in terms of the generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718

  19. Steepest descent method implementation on unconstrained optimization problem using C++ program

    NASA Astrophysics Data System (ADS)

    Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.

    2018-03-01

    Steepest Descent is known as the simplest gradient method. Recently, much research has been done to obtain an appropriate step size that reduces the objective function value progressively. In this paper, the properties of the steepest descent method from the literature are reviewed together with the advantages and disadvantages of each step-size procedure. The development of the steepest descent method due to its step-size procedure is discussed. In order to test the performance of each step size, we ran a steepest descent procedure in a C++ program. We implemented it on an unconstrained optimization test problem with two variables, and then compared the numerical results of each step-size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each case of the problem.
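
    A corresponding sketch (in Python rather than the paper's C++) of steepest descent with one of the step-size procedures such a comparison would include, backtracking with the Armijo condition, is given below; the two-variable test function is an assumed example:

      import numpy as np

      def steepest_descent(f, grad, x0, step0=1.0, beta=0.5, c=1e-4, tol=1e-6, iters=500):
          """Steepest descent sketch with a backtracking Armijo step-size rule.
          Other step-size procedures (fixed, exact, adaptive) can be swapped in."""
          x = np.asarray(x0, float).copy()
          for _ in range(iters):
              g = grad(x)
              if np.linalg.norm(g) < tol:
                  break
              t = step0
              while f(x - t * g) > f(x) - c * t * (g @ g):   # Armijo condition
                  t *= beta
              x = x - t * g
          return x

      # assumed two-variable test problem: an ill-conditioned quadratic
      f = lambda x: x[0]**2 + 10 * x[1]**2
      g = lambda x: np.array([2 * x[0], 20 * x[1]])
      print(steepest_descent(f, g, [3.0, -2.0]))   # approaches the minimizer [0, 0]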

  20. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.

  1. Image counter-forensics based on feature injection

    NASA Astrophysics Data System (ADS)

    Iuliani, M.; Rossetto, S.; Bianchi, T.; De Rosa, Alessia; Piva, A.; Barni, M.

    2014-02-01

    Starting from the concept that many image forensic tools are based on the detection of some features revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history of an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain producing an attacked image ~x, perceptually similar to x, whose feature f(~x) is as close as possible to f(y) computed on y. Our proposed counter-forensic attack consists in the constrained minimization of the feature distance Φ(z) = ‖f(z) - f(y)‖ through iterative methods based on gradient descent. To solve the intrinsic limit due to the numerical estimation of the gradient on large images, we propose the application of a feature decomposition process, that allows the problem to be reduced into many subproblems on the blocks the image is partitioned into. The proposed strategy has been tested by attacking three different features and its performance has been compared to state-of-the-art counter-forensic methods.

  2. A conjugate gradient method with descent properties under strong Wolfe line search

    NASA Astrophysics Data System (ADS)

    Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the optimization methods that are often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to the existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration numbers and CPU time under strong Wolfe line search. Overall, this new method performs efficiently and comparable to the other famous methods.

  3. Method of Real-Time Principal-Component Analysis

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu

    2005-01-01

    Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.

  4. Machine learning for inverse lithography: using stochastic gradient descent for robust photomask synthesis

    NASA Astrophysics Data System (ADS)

    Jia, Ningning; Y Lam, Edmund

    2010-04-01

    Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.

  5. Computational trigonometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, K.

    1994-12-31

    By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to Gradient descent, Conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.

  6. On the boundary conditions on a shock wave for hypersonic flow around a descent vehicle

    NASA Astrophysics Data System (ADS)

    Golomazov, M. M.; Ivankov, A. A.

    2013-12-01

    Stationary hypersonic flow around a descent vehicle is examined by considering equilibrium and nonequilibrium reactions. We study how physical-chemical processes and shock wave conditions for gas species influence the shock-layer structure. It is shown that conservation conditions of species on the shock wave cause high-temperature and concentration gradients in the shock layer when we calculate spacecraft deceleration trajectory in the atmosphere at 75 km altitude.

  7. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Renliang; Dogandžić, Aleksandar

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.

  8. Controlling bridging and pinching with pixel-based mask for inverse lithography

    NASA Astrophysics Data System (ADS)

    Kobelkov, Sergey; Tritchkov, Alexander; Han, JiWan

    2016-03-01

    Inverse Lithography Technology (ILT) has become a viable computational lithography candidate in recent years as it can produce mask output that results in process latitude and CD control in the fab that is hard to match with conventional OPC/SRAF insertion approaches. An approach to solving the inverse lithography problem as a nonlinear, constrained minimization problem over a domain mask pixels was suggested in the paper by Y. Granik "Fast pixel-based mask optimization for inverse lithography" in 2006. The present paper extends this method to satisfy bridging and pinching constraints imposed on print contours. Namely, there are suggested objective functions expressing penalty for constraints violations, and their minimization with gradient descent methods is considered. This approach has been tested with an ILT-based Local Printability Enhancement (LPTM) tool in an automated flow to eliminate hotspots that can be present on the full chip after conventional SRAF placement/OPC and has been applied in 14nm, 10nm node production, single and multiple-patterning flows.

  9. An Adaptive Orientation Estimation Method for Magnetic and Inertial Sensors in the Presence of Magnetic Disturbances

    PubMed Central

    Fan, Bingfei; Li, Qingguo; Wang, Chao; Liu, Tao

    2017-01-01

    Magnetic and inertial sensors have been widely used to estimate the orientation of human segments due to their low cost, compact size and light weight. However, the accuracy of the estimated orientation is easily affected by external factors, especially when the sensor is used in an environment with magnetic disturbances. In this paper, we propose an adaptive method to improve the accuracy of orientation estimations in the presence of magnetic disturbances. The method is based on existing gradient descent algorithms, and it is performed prior to sensor fusion algorithms. The proposed method includes stationary state detection and magnetic disturbance severity determination. The stationary state detection makes this method immune to magnetic disturbances in stationary state, while the magnetic disturbance severity determination helps to determine the credibility of magnetometer data under dynamic conditions, so as to mitigate the negative effect of the magnetic disturbances. The proposed method was validated through experiments performed on a customized three-axis instrumented gimbal with known orientations. The error of the proposed method and the original gradient descent algorithms were calculated and compared. Experimental results demonstrate that in stationary state, the proposed method is completely immune to magnetic disturbances, and in dynamic conditions, the error caused by magnetic disturbance is reduced by 51.2% compared with original MIMU gradient descent algorithm. PMID:28534858

  10. 14 CFR 23.75 - Landing distance.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... to the 50 foot height and— (1) The steady approach must be at a gradient of descent not greater than 5.2 percent (3 degrees) down to the 50-foot height. (2) In addition, an applicant may demonstrate by tests that a maximum steady approach gradient steeper than 5.2 percent, down to the 50-foot height, is...

  11. 14 CFR 23.75 - Landing distance.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... to the 50 foot height and— (1) The steady approach must be at a gradient of descent not greater than 5.2 percent (3 degrees) down to the 50-foot height. (2) In addition, an applicant may demonstrate by tests that a maximum steady approach gradient steeper than 5.2 percent, down to the 50-foot height, is...

  12. 14 CFR 23.75 - Landing distance.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... to the 50 foot height and— (1) The steady approach must be at a gradient of descent not greater than 5.2 percent (3 degrees) down to the 50-foot height. (2) In addition, an applicant may demonstrate by tests that a maximum steady approach gradient steeper than 5.2 percent, down to the 50-foot height, is...

  13. Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms

    DTIC Science & Technology

    2010-01-01

    Algorithm The cyclic coordinate descent algorithm is also known as the nonlinear Gauss-Seidel iteration [32]. There are several studies of this type of...vkρ(vi−1). It can be shown that the above BB gradient projection direction is always a descent direction. The R-linear convergence of the BB method has...KKT solution) of the inexact pricing algorithm for MISO interference channel. The latter is interesting since the convergence of the original pricing

  14. Error analysis of stochastic gradient descent ranking.

    PubMed

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is an important task in machine learning and information retrieval, e.g., in collaborative filtering, recommender systems, and drug discovery. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error.
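
    A minimal sketch of stochastic gradient descent with a least-squares ranking loss is given below, using a plain linear scoring function in place of the kernel formulation; the pairwise loss, the decaying step size, and all parameter values are assumptions for illustration rather than the algorithm analyzed above.

      import numpy as np

      def sgd_rank(X, y, lam=0.01, lr0=0.05, epochs=20, seed=0):
          """SGD for a linear ranking function f(x) = w.x using the pairwise
          least-squares loss (f(xi) - f(xj) - (yi - yj))**2 plus an L2 penalty."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          w = np.zeros(d)
          t = 0
          for _ in range(epochs):
              for _ in range(n):
                  i, j = rng.integers(n, size=2)
                  t += 1
                  lr = lr0 / np.sqrt(t)                 # decaying step size
                  diff = X[i] - X[j]
                  resid = w @ diff - (y[i] - y[j])      # pairwise residual
                  w -= lr * (2.0 * resid * diff + 2.0 * lam * w)
          return w

      # Toy usage: relevance grows with the first feature, so that weight should dominate.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 5))
      y = X[:, 0] + 0.1 * rng.normal(size=200)
      print(sgd_rank(X, y))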

  15. A three-term conjugate gradient method under the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Khadijah, Wan; Rivaie, Mohd; Mamat, Mustafa

    2017-08-01

    Recently, numerous studies have been concerned with conjugate gradient methods for solving large-scale unconstrained optimization problems. In this paper, a three-term conjugate gradient method named Three-Term Rivaie-Mustafa-Ismail-Leong (TTRMIL) is proposed for unconstrained optimization; it always produces a sufficient descent direction. Under standard conditions, the TTRMIL method is proved to be globally convergent under the strong-Wolfe line search. Finally, numerical results are provided for the purpose of comparison.

  16. High resolution quantitative phase imaging of live cells with constrained optimization approach

    NASA Astrophysics Data System (ADS)

    Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu

    2016-03-01

    Quantitative phase imaging (QPI) aims at studying weakly scattering and absorbing biological specimens with subwavelength accuracy without any external staining mechanisms. Use of a reference beam at an angle is one of the necessary criteria for recording high resolution holograms in most of the interferometric methods used for quantitative phase imaging. The spatial separation of the dc and twin images is decided by the reference beam angle, and the Fourier-filtered reconstructed image will have very poor resolution if the hologram is recorded below a minimum reference angle. However, it is always inconvenient to have a large reference beam angle while performing high resolution microscopy of live cells and biological specimens with nanometric features. In this paper, we treat reconstruction of digital holographic microscopy images as a constrained optimization problem with a smoothness constraint in order to recover the complex object field in the hologram plane even with overlapping dc and twin image terms. We solve this optimization problem iteratively by a gradient descent approach, and the smoothness constraint is implemented by spatial averaging with an appropriate window size. This approach gives excellent high resolution image recovery compared to Fourier filtering while keeping a very small reference angle. We demonstrate this approach on digital holographic microscopy of live cells by recovering the quantitative phase of live cells from a hologram recorded with a nearly zero reference angle.
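
    The constrained-optimization idea can be sketched as follows: fit a complex object field O so that |R + O|^2 matches the recorded hologram H, taking gradient descent steps via the Wirtinger gradient and enforcing the smoothness constraint by local spatial averaging after each step. The simple data term, window size, step size, and synthetic example below are assumptions for illustration, not the authors' exact formulation.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def recover_field(hologram, R, iters=300, lr=0.2, win=3):
          """Gradient descent on sum((|R + O|**2 - H)**2) over the complex object
          field O, with a smoothness constraint applied by local averaging."""
          O = np.zeros_like(R)
          for _ in range(iters):
              total = R + O
              resid = np.abs(total) ** 2 - hologram
              O = O - lr * 2.0 * resid * total          # Wirtinger gradient step
              # smoothness constraint: average real and imaginary parts locally
              O = uniform_filter(O.real, win) + 1j * uniform_filter(O.imag, win)
          return O

      # Toy usage: synthetic on-axis (zero reference angle) hologram of a smooth object.
      rng = np.random.default_rng(0)
      R = np.ones((64, 64), dtype=complex)              # plane reference wave
      true_O = 0.2 * np.exp(1j * uniform_filter(rng.normal(size=(64, 64)), 9))
      H = np.abs(R + true_O) ** 2
      O_est = recover_field(H, R)
      print(np.abs(np.abs(R + O_est) ** 2 - H).mean())  # residual data misfit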

  17. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  18. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former includes the Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers the graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Online learning in optical tomography: a stochastic approach

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Li, Qin; Liu, Jian-Guo

    2018-07-01

    We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent method (SGD) in this paper. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE using incoming-outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, where the mismatch between computed and measured outgoing data is minimized subject to the same initial data and the RTE constraint. The memory and computation cost this requires, however, is typically prohibitive, especially in high-dimensional spaces. Iterative solvers that use only partial information in each step are therefore called for. The stochastic gradient descent method is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances quickly, and therefore serves this purpose well. In this paper we formulate the problem in both its nonlinear and linearized settings, apply the SGD algorithm, and analyze its convergence performance.
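
    The flavor of the SGD iteration in the linearized setting can be sketched as follows: each step selects one measurement at random and updates the parameter estimate with the gradient of that single misfit term. A fixed matrix stands in for the forward RTE solve, and all names and parameter values are illustrative assumptions.

      import numpy as np

      def sgd_inverse(A_rows, b, x0, lr=0.05, sweeps=50, seed=0):
          """SGD for the linearized misfit 0.5 * sum_i (a_i.x - b_i)^2, touching
          one randomly selected measurement per step."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, dtype=float).copy()
          m = len(b)
          for _ in range(sweeps * m):
              i = rng.integers(m)
              resid = A_rows[i] @ x - b[i]
              x -= lr * resid * A_rows[i]      # gradient of the i-th misfit term
          return x

      # Toy usage: recover a 10-parameter model from 200 noisy linear measurements.
      rng = np.random.default_rng(1)
      A = rng.normal(size=(200, 10))
      x_true = rng.normal(size=10)
      b = A @ x_true + 0.01 * rng.normal(size=200)
      print(np.linalg.norm(sgd_inverse(A, b, np.zeros(10)) - x_true))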

  20. Accelerating IMRT optimization by voxel sampling

    NASA Astrophysics Data System (ADS)

    Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.

    2007-12-01

    This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.

  1. An Approach to Stable Gradient-Descent Adaptation of Higher Order Neural Units.

    PubMed

    Bukovsky, Ivo; Homma, Noriyasu

    2017-09-01

    Stability evaluation of a weight-update system of higher order neural units (HONUs) with polynomial aggregation of neural inputs (also known as classes of polynomial neural networks) for adaptation of both feedforward and recurrent HONUs by a gradient descent method is introduced. An essential core of the approach is based on the spectral radius of the weight-update system, and it allows stability monitoring and its maintenance at every adaptation step individually. Assuring the stability of the weight-update system (at every single adaptation step) naturally results in the adaptation stability of the whole neural architecture that adapts to the target data. As an aside, the approach used highlights the fact that the weight optimization of an HONU is a linear problem, so the proposed approach can be extended to any neural architecture that is linear in its adaptable parameters.

  2. A new fitting method for measurement of the curvature radius of a short arc with high precision

    NASA Astrophysics Data System (ADS)

    Tao, Wei; Zhong, Hong; Chen, Xiao; Selami, Yassine; Zhao, Hui

    2018-07-01

    The measurement of an object with a short arc is widely encountered in scientific research and industrial production. As the most classic method of arc fitting, the least squares fitting method suffers from low precision when it is used to measure arcs with smaller central angles and fewer sampling points. The shorter the arc, the lower the measurement accuracy. In order to improve the measurement precision of short arcs, a parameter-constrained fitting method based on a four-parameter circle equation is proposed in this paper. A generalized Lagrange function was introduced, together with optimization by the gradient descent method, to reduce the influence of noise. The simulation and experimental results showed that the proposed method has high precision even when the central angle drops below 4° and has good robustness when the noise standard deviation rises to 0.4 mm. This new fitting method is suitable for the high precision measurement of short arcs with smaller central angles without any prior information.
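
    For orientation, a bare-bones gradient descent circle fit on the sum of squared radial residuals is sketched below; the four-parameter circle equation, the parameter constraints, and the generalized Lagrange function of the method above are not reproduced, and the data and step size are arbitrary.

      import numpy as np

      def fit_circle_gd(pts, iters=2000, lr=0.1):
          """Fit a circle to 2-D points by gradient descent on
          0.5 * sum((|p_i - c| - r)^2) over the center c and radius r."""
          c = pts.mean(axis=0)                            # initial center: centroid
          r = np.mean(np.linalg.norm(pts - c, axis=1))    # initial radius
          for _ in range(iters):
              d = pts - c
              dist = np.linalg.norm(d, axis=1)
              resid = dist - r
              grad_c = -np.sum((resid / dist)[:, None] * d, axis=0)
              grad_r = -np.sum(resid)
              c = c - lr * grad_c / len(pts)
              r = r - lr * grad_r / len(pts)
          return c, r

      # Toy usage: a noisy 20-degree arc of a circle of radius 100 centered at the origin.
      rng = np.random.default_rng(0)
      theta = np.deg2rad(rng.uniform(0.0, 20.0, size=50))
      pts = 100.0 * np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(50, 2))
      print(fit_circle_gd(pts))   # plain descent converges slowly for short arcs,
                                  # which is what motivates the constrained formulation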

  3. Implementation of a Balance Operator in NCOM

    DTIC Science & Technology

    2016-04-07

    the background temperature Tb and salinity Sb fields do), f is the Coriolis parameter, k is the vertical unit vector, ∇ is the horizontal gradient, p... effectively used as a natural metric in the space of cost function gradients. The associated geometry inhibits descent in the unbalanced directions...28) where f is the local Coriolis parameter, ∆yv is the local grid spacing in the y direction at a v point, and the overbars indicates horizontal

  4. On the constrained minimization of smooth Kurdyka—Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
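
    A simplified sketch of a scaled gradient projection iteration on a box-constrained problem is shown below: a diagonally scaled gradient step is projected onto the feasible set, and the resulting feasible direction is searched with Armijo backtracking. The fixed diagonal scaling, the backtracking rule, and all parameter values are stand-in assumptions for the scaling-matrix and steplength choices discussed above.

      import numpy as np

      def sgp_box(f, grad, x0, lower, upper, scale=None, iters=100,
                  alpha0=1.0, sigma=1e-4, shrink=0.5):
          """Scaled gradient projection for min f(x) subject to lower <= x <= upper."""
          x = np.clip(np.asarray(x0, dtype=float), lower, upper)
          D = np.ones_like(x) if scale is None else scale       # diagonal scaling
          for _ in range(iters):
              g = grad(x)
              y = np.clip(x - alpha0 * D * g, lower, upper)     # scaled, projected step
              d = y - x                                         # feasible direction
              alpha, fx = 1.0, f(x)
              while f(x + alpha * d) > fx + sigma * alpha * (g @ d) and alpha > 1e-12:
                  alpha *= shrink                               # Armijo backtracking
              x = x + alpha * d
          return x

      # Toy usage: minimize ||x - t||^2 over the box [0, 1]^3 with t outside the box.
      t = np.array([1.5, -0.3, 0.4])
      x_star = sgp_box(lambda x: float((x - t) @ (x - t)),
                       lambda x: 2.0 * (x - t),
                       x0=np.full(3, 0.5), lower=0.0, upper=1.0)
      print(x_star)   # close to the projection of t onto the box: [1.0, 0.0, 0.4]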

  5. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.

  6. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique for solving optimization problems due to their numerical efficiency, simplicity, and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained Optimization, Aus. J. Bas. Appl. Sci. 5(2011) 947-951). Then, we show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on the given standard test problems compared to other existing CG methods.

  7. Implementation of a Balance Operator in NCOM

    DTIC Science & Technology

    2016-04-07

    the background temperature Tb and salinity Sb fields do), f is the Coriolis parameter, k is the vertical unit vector, ∇ is the horizontal gradient, p... effectively used as a natural metric in the space of cost function gradients. The associated geometry inhibits descent in the unbalanced directions and...28) where f is the local Coriolis parameter, ∆yv is the local grid spacing in the y direction at a v point, and the overbars indicates horizontal

  8. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms other existing CG methods.
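
    A generic nonlinear conjugate gradient loop of this family is sketched below, using the classical PRP+ coefficient and SciPy's Wolfe line search rather than the new coefficient proposed in the paper; the test function and tolerances are arbitrary.

      import numpy as np
      from scipy.optimize import line_search, rosen, rosen_der

      def prp_cg(f, grad, x0, iters=200, tol=1e-6):
          """Nonlinear CG with the PRP+ coefficient
          beta = max(0, g_new.(g_new - g_old) / ||g_old||^2) and a Wolfe line search."""
          x = np.asarray(x0, dtype=float)
          g = grad(x)
          d = -g
          for _ in range(iters):
              if np.linalg.norm(g) < tol:
                  break
              alpha = line_search(f, grad, x, d, gfk=g)[0]
              if alpha is None:                 # line search failed: restart with steepest descent
                  d, alpha = -g, 1e-3
              x = x + alpha * d
              g_new = grad(x)
              beta = max(0.0, g_new @ (g_new - g) / (g @ g))    # PRP+ formula
              d = -g_new + beta * d
              g = g_new
          return x

      print(prp_cg(rosen, rosen_der, np.array([-1.2, 1.0])))    # approaches [1, 1]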

  9. ATMOS/ATLAS-3 Observations of Long-Lived Tracers and Descent in the Antarctic Vortex in November 1994

    NASA Technical Reports Server (NTRS)

    Abrams, M. C.; Manney, G. L.; Gunson, M. R.; Abbas, M. M.; Chang, A. Y.; Goldman, A.; Irion, F. W.; Michelsen, H. A.; Newchurch, M. J.; Rinsland, C. P.; hide

    1996-01-01

    Observations of the long-lived tracers N2O, CH4 and HF obtained by the Atmospheric Trace Molecule Spectroscopy (ATMOS) instrument in early November 1994 are used to estimate average descent rates during winter in the Antarctic polar vortex of 0.5 to 1.5 km/month in the lower stratosphere, and 2.5 to 3.5 km/month in the middle and upper stratosphere. Descent rates inferred from ATMOS tracer observations agree well with theoretical estimates obtained using radiative heating calculations. Air of mesospheric origin (N2O less than 5 ppbV) was observed at altitudes above about 25 km within the vortex. Strong horizontal gradients of tracer mixing ratios, the presence of mesospheric air in the vortex in early spring, and the variation with altitude of inferred descent rates indicate that the Antarctic vortex is highly isolated from midlatitudes throughout the winter from approximately 20 km to the stratopause. The 1994 Antarctic vortex remained well isolated between 20 and 30 km through at least mid-November.

  10. Full-waveform inversion for the Iranian plateau

    NASA Astrophysics Data System (ADS)

    Masouminia, N.; Fichtner, A.; Rahimi, H.

    2017-12-01

    We aim to obtain a detailed tomographic model for the Iranian plateau facilitated by full-waveform inversion. By using this method, we intend to better constrain the 3-D structure of the crust and the upper mantle in the region. The Iranian plateau is a complex tectonic area resulting from the collision of the Arabian and Eurasian tectonic plates. This region is subject to complex tectonic processes such as the Makran subduction zone, which runs along the southeastern coast of Iran, and the convergence of the Arabian and Eurasian plates, which itself led to another subduction under Central Iran. This continent-continent collision has also caused shortening and crustal thickening, which can be seen today as the Zagros mountain range in the south and the Kopeh Dagh mountain range in the northeast. As a result of such tectonic activity, the crust and the mantle beneath the region are expected to be highly heterogeneous. To further our understanding of the region and its tectonic history, a detailed 3-D velocity model is required. To construct a 3-D model, we propose to use full-waveform inversion, which allows us to incorporate all types of waves recorded in the seismogram, including body waves as well as fundamental- and higher-mode surface waves. Exploiting more information from the observed data using this approach is likely to constrain features which have not been found by classical tomography studies so far. We address the forward problem using Salvus, a numerical wave propagation solver based on the spectral-element method and run on high-performance computers. The solver allows us to simulate wave fields propagating in highly heterogeneous, attenuating and anisotropic media, respecting the surface topography. To improve the model, we solve the optimization problem. The solution of this optimization problem is based on an iterative approach which employs adjoint methods to calculate the gradient and uses steepest descent and conjugate-gradient methods to minimize the objective function. Each iteration of such an approach is expected to bring the model closer to the true model. Our model domain extends between 25°N and 40°N in latitude and 42°E and 63°E in longitude. To constrain the 3-D structure of the area we use 83 broadband seismic stations and 146 earthquakes with magnitude Mw > 4.5 that occurred in the region between 2012 and 2017.

  11. 14 CFR 23.253 - High speed characteristics.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and characteristics include gust upsets, inadvertent control movements, low stick force gradients in relation to control friction, passenger movement, leveling off from climb, and descent from Mach to... normal attitude and its speed reduced to VMO/MMO, without— (1) Exceptional piloting strength or skill; (2...

  12. 14 CFR 23.253 - High speed characteristics.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and characteristics include gust upsets, inadvertent control movements, low stick force gradients in relation to control friction, passenger movement, leveling off from climb, and descent from Mach to... normal attitude and its speed reduced to VMO/MMO, without— (1) Exceptional piloting strength or skill; (2...

  13. Using a Gradient Vector to Find Multiple Periodic Oscillations in Suspension Bridge Models

    ERIC Educational Resources Information Center

    Humphreys, L. D.; McKenna, P. J.

    2005-01-01

    This paper describes how the method of steepest descent can be used to find periodic solutions of differential equations. Applications to two suspension bridge models are discussed, and the method is used to find non-obvious large-amplitude solutions.

  14. Nonuniformity correction for an infrared focal plane array based on diamond search block matching.

    PubMed

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and the adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction would be realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with less ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
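
    The parameter update itself can be sketched as an LMS-style gradient descent on per-pixel gain and offset, with the learning rate forced to zero wherever matching is judged unreliable. The block matching, and the exact gating rule based on the local standard deviation and a threshold, are not reproduced here; the registered reference frame is treated as a fixed target for simplicity.

      import numpy as np

      def nuc_update(gain, offset, frame_curr, ref_aligned, lr_map):
          """One gradient-descent update of the per-pixel linear correction
          y = gain * x + offset.  `ref_aligned` is the corrected previous frame
          registered to the current frame (assumed to come from block matching
          and treated as a fixed target); `lr_map` is the adaptive per-pixel
          learning rate, zero where matching is considered unreliable."""
          corrected = gain * frame_curr + offset
          err = corrected - ref_aligned              # corresponding scene points should agree
          gain = gain - lr_map * err * frame_curr    # gradient of 0.5*err^2 w.r.t. gain
          offset = offset - lr_map * err             # gradient of 0.5*err^2 w.r.t. offset
          return gain, offset, corrected

      # Toy usage: constant learning rate 0.01, frozen where local variation is high.
      rng = np.random.default_rng(0)
      h, w = 4, 4
      gain, offset = np.ones((h, w)), np.zeros((h, w))
      frame = rng.normal(size=(h, w))
      ref = frame.copy()                             # pretend registration is perfect
      local_std = np.abs(rng.normal(size=(h, w)))
      lr_map = np.where(local_std > 1.0, 0.0, 0.01)
      gain, offset, _ = nuc_update(gain, offset, frame, ref, lr_map)
      print(gain.round(3), offset.round(3))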

  15. Stochastic parallel gradient descent based adaptive optics used for a high contrast imaging coronagraph

    NASA Astrophysics Data System (ADS)

    Dong, Bing; Ren, De-Qing; Zhang, Xi

    2011-08-01

    An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated by an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. Then the SPGD-based AO is applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast improves from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
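
    The SPGD iteration itself is compact enough to sketch: all actuator commands receive simultaneous random bipolar perturbations, the metric is read for the positive and negative perturbations, and the commands are updated in proportion to the measured metric change times the perturbation. The gain, perturbation amplitude, iteration count, and the synthetic quadratic metric below are placeholders, not the experimental settings reported above.

      import numpy as np

      def spgd(measure_metric, n_act=140, gain=1.0, delta=0.05, iters=3000, seed=0):
          """Stochastic parallel gradient descent: perturb all actuator commands u
          by random +/-delta simultaneously, measure the metric change dJ, and
          update u by gain * dJ * perturbation (ascent on the metric J)."""
          rng = np.random.default_rng(seed)
          u = np.zeros(n_act)
          for _ in range(iters):
              du = delta * rng.choice([-1.0, 1.0], size=n_act)   # bipolar perturbation
              dJ = measure_metric(u + du) - measure_metric(u - du)
              u = u + gain * dJ * du
          return u

      # Toy usage: the "metric" peaks when the commands cancel a fixed aberration vector.
      rng = np.random.default_rng(1)
      aberration = rng.normal(size=140)
      metric = lambda u: -np.sum((u + aberration) ** 2)
      u_final = spgd(metric)
      print(metric(u_final))   # rises from about -140 toward 0 as the aberration is cancelled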

  16. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.

  17. Optimization of OT-MACH Filter Generation for Target Recognition

    NASA Technical Reports Server (NTRS)

    Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin

    2009-01-01

    An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak to side lobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter quicker and more reliably than the current manual method. Initial usage and testing has shown preliminary success at finding an approximation of the optimal filter, in terms of alpha, beta, gamma values. This corresponded to a substantial improvement in detection performance where the true positive rate increased for the same average false positives per image.

  18. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    PubMed Central

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821

  19. Cosmic Microwave Background Mapmaking with a Messenger Field

    NASA Astrophysics Data System (ADS)

    Huffenberger, Kevin M.; Næss, Sigurd K.

    2018-01-01

    We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.

  20. A new smoothing modified three-term conjugate gradient method for [Formula: see text]-norm minimization problem.

    PubMed

    Du, Shouqiang; Chen, Miao

    2018-01-01

    We consider a kind of nonsmooth optimization problem with [Formula: see text]-norm minimization, which has many applications in compressed sensing, signal reconstruction, and related engineering problems. Using smoothing approximation techniques, this kind of nonsmooth optimization problem can be transformed into a general unconstrained optimization problem, which can be solved by the proposed smoothing modified three-term conjugate gradient method. The smoothing modified three-term conjugate gradient method is based on the Polak-Ribière-Polyak conjugate gradient method. Because the Polak-Ribière-Polyak conjugate gradient method has good numerical properties, the proposed method possesses the sufficient descent property without any line search and is also proved to be globally convergent. Finally, the numerical experiments show the efficiency of the proposed method.

  1. A General Method for Solving Systems of Non-Linear Equations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.; Deiss, Ron (Technical Monitor)

    1995-01-01

    The method of steepest descent is modified so that accelerated convergence is achieved near a root. It is assumed that the function of interest can be approximated near a root by a quadratic form. An eigenvector of the quadratic form is found by evaluating the function and its gradient at an arbitrary point and another suitably selected point. The terminal point of the eigenvector is chosen to lie on the line segment joining the two points. The terminal point found lies on an axis of the quadratic form. The selection of a suitable step size at this point leads directly to the root in the direction of steepest descent in a single step. Newton's root finding method not infrequently diverges if the starting point is far from the root. However, the current method in these regions merely reverts to the method of steepest descent with an adaptive step size. The current method's performance should match that of the Levenberg-Marquardt root finding method since they both share the ability to converge from a starting point far from the root and both exhibit quadratic convergence near a root. The Levenberg-Marquardt method requires storage for coefficients of linear equations. The current method which does not require the solution of linear equations requires more time for additional function and gradient evaluations. The classic trade off of time for space separates the two methods.

  2. Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method.

    PubMed

    Bhaya, Amit; Kaszkurewicz, Eugenius

    2004-01-01

    It is pointed out that the so called momentum method, much used in the neural network literature as an acceleration of the backpropagation method, is a stationary version of the conjugate gradient method. Connections with the continuous optimization method known as heavy ball with friction are also made. In both cases, adaptive (dynamic) choices of the so called learning rate and momentum parameters are obtained using a control Liapunov function analysis of the system.
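
    The stationary momentum ("heavy ball") iteration discussed above is easy to write down for a quadratic objective; the fixed step size and momentum factor below are the classical choices expressed through the extreme eigenvalues of the Hessian, used purely for illustration.

      import numpy as np

      def heavy_ball(A, b, x0, lr, momentum, iters=100):
          """Gradient descent with momentum for f(x) = 0.5 x.A.x - b.x:
          x_{k+1} = x_k - lr * (A x_k - b) + momentum * (x_k - x_{k-1})."""
          x_prev = x = np.asarray(x0, dtype=float)
          for _ in range(iters):
              g = A @ x - b
              x_next = x - lr * g + momentum * (x - x_prev)
              x_prev, x = x, x_next
          return x

      # Toy usage: ill-conditioned quadratic with the classical heavy-ball settings.
      A = np.diag([1.0, 100.0])
      b = np.array([1.0, 1.0])
      L, mu = 100.0, 1.0
      lr = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
      momentum = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
      print(heavy_ball(A, b, np.zeros(2), lr, momentum))   # approaches A^{-1} b = [1, 0.01]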

  3. A Model for Engaging Students in a Research Experience Involving Variational Techniques, Mathematica, and Descent Methods.

    ERIC Educational Resources Information Center

    Mahavier, W. Ted

    2002-01-01

    Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…

  4. Statistical Physics for Adaptive Distributed Control

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    A viewgraph presentation on statistical physics for distributed adaptive control is shown. The topics include: 1) The Golden Rule; 2) Advantages; 3) Roadmap; 4) What is Distributed Control? 5) Review of Information Theory; 6) Iterative Distributed Control; 7) Minimizing L(q) Via Gradient Descent; and 8) Adaptive Distributed Control.

  5. Conjugate gradient filtering of instantaneous normal modes, saddles on the energy landscape, and diffusion in liquids.

    PubMed

    Chowdhary, J; Keyes, T

    2002-02-01

    Instantaneous normal modes (INM's) are calculated during a conjugate-gradient (CG) descent of the potential energy landscape, starting from an equilibrium configuration of a liquid or crystal. A small number (approximately equal to 4) of CG steps removes all the Im-omega modes in the crystal and leaves the liquid with diffusive Im-omega which accurately represent the self-diffusion constant D. Conjugate gradient filtering appears to be a promising method, applicable to any system, of obtaining diffusive modes and facilitating INM theory of D. The relation of the CG-step dependent INM quantities to the landscape and its saddles is discussed.

  6. Algorithms for Mathematical Programming with Emphasis on Bi-level Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldfarb, Donald; Iyengar, Garud

    2014-05-22

    The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.

  7. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.

  8. Controls on Explosive Eruptions along the Pacific-Antarctic Ridge

    NASA Astrophysics Data System (ADS)

    Lewis, M.; Asimow, P. D.; Lund, D. C.

    2016-12-01

    Sediment core OC170-26-159 was retrieved at 38.967°S, 111.35°W, a location that was 8-9 km away from the Pacific-Antarctic Ridge (PAR) axis at the time of Glacial Termination II (T-II), 130 ka, a period characterized by enhanced flux of hydrothermal metals to the near-ridge sediments on the East Pacific Rise (Lund et al. 2016). An interval of enhanced Ti content in OC170-26-159 during T-II is rich in basaltic glass shards that we interpret to be the products of explosive submarine volcanic eruptions. Explosive eruptions of this scale are rare at mid-ocean ridges, so we studied the glass to evaluate whether sea level driven modulation in magmatic flux might be related to the frequency of such events though emplacement of distinct compositions or volatile contents. We report major element and volatile content data for the basaltic glasses and compare the results to literature data (PetDB) from on-axis sampling of the nearest ridge segment, to assess whether the glass was derived from the ridge axis and if it is unusual compared to the axial samples. Major element compositional data show that the glasses are a nearly homogenous population (MgO 5.8 to 6.5%). The heterogeneity is similar to that in single flows in Iceland (Maclennan et al. 2003) and Hawaii (Garcia et al. 2000), but the shards are dispersed across a gradient in δ18O, suggesting a closely spaced series of similar eruptions. The glasses are more evolved than any effusively erupted basalts on the PAR, yet are consistent with the same liquid line of descent, linking the explosive products to the axial magmatic system. The MELTS thermodynamic model allows us to calculate the changes in multiple variables along the liquid line of descent between the axial and explosive liquid compositions. Comparison of H2O and CO2 contents to those from axial flows will constrain whether variations in these components are related to eruption styles. These results will constrain the connection between sea level driven variations in magma supply rate, hydrothermal activity, thermal state of the axial magma chamber, volatile exsolution, and the potential for explosive submarine eruptions.

  9. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decisionmaking under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of an indicator function, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constraint optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.

  10. A gradient system solution to Potts mean field equations and its electronic implementation.

    PubMed

    Urahama, K; Ueno, S

    1993-03-01

    A gradient system solution method is presented for solving Potts mean field equations for combinatorial optimization problems subject to winner-take-all constraints. In the proposed solution method the optimum solution is searched by using gradient descent differential equations whose trajectory is confined within the feasible solution space of optimization problems. This gradient system is proven theoretically to always produce a legal local optimum solution of combinatorial optimization problems. An elementary analog electronic circuit implementing the presented method is designed on the basis of current-mode subthreshold MOS technologies. The core constituent of the circuit is the winner-take-all circuit developed by Lazzaro et al. Correct functioning of the presented circuit is exemplified with simulations of the circuits implementing the scheme for solving the shortest path problems.

  11. Hybrid DFP-CG method for solving unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa

    2017-09-01

    The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as an approximation of the Hessian for this new hybrid algorithm. Numerical results show that the new algorithm performs better than the ordinary DFP method, and the method is proven to possess both the sufficient descent and global convergence properties.

  12. Performance comparison of a new hybrid conjugate gradient method under exact and inexact line searches

    NASA Astrophysics Data System (ADS)

    Ghani, N. H. A.; Mohamed, N. S.; Zull, N.; Shoid, S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the iterative techniques prominently used for solving unconstrained optimization problems due to its simplicity, low memory storage, and good convergence analysis. This paper presents a new hybrid conjugate gradient method, named the NRM1 method. The method is analyzed under exact and inexact line searches under given conditions. Theoretically, proofs show that the NRM1 method satisfies the sufficient descent condition with both line searches. The computational results indicate that the NRM1 method is capable of solving the standard unconstrained optimization problems used. Moreover, the NRM1 method performs better under the inexact line search than under the exact line search.

  13. Neural networks applications to control and computations

    NASA Technical Reports Server (NTRS)

    Luxemburg, Leon A.

    1994-01-01

    Several interrelated problems in the area of neural network computations are described. First, an interpolation problem is considered; then a control problem is reduced to a problem of interpolation by a neural network via a Lyapunov function approach; and finally a new learning method, faster than the gradient descent method, is introduced.

  14. Real time on-chip sequential adaptive principal component analysis for data feature extraction and image compression

    NASA Technical Reports Server (NTRS)

    Duong, T. A.

    2004-01-01

    In this paper, we present a new, simple sequential learning technique with an optimized hardware architecture for adaptive Principal Component Analysis (PCA), which helps optimize the VLSI hardware implementation and overcomes the difficulties of traditional gradient descent in learning convergence and hardware implementation.
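
    As a point of reference, the classic gradient-based sequential PCA update (Oja's rule) is sketched below; it is a textbook stand-in for sample-by-sample adaptive PCA, not the hardware-oriented algorithm proposed in the paper.

      import numpy as np

      def oja_first_component(X, lr=0.01, epochs=20, seed=0):
          """Sequential estimation of the first principal component with Oja's
          gradient-based update w <- w + lr * y * (x - y * w), where y = w.x."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          w = rng.normal(size=d)
          w /= np.linalg.norm(w)
          for _ in range(epochs):
              for i in rng.permutation(n):
                  x = X[i]
                  y = w @ x
                  w += lr * y * (x - y * w)      # Oja's rule keeps ||w|| near 1
          return w / np.linalg.norm(w)

      # Toy usage: the dominant variance direction is close to [1, 1] / sqrt(2).
      rng = np.random.default_rng(1)
      X = rng.normal(size=(500, 2)) @ np.array([[3.0, 2.9], [0.1, -0.1]])
      print(oja_first_component(X))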

  15. A modified three-term PRP conjugate gradient algorithm for optimization models.

    PubMed

    Wu, Yanlin

    2017-01-01

    The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, whose search direction has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but this proof does not extend to the Wolfe line search technique. Inspired by their method, we carry out a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property also holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.

  16. North Pacific Cloud Feedbacks Inferred from Synoptic-Scale Dynamic and Thermodynamic Relationships

    NASA Technical Reports Server (NTRS)

    Norris, Joel R.; Iacobellis, Sam F.

    2005-01-01

    This study analyzed daily satellite cloud observations and reanalysis dynamical parameters to determine how mid-tropospheric vertical velocity and advection over the sea surface temperature gradient control midlatitude North Pacific cloud properties. Optically thick clouds with high tops are generated by synoptic ascent, but two different cloud regimes occur under synoptic descent. When vertical motion is downward during summer, extensive stratocumulus cloudiness is associated with near-surface northerly wind, while frequent cloudless pixels occur with southerly wind. Examination of ship-reported cloud types indicates that midlatitude stratocumulus breaks up as the boundary layer decouples when it is advected equatorward over warmer water. Cumulus is prevalent under conditions of synoptic descent and cold advection during winter. Poleward advection of subtropical air over colder water causes stratification of the near-surface layer that inhibits upward mixing of moisture and suppresses cloudiness until a fog eventually forms. Averaging of cloud and radiation data into intervals of 500-hPa vertical velocity and advection over the SST gradient enables the cloud response to changes in temperature and the stratification of the lower troposphere to be investigated independently of the dynamics.

  17. Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent

    PubMed Central

    De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle

    2018-01-01

    Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
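
    A toy illustration of the low-precision idea is sketched below: an ordinary least-squares SGD loop in which the weight vector is kept in signed 8-bit fixed point via stochastic (unbiased) rounding after every update. The quantization scale, learning rate, and problem are arbitrary, and this is not the Buckwild!/DMGC scheme analyzed in the paper.

      import numpy as np

      def stochastic_round(x, scale, rng):
          """Quantize to signed 8-bit fixed point with stochastic (unbiased) rounding."""
          y = x / scale
          low = np.floor(y)
          q = low + (rng.random(y.shape) < (y - low))    # round up with prob frac(y)
          return np.clip(q, -128, 127) * scale

      def low_precision_sgd(X, targets, lr=0.05, scale=1.0 / 256, epochs=30, seed=0):
          """Least-squares SGD with the weights re-quantized after every update."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          w = np.zeros(d)
          for _ in range(epochs):
              for i in rng.permutation(n):
                  g = (X[i] @ w - targets[i]) * X[i]     # gradient of 0.5*(x.w - y)^2
                  w = stochastic_round(w - lr * g, scale, rng)
          return w

      # Toy usage: weights are recovered to roughly the quantization resolution.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(400, 5))
      w_true = np.array([0.4, -0.25, 0.125, 0.0, 0.3])
      targets = X @ w_true + 0.01 * rng.normal(size=400)
      print(low_precision_sgd(X, targets))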

  18. TRAGEN: Computer program to simulate an aircraft steered to follow a specified vertical profile. User's guide

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The longitudinal dynamics of a medium range twin-jet or tri-jet transport aircraft are simulated. For the climbing trajectory, the thrust is constrained to maximum value, and for descent, the thrust is set at idle. For cruise, the aircraft is held in the trim condition. For climb or descent, the aircraft is steered to follow either (a) a fixed profile which is input to the program or (b) a profile computed at the beginning of that segment of the run. For climb, the aircraft is steered to maintain the given airspeed as a function of altitude. For descent, the aircraft is steered to maintain the given altitude as a function of range-to-go. In both cases, the control variable is angle-of-attack. The given output trajectory is presented and compared with the input trajectory. Step climb is treated just as climb. For cruise, the Breguet equations are used to compute the fuel burned to achieve a given range and to connect given initial and final values of altitude and Mach number.

  19. Neutral Mass Spectrometry for Venus Atmosphere and Surface

    NASA Technical Reports Server (NTRS)

    Mahaffy, Paul

    2005-01-01

    The assignment is to make precise (better than 1 %) measurements of isotope ratios and accurate (5-10%) measurements of abundances of noble gas and to obtain vertical profiles of trace chemically active gases from above the clouds all the way down to the surface. Science measurement objectives are as follows: 1) Determine the composition of Venus atmosphere, including trace gas species and light stable isotopes; 2) Accurately measure noble-gas isotopic abundance in the atmosphere; 3) Provide descent, surface, and ascent meteorological data; 4) Measure zonal cloud-level winds over several Earth days; 5) Obtain near-IR descent images of the surface from 10-km altitude to the surface; 6) Accurately measure elemental abundances & mineralogy of a core from the surface; and 7) Evaluate the texture of surface materials to constrain weathering environment.

  20. Microbial decomposers not constrained by climate history along a Mediterranean climate gradient in southern California.

    PubMed

    Baker, Nameer R; Khalili, Banafshe; Martiny, Jennifer B H; Allison, Steven D

    2018-06-01

    Microbial decomposers mediate the return of CO2 to the atmosphere by producing extracellular enzymes to degrade complex plant polymers, making plant carbon available for metabolism. Determining if and how these decomposer communities are constrained in their ability to degrade plant litter is necessary for predicting how carbon cycling will be affected by future climate change. We analyzed mass loss, litter chemistry, microbial biomass, extracellular enzyme activities, and enzyme temperature sensitivities in grassland litter transplanted along a Mediterranean climate gradient in southern California. Microbial community composition was manipulated by caging litter within bags made of nylon membrane that prevent microbial immigration. To test whether grassland microbes were constrained by climate history, half of the bags were inoculated with local microbial communities native to each gradient site. We determined that temperature and precipitation likely interact to limit microbial decomposition in the extreme sites along our gradient. Despite their unique climate history, grassland microbial communities were not restricted in their ability to decompose litter under different climate conditions across the gradient, although microbial communities across our gradient may be restricted in their ability to degrade different types of litter. We did find some evidence that local microbial communities were optimized based on climate, but local microbial taxa that proliferated after inoculation into litterbags did not enhance litter decomposition. Our results suggest that microbial community composition does not constrain C-cycling rates under climate change in our system, but optimization to particular resource environments may act as more general constraints on microbial communities. © 2018 by the Ecological Society of America.

  1. Limited-memory fast gradient descent method for graph regularized nonnegative matrix factorization.

    PubMed

    Guan, Naiyang; Wei, Lei; Luo, Zhigang; Tao, Dacheng

    2013-01-01

    Graph regularized nonnegative matrix factorization (GNMF) decomposes a nonnegative data matrix X ∈ R^(m×n) into the product of two lower-rank nonnegative factor matrices, i.e., W ∈ R^(m×r) and H ∈ R^(r×n) (r < min{m,n}), and aims to preserve the local geometric structure of the dataset by minimizing the squared Euclidean distance or Kullback-Leibler (KL) divergence between X and WH. The multiplicative update rule (MUR) is usually applied to optimize GNMF, but it suffers from the drawback of slow convergence because it intrinsically advances one step along the rescaled negative gradient direction with a non-optimal step size. Recently, a multiple step-sizes fast gradient descent (MFGD) method has been proposed for optimizing NMF which accelerates MUR by searching for the optimal step-size along the rescaled negative gradient direction with Newton's method. However, the computational cost of MFGD is high because 1) the high-dimensional Hessian matrix is dense and costs too much memory; and 2) the Hessian inverse operator and its multiplication with the gradient cost too much time. To overcome these deficiencies of MFGD, we propose an efficient limited-memory FGD (L-FGD) method for optimizing GNMF. In particular, we apply the limited-memory BFGS (L-BFGS) method to directly approximate the multiplication of the inverse Hessian and the gradient for searching the optimal step size in MFGD. The preliminary results on real-world datasets show that L-FGD is more efficient than both MFGD and MUR. To evaluate the effectiveness of L-FGD, we validate its clustering performance for optimizing KL-divergence based GNMF on two popular face image datasets including ORL and PIE and two text corpora including Reuters and TDT2. The experimental results confirm the effectiveness of L-FGD by comparing it with the representative GNMF solvers.
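
    For context, the baseline multiplicative update rule (MUR) that the faster methods above accelerate is sketched for plain NMF with the squared Euclidean objective; the graph-regularization term of GNMF and the step-size search of MFGD/L-FGD are omitted, and the data and rank are arbitrary.

      import numpy as np

      def nmf_mur(X, r, iters=200, eps=1e-9, seed=0):
          """Multiplicative updates for min ||X - WH||_F^2 with W, H >= 0.
          Each update is a rescaled gradient step with an implicit step size."""
          rng = np.random.default_rng(seed)
          m, n = X.shape
          W = rng.random((m, r)) + eps
          H = rng.random((r, n)) + eps
          for _ in range(iters):
              H *= (W.T @ X) / (W.T @ W @ H + eps)
              W *= (X @ H.T) / (W @ H @ H.T + eps)
          return W, H

      # Toy usage: a rank-3 nonnegative matrix is recovered to low reconstruction error.
      rng = np.random.default_rng(1)
      X = rng.random((30, 3)) @ rng.random((3, 40))
      W, H = nmf_mur(X, r=3)
      print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))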

  2. Research on particle swarm optimization algorithm based on optimal movement probability

    NASA Astrophysics Data System (ADS)

    Ma, Jianhong; Zhang, Han; He, Baofeng

    2017-01-01

    The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in training neural networks, fuzzy system control, and related fields. When the traditional particle swarm algorithm is used to train feedforward neural networks, however, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is proposed. Particles are ranked by fitness so that the optimization problem is considered as a whole, and the BP neural network is trained with error back-propagation gradient descent. Each particle updates its velocity and position according to its individual best and the global best, with more weight given to learning from the social (global) best and less to its individual best, which helps the particles avoid falling into local optima; using gradient information accelerates the local search ability of PSO and improves its search efficiency. Simulation results show that in the initial stage the algorithm converges rapidly toward the global optimal solution and then remains close to it, and that it achieves faster convergence and better search performance within the same running time, improving in particular the efficiency of the later search stage.

  3. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  4. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
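    The entropy-constrained design described above rests on a Lagrangian cost of the form J = D + λH, trading average distortion against entropy. A minimal sketch of the corresponding encoding rule for a single stage is given below; the code book, code lengths and λ are placeholders, and the full EC-RVQ design iterates such steps jointly across stages.

      import numpy as np

      def ec_encode(x, codebook, code_lengths, lam):
          """Entropy-constrained codeword choice: minimize distortion + lambda * rate.

          codebook: (K, d) array of stage codewords; code_lengths: (K,) entropy-coded
          lengths in bits; lam trades average distortion against entropy.
          """
          dist = np.sum((codebook - x) ** 2, axis=1)   # squared-error distortion
          cost = dist + lam * code_lengths             # Lagrangian cost per codeword
          return int(np.argmin(cost))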

  5. Shape optimisation of an underwater Bernoulli gripper

    NASA Astrophysics Data System (ADS)

    Flint, Tim; Sellier, Mathieu

    2015-11-01

    In this work, we are interested in maximising the suction produced by an underwater Bernoulli gripper. Bernoulli grippers work by exploiting low pressure regions caused by the acceleration of a working fluid through a narrow channel, between the gripper and a surface, to provide a suction force. This mechanism allows for non-contact adhesion to various surfaces and may be used, for example, to hold a robot to the hull of a ship while it inspects welds. A Bernoulli-type pressure analysis was used to model the system, with a Darcy friction factor approximation to include the effects of frictional losses. The analysis involved a constrained optimisation in order to avoid cavitation within the mechanism, which would result in decreased performance and damage to surfaces. A sensitivity-based method and gradient descent approach were used to find the optimum shape of a discretised surface. The model's accuracy has been quantified against a finite volume computational fluid dynamics simulation (ANSYS CFX) using the k-ω SST turbulence model. Preliminary results indicate a significant improvement in suction force compared to a simple geometry, achieved by retaining a pressure just above that at which cavitation would occur over as much surface area as possible.

  6. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.

  7. Ptychographic overlap constraint errors and the limits of their numerical recovery using conjugate gradient descent methods.

    PubMed

    Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G

    2014-01-27

    Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample are inverted numerically to retrieve its image. The technique recovers the phase lost by detecting the diffraction patterns by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image of the sample is limited by the angular extent over which the diffraction patterns are recorded and how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, based on the signal to noise of the diffraction patterns and amount of overlap between adjacent scan positions, of just how large these errors can be and still be rendered tractable by this method.

  8. Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1987-01-01

    This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
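    For a planar two-link arm, the contrast drawn above between first-order and second-order searches can be made concrete: both minimize the squared distance between the desired and actual tip position, the first stepping along the negative gradient J^T e and the second taking Gauss-Newton steps in which J^T J stands in for the Hessian, as in the Gauss-Markov approximation. The sketch below is a generic illustration of that contrast, not the recursive filtering/smoothing implementation of the paper.

      import numpy as np

      def tip_and_jacobian(q, l1=1.0, l2=0.8):
          """Tip position and Jacobian of a planar two-link arm (illustrative model)."""
          s1, c1 = np.sin(q[0]), np.cos(q[0])
          s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
          p = np.array([l1 * c1 + l2 * c12, l1 * s1 + l2 * s12])
          J = np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                        [ l1 * c1 + l2 * c12,  l2 * c12]])
          return p, J

      def solve_ik(p_des, q0, use_gauss_newton=True, alpha=0.5, iters=100):
          """Drive the tip to p_des by steepest descent or Gauss-Newton steps."""
          q = np.asarray(q0, dtype=float).copy()
          for _ in range(iters):
              p, J = tip_and_jacobian(q)
              e = p - p_des                          # tip position error
              grad = J.T @ e                         # gradient of 0.5 * ||e||^2
              if use_gauss_newton:
                  # J^T J approximates the Hessian (products of first derivatives).
                  q -= np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), grad)
              else:
                  q -= alpha * grad                  # first-order steepest descent
          return q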

  9. Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.

    PubMed

    Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter

    2012-08-01

    An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is very susceptible to nonlinearities such as atmospheric turbulence, model uncertainties and, of course, system failures. These systems therefore represent a sensible testbed for evaluating fault-tolerant, adaptive flight control strategies. Within this work the concept of feedback linearization is combined with feed forward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaption laws of the network weights are used for online training. Within these adaption laws the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. While considering the system's stability, this robust online learning method therefore offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. An online supervised learning method based on gradient descent for spiking neurons.

    PubMed

    Xu, Yan; Yang, Jing; Zhong, Shuiming

    2017-09-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. Gradient-descent-based (GDB) learning methods are widely used and verified in current research. Although the existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs the error function and calculates the adjustment of the synaptic weights as soon as the neurons emit a spike during their running process. In the calculation of the weight adjustment, we analyze and synthesize the desired and actual output spikes to select appropriate input spikes. The experimental results show that our method obviously improves learning performance compared with the offline learning manner and has a certain advantage in learning accuracy compared with other learning methods. The stronger learning ability means that the method has a large pattern storage capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Product Distribution Theory for Control of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Lee, Chia Fan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MAS's). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the (probability distribution of the) joint state of the agents. Accordingly we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.

  12. Sequentially reweighted TV minimization for CT metal artifact reduction.

    PubMed

    Zhang, Xiaomeng; Xing, Lei

    2013-07-01

    Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where the weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
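    The reweighting step at the heart of such schemes can be sketched simply: after each weighted-TV solve, new weights are computed from the current image gradients so that small gradients are penalized more strongly, promoting sparsity of the IG. The weighting below is one common choice and is only an illustration; the paper controls sparsity through a single parameter in its own weighting function.

      import numpy as np

      def reweight_from_image(u, eps=1e-3):
          """Compute TV weights from the current image u (2D array).

          Weights are inversely proportional to the local gradient magnitude, so
          edges (large gradients) are penalized less on the next reweighted solve;
          eps avoids division by zero and tunes how aggressively sparsity is promoted.
          """
          gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences in x
          gy = np.diff(u, axis=0, append=u[-1:, :])   # forward differences in y
          grad_mag = np.sqrt(gx ** 2 + gy ** 2)
          return 1.0 / (grad_mag + eps)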

  13. Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L

    2017-10-01

    The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than the mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.

  14. Assessment of patient functional performance in different knee arthroplasty designs during unconstrained squat

    PubMed Central

    Verdini, Federica; Zara, Claudio; Leo, Tommaso; Mengarelli, Alessandro; Cardarelli, Stefano; Innocenti, Bernardo

    2017-01-01

    Summary Background In this paper, the squat, termed unconstrained by the authors because it is performed without constraints on foot position, speed, or the maximum knee angle to be reached, was tested as a motor task revealing differences in functional performance after knee arthroplasty. It involves large joint ranges of motion, does not compromise joint safety and requires accurate control strategies to maintain balance. Methods Motion capture techniques were used to study the squat in a healthy control group (CTR) and in three groups, each characterised by a specific knee arthroplasty design: a Total Knee Arthroplasty (TKA), and a Mobile Bearing and a Fixed Bearing Unicompartmental Knee Arthroplasty (MBUA and FBUA, respectively). The squat was analysed during the descent, maintenance and ascent phases and described by speed, angular kinematics of the lower and upper body, the Center of Pressure (CoP) trajectory and the muscle activation timing of the quadriceps and biceps femoris. Results Compared to CTR, knee maximum flexion was lower for TKA and MBUA, vertical speed during descent and ascent was reduced, and the duration of the whole movement was longer. CoP mean distance was higher for all arthroplasty groups during descent, as was CoP mean velocity for MBUA and TKA during ascent and descent. Conclusions The unconstrained squat is able to reveal differences in functional performance between control and arthroplasty groups and between different arthroplasty designs. Considering the similarity index calculated for the variables showing statistical significance, FBUA performance appears to be closest to that of the CTR group. Level of evidence III a. PMID:29387646

  15. A Fast Deep Learning System Using GPU

    DTIC Science & Technology

    2014-06-01

    ... widely used in data modeling until three decades later, when an efficient training algorithm for RBM was invented by Hinton [3] and the computing power ... be trained using most optimization algorithms, such as BP, conjugate gradient descent (CGD) or Levenberg-Marquardt (LM). The advantage of this ...

  16. Nonlinear Performance Seeking Control using Fuzzy Model Reference Learning Control and the Method of Steepest Descent

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    1997-01-01

    Performance Seeking Control (PSC) attempts to find and control the process at the operating condition that will generate maximum performance. In this paper a nonlinear multivariable PSC methodology will be developed, utilizing the Fuzzy Model Reference Learning Control (FMRLC) and the method of Steepest Descent or Gradient (SDG). This PSC control methodology employs the SDG method to find the operating condition that will generate maximum performance. This operating condition is in turn passed to the FMRLC controller as a set point for the control of the process. The conventional SDG algorithm is modified in this paper in order for convergence to occur monotonically. For the FMRLC control, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for effective tuning of the FMRLC controller.
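    One standard way to make a steepest-descent search decrease the objective monotonically is to backtrack on the step size until a sufficient-decrease (Armijo) condition holds. The sketch below shows that generic safeguard; it is offered only as an illustration of the idea and is not the specific SDG modification developed in this paper.

      import numpy as np

      def sdg_step(f, grad_f, x, step=1.0, shrink=0.5, c=1e-4):
          """One steepest-descent step with Armijo backtracking.

          The step is halved until f decreases by at least c * step * ||g||^2,
          so the sequence of objective values is monotonically non-increasing.
          """
          g = grad_f(x)
          fx = f(x)
          while f(x - step * g) > fx - c * step * (g @ g):
              step *= shrink
              if step < 1e-12:          # give up; the gradient may be (numerically) zero
                  return x
          return x - step * g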

  17. Local structure of equality constrained NLP problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mari, J.

    We show that locally around a feasible point, the behavior of an equality constrained nonlinear program is described by the gradient and the Hessian of the Lagrangian on the tangent subspace. In particular this holds true for reduced gradient approaches. Applying the same ideas to the control of nonlinear ODEs, one can devise first and second order methods that can be applied also to stiff problems. We finally describe an application of these ideas to the optimization of the production of human growth factor by fed-batch fermentation.
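    For linear equality constraints Ax = b, the tangent subspace is the null space of A, and the first-order part of the picture above amounts to projecting the gradient onto that subspace before descending; second-order methods additionally use the Lagrangian Hessian restricted to the same subspace. The sketch below shows the first-order (reduced/projected gradient) step under these assumptions.

      import numpy as np

      def projected_gradient_step(grad, A, x, step=0.1):
          """One reduced-gradient step for: minimize f(x) subject to A x = b.

          P = I - A^T (A A^T)^{-1} A projects onto the tangent subspace (the null
          space of A), so the iterate stays feasible if x is feasible to begin with.
          """
          P_grad = grad - A.T @ np.linalg.solve(A @ A.T, A @ grad)
          return x - step * P_grad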

  18. Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr

    2013-02-15

    The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally, in order to perform numerical experiments, we minimize, in the discrete setting, a regularized version of this functional by a fast gradient descent scheme.

  19. On Nonconvex Decentralized Gradient Descent

    DTIC Science & Technology

    2016-08-01

  20. Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation

    NASA Astrophysics Data System (ADS)

    Zhou, XueFei

    2018-04-01

    With the development of computer technology, the applications of machine learning are more and more extensive, and machine learning is providing endless opportunities to develop new applications. One of those applications is image recognition using Convolutional Neural Networks (CNNs). The CNN is one of the most common algorithms in image recognition, and it is significant for every scholar interested in this field to understand its theory and structure. CNNs are mainly used in computer-based recognition, especially in voice and text recognition and other application areas. They utilize a hierarchical structure with different layers to accelerate computing speed. In addition, the greatest features of CNNs are weight sharing and dimension reduction, and all of these consolidate the high effectiveness and efficiency of CNNs with ideal computing speed and error rate. With the help of other learning algorithms, CNNs can be used in several scenarios for machine learning, especially for deep learning. Based on a general introduction to the background and the core solution, the CNN, this paper focuses on summarizing how Gradient Descent and Backpropagation work and how they contribute to the high performance of CNNs. Some practical applications are also discussed in the following parts. The last section exhibits the conclusion and some perspectives on future work.

  1. Adaptive ISAR Imaging of Maneuvering Targets Based on a Modified Fourier Transform.

    PubMed

    Wang, Binbin; Xu, Shiyou; Wu, Wenzhen; Hu, Pengjiang; Chen, Zengping

    2018-04-27

    Focusing on the inverse synthetic aperture radar (ISAR) imaging of maneuvering targets, this paper presents a new imaging method which works well when the target's maneuvering is not too severe. After translational motion compensation, we describe the equivalent rotation of maneuvering targets by two variables: the relative chirp rate of the linear frequency modulated (LFM) signal and the Doppler focus shift. The first variable indicates the target's motion status, and the second one represents the possible residual error of the translational motion compensation. With them, a modified Fourier transform matrix is constructed and then used for cross-range compression. Consequently, the imaging of maneuvering targets is converted into a two-dimensional parameter optimization problem in which a stable and clear ISAR image is guaranteed. A gradient descent optimization scheme is employed to obtain the accurate relative chirp rate and Doppler focus shift. Moreover, we designed an efficient and robust initialization process for the gradient descent method; thus, well-focused ISAR images of maneuvering targets can be achieved adaptively. Human intervention is not needed, and it is quite convenient for practical ISAR imaging systems. Compared to precedent imaging methods, the new method achieves better imaging quality under reasonable computational cost. Simulation results are provided to validate the effectiveness and advantages of the proposed method.

  2. Applying Gradient Descent in Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Cui, Nan

    2018-04-01

    With the development of integrated circuits and computer science, people care more and more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest within AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN), will be introduced. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A Convolutional Neural Network is an artificial neural network which combines the mathematical method of convolution with a neural network. The hierarchical structure of the CNN provides it with reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, by combining the Back Propagation (BP) mechanism with the Gradient Descent (GD) method, CNNs have the ability to self-study and perform in-depth learning. Basically, BP provides backward feedback for enhancing reliability and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
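    To make the BP/GD interplay concrete, the toy example below trains a single sigmoid unit with plain gradient descent: the backward pass applies the chain rule to obtain the weight gradients, and the update steps against them. This is a deliberately minimal sketch (one layer, full-batch updates) rather than a CNN; the data and hyperparameters are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((100, 3))                      # toy inputs
      y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float).reshape(-1, 1)

      W = 0.1 * rng.standard_normal((3, 1))                  # single-layer weights
      b = np.zeros((1, 1))
      lr = 0.5                                               # gradient descent step size

      for epoch in range(200):
          z = X @ W + b                                      # forward pass (linear)
          p = 1.0 / (1.0 + np.exp(-z))                       # sigmoid activation
          loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
          grad_z = (p - y) / len(X)                          # backpropagated error signal
          grad_W = X.T @ grad_z                              # chain rule through the linear layer
          grad_b = grad_z.sum(axis=0, keepdims=True)
          W -= lr * grad_W                                   # gradient descent updates
          b -= lr * grad_b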

  3. In Situ Missions For Investigation of the Climate, Geology and Evolution of Venus

    NASA Astrophysics Data System (ADS)

    Grinspoon, David

    2017-10-01

    In situ exploration of Venus has been recommended by the Decadal Study of the National Research Council. Many high-priority measurements addressing outstanding first-order, fundamental questions about the current processes and evolution of Venus can only be made from in situ platforms such as entry probes, balloons or landers. These include: measuring noble gases and their isotopes to constrain origin and evolution; measuring stable isotopes to constrain the history of water and other volatiles; measuring trace gas profiles and sulfur compounds to study chemical cycles and surface-atmosphere interactions; constraining the coupling of radiation, dynamics and chemistry; taking visible and infrared descent images; and measuring surface and sub-surface composition. Such measurements will allow us to deepen our understanding of the origin and evolution of Venus in the context of the terrestrial planets and extrasolar planets, to determine the level and style of current geological activity, and to characterize the divergent climate evolution of Venus and Earth and extend our knowledge of the limits of habitability on hot terrestrial planets.

  4. New hybrid conjugate gradient methods with the generalized Wolfe line search.

    PubMed

    Xu, Xiao; Kong, Fan-Yu

    2016-01-01

    The conjugate gradient method is an efficient technique for solving unconstrained optimization problems. In this paper, we make a linear combination, with parameters βk, of the DY method and the HS method, and put forward the hybrid DY-HS method. We also propose a hybrid of FR and PRP by the same means. Additionally, to support the two hybrid methods, we generalize the Wolfe line search to compute their step sizes αk. With the new Wolfe line search, the descent property and the global convergence property of the two hybrid methods can also be proved.
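    To recall the ingredient formulas: with gradient g_{k+1}, previous gradient g_k, y_k = g_{k+1} - g_k and previous search direction d_k, the HS and DY parameters are β^HS = g_{k+1}^T y_k / (d_k^T y_k) and β^DY = ||g_{k+1}||^2 / (d_k^T y_k). One simple way to combine them, shown below purely as an illustration and not as the authors' exact rule, is a convex combination with a mixing parameter θ.

      import numpy as np

      def hybrid_beta(g_new, g_old, d_old, theta=0.5, eps=1e-12):
          """Convex combination of the HS and DY conjugate gradient parameters.

          theta = 0 recovers HS and theta = 1 recovers DY; the paper's hybrid rule
          may choose the combination differently.
          """
          y = g_new - g_old
          denom = d_old @ y + eps
          beta_hs = (g_new @ y) / denom
          beta_dy = (g_new @ g_new) / denom
          return (1.0 - theta) * beta_hs + theta * beta_dy

      def next_direction(g_new, g_old, d_old, theta=0.5):
          """New search direction d_{k+1} = -g_{k+1} + beta_k * d_k."""
          return -g_new + hybrid_beta(g_new, g_old, d_old, theta) * d_old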

  5. A modified conjugate gradient coefficient with inexact line search for unconstrained optimization

    NASA Astrophysics Data System (ADS)

    Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa

    2016-11-01

    The conjugate gradient (CG) method is a line search algorithm mostly known for its wide application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization functions. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods. The results obtained indicate that our method is indeed superior.

  6. Frigatebird behaviour at the ocean-atmosphere interface: integrating animal behaviour with multi-satellite data.

    PubMed

    De Monte, Silvia; Cotté, Cedric; d'Ovidio, Francesco; Lévy, Marina; Le Corre, Matthieu; Weimerskirch, Henri

    2012-12-07

    Marine top predators such as seabirds are useful indicators of the integrated response of the marine ecosystem to environmental variability at different scales. Large-scale physical gradients constrain seabird habitat. Birds however respond behaviourally to physical heterogeneity at much smaller scales. Here, we use, for the first time, three-dimensional GPS tracking of a seabird, the great frigatebird (Fregata minor), in the Mozambique Channel. These data, which provide at the same time high-resolution vertical and horizontal positions, allow us to relate the behaviour of frigatebirds to the physical environment at the (sub-)mesoscale (10-100 km, days-weeks). Behavioural patterns are classified based on the birds' vertical displacement (e.g. fast/slow ascents and descents), and are overlaid on maps of physical properties of the ocean-atmosphere interface, obtained by a nonlinear analysis of multi-satellite data. We find that frigatebirds modify their behaviours concurrently to transport and thermal fronts. Our results suggest that the birds' co-occurrence with these structures is a consequence of their search not only for food (preferentially searched over thermal fronts) but also for upward vertical wind. This is also supported by their relationship with mesoscale patterns of wind divergence. Our multi-disciplinary method can be applied to forthcoming high-resolution animal tracking data, and aims to provide a mechanistic understanding of animals' habitat choice and of marine ecosystem responses to environmental change.

  7. Dependence of image quality on image operator and noise for optical diffusion tomography

    NASA Astrophysics Data System (ADS)

    Chang, Jenghwa; Graber, Harry L.; Barbour, Randall L.

    1998-04-01

    By applying linear perturbation theory to the radiation transport equation, the inverse problem of optical diffusion tomography can be reduced to a set of linear equations, Wμ = R, where W is the weight function, μ are the cross-section perturbations to be imaged, and R are the perturbations of the detector readings. We have studied the dependence of image quality on added systematic error and/or random noise in W and R. Tomographic data were collected from cylindrical phantoms, with and without added inclusions, using Monte Carlo methods. Image reconstruction was accomplished using a constrained conjugate gradient descent method. Results show that accurate images containing few artifacts are obtained when W is derived from a reference state whose optical thickness matches that of the unknown test medium. Comparable image quality was also obtained for unmatched W, but the location of the target becomes more inaccurate as the mismatch increases. Results of the noise study show that image quality is much more sensitive to noise in W than in R, and the impact of noise increases with the number of iterations. Images reconstructed after pure noise was substituted for R consistently contain large peaks clustered about the cylinder axis, which was an initially unexpected structure. In other words, random input produces a non-random output. This finding suggests that algorithms sensitive to the evolution of this feature could be developed to suppress noise effects.
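    The linearized system Wμ = R described above can be solved by a constrained descent scheme in which each least-squares gradient step is followed by projection onto physically admissible (here, non-negative) perturbations. The projected-gradient sketch below conveys the idea; the study itself used a constrained conjugate gradient descent method, for which this is only a simplified stand-in.

      import numpy as np

      def reconstruct_mu(W, R, iters=200, step=None):
          """Projected gradient descent for min ||W mu - R||^2 subject to mu >= 0."""
          mu = np.zeros(W.shape[1])
          if step is None:
              # 1/L step size, with L the Lipschitz constant of the least-squares gradient.
              step = 1.0 / np.linalg.norm(W, 2) ** 2
          for _ in range(iters):
              grad = W.T @ (W @ mu - R)                    # gradient of the data-fit term
              mu = np.maximum(mu - step * grad, 0.0)       # descend, then project onto mu >= 0
          return mu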

  8. Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.

    PubMed

    Hui, Zhuo; Sankaranarayanan, Aswin C

    2017-10-01

    This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.

  9. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.

    PubMed

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.

  10. Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models

    PubMed Central

    Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou

    2015-01-01

    Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409

  11. A mesh gradient technique for numerical optimization

    NASA Technical Reports Server (NTRS)

    Willis, E. A., Jr.

    1973-01-01

    A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory are considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.

  12. River suspended sediment estimation by climatic variables implication: Comparative study among soft computing techniques

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Shiri, Jalal

    2012-06-01

    Estimating the sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration in rivers by using hydro-meteorological data. The daily rainfall, streamflow and suspended sediment concentration data from the Eel River near Dos Rios, California, USA are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. The Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models. Of the three algorithms, the conjugate gradient algorithm was found to be better than the others.

  13. Learning Structured Classifiers with Dual Coordinate Ascent

    DTIC Science & Technology

    2010-06-01

    stochastic gradient descent (SGD) [LeCun et al., 1998], and the margin infused relaxed algorithm (MIRA) [Crammer et al., 2006]. This paper presents a ... evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005 ... between two kinds of factors: hard constraint factors, which are used to rule out forbidden partial assignments by mapping them to zero potential values

  14. A network of spiking neurons for computing sparse representations in an energy efficient way

    PubMed Central

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.

    2013-01-01

    Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853

  15. A network of spiking neurons for computing sparse representations in an energy-efficient way.

    PubMed

    Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B

    2012-11-01

    Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.

  16. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we have developed an algorithm in an infinite dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was applied in order to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.

  17. Design and Evaluation of the Terminal Area Precision Scheduling and Spacing System

    NASA Technical Reports Server (NTRS)

    Swenson, Harry N.; Thipphavong, Jane; Sadovsky, Alex; Chen, Liang; Sullivan, Chris; Martin, Lynne

    2011-01-01

    This paper describes the design, development and results from a high fidelity human-in-the-loop simulation of an integrated set of trajectory-based automation tools providing precision scheduling, sequencing and controller merging and spacing functions. These integrated functions are combined into a system called the Terminal Area Precision Scheduling and Spacing (TAPSS) system. It is a strategic and tactical planning tool that provides Traffic Management Coordinators and En Route and Terminal Radar Approach Control air traffic controllers the ability to efficiently optimize the arrival capacity of a demand-impacted airport while simultaneously enabling fuel-efficient descent procedures. The TAPSS system consists of four-dimensional trajectory prediction, arrival runway balancing, aircraft separation constraint-based scheduling, traffic flow visualization and trajectory-based advisories to assist controllers in efficient metering, sequencing and spacing. The TAPSS system was evaluated and compared to today's ATC operations through an extensive series of human-in-the-loop simulations for arrival flows into Los Angeles International Airport. The test conditions included the variation of aircraft demand from a baseline of today's capacity-constrained periods through 5%, 10% and 20% increases. Performance data were collected for engineering and human factors analysis and compared with similar operations both with and without the TAPSS system. The engineering data indicate that operations with TAPSS show up to a 10% increase in airport throughput during capacity-constrained periods while maintaining fuel-efficient aircraft descent profiles from cruise to landing.

  18. Medical image registration by combining global and local information: a chain-type diffeomorphic demons algorithm.

    PubMed

    Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong

    2013-12-07

    The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm were derived from image gradients by considering the deformation to decrease the intensity dissimilarity between images. However, methods using the difference of image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so the difference in gradient magnitude images can be regarded as more reliable and more robust to these artifacts. Registering medical images by considering the differences in both image intensity and gradient magnitude is therefore a natural choice. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm that combines the differences in both image intensity and gradient magnitude for medical image registration. Previous work has shown that the classic demons algorithm can be considered an approximation of a second order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces which are derived from the gradients of the image and of the gradient magnitude image. We show that, in controlled experiments, this advantage is confirmed and yields fast convergence.
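    For reference, the classic intensity-based demons force that such methods build on has the pixel-wise form u = (m - f) ∇f / (|∇f|^2 + (m - f)^2), with f the fixed image and m the moving image. The sketch below evaluates that classic force; the paper's chain-type variant would add an analogous force computed on the gradient-magnitude images, which is not shown here.

      import numpy as np

      def demons_force(fixed, moving, eps=1e-9):
          """Classic intensity-based demons update field for 2D images.

          Returns a displacement-like field (uy, ux) that pushes the moving image
          towards the fixed image in proportion to their intensity difference.
          """
          diff = moving - fixed
          gy, gx = np.gradient(fixed)                     # spatial gradient of the fixed image
          denom = gx ** 2 + gy ** 2 + diff ** 2 + eps     # regularized demons denominator
          return diff * gy / denom, diff * gx / denom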

  19. Dynamic metrology and data processing for precision freeform optics fabrication and testing

    NASA Astrophysics Data System (ADS)

    Aftab, Maham; Trumper, Isaac; Huang, Lei; Choi, Heejoo; Zhao, Wenchuan; Graves, Logan; Oh, Chang Jin; Kim, Dae Wook

    2017-06-01

    Dynamic metrology holds the key to overcoming several challenging limitations of conventional optical metrology, especially with regards to precision freeform optical elements. We present two dynamic metrology systems: 1) adaptive interferometric null testing; and 2) instantaneous phase shifting deflectometry, along with an overview of a gradient data processing and surface reconstruction technique. The adaptive null testing method, utilizing a deformable mirror, adopts a stochastic parallel gradient descent search algorithm in order to dynamically create a null testing condition for unknown freeform optics. The single-shot deflectometry system implemented on an iPhone uses a multiplexed display pattern to enable dynamic measurements of time-varying optical components or optics in vibration. Experimental data, measurement accuracy / precision, and data processing algorithms are discussed.
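    The adaptive-null-testing loop above relies on stochastic parallel gradient descent (SPGD): apply a small random perturbation to the deformable-mirror actuator commands, measure the resulting change in the null metric, and update the commands in proportion to the product of the perturbation and the metric change. The code below is a generic illustration of that loop; the metric function, gain and perturbation amplitude are assumptions rather than the instrument's actual parameters.

      import numpy as np

      def spgd(metric, u0, gain=0.5, amp=0.05, iters=500, rng=np.random.default_rng()):
          """Stochastic parallel gradient descent on actuator commands u.

          metric(u) returns the scalar null-test error to be minimized; each
          iteration uses a two-sided random perturbation to estimate the metric's
          directional change and steps against it.
          """
          u = np.asarray(u0, dtype=float).copy()
          for _ in range(iters):
              du = amp * rng.choice([-1.0, 1.0], size=u.shape)   # Bernoulli perturbation
              dJ = metric(u + du) - metric(u - du)               # two-sided metric change
              u -= gain * dJ * du                                # move against the estimated slope
          return u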

  20. Optimal control of switching time in switched stochastic systems with multi-switching times and different costs

    NASA Astrophysics Data System (ADS)

    Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian

    2017-08-01

    In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multiple switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.

  1. A Relation Between the Eikonal Equation Associated to a Potential Energy Surface and a Hyperbolic Wave Equation.

    PubMed

    Bofill, Josep Maria; Quapp, Wolfgang; Caballero, Marc

    2012-12-11

    The potential energy surface (PES) of a molecule can be decomposed into equipotential hypersurfaces. We show in this article that the hypersurfaces are the wave fronts of a certain hyperbolic partial differential equation, a wave equation. It is connected with the gradient lines, or the steepest descent, or the steepest ascent lines of the PES. The energy seen as a reaction coordinate plays the central role in this treatment.

  2. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

    Feng, Wensen; Qiao, Peng; Chen, Yunjin; Wensen Feng; Peng Qiao; Yunjin Chen; Feng, Wensen; Chen, Yunjin; Qiao, Peng

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. Therefore, in this paper we aim to propose an efficient Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable in this paper. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, meanwhile bearing the properties of simple structure and high efficiency. Furthermore, our proposed model comes along with an additional advantage, that the diffusion process is well-suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.
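    Proximal gradient descent, as used above, alternates a gradient step on the smooth data term with a proximal step on the (possibly nonsmooth) penalty. A generic sketch with an l1 penalty, whose proximal operator is soft thresholding, is shown below; the actual TNRD model uses trained filters and influence functions in place of this simple penalty.

      import numpy as np

      def proximal_gradient(grad_f, x0, lam, step, iters=100):
          """ISTA-style proximal gradient descent for min f(x) + lam * ||x||_1."""
          x = np.asarray(x0, dtype=float).copy()
          for _ in range(iters):
              z = x - step * grad_f(x)                                  # gradient step on the smooth term
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding proximal step
          return x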

  3. Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.

    PubMed

    Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2017-05-01

    Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.

  4. A different approach to estimate nonlinear regression model using numerical methods

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper is concerned with computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the steepest descent or steepest ascent algorithm, the method of scoring, and the method of quadratic hill-climbing), based on numerical analysis, for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager-Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
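    For concreteness, writing the residual vector as r(θ) = y - f(X, θ) with Jacobian J(θ) = ∂f/∂θ, the Gauss-Newton iteration discussed above takes the form

        \theta_{k+1} = \theta_k + \left( J(\theta_k)^{\top} J(\theta_k) \right)^{-1} J(\theta_k)^{\top} r(\theta_k),

    whereas the steepest descent/ascent algorithms replace the inverted matrix by a scalar step size, Newton-Raphson uses the full Hessian of the objective, and the method of scoring replaces that Hessian by its expected value (the information matrix).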

  5. Spectral edge: gradient-preserving spectral mapping for image fusion.

    PubMed

    Connah, David; Drew, Mark S; Finlayson, Graham D

    2015-12-01

    This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.

  6. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  7. Reconstructing the Surface Permittivity Distribution from Data Measured by the CONSERT Instrument aboard Rosetta: Method and Simulations

    NASA Astrophysics Data System (ADS)

    Plettemeier, D.; Statz, C.; Hegler, S.; Herique, A.; Kofman, W. W.

    2014-12-01

    One of the main scientific objectives of the Comet Nucleus Sounding Experiment by Radiowave Transmission (CONSERT) aboard Rosetta is to perform a dielectric characterization of the nucleus of comet 67P/Churyumov-Gerasimenko by means of a bi-static sounding between the lander Philae, launched onto the comet's surface, and the orbiter Rosetta. For the sounding, the lander part of CONSERT will receive and process the radio signal emitted by the orbiter part of the instrument and transmit a signal to the orbiter to be received by CONSERT. CONSERT will also be operated as a bi-static RADAR during the descent of the lander Philae onto the comet's surface. From data measured during the descent, we aim at reconstructing a surface permittivity map of the comet at the landing site and along the path below the descent trajectory. This surface permittivity map will give information on the bulk material right below and around the landing site and on the surface roughness in areas covered by the instrument along the descent. The proposed method to estimate the surface permittivity distribution is based on a least-squares inversion approach in the frequency domain. The direct problem of simulating the wave propagation between lander and orbiter at line of sight, together with the signal reflected from the comet's surface, is modelled using a dielectric physical optics approximation. Restrictions on the measurement positions imposed by the descent orbitography and limitations on the instrument dynamic range are dealt with by applying a regularization technique in which the surface permittivity distribution and the gradient with regard to the permittivity are projected into a domain defined by a viable model of the spatial material and roughness distribution. The least-squares optimization step of the reconstruction is performed in this domain on a reduced set of parameters, yielding stable results. The viability of the proposed method is demonstrated by reconstruction results based on simulated data.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
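
    The optimization ingredient shared by this record and the next can be pictured with a small stand-in: nonnegative beamlet weights updated by gradient descent with momentum against a quadratic dose objective. The dose-influence matrix, target dose, and step sizes below are invented for illustration; in the paper the gradient is estimated from very few Monte Carlo histories rather than from an explicit precalculated matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beamlets = 200, 30
A = rng.random((n_vox, n_beamlets))      # hypothetical dose-influence matrix
d = rng.random(n_vox)                    # hypothetical target dose

w = np.full(n_beamlets, 0.5)             # fluence weights (must stay >= 0)
v = np.zeros_like(w)                     # momentum buffer
lr, beta = 2e-4, 0.9

for it in range(5000):
    grad = 2.0 * A.T @ (A @ w - d)       # gradient of ||A w - d||^2
    grad /= np.linalg.norm(grad) + 1e-12 # crude rescaling/renormalization stand-in
    v = beta * v - lr * grad             # momentum smooths the noisy direction
    w = np.maximum(w + v, 0.0)           # keep fluence weights nonnegative

print("final objective:", np.sum((A @ w - d) ** 2))
```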

  9. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo

    PubMed Central

    Svatos, M.; Zankowski, C.; Bednarz, B.

    2016-01-01

    Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead. PMID:27277051

  10. Tiltrotor noise reduction through flight trajectory management and aircraft configuration control

    NASA Astrophysics Data System (ADS)

    Gervais, Marc

    A tiltrotor can hover, take off and land vertically as well as cruise at high speeds and fly long distances. Because of these unique capabilities, tiltrotors are envisioned as aircraft that could help relieve airport gridlock by operating from stub runways, helipads, or smaller regional airports. However, during an approach to land, a tiltrotor is susceptible to radiating strong impulsive noise, in particular Blade-Vortex Interaction (BVI) noise, a phenomenon highly dependent on the vehicle's performance state. A mathematical model was developed to predict the quasi-static performance characteristics of a tiltrotor during a converting approach in the longitudinal plane. Additionally, a neural network was designed to model the acoustic results from a flight test of the XV-15 tiltrotor as a function of the aircraft's performance parameters. The performance model was linked to the neural network to yield a combined performance/acoustic model that is capable of predicting tiltrotor noise emitted during a decelerating approach. The model was then used to study noise trends associated with different combinations of airspeed, nacelle tilt, and flight path angle. It showed that BVI noise is the dominant noise source during a descent and that its strength increases with steeper descent angles. Strong BVI noise was observed at very steep flight path angles, suggesting that the tiltrotor's high downwash prevents the wake from being pushed above the rotor, even at such steep descent angles. The model was used to study the effects of various aircraft configuration and flight trajectory parameters on the rotor inflow, and it adequately captured the measured BVI noise trends. Flight path management effectively constrained the rotor inflow during a converting approach and thus limited the strength of BVI noise. The maximum deceleration was also constrained by controlling the nacelle tilt-rate during conversion. By applying these constraints, low BVI noise approaches that take into account the first-order effects of deceleration on the acoustics were systematically designed and compared to a baseline approach profile. The low-noise approaches yielded substantial noise reduction benefits on a hemisphere surrounding the aircraft and on a ground plane below the aircraft's trajectory.

  11. Estimation of River Bathymetry from ATI-SAR Data

    NASA Astrophysics Data System (ADS)

    Almeida, T. G.; Walker, D. T.; Farquharson, G.

    2013-12-01

    A framework for estimation of river bathymetry from surface velocity observation data is presented using variational inverse modeling applied to the 2D depth-averaged, shallow-water equations (SWEs) including bottom friction. We start with a cost function defined by the error between observed and estimated surface velocities, and introduce the SWEs as a constraint on the velocity field. The constrained minimization problem is converted to an unconstrained minimization through the use of Lagrange multipliers, and an adjoint SWE model is developed. The adjoint model solution is used to calculate the gradient of the cost function with respect to river bathymetry. The gradient is used in a descent algorithm to determine the bathymetry that yields a surface velocity field that is a best-fit to the observational data. In applying the algorithm, the 2D depth-averaged flow is computed assuming a known, constant discharge rate and a known, uniform bottom-friction coefficient; a correlation relating surface velocity and depth-averaged velocity is also used. Observation data were collected using a dual-beam squinted along-track-interferometric, synthetic-aperture radar (ATI-SAR) system, which provides two independent components of the surface velocity, oriented roughly 30 degrees fore and aft of broadside, offering high-resolution bank-to-bank velocity vector coverage of the river. Data and bathymetry estimation results are presented for two rivers, the Snohomish River near Everett, WA, and the upper Sacramento River, north of Colusa, CA. The algorithm results are compared to available measured bathymetry data, with favorable results. General trends show that the water-depth estimates are most accurate in shallow regions, and performance is sensitive to the accuracy of the specified discharge rate and bottom friction coefficient. The results also indicate that, for a given reach, the estimated water depth reaches a maximum that is smaller than the true depth; this apparent maximum depth scales with the true river depth and discharge rate, so that the deepest parts of the river show the largest bathymetry errors.
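
    Once the adjoint model supplies the gradient of the cost function with respect to the depth field, the outer descent loop is straightforward. The sketch below shows only that loop, with a one-dimensional placeholder forward model and a finite-difference gradient standing in for the adjoint solution; all names and parameter values are illustrative.

```python
import numpy as np

def surface_velocity(depth, discharge=100.0):
    """Placeholder forward model: deeper cells carry slower surface flow.
    Stands in for the 2D shallow-water solve used in the paper (width = 10 m)."""
    return discharge / (depth * 10.0)

def cost(depth, u_obs):
    return 0.5 * np.sum((surface_velocity(depth) - u_obs) ** 2)

def gradient_fd(depth, u_obs, h=1e-5):
    """Finite-difference gradient, standing in for the adjoint-derived gradient."""
    g = np.zeros_like(depth)
    for i in range(depth.size):
        dp = depth.copy(); dp[i] += h
        g[i] = (cost(dp, u_obs) - cost(depth, u_obs)) / h
    return g

true_depth = np.linspace(1.0, 4.0, 20)
u_obs = surface_velocity(true_depth)           # synthetic "observed" velocities

depth = np.full(20, 2.0)                       # initial bathymetry guess
for it in range(500):
    depth -= 5e-3 * gradient_fd(depth, u_obs)  # descent step
    depth = np.clip(depth, 0.1, None)          # keep depths physical

print("max depth error:", np.max(np.abs(depth - true_depth)))
```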

  12. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  13. The Double Star Orbit Initial Value Problem

    NASA Astrophysics Data System (ADS)

    Hensley, Hagan

    2018-04-01

    Many precise algorithms exist to find a best-fit orbital solution for a double star system given a good enough initial value. Desmos is an online graphing calculator tool with extensive capabilities to support animations and defining functions. It can provide a useful visual means of analyzing double star data to arrive at a best guess approximation of the orbital solution. This is a necessary requirement before using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.

  14. Learning in Modular Systems

    DTIC Science & Technology

    2010-05-07

    important for deep modular systems is that taking a series of small update steps and stopping before convergence, so-called early stopping, is a form of regularization around the initial parameters of the system. For example, the stochastic gradient descent ... Aside from the overall speed of the classifier, no quantitative performance analysis was given, and the role played by the features in the larger system

  15. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.

  16. Differential Frequency Hopping (DFH) Modulation for Underwater Acoustic Communications and Networking

    DTIC Science & Technology

    2009-10-09

    trains the coefficients c of a finite impulse response (FIR) filter by gradient descent. The coefficients at iteration k + 1 are computed with the update ... absorption. Figure 9 shows the reflection loss as a function of grazing angle for this bottom model. Note that below 30° this bottom model predicts ... less than 1 dB loss per ray bounce. (Figure 9: Jackson bottom reflection loss for sand at 15 kHz.) The absorption loss in the medium was
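
    The coefficient update referred to in the snippet is the classic LMS (stochastic gradient descent) rule for an adaptive FIR filter. A minimal sketch, with invented filter length and signals, is:

```python
import numpy as np

rng = np.random.default_rng(1)
n_taps, n_samples, mu = 8, 5000, 0.01

h_true = rng.standard_normal(n_taps)                  # unknown channel to identify
x = rng.standard_normal(n_samples)                    # input signal
d = np.convolve(x, h_true, mode="full")[:n_samples]   # desired signal (channel output)

c = np.zeros(n_taps)                                  # adaptive FIR coefficients
for k in range(n_taps, n_samples):
    x_k = x[k - n_taps + 1:k + 1][::-1]               # most recent taps, newest first
    e = d[k] - c @ x_k                                # instantaneous error
    c = c + mu * e * x_k                              # gradient-descent (LMS) update

print("coefficient error:", np.linalg.norm(c - h_true))
```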

  17. Microbial Decomposers Not Constrained by Climate History Along a Mediterranean Climate Gradient

    NASA Astrophysics Data System (ADS)

    Baker, N. R.; Khalili, B.; Martiny, J. B. H.; Allison, S. D.

    2017-12-01

    The return of organic carbon to the atmosphere through terrestrial decomposition is mediated through the breakdown of complex organic polymers by extracellular enzymes produced by microbial decomposer communities. Determining if and how these decomposer communities are constrained in their ability to degrade plant litter is necessary for predicting how carbon cycling will be affected by future climate change. To address this question, we deployed fine-pore nylon mesh "microbial cage" litterbags containing grassland litter with and without local inoculum across five sites in southern California, spanning a gradient of 10.3-22.8 °C in mean annual temperature and 100-400+ mm mean annual precipitation. Litterbags were deployed in October 2014 and collected four times over the course of 14 months. Recovered litter was assayed for mass loss, litter chemistry, microbial biomass, extracellular enzymes (Vmax and Km), and enzyme temperature sensitivities. We hypothesized that grassland litter would decompose most rapidly in the grassland site, and that access to local microbial communities would enhance litter decomposition rates and microbial activity in the other sites along the gradient. We determined that temperature and precipitation likely interact to limit microbial decomposition in the extreme sites along our gradient. Despite their unique climate history, grassland microbes were not restricted in their ability to decompose litter under different climate conditions. Although we observed a strong correlation between bacterial biomass and mass loss across the gradient, litter that was inoculated with local microbial communities lost less mass despite having greater bacterial biomass and potentially accumulating more microbial residues. Our results suggest that microbial community composition may not constrain C-cycling rates under climate change in our system. However, there may be community constraints on decomposition if climate change alters litter chemistry, a mechanism only indirectly addressed by our design.

  18. Acceleration of Convergence to Equilibrium in Markov Chains by Breaking Detailed Balance

    NASA Astrophysics Data System (ADS)

    Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes

    2017-07-01

    We analyse and interpret the effects of breaking detailed balance on the convergence to equilibrium of conservative interacting particle systems and their hydrodynamic scaling limits. For finite systems of interacting particles, we review existing results showing that irreversible processes converge faster to their steady state than reversible ones. We show how this behaviour appears in the hydrodynamic limit of such processes, as described by macroscopic fluctuation theory, and we provide a quantitative expression for the acceleration of convergence in this setting. We give a geometrical interpretation of this acceleration, in terms of currents that are antisymmetric under time-reversal and orthogonal to the free energy gradient, which act to drive the system away from states where (reversible) gradient-descent dynamics result in slow convergence to equilibrium.

  19. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
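
    The flavor of a second-order in time dissipative gradient system can be conveyed on a toy functional: discretize x'' + η x' = -∇F(x) with a simple semi-implicit step. The quadratic test functional, damping, and step size below are illustrative; the paper's actual contributions, the dynamically selected regularization parameter and the damped symplectic scheme for the PDE source problem, are not reproduced here.

```python
import numpy as np

# Toy regularized least-squares functional F(x) = 0.5*||K x - y||^2 + 0.5*alpha*||x||^2
rng = np.random.default_rng(2)
K = rng.standard_normal((30, 10))
y = rng.standard_normal(30)
alpha = 0.1

def grad_F(x):
    return K.T @ (K @ x - y) + alpha * x

x = np.zeros(10)          # state of the second-order flow
v = np.zeros(10)          # "velocity" of the second-order flow
eta, dt = 1.0, 0.02       # damping and time step (illustrative values)

for k in range(5000):
    v = v + dt * (-grad_F(x) - eta * v)   # update velocity first (semi-implicit)
    x = x + dt * v                        # then advance the state

print("residual norm:", np.linalg.norm(K @ x - y))
```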

  20. Algorithm for Training a Recurrent Multilayer Perceptron

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.

    2004-01-01

    An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
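
    The recursive ingredient, carrying the sensitivity of the current prediction with respect to the previous prediction into the gradient, can be seen in a scalar toy predictor y_t = a*y_{t-1} + b*u_t trained by gradient descent. This is only a sketch of that idea, not the RMLP algorithm itself; the data and learning rate are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 400
u = rng.standard_normal(T)                 # exogenous input
a_true, b_true = 0.8, 0.5
y = np.zeros(T)
for t in range(1, T):                      # synthetic, slightly noisy system to learn
    y[t] = a_true * y[t - 1] + b_true * u[t] + 0.01 * rng.standard_normal()

a, b, lr = 0.0, 0.0, 0.02
for epoch in range(800):
    yhat, dy_da, dy_db = 0.0, 0.0, 0.0
    ga, gb = 0.0, 0.0
    for t in range(1, T):
        # recursion: the prediction is fed back, so its sensitivities are too
        dy_da = yhat + a * dy_da
        dy_db = u[t] + a * dy_db
        yhat = a * yhat + b * u[t]         # predict from the previous *prediction*
        err = yhat - y[t]
        ga += err * dy_da
        gb += err * dy_db
    a -= lr * ga / T                       # gradient-descent parameter updates
    b -= lr * gb / T

print("learned a, b:", round(a, 3), round(b, 3))
```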

  1. Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient.

    PubMed

    Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai

    2016-06-10

    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low resolution intensity images corresponding to the sub-bands of the sample's high resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which would largely degrade the reconstruction. To efficiently address these degenerations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes Poisson maximum likelihood for better signal modeling, and truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. Also, we have released our source code for non-commercial use.

  2. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the best-known PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete to fulfill their "own interests"; on this scale, the basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper, we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural-network PSA method into a PCA method.
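
    For reference, the classical SLA update that the proposed method modifies can be written in a few lines. The data, dimensions, and learning rate below are illustrative; this is the plain subspace rule, not the two-time-scale TOHM variant.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, eta = 10, 3, 0.01

# Data with a dominant 3-dimensional principal subspace (first three axes).
C = np.diag([5.0, 4.0, 3.0] + [0.1] * (n - 3))
X = rng.standard_normal((5000, n)) @ np.sqrt(C)

W = rng.standard_normal((n, m)) * 0.1      # weights: columns span the subspace estimate
for x in X:
    y = W.T @ x                            # neuron outputs
    W += eta * (np.outer(x, y) - W @ np.outer(y, y))   # SLA (Oja subspace) rule

# Check: the learned columns should roughly span the top-3 eigenvector subspace.
U = np.eye(n)[:, :3]                       # true principal subspace
P = W @ np.linalg.pinv(W)                  # projector onto span(W)
print("subspace alignment:", np.trace(U.T @ P @ U) / 3)   # ideally close to 1
```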

  3. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  4. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE PAGES

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; ...

    2016-07-13

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  5. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar

    2016-07-01

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  6. Application of Artificial Neural Networks in the Design and Optimization of a Nanoparticulate Fingolimod Delivery System Based on Biodegradable Poly(3-Hydroxybutyrate-Co-3-Hydroxyvalerate).

    PubMed

    Shahsavari, Shadab; Rezaie Shirmard, Leila; Amini, Mohsen; Abedin Dokoosh, Farid

    2017-01-01

    Formulation of a nanoparticulate Fingolimod delivery system based on biodegradable poly(3-hydroxybutyrate-co-3-hydroxyvalerate) was optimized using artificial neural networks (ANNs). The concentrations of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) and PVA and the amount of Fingolimod were considered as input values, and the particle size, polydispersity index, loading capacity, and entrapment efficacy as output data in the experimental design study. An in vitro release study was carried out for the best formulation according to the statistical analysis. ANNs were employed to generate the best model for determining the relationships between these values. To identify the model with the best accuracy and proficiency for the in vitro release, a multilayer perceptron with different training algorithms was examined. Three training algorithms, Levenberg-Marquardt (LM), gradient descent, and Bayesian regularization, were employed for training the ANN models. It is demonstrated that the predictive ability of the training algorithms is in the order LM > gradient descent > Bayesian regularization. The optimum formulation was achieved with the LM training function using 15 hidden layers and 20 neurons. The transfer functions of the hidden layer and the output layer for this formulation were tansig and purelin, respectively. The optimization process was carried out by minimizing the error between the predicted and observed values of the training algorithm (about 0.0341). Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  7. Piecewise convexity of artificial neural networks.

    PubMed

    Rister, Blaine; Rubin, Daniel L

    2017-10-01

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Manifold regularized discriminative nonnegative matrix factorization with fast gradient descent.

    PubMed

    Guan, Naiyang; Tao, Dacheng; Luo, Zhigang; Yuan, Bo

    2011-07-01

    Nonnegative matrix factorization (NMF) has become a popular data-representation method and has been widely used in image processing and pattern-recognition problems. This is because the learned bases can be interpreted as a natural parts-based representation of data, and this interpretation is consistent with the psychological intuition of combining parts to form a whole. For practical classification tasks, however, NMF ignores both the local geometry of data and the discriminative information of different classes. In addition, existing research results show that the learned basis is not necessarily parts-based because there is neither an explicit nor an implicit constraint to ensure that the representation is parts-based. In this paper, we introduce manifold regularization and margin maximization to NMF and obtain the manifold regularized discriminative NMF (MD-NMF) to overcome the aforementioned problems. The multiplicative update rule (MUR) can be applied to optimizing MD-NMF, but it converges slowly. In this paper, we propose a fast gradient descent (FGD) method to optimize MD-NMF. FGD contains a Newton method that searches for the optimal step length, and thus FGD converges much faster than MUR. In addition, FGD includes MUR as a special case and can be applied to optimizing NMF and its variants. For a problem with 165 samples in R^1600, FGD converges in 28 s, while MUR requires 282 s. We also apply FGD in a variant of MD-NMF and experimental results confirm its efficiency. Experimental results on several face image datasets suggest the effectiveness of MD-NMF.
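
    For context, the baseline multiplicative update rule (MUR) for Frobenius-norm NMF, which MD-NMF and FGD build on, looks as follows; the matrix sizes are arbitrary and this is the standard Lee-Seung update, not the paper's FGD step-length search.

```python
import numpy as np

rng = np.random.default_rng(5)
V = rng.random((100, 80))            # nonnegative data matrix
r, eps = 10, 1e-9                    # factorization rank, numerical guard

W = rng.random((100, r))
H = rng.random((r, 80))

for it in range(200):
    # Multiplicative updates for min ||V - W H||_F^2 subject to W, H >= 0
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("reconstruction error:", np.linalg.norm(V - W @ H))
```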

  9. Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.

    PubMed

    Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen

    2016-07-27

    Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution of the data. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee on the approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g., K-SVD, do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, decomposition coefficients attained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL can achieve better approximation for sparse representation than K-SVD and maintain both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in terms of convergence speed and computational complexity, and its upper bound for the prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC and HEVC as well as existing super-resolution based methods in rate-distortion performance and visual quality.
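
    The core online update, draw one sample, sparse-code it, take a stochastic gradient step on the dictionary, and renormalize the atoms, can be sketched as below. The ISTA coding step, the random training data, and all sizes are stand-ins for illustration and do not reproduce the paper's 3-D spatio-temporal volumes or its structured-sparsity guarantees.

```python
import numpy as np

rng = np.random.default_rng(6)
n_atoms, dim, lam, eta = 32, 64, 0.1, 0.05

D = rng.standard_normal((dim, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

def sparse_code(x, D, n_iter=50):
    """A few ISTA iterations for the lasso coding step (stand-in for the paper's solver)."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of D^T D
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)   # soft threshold
    return a

for step in range(2000):
    x = rng.standard_normal(dim)               # one randomly drawn training patch/volume
    a = sparse_code(x, D)
    D -= eta * np.outer(D @ a - x, a)          # stochastic gradient step on 0.5*||x - D a||^2
    D /= np.linalg.norm(D, axis=0) + 1e-12     # renormalize atoms

print("dictionary shape:", D.shape)
```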

  10. Mars Stratigraphy Mission

    NASA Technical Reports Server (NTRS)

    Budney, C. J.; Miller, S. L.; Cutts, J. A.

    2000-01-01

    The Mars Stratigraphy Mission lands a rover on the surface of Mars which descends down a cliff in Valles Marineris to study the stratigraphy. The rover carries a unique complement of instruments to analyze and age-date materials encountered during descent past 2 km of strata. The science objective for the Mars Stratigraphy Mission is to identify the geologic history of the layered deposits in the Valles Marineris region of Mars. This includes constraining the time interval for formation of these deposits by measuring the ages of various layers and determining the origin of the deposits (volcanic or sedimentary) by measuring their composition and imaging their morphology.

  11. Crosswind Shear Gradient Affect on Wake Vortices

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Ahmad, Nashat N.

    2011-01-01

    Parametric simulations with a Large Eddy Simulation (LES) model are used to explore the influence of crosswind shear on aircraft wake vortices. Previous studies based on field measurements, laboratory experiments, as well as LES, have shown that the vertical gradient of crosswind shear, i.e. the second vertical derivative of the environmental crosswind, can influence wake vortex transport. The presence of nonlinear vertical shear of the crosswind velocity can reduce the descent rate, causing a wake vortex pair to tilt and change in its lateral separation. The LES parametric studies confirm that the vertical gradient of crosswind shear does influence vortex trajectories. The parametric results also show that vortex decay from the effects of shear are complex since the crosswind shear, along with the vertical gradient of crosswind shear, can affect whether the lateral separation between wake vortices is increased or decreased. If the separation is decreased, the vortex linking time is decreased, and a more rapid decay of wake vortex circulation occurs. If the separation is increased, the time to link is increased, and at least one of the vortices of the vortex pair may have a longer life time than in the case without shear. In some cases, the wake vortices may never link.

  12. Efficient spectral computation of the stationary states of rotating Bose-Einstein condensates by preconditioned nonlinear conjugate gradient methods

    NASA Astrophysics Data System (ADS)

    Antoine, Xavier; Levitt, Antoine; Tang, Qinglin

    2017-08-01

    We propose a preconditioned nonlinear conjugate gradient method coupled with a spectral spatial discretization scheme for computing the ground states (GS) of rotating Bose-Einstein condensates (BEC), modeled by the Gross-Pitaevskii Equation (GPE). We first start by reviewing the classical gradient flow (also known as imaginary time (IMT)) method which considers the problem from the PDE standpoint, leading to numerically solve a dissipative equation. Based on this IMT equation, we analyze the forward Euler (FE), Crank-Nicolson (CN) and the classical backward Euler (BE) schemes for linear problems and recognize classical power iterations, allowing us to derive convergence rates. By considering the alternative point of view of minimization problems, we propose the preconditioned steepest descent (PSD) and conjugate gradient (PCG) methods for the GS computation of the GPE. We investigate the choice of the preconditioner, which plays a key role in the acceleration of the convergence process. The performance of the new algorithms is tested in 1D, 2D and 3D. We conclude that the PCG method outperforms all the previous methods, most particularly for 2D and 3D fast rotating BECs, while being simple to implement.

  13. The Spatial Distribution of Attention within and across Objects

    PubMed Central

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.; Vecera, Shaun P.

    2011-01-01

    Attention operates to select both spatial locations and perceptual objects. However, the specific mechanism by which attention is oriented to objects is not well understood. We examined the means by which object structure constrains the distribution of spatial attention (i.e., a “grouped array”). Using a modified version of the Egly et al. object cuing task, we systematically manipulated within-object distance and object boundaries. Four major findings are reported: 1) spatial attention forms a gradient across the attended object; 2) object boundaries limit the distribution of this gradient, with the spread of attention constrained by a boundary; 3) boundaries within an object operate similarly to across-object boundaries: we observed object-based effects across a discontinuity within a single object, without the demand to divide or switch attention between discrete object representations; and 4) the gradient of spatial attention across an object directly modulates perceptual sensitivity, implicating a relatively early locus for the grouped array representation. PMID:21728455

  14. Bulk diffusion in a kinetically constrained lattice gas

    NASA Astrophysics Data System (ADS)

    Arita, Chikashi; Krapivsky, P. L.; Mallick, Kirone

    2018-03-01

    In the hydrodynamic regime, the evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with density-dependent diffusion coefficient encapsulating all microscopic details of the dynamics. This diffusion coefficient is, in principle, determined by a Green-Kubo formula. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be computed except when a lattice gas additionally satisfies the gradient condition. We develop a procedure to systematically obtain analytical approximations for the diffusion coefficient for non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn which is a version of the Green-Kubo formula particularly suitable for diffusive lattice gases. Restricting the variational formula to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply this approach to a kinetically constrained non-gradient lattice gas in two dimensions, viz. to the Kob-Andersen model on the square lattice.

  15. Efficient two-dimensional compressive sensing in MIMO radar

    NASA Astrophysics Data System (ADS)

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

    Compressive sensing (CS) has been used to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent (2D-MMDGD) has much lower computational complexity than one-dimensional (1D) methods while performing better than conventional approaches such as a Gaussian random measurement matrix.
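
    The measurement-matrix design idea, drive the off-diagonal entries of the Gram matrix of a column-normalized sensing matrix toward zero by gradient descent, can be illustrated in its simplest one-dimensional form; the sizes and step size below are invented and the 2D/MIMO structure of the paper is ignored.

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, lr = 20, 60, 1e-3

Phi = rng.standard_normal((m, n))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm columns

def mutual_coherence(Phi):
    G = Phi.T @ Phi
    return np.max(np.abs(G - np.diag(np.diag(G))))

print("initial coherence:", mutual_coherence(Phi))
for it in range(3000):
    G = Phi.T @ Phi
    E = G - np.diag(np.diag(G))            # off-diagonal part of the Gram matrix
    grad = 4.0 * Phi @ E                   # gradient of the sum of squared off-diagonals
    Phi -= lr * grad                       # descent step
    Phi /= np.linalg.norm(Phi, axis=0)     # keep columns unit-norm
print("final coherence:  ", mutual_coherence(Phi))
```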

  16. Statistical Mechanics of Node-perturbation Learning with Noisy Baseline

    NASA Astrophysics Data System (ADS)

    Hara, Kazuyuki; Katahira, Kentaro; Okada, Masato

    2017-02-01

    Node-perturbation learning is a type of statistical gradient descent algorithm that can be applied to problems where the objective function is not explicitly formulated, including reinforcement learning. It estimates the gradient of the objective function from the change in the objective function in response to a perturbation. The value of the objective function for an unperturbed output is called the baseline. Cho et al. proposed node-perturbation learning with a noisy baseline. In this paper, we report on building the statistical mechanics of Cho's model and on deriving coupled differential equations of order parameters that depict the learning dynamics. We also show how to derive the generalization error by solving the differential equations of the order parameters. On the basis of the results, we show that Cho's results also apply in general cases, and we characterize some general performance properties of Cho's model.
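
    Node-perturbation learning itself is easy to state in code: perturb the outputs, compare the objective with a (noisy) baseline, and move the weights against the correlated change. The linear teacher-student setup below is a toy illustration, not the statistical-mechanics analysis of the paper; all sizes and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n_in, n_out = 20, 3
B = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)   # teacher (defines the objective)
W = np.zeros((n_out, n_in))                               # student weights

sigma, lr, noise = 0.1, 0.2, 0.01

def objective(y, y_target):
    return 0.5 * np.sum((y - y_target) ** 2)

for step in range(30000):
    x = rng.standard_normal(n_in) / np.sqrt(n_in)
    y_target = B @ x
    y0 = W @ x
    xi = rng.standard_normal(n_out)                  # output perturbation
    # baseline measured with additive noise, as in the noisy-baseline setting
    J_base = objective(y0, y_target) + noise * rng.standard_normal()
    J_pert = objective(y0 + sigma * xi, y_target)
    # stochastic estimate of the gradient direction from the perturbation
    W -= lr * (J_pert - J_base) / sigma * np.outer(xi, x)

print("relative teacher-student error:", np.linalg.norm(W - B) / np.linalg.norm(B))
```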

  17. Development and implementation of (Q)SAR modeling within the CHARMMing web-user interface.

    PubMed

    Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R

    2015-01-05

    The recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure-activity relationship and quantitative structure-activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machine, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from PubChem BioAssay data collections directly from our interface or upload his or her own SD files, which contain structures and activity information, to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. © 2014 Wiley Periodicals, Inc.

  18. Frequency-domain beamformers using conjugate gradient techniques for speech enhancement.

    PubMed

    Zhao, Shengkui; Jones, Douglas L; Khoo, Suiyang; Man, Zhihong

    2014-09-01

    A multiple-iteration constrained conjugate gradient (MICCG) algorithm and a single-iteration constrained conjugate gradient (SICCG) algorithm are proposed to realize the widely used frequency-domain minimum-variance-distortionless-response (MVDR) beamformers and the resulting algorithms are applied to speech enhancement. The algorithms are derived based on the Lagrange method and the conjugate gradient techniques. The implementations of the algorithms avoid any form of explicit or implicit autocorrelation matrix inversion. Theoretical analysis establishes formal convergence of the algorithms. Specifically, the MICCG algorithm is developed based on a block adaptation approach and it generates a finite sequence of estimates that converge to the MVDR solution. For limited data records, the estimates of the MICCG algorithm are better than the conventional estimators and equivalent to the auxiliary vector algorithms. The SICCG algorithm is developed based on a continuous adaptation approach with a sample-by-sample updating procedure and the estimates asymptotically converge to the MVDR solution. An illustrative example using synthetic data from a uniform linear array is studied and an evaluation on real data recorded by an acoustic vector sensor array is demonstrated. Performance of the MICCG algorithm and the SICCG algorithm are compared with the state-of-the-art approaches.
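
    The central idea, obtaining the MVDR weights w = R^{-1}a / (a^H R^{-1} a) without ever inverting the covariance matrix, can be illustrated by running plain conjugate-gradient iterations on R x = a and then normalizing. The sketch below is a generic stand-in, not the MICCG/SICCG algorithms derived in the paper; the array geometry and signal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
n_sensors, n_snap = 8, 200

def steering(theta_deg):
    """Steering vector of a uniform linear array with half-wavelength spacing."""
    k = np.arange(n_sensors)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

# Simulated snapshots: desired source at 0 deg, interferer at 40 deg, plus noise.
sig = steering(0.0)[:, None] * rng.standard_normal(n_snap)
intf = 3.0 * steering(40.0)[:, None] * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))
x = sig + intf + noise
R = x @ x.conj().T / n_snap + 1e-3 * np.eye(n_sensors)   # loaded sample covariance

def cg_solve(R, b, n_iter=50, tol=1e-10):
    """Conjugate gradient for a Hermitian positive-definite R; no matrix inversion."""
    x = np.zeros_like(b)
    r = b - R @ x
    p = r.copy()
    rs = np.real(r.conj() @ r)
    for _ in range(n_iter):
        Rp = R @ p
        alpha = rs / np.real(p.conj() @ Rp)
        x += alpha * p
        r -= alpha * Rp
        rs_new = np.real(r.conj() @ r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

a = steering(0.0)
t = cg_solve(R, a)
w = t / (a.conj() @ t)            # MVDR weights: distortionless response w^H a = 1
print("response to look direction:", np.abs(w.conj() @ a))
print("response to interferer:    ", np.abs(w.conj() @ steering(40.0)))
```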

  19. Adaptive conversion of a high-order mode beam into a near-diffraction-limited beam.

    PubMed

    Zhao, Haichuan; Wang, Xiaolin; Ma, Haotong; Zhou, Pu; Ma, Yanxing; Xu, Xiaojun; Zhao, Yijun

    2011-08-01

    We present a new method for efficiently transforming a high-order mode beam into a nearly Gaussian beam with much higher beam quality. The method is based on modulating the phases of the different lobes with a stochastic parallel gradient descent algorithm and coherently adding them after phase flattening. We demonstrate the method by transforming an LP11 mode into a nearly Gaussian beam. The experimental results reveal that the power in the diffraction-limited bucket in the far field is increased by more than a factor of 1.5.
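
    The stochastic parallel gradient descent (SPGD) loop behind the phase flattening can be illustrated on a toy coherent-combining metric (normalized power of the coherent sum of a few lobes). The gain, perturbation size, and beamlet model below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)
n_lobes = 4                                     # e.g. lobes of a high-order mode to co-phase
phi_true = rng.uniform(0, 2 * np.pi, n_lobes)   # unknown phase offsets between lobes

def metric(phi_ctrl):
    """Far-field bucket power when control phases are applied to each lobe."""
    field = np.sum(np.exp(1j * (phi_true + phi_ctrl)))
    return np.abs(field) ** 2 / n_lobes ** 2    # equals 1.0 when perfectly co-phased

phi = np.zeros(n_lobes)           # control phases (e.g. on a segmented phase modulator)
gain, delta = 0.8, 0.1

for it in range(2000):
    d = delta * rng.choice([-1.0, 1.0], size=n_lobes)   # random parallel perturbation
    j_plus = metric(phi + d)
    j_minus = metric(phi - d)
    phi += gain * (j_plus - j_minus) * d                # SPGD update (ascent on the metric)

print("combining efficiency:", round(metric(phi), 3))
```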

  20. Evaluation of gravitational gradients generated by Earth's crustal structures

    NASA Astrophysics Data System (ADS)

    Novák, Pavel; Tenzer, Robert; Eshagh, Mehdi; Bagherbandi, Mohammad

    2013-02-01

    Spectral formulas for the evaluation of gravitational gradients generated by the Earth's upper mass components are presented in the manuscript. The spectral approach allows for numerical evaluation of global gravitational gradient fields that can be used to constrain gravitational gradients either synthesised from global gravitational models or directly measured by the spaceborne gradiometer on board the GOCE satellite mission. Gravitational gradients generated by static atmospheric, topographic and continental ice masses are evaluated numerically based on available global models of Earth's topography, bathymetry and continental ice sheets. CRUST2.0 data are then applied for the numerical evaluation of gravitational gradients generated by mass density contrasts within soft and hard sediments and the upper, middle and lower crust layers. Combined gravitational gradients are compared to disturbing gravitational gradients derived from a global gravitational model and an idealised Earth model represented by the geocentric homogeneous biaxial ellipsoid GRS80. The methodology could be used for improved modelling of the Earth's inner structure.

  1. Constraints on Lateral S Wave Velocity Gradients around the Pacific Superplume

    NASA Astrophysics Data System (ADS)

    To, A.; Romanowicz, B.

    2006-12-01

    Global shear velocity tomographic models show two large-scale low velocity structures in the lower mantle, under southern Africa and under the mid-Pacific. While tomographic models show the shape of the structures, the gradient and amplitude of the anomalies are yet to be constrained. By forward modelling of Sdiffracted phases using the Coupled Spectral Element Method (C-SEM, Capdeville et al., 2003), we have previously shown that observed secondary phases following the Sdiff can be explained by interaction of the wavefield with sharp boundaries of the superplumes in the south Indian and south Pacific oceans (To et al., 2005). Here, we search for further constraints on velocity gradients at the border of the Pacific superplume all around the Pacific using a multi-step approach applied to a large dataset of Sdiffracted travel times and waveforms which are sensitive to the lowermost mantle. We first apply our finite frequency tomographic inversion methodology (NACT, Li and Romanowicz, 1996), which provides a good starting 3D model; in particular, it allows us to position the fast and slow anomalies and their boundaries quite well, as has been shown previously, but underestimates the gradients and velocity contrasts. We then perform forward modelling of Sdiff travel times, taking into account finite frequency effects, to refine the velocity contrasts and gradients and provide the next-iteration 3D model. We then perform forward modelling of waveforms, down to a frequency of 0.06 Hz, using C-SEM, which provides final adjustments to the model. We present a model which shows that we can constrain sharp gradients on the southern and northern edges of the Pacific Superplume. To, A., B. Romanowicz, Y. Capdeville and N. Takeuchi (2005) 3D effects of sharp boundaries at the borders of the African and Pacific Superplumes: Observation and modeling. Earth and Planetary Science Letters, 233: 137-153. Capdeville, Y., A. To and B. Romanowicz (2003) Coupling spectral elements and modes in a spherical earth: an extension to the "sandwich" case. Geophys. J. Int., 154: 44-57. Li, X.D. and B. Romanowicz (1996) Global mantle shear velocity model developed using nonlinear asymptotic coupling theory, J. Geophys. Res., 101, 22,245-22,273.

  2. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.

  3. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.

  4. Separating figure from ground with a parallel network.

    PubMed

    Kienker, P K; Sejnowski, T J; Hinton, G E; Schumacher, L E

    1986-01-01

    The differentiation of figure from ground plays an important role in the perceptual organization of visual stimuli. The rapidity with which we can discriminate the inside from the outside of a figure suggests that at least this step in the process may be performed in visual cortex by a large number of neurons in several different areas working together in parallel. We have attempted to simulate this collective computation by designing a network of simple processing units that receives two types of information: bottom-up input from the image containing the outlines of a figure, which may be incomplete, and a top-down attentional input that biases one part of the image to be the inside of the figure. No presegmentation of the image was assumed. Two methods for performing the computation were explored: gradient descent, which seeks locally optimal states, and simulated annealing, which attempts to find globally optimal states by introducing noise into the computation. For complete outlines, gradient descent was faster, but the range of input parameters leading to successful performance was very narrow. In contrast, simulated annealing was more robust: it worked over a wider range of attention parameters and a wider range of outlines, including incomplete ones. Our network model is too simplified to serve as a model of human performance, but it does demonstrate that one global property of outlines can be computed through local interactions in a parallel network. Some features of the model, such as the role of noise in escaping from nonglobal optima, may generalize to more realistic models.

  5. Intelligence system based classification approach for medical disease diagnosis

    NASA Astrophysics Data System (ADS)

    Sagir, Abdu Masanawa; Sathasivam, Saratha

    2017-08-01

    The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after undergoing certain surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease often depends more on the physician's intuition, experience, and skill in comparing current indicators with previous ones than on the knowledge-rich data hidden in a database, which makes the diagnosis a crucial and challenging task. The goal is to predict patient condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework describes the methodology for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms, combining least-squares estimation with either a modified Levenberg-Marquardt or a gradient descent algorithm, that can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test sets drawn from the Mammographic Mass and Haberman's Survival datasets obtained from the benchmark University of California at Irvine (UCI) machine learning repository. The robustness of the performance, measured by total accuracy, sensitivity, and specificity, is examined. In comparison, the proposed method achieves superior performance when compared to the conventional gradient-descent-based ANFIS and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor at 2.80 GHz and 2.0 GB of RAM.

  6. Development of gradient descent adaptive algorithms to remove common mode artifact for improvement of cardiovascular signal quality.

    PubMed

    Ciaccio, Edward J; Micheli-Tzanakou, Evangelia

    2007-07-01

    Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of weight w was based upon the gradient term ∇ of the steepest descent equation [see text], where the error ε is the difference between primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel during each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR = 1:8 and weights initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise so that the SNR = 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
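
    As a rough, hedged illustration of the finite-difference weight update described above (closest in spirit to the Parallel Comparison variant), the following Python sketch adapts a single weight by evaluating the squared error at w ± Δw in parallel and stepping against the resulting gradient estimate; the signal model, step size mu, and perturbation dw are illustrative assumptions, not the authors' settings.

      import numpy as np

      def anc_parallel_comparison(primary, reference, w0=0.0, dw=0.01, mu=0.05):
          """Adapt one weight w so that w*reference cancels the common-mode part of primary."""
          w = w0
          weights = np.empty(len(primary))
          for k in range(len(primary)):
              # squared error evaluated at w + dw and w - dw in parallel (PC-style)
              e_plus = (primary[k] - (w + dw) * reference[k]) ** 2
              e_minus = (primary[k] - (w - dw) * reference[k]) ** 2
              grad_est = (e_plus - e_minus) / (2.0 * dw)  # finite-difference gradient estimate
              w -= mu * grad_est                          # steepest-descent update
              weights[k] = w
          return weights

      # toy usage: primary = clean signal + 0.8 * reference noise, so the optimal weight is 0.8
      rng = np.random.default_rng(0)
      ref = rng.standard_normal(2000)
      clean = np.sin(np.linspace(0.0, 20.0, 2000))
      prim = clean + 0.8 * ref
      print(anc_parallel_comparison(prim, ref)[-1])       # converges near 0.8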

  7. Fuel-Efficient Descent and Landing Guidance Logic for a Safe Lunar Touchdown

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    2011-01-01

    The landing of a crewed lunar lander on the surface of the Moon will be the climax of any Moon mission. At touchdown, the landing mechanism must absorb the load imparted on the lander due to the vertical component of the lander's touchdown velocity. Also, a large horizontal velocity must be avoided because it could cause the lander to tip over, risking the life of the crew. To be conservative, the lander's worst-case touchdown velocity is always assumed in designing the landing mechanism, making it very heavy. Fuel-optimal guidance algorithms for soft planetary landing have been studied extensively. In most of these studies, the lander is constrained to touch down with zero velocity. With bounds imposed on the magnitude of the engine thrust, the optimal control solutions typically have a "bang-bang" thrust profile: the thrust magnitude "bangs" instantaneously between its maximum and minimum magnitudes. But the descent engine might not be able to throttle between its extremes instantaneously. There is also a concern about the acceptability of "bang-bang" control to the crew. In our study, the optimal control of a lander is formulated with a cost function that penalizes both the touchdown velocity and the fuel cost of the descent engine. In this formulation, there is no requirement to achieve a zero touchdown velocity. Only a touchdown velocity that is consistent with the capability of the landing gear design is required. Also, since the nominal throttle level for the terminal descent sub-phase is well below the peak engine thrust, no bound on the engine thrust is used in our formulated problem. Instead of a bang-bang type solution, the optimal thrust generated is a continuous function of time. With this formulation, we can easily derive analytical expressions for the optimal thrust vector, touchdown velocity components, and other system variables. These expressions provide insights into the "physics" of the optimal landing and terminal descent maneuver. These insights could help engineers to achieve a better "balance" between the conflicting needs of achieving a safe touchdown velocity, a low-weight landing mechanism, low engine fuel cost, and other design goals. In comparing the computed optimal control results with the preflight landing trajectory design of the Apollo-11 mission, we noted interesting similarities between the two.

  8. Evaluating the accuracy performance of Lucas-Kanade algorithm in the circumstance of PIV application

    NASA Astrophysics Data System (ADS)

    Pan, Chong; Xue, Dong; Xu, Yang; Wang, JinJun; Wei, RunJie

    2015-10-01

    The Lucas-Kanade (LK) algorithm, usually used in the optical flow field, has recently received increasing attention from the PIV community due to its advanced calculation efficiency through GPU acceleration. Although applications of this algorithm are continuously emerging, a systematic performance evaluation is still lacking. This forms the primary aim of the present work. Three warping schemes in the family of LK algorithms, forward/inverse/symmetric warping, are evaluated in a prototype flow consisting of a hierarchy of multiple two-dimensional vortices. Second-order Newton descent is also considered. The accuracy and efficiency of all these LK variants are investigated over a large range of influential parameters. It is found that the constant-displacement constraint, which is a necessary building block for GPU acceleration, is the most critical issue affecting the LK algorithm's accuracy, and that it can be partly ameliorated by using second-order Newton descent. Moreover, symmetric warping outperforms the other two warping schemes in accuracy, robustness to noise, convergence speed and tolerance to displacement gradients, and might be the first choice when applying the LK algorithm to PIV measurement.
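
    For readers unfamiliar with the building block being benchmarked, here is a hedged sketch of a single Lucas-Kanade step under the constant-displacement, first-order assumption discussed above; it deliberately omits the forward/inverse/symmetric warping, iterative refinement, and second-order Newton descent that the paper actually compares.

      import numpy as np

      def lk_displacement(win0, win1):
          """Estimate one constant displacement (dx, dy) between two small image windows."""
          gy, gx = np.gradient(win0.astype(float))          # spatial gradients of the first window
          gt = win1.astype(float) - win0.astype(float)      # temporal difference
          A = np.stack([gx.ravel(), gy.ravel()], axis=1)
          b = -gt.ravel()
          d, *_ = np.linalg.lstsq(A, b, rcond=None)         # least-squares solution of A d = b
          return d                                          # [dx, dy] in pixels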

  9. Geodesic regression on orientation distribution functions with its application to an aging study.

    PubMed

    Du, Jia; Goh, Alvina; Kushnarev, Sergey; Qiu, Anqi

    2014-02-15

    In this paper, we treat orientation distribution functions (ODFs) derived from high angular resolution diffusion imaging (HARDI) as elements of a Riemannian manifold and present a method for geodesic regression on this manifold. In order to find the optimal regression model, we pose this as a least-squares problem involving the sum-of-squared geodesic distances between observed ODFs and their model fitted data. We derive the appropriate gradient terms and employ gradient descent to find the minimizer of this least-squares optimization problem. In addition, we show how to perform statistical testing for determining the significance of the relationship between the manifold-valued regressors and the real-valued regressands. Experiments on both synthetic and real human data are presented. In particular, we examine aging effects on HARDI via geodesic regression of ODFs in normal adults aged 22 years old and above. © 2013 Elsevier Inc. All rights reserved.
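
    As a much-simplified, hedged illustration of Riemannian gradient descent on a sum of squared geodesic distances (the class of objective used above), the sketch below computes a Fréchet mean on the ordinary unit sphere rather than on the ODF manifold; the step size and iteration count are assumptions, and the full regression over a real-valued covariate is not reproduced.

      import numpy as np

      def log_map(p, q):
          """Riemannian log map on the unit sphere: tangent vector at p pointing toward q."""
          c = np.clip(np.dot(p, q), -1.0, 1.0)
          theta = np.arccos(c)
          return np.zeros_like(p) if theta < 1e-12 else theta / np.sin(theta) * (q - c * p)

      def exp_map(p, v):
          """Riemannian exp map on the unit sphere."""
          n = np.linalg.norm(v)
          return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * v / n

      def frechet_mean(points, iters=100, step=1.0):
          """Gradient descent on the sum of squared geodesic distances (a zeroth-order
          special case of geodesic regression with a constant model)."""
          p = points[0] / np.linalg.norm(points[0])
          for _ in range(iters):
              grad = -sum(log_map(p, q) for q in points) / len(points)  # Riemannian gradient
              p = exp_map(p, -step * grad)                              # step along a geodesic
          return p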

  10. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction

    PubMed Central

    Gregor, Jens; Fessler, Jeffrey A.

    2015-01-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
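
    To make the shared structure concrete, here is a hedged sketch of diagonally preconditioned gradient descent for a regularized weighted least squares criterion; the matrices A, W, R and the preconditioner D are placeholders, and the SQS-style diagonal suggested in the comment is only one of the choices discussed in the paper.

      import numpy as np

      def rwls_precond_gd(A, W, b, D, beta=0.0, R=None, iters=200):
          """Minimize 0.5*||A x - b||_W^2 + 0.5*beta*x'Rx by preconditioned gradient descent."""
          n = A.shape[1]
          x = np.zeros(n)
          R = np.eye(n) if R is None else R
          for _ in range(iters):
              grad = A.T @ (W @ (A @ x - b)) + beta * (R @ x)
              x = x - D @ grad                  # preconditioned (relaxed) gradient step
          return x

      # e.g., for a nonnegative system matrix A, D = np.diag(1.0 / (A.T @ W @ A @ np.ones(n)))
      # is an SQS-style diagonal majorizer; a SIRT-style row/column scaling plays an analogous role.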

  11. Deep cultural ancestry and human development indicators across nation states

    PubMed Central

    Sookias, Roland B.; Passmore, Samuel

    2018-01-01

    How historical connections, events and cultural proximity can influence human development is being increasingly recognized. One aspect of history that has only recently begun to be examined is deep cultural ancestry, i.e. the vertical relationships of descent between cultures, which can be represented by a phylogenetic tree of descent. Here, we test whether deep cultural ancestry predicts the United Nations Human Development Index (HDI) for 44 Eurasian countries, using language ancestry as a proxy for cultural relatedness and controlling for three additional factors—geographical proximity, religion and former communism. While cultural ancestry alone predicts HDI and its subcomponents (income, health and education indices), when geographical proximity is included only income and health indices remain significant and the effect is small. When communism and religion variables are included, cultural ancestry is no longer a significant predictor; communism significantly negatively predicts HDI, income and health indices, and Muslim percentage of the population significantly negatively predicts education index, although the latter result may not be robust. These findings indicate that geographical proximity and recent cultural history—especially communism—are more important than deep cultural factors in current human development and suggest the efficacy of modern policy initiatives is not tightly constrained by cultural ancestry. PMID:29765628

  12. Deep cultural ancestry and human development indicators across nation states.

    PubMed

    Sookias, Roland B; Passmore, Samuel; Atkinson, Quentin D

    2018-04-01

    How historical connections, events and cultural proximity can influence human development is being increasingly recognized. One aspect of history that has only recently begun to be examined is deep cultural ancestry, i.e. the vertical relationships of descent between cultures, which can be represented by a phylogenetic tree of descent. Here, we test whether deep cultural ancestry predicts the United Nations Human Development Index (HDI) for 44 Eurasian countries, using language ancestry as a proxy for cultural relatedness and controlling for three additional factors-geographical proximity, religion and former communism. While cultural ancestry alone predicts HDI and its subcomponents (income, health and education indices), when geographical proximity is included only income and health indices remain significant and the effect is small. When communism and religion variables are included, cultural ancestry is no longer a significant predictor; communism significantly negatively predicts HDI, income and health indices, and Muslim percentage of the population significantly negatively predicts education index, although the latter result may not be robust. These findings indicate that geographical proximity and recent cultural history-especially communism-are more important than deep cultural factors in current human development and suggest the efficacy of modern policy initiatives is not tightly constrained by cultural ancestry.

  13. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
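
    A hedged sketch of this alternating two-stage strategy is given below: one ART sweep plus a non-negativity projection for data fidelity, followed by a few TV steepest descent steps whose length is tied to the size of the preceding POCS update. Boundary handling, the 0.5 scale factor, and the absence of the error-bound test are illustrative simplifications, not the paper's exact rules.

      import numpy as np

      def tv_grad(img, eps=1e-8):
          # gradient of smoothed isotropic TV; periodic boundaries keep the adjoint compact
          dx = np.roll(img, -1, axis=1) - img
          dy = np.roll(img, -1, axis=0) - img
          mag = np.sqrt(dx**2 + dy**2 + eps)
          gx, gy = dx / mag, dy / mag
          return (np.roll(gx, 1, axis=1) - gx) + (np.roll(gy, 1, axis=0) - gy)

      def pocs_tv(A, b, shape, outer=20, relax=0.2, tv_steps=10):
          x = np.zeros(A.shape[1])
          for _ in range(outer):
              x_prev = x.copy()
              for i in range(A.shape[0]):                   # ART sweep (data fidelity)
                  ai = A[i]
                  x += relax * (b[i] - ai @ x) / (ai @ ai + 1e-12) * ai
              x = np.maximum(x, 0.0)                        # non-negativity constraint
              dp = np.linalg.norm(x - x_prev)               # size of the POCS update
              img = x.reshape(shape)
              for _ in range(tv_steps):                     # TV steepest descent stage
                  g = tv_grad(img)
                  img -= 0.5 * dp * g / (np.linalg.norm(g) + 1e-12)
              x = np.maximum(img, 0.0).ravel()
          return x.reshape(shape)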

  14. Microbial community diversity, structure and assembly across oxygen gradients in meromictic marine lakes, Palau.

    PubMed

    Meyerhof, Matthew S; Wilson, Jesse M; Dawson, Michael N; Michael Beman, J

    2016-12-01

    Microbial communities consume oxygen, alter biogeochemistry and compress habitat in aquatic ecosystems, yet our understanding of these microbial-biogeochemical-ecological interactions is limited by a lack of systematic analyses of low-oxygen ecosystems. Marine lakes provide an ideal comparative system, as they range from well-mixed holomictic lakes to stratified, anoxic, meromictic lakes that vary in their vertical extent of anoxia. We examined microbial communities inhabiting six marine lakes and one ocean site using pyrosequencing of 16S rRNA genes. Microbial richness and evenness were typically highest in the anoxic monimolimnion of meromictic lakes, with common marine bacteria present in mixolimnion communities replaced by anoxygenic phototrophs, sulfate-reducing bacteria and SAR406 in the monimolimnion. These sharp changes in community structure were linked to environmental gradients (constrained variation in redundancy analysis = 68%-76%) - particularly oxygen and pH. However, in those lakes with the steepest oxygen gradients, salinity and dissolved nutrients were important secondary constraining variables, indicating that subtle but substantive differences in microbial communities occur within similar low-oxygen habitats. Deterministic processes were a dominant influence on whole community assembly (all nearest taxon index values >4), demonstrating that the strong environmental gradients present in meromictic marine lakes drive microbial community assembly. © 2016 Society for Applied Microbiology and John Wiley & Sons Ltd.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xiao-Dong; Park, Changbom; Forero-Romero, J. E.

    We propose a method based on the redshift dependence of the Alcock-Paczynski (AP) test to measure the expansion history of the universe. It uses the isotropy of the galaxy density gradient field to constrain cosmological parameters. If the density parameter Ω_m or the dark energy equation of state w are incorrectly chosen, the gradient field appears to be anisotropic, with the degree of anisotropy varying with redshift. We use this effect to constrain the cosmological parameters governing the expansion history of the universe. Although redshift-space distortions (RSD) induced by galaxy peculiar velocities also produce anisotropies in the gradient field, these effects are close to uniform in magnitude over a large range of redshift. This makes the redshift variation of the gradient field anisotropy relatively insensitive to the RSD. By testing the method on mock surveys drawn from the Horizon Run 3 cosmological N-body simulations, we demonstrate that the cosmological parameters can be estimated without bias. Our method is complementary to the baryon acoustic oscillation or topology methods as it depends on D_A H, the product of the angular diameter distance and the Hubble parameter.

  16. New recursive-least-squares algorithms for nonlinear active control of sound and vibration using neural networks.

    PubMed

    Bouchard, M

    2001-01-01

    In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.

  17. Kurtosis Approach for Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubbemd, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
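
    As context for the gradient descent step mentioned above, here is a hedged sketch of the linear special case only: projected gradient ascent of the output kurtosis for a single component extracted from whitened data. The polynomial compensation of the post-nonlinear distortions, which is the actual subject of the paper, is not reproduced; the step size and iteration count are assumptions.

      import numpy as np

      def extract_by_kurtosis(X, iters=300, mu=0.05, seed=0):
          """Projected gradient ascent on |kurtosis(w^T x)| for data X of shape (n_samples, n_dims)."""
          rng = np.random.default_rng(seed)
          Xc = X - X.mean(axis=0)
          d, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
          Xw = Xc @ E @ np.diag(1.0 / np.sqrt(d)) @ E.T     # whitening (assumes full-rank covariance)
          w = rng.standard_normal(X.shape[1])
          w /= np.linalg.norm(w)
          for _ in range(iters):
              y = Xw @ w
              kurt = np.mean(y**4) - 3.0                    # excess kurtosis (unit variance after whitening)
              grad = 4.0 * (Xw * (y**3)[:, None]).mean(axis=0)
              w += mu * np.sign(kurt) * grad                # ascend |kurtosis|
              w /= np.linalg.norm(w)                        # stay on the unit sphere
          return Xw @ w, w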

  18. On Vehicle Placement to Intercept Moving Targets (Preprint)

    DTIC Science & Technology

    2010-03-09

    The available record text is a fragmentary excerpt: a stated condition is feasible only if X1 − X2 = 0 and Y1 − Y2 = 0; the main result (Theorem 3.4, minimizing expected cost) asserts that a convexity argument (citing Vandenberghe, 2004) leads the vehicle to the unique global minimizer of the expected cost Cexp; and, for V ⊂ [0, W] with φ(x) chosen such that φ(x) = 0 for all x ∈ [0, W] \ V, following gradient descent with V as the region of integration keeps the vehicle inside [0, W] × R>0 at all subsequent times.

  19. Product Distribution Theory and Semi-Coordinate Transformations

    NASA Technical Reports Server (NTRS)

    Airiau, Stephane; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for doing distributed adaptive control of a multiagent system (MAS). We introduce the technique of "coordinate transformations" in PD theory gradient descent. These transformations selectively couple a few agents with each other into "meta-agents". Intuitively, this can be viewed as a generalization of forming binding contracts between those agents. Doing this sacrifices a bit of the distributed nature of the MAS, in that there must now be communication from multiple agents in determining what joint move is finally implemented. However, as we demonstrate in computer experiments, these transformations improve the performance of the MAS.

  20. Deep kernel learning method for SAR image target recognition

    NASA Astrophysics Data System (ADS)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  1. GOCE gravity gradient data for lithospheric modeling - From well surveyed to frontier areas

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Ebbing, J.; Gradmann, S.; Fuchs, M.; Fattah, R. Abdul; Meekes, S.; Schmidt, M.; Lieb, V.; Haagmans, R.

    2012-04-01

    We explore how GOCE gravity gradient data can improve modeling of the Earth's lithosphere and thereby contribute to a better understanding of the Earth's dynamic processes. The idea is to invert satellite gravity gradients and terrestrial gravity data in the well explored and understood North-East Atlantic Margin and to compare the results of this inversion, providing improved information about the lithosphere and upper mantle, with results obtained by means of models based upon other sources like seismics and magnetic field information. Transfer of the obtained knowledge to the less explored Rub' al Khali desert is foreseen. We present a case study for the North-East Atlantic margin, where we analyze the use of satellite gravity gradients by comparison with a well-constrained 3D density model that provides a detailed picture from the upper mantle to the top basement (base of sediments). The latter horizon is well resolved from gravity and especially magnetic data, whereas sedimentary layers are mainly constrained from seismic studies, but do in general not show a prominent effect in the gravity and magnetic field. We analyze how gravity gradients can increase confidence in the modeled structures by calculating a sensitivity matrix for the existing 3D model. This sensitivity matrix describes the relation between calculated gravity gradient data and geological structures with respect to their depth, extent and relative density contrast. As the sensitivity of the modeled bodies varies for different tensor components, we can use this matrix for a weighted inversion of gradient data to optimize the model. This sensitivity analysis will be used as input to study the Rub' al Khali desert in Saudi Arabia. In terms of modeling and data availability this is a frontier area. Here gravity gradient data will be used to better identify the extent of anomalous structures within the basin, with the goal to improve the modeling for hydrocarbon exploration purposes.

  2. South Virgin-White Hills detachment fault system of SE Nevada and NW Arizona: Applying apatite fission track thermochronology to constrain the tectonic evolution of a major continental detachment fault

    NASA Astrophysics Data System (ADS)

    Fitzgerald, Paul G.; Duebendorfer, Ernest M.; Faulds, James E.; O'Sullivan, Paul

    2009-04-01

    The South Virgin-White Hills detachment (SVWHD) in the central Basin and Range province, with an along-strike extent of ~60 km, is a major continental detachment fault system. Displacement on the SVWHD decreases north to south from ~17 to <6 km. This is accompanied by a change in fault and footwall rock type from mylonite overprinted by cataclasite, to chlorite cataclasite, and then fault breccia, reflecting decreasing fault displacement and footwall exhumation. Apatite fission track (AFT) thermochronology was applied both along-strike and across-strike to assess this displacement gradient. The overall thermal history reflects Laramide cooling (~75 Ma) and then rapid cooling beginning in the late early Miocene. Age patterns reflect some complexity, but extension along the SVWHD appears synchronous with rapid cooling initiated at ~17 Ma due to tectonic exhumation. The slip rate is more rapid (~8.6 km/Ma) in the north compared to ~1 km/Ma in the south. The displacement gradient results from penecontemporaneous along-strike motion and formation of the SVWHD by linkage of originally separate fault segments that have differential displacements and hence differential slip rates. East-west transverse structures likely play a role in the linkage of different fault segments. The preextension paleogeothermal gradient is well constrained in the Gold Butte block as 18-20°C/km. We present a new thermochronologic approach to constrain fault dip during slip, treating the vertical exhumation rate and the slip as vectors, with the angle between them used to constrain fault dip during slip through the closure temperature of a particular thermochronometer. AFT data from the western rim of the Colorado Plateau constrain the timing of the initiation of cooling associated with the Laramide Orogeny at ~75 Ma, and a reheating event in the late Eocene/early Oligocene associated with burial by sediments ("rim gravels") most likely shed from the Kingman High to the west of the plateau.

  3. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs are synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are compared in the context of the two design problems.

  4. The glucokinase mutation p.T206P is common among MODY patients of Jewish Ashkenazi descent.

    PubMed

    Gozlan, Yael; Tenenbaum, Ariel; Shalitin, Shlomit; Lebenthal, Yael; Oron, Tal; Cohen, Ohad; Phillip, Moshe; Gat-Yablonski, Galia

    2012-09-01

    Maturity-onset diabetes of the young (MODY) is characterized by an autosomal dominant mode of inheritance; a primary defect in insulin secretion with non-ketotic hyperglycemia; age of onset under 25 yr; and lack of autoantibodies. Heterozygous mutations in glucokinase (GCK) are associated with mild fasting hyperglycemia and gestational diabetes mellitus, while homozygous or compound heterozygous GCK mutations result in permanent neonatal diabetes mellitus. Given that both the Israeli-Arabic and the various Israeli-Jewish communities tend to maintain ethnic seclusion, we speculated that it would be possible to identify a relatively narrow spectrum of mutations in the Israeli population. To characterize the genetic basis of GCK-MODY in the different ethnic groups of the Israeli population. Patients with clinically identified GCK-MODY and their first-degree family members. Molecular analysis of GCK was performed on genomic DNA using polymerase chain reaction, denaturing gradient gel electrophoresis (DGGE), and sequencing. Bioinformatic modeling was performed using the NEST program. Mutations in GCK were identified in 25 families and were all family-specific, except c.616A>C (p.T206P). This mutation was identified in six unrelated families, all patients of Jewish-Ashkenazi descent, thus indicating an ethno-genetic correlation. A simple, fast, and relatively cheap DGGE/restriction-digestion assay was developed. The high incidence of the mutant allele in GCK-MODY patients of Jewish-Ashkenazi descent suggests a founder effect. We propose that clinically identified GCK-MODY patients of Jewish-Ashkenazi origin be first tested for this mutation. © 2011 John Wiley & Sons A/S.

  5. Edge remap for solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamm, James R.; Love, Edward; Robinson, Allen C.

    We review the edge element formulation for describing the kinematics of hyperelastic solids. This approach is used to frame the problem of remapping the inverse deformation gradient for Arbitrary Lagrangian-Eulerian (ALE) simulations of solid dynamics. For hyperelastic materials, the stress state is completely determined by the deformation gradient, so remapping this quantity effectively updates the stress state of the material. A method, inspired by the constrained transport remap in electromagnetics, is reviewed, according to which the zero-curl constraint on the inverse deformation gradient is implicitly satisfied. Open issues related to the accuracy of this approach are identified. An optimization-based approach is implemented to enforce positivity of the determinant of the deformation gradient. The efficacy of this approach is illustrated with numerical examples.

  6. Optimization for high-dose-rate brachytherapy of cervical cancer with adaptive simulated annealing and gradient descent.

    PubMed

    Yao, Rui; Templeton, Alistair K; Liao, Yixiang; Turian, Julius V; Kiel, Krystyna D; Chu, James C H

    2014-01-01

    To validate an in-house optimization program that uses adaptive simulated annealing (ASA) and gradient descent (GD) algorithms and investigate features of physical dose and generalized equivalent uniform dose (gEUD)-based objective functions in high-dose-rate (HDR) brachytherapy for cervical cancer. Eight Syed/Neblett template-based cervical cancer HDR interstitial brachytherapy cases were used for this study. Brachytherapy treatment plans were first generated using inverse planning simulated annealing (IPSA). Using the same dwell positions designated in IPSA, plans were then optimized with both physical dose and gEUD-based objective functions, using both ASA and GD algorithms. Comparisons were made between plans both qualitatively and based on dose-volume parameters, evaluating each optimization method and objective function. A hybrid objective function was also designed and implemented in the in-house program. The ASA plans are higher on bladder V75% and D2cc (p=0.034) and lower on rectum V75% and D2cc (p=0.034) than the IPSA plans. The ASA and GD plans are not significantly different. The gEUD-based plans have higher homogeneity index (p=0.034), lower overdose index (p=0.005), and lower rectum gEUD and normal tissue complication probability (p=0.005) than the physical dose-based plans. The hybrid function can produce a plan with dosimetric parameters between the physical dose-based and gEUD-based plans. The optimized plans with the same objective value and dose-volume histogram could have different dose distributions. Our optimization program based on ASA and GD algorithms is flexible on objective functions, optimization parameters, and can generate optimized plans comparable with IPSA. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  7. A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.

    PubMed

    Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar

    2017-03-01

    The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule Computer Aided Detection (CAD). We describe a new CT lung CAD method which aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1 norm regularizer for heterogeneous feature fusion and selection at the feature subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for handling the ℓ2,1 norm of the kernel weights and uses an accelerated method based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1 norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages in both the heterogeneous feature subset fusion and the classification phases. Compared with fusion strategies at the feature level and decision level, the proposed ℓ2,1 norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
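
    To illustrate what the ℓ2,1 regularizer does to the kernel weights, here is a hedged proximal-gradient sketch (plain ISTA, without the FISTA momentum the paper also employs); the smooth-loss gradient grad_f, the grouping of the weight vector, the step size, and lam are placeholders for whatever loss and kernel partition are actually used.

      import numpy as np

      def group_soft_threshold(v, groups, t):
          """Proximal operator of t * sum_g ||v_g||_2, i.e. of the l2,1 norm over groups."""
          out = v.copy()
          for g in groups:
              n = np.linalg.norm(v[g])
              out[g] = 0.0 if n <= t else (1.0 - t / n) * v[g]
          return out

      def ista_l21(grad_f, x0, groups, lam, step, iters=200):
          """Proximal gradient iteration for f(x) + lam * ||x||_{2,1}."""
          x = x0.copy()
          for _ in range(iters):
              x = group_soft_threshold(x - step * grad_f(x), groups, step * lam)
          return x

    The accelerated variant adds only a momentum extrapolation between successive iterates, which is what yields the FISTA-style convergence-rate guarantee mentioned above.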

  8. Optimized computational imaging methods for small-target sensing in lens-free holographic microscopy

    NASA Astrophysics Data System (ADS)

    Xiong, Zhen; Engle, Isaiah; Garan, Jacob; Melzer, Jeffrey E.; McLeod, Euan

    2018-02-01

    Lens-free holographic microscopy is a promising diagnostic approach because it is cost-effective, compact, and suitable for point-of-care applications, while providing high resolution together with an ultra-large field-of-view. It has been applied to biomedical sensing, where larger targets like eukaryotic cells, bacteria, or viruses can be directly imaged without labels, and smaller targets like proteins or DNA strands can be detected via scattering labels like micro- or nano-spheres. Automated image processing routines can count objects and infer target concentrations. In these sensing applications, sensitivity and specificity are critically affected by image resolution and signal-to-noise ratio (SNR). Pixel super-resolution approaches have been shown to boost resolution and SNR by synthesizing a high-resolution image from multiple, partially redundant, low-resolution images. However, there are several computational methods that can be used to synthesize the high-resolution image, and previously, it has been unclear which methods work best for the particular case of small-particle sensing. Here, we quantify the SNR achieved in small-particle sensing using a regularized gradient-descent optimization method, where the regularization is based on cardinal-neighbor differences, Bayer-pattern noise reduction, or sparsity in the image. In particular, we find that gradient descent with sparsity-based regularization works best for small-particle sensing. These computational approaches were evaluated on images acquired using a lens-free microscope that we assembled from an off-the-shelf LED array and color image sensor. Compared to other lens-free imaging systems, our hardware integration, calibration, and sample preparation are particularly simple. We believe our results will help to enable the best performance in lens-free holographic sensing.

  9. Performance analysis of structured gradient algorithm. [for adaptive beamforming linear arrays

    NASA Technical Reports Server (NTRS)

    Godara, Lal C.

    1990-01-01

    The structured gradient algorithm uses a structured estimate of the array correlation matrix (ACM) to estimate the gradient required for the constrained least-mean-square (LMS) algorithm. This structure reflects the structure of the exact array correlation matrix for an equispaced linear array and is obtained by spatial averaging of the elements of the noisy correlation matrix. In its standard form the LMS algorithm does not exploit the structure of the array correlation matrix. The gradient is estimated by multiplying the array output with the receiver outputs. An analysis of the two algorithms is presented to show that the covariance of the gradient estimated by the structured method is less sensitive to the look direction signal than that estimated by the standard method. The effect of the number of elements on the signal sensitivity of the two algorithms is studied.

  10. A parametric LQ approach to multiobjective control system design

    NASA Technical Reports Server (NTRS)

    Kyr, Douglas E.; Buchner, Marc

    1988-01-01

    The synthesis of a constant-parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple-objective decision-making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.

  11. Pixel-By-Pixel Estimation of Scene Motion in Video

    NASA Astrophysics Data System (ADS)

    Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.

    2017-05-01

    The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame; these vectors form a shift vector field. As the estimated parameters of the vectors, the paper studies their projections and their polar parameters. Two methods for estimating the shift vector field are considered. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left; subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses the correlation between rows to increase processing efficiency. It processes rows one after the other, changing direction after each row, and uses the obtained values to form the resulting estimate. The paper studies two criteria for forming this estimate: minimum of the gradient estimate and maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object, and of estimating the moving object's trajectory using the shift vector field.
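
    A hedged sketch of the underlying per-pixel gradient step is shown below; it applies a normalized descent step to the linearized brightness-constancy residual at every pixel and does not implement the paper's bidirectional row scanning, inertia compensation, or the two criteria for fusing the passes.

      import numpy as np

      def pixelwise_shift_estimates(img0, img1, iters=20, mu=0.5, eps=1e-6):
          """Per-pixel gradient-descent estimates of inter-frame shifts (dx, dy)."""
          gy, gx = np.gradient(img0.astype(float))
          gt = img1.astype(float) - img0.astype(float)
          dx = np.zeros_like(gt)
          dy = np.zeros_like(gt)
          for _ in range(iters):
              r = gt + gx * dx + gy * dy              # linearized residual at every pixel
              denom = gx**2 + gy**2 + eps             # normalization keeps the step stable
              dx -= mu * r * gx / denom
              dy -= mu * r * gy / denom
          return dx, dy                               # components of the shift vector field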

  12. High-resolution Anorectal Manometry for Identifying Defecatory Disorders and Rectal Structural Abnormalities in Women.

    PubMed

    Prichard, David O; Lee, Taehee; Parthasarathy, Gopanandan; Fletcher, Joel G; Zinsmeister, Alan R; Bharucha, Adil E

    2017-03-01

    Contrary to conventional wisdom, the rectoanal gradient during evacuation is negative in many healthy people, undermining the utility of anorectal high-resolution manometry (HRM) for diagnosing defecatory disorders. We aimed to compare HRM and magnetic resonance imaging (MRI) for assessing rectal evacuation and structural abnormalities. We performed a retrospective analysis of 118 patients (all female; 51 with constipation, 48 with fecal incontinence, and 19 with rectal prolapse; age, 53 ± 1 years) assessed by HRM, the rectal balloon expulsion test (BET), and MRI at Mayo Clinic, Rochester, Minnesota, from February 2011 through March 2013. Thirty healthy asymptomatic women (age, 37 ± 2 years) served as controls. We used principal components analysis of HRM variables to identify rectoanal pressure patterns associated with rectal prolapse and phenotypes of patients with prolapse. Compared with patients with normal findings from the rectal BET, patients with an abnormal BET had lower median rectal pressure (36 vs 22 mm Hg, P = .002), a more negative median rectoanal gradient (-6 vs -29 mm Hg, P = .006) during evacuation, and a lower proportion of evacuation on the basis of MRI analysis (median of 40% vs 80%, P < .0001). A score derived from rectal pressure and anorectal descent during evacuation and a patulous anal canal was associated (P = .005) with large rectoceles (3 cm or larger). A principal component (PC) logistic model discriminated between patients with and without prolapse with 96% accuracy. Among patients with prolapse, there were 2 phenotypes, which were characterized by high (PC1) or low (PC2) anal pressures at rest and squeeze along with higher rectal and anal pressures (PC1) or a higher rectoanal gradient during evacuation (PC2). In a retrospective analysis of patients assessed by HRM, measurements of rectal evacuation by anorectal HRM, BET, and MRI were correlated. HRM alone and together with anorectal descent during evacuation may identify rectal prolapse and large rectoceles, respectively, and also identify unique phenotypes of rectal prolapse. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  13. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.

  14. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.

  15. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial increase in the complexity of the optimization problems that can be handled efficiently.

  16. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.

  17. Direct Temperature Measurements during Netlander Descent on Mars

    NASA Astrophysics Data System (ADS)

    Colombatti, G.; Angrilli, F.; Ferri, F.; Francesconi, A.; Fulchignoni, M.; Lion Stoppato, P. F.; Saggi, B.

    1999-09-01

    A new design for a platinum thermoresistance temperature sensor has been developed and tested in Earth's atmosphere and stratosphere. It will be one of the sensors equipping the scientific package ATMIS (Atmospheric and Meteorology Instrument System) aboard the Netlanders, which will be devoted to the measurement of meteorological parameters during both the entry/descent phase and the surface phase. In particular, vertical profiles of temperature, density and pressure will allow the resolution of vertical gradients to investigate the atmospheric structure and dynamics. In view of future missions to Mars, Netlander represents a unique chance to increase significantly the climate record both in time and in space, doubling the current knowledge of the atmospheric parameters. Furthermore, it is the only opportunity to conduct direct measurements of temperature and pressure (outside the boundary layer of the airbags used for the landing). The proposed temperature sensor is a platinum thermoresistance, an enhancement of the HASI TEM sensor (Cassini/Huygens mission); a substantial improvement in performance, i.e. a faster dynamic response, has been obtained. Two prototypes of the new sensor design have been built; laboratory tests are proceeding, and the second prototype has already been flown aboard a stratospheric balloon.

  18. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
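
    For concreteness, here is a hedged sketch of the plain (un-rectified) gradient ascent on the log-likelihood of a pairwise Ising model, with model expectations computed by exact enumeration so that it only runs for a handful of units; the Gibbs-sampling noise, the rectification of parameter space, and the posterior-sampling behaviour analyzed in the paper are deliberately left out.

      import numpy as np
      from itertools import product

      def fit_ising_small(data, iters=500, lr=0.1):
          """Moment-matching gradient ascent for spins in {-1,+1}; data has shape (n_samples, n_units)."""
          n = data.shape[1]
          h, J = np.zeros(n), np.zeros((n, n))
          emp_m = data.mean(axis=0)                              # empirical magnetizations
          emp_C = (data.T @ data) / len(data)                    # empirical pairwise correlations
          states = np.array(list(product([-1.0, 1.0], repeat=n)))
          for _ in range(iters):
              E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
              p = np.exp(E - E.max()); p /= p.sum()              # exact model distribution
              mod_m = p @ states
              mod_C = (states * p[:, None]).T @ states
              h += lr * (emp_m - mod_m)                          # d(log-likelihood)/dh
              dJ = lr * (emp_C - mod_C)
              np.fill_diagonal(dJ, 0.0)
              J += dJ                                            # d(log-likelihood)/dJ
          return h, J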

  19. Adjoint shape optimization for fluid-structure interaction of ducted flows

    NASA Astrophysics Data System (ADS)

    Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.

    2018-03-01

    Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by the consequent use of the formal Lagrange calculus. Solutions of both primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.

  20. Northern Hemisphere Nitrous Oxide Morphology during the 1989 AASE and the 1991-1992 AASE 2 Campaigns

    NASA Technical Reports Server (NTRS)

    Podolske, James R.; Loewenstein, Max; Weaver, Alex; Strahan, Susan; Chan, K. Roland

    1993-01-01

    Nitrous oxide vertical profiles and latitudinal distributions for the 1989 AASE and 1992 AASE II northern polar winters are developed from the ATLAS N2O dataset, using both potential temperature and pressure as vertical coordinates. Morphologies show strong descent occurring poleward of the polar jet. The AASE II morphology shows a mid latitude 'surf zone,' characterized by strong horizontal mixing, and a horizontal gradient south of 30 deg N due to the sub-tropical jet. These features are similar to those produced by two-dimensional photochemical models which include coupling between transport, radiation, and chemistry.

  1. Northern hemisphere nitrous oxide morphology during the 1989 AASE and the 1991-1992 AASE 2 campaigns

    NASA Technical Reports Server (NTRS)

    Podolske, James R.; Loewenstein, Max; Weaver, Alex; Strahan, Susan E.; Chan, K. Roland

    1993-01-01

    Nitrous oxide vertical profiles and latitudinal distributions for the 1989 Airborne Arctic Stratospheric Expedition (AASE) and 1992 AASE 2 northern polar winters are developed from the ATLAS N2O dataset, using both potential temperature and pressure as vertical coordinates. Morphologies show strong descent occurring poleward of the polar jet. The AASE 2 morphology shows a mid-latitude 'surf zone', characterized by strong horizontal mixing, and a horizontal gradient south of 30 deg N due to the sub-tropical jet. These features are similar to those produced by two-dimensional photochemical models which include coupling between transport, radiation, and chemistry.

  2. Output Feedback Stabilization for a Class of Multi-Variable Bilinear Stochastic Systems with Stochastic Coupling Attenuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Qichun; Zhou, Jinglin; Wang, Hong

    In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems, and a novel output feedback m-block backstepping controller with a linear estimator is designed, where gradient descent optimization is used to tune the design parameters of the controller. It has been shown that the trajectories of the closed-loop stochastic systems are bounded in the probability sense and that the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed, and the effectiveness of the proposed method has been demonstrated using a simulated example.

  3. A Gradient Taguchi Method for Engineering Optimization

    NASA Astrophysics Data System (ADS)

    Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song

    2017-10-01

    To balance the robustness and the convergence speed of optimization, a novel hybrid algorithm combining the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. This algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method and vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. Therefore, it offers good robustness and fast convergence compared to some hybrid genetic algorithms.

  4. Kurtosis Approach Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.
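
    The kurtosis-gradient idea can be illustrated on the simpler linear-mixture case: for whitened mixtures, one source direction can be found by gradient ascent on the absolute kurtosis. The sketch below shows only that linear step, not the paper's joint estimation of the mixing matrix and the polynomial nonlinearities.

      import numpy as np

      def kurtosis_unit(X, n_iter=200, lr=0.1, seed=0):
          # Extract one source from whitened linear mixtures X (channels x samples)
          # by gradient ascent on the absolute kurtosis of y = w^T X.
          rng = np.random.default_rng(seed)
          w = rng.standard_normal(X.shape[0])
          w /= np.linalg.norm(w)
          for _ in range(n_iter):
              y = w @ X
              kurt = np.mean(y**4) - 3.0 * np.mean(y**2)**2
              # Sample estimate of d(kurtosis)/dw
              grad = 4.0 * (X * y**3).mean(axis=1) - 12.0 * np.mean(y**2) * (X * y).mean(axis=1)
              w += lr * np.sign(kurt) * grad     # ascend |kurtosis|
              w /= np.linalg.norm(w)             # keep unit norm
          return w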

  5. Atmospheric tides on Venus. III - The planetary boundary layer

    NASA Technical Reports Server (NTRS)

    Dobrovolskis, A. R.

    1983-01-01

    Diurnal solar heating of Venus' surface produces variable temperatures, winds, and pressure gradients within a shallow layer at the bottom of the atmosphere. The corresponding asymmetric mass distribution experiences a tidal torque tending to maintain Venus' slow retrograde rotation. It is shown that including viscosity in the boundary layer does not materially affect the balance of torques. On the other hand, friction between the air and ground can reduce the predicted wind speeds from about 5 to about 1 m/sec in the lower atmosphere, more consistent with the observations from Venus landers and descent probes. Implications for aeolian activity on Venus' surface and for future missions are discussed.

  6. Enhancement of the beam quality of non-uniform output slab laser amplifier with a 39-actuator rectangular piezoelectric deformable mirror.

    PubMed

    Yang, Ping; Ning, Yu; Lei, Xiang; Xu, Bing; Li, Xinyang; Dong, Lizhi; Yan, Hu; Liu, Wenjing; Jiang, Wenhan; Liu, Lei; Wang, Chao; Liang, Xingbo; Tang, Xiaojun

    2010-03-29

    We present a slab laser amplifier beam cleanup experimental system based on a 39-actuator rectangular piezoelectric deformable mirror. Rather than use a wave-front sensor to measure distortions in the wave-front and then apply a conjugation wave-front for compensating them, the system uses a Stochastic Parallel Gradient Descent algorithm to maximize the power contained within a far-field designated bucket. Experimental results demonstrate that at the output power of 335W, more than 30% energy concentrates in the 1x diffraction-limited area while the beam quality is enhanced greatly.
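
    Since SPGD needs only a scalar quality metric rather than a wavefront sensor, its core update is very compact. A minimal sketch is given below, assuming a hypothetical metric callable that returns the measured power in the designated bucket for a given vector of actuator commands.

      import numpy as np

      def spgd(metric, u0, gain=0.5, perturb=0.05, n_iter=500, seed=1):
          # Stochastic Parallel Gradient Descent: maximize a scalar metric(u)
          # (e.g. measured power in the far-field bucket) without a wavefront sensor.
          rng = np.random.default_rng(seed)
          u = np.array(u0, dtype=float)
          for _ in range(n_iter):
              du = perturb * rng.choice([-1.0, 1.0], size=u.shape)  # random +/- perturbation
              dJ = metric(u + du) - metric(u - du)                  # two-sided metric change
              u += gain * dJ * du                                   # parallel gradient estimate
          return u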

  7. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.

  8. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  9. Quantum generalisation of feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  10. Genetic algorithm and graph theory based matrix factorization method for online friend recommendation.

    PubMed

    Li, Qu; Yao, Min; Yang, Jianhua; Xu, Ning

    2014-01-01

    Online friend recommendation is a fast-developing topic in web mining. In this paper, we used SVD matrix factorization to model user and item feature vectors and stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold-start problem and data sparsity, we used a KNN model to influence the user feature vectors. At the same time, we used graph theory to partition communities with fairly low time and space complexity. What is more, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
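
    A minimal sketch of the stochastic gradient descent update for an SVD-style matrix factorization (regularized squared error on observed ratings) is given below; the hyperparameters are illustrative, and the paper's KNN and graph-partitioning components are not included.

      import numpy as np

      def sgd_matrix_factorization(ratings, n_users, n_items, k=16, lr=0.01,
                                   reg=0.05, epochs=20, seed=2):
          # ratings: list of (user, item, value) triples with 0-based indices.
          rng = np.random.default_rng(seed)
          P = 0.1 * rng.standard_normal((n_users, k))    # user feature vectors
          Q = 0.1 * rng.standard_normal((n_items, k))    # item feature vectors
          for _ in range(epochs):
              for u, i, r in ratings:
                  err = r - P[u] @ Q[i]                   # prediction error
                  P[u] += lr * (err * Q[i] - reg * P[u])  # stochastic gradient steps
                  Q[i] += lr * (err * P[u] - reg * Q[i])
          return P, Q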

  11. Railway obstacle detection algorithm using neural network

    NASA Astrophysics Data System (ADS)

    Yu, Mingyang; Yang, Peng; Wei, Sen

    2018-05-01

    To address the difficulty of obstacle detection in outdoor railway scenes, a data-driven neural-network method for detecting image objects is proposed. First, we label objects (such as people, trains, and animals) in images acquired from the Internet, and then use residual learning units to build a Fast R-CNN framework. The network is then trained with the stochastic gradient descent algorithm to learn the target image characteristics. Finally, the trained model is used to analyze an outdoor railway image; if it contains trains or other objects, an alert is issued. Experiments show that the correct warning rate reached 94.85%.

  12. A hybrid Gerchberg-Saxton-like algorithm for DOE and CGH calculation

    NASA Astrophysics Data System (ADS)

    Wang, Haichao; Yue, Weirui; Song, Qiang; Liu, Jingdan; Situ, Guohai

    2017-02-01

    The Gerchberg-Saxton (GS) algorithm is widely used in various disciplines of modern sciences and technologies where phase retrieval is required. However, this legendary algorithm most likely stagnates after a few iterations. Many efforts have been taken to improve this situation. Here we propose to introduce the strategy of gradient descent and weighting technique to the GS algorithm, and demonstrate it using two examples: design of a diffractive optical element (DOE) to achieve off-axis illumination in lithographic tools, and design of a computer generated hologram (CGH) for holographic display. Both numerical simulation and optical experiments are carried out for demonstration.
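
    For reference, the classical GS iteration between two Fourier-related planes is sketched below; the gradient-descent and weighting modifications proposed in the paper are not reproduced here.

      import numpy as np

      def gerchberg_saxton(source_amp, target_amp, n_iter=100, seed=3):
          # Classical GS loop between two planes related by a 2-D Fourier transform.
          rng = np.random.default_rng(seed)
          field = source_amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, source_amp.shape))
          for _ in range(n_iter):
              far = np.fft.fft2(field)
              far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
              near = np.fft.ifft2(far)
              field = source_amp * np.exp(1j * np.angle(near))   # impose source amplitude
          return np.angle(field)                                 # recovered phase profile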

  13. Convergence of fractional adaptive systems using gradient approach.

    PubMed

    Gallegos, Javier A; Duarte-Mermoud, Manuel A

    2017-07-01

    Conditions for boundedness and convergence of the output error and the parameter error for various Caputo's fractional order adaptive schemes based on the steepest descent method are derived in this paper. To this aim, the concept of sufficiently exciting signals is introduced, characterized and related to the concept of persistently exciting signals used in the integer order case. An application is designed in adaptive indirect control of integer order systems using fractional equations to adjust parameters. This application is illustrated for a pole placement adaptive problem. Advantages of using fractional adjustment in control adaptive schemes are experimentally obtained. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Molybdenum and Phosphorus Interact to Constrain Asymbiotic Nitrogen Fixation in Tropical Forests

    PubMed Central

    Wurzburger, Nina; Bellenger, Jean Philippe; Kraepiel, Anne M. L.; Hedin, Lars O.

    2012-01-01

    Biological di-nitrogen fixation (N2) is the dominant natural source of new nitrogen to land ecosystems. Phosphorus (P) is thought to limit N2 fixation in many tropical soils, yet both molybdenum (Mo) and P are crucial for the nitrogenase reaction (which catalyzes N2 conversion to ammonia) and cell growth. We have limited understanding of how and when fixation is constrained by these nutrients in nature. Here we show in tropical forests of lowland Panama that the limiting element on asymbiotic N2 fixation shifts along a broad landscape gradient in soil P, where Mo limits fixation in P-rich soils while Mo and P co-limit in P-poor soils. In no circumstance did P alone limit fixation. We provide and experimentally test a mechanism that explains how Mo and P can interact to constrain asymbiotic N2 fixation. Fixation is uniformly favored in surface organic soil horizons - a niche characterized by exceedingly low levels of available Mo relative to P. We show that soil organic matter acts to reduce molybdate over phosphate bioavailability, which, in turn, promotes Mo limitation in sites where P is sufficient. Our findings show that asymbiotic N2 fixation is constrained by the relative availability and dynamics of Mo and P in soils. This conceptual framework can explain shifts in limitation status across broad landscape gradients in soil fertility and implies that fixation depends on Mo and P in ways that are more complex than previously thought. PMID:22470462

  15. A new modified conjugate gradient coefficient for solving system of linear equations

    NASA Astrophysics Data System (ADS)

    Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven to be effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new variants of the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. This new CG method is tested on a set of test functions under exact line search. Its performance is then compared to that of some of the well-known previous CG methods based on number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency amongst all the methods tested. This paper also includes an application of the new CG algorithm to solving large systems of linear equations.
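
    A generic nonlinear CG loop is sketched below, using the classical Dai-Yuan coefficient and a simple backtracking line search as stand-ins; the paper's new coefficient and its exact line search are not reproduced.

      import numpy as np

      def nonlinear_cg(f, grad, x0, n_iter=100, tol=1e-8):
          # Nonlinear CG with the classical Dai-Yuan coefficient and a simple
          # backtracking (Armijo) line search.
          x = np.array(x0, dtype=float)
          g = grad(x)
          d = -g
          for _ in range(n_iter):
              if np.linalg.norm(g) < tol:
                  break
              t, fx = 1.0, f(x)
              while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-10:
                  t *= 0.5
              x_new = x + t * d
              g_new = grad(x_new)
              beta = (g_new @ g_new) / (d @ (g_new - g))   # Dai-Yuan coefficient
              d = -g_new + beta * d
              x, g = x_new, g_new
          return x

    For example, nonlinear_cg(lambda x: x @ x, lambda x: 2 * x, np.ones(5)) drives the iterate to the origin within a couple of iterations.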

  16. Multigrid one shot methods for optimal control problems: Infinite dimensional control

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Taasan, Shlomo

    1994-01-01

    The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and the choice of a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solutions of optimal control problems at the same cost of solving the corresponding analysis problems just a few times.

  17. Measurement of lung expansion with computed tomography and comparison with quantitative histology.

    PubMed

    Coxson, H O; Mayo, J R; Behzad, H; Moore, B J; Verburgt, L M; Staples, C A; Paré, P D; Hogg, J C

    1995-11-01

    The total and regional lung volumes were estimated from computed tomography (CT), and the pleural pressure gradient was determined by using the milliliters of gas per gram of tissue estimated from the X-ray attenuation values and the pressure-volume curve of the lung. The data show that CT accurately estimated the volume of the resected lobe but overestimated its weight by 24 +/- 19%. The volume of gas per gram of tissue was less in the gravity-dependent regions due to a pleural pressure gradient of 0.24 +/- 0.08 cmH2O/cm of descent in the thorax. The proportion of tissue to air obtained with CT was similar to that obtained by quantitative histology. We conclude that the CT scan can be used to estimate total and regional lung volumes and that measurements of the proportions of tissue and air within the thorax by CT can be used in conjunction with quantitative histology to evaluate lung structure.

  18. Predictability of Top of Descent Location for Operational Idle-Thrust Descents

    NASA Technical Reports Server (NTRS)

    Stell, Laurel L.

    2010-01-01

    To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its uncertainty models, commercial flights executed idle-thrust descents at a specified descent speed, and the recorded data included the specified descent speed profile, aircraft weight, and the winds entered into the FMS as well as the radar data. The FMS computed the intended descent path assuming idle thrust after top of descent (TOD), and the controllers and pilots then endeavored to allow the FMS to fly the descent to the meter fix with minimal human intervention. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location were extracted from the radar data. Using approximately 70 descents each in Boeing 757 and Airbus 319/320 aircraft, multiple regression estimated TOD location as a linear function of the available predictive factors. The cruise and meter fix altitudes, descent speed, and wind clearly improve goodness of fit. The aircraft weight improves fit for the Airbus descents but not for the B757. Except for a few statistical outliers, the residuals have absolute value less than 5 nmi. Thus, these predictive factors adequately explain the TOD location, which indicates the data do not include excessive noise.

  19. An approach to multiobjective optimization of rotational therapy. II. Pareto optimal surfaces and linear combinations of modulated blocked arcs for a prostate geometry.

    PubMed

    Pardo-Montero, Juan; Fenwick, John D

    2010-06-01

    The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape comprising two regions: One where the dose to the target is close to prescription and trade-offs can be made between doses to the organs at risk and (small) changes in target dose, and one where very substantial rectal sparing is achieved at the cost of large target underdosage. Plans computed following the approach using a conformal arc and four blocked arcs generally lie close to the Pareto front, although distances of some plans from high gradient regions of the Pareto front can be greater. Only around 12% of plans lie a relative Euclidean distance of 0.15 or greater from the Pareto front. Using the alternative distance measure of Craft ["Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization," Phys. Medica (to be published)], around 2/5 of plans lie more than 0.05 from the front. Computation of blocked arcs is quite fast, the algorithms requiring 35%-80% of the running time per iteration needed for conventional inverse plan computation. The geometry-based arc approach to multicriteria optimization of rotational therapy allows solutions to be obtained that lie close to the Pareto front. Both the image-reconstruction type and gradient-descent algorithms produce similar modulated arcs, the latter one perhaps being preferred because it is more easily implementable in standard treatment planning systems. Moderate unblocking provides a good way of dealing with OARs which abut the PTV. 
Optimization of geometry-based arcs is faster than usual inverse optimization of treatment plans, making this approach more rapid than an inverse-based Pareto front reconstruction.

  20. Combinational concentration gradient confinement through stagnation flow.

    PubMed

    Alicia, Toh G G; Yang, Chun; Wang, Zhiping; Nguyen, Nam-Trung

    2016-01-21

    Concentration gradient generation in microfluidics is typically constrained by two conflicting mass transport requirements: short characteristic times (τ) for precise temporal control of concentration gradients but at the expense of high flow rates and hence, high flow shear stresses (σ). To decouple the limitations from these parameters, here we propose the use of stagnation flows to confine concentration gradients within large velocity gradients that surround the stagnation point. We developed a modified cross-slot (MCS) device capable of feeding binary and combinational concentration sources in stagnation flows. We show that across the velocity well, source-sink pairs can form permanent concentration gradients. As source-sink concentration pairs are continuously supplied to the MCS, a permanently stable concentration gradient can be generated. Tuning the flow rates directly controls the velocity gradients, and hence the stagnation point location, allowing the confined concentration gradient to be focused. In addition, the flow rate ratio within the MCS rapidly controls (τ ∼ 50 ms) the location of the stagnation point and the confined combinational concentration gradients at low flow shear (0.2 Pa < σ < 2.9 Pa). The MCS device described in this study establishes the method for using stagnation flows to rapidly generate and position low shear combinational concentration gradients for shear sensitive biological assays.

  1. Atmospheric gradients from very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Macmillan, D. S.

    1995-01-01

    Azimuthal asymmetries in the atmospheric refractive index can lead to errors in estimated vertical and horizontal station coordinates. Daily average gradient effects can be as large as 50 mm of delay at a 7 deg elevation. To model gradients, the constrained estimation of gradient parameters was added to the standard VLBI solution procedure. Here the analysis of two sets of data is summarized: the set of all geodetic VLBI experiments from 1990-1993 and a series of 12 state-of-the-art R&D experiments run on consecutive days in January 1994. In both cases, when the gradient parameters are estimated, the overall fit of the geodetic solution is improved at greater than the 99% confidence level. Repeatabilities of baseline lengths ranging up to 11,000 km are improved by 1 to 8 mm in a root-sum-square sense. This varies from about 20% to 40% of the total baseline length scatter without gradient modeling for the 1990-1993 series and 40% to 50% for the January series. Gradients estimated independently for each day as a piecewise linear function are mostly continuous from day to day within their formal uncertainties.

  2. Solution of nonlinear multivariable constrained systems using a gradient projection digital algorithm that is insensitive to the initial state

    NASA Technical Reports Server (NTRS)

    Hargrove, A.

    1982-01-01

    Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.

  3. The dynamics of plate tectonics and mantle flow: from local to global scales.

    PubMed

    Stadler, Georg; Gurnis, Michael; Burstedde, Carsten; Wilcox, Lucas C; Alisic, Laura; Ghattas, Omar

    2010-08-27

    Plate tectonics is regulated by driving and resisting forces concentrated at plate boundaries, but observationally constrained high-resolution models of global mantle flow remain a computational challenge. We capitalized on advances in adaptive mesh refinement algorithms on parallel computers to simulate global mantle flow by incorporating plate motions, with individual plate margins resolved down to a scale of 1 kilometer. Back-arc extension and slab rollback are emergent consequences of slab descent in the upper mantle. Cold thermal anomalies within the lower mantle couple into oceanic plates through narrow high-viscosity slabs, altering the velocity of oceanic plates. Viscous dissipation within the bending lithosphere at trenches amounts to approximately 5 to 20% of the total dissipation through the entire lithosphere and mantle.

  4. Constraints on Southern Ocean CO2 Fluxes and Seasonality from Atmospheric Vertical Gradients Observed on Multiple Airborne Campaigns

    NASA Astrophysics Data System (ADS)

    McKain, K.; Sweeney, C.; Stephens, B. B.; Long, M. C.; Jacobson, A. R.; Basu, S.; Chatterjee, A.; Weir, B.; Wofsy, S. C.; Atlas, E. L.; Blake, D. R.; Montzka, S. A.; Stern, R.

    2017-12-01

    The Southern Ocean plays an important role in the global carbon cycle and climate system, but net CO2 flux into the Southern Ocean is difficult to measure and model because it results from large opposing and seasonally-varying fluxes due to thermal forcing, biological uptake, and deep-water mixing. We present an analysis to constrain the seasonal cycle of net CO2 exchange with the Southern Ocean, and the magnitude of summer uptake, using the vertical gradients in atmospheric CO2 observed during three aircraft campaigns in the southern polar region. The O2/N2 Ratio and CO2 Airborne Southern Ocean Study (ORCAS) was an airborne campaign that intensively sampled the atmosphere at 0-13 km altitude and 45-75 degrees south latitude in the austral summer (January-February) of 2016. The global airborne campaigns, the HIAPER Pole-to-Pole Observations (HIPPO) study and the Atmospheric Tomography Mission (ATom), provide additional measurements over the Southern Ocean from other seasons and multiple years (2009-2011, 2016-2017). Derivation of fluxes from measured vertical gradients requires robust estimates of the residence time of air in the polar tropospheric domain, and of the contribution of long-range transport from northern latitudes outside the domain to the CO2 gradient. We use diverse independent approaches to estimate both terms, including simulations using multiple transport and flux models, and observed gradients of shorter-lived tracers with specific source regions and well-known loss processes. This study demonstrates the utility of aircraft profile measurements for constraining large-scale air-sea fluxes for the Southern Ocean, in contrast to those derived from the extrapolation of sparse ocean and atmospheric measurements and uncertain flux parameterizations.

  5. Slab Geometry and Segmentation on Seismogenic Subduction Zone; Insight from gravity gradients

    NASA Astrophysics Data System (ADS)

    Saraswati, A. T.; Mazzotti, S.; Cattin, R.; Cadio, C.

    2017-12-01

    Slab geometry is a key parameter for improving seismic hazard assessment in subduction zones. In many cases, information about structures beneath subduction zones is obtained from dedicated geophysical studies, including geodetic and seismic measurements. However, due to the lack of global information, both the geometry and the segmentation of the seismogenic zone in many subduction zones remain poorly constrained. Here we propose an alternative approach based on satellite gravity observations. The GOCE (Gravity field and steady-state Ocean Circulation Explorer) mission makes it possible to probe the Earth's deep mass structures from gravity gradients, which are more sensitive to spatial structure geometry and directional properties than classical gravitational data. Forward modeling of the gravity gradients of the modeled slab is performed using both horizontal and vertical gravity gradient components, which constrains the slab model better than the vertical gradient alone. Using a polyhedron method, a topographic correction is applied to the gravity gradient signal to enhance the anomaly signal of lithospheric structures. Afterward, we compare the residual gravity gradients with the calculated signals associated with the slab geometry. In this preliminary study, straightforward models are used to better understand the characteristics of gravity gradient signals due to deep mass sources. We pay special attention to the delineation of slab borders and dip angle variations.

  6. Optimum Strategies for Selecting Descent Flight-Path Angles

    NASA Technical Reports Server (NTRS)

    Wu, Minghong G. (Inventor); Green, Steven M. (Inventor)

    2016-01-01

    An information processing system and method for adaptively selecting an aircraft descent flight path for an aircraft are provided. The system receives flight adaptation parameters, including aircraft flight descent time period, aircraft flight descent airspace region, and aircraft flight descent flyability constraints. The system queries a plurality of flight data sources and retrieves flight information including any of winds and temperatures aloft data, airspace/navigation constraints, airspace traffic demand, and airspace arrival delay model. The system calculates a set of candidate descent profiles, each defined by at least one of a flight path angle and a descent rate, and each including an aggregated total fuel consumption value for the aircraft following a calculated trajectory, and a flyability constraints metric for the calculated trajectory. The system selects a best candidate descent profile having the least fuel consumption value while the flyability constraints metric remains within aircraft flight descent flyability constraints.

  7. Cannibalism and activity rate in larval damselflies increase along a latitudinal gradient as a consequence of time constraints.

    PubMed

    Sniegula, Szymon; Golab, Maria J; Johansson, Frank

    2017-07-14

    Predation is ubiquitous in nature. One form of predation is cannibalism, which is affected by many factors such as size structure and resource density. However, cannibalism may also be influenced by abiotic factors such as seasonal time constraints. Since time constraints are greater at high latitudes, cannibalism could be stronger at such latitudes, but we know next to nothing about latitudinal variation in cannibalism. In this study, we examined cannibalism and activity in larvae of the damselfly Lestes sponsa along a latitudinal gradient across Europe. We did this by raising larvae from the egg stage at different temperatures and photoperiods corresponding to different latitudes. We found that the more seasonally time-constrained populations in northern latitudes and individuals subjected to greater seasonal time constraints exhibited a higher level of cannibalism. We also found that activity was higher at north latitude conditions, and thus correlated with cannibalism, suggesting that this behaviour mediates higher levels of cannibalism in time-constrained animals. Our results go counter to the classical latitude-predation pattern which predicts higher predation at lower latitudes, since we found that predation was stronger at higher latitudes. The differences in cannibalism might have implications for population dynamics along the latitudinal gradients, but further experiments are needed to explore this.

  8. A method to stabilize linear systems using eigenvalue gradient information

    NASA Technical Reports Server (NTRS)

    Wieseman, C. D.

    1985-01-01

    Formal optimization methods and eigenvalue gradient information are used to develop a stabilizing control law for a closed loop linear system that is initially unstable. The method was originally formulated by using direct, constrained optimization methods with the constraints being the real parts of the eigenvalues. However, because of problems in trying to achieve stabilizing control laws, the problem was reformulated to be solved differently. The method described uses the Davidon-Fletcher-Powell minimization technique to solve an indirect, constrained minimization problem in which the performance index is the Kreisselmeier-Steinhauser function of the real parts of all the eigenvalues. The method is applied successfully to solve two different problems: the determination of a fourth-order control law that stabilizes a single-input single-output active flutter suppression system and the determination of a second-order control law for a multi-input multi-output lateral-directional flight control system. Various sets of design variables and initial starting points were chosen to show the robustness of the method.
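
    The Kreisselmeier-Steinhauser function mentioned above aggregates many constraint values into one smooth, max-like scalar that a gradient-based minimizer can work with. A minimal sketch follows; rho is an illustrative sharpness parameter, not a value from the paper.

      import numpy as np

      def ks_aggregate(g, rho=50.0):
          # Kreisselmeier-Steinhauser aggregate of constraint values g (e.g. the
          # real parts of the closed-loop eigenvalues) and its gradient w.r.t. g.
          gmax = np.max(g)
          w = np.exp(rho * (g - gmax))
          ks = gmax + np.log(np.sum(w)) / rho    # smooth upper bound on max(g)
          dks_dg = w / np.sum(w)
          return ks, dks_dg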

  9. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

    PubMed Central

    Neftci, Emre O.; Augustine, Charles; Paul, Somnath; Detorakis, Georgios

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning. PMID:28680387

  10. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.

    PubMed

    Neftci, Emre O; Augustine, Charles; Paul, Somnath; Detorakis, Georgios

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient descent Gradient Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.

  11. Natural learning in NLDA networks.

    PubMed

    González, Ana; Dorronsoro, José R

    2007-07-01

    Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X,W)] of a certain random vector Z, and we then define I = E[Z(X,W)Z(X,W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher cost per iteration is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
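
    Using the definitions quoted above (the gradient as the expectation of Z and the information matrix as the second moment of Z), a single natural-gradient step can be sketched as below; the damping term is an added assumption for numerical stability, not part of the paper.

      import numpy as np

      def natural_gradient_step(Z, lr=0.1, damping=1e-6):
          # Z: (n_samples, n_params) array of per-sample gradient vectors Z(x, W).
          gradJ = Z.mean(axis=0)                     # gradJ = E[Z]
          I = (Z.T @ Z) / Z.shape[0]                 # I = E[Z Z^T]
          I += damping * np.eye(Z.shape[1])          # damping: added assumption
          return -lr * np.linalg.solve(I, gradJ)     # natural-gradient update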

  12. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Martin, C. H.; Gonsalvez, D. J.; Levis, C. A.; Wang, C. W.

    1983-01-01

    Progress is reported on a computer code to improve the efficiency of spectrum and orbit utilization for the Broadcasting Satellite Service in the 12 GHz band for Region 2. It implements a constrained gradient search procedure using an exponential objective function based on aggregate signal to noise ratio and an extended line search in the gradient direction. The procedure is tested against a manually generated initial scenario and appears to work satisfactorily. In this test it was assumed that alternate channels use orthogonal polarizations at any one satellite location.

  13. Staging optics considerations for a plasma wakefield acceleration linear collider

    NASA Astrophysics Data System (ADS)

    Lindstrøm, C. A.; Adli, E.; Allen, J. M.; Delahaye, J. P.; Hogan, M. J.; Joshi, C.; Muggli, P.; Raubenheimer, T. O.; Yakimenko, V.

    2016-09-01

    Plasma wakefield acceleration offers acceleration gradients of several GeV/m, ideal for a next-generation linear collider. The beam optics requirements between plasma cells include injection and extraction of drive beams, matching the main beam beta functions into the next cell, canceling dispersion as well as constraining bunch lengthening and chromaticity. To maintain a high effective acceleration gradient, this must be accomplished in the shortest distance possible. A working example is presented, using novel methods to correct chromaticity, as well as scaling laws for a high energy regime.

  14. Region Segmentation in the Frequency Domain Applied to Upper Airway Real-Time Magnetic Resonance Images

    PubMed Central

    Narayanan, Shrikanth

    2009-01-01

    We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005

  15. Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.

    PubMed

    Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban

    2015-07-20

    In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.

  16. A new version of Stochastic-parallel-gradient-descent algorithm (SPGD) for phase correction of a distorted orbital angular momentum (OAM) beam

    NASA Astrophysics Data System (ADS)

    Jiao Ling, Lin; Xiaoli, Yin; Huan, Chang; Xiaozhou, Cui; Yi-Lin, Guo; Huan-Yu, Liao; Chun-Yu, Gao; Guohua, Wu; Guang-Yao, Liu; Jin-Kun, Jiang; Qing-Hua, Tian

    2018-02-01

    Atmospheric turbulence limits the performance of orbital angular momentum-based free-space optical communication (FSO-OAM) systems. In order to compensate for the phase distortion induced by atmospheric turbulence, wavefront sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper, a new version of SPGD called MZ-SPGD is proposed, which combines the Z-SPGD based on the deformable mirror influence function and the M-SPGD based on the Zernike polynomials. Numerical simulations show that the hybrid method decreases convergence times markedly while achieving the same compensation effect as Z-SPGD and M-SPGD.

  17. CP decomposition approach to blind separation for DS-CDMA system using a new performance index

    NASA Astrophysics Data System (ADS)

    Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss

    2014-12-01

    In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and can be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.

  18. Molecular Diagnostics of the Internal Motions of Massive Cores

    NASA Astrophysics Data System (ADS)

    Pineda, Jorge; Velusamy, T.; Goldsmith, P.; Li, D.; Peng, R.; Langer, W.

    2009-12-01

    We present models of the internal kinematics of massive cores in the Orion molecular cloud. We use a sample of cores studied by Velusamy et al. (2008) that show red, blue, and no asymmetry in their HCO+ line profiles in equal proportion, and which therefore may represent a sample of cores in different kinematic states. We use the radiative transfer code RATRAN (Hogerheijde & van der Tak 2000) to model several transitions of HCO+ and H13CO+, as well as the dust continuum emission, of a spherical model cloud with radial density, temperature, and velocity gradients. We find that excitation and velocity gradients are prerequisites to reproduce the observed line profiles. We use the dust continuum emission to constrain the density and temperature gradients. This allows us to narrow down the functional forms of the velocity gradient, giving us the opportunity to test several theoretical predictions of velocity gradients produced by the effect of magnetic fields (e.g., Tassis et al. 2007) and turbulence (e.g., Vazquez-Semadeni et al. 2007).

  19. Matter coupling in partially constrained vielbein formulation of massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Mukohyama, Shinji; Gümrükçüoğlu, A. Emir

    2016-01-01

    We consider a linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of ghost at all scales. Next, we investigate the cosmological application of this coupling in this new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display importantly different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields a slightly different allowed parameter space.

  20. Matter coupling in partially constrained vielbein formulation of massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Gümrükçüoğlu, A. Emir; Heisenberg, Lavinia

    2016-01-04

    We consider a linear effective vielbein matter coupling without introducing the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of ghost at all scales. Next, we investigate the cosmological application of this coupling in this new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display importantly different features in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields a slightly different allowed parameter space.

  1. A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.

    2005-01-01

    We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty under either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, resistance to trapping in false minima, and long-term optimization.

  2. A comparison of discrete versus continuous adjoint states to invert groundwater flow in heterogeneous dual porosity systems

    NASA Astrophysics Data System (ADS)

    Delay, Frederick; Badri, Hamid; Fahs, Marwan; Ackerer, Philippe

    2017-12-01

    Dual porosity models become increasingly used for simulating groundwater flow at the large scale in fractured porous media. In this context, model inversions with the aim of retrieving the system heterogeneity are frequently faced with huge parameterizations for which descent methods of inversion with the assistance of adjoint state calculations are well suited. We compare the performance of discrete and continuous forms of adjoint states associated with the flow equations in a dual porosity system. The discrete form inherits from previous works by some of the authors, as the continuous form is completely new and here fully differentiated for handling all types of model parameters. Adjoint states assist descent methods by calculating the gradient components of the objective function, these being a key to good convergence of inverse solutions. Our comparison on the basis of synthetic exercises show that both discrete and continuous adjoint states can provide very similar solutions close to reference. For highly heterogeneous systems, the calculation grid of the continuous form cannot be too coarse, otherwise the method may show lack of convergence. This notwithstanding, the continuous adjoint state is the most versatile form as its non-intrusive character allows for plugging an inversion toolbox quasi-independent from the code employed for solving the forward problem.

  3. The mid-cretaceous water bearer: Isotope mass balance quantification of the Albian hydrologic cycle

    USGS Publications Warehouse

    Ufnar, David F.; Gonzalez, Luis A.; Ludvigson, Greg A.; Brenner, Richard L.; Witzke, B.J.

    2002-01-01

    A latitudinal gradient in meteoric δ18O compositions compiled from paleosol sphaerosiderites throughout the Cretaceous Western Interior Basin (KWIB) (34-75°N paleolatitude) exhibits a steeper, more depleted trend than modern (predicted) values (3.0‰ [34°N latitude] to 9.7‰ [75°N] lighter). Furthermore, the sphaerosiderite meteoric δ18O latitudinal gradient is significantly steeper and more depleted (5.8‰ [34°N] to 13.8‰ [75°N] lighter) than a predicted gradient for the warm mid-Cretaceous using modern empirical temperature-δ18O precipitation relationships. We have suggested that the steeper and more depleted (relative to the modern theoretical gradient) meteoric sphaerosiderite δ18O latitudinal gradient resulted from increased air mass rainout effects in coastal areas of the KWIB during the mid-Cretaceous. The sphaerosiderite isotopic data have been used to constrain a mass balance model of the hydrologic cycle in the northern hemisphere and to quantify precipitation rates of the equable 'greenhouse' Albian Stage in the KWIB. The mass balance model tracks the evolving isotopic composition of an air mass and its precipitation, and is driven by latitudinal temperature gradients. Our simulations indicate that significant increases in Albian precipitation (34-52%) and evaporation fluxes (76-96%) are required to reproduce the difference between modern and Albian meteoric siderite δ18O latitudinal gradients. Calculations of precipitation rates from model outputs suggest mid-high latitude precipitation rates greatly exceeded modern rates (156-220% greater in mid latitudes [2600-3300 mm/yr], 99% greater at high latitudes [550 mm/yr]). The calculated precipitation rates are significantly different from the precipitation rates predicted by some recent general circulation models (GCMs) for the warm Cretaceous, particularly in the mid to high latitudes. Our mass balance model by no means replaces GCMs. However, it is a simple and effective means of obtaining quantitative data regarding the mid-Cretaceous hydrologic cycle in the KWIB. Our goal is to encourage the incorporation of isotopic tracers into GCM simulations of the mid-Cretaceous, and to show how our empirical data and mass balance model estimates help constrain the boundary conditions. © 2002 Elsevier Science B.V. All rights reserved.

  4. Flight Management System Execution of Idle-Thrust Descents in Operations

    NASA Technical Reports Server (NTRS)

    Stell, Laurel L.

    2011-01-01

    To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its error models, commercial flights executed idle-thrust descents, and the recorded data includes the target speed profile and FMS intent trajectories. The FMS computes the intended descent path assuming idle thrust after top of descent (TOD), and any intervention by the controllers that alters the FMS execution of the descent is recorded so that such flights are discarded from the analysis. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location are extracted from the radar data. Using more than 60 descents in Boeing 777 aircraft, the actual speeds are compared to the intended descent speed profile. In addition, three aspects of the accuracy of the FMS intent trajectory are analyzed: the meter fix crossing time, the TOD location, and the altitude at the meter fix. The actual TOD location is within 5 nmi of the intent location for over 95% of the descents. Roughly 90% of the time, the airspeed is within 0.01 of the target Mach number and within 10 KCAS of the target descent CAS, but the meter fix crossing time is only within 50 sec of the time computed by the FMS. Overall, the aircraft seem to be executing the descents as intended by the designers of the onboard automation.

  5. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all directions of the parameter space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a grant from the Human Brain Project (HBP CLAP).
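
    For an exponential-family maximum entropy model, the log-likelihood gradient is the difference between the empirical and model averages of the observables, so plain steepest ascent reduces to the sketch below; sample_model_avg is a hypothetical placeholder for a Gibbs-sampling estimate, and the paper's rectified dynamics replaces this naive step with a preconditioned one.

      import numpy as np

      def maxent_ascent_step(lmbda, data_avg, sample_model_avg, lr=0.05):
          # One steepest-ascent step on the log-likelihood of a maximum entropy model:
          # the gradient is (empirical averages) - (model averages).
          model_avg = sample_model_avg(lmbda)      # e.g. Monte Carlo / Gibbs estimate
          return lmbda + lr * (np.asarray(data_avg) - np.asarray(model_avg))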

  6. An approximate, maximum terminal velocity descent to a point

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eisler, G.R.; Hull, D.G.

    1987-01-01

    No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum terminal velocity trajectory to a constrained final altitude and computation of the resulting unconstrained groundrange, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed groundrange. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensates for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent.

  7. Automatic Hazard Detection for Landers

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Cheng, Yang; Matthies, Larry H.

    2008-01-01

    Unmanned planetary landers to date have landed 'blind'; that is, without the benefit of onboard landing hazard detection and avoidance systems. This constrains landing site selection to very benign terrain, which in turn constrains the scientific agenda of missions. State-of-the-art Entry, Descent, and Landing (EDL) technology can land a spacecraft on Mars somewhere within a 20-100 km landing ellipse. Landing ellipses are very likely to contain hazards such as craters, discontinuities, steep slopes, and large rocks that can cause mission-fatal damage. We briefly review sensor options for landing hazard detection and identify a perception approach based on stereo vision and shadow analysis that addresses the broadest set of missions. Our approach fuses stereo vision and monocular shadow-based rock detection to maximize spacecraft safety. We summarize performance models for slope estimation and rock detection within this approach and validate those models experimentally. Instantiating our model of rock detection reliability for Mars predicts that this approach can reduce the probability of failed landing by at least a factor of 4 in any given terrain. We also describe a rock detector/mapper applied to large, high-resolution images from the Mars Reconnaissance Orbiter (MRO) for landing site characterization and selection for Mars missions.

  8. Novel maximum-margin training algorithms for supervised neural networks.

    PubMed

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors while avoiding the complexity of solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden-layer output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network was named assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.
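
    As an illustration of the margin idea behind MMGDX, the sketch below trains a one-hidden-layer MLP by plain gradient descent on a hinge (margin) objective, backpropagating through both layers. It is not the authors' MMGDX or Lp-norm objective; the architecture, learning rate, and loss are illustrative assumptions.

        import numpy as np

        def train_margin_mlp(X, y, n_hidden=8, lr=0.05, epochs=2000,
                             rng=np.random.default_rng(0)):
            """Gradient descent on a hinge (margin) objective for a one-hidden-layer MLP.

            X: (n, d) inputs; y: (n,) labels in {-1, +1}.  Only an illustration of
            backpropagating a margin-based loss through both layers; it is not the
            MMGDX algorithm of the paper.
            """
            n, d = X.shape
            W1 = rng.normal(scale=0.5, size=(d, n_hidden))
            b1 = np.zeros(n_hidden)
            w2 = rng.normal(scale=0.5, size=n_hidden)
            b2 = 0.0
            for _ in range(epochs):
                H = np.tanh(X @ W1 + b1)              # hidden activations
                f = H @ w2 + b2                       # output-layer score
                viol = y * f < 1.0                    # samples inside the margin
                # hinge loss mean(max(0, 1 - y f)); gradient comes only from violators
                g_f = np.where(viol, -y, 0.0) / n
                grad_w2 = H.T @ g_f
                grad_b2 = g_f.sum()
                g_H = np.outer(g_f, w2) * (1.0 - H**2)    # backprop through tanh
                grad_W1 = X.T @ g_H
                grad_b1 = g_H.sum(axis=0)
                W1 -= lr * grad_W1; b1 -= lr * grad_b1
                w2 -= lr * grad_w2; b2 -= lr * grad_b2
            return W1, b1, w2, b2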

  9. A Risk-Constrained Multi-Stage Decision Making Approach to the Architectural Analysis of Mars Missions

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki; Pavone, Marco; Balaram, J. (Bob)

    2012-01-01

    This paper presents a novel risk-constrained multi-stage decision making approach to the architectural analysis of planetary rover missions. In particular, focusing on a 2018 Mars rover concept, which was considered as part of a potential Mars Sample Return campaign, we model the entry, descent, and landing (EDL) phase and the rover traverse phase as four sequential decision-making stages. The problem is to find a sequence of divert and driving maneuvers so that the rover drive is minimized and the probability of a mission failure (e.g., due to a failed landing) is below a user-specified bound. By solving this problem for several different values of the model parameters (e.g., divert authority), this approach enables rigorous, accurate and systematic trade-offs for the EDL system vs. the mobility system and, more generally, cross-domain trade-offs for the different phases of a space mission. The overall optimization problem can be seen as a chance-constrained dynamic programming problem, with the additional complexity that 1) in some stages the disturbances do not have any probabilistic characterization, and 2) the state space is extremely large (i.e., hundreds of millions of states for trade-offs with high-resolution Martian maps). To this purpose, we solve the problem by performing an unconventional combination of average and minimax cost analysis and by leveraging highly efficient computation tools from the image processing community. Preliminary trade-off results are presented.

  10. METALLICITY GRADIENTS THROUGH DISK INSTABILITY: A SIMPLE MODEL FOR THE MILKY WAY'S BOXY BULGE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Valpuesta, Inma; Gerhard, Ortwin, E-mail: imv@mpe.mpg.de, E-mail: gerhard@mpe.mpg.de

    2013-03-20

    Observations show a clear vertical metallicity gradient in the Galactic bulge, which is often taken as a signature of dissipative processes in the formation of a classical bulge. Various evidence shows, however, that the Milky Way is a barred galaxy with a boxy bulge representing the inner three-dimensional part of the bar. Here we show with a secular evolution N-body model that a boxy bulge formed through bar and buckling instabilities can show vertical metallicity gradients similar to the observed gradient if the initial axisymmetric disk had a comparable radial metallicity gradient. In this framework, the range of metallicities in bulge fields constrains the chemical structure of the Galactic disk at early times before bar formation. Our secular evolution model was previously shown to reproduce inner Galaxy star counts and we show here that it also has cylindrical rotation. We use it to predict a full mean metallicity map across the Galactic bulge from a simple metallicity model for the initial disk. This map shows a general outward gradient on the sky as well as longitudinal perspective asymmetries. We also briefly comment on interpreting metallicity gradient observations in external boxy bulges.

  11. A Nonlinear Programming Perspective on Sensitivity Calculations for Systems Governed by State Equations

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael

    1997-01-01

    This paper discusses the calculation of sensitivities, or derivatives, for optimization problems involving systems governed by differential equations and other state relations. The subject is examined from the point of view of nonlinear programming, beginning with the analytical structure of the first and second derivatives associated with such problems and the relation of these derivatives to implicit differentiation and equality constrained optimization. We also outline an error analysis of the analytical formulae and compare the results with similar results for finite-difference estimates of derivatives. We then investigate the nature of the adjoint method and the adjoint equations and their relation to directions of steepest descent. We illustrate the points discussed with an optimization problem in which the variables are the coefficients in a differential operator.

  12. Deep neural mapping support vector machines.

    PubMed

    Li, Yujian; Zhang, Ting

    2017-09-01

    The choice of kernel has an important effect on the performance of a support vector machine (SVM). The effect can be reduced by NEUROSVM, an architecture that uses a multilayer perceptron for feature extraction and an SVM for classification. In binary classification, a general linear-kernel NEUROSVM can be theoretically simplified to an input layer, many hidden layers, and an SVM output layer. As a feature extractor, the sub-network composed of the input and hidden layers is first trained together with a virtual ordinary output layer by backpropagation; the output of its last hidden layer is then taken as the input of the SVM classifier, which is trained separately. By taking the sub-network as a kernel mapping from the original input space into a feature space, we present a novel model, called deep neural mapping support vector machine (DNMSVM), from the viewpoint of deep learning. This model is also a new and general kernel learning method, where the kernel mapping is an explicit function expressed as a sub-network, rather than an implicit function induced by a traditional kernel function. Moreover, we exploit a two-stage procedure of contrastive divergence learning and gradient descent for DNMSVM to jointly train an adaptive kernel mapping instead of a kernel function, without requiring kernel tricks. Taking the sub-network and the SVM classifier as a whole, DNMSVM is jointly trained by using gradient descent to optimize the objective function, with the sub-network pre-trained layer-wise via contrastive divergence learning of restricted Boltzmann machines. Compared with the separate training of NEUROSVM, this joint training is a new algorithm that gives DNMSVM advantages over NEUROSVM. Experimental results show that DNMSVM can outperform NEUROSVM and RBFSVM (i.e., SVM with a radial basis function kernel), demonstrating its effectiveness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Application of artificial neural network to predict clay sensitivity in a high landslide prone area using CPTu data- A case study in Southwest of Sweden

    NASA Astrophysics Data System (ADS)

    Shahri, Abbas; Mousavinaseri, Mahsasadat; Naderi, Shima; Espersson, Maria

    2015-04-01

    The application of Artificial Neural Networks (ANNs) in many areas of engineering, and in particular to geotechnical engineering problems such as site characterization, has demonstrated some degree of success. The present paper aims to evaluate the feasibility of several types of ANN models for predicting the clay sensitivity of soft clays from piezocone penetration test (CPTu) data. To this end, a database of CPTu data from 70 test points around the Göta River near Lilli Edet in the southwest of Sweden, a highly landslide-prone area, was collected and used as input for the ANNs. Quick propagation, conjugate gradient descent, quasi-Newton, limited-memory quasi-Newton, and Levenberg-Marquardt training algorithms were developed, tested, and trained using the CPTu data to provide a comparison between the results of the field investigation and the ANN estimates of clay sensitivity. Clay sensitivity was chosen because of its relation to landslides in Sweden: a special highly sensitive clay, known as quick clay, is considered mainly responsible for the landslides experienced in Sweden, as it has high sensitivity and is prone to sliding. The training and testing program started with a 3-2-1 ANN architecture. After testing several architectures and changing the hidden layers in order to obtain a higher output resolution, a 3-4-4-3-1 architecture was adopted. The tests showed that increasing the number of hidden layers up to four can improve the results, and the 3-4-4-3-1 ANNs give a reliable and reasonable prediction of clay sensitivity. The obtained results showed that the conjugate gradient descent algorithm, with R2 = 0.897, has the best performance among the tested algorithms. Keywords: clay sensitivity, landslide, Artificial Neural Network

  14. Efficacy of Metarhizium anisopliae isolate MAX-2 from Shangri-la, China under desiccation stress

    PubMed Central

    2014-01-01

    Background Metarhizium anisopliae, a soil-borne entomopathogen found worldwide, is an interesting fungus for biological control. However, its efficacy in the field is significantly affected by environmental conditions, particularly moisture. To overcome this weakness of Metarhizium and identify isolates with antistress capacity, the efficacies of four M. anisopliae isolates, which were collected from arid regions of Yunnan Province in China during the dry season, were determined at different moisture levels, and the efficacy of the isolate MAX-2 from Shangri-la under desiccation stress was evaluated at a low moisture level. Results M. anisopliae isolates MAX-2, MAC-6, MAL-1, and MAQ-28 showed gradient descent efficacies against sterile Tenebrio molitor larvae, and gradient descent capacities against desiccation, with the decrease in moisture levels. The efficacy of MAX-2 showed no significant difference from those of the other isolates at the 35% moisture level; however, significant differences were found at 8% to 30% moisture levels. The efficacies of all isolates decreased with the decrease in moisture levels. MAX-2 was relatively less affected by desiccation stress: its efficacy was almost unaffected at moisture levels > 25%, but slowly decreased at moisture levels < 25%. By contrast, the efficacies of the other isolates rapidly decreased with the decrease in moisture levels. MAX-2 caused different infection characteristics on T. molitor larvae under desiccation stress and in a wet microhabitat. Local black patches were found on the cuticles of the insects, and the cadavers dried without fungal growth under desiccation stress, whereas dark black internodes and fungal growth were found after the death of the insects in the wet microhabitat. Conclusions MAX-2 showed significantly higher efficacy and superior antistress capacity compared with the other isolates under desiccation stress. The infection of sterile T. molitor larvae at a low moisture level constituted a valid laboratory bioassay system for evaluating M. anisopliae efficacy under desiccation stress. PMID:24383424

  15. Efficacy of Metarhizium anisopliae isolate MAX-2 from Shangri-la, China under desiccation stress.

    PubMed

    Chen, Zi-Hong; Xu, Ling; Yang, Feng-lian; Ji, Guang-Hai; Yang, Jing; Wang, Jian-Yun

    2014-01-03

    Metarhizium anisopliae, a soil-borne entomopathogen found worldwide, is an interesting fungus for biological control. However, its efficacy in the field is significantly affected by environmental conditions, particularly moisture. To overcome this weakness of Metarhizium and identify isolates with antistress capacity, the efficacies of four M. anisopliae isolates, which were collected from arid regions of Yunnan Province in China during the dry season, were determined at different moisture levels, and the efficacy of the isolate MAX-2 from Shangri-la under desiccation stress was evaluated at a low moisture level. M. anisopliae isolates MAX-2, MAC-6, MAL-1, and MAQ-28 showed gradient descent efficacies against sterile Tenebrio molitor larvae, and gradient descent capacities against desiccation, with the decrease in moisture levels. The efficacy of MAX-2 showed no significant difference from those of the other isolates at the 35% moisture level; however, significant differences were found at 8% to 30% moisture levels. The efficacies of all isolates decreased with the decrease in moisture levels. MAX-2 was relatively less affected by desiccation stress: its efficacy was almost unaffected at moisture levels > 25%, but slowly decreased at moisture levels < 25%. By contrast, the efficacies of the other isolates rapidly decreased with the decrease in moisture levels. MAX-2 caused different infection characteristics on T. molitor larvae under desiccation stress and in a wet microhabitat. Local black patches were found on the cuticles of the insects, and the cadavers dried without fungal growth under desiccation stress, whereas dark black internodes and fungal growth were found after the death of the insects in the wet microhabitat. MAX-2 showed significantly higher efficacy and superior antistress capacity compared with the other isolates under desiccation stress. The infection of sterile T. molitor larvae at a low moisture level constituted a valid laboratory bioassay system for evaluating M. anisopliae efficacy under desiccation stress.

  16. Smoothing of cost function leads to faster convergence of neural network learning

    NASA Astrophysics Data System (ADS)

    Xu, Li-Qun; Hall, Trevor J.

    1994-03-01

    One of the major problems in supervised learning of neural networks is the inevitable presence of local minima in the cost function f(W, D). This often makes classic gradient-descent-based learning algorithms, which calculate the weight updates for each iteration according to ΔW(t) = -η ∇_W f(W, D), powerless. In this paper we describe a new strategy to solve this problem which adaptively changes the learning rate and manipulates the gradient estimator simultaneously. The idea is to implicitly convert the local-minima-laden cost function f(·) into a sequence of its smoothed versions {f_{β_t}}, t = 1, ..., T, which, subject to the parameter β_t, bears fewer details at time t = 1 and gradually more later on; the learning is actually performed on this sequence of functionals. The corresponding smoothed global minima obtained in this way, {W_t}, t = 1, ..., T, thus progressively approximate W, the desired global minimum. Experimental results on a nonconvex function minimization problem and a typical neural network learning task are given; analyses and discussions of some important issues are provided.
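
    A toy version of this smoothing strategy is sketched below: gradient descent is run on Gaussian-smoothed versions f_β of a non-convex cost, with β decreased on a schedule, so early iterations see a coarse landscape and later ones recover the full detail. The smoothed gradient is estimated by Monte Carlo sampling; the schedule, step size, and test function are illustrative assumptions rather than the paper's setup.

        import numpy as np

        def smoothed_descent(f, w0, betas=(2.0, 1.0, 0.5, 0.1), lr=0.05,
                             steps=200, n_samples=64, rng=np.random.default_rng(0)):
            """Gradient descent on a sequence of Gaussian-smoothed versions of f.

            f_beta(w) = E[f(w + beta*z)], z ~ N(0, I).  Early stages use a large
            beta (heavily smoothed, few details), later ones a small beta, so the
            iterate tracks the smoothed minima toward a good minimum of f.  The
            smoothed gradient is estimated by Monte Carlo; this is a generic
            sketch of the smoothing idea, not the paper's exact scheme.
            """
            w = np.atleast_1d(np.asarray(w0, dtype=float))
            for beta in betas:                        # decreasing smoothing schedule
                for _ in range(steps):
                    z = rng.standard_normal((n_samples, w.size))
                    f_w = f(w)
                    fw = np.array([f(w + beta * zi) for zi in z])
                    # E[(f(w + beta z) - f(w)) z] / beta estimates grad of smoothed f
                    grad = ((fw - f_w)[:, None] * z).mean(axis=0) / beta
                    w -= lr * grad
            return w

        # example: a 1-D cost with many local minima and its global minimum near w = 0
        cost = lambda w: float(np.sum(w**2) + 2.0 * np.sum(np.sin(8.0 * w)**2))
        print(smoothed_descent(cost, w0=[2.5]))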

  17. A hybrid neural network model for noisy data regression.

    PubMed

    Lee, Eric W M; Lim, Chee Peng; Yuen, Richard K K; Lo, S M

    2004-04-01

    A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (FA ART) and the general regression neural network (GRNN), is proposed in this paper. Both FA and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted as GRNNFA, is able to retain these advantages and, at the same time, to reduce the computational requirements in calculating and storing information of the kernels. A clustering version of the GRNN is designed with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by the geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets have been conducted to assess and compare effectiveness of GRNNFA with other approaches. The GRNNFA model is also employed in a novel application task for predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA in noisy data regression problems.

  18. Broiler weight estimation based on machine vision and artificial neural network.

    PubMed

    Amraei, S; Abdanan Mehdizadeh, S; Salari, S

    2017-04-01

    1. Machine vision and artificial neural network (ANN) procedures were used to estimate live body weight of broiler chickens in 30 1-d-old broiler chickens reared for 42 d. 2. Imaging was performed two times daily. To localise chickens within the pen, an ellipse fitting algorithm was used and the chickens' head and tail removed using the Chan-Vese method. 3. The correlations between the body weight and 6 physical extracted features indicated that there were strong correlations between body weight and the 5 features including area, perimeter, convex area, major and minor axis length. 5. According to statistical analysis there was no significant difference between morning and afternoon data over 42 d. 6. In an attempt to improve the accuracy of live weight approximation different ANN techniques, including Bayesian regulation, Levenberg-Marquardt, Scaled conjugate gradient and gradient descent were used. Bayesian regulation with R 2 value of 0.98 was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.

  19. Adaptation to Space: An Introduction

    NASA Technical Reports Server (NTRS)

    Hargens, Alan R.

    1995-01-01

    The cardiovascular and musculoskeletal systems are normally exposed to gradients of blood pressure and weight on Earth. These gradients increase blood pressure and tissue weight in dependent tissues of the body. Exposure to actual and simulated microgravity causes blood and tissue fluid to shift from the legs to the head. Studies of humans in space have documented facial edema, space motion sickness, decreased plasma volume, muscle atrophy, and loss of bone strength. Return of astronauts to Earth is accompanied by orthostatic intolerance, decreased neuromuscular coordination, and reduced exercise capacity. These factors decrease performance during descent from orbit and increase risk during emergency egress from the spacecraft. Models of simulated microgravity include 6 deg head-down tilt, immersion, and prolonged horizontal bedrest. Head-down tilt is the most accepted model and studies using this model of up to one year have been performed in Russia. Animal models which offer clear insights into the role of gravity on vertebrates include the developing giraffe and snakes from various habitats. Finally, possible countermeasures to speed readaptation of astronauts to gravity after prolonged space flight will be discussed.

  20. One Giant Leap for Categorizers: One Small Step for Categorization Theory

    PubMed Central

    Smith, J. David; Ell, Shawn W.

    2015-01-01

    We explore humans’ rule-based category learning using analytic approaches that highlight their psychological transitions during learning. These approaches confirm that humans show qualitatively sudden psychological transitions during rule learning. These transitions contribute to the theoretical literature contrasting single vs. multiple category-learning systems, because they seem to reveal a distinctive learning process of explicit rule discovery. A complete psychology of categorization must describe this learning process, too. Yet extensive formal-modeling analyses confirm that a wide range of current (gradient-descent) models cannot reproduce these transitions, including influential rule-based models (e.g., COVIS) and exemplar models (e.g., ALCOVE). It is an important theoretical conclusion that existing models cannot explain humans’ rule-based category learning. The problem these models have is the incremental algorithm by which learning is simulated. Humans descend no gradient in rule-based tasks. Very different formal-modeling systems will be required to explain humans’ psychology in these tasks. An important next step will be to build a new generation of models that can do so. PMID:26332587

  1. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.

    PubMed

    Sokoloski, Sacha

    2017-09-01

    In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
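
    The training step described here, stochastic gradient descent on the negative log-likelihood of the network parameters, is sketched below for the simplest case of a linear-softmax readout of a discrete (finite-state) stimulus. It is a generic NLL/SGD illustration, not the probabilistic-population-code circuit or the gradient approximation of the note; the model, learning rate, and epoch count are assumptions.

        import numpy as np

        def sgd_nll_softmax(X, y, n_classes, lr=0.1, epochs=20,
                            rng=np.random.default_rng(0)):
            """Stochastic gradient descent on the negative log-likelihood of a
            linear-softmax model.

            X: (n, d) population responses; y: (n,) integer stimulus states.
            """
            n, d = X.shape
            W = np.zeros((d, n_classes))
            b = np.zeros(n_classes)
            for _ in range(epochs):
                for i in rng.permutation(n):          # one sample per update
                    logits = X[i] @ W + b
                    p = np.exp(logits - logits.max())
                    p /= p.sum()                      # predicted posterior over states
                    g = p.copy(); g[y[i]] -= 1.0      # gradient of the NLL w.r.t. logits
                    W -= lr * np.outer(X[i], g)
                    b -= lr * g
            return W, b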

  2. Field evaluation of flight deck procedures for flying CTAS descents

    DOT National Transportation Integrated Search

    1997-01-01

    Flight deck descent procedures were developed for a field evaluation of the CTAS Descent Advisor conducted in the fall of 1995. During this study, CTAS descent clearances were issued to 185 commercial flights at Denver International Airport. Data col...

  3. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction of X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections or POCS (FS-POCS) to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error, and non-negativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than through empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of bounded TV. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance in reconstruction speed, image quality, and quantification. PMID:28253298
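
    The alternating-projection idea can be illustrated on a toy problem: repeatedly project the iterate onto a data-consistency set and onto a nonnegativity set, and the iterate converges toward their intersection. The sketch below uses an exact affine projection and a simple nonnegativity clip; the bounded-TV set and the PDHG solver of FS-POCS are not reproduced, and all sizes and names are illustrative.

        import numpy as np

        def pocs_nonneg_lsq(A, b, n_iter=500):
            """Alternating projection (POCS) onto two convex sets:
            C1 = {x : A x = b} (data consistency) and C2 = {x : x >= 0}.

            A toy illustration of the alternating-projection idea used in
            constrained CT reconstruction; real TV-POCS adds a bounded-TV set
            and uses first-order methods such as PDHG for that part.
            """
            m, n = A.shape
            x = np.zeros(n)
            A_pinv = np.linalg.pinv(A)              # used for the projection onto {Ax = b}
            for _ in range(n_iter):
                x = x - A_pinv @ (A @ x - b)        # project onto the data-consistency set
                x = np.maximum(x, 0.0)              # project onto the nonnegativity set
            return x

        # usage: find a nonnegative vector consistent with underdetermined measurements
        rng = np.random.default_rng(1)
        x_true = np.abs(rng.normal(size=20)); x_true[x_true < 0.8] = 0.0
        A = rng.normal(size=(12, 20))
        x_rec = pocs_nonneg_lsq(A, A @ x_true)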

  4. How to define pathologic pelvic floor descent in MR defecography during defecation?

    PubMed

    Schawkat, Khoschy; Heinrich, Henriette; Parker, Helen L; Barth, Borna K; Mathew, Rishi P; Weishaupt, Dominik; Fox, Mark; Reiner, Caecilia S

    2018-06-01

    To assess the extents of pelvic floor descent both during the maximal straining phase and the defecation phase in healthy volunteers and in patients with pelvic floor disorders, studied with MR defecography (MRD), and to define specific threshold values for pelvic floor descent during the defecation phase. Twenty-two patients (mean age 51 ± 19.4) with obstructed defecation and 20 healthy volunteers (mean age 33.4 ± 11.5) underwent 3.0T MRD in supine position using midsagittal T2-weighted images. Two radiologists performed measurements in reference to PCL-lines in straining and during defecation. In order to identify cutoff values of pelvic floor measurements for diagnosis of pathologic pelvic floor descent [anterior, middle, and posterior compartments (AC, MC, PC)], receiver-operating characteristic (ROC) curves were plotted. Pelvic floor descent of all three compartments was significantly larger during defecation than at straining in patients and healthy volunteers (p < 0.002). When grading pelvic floor descent in the straining phase, only two healthy volunteers showed moderate PC descent (10%), which is considered pathologic. However, when applying the grading system during defecation, PC descent was overestimated with 50% of the healthy volunteers (10 of 20) showing moderate PC descent. The AUC for PC measurements during defecation was 0.77 (p = 0.003) and suggests a cutoff value of 45 mm below the PCL to identify patients with pathologic PC descent. With the adapted cutoff, only 15% of healthy volunteers show pathologic PC descent during defecation. MRD measurements during straining and defecation can be used to differentiate patients with pelvic floor dysfunction from healthy volunteers. However, different cutoff values should be used during straining and during defecation to define normal or pathologic PC descent.

  5. Evaluation of pelvic descent disorders by dynamic contrast roentgenography.

    PubMed

    Takano, M; Hamada, A

    2000-10-01

    For precise diagnosis and rational treatment of the increasing number of patients with descent of intrapelvic organ(s) and anatomic plane(s), dynamic contrast roentgenography of multiple intrapelvic organs and planes is described. Sixty-six patients, consisting of 11 males, with a mean age (+/- standard deviation) of 65.6+/-14.2 years and with chief complaints of intrapelvic organ and perineal descent or defecation problems, were examined in this study. Dynamic contrast roentgenography was obtained by opacifying the ileum, urinary bladder, vagina, rectum, and the perineum. Films were taken at both squeeze and strain phases. On the films the lowest points of each organ and plane were plotted, and the distances from the standard line drawn at the upper surface of the sacrum were measured. The values were corrected to percentages according to the height of the sacrococcygeal bone of each patient. From these corrected values, organ or plane descents at strain and squeeze were diagnosed and graphically demonstrated as a descentgram in each patient. Among 17 cases with subjective symptoms of bladder descent, 9 cases (52.9 percent) showed roentgenographic descent. By the same token, among the cases with subjective feeling of descent of the vagina, uterus, peritoneum, perineum, rectum, and anus, roentgenographic descent was confirmed in 15 of 20 (75 percent), 7 of 9 (77.8 percent), 6 of 16 (37.5 percent), 33 of 33 (100 percent), 25 of 37 (67.6 percent), and 22 of 36 (61.6 percent), respectively. The descentgrams were divided into three patterns: anorectal descent type, female genital descent type, and total organ descent type. Dynamic contrast roentgenography and successive descentgraphy of multiple intrapelvic organs and planes are useful for objective diagnosis and rational treatment of patients with descent disorders of the intrapelvic organ(s) and plane(s).

  6. Constrained Burn Optimization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.; Jones, Brandon A.

    2017-01-01

    In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints, do not employ gradient-based search techniques, and are not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.

  7. Analysis of various descent trajectories for a hypersonic-cruise, cold-wall research airplane

    NASA Technical Reports Server (NTRS)

    Lawing, P. L.

    1975-01-01

    The probable descent operating conditions for a hypersonic air-breathing research airplane were examined. Descents selected were cruise angle of attack, high dynamic pressure, high lift coefficient, turns, and descents with drag brakes. The descents were parametrically exercised and compared from the standpoint of cold-wall (367 K) aircraft heat load. The descent parameters compared were total heat load, peak heating rate, time to landing, time to end of heat pulse, and range. Trends in total heat load as a function of cruise Mach number, cruise dynamic pressure, angle-of-attack limitation, pull-up g-load, heading angle, and drag-brake size are presented.

  8. A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz

    1990-01-01

    A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA-Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent advisor algorithm to provide adequate control of arrival time for aircraft not equipped with onboard 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.

  9. STS-1 operational flight profile. Volume 5: Descent, cycle 3. Appendix C: Monte Carlo dispersion analysis

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The results of three nonlinear Monte Carlo dispersion analyses for the Space Transportation System 1 Flight (STS-1) Orbiter Descent Operational Flight Profile, Cycle 3 are presented. Fifty randomly selected simulations for the end-of-mission (EOM) descent, the abort-once-around (AOA) descent targeted to the steep target line, and the AOA descent targeted to the shallow target line are analyzed. These analyses compare the flight environment with system and operational constraints on the flight environment and, in some cases, use simplified system models as an aid in assessing the STS-1 descent flight profile. In addition, descent flight envelopes are provided as a data base for use by system specialists to determine the flight readiness for STS-1. The results of these dispersion analyses supersede the results of the dispersion analysis previously documented.

  10. Power plant fault detection using artificial neural network

    NASA Astrophysics Data System (ADS)

    Thanakodi, Suresh; Nazar, Nazatul Shiema Moh; Joini, Nur Fazriana; Hidzir, Hidzrin Dayana Mohd; Awira, Mohammad Zulfikar Khairul

    2018-02-01

    Faults commonly occur in power plants due to various factors that lead to system outages. There are many types of faults in power plants, such as single line to ground faults, double line to ground faults, and line to line faults. The primary aim of this paper is to diagnose faults in a 14-bus power plant by using an Artificial Neural Network (ANN). A Multilayer Perceptron (MLP) network was trained for fault detection using offline training methods, namely Gradient Descent Backpropagation (GDBP), Levenberg-Marquardt (LM), and Bayesian Regularization (BR). The best-performing method was used to build a Graphical User Interface (GUI). The modelling of the 14-bus power plant, the network training, and the GUI were implemented in MATLAB.

  11. Incoherent beam combining based on the momentum SPGD algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng

    2018-05-01

    Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical treatment is employed to explain the principle of the momentum method. Furthermore, the proposed algorithm is validated through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration but also preserves the stability of the combining process, demonstrating the feasibility of the proposed algorithm in the beam combining system.
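
    The SPGD-with-momentum update can be sketched in a few lines: each iteration applies a random parallel perturbation to the control phases, measures the resulting change in the combining metric, and takes a step along the perturbation weighted by that change, with a momentum term that reuses part of the previous step. The gain, perturbation size, momentum factor, and toy metric below are illustrative assumptions, not the paper's experimental values.

        import numpy as np

        def spgd_momentum(J, u0, gain=0.5, sigma=0.05, beta=0.7, n_iter=2000,
                          rng=np.random.default_rng(0)):
            """Stochastic parallel gradient descent (ascent) with a momentum term.

            J: metric to maximize (e.g., combined power in the bucket);
            u0: initial control vector (e.g., piston phases of the beams).
            Each iteration applies a random parallel perturbation du, measures the
            induced metric change dJ, and updates along dJ*du; the momentum term
            reuses a fraction beta of the previous update.  A generic sketch of
            the modification described above, with assumed gain values.
            """
            u = np.asarray(u0, dtype=float).copy()
            v = np.zeros_like(u)                       # momentum (previous update)
            for _ in range(n_iter):
                du = sigma * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
                dJ = J(u + du) - J(u - du)             # two-sided metric difference
                v = beta * v + gain * dJ * du          # momentum-weighted SPGD step
                u += v
            return u

        # usage: phase-lock a toy 3-beam combination by maximizing coherent power
        target = np.array([0.3, -1.1, 2.0])
        metric = lambda u: np.abs(np.exp(1j * (u - target)).sum())**2
        print(metric(spgd_momentum(metric, np.zeros(3))))   # approaches the maximum of 9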

  12. Frequency-domain ultrasound waveform tomography breast attenuation imaging

    NASA Astrophysics Data System (ADS)

    Sandhu, Gursharan Yash Singh; Li, Cuiping; Roy, Olivier; West, Erik; Montgomery, Katelyn; Boone, Michael; Duric, Neb

    2016-04-01

    Ultrasound waveform tomography techniques have shown promising results for the visualization and characterization of breast disease. By using frequency-domain waveform tomography techniques and a gradient descent algorithm, we have previously reconstructed the sound speed distributions of breasts of varying densities with different types of breast disease including benign and malignant lesions. By allowing the sound speed to have an imaginary component, we can model the intrinsic attenuation of a medium. We can similarly recover the imaginary component of the velocity and thus the attenuation. In this paper, we will briefly review ultrasound waveform tomography techniques, discuss attenuation and its relations to the imaginary component of the sound speed, and provide both numerical and ex vivo examples of waveform tomography attenuation reconstructions.

  13. Deep turbulence effects mitigation with coherent combining of 21 laser beams over 7 km.

    PubMed

    Weyrauch, Thomas; Vorontsov, Mikhail; Mangano, Joseph; Ovchinnikov, Vladimir; Bricker, David; Polnau, Ernst; Rostov, Andrey

    2016-02-15

    We demonstrate coherent beam combining and adaptive mitigation of atmospheric turbulence effects over 7 km under strong scintillation conditions using a coherent fiber array laser transmitter operating in a target-in-the-loop setting. The transmitter system is composed of a densely packed array of 21 fiber collimators with integrated capabilities for piston, tip, and tilt control of the outgoing beams wavefront phases. A small cat's-eye retro reflector was used for evaluation of beam combining and turbulence compensation performance at the target plane, and to provide the feedback signal for control of piston and tip/tilt phases of the transmitted beams using the stochastic parallel gradient descent maximization of the power-in-the-bucket metric.

  14. WS-BP: An efficient wolf search based back-propagation algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Rehman, M. Z.; Khan, Abdullah

    2015-05-01

    Wolf Search (WS) is a heuristic optimization algorithm. Inspired by the preying and survival capabilities of wolves, this algorithm is highly capable of searching large candidate-solution spaces. This paper investigates the use of the WS algorithm in combination with the back-propagation neural network (BPNN) algorithm to overcome the local minima problem and to improve convergence in gradient descent. The performance of the proposed Wolf Search based Back-Propagation (WS-BP) algorithm is compared with Artificial Bee Colony Back-Propagation (ABC-BP), Bat Based Back-Propagation (Bat-BP), and conventional BPNN algorithms. Specifically, OR and XOR datasets are used for training the networks. The simulation results show that the WS-BP algorithm effectively avoids local minima and converges to the global minimum.
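
    The general pattern of such hybrids, a heuristic outer search wrapped around gradient-based inner refinement, can be sketched as below. The random population moves here are a crude stand-in and do not implement the actual Wolf Search rules; the population size, step sizes, and iteration counts are illustrative assumptions.

        import numpy as np

        def hybrid_global_backprop(loss, grad, dim, n_candidates=8, n_outer=20,
                                   n_inner=200, lr=0.1, step=0.5,
                                   rng=np.random.default_rng(0)):
            """Generic hybrid of a population-based random search with gradient descent.

            A population of candidate weight vectors is perturbed at random (a crude
            stand-in for the wolf-search moves, which are not reproduced here) and each
            candidate is then refined by plain gradient descent; the best candidate
            survives.  Illustrates how a heuristic outer loop can help backpropagation
            escape poor local minima.
            """
            best = rng.normal(size=dim)
            for _ in range(n_outer):
                pop = best + step * rng.normal(size=(n_candidates, dim))   # heuristic moves
                for w in pop:
                    for _ in range(n_inner):      # local refinement by gradient descent
                        w -= lr * grad(w)
                losses = np.array([loss(w) for w in pop])
                if losses.min() < loss(best):
                    best = pop[losses.argmin()].copy()
            return best

    For a BPNN, loss and grad would wrap a forward pass and backpropagation over the training set (e.g., the XOR patterns).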

  15. Aeroassisted orbital maneuvering using Lyapunov optimal feedback control

    NASA Technical Reports Server (NTRS)

    Grantham, Walter J.; Lee, Byoung-Soo

    1987-01-01

    A Liapunov optimal feedback controller incorporating a preferred direction of motion at each state of the system which is opposite to the gradient of a specified descent function is developed for aeroassisted orbital transfer from high-earth orbit to LEO. The performances of the Liapunov controller and a calculus-of-variations open-loop minimum-fuel controller, both of which are based on the 1962 U.S. Standard Atmosphere, are simulated using both the 1962 U.S. Standard Atmosphere and an atmosphere corresponding to the STS-6 Space Shuttle flight. In the STS-6 atmosphere, the calculus-of-variations open-loop controller fails to exit the atmosphere, while the Liapunov controller achieves the optimal minimum-fuel conditions, despite the + or - 40 percent fluctuations in the STS-6 atmosphere.

  16. An experimental trip to the Calculus of Variations

    NASA Astrophysics Data System (ADS)

    Arroyo, Josu

    2008-04-01

    This paper presents a collection of experiments in the Calculus of Variations. The implementation of the gradient descent algorithm, built on cubic splines acting as "numerically friendly" elementary functions, gives us ways to solve variational problems by constructing the solution. It adopts a pragmatic point of view: one gets solutions that are sometimes as fast as possible, sometimes as close as possible to the true solutions. The balance between speed and precision is not always easy to achieve. Starting from the most well-known, classic or historical formulation of a variational problem, section 2 briefly describes the bridge between theoretical and computational formulations. The next sections show the results of several kinds of experiments, from the most basic, such as those about geodesics, to the most complex, such as those about vesicles.
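
    The basic recipe, discretize the unknown curve, evaluate the functional on the discretization, and follow its gradient, can be illustrated with the simplest variational problem: the planar geodesic. The sketch below uses piecewise-linear nodes instead of cubic splines, and the step size and node count are illustrative assumptions.

        import numpy as np

        def descend_arclength(y0, y1, n_nodes=30, lr=0.2, n_iter=2000):
            """Gradient descent on a discretized arc-length functional.

            The curve y(x) on [0, 1] with fixed endpoints y(0)=y0, y(1)=y1 is
            represented by interior nodes (piecewise linear here, rather than the
            cubic splines of the paper), and the functional
                L[y] = sum_k sqrt(dx^2 + (y_{k+1} - y_k)^2)
            is minimized by plain gradient descent; the minimizer is the straight
            line, i.e. the planar geodesic.
            """
            dx = 1.0 / (n_nodes + 1)
            y = np.linspace(y0, y1, n_nodes + 2)
            y[1:-1] += 0.5 * np.sin(np.linspace(0, np.pi, n_nodes))   # distorted start
            for _ in range(n_iter):
                dy = np.diff(y)
                seg = np.sqrt(dx**2 + dy**2)          # segment lengths
                g = dy / seg
                # dL/dy_k: contributions of the two segments adjacent to node k
                grad_interior = g[:-1] - g[1:]
                y[1:-1] -= lr * grad_interior
            return y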

  17. Soft learning vector quantization and clustering algorithms based on ordered weighted aggregation operators.

    PubMed

    Karayiannis, N B

    2000-01-01

    This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
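
    A minimal example of this style of algorithm, gradient descent on a reformulation function that aggregates the prototype distances, is sketched below. It uses a soft-min (log-sum-exp) aggregation rather than the ordered weighted aggregation operators developed in the paper, so it recovers only generic soft-LVQ/clustering behaviour; the temperature, learning rate, and prototype count are illustrative assumptions.

        import numpy as np

        def soft_lvq(X, n_proto=3, temp=0.5, lr=0.05, n_iter=400,
                     rng=np.random.default_rng(0)):
            """Gradient descent on a soft-min reformulation of the clustering objective.

            The reformulation function
                R(V) = -(T/n) * sum_i log sum_j exp(-||x_i - v_j||^2 / T)
            aggregates the prototype distances; its gradient yields membership-weighted
            prototype updates.  A soft-min aggregation is used here instead of the
            ordered weighted aggregation operators of the paper.
            """
            V = X[rng.choice(len(X), n_proto, replace=False)].copy()   # initial prototypes
            for _ in range(n_iter):
                d = ((X[:, None, :] - V[None, :, :])**2).sum(axis=2)   # squared distances
                u = np.exp(-(d - d.min(axis=1, keepdims=True)) / temp)
                u /= u.sum(axis=1, keepdims=True)                      # soft memberships
                # gradient step: move each prototype toward its membership-weighted data
                grad = -2.0 * (u[:, :, None] * (X[:, None, :] - V[None, :, :])).sum(axis=0) / len(X)
                V -= lr * grad
            return V

        # usage on three synthetic 2-D clusters
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(loc=c, scale=0.5, size=(60, 2))
                       for c in ([0, 0], [4, 0], [0, 4])])
        print(soft_lvq(X))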

  18. Quantitative characterization of turbidity by radiative transfer based reflectance imaging

    PubMed Central

    Tian, Peng; Chen, Cheng; Jin, Jiahong; Hong, Heng; Lu, Jun Q.; Hu, Xin-Hua

    2018-01-01

    A new, noncontact approach of multispectral reflectance imaging has been developed to inversely determine the absorption coefficient μa, the scattering coefficient μs, and the anisotropy factor g of a turbid target from one measured reflectance image. The incident beam was profiled with a diffuse reflectance standard for deriving both measured and calculated reflectance images. A GPU-implemented Monte Carlo code was developed to determine the parameters with a conjugate gradient descent algorithm, and the existence of unique solutions was shown. We noninvasively determined embedded-region thickness in heterogeneous targets and estimated in vivo optical parameters of nevi from 4 patients between 500 and 950 nm for melanoma diagnosis to demonstrate the potential of quantitative reflectance imaging. PMID:29760971
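
    The inverse step, adjusting (μa, μs, g) until the simulated reflectance image matches the measured one, reduces to nonlinear least squares and can be sketched with an off-the-shelf nonlinear conjugate gradient routine. The toy analytic forward model below merely stands in for the GPU Monte Carlo radiative-transfer code of the paper; the parameter values and function names are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def fit_parameters(measured, forward, p0):
            """Conjugate-gradient fit of model parameters to a measured reflectance image.

            'forward(p)' stands in for the radiative-transfer / Monte Carlo forward
            model (here any smooth function returning a simulated image); the
            parameters p = (mu_a, mu_s, g) are recovered by minimizing the squared
            image mismatch with SciPy's nonlinear CG method.
            """
            def misfit(p):
                r = forward(p) - measured
                return 0.5 * np.sum(r**2)
            return minimize(misfit, p0, method='CG').x

        # toy usage with a made-up analytic "forward model" (illustration only)
        toy_forward = lambda p: p[0] * np.exp(-p[1] * np.linspace(0, 1, 50)) + p[2]
        p_true = np.array([0.8, 3.0, 0.1])
        print(fit_parameters(toy_forward(p_true), toy_forward, p0=np.array([0.5, 1.0, 0.0])))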

  19. Eruption Constraints for a Young Channelized Lava Flow, Marte Vallis, Mars

    NASA Technical Reports Server (NTRS)

    Therkelsen, J. P.; Santiago, S. S.; Grosfils, E. B.; Sakimoto, S. E. H.; Mendelson, C. V.; Bleacher, J. E.

    2001-01-01

    This study constrains flow rates for a specific channelized lava flow in Marte Vallis, Mars. We measured slope gradient, channel width, and channel depth. Our results are similar to those of other recent studies, which suggests similarities to long terrestrial basaltic flows. Additional information is contained in the original extended abstract.

  20. Seed availability constrains plant species sorting along a soil fertility gradient

    Treesearch

    Bryan L. Foster; Erin J. Questad; Cathy D. Collins; Cheryl A. Murphy; Timothy L. Dickson; Val H. Smith

    2011-01-01

    1. Spatial variation in species composition within and among communities may be caused by deterministic, niche-based species sorting in response to underlying environmental heterogeneity as well as by stochastic factors such as dispersal limitation and variable species pools. An important goal in ecology is to reconcile deterministic and stochastic perspectives of...

  1. Why convective heat transport in the solar nebula was inefficient

    NASA Technical Reports Server (NTRS)

    Cassen, P.

    1993-01-01

    The radial distributions of the effective temperatures of circumstellar disks associated with pre-main sequence (T Tauri) stars are relatively well constrained by ground-based and spacecraft infrared photometry and radio continuum observations. If the mechanisms by which energy is transported vertically in the disks are understood, these data can be used to constrain models of the thermal structure and evolution of the solar nebula. Several studies of the evolution of the solar nebula have included the calculation of the vertical transport of heat by convection. Such calculations rely on a mixing length theory of transport and some assumption regarding the vertical distribution of internal dissipation. In all cases, the results of these calculations indicate that transport by radiation dominates that by convection, even when the nebula is convectively unstable. A simple argument that demonstrates the generality (and limits) of this result, regardless of the details of mixing length theory or the precise distribution of internal heating, is presented. It is based on the idea that the radiative gradient in an optically thick nebula generally does not greatly exceed the adiabatic gradient.

  2. Digital robust control law synthesis using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivekananda

    1989-01-01

    Development of digital robust control laws for active control of high-performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large-order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and constrained optimization techniques. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses, and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity studies and may be integrated into a simultaneous structure and control optimization scheme.
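
    The computational pattern, minimize a quadratic-type cost over free control-law parameters subject to constraints while supplying analytic gradients of both the cost and the constraints to the optimizer, can be sketched with an off-the-shelf solver. The cost, constraint, and dimensions below are stand-ins, not the aircraft or structure models of the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Minimal sketch of a gradient-based constrained design problem: a quadratic
        # "LQG-like" cost in the free control-law parameters x, with an inequality
        # constraint standing in for a stability-margin requirement.  Analytic
        # gradients of both the cost and the constraint are supplied so the
        # optimizer converges rapidly.
        Q = np.diag([2.0, 1.0, 0.5])
        cost = lambda x: 0.5 * x @ Q @ x + x.sum()
        cost_grad = lambda x: Q @ x + 1.0

        # constraint g(x) >= 0 (e.g., a margin that must stay above a threshold)
        margin = lambda x: 1.0 - 0.1 * np.sum(x**2)
        margin_grad = lambda x: -0.2 * x

        result = minimize(cost, x0=np.ones(3), jac=cost_grad, method='SLSQP',
                          constraints=[{'type': 'ineq', 'fun': margin, 'jac': margin_grad}])
        print(result.x, result.fun)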

  3. Description of the computations and pilot procedures for planning fuel-conservative descents with a small programmable calculator

    NASA Technical Reports Server (NTRS)

    Vicroy, D. D.; Knox, C. E.

    1983-01-01

    A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane is described.

  4. Disentangling climatic versus biotic drivers of tree range constraints: Broad scale tradeoffs between climate and competition rarely explain local range boundaries

    NASA Astrophysics Data System (ADS)

    Anderegg, L. D. L.; Hillerislambers, J.

    2016-12-01

    Accurate prediction of climatically-driven range shifts requires knowledge of the dominant forces constraining species ranges, because climatically controlled range boundaries will likely behave differently from biotically controlled range boundaries in a changing climate. Yet the roles of climatic constraints (due to species physiological tolerance) versus biotic constraints (caused by species interactions) on geographic ranges are largely unknown, infusing large uncertainty into projections of future range shifts. Plant species ranges across strong climatic gradients such as elevation gradients are often assumed to represent a tradeoff between climatic constraints on the harsh side of the range and biotic constraints (often competitive constraints) on the climatically benign side. To test this assumption, we collected tree cores from across the elevational range of the three dominant tree species inhabiting each of three climatically disparate mountain slopes and assessed climatic versus competitive constraints on growth at each species' range margins. Across all species and mountains, we found evidence for a tradeoff between climatic and competitive growth constraints. We also found that some individual species did show an apparent trade-off between a climatic constraint at one range margin and a competitive constraint at the other. However, even these simple elevation gradients resulted in complex interactions between temperature, moisture, and competitive constraints, such that a climate-competition tradeoff did not explain range constraints for many species. Our results suggest that tree species can be constrained by a simple trade-off between climate and competition, but that the intricacies of real-world climate gradients complicate the application of this theory even in apparently harsh environments, such as near high-elevation tree line.

  5. The Yearly Variation in Fall-Winter Arctic Winter Vortex Descent

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark R.; Newman, Paul A.

    1999-01-01

    Using the change in HALOE methane profiles from early September to late March, we have estimated the minimum amount of diabatic descent within the polar vortex which takes place during Arctic winter. The year-to-year variations are a result of the year-to-year variations in stratospheric wave activity, which (1) modify the temperature of the vortex and thus the cooling rate, and (2) reduce the apparent descent by mixing high amounts of methane into the vortex. The peak descent amounts from HALOE methane vary from 10 km to 14 km near the arrival altitude of 25 km. Using a diabatic trajectory calculation, we compare forward and backward trajectories over the course of the winter using UKMO assimilated stratospheric data. The forward calculation agrees fairly well with the observed descent. The backward calculation appears to be unable to produce the observed amount of descent, but this is only an apparent effect due to the density decrease in parcels with altitude. Finally we show the results for unmixed descent experiments, where the parcels are fixed in latitude and longitude and allowed to descend based on the local cooling rate. Unmixed descent is found to always exceed mixed descent, because when normal parcel motion is included, the path-average cooling is always less than the cooling at a fixed polar point.

  6. Automatic toilet seat lowering apparatus

    DOEpatents

    Guerty, Harold G.

    1994-09-06

    A toilet seat lowering apparatus includes a housing defining an internal cavity for receiving water from the water supply line to the toilet holding tank. A descent delay assembly of the apparatus can include a stationary dam member and a rotating dam member for dividing the internal cavity into an inlet chamber and an outlet chamber and controlling the intake and evacuation of water in a delayed fashion. A descent initiator is activated when the internal cavity is filled with pressurized water and automatically begins the lowering of the toilet seat from its upright position, which lowering is also controlled by the descent delay assembly. In an alternative embodiment, the descent initiator and the descent delay assembly can be combined in a piston linked to the rotating dam member and provided with a water channel for creating a resisting pressure to the advancing piston and thereby slowing the associated descent of the toilet seat.

  7. Occurrence of Sporadic-E layer during the Low Solar Activity over the Anomaly Crest Region Bhopal, India

    NASA Astrophysics Data System (ADS)

    Bhawre, Purushottam

    2016-07-01

    Ionospheric anomaly crest regions are among the most challenging for the scientific community to understand and investigate; for this purpose we present some ionospheric results for this region. The study is based on ionogram data recorded by an IPS-71 Digital Ionosonde installed at the anomaly crest region Bhopal (geographic latitude 23.2° N, geographic longitude 77.4° E, dip latitude 18.4°) over a four-year period from January 2007 to December 2010, covering the ending phase of the 23rd solar cycle and the starting phase of the 24th solar cycle. This particular period is very suitable for examining the sunspot number, as it encompasses periods of low solar activity. Ionograms from each quarter of these study years were analyzed over 24 hours and carefully examined to record the presence of sporadic E, together with space weather activity. The study is divided into four main parts according to space and geomagnetic activity during these periods. The occurrence probability of this layer is highest in the summer solstice, moderate during the equinoxes, and low during the winter solstice. Remarkable occurrence peaks appear from June to July in summer and from December to January in winter. The layer occurrence showed a double-peak variation with distinct layer groups, one in the morning (0200 LT) and the other during the evening (1800 LT). The morning layer descent was associated with an increase in layer density, indicating strengthening of the layer, while the density decreased during the evening layer descent. The results indicate the presence of a semi-diurnal tide over the location, while the higher descent velocities could be due to modulation of the ionization by gravity waves along with the tides. The irregularities associated with the gradient-drift instability disappear during the counter electrojet, when the current flow is reversed westward.

  8. Frequency-domain full-waveform inversion with non-linear descent directions

    NASA Astrophysics Data System (ADS)

    Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.

    2018-05-01

    Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)^3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)^2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a benchmark FWI approach involving the standard gradient.

  9. GOCE gravity gradient data for lithospheric modeling and geophysical exploration research

    NASA Astrophysics Data System (ADS)

    Bouman, Johannes; Ebbing, Jörg; Meekes, Sjef; Lieb, Verena; Fuchs, Martin; Schmidt, Michael; Fattah, Rader Abdul; Gradmann, Sofie; Haagmans, Roger

    2013-04-01

    GOCE gravity gradient data can improve modeling of the Earth's lithosphere and upper mantle, contributing to a better understanding of the Earth's dynamic processes. We present a method to compute user-friendly GOCE gravity gradient grids at mean satellite altitude, which are easier to use than the original GOCE gradients that are given in a rotating instrument frame. In addition, the GOCE gradients are combined with terrestrial gravity data to obtain high-resolution grids of gravity field information close to the Earth's surface. We also present a case study for the North-East Atlantic margin, where we analyze the use of satellite gravity gradients by comparison with a well-constrained 3D density model that provides a detailed picture from the upper mantle to the top basement (base of sediments). We demonstrate how gravity gradients can increase confidence in the modeled structures by calculating the sensitivity to model geometry and applied densities at different observation heights, e.g., satellite height and near the surface. Finally, this sensitivity analysis is used as input to study the Rub' al Khali desert in Saudi Arabia. In terms of modeling and data availability this is a frontier area. Here, gravity gradient data help especially to set up the regional crustal structure, which in turn allows refinement of sedimentary thickness estimates and the regional heat-flow pattern. This can have implications for hydrocarbon exploration in the region.

  10. Rationale for a Mars Pathfinder mission to Chryse Planitia and the Viking 1 lander

    NASA Technical Reports Server (NTRS)

    Craddock, Robert A.

    1994-01-01

    Presently the landing site for Mars Pathfinder will be constrained to latitudes between 0 deg and 30 deg N to facilitate communication with Earth and to allow the lander and rover solar arrays to generate the maximum possible power. The reference elevation of the site must also be below 0 km so that the descent parachute, a Viking derivative, has sufficient time to open and slow the lander to the correct terminal velocity. Although Mars has as much land surface area as the continental crust of the Earth, such engineering constraints immediately limit the number of possible landing sites to only three broad areas: Amazonis, Chryse, and Isidis Planitia. Of these, both Chryse and Isidis Planitia stand out as the sites offering the most information to address several broad scientific topics.

  11. System description and analysis. Part 1: Feasibility study for helicopter/VTOL wide-angle simulation image generation display system

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A preliminary design for a helicopter/VSTOL wide angle simulator image generation display system is studied. The visual system is to become part of a simulator capability to support Army aviation systems research and development within the near term. As required for the Army to simulate a wide range of aircraft characteristics, versatility and ease of changing cockpit configurations were primary considerations of the study. Due to the Army's interest in low altitude flight and descents into and landing in constrained areas, particular emphasis is given to wide field of view, resolution, brightness, contrast, and color. The visual display study includes a preliminary design, demonstrated feasibility of advanced concepts, and a plan for subsequent detail design and development. Analysis and tradeoff considerations for various visual system elements are outlined and discussed.

  12. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  13. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    DOE PAGES

    Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.; ...

    2017-07-26

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  14. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    NASA Astrophysics Data System (ADS)

    Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.

    2017-07-01

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  15. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: Algorithm development and flight test results

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1983-01-01

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  16. Description of the computations and pilot procedures for planning fuel-conservative descents with a small programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vicroy, D.D.; Knox, C.E.

    A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane are described.

  17. Infrared and visible image fusion based on total variation and augmented Lagrangian.

    PubMed

    Guo, Hanqi; Ma, Yong; Mei, Xiaoguang; Ma, Jiayi

    2017-11-01

    This paper proposes a new algorithm for infrared and visible image fusion based on gradient transfer that achieves fusion by preserving the intensity of the infrared image and then transferring gradients of the corresponding visible image to the result. Gradient transfer suffers from low dynamic range and detail loss because it ignores the intensity of the visible image. The new algorithm solves these problems by providing additive intensity from the visible image to balance the intensity between the infrared image and the visible one. It formulates the fusion task as an l1-l1-TV minimization problem and then employs variable splitting and an augmented Lagrangian to convert the unconstrained problem into a constrained one that can be solved in the framework of the alternating direction method of multipliers. Experiments demonstrate that the new algorithm achieves better fusion results, with high computational efficiency, in both qualitative and quantitative tests than gradient transfer and most state-of-the-art methods.
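
    The variable-splitting and augmented-Lagrangian machinery described above is the standard alternating direction method of multipliers (ADMM) recipe. As an illustration only, applied to the much simpler lasso problem rather than the paper's l1-l1-TV fusion objective, the sketch below shows how splitting a variable and alternating the primal and dual updates handles an l1 term; all names and parameters are assumptions of this sketch.

```python
# Minimal ADMM sketch on the lasso problem (a stand-in, not the paper's l1-l1-TV objective):
#   minimize 0.5*||A x - b||^2 + lam*||z||_1   subject to  x = z
import numpy as np

def soft_threshold(v, t):
    # proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)   # u is the scaled dual variable
    # pre-factor the x-update system (A^T A + rho I) x = A^T b + rho (z - u)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic (data-fit) subproblem
        z = soft_threshold(x + u, lam / rho)               # l1 subproblem via soft thresholding
        u = u + x - z                                      # dual ascent on the constraint x = z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(admm_lasso(A, b), 2))
```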

  18. Shining light on modifications of gravity

    NASA Astrophysics Data System (ADS)

    Brax, Philippe; Burrage, Clare; Davis, Anne-Christine

    2012-10-01

    Many modifications of gravity introduce new scalar degrees of freedom, and in such theories matter fields typically couple to an effective metric that depends on both the true metric of spacetime and on the scalar field and its derivatives. Scalar field contributions to the effective metric can be classified as conformal and disformal. Disformal terms introduce gradient couplings between scalar fields and the energy momentum tensor of other matter fields, and cannot be constrained by fifth force experiments because the effects of these terms are trivial around static non-relativistic sources. The use of high-precision, low-energy photon experiments to search for conformally coupled scalar fields, called axion-like particles, is well known. In this article we show that these experiments are also constraining for disformal scalar field theories, and are particularly important because of the difficulty of constraining these couplings with other laboratory experiments.

  19. Terrestrial Ecosystem Science 2017 ECRP Annual Report: Tropical Forest Response to a Drier Future: Turnover Times of Soil Organic Matter, Roots, Respired CO2, and CH4 Across Moisture Gradients in Time and Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McFarlane, Karis J.

    The overall goal of my Early Career research is to constrain belowground carbon turnover times for tropical forests across a broad range of moisture regimes. My group is using 14C analysis and modeling to address two major objectives: quantify the age and belowground carbon turnover times across tropical forests spanning a moisture gradient from wetlands to dry forest; and identify specific areas for focused model improvement and data needs through site-specific model-data comparison and belowground carbon modeling for tropical forests.

  20. Validation of Genome-Wide Prostate Cancer Associations in Men of African Descent

    PubMed Central

    Chang, Bao-Li; Spangler, Elaine; Gallagher, Stephen; Haiman, Christopher A.; Henderson, Brian; Isaacs, William; Benford, Marnita L.; Kidd, LaCreis R.; Cooney, Kathleen; Strom, Sara; Ann Ingles, Sue; Stern, Mariana C.; Corral, Roman; Joshi, Amit D.; Xu, Jianfeng; Giri, Veda N.; Rybicki, Benjamin; Neslund-Dudas, Christine; Kibel, Adam S.; Thompson, Ian M.; Leach, Robin J.; Ostrander, Elaine A.; Stanford, Janet L.; Witte, John; Casey, Graham; Eeles, Rosalind; Hsing, Ann W.; Chanock, Stephen; Hu, Jennifer J.; John, Esther M.; Park, Jong; Stefflova, Klara; Zeigler-Johnson, Charnita; Rebbeck, Timothy R.

    2010-01-01

    Background Genome-wide association studies (GWAS) have identified numerous prostate cancer susceptibility alleles, but these loci have been identified primarily in men of European descent. There is limited information about the role of these loci in men of African descent. Methods We identified 7,788 prostate cancer cases and controls with genotype data for 47 GWAS-identified loci. Results We identified significant associations for SNP rs10486567 at JAZF1, rs10993994 at MSMB, rs12418451 and rs7931342 at 11q13, and rs5945572 and rs5945619 at NUDT10/11. These associations were in the same direction and of similar magnitude as those reported in men of European descent. Significance was attained at all reported prostate cancer susceptibility regions at chromosome 8q24, including associations reaching genome-wide significance in region 2. Conclusion We have validated in men of African descent the associations at some, but not all, prostate cancer susceptibility loci originally identified in European descent populations. This may be due to heterogeneity in genetic etiology or in the pattern of genetic variation across populations. Impact The genetic etiology of prostate cancer in men of African descent differs from that of men of European descent. PMID:21071540

  1. Studies of the hormonal control of postnatal testicular descent in the rat.

    PubMed

    Spencer, J R; Vaughan, E D; Imperato-McGinley, J

    1993-03-01

    Dihydrotestosterone is believed to control the transinguinal phase of testicular descent based on hormonal manipulation studies performed in postnatal rats. In the present study, these hormonal manipulation experiments were repeated, and the results were compared with those obtained using the antiandrogens flutamide and cyproterone acetate. 17 beta-estradiol completely blocked testicular descent, but testosterone and dihydrotestosterone were equally effective in reversing this inhibition. Neither flutamide nor cyproterone acetate prevented testicular descent in postnatal rats despite marked peripheral antiandrogenic action. Further analysis of the data revealed a correlation between testicular size and descent. Androgen receptor blockade did not produce a marked reduction in testicular size and consequently did not prevent testicular descent, whereas estradiol alone caused marked testicular atrophy and testicular maldescent. Reduction of the estradiol dosage or concomitant administration of androgens or human chorionic gonadotropin resulted in both increased testicular size and degree of descent. These data suggest that growth of the neonatal rat testis may contribute to its passage into the scrotum.

  2. Recursive least-squares learning algorithms for neural networks

    NASA Astrophysics Data System (ADS)

    Lewis, Paul S.; Hwang, Jenq N.

    1990-11-01

    This paper presents the development of a pair of recursive least-squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least-squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least-squares approximation either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters, due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block-diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule. 1 BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feedforward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation, and is sometimes referred to as the backpropagation algorithm. However, in this paper we use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first-derivative, or gradient, information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second-derivative, or Hessian matrix, information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
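
    To make the recursion concrete, the sketch below implements a plain recursive least-squares update for a linear-in-parameters model; for an MLP, the regressor vector would be the linearized sensitivity of the output with respect to the network parameters, as described above. This is a simplified stand-in for the paper's algorithm, with a toy data stream and names of our own choosing.

```python
# Minimal recursive least-squares (RLS) sketch for a linear-in-parameters model,
# illustrating the O(N^2) inverse-Hessian/covariance recursion referred to above.
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.99, delta=100.0):
        self.w = np.zeros(n_params)          # parameter estimate
        self.P = delta * np.eye(n_params)    # inverse (regularized) Hessian estimate
        self.lam = lam                       # forgetting factor

    def update(self, phi, d):
        # phi: regressor vector (for an MLP this would be the linearized sensitivity); d: target
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)   # gain vector
        e = d - self.w @ phi                 # a priori error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return e

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    w_true = np.array([0.5, -1.2, 2.0])
    rls = RLS(3)
    for _ in range(500):
        phi = rng.standard_normal(3)
        d = w_true @ phi + 0.01 * rng.standard_normal()
        rls.update(phi, d)
    print(np.round(rls.w, 3))  # should approach w_true
```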

  3. Planning fuel-conservative descents with or without time constraints using a small programmable calculator: algorithm development and flight test results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knox, C.E.

    A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.

  4. Thermochronometrically constrained anatomy and evolution of a Miocene extensional accommodation zone and tilt domain boundary: The southern Wassuk Range, Nevada

    NASA Astrophysics Data System (ADS)

    Gorynski, Kyle E.; Stockli, Daniel F.; Douglas Walker, J.

    2013-06-01

    Apatite (AHe) and zircon (ZHe) (U-Th)/He thermochronometric data from the southern Wassuk Range (WR), coupled with 40Ar/39Ar age data from the overlying tilted Tertiary section, are used to constrain the thermal evolution of an extensional accommodation zone and tilt-domain boundary. AHe and ZHe data record two episodes of rapid cooling related to the tectonic exhumation of the WR fault block beginning at ~15 and ~4 Ma. Extension was accommodated through fault-block rotation and variably tilted the southern WR to the west, from ~60°-70° in the central WR to ~15°-35° in the southernmost WR and Pine Grove Hills, with minimal tilting in the Anchorite Hills and along the Mina Deflection to the south. Middle Miocene geothermal gradient estimates record heating immediately prior to large-magnitude extension that was likely coeval with the extrusion of the Lincoln Flat andesite at ~14.8 Ma. Geothermal gradients increase from ~19° ± 4°C/km to ≥ 65° ± 20°C/km toward the Mina Deflection, suggesting that it was the focus of Middle Miocene arc magmatism in the upper crust. The decreasing thickness of tilt blocks toward the south resulted from a shallowing brittle/ductile transition zone. Postmagmatic Middle Miocene extension and fault-block advection were focused in the northern and central WR and coincidentally moderated the large lateral thermal gradient within the uppermost crust.

  5. A strongly negative shear velocity gradient and lateral variability in the lowermost mantle beneath the Pacific

    NASA Astrophysics Data System (ADS)

    Ritsema, Jeroen; Garnero, Edward; Lay, Thorne

    1997-01-01

    A new approach for constraining the seismic shear velocity structure above the core-mantle boundary is introduced, whereby SH-SKS differential travel times, amplitude ratios of SV/SKS, and Sdiff waveshapes are simultaneously modeled. This procedure is applied to the lower mantle beneath the central Pacific using data from numerous deep-focus southwest Pacific earthquakes recorded in North America. We analyze 90 broadband and 248 digitized analog recordings for this source-receiver geometry. SH-SKS times are highly variable and up to 10 s larger than standard reference model predictions, indicating the presence of laterally varying low shear velocities in the study area. The travel times, however, do not constrain the depth extent or velocity gradient of the low-velocity region. SV/SKS amplitude ratios and SH waveforms are sensitive to the radial shear velocity profile, and when analyzed simultaneously with SH-SKS times, reveal up to 3% shear velocity reductions restricted to the lowermost 190±50 km of the mantle. Our preferred model for the central-eastern Pacific region (M1) has a strong negative gradient (with 0.5% reduction in velocity relative to the preliminary reference Earth model (PREM) at 2700 km depth and 3% reduction at 2891 km depth) and slight velocity reductions from 2000 to 2700 km depth (0-0.5% lower than PREM). Significant small-scale (100-500 km) shear velocity heterogeneity (0.5%-1%) is required to explain scatter in the differential times and amplitude ratios.

  6. Correlation Between Echodefecography and 3-Dimensional Vaginal Ultrasonography in the Detection of Perineal Descent in Women With Constipation Symptoms.

    PubMed

    Murad-Regadas, Sthela M; Pinheiro Regadas, Francisco Sergio; Rodrigues, Lusmar V; da Silva Vilarinho, Adjra; Buchen, Guilherme; Borges, Livia Olinda; Veras, Lara B; da Cruz, Mariana Murad

    2016-12-01

    Defecography is an established method of evaluating dynamic anorectal dysfunction, but conventional defecography does not allow for visualization of anatomic structures. The purpose of this study was to describe the use of dynamic 3-dimensional endovaginal ultrasonography for evaluating perineal descent in comparison with echodefecography (3-dimensional anorectal ultrasonography) and to study the relationship between perineal descent and symptoms and anatomic/functional abnormalities of the pelvic floor. This was a prospective study. The study was conducted at a large university tertiary care hospital. Consecutive female patients were eligible if they had pelvic floor dysfunction, obstructed defecation symptoms, and a score >6 on the Cleveland Clinic Florida Constipation Scale. Each patient underwent both echodefecography and dynamic 3-dimensional endovaginal ultrasonography to evaluate posterior pelvic floor dysfunction. Normal perineal descent was defined on echodefecography as puborectalis muscle displacement ≤2.5 cm; excessive perineal descent was defined as displacement >2.5 cm. Of 61 women, 29 (48%) had normal perineal descent; 32 (52%) had excessive perineal descent. Endovaginal ultrasonography identified 27 of the 29 patients in the normal group as having anorectal junction displacement ≤1 cm (mean = 0.6 cm; range, 0.1-1.0 cm) and a mean anorectal junction position of 0.6 cm (range, 0-2.3 cm) above the symphysis pubis during the Valsalva maneuver and correctly identified 30 of the 32 patients in the excessive perineal descent group. The κ statistic showed almost perfect agreement (κ = 0.86) between the 2 methods for categorization into the normal and excessive perineal descent groups. Perineal descent was not related to fecal or urinary incontinence or anatomic and functional factors (sphincter defects, pubovisceral muscle defects, levator hiatus area, grade II or III rectocele, intussusception, or anismus). The study did not include a control group without symptoms. Three-dimensional endovaginal ultrasonography is a reliable technique for assessment of perineal descent. Using this technique, excessive perineal descent can be defined as displacement of the anorectal junction >1 cm and/or its position below the symphysis pubis on Valsalva maneuver.

  7. On the formation of granulites

    USGS Publications Warehouse

    Bohlen, S.R.

    1991-01-01

    The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35 °C km-1, implying minimum heat flow in excess of 100 mW m-2. Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots and at the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling. -from Author

  8. Finding intrinsic rewards by embodied evolution and constrained reinforcement learning.

    PubMed

    Uchibe, Eiji; Doya, Kenji

    2008-12-01

    Understanding the design principle of reward functions is a substantial challenge both in artificial intelligence and neuroscience. Successful acquisition of a task usually requires not only rewards for goals, but also for intermediate states to promote effective exploration. This paper proposes a method for designing 'intrinsic' rewards of autonomous agents by combining constrained policy gradient reinforcement learning and embodied evolution. To validate the method, we use Cyber Rodent robots, in which collision avoidance, recharging from battery packs, and 'mating' by software reproduction are three major 'extrinsic' rewards. We show in hardware experiments that the robots can find appropriate 'intrinsic' rewards for the vision of battery packs and other robots to promote approach behaviors.
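
    The policy-gradient ingredient named above can be illustrated with a minimal REINFORCE-style update on a toy two-armed bandit; the constrained-optimization and embodied-evolution aspects of the paper are not modeled here, and all names, rewards, and settings below are illustrative assumptions.

```python
# Minimal policy-gradient (REINFORCE-style) sketch on a toy two-armed bandit.
import numpy as np

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def reinforce_bandit(true_means=(0.2, 0.8), lr=0.1, n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(true_means))        # policy parameters (action preferences)
    baseline = 0.0                           # running-average reward as a variance-reducing baseline
    for t in range(1, n_steps + 1):
        probs = softmax(theta)
        a = rng.choice(len(theta), p=probs)  # sample an action from the current policy
        r = true_means[a] + 0.1 * rng.standard_normal()
        baseline += (r - baseline) / t
        grad_log_pi = -probs                 # grad of log pi(a) for a softmax policy: one_hot(a) - probs
        grad_log_pi[a] += 1.0
        theta += lr * (r - baseline) * grad_log_pi   # REINFORCE update
    return softmax(theta)

if __name__ == "__main__":
    print(np.round(reinforce_bandit(), 3))   # probability mass should shift toward the better arm
```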

  9. Transmit Designs for the MIMO Broadcast Channel With Statistical CSI

    NASA Astrophysics Data System (ADS)

    Wu, Yongpeng; Jin, Shi; Gao, Xiqi; McKay, Matthew R.; Xiao, Chengshan

    2014-09-01

    We investigate the multiple-input multiple-output broadcast channel with statistical channel state information available at the transmitter. The so-called linear assignment operation is employed, and necessary conditions are derived for the optimal transmit design under general fading conditions. Based on this, we introduce an iterative algorithm to maximize the linear assignment weighted sum-rate by applying a gradient descent method. To reduce complexity, we derive an upper bound of the linear assignment achievable rate of each receiver, from which a simplified closed-form expression for a near-optimal linear assignment matrix is derived. This reveals an interesting construction analogous to that of dirty-paper coding. In light of this, a low complexity transmission scheme is provided. Numerical examples illustrate the significant performance of the proposed low complexity scheme.

  10. Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.

    PubMed

    De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher

    2015-12-01

    Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
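
    The following is a minimal Hogwild!-style sketch: several threads apply sparse SGD updates to a shared parameter vector without locks. It only mimics the update pattern analyzed above (Python's global interpreter lock prevents true parallel speedup), and the data, names, and hyperparameters are our own.

```python
# Minimal Hogwild!-style sketch: lock-free sparse SGD on a shared parameter vector.
import numpy as np
import threading

def make_sparse_logreg_data(n=2000, d=100, nnz=5, seed=0):
    rng = np.random.default_rng(seed)
    w_true = rng.standard_normal(d)
    X, y = [], []
    for _ in range(n):
        idx = rng.choice(d, size=nnz, replace=False)     # sparse features
        val = rng.standard_normal(nnz)
        X.append((idx, val))
        y.append(1.0 if val @ w_true[idx] > 0 else -1.0)
    return X, y

def worker(w, X, y, lr, n_epochs, seed):
    rng = np.random.default_rng(seed)
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            idx, val = X[i]
            margin = y[i] * (val @ w[idx])
            grad = -y[i] * val / (1.0 + np.exp(margin))  # logistic-loss gradient on the sparse support
            w[idx] -= lr * grad                          # lock-free sparse update of the shared vector

if __name__ == "__main__":
    X, y = make_sparse_logreg_data()
    w = np.zeros(100)   # shared parameters, updated without synchronization
    threads = [threading.Thread(target=worker, args=(w, X, y, 0.1, 3, s)) for s in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    acc = np.mean([np.sign(val @ w[idx]) == yi for (idx, val), yi in zip(X, y)])
    print(f"training accuracy: {acc:.3f}")
```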

  11. Object recognition in images via a factor graph model

    NASA Astrophysics Data System (ADS)

    He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu

    2018-04-01

    Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to address these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture the long-range correlations in Bag-of-Words by constructing a network learning framework, in contrast to the lattice structure of the CRF. Furthermore, we explore a parameter learning algorithm based on the gradient descent and Loopy Sum-Product algorithms for the factor graph model. Experimental results on the Graz 02 dataset show that the recognition performance of our method, in precision and recall, is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.

  12. Optimal landing of a helicopter in autorotation

    NASA Technical Reports Server (NTRS)

    Lee, A. Y. N.

    1985-01-01

    Gliding descent in autorotation is a maneuver used by helicopter pilots in case of engine failure. The landing of a helicopter in autorotation is formulated as a nonlinear optimal control problem. The OH-58A helicopter was used. Helicopter vertical and horizontal velocities, vertical and horizontal displacements, and the rotor angular speed were modeled. An empirical approximation for the induced velocity in the vortex-ring state was provided. The cost function of the optimal control problem is a weighted sum of the squared horizontal and vertical components of the helicopter velocity at touchdown. Optimal trajectories are calculated for entry conditions well within the horizontal-vertical restriction curve, with the helicopter initially in hover or forward flight. The resultant two-point boundary value problem with path equality constraints was successfully solved using the Sequential Gradient Restoration Technique.

  13. Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros

    NASA Technical Reports Server (NTRS)

    Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.

    1973-01-01

    Many future NASA programs require very accurate pointing stability. These pointing requirements are well beyond anything attempted to date. This paper suggests a control system which has the capability of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computational technique using successive approximations is used to develop the required solution. The method of calculus of variations is applied to estimate the changes in the index of performance as well as in the inequality constraints on the state variables and the terminal conditions. Thus, an algorithm is obtained by the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.

  14. Algorithms for the optimization of RBE-weighted dose in particle therapy.

    PubMed

    Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M

    2013-01-21

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS-algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
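
    For reference, the sketch below shows a generic Fletcher-Reeves nonlinear conjugate-gradient loop with a simple Armijo backtracking line search, applied to the Rosenbrock test function rather than TRiP98's RBE-weighted dose objective; the restart rule and step-size constants are illustrative choices.

```python
# Minimal Fletcher-Reeves nonlinear conjugate-gradient sketch on a generic smooth objective.
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosenbrock_grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0] ** 2)])

def backtracking(f, x, d, g, alpha=1.0, shrink=0.5, c=1e-4):
    # Armijo line search along the descent direction d
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= shrink
    return alpha

def fletcher_reeves(f, grad, x0, n_iter=2000, tol=1e-8):
    x = x0.copy()
    g = grad(x)
    d = -g                                       # initial direction: steepest descent
    for _ in range(n_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = backtracking(f, x, d, g)
        x = x + alpha * d
        g_new = grad(x)
        beta_fr = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves coefficient
        d = -g_new + beta_fr * d
        if g_new @ d >= 0:                       # restart if d is no longer a descent direction
            d = -g_new
        g = g_new
    return x

if __name__ == "__main__":
    x_opt = fletcher_reeves(rosenbrock, rosenbrock_grad, np.array([-1.2, 1.0]))
    print(np.round(x_opt, 4))   # should approach the minimizer (1, 1)
```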

  15. Algorithms for the optimization of RBE-weighted dose in particle therapy

    NASA Astrophysics Data System (ADS)

    Horcicka, M.; Meyer, C.; Buschbacher, A.; Durante, M.; Krämer, M.

    2013-01-01

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS-algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.

  16. GPU-based stochastic-gradient optimization for non-rigid medical image registration in time-critical applications

    NASA Astrophysics Data System (ADS)

    Bhosale, Parag; Staring, Marius; Al-Ars, Zaid; Berendsen, Floris F.

    2018-03-01

    Currently, non-rigid image registration algorithms are too computationally intensive to use in time-critical applications. Existing implementations that focus on speed typically address this by either parallelization on GPU-hardware, or by introducing methodically novel techniques into CPU-oriented algorithms. Stochastic gradient descent (SGD) optimization and variations thereof have proven to drastically reduce the computational burden for CPU-based image registration, but have not been successfully applied in GPU hardware due to its stochastic nature. This paper proposes 1) NiftyRegSGD, a SGD optimization for the GPU-based image registration tool NiftyReg, 2) random chunk sampler, a new random sampling strategy that better utilizes the memory bandwidth of GPU hardware. Experiments have been performed on 3D lung CT data of 19 patients, which compared NiftyRegSGD (with and without random chunk sampler) with CPU-based elastix Fast Adaptive SGD (FASGD) and NiftyReg. The registration runtime was 21.5s, 4.4s and 2.8s for elastix-FASGD, NiftyRegSGD without, and NiftyRegSGD with random chunk sampling, respectively, while similar accuracy was obtained. Our method is publicly available at https://github.com/SuperElastix/NiftyRegSGD.
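
    The random chunk idea can be sketched as follows: rather than drawing scattered voxel indices, draw a few random contiguous chunks so that memory reads are coalesced. This is an illustrative mock-up of the sampling pattern only, not NiftyRegSGD's actual implementation; the function names and sizes are assumptions.

```python
# Minimal sketch contrasting scattered voxel sampling with random contiguous chunk sampling.
import numpy as np

def random_voxel_sample(n_voxels, n_samples, rng):
    # scattered samples: each index independent and uniformly random (poor memory coalescing)
    return rng.choice(n_voxels, size=n_samples, replace=False)

def random_chunk_sample(n_voxels, n_samples, chunk_size, rng):
    # contiguous chunks: better bandwidth utilization on GPU-like hardware;
    # chunks may occasionally overlap, which is acceptable for a stochastic estimate
    n_chunks = n_samples // chunk_size
    starts = rng.choice(n_voxels - chunk_size, size=n_chunks, replace=False)
    return np.concatenate([np.arange(s, s + chunk_size) for s in starts])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.standard_normal(1_000_000)          # flattened image intensities
    idx_a = random_voxel_sample(image.size, 4096, rng)
    idx_b = random_chunk_sample(image.size, 4096, 256, rng)
    # both subsamples can feed a stochastic cost/gradient estimate
    print(image[idx_a].mean(), image[idx_b].mean())
```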

  17. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    PubMed

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on long short-term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state-space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimate in the mean square error sense, provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity on the order of the first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, where we demonstrate the superiority of our LSTM-based approach in the sequential prediction task via different real-life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to the conventional methods over several different benchmark real-life data sets.

  18. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2013-01-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
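
    A generic projected-gradient iteration, a gradient step followed by a Euclidean projection onto a simple convex set, is sketched below for a nonnegativity-constrained least-squares problem; it mirrors the "unconstrained subproblem plus Euclidean projection" structure described above but is not the paper's sparse/low-rank multi-task formulation, and all names and settings are assumptions.

```python
# Minimal projected-gradient sketch: gradient step + Euclidean projection onto a convex set.
import numpy as np

def project_nonneg(x):
    # Euclidean projection onto {x : x >= 0}
    return np.maximum(x, 0.0)

def projected_gradient(A, b, lr=None, n_iter=500):
    # minimize 0.5*||A x - b||^2  subject to  x >= 0
    m, n = A.shape
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, with L the Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = project_nonneg(x - lr * grad)      # unconstrained step, then projection
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A = rng.standard_normal((30, 10))
    x_true = np.abs(rng.standard_normal(10))
    b = A @ x_true
    print(np.round(projected_gradient(A, b), 3))   # should approach the nonnegative x_true
```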

  19. Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks.

    PubMed

    Chen, Jianhui; Liu, Ji; Ye, Jieping

    2012-02-01

    We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and a Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in detail. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.

  20. Axial compartmentation of descending and ascending thin limbs of Henle's loops

    PubMed Central

    Westrick, Kristen Y.; Serack, Bradley; Dantzler, William H.

    2013-01-01

    In the inner medulla, radial organization of nephrons and blood vessels around collecting duct (CD) clusters leads to two lateral interstitial regions and preferential intersegmental fluid and solute flows. As the descending (DTLs) and ascending thin limbs (ATLs) pass through these regions, their transepithelial fluid and solute flows are influenced by variable transepithelial solute gradients and structure-to-structure interactions. The goal of this study was to quantify structure-to-structure interactions, so as to better understand compartmentation and flows of transepithelial water, NaCl, and urea and generation of the axial osmotic gradient. To accomplish this, we determined lateral distances of AQP1-positive and AQP1-negative DTLs and ATLs from their nearest CDs, so as to gauge interactions with intercluster and intracluster lateral regions and interactions with interstitial nodal spaces (INSs). DTLs express reduced AQP1 and low transepithelial water permeability along their deepest segments. Deep AQP1-null segments, prebend segments, and ATLs lie equally near to CDs. Prebend segments and ATLs abut CDs and INSs throughout much of their descent and ascent, respectively; however, the distal 30% of ATLs of the longest loops lie distant from CDs as they approach the outer medullary boundary and have minimal interaction with INSs. These relationships occur regardless of loop length. Finally, we show that ascending vasa recta separate intercluster AQP1-positive DTLs from descending vasa recta, thereby minimizing dilution of gradients that drive solute secretion. We hypothesize that DTLs and ATLs enter and exit CD clusters in an orchestrated fashion that is important for generation of the corticopapillary solute gradient by minimizing NaCl and urea loss. PMID:23195680

  1. Axial compartmentation of descending and ascending thin limbs of Henle's loops.

    PubMed

    Westrick, Kristen Y; Serack, Bradley; Dantzler, William H; Pannabecker, Thomas L

    2013-02-01

    In the inner medulla, radial organization of nephrons and blood vessels around collecting duct (CD) clusters leads to two lateral interstitial regions and preferential intersegmental fluid and solute flows. As the descending (DTLs) and ascending thin limbs (ATLs) pass through these regions, their transepithelial fluid and solute flows are influenced by variable transepithelial solute gradients and structure-to-structure interactions. The goal of this study was to quantify structure-to-structure interactions, so as to better understand compartmentation and flows of transepithelial water, NaCl, and urea and generation of the axial osmotic gradient. To accomplish this, we determined lateral distances of AQP1-positive and AQP1-negative DTLs and ATLs from their nearest CDs, so as to gauge interactions with intercluster and intracluster lateral regions and interactions with interstitial nodal spaces (INSs). DTLs express reduced AQP1 and low transepithelial water permeability along their deepest segments. Deep AQP1-null segments, prebend segments, and ATLs lie equally near to CDs. Prebend segments and ATLs abut CDs and INSs throughout much of their descent and ascent, respectively; however, the distal 30% of ATLs of the longest loops lie distant from CDs as they approach the outer medullary boundary and have minimal interaction with INSs. These relationships occur regardless of loop length. Finally, we show that ascending vasa recta separate intercluster AQP1-positive DTLs from descending vasa recta, thereby minimizing dilution of gradients that drive solute secretion. We hypothesize that DTLs and ATLs enter and exit CD clusters in an orchestrated fashion that is important for generation of the corticopapillary solute gradient by minimizing NaCl and urea loss.

  2. Effects of flutamide and finasteride on rat testicular descent.

    PubMed

    Spencer, J R; Torrado, T; Sanchez, R S; Vaughan, E D; Imperato-McGinley, J

    1991-08-01

    The endocrine control of descent of the testis in mammalian species is poorly understood. The androgen dependency of testicular descent was studied in the rat using an antiandrogen (flutamide) and an inhibitor of the enzyme 5 alpha-reductase (finasteride). Androgen receptor blockade inhibited testicular descent more effectively than inhibition of 5 alpha-reductase activity. Moreover, its inhibitory effect was limited to the outgrowth phase of the gubernaculum testis, particularly the earliest stages of outgrowth. Gubernacular size was also significantly reduced in fetuses exposed to flutamide during the outgrowth period. In contrast, androgen receptor blockade or 5 alpha-reductase inhibition applied after the initiation of gubernacular outgrowth or during the regression phase did not affect testicular descent. Successful inhibition of the development of epididymis and vas by prenatal flutamide did not correlate with ipsilateral testicular maldescent, suggesting that an intact epididymis is not required for descent of the testis. Plasma androgen assays confirmed significant inhibition of dihydrotestosterone formation in finasteride-treated rats. These data suggest that androgens, primarily testosterone, are required during the early phases of gubernacular outgrowth for subsequent successful completion of testicular descent.

  3. Intra-coil interactions in split gradient coils in a hybrid MRI-LINAC system

    NASA Astrophysics Data System (ADS)

    Tang, Fangfang; Freschi, Fabio; Sanchez Lopez, Hector; Repetto, Maurizio; Liu, Feng; Crozier, Stuart

    2016-04-01

    An MRI-LINAC system combines a magnetic resonance imaging (MRI) system with a medical linear accelerator (LINAC) to provide image-guided radiotherapy for targeting tumors in real time. In an MRI-LINAC system, a set of split gradient coils is employed to produce orthogonal gradient fields for spatial signal encoding. Owing to this unconventional gradient configuration, eddy currents induced by switching gradient coils on and off may be of particular concern. It is expected that strong intra-coil interactions in the set will be present due to the constrained return paths, leading to potential degradation of the gradient field linearity and image distortion. In this study, a series of gradient coils with different track widths have been designed and analyzed to investigate the electromagnetic interactions between coils in a split gradient set. A driving current, with frequencies from 100 Hz to 10 kHz, was applied to study the inductive coupling effects with respect to conductor geometry and operating frequency. It was found that the eddy currents induced in the un-energized coils (hereafter referred to as passive coils) correlated positively with track width and frequency. The magnetic field induced by the eddy currents in the passive coils with wide tracks was several times larger than that induced by eddy currents in the cold shield of the cryostat. The power loss in the passive coils increased with the track width. Therefore, intra-coil interactions should be included in the coil design and analysis process.

  4. Ecological gradients within a Pennsylvanian mire forest

    USGS Publications Warehouse

    DiMichele, W.A.; Falcon-Lang, H. J.; Nelson, W.J.; Elrick, S.D.; Ames, P.R.

    2007-01-01

    Pennsylvanian coals represent remains of the earliest peat-forming rain forests, but there is no current consensus on forest ecology. Localized studies of fossil forests suggest intermixture of taxa (heterogeneity), while, in contrast, coal ball and palynological analyses imply the existence of pronounced ecological gradients. Here, we report the discovery of a spectacular fossil forest preserved over ~1000 ha on top of the Pennsylvanian (Desmoinesian) Herrin (No. 6) Coal of Illinois, United States. The forest was abruptly drowned when fault movement dropped a segment of coastal mire below sea level. In the largest study of its kind to date, forest composition is statistically analyzed within a well-constrained paleogeographic context. Findings resolve apparent conflicts in models of Pennsylvanian mire ecology by confirming the existence of forest heterogeneity at the local scale, while additionally demonstrating the emergence of ecological gradients at landscape scale. © 2007 The Geological Society of America.

  5. The sodium pump in the evolution of animal cells.

    PubMed

    Stein, W D

    1995-09-29

    Plant cells and bacterial cells are surrounded by a massive cellulose wall, which constrains their high internal osmotic pressure (tens of atmospheres). Animal cells, in contrast, are in osmotic equilibrium with their environment, have no restraining surround, can take on a variety of shapes and change these from moment to moment. This osmotic balance is achieved by the action of the energy-consuming sodium pump, one of the P-type ATPase transport protein family, members of which are indeed also found in bacteria. The pump's action brings about a transmembranal electrochemical gradient of sodium ions, harnessed in a range of transport systems that couple the dissipation of this gradient to establishing a gradient of the coupled substrate. The primary role of the sodium pump as a regulator of cell volume has evolved to provide the basis for an enormous variety of physiological functions.

  6. Testicular descent related to growth hormone treatment.

    PubMed

    Papadimitriou, Anastasios; Fountzoula, Ioanna; Grigoriadou, Despina; Christianakis, Stratos; Tzortzatou, Georgia

    2003-01-01

    An 8.7-year-old boy with cryptorchidism and growth hormone (GH) deficiency due to septo-optic dysplasia showed testicular descent related to the commencement of hGH treatment. This case suggests a role for GH in testicular descent.

  7. Aircraft Vortex Wake Descent and Decay under Real Atmospheric Effects

    DOT National Transportation Integrated Search

    1973-10-01

    Aircraft vortex wake descent and decay in a real atmosphere is studied analytically. Factors relating to encounter hazard, wake generation, wake descent and stability, and atmospheric dynamics are considered. Operational equations for encounter hazar...

  8. Computational and theoretical investigation of Mars's atmospheric impact on the descent module "Exomars-2018" under aerodynamic deceleration

    NASA Astrophysics Data System (ADS)

    Golomazov, M. M.; Ivankov, A. A.

    2016-12-01

    Methods for calculating the aerodynamic impact of the Martian atmosphere on the descent module "Exomars-2018" intended for solving the problem of heat protection of the descent module during aerodynamic deceleration are presented. The results of the investigation are also given. The flow field and radiative and convective heat exchange are calculated along the trajectory of the descent module until parachute system activation.

  9. Apollo lunar descent guidance

    NASA Technical Reports Server (NTRS)

    Klumpp, A. R.

    1974-01-01

    Apollo lunar-descent guidance transfers the Lunar Module from a near-circular orbit to touchdown, traversing a 17 deg central angle and a 15 km altitude in 11 min. A group of interactive programs in an onboard computer guides the descent, controlling attitude and the descent propulsion system throttle. A ground-based program pre-computes guidance targets. The concepts involved in this guidance are described. Explicit and implicit guidance are discussed, guidance equations are derived, and the earlier Apollo explicit equation is shown to be an inferior special case of the later implicit equation. Interactive guidance, by which the two-man crew selects a landing site in favorable terrain and directs the trajectory there, is discussed. Interactive terminal-descent guidance enables the crew to control the essentially vertical descent rate in order to land in minimum time with safe contact speed. The attitude maneuver routine uses concepts that make gimbal lock inherently impossible.
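
    The implicit guidance mentioned above is often presented in textbooks as a quadratic acceleration law: assume the commanded acceleration varies quadratically over the remaining time-to-go, then solve for the coefficient that meets the target position, velocity, and acceleration. The sketch below is a minimal illustration of that idea only, not the Apollo flight equations; the state values, target, and time-to-go are made-up numbers, and gravity compensation is omitted.

    ```python
    import numpy as np

    def quadratic_guidance(r, v, r_t, v_t, a_t, t_go):
        """Acceleration command that steers the current state (r, v) to the
        target state (r_t, v_t, a_t) in time t_go, assuming the commanded
        acceleration varies quadratically with time (textbook implicit form).
        Gravity compensation is intentionally left out of this sketch."""
        return a_t - 6.0 * (v_t + v) / t_go + 12.0 * (r_t - r) / t_go**2

    # Illustrative numbers only (not Apollo values): 15 km up, 500 m/s
    # forward, aim to null everything out at a hypothetical site in 600 s.
    r = np.array([0.0, 15000.0])        # downrange, altitude [m]
    v = np.array([500.0, -30.0])        # velocity [m/s]
    r_t = np.array([480000.0, 0.0])     # hypothetical landing site [m]
    v_t = np.array([0.0, -1.0])         # gentle touchdown velocity [m/s]
    a_t = np.array([0.0, 0.0])          # desired terminal acceleration [m/s^2]
    print(quadratic_guidance(r, v, r_t, v_t, a_t, t_go=600.0))
    ```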

  10. Vertical Descent and Landing Tests of a 0.13-Scale Model of the Convair XFY-1 Vertically Rising Airplane in Still Air, TED No. NACA DE 368

    NASA Technical Reports Server (NTRS)

    Smith, Charlee C., Jr.; Lovell, Powell M., Jr.

    1954-01-01

    An investigation is being conducted to determine the dynamic stability and control characteristics of a 0.13-scale flying model of the Convair XFY-1 vertically rising airplane. This paper presents the results of flight and force tests to determine the stability and control characteristics of the model in vertical descent and landings in still air. The tests indicated that landings, including vertical descent from altitudes representing up to 400 feet for the full-scale airplane and at rates of descent up to 15 or 20 feet per second (full scale), can be performed satisfactorily. Sustained vertical descent in still air probably will be more difficult to perform because of large random trim changes that become greater as the descent velocity is increased. A slight steady head wind or cross wind might be sufficient to eliminate the random trim changes.

  11. Options for Robust Airfoil Optimization under Uncertainty

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Li, Wu

    2002-01-01

    A robust optimization method is developed to overcome point-optimization at the sampled design points. This method combines the best features from several preliminary methods proposed by the authors and their colleagues. The robust airfoil shape optimization is a direct method for drag reduction over a given range of operating conditions and has three advantages: (1) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (2) it uses a large number of spline control points as design variables yet the resulting airfoil shape does not need to be smoothed, and (3) it allows the user to make a tradeoff between the level of optimization and the amount of computing time consumed. For illustration purposes, the robust optimization method is used to solve a lift-constrained drag minimization problem for a two-dimensional (2-D) airfoil in Euler flow with 20 geometric design variables.

  12. A constrained-gradient method to control divergence errors in numerical MHD

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining $\nabla \cdot \mathbf{B} = 0$. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the $\nabla \cdot \mathbf{B}$ errors; however they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence free' methods, this actually minimizes the numerically unstable $\nabla \cdot \mathbf{B}$ terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum $\nabla \cdot \mathbf{B}$ errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no $\nabla \cdot \mathbf{B}$ cleaning is used). By preventing large $\nabla \cdot \mathbf{B}$ at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the $\nabla \cdot \mathbf{B}$ errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems, we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
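
    The core idea of penalizing the divergence of a locally fitted linear reconstruction of B, rather than enforcing it pointwise, can be illustrated with a toy least-squares fit. This is a sketch of the general principle only, not the GIZMO implementation; the neighbor geometry, noise level, and penalty weight lam are arbitrary illustration values.

    ```python
    import numpy as np

    def constrained_gradient(B0, B_nb, dx_nb, lam=10.0):
        """Least-squares gradient matrix G of B at a point, with a quadratic
        penalty on trace(G) = div(B) of the fitted linear reconstruction.
        B0: (3,) field at the point; B_nb: (N,3) neighbor fields;
        dx_nb: (N,3) neighbor offsets; lam: penalty weight (illustrative)."""
        N = len(dx_nb)
        A = np.zeros((3 * N + 1, 9))
        b = np.zeros(3 * N + 1)
        for j in range(N):
            for i in range(3):                 # component i of neighbor j
                A[3 * j + i, 3 * i:3 * i + 3] = dx_nb[j]
                b[3 * j + i] = B_nb[j, i] - B0[i]
        # penalty row: sqrt(lam) * (G00 + G11 + G22) ~ 0
        A[-1, [0, 4, 8]] = np.sqrt(lam)
        G = np.linalg.lstsq(A, b, rcond=None)[0].reshape(3, 3)
        return G, np.trace(G)                  # gradient matrix and its divergence

    rng = np.random.default_rng(0)
    dx = rng.normal(size=(12, 3))              # 12 fake neighbor offsets
    G_true = rng.normal(size=(3, 3))
    B0 = np.array([1.0, 0.0, 0.0])
    B_nb = B0 + dx @ G_true.T + 0.01 * rng.normal(size=(12, 3))
    G_fit, divB = constrained_gradient(B0, B_nb, dx)
    print("fitted div(B):", divB)
    ```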

  13. Conjugate gradient determination of optimal plane changes for a class of three-impulse transfers between noncoplanar circular orbits

    NASA Technical Reports Server (NTRS)

    Burrows, R. R.

    1972-01-01

    A particular type of three-impulse transfer between two circular orbits is analyzed. The possibility of three plane changes is recognized, and the problem is to optimally distribute these plane changes to minimize the sum of the individual impulses. Numerical difficulties and their solution are discussed. Numerical results obtained from a conjugate gradient technique are presented both for the case where the individual plane changes are unconstrained and for the case where they are constrained. Not unexpectedly, multiple minima are found. The techniques presented could be extended to the finite burn case, but the contents are primarily addressed to preliminary mission design and vehicle sizing.
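
    The structure of such a problem can be sketched for a bi-elliptic-style three-impulse transfer in which a total plane change is split among the three burns and the split is optimized with a conjugate gradient routine. The radii, gravitational parameter, total plane change, and the reduction to two free angles are illustrative assumptions and need not match the transfer analyzed in the paper; this corresponds to the unconstrained case.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    MU = 398600.4418  # km^3/s^2, Earth (illustrative central body)

    def vis_viva(r, a):
        """Orbital speed at radius r on an orbit with semi-major axis a."""
        return np.sqrt(MU * (2.0 / r - 1.0 / a))

    def total_dv(split, r1=7000.0, r2=42164.0, ri=70000.0,
                 theta_total=np.radians(28.5)):
        """Sum of three impulses for a bi-elliptic transfer between
        noncoplanar circular orbits, with theta_total split among the burns."""
        t1, t2 = split
        t3 = theta_total - t1 - t2
        a1 = 0.5 * (r1 + ri)        # first transfer ellipse (r1 -> ri)
        a2 = 0.5 * (r2 + ri)        # second transfer ellipse (ri -> r2)
        burns = [
            (np.sqrt(MU / r1), vis_viva(r1, a1), t1),   # leave inner circle
            (vis_viva(ri, a1), vis_viva(ri, a2), t2),   # burn at intermediate apoapsis
            (vis_viva(r2, a2), np.sqrt(MU / r2), t3),   # circularize at outer orbit
        ]
        # each burn combines a speed change and a plane change of angle t
        return sum(np.sqrt(va**2 + vb**2 - 2.0 * va * vb * np.cos(t))
                   for va, vb, t in burns)

    res = minimize(total_dv, x0=[0.01, 0.1], method="CG")  # conjugate gradient
    print("plane-change split [deg]:", np.degrees(res.x),
          np.degrees(np.radians(28.5) - res.x.sum()), "total dv [km/s]:", res.fun)
    ```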

  14. Implementation and verification of global optimization benchmark problems

    NASA Astrophysics Data System (ADS)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, and the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for an automatic verification of the proposed benchmarks. The verification has shown that literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
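
    The idea of obtaining a function value and its gradient from one expression description can be sketched with forward-mode dual numbers. This generic Python sketch is not the authors' C++ library, it omits the interval-estimate functionality, and the benchmark-style expression at the end is a made-up example.

    ```python
    import math

    class Dual:
        """Forward-mode dual number: a value plus a vector of partial derivatives."""
        def __init__(self, val, grad):
            self.val, self.grad = val, list(grad)
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o, [0.0] * len(self.grad))
            return Dual(self.val + o.val, [a + b for a, b in zip(self.grad, o.grad)])
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o, [0.0] * len(self.grad))
            return Dual(self.val * o.val,
                        [self.val * b + o.val * a for a, b in zip(self.grad, o.grad)])
        __rmul__ = __mul__
        def __sub__(self, o):
            return self + (-1.0) * o

    def sin(x):
        """Sine of a dual number (chain rule applied to the gradient part)."""
        return Dual(math.sin(x.val), [math.cos(x.val) * g for g in x.grad])

    def evaluate(expr, point):
        """Value and gradient of a single expression description at a point."""
        n = len(point)
        xs = [Dual(v, [1.0 if i == j else 0.0 for j in range(n)])
              for i, v in enumerate(point)]
        out = expr(xs)
        return out.val, out.grad

    # Hypothetical benchmark-style expression: f(x) = (x0 - 1)^2 + sin(x0 * x1)
    f = lambda x: (x[0] - 1.0) * (x[0] - 1.0) + sin(x[0] * x[1])
    print(evaluate(f, [2.0, 0.5]))
    ```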

  15. Tune-stabilized, non-scaling, fixed-field, alternating gradient accelerator

    DOEpatents

    Johnstone, Carol J [Warrenville, IL

    2011-02-01

    An FFAG is a particle accelerator having turning magnets with a linear field gradient for confinement and a large edge angle to compensate for acceleration. FODO cells contain focus magnets and defocus magnets that are specified by a number of parameters. A set of seven equations, called the FFAG equations, relates the parameters to one another. A set of constraints, called the FFAG constraints, constrains the FFAG equations. Selecting a few parameters, such as injection momentum, extraction momentum, and drift distance, reduces the number of unknown parameters to seven. The seven equations with seven unknowns can then be solved to yield the values of all the parameters and thereby fully specify an FFAG.

  16. Viscous relaxation of impact crater relief on Venus - Constraints on crustal thickness and thermal gradient

    NASA Technical Reports Server (NTRS)

    Grimm, Robert E.; Solomon, Sean C.

    1988-01-01

    Models for the viscous relaxation of impact crater topography are used to constrain the crustal thickness (H) and the mean lithospheric thermal gradient beneath the craters on Venus. A general formulation for gravity-driven flow in a linearly viscous fluid has been obtained which incorporates the densities and temperature-dependent effective viscosities of distinct crust and mantle layers. An upper limit to the crustal volume of Venus of 10^10 km^3 is obtained, which implies either that the average rate of crustal generation has been much smaller on Venus than on Earth or that some form of crustal recycling has occurred on Venus.

  17. On the modeling of breath-by-breath oxygen uptake kinetics at the onset of high-intensity exercises: simulated annealing vs. GRG2 method.

    PubMed

    Bernard, Olivier; Alata, Olivier; Francaux, Marc

    2006-03-01

    Modeling the non-steady-state O2 uptake on-kinetics of high-intensity exercises in the time domain with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated VO2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 but better with SA for A2 and tau2. Our results demonstrate that the implementation of SA significantly improves the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
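
    The discontinuous double-exponential on-kinetics model and a simulated-annealing fit of it can be sketched as follows. The parameter values, bounds, and noise level are illustrative, and scipy's dual_annealing stands in for the authors' SA implementation; the GRG2 solver is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import dual_annealing

    def vo2_model(t, A0, A1, td1, tau1, A2, td2, tau2):
        """Discontinuous double-exponential VO2 on-kinetics model: a baseline
        plus two delayed exponential components."""
        y = np.full_like(t, A0, dtype=float)
        y += np.where(t >= td1, A1 * (1.0 - np.exp(-(t - td1) / tau1)), 0.0)
        y += np.where(t >= td2, A2 * (1.0 - np.exp(-(t - td2) / tau2)), 0.0)
        return y

    # Synthetic breath-by-breath-like data (true parameters and noise are made up)
    rng = np.random.default_rng(1)
    t = np.arange(0.0, 360.0, 3.0)
    true = (0.9, 1.8, 15.0, 25.0, 0.4, 120.0, 90.0)  # A0, A1, td1, tau1, A2, td2, tau2
    y = vo2_model(t, *true) + rng.normal(0.0, 0.05, t.size)

    def rss(p):                     # residual sum of squares, the SA objective
        return np.sum((y - vo2_model(t, *p)) ** 2)

    bounds = [(0.5, 1.5), (0.5, 3.0), (0.0, 60.0), (5.0, 60.0),
              (0.0, 1.5), (60.0, 240.0), (10.0, 240.0)]
    fit = dual_annealing(rss, bounds, seed=2)
    print(np.round(fit.x, 2), fit.fun)
    ```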

  18. Shape regularized active contour based on dynamic programming for anatomical structure segmentation

    NASA Astrophysics Data System (ADS)

    Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra

    2005-04-01

    We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enable building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods, global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space in a gradient descent fashion toward the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished through modifying the search space of the DP. The modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure consistency of the resulting shape with the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation. Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints, and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
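
    The global regularization step (moving the whole shape vector toward probable shapes in a gradient-descent fashion) can be sketched with a plain PCA shape prior standing in for the kernel-space density estimate. The training shapes, number of modes, and step size below are illustrative, and the linear prior is a simplification of the KSDE model described above.

    ```python
    import numpy as np

    def shape_regularize(s, train_shapes, n_modes=3, eta=0.5):
        """One global-regularization step: move the shape vector s toward the
        training distribution by descending E(s) = ||(I - U U^T)(s - mu)||^2,
        where U holds the leading PCA modes of the training shapes.
        (A linear-PCA stand-in for the kernel-space density estimate.)"""
        mu = train_shapes.mean(axis=0)
        # principal modes via SVD of the centered training shapes
        _, _, vt = np.linalg.svd(train_shapes - mu, full_matrices=False)
        U = vt[:n_modes].T
        d = s - mu
        grad = 2.0 * (d - U @ (U.T @ d))   # gradient of the off-subspace energy
        return s - eta * grad

    # Toy example: 20 training "shapes" of 8 landmark coordinates each (made up)
    rng = np.random.default_rng(4)
    train = rng.normal(size=(20, 8)) @ np.diag([3, 2, 1, .2, .2, .2, .2, .2])
    noisy_shape = train[0] + rng.normal(0.0, 1.0, 8)
    print(shape_regularize(noisy_shape, train))
    ```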

  19. Evaluation of vertical profiles to design continuous descent approach procedure

    NASA Astrophysics Data System (ADS)

    Pradeep, Priyank

    The current research focuses on the predictability, variability and operational feasibility aspects of Continuous Descent Approach (CDA), which is among the key concepts of the Next Generation Air Transportation System (NextGen). The idle-thrust CDA is a fuel-economical, noise and emission abatement procedure, but requires increased separation to accommodate variability and uncertainties in the vertical and speed profiles of arriving aircraft. Although a considerable amount of research has been devoted to estimating the potential benefits of CDA, only a few studies have addressed its predictability, variability and operational feasibility. The analytical equations derived in this research using flight dynamics and the Base of Aircraft Data (BADA) Total Energy Model (TEM) give insight into the dependency of the CDA vertical profile on factors such as wind speed and gradient, weight, aircraft type and configuration, thrust settings, atmospheric factors (deviation from ISA (DISA), pressure and density of the air) and the descent speed profile. Applying the derived equations to the idle-thrust CDA gives insight into the sensitivity of its vertical profile to multiple factors. This suggests that a fixed geometric flight path angle (FPA) CDA has a higher degree of predictability and less variability, at the cost of non-idle, low-thrust engine settings; with an optimized design this impact can be minimized overall. The CDA simulations were performed using the Future ATM Concepts Evaluation Tool (FACET) based on radar-track and aircraft type data (BADA) of real air traffic to some of the busiest airports in the USA (ATL, SFO and the New York Metroplex (JFK, EWR and LGA)). The statistical analysis of the CDA vertical profiles shows 1) mean geometric FPAs derived from the various simulated vertical profiles are consistently shallower than the 3° glideslope angle, and 2) a high level of variability in the vertical profiles of idle-thrust CDA even in the absence of uncertainties in external factors. Analysis from the operational feasibility perspective suggests that two key features of the performance-based Flight Management System (FMS), i.e. required time of arrival (RTA) and geometric descent path, would help reduce the unpredictability associated with the arrival time and vertical profile of aircraft guided by the FMS coupled with auto-pilot (AP) and auto-throttle (AT). The statistical analysis of the CDA vertical profiles also suggests that, for procedure design, window-type, 'AT or above' and 'AT or below' altitude and FPA constraints are more realistic and useful than the obsolete 'AT'-type altitude constraint.
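
    The Total Energy Model balance underlying these equations states that excess power, (thrust minus drag) times true airspeed, is shared between changing altitude and changing speed. The helper below is a hedged sketch of the resulting rate-of-climb/descent expression with an energy share factor; the numbers are purely illustrative and are not BADA coefficients.

    ```python
    G0 = 9.80665  # standard gravity [m/s^2]

    def rocd(thrust_n, drag_n, mass_kg, tas_ms, esf=1.0):
        """Rate of climb/descent [m/s] from the total-energy balance
        (T - D) * V = m*g*dh/dt + m*V*dV/dt, rearranged as
        dh/dt = (T - D) * V / (m*g) * ESF, where the energy share factor
        ESF = 1 / (1 + (V/g)*dV/dh) apportions the excess power between
        climbing/descending and accelerating/decelerating."""
        return (thrust_n - drag_n) * tas_ms / (mass_kg * G0) * esf

    # Idle-thrust descent example (all values illustrative, not BADA data):
    print(rocd(thrust_n=6000.0, drag_n=40000.0, mass_kg=60000.0,
               tas_ms=130.0, esf=0.85))   # negative value -> descent
    ```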

  20. A Descent Rate Control Approach to Developing an Autonomous Descent Vehicle

    NASA Astrophysics Data System (ADS)

    Fields, Travis D.

    Circular parachutes have been used for aerial payload/personnel deliveries for over 100 years. In the past two decades, significant work has been done to improve the landing accuracies of cargo deliveries for humanitarian and military applications. This dissertation discusses the approach developed in which a circular parachute is used in conjunction with an electro-mechanical reefing system to manipulate the landing location. Rather than attempt to steer the autonomous descent vehicle directly, control of the landing location is accomplished by modifying the amount of time spent in a particular wind layer. Descent rate control is performed by reversibly reefing the parachute canopy. The first stage of the research investigated the use of a single actuation during descent (with periodic updates), in conjunction with a curvilinear target. Simulation results using real-world wind data are presented, illustrating the utility of the methodology developed. Additionally, hardware development and flight-testing of the single actuation autonomous descent vehicle are presented. The next phase of the research focuses on expanding the single actuation descent rate control methodology to incorporate a multi-actuation path-planning system. By modifying the parachute size throughout the descent, the controllability of the system greatly increases. The trajectory planning methodology developed provides a robust approach to accurately manipulate the landing location of the vehicle. The primary benefits of this system are the inherent robustness to release location errors and the ability to overcome vehicle uncertainties (mass, parachute size, etc.). A separate application of the path-planning methodology is also presented. An in-flight path-prediction system was developed for use in high-altitude ballooning by utilizing the path-planning methodology developed for descent vehicles. The developed onboard system improves landing location predictions in-flight using collected flight information during the ascent and descent. Simulation and real-world flight tests (using the developed low-cost hardware) demonstrate the significance of the improvements achievable when flying the developed system.

  1. The Influence of Wavelength-Dependent Absorption and Temperature Gradients on Temperature Determination in Laser-Heated Diamond-Anvil Cells

    NASA Astrophysics Data System (ADS)

    Deng, J.; Lee, K. K. M.; Du, Z.; Benedetti, L. R.

    2016-12-01

    In situ temperature measurements in the laser-heated diamond-anvil cell (LHDAC) are among the most fundamental experiments undertaken in high-pressure science. Despite its importance, few efforts have been made to examine how the thermal radiation spectra of hot samples are altered by wavelength-dependent absorption of the sample itself, together with temperature gradients within samples during laser heating, and how this influences temperature measurement. For example, iron-bearing minerals show strong wavelength-dependent absorption in the wavelength range used to determine temperature, which, together with temperature gradients, can account for strongly biased apparent temperatures (e.g., a 1200 K deviation for a 4000 K melting temperature) in some experiments obtained by fitting of detected thermal radiation intensities. As such, conclusions about melting temperatures, phase diagrams and partitioning behavior may be grossly incorrect for these materials. In general, wavelength-dependent absorption and temperature gradients of samples are two key factors to consider in order to rigorously constrain temperatures, and both have been largely ignored in previous LHDAC studies. Temperatures measured in recent high-profile papers will be reevaluated in this light.

  2. Evolutionary responses of tree phenology to the combined effects of assortative mating, gene flow and divergent selection

    PubMed Central

    Soularue, J-P; Kremer, A

    2014-01-01

    The timing of bud burst (TBB) in temperate trees is a key adaptive trait, the expression of which is triggered by temperature gradients across the landscape. TBB is strongly correlated with flowering time and is therefore probably mediated by assortative mating. We derived theoretical predictions and performed numerical simulations of evolutionary changes in TBB in response to divergent selection and gene flow in a metapopulation. We showed that the combination of the environmental gradient of TBB and assortative mating creates contrasting genetic clines, depending on the direction of divergent selection. If divergent selection acts in the same direction as the environmental gradient (cogradient settings), genetic clines are established and inflated by assortative mating. Conversely, under divergent selection of the same strength but acting in the opposite direction (countergradient selection), genetic clines are slightly constrained. We explored the consequences of these dynamics for population maladaptation, by monitoring pollen swamping. Depending on the direction of divergent selection with respect to the environmental gradient, pollen filtering owing to assortative mating either facilitates or impedes adaptation in peripheral populations. PMID:24924591

  3. Mixed finite-element formulations in piezoelectricity and flexoelectricity

    PubMed Central

    2016-01-01

    Flexoelectricity, the linear coupling of strain gradient and electric polarization, is inherently a size-dependent phenomenon. The energy storage function for a flexoelectric material depends not only on polarization and strain, but also strain-gradient. Thus, conventional finite-element methods formulated solely on displacement are inadequate to treat flexoelectric solids since gradients raise the order of the governing differential equations. Here, we introduce a computational framework based on a mixed formulation developed previously by one of the present authors and a colleague. This formulation uses displacement and displacement-gradient as separate variables which are constrained in a ‘weighted integral sense’ to enforce their known relation. We derive a variational formulation for boundary-value problems for piezo- and/or flexoelectric solids. We validate this computational framework against available exact solutions. Our new computational method is applied to more complex problems, including a plate with an elliptical hole, stationary cracks, as well as tension and shear of solids with a repeating unit cell. Our results address several issues of theoretical interest, generate predictions of experimental merit and reveal interesting flexoelectric phenomena with potential for application. PMID:27436967

  4. Reversed magnetic shear suppression of electron-scale turbulence on NSTX

    NASA Astrophysics Data System (ADS)

    Yuh, Howard Y.; Levinton, F. M.; Bell, R. E.; Hosea, J. C.; Kaye, S. M.; Leblanc, B. P.; Mazzucato, E.; Smith, D. R.; Domier, C. W.; Luhmann, N. C.; Park, H. K.

    2009-11-01

    Electron thermal internal transport barriers (e-ITBs) are observed in reversed (negative) magnetic shear NSTX discharges [1]. These e-ITBs can be created with either neutral beam heating or High Harmonic Fast Wave (HHFW) RF heating. The e-ITB occurs at the location of minimum magnetic shear determined by Motional Stark Effect (MSE) constrained equilibria. Statistical studies show a threshold condition in magnetic shear for e-ITB formation. High-k fluctuation measurements at electron turbulence wavenumbers [3] have been made under several different transport regimes, including a bursty regime that limits temperature gradients at intermediate magnetic shear. The growth rate of fluctuations has been calculated immediately following a change in the local magnetic shear, resulting in electron temperature gradient relaxation. Linear gyrokinetic simulation results for NSTX show that while measured electron temperature gradients exceed critical linear thresholds for ETG instability, growth rates can remain low under reversed shear conditions up to high electron temperature gradients. [1] H. Yuh et al., PoP 16, 056120; [2] D.R. Smith, E. Mazzucato et al., RSI 75, 3840; [3] E. Mazzucato, D.R. Smith et al., PRL 101, 075001.

  5. Mixed finite-element formulations in piezoelectricity and flexoelectricity.

    PubMed

    Mao, Sheng; Purohit, Prashant K; Aravas, Nikolaos

    2016-06-01

    Flexoelectricity, the linear coupling of strain gradient and electric polarization, is inherently a size-dependent phenomenon. The energy storage function for a flexoelectric material depends not only on polarization and strain, but also strain-gradient. Thus, conventional finite-element methods formulated solely on displacement are inadequate to treat flexoelectric solids since gradients raise the order of the governing differential equations. Here, we introduce a computational framework based on a mixed formulation developed previously by one of the present authors and a colleague. This formulation uses displacement and displacement-gradient as separate variables which are constrained in a 'weighted integral sense' to enforce their known relation. We derive a variational formulation for boundary-value problems for piezo- and/or flexoelectric solids. We validate this computational framework against available exact solutions. Our new computational method is applied to more complex problems, including a plate with an elliptical hole, stationary cracks, as well as tension and shear of solids with a repeating unit cell. Our results address several issues of theoretical interest, generate predictions of experimental merit and reveal interesting flexoelectric phenomena with potential for application.

  6. An evaluation of descent strategies for TNAV-equipped aircraft in an advanced metering environment

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Schwab, R. W.; Groce, J. L.; Coote, M. A.

    1986-01-01

    Investigated were the effects on system throughput and fleet fuel usage of arrival aircraft utilizing three 4D RNAV descent strategies (cost optimal, clean-idle Mach/CAS and constant descent angle Mach/CAS), both individually and in combination, in an advanced air traffic control metering environment. Results are presented for all mixtures of arrival traffic consisting of three Boeing commercial jet types and for all combinations of the three descent strategies for a typical en route metering airport arrival distribution.

  7. An annealed chaotic maximum neural network for bipartite subgraph problem.

    PubMed

    Wang, Jiahai; Tang, Zheng; Wang, Ronglong

    2004-04-01

    In this paper, based on the maximum neural network, we propose a new parallel algorithm that can help the maximum neural network escape from local minima by including a transient chaotic neurodynamics for the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is an NP-complete problem, is to remove the minimum number of edges in a given graph such that the remaining graph is a bipartite graph. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without a burden on the parameter-tuning. However, the model has a tendency to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is then fundamentally governed by the gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and the chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem, superior to those of the best existing parallel algorithms.
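
    Because removing the fewest edges to leave a bipartite graph is equivalent to finding a maximum cut, the transiently chaotic idea can be sketched with a generic Chen-Aihara-style network on that formulation. This is not the authors' maximum-neuron model; the graph, coefficients, and iteration count are illustrative choices, and the self-feedback term z decays so that the dynamics end in plain gradient descent.

    ```python
    import numpy as np

    def chaotic_bipartite(edges, n, steps=2000, seed=0):
        """Transiently chaotic descent on the max-cut energy equivalent to the
        bipartite-subgraph problem: minimize the number of same-side edges."""
        rng = np.random.default_rng(seed)
        adj = np.zeros((n, n))
        for i, j in edges:
            adj[i, j] = adj[j, i] = 1.0
        y = rng.uniform(-0.1, 0.1, n)                 # internal neuron states
        z, k, alpha, beta, I0, eps = 0.08, 0.9, 0.015, 0.003, 0.5, 0.02
        for _ in range(steps):
            x = 1.0 / (1.0 + np.exp(-np.clip(y / eps, -50, 50)))  # outputs in (0, 1)
            dE = adj @ (2.0 * x - 1.0)                # gradient of same-side edge count
            y = k * y + alpha * (-dE) - z * (x - I0)  # chaotic self-feedback term
            z *= 1.0 - beta                           # anneal the self-feedback away
        x = 1.0 / (1.0 + np.exp(-np.clip(y / eps, -50, 50)))
        side = (x > 0.5).astype(int)
        removed = [(i, j) for i, j in edges if side[i] == side[j]]
        return side, removed

    # Toy graph: a 5-cycle is an odd cycle, so at least one edge must be removed.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(chaotic_bipartite(edges, n=5))
    ```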

  8. SPIDER. IV. OPTICAL AND NEAR-INFRARED COLOR GRADIENTS IN EARLY-TYPE GALAXIES: NEW INSIGHT INTO CORRELATIONS WITH GALAXY PROPERTIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Barbera, F.; De Carvalho, R. R.; De La Rosa, I. G.

    2010-11-15

    We present an analysis of stellar population gradients in 4546 early-type galaxies (ETGs) with photometry in grizYHJK along with optical spectroscopy. ETGs were selected as bulge-dominated systems, displaying passive spectra within the SDSS fibers. A new approach is described which utilizes color information to constrain age and metallicity gradients. Defining an effective color gradient, ∇_*, which incorporates all of the available color indices, we investigate how ∇_* varies with galaxy mass proxies, i.e., velocity dispersion, stellar (M_*) and dynamical (M_dyn) masses, as well as age, metallicity, and [α/Fe]. ETGs with M_dyn larger than 8.5 × 10^10 M_sun have increasing age gradients and decreasing metallicity gradients with respect to mass, metallicity, and enhancement. We find that velocity dispersion and [α/Fe] are the main drivers of these correlations. ETGs with 2.5 × 10^10 M_sun ≤ M_dyn ≤ 8.5 × 10^10 M_sun show no correlation of age, metallicity, and color gradients with respect to mass, although color gradients still correlate with stellar population parameters, and these correlations are independent of each other. In both mass regimes, the striking anti-correlation between color gradient and α-enhancement is significant at ≈5σ and results from the fact that the metallicity gradient decreases with [α/Fe]. This anti-correlation may reflect the fact that star formation and metallicity enrichment are regulated by the interplay between the energy input from supernovae, and the temperature and pressure of the hot X-ray gas in ETGs. For all mass ranges, positive age gradients are associated with old galaxies (>5-7 Gyr). For galaxies younger than ≈5 Gyr, mostly at low mass, the age gradient tends to be anti-correlated with the Age parameter, with more positive gradients at younger ages.

  9. THE AFRICAN DESCENT AND GLAUCOMA EVALUATION STUDY (ADAGES): PREDICTORS OF VISUAL FIELD DAMAGE IN GLAUCOMA SUSPECTS

    PubMed Central

    Khachatryan, Naira; Medeiros, Felipe A.; Sharpsten, Lucie; Bowd, Christopher; Sample, Pamela A.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Weinreb, Robert N.; Miki, Atsuya; Hammel, Na’ama; Zangwill, Linda M.

    2015-01-01

    Purpose To evaluate racial differences in the development of visual field (VF) damage in glaucoma suspects. Design Prospective, observational cohort study. Methods Six hundred thirty-six eyes from 357 glaucoma suspects with normal VF at baseline were included from the multicenter African Descent and Glaucoma Evaluation Study (ADAGES). Racial differences in the development of VF damage were examined using multivariable Cox Proportional Hazard models. Results Thirty-one (25.4%) of 122 African descent participants and 47 (20.0%) of 235 European descent participants developed VF damage (p=0.078). In multivariable analysis, worse baseline VF mean deviation, higher mean arterial pressure during follow up, and a race × mean intraocular pressure (IOP) interaction term were significantly associated with the development of VF damage, suggesting that racial differences in the risk of VF damage varied by IOP. At higher mean IOP levels, race was predictive of the development of VF damage even after adjusting for potentially confounding factors. At mean IOPs during follow-up of 22, 24 and 26 mmHg, multivariable hazard ratios (95%CI) for the development of VF damage in African descent compared to European descent subjects were 2.03 (1.15–3.57), 2.71 (1.39–5.29), and 3.61 (1.61–8.08), respectively. However, at lower mean IOP levels (below 22 mmHg) during follow-up, African descent was not predictive of the development of VF damage. Conclusion In this cohort of glaucoma suspects with similar access to treatment, multivariate analysis revealed that at higher mean IOP during follow-up, individuals of African descent were more likely to develop VF damage than individuals of European descent. PMID:25597839

  10. [Ethnic differences in forensic psychiatry: an exploratory study at a Dutch forensic psychiatric centre].

    PubMed

    van der Stoep, T

    Compared to their percentage in the general population, ethnic minorities are overrepresented in forensic psychiatry. If these minorities are to be treated successfully, we need to know more about this group. So far, however, little is known about the differences in mental disorders and types of offences between patients of non-Dutch descent and patients of Dutch descent.
    AIM: To take the first steps to obtain the information we need in order to provide customised care for patients of non-Dutch descent.
    METHOD: Differences between patients of Dutch and non-Dutch descent with regard to treatment, diagnosis and offences committed were identified within a group of patients admitted to the forensic psychiatric centre Oostvaarderskliniek during the period 2001 - 2014.
    RESULTS: The treatment of patients of non-Dutch descent lasted longer than the treatment of patients of Dutch descent (8.5 years versus 6.6 years). Furthermore, patients from ethnic minority groups were diagnosed more often with schizophrenia (49.1% versus 21.4%), but less often with pervasive developmental disorders or sexual disorders. Patients of non-Dutch descent were more often convicted of sexual crimes where the victim was aged 16 years or older, whereas patients of Dutch descent were more often convicted of sexual crimes where the victim was under 16.
    CONCLUSION: There are differences between patients of Dutch and non-Dutch descent with regard to treatment duration, diagnosis and offences they commit. Future research needs to investigate whether these results are representative for the entire field of forensic psychiatry and to discover the reasons for these differences.

  11. Distributions of ectomycorrhizal and foliar endophytic fungal communities associated with Pinus ponderosa along a spatially constrained elevation gradient.

    PubMed

    Bowman, Elizabeth A; Arnold, A Elizabeth

    2018-04-01

    Understanding distributions of plant-symbiotic fungi is important for projecting responses to environmental change. Many coniferous trees host ectomycorrhizal fungi (EM) in association with roots and foliar endophytic fungi (FE) in leaves. We examined how EM and FE associated with Pinus ponderosa each vary in abundance, diversity, and community structure over a spatially constrained elevation gradient that traverses four plant communities, 4°C in mean annual temperature, and 15 cm in mean annual precipitation. We sampled 63 individuals of Pinus ponderosa in 10 sites along a 635 m elevation gradient that encompassed a geographic distance of 9.8 km. We used standard methods to characterize each fungal group (amplified and sequenced EM from root tips; isolated and sequenced FE from leaves). Abundance and diversity of EM were similar across sites, but community composition and distributions of the most common EM differed with elevation (i.e., with climate, soil chemistry, and plant communities). Abundance and composition of FE did not differ with elevation, but diversity peaked in mid-to-high elevations. Our results suggest relatively tight linkages between EM and climate, soil chemistry, and plant communities. That FE appear less linked with these factors may speak to limitations of a culture-based approach, but more likely reflects the small spatial scale encompassed by our study. Future work should consider comparable methods for characterizing these functional groups, and additional transects to understand relationships of EM and FE to environmental factors that are likely to shift as a function of climate change. © 2018 Botanical Society of America.

  12. Thermoregulation in the lizard Psammodromus algirus along a 2200-m elevational gradient in Sierra Nevada (Spain)

    NASA Astrophysics Data System (ADS)

    Zamora-Camacho, Francisco Javier; Reguera, Senda; Moreno-Rueda, Gregorio

    2016-05-01

    Achieving optimal body temperature maximizes animal fitness. Since ambient temperature may limit ectotherm thermal performance, it can be constrained in environments that are too cold or too hot. In this sense, elevational gradients encompass contrasting thermal environments. In thermally pauperized elevations, ectotherms may either show adaptations or suboptimal body temperatures. Also, reproductive condition may affect thermal needs. Herein, we examined different thermal ecology and physiology capabilities of the lizard Psammodromus algirus along a 2200-m elevational gradient. We measured field (Tb) and laboratory-preferred (Tpref) body temperatures of lizards with different reproductive conditions, as well as ambient (Ta) and copper-model operative temperature (Te), which we used to determine thermal quality of the habitat (de), accuracy (db), and effectiveness of thermoregulation (de-db) indexes. We detected no Tb trend with elevation, while Ta constrained Tb only at high elevations. Moreover, while Ta decreased more than 7 °C with elevation, Tpref dropped only 0.6 °C, although significantly. Notably, low-elevation lizards faced excess temperature (Te > Tpref). In addition, de was best at middle elevations, followed by high elevations, and poorest at low elevations. Nonetheless, regarding microhabitat, high-elevation de was more suitable in sun-exposed microhabitats, which may increase exposure to predators, and at midday, which may limit daily activity. As for gender, db and de-db were better in females than in males. In conclusion, P. algirus seems capable of facing a wide thermal range, which probably contributes to its extensive distribution and makes it adaptable to climate change.

  13. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of Magnetic Resonance Image Reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), where the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. The algorithm is a fast iterative scheme, and we prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters, by means of a continuation technique, allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Some numerical experiments performed on MRI sub-sampled data show the efficiency of the algorithm and the accuracy of the solution.

  14. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
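
    A classical continuous-time feasible-point scheme projects the negative gradient onto the tangent space of the constraint using P = I - J^T (J J^T)^{-1} J, which is exactly the construction that becomes singular when the constraint gradients lose independence (the situation the new projection matrix is designed to avoid). The sketch below Euler-integrates that classical flow on a made-up example; the objective, constraint, step size, and iteration count are all illustrative.

    ```python
    import numpy as np

    def projected_gradient_flow(f_grad, h, h_jac, x0, dt=0.01, steps=3000):
        """Euler integration of xdot = -P(x) grad f(x), where
        P = I - J^T (J J^T)^-1 J projects onto the tangent space of h(x) = 0.
        This classical projector fails when J loses rank, which motivates the
        singularity-free projection matrix proposed in the paper."""
        x = np.array(x0, dtype=float)
        for _ in range(steps):
            J = np.atleast_2d(h_jac(x))
            P = np.eye(len(x)) - J.T @ np.linalg.solve(J @ J.T, J)
            x = x - dt * P @ f_grad(x)
        return x, h(x)   # h(x) shows the small drift of the explicit Euler scheme

    c = np.array([2.0, 1.0])                      # unconstrained minimizer (illustrative)
    f_grad = lambda x: 2.0 * (x - c)              # f(x) = ||x - c||^2
    h = lambda x: x[0]**2 + x[1]**2 - 1.0         # equality constraint: unit circle
    h_jac = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])
    print(projected_gradient_flow(f_grad, h, h_jac, x0=[0.0, 1.0]))
    ```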

  15. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  16. The impact of Asian descent on the incidence of acquired severe aplastic anaemia in children.

    PubMed

    McCahon, Emma; Tang, Keith; Rogers, Paul C J; McBride, Mary L; Schultz, Kirk R

    2003-04-01

    Previous studies have suggested an increased incidence of acquired severe aplastic anaemia in Asian populations. We evaluated the incidence of aplastic anaemia in people of Asian descent, using a well-defined paediatric (0-14 years) population in British Columbia, Canada to minimize environmental factors. The incidence in children of East/South-east Asian descent (6.9/million/year) and South Asian (East Indian) descent (7.3/million/year) was higher than for those of White/mixed ethnic descent (1.7/million/year). There appeared to be no contribution by environmental factors. This study shows that Asian children have an increased incidence of severe aplastic anaemia possibly as a result of a genetic predisposition.

  17. Constrained trajectory optimization for kinematically redundant arms

    NASA Technical Reports Server (NTRS)

    Carignan, Craig R.; Tarrant, Janice M.

    1990-01-01

    Two velocity optimization schemes for resolving redundant joint configurations are compared. The Extended Moore-Penrose Technique minimizes the joint velocities and avoids obstacles indirectly by adjoining a cost gradient to the solution. A new method can incorporate inequality constraints directly to avoid obstacles and singularities in the workspace. A four-link arm example is used to illustrate singularity avoidance while tracking desired end-effector paths.
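
    The "adjoining a cost gradient" step corresponds to the classical pseudoinverse-plus-null-space resolution qdot = J+ xdot + (I - J+ J)(-k dH/dq). The sketch below applies that standard formula to a planar four-link arm; the link lengths, the secondary cost H, and the gain are illustrative assumptions and are not taken from the paper.

    ```python
    import numpy as np

    def fk_jacobian(q, lengths):
        """End-effector position and 2xN Jacobian of a planar serial arm."""
        angles = np.cumsum(q)
        x = np.array([np.sum(lengths * np.cos(angles)),
                      np.sum(lengths * np.sin(angles))])
        J = np.zeros((2, len(q)))
        for i in range(len(q)):
            J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
            J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
        return x, J

    def redundancy_resolution(q, xdot_des, lengths, k=0.5):
        """qdot = J+ xdot + (I - J+ J)(-k dH/dq): track the end-effector
        velocity while descending a secondary cost H in the task null space."""
        _, J = fk_jacobian(q, lengths)
        Jp = np.linalg.pinv(J)
        dH = 2.0 * q      # H(q) = ||q||^2 keeps joints near zero (illustrative cost)
        return Jp @ xdot_des + (np.eye(len(q)) - Jp @ J) @ (-k * dH)

    lengths = np.array([1.0, 0.8, 0.6, 0.4])      # four-link arm (illustrative)
    q = np.array([0.3, -0.2, 0.4, 0.1])
    print(redundancy_resolution(q, xdot_des=np.array([0.1, 0.0]), lengths=lengths))
    ```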

  18. Intrascrotal CGRP 8-37 causes a delay in testicular descent in mice.

    PubMed

    Samarakkody, U K; Hutson, J M

    1992-07-01

    The genitofemoral nerve is a key factor in the inguinoscrotal descent of the testis. The effect of androgens may be mediated via the central nervous system, which in turn secretes the neurotransmitter calcitonin gene-related peptide (CGRP) at the genitofemoral nerve endings, to cause testicular descent. The effect of endogenous CGRP was examined by weekly injections of a vehicle with or without synthetic antagonist (CGRP 8-37) into the developing scrotum of neonatal mice. The descent of the testis was delayed in the experimental group compared with the control group. At 2 weeks of age 43% of controls had descended testes compared with 0% of experimental animals. At 3 weeks of age 17% of experimentals still had undescended testes, whereas all testes were descended in controls. At 4 weeks 3 testes remained undescended in the experimental group. It is concluded that the CGRP antagonist can retard testicular descent. This result is consistent with the hypothesis that CGRP is an important intermediary in testicular descent.

  19. Structure and State of Stress of the Chilean Subduction Zone from Terrestrial and Satellite-Derived Gravity and Gravity Gradient Data

    NASA Astrophysics Data System (ADS)

    Gutknecht, B. D.; Götze, H.-J.; Jahr, T.; Jentzsch, G.; Mahatsente, R.; Zeumann, St.

    2014-11-01

    It is well known that the quality of gravity modelling of the Earth's lithosphere is heavily dependent on the limited number of available terrestrial gravity data. More recently, however, interest has grown within the geoscientific community to utilise the homogeneously measured satellite gravity and gravity gradient data for lithospheric scale modelling. Here, we present an interdisciplinary approach to determine the state of stress and rate of deformation in the Central Andean subduction system. We employed gravity data from terrestrial, satellite-based and combined sources using multiple methods to constrain stress, strain and gravitational potential energy (GPE). Well-constrained 3D density models, which were partly optimised using the combined regional gravity model IMOSAGA01C (Hosse et al. in Surv Geophys, 2014, this issue), were used as bases for the computation of stress anomalies on the top of the subducting oceanic Nazca plate and GPE relative to the base of the lithosphere. The geometries and physical parameters of the 3D density models were used for the computation of stresses and uplift rates in the dynamic modelling. The stress distributions, as derived from the static and dynamic modelling, reveal distinct positive anomalies of up to 80 MPa along the coastal Jurassic batholith belt. The anomalies correlate well with major seismicity in the shallow parts of the subduction system. Moreover, the pattern of stress distributions in the Andean convergent zone varies both along the north-south and west-east directions, suggesting that the continental fore-arc is highly segmented. Estimates of GPE show that the high Central Andes might be in a state of horizontal deviatoric tension. Models of gravity gradients from the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite mission were used to compute Bouguer-like gradient anomalies at 8 km above sea level. The analysis suggests that data from GOCE add significant value to the interpretation of lithospheric structures, given that the appropriate topographic correction is applied.

  20. Constraining Gas Diffusivity-Soil Water Content Relationships in Forest Soils Using Surface Chamber Fluxes and Depth Profiles of Multiple Trace Gases

    NASA Astrophysics Data System (ADS)

    Dore, J. E.; Kaiser, K.; Seybold, E. C.; McGlynn, B. L.

    2012-12-01

    Forest soils are sources of carbon dioxide (CO2) to the atmosphere and can act as either sources or sinks of methane (CH4) and nitrous oxide (N2O), depending on redox conditions and other factors. Soil moisture is an important control on microbial activity, redox conditions and gas diffusivity. Direct chamber measurements of soil-air CO2 fluxes are facilitated by the availability of sensitive, portable infrared sensors; however, corresponding CH4 and N2O fluxes typically require the collection of time-course physical samples from the chamber with subsequent analyses by gas chromatography (GC). Vertical profiles of soil gas concentrations may also be used to derive CH4 and N2O fluxes by the gradient method; this method requires much less time and many fewer GC samples than the direct chamber method, but requires that effective soil gas diffusivities are known. In practice, soil gas diffusivity is often difficult to accurately estimate using a modeling approach. In our study, we apply both the chamber and gradient methods to estimate soil trace gas fluxes across a complex Rocky Mountain forested watershed in central Montana. We combine chamber flux measurements of CO2 (by infrared sensor) and CH4 and N2O (by GC) with co-located soil gas profiles to determine effective diffusivity in soil for each gas simultaneously, over-determining the diffusion equations and providing constraints on both the chamber and gradient methodologies. We then relate these soil gas diffusivities to soil type and volumetric water content in an effort to arrive at empirical parameterizations that may be used to estimate gas diffusivities across the watershed, thereby facilitating more accurate, frequent and widespread gradient-based measurements of trace gas fluxes across our study system. Our empirical approach to constraining soil gas diffusivity is well suited for trace gas flux studies over complex landscapes in general.
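
    The gradient method referred to here is Fick's first law applied to the measured concentration profile, F = -D_eff dC/dz. The snippet below is a minimal sketch in which the Millington-Quirk relation stands in for the empirically constrained diffusivity-water-content relationship the authors calibrate; the gas, depths, concentrations, and soil properties are illustrative.

    ```python
    def millington_quirk(d0, porosity, theta_w):
        """Effective soil gas diffusivity [m2/s] from the free-air diffusivity d0,
        total porosity, and volumetric water content (Millington-Quirk form);
        used here only as a stand-in for an empirically fitted relationship."""
        theta_a = porosity - theta_w                 # air-filled porosity
        return d0 * theta_a ** (10.0 / 3.0) / porosity ** 2

    def gradient_flux(c_shallow, c_deep, z_shallow, z_deep, d_eff):
        """Upward (soil-to-atmosphere) flux by Fick's first law, F = D_eff * dC/dz,
        with depth z positive downward, so a concentration increasing with depth
        gives a positive emission [mol m-2 s-1]."""
        return d_eff * (c_deep - c_shallow) / (z_deep - z_shallow)

    # Illustrative CO2 profile at 0.05 m and 0.20 m depths [mol m-3]
    d_eff = millington_quirk(d0=1.47e-5, porosity=0.55, theta_w=0.25)
    print(gradient_flux(c_shallow=0.8, c_deep=2.4,
                        z_shallow=0.05, z_deep=0.20, d_eff=d_eff))
    ```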

  1. CVB: the Constrained Vapor Bubble Capillary Experiment on the International Space Station MARANGONI FLOW REGION

    NASA Technical Reports Server (NTRS)

    Wayner, Peter C., Jr.; Kundan, Akshay; Plawsky, Joel

    2014-01-01

    The Constrained Vapor Bubble (CVB) is a wickless, grooved heat pipe, and we report on a full-scale fluids experiment flown on the International Space Station (ISS). The CVB system consists of a relatively simple setup: a quartz cuvette with sharp corners partially filled with either pentane or an ideal mixture of pentane and isohexane as the working fluid. Along with temperature and pressure measurements, the two-dimensional thickness profile of the menisci formed at the corners of the quartz cuvette was determined using the Light Microscopy Module (LMM). Even with the large, millimeter dimensions of the CVB, interfacial forces dominate in these exceedingly small Bond number systems. The experiments were carried out at various power inputs. Although conceptually simple, the transport processes were found to be very complex, with many different regions. At the heated end of the CVB, due to a high temperature gradient, we observed Marangoni flow at some power inputs. The region from the heated end to the central drop region is defined as a Marangoni-dominated region. We present a simple analysis based on interfacial phenomena, using only measurements from the ISS experiments, that leads to a predictive equation for the thickness of the film near the heated end of the CVB. The average pressure gradient for flow in the film is assumed to be due to the measured capillary pressure at the two ends of the liquid film, and the pressure stress gradient due to cohesion is assumed to self-adjust to a constant value over a distance L. The boundary conditions are the no-slip condition at the wall interface and an interfacial shear stress at the liquid-vapor interface due to the Marangoni stress, which arises from the high temperature gradient. Although the heated end is extremely complex, since it includes three-dimensional variations in radiation, conduction, evaporation, condensation, fluid flow and interfacial forces, we find that, using the above simplifying assumptions, a simple, successful model can be developed.

  2. The Weighted Burgers Vector: a new quantity for constraining dislocation densities and types using electron backscatter diffraction on 2D sections through crystalline materials.

    PubMed

    Wheeler, J; Mariani, E; Piazolo, S; Prior, D J; Trimby, P; Drury, M R

    2009-03-01

    The Weighted Burgers Vector (WBV) is defined here as the sum, over all types of dislocations, of [(density of intersections of dislocation lines with a map) x (Burgers vector)]. Here we show that it can be calculated, for any crystal system, solely from orientation gradients in a map view, unlike the full dislocation density tensor, which requires gradients in the third dimension. No assumption is made about gradients in the third dimension and they may be non-zero. The only assumption involved is that elastic strains are small so the lattice distortion is entirely due to dislocations. Orientation gradients can be estimated from gridded orientation measurements obtained by EBSD mapping, so the WBV can be calculated as a vector field on an EBSD map. The magnitude of the WBV gives a lower bound on the magnitude of the dislocation density tensor when that magnitude is defined in a coordinate invariant way. The direction of the WBV can constrain the types of Burgers vectors of geometrically necessary dislocations present in the microstructure, most clearly when it is broken down in terms of lattice vectors. The WBV has three advantages over other measures of local lattice distortion: it is a vector and hence carries more information than a scalar quantity, it has an explicit mathematical link to the individual Burgers vectors of dislocations and, since it is derived via tensor calculus, it is not dependent on the map coordinate system. If a sub-grain wall is included in the WBV calculation, the magnitude of the WBV becomes dependent on the step size but its direction still carries information on the Burgers vectors in the wall. The net Burgers vector content of dislocations intersecting an area of a map can be simply calculated by an integration round the edge of that area, a method which is fast and complements point-by-point WBV calculations.

  3. Human chorionic gonadotropin but not the calcitonin gene-related peptide induces postnatal testicular descent in mice.

    PubMed

    Houle, A M; Gagné, D

    1995-01-01

    The androgen-regulated paracrine factor, calcitonin gene-related peptide (CGRP), has been proposed as a possible mediator of testicular descent. This peptide has been found to increase rhythmic contractions of gubernaculae and is known to be released by the genitofemoral nerve. We have investigated the ability of CGRP to induce premature testicular descent. CGRP was administered alone, or in combination with human chorionic gonadotropin (hCG), to C57BL/6 male mice postnatally. The extent of testicular descent at 18 days postpartum was then ascertained. The potential relationship between testicular weight and descent was also examined. Our results show that testes of mice treated with either hCG alone, or in combination with 500 ng CGRP, were at a significantly lower position than those of controls by 16% and 17%, respectively. In contrast, mice treated with 500 ng of CGRP alone had testes at a higher position when compared to those of controls, by 19%. In mice treated with 50 ng of CGRP alone or in combination with hCG, testes were at a position similar to those in controls. Furthermore, testicular descent was analyzed in relation to testicular weight: testes that were significantly smaller per gram of body weight than those of controls were nevertheless at a significantly lower position than those of controls. Our data demonstrate that CGRP had no effect on postnatal testicular descent and that there is no relationship between postnatal descent and testicular weight.

  4. Transformable descent vehicles

    NASA Astrophysics Data System (ADS)

    Pichkhadze, K. M.; Finchenko, V. S.; Aleksashkin, S. N.; Ostreshko, B. A.

    2016-12-01

    This article presents some types of planetary descent vehicles, the shape of which varies in different flight phases. The advantages of such vehicles over those with unchangeable form (from launch to landing) are discussed. It is shown that the use of transformable descent vehicles widens the range of tasks that can be accomplished.

  5. 43 CFR 10.14 - Lineal descent and cultural affiliation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... evidence sufficient to: (i) Establish the identity and cultural characteristics of the earlier group, (ii... 43 Public Lands: Interior 1 2011-10-01 2011-10-01 false Lineal descent and cultural affiliation... GRAVES PROTECTION AND REPATRIATION REGULATIONS General § 10.14 Lineal descent and cultural affiliation...

  6. 43 CFR 10.14 - Lineal descent and cultural affiliation.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... evidence sufficient to: (i) Establish the identity and cultural characteristics of the earlier group, (ii... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Lineal descent and cultural affiliation... GRAVES PROTECTION AND REPATRIATION REGULATIONS General § 10.14 Lineal descent and cultural affiliation...

  7. Constrained optimization for position calibration of an NMR field camera.

    PubMed

    Chang, Paul; Nassirpour, Sahar; Eschelbach, Martin; Scheffler, Klaus; Henning, Anke

    2018-07-01

    Knowledge of the positions of field probes in an NMR field camera is necessary for monitoring the B0 field. The typical method of estimating these positions is by switching the gradients with known strengths and calculating the positions using the phases of the FIDs. We investigated improving the accuracy of estimating the probe positions and analyzed the effect of inaccurate estimations on field monitoring. The field probe positions were estimated by 1) assuming ideal gradient fields, 2) using measured gradient fields (including nonlinearities), and 3) using measured gradient fields with relative position constraints. The fields measured with the NMR field camera were compared to fields acquired using a dual-echo gradient recalled echo B0 mapping sequence. Comparisons were done for shim fields from second- to fourth-order shim terms. The position estimation was the most accurate when relative position constraints were used in conjunction with measured (nonlinear) gradient fields. The effect of more accurate position estimates was seen when compared to fields measured using a B0 mapping sequence (up to 10%-15% more accurate for some shim fields). The models acquired from the field camera are sensitive to noise due to the low number of spatial sample points. Position estimation of field probes in an NMR camera can be improved using relative position constraints and nonlinear gradient fields. Magn Reson Med 80:380-390, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  8. Rocket measurements of electron density irregularities during MAC/SINE

    NASA Technical Reports Server (NTRS)

    Ulwick, J. C.

    1989-01-01

    Four Super Arcas rockets were launched at the Andoya Rocket Range, Norway, as part of the MAC/SINE campaign to measure electron density irregularities with high spatial resolution in the cold summer polar mesosphere. They were launched as part of two salvos: the turbulent/gravity wave salvo (3 rockets) and the EISCAT/SOUSY radar salvo (one rocket). In both salvos, meteorological rockets, measuring temperature and winds, were also launched and the SOUSY radar, located near the launch site, measured mesospheric turbulence. Electron density irregularities and strong gradients were measured by the rocket probes in the region of most intense backscatter observed by the radar. The electron density profiles (8 in total: 4 on ascent and 4 on descent) show very different characteristics in the peak scattering region and show marked spatial and temporal variability. These data are intercompared and discussed.

  9. Ellipsoidal fuzzy learning for smart car platoons

    NASA Astrophysics Data System (ADS)

    Dickerson, Julie A.; Kosko, Bart

    1993-12-01

    A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids or more certain rules. Sparse data gave larger ellipsoids or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches. It locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart car controller.

  10. Adaptive filter design using recurrent cerebellar model articulation controller.

    PubMed

    Lin, Chih-Min; Chen, Li-Yang; Yeung, Daniel S

    2010-07-01

    A novel adaptive filter is proposed using a recurrent cerebellar-model-articulation-controller (CMAC). The proposed locally recurrent globally feedforward recurrent CMAC (RCMAC) has favorable properties of small size, good generalization, rapid learning, and dynamic response; it is thus more suitable for high-speed signal processing. To provide fast training, an efficient parameter learning algorithm based on the normalized gradient descent method is presented, in which the learning rates are adapted on-line. A Lyapunov function is then utilized to derive the conditions on the adaptive learning rates, so that the stability of the filtering error can be guaranteed. To demonstrate the performance of the proposed adaptive RCMAC filter, it is applied to a nonlinear channel equalization system and an adaptive noise cancelation system. The advantages of the proposed filter over other adaptive filters are verified through simulations.
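
    The normalized gradient descent update at the core of the training scheme can be illustrated, in a much simpler setting, by an NLMS-style linear adaptive filter applied to a toy noise-cancelation problem; the filter order, step size, and signals below are assumptions, and the sketch is not the RCMAC filter itself.

```python
import numpy as np

# Minimal normalized-gradient-descent (NLMS-style) adaptive filter sketch.
# The filter learns to cancel a noise signal that reaches the primary sensor
# through an unknown FIR path -- a toy stand-in for the adaptive noise
# cancelation application mentioned in the abstract (not the RCMAC itself).

rng = np.random.default_rng(0)
n_taps, n_samples, mu, eps = 8, 5000, 0.5, 1e-6

true_path = rng.normal(size=n_taps)                  # unknown path to identify
noise = rng.normal(size=n_samples)                   # reference noise input
primary = np.convolve(noise, true_path)[:n_samples]  # noise at primary sensor

w = np.zeros(n_taps)          # adaptive filter weights
x_buf = np.zeros(n_taps)      # most recent reference samples (newest first)
for n in range(n_samples):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = noise[n]
    y = w @ x_buf                              # filter output
    e = primary[n] - y                         # cancelation error
    # normalized gradient descent: step size scaled by instantaneous input power
    w += (mu / (eps + x_buf @ x_buf)) * e * x_buf

print("weight error norm:", np.linalg.norm(w - true_path))
```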

  11. Speckle-metric-optimization-based adaptive optics for laser beam projection and coherent beam combining.

    PubMed

    Vorontsov, Mikhail; Weyrauch, Thomas; Lachinova, Svetlana; Gatz, Micah; Carhart, Gary

    2012-07-15

    Maximization of a projected laser beam's power density at a remotely located extended object (speckle target) can be achieved by using an adaptive optics (AO) technique based on sensing and optimization of the target-return speckle field's statistical characteristics, referred to here as speckle metrics (SM). SM AO was demonstrated in a target-in-the-loop coherent beam combining experiment using a bistatic laser beam projection system composed of a coherent fiber-array transmitter and a power-in-the-bucket receiver. SM sensing utilized a 50 MHz rate dithering of the projected beam that provided a stair-mode approximation of the outgoing combined beam's wavefront tip and tilt with subaperture piston phases. Fiber-integrated phase shifters were used for both the dithering and SM optimization with stochastic parallel gradient descent control.
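
    Stochastic parallel gradient descent of the kind used for the SM optimization can be sketched in a few lines. The toy model below (Python/NumPy) maximizes a coherent-combining metric over a handful of subaperture piston phases by two-sided random dithering; the metric, dither amplitude, and gain are hypothetical stand-ins for the fiber-array hardware loop.

```python
import numpy as np

# Minimal SPGD sketch: maximize a power-in-the-bucket-like metric J(phi)
# over N subaperture piston phases by applying parallel +/- dithers and
# stepping along the measured metric difference.
rng = np.random.default_rng(1)
N, gain, dither, n_iter = 7, 0.8, 0.1, 2000

target = rng.uniform(-np.pi, np.pi, N)   # unknown optimal piston offsets

def metric(phi):
    # coherent combining efficiency of N unit-amplitude beams (0..1)
    field = np.exp(1j * (phi - target)).sum()
    return np.abs(field)**2 / N**2

phi = np.zeros(N)
for _ in range(n_iter):
    delta = dither * rng.choice([-1.0, 1.0], N)   # parallel random perturbations
    dJ = metric(phi + delta) - metric(phi - delta)
    phi += gain * dJ * delta                      # SPGD ascent step

print("combining efficiency:", round(metric(phi), 4))
```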

  12. Algorithm based on the Thomson problem for determination of equilibrium structures of metal nanoclusters

    NASA Astrophysics Data System (ADS)

    Arias, E.; Florez, E.; Pérez-Torres, J. F.

    2017-06-01

    A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
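
    A bare-bones version of the first stage of such an algorithm, a Thomson-like relaxation of mutually repelling points confined to a sphere by projected gradient descent, is sketched below; the cluster size, step size, and iteration count are illustrative, and the subsequent total-energy optimization of the guessed configurations is not included.

```python
import numpy as np

# Thomson-problem-style stage: place N point charges on the unit sphere and
# relax them by projected gradient descent on the Coulomb-like repulsion
# energy E = sum_{i<j} 1/r_ij. The relaxed configuration would then seed a
# full total-energy optimization (not shown). Parameters are illustrative.

def thomson_relax(n_points, steps=2000, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_points, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)          # start on the sphere
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]               # pairwise vectors
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)
        grad = -(diff / dist[..., None]**3).sum(axis=1)    # dE/dx_i
        grad -= (grad * x).sum(axis=1, keepdims=True) * x  # tangential component
        x -= lr * grad                                     # gradient descent step
        x /= np.linalg.norm(x, axis=1, keepdims=True)      # re-project to sphere
    dist = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    return x, (1.0 / dist).sum() / 2.0                     # positions, energy

positions, energy = thomson_relax(9)   # e.g. a 9-atom cluster guess
print("Thomson energy for N = 9:", round(energy, 6))
```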

  13. Hybrid preconditioning for iterative diagonalization of ill-conditioned generalized eigenvalue problems in electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Yunfeng, E-mail: yfcai@math.pku.edu.cn; Department of Computer Science, University of California, Davis 95616; Bai, Zhaojun, E-mail: bai@cs.ucdavis.edu

    2013-12-15

    The iterative diagonalization of a sequence of large ill-conditioned generalized eigenvalue problems is a computational bottleneck in quantum mechanical methods employing a nonorthogonal basis for ab initio electronic structure calculations. We propose a hybrid preconditioning scheme to effectively combine global and locally accelerated preconditioners for rapid iterative diagonalization of such eigenvalue problems. In partition-of-unity finite-element (PUFE) pseudopotential density-functional calculations, employing a nonorthogonal basis, we show that the hybrid preconditioned block steepest descent method is a cost-effective eigensolver, outperforming current state-of-the-art global preconditioning schemes, and is as efficient for the ill-conditioned generalized eigenvalue problems produced by PUFE as the locally optimal block preconditioned conjugate-gradient method is for the well-conditioned standard eigenvalue problems produced by planewave methods.
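
    For orientation, the single-vector sketch below shows the basic preconditioned steepest descent iteration for the lowest eigenpair of a generalized symmetric problem Ax = λBx, using a simple Jacobi (diagonal) preconditioner and a 2x2 Rayleigh-Ritz step; the paper's hybrid global/locally accelerated preconditioner and block formulation are not reproduced, and the test matrices are synthetic.

```python
import numpy as np
from scipy.linalg import eigh

# Minimal preconditioned steepest descent for the lowest eigenpair of the
# generalized symmetric problem A x = lambda B x. A Jacobi (diagonal)
# preconditioner stands in for the paper's hybrid scheme, and the step length
# is chosen by a 2x2 Rayleigh-Ritz in span{x, preconditioned residual}.

rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=(n, n)); A = A + A.T + n * np.eye(n)   # symmetric test matrix
B = np.diag(rng.uniform(1.0, 2.0, n))                      # SPD "overlap" matrix
P = 1.0 / np.diag(A)                                       # Jacobi preconditioner

x = rng.normal(size=n)
for it in range(200):
    x /= np.sqrt(x @ B @ x)                    # B-normalize the iterate
    rho = x @ A @ x                            # Rayleigh quotient
    r = A @ x - rho * (B @ x)                  # eigen-residual
    if np.linalg.norm(r) < 1e-8 * abs(rho):
        break
    S = np.column_stack([x, P * r])            # search subspace {x, P r}
    vals, C = eigh(S.T @ A @ S, S.T @ B @ S)   # small Rayleigh-Ritz step
    x = S @ C[:, 0]

rho = (x @ A @ x) / (x @ B @ x)
print("iterations:", it, " lambda_min ~", rho)
print("dense reference:", eigh(A, B, eigvals_only=True)[0])
```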

  14. A dual estimate method for aeromagnetic compensation

    NASA Astrophysics Data System (ADS)

    Ma, Ming; Zhou, Zhijian; Cheng, Defu

    2017-11-01

    Scalar aeromagnetic surveys have played a vital role in prospecting. However, before analysis of the surveys’ aeromagnetic data is possible, the aircraft’s magnetic interference should be removed. The extensively adopted linear model for aeromagnetic compensation is computationally efficient but faces an underfitting problem. On the other hand, the neural model proposed by Williams is more powerful at fitting but always suffers from an overfitting problem. This paper starts with an analysis of these two models and then proposes a dual estimate method that combines them to improve accuracy. This method is based on an unscented Kalman filter, but a gradient descent method is implemented within each iteration so that the parameters of the linear model remain adjustable during flight. The noise caused by the neural model’s overfitting is suppressed by introducing observation noise.

  15. Algorithm based on the Thomson problem for determination of equilibrium structures of metal nanoclusters.

    PubMed

    Arias, E; Florez, E; Pérez-Torres, J F

    2017-06-28

    A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.

  16. Radial basis function network learns ceramic processing and predicts related strength and density

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.

    1993-01-01

    Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
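
    A stripped-down version of this training scheme, with Gaussian RBF centers placed at the training points ("nodes at data points") and the output weights fit by batch gradient descent on the mean-squared error, is sketched below on synthetic data; the processing-parameter dataset, basis width, and learning rate are assumptions.

```python
import numpy as np

# Minimal RBF-network sketch: Gaussian hidden units centered at the training
# points, linear output layer trained by batch gradient descent on the MSE.
# Data, width, and learning rate are synthetic stand-ins for the
# processing-parameter/strength data used in the paper.

rng = np.random.default_rng(0)
n, d = 120, 3                           # e.g. milling time, sinter time, pressure
X = rng.uniform(0, 1, size=(n, d))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=n)

width = 0.3
def hidden(A):
    """Gaussian activations of every input in A against all centers (the data)."""
    d2 = ((A[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

H = hidden(X)                           # n x n design matrix
w, b, lr = np.zeros(n), 0.0, 0.1
for _ in range(5000):                   # batch gradient descent on the MSE
    err = H @ w + b - y
    w -= lr * (H.T @ err) / n
    b -= lr * err.mean()

pred = H @ w + b
print("training RMS error:", np.sqrt(np.mean((pred - y) ** 2)))
```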

  17. Intra-coil interactions in split gradient coils in a hybrid MRI-LINAC system.

    PubMed

    Tang, Fangfang; Freschi, Fabio; Sanchez Lopez, Hector; Repetto, Maurizio; Liu, Feng; Crozier, Stuart

    2016-04-01

    An MRI-LINAC system combines a magnetic resonance imaging (MRI) system with a medical linear accelerator (LINAC) to provide image-guided radiotherapy for targeting tumors in real-time. In an MRI-LINAC system, a set of split gradient coils is employed to produce orthogonal gradient fields for spatial signal encoding. Owing to this unconventional gradient configuration, eddy currents induced by switching gradient coils on and off may be of particular concern. It is expected that strong intra-coil interactions in the set will be present due to the constrained return paths, leading to potential degradation of the gradient field linearity and image distortion. In this study, a series of gradient coils with different track widths have been designed and analyzed to investigate the electromagnetic interactions between coils in a split gradient set. A driving current, with frequencies from 100 Hz to 10 kHz, was applied to study the inductive coupling effects with respect to conductor geometry and operating frequency. It was found that the eddy currents induced in the un-energized coils (hereafter referred to as passive coils) positively correlated with track width and frequency. The magnetic field induced by the eddy currents in the passive coils with wide tracks was several times larger than that induced by eddy currents in the cold shield of the cryostat. The power loss in the passive coils increased with the track width. Therefore, intra-coil interactions should be included in the coil design and analysis process. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. SDSS-IV MaNGA: modelling the metallicity gradients of gas and stars - radially dependent metal outflow versus IMF

    NASA Astrophysics Data System (ADS)

    Lian, Jianhui; Thomas, Daniel; Maraston, Claudia; Goddard, Daniel; Parikh, Taniya; Fernández-Trincado, J. G.; Roman-Lopes, Alexandre; Rong, Yu; Tang, Baitian; Yan, Renbin

    2018-05-01

    In our previous work, we found that only two scenarios are capable of reproducing the observed integrated mass-metallicity relations for the gas and stellar components of local star-forming galaxies simultaneously. One scenario invokes a time-dependent metal outflow loading factor with stronger outflows at early times. The other scenario uses a time-dependent initial mass function (IMF) slope with a steeper IMF at early times. In this work, we extend our study to investigate the radial profile of gas and stellar metallicity in local star-forming galaxies using spatially resolved spectroscopic data from the SDSS-IV MaNGA survey. We find that most galaxies show negative gradients in both gas and stellar metallicity with steeper gradients in stellar metallicity. The stellar metallicity gradients tend to be mass dependent with steeper gradients in more massive galaxies while no clear mass dependence is found for the gas metallicity gradient. Then we compare the observations with the predictions from a chemical evolution model of the radial profiles of gas and stellar metallicities. We confirm that the two scenarios proposed in our previous work are also required to explain the metallicity gradients. Based on these two scenarios, we successfully reproduce the radial profiles of gas metallicity, stellar metallicity, stellar mass surface density, and star formation rate surface density simultaneously. The origin of the negative gradient in stellar metallicity turns out to be driven by either radially dependent metal outflow or IMF slope. In contrast, the radial dependence of the gas metallicity is less constrained because of the degeneracy in model parameters.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birge, J. R.; Qi, L.; Wei, Z.

    In this paper we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC), and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that, for any FJ point y ∈ N(z) \ {z}, f_0(y) ≠ f_0(z), where f_0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The result (i) implies that the entire iteration point sequence generated by the method converges to an FJ point. We also show that if the parameters are chosen large enough, a unit step length can be accepted by the proposed algorithm.
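
    A direction-finding subproblem of the kind described, a small linearly constrained convex quadratic program whose solution is a feasible descent direction, can be sketched as follows; the test problem is made up and SLSQP is used only as a convenient numerical QP solver, so this is not the Topkis-Veinott variant analyzed in the report.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of a feasible-descent-direction subproblem: at a feasible point x,
# solve the linearly constrained convex QP
#     minimize   grad_f0(x) . d + 0.5 * ||d||^2
#     subject to f_i(x) + grad_f_i(x) . d <= 0   for every inequality constraint,
# whose solution d is a feasible descent direction (or 0 at a stationary point).
# Test problem and solver choice are illustrative only.

def f0(x):  return (x[0] - 2.0)**2 + (x[1] - 1.0)**2       # objective
def g0(x):  return np.array([2*(x[0]-2.0), 2*(x[1]-1.0)])  # its gradient

cons = [                                                    # constraints f_i(x) <= 0
    (lambda x: x[0]**2 + x[1]**2 - 2.0, lambda x: np.array([2*x[0], 2*x[1]])),
    (lambda x: -x[0],                   lambda x: np.array([-1.0, 0.0])),
]

def direction(x):
    qp_obj = lambda d: g0(x) @ d + 0.5 * d @ d
    qp_jac = lambda d: g0(x) + d
    qp_cons = [{"type": "ineq",                            # SciPy wants fun >= 0
                "fun": (lambda d, f=f, g=g: -(f(x) + g(x) @ d)),
                "jac": (lambda d, g=g: -g(x))}
               for f, g in cons]
    res = minimize(qp_obj, np.zeros(2), jac=qp_jac,
                   constraints=qp_cons, method="SLSQP")
    return res.x

x = np.array([0.5, 0.5])          # a strictly feasible starting point
d = direction(x)
print("descent direction:", d, " directional derivative:", g0(x) @ d)
```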

  20. Overview of the Phoenix Entry, Descent and Landing System

    NASA Technical Reports Server (NTRS)

    Grover, Rob

    2005-01-01

    A viewgraph presentation on the entry, descent and landing system of Phoenix is shown. The topics include: 1) Phoenix Mission Goals; 2) Payload; 3) Aeroshell/Entry Comparison; 4) Entry Trajectory Comparison; 5) Phoenix EDL Timeline; 6) Hypersonic Phase; 7) Parachute Phase; 8) Terminal Descent Phase; and 9) EDL Communications.

  1. Ascent/Descent Software

    NASA Technical Reports Server (NTRS)

    Brown, Charles; Andrew, Robert; Roe, Scott; Frye, Ronald; Harvey, Michael; Vu, Tuan; Balachandran, Krishnaiyer; Bly, Ben

    2012-01-01

    The Ascent/Descent Software Suite has been used to support a variety of NASA Shuttle Program mission planning and analysis activities, such as range safety, on the Integrated Planning System (IPS) platform. The Ascent/Descent Software Suite, containing Ascent Flight Design (ASC)/Descent Flight Design (DESC) Configuration Items (CIs), lifecycle documents, and data files used for shuttle ascent and entry modeling analysis and mission design, resides on IPS/Linux workstations. A list of tools in the Navigation (NAV)/Prop Software Suite represents tool versions established during or after the IPS Equipment Rehost-3 project.

  2. Descent Stage of Mars Science Laboratory During Assembly

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image from early October 2008 shows personnel working on the descent stage of NASA's Mars Science Laboratory inside the Spacecraft Assembly Facility at NASA's Jet Propulsion Laboratory, Pasadena, Calif.

    The descent stage will provide rocket-powered deceleration for a phase of the arrival at Mars after the phases using the heat shield and parachute. When it nears the surface, the descent stage will lower the rover on a bridle the rest of the way to the ground. The larger three of the orange spheres in the descent stage are fuel tanks. The smaller two are tanks for pressurant gas used for pushing the fuel to the rocket engines.

    JPL, a division of the California Institute of Technology, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington.

  3. Dynamics of the Venera 13 and 14 descent modules in the parachute segment of descent

    NASA Astrophysics Data System (ADS)

    Vishniak, A. A.; Kariagin, V. P.; Kovtunenko, V. M.; Kotov, B. B.; Kuznetsov, V. V.; Lopatkin, A. I.; Perov, O. V.; Pichkhadze, K. M.; Rysev, O. V.

    1983-05-01

    The parachute system for the Venera 13 and 14 descent modules was designed to assure the prescribed duration of descent in the Venus cloud layer as well as the separation of heat-shield elements from the module. A mathematical model is developed which makes possible a numerical analysis of the dynamics of the module-parachute system with allowance for parachute inertia, atmospheric turbulence, the means by which the parachute is attached to the module, and the elasticity and damping of the suspended system. A formula is derived for determining the period of oscillations of the module in the parachute segment of descent. A comparison of theoretical and experimental results shows that this formula can be used in the design calculations, especially at the early stage of module development.

  4. User's manual for a fuel-conservative descent planning algorithm implemented on a small programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vicroy, D.D.

    A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. An explanation and examples of how the algorithm is used, as well as a detailed flow chart and listing of the algorithm, are contained.

  5. Biodiversity patterns along ecological gradients: unifying β-diversity indices.

    PubMed

    Szava-Kovats, Robert C; Pärtel, Meelis

    2014-01-01

    Ecologists have developed an abundance of conceptions and mathematical expressions to define β-diversity, the link between local (α) and regional-scale (γ) richness, in order to characterize patterns of biodiversity along ecological (i.e., spatial and environmental) gradients. These patterns are often realized by regression of β-diversity indices against one or more ecological gradients. This practice, however, is subject to two shortcomings that can undermine the validity of the biodiversity patterns. First, many β-diversity indices are constrained to range between fixed lower and upper limits. As such, regression analysis of β-diversity indices against ecological gradients can result in regression curves that extend beyond these mathematical constraints, thus creating an interpretational dilemma. Second, despite being a function of the same measured α- and γ-diversity, the resultant biodiversity pattern depends on the choice of β-diversity index. We propose a simple logistic transformation that rids beta-diversity indices of their mathematical constraints, thus eliminating the possibility of an uninterpretable regression curve. Moreover, this transformation results in identical biodiversity patterns for three commonly used classical beta-diversity indices. As a result, this transformation eliminates the difficulties of both shortcomings, while allowing the researcher to use whichever beta-diversity index deemed most appropriate. We believe this method can help unify the study of biodiversity patterns along ecological gradients.
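
    The proposed transformation amounts to rescaling a bounded index to the open unit interval and applying the log-odds (logistic) transform before regression against a gradient; the snippet below is a generic illustration with made-up index values, not the authors' code.

```python
import numpy as np

# Generic illustration of logit-transforming a bounded beta-diversity index
# before regression against an ecological gradient. Index values and the
# gradient are made up; the specific classical indices treated in the paper
# are not reproduced here.

def logit_unbound(index, lower=0.0, upper=1.0, eps=1e-9):
    """Rescale a [lower, upper]-bounded index to (0, 1) and apply log-odds."""
    p = (np.asarray(index, dtype=float) - lower) / (upper - lower)
    p = np.clip(p, eps, 1.0 - eps)        # guard the fixed endpoints
    return np.log(p / (1.0 - p))

gradient = np.linspace(0.0, 10.0, 25)                 # e.g. an elevation gradient
beta = 1.0 / (1.0 + np.exp(-(gradient - 5.0)))        # fake index bounded in (0, 1)
beta_star = logit_unbound(beta)                       # unconstrained response

slope, intercept = np.polyfit(gradient, beta_star, 1) # ordinary linear regression
print(f"regression on transformed index: slope={slope:.3f}, intercept={intercept:.3f}")
```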

  6. Biodiversity Patterns along Ecological Gradients: Unifying β-Diversity Indices

    PubMed Central

    Szava-Kovats, Robert C.; Pärtel, Meelis

    2014-01-01

    Ecologists have developed an abundance of conceptions and mathematical expressions to define β-diversity, the link between local (α) and regional-scale (γ) richness, in order to characterize patterns of biodiversity along ecological (i.e., spatial and environmental) gradients. These patterns are often realized by regression of β-diversity indices against one or more ecological gradients. This practice, however, is subject to two shortcomings that can undermine the validity of the biodiversity patterns. First, many β-diversity indices are constrained to range between fixed lower and upper limits. As such, regression analysis of β-diversity indices against ecological gradients can result in regression curves that extend beyond these mathematical constraints, thus creating an interpretational dilemma. Second, despite being a function of the same measured α- and γ-diversity, the resultant biodiversity pattern depends on the choice of β-diversity index. We propose a simple logistic transformation that rids beta-diversity indices of their mathematical constraints, thus eliminating the possibility of an uninterpretable regression curve. Moreover, this transformation results in identical biodiversity patterns for three commonly used classical beta-diversity indices. As a result, this transformation eliminates the difficulties of both shortcomings, while allowing the researcher to use whichever beta-diversity index deemed most appropriate. We believe this method can help unify the study of biodiversity patterns along ecological gradients. PMID:25330181

  7. Development and validation of a critical gradient energetic particle driven Alfven eigenmode transport model for DIII-D tilted neutral beam experiments

    DOE PAGES

    Waltz, Ronald E.; Bass, Eric M.; Heidbrink, William W.; ...

    2015-10-30

    Recent experiments with the DIII-D tilted neutral beam injection (NBI) varying the beam energetic particle (EP) source profiles have provided strong evidence that unstable Alfven eigenmodes (AE) drive stiff EP transport at a critical EP density gradient. Here the critical gradient is identified by the local AE growth rate being equal to the local ITG/TEM growth rate at the same low toroidal mode number. The growth rates are taken from the gyrokinetic code GYRO. Simulations show that the slowing down beam-like EP distribution has a slightly lower critical gradient than the Maxwellian. The ALPHA EP density transport code, used to validate the model, combines the low-n stiff EP critical density gradient AE mid-core transport with the energy independent high-n ITG/TEM density transport model controlling the central core EP density profile. For the on-axis NBI heated DIII-D shot 146102, while the net loss to the edge is small, about half the birth fast ions are transported from the central core r/a < 0.5 and the central density is about half the slowing down density. Lastly, these results are in good agreement with experimental fast ion pressure profiles inferred from MSE constrained EFIT equilibria.

  8. An efficient sequential strategy for realizing cross-gradient joint inversion: method and its application to 2-D cross borehole seismic traveltime and DC resistivity tomography

    NASA Astrophysics Data System (ADS)

    Gao, Ji; Zhang, Haijiang

    2018-05-01

    Cross-gradient joint inversion that enforces structural similarity between different models has been widely utilized in jointly inverting different geophysical data types. However, it is a challenge to combine different geophysical inversion systems with the cross-gradient structural constraint into one joint inversion system because they may differ greatly in the model representation, forward modelling and inversion algorithm. Here we propose a new joint inversion strategy that can avoid this issue. Different models are separately inverted using the existing inversion packages and model structure similarity is only enforced through cross-gradient minimization between two models after each iteration. Although the data fitting and structural similarity enforcing processes are decoupled, our proposed strategy is still able to choose appropriate models to balance the trade-off between geophysical data fitting and structural similarity. This is realized by using model perturbations from separate data inversions to constrain the cross-gradient minimization process. We have tested this new strategy on 2-D cross borehole synthetic seismic traveltime and DC resistivity data sets. Compared to separate geophysical inversions, our proposed joint inversion strategy fits the separate data sets at comparable levels while at the same time resulting in a higher structural similarity between the velocity and resistivity models.
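
    The structural-similarity measure at the heart of this strategy, the cross-gradient function between two co-located 2-D models, is simple to evaluate; the sketch below computes it on synthetic velocity and resistivity grids (the separate data inversions and the constrained minimization loop are not shown).

```python
import numpy as np

# Cross-gradient function between two 2-D models m1 (e.g. velocity) and m2
# (e.g. resistivity) defined on the same grid:
#     t = dm1/dx * dm2/dz - dm1/dz * dm2/dx
# t = 0 wherever the two models' gradients are parallel (structurally similar).
# The grids below are synthetic placeholders.

nz, nx, h = 60, 80, 10.0                      # grid size and spacing (m)
z = np.arange(nz)[:, None] * h
x = np.arange(nx)[None, :] * h

m1 = 1500.0 + 0.8 * z + 50.0 * np.exp(-((x - 400)**2 + (z - 300)**2) / 2e4)
m2 = 100.0 - 0.05 * z + 20.0 * np.exp(-((x - 450)**2 + (z - 320)**2) / 2e4)

dm1_dz, dm1_dx = np.gradient(m1, h, h)        # rows are z, columns are x
dm2_dz, dm2_dx = np.gradient(m2, h, h)
t = dm1_dx * dm2_dz - dm1_dz * dm2_dx         # out-of-plane cross-gradient

print("mean |t|:", np.abs(t).mean())
print("fraction of cells with near-parallel gradients:",
      np.mean(np.abs(t) < 1e-3 * np.abs(t).max()))
```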

  9. Tracer-Based Determination of Vortex Descent in the 1999-2000 Arctic Winter

    NASA Technical Reports Server (NTRS)

    Greenblatt, Jeffery B.; Jost, Hans-Juerg; Loewenstein, Max; Podolske, James R.; Hurst, Dale F.; Elkins, James W.; Schauffler, Sue M.; Atlas, Elliot L.; Herman, Robert L.; Webster, Christopher R.

    2001-01-01

    A detailed analysis of available in situ and remotely sensed N2O and CH4 data measured in the 1999-2000 winter Arctic vortex has been performed in order to quantify the temporal evolution of vortex descent. Differences in potential temperature (theta) among balloon and aircraft vertical profiles (an average of 19-23 K on a given N2O or CH4 isopleth) indicated significant vortex inhomogeneity in late fall as compared with late winter profiles. A composite fall vortex profile was constructed for November 26, 1999, whose error bars encompassed the observed variability. High-latitude, extravortex profiles measured in different years and seasons revealed substantial variability in N2O and CH4 on theta surfaces, but all were clearly distinguishable from the first vortex profiles measured in late fall 1999. From these extravortex-vortex differences, we inferred descent prior to November 26: 397+/-15 K (1sigma) at 30 ppbv N2O and 640 ppbv CH4, and 28+/-13 K above 200 ppbv N2O and 1280 ppbv CH4. Changes in theta were determined on five N2O and CH4 isopleths from November 26 through March 12, and descent rates were calculated on each N2O isopleth for several time intervals. The maximum descent rates were seen between November 26 and January 27: 0.82+/-0.20 K/day averaged over 50-250 ppbv N2O. By late winter (February 26-March 12), the average rate had decreased to 0.10+/-0.25 K/day. Descent rates also decreased with increasing N2O; the winter average (November 26-March 5) descent rate varied from 0.75+/-0.10 K/day at 50 ppbv to 0.40+/-0.11 K/day at 250 ppbv. Comparison of these results with observations and models of descent in prior years showed very good overall agreement. Two models of the 1999-2000 vortex descent, SLIMCAT and REPROBUS, despite theta offsets with respect to observed profiles of up to 20 K on most tracer isopleths, produced descent rates that agreed very favorably with the inferred rates from observation.
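
    The descent-rate bookkeeping used in such analyses reduces to interpolating potential temperature onto a fixed tracer isopleth for profiles measured on two dates and differencing; the sketch below uses made-up N2O profiles and is only meant to show the arithmetic, not to reproduce the reported values.

```python
import numpy as np

# Generic sketch of the tracer-descent calculation: interpolate potential
# temperature (theta) onto a fixed N2O isopleth for profiles measured on two
# dates, then difference to get a descent rate in K/day. Profiles are made up.

def theta_on_isopleth(n2o_profile, theta_profile, n2o_level):
    # np.interp needs the abscissa (N2O) in increasing order
    order = np.argsort(n2o_profile)
    return np.interp(n2o_level, n2o_profile[order], theta_profile[order])

theta_grid = np.linspace(350.0, 600.0, 26)                 # K
n2o_nov = 300.0 * np.exp(-(theta_grid - 350.0) / 120.0)    # ppbv, late-fall profile
n2o_mar = 300.0 * np.exp(-(theta_grid - 350.0) / 95.0)     # ppbv, late-winter profile

level = 100.0                                              # ppbv isopleth
dtheta = theta_on_isopleth(n2o_nov, theta_grid, level) - \
         theta_on_isopleth(n2o_mar, theta_grid, level)
days = 107                                                 # November 26 to March 12
print(f"descent on the {level:.0f} ppbv isopleth: {dtheta:.1f} K "
      f"({dtheta / days:.2f} K/day)")
```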

  10. Immediate effects of a distal gait modification during stair descent in individuals with patellofemoral pain.

    PubMed

    Aliberti, Sandra; Mezêncio, Bruno; Amadio, Alberto Carlos; Serrão, Julio Cerca; Mochizuki, Luis

    2018-05-23

    Knee pain during stair use is a common complaint among individuals with patellofemoral pain (PFP) and can negatively affect their activities of daily living. Gait modification programs can be used to decrease patellofemoral pain. Immediate effects of a stair descent distal gait modification session that was intended to emphasize forefoot landing during stair descent are described in this study. To analyze the immediate effects of a distal gait modification session on lower extremity movements and intensity of pain in women with patellofemoral pain during stair descent. Nonrandomized controlled trial. Sixteen women with patellofemoral pain were allocated into two groups: (1) Gait Modification Group (n = 8); and (2) Control Group (n = 8). The intensity of pain (visual analog scale) and kinematics of knee, ankle, and forefoot (multi-segmental foot model) during stair descent were assessed before and after the intervention. After the gait modification session, there was an increase of forefoot eversion and ankle plantarflexion as well as a decrease of knee flexion. An immediate decrease in patellofemoral pain intensity during stair descent was also observed. The distal gait modification session changed the lower extremity kinetic chain strategy of movement, increasing foot and ankle movement contribution and decreasing knee contribution to the task. An immediate decrease in patellofemoral pain intensity during stair descent was also observed. Emphasizing forefoot landing may be a useful intervention to immediately relieve pain in patients with patellofemoral pain during stair descent. Clinical studies are needed to verify the effects of the gait modification session over the medium and long term.

  11. Human Scleral Structural Stiffness Increases More Rapidly With Age in Donors of African Descent Compared to Donors of European Descent

    PubMed Central

    Fazio, Massimo A.; Grytz, Rafael; Morris, Jeffrey S.; Bruno, Luigi; Girkin, Christopher A.; Downs, J. Crawford

    2014-01-01

    Purpose. We tested the hypothesis that the variation of peripapillary scleral structural stiffness with age is different in donors of European (ED) and African (AD) descent. Methods. Posterior scleral shells from normal eyes from donors of European (n = 20 pairs; previously reported) and African (n = 9 pairs) descent aged between 0 and 90 years old were inflation tested within 48 hours post mortem. Scleral shells were pressurized from 5 to 45 mm Hg and the full-field, 3-dimensional (3D) deformation of the outer surface was recorded at submicrometric accuracy using speckle interferometry (ESPI). Mean maximum principal (tensile) strain of the peripapillary and midperipheral regions surrounding the optic nerve head (ONH) were fit using a functional mixed effects model that accounts for intradonor variability, same-race correlation, and spatial autocorrelation to estimate the effect of race on the age-related changes in mechanical scleral strain. Results. Mechanical tensile strain significantly decreased with age in the peripapillary sclera in the African and European descent groups (P < 0.001), but the age-related stiffening was significantly greater in the African descent group (P < 0.05). Maximum principal strain in the peripapillary sclera was significantly higher than in the midperipheral sclera for both ethnic groups. Conclusions. The sclera surrounding the ONH stiffens more rapidly with age in the African descent group compared to the European group. Stiffening of the peripapillary sclera with age may be related to the higher prevalence of glaucoma in the elderly and persons of African descent. PMID:25237162

  12. Human scleral structural stiffness increases more rapidly with age in donors of African descent compared to donors of European descent.

    PubMed

    Fazio, Massimo A; Grytz, Rafael; Morris, Jeffrey S; Bruno, Luigi; Girkin, Christopher A; Downs, J Crawford

    2014-09-18

    We tested the hypothesis that the variation of peripapillary scleral structural stiffness with age is different in donors of European (ED) and African (AD) descent. Posterior scleral shells from normal eyes from donors of European (n = 20 pairs; previously reported) and African (n = 9 pairs) descent aged between 0 and 90 years old were inflation tested within 48 hours post mortem. Scleral shells were pressurized from 5 to 45 mm Hg and the full-field, 3-dimensional (3D) deformation of the outer surface was recorded at submicrometric accuracy using speckle interferometry (ESPI). Mean maximum principal (tensile) strain of the peripapillary and midperipheral regions surrounding the optic nerve head (ONH) were fit using a functional mixed effects model that accounts for intradonor variability, same-race correlation, and spatial autocorrelation to estimate the effect of race on the age-related changes in mechanical scleral strain. Mechanical tensile strain significantly decreased with age in the peripapillary sclera in the African and European descent groups (P < 0.001), but the age-related stiffening was significantly greater in the African descent group (P < 0.05). Maximum principal strain in the peripapillary sclera was significantly higher than in the midperipheral sclera for both ethnic groups. The sclera surrounding the ONH stiffens more rapidly with age in the African descent group compared to the European group. Stiffening of the peripapillary sclera with age may be related to the higher prevalence of glaucoma in the elderly and persons of African descent. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  13. Latin American Immigrant Women and Intergenerational Sex Education

    ERIC Educational Resources Information Center

    Alcalde, Maria Cristina; Quelopana, Ana Maria

    2013-01-01

    People of Latin American descent make up the largest and fastest-growing minority group in the USA. Rates of pregnancy, childbirth, and sexually transmitted infections among people of Latin American descent are higher than among other ethnic groups. This paper builds on research that suggests that among families of Latin American descent, mothers…

  14. Analysis of foot clearance in firefighters during ascent and descent of stairs.

    PubMed

    Kesler, Richard M; Horn, Gavin P; Rosengren, Karl S; Hsiao-Wecksler, Elizabeth T

    2016-01-01

    Slips, trips, and falls are a leading cause of injury to firefighters with many injuries occurring while traversing stairs, possibly exaggerated by acute fatigue from firefighting activities and/or asymmetric load carriage. This study examined the effects that fatigue, induced by simulated firefighting activities, and hose load carriage have on foot clearance while traversing stairs. Landing and passing foot clearances for each stair during ascent and descent of a short staircase were investigated. Clearances decreased significantly (p < 0.05) post-exercise for nine of 12 ascent parameters and increased for two of eight descent parameters. Load carriage resulted in significantly decreased (p < 0.05) clearance over three ascent parameters, and one increase during descent. Decreased clearances during ascent caused by fatigue or load carriage may result in an increased trip risk. Increased clearances during descent may suggest use of a compensation strategy to ensure stair clearance or an increased risk of over-stepping during descent. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Toward a Caribbean psychology: an African-centered approach.

    PubMed

    Sutherland, Marcia Elizabeth

    2011-01-01

    Although the Americas and Caribbean region are purported to comprise different ethnic groups, this article’s focus is on people of African descent, who represent the largest ethnic group in many countries. The emphasis on people of African descent is related to their family structure, ethnic identity, cultural, psychohistorical, and contemporary psychosocial realities. This article discusses the limitations of Western psychology for theory, research, and applied work on people of African descent in the Americas and Caribbean region. In view of the adaptations that some people of African descent have made to slavery, colonialism, and more contemporary forms of cultural intrusions, it is argued that when necessary, notwithstanding Western psychology’s limitations, Caribbean psychologists should reconstruct mainstream psychology to address the psychological needs of these Caribbean people. The relationship between theory and psychological interventions for the optimal development of people of African descent is emphasized throughout this article. In this regard, the African-centered and constructionist viewpoint is argued to be of utility in addressing the psychological growth and development of people of African descent living in the Americas and Caribbean region.

  16. Effects of aircraft and flight parameters on energy-efficient profile descents in time-based metered traffic

    NASA Technical Reports Server (NTRS)

    Dejarnette, F. R.

    1984-01-01

    Concepts to save fuel while preserving airport capacity by combining time based metering with profile descent procedures were developed. A computer algorithm is developed to provide the flight crew with the information needed to fly from an entry fix to a metering fix and arrive there at a predetermined time, altitude, and airspeed. The flight from the metering fix to an aim point near the airport was calculated. The flight path is divided into several descent and deceleration segments. Descents are performed at constant Mach numbers or calibrated airspeed, whereas decelerations occur at constant altitude. The time and distance associated with each segment are calculated from point mass equations of motion for a clean configuration with idle thrust. Wind and nonstandard atmospheric properties have a large effect on the flight path. It is found that uncertainty in the descent Mach number has a large effect on the predicted flight time. Of the possible combinations of Mach number and calibrated airspeed for a descent, only small changes were observed in the fuel consumed.

  17. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    Treesearch

    R. Quinn Thomas; Evan B. Brooks; Annika L. Jersild; Eric J. Ward; Randolph H. Wynne; Timothy J. Albaugh; Heather Dinon-Aldridge; Harold E. Burkhart; Jean-Christophe Domec; Timothy R. Fox; Carlos A. Gonzalez-Benecke; Timothy A. Martin; Asko Noormets; David A. Sampson; Robert O. Teskey

    2017-01-01

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of...

  18. Eocene greenhouse climate revealed by coupled clumped isotope-Mg/Ca thermometry.

    PubMed

    Evans, David; Sagoo, Navjit; Renema, Willem; Cotton, Laura J; Müller, Wolfgang; Todd, Jonathan A; Saraswati, Pratul Kumar; Stassen, Peter; Ziegler, Martin; Pearson, Paul N; Valdes, Paul J; Affek, Hagit P

    2018-02-06

    Past greenhouse periods with elevated atmospheric CO2 were characterized by globally warmer sea-surface temperatures (SST). However, the extent to which the high latitudes warmed to a greater degree than the tropics (polar amplification) remains poorly constrained, in particular because there are only a few temperature reconstructions from the tropics. Consequently, the relationship between increased CO2, the degree of tropical warming, and the resulting latitudinal SST gradient is not well known. Here, we present coupled clumped isotope (Δ47)-Mg/Ca measurements of foraminifera from a set of globally distributed sites in the tropics and midlatitudes. Δ47 is insensitive to seawater chemistry and therefore provides a robust constraint on tropical SST. Crucially, coupling these data with Mg/Ca measurements allows the precise reconstruction of Mg/Ca_sw throughout the Eocene, enabling the reinterpretation of all planktonic foraminifera Mg/Ca data. The combined dataset constrains the range in Eocene tropical SST to 30-36 °C (from sites in all basins). We compare these accurate tropical SST to deep-ocean temperatures, serving as a minimum constraint on high-latitude SST. This results in a robust conservative reconstruction of the early Eocene latitudinal gradient, which was reduced by at least 32 ± 10% compared with present day, demonstrating greater polar amplification than captured by most climate models.

  19. Characterization of a Quadrotor Unmanned Aircraft System for Aerosol-Particle-Concentration Measurements.

    PubMed

    Brady, James M; Stokes, M Dale; Bonnardel, Jim; Bertram, Timothy H

    2016-02-02

    High-spatial-resolution, near-surface vertical profiling of atmospheric chemical composition is currently limited by the availability of experimental platforms that can sample in constrained environments. As a result, measurements of near-surface gradients in trace gas and aerosol particle concentrations have been limited to studies conducted from fixed location towers or tethered balloons. Here, we explore the utility of a quadrotor unmanned aircraft system (UAS) as a sampling platform to measure vertical and horizontal concentration gradients of trace gases and aerosol particles at high spatial resolution (1 m) within the mixed layer (0-100 m). A 3D Robotics Iris+ autonomous quadrotor UAS was outfitted with a sensor package consisting of a two-channel aerosol optical particle counter and a CO2 sensor. The UAS demonstrated high precision in both vertical (±0.5 m) and horizontal positions (±1 m), highlighting the potential utility of quadrotor UAS drones for aerosol- and trace-gas measurements within complex terrain, such as the urban environment, forest canopies, and above difficult-to-access areas such as breaking surf. Vertical profiles of aerosol particle number concentrations, acquired from flights conducted along the California coastline, were used to constrain sea-spray aerosol-emission rates from coastal wave breaking.

  20. Ascent/descent ancillary data production user's guide

    NASA Technical Reports Server (NTRS)

    Brans, H. R.; Seacord, A. W., II; Ulmer, J. W.

    1986-01-01

    The Ascent/Descent Ancillary Data Product, also called the A/D BET because it contains a Best Estimate of the Trajectory (BET), is a collection of trajectory, attitude, and atmospheric related parameters computed for the ascent and descent phases of each Shuttle Mission. These computations are executed shortly after the event in a post-flight environment. A collection of several routines including some stand-alone routines constitute what is called the Ascent/Descent Ancillary Data Production Program. A User's Guide for that program is given. It is intended to provide the reader with all the information necessary to generate an Ascent or a Descent Ancillary Data Product. It includes descriptions of the input data and output data for each routine, and contains explicit instructions on how to run each routine. A description of the final output product is given.

  1. Time-specific androgen blockade with flutamide inhibits testicular descent in the rat.

    PubMed

    Husmann, D A; McPhaul, M J

    1991-09-01

    Inhibition of androgen action by flutamide, a nonsteroidal antiandrogen, blocked testicular descent in 40% of the testes exposed to this agent continuously from gestational day 13 through postpartal day 28. By contrast, only 11% of the testes failed to descend when blocked by 5 alpha-reductase inhibitors during the same period. Flutamide administration over narrower time intervals (gestational day 13-15, 16-17, or 18-19) revealed maximal interference with testicular descent after androgen inhibition during gestational days 16-17. No significant differences in testicular or epididymal weights were evident between descended and undescended testes; furthermore, no correlation was detected between the presence of epididymal abnormalities and testicular descent. These findings indicate that androgen inhibition during a brief period of embryonic development can block testicular descent. The mechanism through which this inhibition occurs remains to be elucidated.

  2. A conflict analysis of 4D descent strategies in a metered, multiple-arrival route environment

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Harris, C. S.

    1990-01-01

    A conflict analysis was performed on multiple arrival traffic at a typical metered airport. The Flow Management Evaluation Model (FMEM) was used to simulate arrival operations using Denver Stapleton's arrival route structure. Sensitivities of conflict performance to three different 4-D descent strategies (clean-idle Mach/Constant AirSpeed (CAS), constant descent angle Mach/CAS and energy optimal) were examined for three traffic mixes represented by those found at Denver Stapleton, John F. Kennedy and typical en route metering (ERM) airports. The Monte Carlo technique was used to generate simulation entry point times. Analysis results indicate that the clean-idle descent strategy offers the best compromise in overall performance. Performance measures primarily include susceptibility to conflict and conflict severity. Fuel usage performance is extrapolated from previous descent strategy studies.

  3. Analysis of Flight Management System Predictions of Idle-Thrust Descents

    NASA Technical Reports Server (NTRS)

    Stell, Laurel

    2010-01-01

    To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the predictor and its uncertainty models, descents from cruise to the meter fix were executed using vertical navigation in a B737-700 simulator and a B777-200 simulator, both with commercial FMSs. For both aircraft types, the FMS computed the intended descent path for a specified speed profile assuming idle thrust after top of descent (TOD), and then it controlled the avionics without human intervention. The test matrix varied aircraft weight, descent speed, and wind conditions. The first analysis in this paper determined the effect of the test matrix parameters on the FMS computation of TOD location, and it compared the results to those for the current ground predictor in the Efficient Descent Advisor (EDA). The second analysis was similar but considered the time to fly a specified distance to the meter fix. The effects of the test matrix variables together with the accuracy requirements for the predictor will determine the allowable error for the predictor inputs. For the B737, the EDA prediction of meter fix crossing time agreed well with the FMS; but its prediction of TOD location probably was not sufficiently accurate to enable idle-thrust descents in congested airspace, even though the FMS and EDA gave similar shapes for TOD location as a function of the test matrix variables. For the B777, the FMS and EDA gave different shapes for the TOD location function, and the EDA prediction of the TOD location is not accurate enough to fully enable the concept. Furthermore, the differences between the FMS and EDA predictions of meter fix crossing time for the B777 indicated that at least one of them was not sufficiently accurate.

  4. Rotary Wing Deceleration Use on Titan

    NASA Technical Reports Server (NTRS)

    Young, Larry A.; Steiner, Ted J.

    2011-01-01

    Rotary wing decelerator (RWD) systems were compared against other methods of atmospheric deceleration and were determined to show significant potential for application to a system requiring controlled descent, low-velocity landing, and atmospheric research capability on Titan. Design space exploration and down-selection resulted in a system with a single rotor utilizing cyclic pitch control. Models were developed for selection of an RWD descent system for use on Titan and to determine the relationships between the key design parameters of such a system and the time of descent. The possibility of extracting power from the system during descent was also investigated.

  5. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  6. Implementation of neural network for color properties of polycarbonates

    NASA Astrophysics Data System (ADS)

    Saeed, U.; Ahmad, S.; Alsadi, J.; Ross, D.; Rizvi, G.

    2014-05-01

    In the present paper, the applicability of artificial neural networks (ANN) to the color properties of plastics is investigated. The neural networks toolbox of Matlab 6.5 is used to develop and test the ANN model on a personal computer. An optimal design is completed for 10, 12, 14, 16, 18, and 20 hidden neurons on a single hidden layer with five different training algorithms: batch gradient descent (GD), batch variable learning rate (GDX), resilient back-propagation (RP), scaled conjugate gradient (SCG), and Levenberg-Marquardt (LM), in a feed-forward back-propagation neural network model. The training data for the ANN are obtained from experimental measurements. There were twenty-two inputs, including resins, additives, and pigments, while the three tristimulus color values L*, a*, and b* formed the output layer. Statistical analysis in terms of root-mean-squared (RMS) error, absolute fraction of variance (R squared), and mean square error is used to investigate the performance of the ANN. The LM algorithm with fourteen neurons in the hidden layer of the feed-forward back-propagation ANN model showed the best results in the present study. The accuracy of the ANN model in reducing errors is shown to be acceptable in all of the statistical analyses. It was concluded that the ANN provides a feasible method for reducing error in the prediction of specific color tristimulus values.
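
    As a generic counterpart to the model described (process inputs, one hidden layer, three tristimulus outputs), the sketch below implements a small feed-forward network in NumPy trained by plain batch gradient descent; the data are random placeholders with standardized targets, and the Levenberg-Marquardt variant that performed best in the paper is not implemented.

```python
import numpy as np

# Generic feed-forward network sketch: 22 inputs (resins, additives, pigments),
# one hidden layer of 14 tanh units, 3 linear outputs (standardized L*, a*, b*),
# trained by batch gradient descent on the mean-squared error.
# Data and hyperparameters are placeholders, not the paper's dataset.

rng = np.random.default_rng(0)
n, n_in, n_hid, n_out = 200, 22, 14, 3
X = rng.uniform(0, 1, (n, n_in))
W_true = rng.normal(size=(n_in, n_out))
Y = np.tanh(X @ W_true) + 0.05 * rng.normal(size=(n, n_out))   # standardized targets

W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_out)); b2 = np.zeros(n_out)
lr = 0.05
for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)                 # hidden layer
    P = H @ W2 + b2                          # predicted L*, a*, b*
    E = P - Y                                # output error
    # backpropagated gradients of the mean-squared error
    gW2 = H.T @ E / n;  gb2 = E.mean(0)
    dH  = (E @ W2.T) * (1 - H**2)            # back through the tanh units
    gW1 = X.T @ dH / n; gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

P = np.tanh(X @ W1 + b1) @ W2 + b2
print("final RMS error over the three outputs:", round(float(np.sqrt(np.mean((P - Y)**2))), 4))
```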

  7. The Dropout Learning Algorithm

    PubMed Central

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
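
    The Bernoulli-gated training pass and the deterministic weight-scaling used at test time, whose connection to normalized geometric means is analyzed above, can be sketched as follows. The layer size, keep probability, and toy inputs below are assumptions for illustration, not part of the original analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def dropout_forward(x, W, p_keep=0.5, train=True):
        """One logistic layer with Bernoulli dropout on its inputs.

        During training, each input unit is kept independently with
        probability p_keep.  At test time the inputs are scaled by
        p_keep instead, which approximates the geometric-mean ensemble
        of all sub-networks discussed above.
        """
        if train:
            gate = rng.binomial(1, p_keep, size=x.shape)   # Bernoulli gating variables
            z = (x * gate) @ W
        else:
            z = (x * p_keep) @ W                           # deterministic approximation
        return 1.0 / (1.0 + np.exp(-z))                    # logistic units

    x = rng.random(10)
    W = rng.normal(size=(10, 3))

    # Monte Carlo average over dropout masks vs. the weight-scaling approximation.
    mc = np.mean([dropout_forward(x, W, train=True) for _ in range(20000)], axis=0)
    approx = dropout_forward(x, W, train=False)
    print(mc, approx)
    ```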

  8. An Impacting Descent Probe for Europa and the Other Galilean Moons of Jupiter

    NASA Astrophysics Data System (ADS)

    Wurz, P.; Lasi, D.; Thomas, N.; Piazza, D.; Galli, A.; Jutzi, M.; Barabash, S.; Wieser, M.; Magnes, W.; Lammer, H.; Auster, U.; Gurvits, L. I.; Hajdas, W.

    2017-08-01

    We present a study of an impacting descent probe that increases the science return of spacecraft orbiting or passing atmosphere-less planetary bodies of the solar system, such as the Galilean moons of Jupiter. The descent probe is a carry-on small spacecraft (<100 kg), to be deployed by the mother spacecraft, that brings itself onto a collisional trajectory with the targeted planetary body in a simple manner. A possible science payload includes instruments for surface imaging, characterisation of the neutral exosphere, and magnetic field and plasma measurement near the target body down to very low altitudes ( 1 km), during the probe's fast ( km/s) descent to the surface until impact. The science goals and the concept of operation are discussed with particular reference to Europa, including options for flying through water plumes and after-impact retrieval of very-low-altitude science data. All in all, it is demonstrated how the descent probe has the potential to provide a high science return to a mission at a low extra level of complexity, engineering effort, and risk. This study builds upon earlier studies for a Callisto Descent Probe for the former Europa-Jupiter System Mission of ESA and NASA, and extends them with a detailed assessment of a descent probe designed to be an additional science payload for the NASA Europa Mission.

  9. Parity violation constraints using cosmic microwave background polarization spectra from 2006 and 2007 observations by the QUaD polarimeter.

    PubMed

    Wu, E Y S; Ade, P; Bock, J; Bowden, M; Brown, M L; Cahill, G; Castro, P G; Church, S; Culverhouse, T; Friedman, R B; Ganga, K; Gear, W K; Gupta, S; Hinderks, J; Kovac, J; Lange, A E; Leitch, E; Melhuish, S J; Memari, Y; Murphy, J A; Orlando, A; Piccirillo, L; Pryke, C; Rajguru, N; Rusholme, B; Schwarz, R; O'Sullivan, C; Taylor, A N; Thompson, K L; Turner, A H; Zemcov, M

    2009-04-24

    We constrain parity-violating interactions to the surface of last scattering using spectra from the QUaD experiment's second and third seasons of observations by searching for a possible systematic rotation of the polarization directions of cosmic microwave background photons. We measure the rotation angle due to such a possible "cosmological birefringence" to be 0.55 degrees +/-0.82 degrees (random) +/-0.5 degrees (systematic) using QUaD's 100 and 150 GHz temperature-curl and gradient-curl spectra over the multipole range 200

  10. Hair Breakage in Patients of African Descent: Role of Dermoscopy

    PubMed Central

    Quaresma, Maria Victória; Martinez Velasco, María Abril; Tosti, Antonella

    2015-01-01

    Dermoscopy represents a useful technique for the diagnosis and follow-up of hair and scalp disorders. To date, little has been published regarding dermoscopy findings of hair disorders in patients of African descent. This article illustrates how dermoscopy allows fast diagnosis of hair breakage due to intrinsic factors and chemical damage in African descent patients. PMID:27170942

  11. Ethnic Identity and Acculturative Stress as Mediators of Depression in Students of Asian Descent

    ERIC Educational Resources Information Center

    Lantrip, Crystal; Mazzetti, Francesco; Grasso, Joseph; Gill, Sara; Miller, Janna; Haner, Morgynn; Rude, Stephanie; Awad, Germine

    2015-01-01

    This study underscored the importance of addressing the well-being of college students of Asian descent, because these students had higher rates of depression and lower positive feelings about their ethnic group compared with students of European descent, as measured by the Affirmation subscale of the Ethnic Identity Scale. Affirmation mediated…

  12. Thermoregulation in the lizard Psammodromus algirus along a 2200-m elevational gradient in Sierra Nevada (Spain).

    PubMed

    Zamora-Camacho, Francisco Javier; Reguera, Senda; Moreno-Rueda, Gregorio

    2016-05-01

    Achieving optimal body temperature maximizes animal fitness. Since ambient temperature may limit ectotherm thermal performance, thermoregulation can be constrained in environments that are too cold or too hot. In this sense, elevational gradients encompass contrasting thermal environments. At thermally poor elevations, ectotherms may either show adaptations or accept suboptimal body temperatures. Reproductive condition may also affect thermal needs. Herein, we examined the thermal ecology and physiology of the lizard Psammodromus algirus along a 2200-m elevational gradient. We measured field (Tb) and laboratory-preferred (Tpref) body temperatures of lizards in different reproductive conditions, as well as ambient (Ta) and copper-model operative (Te) temperatures, which we used to determine indexes of the thermal quality of the habitat (de), accuracy of thermoregulation (db), and effectiveness of thermoregulation (de-db). We detected no trend in Tb with elevation, while Ta constrained Tb only at high elevations. Moreover, while Ta decreased by more than 7 °C with elevation, Tpref dropped only 0.6 °C, although significantly. Notably, low-elevation lizards faced excess temperature (Te > Tpref). Thermal quality (de) was best at middle elevations, followed by high elevations, and poorest at low elevations. Regarding microhabitat, however, high-elevation de was more suitable in sun-exposed microhabitats, which may increase exposure to predators, and at midday, which may limit daily activity. As for sex, db and de-db were better in females than in males. In conclusion, P. algirus seems capable of facing a wide thermal range, which probably contributes to its extensive corology and makes it adaptable to climate change.

  13. Coastal Fish Assemblages Reflect Geological and Oceanographic Gradients Within An Australian Zootone

    PubMed Central

    Harvey, Euan S.; Cappo, Mike; Kendrick, Gary A.; McLean, Dianne L.

    2013-01-01

    Distributions of mobile animals have been shown to be heavily influenced by habitat and climate. We address the historical and contemporary context of fish habitats within a major zootone: the Recherche Archipelago, southern Western Australia. Baited remote underwater video systems were set in nine habitat types within three regions to determine the species diversity and relative abundance of bony fishes, sharks and rays. Constrained ordinations and multivariate prediction and regression trees were used to examine the effects of gradients in longitude, depth, distance from islands and coast, and epibenthic habitat on fish assemblage composition. A total of 90 species from 43 families were recorded from a wide range of functional groups. Ordination accounted for 19% of the variation in the assemblage composition when constrained by spatial and epibenthic covariates, and identified redundancy in the use of distance from the nearest emergent island as a predictor. A spatial hierarchy of fourteen fish assemblages was identified using multivariate prediction and regression trees, with the primary split between assemblages on macroalgal reefs, and those on bare or sandy habitats supporting seagrass beds. The characterisation of indicator species for assemblages within the hierarchy revealed an important faunal break in fish assemblages at 122.30° East at Cape Le Grand and subtle niche partitioning amongst species within the labrids and monacanthids. For example, some species of monacanthids were habitat specialists and predominantly found on seagrass (Acanthaluteres vittiger, Scobinichthys granulatus), reef (Meuschenia galii, Meuschenia hippocrepis) or sand habitats (Nelusetta ayraudi). Predatory fish that consume molluscs, crustaceans and cephalopods were dominant with evidence of habitat generalisation in reef species to cope with local disturbances by wave action. Niche separation within major genera, and a sub-regional faunal break, indicate future zootone mapping should recognise both cross-shelf and longshore environmental gradients. PMID:24278353

  14. Device for Lowering Mars Science Laboratory Rover to the Surface

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This is hardware for controlling the final lowering of NASA's Mars Science Laboratory rover to the surface of Mars from the spacecraft's hovering, rocket-powered descent stage.

    The photo shows the bridle device assembly, which is about two-thirds of a meter, or 2 feet, from end to end, and has two main parts. The cylinder on the left is the descent brake. On the right is the bridle assembly, including a spool of nylon and Vectran cords that will be attached to the rover.

    When pyrotechnic bolts fire to sever the rigid connection between the rover and the descent stage, gravity will pull the tethered rover away from the descent stage. The bridle or tether, attached to three points on the rover, will unspool from the bridle assembly, beginning from the larger-diameter portion of the spool at far right. The rotation rate of the assembly, hence the descent rate of the rover, will be governed by the descent brake. Inside the housing of that brake are gear boxes and banks of mechanical resistors engineered to prevent the bridle from spooling out too quickly or too slowly. The length of the bridle will allow the rover to be lowered about 7.5 meters (25 feet) while still tethered to the descent stage.

    The Starsys division of SpaceDev Inc., Poway, Calif., provided the descent brake. NASA's Jet Propulsion Laboratory, Pasadena, Calif., built the bridle assembly. Vectran is a product of Kuraray Co. Ltd., Tokyo. JPL, a division of the California Institute of Technology, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington.

  15. Antarctic Polar Descent and Planetary Wave Activity Observed in ISAMS CO from April to July 1992

    NASA Technical Reports Server (NTRS)

    Allen, D. R.; Stanford, J. L.; Nakamura, N.; Lopez-Valverde, M. A.; Lopez-Puertas, M.; Taylor, F. W.; Remedios, J. J.

    2000-01-01

    Antarctic polar descent and planetary wave activity in the upper stratosphere and lower mesosphere are observed in ISAMS CO data from April to July 1992. CO-derived mean April-to-May upper stratosphere descent rates of 15 K/day (0.25 km/day) at 60 S and 20 K/day (0.33 km/day) at 80 S are compared with descent rates from diabatic trajectory analyses. At 60 S there is excellent agreement, while at 80 S the trajectory-derived descent is significantly larger in early April. Zonal wavenumber 1 enhancement of CO is observed on 9 and 28 May, coincident with enhanced wave 1 in UKMO geopotential height. The 9 May event extends from 40 to 70 km and shows westward phase tilt with height, while the 28 May event extends from 40 to 50 km and shows virtually no phase tilt with height.

  16. The Uncertain Significance of Low Vitamin D levels in African Descent Populations: A Review of the Bone and Cardiometabolic Literature

    PubMed Central

    O'Connor, Michelle Y; Thoreson, Caroline K; Ramsey, Natalie L M; Ricks, Madia; Sumner, Anne E

    2014-01-01

    Vitamin D levels in people of African descent are often described as inadequate or deficient. Whether low vitamin D levels in people of African descent lead to compromised bone or cardiometabolic health is unknown. Clarity on this issue is essential because if clinically significant vitamin D deficiency is present, vitamin D supplementation is necessary. However, if vitamin D is metabolically sufficient, vitamin D supplementation could be wasteful of scarce resources and even harmful. In this review vitamin D physiology is described with a focus on issues specific to populations of African descent such as the influence of melanin on endogenous vitamin D production and lactose intolerance on the willingness of people to ingest vitamin D fortified foods. Then data on the relationship of vitamin D to bone and cardiometabolic health in people of African descent are evaluated. PMID:24267433

  17. Descent Assisted Split Habitat Lunar Lander Concept

    NASA Technical Reports Server (NTRS)

    Mazanek, Daniel D.; Goodliff, Kandyce; Cornelius, David M.

    2008-01-01

    The Descent Assisted Split Habitat (DASH) lunar lander concept utilizes a disposable braking stage for descent and a minimally sized pressurized volume for crew transport to and from the lunar surface. The lander can also be configured to perform autonomous cargo missions. Although a braking-stage approach represents a significantly different operational concept compared with a traditional two-stage lander, the DASH lander offers many important benefits. These benefits include improved crew egress/ingress and large-cargo unloading; excellent surface visibility during landing; elimination of the need for deep-throttling descent engines; potentially reduced plume-surface interactions and lower vertical touchdown velocity; and reduced lander gross mass through efficient mass staging and volume segmentation. This paper documents the conceptual study on various aspects of the design, including development of sortie and outpost lander configurations and a mission concept of operations; the initial descent trajectory design; the initial spacecraft sizing estimates and subsystem design; and the identification of technology needs.

  18. Descent graphs in pedigree analysis: applications to haplotyping, location scores, and marker-sharing statistics.

    PubMed Central

    Sobel, E.; Lange, K.

    1996-01-01

    The introduction of stochastic methods in pedigree analysis has enabled geneticists to tackle computations intractable by standard deterministic methods. Until now these stochastic techniques have worked by running a Markov chain on the set of genetic descent states of a pedigree. Each descent state specifies the paths of gene flow in the pedigree and the founder alleles dropped down each path. The current paper follows up on a suggestion by Elizabeth Thompson that genetic descent graphs offer a more appropriate space for executing a Markov chain. A descent graph specifies the paths of gene flow but not the particular founder alleles traveling down the paths. This paper explores algorithms for implementing Thompson's suggestion for codominant markers in the context of automatic haplotyping, estimating location scores, and computing gene-clustering statistics for robust linkage analysis. Realistic numerical examples demonstrate the feasibility of the algorithms. PMID:8651310

  19. Development and test results of a flight management algorithm for fuel conservative descents in a time-based metered traffic environment

    NASA Technical Reports Server (NTRS)

    Knox, C. E.; Cannon, D. G.

    1980-01-01

    A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
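
    A toy sketch of the general idea, not the flight-tested algorithm: integrate an idle-thrust descent backward in time from the metering fix, with the sink rate modeled as a linear function of altitude (standing in for the linear performance approximations mentioned above). All coefficients below are made-up illustrative values, and wind and temperature corrections are omitted.

    ```python
    import numpy as np

    def descent_profile(cruise_alt_ft, fix_alt_ft, fix_time_s, tas_kt=280.0,
                        sink0_fpm=1500.0, sink_slope=0.03):
        """Integrate an idle-thrust descent backwards from the metering fix.

        The sink rate is modeled as sink0_fpm + sink_slope * altitude, a crude
        stand-in for linear approximations of airplane performance.
        """
        dt = 1.0                       # s
        alt = fix_alt_ft
        t = fix_time_s
        dist_nmi = 0.0
        while alt < cruise_alt_ft:
            sink_fpm = sink0_fpm + sink_slope * alt
            alt += sink_fpm / 60.0 * dt          # climbing backwards in time
            dist_nmi += tas_kt / 3600.0 * dt     # ground distance flown (no wind)
            t -= dt
        return t, dist_nmi                        # TOD time and distance before the fix

    tod_time, tod_dist = descent_profile(cruise_alt_ft=35000, fix_alt_ft=10000,
                                         fix_time_s=3600.0)
    print(f"TOD is {tod_dist:.1f} nmi before the metering fix, at t = {tod_time:.0f} s")
    ```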

  20. Compressed sensing with gradient total variation for low-dose CBCT reconstruction

    NASA Astrophysics Data System (ADS)

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Seongchae; Huh, Young; Park, Justin C.; Lee, Byeonghun; Baek, Junghee; Kim, Eunyoung

    2015-06-01

    This paper describes the improvement of convergence speed with gradient total variation (GTV) in compressed sensing (CS) for low-dose cone-beam computed tomography (CBCT) reconstruction. We derive a fast algorithm for constrained total variation (TV)-based reconstruction from a minimum number of noisy projections. To achieve this task we combine the GTV with a TV-norm regularization term to promote sparsity in the X-ray attenuation characteristics of the human body and to accelerate convergence. The GTV is derived from the TV and is computationally more efficient, converging faster to a desired solution. The numerical algorithm is simple and converges relatively quickly. We apply a gradient projection algorithm that seeks a solution iteratively in the direction of the projected gradient while enforcing non-negativity of the found solution. In comparison with the Feldkamp, Davis, and Kress (FDK) and conventional TV algorithms, the proposed GTV algorithm converged in ≤18 iterations, whereas the original TV algorithm needed at least 34 iterations to reconstruct the chest phantom images when the number of projections was reduced by 50% relative to the FDK algorithm. Future investigation includes improving imaging quality, particularly regarding X-ray cone-beam scatter and motion artifacts in CBCT reconstruction.
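
    A generic sketch of the gradient-projection family of methods referred to above (not the authors' GTV algorithm): a TV-regularized least-squares reconstruction updated along the negative gradient and projected onto the non-negative orthant. The toy system matrix, phantom, and parameters are assumptions for illustration.

    ```python
    import numpy as np

    def tv_grad(x):
        """Gradient of a smoothed isotropic total-variation term for a 2-D image."""
        eps = 1e-8
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        # Negative divergence of the normalized gradient field.
        div = np.diff(dx / mag, axis=0, prepend=0) + np.diff(dy / mag, axis=1, prepend=0)
        return -div

    def gradient_projection(A, b, shape, lam=0.05, step=1e-3, iters=100):
        """Minimize 0.5||A x - b||^2 + lam * TV(x) subject to x >= 0."""
        x = np.zeros(shape)
        for _ in range(iters):
            data_grad = (A.T @ (A @ x.ravel() - b)).reshape(shape)
            x = x - step * (data_grad + lam * tv_grad(x))
            x = np.maximum(x, 0.0)            # projection enforcing non-negativity
        return x

    # Tiny toy problem: a random matrix standing in for a sparse-view CT system matrix.
    rng = np.random.default_rng(2)
    shape = (16, 16)
    A = rng.normal(size=(128, shape[0] * shape[1]))   # under-determined (few projections)
    truth = np.zeros(shape); truth[4:12, 4:12] = 1.0
    b = A @ truth.ravel()
    rec = gradient_projection(A, b, shape)
    print("reconstruction error:", np.linalg.norm(rec - truth))
    ```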

  1. Constrained growth flips the direction of optimal phenological responses among annual plants.

    PubMed

    Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas

    2016-03-01

    Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help to explain phenological patterns along productivity gradients, and they link empirical observations made on calendar scales with life-history theory. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
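
    A toy numerical illustration of the bang-bang result, under assumptions that are ours rather than the paper's: the plant grows vegetatively until a switch (flowering) time and then allocates everything to reproduction, with a crude saturating term standing in for self-shading. Scanning switch times shows the constrained optimum occurring earlier than the unconstrained one, qualitatively consistent with the result described above.

    ```python
    import numpy as np

    def seed_output(t_switch, season=100.0, prod=0.05, constrained=True, dt=0.1):
        """Reproductive output for a bang-bang strategy: vegetative growth until
        t_switch, then all assimilation goes to reproduction.  With constrained
        growth, per-capita productivity saturates with vegetative mass."""
        m, seeds = 0.01, 0.0
        for t in np.arange(0.0, season, dt):
            rate = prod * m / (1.0 + m) if constrained else prod * m
            if t < t_switch:
                m += rate * dt          # vegetative growth
            else:
                seeds += rate * dt      # reproduction
        return seeds

    switches = np.linspace(1, 99, 99)
    for label, c in [("unconstrained", False), ("constrained", True)]:
        best = switches[np.argmax([seed_output(s, constrained=c) for s in switches])]
        print(f"{label}: optimal flowering time ~ day {best:.0f}")
    ```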

  2. Descent Equations Starting from High Rank Chern-Simons

    NASA Astrophysics Data System (ADS)

    Kang, Bei; Pan, Yi; Wu, Ke; Yang, Jie; Yang, Zi-Feng

    2018-04-01

    In this paper a set of generalized descent equations is proposed. The solutions to these descent equations, labeled by r for any r (r ≥ 2, r ∈ ℕ), are forms of degrees varying from 0 to (2r - 1). The case of r = 2 is discussed in detail. Supported by National Natural Science Foundation of China under Grant Nos. 11475116, 11401400

  3. Mars Science Laboratory Entry, Descent and Landing System Overview

    NASA Technical Reports Server (NTRS)

    Steltzner, Adam D.; San Martin, A. Miguel; Rivellini, Tomasso P.; Chen, Allen

    2013-01-01

    The Mars Science Laboratory project recently placed the Curiosity rover on the surface of Mars. With the success of the landing system, the performance envelope of entry, descent and landing capabilities has been extended over the previous state of the art. This paper presents an overview of the MSL entry, descent and landing system design and preliminary flight performance results.

  4. Study of Some Planetary Atmospheres Features by Probe Entry and Descent Simulations

    NASA Technical Reports Server (NTRS)

    Gil, P. J. S.; Rosa, P. M. B.

    2005-01-01

    Planetary atmospheres are characterized through their effects on the entry and descent trajectories of probes. Emphasis is on the most important variables that characterize atmospheres, e.g., the density profile with altitude. Probe trajectories are numerically determined with ENTRAP, a multi-purpose computational tool under development for entry and descent trajectory simulations that is capable of taking into account many features and perturbations. Real data from the Mars Pathfinder mission are used. The goal is to determine the atmosphere structure more accurately by observing real trajectories and to establish what changes to expect in probe descent trajectories if atmospheres have properties different from those assumed initially.

  5. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
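
    If a modeling language such as cvxpy is available, one convex subproblem of the kind solved in such a successive procedure can be sketched as below: a discretized double-integrator descent with the gravity vector frozen at the previous iterate's value and a second-order-cone bound on thrust acceleration. All problem data are illustrative assumptions; the actual method would re-evaluate the irregular gravity field along the resulting trajectory and re-solve until convergence.

    ```python
    import numpy as np
    import cvxpy as cp

    # One convex subproblem in a successive-solution loop (sketch only).
    N, dt = 60, 1.0
    g_prev = np.array([0.0, 0.0, -1e-3])          # km/s^2, frozen for this subproblem
    r0, v0 = np.array([1.0, 0.5, 2.0]), np.array([-0.01, 0.0, -0.02])
    T_max = 5e-3                                   # max thrust acceleration, km/s^2

    r = cp.Variable((N + 1, 3))                    # position, km
    v = cp.Variable((N + 1, 3))                    # velocity, km/s
    u = cp.Variable((N, 3))                        # thrust acceleration, km/s^2

    cons = [r[0] == r0, v[0] == v0, r[N] == 0, v[N] == 0]
    for k in range(N):
        cons += [r[k + 1] == r[k] + dt * v[k] + 0.5 * dt**2 * (u[k] + g_prev),
                 v[k + 1] == v[k] + dt * (u[k] + g_prev),
                 cp.norm(u[k]) <= T_max]           # second-order cone constraint

    fuel = dt * sum(cp.norm(u[k]) for k in range(N))   # minimum-delta-v surrogate
    prob = cp.Problem(cp.Minimize(fuel), cons)
    prob.solve()
    print(prob.status, "total delta-v (km/s):", prob.value)
    # A full successive process would now update g_prev along this trajectory
    # using the irregular asteroid gravity model and re-solve until convergence.
    ```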

  6. NASA aviation safety reporting system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Reports describing various types of communication problems are presented along with summaries dealing with judgment and decision making. Concerns relating to the ground proximity warning system are summarized and several examples of true terrain proximity warnings are provided. An analytic study of reports relating to profile descents was performed. Problems were found to be associated with charting and graphic presentation of the descents, with lack of uniformity of the descent procedures among facilities using them, and with the flight crew workload engendered by profile descents, particularly when additional requirements are interposed by air traffic control during the execution of the profiles. A selection of alert bulletins and responses to them were reviewed.

  7. Tracer-based Determination of Vortex Descent in the 1999/2000 Arctic Winter

    NASA Technical Reports Server (NTRS)

    Greenblatt, Jeffrey B.; Jost, Hans-Juerg; Loewenstein, Max; Podolske, James R.; Hurst, Dale F.; Elkins, James W.; Schauffler, Sue M.; Atlas, Elliot L.; Herman, Robert L.; Webster, Christopher R.

    2002-01-01

    A detailed analysis of available in situ and remotely sensed N2O and CH4 data measured in the 1999/2000 winter Arctic vortex has been performed in order to quantify the temporal evolution of vortex descent. Differences in potential temperature (theta) among balloon and aircraft vertical profiles (an average of 19-23 K on a given N2O or CH4 isopleth) indicated significant vortex inhomogeneity in late fall as compared with late winter profiles. A composite fall vortex profile was constructed for 26 November 1999, whose error bars encompassed the observed variability. High-latitude extravortex profiles measured in different years and seasons revealed substantial variability in N2O and CH4 on theta surfaces, but all were clearly distinguishable from the first vortex profiles measured in late fall 1999. From these extravortex-vortex differences we inferred descent prior to 26 November: as much as 397 plus or minus 15 K (1 sigma) at 30 ppbv N2O and 640 ppbv CH4, falling to 28 plus or minus 13 K above 200 ppbv N2O and 1280 ppbv CH4. Changes in theta were determined on five N2O and CH4 isopleths from 26 November through 12 March, and descent rates were calculated on each N2O isopleth for several time intervals. The maximum descent rates were seen between 26 November and 27 January: 0.82 plus or minus 0.20 K/day averaged over 50-250 ppbv N2O. By late winter (26 February to 12 March), the average rate had decreased to 0.10 plus or minus 0.25 K/day. Descent rates also decreased with increasing N2O; the winter average (26 November to 5 March) descent rate varied from 0.75 plus or minus 0.10 K/day at 50 ppbv to 0.40 plus or minus 0.11 K/day at 250 ppbv. Comparison of these results with observations and models of descent in prior years showed very good overall agreement. Two models of the 1999/2000 vortex descent, SLIMCAT and REPROBUS, despite theta offsets with respect to observed profiles of up to 20 K on most tracer isopleths, produced descent rates that agreed very favorably with the inferred rates from observation.

  8. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  9. Material parameter estimation with terahertz time-domain spectroscopy.

    PubMed

    Dorney, T D; Baraniuk, R G; Mittleman, D M

    2001-07-01

    Imaging systems based on terahertz (THz) time-domain spectroscopy offer a range of unique modalities owing to the broad bandwidth, subpicosecond duration, and phase-sensitive detection of the THz pulses. Furthermore, the possibility exists for combining spectroscopic characterization or identification with imaging because the radiation is broadband in nature. To achieve this, we require novel methods for real-time analysis of THz waveforms. This paper describes a robust algorithm for extracting material parameters from measured THz waveforms. Our algorithm simultaneously obtains both the thickness and the complex refractive index of an unknown sample under certain conditions. In contrast, most spectroscopic transmission measurements require knowledge of the sample's thickness for an accurate determination of its optical parameters. Our approach relies on a model-based estimation, a gradient descent search, and the total variation measure. We explore the limits of this technique and compare the results with literature data for optical parameters of several different materials.

  10. Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation

    NASA Astrophysics Data System (ADS)

    Bedi, Amrit Singh; Rajawat, Ketan

    2018-05-01

    Stochastic network optimization problems entail finding resource allocation policies that are optimum on an average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally-challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
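
    A toy sketch of delayed stochastic dual (sub)gradient updates for a long-term average power constraint, loosely in the spirit of the method described above but not the paper's algorithm. The channel model, step size, and delay length are assumptions for illustration.

    ```python
    import numpy as np
    from collections import deque

    rng = np.random.default_rng(3)

    P_avg, alpha, tau = 1.0, 0.02, 5          # budget, step size, gradient delay
    lam = 1.0                                 # dual variable (price of power)
    delayed = deque([0.0] * tau, maxlen=tau)  # buffer of stale constraint subgradients
    powers = []

    for t in range(20000):
        h = rng.exponential(1.0)                       # random channel gain
        p = max(0.0, 1.0 / lam - 1.0 / h)              # primal allocation from the dual
        powers.append(p)
        delayed.append(p - P_avg)                      # stochastic constraint subgradient
        g_stale = delayed[0]                           # gradient computed tau slots ago
        lam = max(1e-6, lam + alpha * g_stale)         # delayed dual ascent, constant step

    print("long-run average power:", np.mean(powers[5000:]), "vs budget", P_avg)
    ```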

  11. Wavefront sensorless adaptive optics ophthalmoscopy in the human eye

    PubMed Central

    Hofer, Heidi; Sredar, Nripun; Queener, Hope; Li, Chaohong; Porter, Jason

    2011-01-01

    Wavefront sensor noise and fidelity place a fundamental limit on achievable image quality in current adaptive optics ophthalmoscopes. Additionally, the wavefront sensor ‘beacon’ can interfere with visual experiments. We demonstrate real-time (25 Hz), wavefront sensorless adaptive optics imaging in the living human eye with image quality rivaling that of wavefront sensor based control in the same system. A stochastic parallel gradient descent algorithm directly optimized the mean intensity in retinal image frames acquired with a confocal adaptive optics scanning laser ophthalmoscope (AOSLO). When imaging through natural, undilated pupils, both control methods resulted in comparable mean image intensities. However, when imaging through dilated pupils, image intensity was generally higher following wavefront sensor-based control. Despite the typically reduced intensity, image contrast was higher, on average, with sensorless control. Wavefront sensorless control is a viable option for imaging the living human eye and future refinements of this technique may result in even greater optical gains. PMID:21934779
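
    The stochastic parallel gradient descent update itself is simple to sketch: perturb all actuator commands with random ±δ, measure the two-sided change in the image-quality metric, and step in proportion to their product. The quadratic-exponential metric below is only a stand-in for mean retinal-image intensity, and the gain, perturbation size, and actuator count are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_act = 32                                  # number of deformable-mirror actuators
    target = rng.normal(size=n_act)             # unknown "flat-wavefront" command

    def metric(u):
        """Stand-in for mean image intensity: peaks when u matches target."""
        return np.exp(-np.sum((u - target) ** 2) / n_act)

    u = np.zeros(n_act)                         # actuator command vector
    gain, delta = 20.0, 0.1

    for it in range(2000):
        du = delta * rng.choice([-1.0, 1.0], size=n_act)   # parallel random perturbation
        dJ = metric(u + du) - metric(u - du)               # two-sided metric change
        u += gain * dJ * du                                # SPGD update
    print("final metric:", metric(u))
    ```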

  12. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    NASA Astrophysics Data System (ADS)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.

  13. Monte Carlo-based Reconstruction in Water Cherenkov Detectors using Chroma

    NASA Astrophysics Data System (ADS)

    Seibert, Stanley; Latorre, Anthony

    2012-03-01

    We demonstrate the feasibility of event reconstruction---including position, direction, energy and particle identification---in water Cherenkov detectors with a purely Monte Carlo-based method. Using a fast optical Monte Carlo package we have written, called Chroma, in combination with several variance reduction techniques, we can estimate the value of a likelihood function for an arbitrary event hypothesis. The likelihood can then be maximized over the parameter space of interest using a form of gradient descent designed for stochastic functions. Although slower than more traditional reconstruction algorithms, this completely Monte Carlo-based technique is universal and can be applied to a detector of any size or shape, which is a major advantage during the design phase of an experiment. As a specific example, we focus on reconstruction results from a simulation of the 200 kiloton water Cherenkov far detector option for LBNE.

  14. Adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope

    NASA Astrophysics Data System (ADS)

    Ma, Haotong; Hu, Haojun; Xie, Wenke; Zhao, Haichuan; Xu, Xiaojun; Chen, Jinbao

    2013-08-01

    We demonstrate adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope system based on the stochastic parallel gradient descent (SPGD) algorithm and dual phase-only liquid crystal spatial light modulators (LC-SLMs). Adaptive pre-compensation of the wavefront of the projected laser beam at the transmitting telescope is chosen to improve the power coupling efficiency. One phase-only LC-SLM adaptively optimizes the phase distribution of the projected laser beam and the other generates a turbulence phase screen. The intensity distributions of the dark hollow beam after passing through the turbulent atmosphere with and without adaptive beam shaping are analyzed in detail. The influence of propagation distance and of the aperture size of the Cassegrain telescope on coupling efficiency is investigated theoretically and experimentally. These studies show that the power coupling can be significantly improved by adaptive beam shaping. The technique can be used in optical communication, deep-space optical communication, and relay mirrors.

  15. Learning and optimization with cascaded VLSI neural network building-block chips

    NASA Technical Reports Server (NTRS)

    Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.

    1992-01-01

    To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.

  16. Off-Policy Integral Reinforcement Learning Method to Solve Nonlinear Continuous-Time Multiplayer Nonzero-Sum Games.

    PubMed

    Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai

    2017-03-01

    This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control, and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to perform policy evaluation and policy improvement in the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. The gradient descent algorithm updates the critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of the Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.

  17. Active semi-supervised learning method with hybrid deep belief networks.

    PubMed

    Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong

    2014-01-01

    In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can reduce the dimension and abstract the information of the reviews quickly. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We carried out several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.

  18. Impact of a variational objective analysis scheme on a regional area numerical model: The Italian Air Force Weather Service experience

    NASA Astrophysics Data System (ADS)

    Bonavita, M.; Torrisi, L.

    2005-03-01

    A new data assimilation system has been designed and implemented at the National Center for Aeronautic Meteorology and Climatology of the Italian Air Force (CNMCA) in order to improve its operational numerical weather prediction capabilities and provide more accurate guidance to operational forecasters. The system, which is undergoing testing before operational use, is based on an “observation space” version of the 3D-VAR method for the objective analysis component, and on the High Resolution Regional Model (HRM) of the Deutscher Wetterdienst (DWD) for the prognostic component. Notable features of the system include a completely parallel (MPI+OMP) implementation of the solution of analysis equations by a preconditioned conjugate gradient descent method; correlation functions in spherical geometry with thermal wind constraint between mass and wind field; derivation of the objective analysis parameters from a statistical analysis of the innovation increments.
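
    A generic, minimal sketch of the preconditioned conjugate gradient iteration of the kind used to solve the analysis equations, shown here on a toy symmetric positive-definite system with a Jacobi (diagonal) preconditioner; the operational solver applies the same iteration to the observation-space equations in parallel, which is not reproduced here.

    ```python
    import numpy as np

    def preconditioned_cg(A, b, M_inv, tol=1e-8, max_iter=500):
        """Solve A x = b for symmetric positive-definite A with a preconditioner.

        M_inv(r) applies the inverse preconditioner to a residual vector.
        This is the textbook PCG iteration.
        """
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy SPD system with a Jacobi (diagonal) preconditioner.
    rng = np.random.default_rng(5)
    B = rng.normal(size=(50, 50))
    A = B @ B.T + 50 * np.eye(50)
    b = rng.normal(size=50)
    diag = np.diag(A)
    x = preconditioned_cg(A, b, lambda r: r / diag)
    print("residual norm:", np.linalg.norm(A @ x - b))
    ```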

  19. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).

  20. Efficient numerical calculation of MHD equilibria with magnetic islands, with particular application to saturated neoclassical tearing modes

    NASA Astrophysics Data System (ADS)

    Raburn, Daniel Louis

    We have developed a preconditioned, globalized Jacobian-free Newton-Krylov (JFNK) solver for calculating equilibria with magnetic islands. The solver has been developed in conjunction with the Princeton Iterative Equilibrium Solver (PIES) and includes two notable enhancements over a traditional JFNK scheme: (1) globalization of the algorithm by a sophisticated backtracking scheme, which optimizes between the Newton and steepest-descent directions; and, (2) adaptive preconditioning, wherein information regarding the system Jacobian is reused between Newton iterations to form a preconditioner for our GMRES-like linear solver. We have developed a formulation for calculating saturated neoclassical tearing modes (NTMs) which accounts for the incomplete loss of a bootstrap current due to gradients of multiple physical quantities. We have applied the coupled PIES-JFNK solver to calculate saturated island widths on several shots from the Tokamak Fusion Test Reactor (TFTR) and have found reasonable agreement with experimental measurement.
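
    A minimal sketch of the JFNK ingredients, assuming SciPy is available: finite-difference Jacobian-vector products feed a GMRES solve for the Newton step, followed by a simple backtracking on the residual norm. The sophisticated Newton/steepest-descent blending and adaptive preconditioning of the actual solver are not reproduced here, and the toy nonlinear system is an assumption for illustration.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_solve(F, x0, tol=1e-10, max_newton=50, fd_eps=1e-7):
        """Jacobian-free Newton-Krylov with a crude backtracking line search.

        Jacobian-vector products J(x) v are approximated by finite differences,
        so the Jacobian is never formed.  Backtracking halves the step until the
        residual norm decreases.
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(max_newton):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            def jv(v):                       # finite-difference J(x) @ v
                return (F(x + fd_eps * v) - Fx) / fd_eps
            J = LinearOperator((x.size, x.size), matvec=jv)
            s, _ = gmres(J, -Fx)             # Krylov solve for the Newton step
            step = 1.0
            while np.linalg.norm(F(x + step * s)) >= np.linalg.norm(Fx) and step > 1e-6:
                step *= 0.5                  # backtrack along the Newton direction
            x = x + step * s
        return x

    # Toy nonlinear system F(x) = 0 with a known root near (1.2, 1.5).
    F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
    root = jfnk_solve(F, np.array([1.0, 1.0]))
    print(root, F(root))
    ```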

  1. Learning and tuning fuzzy logic controllers through reinforcements.

    PubMed

    Berenji, H R; Khedkar, P

    1992-01-01

    A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available. The architecture introduces a new conjunction operator for computing the rule strengths of fuzzy control rules, introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules, and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements, in terms of the speed of learning and robustness to changes in the dynamic system's parameters, over previous schemes for cart-pole balancing.

  2. African and Non-African Admixture Components in African Americans and An African Caribbean Population

    PubMed Central

    Murray, Tanda; Beaty, Terri H.; Mathias, Rasika A.; Rafaels, Nicholas; Grant, Audrey Virginia; Faruque, Mezbah U.; Watson, Harold R.; Ruczinski, Ingo; Dunston, Georgia M.; Barnes, Kathleen C.

    2013-01-01

    Admixture is a potential source of confounding in genetic association studies, so it becomes important to detect and estimate admixture in a sample of unrelated individuals. Populations of African descent in the US and the Caribbean share similar historical backgrounds but the distributions of African admixture may differ. We selected 416 ancestry informative markers (AIMs) to estimate and compare admixture proportions using STRUCTURE in 906 unrelated African Americans (AAs) and 294 Barbadians (ACs) from a study of asthma. This analysis showed AAs on average were 72.5% African, 19.6% European and 8% Asian, while ACs were 77.4% African, 15.9% European, and 6.7% Asian which were significantly different. A principal components analysis based on these AIMs yielded one primary eigenvector that explained 54.04% of the variation and captured a gradient from West African to European admixture. This principal component was highly correlated with African vs. European ancestry as estimated by STRUCTURE (r2 = 0.992, r2 = 0.912, respectively). To investigate other African contributions to African American and Barbadian admixture, we performed PCA on ~14,000 (14k) genome-wide SNPs in AAs, ACs, Yorubans, Luhya and Maasai African groups, and estimated genetic distances (FST). We found AAs and ACs were closest genetically (FST = 0.008), and both were closer to the Yorubans than the other East African populations. In our sample of individuals of African descent, ~400 well-defined AIMs were just as good for detecting substructure as ~14,000 random SNPs drawn from a genome-wide panel of markers. PMID:20717976

  3. Improving Robot Locomotion Through Learning Methods for Expensive Black-Box Systems

    DTIC Science & Technology

    2013-11-01

    The work describes the development of a class of "gradient free" optimization techniques; these include local approaches, such as a Nelder-Mead simplex search (cf. [73]), and global approaches, such as the Non-dominated Sorting Genetic Algorithm. (Note that this simple method differs from the Nelder-Mead constrained nonlinear optimization method [73].)

  4. Thermo-hydraulics of the Peruvian accretionary complex at 12°S

    USGS Publications Warehouse

    Kukowski, Nina; Pecher, Ingo

    1999-01-01

    The models were constrained by the thermal gradient obtained from the depth of bottom-simulating reflectors (BSRs) at the lower slope and some conventional measurements. We found that significant frictional heating is required to explain the observed strong landward increase of heat flux. This is consistent with results from sandbox modelling which predict strong basal friction at this margin. A significantly higher heat source is needed to match the observed thermal gradient in the southern line.

  5. Gradient Projection Anti-windup Scheme on Constrained Planar LTI Systems

    DTIC Science & Technology

    2010-03-15

    It was recognized in a recent survey paper [2] that anti-windup compensation for nonlinear systems remains largely an open problem. This report analyzes the properties of the gradient projection anti-windup (GPAW) scheme applied to input-constrained planar LTI systems.

  6. Cancer patterns among children of Turkish descent in Germany: A study at the German Childhood Cancer Registry

    PubMed Central

    Spallek, Jacob; Spix, Claudia; Zeeb, Hajo; Kaatsch, Peter; Razum, Oliver

    2008-01-01

    Background Cancer risks of migrants might differ from risks of the indigenous population due to differences in socioeconomic status, life style, or genetic factors. The aim of this study was to investigate cancer patterns among children of Turkish descent in Germany. Methods We identified cases with Turkish names (as a proxy of Turkish descent) among the 37,259 cases of childhood cancer registered in the German Childhood Cancer Registry (GCCR) during 1980–2005. As it is not possible to obtain reference population data for children of Turkish descent, the distribution of cancer diagnoses was compared between cases of Turkish descent and all remaining (mainly German) cases in the registry, using proportional cancer incidence ratios (PCIRs). Results The overall distribution of cancer diagnoses was similar in the two groups. The PCIRs in three diagnosis groups were increased for cases of Turkish descent: acute non-lymphocytic leukaemia (PCIR 1.23; CI (95%) 1.02–1.47), Hodgkin's disease (1.34; 1.13–1.59) and Non-Hodgkin/Burkitt lymphoma (1.19; 1.02–1.39). Age, sex, and period of diagnosis showed no influence on the distribution of diagnoses. Conclusion No major differences were found in cancer patterns among cases of Turkish descent compared to all other cases in the GCCR. Slightly higher proportions of systemic malignant diseases indicate that analytical studies involving migrants may help investigating the causes of such cancers. PMID:18462495

  7. Regression Analysis of Top of Descent Location for Idle-thrust Descents

    NASA Technical Reports Server (NTRS)

    Stell, Laurel; Bronsvoort, Jesper; McDonald, Greg

    2013-01-01

    In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. The independent variables cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also recorded or computed post-operations. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace. In particular, a model for TOD location that is linear in the independent variables would enable decision support tool human-machine interfaces for which a kinetic approach would be computationally too slow.
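
    A sketch of fitting such a first-order model by ordinary least squares; the synthetic data and coefficients below are assumptions for illustration only, not the Melbourne data set or the paper's fitted model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 1000

    # Synthetic stand-ins for the recorded trajectory parameters.
    cruise_alt = rng.uniform(30000, 40000, n)      # ft
    final_alt = rng.uniform(3000, 10000, n)        # ft
    descent_spd = rng.uniform(250, 310, n)         # kt
    wind = rng.uniform(-50, 50, n)                 # kt (tailwind positive)

    # Assumed "true" linear relation plus noise, for illustration only.
    tod_dist = (0.003 * (cruise_alt - final_alt) + 0.05 * descent_spd
                + 0.10 * wind + rng.normal(0, 3.9, n))   # nmi before the metering fix

    # First-order multiple regression via ordinary least squares.
    X = np.column_stack([np.ones(n), cruise_alt, final_alt, descent_spd, wind])
    coef, *_ = np.linalg.lstsq(X, tod_dist, rcond=None)
    resid = tod_dist - X @ coef
    print("coefficients:", np.round(coef, 4))
    print("residual std dev (nmi):", resid.std(ddof=X.shape[1]))
    ```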

  8. Equatorial Oscillations in Jupiter's and Saturn's Atmospheres

    NASA Technical Reports Server (NTRS)

    Flasar, F. Michael; Guerlet, S.; Fouchet, T.; Schinder, P. J.

    2011-01-01

    Equatorial oscillations in the zonal-mean temperatures and zonal winds have been well documented in Earth's middle atmosphere. A growing body of evidence from ground-based and Cassini spacecraft observations indicates that such phenomena also occur in the stratospheres of Jupiter and Saturn. Earth-based midinfrared measurements spanning several decades have established that the equatorial stratospheric temperatures on Jupiter vary with a cycle of 4-5 years and on Saturn with a cycle of approximately 15 years. Spectra obtained by the Composite Infrared Spectrometer (CIRS) during the Cassini swingby at the end of 2000, with much better vertical resolution than the ground-based data, indicated a series of vertically stacked warm and cold anomalies at Jupiter's equator; a similar structure was seen at Saturn's equator in CIRS limb measurements made in 2005, in the early phase of Cassini's orbital tour. The thermal wind equation implied similar patterns of mean zonal winds increasing and decreasing with altitude. On Saturn the peak-to-peak amplitude of this variation was nearly 200 meters per second. The alternating vertical pattern of warmer and colder equatorial temperatures and easterly and westerly tendencies of the zonal winds is seen in Earth's equatorial oscillations, where the pattern descends with time. The Cassini Jupiter and early Saturn observations were snapshots within a limited time interval, and they did not show the temporal evolution of the spatial patterns. However, more recent Saturn observations by CIRS (2010) and Cassini radio-occultation soundings (2009-2010) have provided an opportunity to follow the change of the temperature-zonal wind pattern, and they suggest there is descent, at a rate of roughly one scale height over four years. On Earth, the observed descent in the zonal-mean structure is associated with the absorption of a combination of vertically propagating waves with easterly and westerly phase velocities. The peak-to-peak zonal wind amplitude in the oscillation pattern and the rate of descent constrain the absorbed wave flux of zonal momentum. On Saturn this is approximately 0.05 square meters per second squared, which is comparable to if not greater than that associated with the terrestrial oscillations. We discuss possible candidates for the absorbed waves on Saturn. On Earth the wave forcing of the equatorial oscillation generates secondary circulations that can affect the temperature and wind structure at latitudes well away from the equator, and we discuss possible evidence of that on Saturn.

  9. Characterization of Thin Film Materials using SCAN meta-GGA, an Accurate Nonempirical Density Functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buda, I. G.; Lane, C.; Barbiellini, B.

    We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of ‘beyond graphene’ compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to the density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.

  10. Characterization of Thin Film Materials using SCAN meta-GGA, an Accurate Nonempirical Density Functional

    DOE PAGES

    Buda, I. G.; Lane, C.; Barbiellini, B.; ...

    2017-03-23

    We discuss self-consistently obtained ground-state electronic properties of monolayers of graphene and a number of ‘beyond graphene’ compounds, including films of transition-metal dichalcogenides (TMDs), using the recently proposed strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) to the density functional theory. The SCAN meta-GGA results are compared with those based on the local density approximation (LDA) as well as the generalized gradient approximation (GGA). As expected, the GGA yields expanded lattices and softened bonds in relation to the LDA, but the SCAN meta-GGA systematically improves the agreement with experiment. Our study suggests the efficacy of the SCAN functional for accurate modeling of electronic structures of layered materials in high-throughput calculations more generally.

  11. Optimum sensor placement for microphone arrays

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.

    Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. The shape and gain advantage of the capture region for these techniques is described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
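
    The second step of the location method (a least-squares gradient descent over candidate source positions given pairwise TDOA estimates) can be sketched as follows. The microphone geometry, speed of sound, step size, and iteration count are illustrative assumptions, the cross-power spectrum phase TDOA estimation itself is not reproduced, and convergence of plain gradient descent depends on the initialization and step size.

      import numpy as np

      def locate_source(mics, pairs, tdoas, c=343.0, lr=0.01, n_iter=5000):
          # Least-squares gradient descent on range differences:
          # minimize sum_k ( (|x - m_i| - |x - m_j|) - c * tau_k )^2
          x = mics.mean(axis=0).copy()          # start at the array centroid
          for _ in range(n_iter):
              grad = np.zeros(3)
              for (i, j), tau in zip(pairs, tdoas):
                  di, dj = x - mics[i], x - mics[j]
                  ri, rj = np.linalg.norm(di), np.linalg.norm(dj)
                  resid = (ri - rj) - c * tau
                  grad += 2.0 * resid * (di / ri - dj / rj)
              x -= lr * grad
          return x

      # Illustrative geometry: four microphones at staggered heights, source at (2, 1, 1.5) m.
      mics = np.array([[0, 0, 0.5], [4, 0, 1.0], [0, 4, 1.5], [4, 4, 2.0]], float)
      src = np.array([2.0, 1.0, 1.5])
      pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
      tdoas = [(np.linalg.norm(src - mics[i]) - np.linalg.norm(src - mics[j])) / 343.0
               for i, j in pairs]
      print(locate_source(mics, pairs, tdoas))  # estimated source position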

  12. The Role of la Familia for Women of Mexican Descent Who Are Leaders in Higher Education

    ERIC Educational Resources Information Center

    Elizondo, Sandra Gray

    2012-01-01

    The purpose of this qualitative case study was to describe the role of "la familia" for women of Mexican descent as it relates to their development as leaders and their leadership in academia. Purposeful sampling was utilized to reach the goal of 18 participants who were female academic leaders of Mexican descent teaching full time in…

  13. Investigation of iterative image reconstruction in low-dose breast CT

    NASA Astrophysics Data System (ADS)

    Bian, Junguo; Yang, Kai; Boone, John M.; Han, Xiao; Sidky, Emil Y.; Pan, Xiaochuan

    2014-06-01

    There is interest in developing computed tomography (CT) dedicated to breast-cancer imaging. Because breast tissues are radiation-sensitive, the total radiation exposure in a breast-CT scan is kept low, often comparable to a typical two-view mammography exam, thus resulting in a challenging low-dose-data-reconstruction problem. In recent years, evidence has been found suggesting that iterative reconstruction may yield images of improved quality from low-dose data. In this work, based upon the constrained image total-variation minimization program and its numerical solver, i.e., the adaptive steepest descent-projection onto convex sets (ASD-POCS), we investigate and evaluate iterative image reconstructions from low-dose breast-CT data of patients, with a focus on identifying and determining key reconstruction parameters, devising surrogate utility metrics for characterizing reconstruction quality, and tailoring the program and ASD-POCS to the specific reconstruction task under consideration. The ASD-POCS reconstructions appear to outperform the corresponding clinical FDK reconstructions, in terms of subjective visualization and surrogate utility metrics.
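
    As a rough illustration of the two ingredients named above, the sketch below alternates an algebraic-reconstruction (ART/POCS) data-consistency sweep with a few steepest-descent steps on the image total variation. It is a minimal 2D toy assuming a dense system matrix A and hand-picked step sizes; the adaptive step balancing and stopping criteria of the published ASD-POCS solver, and the cone-beam geometry of breast CT, are not reproduced here.

      import numpy as np

      def tv_gradient(img, eps=1e-8):
          # Gradient of a smoothed isotropic total variation (periodic borders for brevity).
          dx = np.roll(img, -1, axis=1) - img
          dy = np.roll(img, -1, axis=0) - img
          mag = np.sqrt(dx**2 + dy**2 + eps)
          gx, gy = dx / mag, dy / mag
          div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
          return -div

      def asd_pocs_like(A, b, shape, n_outer=20, n_tv=10, tv_step=0.1):
          # Alternate an ART/POCS data-consistency sweep, a nonnegativity projection,
          # and a few steepest-descent steps on image TV.
          x = np.zeros(A.shape[1])
          row_norms = (A ** 2).sum(axis=1)
          for _ in range(n_outer):
              for i in range(A.shape[0]):                     # ART (Kaczmarz) sweep
                  if row_norms[i] > 0:
                      x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
              x = np.clip(x, 0.0, None)                       # nonnegativity constraint
              img = x.reshape(shape)
              for _ in range(n_tv):                           # TV steepest descent
                  g = tv_gradient(img)
                  gnorm = np.linalg.norm(g)
                  if gnorm > 0:
                      img = img - tv_step * g / gnorm
              x = img.ravel()
          return x.reshape(shape)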

  14. Robust Airfoil Optimization to Achieve Consistent Drag Reduction Over a Mach Range

    NASA Technical Reports Server (NTRS)

    Li, Wu; Huyse, Luc; Padula, Sharon; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    We prove mathematically that in order to avoid point-optimization at the sampled design points for multipoint airfoil optimization, the number of design points must be greater than the number of free design variables. To overcome point-optimization at the sampled design points, a robust airfoil optimization method (called the profile optimization method) is developed and analyzed. This optimization method aims at a consistent drag reduction over a given Mach range and has three advantages: (a) it prevents severe degradation in the off-design performance by using a smart descent direction in each optimization iteration, (b) there is no random airfoil shape distortion for any iterate it generates, and (c) it allows a designer to make a trade-off between a truly optimized airfoil and the amount of computing time consumed. For illustration purposes, we use the profile optimization method to solve a lift-constrained drag minimization problem for a 2-D airfoil in Euler flow with 20 free design variables. A comparison with other airfoil optimization methods is also included.

  15. Surface erosion caused on Mars from Viking descent engine plume

    USGS Publications Warehouse

    Hutton, R.E.; Moore, H.J.; Scott, R.F.; Shorthill, R.W.; Spitzer, C.R.

    1980-01-01

    During the Martian landings the descent engine plumes on Viking Lander 1 (VL-1) and Viking Lander 2 (VL-2) eroded the Martian surface materials. This had been anticipated and investigated both analytically and experimentally during the design phase of the Viking spacecraft. This paper presents data on erosion obtained during the tests of the Viking descent engine and the evidence for erosion by the descent engines of VL-1 and VL-2 on Mars. From these and other results, it is concluded that there are four distinct surface materials on Mars: (1) drift material, (2) crusty to cloddy material, (3) blocky material, and (4) rock. © 1980 D. Reidel Publishing Co.

  16. Local flow management/profile descent algorithm. Fuel-efficient, time-controlled profiles for the NASA TSRV airplane

    NASA Technical Reports Server (NTRS)

    Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.

    1986-01-01

    The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.

  17. Structure of neutron star crusts from new Skyrme effective interactions constrained by chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Lim, Yeunhwan; Holt, Jeremy W.

    2017-06-01

    We investigate the structure of neutron star crusts, including the crust-core boundary, based on new Skyrme mean field models constrained by the bulk-matter equation of state from chiral effective field theory and the ground-state energies of doubly-magic nuclei. Nuclear pasta phases are studied using both the liquid drop model as well as the Thomas-Fermi approximation. We compare the energy per nucleon for each geometry (spherical nuclei, cylindrical nuclei, nuclear slabs, cylindrical holes, and spherical holes) to obtain the ground state phase as a function of density. We find that the size of the Wigner-Seitz cell depends strongly on the model parameters, especially the coefficients of the density gradient interaction terms. We employ also the thermodynamic instability method to check the validity of the numerical solutions based on energy comparisons.

  18. Prestack density inversion using the Fatti equation constrained by the P- and S-wave impedance and density

    NASA Astrophysics Data System (ADS)

    Liang, Li-Feng; Zhang, Hong-Bing; Dan, Zhi-Wei; Xu, Zi-Qiang; Liu, Xiu-Juan; Cao, Cheng-Hao

    2017-03-01

    Simultaneous prestack inversion is based on the modified Fatti equation and uses the ratio of the P- and S-wave velocity as constraints. We use the relation of P-wave impedance and density (PID) and S-wave impedance and density (SID) to replace the constant Vp/Vs constraint, and we propose the improved constrained Fatti equation to overcome the effect of P-wave impedance on density. We compare the sensitivity of both methods using numerical simulations and conclude that the density inversion sensitivity improves when using the proposed method. In addition, the random conjugate-gradient method is used in the inversion because it is fast and produces global solutions. The use of synthetic and field data suggests that the proposed inversion method is effective in conventional and nonconventional lithologies.

  19. Integrated Targeting and Guidance for Powered Planetary Descent

    NASA Astrophysics Data System (ADS)

    Azimov, Dilmurat M.; Bishop, Robert H.

    2018-02-01

    This paper presents an on-board guidance and targeting design that enables explicit state and thrust vector control and on-board targeting for planetary descent and landing. These capabilities are developed utilizing a new closed-form solution for the constant thrust arc of the braking phase of the powered descent trajectory. The key elements of proven targeting and guidance architectures, including braking and approach phase quartics, are employed. It is demonstrated that implementation of the proposed solution avoids numerical simulation iterations, thereby facilitating on-board execution of targeting procedures during the descent. It is shown that the shape of the braking phase constant thrust arc is highly dependent on initial mass and propulsion system parameters. The analytic solution process is explicit in terms of targeting and guidance parameters, while remaining generic with respect to planetary body and descent trajectory design. These features increase the feasibility of extending the proposed integrated targeting and guidance design to future cargo and robotic landing missions.

  20. Integrated Targeting and Guidance for Powered Planetary Descent

    NASA Astrophysics Data System (ADS)

    Azimov, Dilmurat M.; Bishop, Robert H.

    2018-06-01

    This paper presents an on-board guidance and targeting design that enables explicit state and thrust vector control and on-board targeting for planetary descent and landing. These capabilities are developed utilizing a new closed-form solution for the constant thrust arc of the braking phase of the powered descent trajectory. The key elements of proven targeting and guidance architectures, including braking and approach phase quartics, are employed. It is demonstrated that implementation of the proposed solution avoids numerical simulation iterations, thereby facilitating on-board execution of targeting procedures during the descent. It is shown that the shape of the braking phase constant thrust arc is highly dependent on initial mass and propulsion system parameters. The analytic solution process is explicit in terms of targeting and guidance parameters, while remaining generic with respect to planetary body and descent trajectory design. These features increase the feasibility of extending the proposed integrated targeting and guidance design to future cargo and robotic landing missions.

  1. A molecular signature of an arrest of descent in human parturition

    PubMed Central

    MITTAL, Pooja; ROMERO, Roberto; TARCA, Adi L.; DRAGHICI, Sorin; NHAN-CHANG, Chia-Ling; CHAIWORAPONGSA, Tinnakorn; HOTRA, John; GOMEZ, Ricardo; KUSANOVIC, Juan Pedro; LEE, Deug-Chan; KIM, Chong Jai; HASSAN, Sonia S.

    2010-01-01

    Objective This study was undertaken to identify the molecular basis of an arrest of descent. Study Design Human myometrium was obtained from women in term labor (TL; n=29) and arrest of descent (AODes, n=21). Gene expression was characterized using Illumina® HumanHT-12 microarrays. A moderated t-test and false discovery rate adjustment were applied for analysis. Confirmatory qRT-PCR and immunoblotting were performed in an independent sample set. Results 400 genes were differentially expressed between women with an AODes and those with TL. Gene Ontology analysis indicated enrichment of biological processes and molecular functions related to inflammation and muscle function. Impacted pathways included inflammation and the actin cytoskeleton. Overexpression of HIF1A, IL-6, and PTGS2 in AODes was confirmed. Conclusion We have identified a stereotypic pattern of gene expression in the myometrium of women with an arrest of descent. This represents the first study examining the molecular basis of an arrest of descent using a genome-wide approach. PMID:21284969

  2. Air-Traffic Controllers Evaluate The Descent Advisor

    NASA Technical Reports Server (NTRS)

    Tobias, Leonard; Volckers, Uwe; Erzberger, Heinz

    1992-01-01

    Report describes study of Descent Advisor algorithm: software automation aid intended to assist air-traffic controllers in spacing traffic and meeting specified times of arrival. Based partly on mathematical models of weather conditions and performances of aircraft, it generates suggested clearances, including top-of-descent points and speed-profile data to attain objectives. Study focused on operational characteristics with specific attention to how it can be used for prediction, spacing, and metering.

  3. Design principles of descent vehicles with an inflatable braking device

    NASA Astrophysics Data System (ADS)

    Alexashkin, S. N.; Pichkhadze, K. M.; Finchenko, V. S.

    2013-12-01

    A new type of descent vehicle (DV) is described: a descent vehicle with an inflatable braking device (IBD DV). IBD development issues, as well as materials needed for the design, manufacturing, and testing of an IBD and its thermal protection, are discussed. A list is given of Russian integrated test facilities intended for testing IBD DVs. Progress is described in the development of IBD DVs in Russia and abroad.

  4. Synonymous ABCA3 Variants Do Not Increase Risk for Neonatal Respiratory Distress Syndrome

    PubMed Central

    Wambach, Jennifer A.; Wegner, Daniel J.; Heins, Hillary B.; Druley, Todd E.; Mitra, Robi D.; Hamvas, Aaron; Cole, F. Sessions

    2014-01-01

    Objective To determine whether synonymous variants in the adenosine triphosphate-binding cassette A3 transporter (ABCA3) gene increase the risk for neonatal respiratory distress syndrome (RDS) in term and late preterm infants of European and African descent. Study design Using next-generation pooled sequencing of race-stratified DNA samples from infants of European and African descent at ≥34 weeks gestation with and without RDS (n = 503), we scanned all exons of ABCA3, validated each synonymous variant with an independent genotyping platform, and evaluated race-stratified disease risk associated with common synonymous variants and collapsed frequencies of rare synonymous variants. Results The synonymous ABCA3 variant frequency spectrum differs between infants of European descent and those of African descent. Using in silico prediction programs and statistical strategies, we found no potentially disruptive synonymous ABCA3 variants or evidence of selection pressure. Individual common synonymous variants and collapsed frequencies of rare synonymous variants did not increase disease risk in term and late-preterm infants of European or African descent. Conclusion In contrast to rare, nonsynonymous ABCA3 mutations, synonymous ABCA3 variants do not increase the risk for neonatal RDS among term and late-preterm infants of European or African descent. PMID:24657120

  5. A systematic review and meta-analysis of comparative studies assessing the efficacy of luteinizing hormone-releasing hormone therapy for children with cryptorchidism.

    PubMed

    Li, Tao; Gao, Liang; Chen, Peng; Bu, Siyuan; Cao, Dehong; Yang, Lu; Wei, Qiang

    2016-05-01

    To assess the efficacy of intranasal luteinizing hormone-releasing hormone (LHRH) therapy for cryptorchidism. Eligible studies were identified by two reviewers using PubMed, Embase, and Web of Science databases. Primary outcomes were complete testicular descent rate, complete testicular descent rate for nonpalpable testis, and pre-scrotal and inguinal testis. Secondary outcomes included testicular descent under different medication strategies and a subgroup analysis. Pooled data including 1255 undescended testes showed that the complete testicular descent rate was 20.9 % in the LHRH group versus 5.6 % in the placebo group, which was significantly different [relative risk (RR) 3.94, 95 % confidence interval (CI) 2.14-7.28, P < 0.0001]. There was also a significant difference in the incidence of pre-scrotal and inguinal position testis descent, with 22.8 % in the LHRH group versus 3.6 % in the placebo group (RR 5.79, 95 % CI 2.94-11.39, P < 0.00001). However, side effects were more frequent in the LHRH group (RR 2.61, 95 % CI 1.52-4.49, P = 0.0005). There were no significant differences for nonpalpable testes. LHRH had significant benefits on testicular descent, particularly for inguinal and pre-scrotal testes, although this was accompanied by temporary, mild side effects.
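
    For readers unfamiliar with the statistics quoted above, the snippet below shows how a relative risk and its 95% confidence interval are obtained from 2x2 counts using the usual log-RR normal approximation. The counts are hypothetical, chosen only to mimic the reported descent rates; the study's pooled estimates come from a weighted meta-analysis across trials, which this does not reproduce.

      import math

      def relative_risk(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
          # Relative risk with a Wald-type 95% CI on the log scale.
          rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
          se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctrl - 1 / n_ctrl)
          lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
          return rr, lo, hi

      # Hypothetical 2x2 counts (not the study's pooled data): 21% vs 5.6% descent rates.
      print(relative_risk(events_tx=105, n_tx=500, events_ctrl=28, n_ctrl=500))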

  6. Global Patterns of Prostate Cancer Incidence, Aggressiveness, and Mortality in Men of African Descent

    PubMed Central

    Rebbeck, Timothy R.; Devesa, Susan S.; Chang, Bao-Li; Bunker, Clareann H.; Cheng, Iona; Cooney, Kathleen; Eeles, Rosalind; Fernandez, Pedro; Giri, Veda N.; Gueye, Serigne M.; Haiman, Christopher A.; Henderson, Brian E.; Heyns, Chris F.; Hu, Jennifer J.; Ingles, Sue Ann; Isaacs, William; Jalloh, Mohamed; John, Esther M.; Kibel, Adam S.; Kidd, LaCreis R.; Layne, Penelope; Leach, Robin J.; Neslund-Dudas, Christine; Okobia, Michael N.; Ostrander, Elaine A.; Park, Jong Y.; Patrick, Alan L.; Phelan, Catherine M.; Ragin, Camille; Roberts, Robin A.; Rybicki, Benjamin A.; Stanford, Janet L.; Strom, Sara; Thompson, Ian M.; Witte, John; Xu, Jianfeng; Yeboah, Edward; Hsing, Ann W.; Zeigler-Johnson, Charnita M.

    2013-01-01

    Prostate cancer (CaP) is the leading cancer among men of African descent in the USA, Caribbean, and Sub-Saharan Africa (SSA). The estimated number of CaP deaths in SSA during 2008 was more than five times that among African Americans and is expected to double in Africa by 2030. We summarize publicly available CaP data and collected data from the Men of African Descent and Carcinoma of the Prostate (MADCaP) Consortium and the African Caribbean Cancer Consortium (AC3) to evaluate CaP incidence and mortality in men of African descent worldwide. CaP incidence and mortality are highest in men of African descent in the USA and the Caribbean. Tumor stage and grade were highest in SSA. We report a higher proportion of T1 stage prostate tumors in countries with greater percent gross domestic product spent on health care and physicians per 100,000 persons. We also observed that regions with a higher proportion of advanced tumors reported lower mortality rates. This finding suggests that CaP is underdiagnosed and/or underreported in SSA men. Nonetheless, CaP incidence and mortality represent a significant public health problem in men of African descent around the world. PMID:23476788

  7. Evolutionary analyses of non-genealogical bonds produced by introgressive descent.

    PubMed

    Bapteste, Eric; Lopez, Philippe; Bouchard, Frédéric; Baquero, Fernando; McInerney, James O; Burian, Richard M

    2012-11-06

    All evolutionary biologists are familiar with evolutionary units that evolve by vertical descent in a tree-like fashion in single lineages. However, many other kinds of processes contribute to evolutionary diversity. In vertical descent, the genetic material of a particular evolutionary unit is propagated by replication inside its own lineage. In what we call introgressive descent, the genetic material of a particular evolutionary unit propagates into different host structures and is replicated within these host structures. Thus, introgressive descent generates a variety of evolutionary units and leaves recognizable patterns in resemblance networks. We characterize six kinds of evolutionary units, of which five involve mosaic lineages generated by introgressive descent. To facilitate detection of these units in resemblance networks, we introduce terminology based on two notions, P3s (subgraphs of three nodes: A, B, and C) and mosaic P3s, and suggest an apparatus for systematic detection of introgressive descent. Mosaic P3s correspond to a distinct type of evolutionary bond that is orthogonal to the bonds of kinship and genealogy usually examined by evolutionary biologists. We argue that recognition of these evolutionary bonds stimulates radical rethinking of key questions in evolutionary biology (e.g., the relations among evolutionary players in very early phases of evolutionary history, the origin and emergence of novelties, and the production of new lineages). This line of research will expand the study of biological complexity beyond the usual genealogical bonds, revealing additional sources of biodiversity. It provides an important step to a more realistic pluralist treatment of evolutionary complexity.
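
    One common reading of the P3 pattern described above is a three-node path A-B-C in a resemblance (similarity) network whose end nodes A and C are not directly connected. The sketch below enumerates such open triads with networkx on a small hypothetical graph; the further classification of P3s as mosaic requires sequence-composition information that is not modeled here.

      import networkx as nx

      # Hypothetical resemblance network: nodes are sequences, edges are significant similarity.
      G = nx.Graph()
      G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")])

      def open_p3s(graph):
          # Yield three-node paths A-B-C whose end nodes are not directly connected.
          for b in graph:
              nbrs = sorted(graph[b])
              for i, a in enumerate(nbrs):
                  for c in nbrs[i + 1:]:
                      if not graph.has_edge(a, c):
                          yield (a, b, c)

      print(list(open_p3s(G)))   # [('A', 'B', 'C'), ('A', 'B', 'D')]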

  8. Design of automation tools for management of descent traffic

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Nedell, William

    1988-01-01

    The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display consisting of a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system consisting of descent advisor algorithm, a library of aircraft performance models, national airspace system data bases, and interactive display software has been implemented on a workstation made by Sun Microsystems, Inc. It is planned to use this configuration in operational evaluations at an en route center.

  9. Reduced-gravity Testing of The Huygens Probe Ssp Tiltmeter and Hasi Accelerometer Sensors and Their Role In Reconstruction of The Probe Descent Dynamics

    NASA Astrophysics Data System (ADS)

    Ghafoor, N.; Zarnecki, J.

    When the ESA Huygens Probe arrives at Titan in 2005, measurements taken during and after the descent through the atmosphere are likely to revolutionise our understanding of Saturn's most enigmatic moon. The accurate atmospheric profiling of Titan from these measurements will require knowledge of the probe descent trajectory and in some cases attitude history, whilst certain atmospheric information (e.g. wind speeds) may be inferred directly from the probe dynamics during descent. Two of the instruments identified as contributing valuable information for the reconstruction of the probe's parachute descent dynamics are the Surface Science Package Tilt sensor (SSP-TIL) and the Huygens Atmospheric Structure Instrument servo accelerometer (HASI-ACC). This presentation provides an overview of these sensors and their static calibration before describing an investigation into their real-life dynamic performance under simulated Titan-gravity conditions via a low-cost parabolic flight opportunity. The combined use of SSP-TIL and HASI-ACC in characterising the aircraft dynamics is also demonstrated and some important challenges are highlighted. Results from some simple spin tests are also presented. Finally, having validated the performance of the sensors under simulated Titan conditions, estimates are made as to the output of SSP-TIL and HASI-ACC under a variety of probe dynamics, ranging from vertical descent with spin to a simple 3 degree-of-freedom parachute descent model with horizontal gusting. It is shown how careful consideration must be given to the instruments' principles of operation in each case, and also the impact of the sampling rates and resolutions as selected for the Huygens mission. The presentation concludes with a discussion of ongoing work on more advanced descent modelling and surface dynamics modelling, and also of a proposal for the testing of the sensors on a sea-surface.

  10. Mars Descent Imager (MARDI) on the Mars Polar Lander

    USGS Publications Warehouse

    Malin, M.C.; Caplinger, M.A.; Carr, M.H.; Squyres, S.; Thomas, P.; Veverka, J.

    2001-01-01

    The Mars Descent Imager, or MARDI, experiment on the Mars Polar Lander (MPL) consists of a camera characterized by small physical size and mass (~6 × 6 × 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 × 1000 pixel, low noise). The intent of the investigation is to acquire nested images over a range of resolutions, from 8 m/pixel to better than 1 cm/pixel, during the roughly 2 min it takes the MPL to descend from 8 km to the surface under parachute and rocket-powered deceleration. Observational goals will include studies of (1) surface morphology (e.g., nature and distribution of landforms indicating past and present environmental processes); (2) local and regional geography (e.g., context for other lander instruments: precise location, detailed local relief); and (3) relationships to features seen in orbiter data. To accomplish these goals, MARDI will collect three types of images. Four small images (256 × 256 pixels) will be acquired on 0.5 s centers beginning 0.3 s before MPL's heatshield is jettisoned. Sixteen full-frame images (1024 × 1024, circularly edited) will be acquired on 5.3 s centers thereafter. Just after backshell jettison but prior to the start of powered descent, a "best final nonpowered descent image" will be acquired. Five seconds after the start of powered descent, the camera will begin acquiring images on 4 s centers. Storage for as many as ten 800 × 800 pixel images is available during terminal descent. A number of spacecraft factors are likely to impact the quality of MARDI images, including substantial motion blur resulting from large rates of attitude variation during parachute descent and substantial rocket-engine-induced vibration during powered descent. In addition, the mounting location of the camera places the exhaust plume of the hydrazine engines prominently in the field of view. Copyright 2001 by the American Geophysical Union.

  11. Compressed sensing for rapid late gadolinium enhanced imaging of the left atrium: A preliminary study.

    PubMed

    Kamesh Iyer, Srikant; Tasdizen, Tolga; Burgon, Nathan; Kholmovski, Eugene; Marrouche, Nassir; Adluru, Ganesh; DiBella, Edward

    2016-09-01

    Current late gadolinium enhancement (LGE) imaging of left atrial (LA) scar or fibrosis is relatively slow and requires 5-15 min to acquire an undersampled (R=1.7) 3D navigated dataset. The GeneRalized Autocalibrating Partially Parallel Acquisitions (GRAPPA) based parallel imaging method is the current clinical standard for accelerating 3D LGE imaging of the LA and permits an acceleration factor ~R=1.7. Two compressed sensing (CS) methods have been developed to achieve higher acceleration factors: a patch based collaborative filtering technique tested with acceleration factor R~3, and a technique that uses a 3D radial stack-of-stars acquisition pattern (R~1.8) with a 3D total variation constraint. The long reconstruction time of these CS methods makes them unwieldy to use, especially the patch based collaborative filtering technique. In addition, the effect of CS techniques on the quantification of percentage of scar/fibrosis is not known. We sought to develop a practical compressed sensing method for imaging the LA at high acceleration factors. In order to develop a clinically viable method with short reconstruction time, a Split Bregman (SB) reconstruction method with 3D total variation (TV) constraints was developed and implemented. The method was tested on 8 atrial fibrillation patients (4 pre-ablation and 4 post-ablation datasets). Blur metric, normalized mean squared error and peak signal to noise ratio were used as metrics to analyze the quality of the reconstructed images. Quantification of the extent of LGE was performed on the undersampled images and compared with the fully sampled images. Quantification of scar from post-ablation datasets and quantification of fibrosis from pre-ablation datasets showed that acceleration factors up to R~3.5 gave good 3D LGE images of the LA wall, using a 3D TV constraint and constrained SB methods. This corresponds to reducing the scan time by half, compared to currently used GRAPPA methods. Reconstruction of 3D LGE images using the SB method was over 20 times faster than standard gradient descent methods. Copyright © 2016 Elsevier Inc. All rights reserved.
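
    The Split Bregman idea can be illustrated on a small 2D problem: anisotropic TV denoising, where the quadratic u-subproblem is solved exactly by an FFT under periodic boundary conditions and the l1 subproblems by soft-thresholding. This is a minimal sketch with assumed parameters mu and lam; the paper's reconstruction instead enforces fidelity to undersampled 3D k-space data with a 3D TV constraint, which is not reproduced here.

      import numpy as np

      def shrink(x, t):
          # Soft-thresholding: proximal operator of the l1 norm.
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def sb_tv_denoise(f, mu=20.0, lam=1.0, n_iter=50):
          # Split Bregman for min_u |Dx u|_1 + |Dy u|_1 + (mu/2) ||u - f||^2,
          # with periodic boundaries so the u-update is a single FFT solve.
          f = np.asarray(f, dtype=float)
          ny, nx = f.shape
          kx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(nx) / nx)
          ky = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(ny) / ny)
          denom = mu + lam * (kx[None, :] + ky[:, None])      # Fourier symbol of mu*I - lam*Laplacian

          Dx  = lambda u: np.roll(u, -1, axis=1) - u
          Dy  = lambda u: np.roll(u, -1, axis=0) - u
          DxT = lambda v: np.roll(v, 1, axis=1) - v
          DyT = lambda v: np.roll(v, 1, axis=0) - v

          u = f.copy()
          dx, dy = np.zeros_like(f), np.zeros_like(f)
          bx, by = np.zeros_like(f), np.zeros_like(f)
          for _ in range(n_iter):
              rhs = mu * f + lam * (DxT(dx - bx) + DyT(dy - by))
              u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))   # quadratic u-subproblem
              dx = shrink(Dx(u) + bx, 1.0 / lam)                    # l1 subproblems via shrinkage
              dy = shrink(Dy(u) + by, 1.0 / lam)
              bx = bx + Dx(u) - dx                                  # Bregman variable updates
              by = by + Dy(u) - dy
          return u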

  12. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy

    NASA Astrophysics Data System (ADS)

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-01

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such a good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporal-correlated image reconstruction and dose reductions increases the number of cases where rawdata are available from only few projection angles. Here, deteriorated image quality leads to non-acceptable deformable volume-to-volume registration results. Therefore a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse rawdata and provides more stable results than volume-to-volume approaches. By applying the proposed registration approach to low dose tomographic fluoroscopy it is possible to improve the temporal resolution and thus to increase the robustness of low dose tomographic fluoroscopy.
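
    The alternating structure described above (a gradient step on the matching term followed by Gaussian smoothing of the displacement field) can be sketched in a toy 2D, image-to-image setting. The SSD-in-image-domain force, step size, and smoothing width below are assumptions for illustration; the paper's method instead evaluates its fidelity term in the rawdata (projection) domain and optimizes it with a nonlinear conjugate gradient.

      import numpy as np
      from scipy.ndimage import gaussian_filter, map_coordinates

      def register_alternating(fixed, moving, n_iter=200, step=0.5, sigma=2.0):
          # Toy 2D registration: gradient step on an SSD matching term, then
          # Gaussian smoothing of the displacement field (fluid-like regularization).
          fixed = np.asarray(fixed, float)
          moving = np.asarray(moving, float)
          ny, nx = fixed.shape
          yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
          uy = np.zeros_like(fixed)
          ux = np.zeros_like(fixed)
          for _ in range(n_iter):
              warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode="nearest")
              gy, gx = np.gradient(warped)          # spatial gradient of the warped image
              resid = warped - fixed                # SSD residual
              uy = gaussian_filter(uy - step * resid * gy, sigma)
              ux = gaussian_filter(ux - step * resid * gx, sigma)
          return uy, ux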

  13. A Pn Spreading Model Constrained with Observed Amplitudes in Asia

    DTIC Science & Technology

    2011-09-01

    and stations from which the data were collected. According to Patton (1980), the “tectonic” province was defined as an area with its crustal thickness... and the definition of the “tectonic” province as a tectonically active region with similar crustal and upper-mantle structure in most parts of the... North Australian Craton: Influence of crustal velocity gradients, Bull. Seismol. Soc. Am. 81: 592–610. Brune, J. N. (1970). Tectonic stress and the

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiangfeng; Tanihata, Kimiaki; Miyamoto, Yoshinari

    A TiC/Ni functionally gradient material (FGM) fabricated via gas-pressure combustion sintering is presently investigated to establish its mechanical and thermal properties. Attention is given to the FGM's specific thermal conductivities with different thermal cycling conditions; these are found to decrease with thermal cycling in all samples tested, implying that the lateral cracks are generated in the FGM and then propagated by the thermal cycle. High compressive stresses are induced at the TiC surface when this is constrained by a Cu block. 6 refs.

  15. Interactive effects of genotype and food quality on consumer growth rate and elemental content.

    PubMed

    Prater, Clay; Wagner, Nicole D; Frost, Paul C

    2017-05-01

    Consumer body stoichiometry is a key trait that links organismal physiology to population and ecosystem-level dynamics. However, as elemental composition has traditionally been considered to be constrained within a species, the ecological and evolutionary factors shaping consumer elemental composition have not been clearly resolved. To this end, we examined the causes and extent of variation in the body phosphorus (P) content and the expression of P-linked traits, mass specific growth rate (MSGR), and P use efficiency (PUE) of the keystone aquatic consumer Daphnia using lake surveys and common garden experiments. While daphnid body %P was relatively constrained in field assemblages sampled across an environmental P gradient, unique genotypes isolated from these lakes showed highly variable phenotypic responses when raised across dietary P gradients in the laboratory. Specifically, we observed substantial inter- and intra-specific variation and differences in daphnid responses within and among our study lakes. While variation in Daphnia body %P was mostly due to plastic phenotypic changes, we documented considerable genetic differences in daphnid MSGR and PUE, and relationships between MSGR and body P content were highly variable among genotypes. Overall, our study found that consumer responses to food quality may differ considerably among genotypes and that relationships between organismal life-history traits and body stoichiometry may be strongly influenced by genetic and environmental variation in natural assemblages. © 2017 by the Ecological Society of America.

  16. Estimating the composition of hydrates from a 3D seismic dataset near Penghu Canyon on Chinese passive margin offshore Taiwan

    NASA Astrophysics Data System (ADS)

    Chi, Wu-Cheng

    2016-04-01

    A bottom-simulating reflector (BSR), representing the base of the gas hydrate stability zone, can be used to estimate geothermal gradients under the seafloor. However, to derive temperature estimates at the BSR, the correct hydrate composition is needed to calculate the phase boundary. Here we applied the method of Minshull and Keddie to constrain the hydrate composition and the pore fluid salinity. We used a 3D seismic dataset offshore SW Taiwan to test the method. Different from previous studies, we have considered 3D topographic effects using finite element modelling and also depth-dependent thermal conductivity. Using a pore water salinity of 2% at the BSR depth, as found in nearby core samples, we successfully used a 99% methane and 1% ethane gas hydrate phase boundary to derive a sub-bottom depth vs. temperature plot which is consistent with the seafloor temperature from in-situ measurements. The results are also consistent with geochemical analyses of the pore fluids. The derived regional geothermal gradient is 40.1 °C/km, which is similar to the 40 °C/km used in the 3D finite element modelling in this study. This study is among the first documented successful uses of Minshull and Keddie's method to constrain seafloor gas hydrate composition.
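
    The final step of such an analysis reduces to simple arithmetic once the gradient is known: the temperature at the BSR follows from the seafloor temperature plus the geothermal gradient times the sub-bottom depth, and is then compared against the hydrate phase boundary for the assumed gas composition. In the sketch below the seafloor temperature and BSR depth are hypothetical placeholders, and the constant-gradient form ignores the depth-dependent conductivity and 3D effects treated in the study.

      # Constant-gradient check: temperature at the BSR from the seafloor temperature,
      # the derived geothermal gradient, and a hypothetical sub-bottom BSR depth.
      t_seafloor_c = 3.5            # assumed seafloor temperature, deg C
      gradient_c_per_km = 40.1      # regional geothermal gradient reported in the study
      z_bsr_km = 0.4                # hypothetical BSR sub-bottom depth, km
      t_bsr_c = t_seafloor_c + gradient_c_per_km * z_bsr_km
      print(f"T at BSR ~ {t_bsr_c:.1f} deg C")   # to be compared with the hydrate phase boundary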

  17. Performance of the strongly constrained and appropriately normed density functional for solid-state materials

    DOE PAGES

    Isaacs, Eric B.; Wolverton, Chris

    2018-06-22

    Constructed to satisfy 17 known exact constraints for a semilocal density functional, the strongly constrained and appropriately normed (SCAN) meta-generalized-gradient-approximation functional has shown early promise for accurately describing the electronic structure of molecules and solids. One open question is how well SCAN predicts the formation energy, a key quantity for describing the thermodynamic stability of solid-state compounds. To answer this question, we perform an extensive benchmark of SCAN by computing the formation energies for a diverse group of nearly 1000 crystalline compounds for which experimental values are known. Due to an enhanced exchange interaction in the covalent bonding regime, SCAN substantially decreases the formation energy errors for strongly bound compounds, by approximately 50% to 110 meV/atom, as compared to the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE). However, for intermetallic compounds, SCAN performs moderately worse than PBE with an increase in formation energy error of approximately 20%, stemming from SCAN's distinct behavior in the weak bonding regime. The formation energy errors can be further reduced via elemental chemical potential fitting. We find that SCAN leads to significantly more accurate predicted crystal volumes, moderately enhanced magnetism, and mildly improved band gaps as compared to PBE. Altogether, SCAN represents a significant improvement in accurately describing the thermodynamics of strongly bound compounds.
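
    For context, the benchmarked quantity is the formation energy per atom, E_f = (E_compound - sum_i n_i E_i^elem) / N, i.e., the compound's total energy referenced to its constituent elements. The sketch below uses placeholder energies rather than DFT results, and it omits the elemental chemical-potential fitting mentioned in the abstract.

      def formation_energy_per_atom(e_compound, n_atoms, composition, e_elem):
          # E_f per atom = (E_compound - sum_i n_i * E_i(elemental reference)) / N_atoms
          e_ref = sum(n * e_elem[el] for el, n in composition.items())
          return (e_compound - e_ref) / n_atoms

      # Placeholder energies (eV) for a hypothetical AB2 compound, not DFT results.
      e_elem = {"A": -3.20, "B": -1.10}
      print(formation_energy_per_atom(e_compound=-6.90, n_atoms=3,
                                      composition={"A": 1, "B": 2}, e_elem=e_elem))   # -> -0.5 eV/atom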

  18. Performance of the strongly constrained and appropriately normed density functional for solid-state materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaacs, Eric B.; Wolverton, Chris

    Constructed to satisfy 17 known exact constraints for a semilocal density functional, the strongly constrained and appropriately normed (SCAN) meta-generalized-gradient-approximation functional has shown early promise for accurately describing the electronic structure of molecules and solids. One open question is how well SCAN predicts the formation energy, a key quantity for describing the thermodynamic stability of solid-state compounds. To answer this question, we perform an extensive benchmark of SCAN by computing the formation energies for a diverse group of nearly 1000 crystalline compounds for which experimental values are known. Due to an enhanced exchange interaction in the covalent bonding regime, SCAN substantially decreases the formation energy errors for strongly bound compounds, by approximately 50% to 110 meV/atom, as compared to the generalized gradient approximation of Perdew, Burke, and Ernzerhof (PBE). However, for intermetallic compounds, SCAN performs moderately worse than PBE with an increase in formation energy error of approximately 20%, stemming from SCAN's distinct behavior in the weak bonding regime. The formation energy errors can be further reduced via elemental chemical potential fitting. We find that SCAN leads to significantly more accurate predicted crystal volumes, moderately enhanced magnetism, and mildly improved band gaps as compared to PBE. Altogether, SCAN represents a significant improvement in accurately describing the thermodynamics of strongly bound compounds.

  19. Stereo transparency and the disparity gradient limit

    NASA Technical Reports Server (NTRS)

    McKee, Suzanne P.; Verghese, Preeti

    2002-01-01

    Several studies (Vision Research 15 (1975) 583; Perception 9 (1980) 671) have shown that binocular fusion is limited by the disparity gradient (disparity/distance) separating image points, rather than by their absolute disparity values. Points separated by a gradient >1 appear diplopic. These results are sometimes interpreted as a constraint on human stereo matching, rather than a constraint on fusion. Here we have used psychophysical measurements on stereo transparency to show that human stereo matching is not constrained by a gradient of 1. We created transparent surfaces composed of many pairs of dots, in which each member of a pair was assigned a disparity equal and opposite to the disparity of the other member. For example, each pair could be composed of one dot with a crossed disparity of 6' and the other with uncrossed disparity of 6', vertically separated by a parametrically varied distance. When the vertical separation between the paired dots was small, the disparity gradient for each pair was very steep. Nevertheless, these opponent-disparity dot pairs produced a striking appearance of two transparent surfaces for disparity gradients ranging between 0.5 and 3. The apparent depth separating the two transparent planes was correctly matched to an equivalent disparity defined by two opaque surfaces. A test target presented between the two transparent planes was easily detected, indicating robust segregation of the disparities associated with the paired dots into two transparent surfaces with few mismatches in the target plane. Our simulations using the Tsai-Victor model show that the response profiles produced by scaled disparity-energy mechanisms can account for many of our results on the transparency generated by steep gradients.
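
    The disparity gradient for the paired-dot stimulus described above is simply the relative disparity divided by the dot separation, so the ±6 arcmin pairs give a fixed 12 arcmin relative disparity and the gradient is controlled by the vertical separation alone. The separations below are hypothetical values chosen to span the 0.5-3 range quoted in the abstract.

      # Disparity gradient = relative disparity / separation.
      relative_disparity = 6.0 + 6.0                 # arcmin (crossed + uncrossed)
      for sep in (24.0, 12.0, 4.0):                  # hypothetical vertical dot separations, arcmin
          print(f"separation {sep:4.1f} arcmin -> gradient {relative_disparity / sep:.1f}")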

  20. Forest gradient response in Sierran landscapes: the physical template

    USGS Publications Warehouse

    Urban, Dean L.; Miller, Carol; Halpin, Patrick N.; Stephenson, Nathan L.

    2000-01-01

    Vegetation pattern on landscapes is the manifestation of physical gradients, biotic response to these gradients, and disturbances. Here we focus on the physical template as it governs the distribution of mixed-conifer forests in California's Sierra Nevada. We extended a forest simulation model to examine montane environmental gradients, emphasizing factors affecting the water balance in these summer-dry landscapes. The model simulates the soil moisture regime in terms of the interaction of water supply and demand: supply depends on precipitation and water storage, while evapotranspirational demand varies with solar radiation and temperature. The forest cover itself can affect the water balance via canopy interception and evapotranspiration. We simulated Sierran forests as slope facets, defined as gridded stands of homogeneous topographic exposure, and verified simulated gradient response against sample quadrats distributed across Sequoia National Park. We then performed a modified sensitivity analysis of abiotic factors governing the physical gradient. Importantly, the model's sensitivity to temperature, precipitation, and soil depth varies considerably over the physical template, particularly relative to elevation. The physical drivers of the water balance have characteristic spatial scales that differ by orders of magnitude. Across large spatial extents, temperature and precipitation as defined by elevation primarily govern the location of the mixed conifer zone. If the analysis is constrained to elevations within the mixed-conifer zone, local topography comes into play as it influences drainage. Soil depth varies considerably at all measured scales, and is especially dominant at fine (within-stand) scales. Physical site variables can influence soil moisture deficit either by affecting water supply or water demand; these effects have qualitatively different implications for forest response. These results have clear implications about purely inferential approaches to gradient analysis, and bear strongly on our ability to use correlative approaches in assessing the potential responses of montane forests to anthropogenic climatic change.
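
    The supply-and-demand bookkeeping described above can be caricatured by a monthly bucket model in which supply is precipitation plus stored soil water and demand is potential evapotranspiration; unmet demand is the moisture deficit. The function and climate numbers below are hypothetical, and canopy interception and radiation-driven demand, which the simulator includes, are omitted.

      def monthly_water_balance(precip, pet, soil_capacity):
          # Toy monthly bucket model: supply = precipitation + stored soil water,
          # demand = potential evapotranspiration; unmet demand is the moisture deficit.
          storage, deficits = soil_capacity, []
          for p, d in zip(precip, pet):
              supply = p + storage
              aet = min(d, supply)                      # actual evapotranspiration
              storage = min(soil_capacity, supply - aet)
              deficits.append(d - aet)
          return deficits

      # Hypothetical summer-dry climate (mm/month): wet winters, dry warm summers.
      precip = [150, 120, 90, 40, 15, 5, 2, 3, 10, 50, 100, 140]
      pet    = [15, 20, 40, 70, 100, 130, 150, 140, 100, 60, 25, 15]
      print(monthly_water_balance(precip, pet, soil_capacity=100))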
