NASA Astrophysics Data System (ADS)
Dong, Bing; Ren, De-Qing; Zhang, Xi
2011-08-01
An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce the speckle noise in the optical system of a stellar coronagraph and thereby further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point-source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated with an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast improves from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
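The SPGD update at the heart of such a system is only a few lines. A minimal sketch, assuming a toy quadratic metric in place of the real focal-plane sharpness measurement (the 140-actuator count follows the abstract; every other name and parameter here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def spgd_maximize(metric, u0, gain=0.8, amp=0.05, iters=1000):
    """Basic two-sided SPGD: perturb all control channels in parallel with
    random +/-amp, estimate the gradient from the metric difference, and
    step the control vector in the ascent direction."""
    u = u0.copy()
    for _ in range(iters):
        delta = amp * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli perturbation
        dJ = metric(u + delta) - metric(u - delta)           # two-sided metric difference
        u += gain * dJ * delta                               # parallel gradient step
    return u

# Toy stand-in for the image metric: peaked at a hypothetical flat-wavefront
# command u_star; a real system would measure the focal-plane PSF instead.
u_star = 0.3 * rng.normal(size=140)           # 140-actuator DM, as in the abstract
metric = lambda u: -np.sum((u - u_star) ** 2)

u_opt = spgd_maximize(metric, np.zeros(140))
```

With a quadratic metric the expected update is proportional to the true gradient, which is why the random parallel perturbations average out to an ascent direction without any wavefront sensor.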
NASA Astrophysics Data System (ADS)
Lin, Jiao Ling; Yin, Xiaoli; Chang, Huan; Cui, Xiaozhou; Guo, Yi-Lin; Liao, Huan-Yu; Gao, Chun-Yu; Wu, Guohua; Liu, Guang-Yao; Jiang, Jin-Kun; Tian, Qing-Hua
2018-02-01
Atmospheric turbulence limits the performance of orbital angular momentum-based free-space optical communication (FSO-OAM) systems. To compensate the phase distortion induced by atmospheric turbulence, wavefront sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper a new hybrid version of SPGD called MZ-SPGD is proposed, combining Z-SPGD, which is based on Zernike polynomials, with M-SPGD, which is based on the deformable-mirror influence function. Numerical simulations show that the hybrid method markedly reduces the number of iterations to convergence while achieving the same compensation effect as Z-SPGD and M-SPGD alone.
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilun; Xu, Xiaojun; Liu, Zejin
2009-10-01
Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed and its optimisation parameters and bandwidth limitations are studied.
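The piston phase-locking loop behind such a simulation can be sketched as follows, assuming a toy normalized on-axis power metric for a hypothetical 8-element array (array size, gain, and perturbation amplitude are all illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                        # hypothetical 8-element array
phase_err = rng.uniform(-np.pi, np.pi, N)    # unknown piston errors of the beamlets

def combined_power(u):
    """Normalized on-axis far-field intensity of N unit-amplitude beamlets:
    equals 1 only when all residual pistons phase_err + u coincide (mod 2*pi)."""
    return np.abs(np.sum(np.exp(1j * (phase_err + u)))) ** 2 / N**2

u = np.zeros(N)
gain, amp = 20.0, 0.1
for _ in range(8000):
    d = amp * rng.choice([-1.0, 1.0], N)     # parallel random perturbation
    dJ = combined_power(u + d) - combined_power(u - d)
    u += gain * dJ * d                       # step uphill on the combining metric

locked = combined_power(u)                   # approaches 1 once phase-locked
```

The perturbation noise also helps the loop escape the saddle points where some beamlets sit anti-phased, which is one reason SPGD scales to many elements.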
Incoherent beam combining based on the momentum SPGD algorithm
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng
2018-05-01
Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method effectively improves the convergence speed of the combining system. An analytical treatment is employed to explain the principle of the momentum method, and the proposed algorithm is verified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration but also maintains the stability of the combining process, verifying its feasibility in a beam combining system.
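A heavy-ball momentum term bolts onto a basic SPGD loop in one line. The sketch below is not the authors' implementation; the quadratic metric is a stand-in for the real combining metric, and beta = 0 recovers plain SPGD for comparison:

```python
import numpy as np

rng = np.random.default_rng(2)
target = rng.normal(size=32)
metric = lambda u: -float(np.sum((u - target) ** 2))  # toy combining metric

def spgd(beta, iters=400, gain=0.5, amp=0.05):
    """SPGD with an optional heavy-ball momentum term; beta = 0 is the
    plain algorithm."""
    u = np.zeros_like(target)
    v = np.zeros_like(target)
    for _ in range(iters):
        d = amp * rng.choice([-1.0, 1.0], target.shape)
        dJ = metric(u + d) - metric(u - d)
        v = beta * v + gain * dJ * d   # velocity accumulates past steps, damped by beta
        u += v
    return metric(u)

plain = spgd(beta=0.0)
momentum = spgd(beta=0.7)              # same iteration budget, faster progress
```

Averaging successive stochastic steps in the velocity both filters the perturbation noise and effectively enlarges the step along persistent gradient directions, which is the acceleration mechanism the abstract describes.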
NASA Astrophysics Data System (ADS)
Cao, Jingtai; Zhao, Xiaohui; Li, Zhaokun; Liu, Wei; Gu, Haijun
2017-11-01
The performance of free-space optical (FSO) communication systems is severely limited by atmospheric turbulence. Adaptive optics (AO) is an important method for overcoming atmospheric disturbance; in particular, under strong scintillation, sensor-less AO systems play a major role in compensation. In this paper, a modified artificial fish school (MAFS) algorithm is proposed to compensate the aberrations in a sensor-less AO system. Both static and dynamic aberration compensation are analyzed, and the performance of FSO communication before and after compensation is compared. In addition, the MAFS algorithm is compared with the artificial fish school (AFS), stochastic parallel gradient descent (SPGD) and simulated annealing (SA) algorithms. It is shown that the MAFS algorithm converges faster than the SPGD and SA algorithms, and reaches a better convergence value than the AFS, SPGD and SA algorithms. The sensor-less AO system with the MAFS algorithm effectively increases the coupling efficiency at the receiving terminal with fewer iterations. In conclusion, the MAFS algorithm is of great significance for sensor-less AO compensation of atmospheric turbulence in FSO communication systems.
Geng, Chao; Luo, Wen; Tan, Yi; Liu, Hongmei; Mu, Jinbo; Li, Xinyang
2013-10-21
A novel approach to tip/tilt control using a divergence cost function in the stochastic parallel gradient descent (SPGD) algorithm for coherent beam combining (CBC) is proposed and demonstrated experimentally in a seven-channel 2-W fiber amplifier array with both phase-locking and tip/tilt control, for the first time to the best of our knowledge. Compared with the conventional power-in-the-bucket (PIB) cost function for SPGD optimization, tip/tilt control using the divergence cost function ensures a wider correction range, automatic switching of the control program, and freedom from camera intensity saturation. A homemade piezoelectric-ring phase modulator (PZT PM) and an adaptive fiber-optics collimator (AFOC) were developed to correct piston- and tip/tilt-type aberrations, respectively. The PIB cost function is employed for phase-locking via maximization of the SPGD optimization, while the divergence cost function is used for tip/tilt control via minimization. The divergence metric decreases from an average of 432 μrad in open loop to 89 μrad when tip/tilt control is implemented. In CBC, the power in the full width at half maximum (FWHM) of the main lobe increases by 32 times, and the phase residual error is less than λ/15.
NASA Astrophysics Data System (ADS)
Niu, Chaojun; Han, Xiang'e.
2015-10-01
Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free-space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness, and few parameters to set. In this paper, we simulate the application of the improved ABC algorithm to correct the distorted wavefront and prove its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation in different turbulence conditions before and after correction. The results show that the ABC algorithm has a much faster correction speed than the DE algorithm and better correction capability for strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less so in weak turbulence.
NASA Astrophysics Data System (ADS)
He, Xiaojun; Ma, Haotong; Luo, Chuanxin
2016-10-01
An optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system; the difficulty lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD method avoids detecting the co-phase error directly. This paper analyzes the influence of piston error and tilt error on image quality for a double-aperture imaging system, introduces the basic principle of the SPGD algorithm, and discusses the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced; an adaptive gain coefficient can resolve this trade-off appropriately. These results can provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
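One simple realization of an adaptive gain coefficient is a decaying schedule: a large gain early for speed, shrinking over time for stability. A sketch under a toy quadratic co-phasing metric (the schedule and all parameters are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
piston = rng.normal(size=16)                     # hypothetical co-phase errors
J = lambda u: float(np.sum((u - piston) ** 2))   # toy error metric (minimized)

def spgd_min(iters=800, gain0=5.0, tau=100.0, amp=0.05):
    """SPGD minimization with a decaying gain: the large initial gain gives
    fast early convergence, and shrinking it over time restores stability
    near the optimum."""
    u = np.zeros_like(piston)
    for k in range(iters):
        gain = gain0 / (1.0 + k / tau)           # decaying gain schedule
        d = amp * rng.choice([-1.0, 1.0], piston.shape)
        dJ = J(u + d) - J(u - d)
        u -= gain * dJ * d                       # descend the estimated gradient
    return u

u_final = spgd_min()
```

A fixed gain above the stability limit would make the loop oscillate or diverge; the schedule lets the loop spend only its early, large-error iterations in the aggressive regime.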
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to the underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and the optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least one line of improvement in average visual acuity over the full-magnitude correction and the correction by Guirao, given the registration uncertainty. This study demonstrates that it is possible to improve average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
NASA Astrophysics Data System (ADS)
Yang, Ping; Yang, Ruo fu; Shen, Feng; Ao, Mingwu; Jiang, Wenhan
2009-05-01
Coherent combination is one of the most promising ways to realize high-power laser output. A three-laser-beam coherent combination system based on adaptive optics (AO) techniques has been set up in our laboratory. In this system, three 1064 nm laser beams are placed side by side and compressed by two reflective mirrors. An active segmented deformable mirror (DM) is used to compensate the optical path difference (OPD) among the three laser beams. The beams are overlapped onto a 2900 Hz CCD camera to form an interference pattern, and the peak intensity of the interference pattern is taken as the cost function to be optimized by a stochastic parallel gradient descent (SPGD) algorithm. The SPGD algorithm is implemented on an RT-Linux dual-core industrial computer. A series of experiments show that both static distorted aberrations in the beams and active distorted aberrations (introduced by a hot iron, at a frequency of about 5 Hz) can be compensated successfully when the gain coefficients and the perturbation amplitude of SPGD are chosen appropriately, so that the three beams can be well combined. For controlling the phase of fiber lasers, the phase characteristics of beams passing through an Yb-doped dual-clad fiber amplifier are measured by investigating the interference pattern at different output powers. The frequency of the phase fluctuation is evaluated by analyzing the fluctuation of power within a 90 μm aperture of the far-field focal spot. Experimental results show that the phase fluctuation frequencies of a laser beam transmitted through the fiber amplifier are mainly in the range of 100-1500 Hz. As a result, to control the phase fluctuation of beams passing through a fiber amplifier, the bandwidth of any potential phase control scheme must be greater than 1.5 kHz.
Adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope
NASA Astrophysics Data System (ADS)
Ma, Haotong; Hu, Haojun; Xie, Wenke; Zhao, Haichuan; Xu, Xiaojun; Chen, Jinbao
2013-08-01
We demonstrate adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope system based on the stochastic parallel gradient descent (SPGD) algorithm and dual phase-only liquid crystal spatial light modulators (LC-SLMs). Adaptive pre-compensation of the wavefront of the projected laser beam at the transmitter telescope is chosen to improve the power coupling efficiency. One phase-only LC-SLM adaptively optimizes the phase distribution of the projected laser beam and the other generates the turbulence phase screen. The intensity distributions of the dark hollow beam after passing through the turbulent atmosphere with and without adaptive beam shaping are analyzed in detail. The influence of propagation distance and the aperture size of the Cassegrain telescope on coupling efficiency is investigated theoretically and experimentally. These studies show that the power coupling can be significantly improved by adaptive beam shaping. The technique can be used in optical communication, deep-space optical communication and relay mirrors.
Coherent beam combining of collimated fiber array based on target-in-the-loop technique
NASA Astrophysics Data System (ADS)
Li, Xinyang; Geng, Chao; Zhang, Xiaojun; Rao, Changhui
2011-11-01
Coherent beam combining (CBC) of a fiber array is a promising way to generate high-power, high-quality laser beams. The target-in-the-loop (TIL) technique may be an effective way to achieve compensation for atmospheric propagation without wavefront sensors. In this paper, we present recent research on CBC of a collimated fiber array using the TIL technique at the Key Lab on Adaptive Optics (KLAO), CAS. A novel Adaptive Fiber Optics Collimator (AFOC) composed of a phase-locking module and a tip/tilt control module was developed, and an experimental CBC setup with a three-element fiber array was established. Feedback control is realized using the stochastic parallel gradient descent (SPGD) algorithm. CBC based on TIL with simultaneous piston and tip/tilt correction is demonstrated, and beam pointing to locate or sweep the position of the combined spot on the target was also achieved through the TIL technique. The goal of our work is to achieve multi-element CBC for long-distance transmission in the atmosphere.
NASA Astrophysics Data System (ADS)
Dou, Jiangpei; Ren, Deqing; Zhang, Xi; Zhu, Yongtian; Zhao, Gang; Wu, Zhen; Chen, Rui; Liu, Chengchao; Yang, Feng; Yang, Chao
2014-08-01
Almost all high-contrast imaging coronagraphs proposed until now are based on passive coronagraph optical components. Recently, Ren and Zhu proposed for the first time a coronagraph that integrates a liquid crystal array (LCA) for active pupil apodization and a deformable mirror (DM) for phase corrections. Here, for demonstration purposes, we present the initial test results of a coronagraphic system based on two liquid crystal spatial light modulators (SLMs). In the system, one SLM serves for active pupil apodization and amplitude correction to suppress the diffracted light; the other SLM is used to correct the speckle noise caused by wavefront distortions. In this way, both amplitude and phase errors can be actively and efficiently compensated. In the test, we use the stochastic parallel gradient descent (SPGD) algorithm to control the two SLMs, based on point spread function (PSF) sensing and evaluation and optimized for maximum contrast in the discovery area. A contrast of 10^-6 at an inner working angular distance of ~6.2 λ/D has been demonstrated, which makes this a promising technique for the direct imaging of young exoplanets on ground-based telescopes.
NASA Astrophysics Data System (ADS)
Liu, Ling
The primary goal of this research is the analysis, development, and experimental demonstration of an adaptive phase-locked fiber array system for free-space optical communications and laser beam projection applications. To our knowledge, the developed adaptive phase-locked system composed of three fiber collimators (subapertures) with tip-tilt wavefront phase control at each subaperture represents the first reported fiber array system that implements both phase-locking control and adaptive wavefront tip-tilt control capabilities. This research has also resulted in the following innovations: (a) the first experimental demonstration of a phase-locked fiber array with tip-tilt wavefront aberration compensation at each fiber collimator; (b) development and demonstration of the fastest currently reported stochastic parallel gradient descent (SPGD) system, capable of operation at 180,000 iterations per second; (c) the first experimental demonstration of a laser communication link based on a phase-locked fiber array; (d) the first successful experimental demonstration of turbulence- and jitter-induced phase distortion compensation in a phase-locked fiber array optical system; (e) the first demonstration of laser beam projection onto an extended target with a randomly rough surface using a conformal adaptive fiber array system. Fiber array optical systems, the subject of this study, can overcome some of the drawbacks of conventional monolithic large-aperture transmitter/receiver optical systems, which are usually heavy, bulky, and expensive. The primary experimental challenges in the development of the adaptive phase-locked fiber array included precise (<5 μrad) alignment of the fiber collimators and development of fast (100 kHz-class) phase-locking and wavefront tip-tilt control systems.
The precise alignment of the fiber collimator array is achieved through a specially developed initial coarse alignment tool based on high precision piezoelectric picomotors and a dynamic fine alignment mechanism implemented with specially designed and manufactured piezoelectric fiber positioners. Phase-locking of the fiber collimators is performed by controlling the phases of the output beams (beamlets) using integrated polarization-maintaining (PM) fiber-coupled LiNbO3 phase shifters. The developed phase-locking controllers are based on either the SPGD algorithm or the multi-dithering technique. Subaperture wavefront phase tip-tilt control is realized using piezoelectric fiber positioners that are controlled using a computer-based SPGD controller. Both coherent (phase-locked) and incoherent beam combining in the fiber array system are analyzed theoretically and experimentally. Two special fiber-based beam-combining testbeds have been built to demonstrate the technical feasibility of phase-locking compensation prior to free-space operation. In addition, the reciprocity of counter-propagating beams in a phase-locked fiber array system has been investigated. Coherent beam combining in a phase-locking system with wavefront phase tip-tilt compensation at each subaperture is successfully demonstrated when laboratory-simulated turbulence and wavefront jitters are present in the propagation path of the beamlets. In addition, coherent beam combining with a non-cooperative extended target in the control loop is successfully demonstrated.
Arbitrary temporal shape pulsed fiber laser based on SPGD algorithm
NASA Astrophysics Data System (ADS)
Jiang, Min; Su, Rongtao; Zhang, Pengfei; Zhou, Pu
2018-06-01
A novel adaptive pulse shaping method for a pulsed master oscillator power amplifier fiber laser to deliver an arbitrary pulse shape is demonstrated. Numerical simulation was performed to validate the feasibility of the scheme and to provide meaningful guidance for the design of the algorithm control parameters. In the proof-of-concept experiment, information on the temporal properties of the laser is exchanged and evaluated through a local area network, and the system automatically adjusts the parameters of the seed laser according to the monitored output. Various pulse shapes, including a rectangular shape, an ‘M’ shape, and an elliptical shape, are achieved through experimental iterations.
Algorithms for accelerated convergence of adaptive PCA.
Chatterjee, C; Kang, Z; Roychowdhury, V P
2000-01-01
We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
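The gradient-descent baseline these algorithms accelerate can be illustrated with Oja's rule, the simplest adaptive PCA update, here with a fixed gain (gain selection being precisely what the paper improves; the 2-D stream is a made-up example):

```python
import numpy as np

rng = np.random.default_rng(4)

# A correlated 2-D Gaussian stream whose covariance is known by construction.
C = np.array([[3.0, 1.0],
              [1.0, 1.5]])
A = np.linalg.cholesky(C)                 # samples A @ z have covariance C

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01                                # fixed gain; adaptive gains are the paper's subject
for _ in range(20000):
    x = A @ rng.normal(size=2)            # one sample of the stream
    y = w @ x                             # linear neuron output
    w += eta * y * (x - y * w)            # Oja's rule: Hebbian term + normalization

# Compare against the top eigenvector of the true covariance.
v1 = np.linalg.eigh(C)[1][:, -1]
alignment = abs(w @ v1) / np.linalg.norm(w)
```

The slow, gain-sensitive convergence of exactly this kind of loop is what motivates the steepest-descent, conjugate-direction, and Newton-Raphson variants in the paper.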
A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz
1990-01-01
A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent-advisor algorithm to provide adequate control of arrival time for aircraft not equipped with onboard 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1983-01-01
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
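The flavor of arithmetic such a calculator algorithm automates can be sketched with a constant-rate descent leg (illustrative numbers only; the actual algorithm used linear approximations of airplane performance with gross weight, wind, and temperature corrections):

```python
def descent_leg(cruise_alt_ft, fix_alt_ft, descent_rate_fpm, ground_speed_kt):
    """Time and along-track distance for a constant-rate, constant-speed
    descent from cruise altitude to the metering-fix altitude."""
    minutes = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm  # minutes of descent
    nautical_miles = ground_speed_kt * minutes / 60.0          # distance covered
    return nautical_miles, minutes

# Descend from 35,000 ft to 10,000 ft at 2,500 ft/min and 380 kt ground speed:
dist, minutes = descent_leg(35000, 10000, 2500, 380)   # 10 min, ~63.3 nm
```

Working backwards from the metering-fix time through such legs gives the top-of-descent point and the cruise time needed to meet the assigned arrival time.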
NASA Technical Reports Server (NTRS)
Knox, C. E.; Vicroy, D. D.; Simmon, D. A.
1985-01-01
A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant Mach/airspeed schedule with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vicroy, D.D.
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. An explanation and examples of how the algorithm is used, as well as a detailed flow chart and listing of the algorithm, are contained.
NASA Technical Reports Server (NTRS)
Vicroy, D. D.; Knox, C. E.
1983-01-01
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane are described.
NASA Astrophysics Data System (ADS)
Li, Yan; Li, Lin; Huang, Yi-Fan; Du, Bao-Lin
2009-07-01
This paper analyses the dynamic residual aberrations of a conformal optical system and introduces adaptive optics (AO) correction technology for this system. An image-sharpening AO system is chosen as the correction scheme. Communication between MATLAB and Code V is established via the ActiveX technique in computer simulation. The SPGD algorithm is run at seven zoom positions to calculate the optimized surface shape of the deformable mirror. After comparing the performance of the corrected system with the baseline system, AO technology proves to be a good way of correcting the dynamic residual aberrations in conformal optical design.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1980-01-01
A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three-dimensional path with terminal-area time constraints (four-dimensional) for an airplane to make an idle-thrust, clean-configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1979-01-01
A flight management algorithm designed to improve the accuracy of delivering the airplane, in a fuel-efficient manner, to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test Boeing 737 airplane to make an idle-thrust, clean-configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
Convergence Rates of Finite Difference Stochastic Approximation Algorithms
2016-06-01
Convergence rates are derived for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite-difference approximations.
NASA Technical Reports Server (NTRS)
Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.
1986-01-01
The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.
Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj
2012-07-03
We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases, Thread-Greedy CD and Coloring-Based CD, and give performance measurements for an OpenMP implementation of these algorithms.
Gradient descent learning algorithm overview: a general dynamical systems perspective.
Baldi, P
1995-01-01
This paper gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results, which were originally derived for different problems (fixed point/trajectory learning), different models (discrete/continuous), different architectures (forward/recurrent), and different techniques (backpropagation, variational calculus, adjoint methods, etc.). The general approach can also be applied to derive new algorithms. The author then briefly examines some of the complexity issues and limitations intrinsic to gradient descent learning. Throughout the paper, the author focuses on the problem of trajectory learning.
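To make the dynamical-systems view concrete, here is a hedged toy example (not from the paper): fixed-point learning for a single linear neuron, where gradient descent on the squared error is a discrete-time dynamical system whose stable fixed point is the least-squares solution. The data and learning rate are invented for illustration.

```python
# Gradient descent viewed as a discrete-time dynamical system:
# (w, b) evolve under the map (w, b) -> (w, b) - eta * grad E
# for a single linear neuron y = w*x + b with squared error E.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # targets follow y = 2x + 1
w, b, eta = 0.0, 0.0, 0.1

for _ in range(500):
    dw = db = 0.0
    for x, y in data:
        err = (w * x + b) - y           # prediction error on one sample
        dw += 2 * err * x / len(data)   # dE/dw (averaged over the data)
        db += 2 * err / len(data)       # dE/db
    w, b = w - eta * dw, b - eta * db   # one step of the dynamical system
```

Because the loss is quadratic, the fixed point (w, b) = (2, 1) is globally attracting whenever eta is below the stability threshold set by the largest Hessian eigenvalue.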
Fast Optimization for Aircraft Descent and Approach Trajectory
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John
2017-01-01
We address the problem of online scheduling of aircraft descent and approach trajectories. We formulate a general multiphase optimal control problem for optimizing the descent trajectory and review available methods for its solution. We develop a fast algorithm for solving this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data, and (ii) efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the General Pseudospectral Optimal Control Software optimal control toolbox. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.
A sampling algorithm for segregation analysis
Tier, Bruce; Henshall, John
2001-01-01
Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Markov chain Monte Carlo (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or not found, its effect was ascribed to the polygenic component. No QTL were detected when they were not simulated. PMID:11742631
Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.
Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob
2011-03-01
We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent, and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, and find that it offers a considerable speedup over competing methods.
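Coordinate descent for the Cox partial likelihood is involved; as a self-contained sketch of the cyclical coordinate-descent machinery, the toy below solves an ordinary lasso least-squares problem with soft-thresholding updates (illustrative only, not the authors' Cox implementation).

```python
def soft_threshold(z, gamma):
    """Proximal operator of the l1 penalty."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclical coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual excluding coordinate j
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm_j = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / norm_j
    return beta

# Tiny example: one feature, exact fit at lam = 0, full shrinkage at large lam.
X = [[1.0], [2.0], [3.0]]
y = [2.0, 4.0, 6.0]
beta_unpenalized = lasso_cd(X, y, 0.0)
beta_shrunk = lasso_cd(X, y, 20.0)
```

Warm starts along a decreasing sequence of lam values, as in the paper, would simply reuse the previous beta as the starting point for the next lam.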
NASA Technical Reports Server (NTRS)
Knox, C. E.; Person, L. H., Jr.
1981-01-01
The NASA developed, implemented, and flight tested a flight management algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control. This algorithm provides a 3D path with time control (4D) for the TCV B-737 airplane to make an idle-thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms are described and flight test results are presented.
Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms
2010-01-01
The cyclic coordinate descent algorithm is also known as the nonlinear Gauss-Seidel iteration [32]. There are several studies of this type of... It can be shown that the above BB gradient projection direction is always a descent direction. The R-linear convergence of the BB method has... the convergence (to a KKT solution) of the inexact pricing algorithm for the MISO interference channel. The latter is interesting since the convergence of the original pricing...
The q-G method: A q-version of the Steepest Descent method for global optimization.
Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M
2015-01-01
In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm with an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions, and compared its performance with 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has great potential for solving multimodal optimization problems.
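A hedged one-dimensional sketch of the idea (not the authors' implementation, which has three tuned parameters): the search direction uses Jackson's q-derivative, and the spread of q is annealed toward 1 so the method shifts from exploration toward steepest-descent-like exploitation.

```python
import random

def q_derivative(f, x, q):
    """Jackson's q-derivative; reduces to f'(x) as q -> 1."""
    if x == 0.0 or q == 1.0:
        h = 1e-8  # fall back to a classical difference quotient
        return (f(x + h) - f(x)) / h
    return (f(q * x) - f(x)) / ((q - 1.0) * x)

def q_gradient_descent(f, x0, n_iters=300, step=0.05, seed=1):
    """Toy q-G descent: q is drawn around 1 with a spread that is
    annealed to zero, so late iterations approach steepest descent."""
    rng = random.Random(seed)
    x = x0
    for t in range(n_iters):
        sigma = 0.5 * (1.0 - t / n_iters)  # anneal the spread of q
        q = 1.0 + rng.gauss(0.0, sigma)    # q -> 1 recovers the gradient
        x -= step * q_derivative(f, x, q)
    return x

x_min = q_gradient_descent(lambda z: (z - 3.0) ** 2, x0=8.0)
```

On a convex toy function the randomized q merely adds exploration noise that dies out; on multimodal functions, large early values of q let the difference quotient "see" past nearby local minima.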
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.
1986-01-01
The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS program as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.
Sobel, E.; Lange, K.
1996-01-01
The introduction of stochastic methods in pedigree analysis has enabled geneticists to tackle computations intractable by standard deterministic methods. Until now these stochastic techniques have worked by running a Markov chain on the set of genetic descent states of a pedigree. Each descent state specifies the paths of gene flow in the pedigree and the founder alleles dropped down each path. The current paper follows up on a suggestion by Elizabeth Thompson that genetic descent graphs offer a more appropriate space for executing a Markov chain. A descent graph specifies the paths of gene flow but not the particular founder alleles traveling down the paths. This paper explores algorithms for implementing Thompson's suggestion for codominant markers in the context of automatic haplotyping, estimating location scores, and computing gene-clustering statistics for robust linkage analysis. Realistic numerical examples demonstrate the feasibility of the algorithms. PMID:8651310
Analysis of Online Composite Mirror Descent Algorithm.
Lei, Yunwen; Zhou, Ding-Xuan
2017-03-01
We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
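As a concrete instance (illustrative, not the paper's general setting), online mirror descent with the negative-entropy mirror map on the probability simplex reduces to exponentiated-gradient updates followed by normalization:

```python
import math

def online_mirror_descent(grads, etas, d):
    """Online mirror descent with the negative-entropy mirror map on the
    probability simplex; the mirror step is a multiplicative
    (exponentiated-gradient) update, and the Bregman projection back
    onto the simplex is plain normalization."""
    w = [1.0 / d] * d                        # uniform starting point
    for g, eta in zip(grads, etas):
        w = [wi * math.exp(-eta * gi) for wi, gi in zip(w, g)]
        z = sum(w)
        w = [wi / z for wi in w]             # project back onto the simplex
    return w

# Linear losses that always penalize coordinate 0, with polynomially
# decaying step sizes as in the error analysis above.
grads = [[1.0, 0.0]] * 50
etas = [1.0 / math.sqrt(t + 1) for t in range(50)]
w_final = online_mirror_descent(grads, etas, 2)
```

The last iterate concentrates on the low-loss coordinate, matching the intuition behind last-iterate convergence rates discussed in the abstract.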
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazoe, Kenji; Mochi, Iacopo; Goldberg, Kenneth A.
2014-12-01
The wavefront retrieval by gradient descent algorithm that is typically applied to coherent or incoherent imaging is extended to retrieve a wavefront from a series of through-focus images by partially coherent illumination. For accurate retrieval, we modeled partial coherence as well as object transmittance into the gradient descent algorithm. However, this modeling increases the computation time due to the complexity of partially coherent imaging simulation that is repeatedly used in the optimization loop. To accelerate the computation, we incorporate not only the Fourier transform but also an eigenfunction decomposition of the image. As a demonstration, the extended algorithm is applied to retrieve a field-dependent wavefront of a microscope operated at extreme ultraviolet wavelength (13.4 nm). The retrieved wavefront qualitatively matches the expected characteristics of the lens design.
2017-01-01
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations were coded as a set of real-number m-dimensional vectors as the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via the particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is run for more epochs with the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We constructed several experiments on hand-written characters and biological activity prediction datasets to show that the DNN classifiers trained by the network configurations expressed by the final solutions of the PSO algorithm, employed to construct an ensemble model and individual classifier, outperform the random approach in terms of the generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
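A compact sketch of the two-phase idea on a stand-in objective (the sphere function rather than a DNN loss; all parameter values are invented): PSO explores globally, then steepest gradient descent refines the best particle.

```python
import random

def sphere(x):  # toy stand-in for a network's validation loss
    return sum(xi * xi for xi in x)

def grad_sphere(x):
    return [2.0 * xi for xi in x]

def pso_then_descent(f, grad, dim=2, n_particles=10, n_pso=50,
                     n_gd=100, lr=0.1, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    w_in, c1, c2 = 0.7, 1.5, 1.5           # inertia and attraction weights
    for _ in range(n_pso):                 # global search phase (PSO)
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w_in * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
    x = gbest[:]                           # local phase (steepest descent)
    for _ in range(n_gd):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

x_opt = pso_then_descent(sphere, grad_sphere)
```

In the paper's setting, the PSO particles encode network configurations and the descent phase is a short DNN training run; here both are collapsed into one smooth function for clarity.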
Design and realization of adaptive optical principle system without wavefront sensing
NASA Astrophysics Data System (ADS)
Wang, Xiaobin; Niu, Chaojun; Guo, Yaxing; Han, Xiang'e.
2018-02-01
In this paper, we focus on the performance improvement of free-space optical communication systems and carry out research on wavefront-sensorless adaptive optics. We use a phase-only liquid crystal spatial light modulator (SLM) as the wavefront corrector. The optical intensity distribution of the distorted wavefront is detected by a CCD. We developed a wavefront controller based on ARM and software based on the Linux operating system; the wavefront controller controls the CCD camera and the wavefront corrector. Two SLMs are used in the experimental system: one simulates atmospheric turbulence and the other compensates the wavefront distortion. The experimental results show that the performance quality metric (the total gray value of 25 pixels) increases from 3037 to 4863 after 200 iterations. It is also demonstrated that our wavefront-sensorless adaptive optics system based on the SPGD algorithm performs well in compensating wavefront distortion.
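The SPGD loop at the heart of such wavefront-sensorless systems is short; below is a hedged pure-Python sketch in which a quadratic stand-in metric replaces the CCD-derived gray-value metric, and the actuator commands are a plain list.

```python
import random

def spgd(metric, u, gain=0.3, delta=0.1, n_iters=500, seed=0):
    """Stochastic parallel gradient descent: perturb all control
    channels in parallel by random +/- delta, measure the metric
    change, and step along the estimated gradient (metric ascent)."""
    rng = random.Random(seed)
    u = list(u)
    for _ in range(n_iters):
        p = [delta if rng.random() < 0.5 else -delta for _ in u]
        j_plus = metric([ui + pi for ui, pi in zip(u, p)])
        j_minus = metric([ui - pi for ui, pi in zip(u, p)])
        dj = j_plus - j_minus
        # two-sided perturbation: dj * p_i is an unbiased gradient estimate
        u = [ui + gain * dj * pi for ui, pi in zip(u, p)]
    return u

# Hypothetical stand-in metric: peaks when the control vector matches a
# target profile (the real metric would come from the CCD image).
target = [0.3, -0.5, 0.1, 0.7]
def metric(u):
    return -sum((ui - ti) ** 2 for ui, ti in zip(u, target))

u_final = spgd(metric, [0.0] * 4)
```

Only the scalar metric is needed, which is exactly why SPGD suits systems without a wavefront sensor.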
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
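A one-dimensional caricature of the two RES ingredients (stochastic gradients for both direction and curvature, and regularization keeping the curvature estimate within fixed bounds); the toy objective, noise level, and the |s| cutoff are invented for this sketch.

```python
import random

def grad_noisy(x, rng):
    # stochastic gradient of the toy objective f(x) = 5 * (x - 1)^2
    return 10.0 * (x - 1.0) + rng.gauss(0.0, 0.01)

def reg_stochastic_quasi_newton(x0, n_iters=400, delta=0.1, c_max=100.0,
                                seed=0):
    """1-D sketch: curvature is estimated from differences of stochastic
    gradients (a secant estimate) and clipped to [delta, c_max],
    mirroring the bounded-eigenvalue condition used for RES."""
    rng = random.Random(seed)
    x = x0
    g = grad_noisy(x, rng)
    h = 1.0                                    # initial curvature estimate
    for n in range(1, n_iters + 1):
        x_new = x - min(0.5, 2.0 / n) * g / h  # diminishing step sizes
        g_new = grad_noisy(x_new, rng)
        s, y = x_new - x, g_new - g
        if abs(s) > 0.05:                      # skip noise-dominated pairs
            h = min(max(y / s, delta), c_max)  # regularized secant estimate
        x, g = x_new, g_new
    return x

x_hat = reg_stochastic_quasi_newton(4.0)
```

The clipping plays the role of RES's regularization: without a lower bound on h, a noisy curvature estimate could produce arbitrarily large steps.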
Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method
NASA Astrophysics Data System (ADS)
Sun, Yong; Meng, Zhaohai; Li, Fengting
2018-03-01
Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing, so a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. Generally, inversion processing is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest descent gradient algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm has clear advantages. Simulated and field FTG data were used to show the application value of this new fast inversion method.
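The flavor of a non-monotone gradient-descent iteration can be sketched with Barzilai-Borwein step lengths and a watchdog rule that accepts steps as long as they stay below the maximum of the recent objective values (a toy ill-conditioned quadratic stands in for the FTG misfit; this is not the authors' scheme).

```python
def f(x):
    # toy ill-conditioned quadratic misfit: f = 0.5 * (x0^2 + 50 * x1^2)
    return 0.5 * (x[0] ** 2 + 50.0 * x[1] ** 2)

def grad_f(x):
    return [x[0], 50.0 * x[1]]

def bb_nonmonotone(x, n_iters=60, window=5):
    """Barzilai-Borwein steps with a non-monotone acceptance rule:
    a step is kept if it improves on the max of the last few values."""
    history = [f(x)]
    g = grad_f(x)
    alpha = 0.01                          # conservative first step length
    for _ in range(n_iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        if f(x_new) > max(history[-window:]) and alpha > 1e-6:
            alpha *= 0.5                  # shrink and retry on bad steps
            continue
        g_new = grad_f(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(a * b for a, b in zip(s, y))
        ss = sum(a * a for a in s)
        if sy > 1e-12:
            alpha = ss / sy               # BB1 step length
        x, g = x_new, g_new
        history.append(f(x))
    return x

x_min = bb_nonmonotone([1.0, 1.0])
```

Allowing the objective to rise temporarily is what lets the BB step lengths run unhindered; on ill-conditioned problems this typically beats strictly monotone steepest descent by a wide margin.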
Feature Clustering for Accelerating Parallel Coordinate Descent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Chad; Tewari, Ambuj; Halappanavar, Mahantesh
2012-12-06
We demonstrate an approach for accelerating calculation of the regularization path for L1 sparse logistic regression problems. We show the benefit of feature clustering as a preconditioning step for parallel block-greedy coordinate descent algorithms.
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/ infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Initial result on developing an onboard table lookup method to obtain almost fuel optimal solutions in real-time.
Dong, Bing; Li, Yan; Han, Xin-Li; Hu, Bin
2016-09-02
For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window leads to large amounts of dynamic aberration varying with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, aberration is captured using Lukosz modes, and we use the low spatial frequency content of the image spectral density as the metric function. Simulations show that aberrations induced by the conformal window are dominated by a few low-order Lukosz modes. To optimize the dynamic correction, only the dominant Lukosz modes need be corrected, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of the conformal window at a scanning rate of 10 degrees per second, and a 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10^-5 with optimized correction and 1.427 × 10^-5 with un-optimized correction. We also demonstrate that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method.
MER-DIMES : a planetary landing application of computer vision
NASA Technical Reports Server (NTRS)
Cheng, Yang; Johnson, Andrew; Matthies, Larry
2005-01-01
During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter, and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify the image data to a level ground plane. Feature selection and tracking are employed on the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust, and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.
Comparative analysis of algorithms for lunar landing control
NASA Astrophysics Data System (ADS)
Zhukov, B. I.; Likhachev, V. N.; Sazonov, V. V.; Sikharulidze, Yu. G.; Tuchin, A. G.; Tuchin, D. A.; Fedotov, V. P.; Yaroshevskii, V. S.
2015-11-01
For the descent from the pericenter of a prelanding circumlunar orbit, three algorithms for the control of lander motion are compared. These algorithms use various combinations of terminal and programmed control in a trajectory consisting of three parts: main braking, precision braking, and descent at constant velocity. In the first approximation, autonomous navigational measurements are taken into account, and the disturbances generated by movement of the fuel in the tanks are estimated. Estimates of the landing-placement accuracy, fuel consumption, and satisfaction of the conditions for a safe lunar landing are obtained.
Air-Traffic Controllers Evaluate The Descent Advisor
NASA Technical Reports Server (NTRS)
Tobias, Leonard; Volckers, Uwe; Erzberger, Heinz
1992-01-01
Report describes a study of the Descent Advisor algorithm: a software automation aid intended to assist air-traffic controllers in spacing traffic and meeting specified times of arrival. Based partly on mathematical models of weather conditions and aircraft performance, it generates suggested clearances, including top-of-descent points and speed-profile data, to attain these objectives. The study focused on operational characteristics, with specific attention to how the algorithm can be used for prediction, spacing, and metering.
Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji
2013-04-01
Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient-methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
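A hedged scalar sketch of the noise-shaping idea (the circuit-level details are replaced by a toy quadratic whose minimum falls between DAC levels): the quantization error of each update is accumulated and fed back into the next one, so the average applied update tracks the ideal gradient step even though every individual update lies on the coarse DAC grid.

```python
def quantize(v, lsb):
    """Round to the DAC's least-significant-bit grid."""
    return round(v / lsb) * lsb

def sigma_delta_descent(grad, x0, lr=0.1, lsb=0.05, n_iters=200):
    """Gradient descent where each applied update must lie on a coarse
    DAC grid; first-order noise shaping feeds the residual quantization
    error back into the next update."""
    x, err = x0, 0.0
    for _ in range(n_iters):
        desired = -lr * grad(x) + err     # ideal update plus error feedback
        applied = quantize(desired, lsb)  # what the DAC can actually apply
        err = desired - applied           # residual shaped into next step
        x += applied
    return x

# Toy objective 0.5 * (x - 0.513)^2: the optimum 0.513 sits between the
# 0.05-spaced DAC levels, so plain quantized descent would stall early.
x_cal = sigma_delta_descent(lambda x: 2.0 * (x - 0.513), 0.0)
```

The parameter settles into a small limit cycle around the nearest grid points to the true optimum, which is the best a quantized actuator can do; the shaping pushes the quantization noise away from the slow adaptation dynamics.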
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
Smart-Divert Powered Descent Guidance to Avoid the Backshell Landing Dispersion Ellipse
NASA Technical Reports Server (NTRS)
Carson, John M.; Acikmese, Behcet
2013-01-01
A smart-divert capability has been added into the Powered Descent Guidance (PDG) software originally developed for Mars pinpoint and precision landing. The smart-divert algorithm accounts for the landing dispersions of the entry backshell, which separates from the lander vehicle at the end of the parachute descent phase and prior to powered descent. The smart-divert PDG algorithm utilizes the onboard fuel and vehicle thrust vectoring to mitigate landing error in an intelligent way: ensuring that the lander touches down with minimum fuel usage at the minimum distance from the desired landing location that also avoids impact by the descending backshell. The smart-divert PDG software implements a computationally efficient, convex formulation of the powered-descent guidance problem to provide pinpoint or precision-landing guidance solutions that are fuel-optimal and satisfy physical thrust bound and pointing constraints, as well as position and speed constraints. The initial smart-divert implementation enforced a lateral-divert corridor parallel to the ground velocity vector; this was based on guidance requirements for MSL (Mars Science Laboratory) landings. This initial method was overly conservative since the divert corridor was infinite in the down-range direction despite the backshell landing inside a calculable dispersion ellipse. Basing the divert constraint instead on a local tangent to the backshell dispersion ellipse in the direction of the desired landing site provides a far less conservative constraint. The resulting enhanced smart-divert PDG algorithm avoids impact with the descending backshell and has reduced conservatism.
Development of an analytical guidance algorithm for lunar descent
NASA Astrophysics Data System (ADS)
Chomel, Christina Tvrdik
In recent years, NASA has indicated a desire to return humans to the moon. With NASA planning manned missions within the next couple of decades, the concept development for these lunar vehicles has begun. The guidance, navigation, and control (GN&C) computer programs that will perform the function of safely landing a spacecraft on the moon are part of that development. The lunar descent guidance algorithm takes the horizontally oriented spacecraft from orbital speeds hundreds of kilometers from the desired landing point to the landing point at an almost vertical orientation and very low speed. Existing lunar descent GN&C algorithms date back to the Apollo era with little work available for implementation since then. Though these algorithms met the criteria of the 1960's, they are cumbersome today. At the basis of the lunar descent phase are two elements: the targeting, which generates a reference trajectory, and the real-time guidance, which forces the spacecraft to fly that trajectory. The Apollo algorithm utilizes a complex, iterative, numerical optimization scheme for developing the reference trajectory. The real-time guidance utilizes this reference trajectory in the form of a quartic rather than a more general format to force the real-time trajectory errors to converge to zero; however, there exist no guarantees under any conditions for this convergence. The proposed algorithm implements a purely analytical targeting algorithm used to generate two-dimensional trajectories "on-the-fly"' or to retarget the spacecraft to another landing site altogether. It is based on the analytical solutions to the equations for speed, downrange, and altitude as a function of flight path angle and assumes two constant thrust acceleration curves. 
The proposed real-time guidance algorithm has at its basis the three-dimensional non-linear equations of motion and a control law that is proven to converge under certain conditions through Lyapunov analysis to a reference trajectory formatted as a function of downrange, altitude, speed, and flight path angle. The two elements of the guidance algorithm are joined in Monte Carlo analysis to prove their robustness to initial state dispersions and mass and thrust errors. The robustness of the retargeting algorithm is also demonstrated.
Magnetotelluric inversion via reverse time migration algorithm of seismic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ha, Taeyoung; Shin, Changsoo
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently, without explicitly forming the Jacobian matrix, by exploiting the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
Dong, Bing; Li, Yan; Han, Xin-li; Hu, Bin
2016-01-01
For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window introduces large dynamic aberrations that vary with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, aberrations are represented by Lukosz modes, and the low spatial frequency content of the image spectral density is used as the metric function. Simulations show that aberrations induced by the conformal window are dominated by a few low-order Lukosz modes. To speed up dynamic correction, only the dominant Lukosz modes need be corrected, and the image size can be reduced to shorten the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of a conformal window at a scanning rate of 10 degrees per second, and a 52-channel DM is used for correction. For a 128 × 128 image, the mean image sharpness during dynamic correction is 1.436 × 10−5 with the optimized correction and 1.427 × 10−5 with the un-optimized correction. We also demonstrate that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method. PMID:27598161
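The metric described above, the low spatial frequency content of the image spectral density, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the cutoff radius and the normalization by total power are assumptions.

```python
import numpy as np

def low_freq_metric(image, radius=8):
    # Sharpness metric for model-based WSLAO: the fraction of the image
    # spectral density contained at low spatial frequencies. `radius`
    # (the low-frequency cutoff, in frequency-domain pixels) is an
    # assumed parameter, not a value from the paper.
    F = np.fft.fftshift(np.fft.fft2(image))
    psd = np.abs(F)**2
    ny, nx = image.shape
    y, x = np.ogrid[:ny, :nx]
    mask = (y - ny//2)**2 + (x - nx//2)**2 <= radius**2
    return psd[mask].sum() / psd.sum()
```

Because only a small disc of the spectrum is summed, the image can be cropped before the FFT, which is consistent with the paper's observation that reducing the image size shortens the metric computation.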
A pipeline leakage locating method based on the gradient descent algorithm
NASA Astrophysics Data System (ADS)
Li, Yulong; Yang, Fan; Ni, Na
2018-04-01
A pipeline leakage locating method based on the gradient descent algorithm is proposed in this paper. The method has low computational complexity, which makes it suitable for practical application. We built an experimental environment in a real underground pipeline network and gathered a large amount of field data over the past three months; every leak point was confirmed by excavation. Results show that the positioning error is within 0.4 m, the false-alarm and missed-alarm rates are both under 20%, and the calculation time is below 5 seconds.
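A gradient descent leak locator of this kind can be sketched for the simplest sensing geometry: two sensors at the pipe ends and a leak wave whose arrival-time difference depends on the leak position. The time-difference model, step size, and iteration count below are illustrative assumptions, not details from the paper.

```python
def locate_leak(L, v, dt_meas, lr=1e5, iters=60):
    # Gradient descent on the squared arrival-time residual. With sensors
    # at both ends of a pipe of length L and wave speed v, a leak at
    # position x produces a time difference dt = (L - 2*x)/v between the
    # two sensors. All parameter values here are illustrative assumptions.
    x = L/2.0                                 # start at the midpoint
    for _ in range(iters):
        residual = (L - 2.0*x)/v - dt_meas
        grad = residual*(-2.0/v)              # d/dx of 0.5*residual**2
        x -= lr*grad
    return x
```

For this quadratic residual the iteration contracts geometrically toward the closed-form position (L - v*dt)/2; the gradient descent form is what generalizes to networks with many sensors and branches.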
Nonuniformity correction for an infrared focal plane array based on diamond search block matching.
Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian
2016-05-01
In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring degrade the correction quality severely. In this paper, an improved algorithm based on the diamond search block matching algorithm and the adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update correction parameters. During the process of gradient descent, the local standard deviation and a threshold are utilized to control the learning rate to avoid the accumulation of matching error. Finally, the nonuniformity correction would be realized by a linear model with updated correction parameters. The performance of the proposed algorithm is thoroughly studied with four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with less ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
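The correction-parameter update described above can be sketched per pixel: the linear model gain*raw + offset is nudged by one gradient descent step toward the matched value from the adjacent frame. The fixed learning rate is an assumption; in the paper it is adapted using the local standard deviation and a threshold.

```python
def nuc_lms_update(gain, offset, raw, target, lr=0.05):
    # One gradient descent update of the per-pixel linear correction
    # model corrected = gain*raw + offset, pulled toward `target`, the
    # corresponding value from the matched block in the adjacent frame.
    # In the paper the learning rate is adapted from the local standard
    # deviation; here it is a fixed assumed value.
    err = gain*raw + offset - target
    return gain - lr*err*raw, offset - lr*err
```

Repeated over many matched transform pairs, this LMS-style update converges to the gain and offset that make the corrected pixel consistent with its matches.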
Trajectory Guidance for Mars Robotic Precursors: Aerocapture, Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Sostaric, Ronald R.; Zumwalt, Carlie; Garcia-Llama, Eduardo; Powell, Richard; Shidner, Jeremy
2011-01-01
Future crewed missions to Mars require improvements in landed mass capability beyond that which is possible using state-of-the-art Mars Entry, Descent, and Landing (EDL) systems. Current systems are capable of an estimated maximum landed mass of 1-1.5 metric tons (MT), while human Mars studies require 20-40 MT. A set of technologies were investigated by the EDL Systems Analysis (SA) project to assess the performance of candidate EDL architectures. A single architecture was selected for the design of a robotic precursor mission, entitled Exploration Feed Forward (EFF), whose objective is to demonstrate these technologies. In particular, inflatable aerodynamic decelerators (IADs) and supersonic retro-propulsion (SRP) have been shown to have the greatest mass benefit and extensibility to future exploration missions. In order to evaluate these technologies and develop the mission, candidate guidance algorithms have been coded into the simulation for the purposes of studying system performance. These guidance algorithms include aerocapture, entry, and powered descent. The performance of the algorithms for each of these phases in the presence of dispersions has been assessed using a Monte Carlo technique.
Simulation Test Of Descent Advisor
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.
1991-01-01
Report describes piloted-simulation test of Descent Advisor (DA), subsystem of larger automation system being developed to assist human air-traffic controllers and pilots. Focuses on results of piloted simulation, in which airline crews executed controller-issued descent advisories along standard curved-path arrival routes. Crews able to achieve arrival-time precision of plus or minus 20 seconds at metering fix. Analysis of errors generated in turns resulted in further enhancements of algorithm to increase accuracies of its predicted trajectories. Evaluations by pilots indicate general support for DA concept and provide specific recommendations for improvement.
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
Design of automation tools for management of descent traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1988-01-01
The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display consisting of a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system consisting of descent advisor algorithm, a library of aircraft performance models, national airspace system data bases, and interactive display software has been implemented on a workstation made by Sun Microsystems, Inc. 
It is planned to use this configuration in operational evaluations at an en route center.
Dang, C; Xu, L
2001-03-01
In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
Accelerating IMRT optimization by voxel sampling
NASA Astrophysics Data System (ADS)
Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.
2007-12-01
This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
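The voxel-sampling idea can be sketched on a generic least-squares objective: each step estimates the gradient from a random subset of rows (the sampled voxels) and rescales it to remain unbiased. The sampling fraction, step size, and iteration count are assumed values, and the real IMRT objective is of course not this simple quadratic.

```python
import numpy as np

def sampled_descent(A, b, x0, frac=0.25, lr=0.002, iters=5000, seed=0):
    # Steepest descent on ||A x - b||^2 where, as in the voxel-sampling
    # idea, the gradient is estimated at each step from a random subset
    # of rows ("voxels") and rescaled to stay unbiased. The sampling
    # fraction, step size and iteration count are assumed values.
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    k = max(1, int(frac*m))
    x = x0.astype(float).copy()
    for _ in range(iters):
        rows = rng.choice(m, size=k, replace=False)
        resid = A[rows] @ x - b[rows]
        grad = 2.0*(m/k)*(A[rows].T @ resid)   # unbiased gradient estimate
        x -= lr*grad
    return x
```

Selecting different rows on each step is what lets the randomized iteration still find a solution of the full problem, mirroring the paper's argument.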
NASA Technical Reports Server (NTRS)
Cheng, Yang
2007-01-01
This viewgraph presentation reviews the use of the Descent Image Motion Estimation System (DIMES) for the descent of a spacecraft onto the surface of Mars. This system was previously used to assist in the landing of the MER spacecraft. The overall algorithm is reviewed, and views of the hardware and views from Spirit's descent are shown. Had DIMES not been used on Spirit, the impact velocity would have been at the limit of the airbag capability and Spirit might have bounced into Endurance Crater. By using DIMES, the velocity was reduced to well within the bounds of the airbag performance, and Spirit arrived safely at Mars. Views from Opportunity's descent are also shown. The system for detecting and avoiding hazards is reviewed next. Landmark-based spacecraft pinpoint landing is also reviewed, and a cartoon version of a pinpoint landing and its various points is shown. Mars's surface has a large number of craters, which are ideal landmarks; according to the literature on Martian cratering, 60% of the Martian surface is heavily cratered. The ideal crater landmarks for pinpoint landing are between 50 and 1000 meters in diagonal, and the ideal altitude for position estimation is greater than 2 km above the ground. The algorithms used to detect and match craters are reviewed.
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for leaning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
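The core of such an algorithm can be sketched with a linear scorer: on each step a random pair is drawn and the weights are moved to reduce the least squares ranking loss on that pair. The paper's method is kernel-based with a step-size schedule and regularization; this linear, constant-step variant and all its parameter values are simplifying assumptions.

```python
import numpy as np

def sgd_rank(X, y, lr=0.02, steps=20000, seed=0):
    # Stochastic gradient descent for ranking with the least squares
    # loss: each step draws a random pair (i, j) and moves the scoring
    # weights to reduce (w.(xi - xj) - (yi - yj))^2. A linear-scorer
    # sketch of the paper's kernel-based algorithm; the step size and
    # step count are assumed values.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        i, j = rng.integers(n, size=2)
        diff = X[i] - X[j]
        err = w @ diff - (y[i] - y[j])
        w -= lr*err*diff
    return w
```

Because the loss depends only on score differences, the learned w induces a ranking of new items via the scores X @ w.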
Functional Equivalence Acceptance Testing of FUN3D for Entry Descent and Landing Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Wood, William A.; Kleb, William L.; Alter, Stephen J.; Glass, Christopher E.; Padilla, Jose F.; Hammond, Dana P.; White, Jeffery A.
2013-01-01
The functional equivalence of the unstructured grid code FUN3D to the the structured grid code LAURA (Langley Aerothermodynamic Upwind Relaxation Algorithm) is documented for applications of interest to the Entry, Descent, and Landing (EDL) community. Examples from an existing suite of regression tests are used to demonstrate the functional equivalence, encompassing various thermochemical models and vehicle configurations. Algorithm modifications required for the node-based unstructured grid code (FUN3D) to reproduce functionality of the cell-centered structured code (LAURA) are also documented. Challenges associated with computation on tetrahedral grids versus computation on structured-grid derived hexahedral systems are discussed.
Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models
Jiang, Dingfeng; Huang, Jian
2013-01-01
Recent studies have demonstrated theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to the existing algorithms that use local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss, but does not use any approximation to the penalty. This strategy makes it possible to avoid the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish theoretical convergence property of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for the penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
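The MMCD idea, majorizing the negative log-likelihood by a quadratic while leaving the penalty exact, can be sketched as below. For brevity the penalty shown is the lasso soft-threshold rather than the paper's SCAD or MCP (whose univariate updates are also closed-form); the column-scaling assumption makes the curvature bound 1/4 valid per coordinate.

```python
import numpy as np

def mmcd_lasso_logistic(X, y, lam, sweeps=100):
    # Majorization-minimization by coordinate descent: the logistic
    # negative log-likelihood is majorized by a quadratic with the
    # uniform curvature bound 1/4 (since p*(1-p) <= 1/4), so each
    # coordinate update is a closed-form threshold. The paper uses the
    # SCAD and MCP penalties; the soft-threshold below is the lasso
    # analogue, shown for brevity. Columns of X are assumed scaled so
    # that (1/n)*||X_j||^2 <= 1.
    n, d = X.shape
    beta = np.zeros(d)
    v = 0.25                                     # curvature bound
    for _ in range(sweeps):
        for j in range(d):
            p = 1.0/(1.0 + np.exp(-(X @ beta)))
            g = X[:, j] @ (p - y)/n              # coordinate gradient
            z = v*beta[j] - g
            beta[j] = np.sign(z)*max(abs(z) - lam, 0.0)/v
    return beta
```

Each coordinate step minimizes a valid majorizer, so the penalized objective never increases, which is the monotonicity property behind the MMCD convergence analysis.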
Evaluating the accuracy performance of Lucas-Kanade algorithm in the circumstance of PIV application
NASA Astrophysics Data System (ADS)
Pan, Chong; Xue, Dong; Xu, Yang; Wang, JinJun; Wei, RunJie
2015-10-01
The Lucas-Kanade (LK) algorithm, originally used in the optical flow field, has recently received increasing attention from the PIV community due to its computational efficiency under GPU acceleration. Although applications of this algorithm are continuously emerging, a systematic performance evaluation is still lacking; this forms the primary aim of the present work. Three warping schemes in the LK family (forward, inverse, and symmetric warping) are evaluated in a prototype flow consisting of a hierarchy of multiple two-dimensional vortices. Second-order Newton descent is also considered. The accuracy and efficiency of all these LK variants are investigated over a wide range of influential parameters. It is found that the constant-displacement constraint, a necessary building block for GPU acceleration, is the most critical issue affecting the LK algorithm's accuracy, and it can be partially ameliorated by using second-order Newton descent. Moreover, symmetric warping outperforms the other two warping schemes in accuracy, robustness to noise, convergence speed, and tolerance to displacement gradient, and might be the first choice when applying the LK algorithm to PIV measurement.
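The constant-displacement constraint at the heart of the evaluation can be shown in its simplest form: one LK step that solves the 2x2 normal equations built from spatial and temporal gradients over a window, with no warping or iteration. This is a minimal baseline sketch, not any of the paper's warping variants.

```python
import numpy as np

def lk_displacement(img1, img2):
    # Baseline Lucas-Kanade step under the constant-displacement
    # constraint discussed in the paper: solve the 2x2 normal equations
    # built from spatial and temporal gradients over the whole window
    # (no warping, no iteration; a minimal sketch).
    Ix = np.gradient(img1, axis=1)
    Iy = np.gradient(img1, axis=0)
    It = img2 - img1
    A = np.array([[np.sum(Ix*Ix), np.sum(Ix*Iy)],
                  [np.sum(Ix*Iy), np.sum(Iy*Iy)]])
    rhs = -np.array([np.sum(Ix*It), np.sum(Iy*It)])
    return np.linalg.solve(A, rhs)          # estimated (dx, dy)
```

The forward, inverse, and symmetric warping schemes compared in the paper wrap this step in an iterative loop that resamples one or both images before re-solving.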
Fan, Bingfei; Li, Qingguo; Wang, Chao; Liu, Tao
2017-01-01
Magnetic and inertial sensors have been widely used to estimate the orientation of human segments due to their low cost, compact size and light weight. However, the accuracy of the estimated orientation is easily affected by external factors, especially when the sensor is used in an environment with magnetic disturbances. In this paper, we propose an adaptive method to improve the accuracy of orientation estimations in the presence of magnetic disturbances. The method is based on existing gradient descent algorithms, and it is performed prior to sensor fusion algorithms. The proposed method includes stationary state detection and magnetic disturbance severity determination. The stationary state detection makes this method immune to magnetic disturbances in stationary state, while the magnetic disturbance severity determination helps to determine the credibility of magnetometer data under dynamic conditions, so as to mitigate the negative effect of the magnetic disturbances. The proposed method was validated through experiments performed on a customized three-axis instrumented gimbal with known orientations. The error of the proposed method and the original gradient descent algorithms were calculated and compared. Experimental results demonstrate that in stationary state, the proposed method is completely immune to magnetic disturbances, and in dynamic conditions, the error caused by magnetic disturbance is reduced by 51.2% compared with original MIMU gradient descent algorithm. PMID:28534858
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
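The non-homogeneous update idea, focusing computation where it is most needed, can be sketched on a small quadratic with a greedy (Gauss-Southwell) visit order. This selection rule is a simplified stand-in for the NH-ICD criterion, and the toy quadratic stands in for the CT reconstruction objective.

```python
import numpy as np

def greedy_icd(A, b, iters=2000):
    # Coordinate descent on the quadratic 0.5*x'Ax - b'x (A symmetric
    # positive definite), with a non-homogeneous visit order: always
    # update the coordinate with the largest gradient magnitude, i.e.
    # focus computation where it is most needed. This Gauss-Southwell
    # rule is a simplified stand-in for the NH-ICD selection criterion.
    n = len(b)
    x = np.zeros(n)
    g = -b.astype(float).copy()        # gradient A@x - b at x = 0
    for _ in range(iters):
        j = int(np.argmax(np.abs(g)))
        step = -g[j]/A[j, j]           # exact minimizer along coordinate j
        x[j] += step
        g += step*A[:, j]              # rank-one gradient refresh
    return x
```

The rank-one gradient refresh is what keeps greedy selection cheap: each update costs one column operation rather than a full gradient recomputation.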
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1984-01-01
Concepts to save fuel while preserving airport capacity by combining time based metering with profile descent procedures were developed. A computer algorithm is developed to provide the flight crew with the information needed to fly from an entry fix to a metering fix and arrive there at a predetermined time, altitude, and airspeed. The flight from the metering fix to an aim point near the airport was calculated. The flight path is divided into several descent and deceleration segments. Descents are performed at constant Mach numbers or calibrated airspeed, whereas decelerations occur at constant altitude. The time and distance associated with each segment are calculated from point mass equations of motion for a clean configuration with idle thrust. Wind and nonstandard atmospheric properties have a large effect on the flight path. It is found that uncertainty in the descent Mach number has a large effect on the predicted flight time. Of the possible combinations of Mach number and calibrated airspeed for a descent, only small changes were observed in the fuel consumed.
Regularization Paths for Conditional Logistic Regression: The clogitL1 Package.
Reid, Stephen; Tibshirani, Rob
2014-07-01
We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso [Formula: see text] and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by.
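The cyclic coordinate descent engine referred to above can be sketched for plain least squares, where each univariate problem has the closed-form soft-threshold solution. This linear sketch omits the conditional-logistic likelihood, the elastic net term, and the sequential strong rules that clogitL1 adds on top.

```python
import numpy as np

def cd_lasso(X, y, lam, sweeps=200):
    # Cyclic coordinate descent for the lasso, the same engine that
    # clogitL1 applies to the conditional logistic likelihood; shown
    # here for plain least squares, where each univariate problem has
    # the closed-form soft-threshold solution.
    n, d = X.shape
    beta = np.zeros(d)
    r = y.astype(float).copy()              # residual y - X @ beta
    col_ss = (X**2).sum(axis=0)/n
    for _ in range(sweeps):
        for j in range(d):
            r += X[:, j]*beta[j]            # add back coordinate j
            z = X[:, j] @ r/n
            beta[j] = np.sign(z)*max(abs(z) - lam, 0.0)/col_ss[j]
            r -= X[:, j]*beta[j]
    return beta
```

Maintaining the residual incrementally (warm across coordinates and sweeps) is the same warm-start trick that makes regularization paths cheap to compute over a grid of lam values.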
A modified three-term PRP conjugate gradient algorithm for optimization models.
Wu, Yanlin
2017-01-01
The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, where their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but the proof fails for the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property also holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the step length is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
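The three-term PRP direction underlying this family can be sketched directly; by construction it satisfies g.d = -||g||^2, which is exactly the sufficient descent property that holds with no line search assumption. This is the Zhang et al. style direction, not the modified variant proposed in the paper.

```python
import numpy as np

def three_term_prp_direction(g, g_prev, d_prev):
    # Three-term PRP search direction in the style of Zhang et al.:
    # d = -g + beta*d_prev - theta*y, with y = g - g_prev. The identity
    # g.d = -||g||^2 then holds by construction (the beta and theta
    # terms cancel in the inner product), which is the sufficient
    # descent property that needs no line search assumption.
    y = g - g_prev
    denom = g_prev @ g_prev
    beta = (g @ y)/denom
    theta = (g @ d_prev)/denom
    return -g + beta*d_prev - theta*y
```

Expanding g.d shows the cancellation: the beta term contributes (g.y)(g.d_prev)/||g_prev||^2 and the theta term subtracts the identical quantity, leaving -||g||^2 exactly.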
Photonic Lantern Adaptive Spatial Mode Control in LMA Fiber Amplifiers using SPGD
2015-12-15
Abstract: We demonstrate adaptive spatial mode control (ASMC) in few-moded, double-clad large mode area (LMA) fiber amplifiers by using an ... combination resulting in a single fundamental mode at the output is achieved. © 2015 Optical Society of America
On the efficiency of a randomized mirror descent algorithm in online optimization problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.
2015-04-01
A randomized online version of the mirror descent method is proposed. It differs from the existing versions in its randomization method: randomization is performed at the stage of projecting a subgradient of the function being optimized onto the unit simplex, rather than at the stage of computing a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and to propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with a sparse matrix.
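A componentwise randomized mirror descent of this flavor can be sketched on the simplex with the entropic (multiplicative) update. The linear objective, step size, and step count below are illustrative assumptions; the paper's scheme randomizes at the projection stage of a general online problem.

```python
import numpy as np

def randomized_mirror_descent(c, steps=600, lr=0.1, seed=7):
    # Entropic mirror descent on the unit simplex for the linear
    # objective c @ x, with componentwise randomization: each step
    # samples one coordinate of the subgradient and rescales it to stay
    # unbiased. Step size and step count are assumed values.
    rng = np.random.default_rng(seed)
    n = len(c)
    x = np.full(n, 1.0/n)
    for _ in range(steps):
        i = rng.integers(n)
        g = np.zeros(n)
        g[i] = n*c[i]                  # unbiased one-component estimate
        x = x*np.exp(-lr*g)            # multiplicative (entropic) update
        x /= x.sum()
    return x
```

Because only one coordinate of the subgradient is touched per step, each iteration is cheap, which is the source of the efficiency claimed for sparse matrix games.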
NASA Technical Reports Server (NTRS)
Knox, C. E.
1984-01-01
A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.
A Revised Trajectory Algorithm to Support En Route and Terminal Area Self-Spacing Concepts
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2010-01-01
This document describes an algorithm for the generation of a four dimensional trajectory. Input data for this algorithm are similar to an augmented Standard Terminal Arrival (STAR) with the augmentation in the form of altitude or speed crossing restrictions at waypoints on the route. This version of the algorithm accommodates descent Mach values that are different from the cruise Mach values. Wind data at each waypoint are also inputs into this algorithm. The algorithm calculates the altitude, speed, along path distance, and along path time for each waypoint.
Research on particle swarm optimization algorithm based on optimal movement probability
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Han; He, Baofeng
2017-01-01
The particle swarm optimization (PSO) algorithm can improve control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used to train feed-forward neural networks, its search efficiency is low and it easily falls into local convergence. An improved particle swarm optimization algorithm based on error back-propagation gradient descent is therefore proposed. Ranking the particles by fitness allows the optimization problem to be considered as a whole, while error back-propagation gradient descent is used to train the BP neural network. Each particle updates its velocity and position according to its individual optimum and the global optimum, learning more from the social optimum and less from its own optimum, which keeps particles from falling into local optima; using the gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that in its initial stage the algorithm converges rapidly toward the global optimal solution and then remains close to it, and that for the same running time it has faster convergence and better search performance, improving the convergence speed and, especially, the later-stage search efficiency.
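A minimal sketch of marrying PSO with a gradient-descent step follows. The constants, the order of the updates, and the sphere test function are this sketch's choices for illustration; the paper applies the idea to BP neural-network training rather than to an analytic objective.

```python
import numpy as np

def pso_gd(f, grad, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5,
           lr=0.01, seed=0):
    """Illustrative hybrid: standard PSO velocity/position updates plus a
    small gradient-descent refinement applied to each particle."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pcost = np.apply_along_axis(f, 1, x)
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # gradient refinement (evaluated at the pre-update positions)
        x = x + v - lr * np.array([grad(p) for p in x])
        cost = np.apply_along_axis(f, 1, x)
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest

sphere = lambda p: float(np.sum(p ** 2))
gbest = pso_gd(sphere, lambda p: 2 * p)
```

The gradient term pulls each particle downhill between swarm updates, which is the mechanism the abstract credits with faster local search.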
Guidance and Control Algorithms for the Mars Entry, Descent and Landing Systems Analysis
NASA Technical Reports Server (NTRS)
Davis, Jody L.; Dwyer Cianciolo, Alicia M.; Powell, Richard W.; Shidner, Jeremy D.; Garcia-Llama, Eduardo
2010-01-01
The purpose of the Mars Entry, Descent and Landing Systems Analysis (EDL-SA) study was to identify feasible technologies that will enable human exploration of Mars, specifically to deliver large payloads to the Martian surface. This paper focuses on the methods used to guide and control two of the contending technologies, a mid-lift-to-drag (L/D) rigid aeroshell and a hypersonic inflatable aerodynamic decelerator (HIAD), through the entry portion of the trajectory. The Program to Optimize Simulated Trajectories II (POST2) is used to simulate and analyze the trajectories of the contending technologies and guidance and control algorithms. Three guidance algorithms are discussed in this paper: EDL theoretical guidance, Numerical Predictor-Corrector (NPC) guidance and Analytical Predictor-Corrector (APC) guidance. EDL-SA also considered two forms of control: bank angle control, similar to that used by Apollo and the Space Shuttle, and a center-of-gravity (CG) offset control. This paper presents the performance comparison of these guidance algorithms and summarizes the results as they impact the technology recommendations for future study.
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibilities that deteriorate the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed to cope with the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed optimization algorithm provided an improved result compared to the other existing methodologies for finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying the proposed optimization algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.
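The coordinate-descent core of such an approach can be sketched against a black-box cost function standing in for the process simulator. The probe-and-halve step rule and the quadratic test objective below are this sketch's assumptions; the paper's HMCD adds further modifications on top of plain coordinate descent.

```python
def coordinate_descent(cost, x, steps, iters=50):
    """Minimal coordinate-descent sketch for a black-box objective such as
    a process-simulator cost: probe each variable up/down by its step,
    keep improvements, and halve all steps when a full sweep stalls."""
    best = cost(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for delta in (steps[i], -steps[i]):
                trial = list(x)
                trial[i] += delta
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            steps = [s / 2 for s in steps]  # refine once no move helps
    return x, best

# toy stand-in for a simulator: quadratic bowl with minimum at (1, -2, 3)
cost = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2 + (v[2] - 3) ** 2
x, best = coordinate_descent(cost, [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```

Because each probe is a single objective evaluation, the same loop works when `cost` wraps an external simulator call rather than a formula.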
Apollo LM guidance computer software for the final lunar descent.
NASA Technical Reports Server (NTRS)
Eyles, D.
1973-01-01
In all manned lunar landings to date, the lunar module Commander has taken partial manual control of the spacecraft during the final stage of the descent, below roughly 500 ft altitude. This report describes programs developed at the Charles Stark Draper Laboratory, MIT, for use in the LM's guidance computer during the final descent. At this time computational demands on the on-board computer are at a maximum, and particularly close interaction with the crew is necessary. The emphasis is on the design of the computer software rather than on justification of the particular guidance algorithms employed. After the computer and the mission have been introduced, the current configuration of the final landing programs and an advanced version developed experimentally by the author are described.
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
3D-Web-GIS RFID location sensing system for construction objects.
Ko, Chien-Ho
2013-01-01
Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
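A toy version of the SA-plus-gradient-descent idea for range-based positioning follows: annealing proposes random moves of the estimated tag position, and each proposal is refined by gradient steps on the squared range-error cost. The anchor layout, cooling schedule, and step sizes are invented for this sketch, not taken from the paper.

```python
import math, random

def locate(anchors, dists, iters=3000, seed=1):
    """Estimate a 2-D position from anchor-to-tag distances by simulated
    annealing over candidate positions, each refined by gradient descent
    (SA stabilizes the search; GD reduces the residual error)."""
    def cost(p):
        return sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, dists))

    def gd(p, lr=0.05, n=20):
        for _ in range(n):
            gx = gy = 0.0
            for (ax, ay), d in zip(anchors, dists):
                r = math.dist(p, (ax, ay))
                if r > 1e-9:
                    k = 2 * (r - d) / r      # d/dp of (r - d)^2
                    gx += k * (p[0] - ax)
                    gy += k * (p[1] - ay)
            p = (p[0] - lr * gx, p[1] - lr * gy)
        return p

    rng = random.Random(seed)
    p = (0.0, 0.0)
    c = cost(p)
    best, bc, T = p, c, 1.0
    for _ in range(iters):
        q = gd((p[0] + rng.gauss(0, 1), p[1] + rng.gauss(0, 1)))
        cq = cost(q)
        if cq < c or rng.random() < math.exp((c - cq) / max(T, 1e-9)):
            p, c = q, cq
            if c < bc:
                best, bc = p, c
        T *= 0.999                            # geometric cooling
    return best

anchors = [(0, 0), (10, 0), (0, 10)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
est = locate(anchors, dists)
```

With noiseless ranges the best-seen state converges to the true position; with noisy RFID ranges the same loop returns the least-squares fit.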
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
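In the spirit of the abstract above, here is a functional-gradient-descent sketch using the Welsch windowing function G(r) = exp(-r²/σ²). The bandwidth, learning rate, and fixed step count (standing in for the early-stopping rule) are this sketch's assumptions, not the paper's analysis.

```python
import numpy as np

def gaussian_K(a, b, bw):
    """Gaussian kernel matrix between 1-D sample vectors a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * bw ** 2))

def robust_kernel_gd(X, y, sigma=1.0, bw=0.15, lr=0.5, steps=5000):
    """Gradient descent in an RKHS under a robust loss: the windowing
    factor exp(-r^2/sigma^2) down-weights large residuals, and stopping
    after a fixed number of steps plays the early-stopping role, so a
    gross outlier is simply never fitted."""
    K = gaussian_K(X, X, bw)
    alpha = np.zeros(len(y))
    for _ in range(steps):
        r = y - K @ alpha                               # residuals
        alpha += lr * r * np.exp(-(r ** 2) / sigma ** 2) / len(y)
    return lambda x: gaussian_K(np.asarray(x, dtype=float), X, bw) @ alpha

X = np.linspace(0, 1, 20)
y = X.copy()
y[10] += 5.0                                            # one gross outlier
f = robust_kernel_gd(X, y)
clean = np.delete(np.arange(20), 10)
err = float(np.max(np.abs(f(X[clean]) - X[clean])))
```

The clean points are fitted accurately while the outlier's update is suppressed by its exponentially small weight, so the estimate near the contaminated point stays close to the underlying trend.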
An annealed chaotic maximum neural network for bipartite subgraph problem.
Wang, Jiahai; Tang, Zheng; Wang, Ronglong
2004-04-01
In this paper, based on the maximum neural network, we propose a new parallel algorithm that can help the maximum neural network escape from local minima by including transient chaotic neurodynamics for the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is NP-complete, is to remove the minimum number of edges in a given graph such that the remaining graph is bipartite. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without a parameter-tuning burden. However, the model tends to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanish, the proposed algorithm is fundamentally governed by gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem, outperforming the best existing parallel algorithms.
Hazard avoidance via descent images for safe landing
NASA Astrophysics Data System (ADS)
Yan, Ruicheng; Cao, Zhiguo; Zhu, Lei; Fang, Zhiwen
2013-10-01
In planetary or lunar landing missions, hazard avoidance is critical for landing safety. Therefore, it is very important to correctly detect hazards and effectively find a safe landing area during the last stage of descent. In this paper, we propose a passive-sensing-based HDA (hazard detection and avoidance) approach that uses descent images to lower the landing risk. In the hazard detection stage, a statistical probability model based on hazard similarity is adopted to evaluate the image and detect hazardous areas, so that a binary hazard image can be generated. Afterwards, a safety coefficient, which jointly utilizes the proportion of hazards in the local region and the hazard distribution within it, is proposed to find potential regions with fewer hazards in the binary hazard image. By using the safety coefficient in a coarse-to-fine procedure and combining it with a local ISD (intensity standard deviation) measure, the safe landing area is determined. The algorithm is evaluated and verified with many simulated downward-looking descent images rendered from lunar orbital satellite images.
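A simplified stand-in for the safety-coefficient step can be written directly on a binary hazard image. The window size, border handling, and proportion-only scoring below are this sketch's simplifications; the paper additionally weights the in-region hazard distribution and applies the score coarse-to-fine.

```python
import numpy as np

def safety_map(hazard, win=5):
    """For each pixel of a binary hazard image (1 = hazard), score the
    fraction of non-hazard pixels in a win x win neighborhood; borders
    are padded with 1 so off-image area counts as unsafe."""
    h, w = hazard.shape
    pad = win // 2
    padded = np.pad(hazard, pad, constant_values=1)
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = 1.0 - padded[i:i + win, j:j + win].mean()
    return out

hazard = np.zeros((20, 20), dtype=float)
hazard[0:8, 0:8] = 1.0          # a hazardous patch (e.g. rocks or shadow)
safe = safety_map(hazard)
r, c = np.unravel_index(safe.argmax(), safe.shape)   # safest candidate pixel
```

Pixels whose whole neighborhood is hazard-free score 1.0, so the argmax selects a candidate landing point well clear of the patch.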
Regression Analysis of Top of Descent Location for Idle-thrust Descents
NASA Technical Reports Server (NTRS)
Stell, Laurel; Bronsvoort, Jesper; McDonald, Greg
2013-01-01
In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. The independent variables cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also recorded or computed post-operations. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace. In particular, a model for TOD location that is linear in the independent variables would enable decision support tool human-machine interfaces for which a kinetic approach would be computationally too slow.
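The kind of first-order model the paper fits can be reproduced on synthetic data with ordinary least squares. All coefficients, units, and noise levels below are invented for the demonstration; only the structure (a linear model in altitude, speed, and wind, judged by its residual standard deviation) mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# synthetic stand-ins for the paper's predictors (units arbitrary here)
tod_alt = rng.uniform(30, 40, n)      # cruise (TOD) altitude
final_alt = rng.uniform(0, 10, n)     # final altitude
speed = rng.uniform(250, 320, n)      # descent speed
wind = rng.normal(0, 30, n)           # along-track wind
# invented "true" linear model plus noise matching the paper's 3.9-unit spread
dist = (3.2 * tod_alt - 2.9 * final_alt + 0.08 * speed + 0.05 * wind
        + rng.normal(0, 3.9, n))

X = np.column_stack([np.ones(n), tod_alt, final_alt, speed, wind])
beta, *_ = np.linalg.lstsq(X, dist, rcond=None)   # OLS fit
resid = dist - X @ beta
sigma = resid.std(ddof=X.shape[1])                # residual std deviation
```

The fitted `sigma` estimates the prediction spread the same way the paper's 3.9 nmi figure does, and refitting `beta` as new flights arrive is the online-learning use the abstract mentions.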
Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent.
Hu, En-Liang; Kwok, James T
2015-09-01
Nonparametric kernel learning (NPKL) is a flexible approach to learn the kernel matrix directly without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD). Low-rank approximation avoids the expensive positive semidefinite constraint in the SDP by replacing the kernel matrix variable with VᵀV, where V is a low-rank matrix. The resultant nonlinear optimization problem is then solved by BCD, which optimizes each column of V sequentially. It can be shown that the proposed algorithm has nice convergence properties and low computational complexities. Experiments on a number of real-world data sets show that the proposed algorithm outperforms state-of-the-art NPKL solvers.
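The low-rank-plus-BCD idea in miniature: replace the PSD kernel variable with VᵀV and cycle gradient updates over the columns of V. The Frobenius-norm fitting objective below is a toy stand-in for the actual NPKL objective, which carries side-information constraints.

```python
import numpy as np

def bcd_lowrank(K0, r=3, sweeps=100, lr=0.01, seed=0):
    """Fit K0 ~ V^T V by block coordinate descent over the columns of V.
    Writing the kernel as V^T V keeps every iterate positive semidefinite
    without an explicit SDP constraint."""
    rng = np.random.default_rng(seed)
    n = K0.shape[0]
    V = rng.normal(0, 0.1, (r, n))
    for _ in range(sweeps):
        for j in range(n):
            # column-j gradient of ||K0 - V^T V||_F^2 is -4 * V @ (K0 - V^T V)[:, j]
            g = -4 * V @ (K0 - V.T @ V)[:, j]
            V[:, j] -= lr * g
    return V

K0 = np.array([[2.0, 1.0], [1.0, 2.0]])       # target PSD "kernel"
V = bcd_lowrank(K0, r=2, sweeps=2000, lr=0.02)
err = float(np.linalg.norm(K0 - V.T @ V))
```

Each inner step touches one column of V only, which is what makes the per-iteration cost low compared to re-solving the SDP.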
Engineering description of the ascent/descent bet product
NASA Technical Reports Server (NTRS)
Seacord, A. W., II
1986-01-01
The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission-specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file, which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) is defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.
Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.
Liu, Li; Lin, Weikai; Jin, Mingwu
2015-01-01
In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm.
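A 1-D sketch of the alternating POCS/steepest-descent scheme follows. Tying the TV step size to the size of the POCS update is our simplified stand-in for the paper's adaptive rules, and the toy random system below is not CT data; only the two-stage structure matches the abstract.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of the smoothed 1-D total variation sum_i |x_{i+1} - x_i|."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def pocs_tv(A, b, iters=200):
    """Alternate an ART/POCS pass (data fidelity + non-negativity) with a
    TV steepest-descent step whose size adapts to the POCS change."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x_prev = x.copy()
        for i in range(A.shape[0]):               # ART: row-wise projections
            ai = A[i]
            x = x + (b[i] - ai @ x) / (ai @ ai) * ai
        x = np.maximum(x, 0.0)                    # non-negativity constraint
        step = 0.5 * np.linalg.norm(x - x_prev)   # adapt TV step to POCS change
        g = tv_grad(x)
        gn = np.linalg.norm(g)
        if gn > 0:
            x = x - step * g / gn                 # steepest-descent TV step
    return x

rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(10), np.ones(10), np.zeros(10)])
A = rng.normal(size=(15, 30))                     # underdetermined system
b = A @ truth
x = pocs_tv(A, b)
```

As the iterates settle, the POCS change shrinks and so does the TV step, which is the self-tuning behavior the paper automates for real CT geometry.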
Methodology Development for the Reconstruction of the ESA Huygens Probe Entry and Descent Trajectory
NASA Astrophysics Data System (ADS)
Kazeminejad, B.
2005-01-01
The European Space Agency's (ESA) Huygens probe performed a successful entry and descent into Titan's atmosphere on January 14, 2005, and landed safely on the satellite's surface. A methodology was developed, implemented, and tested to reconstruct the Huygens probe trajectory from its various science and engineering measurements, which were performed during the probe's entry and descent to the surface of Titan, Saturn's largest moon. The probe trajectory reconstruction is an essential effort that has to be done as early as possible in the post-flight data analysis phase as it guarantees a correct and consistent interpretation of all the experiment data and furthermore provides a reference set of data for "ground-truthing" orbiter remote sensing measurements. The entry trajectory is reconstructed from the measured probe aerodynamic drag force, which also provides a means to derive the upper atmospheric properties like density, pressure, and temperature. The descent phase reconstruction is based upon a combination of various atmospheric measurements such as pressure, temperature, composition, speed of sound, and wind speed. A significant amount of effort was spent to outline and implement a least-squares trajectory estimation algorithm that provides a means to match the entry and descent trajectory portions in case of discontinuity. An extensive test campaign of the algorithm is presented which used the Huygens Synthetic Dataset (HSDS) developed by the Huygens Project Scientist Team at ESA/ESTEC as a test bed. This dataset comprises the simulated sensor output (and the corresponding measurement noise and uncertainty) of all the relevant probe instruments. The test campaign clearly showed that the proposed methodology is capable of utilizing all the relevant probe data, and will provide the best estimate of the probe trajectory once real instrument measurements from the actual probe mission are available. 
As a further test case using actual flight data, the NASA Mars Pathfinder entry and descent trajectory and the spacecraft attitude were reconstructed from the 3-axis accelerometer measurements archived on the Planetary Data System. The results are consistent with previously published reconstruction efforts.
Recursive least-squares learning algorithms for neural networks
NASA Astrophysics Data System (ADS)
Lewis, Paul S.; Hwang, Jenq N.
1990-11-01
This paper presents the development of a pair of recursive least squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters; this is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block-diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule (6…).
BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation. The GDR algorithm is sometimes referred to as the backpropagation algorithm; however, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first-derivative, or gradient, information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second-derivative, or Hessian matrix, information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
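The classical RLS recursion at the heart of such algorithms can be shown on a plain linear-in-parameters model rather than a linearized MLP; the initialization constant and the synthetic data below are illustrative choices, not the paper's setup.

```python
import numpy as np

def rls(X, y, lam=1e3):
    """Recursive least squares: process samples one at a time, maintaining
    the weight vector w and an inverse-Hessian estimate P (the paper
    reaches this form by linearizing an MLP about its current weights)."""
    n, d = X.shape
    w = np.zeros(d)
    P = lam * np.eye(d)            # large initial P ~ weak prior on w
    for x, t in zip(X, y):
        Px = P @ x
        k = Px / (1.0 + x @ Px)    # gain vector
        e = t - w @ x              # a priori prediction error
        w = w + k * e
        P = P - np.outer(k, Px)    # rank-1 downdate of the inverse Hessian
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = rls(X, y)
```

Each update costs O(d²) from the P recursion, matching the O(N²) complexity the abstract cites for the full-matrix variant.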
A Fast Deep Learning System Using GPU
2014-06-01
…widely used in data modeling until three decades later, when an efficient training algorithm for the RBM was invented by Hinton [3] and the computing power…be trained using most optimization algorithms, such as BP, conjugate gradient descent (CGD) or Levenberg-Marquardt (LM). The advantage of this…
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1984-01-01
Attention is given to a computer algorithm yielding the data required for a flight crew to navigate from an entry fix, about 100 nm from an airport, to a metering fix, and arrive there at a predetermined time, altitude, and airspeed. The flight path is divided into several descent and deceleration segments. Results for the case of a B-737 airliner indicate that wind and nonstandard atmospheric properties have a significant effect on the flight path and must be taken into account. While a range of combinations of Mach number and calibrated airspeed is possible for the descent segments leading to the metering fix, only small changes in the fuel consumed were observed for this range of combinations. A combination that is based on scheduling flexibility therefore seems preferable.
A network of spiking neurons for computing sparse representations in an energy efficient way
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.
2013-01-01
Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method in this paper. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE from incoming–outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, in which the mismatch between computed and measured outgoing data is minimized under the same initial data and the RTE constraint. The memory and computation this requires, however, are typically prohibitive, especially in high-dimensional spaces. Smart iterative solvers that use only partial information in each step are therefore called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances quickly, and therefore serves the purpose perfectly. In this paper we formulate the problem, in both the nonlinear and the linearized setting, apply the SGD algorithm, and analyze its convergence performance.
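A toy linear analogue of the approach: the RTE forward operator is replaced by a random matrix, one measurement (row) is randomly selected per step, and only that row's mismatch is descended. The step size and problem dimensions are this sketch's choices.

```python
import numpy as np

def sgd_inverse(A, b, lr=0.01, epochs=50, seed=0):
    """Recover parameters x from measurements b = A x by stochastic
    gradient descent on single randomly selected measurements, so each
    step uses only partial information about the mismatch."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(epochs * m):
        i = rng.integers(m)          # random data selection
        r = A[i] @ x - b[i]          # single-measurement mismatch
        x -= lr * r * A[i]           # gradient of 0.5 * r^2 w.r.t. x
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 5))        # stand-in for the forward operator
x_true = rng.normal(size=5)
b = A @ x_true                       # noiseless synthetic measurements
x = sgd_inverse(A, b)
```

Because the system is consistent, every per-row loss shares the same minimizer and the constant-step iteration converges to it; with noisy data one would decay the step instead.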
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.
Massive parallelization of serial inference algorithms for a complex generalized linear model
Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David
2014-01-01
Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units, relatively inexpensive highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
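The serial cyclic-coordinate-descent kernel that such work parallelizes can be shown for ridge-penalized least squares; the paper's setting is a conditioned generalized linear model on vastly larger data, and the closed-form 1-D update below is the ridge analogue of its per-coordinate step.

```python
import numpy as np

def ccd_ridge(X, y, lam=1.0, sweeps=100):
    """Cyclic coordinate descent for ridge regression: each pass solves
    the exact 1-D minimization for one coefficient while caching the
    residual, so a sweep costs O(n*d) like one gradient evaluation."""
    n, d = X.shape
    beta = np.zeros(d)
    r = y.copy()                         # running residual y - X @ beta
    col_ss = (X ** 2).sum(axis=0)        # per-column sums of squares
    for _ in range(sweeps):
        for j in range(d):
            # exact minimizer over beta[j] with all other coefficients fixed
            bj = (X[:, j] @ r + col_ss[j] * beta[j]) / (col_ss[j] + lam)
            r += X[:, j] * (beta[j] - bj)
            beta[j] = bj
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta_true = rng.normal(size=10)
y = X @ beta_true
beta = ccd_ridge(X, y, lam=1e-6, sweeps=200)
```

The inner update is embarrassingly data-parallel over the rows of `X[:, j]`, which is exactly the structure the paper maps onto GPUs.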
Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.
Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S
2004-01-01
MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along orthogonal directions (r, phi, z), and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three-axis torque-balanced gradient coil for rat imaging was developed based on this method, with an efficiency of 2.13, 2.08, and 4.12 mT·m⁻¹·A⁻¹ along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. This method has also been applied to the design of a gradient coil for the human brain, employing remote current return paths. The benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths. Copyright 2003 Wiley-Liss, Inc.
Piloted simulation of a ground-based time-control concept for air traffic control
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.
1989-01-01
A concept for aiding air traffic controllers in efficiently spacing traffic and meeting scheduled arrival times at a metering fix was developed and tested in a real-time simulation. The automation aid, referred to as the ground-based 4-D descent advisor (DA), is based on accurate models of aircraft performance and weather conditions. The DA generates suggested clearances, including both top-of-descent-point and speed-profile data, for one or more aircraft in order to achieve specific time or distance separation objectives. The DA algorithm is used by the air traffic controller to resolve conflicts and issue advisories to arrival aircraft. A joint simulation was conducted using a piloted simulator and an advanced-concept air traffic control simulation to study the acceptability and accuracy of the DA automation aid from both the pilot's and the air traffic controller's perspectives. The results of the piloted simulation are examined. In the piloted simulation, airline crews executed controller-issued descent advisories along standard curved-path arrival routes, and were able to achieve an arrival time precision of ±20 s at the metering fix. An analysis of errors generated in turns resulted in further enhancements of the algorithm to improve the predictive accuracy. Evaluations by pilots indicate general support for the concept and provide specific recommendations for improvement.
WS-BP: An efficient wolf search based back-propagation algorithm
NASA Astrophysics Data System (ADS)
Nawi, Nazri Mohd; Rehman, M. Z.; Khan, Abdullah
2015-05-01
Wolf Search (WS) is a heuristic-based optimization algorithm. Inspired by the preying and survival capabilities of wolves, this algorithm is highly effective at searching large candidate-solution spaces. This paper investigates the use of the WS algorithm in combination with the back-propagation neural network (BPNN) algorithm to overcome the local minima problem and to improve convergence in gradient descent. The performance of the proposed Wolf Search based Back-Propagation (WS-BP) algorithm is compared with Artificial Bee Colony Back-Propagation (ABC-BP), Bat Based Back-Propagation (Bat-BP), and conventional BPNN algorithms. Specifically, OR and XOR datasets are used for training the network. The simulation results show that the WS-BP algorithm effectively avoids local minima and converges to the global minimum.
A Gradient Taguchi Method for Engineering Optimization
NASA Astrophysics Data System (ADS)
Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song
2017-10-01
To balance the robustness and convergence speed of optimization, a novel hybrid algorithm combining the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. This algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining a numerical method with vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. Therefore, the proposed algorithm has good robustness and fast convergence compared to some hybrid genetic algorithms.
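The two-stage idea of this Taguchi/steepest-descent hybrid can be sketched as a coarse screening followed by gradient refinement. In the sketch below a full-factorial mini-grid stands in for a true orthogonal array (which would sample far fewer rows), and the objective and its gradient are illustrative assumptions, not the plate-identification problem of the abstract.

```python
import numpy as np
from itertools import product

def taguchi_style_hybrid(f, grad, levels, lr=0.05, steps=200):
    """Two-stage hybrid: coarse factor-level screening, then steepest descent.

    levels is one tuple of candidate values per design factor.
    """
    # Stage 1: evaluate every combination of factor levels, keep the best.
    start = min(product(*levels), key=lambda c: f(np.array(c, dtype=float)))
    x = np.array(start, dtype=float)
    # Stage 2: refine with fixed-step steepest descent from that point.
    for _ in range(steps):
        x -= lr * grad(x)
    return x
```

The screening stage supplies robustness (a good basin), while the descent stage supplies convergence speed inside that basin.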
2016-01-01
This paper presents an algorithm, for use with a Portable Powered Ankle-Foot Orthosis (i.e., PPAFO) that can automatically detect changes in gait modes (level ground, ascent and descent of stairs or ramps), thus allowing for appropriate ankle actuation control during swing phase. An artificial neural network (ANN) algorithm used input signals from an inertial measurement unit and foot switches, that is, vertical velocity and segment angle of the foot. Output from the ANN was filtered and adjusted to generate a final data set used to classify different gait modes. Five healthy male subjects walked with the PPAFO on the right leg for two test scenarios (walking over level ground and up and down stairs or a ramp; three trials per scenario). Success rate was quantified by the number of correctly classified steps with respect to the total number of steps. The results indicated that the proposed algorithm's success rate was high (99.3%, 100%, and 98.3% for level, ascent, and descent modes in the stairs scenario, respectively; 98.9%, 97.8%, and 100% in the ramp scenario). The proposed algorithm continuously detected each step's gait mode with faster timing and higher accuracy compared to a previous algorithm that used a decision tree based on maximizing the reliability of the mode recognition. PMID:28070188
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1971-01-01
The method of steepest descent used in optimizing one-dimensional layered radiation shields is extended to multidimensional, multiconstraint situations. The multidimensional optimization algorithm and equations are developed for the case of a dose constraint in any one direction being dependent only on the shield thicknesses in that direction and independent of shield thicknesses in other directions. Expressions are derived for one-, two-, and three-dimensional cases (one, two, and three constraints). The procedure is applicable to the optimization of shields where there are different dose constraints and layering arrangements in the principal directions.
NASA Technical Reports Server (NTRS)
Prevot, Thomas
2012-01-01
This paper describes the underlying principles and algorithms for computing the primary controller-managed spacing (CMS) tools developed at NASA for precisely spacing aircraft along efficient descent paths. The trajectory-based CMS tools include slot markers, delay indications, and speed advisories. These tools are one of three core NASA technologies integrated in NASA's ATM Technology Demonstration-1 (ATD-1), which will operationally demonstrate the feasibility of fuel-efficient, high-throughput arrival operations using Automatic Dependent Surveillance-Broadcast (ADS-B) and ground-based and airborne NASA technologies for precision scheduling and spacing.
Approximate solution of the p-median minimization problem
NASA Astrophysics Data System (ADS)
Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.
2016-09-01
A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
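A discrete analog of steepest descent for the p-median problem can be sketched as a best-improvement swap search: at each step the single facility swap that most reduces total cost is applied, until no swap improves. This is a generic heuristic in the spirit of the gradient algorithm analyzed above, not the authors' exact procedure; the cost matrix and starting set are illustrative.

```python
import itertools
import numpy as np

def p_median_descent(cost, p):
    """Best-improvement swap heuristic for the p-median problem.

    cost[i, j] is the cost of serving client j from facility i; we seek
    p facilities minimizing the sum over clients of the minimum cost.
    """
    m = cost.shape[0]
    current = set(range(p))                 # arbitrary starting set

    def total(S):
        return cost[sorted(S), :].min(axis=0).sum()

    best_val = total(current)
    while True:
        best_move, best_new = None, best_val
        for out_f, in_f in itertools.product(sorted(current), range(m)):
            if in_f in current:
                continue
            val = total((current - {out_f}) | {in_f})
            if val < best_new - 1e-12:
                best_move, best_new = (out_f, in_f), val
        if best_move is None:               # local optimum: no swap helps
            return sorted(current), best_val
        current = (current - {best_move[0]}) | {best_move[1]}
        best_val = best_new
```

As the abstract's worst-case bounds indicate, such descent only guarantees a local optimum, so the returned value sits between the global optimum and the starting value.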
Ciaccio, Edward J; Micheli-Tzanakou, Evangelia
2007-07-01
Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. Update of weight w was based upon the gradient term ∇ = ∂ε²/∂w of the steepest descent equation w(k+1) = w(k) − μ∇, where the error ε is the difference between primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel during each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k even with a SNR = 1:8 and weights which were initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise so that the SNR = 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
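The division-free probe idea can be illustrated with a single-weight, whole-record sketch: the squared error is evaluated at w ± Δw and the weight steps a fixed Δw toward the smaller one, so no possibly tiny Δw ever appears in a denominator. This is a deliberate simplification of the sample-by-sample PC and stochastic ALOPEX updates described above, with assumed synthetic signals.

```python
import numpy as np

def anc_parallel_probe(primary, reference, dw=0.01, steps=200):
    """Single-weight adaptive noise canceling without explicit gradients.

    The squared error is probed at w + dw and w - dw in parallel, and w
    moves one fixed step toward whichever probe gives the lower error.
    """
    w = 0.0
    for _ in range(steps):
        e_plus = np.sum((primary - (w + dw) * reference) ** 2)
        e_minus = np.sum((primary - (w - dw) * reference) ** 2)
        w += dw if e_plus < e_minus else -dw
    return w
```

Because the error surface in w is quadratic, the weight climbs monotonically toward the least-squares optimum and then oscillates within one step size of it.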
Karayiannis, N B
2000-01-01
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
Minimum-fuel turning climbout and descent guidance of transport jets
NASA Technical Reports Server (NTRS)
Neuman, F.; Kreindler, E.
1983-01-01
The complete flightpath optimization problem for minimum fuel consumption from takeoff to landing including the initial and final turns from and to the runway heading is solved. However, only the initial and final segments which contain the turns are treated, since the straight-line climbout, cruise, and descent problems have already been solved. The paths are derived by generating fields of extremals, using the necessary conditions of optimal control together with singular arcs and state constraints. Results show that the speed profiles for straight flight and turning flight are essentially identical except for the final horizontal accelerating or decelerating turns. The optimal turns require no abrupt maneuvers, and an approximation of the optimal turns could be easily integrated with present straight-line climb-cruise-descent fuel-optimization algorithms. Climbout at the optimal IAS rather than the 250-knot terminal-area speed limit would save 36 lb of fuel for the 727-100 aircraft.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.
Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter
2012-08-01
An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is very susceptible to nonlinearities such as atmospheric turbulence, model uncertainties and, of course, system failures. These systems therefore make a sensible testbed for evaluating fault-tolerant, adaptive flight control strategies. Within this work, the concept of feedback linearization is combined with feedforward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaptation laws of the network weights are used for online training. Within these adaptation laws, the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. While considering the system's stability, this robust online learning method therefore offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures. Copyright © 2012 Elsevier Ltd. All rights reserved.
Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.
2008-01-01
This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.
A hybrid Gerchberg-Saxton-like algorithm for DOE and CGH calculation
NASA Astrophysics Data System (ADS)
Wang, Haichao; Yue, Weirui; Song, Qiang; Liu, Jingdan; Situ, Guohai
2017-02-01
The Gerchberg-Saxton (GS) algorithm is widely used in various disciplines of modern science and technology where phase retrieval is required. However, this legendary algorithm is prone to stagnation after a few iterations. Many efforts have been made to improve this situation. Here we propose to introduce a gradient-descent strategy and a weighting technique into the GS algorithm, and demonstrate it using two examples: the design of a diffractive optical element (DOE) to achieve off-axis illumination in lithographic tools, and the design of a computer-generated hologram (CGH) for holographic display. Both numerical simulations and optical experiments are carried out for demonstration.
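One common way to graft weighting onto GS, sketched here under an assumed FFT propagation model and not necessarily the authors' exact scheme, is to boost a per-pixel image-plane weight wherever the reconstruction falls short of the target, which counteracts the stagnation of plain GS:

```python
import numpy as np

def weighted_gs(target_amp, iterations=50, seed=0):
    """Phase-only hologram design by GS with adaptive amplitude weighting.

    Propagation between hologram and image planes is modeled by an FFT
    pair.  Instead of clamping the image amplitude straight to the
    target, a per-pixel weight is (dampedly) boosted wherever the
    reconstruction falls short, a common anti-stagnation weighting.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    weights = target_amp.astype(float).copy()
    mask = target_amp > 0
    for _ in range(iterations):
        field = np.fft.fft2(np.exp(1j * phase))      # hologram -> image
        amp = np.abs(field)
        # Damped multiplicative re-weighting toward the target amplitude
        weights[mask] *= (target_amp[mask] /
                          np.maximum(amp[mask], 1e-12)) ** 0.5
        back = np.fft.ifft2(weights * np.exp(1j * np.angle(field)))
        phase = np.angle(back)                       # keep phase only
    return phase
```

For a sparse spot-array target the iteration steers most of the diffracted energy into the target pixels while equalizing the spot intensities.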
Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance
NASA Technical Reports Server (NTRS)
Jones, Brandon M.
2005-01-01
Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
Wavefront sensing and adaptive control in phased array of fiber collimators
NASA Astrophysics Data System (ADS)
Lachinova, Svetlana L.; Vorontsov, Mikhail A.
2011-03-01
A new wavefront control approach for mitigation of atmospheric turbulence-induced wavefront phase aberrations in coherent fiber-array-based laser beam projection systems is introduced and analyzed. This approach is based on integration of wavefront sensing capabilities directly into the fiber-array transmitter aperture. In the coherent fiber array considered, we assume that each fiber collimator (subaperture) of the array is capable of precompensation of local (on-subaperture) wavefront phase tip and tilt aberrations using controllable rapid displacement of the tip of the delivery fiber at the collimating lens focal plane. In the technique proposed, this tip and tilt phase aberration control is based on maximization of the optical power received through the same fiber collimator using the stochastic parallel gradient descent (SPGD) technique. The coordinates of the fiber tip after the local tip and tilt aberrations are mitigated correspond to the coordinates of the focal-spot centroid of the optical wave backscattered off the target. Similar to a conventional Shack-Hartmann wavefront sensor, the phase function over the entire fiber-array aperture can then be retrieved using the coordinates obtained. The piston phases that are required for coherent combining (phase locking) of the outgoing beams at the target plane can be further calculated from the reconstructed wavefront phase. Results of analysis and numerical simulations are presented. Performance of adaptive precompensation of phase aberrations in this laser beam projection system type is compared for various system configurations characterized by the number of fiber collimators and atmospheric turbulence conditions.
The wavefront control concept presented can be effectively applied to long-range laser beam projection scenarios in which the time delay associated with the double-pass propagation of the laser beam to the target and back is comparable to, or even exceeds, the characteristic time scale of atmospheric turbulence change: scenarios in which conventional target-in-the-loop phase-locking techniques fail.
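The SPGD update at the heart of this record and several abstracts above fits in a few lines. In the sketch below the metric is a stand-in analytic function; in a real system it would be a measured quantity such as received power or image sharpness, the controls would be actuator voltages or fiber-tip displacements, and the gain and perturbation amplitude would be tuned to the hardware.

```python
import numpy as np

def spgd_maximize(metric, n_controls, gain=2.0, delta=0.1, iters=500, seed=0):
    """Stochastic parallel gradient descent (here: ascent on a metric).

    All control channels are perturbed simultaneously by random +/- delta;
    the measured change in the metric multiplied by each channel's own
    perturbation estimates that channel's gradient component.  A
    two-sided probe J(u + du) - J(u - du) is used for a cleaner estimate.
    """
    rng = np.random.default_rng(seed)
    u = np.zeros(n_controls)
    for _ in range(iters):
        du = delta * rng.choice([-1.0, 1.0], size=n_controls)
        dj = metric(u + du) - metric(u - du)
        u += gain * dj * du                 # parallel stochastic update
    return u
```

Note that only scalar metric evaluations are needed: no wavefront sensor and no explicit gradient, which is precisely why SPGD suits wavefront-sensorless adaptive optics.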
Identity-by-Descent-Based Phasing and Imputation in Founder Populations Using Graphical Models
Palin, Kimmo; Campbell, Harry; Wright, Alan F; Wilson, James F; Durbin, Richard
2011-01-01
Accurate knowledge of haplotypes, the combination of alleles co-residing on a single copy of a chromosome, enables powerful gene mapping and sequence imputation methods. Since humans are diploid, haplotypes must be derived from genotypes by a phasing process. In this study, we present a new computational model for haplotype phasing based on pairwise sharing of haplotypes inferred to be Identical-By-Descent (IBD). We apply the Bayesian network based model in a new phasing algorithm, called systematic long-range phasing (SLRP), that can capitalize on the close genetic relationships in isolated founder populations, and show with simulated and real genome-wide genotype data that SLRP substantially reduces the rate of phasing errors compared to previous phasing algorithms. Furthermore, the method accurately identifies regions of IBD, enabling linkage-like studies without pedigrees, and can be used to impute most genotypes with very low error rate. Genet. Epidemiol. 35:853-860, 2011. © 2011 Wiley Periodicals, Inc. PMID:22006673
Cosmic Microwave Background Mapmaking with a Messenger Field
NASA Astrophysics Data System (ADS)
Huffenberger, Kevin M.; Næss, Sigurd K.
2018-01-01
We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
NASA Astrophysics Data System (ADS)
Pinson, Robin Marie
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. 
Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. Inclusion of additional trajectory constraints, solely vertical motion near the landing site and glide slope, were evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles were the bang-bang predicted by Optimal Control Theory. The first high fidelity gravity model employed was the 2x2 spherical harmonics model assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass and aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere and this becomes relevant for irregularly shaped asteroids. Then, a higher fidelity model was implemented combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method. Methodology success was shown by applying the algorithm to three triaxial ellipsoidal asteroids with four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested using the irregularly shaped asteroid, Castalia.
Algorithms for Mathematical Programming with Emphasis on Bi-level Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldfarb, Donald; Iyengar, Garud
2014-05-22
The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.
A conjugate gradient method with descent properties under strong Wolfe line search
NASA Astrophysics Data System (ADS)
Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is one of the optimization methods most often used in practical applications. Numerous ongoing studies of the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration counts and CPU time under the strong Wolfe line search. Overall, the new method performs efficiently and is comparable to the other well-known methods.
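The two ingredients named here, a CG direction update and a strong Wolfe line search, can be sketched as follows. The β formula below is the classical PRP+ choice rather than the paper's new method, and the line search is a simplified strong Wolfe check by bracketing and bisection, not a production implementation.

```python
import numpy as np

def strong_wolfe(f, grad, x, p, c1=1e-4, c2=0.1, max_iter=30):
    """Simplified strong Wolfe line search: bracket by doubling, then
    bisect until the sufficient-decrease (Armijo) condition and the
    strong curvature condition |phi'(a)| <= c2*|phi'(0)| both hold."""
    phi0, dphi0 = f(x), grad(x) @ p
    lo, hi, alpha = 0.0, None, 1.0
    for _ in range(max_iter):
        if f(x + alpha * p) > phi0 + c1 * alpha * dphi0:
            hi = alpha                              # Armijo fails: shrink
        else:
            dphi = grad(x + alpha * p) @ p
            if abs(dphi) <= -c2 * dphi0:
                return alpha                        # both conditions hold
            hi, lo = (alpha, lo) if dphi > 0 else (hi, alpha)
        alpha = (lo + hi) / 2.0 if hi is not None else 2.0 * alpha
    return alpha                                    # best effort

def prp_cg(f, grad, x0, tol=1e-8, max_iter=200):
    """Nonlinear CG with a PRP+ beta (clipped at zero, which restarts
    toward steepest descent and helps preserve sufficient descent)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    p = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = strong_wolfe(f, grad, x, p)
        x_new = x + alpha * p
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+
        p = -g_new + beta * p
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic this recovers the exact minimizer, which is a convenient sanity check for both pieces.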
Precise Image-Based Motion Estimation for Autonomous Small Body Exploration
NASA Technical Reports Server (NTRS)
Johnson, Andrew Edie; Matthies, Larry H.
2000-01-01
We have developed and tested a software algorithm that enables onboard autonomous motion estimation near small bodies using descent camera imagery and laser altimetry. Through simulation and testing, we have shown that visual feature tracking can decrease uncertainty in spacecraft motion to a level that makes landing on small, irregularly shaped, bodies feasible. Possible future work will include qualification of the algorithm as a flight experiment for the Deep Space 4/Champollion comet lander mission currently under study at the Jet Propulsion Laboratory.
The Double Star Orbit Initial Value Problem
NASA Astrophysics Data System (ADS)
Hensley, Hagan
2018-04-01
Many precise algorithms exist to find a best-fit orbital solution for a double star system given a good enough initial value. Desmos is an online graphing calculator tool with extensive capabilities to support animations and defining functions. It can provide a useful visual means of analyzing double star data to arrive at a best guess approximation of the orbital solution. This is a necessary requirement before using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.
Discrete Analog Processing for Tracking and Guidance Control
1980-11-01
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440×10⁶ particles on 65,536 MPI tasks.
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
Cyclic coordinate descent: A robotics algorithm for protein loop closure.
Canutescu, Adrian A; Dunbrack, Roland L
2003-05-01
In protein structure prediction, it is often the case that a protein segment must be adjusted to connect two fixed segments. This occurs during loop structure prediction in homology modeling as well as in ab initio structure prediction. Several algorithms for this purpose are based on the inverse Jacobian of the distance constraints with respect to dihedral angle degrees of freedom. These algorithms are sometimes unstable and fail to converge. We present an algorithm developed originally for inverse kinematics applications in robotics. In robotics, an end effector in the form of a robot hand must reach for an object in space by altering adjustable joint angles and arm lengths. In loop prediction, dihedral angles must be adjusted to move the C-terminal residue of a segment to superimpose on a fixed anchor residue in the protein structure. The algorithm, referred to as cyclic coordinate descent or CCD, involves adjusting one dihedral angle at a time to minimize the sum of the squared distances between three backbone atoms of the moving C-terminal anchor and the corresponding atoms in the fixed C-terminal anchor. The result is an equation in one variable for the proposed change in each dihedral. The algorithm proceeds iteratively through all of the adjustable dihedral angles from the N-terminal to the C-terminal end of the loop. CCD is suitable as a component of loop prediction methods that generate large numbers of trial structures. It succeeds in closing loops in a large test set 99.79% of the time, and fails occasionally only for short, highly extended loops. It is very fast, closing loops of length 8 in 0.037 sec on average.
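The per-dihedral update at the heart of CCD can be illustrated with a minimal planar inverse-kinematics sketch: a 2-D chain with a single target point standing in for the three anchor-atom pairs. All names and values here are illustrative, not the authors' code.

```python
import math

def fk(angles, lengths):
    """Forward kinematics: joint positions of a planar chain rooted at the origin."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for t, L in zip(angles, lengths):
        a += t
        x += L * math.cos(a)
        y += L * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_sweep(angles, lengths, target):
    """One CCD sweep: rotate each joint, one at a time, so the end
    effector turns directly toward the target (closed-form per joint)."""
    for i in reversed(range(len(angles))):
        pts = fk(angles, lengths)
        jx, jy = pts[i]          # pivot joint
        ex, ey = pts[-1]         # end effector
        tx, ty = target
        # angle that rotates (effector - joint) onto (target - joint)
        a_e = math.atan2(ey - jy, ex - jx)
        a_t = math.atan2(ty - jy, tx - jx)
        angles[i] += a_t - a_e
    return angles

angles, lengths, target = [0.3, -0.2, 0.5], [1.0, 1.0, 1.0], (1.5, 1.5)
for _ in range(100):
    angles = ccd_sweep(angles, lengths, target)
end = fk(angles, lengths)[-1]
dist = math.hypot(end[0] - target[0], end[1] - target[1])
```

Each joint update is a one-variable problem with a closed-form answer, mirroring the one equation per dihedral that the paper derives; the sweep then iterates over all adjustable angles.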
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, resistance to trapping in false minima, and long-term optimization.
Applying Gradient Descent in Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Cui, Nan
2018-04-01
With the development of integrated circuits and computer science, people care more about solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest of AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, the Convolutional Neural Network (CNN), will be introduced. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A Convolutional Neural Network is an artificial neural network which combines the mathematical method of convolution with a neural network. The hierarchical structure of a CNN provides it with reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, by combining the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-study and learn in depth. Basically, BP provides backward feedback for enhancing reliability and GD is used for the self-training process. This paper mainly discusses the CNN and the related BP and GD algorithms, including the basic structure and function of the CNN, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
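The BP-plus-GD training loop described above can be sketched in heavily simplified form with a single sigmoid unit rather than a full CNN; the learning rate, data, and epoch count are arbitrary choices for illustration.

```python
import math, random

random.seed(0)
# toy data: learn y = 1 if x > 0 else 0
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
w, b, lr = 0.0, 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

for epoch in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)      # forward pass
        # backward pass (BP): gradient of the squared error 0.5*(p - y)^2
        g = (p - y) * p * (1 - p)
        w -= lr * g * x             # gradient descent (GD) update
        b -= lr * g

acc = sum((sigmoid(w * x + b) > 0.5) == (y > 0.5) for x, y in data)
```

In a real CNN the same forward/backward/update cycle runs layer by layer, with convolution kernels playing the role of the shared weights.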
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
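The log-likelihood ascent underlying such maximum entropy fitting can be sketched for a toy pairwise Ising model small enough to enumerate exactly, so no Gibbs sampling is needed; the couplings and learning rate are illustrative, and the paper's rectification step is not included.

```python
import itertools, math

n = 3                                     # spins
spins = list(itertools.product([-1, 1], repeat=n))
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

def moments(J):
    """Exact model averages <s_i s_j> for couplings J (tiny system)."""
    E = [sum(J[k] * s[i] * s[j] for k, (i, j) in enumerate(pairs)) for s in spins]
    Z = sum(math.exp(e) for e in E)
    p = [math.exp(e) / Z for e in E]
    return [sum(p[m] * s[i] * s[j] for m, s in enumerate(spins))
            for (i, j) in pairs]

# synthetic "data" moments generated from an arbitrary target model
J_true = [0.5, -0.3, 0.2]
data_mom = moments(J_true)

# steepest ascent on the log-likelihood: dL/dJ_ij = <s_i s_j>_data - <s_i s_j>_model
J = [0.0] * len(pairs)
for _ in range(5000):
    model_mom = moments(J)
    J = [j + 0.1 * (d - m) for j, d, m in zip(J, data_mom, model_mom)]

err = max(abs(a - b) for a, b in zip(J, J_true))
```

With finite datasets and Gibbs-sampled moments the same update becomes stochastic, which is exactly the regime whose long-time behavior the paper analyzes.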
Powered ankle-foot prosthesis to assist level-ground and stair-descent gaits.
Au, Samuel; Berniker, Max; Herr, Hugh
2008-05-01
The human ankle varies impedance and delivers net positive work during the stance period of walking. In contrast, commercially available ankle-foot prostheses are passive during stance, causing many clinical problems for transtibial amputees, including non-symmetric gait patterns, higher gait metabolism, and poorer shock absorption. In this investigation, we develop and evaluate a myoelectric-driven, finite state controller for a powered ankle-foot prosthesis that modulates both impedance and power output during stance. The system employs both sensory inputs measured local to the external prosthesis, and myoelectric inputs measured from residual limb muscles. Using local prosthetic sensing, we first develop two finite state controllers to produce biomimetic movement patterns for level-ground and stair-descent gaits. We then employ myoelectric signals as control commands to manage the transition between these finite state controllers. To transition from level-ground to stairs, the amputee flexes the gastrocnemius muscle, triggering the prosthetic ankle to plantar flex at terminal swing, and initiating the stair-descent state machine algorithm. To transition back to level-ground walking, the amputee flexes the tibialis anterior muscle, triggering the ankle to remain dorsiflexed at terminal swing, and initiating the level-ground state machine algorithm. As a preliminary evaluation of clinical efficacy, we test the device on a transtibial amputee with both the proposed controller and a conventional passive-elastic control. We find that the amputee can robustly transition between the finite state controllers through direct muscle activation, allowing rapid transitioning from level-ground to stair walking patterns. Additionally, we find that the proposed finite state controllers result in a more biomimetic ankle response, producing net propulsive work during level-ground walking and greater shock absorption during stair descent. 
The results of this study highlight the potential of prosthetic leg controllers that exploit neural signals to trigger terrain-appropriate, local prosthetic leg behaviors.
Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
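The serial SGD baseline that these asynchronous variants build on can be sketched on a least-squares problem; the data, step size, and epoch count are illustrative.

```python
import random

random.seed(0)
# synthetic linear data: y = 2x + 1 + small noise
xs = [random.uniform(-1, 1) for _ in range(200)]
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.01)) for x in xs]
w, b, lr = 0.0, 0.0, 0.05

for epoch in range(200):
    random.shuffle(data)
    for x, y in data:               # one stochastic gradient step per sample
        err = (w * x + b) - y       # gradient of 0.5*err^2 w.r.t. prediction
        w -= lr * err * x
        b -= lr * err

ok = abs(w - 2.0) < 0.1 and abs(b - 1.0) < 0.1
```

Hogwild!-style execution runs many copies of this inner loop in parallel without locks; the martingale analysis in the paper bounds the extra noise those unsynchronized updates introduce.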
An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft
NASA Technical Reports Server (NTRS)
Bauer, Jeffrey E.; Teets, Edward
1997-01-01
An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent
De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle
2018-01-01
Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
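One standard ingredient of low-precision SGD implementations is unbiased stochastic rounding of values onto a fixed-point grid; the sketch below illustrates that ingredient only (the grid spacing is an arbitrary choice, and this is not the paper's Buckwild! code).

```python
import random

random.seed(0)

def stochastic_round(x, step=1 / 256):
    """Round x to a multiple of `step`, up or down with probability
    proportional to proximity, so the result is unbiased: E[q] = x."""
    lo = (x // step) * step
    frac = (x - lo) / step
    return lo + step if random.random() < frac else lo

# empirical check of unbiasedness
x = 0.123456
mean = sum(stochastic_round(x) for _ in range(200000)) / 200000
bias = abs(mean - x)
```

Because the rounding error has zero mean, low-precision SGD updates quantized this way still follow the true gradient in expectation, which is what makes the convergence analysis tractable.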
Energy minimization in medical image analysis: Methodologies and applications.
Zhao, Feng; Xie, Xianghua
2016-02-01
Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former include the Newton-Raphson method, the gradient descent method, the conjugate gradient method, the proximal gradient method, the coordinate descent method, and genetic algorithm-based methods, while the latter cover the graph cuts method, the belief propagation method, the tree-reweighted message passing method, the linear programming method, the maximum margin learning method, the simulated annealing method, and the iterated conditional modes method. We also discuss the minimal surface method, the primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
Distributed Control by Lagrangian Steepest Descent
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2004-01-01
Often adaptive, distributed control can be viewed as an iterated game between independent players. The coupling between the players' mixed strategies, arising as the system evolves from one instant to the next, is determined by the system designer. Information theory tells us that the most likely joint strategy of the players, given a value of the expectation of the overall control objective function, is the minimizer of a Lagrangian function of the joint strategy. So the goal of the system designer is to speed evolution of the joint strategy to that Lagrangian-minimizing point, lower the expected value of the control objective function, and repeat. Here we elaborate the theory of algorithms that do this using local descent procedures, and that thereby achieve efficient, adaptive, distributed control.
NASA Astrophysics Data System (ADS)
Jia, Ningning; Y Lam, Edmund
2010-04-01
Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
Fractional-order gradient descent learning of BP neural networks with Caputo derivative.
Wang, Jian; Wen, Yanqing; Gou, Yida; Ye, Zhenyun; Chen, Hua
2017-05-01
Fractional calculus has been found to be a promising area of research for information processing and the modeling of some physical systems. In this paper, we propose a fractional gradient descent method for the backpropagation (BP) training of neural networks. In particular, the Caputo derivative is employed to evaluate the fractional-order gradient of the error defined as the traditional quadratic energy function. The monotonicity and weak (strong) convergence of the proposed approach are proved in detail. Two simulations have been implemented to illustrate the performance of the presented fractional-order BP algorithm on three small datasets and one large dataset. The numerical simulations effectively verify the theoretical observations of this paper as well. Copyright © 2017 Elsevier Ltd. All rights reserved.
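For intuition, a one-dimensional Caputo-type fractional gradient step on a quadratic loss can be sketched using the power rule D^α (w−c)^p = Γ(p+1)/Γ(p−α+1)·(w−c)^{p−α}. The sign convention used below for w < c is our own simplification for illustration, not necessarily the paper's scheme.

```python
import math

def frac_grad(w, c, alpha):
    """Caputo fractional 'gradient' of 0.5*(w - c)^2 with lower terminal c.
    Power rule gives 0.5 * D^a (w-c)^2 = (w-c)^(2-a) / Gamma(3-a); we extend
    to w < c via sign(w-c)*|w-c|^(2-a), an illustrative convention."""
    d = w - c
    return math.copysign(abs(d) ** (2 - alpha), d) / math.gamma(3 - alpha)

c, w, lr, alpha = 1.0, 5.0, 0.5, 0.9
for _ in range(500):
    w -= lr * frac_grad(w, c, alpha)   # fractional-order descent step

err = abs(w - c)
```

At α = 1 the update reduces to ordinary gradient descent on the quadratic; for α < 1 the step magnitude scales as |w−c|^{2−α}, which is the qualitative behavior the fractional-order BP algorithm exploits.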
Hybrid DFP-CG method for solving unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa
2017-09-01
The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method by combining the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as an approximation of the Hessian for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both the sufficient descent and global convergence properties.
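The DFP inverse-Hessian update at the core of the hybrid method can be sketched on a small quadratic with exact line searches; the pure-Python matrix helpers and the test problem are illustrative of generic DFP, not the authors' hybrid search direction.

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

A = [[3.0, 1.0], [1.0, 2.0]]          # SPD Hessian of f(x) = 0.5 x^T A x
x = [4.0, -3.0]
H = [[1.0, 0.0], [0.0, 1.0]]          # inverse-Hessian approximation

for _ in range(5):
    g = matvec(A, x)                   # gradient of the quadratic
    if dot(g, g) < 1e-20:
        break
    d = [-v for v in matvec(H, g)]     # quasi-Newton search direction
    step = -dot(g, d) / dot(d, matvec(A, d))   # exact line search on a quadratic
    x_new = [xi + step * di for xi, di in zip(x, d)]
    s = [step * di for di in d]
    y = [g2 - g1 for g2, g1 in zip(matvec(A, x_new), g)]
    # DFP update: H += s s^T / (s^T y) - H y y^T H / (y^T H y)
    Hy = matvec(H, y)
    sy, yHy = dot(s, y), dot(y, Hy)
    H = [[H[i][j] + s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
          for j in range(2)] for i in range(2)]
    x = x_new

g = matvec(A, x)
gnorm = dot(g, g) ** 0.5
```

On an n-dimensional quadratic with exact line searches, DFP terminates in at most n steps, which the two-step convergence here illustrates.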
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.
Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar
2017-03-01
The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule Computer Aided Detection (CAD). In this paper, we describe a new CT lung CAD method which aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1 norm regularizer for heterogeneous feature fusion and selection at the feature subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1 regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of the kernel weights and uses an accelerated method based on FISTA; the second one employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1 norm formulation of multiple kernel learning, with a theoretical guarantee of the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of Geometric mean (G-mean) and Area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits some remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1 norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets, and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
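In proximal gradient treatments of the non-smooth ℓ2,1 term, the key closed-form step is group-wise soft-thresholding; a minimal sketch follows, where the group structure and λ are illustrative rather than taken from the paper.

```python
import math

def prox_l21(groups, lam):
    """Proximal operator of lam * sum_g ||w_g||_2: shrink each group's
    Euclidean norm by lam, zeroing any group whose norm is <= lam."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(v * v for v in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * v for v in g])
    return out

groups = [[3.0, 4.0], [0.1, 0.2], [0.0, 0.0]]
shrunk = prox_l21(groups, lam=1.0)
# group norms shrink 5 -> 4, while the weak second group is pruned to zero
```

This group-wise pruning is what lets ℓ2,1 regularization discard entire irrelevant feature subsets (kernels) rather than individual coefficients.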
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse
A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of an observables set. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' Data-Driven algorithm that is fast and by sampling from the parameters posterior avoids both under- and over-fitting along all the directions of the parameters space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
Narayanan, Shrikanth
2009-01-01
We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal control method. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, whereby the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
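The core idea of improving a control policy by stochastic gradient steps estimated from data can be illustrated with a softmax policy on a toy bandit. This is a generic REINFORCE-style sketch under illustrative settings, not the paper's PGADP with its actor-critic structure and Q-function approximation.

```python
import math, random

random.seed(0)
rewards = [1.0, 2.0, 5.0]            # expected reward of each action
theta = [0.0, 0.0, 0.0]              # softmax policy parameters
lr = 0.1

def softmax(t):
    m = max(t)
    e = [math.exp(v - m) for v in t]
    s = sum(e)
    return [v / s for v in e]

for _ in range(5000):
    p = softmax(theta)
    a = random.choices(range(3), weights=p)[0]    # sample an action from the policy
    r = rewards[a] + random.gauss(0, 0.1)         # observe a noisy reward
    # policy gradient: grad log pi(a) = one_hot(a) - p; ascend expected reward
    for i in range(3):
        theta[i] += lr * r * ((1.0 if i == a else 0.0) - p[i])

best = max(range(3), key=lambda i: softmax(theta)[i])
```

The policy improves using only sampled actions and rewards, with no model of the reward function, which is the model-free property the paper extends to full dynamic systems.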
Integration of energy management concepts into the flight deck
NASA Technical Reports Server (NTRS)
Morello, S. A.
1981-01-01
The rapid rise of fuel costs has become a major concern of the commercial aviation industry, and it has become mandatory to seek means by which to conserve fuel. A research program was initiated in 1979 to investigate the integration of fuel-conservative energy/flight management computations and information into today's and tomorrow's flight deck. One completed effort within this program has been the development and flight testing of a fuel-efficient, time-based metering descent algorithm in a research cockpit environment. Research flights have demonstrated that time guidance and control in the cockpit was acceptable to both pilots and ATC controllers. Proper descent planning and energy management can save fuel for the individual aircraft as well as the fleet by helping to maintain a regularized flow into the terminal area.
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be directly applied to the input low-rank matrix to estimate the useful signals. Since it is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance; more specifically, it outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
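A fully observed, rank-one special case of such incremental subspace updates reduces to an Oja-style stochastic iteration, sketched below; the plain renormalization used in place of a Grassmannian retraction is our simplification, and all values are illustrative.

```python
import random

random.seed(0)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def axpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

# true 1-D subspace spanned by u_true; stream noisy multiples of it
u_true = [0.6, 0.8, 0.0]
u = [1.0, 0.0, 0.0]                        # current subspace estimate (unit vector)
eta = 0.1

for _ in range(2000):
    c = random.gauss(0, 1)
    v = [c * x + random.gauss(0, 0.01) for x in u_true]   # noisy observation
    w = dot(u, v)                          # coefficient of v in current subspace
    r = axpy(-w, u, v)                     # residual, orthogonal to u
    u = axpy(eta * w, r, u)                # incremental gradient step: u + eta*w*r
    n = dot(u, u) ** 0.5
    u = [x / n for x in u]                 # renormalize back to the unit sphere

align = abs(dot(u, u_true))
```

Each sample costs only inner products and vector additions, which is why online subspace tracking can stream through large, corrupted seismic volumes without ever forming an SVD.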
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimum on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well suited to heterogeneous networks as it allows the computationally challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission
NASA Technical Reports Server (NTRS)
Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.
2004-01-01
In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
Controller evaluations of the descent advisor automation aid
NASA Technical Reports Server (NTRS)
Tobias, Leonard; Volckers, Uwe; Erzberger, Heinz
1989-01-01
An automation aid to assist air traffic controllers in efficiently spacing traffic and meeting arrival times at a fix has been developed at NASA Ames Research Center. The automation aid, referred to as the descent advisor (DA), is based on accurate models of aircraft performance and weather conditions. The DA generates suggested clearances, including both top-of-descent point and speed profile data, for one or more aircraft in order to achieve specific time or distance separation objectives. The DA algorithm is interfaced with a mouse-based, menu-driven controller display that allows the air traffic controller to interactively use its accurate predictive capability to resolve conflicts and issue advisories to arrival aircraft. This paper focuses on operational issues concerning the utilization of the DA, specifically, how the DA can be used for prediction, intrail spacing, and metering. In order to evaluate the DA, a real time simulation was conducted using both current and retired controller subjects. Controllers operated in teams of two, as they do in the present environment; issues of training and team interaction will be discussed. Evaluations by controllers indicated considerable enthusiasm for the DA aid, and provided specific recommendations for using the tool effectively.
Algorithm for Training a Recurrent Multilayer Perceptron
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.
2004-01-01
An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
Shahsavari, Shadab; Rezaie Shirmard, Leila; Amini, Mohsen; Abedin Dokoosh, Farid
2017-01-01
Formulation of a nanoparticulate Fingolimod delivery system based on biodegradable poly(3-hydroxybutyrate-co-3-hydroxyvalerate) was optimized according to artificial neural networks (ANNs). The concentrations of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) and PVA and the amount of Fingolimod are considered as the input values, and the particle size, polydispersity index, loading capacity, and entrapment efficacy as the output data in the experimental design study. An in vitro release study was carried out for the best formulation according to statistical analysis. ANNs are employed to generate the best model to determine the relationships between the various values. In order to specify the model with the best accuracy and proficiency for the in vitro release, multilayer perceptrons with different training algorithms were examined. Three training algorithms, Levenberg-Marquardt (LM), gradient descent, and Bayesian regularization, were employed for training the ANN models. It is demonstrated that the predictive ability of each training algorithm is in the order of LM > gradient descent > Bayesian regularization. Also, the optimum formulation was achieved by the LM training function with 15 hidden layers and 20 neurons. The transfer functions of the hidden layer and the output layer for this formulation were tansig and purelin, respectively. Also, the optimization process was developed by minimizing the error between the predicted and observed values of the training algorithm (about 0.0341). Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Flight test experience using advanced airborne equipment in a time-based metered traffic environment
NASA Technical Reports Server (NTRS)
Morello, S. A.
1980-01-01
A series of test flights demonstrated that time-based metering guidance and control were acceptable to pilots and air traffic controllers. The descent algorithm of the technique, with good representation of aircraft performance and wind modeling, yielded arrival-time accuracy within 12 sec. This is expected to represent significant fuel savings (1) for the entire fleet, through a reduction of the time-error dispersions at the metering fix, and (2) for individual aircraft as well, through the presentation of guidance for a fuel-efficient descent. Air traffic controller workloads were also reduced, in keeping with the reduction in required communications resulting from the transfer of navigation responsibilities to pilots. A second series of test flights demonstrated that an existing flight management system could be modified to operate in the new mode.
NASA Technical Reports Server (NTRS)
Dutta, Soumyo; Way, David W.
2017-01-01
Mars 2020, the next planned U.S. rover mission to land on Mars, is based on the design of the successful 2012 Mars Science Laboratory (MSL) mission. Mars 2020 retains most of the entry, descent, and landing (EDL) sequence of MSL, including the closed-loop entry guidance scheme based on the Apollo guidance algorithm. However, unlike MSL, Mars 2020 will trigger the parachute deployment and descent sequence on a range trigger rather than the previously used velocity trigger. This difference will greatly reduce the landing ellipse size. Additionally, the relative contributions of the various models to the total ellipse size have changed greatly due to the switch to the range trigger. This paper considers the effect on trajectory dispersions of changing the trigger scheme and the contributions of these models to trajectory and EDL performance.
NASA Technical Reports Server (NTRS)
Kopasakis, George
1997-01-01
Performance Seeking Control (PSC) attempts to find and control the process at the operating condition that will generate maximum performance. In this paper a nonlinear multivariable PSC methodology will be developed, utilizing the Fuzzy Model Reference Learning Control (FMRLC) and the method of Steepest Descent or Gradient (SDG). This PSC control methodology employs the SDG method to find the operating condition that will generate maximum performance. This operating condition is in turn passed to the FMRLC controller as a set point for the control of the process. The conventional SDG algorithm is modified in this paper in order for convergence to occur monotonically. For the FMRLC control, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for effective tuning of the FMRLC controller.
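The paper's exact modification of the SDG method is not spelled out in the abstract; a generic way to force monotone convergence is a backtracking step-size safeguard, sketched below under that assumption (toy quadratic surface, illustrative names).

```python
def monotone_descent(f, grad, x0, step=1.0, shrink=0.5, tol=1e-8, max_iter=500):
    """Steepest descent with backtracking so the objective decreases
    monotonically (illustrative sketch, not the paper's algorithm)."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:
            break
        s = step
        while True:
            trial = [xi - s * gi for xi, gi in zip(x, g)]
            f_trial = f(trial)
            if f_trial < fx:        # accept only a monotone improvement
                x, fx = trial, f_trial
                break
            s *= shrink             # otherwise shrink the step and retry
            if s < 1e-12:
                return x
    return x

# toy "performance" surface with its optimum at (2, -1):
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
grad = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]
opt = monotone_descent(f, grad, [0.0, 0.0])
```

The accepted iterates form a strictly decreasing sequence of objective values, which is the monotonicity property the modified SDG method is after.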
Overview of the Phoenix Entry, Descent and Landing System Architecture
NASA Technical Reports Server (NTRS)
Grover, Myron R., III; Cichy, Benjamin D.; Desai, Prasun N.
2008-01-01
NASA's Phoenix Mars Lander began its journey to Mars from Cape Canaveral, Florida in August 2007, but its journey to the launch pad began many years earlier in 1997 as NASA's Mars Surveyor Program 2001 Lander. In the intervening years, the entry, descent and landing (EDL) system architecture went through a series of changes, resulting in the system flown to the surface of Mars on May 25th, 2008. Some changes, such as entry velocity and landing site elevation, were the result of differences in mission design. Other changes, including the removal of hypersonic guidance, the reformulation of the parachute deployment algorithm, and the addition of the backshell avoidance maneuver, were driven by constant efforts to augment system robustness. An overview of the Phoenix EDL system architecture is presented along with rationales driving these architectural changes.
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land on asteroids are becoming popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These advantages include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by a redesign of the optimal trajectory based on current vehicle conditions to improve the guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies and low-thrust vehicles. Two previous studies form the background to the current investigation. The first looked in depth at applying convex optimization to a powered descent trajectory on Mars, with promising results [1, 2]. It showed that the powered descent equations of motion can be relaxed and formed into a convex optimization problem and that the optimal solution of the relaxed problem is indeed a feasible solution to the original problem. This analysis used a constant gravity field. The second applied a successive solution process to formulate a second-order cone program that designs rendezvous and proximity operations trajectories [3, 4]. These trajectories included a Newtonian gravity model. The equivalence of the solutions between the relaxed and the original problem is theoretically established.
The proposed solution for designing the asteroid powered descent trajectory is to use convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process to design the fuel-optimal trajectory. The solution to the convex optimization problem is the thrust profile, magnitude and direction, that will yield the minimum-fuel trajectory for a soft landing at the target site, subject to various mission and operational constraints. The equations of motion are formulated in a rotating coordinate system and include a high-fidelity gravity model. The vehicle's thrust magnitude can vary between maximum and minimum bounds during the burn. Constraints are also included to ensure that the vehicle does not run out of propellant or fly below the asteroid's surface, and to enforce any vehicle pointing requirements. The equations of motion are discretized and propagated with the trapezoidal rule in order to produce equality constraints for the optimization problem. These equality constraints allow the optimization algorithm to solve the entire problem without including a propagator inside the optimization algorithm.
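The trapezoidal-rule relations that become equality constraints between consecutive nodes can be sketched in one dimension (a toy constant-acceleration setting; the paper's formulation is three-dimensional with a high-fidelity gravity model):

```python
def trapezoid_propagate(r0, v0, accel, dt, n):
    """Propagate 1-D position/velocity with the trapezoidal rule.
    In the convex program these same relations are imposed as linear
    equality constraints linking node k to node k+1 (sketch only)."""
    r, v = [r0], [v0]
    for k in range(n):
        v.append(v[k] + dt / 2.0 * (accel[k] + accel[k + 1]))
        r.append(r[k] + dt / 2.0 * (v[k] + v[k + 1]))
    return r, v

# constant net acceleration (thrust minus gravity) of -1 m/s^2 for 10 s:
acc = [-1.0] * 11
r, v = trapezoid_propagate(100.0, 0.0, acc, dt=1.0, n=10)
```

Because the trapezoidal rule is exact for piecewise-linear velocity, this reproduces the analytic answer (altitude 50 m, velocity -10 m/s after 10 s), and in the optimization the same relations let the solver handle the dynamics without an embedded propagator.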
Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module
NASA Technical Reports Server (NTRS)
Gay, Robert S.
2011-01-01
Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control Jets (RCS), ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or an incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions, and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g., descent under chutes) using high-fidelity rigid-body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration-magnitude spike detection, and accumulated-velocity-change (over a given time window) spike detection. Data for both detection methods are acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining the desired safety margin.
Time to RCS jet submersion was predicted analytically across a series of simulated Orion landing conditions. This paper details the touchdown detection method chosen and the analysis used to support the decision.
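The two detector families compared above can be sketched on a synthetic IMU trace (all thresholds, sample rates, and the trace itself are invented for illustration; the flight algorithm operates on real three-axis IMU data):

```python
def accel_spike(acc, thresh):
    """Flag touchdown at the first sample whose |acceleration| exceeds
    a threshold (acceleration-magnitude spike detection)."""
    for i, a in enumerate(acc):
        if abs(a) > thresh:
            return i
    return None

def delta_v_spike(acc, dt, window, thresh):
    """Flag touchdown when the accumulated velocity change over a
    sliding time window exceeds a threshold."""
    for i in range(window, len(acc) + 1):
        dv = sum(acc[i - window:i]) * dt
        if abs(dv) > thresh:
            return i - 1
    return None

# synthetic trace: steady descent under chutes, then impact at sample 50
acc = [0.5] * 50 + [80.0, 60.0, 20.0] + [0.0] * 10
t_accel = accel_spike(acc, thresh=40.0)
t_dv = delta_v_spike(acc, dt=0.01, window=5, thresh=1.0)
```

On this trace the acceleration spike fires at the impact sample while the windowed velocity-change detector fires one sample later, mirroring the paper's finding that the acceleration spike is the faster of the two.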
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. 
Finally, while the PDG stage typically lasts only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
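The abstract does not specify the enhanced line-search method; a generic derivative-free search with a bounded iteration count, such as golden-section search over time-of-flight, is sketched below under that assumption (the convex fuel-use curve and its numbers are invented):

```python
import math

def golden_section_min(f, lo, hi, tol=1e-6):
    """Golden-section search: a derivative-free line search whose number
    of iterations is bounded a priori by the bracket size and tolerance
    (illustrative stand-in for the enhanced line-search step)."""
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2.0

# toy convex fuel-use curve vs. time of flight, minimized at 42 s:
fuel = lambda tof: (tof - 42.0) ** 2 + 250.0
tof_star = golden_section_min(fuel, 10.0, 90.0)
```

Each iteration shrinks the bracket by a fixed factor, so the worst-case number of cost evaluations is known in advance, the property the enhancement uses to bound path-planning iterations.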
An Airborne Conflict Resolution Approach Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mondoloni, Stephane; Conway, Sheila
2001-01-01
An airborne conflict resolution approach is presented that is capable of providing flight plans forecast to be conflict-free with both area and traffic hazards. This approach is capable of meeting constraints on the flight plan such as required times of arrival (RTA) at a fix. The conflict resolution algorithm is based upon a genetic algorithm, and can thus seek conflict-free flight plans meeting broader flight planning objectives such as minimum time, fuel or total cost. The method has been applied to conflicts occurring 6 to 25 minutes in the future in climb, cruise and descent phases of flight. The conflict resolution approach separates the detection, trajectory generation and flight rules function from the resolution algorithm. The method is capable of supporting pilot-constructed resolutions, cooperative and non-cooperative maneuvers, and also providing conflict resolution on trajectories forecast by an onboard FMC.
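A genetic algorithm of the kind described can be sketched on a one-dimensional stand-in problem: choose a heading offset that minimizes deviation while avoiding a forecast conflict band. The encoding, operators, and all numbers are illustrative assumptions; the paper's flight-plan representation is far richer.

```python
import random

def genetic_minimize(cost, lo, hi, pop_size=30, gens=60, seed=1):
    """Tiny real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (sketch of the approach only)."""
    rng = random.Random(seed)

    def tournament(pop):
        a, b = rng.sample(pop, 2)
        return a if cost(a) < cost(b) else b

    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(pop), tournament(pop)
            wgt = rng.random()
            child = wgt * p1 + (1.0 - wgt) * p2        # blend crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))  # mutation
            nxt.append(min(hi, max(lo, child)))
        nxt[0] = min(pop, key=cost)                    # elitism
        pop = nxt
    return min(pop, key=cost)

# heading offset in degrees: prefer a small deviation, but offsets inside
# (-5, 5) degrees are forecast to conflict with traffic (toy numbers):
def cost(x):
    return abs(x) + (1000.0 if -5.0 < x < 5.0 else 0.0)

best = genetic_minimize(cost, -30.0, 30.0)
```

Because the fitness function simply adds a penalty for conflicting solutions, the same machinery can fold in broader objectives (time, fuel, total cost) exactly as the abstract describes.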
Li, Qu; Yao, Min; Yang, Jianhua; Xu, Ning
2014-01-01
Online friend recommendation is a fast-developing topic in web mining. In this paper, we used SVD matrix factorization to model user and item feature vectors and used stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold-start problem and data sparsity, we used a KNN model to influence the user feature vectors. At the same time, we used graph theory to partition communities with fairly low time and space complexity. Moreover, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
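The core factorization-plus-SGD step can be sketched as follows (a generic latent-factor update, not the paper's code; the rating list and hyperparameters are invented, and the KNN/community components are omitted):

```python
import random

def sgd_mf(ratings, n_users, n_items, k=2, lr=0.02, reg=0.02,
           epochs=300, seed=0):
    """Factor a sparse rating list [(u, i, r)] into user vectors P and
    item vectors Q by stochastic gradient descent on the squared error
    with L2 regularization (sketch of the generic technique)."""
    rng = random.Random(seed)
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - sum(P[u][f] * Q[i][f] for f in range(k))
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)   # gradient step
                Q[i][f] += lr * (e * pu - reg * qi)
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 1, 1.0), (2, 2, 4.0)]
P, Q = sgd_mf(ratings, n_users=3, n_items=3)
pred = lambda u, i: sum(P[u][f] * Q[i][f] for f in range(2))
```

After training, the dot product of a user vector and an item vector approximates the observed rating, and unobserved pairs can be scored the same way for recommendation.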
Railway obstacle detection algorithm using neural network
NASA Astrophysics Data System (ADS)
Yu, Mingyang; Yang, Peng; Wei, Sen
2018-05-01
Aiming at the difficulty of obstacle detection in outdoor railway scenes, a data-oriented method based on a neural network is proposed. First, we mark objects (such as people, trains, and animals) in images acquired from the Internet, and then use residual learning units to build a Fast R-CNN framework. The neural network is then trained to learn the target image characteristics using the stochastic gradient descent algorithm. Finally, the trained model is used to identify an outdoor railway image; if it includes trains or other objects, an alert is issued. Experiments show that the correct warning rate reached 94.85%.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
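The second-order dissipative flow can be illustrated on a toy scalar objective: integrate x'' + mu x' + grad f(x) = 0 with a semi-implicit (symplectic-style) Euler step and let the damped trajectory settle at a stationary point. This is only a sketch of the idea; the paper's solver works on a regularized PDE-constrained problem with a dynamically selected regularization parameter.

```python
def damped_symplectic(grad, x0, mu=1.0, dt=0.1, steps=500):
    """Damped second-order flow  x'' + mu*x' + grad f(x) = 0,
    integrated with semi-implicit Euler (velocity first, then position);
    the damping makes the trajectory converge to a minimizer of f."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = (v - dt * grad(x)) / (1.0 + dt * mu)   # implicit damping term
        x = x + dt * v
    return x

# toy objective f(x) = (x - 3)^2 / 2, so grad f(x) = x - 3:
xmin = damped_symplectic(lambda x: x - 3.0, x0=0.0)
```

Unlike plain gradient flow, the iterate carries momentum, yet the friction term mu guarantees the oscillations decay and the limit is the minimizer.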
A Simple Algorithm for the Metric Traveling Salesman Problem
NASA Technical Reports Server (NTRS)
Grimm, M. J.
1984-01-01
An algorithm was designed for a wire-list net-sort problem. For this, a branch and bound algorithm for the metric traveling salesman problem is presented. The algorithm is a best-bound-first recursive descent where the bound is based on the triangle inequality. The bounded subsets are defined by the relative order of the first K of the N cities (i.e., a K-city subtour). When K equals N, the bound is the length of the tour. The algorithm is implemented as a one-page subroutine written in the C programming language for the VAX 11/750. Average execution times for randomly selected planar points using the Euclidean metric are 0.01, 0.05, 0.42, and 3.13 seconds for ten, fifteen, twenty, and twenty-five cities, respectively. Maximum execution times for a hundred cases are less than eleven times the averages. The speed of the algorithm is due to an initial ordering algorithm that is an N-squared operation. The algorithm also solves the related problem where the tour does not return to the starting city, and the starting and/or ending cities may be specified. It is possible to extend the algorithm to solve a nonsymmetric problem satisfying the triangle inequality.
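The bound described above, the length of the closed K-city subtour, which by the triangle inequality can only grow as more cities are inserted, drives a best-bound-first search. A compact Python sketch of that idea (the original is a one-page C subroutine; the point set here is invented, and the initial-ordering speedup is omitted):

```python
import heapq
import itertools
import math

def tsp_branch_and_bound(pts):
    """Best-bound-first branch and bound for the metric TSP. A node
    fixes the relative order of the first K cities; its bound is the
    closed K-city subtour length, a valid lower bound by the triangle
    inequality. The first complete tour popped is optimal."""
    n = len(pts)
    d = lambda i, j: math.dist(pts[i], pts[j])

    def subtour_len(order):
        return sum(d(order[k], order[(k + 1) % len(order)])
                   for k in range(len(order)))

    heap = [(0.0, (0,))]                 # fix city 0 first (symmetry cut)
    while heap:
        bound, order = heapq.heappop(heap)
        if len(order) == n:
            return bound, list(order)    # bound equals the tour length here
        for city in range(n):
            if city not in order:
                child = order + (city,)
                heapq.heappush(heap, (subtour_len(child), child))

def brute_force_len(pts):
    """Exhaustive check for small instances."""
    n = len(pts)
    return min(sum(math.dist(pts[p[k]], pts[p[(k + 1) % n]]) for k in range(n))
               for p in itertools.permutations(range(n)))

pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5), (1, 1)]
length, tour = tsp_branch_and_bound(pts)
```

Because the bound never overestimates and never decreases along a branch, the first fully expanded node popped from the priority queue is guaranteed optimal.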
On Nonconvex Decentralized Gradient Descent
2016-08-01
Neural network explanation using inversion.
Saad, Emad W; Wunsch, Donald C
2007-01-01
An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e., calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method which extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information-theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems and to a real aerospace problem, and compared with similar algorithms using benchmark problems.
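The gradient-descent inversion step can be sketched on a tiny fixed network: instead of adjusting weights, descend on the *input* until the output reaches a desired value. The 1-2-1 network, its weights, and the target are all invented for illustration; HYPINV itself builds hyperplane rules on top of such inversions.

```python
import math

# fixed toy 1-2-1 network  y = w2 . tanh(w1*x + b1); weights are arbitrary
w1, b1 = [1.5, -2.0], [0.0, 1.0]
w2 = [1.0, 0.8]

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(2)]
    return sum(w2[j] * h[j] for j in range(2))

def invert(y_target, x0=0.0, lr=0.1, steps=2000):
    """Gradient descent on the input so the network output reaches
    y_target -- the inversion primitive (scalar-input toy sketch)."""
    x = x0
    for _ in range(steps):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(2)]
        y = sum(w2[j] * h[j] for j in range(2))
        # chain rule: dy/dx through each hidden unit
        dy_dx = sum(w2[j] * (1.0 - h[j] ** 2) * w1[j] for j in range(2))
        x -= lr * 2.0 * (y - y_target) * dy_dx   # minimize (y - y_target)^2
    return x

x_star = invert(0.5)
```

In an evolutionary variant, the same loss over candidate inputs would be minimized by mutation and selection instead of the analytic gradient.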
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform-domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved LMS algorithms on their robustness and misalignment.
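Two of the compared algorithms, LMS and NLMS, can be sketched on a noiseless toy identification problem (the 3-tap "unknown" system and step sizes are invented; TDLMS and APA are omitted):

```python
import random

def lms_identify(x, d, taps, mu, normalized=False, eps=1e-8):
    """Identify an unknown FIR system from input x and measured output d.
    LMS update: w <- w + mu*e*x_vec; NLMS additionally divides the step
    by the instantaneous input energy (sketch of the compared methods)."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        xv = [x[n - k] for k in range(taps)]       # current input vector
        e = d[n] - sum(wk * xk for wk, xk in zip(w, xv))   # a priori error
        g = mu / (eps + sum(xk * xk for xk in xv)) if normalized else mu
        w = [wk + g * e * xk for wk, xk in zip(w, xv)]
    return w

rng = random.Random(0)
h = [0.6, -0.3, 0.1]                               # "unknown" system
x = [rng.uniform(-1.0, 1.0) for _ in range(4000)]
d = [sum(h[k] * x[n - k] for k in range(len(h))) if n >= 2 else 0.0
     for n in range(len(x))]
w_lms = lms_identify(x, d, taps=3, mu=0.1)
w_nlms = lms_identify(x, d, taps=3, mu=0.5, normalized=True)
```

With white input and no measurement noise both variants converge to the true taps; the normalization makes NLMS's convergence rate insensitive to the input signal power, which is one of the spectral-sensitivity effects the study compares.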
Real-Time Adaptive Control of Flow-Induced Cavity Tones
NASA Technical Reports Server (NTRS)
Kegerise, Michael A.; Cabell, Randolph H.; Cattafesta, Louis N.
2004-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. The adaptive control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Controller performance was evaluated with a measure of output disturbance rejection and an input sensitivity transfer function. The results suggest that disturbances entering the cavity flow are colocated with the control input at the cavity leading edge. In that case, only tonal components of the cavity wall-pressure fluctuations can be suppressed, and arbitrary broadband pressure reduction is not possible. In the control-algorithm development, the cavity dynamics are treated as linear and time invariant (LTI) for a fixed Mach number. The experimental results lend support to this treatment.
On the Use of a Range Trigger for the Mars Science Laboratory Entry Descent and Landing
NASA Technical Reports Server (NTRS)
Way, David W.
2011-01-01
In 2012, during the Entry, Descent, and Landing (EDL) of the Mars Science Laboratory (MSL) entry vehicle, a 21.5 m Viking-heritage, Disk-Gap-Band supersonic parachute will be deployed at approximately Mach 2. The baseline algorithm for commanding this parachute deployment is a navigated planet-relative velocity trigger. This paper compares the performance of an alternative range-to-go trigger (sometimes referred to as a "Smart Chute"), which can significantly reduce the landing footprint size. Numerical Monte Carlo results, predicted by the MSL POST2 end-to-end EDL simulation, are corroborated and explained by applying propagation-of-uncertainty methods to develop an analytic estimate for the standard deviation of Mach number. A negative correlation is shown to exist between the standard deviations of the wind velocity and of the planet-relative velocity at parachute deploy, which mitigates the Mach number rise in the case of the range trigger.
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
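The filtered-x gradient approach can be illustrated in its classical linear, single-channel form: the reference signal is filtered through the secondary-path model before entering the weight update. This is a stand-in sketch; the paper's controller and plant model are neural networks, and the paths and step size below are invented.

```python
import random

def fir(h, sig, n):
    """Causal FIR output: sum_k h[k] * sig[n-k]."""
    return sum(h[k] * sig[n - k] for k in range(len(h)) if n - k >= 0)

def fxlms(x, d, s_hat, taps, mu):
    """Single-channel filtered-x LMS for active control: the error is
    measured after the secondary path, so the reference x is pre-filtered
    by the path model s_hat before the gradient-descent update."""
    w = [0.0] * taps
    y = [0.0] * len(x)
    err = []
    for n in range(len(x)):
        y[n] = fir(w, x, n)                        # controller output
        e = d[n] - fir(s_hat, y, n)                # residual at error sensor
        fx = [fir(s_hat, x, n - k) for k in range(taps)]  # filtered-x terms
        w = [w[k] + mu * e * fx[k] for k in range(taps)]
        err.append(e)
    return w, err

rng = random.Random(1)
x = [rng.uniform(-1.0, 1.0) for _ in range(3000)]
s = [0.5, 0.2]                                     # secondary path (known)
w_true = [0.8, -0.2]                               # response to be matched
yt, d = [0.0] * len(x), [0.0] * len(x)
for n in range(len(x)):
    yt[n] = fir(w_true, x, n)
    d[n] = fir(s, yt, n)                           # disturbance to cancel
w, err = fxlms(x, d, s, taps=4, mu=0.05)
```

The residual at the error sensor decays to near zero as the controller converges; the recursive-least-squares versions developed in the paper replace this stochastic-gradient update with an RLS update on the same filtered-x quantities.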
Pharmacogenomics of warfarin in populations of African descent
Suarez-Kurtz, Guilherme; Botton, Mariana R
2013-01-01
Warfarin is the most commonly prescribed oral anticoagulant worldwide despite its narrow therapeutic index and the notorious inter- and intra-individual variability in the dose required for the target clinical effect. Pharmacogenetic polymorphisms are major determinants of warfarin pharmacokinetics and dynamics and are included in several warfarin dosing algorithms. This review focuses on warfarin pharmacogenomics in sub-Saharan peoples, African Americans and admixed Brazilians. These 'Black' populations differ in several aspects, notably their extent of recent admixture with Europeans, a factor which impacts the frequency distribution of pharmacogenomic polymorphisms relevant to the warfarin dose required for the target clinical effect. Whereas a small number of polymorphisms in VKORC1 (3673G > A, rs9923231), CYP2C9 (alleles *2 and *3, rs1799853 and rs1057910, respectively) and arguably CYP4F2 (rs2108622) may capture most of the pharmacogenomic influence on warfarin dose variance in White populations, additional polymorphisms in these and in other genes (e.g. CALU rs339097) increase the predictive power of pharmacogenetic warfarin dosing algorithms in the Black populations examined. A personalized strategy for initiation of warfarin therapy, allowing for improved safety and cost-effectiveness for populations of African descent, must take into account their pharmacogenomic diversity, as well as socioeconomic, cultural and medical factors. Accounting for this heterogeneity in algorithms that are 'friendly' enough to be adopted by warfarin prescribers worldwide requires gathering information from trials at different population levels, but also demands a critical appraisal of the racial/ethnic labels that are commonly used in the clinical pharmacology literature but do not accurately reflect genetic ancestry and population diversity. PMID:22676711
Intelligence system based classification approach for medical disease diagnosis
NASA Astrophysics Data System (ADS)
Sagir, Abdu Masanawa; Sathasivam, Saratha
2017-08-01
The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after undergoing certain surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease often depends more on the physician's intuition, experience and skill in comparing current indicators with previous ones than on the knowledge-rich data hidden in a database. This is a crucial and challenging task. The goal is to predict patient condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework describes a methodology for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms, least-squares estimation combined with either a modified Levenberg-Marquardt or a gradient descent algorithm, that can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test data from the mammographic mass and Haberman's survival datasets obtained from the benchmarked datasets of the University of California at Irvine (UCI) machine learning repository. The robustness of the performance, measured in terms of total accuracy, sensitivity and specificity, is examined. In comparison, the proposed method achieves superior performance relative to the conventional ANFIS based on the gradient descent algorithm and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor with 2.80 GHz speed and 2.0 GB of RAM.
Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.
Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong
2011-09-01
Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm poses image reconstruction as a standard optimization problem including an ℓ2 data-fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in the proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression. Copyright © 2011 Elsevier Inc. All rights reserved.
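The generic iterative shrinkage/thresholding step that these methods build on can be sketched on a tiny ℓ2 + ℓ1 problem (toy measurement matrix; the paper's noise-adaptive threshold and edge-correlation prior are omitted):

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of t*||.||_1."""
    return [max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0) for z in v]

def ista(A, b, lam, L, iters=500):
    """ISTA for  min 0.5*||Ax - b||^2 + lam*||x||_1 : a gradient step on
    the data-fidelity term followed by soft-thresholding, with step 1/L
    where L bounds the largest eigenvalue of A^T A (generic sketch)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = soft([x[j] - g[j] / L for j in range(n)], lam / L)
    return x

A = [[1.0, 0.0, 0.5],     # toy undersampled measurement operator
     [0.0, 1.0, 0.5]]
b = [1.0, 0.0]
x = ista(A, b, lam=0.1, L=2.0)
```

For this instance the minimizer is (0.9, 0, 0): the threshold shrinks the supported coefficient by lam and zeroes the rest, which is exactly the sparsity-promoting behavior the adaptive threshold in the paper tunes to the noise level.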
Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Flight-Test Results
NASA Technical Reports Server (NTRS)
Brown, Nelson Andrew; Schaefer, Jacob Robert
2013-01-01
A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These results show that the algorithm has good performance in a relevant environment.
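The peak-seeking loop can be sketched in one dimension with a central-difference gradient estimate standing in for the flight algorithm's time-varying Kalman filter (that substitution, the toy fuel-flow bowl, and all numbers are simplifying assumptions):

```python
def peak_seek(measure, x0, delta=0.1, step=0.01, iters=60):
    """Peak-seeking trim search: perturb the trim setting, estimate the
    local gradient of the measured performance function by central
    differences, then take a steepest-descent step (the flight system
    estimates this gradient with a Kalman filter instead)."""
    x = x0
    for _ in range(iters):
        g = (measure(x + delta) - measure(x - delta)) / (2.0 * delta)
        x -= step * g
    return x

# toy fuel-flow bowl vs. trim setting, minimized at trim = 1.8:
fuel_flow = lambda u: 1000.0 + 40.0 * (u - 1.8) ** 2
trim = peak_seek(fuel_flow, x0=0.0)
```

The loop needs only measured performance values, no model of the aircraft, which is what makes the approach attractive for in-flight trim optimization.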
Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1995-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
NASA Astrophysics Data System (ADS)
Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.
2004-07-01
The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions that meet given constraints for the tumour as well as any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. Including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property from our experimental observation. The smoothness properties are compared with those from other optimization algorithms which include simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
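Cimmino's simultaneous projection step is compact enough to sketch. Assuming the planning constraints have been linearized into a system A x ≤ b, each iteration moves the iterate by the weighted average of its projections onto every violated half-space; the toy box constraints below are invented, not an IMRT model.

```python
import numpy as np

def cimmino(A, b, x0, iters=500):
    """Simultaneous projection for the linear feasibility problem A @ x <= b."""
    A, x = np.asarray(A, float), np.asarray(x0, float)
    norms = np.sum(A * A, axis=1)              # squared row norms ||a_i||^2
    w = np.full(len(b), 1.0 / len(b))          # equal projection weights
    for _ in range(iters):
        viol = np.maximum(A @ x - b, 0.0)      # positive part: violated rows
        # Weighted average of the projections onto each violated half-space.
        x = x - A.T @ (w * viol / norms)
    return x

# Toy constraints: keep each variable inside the box [1, 2] x [1, 2].
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([-1.0, -1.0, 2.0, 2.0])
x = cimmino(A, b, x0=[5.0, -3.0])
```

Because every projection is computed independently before averaging, the step parallelizes trivially, which is the property the abstract highlights.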
Algorithms for Maneuvering Spacecraft Around Small Bodies
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Bayard, David
2006-01-01
A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
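The spectral-discretization step can be illustrated with NumPy's Chebyshev utilities (a generic sketch, not the spacecraft guidance code): a continuous control history is replaced by a small vector of Chebyshev coefficients, which then become the optimization variables.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Represent a control input u(t), t in [-1, 1], by a finite Chebyshev
# series, so the infinite-dimensional unknown u(t) collapses to 10 numbers.
t = np.linspace(-1.0, 1.0, 201)
u = np.exp(-t) * np.sin(3 * t)       # some smooth (invented) control history

coeffs = C.chebfit(t, u, deg=9)      # 10 coefficients replace u(t)
u_hat = C.chebval(t, coeffs)         # reconstruct the control from them

max_err = np.max(np.abs(u - u_hat))
```

For smooth controls the coefficients decay rapidly, which is why a modest basis suffices in the doubly discretized problems described above.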
Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Flight-test Results
NASA Technical Reports Server (NTRS)
Brown, Nelson Andrew; Schaefer, Jacob Robert
2013-01-01
Phase imaging using shifted wavefront sensor images.
Zhang, Zhengyun; Chen, Zhi; Rehman, Shakil; Barbastathis, George
2014-11-01
We propose a new approach to the complete retrieval of a coherent field (amplitude and phase) using the same hardware configuration as a Shack-Hartmann sensor but with two modifications: first, we add a transversally shifted measurement to resolve ambiguities in the measured phase; and second, we employ factored form descent (FFD), an inverse algorithm for coherence retrieval, with a hard rank constraint. We verified the proposed approach using both numerical simulations and experiments.
Learning Structured Classifiers with Dual Coordinate Ascent
2010-06-01
stochastic gradient descent (SGD) [LeCun et al., 1998], and the margin infused relaxed algorithm (MIRA) [Crammer et al., 2006]. This paper presents a...evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005)...between two kinds of factors: hard constraint factors, which are used to rule out forbidden partial assignments by mapping them to zero potential values
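As a minimal illustration of the SGD baseline mentioned in this excerpt (not the structured dual coordinate ascent method of the paper), here is plain stochastic gradient descent on logistic loss over a tiny invented dataset:

```python
import numpy as np

def sgd_logistic(X, y, epochs=200, lr=0.5, seed=0):
    """Plain SGD for logistic regression: one example per weight update."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):        # shuffle each epoch
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))  # predicted probability
            w -= lr * (p - y[i]) * X[i]          # gradient of the log-loss
    return w

# Toy separable data: label is 1 when the first feature is positive;
# the second feature is a constant bias term.
X = np.array([[1.0, 1.0], [2.0, 1.0], [-1.0, 1.0], [-2.0, 1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = sgd_logistic(X, y)
preds = (X @ w > 0).astype(float)
```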
Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy
NASA Astrophysics Data System (ADS)
Rendon, A.; Beck, J. C.; Lilge, Lothar
2008-02-01
Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms, which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution; hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
NASA Astrophysics Data System (ADS)
Arias, E.; Florez, E.; Pérez-Torres, J. F.
2017-06-01
A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
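The gradient descent stage of this scheme can be sketched on the classic Thomson problem itself, assuming a pure Coulomb repulsion between points constrained to the unit sphere (the paper's actual objectives are the nuclear potential and then the total energy; this is only the geometric core of the idea):

```python
import numpy as np

def coulomb_energy(P):
    # Pairwise repulsion energy of unit-sphere points (Thomson problem).
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    iu = np.triu_indices(len(P), k=1)
    return np.sum(1.0 / D[iu])

def descend(P, steps=2000, lr=0.01):
    for _ in range(steps):
        diff = P[:, None, :] - P[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)                       # ignore self-terms
        grad = -np.sum(diff / d[:, :, None] ** 3, axis=1)  # dE/dP
        P = P - lr * grad                                  # descent step
        P /= np.linalg.norm(P, axis=1, keepdims=True)      # stay on sphere
    return P

rng = np.random.default_rng(1)
P0 = rng.standard_normal((4, 3))
P0 /= np.linalg.norm(P0, axis=1, keepdims=True)
P = descend(P0)
```

For four charges the minimizer is the regular tetrahedron, whose energy is 6/sqrt(8/3) ≈ 3.674, so the sketch can be checked against a known optimum.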
Design factors and considerations for a time-based flight management system
NASA Technical Reports Server (NTRS)
Vicroy, D. D.; Williams, D. H.; Sorensen, J. A.
1986-01-01
Recent NASA Langley Research Center research to develop a technology data base from which an advanced Flight Management System (FMS) design might evolve is reviewed. In particular, the generation of fixed range cruise/descent reference trajectories which meet predefined end conditions of altitude, speed, and time is addressed. Results on the design and theoretical basis of the trajectory generation algorithm are presented, followed by a brief discussion of a series of studies that are being conducted to determine the accuracy requirements of the aircraft and weather models resident in the trajectory generation algorithm. Finally, studies to investigate the interface requirements between the pilot and an advanced FMS are considered.
Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.
Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban
2015-07-20
In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.
CP decomposition approach to blind separation for DS-CDMA system using a new performance index
NASA Astrophysics Data System (ADS)
Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss
2014-12-01
In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
A modified conjugate gradient coefficient with inexact line search for unconstrained optimization
NASA Astrophysics Data System (ADS)
Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa
2016-11-01
The conjugate gradient (CG) method is a line search algorithm mostly known for its wide application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods. The results obtained indicate that our method is indeed superior.
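A generic nonlinear CG skeleton with an inexact (Armijo backtracking) line search looks as follows; the classical Fletcher-Reeves coefficient is used as a stand-in, since the paper's AMR*/CD-based coefficient is not reproduced here, and the quadratic test function is invented:

```python
import numpy as np

def cg_minimize(f, grad, x0, iters=100):
    """Nonlinear CG with Armijo backtracking (an inexact line search)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        t = 1.0                                  # backtracking line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)         # Fletcher-Reeves coefficient
        d_new = -g_new + beta * d
        if g_new @ d_new >= 0:                   # restart if not a descent dir
            d_new = -g_new
        x, g, d = x_new, g_new, d_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])           # toy convex quadratic
bvec = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - bvec @ x
grad = lambda x: A @ x - bvec
x = cg_minimize(f, grad, [0.0, 0.0])
```

The restart test enforces the sufficient descent property that the abstract's convergence analysis relies on.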
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
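The alternating-approximation idea can be sketched in one dimension: a few truncated gradient descents are taken in the object x with the PSF h held fixed, then in h with x fixed, and the scale ambiguity is removed by renormalizing h after each sweep. The signals, step-size rule, and iteration counts below are all invented for illustration:

```python
import numpy as np

x_true = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # object (invented)
h_true = np.array([0.5, 1.0, 0.5])             # PSF (invented)
b = np.convolve(x_true, h_true)                # observed blurred data

x = np.ones(5)                                 # initial object guess
h = np.array([0.2, 0.6, 0.2])                  # initial PSF guess
res0 = np.linalg.norm(np.convolve(x, h) - b)

for _ in range(100):                           # outer fixed-point sweeps
    for _ in range(5):                         # truncated descents in x
        r = np.convolve(x, h) - b
        x -= np.correlate(r, h, 'valid') / (np.abs(h).sum() ** 2)
    for _ in range(5):                         # truncated descents in h
        r = np.convolve(x, h) - b
        h -= np.correlate(r, x, 'valid') / (np.abs(x).sum() ** 2)
    s = h.sum()                                # fix the scale ambiguity
    h, x = h / s, x * s

res = np.linalg.norm(np.convolve(x, h) - b)
```

The inner iteration counts (here 5 and 5) play the role of the two regularization parameters described in the abstract.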
Algorithms for the optimization of RBE-weighted dose in particle therapy.
Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M
2013-01-21
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. Concerning the dose calculation carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms into GSI's treatment planning system TRiP98, like the BFGS-algorithm and the method of conjugated gradients, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented by convergence in terms of iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugated gradients is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes leading to good dose distributions. At the end we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it applies the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean square error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution. Owing to the algorithm's nature, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behaviour of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
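A stripped-down version of such a joint scheme is projected alternating gradient descent on the linear mixing model Y ≈ E A (a generic NMF-style sketch with synthetic data, not the authors' parallel algorithm, and it omits their sum-to-one constraint):

```python
import numpy as np

rng = np.random.default_rng(0)
bands, pixels, p = 30, 200, 3
E_true = rng.uniform(0.0, 1.0, (bands, p))        # endmember spectra
A_true = rng.dirichlet(np.ones(p), pixels).T      # abundances, sum to one
Y = E_true @ A_true                               # linear mixing model

E = rng.uniform(0.0, 1.0, (bands, p))             # initial estimates
A = np.full((p, pixels), 1.0 / p)
res0 = np.linalg.norm(Y - E @ A)

for _ in range(300):
    # Gradient step in the endmembers, then project onto nonnegativity;
    # the step size 1/L uses the Lipschitz constant of each subproblem.
    R = E @ A - Y
    E = np.clip(E - (R @ A.T) / np.linalg.norm(A @ A.T, 2), 0.0, None)
    # Same update for the abundances.
    R = E @ A - Y
    A = np.clip(A - (E.T @ R) / np.linalg.norm(E.T @ E, 2), 0.0, None)

res = np.linalg.norm(Y - E @ A)
```

Every update is a matrix product followed by an elementwise clip, which is what makes this family of methods so amenable to parallel hardware.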
NASA Astrophysics Data System (ADS)
Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang
2018-05-01
Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetration of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particularly great advantages in restoring simply connected flow regimes and the shape of the object. In addition, a CT experiment on two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
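A genetic algorithm with Srinivas-Patnaik-style adaptive crossover and mutation probabilities can be sketched on a toy "one-max" objective (this illustrates only the adaptive-rate idea, not the IAGA-SC reconstruction itself; the rate formulas and floors are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
L, POP, GENS = 16, 40, 150
fitness = lambda pop: pop.sum(axis=1)        # toy "one-max" objective

pop = rng.integers(0, 2, (POP, L))
for _ in range(GENS):
    f = fitness(pop)
    f_avg, f_max = f.mean(), f.max()
    order = np.argsort(f)[::-1]              # sort fittest first
    pop, f = pop[order], f[order]
    new = [pop[0].copy()]                    # elitism keeps the best
    while len(new) < POP:
        i, j = rng.integers(0, POP // 2, 2)  # truncation selection
        a, b = pop[i].copy(), pop[j].copy()
        # Adaptive rates: fitter-than-average pairs are disturbed less;
        # small floors keep the search alive after convergence.
        scale = (f_max - max(f[i], f[j])) / (f_max - f_avg + 1e-9)
        pc = max(0.2, 0.8 * min(1.0, scale))
        pm = max(0.01, 0.05 * min(1.0, scale))
        if rng.random() < pc:                # one-point crossover
            cut = rng.integers(1, L)
            a[cut:], b[cut:] = b[cut:].copy(), a[cut:].copy()
        for child in (a, b):                 # bitwise mutation
            mask = rng.random(L) < pm
            child[mask] ^= 1
        new += [a, b]
    pop = np.array(new[:POP])

best = int(fitness(pop).max())
```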
Hazardous gas detection for FTIR-based hyperspectral imaging system using DNN and CNN
NASA Astrophysics Data System (ADS)
Kim, Yong Chan; Yu, Hyeong-Geun; Lee, Jae-Hoon; Park, Dong-Jo; Nam, Hyun-Woo
2017-10-01
Recently, a hyperspectral imaging system (HIS) with a Fourier Transform InfraRed (FTIR) spectrometer has been widely used due to its strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have already been studied, it is still difficult to detect target gases properly because of atmospheric interference substances and the unclear characteristics of low-concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). In both the DNN and CNN, spectral signal preprocessing, e.g., offset, noise, and baseline removal, is carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of a five-layer DNN, which is trained by a stochastic gradient descent (SGD) algorithm (batch size 50) and dropout regularization (ratio 0.7). In the CNN algorithm, preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.
Real-Time Feedback Control of Flow-Induced Cavity Tones. Part 2; Adaptive Control
NASA Technical Reports Server (NTRS)
Kegerise, M. A.; Cabell, R. H.; Cattafesta, L. N., III
2006-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. Past input-output data and an estimate of the open-loop pulse response sequence are all that is needed to implement the algorithm for application at fixed Mach numbers. Transient measurements made during controller adaptation revealed that the controller coefficients converged to a steady state in the mean, and this implies that adaptation can be turned off at some point with no degradation in control performance. When converged, the control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. However, as in the case of fixed-gain GPC, the adaptive GPC performance was limited by spillover in sidebands around the suppressed Rossiter modes. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Beyond this range, stable operation of the control algorithm was not possible due to the fixed plant model in the algorithm.
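The gradient-descent coefficient update underlying such adaptive controllers is essentially the LMS recipe from adaptive filtering. As a hypothetical stand-alone sketch (not the GPC controller itself), the loop below identifies an invented three-tap pulse response from input-output data by descending the squared prediction error:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.5, -0.3, 0.2])      # unknown pulse response (invented)
u = rng.standard_normal(5000)            # excitation signal
d = np.convolve(u, h_true)[:len(u)]      # measured output

w = np.zeros(3)                          # adaptive filter coefficients
mu = 0.05                                # gradient-descent step size
x = np.zeros(3)
for n in range(len(u)):
    x = np.roll(x, 1)
    x[0] = u[n]                          # regressor of the 3 latest inputs
    e = d[n] - w @ x                     # output prediction error
    w += mu * e * x                      # descend the squared-error gradient
```

Once the coefficients converge in the mean, adaptation can be frozen, which mirrors the observation in the abstract that turning adaptation off caused no loss of control performance.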
On the use of harmony search algorithm in the training of wavelet neural networks
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2015-10-01
Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this training purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
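A bare-bones harmony search loop (generic, not the partitioned WNN-training variant of the paper) minimizing an invented surrogate cost looks like this:

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    hm = rng.uniform(lo, hi, (hms, dim))           # harmony memory
    cost = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:             # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1) * (hi[j] - lo[j])
            else:                                  # random selection
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = np.argmax(cost)
        if c < cost[worst]:                        # replace worst harmony
            hm[worst], cost[worst] = new, c
    return hm[np.argmin(cost)], cost.min()

f = lambda x: np.sum((x - 0.3) ** 2)               # invented surrogate cost
lo, hi = np.zeros(4), np.ones(4)
x_best, c_best = harmony_search(f, (lo, hi))
```

The memory-consideration step gives the population-based behaviour, while pitch adjustment provides the local search; the abstract's partitioned initialization would replace the single `rng.uniform` seeding of `hm`.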
Vertical Trajectory Optimization Using the Harmony Search Method [Optimisation des trajectoires verticales par la méthode de la recherche de l'harmonie]
NASA Astrophysics Data System (ADS)
Ruby, Margaux
In the face of global warming, solutions to reduce CO2 emissions are urgently needed. Trajectory optimization is one way to reduce fuel consumption during a flight. To determine the optimal aircraft trajectory, different algorithms have been developed. The goal of these algorithms is to minimize the total cost of a flight, which is directly related to fuel consumption and flight time. Another parameter, called the cost index, is included in the definition of flight cost. Fuel consumption is supplied via performance data for each flight phase. In this thesis, the phases of a complete flight are studied: a climb phase, a cruise phase, and a descent phase. "Step climbs", defined as 2,000-ft climbs during the cruise phase, are also studied. The algorithm developed in this thesis is a metaheuristic called harmony search, which combines two types of search: local search and population-based search. The algorithm is inspired by the observation of musicians in a concert, or more precisely by the ability of the music to find its best harmony, that is, in optimization terms, the lowest cost. Various inputs, such as the aircraft weight, the destination, the initial aircraft speed, and the number of iterations, must be supplied to the algorithm so that it can determine the optimal solution, which is defined as: [climb speed, altitude, cruise speed, descent speed]. The algorithm was developed in MATLAB and tested for several destinations and several weights for a single aircraft type.
For validation, the results obtained by this algorithm were first compared with the results of an exhaustive search over all possible combinations. The exhaustive search provides the global optimum; the solution of our algorithm must therefore come as close as possible to the exhaustive-search result to show that it yields results near the global optimum. A second comparison was made between the results of the algorithm and those of the Flight Management System (FMS), an avionics system located in the aircraft cockpit that provides the route to follow in order to optimize the trajectory. The goal is to show that the harmony search algorithm gives better results than the algorithm implemented in the FMS.
NASA Technical Reports Server (NTRS)
Brown, Nelson
2013-01-01
A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. This presentation also focuses on the design of the flight experiment and the practical challenges of conducting the experiment.
NASA Astrophysics Data System (ADS)
Shahri, Abbas; Mousavinaseri, Mahsasadat; Naderi, Shima; Espersson, Maria
2015-04-01
Application of Artificial Neural Networks (ANNs) to many areas of engineering, in particular to geotechnical engineering problems such as site characterization, has demonstrated some degree of success. The present paper aims to evaluate the feasibility of several types of ANN models for predicting the clay sensitivity of soft clays from piezocone penetration test (CPTu) data. To this end, a research database of CPTu data from 70 test points around the Göta River near Lilla Edet in southwest Sweden, a highly landslide-prone area, was collected and used as input for the ANNs. The quick propagation, conjugate gradient descent, quasi-Newton, limited-memory quasi-Newton, and Levenberg-Marquardt training algorithms were developed, tested, and trained using the CPTu data to provide a comparison between the results of the field investigation and the ANN estimates of clay sensitivity. The clay sensitivity parameter was chosen for this study because of its relation to landslides in Sweden. A special high-sensitivity clay, namely quick clay, is considered the main cause of the landslides experienced in Sweden, as it is highly sensitive and prone to sliding. The training and testing program started with a 3-2-1 ANN architecture. By testing several architectures and varying the hidden layers in order to obtain higher output resolution, the 3-4-4-3-1 architecture was selected. The tests showed that increasing the number of hidden layers up to 4 can improve the results, and the 3-4-4-3-1 ANN gives a reliable and reasonable prediction of clay sensitivity. The obtained results showed that the conjugate gradient descent algorithm, with R2 = 0.897, has the best performance among the tested algorithms. Keywords: clay sensitivity, landslide, Artificial Neural Network
Powered Descent Trajectory Guidance and Some Considerations for Human Lunar Landing
NASA Technical Reports Server (NTRS)
Sostaric, Ronald R.
2007-01-01
The Autonomous Precision Landing and Hazard Detection and Avoidance Technology development (ALHAT) will enable an accurate (better than 100m) landing on the lunar surface. This technology will also permit autonomous (independent from ground) avoidance of hazards detected in real time. A preliminary trajectory guidance algorithm capable of supporting these tasks has been developed and demonstrated in simulations. Early results suggest that with expected improvements in sensor technology and lunar mapping, mission objectives are achievable.
2014-09-30
to establish the performance of algorithms detecting dives, strokes, clicks, respiration and gait changes. We have also found that a combination of... whale click count, total click count, vocal duration, SOC depth, EOC depth) Descent 40 bits (duration, vertical speed, stroke count 0-100 m, stroke count 100-400 m, OBDA, sum sr3) Bottom 26 bits (movement index, OBDA, jerk events, median jerk depth) Ascent
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster, and to a smaller value of cost, than both steepest-descent methods such as backpropagation-through-time and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
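The abstract above contrasts updates near the Newton direction with steepest descent. As a hedged illustration only (not the patented adaptive algorithm), here is a minimal plain Gauss-Newton iteration for a one-parameter nonlinear least-squares fit; the model y = exp(a·x) and all data are invented for the example:

```python
import math

def gauss_newton_exp_fit(xs, ys, a0=0.0, iters=20):
    """Fit y = exp(a*x) by Gauss-Newton on the scalar parameter a.

    Residuals r_i = exp(a*x_i) - y_i; Jacobian entries J_i = x_i*exp(a*x_i).
    Each step solves the 1-D normal equation: a <- a - (J.r)/(J.J).
    """
    a = a0
    for _ in range(iters):
        r = [math.exp(a * x) - y for x, y in zip(xs, ys)]
        J = [x * math.exp(a * x) for x in xs]
        JtJ = sum(j * j for j in J)
        Jtr = sum(j * ri for j, ri in zip(J, r))
        if JtJ == 0:
            break
        a -= Jtr / JtJ
    return a

# Synthetic, noise-free data generated from a = 0.7 (illustration only)
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.7 * x) for x in xs]
a_hat = gauss_newton_exp_fit(xs, ys)
print(a_hat)  # converges to 0.7
```

Near a zero-residual solution this converges quadratically, which is the advantage over first-order steepest descent that the abstract alludes to.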
A study on the performance comparison of metaheuristic algorithms on the learning of neural networks
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2017-08-01
The learning or training process of neural networks entails finding an optimal set of parameters, which includes translation vectors, dilation parameters, synaptic weights, and bias terms. Apart from the traditional gradient descent-based methods, metaheuristic methods can also be used for this learning purpose. Following the inception of the genetic algorithm half a century ago, the last decade has witnessed an explosion of novel metaheuristic algorithms, such as the harmony search algorithm, the bat algorithm, and the whale optimization algorithm. Despite the proof of the no free lunch theorem in the discipline of optimization, a survey of the machine learning literature gives contrasting results. Some researchers report that certain metaheuristic algorithms are superior to others, whereas others argue that different metaheuristic algorithms give comparable performance. As such, this paper investigates whether a certain metaheuristic algorithm will outperform the other algorithms. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and the harmony search algorithm, are considered. The algorithms are incorporated into the learning of neural networks and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported by previous researchers. Several recommendations are given, including the need for statistical analysis to verify the results and further theoretical work to support the empirical results obtained.
Pharmacogenomics of warfarin in populations of African descent.
Suarez-Kurtz, Guilherme; Botton, Mariana R
2013-02-01
Warfarin is the most commonly prescribed oral anticoagulant worldwide despite its narrow therapeutic index and the notorious inter- and intra-individual variability in the dose required for the target clinical effect. Pharmacogenetic polymorphisms are major determinants of warfarin pharmacokinetics and pharmacodynamics and are included in several warfarin dosing algorithms. This review focuses on warfarin pharmacogenomics in sub-Saharan peoples, African Americans and admixed Brazilians. These 'Black' populations differ in several aspects, notably their extent of recent admixture with Europeans, a factor which impacts the frequency distribution of pharmacogenomic polymorphisms relevant to warfarin dose requirement for the target clinical effect. Whereas a small number of polymorphisms in VKORC1 (3673G > A, rs9923231), CYP2C9 (alleles *2 and *3, rs1799853 and rs1057910, respectively) and arguably CYP4F2 (rs2108622) may capture most of the pharmacogenomic influence on warfarin dose variance in White populations, additional polymorphisms in these, and in other, genes (e.g. CALU rs339097) increase the predictive power of pharmacogenetic warfarin dosing algorithms in the Black populations examined. A personalized strategy for initiation of warfarin therapy, allowing for improved safety and cost-effectiveness for populations of African descent, must take into account their pharmacogenomic diversity, as well as socioeconomic, cultural and medical factors. Accounting for this heterogeneity in algorithms that are 'friendly' enough to be adopted by warfarin prescribers worldwide requires gathering information from trials at different population levels, but also demands a critical appraisal of racial/ethnic labels that are commonly used in the clinical pharmacology literature but do not accurately reflect genetic ancestry and population diversity. © 2012 The Authors. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.
Yao, Rui; Templeton, Alistair K; Liao, Yixiang; Turian, Julius V; Kiel, Krystyna D; Chu, James C H
2014-01-01
To validate an in-house optimization program that uses adaptive simulated annealing (ASA) and gradient descent (GD) algorithms and investigate features of physical dose and generalized equivalent uniform dose (gEUD)-based objective functions in high-dose-rate (HDR) brachytherapy for cervical cancer. Eight Syed/Neblett template-based cervical cancer HDR interstitial brachytherapy cases were used for this study. Brachytherapy treatment plans were first generated using inverse planning simulated annealing (IPSA). Using the same dwell positions designated in IPSA, plans were then optimized with both physical dose and gEUD-based objective functions, using both ASA and GD algorithms. Comparisons were made between plans both qualitatively and based on dose-volume parameters, evaluating each optimization method and objective function. A hybrid objective function was also designed and implemented in the in-house program. The ASA plans are higher on bladder V75% and D2cc (p=0.034) and lower on rectum V75% and D2cc (p=0.034) than the IPSA plans. The ASA and GD plans are not significantly different. The gEUD-based plans have higher homogeneity index (p=0.034), lower overdose index (p=0.005), and lower rectum gEUD and normal tissue complication probability (p=0.005) than the physical dose-based plans. The hybrid function can produce a plan with dosimetric parameters between the physical dose-based and gEUD-based plans. The optimized plans with the same objective value and dose-volume histogram could have different dose distributions. Our optimization program based on ASA and GD algorithms is flexible on objective functions, optimization parameters, and can generate optimized plans comparable with IPSA. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.
Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron
2017-10-21
The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
Learning and optimization with cascaded VLSI neural network building-block chips
NASA Technical Reports Server (NTRS)
Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.
1992-01-01
To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai
2017-03-01
This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control, and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to perform policy evaluation and policy improvement in the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. The gradient descent algorithm updates the critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of a Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Shiri, Jalal
2012-06-01
Estimating the sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration in rivers from hydro-meteorological data. Daily rainfall, streamflow and suspended sediment concentration data from the Eel River near Dos Rios, California, USA are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models. Of the three algorithms, the conjugate gradient algorithm was found to perform best.
Zhao, Tuo; Liu, Han
2016-01-01
We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
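APISTA's speedup over PISTA comes from an additional coordinate descent subroutine. The following toy sketch (not the paper's algorithm, which targets high-dimensional nonconvex problems) shows the flavor of that subroutine: cyclic coordinate descent with soft-thresholding on a small convex lasso objective; the data are made up:

```python
def soft_threshold(x, lam):
    """Proximal operator of lam*|x| (the shrinkage step in ISTA-type methods)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def lasso_coordinate_descent(X, y, lam, sweeps=50):
    """Cyclic coordinate descent for 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # Partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            w[j] = soft_threshold(rho, lam) / z if z else 0.0
    return w

# Orthonormal design: the solution is the soft-thresholded observation.
X = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
y = [3.0, -0.5, 2.0]
print(lasso_coordinate_descent(X, y, lam=1.0))  # [2.0, 0.0, 1.0]
```

Each coordinate update is a cheap one-dimensional proximal step, which is why interleaving such sweeps can boost the practical convergence of path-following shrinkage methods.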
NASA Astrophysics Data System (ADS)
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
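Geometry optimization by steepest descent, as used above, repeatedly steps against the gradient of the energy until the forces vanish. A minimal sketch on an invented quadratic model surface (not an AFQMC force evaluation):

```python
def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    """Follow the negative gradient with a fixed step until the gradient is small."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Model "energy": E(x, y) = (x - 1)^2 + 2*(y + 0.5)^2, minimum at (1, -0.5)
grad_E = lambda p: [2.0 * (p[0] - 1.0), 4.0 * (p[1] + 0.5)]
x_min = steepest_descent(grad_E, [0.0, 0.0])
print(x_min)  # approaches [1.0, -0.5]
```

In the AFQMC setting the gradient entries would be the statistically estimated interatomic forces, so step sizes must also respect the error bars; this sketch omits that consideration.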
Maneuvering Rotorcraft Noise Prediction: A New Code for a New Problem
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Bres, Guillaume A.; Perez, Guillaume; Jones, Henry E.
2002-01-01
This paper presents the unique aspects of the development of an entirely new maneuver noise prediction code called PSU-WOPWOP. The main focus of the code is the aeroacoustic aspects of the maneuver noise problem, when the aeromechanical input data are provided (namely aircraft and blade motion, and blade airloads). The PSU-WOPWOP noise prediction capability was developed for rotors in steady and transient maneuvering flight. Featuring an object-oriented design, the code allows great flexibility for complex rotor configurations and motion (including multiple rotors and full aircraft motion). The relative locations and number of hinges, flexures, and body motions can be arbitrarily specified to match any specific rotorcraft. An analysis of algorithm efficiency is performed for maneuver noise prediction, along with a description of the tradeoffs made specifically for the maneuvering noise problem. Noise predictions for the main rotor of a rotorcraft in steady descent, transient (arrested) descent, hover, and a mild "pop-up" maneuver are demonstrated.
Multi-Sensor Fusion for Enhanced Contextual Awareness of Everyday Activities with Ubiquitous Devices
Guiry, John J.; van de Ven, Pepijn; Nelson, John
2014-01-01
In this paper, the authors investigate the role that smart devices, including smartphones and smartwatches, can play in identifying activities of daily living. A feasibility study involving N = 10 participants was carried out to evaluate the devices' ability to differentiate between nine everyday activities. The activities examined include walking, running, cycling, standing, sitting, elevator ascents, elevator descents, stair ascents and stair descents. The authors also evaluated the ability of these devices to differentiate indoors from outdoors, with the aim of enhancing contextual awareness. Data from this study was used to train and test five well known machine learning algorithms: C4.5, CART, Naïve Bayes, Multi-Layer Perceptrons and finally Support Vector Machines. Both single and multi-sensor approaches were examined to better understand the role each sensor in the device can play in unobtrusive activity recognition. The authors found overall results to be promising, with some models correctly classifying up to 100% of all instances. PMID:24662406
Learning algorithms for human-machine interfaces.
Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A
2009-05-01
The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore-Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction.
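The LMS gradient descent variant above adapts a linear glove-to-joint-angle map from past endpoint errors. A schematic LMS update rule, with the map dimensions and data invented purely for illustration:

```python
def lms_update(A, x, target, eta=0.1):
    """One LMS step on a linear map: nudge A so that A@x moves toward target.

    The update A += eta * outer(error, x) is gradient descent on the
    squared endpoint error 0.5*||target - A@x||^2.
    """
    y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
    e = [t - yi for t, yi in zip(target, y)]
    for i in range(len(A)):
        for j in range(len(x)):
            A[i][j] += eta * e[i] * x[j]
    return A, e

# Adapt a 2x2 map so a fixed "glove" input maps onto a target "endpoint".
A = [[0.0, 0.0], [0.0, 0.0]]
x, target = [1.0, 0.5], [2.0, -1.0]
for _ in range(200):
    A, e = lms_update(A, x, target)
print(max(abs(v) for v in e))  # error shrinks toward zero
```

Unlike the one-shot Moore-Penrose pseudoinverse solution, this update moves gradually, which is the "cadenced" adaptation the study contrasts against.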
A fast summation method for oscillatory lattice sums
NASA Astrophysics Data System (ADS)
Denlinger, Ryan; Gimbutas, Zydrunas; Greengard, Leslie; Rokhlin, Vladimir
2017-02-01
We present a fast summation method for lattice sums of the type which arise when solving wave scattering problems with periodic boundary conditions. While there are a variety of effective algorithms in the literature for such calculations, the approach presented here is new and leads to a rigorous analysis of Wood's anomalies. These arise when illuminating a grating at specific combinations of the angle of incidence and the frequency of the wave, for which the lattice sums diverge. They were discovered by Wood in 1902 as singularities in the spectral response. The primary tools in our approach are the Euler-Maclaurin formula and a steepest descent argument. The resulting algorithm has super-algebraic convergence and requires only milliseconds of CPU time.
Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon
2004-01-01
This paper explains why high resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for a simultaneous drag reduction at all design points and the least shape distortion to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.
Dynamic metrology and data processing for precision freeform optics fabrication and testing
NASA Astrophysics Data System (ADS)
Aftab, Maham; Trumper, Isaac; Huang, Lei; Choi, Heejoo; Zhao, Wenchuan; Graves, Logan; Oh, Chang Jin; Kim, Dae Wook
2017-06-01
Dynamic metrology holds the key to overcoming several challenging limitations of conventional optical metrology, especially with regards to precision freeform optical elements. We present two dynamic metrology systems: 1) adaptive interferometric null testing; and 2) instantaneous phase shifting deflectometry, along with an overview of a gradient data processing and surface reconstruction technique. The adaptive null testing method, utilizing a deformable mirror, adopts a stochastic parallel gradient descent search algorithm in order to dynamically create a null testing condition for unknown freeform optics. The single-shot deflectometry system implemented on an iPhone uses a multiplexed display pattern to enable dynamic measurements of time-varying optical components or optics in vibration. Experimental data, measurement accuracy / precision, and data processing algorithms are discussed.
Precise Image-Based Motion Estimation for Autonomous Small Body Exploration
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Matthies, Larry H.
1998-01-01
Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.
The Mars Science Laboratory Entry, Descent, and Landing Flight Software
NASA Technical Reports Server (NTRS)
Gostelow, Kim P.
2013-01-01
This paper describes the design, development, and testing of the EDL program from the perspective of the software engineer. We briefly cover the overall MSL flight software organization, and then the organization of EDL itself. We discuss the timeline, the structure of the GNC code (but not the algorithms as they are covered elsewhere in this conference) and the command and telemetry interfaces. Finally, we cover testing and the influence that testability had on the EDL flight software design.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is also applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
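The estimator above solves a nonlinear weighted least-squares problem at each step. As a simplified illustration of the core idea only, here is a closed-form weighted least-squares fit of a linear model, with synthetic data (the flight algorithm is nonlinear and iterative, and its weights come from measurement covariances):

```python
def weighted_least_squares_line(xs, ys, ws):
    """Solve the 2x2 weighted normal equations for the model y ~ a + b*x."""
    S = sum(ws)
    Sx = sum(w * x for w, x in zip(ws, xs))
    Sy = sum(w * y for w, y in zip(ws, ys))
    Sxx = sum(w * x * x for w, x in zip(ws, xs))
    Sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = S * Sxx - Sx * Sx
    a = (Sxx * Sy - Sx * Sxy) / det
    b = (S * Sxy - Sx * Sy) / det
    return a, b

# Noise-free line y = 1 + 2x is recovered for any positive weights.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
ws = [1.0, 0.5, 2.0, 1.0]
a, b = weighted_least_squares_line(xs, ys, ws)
print(a, b)  # 1.0 2.0
```

In the nonlinear case, the same normal-equation solve is applied repeatedly to a linearization of the pressure model about the current atmospheric state estimate.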
Predictability of Top of Descent Location for Operational Idle-Thrust Descents
NASA Technical Reports Server (NTRS)
Stell, Laurel L.
2010-01-01
To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its uncertainty models, commercial flights executed idle-thrust descents at a specified descent speed, and the recorded data included the specified descent speed profile, aircraft weight, and the winds entered into the FMS as well as the radar data. The FMS computed the intended descent path assuming idle thrust after top of descent (TOD), and the controllers and pilots then endeavored to allow the FMS to fly the descent to the meter fix with minimal human intervention. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location were extracted from the radar data. Using approximately 70 descents each in Boeing 757 and Airbus 319/320 aircraft, multiple regression estimated TOD location as a linear function of the available predictive factors. The cruise and meter fix altitudes, descent speed, and wind clearly improve goodness of fit. The aircraft weight improves fit for the Airbus descents but not for the B757. Except for a few statistical outliers, the residuals have absolute value less than 5 nmi. Thus, these predictive factors adequately explain the TOD location, which indicates the data do not include excessive noise.
Piecewise convexity of artificial neural networks.
Rister, Blaine; Rubin, Daniel L
2017-10-01
Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
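The analysis above rests on piecewise affine activations making the network piecewise affine in its input. A tiny invented example (not from the paper) with two rectifier units, whose output is affine on each of the three regions split at x = 0.5 and x = 1:

```python
def relu(z):
    """Rectifier: the piecewise affine activation the paper considers."""
    return max(0.0, z)

def tiny_net(x):
    """One hidden layer with two rectifier units; weights chosen by hand."""
    return relu(1.0 * x - 0.5) + relu(-1.0 * x + 1.0)

# Breakpoints at x = 0.5 and x = 1.0 split the input into three regions,
# on which the network is affine: 1 - x, the constant 0.5, and x - 0.5.
for x in (0.2, 0.6, 1.5):
    print(x, tiny_net(x))
```

Within each region the activation pattern is fixed, so the network reduces to an affine map there; this is the structure underlying the paper's piecewise convexity results.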
Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.
Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen
2016-07-27
Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence of dictionary learning with a guarantee on approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g. K-SVD, do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, decomposition coefficients attained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in convergence speed and computational complexity, and its upper bound for prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC and HEVC as well as existing super-resolution-based methods in rate-distortion performance and visual quality.
Robust camera calibration for sport videos using court models
NASA Astrophysics Data System (ADS)
Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang
2003-12-01
We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, and shadows.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
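The surrogate substitution described above is easy to illustrate. Classical SL0 approximates the count of nonzeros as n − Σ exp(−x_i²/2σ²); a tanh-based surrogate of the assumed form Σ tanh(x_i²/2σ²) approaches the same l0 count as σ → 0 (the exact functional form used in the paper may differ from this sketch):

```python
import numpy as np

def l0_gauss(x, sigma):
    """Classical SL0 surrogate: ||x||_0 ~= n - sum_i exp(-x_i^2 / (2 sigma^2))."""
    return x.size - np.sum(np.exp(-x**2 / (2.0 * sigma**2)))

def l0_tanh(x, sigma):
    """Tanh-based surrogate (assumed form): ||x||_0 ~= sum_i tanh(x_i^2 / (2 sigma^2))."""
    return np.sum(np.tanh(x**2 / (2.0 * sigma**2)))

x = np.array([0.0, 0.0, 1.5, -2.0, 0.0, 0.3])
# As sigma shrinks, both surrogates approach the true nonzero count (3).
# SL0-type algorithms decrease sigma gradually and follow the surrogate's
# descent (or, here, Newton) direction at each sigma.
```

Both surrogates are smooth, so either can be minimized by gradient or Newton steps, which is what makes the smoothed-l0 family tractable.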
Multigrid optimal mass transport for image registration and morphing
NASA Astrophysics Data System (ADS)
Rehman, Tauseef ur; Tannenbaum, Allen
2007-02-01
In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. This is a parameter free method which utilizes all of the grayscale data in an image pair in a symmetric fashion. No landmarks need to be specified for correspondence. In our work, we demonstrate significant improvement in computation time when our algorithm is applied as compared to the originally proposed method by Haker et al [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass preserving map regarded as 2D vector field. This involves inverting the Laplacian in each iteration which is now computed using full multigrid technique resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolutional framework. The algorithm was applied to 2D short axis cardiac MRI images and brain MRI images for testing and comparison.
Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun
2017-01-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only yields more accurate, or at least comparable, estimates but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498
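As a simplified stand-in for the penalized fits inside such an EM loop, the coordinate descent update for a plain (unweighted, Gaussian) LASSO problem can be sketched as below; the 1/(2n) loss scaling and parameter values are illustrative conventions, not the paper's ZINB formulation.

```python
import numpy as np

def soft_threshold(z, t):
    """Scalar soft-thresholding: the closed-form solution of the 1-D LASSO."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for min_b (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n        # per-coordinate curvature
    r = y - X @ b                          # running residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]            # remove coordinate j's contribution
            rho = X[:, j] @ r / n          # correlation with partial residual
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]            # restore with the updated b[j]
    return b
```

Each coordinate update is exact and cheap, which is why coordinate descent is the workhorse for LASSO-, SCAD-, and MCP-penalized fits; the concave penalties only change the thresholding rule.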
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to address these problems, especially the 2-dimensional CRF (Conditional Random Field) model. In this paper we propose a method based on a general and flexible factor graph model, which can capture the long-range correlations in Bag-of-Words by constructing a network learning framework, in contrast to the lattice structure of the CRF. Furthermore, we explore a parameter learning algorithm for the factor graph model based on the gradient descent and Loopy Sum-Product algorithms. Experimental results on the Graz 02 dataset show that the recognition performance of our method, in precision and recall, is better than that of a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
Agent Collaborative Target Localization and Classification in Wireless Sensor Networks
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of the heterogeneous agent architecture for WSNs proposed in this paper. The proposed agent architecture views a WSN as a multi-agent system and employs mobile agents to reduce in-network communication. Based on this architecture, an energy-based acoustic localization algorithm is proposed. In localization, an estimate of the target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by a distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce the communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real-world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show that the proposed agent architecture remarkably facilitates WSN design and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
Walter, Jonathan P; Pandy, Marcus G
2017-10-01
The aim of this study was to perform multi-body, muscle-driven, forward-dynamics simulations of human gait using a 6-degree-of-freedom (6-DOF) model of the knee in tandem with a surrogate model of articular contact and force control. A forward-dynamics simulation incorporating position, velocity and contact force-feedback control (FFC) was used to track full-body motion capture data recorded for multiple trials of level walking and stair descent performed by two individuals with instrumented knee implants. Tibiofemoral contact force errors for FFC were compared against those obtained from a standard computed muscle control algorithm (CMC) with a 6-DOF knee contact model (CMC6); CMC with a 1-DOF translating hinge-knee model (CMC1); and static optimization with a 1-DOF translating hinge-knee model (SO). Tibiofemoral joint loads predicted by FFC and CMC6 were comparable for level walking; however, FFC produced more accurate results for stair descent. SO yielded reasonable predictions of joint contact loading for level walking, but significant differences between model and experiment were observed for stair descent. CMC1 produced the least accurate predictions of tibiofemoral contact loads for both tasks. Our findings suggest that reliable estimates of knee-joint loading may be obtained by incorporating position, velocity and force-feedback control with a multi-DOF model of joint contact in a forward-dynamics simulation of gait. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Efficient algorithms for polyploid haplotype phasing.
He, Dan; Saha, Subrata; Finkers, Richard; Parida, Laxmi
2018-05-09
Inference of haplotypes, or the sequence of alleles along the same chromosome, is a fundamental problem in genetics and is a key component of many analyses, including admixture mapping, identifying regions of identity by descent, and imputation. Haplotype phasing based on sequencing reads has attracted considerable attention. Diploid haplotype phasing, where the two haplotypes are complementary, has been studied extensively. In this work, we focus on polyploid haplotype phasing, where we aim to phase more than two haplotypes at the same time from sequencing data. The problem is much more complicated, as the search space becomes much larger and the haplotypes need not be complementary any more. We propose two algorithms: (1) Poly-Harsh, a Gibbs sampling based algorithm which alternately samples haplotypes and the read assignments to minimize the mismatches between the reads and the phased haplotypes, and (2) an efficient algorithm to concatenate haplotype blocks into contiguous haplotypes. Our experiments show that our method is able to improve the quality of the phased haplotypes over the state-of-the-art methods. To our knowledge, our algorithm for haplotype block concatenation is the first that leverages the shared information across multiple individuals to construct contiguous haplotypes. Our experiments show that it is both efficient and effective.
Optimization-based image reconstruction from sparse-view data in offset-detector CBCT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan
2013-01-01
The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of a health concern to the imaged subject. CBCT-imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed for yielding images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC) combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. The reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. The 4D iterations were implemented on a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction, with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate clinical use.
Optimum Strategies for Selecting Descent Flight-Path Angles
NASA Technical Reports Server (NTRS)
Wu, Minghong G. (Inventor); Green, Steven M. (Inventor)
2016-01-01
An information processing system and method for adaptively selecting an aircraft descent flight path are provided. The system receives flight adaptation parameters, including aircraft flight descent time period, aircraft flight descent airspace region, and aircraft flight descent flyability constraints. The system queries a plurality of flight data sources and retrieves flight information including any of winds and temperatures aloft data, airspace/navigation constraints, airspace traffic demand, and airspace arrival delay model. The system calculates a set of candidate descent profiles, each defined by at least one of a flight path angle and a descent rate, and each including an aggregated total fuel consumption value for the aircraft following a calculated trajectory, and a flyability constraints metric for the calculated trajectory. The system selects the best candidate descent profile having the least fuel consumption value while the flyability constraints metric remains within the aircraft flight descent flyability constraints.
A new modified conjugate gradient coefficient for solving system of linear equations
NASA Astrophysics Data System (ADS)
Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new variants of the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. This new CG method is tested on a set of test functions under exact line search. Its performance is then compared to that of some of the well-known previous CG methods based on the number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency amongst all the methods tested. This paper also includes an application of the new CG algorithm to solving large systems of linear equations.
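On the simplest instance of this problem class, a strictly convex quadratic (i.e., a symmetric positive-definite linear system), exact line search and the CG coefficient take closed forms, and the classical linear CG method results. The sketch below uses the Fletcher-Reeves-style coefficient; it illustrates the framework, not the new coefficient proposed in the paper.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Classical linear CG for Ax = b with symmetric positive-definite A.
    The residual r = b - Ax is the negative gradient of 0.5*x'Ax - b'x,
    so alpha below is the exact line search step and beta = rs_new/rs is
    the CG coefficient (Fletcher-Reeves form on a quadratic)."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x
    d = r.copy()                       # first direction: steepest descent
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ad = A @ d
        alpha = rs / (d @ Ad)          # exact line search along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        d = r + (rs_new / rs) * d      # conjugate direction update
        rs = rs_new
    return x
```

In exact arithmetic, this converges in at most n iterations, one reason CG coefficients are benchmarked by iteration counts as in the abstract above.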
Analysis of Air Traffic Track Data with the AutoBayes Synthesis System
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin Philip; Cate, Karen; Lee, Alan G.
2010-01-01
The Next Generation Air Traffic System (NGATS) is aiming to provide substantial computer support for air traffic controllers. Algorithms for the accurate prediction of aircraft movements are of central importance for such software systems, but trajectory prediction has to work reliably in the presence of unknown parameters and uncertainties. We are using the AutoBayes program synthesis system to generate customized data analysis algorithms that process large sets of aircraft radar track data in order to estimate parameters and uncertainties. In this paper, we present how the tasks of finding structure in track data, estimating important parameters in climb trajectories, and detecting continuous descent approaches can be accomplished with compact task-specific AutoBayes specifications. We present an overview of the AutoBayes architecture and describe how its schema-based approach generates customized analysis algorithms, documented C/C++ code, and detailed mathematical derivations. Results of experiments with actual air traffic control data are discussed.
Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana
2016-01-01
With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state-of-the-art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement, based on evolutionary algorithms and the Kernel-Adatron, for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
Accurate modeling of switched reluctance machine based on hybrid trained WNN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie
2014-04-15
According to the strong nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN) which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained via the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
Feasibility study of low-dose intra-operative cone-beam CT for image-guided surgery
NASA Astrophysics Data System (ADS)
Han, Xiao; Shi, Shuanghe; Bian, Junguo; Helm, Patrick; Sidky, Emil Y.; Pan, Xiaochuan
2011-03-01
Cone-beam computed tomography (CBCT) has been increasingly used during surgical procedures for providing accurate three-dimensional anatomical information for intra-operative navigation and verification. High-quality CBCT images are in general obtained through reconstruction from projection data acquired at hundreds of view angles, which is associated with a non-negligible amount of radiation exposure to the patient. In this work, we have applied a novel image-reconstruction algorithm, the adaptive-steepest-descent-POCS (ASD-POCS) algorithm, to reconstruct CBCT images from projection data at a significantly reduced number of view angles. Preliminary results from experimental studies involving both simulated data and real data show that images of comparable quality to those presently available in clinical image-guidance systems can be obtained by use of the ASD-POCS algorithm from a fraction of the projection data that are currently used. The result implies potential value of the proposed reconstruction technique for low-dose intra-operative CBCT imaging applications.
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications regarding sparse continuous signal recovery, such as source localization, radar imaging, communication channel estimation, etc., have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major defects that need to be tackled when considering any practical utilization. The first is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which would make conventional CS reconstruction methods degrade considerably. The second important issue is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with the grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758
NASA Astrophysics Data System (ADS)
Kozynchenko, Alexander I.; Kozynchenko, Sergey A.
2017-03-01
In the paper, the problem of improving the efficiency of the particle-particle-particle-mesh (P3M) algorithm in computing the inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified in such a way that the space field equation is solved by the direct method of summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" is introduced to describe the spatial field distribution of a single point charge, so the "pattern" contains pre-calculated potential values. This approach allows the set of arithmetic operations performed at the innermost of the nested loops to be reduced to addition and assignment operators and, therefore, decreases the running time substantially. The simulation model developed in C++ substantiates this view, showing decent accuracy, acceptable in particle-beam calculations, together with improved speed performance.
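The "pattern" idea can be sketched as follows: precompute the potential of a unit charge at every possible grid offset once, then accumulate shifted copies of that table, so the inner loops perform only additions. This is a simplified 2-D NumPy illustration with hypothetical grid parameters, not the authors' C++ implementation.

```python
import numpy as np

def make_pattern(n, h=1.0):
    """Precompute the 1/r potential of a unit point charge at every grid
    offset (di, dj) in [-(n-1), n-1]^2. The zero-offset entry is left at 0;
    near-field pairs would be handled by the particle-particle part of P3M."""
    idx = np.arange(-(n - 1), n)
    dx, dy = np.meshgrid(idx, idx, indexing="ij")
    r = h * np.hypot(dx, dy)
    with np.errstate(divide="ignore"):
        return np.where(r > 0, 1.0 / r, 0.0)

def potential_grid(charges, n, pattern):
    """Far-field potential on an n x n grid: for each charge (i, j, q) on a
    grid cell, add a shifted, scaled copy of the precomputed pattern."""
    phi = np.zeros((n, n))
    c = n - 1                      # pattern index of the zero offset
    for (i, j, q) in charges:
        phi += q * pattern[c - i:c - i + n, c - j:c - j + n]
    return phi
```

Because the pattern is computed once, the per-particle cost inside the loop is a pure add-and-assign over the grid, which is the speed-up the abstract describes.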
Self-Organizing Hidden Markov Model Map (SOHMMM).
Ferles, Christos; Stafylopatis, Andreas
2013-12-01
A hybrid approach combining the Self-Organizing Map (SOM) and the Hidden Markov Model (HMM) is presented. The Self-Organizing Hidden Markov Model Map (SOHMMM) establishes a cross-section between the theoretic foundations and algorithmic realizations of its constituents. The respective architectures and learning methodologies are fused in an attempt to meet the increasing requirements imposed by the properties of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein chain molecules. The fusion and synergy of the SOM unsupervised training and the HMM dynamic programming algorithms bring forth a novel on-line gradient descent unsupervised learning algorithm, which is fully integrated into the SOHMMM. Since the SOHMMM carries out probabilistic sequence analysis with little or no prior knowledge, it can have a variety of applications in clustering, dimensionality reduction and visualization of large-scale sequence spaces, and also, in sequence discrimination, search and classification. Two series of experiments based on artificial sequence data and splice junction gene sequences demonstrate the SOHMMM's characteristics and capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Evaluating and minimizing noise impact due to aircraft flyover
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Cook, G.
1979-01-01
Existing techniques were used to assess the noise impact on a community due to aircraft operation and to optimize the flight paths of an approaching aircraft with respect to the annoyance produced. Major achievements are: (1) the development of a population model suitable for determining the noise impact, (2) generation of a numerical computer code which uses this population model along with the steepest descent algorithm to optimize approach/landing trajectories, (3) implementation of this optimization code in several fictitious cases as well as for the community surrounding Patrick Henry International Airport, Virginia.
Adaptive conversion of a high-order mode beam into a near-diffraction-limited beam.
Zhao, Haichuan; Wang, Xiaolin; Ma, Haotong; Zhou, Pu; Ma, Yanxing; Xu, Xiaojun; Zhao, Yijun
2011-08-01
We present a new method for efficiently transforming a high-order mode beam into a nearly Gaussian beam with much higher beam quality. The method is based on modulation of phases of different lobes by stochastic parallel gradient descent algorithm and coherent addition after phase flattening. We demonstrate the method by transforming an LP11 mode into a nearly Gaussian beam. The experimental results reveal that the power in the diffraction-limited bucket in the far field is increased by more than a factor of 1.5.
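The SPGD loop used here (and throughout these abstracts) admits a compact sketch: dither all control channels in parallel with a random perturbation, measure the resulting metric change, and update every channel simultaneously. The two-channel phase-matching metric below is a toy stand-in for the experimental far-field metric, and the gain and dither amplitude are illustrative values.

```python
import numpy as np

def spgd_maximize(J, u0, gamma=0.5, delta=0.05, n_iter=2000, seed=0):
    """Stochastic parallel gradient descent (ascent form): apply a random
    +/-delta perturbation p to all channels at once, measure the metric
    difference between the two dithered states, and update in parallel:
        u <- u + gamma * (J(u + p) - J(u - p)) * p
    """
    rng = np.random.default_rng(seed)
    u = np.array(u0, dtype=float)
    for _ in range(n_iter):
        p = delta * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli dither
        dJ = J(u + p) - J(u - p)
        u += gamma * dJ * p
    return u

# Toy metric: two-beam phasing quality, maximal when the phase
# difference between the two channels vanishes (mod 2*pi).
J = lambda u: np.cos(u[0] - u[1])
u = spgd_maximize(J, [1.0, 0.0])
```

The appeal of SPGD in these optical systems is that it needs only scalar metric measurements, never an explicit gradient or wavefront sensor.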
Joint estimation of motion and illumination change in a sequence of images
NASA Astrophysics Data System (ADS)
Koo, Ja-Keoung; Kim, Hyo-Hun; Hong, Byung-Woo
2015-09-01
We present an algorithm that simultaneously computes optical flow and estimates illumination change from an image sequence in a unified framework. We propose an energy functional consisting of the conventional optical flow energy based on the Horn-Schunck method and an additional constraint designed to compensate for illumination changes. Any undesirable illumination change that occurs in the imaging procedure while the optical flow is being computed is considered a nuisance factor. In contrast to the conventional optical flow algorithm based on the Horn-Schunck functional, which assumes the brightness constancy constraint, our algorithm is shown to be robust with respect to temporal illumination changes in the computation of optical flow. An efficient conjugate gradient descent technique is used as the numerical scheme in the optimization procedure. The experimental results obtained on the Middlebury benchmark dataset demonstrate the robustness and effectiveness of our algorithm. In addition, a comparative analysis of our algorithm and the Horn-Schunck algorithm is performed on an additional test dataset, constructed by applying a variety of synthetic bias fields to the original image sequences of the Middlebury benchmark, to demonstrate that our algorithm outperforms the Horn-Schunck algorithm. The superior performance of the proposed method is observed in both qualitative visualizations and quantitative accuracy when compared to the Horn-Schunck optical flow algorithm, which easily yields poor results in the presence of small illumination changes that violate the brightness constancy constraint.
Shape regularized active contour based on dynamic programming for anatomical structure segmentation
NASA Astrophysics Data System (ADS)
Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra
2005-04-01
We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enables building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods: global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space, in a gradient descent fashion, toward the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished by modifying the search space of the DP. The modified search space only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure the consistency of the resulting shape with the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation.
Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints; and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
Estimating the degree of identity by descent in consanguineous couples.
Carr, Ian M; Markham, Sir Alexander F; Pena, Sérgio D J
2011-12-01
In some clinical and research settings, it is often necessary to identify the true level of "identity by descent" (IBD) between two individuals. However, as the individuals become more distantly related, it is increasingly difficult to accurately calculate this value. Consequently, we have developed a computer program that uses genome-wide SNP genotype data from related individuals to estimate the size and extent of IBD in their genomes. In addition, the software can compare a couple's IBD regions with either the autozygous regions of a relative affected by an autosomal recessive disease of unknown cause, or the IBD regions in the parents of the affected relative. It is then possible to calculate the probability of one of the couple's children suffering from the same disease. The software works by finding SNPs that exclude any possible IBD and then identifies regions that lack these SNPs, while exceeding a minimum size and number of SNPs. The accuracy of the algorithm was established by estimating the pairwise IBD between different members of a large pedigree with varying known coefficients of genetic relationship (CGR). © 2011 Wiley Periodicals, Inc.
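The exclusion-scan step described above (find SNPs that rule out IBD, then report exclusion-free runs exceeding minimum size and SNP-count thresholds) can be sketched as a single pass over ordered SNPs. The data layout and the threshold defaults are assumptions for illustration, not the program's actual parameters.

```python
def ibd_candidate_regions(positions, excluded, min_snps=25, min_length=1_000_000):
    """Scan ordered SNPs for runs free of IBD-excluding genotypes.
    positions: ascending SNP coordinates; excluded: parallel list of booleans,
    True where the genotype pair rules out identity by descent at that SNP.
    Returns (start, end, n_snps) for each run meeting both thresholds."""
    regions = []
    run_start = None                      # index of first SNP in current run
    for i, bad in enumerate(list(excluded) + [True]):  # sentinel flushes last run
        if not bad:
            if run_start is None:
                run_start = i             # a new exclusion-free run begins
        elif run_start is not None:
            n = i - run_start
            start, end = positions[run_start], positions[i - 1]
            if n >= min_snps and end - start >= min_length:
                regions.append((start, end, n))
            run_start = None
    return regions
```

Requiring both a minimum SNP count and a minimum physical length guards against runs that are exclusion-free merely because they are sparsely genotyped.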
A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.
Dang, Chuangyin; Xu, Lei
2002-02-01
A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function, and it attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has the desirable property that lower and upper bounds on the variables are always satisfied automatically if the step length is between zero and one. At each iteration, the feasible descent direction is found by updating the Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results suggest that the method is more effective and efficient than the softassign algorithm.
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission, the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant-optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction with ground control. The propellant-optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed-final-time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong
2013-12-07
The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm were derived from image gradients by considering the deformation to decrease the intensity dissimilarity between images. However, methods using the difference of image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so the difference in a gradient magnitude image can be regarded as more reliable and robust against these artifacts. Registering medical images by considering the differences in both image intensity and gradient magnitude is therefore a straightforward choice. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm by combining the differences in both image intensity and gradient magnitude for medical image registration. Previous work has shown that the classic demons algorithm can be considered an approximation of a second-order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces, which are derived from the gradients of the image and the gradient magnitude image. Controlled experiments confirm this advantage and show fast convergence.
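A minimal sketch of the idea: the classic Thirion demons force driven by the intensity difference, plus a second force of the same form computed on gradient-magnitude images. The equal-weight combination below is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def demons_force(fixed, moving, eps=1e-8):
    """Classic demons update: displacement driven by the intensity
    difference along the fixed-image gradient, with the usual
    (|grad F|^2 + diff^2) stabilizing denominator."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)               # row- and column-direction gradients
    denom = gx**2 + gy**2 + diff**2 + eps
    return -diff * gx / denom, -diff * gy / denom

def chain_demons_force(fixed, moving, eps=1e-8):
    """Hedged sketch of the combined force: add a second demons force
    computed on gradient-magnitude images, which is less sensitive to
    intensity artifacts; the 1:1 weighting is an assumption."""
    gy, gx = np.gradient(fixed)
    gy2, gx2 = np.gradient(moving)
    fixed_gm, moving_gm = np.hypot(gx, gy), np.hypot(gx2, gy2)
    ux1, uy1 = demons_force(fixed, moving, eps)
    ux2, uy2 = demons_force(fixed_gm, moving_gm, eps)
    return ux1 + ux2, uy1 + uy2
```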
Fuel-Efficient Descent and Landing Guidance Logic for a Safe Lunar Touchdown
NASA Technical Reports Server (NTRS)
Lee, Allan Y.
2011-01-01
The landing of a crewed lunar lander on the surface of the Moon will be the climax of any Moon mission. At touchdown, the landing mechanism must absorb the load imparted on the lander due to the vertical component of the lander's touchdown velocity. Also, a large horizontal velocity must be avoided because it could cause the lander to tip over, risking the life of the crew. To be conservative, the worst-case lander's touchdown velocity is always assumed in designing the landing mechanism, making it very heavy. Fuel-optimal guidance algorithms for soft planetary landing have been studied extensively. In most of these studies, the lander is constrained to touchdown with zero velocity. With bounds imposed on the magnitude of the engine thrust, the optimal control solutions typically have a "bang-bang" thrust profile: the thrust magnitude "bangs" instantaneously between its maximum and minimum magnitudes. But the descent engine might not be able to throttle between its extremes instantaneously. There is also a concern about the acceptability of "bang-bang" control to the crew. In our study, the optimal control of a lander is formulated with a cost function that penalizes both the touchdown velocity and the fuel cost of the descent engine. In this formulation, there is not a requirement to achieve a zero touchdown velocity. Only a touchdown velocity that is consistent with the capability of the landing gear design is required. Also, since the nominal throttle level for the terminal descent sub-phase is well below the peak engine thrust, no bound on the engine thrust is used in our formulated problem. Instead of a bang-bang solution, the optimal thrust is a continuous function of time. With this formulation, we can easily derive analytical expressions for the optimal thrust vector, touchdown velocity components, and other system variables. These expressions provide insights into the "physics" of the optimal landing and terminal descent maneuver.
These insights could help engineers to achieve a better "balance" between the conflicting needs of achieving a safe touchdown velocity, a low-weight landing mechanism, low engine fuel cost, and other design goals. In comparing the computed optimal control results with the preflight landing trajectory design of the Apollo-11 mission, we noted interesting similarities between the two trajectory designs.
Flight Management System Execution of Idle-Thrust Descents in Operations
NASA Technical Reports Server (NTRS)
Stell, Laurel L.
2011-01-01
To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its error models, commercial flights executed idle-thrust descents, and the recorded data includes the target speed profile and FMS intent trajectories. The FMS computes the intended descent path assuming idle thrust after top of descent (TOD), and any intervention by the controllers that alters the FMS execution of the descent is recorded so that such flights are discarded from the analysis. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location are extracted from the radar data. Using more than 60 descents in Boeing 777 aircraft, the actual speeds are compared to the intended descent speed profile. In addition, three aspects of the accuracy of the FMS intent trajectory are analyzed: the meter fix crossing time, the TOD location, and the altitude at the meter fix. The actual TOD location is within 5 nmi of the intent location for over 95% of the descents. Roughly 90% of the time, the airspeed is within 0.01 of the target Mach number and within 10 KCAS of the target descent CAS, but the meter fix crossing time is only within 50 sec of the time computed by the FMS. Overall, the aircraft seem to be executing the descents as intended by the designers of the onboard automation.
VTOL shipboard letdown guidance system analysis
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Karmali, M. S.
1983-01-01
Alternative letdown guidance strategies are examined for landing of a VTOL aircraft onboard a small aviation ship under adverse environmental conditions. Off-line computer simulation of the shipboard landing task is utilized for assessing the relative merits of the proposed guidance schemes. The touchdown performance of a nominal constant rate of descent (CROD) letdown strategy serves as a benchmark for ranking the performance of the alternative letdown schemes. Analysis of ship motion time histories indicates the existence of an alternating sequence of quiescent and rough motions, called lulls and swells. A real-time lull/swell classification algorithm based upon ship motion pattern features is developed. The classification algorithm is used to command a go/no-go signal to indicate the initiation and termination of an acceptable landing window. Simulation results show that such a go/no-go pattern-based letdown guidance strategy improves touchdown performance.
Robotic Lunar Lander Development Project Status
NASA Technical Reports Server (NTRS)
Hammond, Monica; Bassler, Julie; Morse, Brian
2010-01-01
This slide presentation reviews the status of the development of a robotic lunar lander. The goal of the project is to perform engineering tests and risk reduction activities to support the development of a small lunar lander for lunar surface science. This includes: (1) risk reduction for the flight of the robotic lander (i.e., testing and analyzing various phases of the project); (2) incremental development of the design of the robotic lander, which is to demonstrate autonomous, controlled descent and landing on airless bodies, and design of the thruster configuration for 1/6th of Earth's gravity; (3) cold gas test article in flight demonstration testing; (4) warm gas testing of the robotic lander design; (5) developing and testing landing algorithms; (6) validating the algorithms through analysis and test; and (7) tests of the flight propulsion system.
NASA Astrophysics Data System (ADS)
Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas
1992-07-01
Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate this problem by the deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
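The pattern-domain steepest-descent deconvolution can be sketched in one dimension: find nonnegative doses whose convolution with the proximity point spread function approximates the desired pattern. The toy PSF, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def deconvolve_dose(pattern, psf, n_iter=200, step=0.5):
    """Hedged sketch: gradient descent on ||K d - pattern||^2, where K is
    convolution with the PSF, projecting onto nonnegative doses each step."""
    d = pattern.astype(float).copy()
    for _ in range(n_iter):
        resid = np.convolve(d, psf, mode="same") - pattern
        grad = np.convolve(resid, psf[::-1], mode="same")  # K^T applied to residual
        d = np.clip(d - step * grad, 0.0, None)            # doses must be >= 0
    return d
```

Because the PSF sums to one, the quadratic objective has Lipschitz constant at most one, so the fixed step of 0.5 is safe for this toy setup.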
Efficient two-dimensional compressive sensing in MIMO radar
NASA Astrophysics Data System (ADS)
Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad
2017-12-01
Compressive sensing (CS) offers a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals, and then we propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods, while having better performance in comparison with conventional methods such as a Gaussian random measurement matrix.
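A hedged sketch of measurement-matrix design by gradient descent on Gram-matrix coherence: the objective ||A^T A − I||_F² is a common surrogate for mutual coherence and is an assumption here, not necessarily the paper's exact cost.

```python
import numpy as np

def lower_coherence(phi, n_iter=200, step=0.005):
    """Reduce the cross-coherence of a measurement matrix by gradient
    descent on the off-diagonal energy of its Gram matrix, with column
    re-normalization after each step."""
    A = phi / np.linalg.norm(phi, axis=0, keepdims=True)
    n = A.shape[1]
    for _ in range(n_iter):
        G = A.T @ A
        grad = 4 * A @ (G - np.eye(n))        # gradient of ||A^T A - I||_F^2
        A = A - step * grad
        A /= np.linalg.norm(A, axis=0, keepdims=True)
    return A

def mutual_coherence(A):
    """Largest absolute off-diagonal Gram entry of a column-normalized A."""
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()
```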
NASA Astrophysics Data System (ADS)
Aiyoshi, Eitaro; Masuda, Kazuaki
On the basis of market fundamentalism, new types of social systems with market mechanisms, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, there are few textbooks in science and technology which explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) the Gauss-Seidel method for solving the stationary conditions of Lagrange problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them for designing the mechanisms of more complicated markets.
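The tutorial's central point — the Lagrange multiplier behaves as a market price, and steepest descent on the dual is a tatonnement process — can be sketched for a toy resource-allocation market. The log-utility agents, step size, and function name are illustrative assumptions.

```python
def market_price_by_dual_descent(a, supply, step=0.05, n_iter=500):
    """Toy market: maximize sum_i a_i*log(x_i) subject to sum_i x_i <= supply.
    Steepest descent on the dual updates the Lagrange multiplier (the
    'market price') in proportion to excess demand; at the optimum the
    price settles at sum(a)/supply."""
    price = 1.0
    for _ in range(n_iter):
        # each agent maximizes a_i*log(x_i) - price*x_i, so demand is a_i/price
        demand = sum(ai / price for ai in a)
        price += step * (demand - supply)     # raise price when demand > supply
        price = max(price, 1e-6)              # keep the price positive
    return price
```

The price update is exactly the (sub)gradient of the dual function, which is why this tatonnement iteration is steepest descent on the dual problem.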
System Verification of MSL Skycrane Using an Integrated ADAMS Simulation
NASA Technical Reports Server (NTRS)
White, Christopher; Antoun, George; Brugarolas, Paul; Lih, Shyh-Shiuh; Peng, Chia-Yen; Phan, Linh; San Martin, Alejandro; Sell, Steven
2012-01-01
Mars Science Laboratory (MSL) will use the Skycrane architecture to execute final descent and landing maneuvers. The Skycrane phase uses closed-loop feedback control throughout the entire phase, starting with rover separation, through mobility deploy, and through touchdown, ending only when the bridles have gone completely slack. The integrated ADAMS simulation described in this paper couples complex dynamical models created by the mechanical subsystem with actual GNC flight software algorithms that have been compiled and linked into ADAMS. These integrated simulations provide the project with the best means to verify key Skycrane requirements which have a tightly coupled GNC-mechanical aspect to them. It also provides the best opportunity to validate the design of the algorithm that determines when to cut the bridles. The results of the simulations show the excellent performance of the Skycrane system.
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests": basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural network PSA method into a PCA method.
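For reference, the baseline SLA rule the paper modifies can be sketched in batch-averaged form (the online, per-sample version replaces the covariance with single samples); the learning rate and iteration count are illustrative assumptions.

```python
import numpy as np

def subspace_learning(X, k, eta=0.05, n_iter=500, seed=0):
    """Batch-averaged Oja/SLA subspace rule: W converges to an orthonormal
    basis of the k-dimensional principal subspace of the data -- not to the
    individual eigenvectors, which is the rotation the paper's TOHM
    modification adds on a slower time scale."""
    rng = np.random.default_rng(seed)
    C = X.T @ X / len(X)                      # sample covariance (zero-mean data assumed)
    W = rng.standard_normal((X.shape[1], k)) * 0.1
    for _ in range(n_iter):
        Y = C @ W
        W += eta * (Y - W @ (W.T @ Y))        # SLA update: Hebbian term minus decay
    return W
```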
Haplotype assembly in polyploid genomes and identical by descent shared tracts.
Aguiar, Derek; Istrail, Sorin
2013-07-01
Genome-wide haplotype reconstruction from sequence data, or haplotype assembly, is at the center of major challenges in molecular biology and life sciences. For complex eukaryotic organisms like humans, the genome is vast and the population samples are growing so rapidly that algorithms processing high-throughput sequencing data must scale favorably in terms of both accuracy and computational efficiency. Furthermore, current models and methodologies for haplotype assembly (i) do not consider individuals sharing haplotypes jointly, which reduces the size and accuracy of assembled haplotypes, and (ii) are unable to model genomes having more than two sets of homologous chromosomes (polyploidy). Polyploid organisms are increasingly becoming the target of many research groups interested in the genomics of disease, phylogenetics, botany and evolution but there is an absence of theory and methods for polyploid haplotype reconstruction. In this work, we present a number of results, extensions and generalizations of compass graphs and our HapCompass framework. We prove the theoretical complexity of two haplotype assembly optimizations, thereby motivating the use of heuristics. Furthermore, we present graph theory-based algorithms for the problem of haplotype assembly using our previously developed HapCompass framework for (i) novel implementations of haplotype assembly optimizations (minimum error correction), (ii) assembly of a pair of individuals sharing a haplotype tract identical by descent and (iii) assembly of polyploid genomes. We evaluate our methods on 1000 Genomes Project, Pacific Biosciences and simulated sequence data. HapCompass is available for download at http://www.brown.edu/Research/Istrail_Lab/. Supplementary data are available at Bioinformatics online.
The Clark Phase-able Sample Size Problem: Long-Range Phasing and Loss of Heterozygosity in GWAS
NASA Astrophysics Data System (ADS)
Halldórsson, Bjarni V.; Aguiar, Derek; Tarpine, Ryan; Istrail, Sorin
A phase transition is taking place today. The amount of data generated by genome resequencing technologies is so large that in some cases it is now less expensive to repeat the experiment than to store the information generated by it. In the next few years it is quite possible that millions of Americans will have been genotyped. The question then arises of how to make the best use of this information and jointly estimate the haplotypes of all these individuals. The premise of the paper is that long shared genomic regions (or tracts) are unlikely unless the haplotypes are identical by descent (IBD), in contrast to short shared tracts, which may be identical by state (IBS). Here we estimate, for populations, using the US as a model, what sample size of genotyped individuals would be necessary to have sufficiently long shared haplotype tracts that are identical by descent, at a statistically significant level. These tracts can then be used as input for a Clark-like phasing method to obtain a complete phasing solution of the sample. We estimate in this paper that for a population like the US, with about 1% of the people genotyped (approximately 2 million), tracts about 200 SNPs long are shared IBD between pairs of individuals with high probability, which assures the success of the Clark phasing method. We show on simulated data that the algorithm obtains an almost perfect solution if the number of individuals being SNP arrayed is large enough, and that the correctness of the algorithm grows with the number of individuals genotyped.
Control algorithms for aerobraking in the Martian atmosphere
NASA Technical Reports Server (NTRS)
Ward, Donald T.; Shipley, Buford W., Jr.
1991-01-01
The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.
A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks
Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping
2013-01-01
In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249
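Block coordinate descent itself is easy to illustrate on a small convex problem; the ridge-regression objective below is a stand-in for the method, not the paper's rASO subproblem.

```python
import numpy as np

def ridge_bcd(X, y, lam=0.1, n_blocks=2, n_iter=200):
    """Generic BCD sketch: minimize ||y - X b||^2 + lam*||b||^2 by cycling
    over blocks of coordinates, solving each block's subproblem exactly
    (a small ridge system) while the other blocks are held fixed."""
    n, p = X.shape
    b = np.zeros(p)
    blocks = np.array_split(np.arange(p), n_blocks)
    for _ in range(n_iter):
        for idx in blocks:
            Xk = X[:, idx]
            r = y - X @ b + Xk @ b[idx]       # residual excluding this block
            A = Xk.T @ Xk + lam * np.eye(len(idx))
            b[idx] = np.linalg.solve(A, Xk.T @ r)
    return b
```

On a strongly convex objective like this one, cycling over exact block minimizations converges linearly to the global solution, which is why BCD is attractive for large convex programs such as rASO.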
Field evaluation of flight deck procedures for flying CTAS descents
DOT National Transportation Integrated Search
1997-01-01
Flight deck descent procedures were developed for a field evaluation of the CTAS Descent Advisor conducted in the fall of 1995. During this study, CTAS descent clearances were issued to 185 commercial flights at Denver International Airport. Data col...
Real-time path planning and autonomous control for helicopter autorotation
NASA Astrophysics Data System (ADS)
Yomchinda, Thanan
Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however, it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system which provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, trajectory following control law, and autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high fidelity simulations. Results indicate that the algorithms can operate in real time and that the integrated system can provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented, which will lead to a final hardware demonstration of the algorithms.
Vectorial mask optimization methods for robust optical lithography
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.
2012-10-01
Continuous shrinkage of critical dimension in an integrated circuit impels the development of resolution enhancement techniques for low k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for the process variations. However, the lithography systems with larger-NA (NA>0.6) are predominant for current technology nodes, rendering the scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of several problems. Fuzzy logic offers a simple way to reach a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. Conventional learning algorithms for tuning the parameters of fuzzy rules using training input-output data usually end in a weak firing state, which weakens the fuzzy rule and makes it unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules alongside a radial basis function neural network (RBFNN), trained on input-output data with the gradient descent method. The new learning algorithm addresses the weak-firing problem of the conventional method. We illustrate the efficiency of the new learning algorithm by means of numerical examples. MATLAB R2014(a) software was used to simulate our results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
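A zero-order fuzzy rule base with Gaussian memberships is mathematically an RBF network, so gradient-descent tuning of the rule consequents can be sketched as follows; the 1-D model structure, fixed centers and widths, and learning rate are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def train_rbf(X, y, centers, sigma=1.0, lr=0.1, n_epochs=500):
    """Tune rule consequents (output weights) of a Gaussian-membership
    rule base by stochastic gradient descent on the squared error; tuning
    centers/widths the same way is a direct extension."""
    w = np.zeros(len(centers))
    for _ in range(n_epochs):
        for x, t in zip(X, y):
            phi = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))  # firing strengths
            err = phi @ w - t
            w -= lr * err * phi               # gradient of 0.5*err^2 w.r.t. w
    return w

def rbf_predict(x, centers, w, sigma=1.0):
    phi = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))
    return phi @ w
```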
A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks.
Zhu, Mingqiang; Song, Fei; Xu, Lei; Seo, Jung Taek; You, Ilsun
2017-11-29
As a key element, sensor networks are widely investigated by the Internet of Things (IoT) community. When massive numbers of devices are well connected, malicious attackers may deliberately propagate fake position information to confuse ordinary users and lower the network survivability in belt-type situations. However, most existing positioning solutions focus only on algorithm accuracy and do not consider any security aspects. In this paper, we propose a comprehensive scheme for node localization protection, which aims to improve energy efficiency, reliability, and accuracy. To handle unbalanced resource consumption, a node deployment mechanism is presented to satisfy the energy balancing strategy in resource-constrained scenarios. According to cooperative localization theory and network connectivity properties, the parameter estimation model is established. To achieve reliable estimates and eliminate large errors, an improved localization algorithm is created based on modified average hop distances. To further improve the algorithm, the node positioning accuracy is enhanced by using the steepest descent method. The experimental simulations illustrate that the performance of the new scheme meets the stated targets. The results also demonstrate that it improves the survivability of belt-type sensor networks in terms of anti-interference, network energy saving, etc.
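The final steepest-descent refinement step can be sketched as descent on squared range residuals: given anchor coordinates and hop-based distance estimates, improve an initial position guess. The anchor layout, learning rate, and the assumption of exact distance estimates are illustrative.

```python
import numpy as np

def refine_position(anchors, dists, x0, lr=0.05, n_iter=500):
    """Steepest descent on sum_i (||x - anchor_i|| - d_i)^2, refining a
    coarse (e.g. hop-count based) position estimate x0."""
    x = np.asarray(x0, float).copy()
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)
    for _ in range(n_iter):
        diffs = x - anchors                   # (n_anchors, 2)
        r = np.linalg.norm(diffs, axis=1)     # current ranges
        resid = r - dists
        grad = (resid / np.maximum(r, 1e-9))[:, None] * diffs
        x -= lr * grad.sum(axis=0)
    return x
```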
A different approach to estimate nonlinear regression model using numerical methods
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This paper concerns computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm method, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis, to estimate the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient-algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; however, this article discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
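The basic Gauss-Newton iteration discussed above can be sketched for a simple exponential regression model; the model y = a·exp(b·x) and the full (undamped) steps are illustrative assumptions.

```python
import numpy as np

def gauss_newton(x, y, theta0, n_iter=20):
    """Gauss-Newton sketch for the nonlinear model y = a*exp(b*x):
    linearize the residuals with the Jacobian and solve the resulting
    linear least-squares problem at each iteration."""
    a, b = theta0
    for _ in range(n_iter):
        f = a * np.exp(b * x)
        resid = y - f
        # Jacobian of f with respect to (a, b)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        delta, *_ = np.linalg.lstsq(J, resid, rcond=None)
        a, b = a + delta[0], b + delta[1]
    return a, b
```

On a zero-residual problem like the test below, Gauss-Newton converges quadratically; in practice a damping or line-search safeguard (as in Levenberg-Marquardt) is usually added.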
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate, or at least comparable, estimation but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
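The coordinate descent updates used inside such penalized estimation can be illustrated on plain LASSO regression, a deliberate simplification: the paper applies coordinate descent to penalized weighted negative binomial and logistic models, not ordinary least squares, and the data below are synthetic. Each coordinate update is a univariate least-squares fit followed by soft-thresholding:

```python
import numpy as np

def soft_threshold(z, gamma):
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1.
    Assumes the columns of X are standardized (mean 0, variance 1)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding j
            z = X[:, j] @ r_j / n              # univariate least-squares fit
            b[j] = soft_threshold(z, lam)      # shrink and select
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
b_hat = lasso_cd(X, y, lam=0.1)   # irrelevant coefficients shrink toward zero
```

In the penalized GLM setting, the same update is applied to a weighted quadratic approximation of the negative log-likelihood inside each EM/IRLS step.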
How to define pathologic pelvic floor descent in MR defecography during defecation?
Schawkat, Khoschy; Heinrich, Henriette; Parker, Helen L; Barth, Borna K; Mathew, Rishi P; Weishaupt, Dominik; Fox, Mark; Reiner, Caecilia S
2018-06-01
To assess the extent of pelvic floor descent during both the maximal straining phase and the defecation phase in healthy volunteers and in patients with pelvic floor disorders studied with MR defecography (MRD), and to define specific threshold values for pelvic floor descent during the defecation phase. Twenty-two patients (mean age 51 ± 19.4) with obstructed defecation and 20 healthy volunteers (mean age 33.4 ± 11.5) underwent 3.0T MRD in the supine position using midsagittal T2-weighted images. Two radiologists performed measurements in reference to PCL lines during straining and during defecation. To identify cutoff values of pelvic floor measurements for the diagnosis of pathologic pelvic floor descent [anterior, middle, and posterior compartments (AC, MC, PC)], receiver-operating characteristic (ROC) curves were plotted. Pelvic floor descent of all three compartments was significantly larger during defecation than at straining in patients and healthy volunteers (p < 0.002). When grading pelvic floor descent in the straining phase, only two healthy volunteers (10%) showed moderate PC descent, which is considered pathologic. However, when applying the grading system during defecation, PC descent was overestimated, with 50% of the healthy volunteers (10 of 20) showing moderate PC descent. The AUC for PC measurements during defecation was 0.77 (p = 0.003) and suggests a cutoff value of 45 mm below the PCL to identify patients with pathologic PC descent. With the adapted cutoff, only 15% of healthy volunteers show pathologic PC descent during defecation. MRD measurements during straining and defecation can be used to differentiate patients with pelvic floor dysfunction from healthy volunteers. However, different cutoff values should be used during straining and during defecation to define normal or pathologic PC descent.
Evaluation of pelvic descent disorders by dynamic contrast roentgenography.
Takano, M; Hamada, A
2000-10-01
For precise diagnosis and rational treatment of the increasing number of patients with descent of intrapelvic organ(s) and anatomic plane(s), dynamic contrast roentgenography of multiple intrapelvic organs and planes is described. Sixty-six patients, including 11 males, with a mean age (+/- standard deviation) of 65.6+/-14.2 years and with chief complaints of intrapelvic organ and perineal descent or defecation problems, were examined in this study. Dynamic contrast roentgenography was obtained by opacifying the ileum, urinary bladder, vagina, rectum, and perineum. Films were taken at both the squeeze and strain phases. On the films, the lowest point of each organ and plane was plotted, and the distances from the standard line drawn at the upper surface of the sacrum were measured. The values were corrected to percentages according to the height of the sacrococcygeal bone of each patient. From these corrected values, organ or plane descent at strain and squeeze was diagnosed and graphically demonstrated as a descentgram for each patient. Among 17 cases with subjective symptoms of bladder descent, 9 cases (52.9 percent) showed roentgenographic descent. By the same token, among the cases with a subjective feeling of descent of the vagina, uterus, peritoneum, perineum, rectum, and anus, roentgenographic descent was confirmed in 15 of 20 (75 percent), 7 of 9 (77.8 percent), 6 of 16 (37.5 percent), 33 of 33 (100 percent), 25 of 37 (67.6 percent), and 22 of 36 (61.6 percent), respectively. The descentgrams were divided into three patterns: anorectal descent type, female genital descent type, and total organ descent type. Dynamic contrast roentgenography and successive descentgraphy of multiple intrapelvic organs and planes are useful for objective diagnosis and rational treatment of patients with descent disorders of the intrapelvic organ(s) and plane(s).
Performance Characterization of a Landmark Measurement System for ARRM Terrain Relative Navigation
NASA Technical Reports Server (NTRS)
Shoemaker, Michael A.; Wright, Cinnamon; Liounis, Andrew J.; Getzandanner, Kenneth M.; Van Eepoel, John M.; DeWeese, Keith D.
2016-01-01
This paper describes the landmark measurement system being developed for terrain relative navigation on NASA's Asteroid Redirect Robotic Mission (ARRM), and the results of a performance characterization study given realistic navigational and model errors. The system is called Retina and is derived from the stereo-photoclinometry methods widely used on other small-body missions. The system is simulated using synthetic imagery of the asteroid surface, and various algorithmic design choices are discussed. Unlike other missions, ARRM's Retina is the first planned autonomous use of these methods during the close-proximity and descent phase of the mission.
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
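A simplified, linear-only sketch of the idea (omitting the paper's post-nonlinear distortion stage and polynomial estimation) is gradient ascent on the kurtosis of a projection of the whitened mixtures; the mixing matrix, source distributions, step size, and iteration count below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Two independent sources: sub-Gaussian (uniform) and super-Gaussian (Laplace)
s = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0.0, 1.0, n)])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])        # hypothetical mixing matrix
x = A @ s                         # observed linear mixtures

# Whiten the mixtures to unit covariance
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

def kurt(u):
    return np.mean(u ** 4) - 3.0 * np.mean(u ** 2) ** 2

# Gradient ascent on |kurtosis(w^T z)| over the unit sphere
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(500):
    u = w @ z
    grad = 4.0 * (z * u ** 3).mean(axis=1) - 12.0 * np.mean(u ** 2) * (z * u).mean(axis=1)
    w += 0.05 * np.sign(kurt(u)) * grad   # ascend the absolute kurtosis
    w /= np.linalg.norm(w)

u = w @ z   # one recovered source, up to sign and scale
```

On whitened data the extrema of |kurtosis| over the unit sphere are the source directions, so the projection converges to one of the two sources.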
Deep kernel learning method for SAR image target recognition
NASA Astrophysics Data System (ADS)
Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao
2017-10-01
With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.
Analysis of various descent trajectories for a hypersonic-cruise, cold-wall research airplane
NASA Technical Reports Server (NTRS)
Lawing, P. L.
1975-01-01
The probable descent operating conditions for a hypersonic air-breathing research airplane were examined. Descents selected were cruise angle of attack, high dynamic pressure, high lift coefficient, turns, and descents with drag brakes. The descents were parametrically exercised and compared from the standpoint of cold-wall (367 K) aircraft heat load. The descent parameters compared were total heat load, peak heating rate, time to landing, time to end of heat pulse, and range. Trends in total heat load as a function of cruise Mach number, cruise dynamic pressure, angle-of-attack limitation, pull-up g-load, heading angle, and drag-brake size are presented.
NASA Technical Reports Server (NTRS)
1980-01-01
The results of three nonlinear Monte Carlo dispersion analyses for the Space Transportation System 1 Flight (STS-1) Orbiter Descent Operational Flight Profile, Cycle 3, are presented. Fifty randomly selected simulations each are analyzed for the end-of-mission (EOM) descent, the abort-once-around (AOA) descent targeted to the steep target line, and the AOA descent targeted to the shallow target line. These analyses compare the flight environment with system and operational constraints on the flight environment and, in some cases, use simplified system models as an aid in assessing the STS-1 descent flight profile. In addition, descent flight envelopes are provided as a database for use by system specialists to determine flight readiness for STS-1. The results of these dispersion analyses supersede the results of the previously documented dispersion analysis.
Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.
2016-01-01
One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations. This will allow an aircraft to remain close to its OPD. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information, and a series of studies was performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.
Material parameter estimation with terahertz time-domain spectroscopy.
Dorney, T D; Baraniuk, R G; Mittleman, D M
2001-07-01
Imaging systems based on terahertz (THz) time-domain spectroscopy offer a range of unique modalities owing to the broad bandwidth, subpicosecond duration, and phase-sensitive detection of the THz pulses. Furthermore, the possibility exists for combining spectroscopic characterization or identification with imaging because the radiation is broadband in nature. To achieve this, we require novel methods for real-time analysis of THz waveforms. This paper describes a robust algorithm for extracting material parameters from measured THz waveforms. Our algorithm simultaneously obtains both the thickness and the complex refractive index of an unknown sample under certain conditions. In contrast, most spectroscopic transmission measurements require knowledge of the sample's thickness for an accurate determination of its optical parameters. Our approach relies on a model-based estimation, a gradient descent search, and the total variation measure. We explore the limits of this technique and compare the results with literature data for optical parameters of several different materials.
2014-01-01
Background Hemoglobin Shepherds Bush (Human Genome Variation Society name: HBB:c.224G > A) is an unstable hemoglobin variant resulting from a β 74 GGC to GAC mutation (Gly to Asp) that manifests clinically as hemolytic anemia or gall bladder disease due to chronic subclinical hemolysis. Case presentation We report a Pennsylvania family of English descent with this condition, first noticed in a 6-year-old female. The proband presented with splenomegaly, fatigue, dark urine and an elevated indirect bilirubin. Hemoglobin identification studies and subsequent genetic testing performed according to a systematic algorithm elucidated the diagnosis of Hb Shepherds Bush. Conclusions This is the first case of this rare hemoglobin variant identified in North America to our knowledge. It was identified using a systematic algorithm of diagnostic tests that should be followed whenever considering a rare hemoglobinopathy as part of the differential diagnosis. PMID:24428873
Active semi-supervised learning method with hybrid deep belief networks.
Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong
2014-01-01
In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We conducted several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews, respectively.
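The gradient-descent fine-tuning with an exponential loss can be sketched for a plain linear classifier, which stands in for the deep architecture; the synthetic features and labels below are assumptions, not sentiment reviews:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 4))           # toy feature vectors
w_true = np.array([1.5, -2.0, 0.5, 0.0])
y = np.sign(X @ w_true)                     # labels in {-1, +1}

def exp_loss(w):
    """Exponential loss: mean of exp(-y * f(x)) over the training set."""
    return np.mean(np.exp(-y * (X @ w)))

w = np.zeros(4)
for _ in range(300):
    m = y * (X @ w)                                         # margins
    grad = -(X * (y * np.exp(-m))[:, None]).mean(axis=0)    # d loss / d w
    w -= 0.1 * grad                                         # descent step

train_acc = np.mean(np.sign(X @ w) == y)
```

The exponential loss upweights examples with small or negative margins, which is why it pairs naturally with gradient-based fine-tuning of a pretrained network.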
Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction
Gregor, Jens; Fessler, Jeffrey A.
2015-01-01
Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
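The shared preconditioned-gradient-descent form that the abstract attributes to both SIRT and SQS can be sketched as follows; the toy system, identity weights, and iteration count are illustrative assumptions, using the classical SIRT row-sum/column-sum scalings:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.random((40, 10))          # toy nonnegative system matrix
x_true = rng.random(10)
b = A @ x_true                    # consistent projection data
W = np.ones(40)                   # statistical weights (identity here)

# SIRT-style preconditioned gradient descent on the weighted least squares
# criterion: x <- x + C A^T R W (b - A x), with
#   R = diag(1 / row sums of A),  C = diag(1 / column sums of A).
R = 1.0 / A.sum(axis=1)
C = 1.0 / A.sum(axis=0)

x = np.zeros(10)
for _ in range(2000):
    x = x + C * (A.T @ (R * W * (b - A @ x)))

res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)   # relative residual
```

SQS replaces the row/column-sum scalings with separable quadratic surrogate curvatures, but the update retains this same diagonal-preconditioned gradient form, which is the point of the comparison above.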
Multi-Sensor Registration of Earth Remotely Sensed Imagery
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)
2001-01-01
Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps; feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWIFS (1000m).
Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction.
Peng, Chengtao; Qiu, Bensheng; Li, Ming; Guan, Yihui; Zhang, Cheng; Wu, Zhongyi; Zheng, Jian
2017-01-05
Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed images of X-ray computed tomography, which degrade the image quality and affect the diagnosis of disease. Therefore, it is essential to reduce these artifacts to meet the clinical demands. In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information that originated from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and normalized metal artifact reduction algorithm. The experimental datasets used included both simulated and clinical datasets. By evaluating the results subjectively, the proposed metal artifact reduction algorithm causes fewer secondary artifacts than the two conventional metal artifact reduction algorithms, which lead to severe secondary artifacts resulting from impertinent interpolation and normalization. Additionally, the objective evaluation shows the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method has produced the image with the best quality. 
For both the simulated and the clinical datasets, the proposed algorithm noticeably reduced the metal artifacts.
Autonomous spacecraft landing through human pre-attentive vision.
Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E
2012-06-01
In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. To date, major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft with optimality principles that guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence, and timing of the retargeting.
Relative Terrain Imaging Navigation (RETINA) Tool for the Asteroid Redirect Robotic Mission (ARRM)
NASA Technical Reports Server (NTRS)
Wright, Cinnamon A.; Van Eepoel, John; Liounis, Andrew; Shoemaker, Michael; DeWeese, Keith; Getzandanner, Kenneth
2016-01-01
As a part of the NASA initiative to collect a boulder off of an asteroid and return it to lunar orbit, the Satellite Servicing Capabilities Office (SSCO) and NASA GSFC are developing an on-board relative terrain imaging navigation algorithm for the Asteroid Redirect Robotic Mission (ARRM). After performing several flybys and dry runs to verify and refine the shape, spin, and gravity models and obtain centimeter-level imagery, the spacecraft will descend to the surface of the asteroid to capture a boulder and return it to lunar orbit. The algorithm implements stereophotoclinometry methods to register landmarks with images taken onboard the spacecraft, and uses these measurements to estimate the position and orientation of the spacecraft with respect to the asteroid. This paper presents an overview of the ARRM GN&C system and concept of operations as well as a description of the algorithm and its implementation. These techniques are demonstrated for the descent to the surface of the proposed asteroid of interest, 2008 EV5, and preliminary results are shown.
NASA Astrophysics Data System (ADS)
Huang, Mingzhi; Zhang, Tao; Ruan, Jujun; Chen, Xiaohong
2017-01-01
A new efficient hybrid intelligent approach based on a fuzzy wavelet neural network (FWNN) is proposed for effectively modeling and simulating the biodegradation process of dimethyl phthalate (DMP) in an anaerobic/anoxic/oxic (AAO) wastewater treatment process. By combining the self-learning and memory abilities of neural networks (NN), the uncertainty-handling capacity of fuzzy logic (FL), the local-detail analysis of the wavelet transform (WT), and the global search of the genetic algorithm (GA), the proposed hybrid intelligent model can extract the dynamic behavior and complex interrelationships from various water quality variables. To find the optimal values of the FWNN parameters, a hybrid learning algorithm integrating an improved genetic optimization with a gradient descent algorithm is employed. The results show that, compared with an NN model (optimized by GA) and a kinetic model, the proposed FWNN model has quicker convergence, higher prediction performance, and smaller RMSE (0.080), MSE (0.0064), and MAPE (1.8158) values, with a higher R2 (0.9851), which illustrates that the FWNN model simulates effluent DMP more accurately than the mechanistic model. PMID:28120889
Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime
2018-03-22
This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, this kind of robot is an underdetermined system. It therefore exhibits an infinite number of solutions to the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD), named natural-CCD, is proposed to solve this issue. It takes its name from the fact that it generates very harmonious robot movements and trajectories that also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
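A minimal sketch of classical CCD, the baseline that natural-CCD builds on, for a planar serial arm; the link lengths, target, starting angles, and sweep count are illustrative assumptions:

```python
import numpy as np

def fk(angles, lengths):
    """Forward kinematics: positions of all joints of a planar serial arm."""
    pts = [np.zeros(2)]
    total = 0.0
    for a, L in zip(angles, lengths):
        total += a
        pts.append(pts[-1] + L * np.array([np.cos(total), np.sin(total)]))
    return np.array(pts)

def ccd(angles, lengths, target, n_sweeps=50):
    """Cyclic coordinate descent: rotate one joint at a time to align the
    end-effector with the target, sweeping from the tip to the base."""
    angles = np.array(angles, dtype=float)
    for _ in range(n_sweeps):
        for j in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            to_end = pts[-1] - pts[j]          # joint j -> end-effector
            to_tgt = target - pts[j]           # joint j -> target
            # rotate joint j by the angle between the two vectors
            da = np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_end[1], to_end[0])
            angles[j] += da
    return angles

lengths = [1.0, 1.0, 1.0]
target = np.array([1.5, 1.2])                  # reachable: |target| < 3
sol = ccd([0.1, 0.1, 0.1], lengths, target)
err = np.linalg.norm(fk(sol, lengths)[-1] - target)
```

Each joint update is a one-dimensional exact minimization of the end-effector error, which is what makes CCD cheap enough to scale to many degrees of freedom.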
NASA Astrophysics Data System (ADS)
Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min
2014-09-01
In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding method is proposed to transfer a job permutation sequence to a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also applied within the framework of the HIA to perform an immune search. The influence of parameter setting on the HIA is investigated based on the Taguchi method of design of experiment. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and the variable neighbourhood descent methods. New best known solutions are obtained by the HIA for 17 out of 420 small-sized instances and 585 out of 720 large-sized instances.
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent, and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
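The dualization idea, optimizing the primal for a fixed dual variable and root-finding on the dual variable until the chance constraint is met, can be sketched on a deliberately tiny single-stage problem; the action costs, failure probabilities, and risk bound below are invented for illustration (the paper's primal step is a full dynamic program, not an argmin over three actions):

```python
import numpy as np

# Toy single-stage problem: each action has an expected cost and a
# failure probability; we want min cost subject to p_fail <= Delta.
costs = np.array([1.0, 2.0, 4.0])
p_fail = np.array([0.30, 0.10, 0.01])
Delta = 0.12

def best_action(lam):
    """Primal minimizer of cost + lam * p_fail (the dualized objective)."""
    return int(np.argmin(costs + lam * p_fail))

# Root-find on the dual variable: raise lam until the chance constraint
# holds (penalizing failure more strongly selects safer actions).
lo, hi = 0.0, 1000.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if p_fail[best_action(mid)] > Delta:
        lo = mid          # constraint violated: penalize failure more
    else:
        hi = mid          # constraint satisfied: try a smaller penalty
a_star = best_action(hi)  # cheapest action meeting the risk bound
```

Here the bisection settles at the smallest dual penalty whose primal minimizer respects the risk bound, selecting the middle action (cost 2.0, failure probability 0.10).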
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm, especially suited to high-dimensional data. The algorithm is referred to as the online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrate the superior generalization performance and efficiency of OSPVM. PMID:27143958
Gait mode recognition and control for a portable-powered ankle-foot orthosis.
David Li, Yifan; Hsiao-Wecksler, Elizabeth T
2013-06-01
Ankle foot orthoses (AFOs) are widely used as assistive/rehabilitation devices to correct the gait of people with lower leg neuromuscular dysfunction and muscle weakness. We have developed a portable powered ankle-foot orthosis (PPAFO), which uses a pneumatic bi-directional rotary actuator powered by compressed CO2 to provide untethered dorsiflexor and plantarflexor assistance at the ankle joint. Since portability is a key to the success of the PPAFO as an assist device, it is critical to recognize and control for gait modes (i.e. level walking, stair ascent/descent). While manual mode switching is implemented in most powered orthotic/prosthetic device control algorithms, we propose an automatic gait mode recognition scheme by tracking the 3D position of the PPAFO from an inertial measurement unit (IMU). The control scheme was designed to match the torque profile of physiological gait data during different gait modes. Experimental results indicate that, with an optimized threshold, the controller was able to identify the position, orientation and gait mode in real time, and properly control the actuation. It was also illustrated that during stair descent, a mode-specific actuation control scheme could better restore gait kinematic and kinetic patterns, compared to using the level ground controller.
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach, called risk allocation, decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
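The risk-allocation idea can be sketched in a few lines: dualize the joint chance constraint into a per-stage penalty lam * p_fail, then search over the multiplier lam until the cheapest feasible policy meets the risk bound. The five-stage setup, stage costs, and failure probabilities below are hypothetical illustrations, not the CEMAT EDL model.

```python
# Sketch of risk allocation: a joint chance constraint P(failure) <= delta
# is dualized into a per-stage trade-off cost(a) + lam * p_fail(a), and the
# multiplier lam is found by bisection so total risk meets the bound.
# All numbers below are hypothetical.

def solve_stages(lam, costs, p_fail):
    """Pick the action minimizing the Lagrangian at each stage."""
    policy, risk, cost = [], 0.0, 0.0
    for c, p in zip(costs, p_fail):
        # each stage: 'risky' is cheaper but may fail, 'safe' never fails here
        if c["risky"] + lam * p["risky"] < c["safe"] + lam * p["safe"]:
            policy.append("risky"); risk += p["risky"]; cost += c["risky"]
        else:
            policy.append("safe"); risk += p["safe"]; cost += c["safe"]
    return policy, risk, cost

def risk_constrained_plan(costs, p_fail, delta, iters=60):
    """Bisect on the Lagrange multiplier until total risk <= delta."""
    lo, hi = 0.0, 1e4          # hi is large enough to force the all-safe policy
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        _, risk, _ = solve_stages(mid, costs, p_fail)
        if risk <= delta:
            hi = mid           # feasible: try a smaller penalty
        else:
            lo = mid
    return solve_stages(hi, costs, p_fail)

# five descent stages with heterogeneous failure probabilities
costs = [{"risky": 1.0, "safe": 2.0}] * 5
p_fail = [{"risky": p, "safe": 0.0} for p in (0.01, 0.02, 0.03, 0.05, 0.08)]
policy, risk, cost = risk_constrained_plan(costs, p_fail, delta=0.07)
```

The bisection drives the multiplier to the boundary where the riskiest stages flip to the safe action, so risk is spent where it buys the most cost reduction.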
Product Distribution Theory for Control of Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Lee, Chia Fan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MAS's). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the (probability distribution of the) joint state of the agents. Accordingly we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
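A toy version of that gradient descent can be written down directly: two bounded-rational agents with a product distribution q1 x q2, a shared 2x2 cost matrix, and the maxent Lagrangian L(q) = E_q[G] - T*S(q). The cost matrix, temperature, learning rate, and finite-difference gradients below are illustrative choices, not the paper's experimental setup.

```python
import math

# Descend the PD-theory Lagrangian L(q) = E_q[G] - T*S(q) for two agents
# whose joint distribution is a product q1 x q2. G and T are hypothetical.
G = [[0.0, 1.0], [1.0, 0.0]]   # coordinating on the same move costs 0
T = 0.05                        # temperature weighting the entropy term

def softmax(theta):
    m = max(theta)
    e = [math.exp(t - m) for t in theta]
    z = sum(e)
    return [v / z for v in e]

def lagrangian(params):
    q1, q2 = softmax(params[:2]), softmax(params[2:])
    expected = sum(q1[i] * q2[j] * G[i][j] for i in range(2) for j in range(2))
    entropy = -sum(p * math.log(p + 1e-12) for p in q1 + q2)
    return expected - T * entropy

# plain gradient descent on the softmax logits, with finite-difference gradients
params = [0.1, 0.0, 0.1, 0.0]   # slight asymmetry breaks the symmetric saddle
eps, lr = 1e-5, 0.5
for _ in range(500):
    grad = []
    for k in range(4):
        bumped = list(params)
        bumped[k] += eps
        grad.append((lagrangian(bumped) - lagrangian(params)) / eps)
    params = [p - lr * g for p, g in zip(params, grad)]

q1, q2 = softmax(params[:2]), softmax(params[2:])
cost = sum(q1[i] * q2[j] * G[i][j] for i in range(2) for j in range(2))
```

Because the Lagrangian couples the agents only through the expectation term, each agent's gradient depends on the other's current distribution, and descent drives both toward a coordinated near-pure equilibrium.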
Flight Evaluation of Center-TRACON Automation System Trajectory Prediction Process
NASA Technical Reports Server (NTRS)
Williams, David H.; Green, Steven M.
1998-01-01
Two flight experiments (Phase 1 in October 1992 and Phase 2 in September 1994) were conducted to evaluate the accuracy of the Center-TRACON Automation System (CTAS) trajectory prediction process. The Transport Systems Research Vehicle (TSRV) Boeing 737 based at Langley Research Center flew 57 arrival trajectories that included cruise and descent segments while following descent clearance advisories from CTAS. Actual trajectories of the airplane were compared with the trajectories predicted by the CTAS trajectory synthesis algorithms and the airplane's Flight Management System (FMS). Trajectory prediction accuracy was evaluated over several levels of cockpit automation, ranging from a conventional cockpit to performance-based FMS vertical navigation (VNAV). Error sources and their magnitudes were identified and measured from the flight data. The major source of error during these tests was found to be the winds aloft predicted by CTAS. The most significant guidance-related effects were the cross-track and turn-overshoot errors associated with conventional VOR guidance. FMS lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot errors. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and airplane performance model errors.
The Yearly Variation in Fall-Winter Arctic Vortex Descent
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.; Newman, Paul A.
1999-01-01
Using the change in HALOE methane profiles from early September to late March, we have estimated the minimum amount of diabatic descent within the polar vortex that takes place during Arctic winter. The year-to-year variations are a result of the year-to-year variations in stratospheric wave activity, which (1) modify the temperature of the vortex and thus the cooling rate and (2) reduce the apparent descent by mixing air with high methane amounts into the vortex. The peak descent amounts from HALOE methane vary from 10 km to 14 km near the arrival altitude of 25 km. Using a diabatic trajectory calculation, we compare forward and backward trajectories over the course of the winter using UKMO assimilated stratospheric data. The forward calculation agrees fairly well with the observed descent. The backward calculation appears to be unable to produce the observed amount of descent, but this is only an apparent effect due to the density decrease in parcels with altitude. Finally, we show the results of unmixed descent experiments, in which the parcels are fixed in latitude and longitude and allowed to descend based on the local cooling rate. Unmixed descent is found to always exceed mixed descent, because when normal parcel motion is included, the path-averaged cooling is always less than the cooling at a fixed polar point.
Automatic toilet seat lowering apparatus
Guerty, Harold G.
1994-09-06
A toilet seat lowering apparatus includes a housing defining an internal cavity for receiving water from the water supply line to the toilet holding tank. A descent delay assembly of the apparatus can include a stationary dam member and a rotating dam member for dividing the internal cavity into an inlet chamber and an outlet chamber and controlling the intake and evacuation of water in a delayed fashion. A descent initiator is activated when the internal cavity is filled with pressurized water and automatically begins the lowering of the toilet seat from its upright position, which lowering is also controlled by the descent delay assembly. In an alternative embodiment, the descent initiator and the descent delay assembly can be combined in a piston linked to the rotating dam member and provided with a water channel for creating a resisting pressure to the advancing piston and thereby slowing the associated descent of the toilet seat.
A Self Contained Method for Safe and Precise Lunar Landing
NASA Technical Reports Server (NTRS)
Paschall, Stephen C., II; Brady, Tye; Cohanim, Babak; Sostaric, Ronald
2008-01-01
The return of humans to the Moon will require increased capability beyond that of the previous Apollo missions. Longer stay times and greater flexibility with regard to landing locations are among the many improvements planned. A descent and landing system that can land the vehicle more accurately than Apollo, with a greater ability to detect and avoid hazards, is essential to the development of a Lunar Outpost and for increasing the number of potentially reachable Lunar Sortie locations. This descent and landing system should allow landings in more challenging terrain and provide more flexibility with regard to mission timing and lighting considerations, while maintaining safety as the top priority. The lunar landing system under development by the ALHAT (Autonomous Landing and Hazard Avoidance Technology) project addresses this by providing terrain-relative navigation measurements to enhance global-scale precision, an onboard hazard-detection system to select safe landing locations, and an autonomous GNC (Guidance, Navigation, and Control) capability to process these measurements and safely direct the vehicle to the landing location. This ALHAT landing system will enable safe and precise lunar landings without requiring lunar infrastructure in the form of navigation aids or a priori identified hazard-free landing locations. The safe landing capability provided by ALHAT uses onboard active sensing to detect hazards that are large enough to be a danger to the vehicle but too small to be detected from orbit, given currently planned orbital terrain resolution limits. Algorithms to interpret raw active-sensor terrain data, generate hazard maps, identify safe sites, and recalculate new trajectories to those sites are included as part of the ALHAT system. These improvements to descent and landing will help contribute to repeated safe and precise landings for a wide variety of terrain on the Moon.
Orion MPCV Touchdown Detection Threshold Development and Testing
NASA Technical Reports Server (NTRS)
Daum, Jared; Gay, Robert
2013-01-01
A robust method of detecting Orion Multi-Purpose Crew Vehicle (MPCV) splashdown is necessary to ensure crew and hardware safety during descent and after touchdown. The proposed method uses a triple redundant system to inhibit Reaction Control System (RCS) thruster firings, detach parachute risers from the vehicle, and transition to the post-landing segment of the Flight Software (FSW). The vehicle crew is the prime input for touchdown detection, followed by an autonomous FSW algorithm, and finally a strictly time-based backup timer. RCS thrusters must be inhibited before submersion in water to protect against possible damage due to firing these jets under water. In addition, neglecting to declare touchdown will not allow the vehicle to transition to post-landing activities such as activating the Crew Module Up-righting System (CMUS), resulting in possible loss of communication and difficult recovery. A previous AIAA paper, "Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module," concluded that a strictly Inertial Measurement Unit (IMU) based detection method using an acceleration spike algorithm had the highest safety margins and shortest detection times of the methods considered. That study utilized finite element simulations of vehicle splashdown, generated by LS-DYNA, which were expanded to a larger set of results using a Kriging surface fit. The study also used the Decelerator Systems Simulation (DSS) to generate flight dynamics during vehicle descent under parachutes. Prototype IMU and FSW MATLAB models provided the basis for initial algorithm development and testing. This paper documents an in-depth trade study, using the same dynamics data and MATLAB simulations as the earlier work, to further develop the acceleration detection method.
By studying the combined effects of data rate, filtering of the rotational acceleration correction, data persistence limits, and values of acceleration thresholds, an optimal configuration was determined. The lever arm calculation, which removes the centripetal acceleration caused by vehicle rotation, requires that the vehicle angular acceleration be derived from vehicle body rates, necessitating the addition of a second-order filter to smooth the data. It was determined that using 200 Hz data directly from the vehicle IMU outperforms the 40 Hz FSW data rate. Data persistence counter values and acceleration thresholds were balanced in order to meet desired safety and performance. The algorithm proved to exhibit ample safety margin against early detection while under parachutes and adequate performance upon vehicle splashdown. Fall times from algorithm initiation were also studied, and a backup timer length was chosen to provide a large safety margin yet still trigger detection before CMUS inflation. This timer serves as a backup to the primary acceleration detection method. Additionally, these parameters were tested for safety on actual flight test data, demonstrating expected safety margins.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing of August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
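The core nonlinear weighted least-squares step can be illustrated with a deliberately simplified measurement model: dynamic-pressure-like observations y_k = 0.5*rho*(v_k - w)^2 at several navigation-supplied speeds, solved for density rho and a scalar wind w by Gauss-Newton. The model, numbers, and scalar wind are hypothetical stand-ins for the paper's flush-air-data surface pressure model.

```python
# Hedged sketch of nonlinear weighted least squares via Gauss-Newton:
# estimate density rho and a scalar wind w from measurements
# y_k = 0.5 * rho * (v_k - w)^2 at known vehicle speeds v_k.
# Everything here is an illustrative toy model.

def gauss_newton(v, y, weights, rho0, w0, iters=20):
    rho, w = rho0, w0
    for _ in range(iters):
        # residuals r_k = y_k - m_k(x) and Jacobian J = dr/dx
        r = [yk - 0.5 * rho * (vk - w) ** 2 for vk, yk in zip(v, y)]
        J = [(-0.5 * (vk - w) ** 2, rho * (vk - w)) for vk in v]
        # normal equations (J^T W J) dx = J^T W r, solved as a 2x2 system
        a = b = c = g0 = g1 = 0.0
        for (j0, j1), rk, wk in zip(J, r, weights):
            a += wk * j0 * j0; b += wk * j0 * j1; c += wk * j1 * j1
            g0 += wk * j0 * rk; g1 += wk * j1 * rk
        det = a * c - b * b
        d_rho = (c * g0 - b * g1) / det
        d_w = (a * g1 - b * g0) / det
        # since J = dr/dx = -dm/dx, the update is x <- x - dx
        rho -= d_rho
        w -= d_w
    return rho, w

v = [80.0, 100.0, 120.0, 140.0]            # navigation-derived speeds
rho_true, w_true = 0.02, 10.0
y = [0.5 * rho_true * (vk - w_true) ** 2 for vk in v]   # noise-free data
rho, w = gauss_newton(v, y, weights=[1.0] * 4, rho0=0.01, w0=0.0)
```

With uniform weights this reduces to ordinary Gauss-Newton; in the paper's setting the weights and priors would come from the measurement covariance and the propagated atmosphere model.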
Online selective kernel-based temporal difference learning.
Chen, Xingguo; Gao, Yang; Wang, Ruili
2013-12-01
In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex compared with other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking if a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time compared with other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD reaches a final optimum competitive with the up-to-date algorithms.
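The combination of a kernel-based value function with TD-learning gradient descent can be sketched on a toy chain task (not the paper's Maze or Mountain Car benchmarks, and with the sparsification step omitted): V(s) = sum_j alpha_j * k(s, s_j) over a fixed dictionary, with alpha updated by the TD(0) rule.

```python
import math

# Minimal kernel-based value function trained by TD(0) gradient descent
# on a 5-state deterministic chain (hypothetical toy task). The value
# function is V(s) = sum_j alpha_j * k(s, s_j) over a fixed dictionary.

GAMMA = 0.9
DICT = [0, 1, 2, 3]                       # dictionary of support states

def kernel(s, sj, width=0.5):
    return math.exp(-((s - sj) ** 2) / width)

def value(alpha, s):
    return sum(a * kernel(s, sj) for a, sj in zip(alpha, DICT))

alpha = [0.0] * len(DICT)
eta = 0.05
for _ in range(3000):                     # episodes: walk 0 -> 4
    for s in range(4):
        s_next = s + 1
        r = 1.0 if s_next == 4 else 0.0   # reward on reaching the goal
        v_next = 0.0 if s_next == 4 else value(alpha, s_next)
        delta = r + GAMMA * v_next - value(alpha, s)   # TD error
        # gradient step on alpha: the feature vector is k(s, .) over DICT
        for j, sj in enumerate(DICT):
            alpha[j] += eta * delta * kernel(s, sj)

values = [value(alpha, s) for s in range(4)]
```

Because the kernel features at the four states are full rank, the TD fixed point here coincides with the true discounted values gamma^(3-s).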
The research of conformal optical design
NASA Astrophysics Data System (ADS)
Li, Lin; Li, Yan; Huang, Yi-fan; Du, Bao-lin
2009-07-01
Conformal optical domes are characterized as having external more elongated optical surfaces that are optimized to minimize drag, increased missile velocity and extended operational range. The outer surface of the conformal domes typically deviate greatly from spherical surface descriptions, so the inherent asymmetry of conformal surfaces leads to variations in the aberration content presented to the optical sensor as it is gimbaled across the field of regard, which degrades the sensor's ability to properly image targets of interest and then undermine the overall system performance. Consequently, the aerodynamic advantages of conformal domes cannot be realized in practical systems unless the dynamic aberration correction techniques are developed to restore adequate optical imaging capabilities. Up to now, many optical correction solutions have been researched in conformal optical design, including static aberrations corrections and dynamic aberrations corrections. There are three parts in this paper. Firstly, the combination of static and dynamic aberration correction is introduced. A system for correcting optical aberration created by a conformal dome has an outer surface and an inner surface. The optimization of the inner surface is regard as the static aberration correction; moreover, a deformable mirror is placed at the position of the secondary mirror in the two-mirror all reflective imaging system, which is the dynamic aberration correction. Secondly, the using of appropriate surface types is very important in conformal dome design. Better performing optical systems can result from surface types with adequate degrees of freedom to describe the proper corrector shape. Two surface types and the methods of using them are described, including Zernike polynomial surfaces used in correct elements and user-defined surfaces used in deformable mirror (DM). Finally, the Adaptive optics (AO) correction is presented. 
In order to correct the dynamical residual aberration in conformal optical design, the SPGD optimization algorithm is operated at each zoom position to calculate the optimized surface shape of the MEMS DM. The communication between MATLAB and Code V established via ActiveX technique is applied in simulation analysis.
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase shifting mask (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve for the inverse lithography problem, which are only designed for the nominal imaging parameters without giving sufficient attention to the process variations due to the aberrations, defocus and dose variation. However, the effects of process variations existing in the practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, the lithography systems with larger NA (NA>0.6) are now extensively used, rendering the scalar imaging models inadequate to describe the vector nature of the electromagnetic field in the current optical lithography systems. In order to tackle the above problems, this paper focuses on developing robust gradient-based OPC and PSM optimization algorithms to the process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.
An analysis of neural receptive field plasticity by point process adaptive filtering
Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor
2001-01-01
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
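For a single constant covariate, the instantaneous steepest-descent update has a very compact form: with per-bin intensity exp(theta), the log-likelihood increment is dN*theta - exp(theta), so ascending it gives theta <- theta + eps*(dN - exp(theta)). The sketch below tracks a step change in a Bernoulli spike rate; the rates, bin count, and learning rate are illustrative, not taken from the hippocampal data.

```python
import math
import random

# Instantaneous steepest-descent point-process filter for a scalar
# log-intensity theta: per-bin log-likelihood increment is
# dN*theta - exp(theta), so the ascent step is eps*(dN - exp(theta)).
# The true rate and all constants below are illustrative.

rng = random.Random(0)
eps = 0.05
theta = math.log(0.1)          # initial guess: 0.1 expected events per bin
estimates = []
for t in range(4000):
    p_true = 0.1 if t < 2000 else 0.3        # true rate steps up halfway
    dN = 1 if rng.random() < p_true else 0   # Bernoulli spike in this bin
    theta += eps * (dN - math.exp(theta))    # steepest-descent update
    estimates.append(math.exp(theta))

# average the estimate over the last 500 bins of each regime
early = sum(estimates[1500:2000]) / 500
late = sum(estimates[3500:]) / 500
```

The estimate hovers around the true rate in each regime and re-converges within a few hundred bins of the jump, which is the millisecond-scale tracking behavior the abstract describes, in miniature.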
Validation of Genome-Wide Prostate Cancer Associations in Men of African Descent
Chang, Bao-Li; Spangler, Elaine; Gallagher, Stephen; Haiman, Christopher A.; Henderson, Brian; Isaacs, William; Benford, Marnita L.; Kidd, LaCreis R.; Cooney, Kathleen; Strom, Sara; Ann Ingles, Sue; Stern, Mariana C.; Corral, Roman; Joshi, Amit D.; Xu, Jianfeng; Giri, Veda N.; Rybicki, Benjamin; Neslund-Dudas, Christine; Kibel, Adam S.; Thompson, Ian M.; Leach, Robin J.; Ostrander, Elaine A.; Stanford, Janet L.; Witte, John; Casey, Graham; Eeles, Rosalind; Hsing, Ann W.; Chanock, Stephen; Hu, Jennifer J.; John, Esther M.; Park, Jong; Stefflova, Klara; Zeigler-Johnson, Charnita; Rebbeck, Timothy R.
2010-01-01
Background Genome-wide association studies (GWAS) have identified numerous prostate cancer susceptibility alleles, but these loci have been identified primarily in men of European descent. There is limited information about the role of these loci in men of African descent. Methods We identified 7,788 prostate cancer cases and controls with genotype data for 47 GWAS-identified loci. Results We identified significant associations for SNP rs10486567 at JAZF1, rs10993994 at MSMB, rs12418451 and rs7931342 at 11q13, and rs5945572 and rs5945619 at NUDT10/11. These associations were in the same direction and of similar magnitude as those reported in men of European descent. Significance was attained at all reported prostate cancer susceptibility regions at chromosome 8q24, including associations reaching genome-wide significance in region 2. Conclusion We have validated in men of African descent the associations at some, but not all, prostate cancer susceptibility loci originally identified in European descent populations. This may be due to heterogeneity in genetic etiology or in the pattern of genetic variation across populations. Impact The genetic etiology of prostate cancer in men of African descent differs from that of men of European descent. PMID:21071540
Studies of the hormonal control of postnatal testicular descent in the rat.
Spencer, J R; Vaughan, E D; Imperato-McGinley, J
1993-03-01
Dihydrotestosterone is believed to control the transinguinal phase of testicular descent based on hormonal manipulation studies performed in postnatal rats. In the present study, these hormonal manipulation experiments were repeated, and the results were compared with those obtained using the antiandrogens flutamide and cyproterone acetate. 17 beta-estradiol completely blocked testicular descent, but testosterone and dihydrotestosterone were equally effective in reversing this inhibition. Neither flutamide nor cyproterone acetate prevented testicular descent in postnatal rats despite marked peripheral antiandrogenic action. Further analysis of the data revealed a correlation between testicular size and descent. Androgen receptor blockade did not produce a marked reduction in testicular size and consequently did not prevent testicular descent, whereas estradiol alone caused marked testicular atrophy and testicular maldescent. Reduction of the estradiol dosage or concomitant administration of androgens or human chorionic gonadotropin resulted in both increased testicular size and degree of descent. These data suggest that growth of the neonatal rat testis may contribute to its passage into the scrotum.
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) technique of optimization based on the steepest descent algorithm is known to perform poorly and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RB networks using 2-D synthetic resistivity data and then applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are in good agreement with the results of existing inversion approaches. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iterations of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
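The iterative baseline being unrolled in such architectures is proximal descent for the lasso (ISTA): a gradient step on the quadratic data term followed by soft thresholding. The small random dictionary, sparse code, and regularization weight below are hypothetical; with step size at most 1/L the objective decreases monotonically, which the sketch checks.

```python
import random

# ISTA sketch: minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal descent.
# A learned fixed-complexity pursuit would unroll a few such iterations
# with trained weights; here we just run the classical iteration.
random.seed(0)
m, n = 5, 8
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[1], x_true[6] = 1.0, -1.0                  # 2-sparse ground-truth code
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

At = [[A[i][j] for i in range(m)] for j in range(n)]

# step size 1/L with L an upper bound on the top eigenvalue of A^T A,
# estimated by power iteration (padded by 10% to stay a valid bound)
b = [1.0] * n
for _ in range(50):
    b2 = matvec(At, matvec(A, b))
    nrm = sum(v * v for v in b2) ** 0.5
    b = [v / nrm for v in b2]
L = sum(bi * vi for bi, vi in zip(b, matvec(At, matvec(A, b)))) * 1.1

lam = 0.05
def soft(v, t):
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def objective(x):
    r = [yi - mi for yi, mi in zip(y, matvec(A, x))]
    return 0.5 * sum(v * v for v in r) + lam * sum(abs(v) for v in x)

x = [0.0] * n
objs = [objective(x)]
for _ in range(100):                              # ISTA iterations
    r = [mi - yi for mi, yi in zip(matvec(A, x), y)]
    g = matvec(At, r)                             # gradient of the quadratic term
    x = soft([xi - gi / L for xi, gi in zip(x, g)], lam / L)
    objs.append(objective(x))
```

A learned variant would replace A-transpose, 1/L, and the threshold with trainable parameters and truncate to a fixed, small number of layers.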
Murad-Regadas, Sthela M; Pinheiro Regadas, Francisco Sergio; Rodrigues, Lusmar V; da Silva Vilarinho, Adjra; Buchen, Guilherme; Borges, Livia Olinda; Veras, Lara B; da Cruz, Mariana Murad
2016-12-01
Defecography is an established method of evaluating dynamic anorectal dysfunction, but conventional defecography does not allow for visualization of anatomic structures. The purpose of this study was to describe the use of dynamic 3-dimensional endovaginal ultrasonography for evaluating perineal descent in comparison with echodefecography (3-dimensional anorectal ultrasonography) and to study the relationship between perineal descent and symptoms and anatomic/functional abnormalities of the pelvic floor. This was a prospective study. The study was conducted at a large university tertiary care hospital. Consecutive female patients were eligible if they had pelvic floor dysfunction, obstructed defecation symptoms, and a score >6 on the Cleveland Clinic Florida Constipation Scale. Each patient underwent both echodefecography and dynamic 3-dimensional endovaginal ultrasonography to evaluate posterior pelvic floor dysfunction. Normal perineal descent was defined on echodefecography as puborectalis muscle displacement ≤2.5 cm; excessive perineal descent was defined as displacement >2.5 cm. Of 61 women, 29 (48%) had normal perineal descent; 32 (52%) had excessive perineal descent. Endovaginal ultrasonography identified 27 of the 29 patients in the normal group as having anorectal junction displacement ≤1 cm (mean = 0.6 cm; range, 0.1-1.0 cm) and a mean anorectal junction position of 0.6 cm (range, 0-2.3 cm) above the symphysis pubis during the Valsalva maneuver and correctly identified 30 of the 32 patients in the excessive perineal descent group. The κ statistic showed almost perfect agreement (κ = 0.86) between the 2 methods for categorization into the normal and excessive perineal descent groups. Perineal descent was not related to fecal or urinary incontinence or anatomic and functional factors (sphincter defects, pubovisceral muscle defects, levator hiatus area, grade II or III rectocele, intussusception, or anismus). 
The study did not include a control group without symptoms. Three-dimensional endovaginal ultrasonography is a reliable technique for assessment of perineal descent. Using this technique, excessive perineal descent can be defined as displacement of the anorectal junction >1 cm and/or its position below the symphysis pubis on Valsalva maneuver.
Adjoint shape optimization for fluid-structure interaction of ducted flows
NASA Astrophysics Data System (ADS)
Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.
2018-03-01
Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived through systematic use of the formal Lagrange calculus. Solutions of both the primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in a steepest descent algorithm to compute the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimizing the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.
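A minimal sketch of the steepest-descent loop described above. In the actual method the gradient comes from a primal/adjoint solve of the coupled fluid-structure problem; here a toy quadratic cost and an analytic gradient stand in for the pressure-drop functional and the adjoint-based surface sensitivity.

```python
import numpy as np

def cost(s):
    # stand-in for the pressure-drop cost functional J(s)
    return np.sum((s - 1.0) ** 2)

def sensitivity(s):
    # stand-in for the surface sensitivity dJ/ds from a primal+adjoint solve
    return 2.0 * (s - 1.0)

s = np.zeros(4)            # design variables (e.g., duct-wall control points)
step = 0.25
for _ in range(50):
    s -= step * sensitivity(s)    # steepest-descent update

print(cost(s) < 1e-10)
```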
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qichun; Zhou, Jinglin; Wang, Hong
In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems, and a novel output-feedback m-block backstepping controller with a linear estimator is designed, in which gradient descent optimization is used to tune the design parameters of the controller. It is shown that the trajectories of the closed-loop stochastic systems are bounded in probability and that the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed, and the effectiveness of the proposed method is demonstrated using a simulated example.
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher Order Statistics.
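The kurtosis-driven idea can be illustrated for the linear part of such a problem. The sketch below is not the authors' algorithm; the mixing matrix, sources, and step size are invented for illustration. It whitens a two-channel mixture and runs gradient ascent on the fourth moment of a unit-norm projection, which on whitened data favors the most super-Gaussian source.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
s_sub = rng.uniform(-np.sqrt(3), np.sqrt(3), n)   # sub-Gaussian source (unit var)
s_super = rng.laplace(0.0, 1.0 / np.sqrt(2), n)   # super-Gaussian source (unit var)
A = np.array([[0.8, 0.6], [0.3, 0.9]])            # "unknown" mixing matrix
X = A @ np.vstack([s_sub, s_super])

# whiten the mixtures so that unit-norm projections have unit variance
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# gradient ascent on the fourth moment E[(w.z)^4] over unit vectors w;
# on whitened data the maximizer aligns with the most super-Gaussian source
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(200):
    y = w @ Z
    w += 0.05 * (Z @ y**3) / n      # ascent direction of E[y^4] is E[z y^3]
    w /= np.linalg.norm(w)          # project back to the unit sphere

corr = np.corrcoef(w @ Z, s_super)[0, 1]
print(abs(corr) > 0.9)
```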
Yang, Ping; Ning, Yu; Lei, Xiang; Xu, Bing; Li, Xinyang; Dong, Lizhi; Yan, Hu; Liu, Wenjing; Jiang, Wenhan; Liu, Lei; Wang, Chao; Liang, Xingbo; Tang, Xiaojun
2010-03-29
We present a slab laser amplifier beam cleanup experimental system based on a 39-actuator rectangular piezoelectric deformable mirror. Rather than using a wavefront sensor to measure wavefront distortions and applying a conjugate wavefront to compensate for them, the system uses a stochastic parallel gradient descent (SPGD) algorithm to maximize the power contained within a far-field designated bucket. Experimental results demonstrate that at an output power of 335 W, more than 30% of the energy is concentrated within the 1x diffraction-limited area and the beam quality is greatly enhanced.
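The SPGD update used in such experiments has a compact form: perturb all control channels simultaneously with random +/-delta, measure the metric on both sides, and step along the perturbation weighted by the measured metric difference. A hedged sketch follows, with a synthetic quadratic metric standing in for the power-in-the-bucket measurement; the actuator count is borrowed from the abstract, while the gain and perturbation amplitude are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_act = 39                                  # actuator count from the abstract
target = rng.standard_normal(n_act)         # unknown optimal control vector

def metric(u):
    # synthetic quadratic metric standing in for power-in-the-bucket
    return -np.sum((u - target) ** 2)

u = np.zeros(n_act)
gamma, delta = 0.3, 0.05                    # illustrative gain and perturbation
for _ in range(2000):
    p = delta * rng.choice([-1.0, 1.0], size=n_act)   # Bernoulli perturbation
    dJ = metric(u + p) - metric(u - p)                # two-sided metric read
    u += gamma * dJ * p                               # SPGD update

print(metric(u) > -1e-2)
```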
An image morphing technique based on optimal mass preserving mapping.
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2007-06-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.
14 CFR 23.69 - Enroute climb/descent.
Code of Federal Regulations, 2010 CFR
2010-01-01
... climb/descent. (a) All engines operating. The steady gradient and rate of climb must be determined at.... The steady gradient and rate of climb/descent must be determined at each weight, altitude, and ambient...
Bernard, Olivier; Alata, Olivier; Francaux, Marc
2006-03-01
Modeling in the time domain, the non-steady-state O2 uptake on-kinetics of high-intensity exercises with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated Vo2 on-responses were generated to mimic the real time course for transitions from light- to high-intensity exercises, with a signal-to-noise ratio equal to 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the whole second component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1 although better with SA for A2 and tau2. 
Our results demonstrate that the implementation of SA improves significantly the estimation of most of these kinetic parameters, but a large inaccuracy remains in estimating the parameter values of the second exponential.
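A minimal simulated-annealing fit in the spirit of the comparison above, applied to a discontinuous delayed mono-exponential (the study used a double-exponential; the parameters, proposal scales, and cooling schedule here are illustrative assumptions, not the authors' settings).

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 300.0)

def model(p):
    # delayed mono-exponential, discontinuous in the time delay td
    A, td, tau = p
    return np.where(t >= td, A * (1.0 - np.exp(-(t - td) / tau)), 0.0)

true_p = np.array([1.5, 20.0, 30.0])
y_obs = model(true_p) + 0.05 * rng.standard_normal(t.size)

def rss(p):
    return float(np.sum((y_obs - model(p)) ** 2))

p = np.array([1.0, 10.0, 50.0])          # deliberately poor initial guess
e = rss(p)
best_p, best_e = p.copy(), e
T = 5.0                                   # initial temperature
for _ in range(8000):
    q = p + rng.standard_normal(3) * [0.05, 1.0, 1.0]   # random-walk proposal
    q[2] = max(q[2], 1.0)                 # keep tau in a sane range
    eq = rss(q)
    # Metropolis acceptance: always downhill, occasionally uphill
    if eq < e or rng.random() < np.exp(-(eq - e) / T):
        p, e = q, eq
        if e < best_e:
            best_p, best_e = p.copy(), e
    T *= 0.999                            # geometric cooling

print(best_e < 5.0)
```

Tracking the best-so-far point, as done here, is what lets the annealer report a good fit even after it has accepted uphill moves.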
Effects of flutamide and finasteride on rat testicular descent.
Spencer, J R; Torrado, T; Sanchez, R S; Vaughan, E D; Imperato-McGinley, J
1991-08-01
The endocrine control of descent of the testis in mammalian species is poorly understood. The androgen dependency of testicular descent was studied in the rat using an antiandrogen (flutamide) and an inhibitor of the enzyme 5 alpha-reductase (finasteride). Androgen receptor blockade inhibited testicular descent more effectively than inhibition of 5 alpha-reductase activity. Moreover, its inhibitory effect was limited to the outgrowth phase of the gubernaculum testis, particularly the earliest stages of outgrowth. Gubernacular size was also significantly reduced in fetuses exposed to flutamide during the outgrowth period. In contrast, androgen receptor blockade or 5 alpha-reductase inhibition applied after the initiation of gubernacular outgrowth or during the regression phase did not affect testicular descent. Successful inhibition of the development of epididymis and vas by prenatal flutamide did not correlate with ipsilateral testicular maldescent, suggesting that an intact epididymis is not required for descent of the testis. Plasma androgen assays confirmed significant inhibition of dihydrotestosterone formation in finasteride-treated rats. These data suggest that androgens, primarily testosterone, are required during the early phases of gubernacular outgrowth for subsequent successful completion of testicular descent.
Testicular descent related to growth hormone treatment.
Papadimitriou, Anastasios; Fountzoula, Ioanna; Grigoriadou, Despina; Christianakis, Stratos; Tzortzatou, Georgia
2003-01-01
An 8.7-year-old boy with cryptorchidism and growth hormone (GH) deficiency due to septo-optic dysplasia showed testicular descent related to the commencement of hGH treatment. This case suggests a role for GH in testicular descent.
Aircraft Vortex Wake Descent and Decay under Real Atmospheric Effects
DOT National Transportation Integrated Search
1973-10-01
Aircraft vortex wake descent and decay in a real atmosphere is studied analytically. Factors relating to encounter hazard, wake generation, wake descent and stability, and atmospheric dynamics are considered. Operational equations for encounter hazar...
NASA Astrophysics Data System (ADS)
Golomazov, M. M.; Ivankov, A. A.
2016-12-01
Methods for calculating the aerodynamic impact of the Martian atmosphere on the ExoMars-2018 descent module, required for solving the problem of heat protection of the module during aerodynamic deceleration, are presented together with the results of the investigation. The flow field and the radiative and convective heat exchange are calculated along the trajectory of the descent module until parachute system activation.
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1974-01-01
Apollo lunar-descent guidance transfers the Lunar Module from a near-circular orbit to touchdown, traversing a 17 deg central angle and a 15 km altitude in 11 min. A group of interactive programs in an onboard computer guides the descent, controlling attitude and the descent propulsion system throttle. A ground-based program pre-computes guidance targets. The concepts involved in this guidance are described. Explicit and implicit guidance are discussed, guidance equations are derived, and the earlier Apollo explicit equation is shown to be an inferior special case of the later implicit equation. Interactive guidance, by which the two-man crew selects a landing site in favorable terrain and directs the trajectory there, is discussed. Interactive terminal-descent guidance enables the crew to control the essentially vertical descent rate in order to land in minimum time with a safe contact speed. The attitude maneuver routine uses concepts that make gimbal lock inherently impossible.
NASA Technical Reports Server (NTRS)
Smith, Charlee C., Jr.; Lovell, Powell M., Jr.
1954-01-01
An investigation is being conducted to determine the dynamic stability and control characteristics of a 0.13-scale flying model of the Convair XFY-1 vertically rising airplane. This paper presents the results of flight and force tests to determine the stability and control characteristics of the model in vertical descent and landings in still air. The tests indicated that landings, including vertical descents from altitudes representing up to 400 feet for the full-scale airplane and at rates of descent up to 15 or 20 feet per second (full scale), can be performed satisfactorily. Sustained vertical descent in still air will probably be more difficult to perform because of large random trim changes that become greater as the descent velocity is increased. A slight steady head wind or cross wind might be sufficient to eliminate the random trim changes.
Multi-AUV Target Search Based on Bioinspired Neurodynamics Model in 3-D Underwater Environments.
Cao, Xiang; Zhu, Daqi; Yang, Simon X
2016-11-01
Target search in 3-D underwater environments is a challenge in multiple autonomous underwater vehicles (multi-AUVs) exploration. This paper focuses on an effective strategy for multi-AUV target search in the 3-D underwater environments with obstacles. First, the Dempster-Shafer theory of evidence is applied to extract information of environment from the sonar data to build a grid map of the underwater environments. Second, a topologically organized bioinspired neurodynamics model based on the grid map is constructed to represent the dynamic environment. The target globally attracts the AUVs through the dynamic neural activity landscape of the model, while the obstacles locally push the AUVs away to avoid collision. Finally, the AUVs plan their search path to the targets autonomously by a steepest gradient descent rule. The proposed algorithm deals with various situations, such as static targets search, dynamic targets search, and one or several AUVs break down in the 3-D underwater environments with obstacles. The simulation results show that the proposed algorithm is capable of guiding multi-AUV to achieve search task of multiple targets with higher efficiency and adaptability compared with other algorithms.
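The steepest-descent path rule can be illustrated on a toy 2-D grid (the paper works in 3-D with a bioinspired neurodynamics model; the distance-based "activity" values and the obstacle block below are invented stand-ins): the agent repeatedly moves to the neighbouring cell with the highest activity, so the target attracts the path while low-activity obstacle cells repel it.

```python
import numpy as np

n = 15
yy, xx = np.mgrid[0:n, 0:n]
target = (12, 12)
# activity landscape: peaks at the target, strongly negative at obstacles
activity = -np.hypot(xx - target[1], yy - target[0])
activity[5:8, 5:8] = -100.0                # obstacle block

pos = (1, 1)
path = [pos]
for _ in range(60):
    r, c = pos
    nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < n and 0 <= c + dc < n]
    pos = max(nbrs, key=lambda p: activity[p])   # steepest-ascent step
    path.append(pos)
    if pos == target:
        break

print(path[-1] == target)
```

Note how the path skirts the obstacle automatically: the obstacle cells never win the neighbour comparison, which is the discrete analogue of the "locally push away" behaviour described in the abstract.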
Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle
2013-01-01
The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
Zhang, Yuwei; Cao, Zexing; Zhang, John Zenghui; Xia, Fei
2017-02-27
Construction of coarse-grained (CG) models for large biomolecules used in multiscale simulations demands a rigorous definition of CG sites. Several coarse-graining methods, such as simulated annealing and steepest descent (SASD) based on essential dynamics coarse-graining (ED-CG) or stepwise local iterative optimization (SLIO) based on fluctuation maximization coarse-graining (FM-CG), have been developed for this purpose. However, the practical application of methods such as SASD based on ED-CG is subject to limitations because they are too expensive. In this work, we extend the applicability of ED-CG by combining it with the SLIO algorithm. A comprehensive comparison of the optimized results and accuracy of various algorithms based on ED-CG shows that SLIO is the fastest as well as the most accurate algorithm among them. ED-CG combined with SLIO gives converged results as the number of CG sites increases, which demonstrates that it is another efficient method for coarse-graining large biomolecules. The construction of CG sites for the Ras protein using MD fluctuations demonstrates that CG sites derived from FM-CG accurately reflect the fluctuation properties of the secondary structures in Ras.
Pixel-by-Pixel Estimation of Scene Motion in Video
NASA Astrophysics Data System (ADS)
Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.
2017-05-01
The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame; these shifts form a shift vector field. As estimated parameters of the vectors, the paper studies their projections and polar parameters. Two methods for estimating the shift vector field are considered. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left; subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses the correlation between rows to increase processing efficiency. It processes rows one after the other, changing direction after each row, and uses the obtained values to form the resulting estimate. Two criteria for forming this estimate are studied: minimum of the gradient estimate and maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object and of estimating a moving object's trajectory using the shift vector field.
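The per-pixel shift estimation can be illustrated in one dimension. In the sketch below, deterministic gradient steps on a synthetic image row stand in for the paper's stochastic recurrent estimator; the Gaussian profile, shift, and step size are illustrative. The estimator recovers a sub-pixel inter-frame shift by descending the squared inter-frame difference.

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 201)

def frame(c):
    return np.exp(-((x - c) ** 2) / 2.0)    # synthetic 1-D image row

true_shift = 0.37
f1 = frame(true_shift)                      # next frame: sub-pixel shifted copy

s, mu = 0.0, 0.5                            # shift estimate and step size
for _ in range(300):
    r = frame(s) - f1                       # inter-frame residual
    grad = 2.0 * np.sum(r * (x - s) * frame(s))   # d/ds of sum(r**2)
    s -= mu * grad / x.size                 # gradient step on the shift

print(abs(s - true_shift) < 1e-3)
```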
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters of a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often result in a local maximum.
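A coordinate-descent parameter search of the kind mentioned above can be sketched in a few lines. The two parameters and the synthetic quality score are invented stand-ins for real pipeline parameters and a segmentation metric; note that on non-separable or multi-modal score surfaces such a search can stall in a local maximum, which is exactly the difficulty discussed in the abstract.

```python
import numpy as np

def score(p):
    # stand-in for a segmentation-quality metric over two pipeline
    # parameters, e.g. a threshold t and a smoothing radius s
    t, s = p
    return -((t - 0.6) ** 2 + (s - 3.0) ** 2)

grids = [np.linspace(0, 1, 101), np.linspace(0, 10, 101)]
p = [0.0, 0.0]
for _ in range(5):                  # a few sweeps over the coordinates
    for i, g in enumerate(grids):
        # tune parameter i over its grid while the others are held fixed
        vals = [score(p[:i] + [v] + p[i+1:]) for v in g]
        p[i] = float(g[int(np.argmax(vals))])

print([round(v, 1) for v in p])  # → [0.6, 3.0]
```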
Pardo-Montero, Juan; Fenwick, John D
2010-06-01
The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. 
The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape comprising two regions: One where the dose to the target is close to prescription and trade-offs can be made between doses to the organs at risk and (small) changes in target dose, and one where very substantial rectal sparing is achieved at the cost of large target underdosage. Plans computed following the approach using a conformal arc and four blocked arcs generally lie close to the Pareto front, although distances of some plans from high gradient regions of the Pareto front can be greater. Only around 12% of plans lie a relative Euclidean distance of 0.15 or greater from the Pareto front. Using the alternative distance measure of Craft ["Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization," Phys. Medica (to be published)], around 2/5 of plans lie more than 0.05 from the front. Computation of blocked arcs is quite fast, the algorithms requiring 35%-80% of the running time per iteration needed for conventional inverse plan computation. The geometry-based arc approach to multicriteria optimization of rotational therapy allows solutions to be obtained that lie close to the Pareto front. Both the image-reconstruction type and gradient-descent algorithms produce similar modulated arcs, the latter one perhaps being preferred because it is more easily implementable in standard treatment planning systems. Moderate unblocking provides a good way of dealing with OARs which abut the PTV. Optimization of geometry-based arcs is faster than usual inverse optimization of treatment plans, making this approach more rapid than an inverse-based Pareto front reconstruction.
Kidd, La Creis Renee; VanCleave, Tiva T.; Doll, Mark A.; Srivastava, Daya S.; Thacker, Brandon; Komolafe, Oyeyemi; Pihur, Vasyl; Brock, Guy N.; Hein, David W.
2011-01-01
Objective We evaluated the individual and combination effects of NAT1, NAT2 and tobacco smoking in a case-control study of 219 incident prostate cancer (PCa) cases and 555 disease-free men. Methods Allelic discriminations for 15 NAT1 and NAT2 loci were detected in germ-line DNA samples using Taqman polymerase chain reaction (PCR) assays. Single gene, gene-gene and gene-smoking interactions were analyzed using logistic regression models and multi-factor dimensionality reduction (MDR) adjusted for age and subpopulation stratification. MDR involves a rigorous algorithm that has ample statistical power to assess and visualize gene-gene and gene-environment interactions using relatively small samples sizes (i.e., 200 cases and 200 controls). Results Despite the relatively high prevalence of NAT1*10/*10 (40.1%), NAT2 slow (30.6%), and NAT2 very slow acetylator genotypes (10.1%) among our study participants, these putative risk factors did not individually or jointly increase PCa risk among all subjects or a subset analysis restricted to tobacco smokers. Conclusion Our data do not support the use of N-acetyltransferase genetic susceptibilities as PCa risk factors among men of African descent; however, subsequent studies in larger sample populations are needed to confirm this finding. PMID:21709725
Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation
NASA Astrophysics Data System (ADS)
Zhou, XueFei
2018-04-01
With the development of computer technology, the applications of machine learning are more and more extensive, and machine learning is providing endless opportunities to develop new applications. One of those applications is image recognition using Convolutional Neural Networks (CNNs). The CNN is one of the most common algorithms in image recognition, and it is important for every scholar interested in this field to understand its theory and structure. CNNs are mainly used in computer recognition tasks, especially voice and text recognition. A CNN utilizes a hierarchical structure with different layers to accelerate computing speed. In addition, the greatest features of CNNs are weight sharing and dimension reduction, which together account for their high effectiveness and efficiency in terms of computing speed and error rate. With the help of other learning algorithms, CNNs can be used in several machine learning scenarios, especially deep learning. Following a general introduction to the background and to the core CNN model, this paper focuses on summarizing how gradient descent and backpropagation work and how they contribute to the high performance of CNNs. Some practical applications are also discussed, and the last section presents the conclusion and some perspectives on future work.
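The interplay of gradient descent and backpropagation that the paper summarizes can be shown on a tiny fully-connected network (a stand-in for a CNN; the XOR task, layer sizes, and learning rate are illustrative choices): the forward pass computes activations, the backward pass propagates error derivatives layer by layer via the chain rule, and gradient descent updates the weights.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])            # XOR targets

W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)
lr = 0.3

for _ in range(10000):
    # forward pass
    h = np.tanh(X @ W1 + b1)                      # hidden activations
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    # backward pass: propagate the MSE error derivative layer by layer
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    # gradient-descent weight updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.mean((out - y) ** 2) < 0.2)
```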
Siggs, Owen M.; Miosge, Lisa A.; Roots, Carla M.; Enders, Anselm; Bertram, Edward M.; Crockford, Tanya L.; Whittle, Belinda; Potter, Paul K.; Simon, Michelle M.; Mallon, Ann-Marie; Brown, Steve D. M.; Beutler, Bruce; Goodnow, Christopher C.; Lunter, Gerton; Cornall, Richard J.
2013-01-01
Forward genetics screens with N-ethyl-N-nitrosourea (ENU) provide a powerful way to illuminate gene function and generate mouse models of human disease; however, the identification of causative mutations remains a limiting step. Current strategies depend on conventional mapping, so the propagation of affected mice requires non-lethal screens; accurate tracking of phenotypes through pedigrees is complex and uncertain; out-crossing can introduce unexpected modifiers; and Sanger sequencing of candidate genes is inefficient. Here we show how these problems can be efficiently overcome using whole-genome sequencing (WGS) to detect the ENU mutations and then identify regions that are identical by descent (IBD) in multiple affected mice. In this strategy, we use a modification of the Lander-Green algorithm to isolate causative recessive and dominant mutations, even at low coverage, on a pure strain background. Analysis of the IBD regions also allows us to calculate the ENU mutation rate (1.54 mutations per Mb) and to model future strategies for genetic screens in mice. The introduction of this approach will accelerate the discovery of causal variants, permit broader and more informative lethal screens to be used, reduce animal costs, and herald a new era for ENU mutagenesis. PMID:23382690
Design and Analysis of Map Relative Localization for Access to Hazardous Landing Sites on Mars
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Aaron, Seth; Cheng, Yang; Montgomery, James; Trawny, Nikolas; Tweddle, Brent; Vaughan, Geoffrey; Zheng, Jason
2016-01-01
Human and robotic planetary lander missions require accurate surface relative position knowledge to land near science targets or next to pre-deployed assets. In the absence of GPS, accurate position estimates can be obtained by automatically matching sensor data collected during descent to an on-board map. The Lander Vision System (LVS) that is being developed for Mars landing applications generates landmark matches in descent imagery and combines these with inertial data to estimate vehicle position, velocity and attitude. This paper describes recent LVS design work focused on making the map relative localization algorithms robust to challenging environmental conditions like bland terrain, appearance differences between the map and image and initial input state errors. Improved results are shown using data from a recent LVS field test campaign. This paper also fills a gap in analysis to date by assessing the performance of the LVS with data sets containing significant vertical motion including a complete data set from the Mars Science Laboratory mission, a Mars landing simulation, and field test data taken over multiple altitudes above the same scene. Accurate and robust performance is achieved for all data sets indicating that vertical motion does not play a significant role in position estimation performance.
A Descent Rate Control Approach to Developing an Autonomous Descent Vehicle
NASA Astrophysics Data System (ADS)
Fields, Travis D.
Circular parachutes have been used for aerial payload/personnel deliveries for over 100 years. In the past two decades, significant work has been done to improve the landing accuracies of cargo deliveries for humanitarian and military applications. This dissertation discusses the approach developed in which a circular parachute is used in conjunction with an electro-mechanical reefing system to manipulate the landing location. Rather than attempt to steer the autonomous descent vehicle directly, control of the landing location is accomplished by modifying the amount of time spent in a particular wind layer. Descent rate control is performed by reversibly reefing the parachute canopy. The first stage of the research investigated the use of a single actuation during descent (with periodic updates), in conjunction with a curvilinear target. Simulation results using real-world wind data are presented, illustrating the utility of the methodology developed. Additionally, hardware development and flight-testing of the single actuation autonomous descent vehicle are presented. The next phase of the research focuses on expanding the single actuation descent rate control methodology to incorporate a multi-actuation path-planning system. By modifying the parachute size throughout the descent, the controllability of the system greatly increases. The trajectory planning methodology developed provides a robust approach to accurately manipulate the landing location of the vehicle. The primary benefits of this system are the inherent robustness to release location errors and the ability to overcome vehicle uncertainties (mass, parachute size, etc.). A separate application of the path-planning methodology is also presented. An in-flight path-prediction system was developed for use in high-altitude ballooning by utilizing the path-planning methodology developed for descent vehicles. 
The developed onboard system improves landing location predictions in-flight using collected flight information during the ascent and descent. Simulation and real-world flight tests (using the developed low-cost hardware) demonstrate the significance of the improvements achievable when flying the developed system.
Miller, Vonda H; Jansen, Ben H
2008-12-01
Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme to date in its ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning for category discrimination in an oscillatory neural network model, where classification is accomplished using a trajectory-based distance metric. Since the distance metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown, along with the improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric are useful for simple pattern recognition tasks and that supervised learning improves classification results.
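The gradient-descent scheme described above can be illustrated with a minimal sketch. The prototype-based classifier, mean-squared trajectory distance, and synthetic data below are illustrative assumptions, not the paper's actual oscillatory network model:

```python
import numpy as np

def trajectory_distance(traj, prototype):
    # Differentiable distance: mean squared deviation between trajectories.
    return np.mean((traj - prototype) ** 2)

def train_prototypes(trajs, labels, n_classes, lr=0.1, epochs=200):
    # One prototype trajectory per class, refined by gradient descent
    # on the (differentiable) distance to same-class examples.
    T = trajs.shape[1]
    protos = np.zeros((n_classes, T))
    for c in range(n_classes):
        protos[c] = trajs[labels == c].mean(axis=0)  # warm start at class mean
    for _ in range(epochs):
        for x, y in zip(trajs, labels):
            # Gradient of the mean squared distance w.r.t. the prototype.
            grad = 2.0 * (protos[y] - x) / T
            protos[y] -= lr * grad
    return protos

def classify(traj, protos):
    # Assign the category whose prototype trajectory is nearest.
    return int(np.argmin([trajectory_distance(traj, p) for p in protos]))
```

Because the metric is differentiable, the same update generalizes directly to more elaborate trajectory distances than the mean-squared form used here.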
Multigrid one shot methods for optimal control problems: Infinite dimensional control
NASA Technical Reports Server (NTRS)
Arian, Eyal; Taasan, Shlomo
1994-01-01
The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at a cost comparable to solving the corresponding analysis problems just a few times.
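The Lagrange-multiplier (adjoint) gradient at the heart of such methods can be sketched on a single grid. The 1D Poisson model problem, regularization weight, and step size below are illustrative assumptions, and no multigrid acceleration is shown:

```python
import numpy as np

def laplacian_1d(n, h):
    # 3-point finite-difference Laplacian with zero Dirichlet boundaries.
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def optimal_control_gd(u_d, A, alpha=1e-4, lr=50.0, n_iter=500):
    # Single-grid sketch of the Lagrange-multiplier gradient:
    #   state:    A u = f
    #   adjoint:  A p = u - u_d          (A is symmetric)
    #   gradient: dJ/df = p + alpha * f
    # for J(f) = 0.5*||u - u_d||^2 + 0.5*alpha*||f||^2.
    f = np.zeros_like(u_d)
    for _ in range(n_iter):
        u = np.linalg.solve(A, f)         # solve the state equation
        p = np.linalg.solve(A, u - u_d)   # solve the adjoint equation
        f -= lr * (p + alpha * f)         # steepest-descent step on the control
    return f, np.linalg.solve(A, f)
```

Each descent step costs one state solve and one adjoint solve; the multigrid one shot idea is to avoid paying that full cost at every step by correcting different scales of f on different grids.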
Graph Matching: Relax at Your Own Risk.
Lyzinski, Vince; Fishkind, Donniell E; Fiori, Marcelo; Vogelstein, Joshua T; Priebe, Carey E; Sapiro, Guillermo
2016-01-01
Graph matching, the problem of aligning a pair of graphs to minimize their edge disagreements, has received widespread attention from both theoretical and applied communities over the past several decades, including combinatorics, computer vision, and connectomics. This attention can be partially attributed to its computational difficulty. Although many heuristics have previously been proposed in the literature to approximately solve graph matching, very few have any theoretical support for their performance. A common technique is to relax the discrete problem to a continuous problem, thereby enabling practitioners to bring gradient-descent-type algorithms to bear. We prove that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation. These theoretical results suggest that initializing the indefinite algorithm with the convex optimum might yield improved practical performance. Indeed, experimental results illuminate and corroborate these theoretical findings, demonstrating that excellent results are achieved in both benchmark and real data problems by amalgamating the two approaches.
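The indefinite relaxation can be attacked with a Frank-Wolfe-type gradient scheme over doubly stochastic matrices, rounded to a permutation at the end. The following is a generic sketch of that idea; the starting point, step sizes, and iteration count are assumptions, not the paper's exact algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def edge_disagreements(A, B, perm):
    # Edge mismatch between A and B after relabeling B's vertices by perm.
    P = np.eye(len(perm))[perm]
    return np.abs(A - P @ B @ P.T).sum() / 2

def indefinite_gm(A, B, P0, n_iter=30):
    # Frank-Wolfe on the indefinite relaxation: maximize trace(A P B P')
    # over doubly stochastic P, starting from P0 (e.g. the barycenter or
    # the convex-relaxation optimum), then round to a permutation.
    P = P0.copy()
    for k in range(n_iter):
        grad = A @ P @ B.T + A.T @ P @ B
        rows, cols = linear_sum_assignment(-grad)  # best permutation ascent direction
        Q = np.zeros_like(P)
        Q[rows, cols] = 1.0
        P += (2.0 / (k + 2.0)) * (Q - P)           # standard Frank-Wolfe step size
    rows, cols = linear_sum_assignment(-P)         # final rounding to a permutation
    return cols
```

The amalgamated strategy the abstract suggests would simply pass the convex-relaxation optimum in as P0.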
Mars Entry Atmospheric Data System Modelling and Algorithm Development
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; O'Keefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.
2009-01-01
The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL), Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Renliang; Dogandžić, Aleksandar
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
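The density-map update belongs to the Nesterov proximal-gradient family. A generic sketch of that family (FISTA for an l1-penalized least-squares problem, with an assumed Gaussian measurement matrix rather than the paper's polychromatic CT model) looks like this:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam, n_iter=500):
    # Nesterov-accelerated proximal gradient (FISTA) for
    #   min_x 0.5*||A x - y||^2 + lam*||x||_1,
    # the same family of update used for the density-map step above.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)  # proximal-gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)    # Nesterov momentum
        x, t = x_new, t_new
    return x
```

In the paper's setting the l1 penalty acts on wavelet coefficients of the density map and a nonnegativity constraint is added, but the accelerated proximal structure is the same.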
Mars Pathfinder Atmospheric Entry Navigation Operations
NASA Technical Reports Server (NTRS)
Braun, R. D.; Spencer, D. A.; Kallemeyn, P. H.; Vaughan, R. M.
1997-01-01
On July 4, 1997, after traveling close to 500 million km, the Pathfinder spacecraft successfully completed entry, descent, and landing, coming to rest on the surface of Mars just 27 km from its target point. In the present paper, the atmospheric entry and approach navigation activities required in support of this mission are discussed. In particular, the flight software parameter update and landing site prediction analyses performed by the Pathfinder operations navigation team are described. A suite of simulation tools developed during Pathfinder's design cycle, but extendible to Pathfinder operations, is also presented. Data regarding the accuracy of the primary parachute deployment algorithm are extracted from the Pathfinder flight data, demonstrating that this algorithm performed as predicted. The increased probability of mission success through the software parameter update process is discussed. This paper also demonstrates the importance of modeling atmospheric flight uncertainties in the estimation of an accurate landing site. With these atmospheric effects included, the final landed ellipse prediction differs from the post-flight determined landing site by less than 0.5 km in downtrack.
A Space Affine Matching Approach to fMRI Time Series Analysis.
Chen, Liang; Zhang, Weishi; Liu, Hongbo; Feng, Shigang; Chen, C L Philip; Wang, Huili
2016-07-01
For fMRI time series analysis, an important challenge is to overcome the potential delay between the hemodynamic response signal and the cognitive stimuli signal, namely the same frequency but different phase (SFDP) problem. In this paper, a novel space affine matching feature is presented by introducing time domain and frequency domain features. The time domain feature is used to discern different stimuli, while the frequency domain feature is used to eliminate the delay. We then propose a space affine matching (SAM) algorithm to match fMRI time series using our affine feature, in which a normal vector is estimated by gradient descent to optimize the time series matching. The experimental results illustrate that the SAM algorithm is insensitive to the delay between the hemodynamic response signal and the cognitive stimuli signal. Our approach significantly outperforms the GLM method when such a delay exists. The approach can help solve the SFDP problem in fMRI time series matching and thus holds great promise for revealing brain dynamics.
Improving the Incoherence of a Learned Dictionary via Rank Shrinkage.
Ubaru, Shashanka; Seghouane, Abd-Krim; Saad, Yousef
2017-01-01
This letter considers the problem of dictionary learning for sparse signal representation whose atoms have low mutual coherence. To learn such dictionaries, at each step, we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank 1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the problem of reducing the rank to a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach. We establish theoretical results that show that the rank shrinkage step included will reduce the coherence of the dictionary, which is further validated by experimental results. Numerical experiments illustrating the performance of the proposed algorithm in comparison to various other well-known dictionary learning algorithms are also presented.
An evaluation of descent strategies for 4D RNAV-equipped aircraft in an advanced metering environment
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Schwab, R. W.; Groce, J. L.; Coote, M. A.
1986-01-01
The effects on system throughput and fleet fuel usage of arrival aircraft utilizing three 4D RNAV descent strategies (cost optimal, clean-idle Mach/CAS, and constant descent angle Mach/CAS), both individually and in combination, were investigated in an advanced air traffic control metering environment. Results are presented for all mixtures of arrival traffic consisting of three Boeing commercial jet types and for all combinations of the three descent strategies for a typical en route metering airport arrival distribution.
OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE
NASA Technical Reports Server (NTRS)
Lee, H.
1994-01-01
For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. 
If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short-range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt & Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions, including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
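The short-range iteration described above, varying the optimum cruise cost until the computed range matches the specified range, is in effect one-dimensional root finding. A minimal sketch, with a hypothetical monotone range model standing in for the program's full trajectory synthesis:

```python
def solve_cruise_cost(range_for_cost, target_range, lo, hi, tol=1e-6):
    # Bisection on the optimum cruise cost.  range_for_cost is assumed to
    # be monotonically decreasing in cruise cost; in the real program it
    # would be the full climb/cruise/descent trajectory synthesis.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if range_for_cost(mid) > target_range:
            lo = mid      # computed range too long: raise the cruise cost
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Any bracketing root finder would serve; bisection is shown only because it needs nothing beyond monotonicity of the assumed range model.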
Power Control and Optimization of Photovoltaic and Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Ghaffari, Azad
Power map and Maximum Power Point (MPP) of Photovoltaic (PV) and Wind Energy Conversion Systems (WECS) highly depend on system dynamics and environmental parameters, e.g., solar irradiance, temperature, and wind speed. Power optimization algorithms for PV systems and WECS are collectively known as Maximum Power Point Tracking (MPPT) algorithm. Gradient-based Extremum Seeking (ES), as a non-model-based MPPT algorithm, governs the system to its peak point on the steepest descent curve regardless of changes of the system dynamics and variations of the environmental parameters. Since the power map shape defines the gradient vector, then a close estimate of the power map shape is needed to create user assignable transients in the MPPT algorithm. The Hessian gives a precise estimate of the power map in a neighborhood around the MPP. The estimate of the inverse of the Hessian in combination with the estimate of the gradient vector are the key parts to implement the Newton-based ES algorithm. Hence, we generate an estimate of the Hessian using our proposed perturbation matrix. Also, we introduce a dynamic estimator to calculate the inverse of the Hessian which is an essential part of our algorithm. We present various simulations and experiments on the micro-converter PV systems to verify the validity of our proposed algorithm. The ES scheme can also be used in combination with other control algorithms to achieve desired closed-loop performance. The WECS dynamics is slow which causes even slower response time for the MPPT based on the ES. Hence, we present a control scheme, extended from Field-Oriented Control (FOC), in combination with feedback linearization to reduce the convergence time of the closed-loop system. Furthermore, the nonlinear control prevents magnetic saturation of the stator of the Induction Generator (IG). 
The proposed control algorithm in combination with the ES guarantees the closed-loop system robustness with respect to high level parameter uncertainty in the IG dynamics. The simulation results verify the effectiveness of the proposed algorithm.
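The perturbation-and-demodulation idea behind gradient-based extremum seeking can be sketched in discrete time on a hypothetical static power map. The map, gains, and perturbation parameters below are illustrative, not the dissertation's PV or WECS models:

```python
import math

def extremum_seeking(f, theta0, a=0.2, omega=0.8, gain=0.05, n_steps=3000):
    # Perturb the operating point with a*sin(omega*k) and demodulate:
    # the product f(theta + a*sin) * sin averages to (a/2)*f'(theta),
    # so the update climbs the power map without any model of it.
    theta_hat = theta0
    for k in range(n_steps):
        s = math.sin(omega * k)
        y = f(theta_hat + a * s)       # perturbed measurement of the map
        theta_hat += gain * s * y      # demodulated gradient-ascent step
    return theta_hat
```

The Newton-based variant described above additionally demodulates with a perturbation matrix to estimate the Hessian (and its inverse), replacing the fixed gain with a curvature-corrected one.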
Khachatryan, Naira; Medeiros, Felipe A.; Sharpsten, Lucie; Bowd, Christopher; Sample, Pamela A.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Weinreb, Robert N.; Miki, Atsuya; Hammel, Na’ama; Zangwill, Linda M.
2015-01-01
Purpose To evaluate racial differences in the development of visual field (VF) damage in glaucoma suspects. Design Prospective, observational cohort study. Methods Six hundred thirty-six eyes from 357 glaucoma suspects with normal VF at baseline were included from the multicenter African Descent and Glaucoma Evaluation Study (ADAGES). Racial differences in the development of VF damage were examined using multivariable Cox proportional hazards models. Results Thirty-one (25.4%) of 122 African descent participants and 47 (20.0%) of 235 European descent participants developed VF damage (p=0.078). In multivariable analysis, worse baseline VF mean deviation, higher mean arterial pressure during follow-up, and a race × mean intraocular pressure (IOP) interaction term were significantly associated with the development of VF damage, suggesting that racial differences in the risk of VF damage varied by IOP. At higher mean IOP levels, race was predictive of the development of VF damage even after adjusting for potentially confounding factors. At mean IOPs during follow-up of 22, 24 and 26 mmHg, multivariable hazard ratios (95% CI) for the development of VF damage in African descent compared to European descent subjects were 2.03 (1.15–3.57), 2.71 (1.39–5.29), and 3.61 (1.61–8.08), respectively. However, at lower mean IOP levels (below 22 mmHg) during follow-up, African descent was not predictive of the development of VF damage. Conclusion In this cohort of glaucoma suspects with similar access to treatment, multivariable analysis revealed that at higher mean IOP during follow-up, individuals of African descent were more likely to develop VF damage than individuals of European descent. PMID:25597839
van der Stoep, T
BACKGROUND: Compared to the percentage of ethnic minorities in the general population, ethnic minorities are overrepresented in forensic psychiatry. If these minorities are to be treated successfully, we need to know more about this group. So far, however, little is known about the differences in mental disorders and types of offences between patients of non-Dutch descent and patients of Dutch descent.
AIM: To take the first steps to obtain the information we need in order to provide customised care for patients of non-Dutch descent.
METHOD: We examined differences between patients of Dutch and non-Dutch descent with regard to treatment, diagnosis and offences committed, within a group of patients who were admitted to the forensic psychiatric centre Oostvaarderskliniek during the period 2001 - 2014.
RESULTS: The treatment of patients of non-Dutch descent lasted longer than the treatment of patients of Dutch descent (8.5 years versus 6.6 years). Furthermore, patients from ethnic minority groups were diagnosed more often with schizophrenia (49.1% versus 21.4%), but less often with pervasive developmental disorders or sexual disorders. Patients of non-Dutch descent were more often convicted of sexual crimes where the victim was aged 16 years or older, whereas patients of Dutch descent were more often convicted of sexual crimes where the victim was under 16.
CONCLUSION: There are differences between patients of Dutch and non-Dutch descent with regard to treatment duration, diagnosis and offences they commit. Future research needs to investigate whether these results are representative for the entire field of forensic psychiatry and to discover the reasons for these differences.
Reverse engineering a gene network using an asynchronous parallel evolution strategy
2010-01-01
Background The use of reverse engineering methods to infer gene regulatory networks by fitting mathematical models to gene expression data is becoming increasingly popular and successful. However, increasing model complexity means that more powerful global optimisation techniques are required for model fitting. The parallel Lam Simulated Annealing (pLSA) algorithm has been used in such approaches, but recent research has shown that island Evolutionary Strategies can produce faster, more reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse engineering problem: inferring parameters in the gap gene network. We find that the asynchronous piES exhibits very little communication overhead, and shows significant speed-up for up to 50 nodes: the piES running on 50 nodes is nearly 10 times faster than the best serial algorithm. We compare the asynchronous piES to pLSA on the same test problem, measuring the time required to reach particular levels of residual error, and show that it shows much faster convergence than pLSA across all optimisation conditions tested. Conclusions Our results demonstrate that the piES is consistently faster and more reliable than the pLSA algorithm on this problem, and scales better with increasing numbers of nodes. In addition, the piES is especially well suited to further improvements and adaptations: Firstly, the algorithm's fast initial descent speed and high reliability make it a good candidate for being used as part of a global/local search hybrid algorithm. Secondly, it has the potential to be used as part of a hierarchical evolutionary algorithm, which takes advantage of modern multi-core computing architectures. PMID:20196855
The impact of Asian descent on the incidence of acquired severe aplastic anaemia in children.
McCahon, Emma; Tang, Keith; Rogers, Paul C J; McBride, Mary L; Schultz, Kirk R
2003-04-01
Previous studies have suggested an increased incidence of acquired severe aplastic anaemia in Asian populations. We evaluated the incidence of aplastic anaemia in people of Asian descent, using a well-defined paediatric (0-14 years) population in British Columbia, Canada to minimize environmental factors. The incidence in children of East/South-east Asian descent (6.9/million/year) and South Asian (East Indian) descent (7.3/million/year) was higher than for those of White/mixed ethnic descent (1.7/million/year). There appeared to be no contribution by environmental factors. This study shows that Asian children have an increased incidence of severe aplastic anaemia possibly as a result of a genetic predisposition.
Intrascrotal CGRP 8-37 causes a delay in testicular descent in mice.
Samarakkody, U K; Hutson, J M
1992-07-01
The genitofemoral nerve is a key factor in the inguinoscrotal descent of the testis. The effect of androgens may be mediated via the central nervous system, which in turn secretes the neurotransmitter calcitonin gene-related peptide (CGRP) at the genitofemoral nerve endings, to cause testicular descent. The effect of endogenous CGRP was examined by weekly injections of a vehicle with or without a synthetic antagonist (CGRP 8-37) into the developing scrotum of neonatal mice. The descent of the testis was delayed in the experimental group compared with the control group. At 2 weeks of age, 43% of controls had descended testes compared with 0% of experimental animals. At 3 weeks of age, 17% of experimental animals still had undescended testes, whereas all testes were descended in controls. At 4 weeks, three testes remained undescended in the experimental group. It is concluded that the CGRP antagonist can retard testicular descent. This result is consistent with the hypothesis that CGRP is an important intermediary in testicular descent.
Statistical efficiency of adaptive algorithms.
Widrow, Bernard; Kamenetsky, Max
2003-01-01
The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. 
For these reasons, the LMS algorithm has enjoyed very widespread application. It is used in almost every modem for channel equalization and echo cancelling. Furthermore, it is related to the famous backpropagation algorithm used for training neural networks.
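A minimal sketch of the two updates being compared, here applied to FIR system identification; the filter, step size, and data are illustrative, and the R^-1 estimate below stands in for the exact input autocorrelation matrix assumed by LMS/Newton:

```python
import numpy as np

def lms(x, d, mu, n_taps):
    # Plain LMS: steepest descent using the instantaneous gradient estimate.
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-M+1]]
        e = d[n] - w @ u                    # a priori error
        w = w + 2 * mu * e * u              # LMS weight update
    return w

def lms_newton(x, d, mu, n_taps):
    # LMS/Newton: the LMS update premultiplied by the inverse input
    # autocorrelation matrix (estimated here from the data itself).
    X = np.array([x[n - n_taps + 1:n + 1][::-1]
                  for n in range(n_taps - 1, len(x))])
    R_inv = np.linalg.inv(X.T @ X / len(X))
    w = np.zeros(n_taps)
    for u, dn in zip(X, d[n_taps - 1:]):
        e = dn - w @ u
        w = w + 2 * mu * R_inv @ (e * u)
    return w
```

With white input, R is (nearly) a scaled identity and the two updates coincide, which matches the abstract's observation that the algorithms often perform equally; the R^-1 factor matters when the input is correlated and the error surface is elongated.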
Houle, A M; Gagné, D
1995-01-01
The androgen-regulated paracrine factor, calcitonin gene-related peptide (CGRP), has been proposed as a possible mediator of testicular descent. This peptide has been found to increase rhythmic contractions of gubernaculae and is known to be released by the genitofemoral nerve. We have investigated the ability of CGRP to induce premature testicular descent. CGRP was administered alone, or in combination with human chorionic gonadotropin (hCG), to C57BL/6 male mice postnatally. The extent of testicular descent at 18 days postpartum was then ascertained. The potential relationship between testicular weight and descent was also examined. Our results show that testes of mice treated with either hCG alone, or in combination with 500 ng CGRP, were at a significantly lower position than those of controls, by 16% and 17%, respectively. In contrast, mice treated with 500 ng of CGRP alone had testes at a higher position when compared to those of controls, by 19%. In mice treated with 50 ng of CGRP alone or in combination with hCG, testes were at a position similar to those in controls. Furthermore, testicular descent was analyzed in relation to testicular weight, and we found that testes significantly smaller per gram of body weight than those of controls were nevertheless at a significantly lower position. Our data demonstrate that CGRP had no effect on postnatal testicular descent and that there is no relationship between postnatal descent and testicular weight.
Transformable descent vehicles
NASA Astrophysics Data System (ADS)
Pichkhadze, K. M.; Finchenko, V. S.; Aleksashkin, S. N.; Ostreshko, B. A.
2016-12-01
This article presents some types of planetary descent vehicles, the shape of which varies in different flight phases. The advantages of such vehicles over those with unchangeable form (from launch to landing) are discussed. It is shown that the use of transformable descent vehicles widens the scope of possible tasks to solve.
43 CFR 10.14 - Lineal descent and cultural affiliation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... evidence sufficient to: (i) Establish the identity and cultural characteristics of the earlier group, (ii... 43 Public Lands: Interior 1 2011-10-01 2011-10-01 false Lineal descent and cultural affiliation... GRAVES PROTECTION AND REPATRIATION REGULATIONS General § 10.14 Lineal descent and cultural affiliation...
43 CFR 10.14 - Lineal descent and cultural affiliation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... evidence sufficient to: (i) Establish the identity and cultural characteristics of the earlier group, (ii... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Lineal descent and cultural affiliation... GRAVES PROTECTION AND REPATRIATION REGULATIONS General § 10.14 Lineal descent and cultural affiliation...
New displacement-based methods for optimal truss topology design
NASA Technical Reports Server (NTRS)
Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.
1991-01-01
Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.
Optimal Pitch Thrust-Vector Angle and Benefits for all Flight Regimes
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Bolonkin, Alexander
2000-01-01
The NASA Dryden Flight Research Center is exploring the optimum thrust-vector angle on aircraft. Simple aerodynamic performance models for various phases of aircraft flight are developed and optimization equations and algorithms are presented in this report. Results of optimal angles of thrust vectors and associated benefits for various flight regimes of aircraft (takeoff, climb, cruise, descent, final approach, and landing) are given. Results for a typical wide-body transport aircraft are also given. The benefits accruable for this class of aircraft are small, but the technique can be applied to other conventionally configured aircraft. The lower L/D aerodynamic characteristics of fighters generally would produce larger benefits than those produced for transport aircraft.
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple-objective decision-making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
Efficient methods for overlapping group lasso.
Yuan, Lei; Liu, Jun; Ye, Jieping
2013-09-01
The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of gradient-descent-type algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the l(q) norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic dataset and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
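As an illustrative sketch (not the paper's dual-based method for overlapping groups), the proximal operator referred to above has a simple closed form in the non-overlapping case: blockwise soft-thresholding of each group. A minimal Python version, assuming groups that partition the coordinates:

```python
import math

def prox_group_lasso(x, groups, lam):
    """Proximal operator of the non-overlapping group lasso penalty
    lam * sum_g ||x_g||_2: each group is shrunk toward zero by a
    factor max(0, 1 - lam / ||x_g||), i.e. blockwise soft-thresholding.
    `groups` is a list of index lists partitioning x."""
    out = list(x)
    for g in groups:
        norm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out

# A group with small norm is zeroed out entirely; a large one is
# shrunk toward zero but kept.
x = [3.0, 4.0, 0.1, -0.1]
print(prox_group_lasso(x, [[0, 1], [2, 3]], lam=1.0))
```

With overlapping groups this closed form no longer applies, which is exactly why the paper resorts to solving the smooth convex dual.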
Pixel-based OPC optimization based on conjugate gradients.
Ma, Xu; Arce, Gonzalo R
2011-01-31
Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method, which exhibits much faster convergence than the SD algorithm. The image formation process is represented by the Fourier series expansion model, which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, an MRC penalty is proposed to enlarge the linear size of the sub-resolution assist features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
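A minimal sketch of the linear conjugate gradient iteration that the abstract contrasts with steepest descent, here applied to a small symmetric positive-definite system (illustrative only; the paper applies CG to the nonlinear PBOPC cost, not to a linear system):

```python
def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Linear conjugate gradient for A x = b with A symmetric positive
    definite. Each step combines the new residual with the previous
    search direction instead of following the raw steepest-descent
    residual, which is what yields the faster convergence."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# On an n-dimensional quadratic, CG converges in at most n steps.
print(conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))
```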
Overview of the Phoenix Entry, Descent and Landing System
NASA Technical Reports Server (NTRS)
Grover, Rob
2005-01-01
A viewgraph presentation on the entry, descent and landing system of Phoenix is shown. The topics include: 1) Phoenix Mission Goals; 2) Payload; 3) Aeroshell/Entry Comparison; 4) Entry Trajectory Comparison; 5) Phoenix EDL Timeline; 6) Hypersonic Phase; 7) Parachute Phase; 8) Terminal Descent Phase; and 9) EDL Communications.
NASA Technical Reports Server (NTRS)
Brown, Charles; Andrew, Robert; Roe, Scott; Frye, Ronald; Harvey, Michael; Vu, Tuan; Balachandran, Krishnaiyer; Bly, Ben
2012-01-01
The Ascent/Descent Software Suite has been used to support a variety of NASA Shuttle Program mission planning and analysis activities, such as range safety, on the Integrated Planning System (IPS) platform. The Ascent/Descent Software Suite, containing Ascent Flight Design (ASC)/Descent Flight Design (DESC) configuration items (CIs), lifecycle documents, and data files used for shuttle ascent and entry modeling analysis and mission design, resides on IPS/Linux workstations. A list of tools in the Navigation (NAV)/Prop Software Suite represents tool versions established during or after the IPS Equipment Rehost-3 project.
Descent Stage of Mars Science Laboratory During Assembly
NASA Technical Reports Server (NTRS)
2008-01-01
This image from early October 2008 shows personnel working on the descent stage of NASA's Mars Science Laboratory inside the Spacecraft Assembly Facility at NASA's Jet Propulsion Laboratory, Pasadena, Calif. The descent stage will provide rocket-powered deceleration for a phase of the arrival at Mars after the phases using the heat shield and parachute. When it nears the surface, the descent stage will lower the rover on a bridle the rest of the way to the ground. The larger three of the orange spheres in the descent stage are fuel tanks. The smaller two are tanks for pressurant gas used for pushing the fuel to the rocket engines. JPL, a division of the California Institute of Technology, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington.
Steepest descent method implementation on unconstrained optimization problem using C++ program
NASA Astrophysics Data System (ADS)
Napitupulu, H.; Sukono; Mohd, I. Bin; Hidayat, Y.; Supian, S.
2018-03-01
Steepest descent is known as the simplest gradient method. Much recent research has aimed at choosing an appropriate step size so that the objective function value decreases progressively. In this paper, the properties of the steepest descent method reported in the literature are reviewed, together with the advantages and disadvantages of each step-size procedure. The development of the steepest descent method as driven by its step-size procedure is discussed. To test the performance of each step size, we run a steepest descent procedure implemented as a C++ program. We apply it to an unconstrained optimization test problem in two variables and compare the numerical results of each step-size procedure. Based on the numerical experiments, we summarize the general computational features and weaknesses of each procedure in each problem case.
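A minimal Python sketch of the kind of procedure the abstract describes (the paper's implementation is in C++): steepest descent on a two-variable test problem, with an Armijo backtracking rule standing in for one of the surveyed step-size procedures. The test function below is an assumption for illustration, not the paper's problem:

```python
def steepest_descent(grad, f, x0, alpha0=1.0, beta=0.5, c=1e-4,
                     tol=1e-8, max_iter=1000):
    """Steepest descent with an Armijo backtracking step size: starting
    from alpha0, shrink alpha by beta until f decreases by at least
    c * alpha * ||grad||^2, then step along the negative gradient."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gnorm2 = sum(gi * gi for gi in g)
        if gnorm2 < tol:
            break
        alpha, fx = alpha0, f(x)
        while f([x[i] - alpha * g[i] for i in range(len(x))]) > fx - c * alpha * gnorm2:
            alpha *= beta
        x = [x[i] - alpha * g[i] for i in range(len(x))]
    return x

# Two-variable quadratic test problem f(x, y) = (x - 1)^2 + 10 y^2,
# whose minimizer is (1, 0).
f = lambda v: (v[0] - 1.0) ** 2 + 10.0 * v[1] ** 2
grad = lambda v: [2.0 * (v[0] - 1.0), 20.0 * v[1]]
print(steepest_descent(grad, f, [5.0, 5.0]))  # converges near [1.0, 0.0]
```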
Dynamics of the Venera 13 and 14 descent modules in the parachute segment of descent
NASA Astrophysics Data System (ADS)
Vishniak, A. A.; Kariagin, V. P.; Kovtunenko, V. M.; Kotov, B. B.; Kuznetsov, V. V.; Lopatkin, A. I.; Perov, O. V.; Pichkhadze, K. M.; Rysev, O. V.
1983-05-01
The parachute system for the Venera 13 and 14 descent modules was designed to assure the prescribed duration of descent in the Venus cloud layer as well as the separation of heat-shield elements from the module. A mathematical model is developed which makes possible a numerical analysis of the dynamics of the module-parachute system with allowance for parachute inertia, atmospheric turbulence, the means by which the parachute is attached to the module, and the elasticity and damping of the suspended system. A formula is derived for determining the period of oscillations of the module in the parachute segment of descent. A comparison of theoretical and experimental results shows that this formula can be used in the design calculations, especially at the early stage of module development.
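The abstract does not reproduce the derived period formula. For orientation only (an assumption, not the paper's result): treating the suspended module as a rigid pendulum of mass $m$ and moment of inertia $I$ about the suspension point, with center of mass a distance $d$ below it in local gravity $g$, the classical small-oscillation period that such a formula would generalize is

```latex
T = 2\pi \sqrt{\frac{I}{m g d}}
```

The paper's formula additionally accounts for parachute inertia and the elasticity and damping of the suspension, which this classical expression omits.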
Automation for Accommodating Fuel-Efficient Descents in Constrained Airspace
NASA Technical Reports Server (NTRS)
Coppenbarger, Richard A.
2010-01-01
Continuous descents at low engine power are desired to reduce fuel consumption, emissions and noise during arrival operations. The challenge is to allow airplanes to fly these types of efficient descents without interruption during busy traffic conditions. During busy conditions today, airplanes are commonly forced to fly inefficient, step-down descents as air traffic controllers work to ensure separation and maximize throughput. NASA, in collaboration with government and industry partners, is developing new automation to help controllers accommodate continuous descents in the presence of complex traffic and airspace constraints. This automation relies on accurate trajectory predictions to compute strategic maneuver advisories. The talk will describe the concept behind this new automation and provide an overview of the simulations and flight testing used to develop and refine its underlying technology.
Tracer-Based Determination of Vortex Descent in the 1999-2000 Arctic Winter
NASA Technical Reports Server (NTRS)
Greenblatt, Jeffery B.; Jost, Hans-Juerg; Loewenstein, Max; Podolske, James R.; Hurst, Dale F.; Elkins, James W.; Schauffler, Sue M.; Atlas, Elliot L.; Herman, Robert L.; Webster, Christopher R.
2001-01-01
A detailed analysis of available in situ and remotely sensed N2O and CH4 data measured in the 1999-2000 winter Arctic vortex has been performed in order to quantify the temporal evolution of vortex descent. Differences in potential temperature (theta) among balloon and aircraft vertical profiles (an average of 19-23 K on a given N2O or CH4 isopleth) indicated significant vortex inhomogeneity in late fall as compared with late winter profiles. A composite fall vortex profile was constructed for November 26, 1999, whose error bars encompassed the observed variability. High-latitude, extravortex profiles measured in different years and seasons revealed substantial variability in N2O and CH4 on theta surfaces, but all were clearly distinguishable from the first vortex profiles measured in late fall 1999. From these extravortex-vortex differences, we inferred descent prior to November 26: 397+/-15 K (1sigma) at 30 ppbv N2O and 640 ppbv CH4, and 28+/-13 K above 200 ppbv N2O and 1280 ppbv CH4. Changes in theta were determined on five N2O and CH4 isopleths from November 26 through March 12, and descent rates were calculated on each N2O isopleth for several time intervals. The maximum descent rates were seen between November 26 and January 27: 0.82+/-0.20 K/day averaged over 50-250 ppbv N2O. By late winter (February 26-March 12), the average rate had decreased to 0.10+/-0.25 K/day. Descent rates also decreased with increasing N2O; the winter average (November 26-March 5) descent rate varied from 0.75+/-0.10 K/day at 50 ppbv to 0.40+/-0.11 K/day at 250 ppbv. Comparison of these results with observations and models of descent in prior years showed very good overall agreement. Two models of the 1999-2000 vortex descent, SLIMCAT and REPROBUS, despite theta offsets with respect to observed profiles of up to 20 K on most tracer isopleths, produced descent rates that agreed very favorably with the inferred rates from observation.
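The descent rates quoted above are average changes in potential temperature per day on a tracer isopleth. The arithmetic is a simple quotient; in the toy computation below the 51 K figure is illustrative, chosen only to reproduce the reported 0.82 K/day scale over the November 26 to January 27 interval:

```python
from datetime import date

def descent_rate(theta_start, theta_end, d0, d1):
    """Average vortex descent rate in K/day: the drop in potential
    temperature on a tracer isopleth divided by the elapsed days."""
    return (theta_start - theta_end) / (d1 - d0).days

# An isopleth descending 51 K over the 62 days between Nov 26 and
# Jan 27 gives about 0.82 K/day, the scale reported above.
rate = descent_rate(500.0, 449.0, date(1999, 11, 26), date(2000, 1, 27))
print(round(rate, 2))  # → 0.82
```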
Aliberti, Sandra; Mezêncio, Bruno; Amadio, Alberto Carlos; Serrão, Julio Cerca; Mochizuki, Luis
2018-05-23
Knee pain during stair negotiation is a common complaint among individuals with patellofemoral pain (PFP) and can negatively affect their activities of daily living. Gait modification programs can be used to decrease patellofemoral pain. This study describes the immediate effects of a distal gait modification session intended to emphasize forefoot landing during stair descent. To analyze the immediate effects of a distal gait modification session on lower extremity movements and intensity of pain in women with patellofemoral pain during stair descent. Nonrandomized controlled trial. Sixteen women with patellofemoral pain were allocated into two groups: (1) Gait Modification Group (n = 8); and (2) Control Group (n = 8). The intensity of pain (visual analog scale) and kinematics of the knee, ankle, and forefoot (multi-segmental foot model) during stair descent were assessed before and after the intervention. After the gait modification session, there was an increase in forefoot eversion and ankle plantarflexion as well as a decrease in knee flexion. An immediate decrease in patellofemoral pain intensity during stair descent was also observed. The distal gait modification session changed the lower extremity kinetic chain strategy of movement, increasing foot and ankle movement contribution and decreasing knee contribution to the task. Emphasizing forefoot landing may be a useful intervention to immediately relieve pain in patients with patellofemoral pain during stair descent. Clinical studies are needed to verify the effects of the gait modification session in the medium and long term.
Fazio, Massimo A.; Grytz, Rafael; Morris, Jeffrey S.; Bruno, Luigi; Girkin, Christopher A.; Downs, J. Crawford
2014-01-01
Purpose. We tested the hypothesis that the variation of peripapillary scleral structural stiffness with age is different in donors of European (ED) and African (AD) descent. Methods. Posterior scleral shells from normal eyes from donors of European (n = 20 pairs; previously reported) and African (n = 9 pairs) descent, aged between 0 and 90 years, were inflation tested within 48 hours post mortem. Scleral shells were pressurized from 5 to 45 mm Hg and the full-field, 3-dimensional (3D) deformation of the outer surface was recorded at submicrometric accuracy using speckle interferometry (ESPI). Mean maximum principal (tensile) strain of the peripapillary and midperipheral regions surrounding the optic nerve head (ONH) was fit using a functional mixed effects model that accounts for intradonor variability, same-race correlation, and spatial autocorrelation to estimate the effect of race on the age-related changes in mechanical scleral strain. Results. Mechanical tensile strain significantly decreased with age in the peripapillary sclera in the African and European descent groups (P < 0.001), but the age-related stiffening was significantly greater in the African descent group (P < 0.05). Maximum principal strain in the peripapillary sclera was significantly higher than in the midperipheral sclera for both ethnic groups. Conclusions. The sclera surrounding the ONH stiffens more rapidly with age in the African descent group compared to the European group. Stiffening of the peripapillary sclera with age may be related to the higher prevalence of glaucoma in the elderly and persons of African descent. PMID:25237162
Fazio, Massimo A; Grytz, Rafael; Morris, Jeffrey S; Bruno, Luigi; Girkin, Christopher A; Downs, J Crawford
2014-09-18
We tested the hypothesis that the variation of peripapillary scleral structural stiffness with age is different in donors of European (ED) and African (AD) descent. Posterior scleral shells from normal eyes from donors of European (n = 20 pairs; previously reported) and African (n = 9 pairs) descent, aged between 0 and 90 years, were inflation tested within 48 hours post mortem. Scleral shells were pressurized from 5 to 45 mm Hg and the full-field, 3-dimensional (3D) deformation of the outer surface was recorded at submicrometric accuracy using speckle interferometry (ESPI). Mean maximum principal (tensile) strain of the peripapillary and midperipheral regions surrounding the optic nerve head (ONH) was fit using a functional mixed effects model that accounts for intradonor variability, same-race correlation, and spatial autocorrelation to estimate the effect of race on the age-related changes in mechanical scleral strain. Mechanical tensile strain significantly decreased with age in the peripapillary sclera in the African and European descent groups (P < 0.001), but the age-related stiffening was significantly greater in the African descent group (P < 0.05). Maximum principal strain in the peripapillary sclera was significantly higher than in the midperipheral sclera for both ethnic groups. The sclera surrounding the ONH stiffens more rapidly with age in the African descent group compared to the European group. Stiffening of the peripapillary sclera with age may be related to the higher prevalence of glaucoma in the elderly and persons of African descent. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Latin American Immigrant Women and Intergenerational Sex Education
ERIC Educational Resources Information Center
Alcalde, Maria Cristina; Quelopana, Ana Maria
2013-01-01
People of Latin American descent make up the largest and fastest-growing minority group in the USA. Rates of pregnancy, childbirth, and sexually transmitted infections among people of Latin American descent are higher than among other ethnic groups. This paper builds on research that suggests that among families of Latin American descent, mothers…
Analysis of foot clearance in firefighters during ascent and descent of stairs.
Kesler, Richard M; Horn, Gavin P; Rosengren, Karl S; Hsiao-Wecksler, Elizabeth T
2016-01-01
Slips, trips, and falls are a leading cause of injury to firefighters with many injuries occurring while traversing stairs, possibly exacerbated by acute fatigue from firefighting activities and/or asymmetric load carriage. This study examined the effects that fatigue, induced by simulated firefighting activities, and hose load carriage have on foot clearance while traversing stairs. Landing and passing foot clearances for each stair during ascent and descent of a short staircase were investigated. Clearances decreased significantly (p < 0.05) post-exercise for nine of 12 ascent parameters and increased for two of eight descent parameters. Load carriage resulted in significantly decreased (p < 0.05) clearance over three ascent parameters, and one increase during descent. Decreased clearances during ascent caused by fatigue or load carriage may result in an increased trip risk. Increased clearances during descent may suggest use of a compensation strategy to ensure stair clearance or an increased risk of over-stepping during descent. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Toward a Caribbean psychology: an African-centered approach.
Sutherland, Marcia Elizabeth
2011-01-01
Although the Americas and Caribbean region are purported to comprise different ethnic groups, this article’s focus is on people of African descent, who represent the largest ethnic group in many countries. The emphasis on people of African descent is related to their family structure, ethnic identity, cultural, psychohistorical, and contemporary psychosocial realities. This article discusses the limitations of Western psychology for theory, research, and applied work on people of African descent in the Americas and Caribbean region. In view of the adaptations that some people of African descent have made to slavery, colonialism, and more contemporary forms of cultural intrusions, it is argued that when necessary, notwithstanding Western psychology’s limitations, Caribbean psychologists should reconstruct mainstream psychology to address the psychological needs of these Caribbean people. The relationship between theory and psychological interventions for the optimal development of people of African descent is emphasized throughout this article. In this regard, the African-centered and constructionist viewpoint is argued to be of utility in addressing the psychological growth and development of people of African descent living in the Americas and Caribbean region.
Ascent/descent ancillary data production user's guide
NASA Technical Reports Server (NTRS)
Brans, H. R.; Seacord, A. W., II; Ulmer, J. W.
1986-01-01
The Ascent/Descent Ancillary Data Product, also called the A/D BET because it contains a Best Estimate of the Trajectory (BET), is a collection of trajectory, attitude, and atmospheric related parameters computed for the ascent and descent phases of each Shuttle Mission. These computations are executed shortly after the event in a post-flight environment. A collection of several routines including some stand-alone routines constitute what is called the Ascent/Descent Ancillary Data Production Program. A User's Guide for that program is given. It is intended to provide the reader with all the information necessary to generate an Ascent or a Descent Ancillary Data Product. It includes descriptions of the input data and output data for each routine, and contains explicit instructions on how to run each routine. A description of the final output product is given.
Time-specific androgen blockade with flutamide inhibits testicular descent in the rat.
Husmann, D A; McPhaul, M J
1991-09-01
Inhibition of androgen action by flutamide, a nonsteroidal antiandrogen, blocked testicular descent in 40% of the testes exposed to this agent continuously from gestational day 13 through postpartal day 28. By contrast, only 11% of the testes failed to descend when blocked by 5 alpha-reductase inhibitors during the same period. Flutamide administration over narrower time intervals (gestational day 13-15, 16-17, or 18-19) revealed maximal interference with testicular descent after androgen inhibition during gestational days 16-17. No significant differences in testicular or epididymal weights were evident between descended and undescended testes; furthermore, no correlation was detected between the presence of epididymal abnormalities and testicular descent. These findings indicate that androgen inhibition during a brief period of embryonic development can block testicular descent. The mechanism through which this inhibition occurs remains to be elucidated.
A conflict analysis of 4D descent strategies in a metered, multiple-arrival route environment
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Harris, C. S.
1990-01-01
A conflict analysis was performed on multiple arrival traffic at a typical metered airport. The Flow Management Evaluation Model (FMEM) was used to simulate arrival operations using Denver Stapleton's arrival route structure. Sensitivities of conflict performance to three different 4-D descent strategies (clean-idle Mach/constant airspeed (CAS), constant descent angle Mach/CAS, and energy optimal) were examined for three traffic mixes represented by those found at Denver Stapleton, John F. Kennedy and typical en route metering (ERM) airports. The Monte Carlo technique was used to generate simulation entry point times. Analysis results indicate that the clean-idle descent strategy offers the best compromise in overall performance. Performance measures primarily include susceptibility to conflict and conflict severity. Fuel usage performance is extrapolated from previous descent strategy studies.
Analysis of Flight Management System Predictions of Idle-Thrust Descents
NASA Technical Reports Server (NTRS)
Stell, Laurel
2010-01-01
To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the predictor and its uncertainty models, descents from cruise to the meter fix were executed using vertical navigation in a B737-700 simulator and a B777-200 simulator, both with commercial FMSs. For both aircraft types, the FMS computed the intended descent path for a specified speed profile assuming idle thrust after top of descent (TOD), and then it controlled the avionics without human intervention. The test matrix varied aircraft weight, descent speed, and wind conditions. The first analysis in this paper determined the effect of the test matrix parameters on the FMS computation of TOD location, and it compared the results to those for the current ground predictor in the Efficient Descent Advisor (EDA). The second analysis was similar but considered the time to fly a specified distance to the meter fix. The effects of the test matrix variables together with the accuracy requirements for the predictor will determine the allowable error for the predictor inputs. For the B737, the EDA prediction of meter fix crossing time agreed well with the FMS; but its prediction of TOD location probably was not sufficiently accurate to enable idle-thrust descents in congested airspace, even though the FMS and EDA gave similar shapes for TOD location as a function of the test matrix variables. For the B777, the FMS and EDA gave different shapes for the TOD location function, and the EDA prediction of the TOD location is not accurate enough to fully enable the concept. Furthermore, the differences between the FMS and EDA predictions of meter fix crossing time for the B777 indicated that at least one of them was not sufficiently accurate.
Rotary Wing Deceleration Use on Titan
NASA Technical Reports Server (NTRS)
Young, Larry A.; Steiner, Ted J.
2011-01-01
Rotary wing decelerator (RWD) systems were compared against other methods of atmospheric deceleration and were determined to show significant potential for application to a system requiring controlled descent, low-velocity landing, and atmospheric research capability on Titan. Design space exploration and down-selection resulted in a system with a single rotor utilizing cyclic pitch control. Models were developed for selection of an RWD descent system for use on Titan and to determine the relationships between the key design parameters of such a system and the time of descent. The possibility of extracting power from the system during descent was also investigated.
Potential applications of skip SMV with thrust engine
NASA Astrophysics Data System (ADS)
Wang, Weilin; Savvaris, Al
2016-11-01
This paper investigates the potential applications of Space Maneuver Vehicles (SMV) with skip trajectories. With space operations soaring over the past decades, the risks posed by space debris have increased considerably, including collisions with space assets, with property on the ground, and even with aviation. Many active debris removal methods have been investigated; in this paper, a debris remediation method based on a skip SMV is proposed for the first time. The key point is to perform a controlled re-entry: these vehicles are expected to achieve a trans-atmospheric maneuver using a thrust engine. If debris is released at an altitude below 80 km, it can be captured by atmospheric drag, and the prediction accuracy of the re-entry interface is improved. Moreover, if the debris is released in a cargo container at a much lower altitude, this technique protects the high-value space asset from atmospheric breakup and improves landing accuracy. To demonstrate the feasibility of this concept, the present paper presents simulation results for two specific mission profiles: (1) descent to a predetermined altitude; and (2) descent to a predetermined point (altitude, longitude, and latitude). The evolutionary collocation method is adopted for skip-trajectory optimization because of its global optimality and high accuracy. This method is a two-step optimization approach combining a heuristic algorithm with the collocation method: the optimal-control problem is transcribed into a nonlinear programming problem (NLP), which can be solved efficiently and accurately by a sequential quadratic programming (SQP) procedure. However, such a method is sensitive to initial values, so a genetic algorithm (GA) is adopted to refine the grids and provide near-optimum initial values. Comparison of the simulation data from the different scenarios shows that the skip SMV is feasible for active debris removal and that the evolutionary collocation method yields a trustworthy re-entry trajectory satisfying the path and boundary constraints.
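The two-step structure described above (a heuristic global stage seeding a locally convergent but initial-value-sensitive solver) can be sketched generically. The code below is a toy stand-in, with random sampling in place of the GA and plain gradient descent in place of SQP-based collocation; the test function is an assumption for illustration:

```python
import math
import random

def two_step_optimize(f, grad, lo, hi, seeds=400, lr=0.01, iters=500):
    """Two-step scheme: a coarse random global search (a stand-in for
    the GA stage) supplies the initial value, and plain gradient
    descent (a stand-in for the SQP stage) refines it locally."""
    rng = random.Random(0)
    x = min((rng.uniform(lo, hi) for _ in range(seeds)), key=f)
    for _ in range(iters):
        x -= lr * grad(x)
    return x

# Multimodal 1-D test function: a quadratic bowl with a sinusoidal
# ripple, so a local solver alone is sensitive to its starting point.
f = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)
grad = lambda x: 2.0 * (x - 2.0) + 10.0 * math.cos(5.0 * x)
print(two_step_optimize(f, grad, -10.0, 10.0))
```

Without the coarse first stage, the local stage started from an arbitrary point would typically settle in whichever ripple basin it happened to begin in, which is the sensitivity problem the GA stage mitigates.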
An Impacting Descent Probe for Europa and the Other Galilean Moons of Jupiter
NASA Astrophysics Data System (ADS)
Wurz, P.; Lasi, D.; Thomas, N.; Piazza, D.; Galli, A.; Jutzi, M.; Barabash, S.; Wieser, M.; Magnes, W.; Lammer, H.; Auster, U.; Gurvits, L. I.; Hajdas, W.
2017-08-01
We present a study of an impacting descent probe that increases the science return of a spacecraft orbiting or passing atmosphere-less planetary bodies of the solar system, such as the Galilean moons of Jupiter. The descent probe is a carry-on small spacecraft (<100 kg), deployed by the mother spacecraft, that brings itself onto a collisional trajectory with the targeted planetary body in a simple manner. A possible science payload includes instruments for surface imaging, characterisation of the neutral exosphere, and magnetic field and plasma measurements near the target body down to very low altitudes (~1 km) during the probe's fast (km/s-scale) descent to the surface until impact. The science goals and the concept of operations are discussed with particular reference to Europa, including options for flying through water plumes and after-impact retrieval of very-low-altitude science data. All in all, it is demonstrated that the descent probe has the potential to provide a high science return to a mission at a low extra level of complexity, engineering effort, and risk. This study builds upon earlier studies of a Callisto Descent Probe for the former Europa-Jupiter System Mission of ESA and NASA, and extends them with a detailed assessment of a descent probe designed as an additional science payload for the NASA Europa Mission.
NASA Astrophysics Data System (ADS)
Murrieta Mendoza, Alejandro
Aircraft reference trajectory optimization is an alternative method to reduce fuel consumption, and thus the pollution released into the atmosphere. Reducing fuel consumption is of special importance for two reasons: first, the aeronautical industry is responsible for 2% of the CO2 released into the atmosphere, and second, it reduces flight cost. The aircraft fuel model was obtained from a numerical performance database created and validated by our industrial partner from experimental flight-test data. A new methodology using the numerical database was proposed in this thesis to compute the fuel burn for a given trajectory. Weather parameters such as wind and temperature were taken into account, as they have an important effect on fuel burn. The open-source weather forecast model was provided by Weather Canada. A combination of linear and bilinear interpolations allowed the required weather data to be found. The search space was modelled using different graphs: one graph mapped the flight phases (climb, cruise, and descent), and another mapped the physical space in which the aircraft would fly. The vertical reference trajectory was optimized using the Beam Search algorithm, and a combination of Beam Search with a search-space reduction technique. The trajectory was then optimized simultaneously for the vertical and lateral reference navigation plans, while fulfilling a Required Time of Arrival constraint, using metaheuristic algorithms including the artificial bee colony and ant colony optimization. Results were validated using the FlightSIM® software, a commercial flight management system, an exhaustive search algorithm, and as-flown flights obtained from FlightAware®. All algorithms were able to reduce the fuel burn and flight costs.
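The Beam Search idea used for the vertical profile can be sketched generically: expand all partial paths one layer at a time, but keep only the cheapest few at each stage. The layered-graph toy problem below is an assumption for illustration, not the thesis's flight-phase graph:

```python
def beam_search(layers, cost, beam_width=3):
    """Beam search over a layered graph: at each layer, extend every
    surviving partial path by every node in that layer, then keep only
    the `beam_width` cheapest partial paths. `cost(prev, node)` gives
    the incremental cost of stepping to `node` from `prev` (None at
    the first layer)."""
    beam = [([], 0.0)]  # (path, accumulated cost)
    for layer in layers:
        candidates = []
        for path, c in beam:
            prev = path[-1] if path else None
            for node in layer:
                candidates.append((path + [node], c + cost(prev, node)))
        beam = sorted(candidates, key=lambda pc: pc[1])[:beam_width]
    return beam[0]

# Toy vertical-profile analogue: pick one "altitude" per segment,
# penalising both the altitude itself and large altitude changes.
layers = [[0, 1, 2]] * 4
cost = lambda prev, node: node + (0 if prev is None else abs(node - prev))
path, total = beam_search(layers, cost)
print(path, total)
```

Unlike exhaustive search, the beam discards most partial paths, trading guaranteed optimality for a tractable search, which is why the thesis validates its results against an exhaustive search algorithm.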
Suarez-Kurtz, Guilherme; Fuchshuber-Moraes, Mateus; Struchiner, Claudio J; Parra, Esteban J
2016-08-01
Several algorithms have been proposed to reduce the genotyping effort and cost, while retaining the accuracy of N-acetyltransferase-2 (NAT2) phenotype prediction. Data from the 1000 Genomes (1KG) project and an admixed cohort of Black Brazilians were used to assess the accuracy of NAT2 phenotype prediction using algorithms based on paired single nucleotide polymorphisms (SNPs) (rs1041983 and rs1801280) or a tag SNP (rs1495741). NAT2 haplotypes comprising SNPs rs1801279, rs1041983, rs1801280, rs1799929, rs1799930, rs1208 and rs1799931 were assigned according to the arylamine N-acetyltransferases database. Contingency tables were used to visualize the agreement between the NAT2 acetylator phenotypes assigned on the basis of these haplotypes versus phenotypes inferred by the prediction algorithms. The paired and tag SNP algorithms provided more than 96% agreement with the 7-SNP-derived phenotypes in Europeans, East Asians, South Asians and Admixed Americans, but discordance of phenotype prediction occurred in 30.2 and 24.8% of 1KG Africans and in 14.4 and 18.6% of Black Brazilians, respectively. Paired-SNP misclassification occurs in carriers of NAT2 haplotypes *13A (282T alone), *12B (282T and 803G), *6B (590A alone) and *14A (191A alone), whereas haplotype *14, defined by the 191A allele, is the major culprit of misclassification by the tag SNP. Both the paired SNP and the tag SNP algorithms may be used, with economy of scale, to infer NAT2 acetylator phenotypes, including the ultra-slow phenotype, in the European, East Asian, South Asian and American populations represented in the 1KG cohort. Both algorithms, however, perform poorly in populations of predominantly African descent, including admixed African-Americans, African Caribbeans and Black Brazilians.
Hair Breakage in Patients of African Descent: Role of Dermoscopy
Quaresma, Maria Victória; Martinez Velasco, María Abril; Tosti, Antonella
2015-01-01
Dermoscopy represents a useful technique for the diagnosis and follow-up of hair and scalp disorders. To date, little has been published regarding dermoscopy findings of hair disorders in patients of African descent. This article illustrates how dermoscopy allows fast diagnosis of hair breakage due to intrinsic factors and chemical damage in African descent patients. PMID:27170942
Ethnic Identity and Acculturative Stress as Mediators of Depression in Students of Asian Descent
ERIC Educational Resources Information Center
Lantrip, Crystal; Mazzetti, Francesco; Grasso, Joseph; Gill, Sara; Miller, Janna; Haner, Morgynn; Rude, Stephanie; Awad, Germine
2015-01-01
This study underscored the importance of addressing the well-being of college students of Asian descent, because these students had higher rates of depression and lower positive feelings about their ethnic group compared with students of European descent, as measured by the Affirmation subscale of the Ethnic Identity Scale. Affirmation mediated…
Device for Lowering Mars Science Laboratory Rover to the Surface
NASA Technical Reports Server (NTRS)
2008-01-01
This is hardware for controlling the final lowering of NASA's Mars Science Laboratory rover to the surface of Mars from the spacecraft's hovering, rocket-powered descent stage. The photo shows the bridle device assembly, which is about two-thirds of a meter, or 2 feet, from end to end, and has two main parts. The cylinder on the left is the descent brake. On the right is the bridle assembly, including a spool of nylon and Vectran cords that will be attached to the rover. When pyrotechnic bolts fire to sever the rigid connection between the rover and the descent stage, gravity will pull the tethered rover away from the descent stage. The bridle or tether, attached to three points on the rover, will unspool from the bridle assembly, beginning from the larger-diameter portion of the spool at far right. The rotation rate of the assembly, hence the descent rate of the rover, will be governed by the descent brake. Inside the housing of that brake are gear boxes and banks of mechanical resistors engineered to prevent the bridle from spooling out too quickly or too slowly. The length of the bridle will allow the rover to be lowered about 7.5 meters (25 feet) while still tethered to the descent stage. The Starsys division of SpaceDev Inc., Poway, Calif., provided the descent brake. NASA's Jet Propulsion Laboratory, Pasadena, Calif., built the bridle assembly. Vectran is a product of Kuraray Co. Ltd., Tokyo. JPL, a division of the California Institute of Technology, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington.
Antarctic Polar Descent and Planetary Wave Activity Observed in ISAMS CO from April to July 1992
NASA Technical Reports Server (NTRS)
Allen, D. R.; Stanford, J. L.; Nakamura, N.; Lopez-Valverde, M. A.; Lopez-Puertas, M.; Taylor, F. W.; Remedios, J. J.
2000-01-01
Antarctic polar descent and planetary wave activity in the upper stratosphere and lower mesosphere are observed in ISAMS CO data from April to July 1992. CO-derived mean April-to-May upper stratosphere descent rates of 15 K/day (0.25 km/day) at 60 S and 20 K/day (0.33 km/day) at 80 S are compared with descent rates from diabatic trajectory analyses. At 60 S there is excellent agreement, while at 80 S the trajectory-derived descent is significantly larger in early April. Zonal wavenumber 1 enhancement of CO is observed on 9 and 28 May, coincident with enhanced wave 1 in UKMO geopotential height. The 9 May event extends from 40 to 70 km and shows westward phase tilt with height, while the 28 May event extends from 40 to 50 km and shows virtually no phase tilt with height.
O'Connor, Michelle Y; Thoreson, Caroline K; Ramsey, Natalie L M; Ricks, Madia; Sumner, Anne E
2014-01-01
Vitamin D levels in people of African descent are often described as inadequate or deficient. Whether low vitamin D levels in people of African descent lead to compromised bone or cardiometabolic health is unknown. Clarity on this issue is essential because if clinically significant vitamin D deficiency is present, vitamin D supplementation is necessary. However, if vitamin D is metabolically sufficient, vitamin D supplementation could be wasteful of scarce resources and even harmful. In this review vitamin D physiology is described with a focus on issues specific to populations of African descent such as the influence of melanin on endogenous vitamin D production and lactose intolerance on the willingness of people to ingest vitamin D fortified foods. Then data on the relationship of vitamin D to bone and cardiometabolic health in people of African descent are evaluated. PMID:24267433
Descent Assisted Split Habitat Lunar Lander Concept
NASA Technical Reports Server (NTRS)
Mazanek, Daniel D.; Goodliff, Kandyce; Cornelius, David M.
2008-01-01
The Descent Assisted Split Habitat (DASH) lunar lander concept utilizes a disposable braking stage for descent and a minimally sized pressurized volume for crew transport to and from the lunar surface. The lander can also be configured to perform autonomous cargo missions. Although a braking-stage approach represents a significantly different operational concept compared with a traditional two-stage lander, the DASH lander offers many important benefits. These benefits include improved crew egress/ingress and large-cargo unloading; excellent surface visibility during landing; elimination of the need for deep-throttling descent engines; potentially reduced plume-surface interactions and lower vertical touchdown velocity; and reduced lander gross mass through efficient mass staging and volume segmentation. This paper documents the conceptual study on various aspects of the design, including development of sortie and outpost lander configurations and a mission concept of operations; the initial descent trajectory design; the initial spacecraft sizing estimates and subsystem design; and the identification of technology needs.
Dong, Feihong; Li, Hongjun; Gong, Xiangwu; Liu, Quan; Wang, Jingchao
2015-01-01
A typical application scenario of remote wireless sensor networks (WSNs) is identified as an emergency scenario. One of the greatest design challenges for communications in emergency scenarios is energy-efficient transmission, due to scarce electrical energy in large-scale natural and man-made disasters. Integrated high altitude platform (HAP)/satellite networks are expected to optimally meet emergency communication requirements. In this paper, a novel integrated HAP/satellite (IHS) architecture is proposed, and three segments of the architecture are investigated in detail. The concept of link-state advertisement (LSA) is designed in a slow flat Rician fading channel. The LSA is received and processed by the terminal to estimate the link state information, which can significantly reduce the energy consumption at the terminal end. Furthermore, the transmission power requirements of the HAPs and terminals are derived using the gradient descent and differential equation methods. The energy consumption is modeled at both the source and system level. An innovative and adaptive algorithm is given for the energy-efficient path selection. The simulation results validate the effectiveness of the proposed adaptive algorithm. It is shown that the proposed adaptive algorithm can significantly improve energy efficiency when combined with the LSA and the energy consumption estimation. PMID:26404292
Ring-push metric learning for person reidentification
NASA Astrophysics Data System (ADS)
He, Botao; Yu, Shaohua
2017-05-01
Person reidentification (re-id) has been widely studied because of its extensive use in video surveillance and forensics applications. It aims to search for a specific person across a nonoverlapping camera network, which is highly challenging due to large variations in the cluttered background, human pose, and camera viewpoint. We present a metric learning algorithm for learning a Mahalanobis distance for re-id. Generally speaking, there exist two forces in the conventional metric learning process: a pulling force that pulls points of the same class closer, and a pushing force that pushes points of different classes as far apart as possible. We argue that, when only a limited number of training data are given, forcing interclass distances to be as large as possible may drive the metric to overfit the uninformative part of the images, such as noise and backgrounds. To alleviate overfitting, we propose the ring-push metric learning algorithm. Different from other metric learning methods that only punish too-small interclass distances, the proposed method punishes both too-small and too-large interclass distances. By introducing the generalized logistic function as the loss, we formulate ring-push metric learning as a convex optimization problem and solve it with the projected gradient descent method. Experimental results on four public datasets demonstrate the effectiveness of the proposed algorithm.
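The ring-push idea can be sketched as a projected-gradient update on the Mahalanobis matrix: same-class pairs are pulled together, while different-class distances are pushed into a ring, penalizing both too-small and too-large values. The smooth loss, ring radii and learning rate below are illustrative assumptions, not the paper's settings; the PSD projection is the standard eigenvalue-clipping step.

```python
import numpy as np

def softplus(z, beta=5.0):
    # smooth generalized-logistic-style loss (only its derivative is used below)
    return np.log1p(np.exp(beta * z)) / beta

def dsoftplus(z, beta=5.0):
    # derivative of softplus: a sigmoid, i.e. a soft step function
    return 1.0 / (1.0 + np.exp(-beta * z))

def ring_push_step(M, pairs, labels, r_in=1.0, r_out=4.0, lr=0.01):
    """One projected-gradient step on the Mahalanobis matrix M.
    Same-class pairs are pulled below r_in; different-class pairs are
    pushed INTO the ring [r_in, r_out]: both too-small and too-large
    distances are penalized, unlike hinge-only metric learning."""
    grad = np.zeros_like(M)
    for (x, y), same in zip(pairs, labels):
        d = x - y
        outer = np.outer(d, d)        # gradient of d^T M d with respect to M
        dist2 = d @ M @ d
        if same:
            grad += dsoftplus(dist2 - r_in) * outer
        else:
            grad += (dsoftplus(r_in - dist2) * (-outer)
                     + dsoftplus(dist2 - r_out) * outer)
    M = M - lr * grad
    # project back onto the PSD cone by clipping negative eigenvalues
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None)) @ V.T
```

The projection keeps the learned distance a valid (pseudo-)metric after every step, which is what makes this plain gradient descent a projected one.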
Fan, Bingfei; Li, Qingguo; Liu, Tao
2017-12-28
With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size and lower in cost, which in turn boosts their application in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect attitude and heading estimation for a magnetic and inertial sensor. First, we reviewed four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetic readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyzed the features of the existing methods for each component. Second, to understand the role of each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented: the gradient descent algorithm, the improved explicit complementary filter, the dual-linear Kalman filter and the extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods were readily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing new sensor fusion methods.
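Of the four fusion methods compared, the gradient descent algorithm is the easiest to sketch. The accelerometer-only update below follows the widely used Madgwick-style formulation (a gravity residual and its Jacobian); the step size is an illustrative assumption, and a full filter would blend this correction with integrated gyro rates and a magnetometer term.

```python
import numpy as np

def grav_residual(q, a):
    """Objective for the gradient descent step: rotate the Earth gravity
    direction into the sensor frame with quaternion q = (q0, q1, q2, q3)
    and compare with the normalized accelerometer reading a. A zero
    residual means the estimated attitude agrees with measured gravity."""
    q0, q1, q2, q3 = q
    return np.array([
        2.0 * (q1 * q3 - q0 * q2) - a[0],
        2.0 * (q0 * q1 + q2 * q3) - a[1],
        2.0 * (0.5 - q1 * q1 - q2 * q2) - a[2],
    ])

def gd_attitude_step(q, a, mu=0.1):
    """One gradient descent step on the attitude quaternion (accelerometer
    only). J is the Jacobian of grav_residual with respect to q; the step
    moves q along -J^T f, then renormalizes to keep a unit quaternion."""
    q0, q1, q2, q3 = q
    J = np.array([[-2.0 * q2,  2.0 * q3, -2.0 * q0, 2.0 * q1],
                  [ 2.0 * q1,  2.0 * q0,  2.0 * q3, 2.0 * q2],
                  [ 0.0,      -4.0 * q1, -4.0 * q2, 0.0]])
    q = q - mu * (J.T @ grav_residual(q, a))
    return q / np.linalg.norm(q)
```

Note that yaw is unobservable from gravity alone, which is exactly why heading estimation needs the magnetometer and why magnetic disturbance matters.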
Inverse Flush Air Data System (FADS) for Real Time Simulations
NASA Astrophysics Data System (ADS)
Madhavanpillai, Jayakumar; Dhoaya, Jayanta; Balakrishnan, Vidya Saraswathi; Narayanan, Remesh; Chacko, Finitha Kallely; Narayanan, Shyam Mohan
2017-12-01
The Flush Air Data Sensing (FADS) system forms a mission-critical subsystem in future reentry vehicles. FADS makes use of surface pressure measurements from the nose cap of the vehicle to derive air data parameters such as angle of attack, angle of sideslip and Mach number. These parameters find use in the flight control and guidance systems, and also assist in overall mission management. The FADS under consideration in this paper makes use of nine pressure ports located in the nose cap of a technology demonstrator vehicle. In flight, the air data parameters are obtained from the FADS estimation algorithm using the pressure data at the nine ports. These pressure data, however, are not available for testing the FADS package during ground simulation. Therefore, an inverse counterpart to FADS, which estimates the pressure at each port for a given flight condition, was developed; its output serves as input to the FADS package during ground simulation. The software was run to generate pressure data for the descent-phase trajectory of the technology demonstrator, and these data were used in turn to generate air data parameters from the FADS algorithm. The computed results from the FADS algorithm match the trajectory data well.
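A minimal sketch of such an inverse (pressure-generating) model, assuming the classical FADS port-pressure relation that blends modified Newtonian and potential flow through a calibration parameter `eps`; the port angles, calibration value and flow quantities below are hypothetical, not the demonstrator's actual layout.

```python
import math

def incidence_cos(alpha, beta, theta, phi):
    """Cosine of the local flow-incidence angle at a port located at cone
    angle theta and clock angle phi on the nose cap, for effective angle
    of attack alpha and sideslip beta (all angles in radians)."""
    return (math.cos(alpha) * math.cos(beta) * math.cos(theta)
            + math.sin(beta) * math.sin(phi) * math.sin(theta)
            + math.sin(alpha) * math.cos(beta) * math.cos(phi) * math.sin(theta))

def port_pressure(alpha, beta, qc, p_inf, theta, phi, eps=0.1):
    """Surface pressure predicted at one FADS port:
    p = qc * (cos^2(lam) + eps * sin^2(lam)) + p_inf,
    where qc is impact pressure, p_inf static pressure, and eps the
    (assumed) calibration parameter blending Newtonian and potential flow."""
    c = incidence_cos(alpha, beta, theta, phi)
    return qc * (c * c + eps * (1.0 - c * c)) + p_inf

# Hypothetical flight condition: stagnation port (theta = 0) at zero incidence
p_stag = port_pressure(0.0, 0.0, qc=5000.0, p_inf=1000.0, theta=0.0, phi=0.0)
```

Evaluating this forward model at each of the nine (assumed) port locations along a descent trajectory yields exactly the kind of simulated port-pressure stream the ground test needs; the onboard FADS algorithm then inverts it back to air data parameters.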
Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2006-09-01
This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frames was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction, was less robust but showed high potential, mostly due to a controlled gradient descent in updating the model parameters. Its success rate was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.
Pattern pluralism and the Tree of Life hypothesis
Doolittle, W. Ford; Bapteste, Eric
2007-01-01
Darwin claimed that a unique inclusively hierarchical pattern of relationships between all organisms based on their similarities and differences [the Tree of Life (TOL)] was a fact of nature, for which evolution, and in particular a branching process of descent with modification, was the explanation. However, there is no independent evidence that the natural order is an inclusive hierarchy, and incorporation of prokaryotes into the TOL is especially problematic. The only data sets from which we might construct a universal hierarchy including prokaryotes, the sequences of genes, often disagree and can seldom be proven to agree. Hierarchical structure can always be imposed on or extracted from such data sets by algorithms designed to do so, but at its base the universal TOL rests on an unproven assumption about pattern that, given what we know about process, is unlikely to be broadly true. This is not to say that similarities and differences between organisms are not to be accounted for by evolutionary mechanisms, but descent with modification is only one of these mechanisms, and a single tree-like pattern is not the necessary (or expected) result of their collective operation. Pattern pluralism (the recognition that different evolutionary models and representations of relationships will be appropriate, and true, for different taxa or at different scales or for different purposes) is an attractive alternative to the quixotic pursuit of a single true TOL. PMID:17261804
Length Distributions of Identity by Descent Reveal Fine-Scale Demographic History
Palamara, Pier Francesco; Lencz, Todd; Darvasi, Ariel; Pe’er, Itsik
2012-01-01
Data-driven studies of identity by descent (IBD) were recently enabled by high-resolution genomic data from large cohorts and scalable algorithms for IBD detection. Yet, haplotype sharing currently represents an underutilized source of information for population-genetics research. We present analytical results on the relationship between haplotype sharing across purportedly unrelated individuals and a population’s demographic history. We express the distribution of IBD sharing across pairs of individuals for segments of arbitrary length as a function of the population’s demography, and we derive an inference procedure to reconstruct such demographic history. The accuracy of the proposed reconstruction methodology was extensively tested on simulated data. We applied this methodology to two densely typed data sets: 500 Ashkenazi Jewish (AJ) individuals and 56 Kenyan Maasai (MKK) individuals (HapMap 3 data set). Reconstructing the demographic history of the AJ cohort, we recovered two subsequent population expansions, separated by a severe founder event, consistent with previous analysis of lower-throughput genetic data and historical accounts of AJ history. In the MKK cohort, high levels of cryptic relatedness were detected. The spectrum of IBD sharing is consistent with a demographic model in which several small-sized demes intermix through high migration rates and result in enrichment of shared long-range haplotypes. This scenario of historically structured demographies might explain the unexpected abundance of runs of homozygosity within several populations. PMID:23103233
Orion Entry, Descent, and Landing Simulation
NASA Technical Reports Server (NTRS)
Hoelscher, Brian R.
2007-01-01
The Orion Entry, Descent, and Landing simulation was created over the past two years to serve as the primary Crew Exploration Vehicle guidance, navigation, and control (GN&C) design and analysis tool at the National Aeronautics and Space Administration (NASA). The Advanced NASA Technology Architecture for Exploration Studies (ANTARES) simulation is a six degree-of-freedom tool with a unique design architecture which has a high level of flexibility. This paper describes the decision history and motivations that guided the creation of this simulation tool. The capabilities of the models within ANTARES are presented in detail. Special attention is given to features of the highly flexible GN&C architecture and the details of the implemented GN&C algorithms. ANTARES provides a foundation simulation for the Orion Project that has already been successfully used for requirements analysis, system definition analysis, and preliminary GN&C design analysis. ANTARES will find useful application in engineering analysis, mission operations, crew training, avionics-in-the-loop testing, etc. This paper focuses on the entry simulation aspect of ANTARES, which is part of a bigger simulation package supporting the entire mission profile of the Orion vehicle. The unique aspects of entry GN&C design are covered, including how the simulation is being used for Monte Carlo dispersion analysis and for support of linear stability analysis. Sample simulation output from ANTARES is presented in an appendix.
Descent Equations Starting from High Rank Chern-Simons
NASA Astrophysics Data System (ADS)
Kang, Bei; Pan, Yi; Wu, Ke; Yang, Jie; Yang, Zi-Feng
2018-04-01
In this paper a set of generalized descent equations is proposed. The solutions to these descent equations, labeled by r for any r ≥ 2, r ∈ ℕ, are forms of degrees varying from 0 to 2r − 1. The case of r = 2 is discussed in detail. Supported by National Natural Science Foundation of China under Grant Nos. 11475116, 11401400
Mars Science Laboratory Entry, Descent and Landing System Overview
NASA Technical Reports Server (NTRS)
Steltzner, Adam D.; San Martin, A. Miguel; Rivellini, Tomasso P.; Chen, Allen
2013-01-01
The Mars Science Laboratory project recently placed the Curiosity rover on the surface of Mars. With the success of the landing system, the performance envelope of entry, descent and landing capabilities has been extended beyond the previous state of the art. This paper presents an overview of the MSL entry, descent and landing system design and preliminary flight performance results.
Study of Some Planetary Atmospheres Features by Probe Entry and Descent Simulations
NASA Technical Reports Server (NTRS)
Gil, P. J. S.; Rosa, P. M. B.
2005-01-01
Planetary atmospheres are characterized through their effects on the entry and descent trajectories of probes. Emphasis is on the most important variables that characterize an atmosphere, e.g. the density profile with altitude. Probe trajectories are numerically determined with ENTRAP, a multi-purpose computational tool under development for entry and descent trajectory simulations, capable of taking into account many features and perturbations. Real data from the Mars Pathfinder mission are used. The goal is to determine the atmosphere structure more accurately by observing real trajectories, and to establish what changes to expect in probe descent trajectories if atmospheres have different properties than those assumed initially.
Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization
NASA Technical Reports Server (NTRS)
Pinson, Robin; Lu, Ping
2015-01-01
This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
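A heavily simplified stand-in for one convex subproblem in such a sequence: the sketch below replaces the asteroid gravity field with constant gravity, the fuel (l1-type) objective with a least-squares (l2) surrogate, and drops thrust bounds, leaving a convex problem with a closed-form minimum-norm solution. All numbers are illustrative, not the paper's formulation.

```python
import numpy as np

def min_norm_descent(x0, g=9.81, dt=1.0, N=20):
    """Minimum-norm thrust profile steering a 1-D double integrator
    (altitude, vertical speed) to a soft landing x_N = 0 in N steps.
    The terminal state is linear in the controls,
        x_N = A^N x0 + sum_k A^(N-1-k) (B u_k + d),
    so the convex problem min ||u||_2 s.t. x_N = 0 has the closed-form
    minimum-norm solution u = G^T (G G^T)^-1 r."""
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    d = np.array([-0.5 * g * dt * dt, -g * dt])   # gravity increment per step
    Ak = np.eye(2)
    cols, drift = [], np.zeros(2)
    for k in range(N - 1, -1, -1):
        cols.insert(0, Ak @ B)    # column multiplying u_k is A^(N-1-k) B
        drift += Ak @ d
        Ak = Ak @ A               # after the loop, Ak = A^N
    G = np.column_stack(cols)                 # 2 x N map from controls to x_N
    r = -(Ak @ x0 + drift)                    # correction needed to land at 0
    return G.T @ np.linalg.solve(G @ G.T, r)  # minimum-norm control sequence

def simulate(x0, u, g=9.81, dt=1.0):
    """Forward-simulate the same discrete dynamics under control sequence u."""
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    d = np.array([-0.5 * g * dt * dt, -g * dt])
    x = x0.copy()
    for uk in u:
        x = A @ x + B * uk + d
    return x
```

The paper's actual method adds the nonconvex pieces (irregular gravity, thrust magnitude bounds) back in via relaxation and a sequence of such convex solves; this sketch only shows the innermost convex step.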
NASA aviation safety reporting system
NASA Technical Reports Server (NTRS)
1978-01-01
Reports describing various types of communication problems are presented along with summaries dealing with judgment and decision making. Concerns relating to the ground proximity warning system are summarized and several examples of true terrain proximity warnings are provided. An analytic study of reports relating to profile descents was performed. Problems were found to be associated with charting and graphic presentation of the descents, with lack of uniformity of the descent procedures among facilities using them, and with the flight crew workload engendered by profile descents, particularly when additional requirements are interposed by air traffic control during the execution of the profiles. A selection of alert bulletins and responses to them were reviewed.
Tracer-based Determination of Vortex Descent in the 1999/2000 Arctic Winter
NASA Technical Reports Server (NTRS)
Greenblatt, Jeffrey B.; Jost, Hans-Juerg; Loewenstein, Max; Podolske, James R.; Hurst, Dale F.; Elkins, James W.; Schauffler, Sue M.; Atlas, Elliot L.; Herman, Robert L.; Webster, Chrisotopher R.
2002-01-01
A detailed analysis of available in situ and remotely sensed N2O and CH4 data measured in the 1999/2000 winter Arctic vortex has been performed in order to quantify the temporal evolution of vortex descent. Differences in potential temperature (theta) among balloon and aircraft vertical profiles (an average of 19-23 K on a given N2O or CH4 isopleth) indicated significant vortex inhomogeneity in late fall as compared with late winter profiles. A composite fall vortex profile was constructed for 26 November 1999, whose error bars encompassed the observed variability. High-latitude extravortex profiles measured in different years and seasons revealed substantial variability in N2O and CH4 on theta surfaces, but all were clearly distinguishable from the first vortex profiles measured in late fall 1999. From these extravortex-vortex differences we inferred descent prior to 26 November: as much as 397 ± 15 K (1σ) at 30 ppbv N2O and 640 ppbv CH4, falling to 28 ± 13 K above 200 ppbv N2O and 1280 ppbv CH4. Changes in theta were determined on five N2O and CH4 isopleths from 26 November through 12 March, and descent rates were calculated on each N2O isopleth for several time intervals. The maximum descent rates were seen between 26 November and 27 January: 0.82 ± 0.20 K/day averaged over 50-250 ppbv N2O. By late winter (26 February to 12 March), the average rate had decreased to 0.10 ± 0.25 K/day. Descent rates also decreased with increasing N2O; the winter average (26 November to 5 March) descent rate varied from 0.75 ± 0.10 K/day at 50 ppbv to 0.40 ± 0.11 K/day at 250 ppbv. Comparison of these results with observations and models of descent in prior years showed very good overall agreement. Two models of the 1999/2000 vortex descent, SLIMCAT and REPROBUS, despite theta offsets with respect to observed profiles of up to 20 K on most tracer isopleths, produced descent rates that agreed very favorably with the rates inferred from observation.
Frequency-domain ultrasound waveform tomography breast attenuation imaging
NASA Astrophysics Data System (ADS)
Sandhu, Gursharan Yash Singh; Li, Cuiping; Roy, Olivier; West, Erik; Montgomery, Katelyn; Boone, Michael; Duric, Neb
2016-04-01
Ultrasound waveform tomography techniques have shown promising results for the visualization and characterization of breast disease. By using frequency-domain waveform tomography techniques and a gradient descent algorithm, we have previously reconstructed the sound speed distributions of breasts of varying densities with different types of breast disease, including benign and malignant lesions. By allowing the sound speed to have an imaginary component, we can model the intrinsic attenuation of a medium, and we can similarly recover this imaginary component of the velocity and thus the attenuation. In this paper, we briefly review ultrasound waveform tomography techniques, discuss attenuation and its relation to the imaginary component of the sound speed, and provide both numerical and ex vivo examples of waveform tomography attenuation reconstructions.
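The link between the imaginary sound-speed component and attenuation can be made concrete: for a plane wave exp(i(kx - wt)) with k = w/c, the imaginary part of k sets the exponential amplitude decay. The frequency and complex speed below are illustrative values, not the paper's reconstructions.

```python
import math

def attenuation_np_per_m(omega, c_complex):
    """Attenuation coefficient (Np/m) implied by a complex sound speed.
    For a plane wave exp(i(k x - w t)) with k = w / c, the amplitude
    varies as exp(-Im(k) x); with c = cr + i*ci this gives
    alpha = w * ci / |c|^2, positive when Im(c) > 0 in this convention."""
    k = omega / c_complex
    return -k.imag

# Illustrative values: 1 MHz insonification, tissue-like speed with a
# small (assumed) imaginary part modeling intrinsic attenuation
alpha_np = attenuation_np_per_m(2.0 * math.pi * 1.0e6, 1500.0 + 5.0j)
```

A real-valued sound speed gives zero attenuation, which is why the purely real reconstruction recovers only the velocity map and the imaginary extension is needed for the attenuation image.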
NASA Astrophysics Data System (ADS)
Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian
2017-08-01
In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variation, we derive the gradient of the cost functional with respect to the switching times on an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
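A toy version of the method: a scalar system with one switching time between two pre-specified modes, optimized by gradient descent. The sketch uses a finite-difference surrogate in place of the paper's variational gradient, and the dynamics, horizon and step sizes are all illustrative.

```python
def cost(tau, T=2.0, x0=1.0, dt=1e-3):
    """Integrated state cost J = integral of x(t)^2 over [0, T] for a toy
    two-mode switched system: dx/dt = -x before the switch at tau and
    dx/dt = -3x after it (Euler integration)."""
    x, t, J = x0, 0.0, 0.0
    while t < T:
        J += x * x * dt
        dx = -x if t < tau else -3.0 * x
        x += dx * dt
        t += dt
    return J

def optimize_switch(tau0, lr=0.5, steps=60, h=0.02):
    """Gradient descent on the switching time, using a central finite
    difference as a surrogate for the variational gradient derived in
    the paper; the switch is clipped to stay inside the horizon [0, T]."""
    tau = tau0
    for _ in range(steps):
        g = (cost(tau + h) - cost(tau - h)) / (2.0 * h)
        tau -= lr * g
        tau = min(max(tau, 0.0), 2.0)
    return tau
```

Here the second mode decays faster, so the optimizer drives the switch toward t = 0, a boundary optimum; the same descent loop locates interior optimal switching instants when neither mode dominates.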
An experimental trip to the Calculus of Variations
NASA Astrophysics Data System (ADS)
Arroyo, Josu
2008-04-01
This paper presents a collection of experiments in the Calculus of Variations. The implementation of the Gradient Descent algorithm, built on cubic splines acting as "numerically friendly" elementary functions, gives us ways to solve variational problems by constructing the solution. It adopts a pragmatic point of view: one gets solutions sometimes as fast as possible, sometimes as close as possible to the true solutions. The balance between speed and precision is not always easy to achieve. Starting from the most well-known, classic or historical formulation of a variational problem, section 2 briefly describes the bridge between theoretical and computational formulations. The following sections show the results of several kinds of experiments, from the most basic, such as those about geodesics, to the most complex, such as those about vesicles.
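In the same spirit, a minimal gradient-descent experiment: minimizing the discrete Dirichlet energy of a polyline with fixed endpoints recovers the planar geodesic, a straight line traversed at uniform speed. The paper works with cubic splines; the polyline discretization and step size here are simplifying assumptions.

```python
def minimize_energy(pts, steps=2000, lr=0.1):
    """Gradient descent on the discrete Dirichlet energy
    E = sum |p_{i+1} - p_i|^2 of a 2-D polyline with fixed endpoints.
    The gradient at an interior point is 2*(2 p_i - p_{i-1} - p_{i+1}),
    so each step nudges the point toward the midpoint of its neighbors;
    the minimizer is the straight line, i.e. the planar geodesic."""
    pts = [list(p) for p in pts]
    n = len(pts)
    for _ in range(steps):
        grads = [[0.0, 0.0] for _ in range(n)]
        for i in range(1, n - 1):
            for d in range(2):
                grads[i][d] = 2.0 * (2.0 * pts[i][d] - pts[i - 1][d] - pts[i + 1][d])
        for i in range(1, n - 1):     # endpoints stay fixed (boundary conditions)
            for d in range(2):
                pts[i][d] -= lr * grads[i][d]
    return pts
```

Swapping in a different discrete energy (arc length on a surface, bending energy for vesicle-like curves) changes only the gradient line, which is the pragmatic appeal of the approach.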
The dynamics of plate tectonics and mantle flow: from local to global scales.
Stadler, Georg; Gurnis, Michael; Burstedde, Carsten; Wilcox, Lucas C; Alisic, Laura; Ghattas, Omar
2010-08-27
Plate tectonics is regulated by driving and resisting forces concentrated at plate boundaries, but observationally constrained high-resolution models of global mantle flow remain a computational challenge. We capitalized on advances in adaptive mesh refinement algorithms on parallel computers to simulate global mantle flow by incorporating plate motions, with individual plate margins resolved down to a scale of 1 kilometer. Back-arc extension and slab rollback are emergent consequences of slab descent in the upper mantle. Cold thermal anomalies within the lower mantle couple into oceanic plates through narrow high-viscosity slabs, altering the velocity of oceanic plates. Viscous dissipation within the bending lithosphere at trenches amounts to approximately 5 to 20% of the total dissipation through the entire lithosphere and mantle.
Information transfer satellite concept study. Volume 4: computer manual
NASA Technical Reports Server (NTRS)
Bergin, P.; Kincade, C.; Kurpiewski, D.; Leinhaupel, F.; Millican, F.; Onstad, R.
1971-01-01
The Satellite Telecommunications Analysis and Modeling Program (STAMP) provides the user with a flexible and comprehensive tool for the analysis of ITS system requirements. While obtaining minimum cost design points, the program enables the user to perform studies over a wide range of user requirements and parametric demands. The program utilizes a total system approach wherein the ground uplink and downlink, the spacecraft, and the launch vehicle are simultaneously synthesized. A steepest descent algorithm is employed to determine the minimum total system cost design subject to the fixed user requirements and imposed constraints. In the process of converging to the solution, the pertinent subsystem tradeoffs are resolved. This report documents STAMP through a technical analysis and a description of the principal techniques employed in the program.
Quantitative characterization of turbidity by radiative transfer based reflectance imaging
Tian, Peng; Chen, Cheng; Jin, Jiahong; Hong, Heng; Lu, Jun Q.; Hu, Xin-Hua
2018-01-01
A new, noncontact approach of multispectral reflectance imaging has been developed to inversely determine the absorption coefficient μa, the scattering coefficient μs and the anisotropy factor g of a turbid target from one measured reflectance image. The incident beam was profiled with a diffuse reflectance standard for deriving both measured and calculated reflectance images. A GPU-implemented Monte Carlo code was developed to determine the parameters with a conjugate gradient descent algorithm, and the existence of unique solutions was shown. We noninvasively determined embedded-region thickness in heterogeneous targets and estimated in vivo optical parameters of nevi from 4 patients between 500 and 950 nm for melanoma diagnosis to demonstrate the potential of quantitative reflectance imaging. PMID:29760971
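The conjugate gradient machinery can be illustrated on its simplest case. This is a hedged sketch, not the authors' GPU Monte Carlo inversion: linear conjugate gradients minimising the quadratic misfit f(x) = ½xᵀAx − bᵀx, i.e. solving Ax = b for a symmetric positive-definite A. The nonlinear variant of the same iteration underlies fitting (μa, μs, g) to a measured image.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # symmetric positive-definite system matrix
b = rng.standard_normal(5)

x = np.zeros(5)
r = b - A @ x                      # residual = negative gradient of f
d = r.copy()                       # first search direction
for _ in range(5):                 # exact in at most n = 5 steps
    Ad = A @ d
    alpha = (r @ r) / (d @ Ad)     # exact line search along d
    x += alpha * d
    r_new = r - alpha * Ad
    beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves style update
    d = r_new + beta * d
    r = r_new

assert np.allclose(A @ x, b, atol=1e-8)
```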
Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions
Liu, Weidong; Luo, Xi
2014-01-01
This paper proposes a new method for estimating sparse precision matrices in the high-dimensional setting. Fast computation and adaptive procedures for this problem have attracted considerable interest. We propose a novel approach, called the Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross-validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. The method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463
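The coordinate descent engine can be sketched on the closely related lasso problem. This is an illustration only: the paper's column-wise programme has a different objective, but shares the soft-thresholding coordinate update shown here.

```python
import numpy as np

# Cyclic coordinate descent for the lasso:
#   min_w  1/(2n) ||y - X w||^2 + lam * ||w||_1

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ w                       # running full residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * w[j]         # remove coordinate j from the fit
            rho = X[:, j] @ r / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * w[j]         # put the updated coordinate back
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]           # sparse ground truth
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = lasso_cd(X, y, lam=0.01)

assert abs(w_hat[0] - 2.0) < 0.1        # active coefficients recovered
assert np.all(np.abs(w_hat[3:]) < 0.1)  # inactive tail stays near zero
```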
Spallek, Jacob; Spix, Claudia; Zeeb, Hajo; Kaatsch, Peter; Razum, Oliver
2008-01-01
Background Cancer risks of migrants might differ from risks of the indigenous population due to differences in socioeconomic status, lifestyle, or genetic factors. The aim of this study was to investigate cancer patterns among children of Turkish descent in Germany. Methods We identified cases with Turkish names (as a proxy of Turkish descent) among the 37,259 cases of childhood cancer registered in the German Childhood Cancer Registry (GCCR) during 1980–2005. As it is not possible to obtain reference population data for children of Turkish descent, the distribution of cancer diagnoses was compared between cases of Turkish descent and all remaining (mainly German) cases in the registry, using proportional cancer incidence ratios (PCIRs). Results The overall distribution of cancer diagnoses was similar in the two groups. The PCIRs in three diagnosis groups were increased for cases of Turkish descent: acute non-lymphocytic leukaemia (PCIR 1.23; CI (95%) 1.02–1.47), Hodgkin's disease (1.34; 1.13–1.59) and Non-Hodgkin/Burkitt lymphoma (1.19; 1.02–1.39). Age, sex, and period of diagnosis showed no influence on the distribution of diagnoses. Conclusion No major differences were found in cancer patterns among cases of Turkish descent compared to all other cases in the GCCR. Slightly higher proportions of systemic malignant diseases indicate that analytical studies involving migrants may help investigate the causes of such cancers. PMID:18462495
Zhang, Fan; Liu, Ming; Harper, Stephen; Lee, Michael; Huang, He
2014-07-22
To enable intuitive operation of powered artificial legs, an interface between user and prosthesis that can recognize the user's movement intent is desired. A novel neural-machine interface (NMI) based on neuromuscular-mechanical fusion developed in our previous study has demonstrated great potential to accurately identify the intended movement of transfemoral amputees. However, this interface has not yet been integrated with a powered prosthetic leg for true neural control. This study aimed to report (1) a flexible platform to implement and optimize neural control of a powered lower limb prosthesis and (2) an experimental setup and protocol to evaluate neural prosthesis control on patients with lower limb amputations. First, a platform based on a PC and a visual programming environment was developed to implement the prosthesis control algorithms, including the NMI training algorithm, the NMI online testing algorithm, and the intrinsic control algorithm. To demonstrate the function of this platform, the NMI based on neuromuscular-mechanical fusion was hierarchically integrated with intrinsic control of a prototypical transfemoral prosthesis. One patient with a unilateral transfemoral amputation was recruited to evaluate the implemented neural controller while continuously performing activities in the laboratory, such as standing, level-ground walking, ramp ascent, and ramp descent. A novel experimental setup and protocol were developed in order to test the new prosthesis control safely and efficiently. The presented proof-of-concept platform, experimental setup, and protocol could aid the future development and application of neurally controlled powered artificial legs.
NASA Astrophysics Data System (ADS)
Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza
2017-07-01
In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, have been used for the determination of antihistamine decongestant contents. In the first step, a feed-forward back-propagation network was trained with two different algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), and their performance was evaluated; the LM algorithm performed better than the GDX algorithm. In the second step, a radial basis network was utilized and the results were compared with those of the previous network. In the last step, least squares support vector machines were used to construct the antihistamine decongestant prediction model, and the results were compared with those of the two aforementioned networks. The statistical parameters mean square error (MSE), regression coefficient (R2), correlation coefficient (r), mean recovery (%), and relative standard deviation (RSD) were used to select the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the results of the suggested and reference methods, showed that there were no significant differences between them.
The Role of la Familia for Women of Mexican Descent Who Are Leaders in Higher Education
ERIC Educational Resources Information Center
Elizondo, Sandra Gray
2012-01-01
The purpose of this qualitative case study was to describe the role of "la familia" for women of Mexican descent as it relates to their development as leaders and their leadership in academia. Purposeful sampling was utilized to reach the goal of 18 participants who were female academic leaders of Mexican descent teaching full time in…
G-Guidance Interface Design for Small Body Mission Simulation
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Carson, John; Phan, Linh
2008-01-01
The G-Guidance software implements a guidance and control (G and C) algorithm for small-body, autonomous proximity operations, developed under the Small Body GN and C task at JPL. The software is written in Matlab and interfaces with G-OPT, a JPL-developed optimization package written in C that provides G-Guidance with guaranteed convergence to a solution in a finite computation time with a prescribed accuracy. The resulting program is computationally efficient and is a prototype of an onboard, real-time algorithm for autonomous guidance and control. Two thruster firing schemes are available in G-Guidance, allowing tailoring of the software for specific mission maneuvers. For example, descent, landing, or rendezvous benefit from a thruster firing at the maneuver termination to mitigate velocity errors. Conversely, ascent or separation maneuvers benefit from an immediate firing to avoid potential drift toward a second body. The guidance portion of this software explicitly enforces user-defined control constraints and thruster silence times while minimizing total fuel usage. This program is currently specialized to small-body proximity operations, but the underlying method can be generalized to other applications.
NASA Astrophysics Data System (ADS)
Li, Jia; Wang, Qiang; Yan, Wenjie; Shen, Yi
2015-12-01
Cooperative spectrum sensing exploits spatial diversity to improve the detection of occupied channels in cognitive radio networks (CRNs). Cooperative compressive spectrum sensing (CCSS), which exploits the sparsity of channel occupancy, further improves efficiency by reducing the number of reports without degrading detection performance. In this paper, we first propose multi-candidate orthogonal matrix matching pursuit (MOMMP) algorithms to efficiently and effectively detect occupied channels at the fusion center (FC), where multi-candidate identification and orthogonal projection are used, respectively, to reduce the number of required iterations and to improve the probability of exact identification. Second, two common but different approaches, based on a threshold and on the Gaussian distribution, are introduced to realize the multi-candidate identification. Moreover, to improve detection accuracy and energy efficiency, we propose the matrix construction based on shrinkage and gradient descent (MCSGD) algorithm, which provides a deterministic filter coefficient matrix of low t-average coherence. Finally, several numerical simulations validate that our proposals provide satisfactory performance, with a higher probability of detection, a lower probability of false alarm and less detection time.
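The baseline that MOMMP extends, plain orthogonal matching pursuit, can be sketched briefly. This is the generic textbook algorithm, not the multi-candidate variant proposed in the paper: each iteration picks the dictionary column most correlated with the residual, then orthogonally projects onto the selected support.

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A x by orthogonal matching pursuit."""
    support, r = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))    # identification step
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s            # orthogonal projection residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 200))
A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary columns
x_true = np.zeros(200)
x_true[[5, 17, 60]] = [1.0, -2.0, 1.5]         # 3-sparse occupancy vector
y = A @ x_true                                 # noiseless measurements
x_hat = omp(A, y, k=3)

assert np.allclose(x_hat, x_true, atol=1e-6)
```

The multi-candidate idea in the paper keeps several promising columns per iteration rather than the single argmax, trading a larger per-step candidate set for fewer iterations.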
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and leads to a very substantial improvement in the complexity of the optimization problems that can be handled efficiently.
Visual navigation using edge curve matching for pinpoint planetary landing
NASA Astrophysics Data System (ADS)
Cui, Pingyuan; Gao, Xizhen; Zhu, Shengying; Shao, Wei
2018-05-01
Pinpoint landing is challenging for future Mars and asteroid exploration missions. Vision-based navigation schemes based on feature detection and matching are practical and can achieve the required precision. However, existing algorithms are computationally prohibitive and rely on poor-performance measurements, which poses great challenges for the application of visual navigation. This paper proposes an innovative visual navigation scheme using crater edge curves during the descent and landing phase. In the algorithm, the edge curves of craters tracked across two sequential images are used to determine the relative attitude and position of the lander through a normalized method. Then, to counter the error accumulation of relative navigation, the crater-based relative navigation method is integrated with a crater-based absolute navigation method that identifies craters using a georeferenced database, providing continuous estimation of absolute states. In addition, expressions for the relative state estimate bias are derived. Novel necessary and sufficient observability criteria based on error analysis are provided to improve navigation performance, and these hold true for similar navigation systems. Simulation results demonstrate the effectiveness and high accuracy of the proposed navigation method.
A hybrid neural network model for noisy data regression.
Lee, Eric W M; Lim, Chee Peng; Yuen, Richard K K; Lo, S M
2004-04-01
A hybrid neural network model, based on the fusion of fuzzy adaptive resonance theory (fuzzy ART) and the general regression neural network (GRNN), is proposed in this paper. Both fuzzy ART (FA) and the GRNN are incremental learning systems and are very fast in network training. The proposed hybrid model, denoted GRNNFA, retains these advantages and, at the same time, reduces the computational requirements for calculating and storing information about the kernels. A clustering version of the GRNN is designed, with data compression by FA for noise removal. An adaptive gradient-based kernel width optimization algorithm has also been devised. Convergence of the gradient descent algorithm can be accelerated by geometric incremental growth of the updating factor. A series of experiments with four benchmark datasets has been conducted to assess and compare the effectiveness of GRNNFA with other approaches. The GRNNFA model is also employed in a novel application task of predicting the evacuation time of patrons at typical karaoke centers in Hong Kong in the event of fire. The results positively demonstrate the applicability of GRNNFA to noisy data regression problems.
Optimal block cosine transform image coding for noisy channels
NASA Technical Reports Server (NTRS)
Vaishampayan, V.; Farvardin, N.
1986-01-01
The two-dimensional block transform coding scheme based on the discrete cosine transform has been studied extensively for image coding applications. While this scheme has proven to be efficient in the absence of channel errors, its performance degrades rapidly over noisy channels. A method is presented for the joint source-channel coding optimization of a scheme based on the 2-D block cosine transform when the output of the encoder is transmitted over a memoryless noisy channel; it involves the design of the quantizers used for encoding the transform coefficients. This algorithm produces a set of locally optimum quantizers and the corresponding binary code assignment for the assumed transform coefficient statistics. To determine the optimum bit assignment among the transform coefficients, an algorithm based on the steepest descent method was used, which, under certain convexity conditions on the performance of the channel-optimized quantizers, yields the optimal bit allocation. Comprehensive simulation results for the performance of this locally optimum system over noisy channels were obtained, and appropriate comparisons against a reference system designed for no channel errors were made.
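The steepest-descent bit-assignment idea can be sketched with the standard high-rate quantiser model. This is an illustration under an idealized error-free-channel assumption, D_i(b) = σ_i² · 4^(−b), not the paper's channel-optimized performance curves: each bit goes greedily to the coefficient whose distortion would drop the most.

```python
import numpy as np

def allocate_bits(variances, total_bits):
    """Greedy (steepest-descent-style) bit allocation among coefficients."""
    b = np.zeros(len(variances), dtype=int)
    dist = np.asarray(variances, dtype=float)   # current distortion D_i(b_i)
    for _ in range(total_bits):
        gains = dist - dist / 4.0               # distortion drop per extra bit
        i = int(np.argmax(gains))               # steepest descent direction
        b[i] += 1
        dist[i] /= 4.0                          # one more bit: 6 dB lower
    return b

var = [16.0, 4.0, 1.0, 0.25]                    # example coefficient variances
bits = allocate_bits(var, 8)

assert bits.sum() == 8
assert list(bits) == [4, 3, 1, 0]               # more bits to bigger variances
```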
Computational applications of the many-interacting-worlds interpretation of quantum mechanics.
Sturniolo, Simone
2018-05-01
While historically many quantum-mechanical simulations of molecular dynamics have relied on the Born-Oppenheimer approximation to separate electronic and nuclear behavior, recently a great deal of interest has arisen in quantum effects in nuclear dynamics as well. Due to the computational difficulty of solving the Schrödinger equation in full, these effects are often treated with approximate methods. In this paper, we present an algorithm to tackle these problems using an extension to the many-interacting-worlds approach to quantum mechanics. This technique uses a kernel function to rebuild the probability density, and therefore, in contrast with the approximation presented in the original paper, it can be naturally extended to n-dimensional systems. This opens up the possibility of performing quantum ground-state searches with steepest-descent methods, and it could potentially lead to real-time quantum molecular-dynamics simulations. The behavior of the algorithm is studied in different potentials and numbers of dimensions and compared both to the original approach and to exact Schrödinger equation solutions whenever possible.
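A steepest-descent ground-state search can be illustrated on the smallest possible example. This is a hedged toy version, not the many-interacting-worlds kernel method of the paper: minimise the energy expectation E(α) of a Gaussian trial wavefunction exp(−αx²/2) for the 1D harmonic oscillator H = −½ d²/dx² + ½ x², where analytically E(α) = α/4 + 1/(4α), minimised at α = 1 with the exact ground-state energy ½.

```python
def energy(alpha):
    # <psi|H|psi> / <psi|psi> for the Gaussian trial state exp(-alpha x^2 / 2)
    return alpha / 4.0 + 1.0 / (4.0 * alpha)

def denergy(alpha):
    # dE/dalpha, used as the steepest-descent direction
    return 0.25 - 0.25 / alpha**2

alpha = 0.5                          # initial trial width
for _ in range(500):
    alpha -= 0.5 * denergy(alpha)    # steepest descent on E(alpha)

assert abs(alpha - 1.0) < 1e-8       # exact variational optimum
assert abs(energy(alpha) - 0.5) < 1e-12
```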
Coordinated control system modelling of ultra-supercritical unit based on a new T-S fuzzy structure.
Hou, Guolian; Du, Huan; Yang, Yu; Huang, Congzhi; Zhang, Jianhua
2018-03-01
The thermal power plant, and especially the ultra-supercritical unit, is characterized by severe nonlinearity and strong multivariable coupling. To deal with these difficulties, it is of great importance to build an accurate yet simple model of the coordinated control system (CCS) in the ultra-supercritical unit. In this paper, an improved T-S fuzzy model identification approach is proposed. First, the k-means++ algorithm is employed to identify the premise parameters and thereby fix the number of fuzzy rules. Then, the local linearized models are determined using the incremental historical data around the cluster centers, obtained via the stochastic gradient descent algorithm with momentum and a variable learning rate. Finally, with the proposed method, the CCS model of a 1000 MW USC unit in the Tai Zhou power plant is developed. The effectiveness of the proposed approach is validated by the extensive simulation results given, and it can further be employed to design overall advanced controllers for the CCS in a USC unit. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
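The optimiser named above can be sketched in isolation. This is a toy sketch only: the plant data and the T-S premise structure are not reproduced; a single hypothetical local linear model y = w·x + b is fitted from streaming samples by stochastic gradient descent with momentum and a decaying (variable) learning rate.

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([1.5, -2.0])        # hypothetical local-model parameters
b_true = 0.5
theta = np.zeros(3)                   # [w1, w2, b]
v = np.zeros(3)                       # momentum buffer
lr, mom = 0.02, 0.9

for step in range(3000):
    x = rng.standard_normal(2)        # one incoming sample
    y = w_true @ x + b_true + 0.01 * rng.standard_normal()
    x1 = np.append(x, 1.0)            # augment input for the bias term
    grad = (theta @ x1 - y) * x1      # gradient of 0.5 * (pred - y)^2
    v = mom * v - lr * grad           # momentum accumulation
    theta += v
    if (step + 1) % 1000 == 0:
        lr *= 0.5                     # variable (decaying) learning rate

assert np.allclose(theta, [1.5, -2.0, 0.5], atol=0.05)
```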
Live Speech Driven Head-and-Eye Motion Generators.
Le, Binh H; Ma, Xiaohan; Deng, Zhigang
2012-11-01
This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and a gradient descent optimization algorithm are employed to generate head motion from speech features; 2) a Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion, and a log-normal distribution is used to describe involuntary eye blinks. Several user studies were conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.
Identification and control of plasma vertical position using neural network in Damavand tokamak.
Rasouli, H; Rasouli, C; Koohi, A
2013-02-01
In this work, a nonlinear model is introduced to determine the vertical position of the plasma column in the Damavand tokamak. Using this model as a simulator, a nonlinear neural network controller has been designed. In the first stage, the electronic drive and sensory circuits of the Damavand tokamak are modified. These circuits can control the vertical position of the plasma column inside the vacuum vessel. Since the vertical position of the plasma is an unstable parameter, a direct closed-loop system identification algorithm is performed. In the second stage, a nonlinear model of the plasma vertical position is identified, based on the multilayer perceptron (MLP) neural network (NN) structure. Estimation of the simulator parameters has been performed by the error back-propagation algorithm using the Levenberg-Marquardt gradient descent optimization technique. The model is verified through simulation of the whole closed-loop system, using both the simulator and the actual plant under similar conditions. In the final stage, an MLP neural network controller is designed for the simulator model. In the last step, online training is performed to tune the controller parameters. Simulation results justify the use of the NN controller for the actual plant.
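The Levenberg-Marquardt step itself is easy to sketch. This is a generic toy example on an exponential fit y = a·exp(bx), not the tokamak MLP training: the damping term lam blends a Gauss-Newton step with a gradient descent step, shrinking on success and growing on failure.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(1.3 * x)             # noiseless synthetic data

p = np.array([1.0, 0.0])              # initial guess for (a, b)
lam = 1e-2                            # damping: Gauss-Newton <-> grad descent
for _ in range(100):
    a, b = p
    r = a * np.exp(b * x) - y         # residuals
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # Jacobian
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    r_new = (a + step[0]) * np.exp((b + step[1]) * x) - y
    if r_new @ r_new < r @ r:
        p, lam = p + step, lam * 0.5  # accept: move toward Gauss-Newton
    else:
        lam *= 2.0                    # reject: more gradient-descent-like

assert np.allclose(p, [2.0, 1.3], atol=1e-5)
```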
Identification and Reconfigurable Control of Impaired Multi-Rotor Drones
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Bencomo, Alfredo
2016-01-01
The paper presents an algorithm for control and safe landing of impaired multi-rotor drones when one or more motors fail simultaneously or in any sequence. It includes three main components: an identification block, a reconfigurable control block, and a decision-making block. The identification block monitors each motor's load characteristics and the current drawn, from which failures are detected. The control block generates the required total thrust and three axis torques for the altitude, horizontal position and/or orientation control of the drone based on time-scale separation and nonlinear dynamic inversion. The horizontal displacement is controlled by modulating the roll and pitch angles. The decision-making algorithm maps the total thrust and three torques into the individual motor thrusts based on the information provided by the identification block. The drone continues the mission execution as long as the functioning motors keep it controllable. Otherwise, the controller is switched to the safe mode, which gives up yaw control and commands a safe landing spot and descent rate while maintaining a horizontal attitude.
NASA Astrophysics Data System (ADS)
Bogoljubova, M. N.; Afonasov, A. I.; Kozlov, B. N.; Shavdurov, D. E.
2018-05-01
A predictive simulation technique for optimal cutting modes in the turning of workpieces made of nickel-based heat-resistant alloys, distinct from well-known approaches, is proposed. The impact of various factors on the cutting process is analyzed with the purpose of determining optimal machining parameters in accordance with given effectiveness criteria. A mathematical model of optimization, algorithms and computer programmes, and visual graphical forms reflecting the dependences of the effectiveness criteria (productivity, net cost, and tool life) on the parameters of the technological process have been worked out. A nonlinear model for multidimensional functions, solution of equations with multiple unknowns, a coordinate descent method and heuristic algorithms are adopted to solve the problem of optimizing the cutting mode parameters. Research shows that in machining of workpieces made from the heat-resistant alloy AISI N07263, the highest possible productivity is achieved with the following parameters: cutting speed v = 22.1 m/min, feed rate s = 0.26 mm/rev, and tool life T = 18 min; net cost – 2.45 per hour.
Surface erosion caused on Mars from Viking descent engine plume
Hutton, R.E.; Moore, H.J.; Scott, R.F.; Shorthill, R.W.; Spitzer, C.R.
1980-01-01
During the Martian landings the descent engine plumes on Viking Lander 1 (VL-1) and Viking Lander 2 (VL-2) eroded the Martian surface materials. This had been anticipated and investigated both analytically and experimentally during the design phase of the Viking spacecraft. This paper presents data on erosion obtained during the tests of the Viking descent engine and the evidence for erosion by the descent engines of VL-1 and VL-2 on Mars. From these and other results, it is concluded that there are four distinct surface materials on Mars: (1) drift material, (2) crusty to cloddy material, (3) blocky material, and (4) rock. © 1980 D. Reidel Publishing Co.
Integrated Targeting and Guidance for Powered Planetary Descent
NASA Astrophysics Data System (ADS)
Azimov, Dilmurat M.; Bishop, Robert H.
2018-02-01
This paper presents an on-board guidance and targeting design that enables explicit state and thrust vector control and on-board targeting for planetary descent and landing. These capabilities are developed utilizing a new closed-form solution for the constant thrust arc of the braking phase of the powered descent trajectory. The key elements of proven targeting and guidance architectures, including braking and approach phase quartics, are employed. It is demonstrated that implementation of the proposed solution avoids numerical simulation iterations, thereby facilitating on-board execution of targeting procedures during the descent. It is shown that the shape of the braking phase constant thrust arc is highly dependent on initial mass and propulsion system parameters. The analytic solution process is explicit in terms of targeting and guidance parameters, while remaining generic with respect to planetary body and descent trajectory design. These features increase the feasibility of extending the proposed integrated targeting and guidance design to future cargo and robotic landing missions.
A molecular signature of an arrest of descent in human parturition
MITTAL, Pooja; ROMERO, Roberto; TARCA, Adi L.; DRAGHICI, Sorin; NHAN-CHANG, Chia-Ling; CHAIWORAPONGSA, Tinnakorn; HOTRA, John; GOMEZ, Ricardo; KUSANOVIC, Juan Pedro; LEE, Deug-Chan; KIM, Chong Jai; HASSAN, Sonia S.
2010-01-01
Objective This study was undertaken to identify the molecular basis of an arrest of descent. Study Design Human myometrium was obtained from women in term labor (TL; n=29) and with an arrest of descent (AODes; n=21). Gene expression was characterized using Illumina® HumanHT-12 microarrays. A moderated t-test and false discovery rate adjustment were applied for analysis. Confirmatory qRT-PCR and immunoblotting were performed in an independent sample set. Results 400 genes were differentially expressed between women with an AODes and those with TL. Gene Ontology analysis indicated enrichment of biological processes and molecular functions related to inflammation and muscle function. Impacted pathways included inflammation and the actin cytoskeleton. Overexpression of HIF1A, IL-6, and PTGS2 in AODes was confirmed. Conclusion We have identified a stereotypic pattern of gene expression in the myometrium of women with an arrest of descent. This represents the first study examining the molecular basis of an arrest of descent using a genome-wide approach. PMID:21284969
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y. M., E-mail: ymingy@gmail.com; Bednarz, B.; Svatos, M.
Purpose: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and “4π” delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space, while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods, by precalculating the fluence to dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of “concurrent” Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces wasted computation time being spent on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner. Methods: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories.
A gradient rescaling and renormalization algorithm, and the concept of momentum from stochastic gradient descent were used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors have applied their method to two simple geometrical phantoms, and one clinical patient geometry to examine the capability of this platform to generate conformal plans as well as assess its computational scaling and efficiency, respectively. Results: The authors obtain a reduction of at least 50% in total histories transported in their investigation compared to a theoretical unweighted beamlet calculation and subsequent fluence optimization method, and observe a roughly fixed optimization time overhead consisting of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible increase in memory overhead of ∼7–8 MB to allow for optimization of a clinical patient geometry surrounded by 36 beams using their platform. Conclusions: This study demonstrates a fluence optimization approach, which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
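The optimisation ingredients named in this abstract (gradient renormalisation plus momentum on very noisy gradient estimates) can be illustrated on a toy quadratic. This is a hedged sketch only: the quadratic objective, the target weights, and the late-iterate averaging are all stand-ins introduced here, not the authors' dose objective or MC scoring.

```python
import numpy as np

rng = np.random.default_rng(4)
target = np.array([3.0, -1.0, 2.0])   # hypothetical optimal beamlet weights

w = np.zeros(3)                       # fluence weights being optimised
v = np.zeros(3)                       # momentum buffer
w_avg = np.zeros(3)                   # late-iterate average (tames the noise)
lr, mom = 0.05, 0.9

for step in range(5000):
    # Gradient of 0.5 * ||w - target||^2 plus heavy noise, standing in for a
    # few-history stochastic estimate.
    noisy_grad = (w - target) + rng.standard_normal(3)
    g = noisy_grad / (np.linalg.norm(noisy_grad) + 1e-12)  # renormalise
    v = mom * v + (1.0 - mom) * g     # momentum smoothing of noisy directions
    w -= lr * v
    if step >= 3000:
        w_avg += w / 2000.0           # average the last 2000 iterates

assert np.linalg.norm(w_avg - target) < 0.5
```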
Svatos, M.; Zankowski, C.; Bednarz, B.
2016-01-01
PMID:27277051
Design principles of descent vehicles with an inflatable braking device
NASA Astrophysics Data System (ADS)
Alexashkin, S. N.; Pichkhadze, K. M.; Finchenko, V. S.
2013-12-01
A new type of descent vehicle (DV) is described: a descent vehicle with an inflatable braking device (IBD DV). IBD development issues, as well as materials needed for the design, manufacturing, and testing of an IBD and its thermal protection, are discussed. A list is given of Russian integrated test facilities intended for testing IBD DVs. Progress is described in the development of IBD DVs in Russia and abroad.
Synonymous ABCA3 Variants Do Not Increase Risk for Neonatal Respiratory Distress Syndrome
Wambach, Jennifer A.; Wegner, Daniel J.; Heins, Hillary B.; Druley, Todd E.; Mitra, Robi D.; Hamvas, Aaron; Cole, F. Sessions
2014-01-01
Objective To determine whether synonymous variants in the adenosine triphosphate-binding cassette A3 transporter (ABCA3) gene increase the risk for neonatal respiratory distress syndrome (RDS) in term and late preterm infants of European and African descent. Study design Using next-generation pooled sequencing of race-stratified DNA samples from infants of European and African descent at ≥34 weeks gestation with and without RDS (n = 503), we scanned all exons of ABCA3, validated each synonymous variant with an independent genotyping platform, and evaluated race-stratified disease risk associated with common synonymous variants and collapsed frequencies of rare synonymous variants. Results The synonymous ABCA3 variant frequency spectrum differs between infants of European descent and those of African descent. Using in silico prediction programs and statistical strategies, we found no potentially disruptive synonymous ABCA3 variants or evidence of selection pressure. Individual common synonymous variants and collapsed frequencies of rare synonymous variants did not increase disease risk in term and late-preterm infants of European or African descent. Conclusion In contrast to rare, nonsynonymous ABCA3 mutations, synonymous ABCA3 variants do not increase the risk for neonatal RDS among term and late-preterm infants of European or African descent. PMID:24657120
Li, Tao; Gao, Liang; Chen, Peng; Bu, Siyuan; Cao, Dehong; Yang, Lu; Wei, Qiang
2016-05-01
To assess the efficacy of intranasal luteinizing hormone-releasing hormone (LHRH) therapy for cryptorchidism. Eligible studies were identified by two reviewers using the PubMed, Embase, and Web of Science databases. Primary outcomes were the complete testicular descent rate, overall and separately for nonpalpable testes and for pre-scrotal and inguinal testes. Secondary outcomes included testicular descent under different medication strategies and a subgroup analysis. Pooled data from 1255 undescended testes showed that the complete testicular descent rate was 20.9 % in the LHRH group versus 5.6 % in the placebo group, a significant difference [relative risk (RR) 3.94, 95 % confidence interval (CI) 2.14-7.28, P < 0.0001]. There was also a significant difference in the incidence of pre-scrotal and inguinal testis descent, with 22.8 % in the LHRH group versus 3.6 % in the placebo group (RR 5.79, 95 % CI 2.94-11.39, P < 0.00001). However, side effects were more frequent in the LHRH group (RR 2.61, 95 % CI 1.52-4.49, P = 0.0005). There were no significant differences for nonpalpable testes. LHRH had significant benefits on testicular descent, particularly for inguinal and pre-scrotal testes, accompanied by temporary mild side effects.
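The pooled effect measures quoted above (a relative risk with a 95 % confidence interval) follow the standard log-relative-risk construction. The counts below are hypothetical, chosen only to show the calculation; the review's actual per-arm counts are not given in the abstract.

```python
import math

def relative_risk(events_t, total_t, events_c, total_c, z=1.96):
    """Relative risk of treatment vs control with a Wald 95% CI on the log scale."""
    rr = (events_t / total_t) / (events_c / total_c)
    # Standard error of log(RR) for two independent binomial samples
    se = math.sqrt(1 / events_t - 1 / total_t + 1 / events_c - 1 / total_c)
    half = z * se
    return rr, math.exp(math.log(rr) - half), math.exp(math.log(rr) + half)

# Hypothetical counts for illustration: 150/718 descended with LHRH
# vs 30/537 with placebo
rr, lo, hi = relative_risk(150, 718, 30, 537)
```

The CI is symmetric on the log scale, which is why published intervals such as 2.14-7.28 around RR 3.94 look asymmetric on the natural scale.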
Rebbeck, Timothy R.; Devesa, Susan S.; Chang, Bao-Li; Bunker, Clareann H.; Cheng, Iona; Cooney, Kathleen; Eeles, Rosalind; Fernandez, Pedro; Giri, Veda N.; Gueye, Serigne M.; Haiman, Christopher A.; Henderson, Brian E.; Heyns, Chris F.; Hu, Jennifer J.; Ingles, Sue Ann; Isaacs, William; Jalloh, Mohamed; John, Esther M.; Kibel, Adam S.; Kidd, LaCreis R.; Layne, Penelope; Leach, Robin J.; Neslund-Dudas, Christine; Okobia, Michael N.; Ostrander, Elaine A.; Park, Jong Y.; Patrick, Alan L.; Phelan, Catherine M.; Ragin, Camille; Roberts, Robin A.; Rybicki, Benjamin A.; Stanford, Janet L.; Strom, Sara; Thompson, Ian M.; Witte, John; Xu, Jianfeng; Yeboah, Edward; Hsing, Ann W.; Zeigler-Johnson, Charnita M.
2013-01-01
Prostate cancer (CaP) is the leading cancer among men of African descent in the USA, Caribbean, and Sub-Saharan Africa (SSA). The estimated number of CaP deaths in SSA during 2008 was more than five times that among African Americans and is expected to double in Africa by 2030. We summarize publicly available CaP data and collected data from the men of African descent and Carcinoma of the Prostate (MADCaP) Consortium and the African Caribbean Cancer Consortium (AC3) to evaluate CaP incidence and mortality in men of African descent worldwide. CaP incidence and mortality are highest in men of African descent in the USA and the Caribbean. Tumor stage and grade were highest in SSA. We report a higher proportion of T1 stage prostate tumors in countries with greater percent gross domestic product spent on health care and physicians per 100,000 persons. We also observed that regions with a higher proportion of advanced tumors reported lower mortality rates. This finding suggests that CaP is underdiagnosed and/or underreported in SSA men. Nonetheless, CaP incidence and mortality represent a significant public health problem in men of African descent around the world. PMID:23476788
Evolutionary analyses of non-genealogical bonds produced by introgressive descent.
Bapteste, Eric; Lopez, Philippe; Bouchard, Frédéric; Baquero, Fernando; McInerney, James O; Burian, Richard M
2012-11-06
All evolutionary biologists are familiar with evolutionary units that evolve by vertical descent in a tree-like fashion in single lineages. However, many other kinds of processes contribute to evolutionary diversity. In vertical descent, the genetic material of a particular evolutionary unit is propagated by replication inside its own lineage. In what we call introgressive descent, the genetic material of a particular evolutionary unit propagates into different host structures and is replicated within these host structures. Thus, introgressive descent generates a variety of evolutionary units and leaves recognizable patterns in resemblance networks. We characterize six kinds of evolutionary units, of which five involve mosaic lineages generated by introgressive descent. To facilitate detection of these units in resemblance networks, we introduce terminology based on two notions, P3s (subgraphs of three nodes: A, B, and C) and mosaic P3s, and suggest an apparatus for systematic detection of introgressive descent. Mosaic P3s correspond to a distinct type of evolutionary bond that is orthogonal to the bonds of kinship and genealogy usually examined by evolutionary biologists. We argue that recognition of these evolutionary bonds stimulates radical rethinking of key questions in evolutionary biology (e.g., the relations among evolutionary players in very early phases of evolutionary history, the origin and emergence of novelties, and the production of new lineages). This line of research will expand the study of biological complexity beyond the usual genealogical bonds, revealing additional sources of biodiversity. It provides an important step to a more realistic pluralist treatment of evolutionary complexity.
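The P3 notion used above, three nodes A, B, C where A-B and B-C are linked in the resemblance network but A and C are not, can be enumerated directly from an edge list. The sketch below finds plain P3s only; the paper's mosaic-P3 criteria add annotation checks that are not reproduced here, and the toy network is invented for illustration.

```python
from itertools import combinations

def find_p3s(edges):
    """Enumerate P3 subgraphs (A-B-C paths whose endpoints A and C share
    no direct edge) in an undirected graph given as an iterable of pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    p3s = []
    for b, neighbours in adj.items():
        for a, c in combinations(sorted(neighbours), 2):
            if c not in adj[a]:          # endpoints unlinked: a genuine P3
                p3s.append((a, b, c))
    return p3s

# Toy resemblance network: nodes sharing detectable similarity
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "C")]
p3s = find_p3s(edges)
```

Here `("A", "C", "D")` and `("B", "C", "D")` are the two P3s: D resembles C but neither A nor B, the signature pattern a mosaic element leaves in the network.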
NASA Astrophysics Data System (ADS)
Ghafoor, N.; Zarnecki, J.
When the ESA Huygens Probe arrives at Titan in 2005, measurements taken during and after the descent through the atmosphere are likely to revolutionise our understanding of Saturn's most enigmatic moon. The accurate atmospheric profiling of Titan from these measurements will require knowledge of the probe descent trajectory and in some cases attitude history, whilst certain atmospheric information (e.g. wind speeds) may be inferred directly from the probe dynamics during descent. Two of the instruments identified as contributing valuable information for the reconstruction of the probe's parachute descent dynamics are the Surface Science Package Tilt sensor (SSP-TIL) and the Huygens Atmospheric Structure Instrument servo accelerometer (HASI-ACC). This presentation provides an overview of these sensors and their static calibration before describing an investigation into their real-life dynamic performance under simulated Titan-gravity conditions via a low-cost parabolic flight opportunity. The combined use of SSP-TIL and HASI-ACC in characterising the aircraft dynamics is also demonstrated and some important challenges are highlighted. Results from some simple spin tests are also presented. Finally, having validated the performance of the sensors under simulated Titan conditions, estimates are made as to the output of SSP-TIL and HASI-ACC under a variety of probe dynamics, ranging from vertical descent with spin to a simple 3 degree-of-freedom parachute descent model with horizontal gusting. It is shown how careful consideration must be given to the instruments' principles of operation in each case, and also the impact of the sampling rates and resolutions as selected for the Huygens mission. The presentation concludes with a discussion of ongoing work on more advanced descent modelling and surface dynamics modelling, and also of a proposal for the testing of the sensors on a sea-surface.
Mars Descent Imager (MARDI) on the Mars Polar Lander
Malin, M.C.; Caplinger, M.A.; Carr, M.H.; Squyres, S.; Thomas, P.; Veverka, J.
2001-01-01
The Mars Descent Imager, or MARDI, experiment on the Mars Polar Lander (MPL) consists of a camera characterized by small physical size and mass (∼6 × 6 × 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 × 1000 pixel, low noise). The intent of the investigation is to acquire nested images over a range of resolutions, from 8 m/pixel to better than 1 cm/pixel, during the roughly 2 min it takes the MPL to descend from 8 km to the surface under parachute and rocket-powered deceleration. Observational goals will include studies of (1) surface morphology (e.g., nature and distribution of landforms indicating past and present environmental processes); (2) local and regional geography (e.g., context for other lander instruments: precise location, detailed local relief); and (3) relationships to features seen in orbiter data. To accomplish these goals, MARDI will collect three types of images. Four small images (256 × 256 pixels) will be acquired on 0.5 s centers beginning 0.3 s before MPL's heatshield is jettisoned. Sixteen full-frame images (1024 × 1024, circularly edited) will be acquired on 5.3 s centers thereafter. Just after backshell jettison but prior to the start of powered descent, a "best final nonpowered descent image" will be acquired. Five seconds after the start of powered descent, the camera will begin acquiring images on 4 s centers. Storage for as many as ten 800 × 800 pixel images is available during terminal descent. A number of spacecraft factors are likely to impact the quality of MARDI images, including substantial motion blur resulting from large rates of attitude variation during parachute descent and substantial rocket-engine-induced vibration during powered descent. In addition, the mounting location of the camera places the exhaust plume of the hydrazine engines prominently in the field of view. Copyright 2001 by the American Geophysical Union.
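The quoted resolutions are consistent with a fixed angular pixel scale: 8 m/pixel at 8 km implies roughly 1 mrad per pixel, which reaches the quoted ~1 cm/pixel only in the last ten metres or so of descent. A quick check (the IFOV value is inferred here, not stated in the abstract):

```python
# Ground sampling distance scales linearly with altitude for a fixed
# angular pixel size. The IFOV below is inferred from "8 m/pixel at 8 km".
ifov_rad = 8.0 / 8000.0  # ground scale / altitude -> 1e-3 rad per pixel

def gsd_m(altitude_m):
    """Ground sampling distance (m/pixel) at a given altitude, nadir viewing."""
    return ifov_rad * altitude_m

at_deploy = gsd_m(8000)   # 8.0 m/pixel where imaging starts
near_ground = gsd_m(10)   # 0.01 m/pixel, i.e. the quoted ~1 cm/pixel
```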
Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.
Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan
2018-04-01
The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectively has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm degenerates into a special scenario of the previous algorithm when the extended candidate set is reduced to a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility.
However, our algorithm and other algorithms have similar levels of performance in the remaining aspects.
Mars Science Laboratory's Descent Stage
NASA Technical Reports Server (NTRS)
2008-01-01
This portion of NASA's Mars Science Laboratory, called the descent stage, does its main work during the final few minutes before touchdown on Mars. The descent stage will provide rocket-powered deceleration for a phase of the arrival at Mars after the phases using the heat shield and parachute. When it nears the surface, the descent stage will lower the rover on a bridle the rest of the way to the ground. The Mars Science Laboratory spacecraft is being assembled and tested for launch in 2011. This image was taken at NASA's Jet Propulsion Laboratory, Pasadena, Calif., which manages the Mars Science Laboratory Mission for NASA's Science Mission Directorate, Washington. JPL is a division of the California Institute of Technology.
Advances in POST2 End-to-End Descent and Landing Simulation for the ALHAT Project
NASA Technical Reports Server (NTRS)
Davis, Jody L.; Striepe, Scott A.; Maddock, Robert W.; Hines, Glenn D.; Paschall, Stephen, II; Cohanim, Babak E.; Fill, Thomas; Johnson, Michael C.; Bishop, Robert H.; DeMars, Kyle J.;
2008-01-01
Program to Optimize Simulated Trajectories II (POST2) is used as a basis for an end-to-end descent and landing trajectory simulation that is essential in determining design and integration capability and system performance of the lunar descent and landing system and environment models for the Autonomous Landing and Hazard Avoidance Technology (ALHAT) project. The POST2 simulation provides a six degree-of-freedom capability necessary to test, design and operate a descent and landing system for successful lunar landing. This paper presents advances in the development and model-implementation of the POST2 simulation, as well as preliminary system performance analysis, used for the testing and evaluation of ALHAT project system models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A., E-mail: anastasio@wustl.edu
Purpose: The development of iterative image reconstruction algorithms for cone-beam computed tomography (CBCT) remains an active and important research area. Even with hardware acceleration, the overwhelming majority of the available 3D iterative algorithms that implement nonsmooth regularizers remain computationally burdensome and have not been translated for routine use in time-sensitive applications such as image-guided radiation therapy (IGRT). In this work, two variants of the fast iterative shrinkage thresholding algorithm (FISTA) are proposed and investigated for accelerated iterative image reconstruction in CBCT. Methods: Algorithm acceleration was achieved by replacing the original gradient-descent step in the FISTAs by a subproblem that is solved by use of the ordered subset simultaneous algebraic reconstruction technique (OS-SART). Due to the preconditioning matrix adopted in the OS-SART method, two new weighted proximal problems were introduced and corresponding fast gradient projection-type algorithms were developed for solving them. We also provided efficient numerical implementations of the proposed algorithms that exploit the massive data parallelism of multiple graphics processing units. Results: The improved rates of convergence of the proposed algorithms were quantified in computer-simulation studies and by use of clinical projection data corresponding to an IGRT study. The accelerated FISTAs were shown to possess dramatically improved convergence properties as compared to the standard FISTAs. For example, the number of iterations to achieve a specified reconstruction error could be reduced by an order of magnitude. Volumetric images reconstructed from clinical data were produced in under 4 min. Conclusions: The FISTA achieves a quadratic convergence rate and can therefore potentially reduce the number of iterations required to produce an image of a specified image quality as compared to first-order methods.
We have proposed and investigated accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.
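For reference, the standard FISTA iteration that the authors accelerate is short enough to sketch in full. The gradient step below is the textbook one for an l1-regularized least squares problem; the paper's variants replace it with an OS-SART subproblem and weighted proximal operators, which are not reproduced here. The synthetic sparse-recovery problem is purely illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=300):
    """Textbook FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_next = soft_threshold(y - grad / L, lam / L)     # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum schedule
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)   # Nesterov extrapolation
        x, t = x_next, t_next
    return x

# Small synthetic sparse-recovery problem (illustrative only)
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
x_true = np.zeros(40)
x_true[:5] = 3.0
b = A @ x_true
x_hat = fista(A, b, lam=0.1)
rel_residual = np.linalg.norm(A @ x_hat - b) / np.linalg.norm(b)
```

The extrapolation step on `y` is the only difference from plain proximal gradient descent, and it improves the worst-case objective error from O(1/k) to O(1/k²).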
Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A
2016-04-01
PMID:27036582
Pizzoferrato, Anne-Cécile; Fauconnier, Arnaud; Bader, Georges; de Tayrac, Renaud; Fort, Julie; Fritel, Xavier
2016-07-01
Obstetric trauma during childbirth is considered a major risk factor for postpartum urinary incontinence (UI), particularly stress urinary incontinence. Our aim was to investigate the relation between postpartum UI, mode of delivery, and urethral descent, and to define a group of women who are particularly at risk of postnatal UI. A total of 186 women were included during their first pregnancy. Validated questionnaires about urinary symptoms during pregnancy, and at 2 and 12 months after delivery, were administered. Urethral descent was assessed clinically and by ultrasound at inclusion. Multivariate logistic regression analysis was used to determine the risk factors for UI during pregnancy, at 2 months, and at 1 year after first delivery. The prevalence of UI was 38.6, 46.5, 35.6, and 34.4 % at inclusion, late pregnancy, 2 months postpartum, and 1 year postpartum respectively. No significant association was found between UI at late pregnancy and urethral descent assessed clinically or by ultrasound. The only risk factor for UI at 2 months postpartum was UI at inclusion (OR 6.27 [95 % CI 2.70-14.6]). The risk factors for UI at 1 year postpartum were UI at inclusion (6.14 [2.22-16.9]), body mass index (BMI), and urethral descent at inclusion, assessed clinically (7.21 [2.20-23.7]) or by ultrasound. The mode of delivery was not associated with urethral descent. Prenatal urethral descent and UI during pregnancy are risk factors for UI at 1 year postpartum. These results indicate that postnatal UI is more strongly influenced by susceptibility factors existing before first delivery than by the mode of delivery.
NASA Technical Reports Server (NTRS)
2008-01-01
These three images show the progression of 'stacking' the Mars Science Laboratory rover and its descent stage in one of the Jet Propulsion Laboratory's 'clean room.' In the first image, the car-size rover is in the middle of the picture with several team members surrounding it. The team members are all dressed in special head-to-toe white suits, called 'bunny suits.' One team member is holding on to a tether to guide the large insect-like descent stage down on top of the rover. The descent stage looms high in this image. The second image shows the descent stage a few feet above the rover with the team member continuing to guide the two pieces together. The final image shows the two pieces on top of each other. Imagine taking a very long 10-month journey with someone you've just recently met! The assembly team successfully introduced the Mars Science Laboratory rover to one of its space travel partners. For the first time, it was coupled with its 'descent stage,' the part of the spacecraft that lowers the rover to the Martian surface. Up until now, thousands of hands and minds have been making sure this pairing is a perfect fit ... on paper. The intricate parts of the rover and descent stage have all separately undergone some serious testing. Now that they're stacked together, their teams can see how they fit together in real life. With this match-making a success, the rover and descent stage will be joined with the protective case (the 'aeroshell') for more testing. But, these pieces aren't staying together forever! They'll be separated, checked, and assembled many more times before finally coming together just before launch.
Eye Movement Patterns of the Elderly during Stair Descent: Effect of Illumination
NASA Astrophysics Data System (ADS)
Kasahara, Satoko; Okabe, Sonoko; Nakazato, Naoko; Ohno, Yuko
The relationship between the eye movement pattern during stair descent and illumination was studied in 4 elderly people in comparison with that in 5 young people. The illumination condition was light (85.0±30.9 lx) or dark (0.7±0.3 lx), and eye movement data were obtained using an eye mark recorder. A flight of 15 steps was used for the experiment, and data on 3 steps in the middle, on which the descent movements were stabilized, were analyzed. The elderly subjects mostly directed their gaze straight ahead, in line with the face, regardless of the illumination condition, but the young subjects tended to look down under the light condition. The young subjects are considered to have confirmed the safety of the path ahead by peripheral vision, checked the stepping surface by central vision, and still maintained the upright position without leaning forward during stair descent. The elderly subjects, in contrast, always looked at the visual target by central vision even under the light condition and leaned forward. The range of eye movements was larger vertically than horizontally in both groups, and a characteristic pattern of vertical shuttle eye movements synchronized with the descent of each step was observed. Under the dark condition, the young subjects widened the range of vertical eye movements and reduced the duration of fixation. The elderly subjects showed no change in the range of eye movements but increased the duration of fixation during stair descent. These differences in eye movements are considered to be compensatory reactions to narrowing of the vertical visual field, reduced dark adaptation, and reduced dynamic visual acuity due to aging. These characteristics of eye movements of the elderly lead to a forward-leaning posture and reduced attention to the path ahead during stair descent.
2016-05-11
AFRL-AFOSR-JP-TR-2016-0046: Designing Feature and Data Parallel Stochastic Coordinate Descent Method for Matrix and Tensor Factorization. U Kang, Korea... Grant FA2386.
Munabi, Ian Guyton; Luboga, Samuel Abilemech; Mirembe, Florence
2015-01-01
Fetal head descent is used to demonstrate the capacity of the maternal pelvis to accommodate the fetal head. This is especially important in low-resource settings that have high rates of childbirth-related maternal deaths and morbidity. This study looked at maternal height and an additional measure, maternal pelvis height, borrowed from automotive engineering. The objective of the study was to determine the associations of maternal height and maternal pelvis height with the rate of fetal head descent in expectant Ugandan mothers. This was a cross-sectional study of 1265 singleton mothers attending antenatal clinics at five hospitals in various parts of Uganda. In addition to the routine antenatal examination, each mother had her pelvis height recorded following informed consent. Survival analysis was done using STATA 12. It was found that 27% of mothers had fetal head descent, with an incidence rate of 0.028 per week after the 25th week of pregnancy. Significant associations were observed between the rate of fetal head descent and both maternal height (adjusted hazard ratio 0.93, P < 0.01) and maternal pelvis height (adjusted hazard ratio 1.15, P < 0.01). These significant associations demonstrate a need for further study of maternal pelvis height as an additional decision-support tool for screening mothers in low-resource settings.
Powered Flight Design and Reconstructed Performance Summary for the Mars Science Laboratory Mission
NASA Technical Reports Server (NTRS)
Sell, Steven; Chen, Allen; Davis, Jody; San Martin, Miguel; Serricchio, Frederick; Singh, Gurkirpal
2013-01-01
The Powered Flight segment of Mars Science Laboratory's (MSL) Entry, Descent, and Landing (EDL) system extends from backshell separation through landing. This segment is responsible for removing the final 0.1% of the kinetic energy dissipated during EDL and culminating with the successful touchdown of the rover on the surface of Mars. Many challenges exist in the Powered Flight segment: extraction of Powered Descent Vehicle from the backshell, performing a 300m divert maneuver to avoid the backshell and parachute, slowing the descent from 85 m/s to 0.75 m/s and successfully lowering the rover on a 7.5m bridle beneath the rocket-powered Descent Stage and gently placing it on the surface using the Sky Crane Maneuver. Finally, the nearly-spent Descent Stage must execute a Flyaway maneuver to ensure surface impact a safe distance from the Rover. This paper provides an overview of the powered flight design, key features, and event timeline. It also summarizes Curiosity's as flown performance on the night of August 5th as reconstructed by the flight team.
Transmit Designs for the MIMO Broadcast Channel With Statistical CSI
NASA Astrophysics Data System (ADS)
Wu, Yongpeng; Jin, Shi; Gao, Xiqi; McKay, Matthew R.; Xiao, Chengshan
2014-09-01
We investigate the multiple-input multiple-output broadcast channel with statistical channel state information available at the transmitter. The so-called linear assignment operation is employed, and necessary conditions are derived for the optimal transmit design under general fading conditions. Based on this, we introduce an iterative algorithm to maximize the linear assignment weighted sum-rate by applying a gradient descent method. To reduce complexity, we derive an upper bound on the linear assignment achievable rate of each receiver, from which a simplified closed-form expression for a near-optimal linear assignment matrix is derived. This reveals an interesting construction analogous to that of dirty-paper coding. In light of this, a low-complexity transmission scheme is provided. Numerical examples illustrate the strong performance of the proposed low-complexity scheme.
Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros
NASA Technical Reports Server (NTRS)
Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.
1973-01-01
Many future NASA programs require highly accurate pointing stability. These pointing requirements are well beyond anything attempted to date. This paper suggests a control system that has the capability of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computational technique using successive approximations is used to develop the required solution. The calculus of variations is applied to estimate changes in the index of performance, as well as to handle the inequality constraints on the state variables and the terminal conditions. An algorithm is thus obtained by the steepest descent and/or conjugate gradient methods. Numerical examples are given to show the optimal controls.
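The steepest-descent iteration underlying such successive-approximation schemes can be sketched on a toy problem. Here a convex quadratic stands in for the index of performance; the matrix, vector, and tolerances are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Convex quadratic stand-in for the index of performance:
# J(x) = 0.5 x'Ax - b'x, with gradient g = Ax - b (values illustrative).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def steepest_descent(x, iters=100, tol=1e-20):
    for _ in range(iters):
        g = A @ x - b                    # steepest-descent direction is -g
        if g @ g < tol:
            break
        step = (g @ g) / (g @ A @ g)     # exact line search for a quadratic
        x = x - step * g
    return x

x = steepest_descent(np.zeros(2))
print(x)  # approaches the minimizer A^{-1} b = [0.2, 0.4]
```

A conjugate-gradient variant would reuse the same gradient but mix in the previous search direction, which is what makes it attractive for the larger systems the abstract describes.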
A statistical physics perspective on alignment-independent protein sequence comparison.
Chattopadhyay, Amit K; Nasiev, Diar; Flower, Darren R
2015-08-01
Within bioinformatics, the textual alignment of amino acid sequences has long dominated the determination of similarity between proteins, with all that implies for shared structure, function and evolutionary descent. Despite the relative success of modern-day sequence alignment algorithms, so-called alignment-free approaches offer a complementary means of determining and expressing similarity, with potential benefits in certain key applications, such as regression analysis of protein structure-function studies, where alignment-based similarity has performed poorly. Here, we offer a fresh, statistical physics-based perspective focusing on the question of alignment-free comparison, in the process adapting results from the 'first passage probability distribution' to summarize statistics of ensemble-averaged amino acid propensity values. In this article, we introduce and elaborate this approach. © The Author 2015. Published by Oxford University Press.
Sieve estimation of Cox models with latent structures.
Cao, Yongxiu; Huang, Jian; Liu, Yanyan; Zhao, Xingqiu
2016-12-01
This article considers sieve estimation in the Cox model with an unknown regression structure based on right-censored data. We propose a semiparametric pursuit method to simultaneously identify and estimate linear and nonparametric covariate effects based on B-spline expansions through a penalized group selection method with concave penalties. We show that the estimators of the linear effects and the nonparametric component are consistent. Furthermore, we establish the asymptotic normality of the estimator of the linear effects. To compute the proposed estimators, we develop a modified blockwise majorization descent algorithm that is efficient and easy to implement. Simulation studies demonstrate that the proposed method performs well in finite sample situations. We also use the primary biliary cirrhosis data to illustrate its application. © 2016, The International Biometric Society.
... report menopausal hot flashes than do women of European descent. Hot flashes are less common in women of Japanese and Chinese descent than in white European women. Complications Nighttime hot flashes (night sweats) can ...
... most common in people of Eastern or Central European Jewish, French Canadian, and Cajun descent. But anyone ... most commonly affects people of Eastern and Central European Jewish, Cajun, and French Canadian descent, but it ...
NASA Technical Reports Server (NTRS)
Schaefer, Jacob; Brown, Nelson
2013-01-01
A peak-seeking control approach for real-time trim configuration optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control approach is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are controlled for optimization of fuel flow. This presentation describes the design and integration of this peak-seeking controller on a modified NASA F/A-18 airplane with research flight control computers. A research flight was performed to collect data to build a realistic model of the performance function and characterize measurement noise. This model was then implemented into a nonlinear six-degree-of-freedom F/A-18 simulation along with the peak-seeking control algorithm. With the goal of eventual flight tests, the algorithm was first evaluated in the improved simulation environment. Results from the simulation predict good convergence on minimum fuel flow, with a 2.5-percent reduction in fuel flow relative to the baseline trim of the aircraft.
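The peak-seeking idea, steepest descent on a measured performance function with a time-varying Kalman filter estimating the local gradient, can be sketched in miniature. This is a toy under stated assumptions: a hypothetical quadratic fuel-flow map, made-up noise levels and filter tuning, and none of the flight implementation's details:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fuel-flow map: a quadratic bowl in two trim-surface
# deflections with its minimum at x_opt, plus measurement noise.
x_opt = np.array([2.0, -1.0])

def fuel_flow(x):
    return 5.0 + np.sum((x - x_opt) ** 2) + 0.01 * rng.normal()

def peak_seek(x, iters=300, step=0.1, dither=0.2):
    g = np.zeros(2)                      # Kalman state: local gradient estimate
    P = np.eye(2)                        # state covariance
    Q, R = 1e-3 * np.eye(2), 1e-3        # process / measurement noise (made up)
    x_prev, J_prev = x.copy(), fuel_flow(x)
    for _ in range(iters):
        x_new = x_prev - step * g + dither * rng.normal(size=2)  # descend + excite
        J_new = fuel_flow(x_new)
        H = (x_new - x_prev)[None, :]    # measurement model: dJ ~= H @ g
        y = J_new - J_prev
        P = P + Q                        # predict: gradient modeled as slow drift
        S = H @ P @ H.T + R
        K = P @ H.T / S                  # Kalman gain
        g = g + (K * (y - H @ g)).ravel()
        P = (np.eye(2) - K @ H) @ P
        x_prev, J_prev = x_new, J_new
    return x_prev

x = peak_seek(np.zeros(2))               # settles near x_opt, dither-limited
```

The random dither plays the role the surface-deflection excitation plays in flight: without it, measurements stop carrying gradient information once the descent stalls.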
Quantification of transplant-derived circulating cell-free DNA in absence of a donor genotype
Kharbanda, Sandhya; Koh, Winston; Martin, Lance R.; Khush, Kiran K.; Valantine, Hannah; Pritchard, Jonathan K.; De Vlaminck, Iwijn
2017-01-01
Quantification of cell-free DNA (cfDNA) in circulating blood derived from a transplanted organ is a powerful approach to monitoring post-transplant injury. Genome transplant dynamics (GTD) quantifies donor-derived cfDNA (dd-cfDNA) by taking advantage of single-nucleotide polymorphisms (SNPs) distributed across the genome to discriminate donor and recipient DNA molecules. In its current implementation, GTD requires genotyping of both the transplant recipient and donor. However, in practice, donor genotype information is often unavailable. Here, we address this issue by developing an algorithm that estimates dd-cfDNA levels in the absence of a donor genotype. Our algorithm predicts heart and lung allograft rejection with an accuracy that is similar to conventional GTD. We furthermore refined the algorithm to handle closely related recipients and donors, a scenario that is common in bone marrow and kidney transplantation. We show that it is possible to estimate dd-cfDNA in bone marrow transplant patients that are unrelated or that are siblings of the donors, using a hidden Markov model (HMM) of identity-by-descent (IBD) states along the genome. Last, we demonstrate that comparing dd-cfDNA to the proportion of donor DNA in white blood cells can differentiate between relapse and the onset of graft-versus-host disease (GVHD). These methods alleviate some of the barriers to the implementation of GTD, which will further widen its clinical application. PMID:28771616
Li, Qingguo
2017-01-01
With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size, and lower-cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect the attitude and heading estimation for a magnetic and inertial sensor. First, we reviewed four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetic readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyzed the features of existing methods for each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented, including a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter, and an extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods were easily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing new sensor fusion methods. PMID:29283432
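The gradient-descent family of orientation estimators reviewed above can be illustrated with a one-axis toy: find the tilt angle that rotates a body-frame gravity reading onto the world-frame reference by gradient descent on the alignment error. The 2-D rotation, noiseless reading, and learning rate are simplifying assumptions for illustration, not any of the published quaternion filters:

```python
import numpy as np

# One-axis toy: gravity reference in the world frame, and a noiseless
# body-frame accelerometer reading at a true tilt of 0.4 rad.
g_world = np.array([0.0, 1.0])
true_theta = 0.4

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

a_body = rot(-true_theta) @ g_world      # what the sensor would measure

def estimate_tilt(theta=0.0, lr=0.5, iters=50):
    """Gradient descent on the alignment error ||rot(theta) @ a_body - g_world||^2."""
    for _ in range(iters):
        e = rot(theta) @ a_body - g_world
        dR = np.array([[-np.sin(theta), -np.cos(theta)],
                       [ np.cos(theta), -np.sin(theta)]])  # d(rot)/dtheta
        theta -= lr * 2.0 * e @ (dR @ a_body)              # chain rule
    return theta

theta = estimate_tilt()
print(theta)  # converges to ~0.4, the true tilt
```

The quaternion-based filters apply the same error-gradient step in 3-D, fused with gyro integration, which is where the magnetic-disturbance handling the paper studies comes in.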
NASA Astrophysics Data System (ADS)
Roggemann, M.; Soehnel, G.; Archer, G.
Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used for compensating these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation, and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wave front reconstructors and post detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed loop adaptive optics system using a conventional continuous facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles ranging up to 70 degrees. We have also simulated satellite images, and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.
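The stochastic parallel gradient descent update at the heart of such reconstructors is compact enough to sketch. Below, a toy quadratic metric stands in for a measured image-quality metric; the gain, perturbation size, and channel count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the measured metric: negative squared distance of the
# control vector u from an unknown optimum u_star (all values illustrative).
u_star = rng.normal(size=32)

def metric(u):
    return -np.sum((u - u_star) ** 2)

def spgd(u, gain=0.3, delta=0.05, iters=2000):
    """SPGD: perturb all channels in parallel by random +/- delta and step
    along the perturbation, weighted by the measured metric change."""
    for _ in range(iters):
        p = delta * rng.choice([-1.0, 1.0], size=u.size)  # Bernoulli dither
        dJ = metric(u + p) - metric(u - p)                # two-sided metric read
        u = u + gain * dJ * p                             # stochastic gradient step
    return u

u = spgd(np.zeros(32))
print(metric(u))  # near 0, the metric maximum (starts near -32)
```

The appeal for wavefront control is that only a scalar metric is measured: no wavefront sensor model enters the update, which is why SPGD also appears in the sensorless AO systems described elsewhere in these records.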
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birge, J. R.; Qi, L.; Wei, Z.
In this paper we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC), and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that, for any FJ point y ∈ N(z) \ {z}, f0(y) ≠ f0(z), where f0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The result (i) implies that the entire iteration point sequence generated by the method converges to an FJ point. We also show that if the parameters are chosen large enough, a unit step length can be accepted by the proposed algorithm.
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
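The projection operations on which the design rests can be illustrated in plain Euclidean space. The paper works in an RKHS; this sketch only shows the two basic mappings it composes, metric projection onto a closed ball (the sparsification constraint) and orthogonal projection onto a subspace:

```python
import numpy as np

def project_ball(x, radius):
    """Metric projection of x onto the closed l2 ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def project_subspace(x, B):
    """Orthogonal projection of x onto the column span of B (least squares)."""
    return B @ np.linalg.lstsq(B, x, rcond=None)[0]

x = np.array([3.0, 4.0])
print(project_ball(x, 1.0))       # [0.6, 0.8]: scaled back onto the unit ball
B = np.array([[1.0], [0.0]])
print(project_subspace(x, B))     # [3.0, 0.0]: component along the first axis
```

In the APSM setting these maps act on classifiers in the kernel space, with oblique projections added to bound memory and provide tracking.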
The Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT)
NASA Technical Reports Server (NTRS)
Epp, Chirold D.; Smith, Thomas B.
2007-01-01
As NASA plans to send humans back to the Moon and develop a lunar outpost, technologies must be developed to place humans and cargo safely, precisely, and repeatedly on the lunar surface, with the capability to avoid surface hazards. Exploration Space Architecture Study requirements include the need for global lunar surface access with safe, precise landing, without lighting constraints, on terrain that may present landing hazards for human-scale landing vehicles. Landing accuracies of perhaps 1,000 meters for sortie crew missions to tens of meters for Outpost-class missions are required. The Autonomous precision Landing and Hazard Avoidance Technology (ALHAT) project will develop the new and unique descent and landing Guidance, Navigation and Control (GNC) hardware and software technologies necessary for these capabilities. The ALHAT project will qualify a lunar descent and landing GNC system to a Technology Readiness Level (TRL) of 6, capable of supporting lunar crewed, cargo, and robotic missions. The ALHAT development project was chartered by NASA Headquarters in October 2006. The initial effort to write a project plan and define an ALHAT team was followed by a fairly aggressive research and analysis effort to determine what technologies existed that could be developed and applied to the lunar landing problems indicated above. This paper describes the project development, research, analysis, and concept evolution that have occurred since the assignment of the project. This includes the areas of systems engineering, GNC, sensors, sensor algorithms, simulations, field testing, laboratory testing, Hardware-In-The-Loop testing, system avionics, and system certification concepts.
Apollo 15 mission report, supplement 4: Descent propulsion system final flight evaluation
NASA Technical Reports Server (NTRS)
Avvenire, A. T.; Wood, S. C.
1972-01-01
The results of a postflight analysis of the LM-10 Descent Propulsion System (DPS) during the Apollo 15 Mission are reported. The analysis determined the steady state performance of the DPS during the descent phase of the manned lunar landing. Flight measurement discrepancies are discussed. Simulated throttle performance results are cited along with overall performance results. Evaluations of the propellant quantity gaging system, propellant loading, pressurization system, and engine are reported. Graphic illustrations of the evaluations are included.
Apollo experience report: Descent propulsion system
NASA Technical Reports Server (NTRS)
Hammock, W. R., Jr.; Currie, E. C.; Fisher, A. E.
1973-01-01
The propulsion system for the descent stage of the lunar module was designed to provide thrust to transfer the fully loaded lunar module with two crewmen from the lunar parking orbit to the lunar surface. A history of the development of this system is presented. Development was accomplished primarily by ground testing of individual components and by testing the integrated system. Unique features of the descent propulsion system were the deep throttling capability and the use of a lightweight cryogenic helium pressurization system.
Cryptorchidism and delayed testicular descent in Florida black bears.
Dunbar, M R; Cunningham, M W; Wooding, J B; Roth, R P
1996-10-01
Retained testes were found in 11 (16%) of 71 black bears (Ursus americanus) examined over a 3-year period in Florida (USA). Four of the 11 bears were older than one year and weighed more than 32 kg; therefore, they were considered to be cryptorchid. The remaining seven bears may have had delayed testicular descent due to their apparent normal immature development. This is the first known published report of the prevalence of cryptorchidism and apparently normal delayed testicular descent in a black bear population.
NASA Technical Reports Server (NTRS)
Steltzner, Adam D.; San Martin, A. Miguel; Rivellini, Tommaso P.
2013-01-01
The Mars Science Laboratory project recently landed the Curiosity rover on the surface of Mars. With the success of the landing system, the performance envelope of entry, descent, and landing capabilities has been extended over the previous state of the art. This paper will present an overview of the MSL entry, descent, and landing system, a discussion of a subset of its development challenges, and include a discussion of preliminary results of the flight reconstruction effort.
Mars Science Laboratory Descent Stage
2011-11-10
The descent stage of NASA's Mars Science Laboratory spacecraft is being lifted during assembly of the spacecraft in this photograph, taken inside the Payload Hazardous Servicing Facility at NASA's Kennedy Space Center, Fla.
Descent Stage of Mars Science Laboratory During Assembly
2008-11-19
This image from early October 2008 shows personnel working on the descent stage of NASA's Mars Science Laboratory inside the Spacecraft Assembly Facility at NASA's Jet Propulsion Laboratory, Pasadena, Calif.
Cree, Bruce A C; Stuart, William H; Tornatore, Carlo S; Jeffery, Douglas R; Pace, Amy L; Cha, Choon H
2011-04-01
Patients with multiple sclerosis (MS) who are of African descent experience a more aggressive disease course than patients who are of white race/ethnicity. In phase 3 clinical trials (Natalizumab Safety and Efficacy in Relapsing Remitting Multiple Sclerosis [AFFIRM] and Safety and Efficacy of Natalizumab in Combination With Interferon Beta-1a in Patients With Relapsing Remitting Multiple Sclerosis [SENTINEL]), natalizumab use significantly improved clinical and magnetic resonance imaging outcomes over 2 years in patients with relapsing MS. Because patients of African descent may be less responsive to interferon beta treatment than patients of white race/ethnicity, the efficacy of natalizumab therapy in this population is clinically important. To evaluate the efficacy of natalizumab use in patients of African descent with relapsing MS. Post hoc analysis. Academic research. Patients of African descent with relapsing MS who received natalizumab or placebo in the phase 3 AFFIRM study and those who received natalizumab plus intramuscular interferon beta-1a or placebo plus intramuscular interferon beta-1a in the phase 3 SENTINEL study. Efficacy of natalizumab use in patients of African descent with relapsing MS who participated in the AFFIRM or SENTINEL trial. Forty-nine patients of African descent participated in AFFIRM (n = 10) or SENTINEL (n = 39). Demographic and baseline disease characteristics were similar between patients treated with natalizumab (n = 21) or placebo (n = 28). Natalizumab therapy significantly reduced the annualized MS relapse rate by 60% (0.21 vs 0.53 in the placebo group, P = .02). Compared with placebo use, natalizumab therapy also significantly reduced the accumulation of lesions observed on magnetic resonance imaging over 2 years: the mean number of gadolinium-enhancing lesions was reduced by 79% (0.19 vs 0.91, P = .03), and the mean number of new or enlarged T2-weighted lesions was reduced by 90% (0.88 vs 8.52, P = .008). 
Natalizumab therapy significantly improved the relapse rate and accumulation of brain lesions in patients of African descent with relapsing MS.
Measurement of CPAS Main Parachute Rate of Descent
NASA Technical Reports Server (NTRS)
Ray, Eric S.
2011-01-01
The Crew Exploration Vehicle Parachute Assembly System (CPAS) is being designed to land the Orion Crew Module (CM) at a safe rate of descent at splashdown. Flight test performance must be measured to a high degree of accuracy to ensure this requirement is met with the most efficient design possible. Although the design includes three CPAS Main parachutes, the requirement is that the system must not exceed 33 ft/s under two Main parachutes, should one of the Main parachutes fail. Therefore, several tests were conducted with clusters of two Mains. All of the steady-state rate of descent data are normalized to standard sea level conditions and checked against the limit. As the Orion design gains weight, the system is approaching this limit to within measurement precision. Parachute "breathing," cluster interactions, and atmospheric anomalies can cause the rate of descent to vary widely and lead to challenges in characterizing parachute terminal performance. An early test had contradictory rate of descent results from optical trajectory and Differential Global Positioning Systems (DGPS). A thorough analysis of the data sources and error propagation was conducted to determine the uncertainty in the trajectory. It was discovered that the Time Space Position Information (TSPI) from the optical tracking provided accurate position data. However, the velocity from TSPI must be computed via numerical differentiation, which is prone to large error. DGPS obtains position through pseudo-range calculations from multiple satellites and velocity through Doppler shift of the carrier frequency. Because the velocity from DGPS is a direct measurement, it is more accurate than TSPI velocity. To remedy the situation, a commercial off-the-shelf product that combines GPS and an Inertial Measurement Unit (IMU) was purchased to significantly improve rate of descent measurements. This had the added benefit of solving GPS dropouts during aircraft extraction.
Statistical probability distributions for CPAS Main parachute rate of descent and drag coefficient were computed and plotted. Using test data, a terminal rate of descent at splashdown can be estimated as a function of canopy loading.
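The noise amplification of numerically differentiated position data, the core of the TSPI-versus-DGPS point above, is easy to reproduce. The sample rate, descent rate, and noise level below are illustrative assumptions, not flight-test values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative values, not flight data: a steady 25 ft/s descent sampled
# at 100 Hz with 0.5 ft of position noise.
dt, v_true, sigma_pos = 0.01, -25.0, 0.5
t = np.arange(0.0, 60.0, dt)
alt = 1500.0 + v_true * t + sigma_pos * rng.normal(size=t.size)

v_diff = np.gradient(alt, dt)   # central-difference velocity from position
# Differentiation amplifies the position noise to roughly
# sigma_pos / (sqrt(2) * dt) ~ 35 ft/s, swamping the 25 ft/s signal;
# a directly measured (Doppler) velocity avoids this entirely.
print(v_diff.mean(), v_diff.std())
```

The mean still recovers the true descent rate, but any instantaneous reading is useless without heavy filtering, which is why a direct velocity measurement (DGPS, or GPS/IMU blending) is preferred.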
NASA Astrophysics Data System (ADS)
Atkinson, David H.; Kazeminejad, Bobby; Lebreton, Jean-Pierre
2015-04-01
Cassini/Huygens, a flagship mission to explore the rings, atmosphere, magnetic field, and moons that make up the Saturn system, is a joint endeavor of NASA, the European Space Agency, and Agenzia Spaziale Italiana. Comprising two spacecraft - a Saturn orbiter built by NASA and a Titan entry/descent probe built by the European Space Agency - Cassini/Huygens was launched in October 1997 and arrived at Saturn in 2004. The Huygens probe parachuted to the surface of Titan in January 2005. During the descent, six science instruments provided measurements of Titan's atmosphere, clouds, and winds, and photographed Titan's surface. It was recognized early in the Huygens program that to correctly interpret and correlate results from the probe science experiments and to provide a reference set of data for ground truth calibration of the Cassini orbiter remote sensing observations, an accurate reconstruction of the probe entry and descent trajectory and surface landing location would be necessary. The Huygens Descent Trajectory Working Group (DTWG) was chartered in 1996 as a subgroup of the Huygens Science Working Team. With membership comprising representatives from all the probe engineering and instrument teams as well as representatives of industry and the Cassini and Huygens Project Scientists, the DTWG presented an organizational framework within which instrument data was shared, the entry and descent trajectory reconstruction implemented, and the trajectory reconstruction efficiently disseminated. 
The primary goal of the Descent Trajectory Working Group was to develop retrieval methodologies for the probe descent trajectory reconstruction from the entry interface altitude of 1270 km to the surface using navigation data, and engineering and science data acquired by the instruments on the Huygens Probe, and to provide a reconstruction of the Huygens probe trajectory from entry to the surface of Titan that is maximally consistent with all available engineering and science data sets. The official project entry and descent trajectory reconstruction effort was published by the DTWG in 2007. A revised descent trajectory was released in 2010 that accounts for updated measurements of Titan's pole coordinates derived from radar images of Titan taken during Cassini flybys after 2007. The effect of the updated pole positions on Huygens is a southward shift of the trajectory by about 0.3 degrees, with a much smaller effect of less than 0.01 degree in the zonal (west to east) direction. The revised Huygens landing coordinates of 192.335 degrees West and 10.573 degrees South, with longitude and latitude residuals of 0.035 degrees and 0.007 degrees, respectively, are in excellent agreement with results of recent landing site investigations using visual and radar images from the Cassini VIMS instrument. Acknowledgements: J.-P.L.'s work was performed while at ESA/ESTEC. DA and BK would like to express appreciation to the European Space Agency's Research and Scientific Support Department for funding the Descent Trajectory Working Group. The work of the Descent Trajectory Working Group would not have been possible without the dedicated efforts of all the Huygens principal investigators and their teams, and the science and engineering data provided from each experiment team, including M. Fulchignoni and the HASI Team, H. Niemann and the GCMS Team, J. Zarnecki and the SSP Team, M. Tomasko and the DISR Team, M. Bird and the DWE Team, and G. Israel and the ACP Team.
Additionally, special thanks for many years of support to D.L. Matson, R.T. Mitchell, M. Pérez-Ayúcar, O. Witasse; J. Jones, D. Roth, N. Strange on the Cassini Navigation Team at JPL; A.-M. Schipper and P. Couzin at Thales Alenia; C. Sollazzo, D. Salt, J. Wheadon and S. Standley from the Huygens Ops Team; and R. Trautner and H. Svedhem on the Radar Team at ESTEC.
Mars Science Laboratory Rover and Descent Stage
2008-11-19
In this February 17, 2009, image, NASA's Mars Science Laboratory rover is attached to the spacecraft descent stage. The image was taken inside the Spacecraft Assembly Facility at NASA JPL, Pasadena, Calif.
Tensor completion for estimating missing values in visual data.
Liu, Ji; Musialski, Przemyslaw; Wonka, Peter; Ye, Jieping
2013-01-01
In this paper, we propose an algorithm to estimate missing values in tensors of visual data. The values can be missing due to problems in the acquisition process or because the user manually identified unwanted outliers. Our algorithm works even with a small number of samples and can propagate structure to fill larger missing regions. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similarly to matrix completion, the tensor completion is formulated as a convex optimization problem. Unfortunately, the straightforward problem extension is significantly harder to solve than the matrix case because of the dependency among multiple constraints. To tackle this problem, we developed three algorithms: simple low rank tensor completion (SiLRTC), fast low rank tensor completion (FaLRTC), and high accuracy low rank tensor completion (HaLRTC). The SiLRTC algorithm is simple to implement; it employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution. The FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one and can be used to solve a general tensor trace norm minimization problem. The HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms, and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches.
The efficiency comparison indicates that FaLRTC and HaLRTC are more efficient than SiLRTC; between FaLRTC and HaLRTC, the former is more efficient for obtaining a low-accuracy solution and the latter is preferred if a high-accuracy solution is desired.
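The matrix-case building block behind these trace-norm methods can be illustrated with a soft-impute-style iteration: soft-threshold the singular values (the proximal step for the trace norm), then restore the observed entries. The sketch below is a hypothetical illustration of that idea, not the authors' SiLRTC/FaLRTC/HaLRTC code; the function name `svt_complete` and the threshold `tau` are assumed.

```python
import numpy as np

def svt_complete(M, mask, tau=0.5, n_iters=200):
    """Low-rank matrix completion by iterative singular-value
    soft-thresholding (the matrix analogue of the trace-norm
    relaxation used in the tensor algorithms)."""
    X = np.where(mask, M, 0.0)                # start from observed entries
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # shrink singular values
        X = (U * s) @ Vt                      # low-rank estimate
        X[mask] = M[mask]                     # re-impose observed entries
    return X

# Toy example: a rank-1 matrix with roughly 30% of entries missing
rng = np.random.default_rng(0)
A = np.outer(rng.standard_normal(20), rng.standard_normal(15))
mask = rng.random(A.shape) > 0.3
A_hat = svt_complete(A, mask)
err = np.linalg.norm((A_hat - A)[~mask]) / np.linalg.norm(A[~mask])
print(f"relative error on missing entries: {err:.3f}")
```

The tensor algorithms generalize this step by unfolding the tensor along each mode and applying the same singular-value shrinkage per unfolding.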
Crew Procedures for Continuous Descent Arrivals Using Conventional Guidance
NASA Technical Reports Server (NTRS)
Oseguera-Lohr, Rosa M.; Williams, David H.; Lewis, Elliot T.
2007-01-01
This paper presents results from a simulation study which investigated the use of Continuous Descent Arrival (CDA) procedures for conducting a descent through a busy terminal area, using conventional transport-category automation. This research was part of the Low Noise Flight Procedures (LNFP) element within the Quiet Aircraft Technology (QAT) Project, which addressed the development of flight guidance and supporting pilot and Air Traffic Control (ATC) procedures for low-noise operations. The procedures and chart were designed to be easy to understand, and to make it easy for the crew to make changes via the Flight Management Computer Control-Display Unit (FMC-CDU) to accommodate changes from ATC. The test runs were intended to represent situations typical of what exists in many of today's terminal areas, including interruptions to the descent in the form of clearances issued by ATC.
Vavoulis, Dimitrios V.; Straub, Volko A.; Aston, John A. D.; Feng, Jianfeng
2012-01-01
Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent) in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which we applied to a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, and measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function, and it was straightforward to apply to compartmental models and multiple data sets.
Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and, therefore, a potentially useful tool in the construction of biophysical neuron models. PMID:22396632
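The self-organizing state-space idea, augmenting the hidden state with the unknown parameters and filtering both jointly, can be sketched on a toy linear-Gaussian model with a bootstrap particle filter. This is a minimal illustration under assumed noise levels and settings, not the authors' implementation; the artificial parameter random walk and all numeric values here are assumptions.

```python
import numpy as np

# Toy model: x_t = a*x_{t-1} + process noise, y_t = x_t + measurement noise,
# with the decay parameter `a` unknown and estimated jointly with the state.
rng = np.random.default_rng(1)
a_true, T, n_part, sig = 0.9, 200, 2000, 0.5

# simulate observations
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, sig)
    ys.append(x + rng.normal(0, sig))

# each particle carries (state, parameter); the parameter gets a small
# artificial random walk so the filter can keep exploring
xs = rng.normal(0, 1, n_part)
a_p = rng.uniform(0.0, 1.0, n_part)             # prior constraint: a in [0, 1]
for y in ys:
    a_p = np.clip(a_p + rng.normal(0, 0.005, n_part), 0.0, 1.0)
    xs = a_p * xs + rng.normal(0, sig, n_part)  # propagate states
    w = np.exp(-0.5 * ((y - xs) / sig) ** 2)    # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n_part, size=n_part, p=w)  # multinomial resampling
    xs, a_p = xs[idx], a_p[idx]

a_est = a_p.mean()
print(f"estimated a = {a_est:.2f} (true {a_true})")
```

The pre-defined constraint on the parameter enters through the uniform prior and the clipping step, mirroring how informative priors constrain the estimates in the abstract above.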
NASA Technical Reports Server (NTRS)
Kemmerly, Guy T.
1990-01-01
A moving-model ground-effect testing method was used to study the influence of rate-of-descent on the aerodynamic characteristics for the F-15 STOL and Maneuver Technology Demonstrator (S/MTD) configuration for both the approach and roll-out phases of landing. The approach phase was modeled for three rates of descent, and the results were compared to the predictions from the F-15 S/MTD simulation data base (prediction based on data obtained in a wind tunnel with zero rate of descent). This comparison showed significant differences due both to the rate of descent in the moving-model test and to the presence of the ground boundary layer in the wind tunnel test. Relative to the simulation data base predictions, the moving-model test showed substantially less lift increase in ground effect, less nose-down pitching moment, and less increase in drag. These differences became more prominent at the larger thrust vector angles. Over the small range of rates of descent tested using the moving-model technique, the effect of rate of descent on longitudinal aerodynamics was relatively constant. The results of this investigation indicate no safety-of-flight problems with the lower jets vectored up to 80 deg on approach. The results also indicate that this configuration could employ a nozzle concept using lower reverser vector angles up to 110 deg on approach if a no-flare approach procedure were adopted and if inlet reingestion does not pose a problem.
Mission and Navigation Design for the 2009 Mars Science Laboratory Mission
NASA Technical Reports Server (NTRS)
D'Amario, Louis A.
2008-01-01
NASA's Mars Science Laboratory mission will launch the next mobile science laboratory to Mars in the fall of 2009 with arrival at Mars occurring in the summer of 2010. A heat shield, parachute, and rocket-powered descent stage, including a sky crane, will be used to land the rover safely on the surface of Mars. The direction of the atmospheric entry vehicle lift vector will be controlled by a hypersonic entry guidance algorithm to compensate for entry trajectory errors and counteract atmospheric and aerodynamic dispersions. The key challenges for mission design are (1) develop a launch/arrival strategy that provides communications coverage during the Entry, Descent, and Landing phase either from an X-band direct-to-Earth link or from an Ultra High Frequency link to the Mars Reconnaissance Orbiter for landing latitudes between 30 deg North and 30 deg South, while satisfying mission constraints on Earth departure energy and Mars atmospheric entry speed, and (2) generate Earth-departure targets for the Atlas V-541 launch vehicle for the specified launch/arrival strategy. The launch/arrival strategy employs a 30-day baseline launch period and a 27-day extended launch period with varying arrival dates at Mars. The key challenges for navigation design are (1) deliver the spacecraft to the atmospheric entry interface point (Mars radius of 3522.2 km) with an inertial entry flight path angle error of +/- 0.20 deg (3 sigma), (2) provide knowledge of the entry state vector accurate to +/- 2.8 km (3 sigma) in position and +/- 2.0 m/s (3 sigma) in velocity for initializing the entry guidance algorithm, and (3) ensure a 99% probability of successful delivery at Mars with respect to available cruise stage propellant. Orbit determination is accomplished via ground processing of multiple complementary radiometric data types: Doppler, range, and Delta-Differential One-way Ranging (a Very Long Baseline Interferometry measurement).
The navigation strategy makes use of up to five interplanetary trajectory correction maneuvers to achieve entry targeting requirements. The requirements for cruise propellant usage and atmospheric entry targeting and knowledge are met with ample margins.
Spread of cattle led to the loss of matrilineal descent in Africa: a coevolutionary analysis.
Holden, Clare Janaki; Mace, Ruth
2003-01-01
Matrilineal descent is rare in human societies that keep large livestock. However, this negative correlation does not provide reliable evidence that livestock and descent rules are functionally related, because human cultures are not statistically independent owing to their historical relationships (Galton's problem). We tested the hypothesis that when matrilineal cultures acquire cattle they become patrilineal using a sample of 68 Bantu- and Bantoid-speaking populations from sub-Saharan Africa. We used a phylogenetic comparative method to control for Galton's problem, and a maximum-parsimony Bantu language tree as a model of population history. We tested for coevolution between cattle and descent. We also tested the direction of cultural evolution--were cattle acquired before matriliny was lost? The results support the hypothesis that acquiring cattle led formerly matrilineal Bantu-speaking cultures to change to patrilineal or mixed descent. We discuss possible reasons for matriliny's association with horticulture and its rarity in pastoralist societies. We outline the daughter-biased parental investment hypothesis for matriliny, which is supported by data on sex, wealth and reproductive success from two African societies, the matrilineal Chewa in Malawi and the patrilineal Gabbra in Kenya. PMID:14667331
Experimental studies of the rotor flow downwash on the stability of multi-rotor craft in descent
NASA Astrophysics Data System (ADS)
Veismann, Marcel; Dougherty, Christopher; Gharib, Morteza
2017-11-01
All rotorcraft, including helicopters and multicopters, have the inherent problem of entering their own rotor downwash during vertical descent. As a result, the craft is subject to highly unsteady flow, called the vortex ring state (VRS), which leads to a loss of lift and reduced stability. To date, experimental efforts to investigate this phenomenon have been largely limited to analysis of a single, fixed rotor mounted in a horizontal wind tunnel. Our current work aims to understand the interaction of multiple rotors in vertical descent by mounting a multi-rotor craft in a low-speed, vertical wind tunnel. Experiments were performed with fixed and rotationally free mountings, the latter allowing us to better capture the dynamics of a free-flying drone. The effect of rotor separation on stability, generated thrust, and rotor wake interaction was characterized using force gauge data and PIV analysis for various descent velocities. The results obtained help us better understand fluid-craft interactions of drones in vertical descent and identify possible sources of instability. The presented material is based upon work supported by the Center for Autonomous Systems and Technologies (CAST) at the Graduate Aerospace Laboratories of the California Institute of Technology (GALCIT).
Crowell, Candice N; Delgado-Romero, Edward A; Mosley, Della V; Huynh, Sophia
2016-08-01
Research on Black sexual health often fails to represent the heterogeneity of Black ethnic groups. For people of Caribbean descent in the USA, ethnicity is a salient cultural factor that influences definitions and experiences of sexual health. Most research on people of Caribbean descent focuses on the relatively high rate of STIs, but sexual health is defined more broadly than STI prevalence. Psychological and emotional indicators and the voice of participants are important to consider when exploring the sexual health of a minority culture. The purpose of this study was to qualitatively explore how heterosexual Black men of Caribbean descent define and understand sexual health for themselves. Eleven men who self-identified as Black, Caribbean and heterosexual participated in three focus groups and were asked to define sexual health, critique behaviours expertly identified as healthy and address what encourages and discourages sexual health in their lives. Findings point to six dimensions of sexual health for heterosexual Black men of Caribbean descent. These include: heterosexually privileged, protective, contextual, interpersonal, cultural and pleasurable dimensions. There were some notable departures from current expert definitions of sexual health. Recommendations for further theory development are provided.
2007 Mars Phoenix Entry, Descent, and Landing Simulation and Modeling Analysis
NASA Technical Reports Server (NTRS)
Prince, Jill L.; Grover, Myron R.; Desai, Prasun N.; Queen, Eric M.
2007-01-01
This viewgraph presentation reviews the entry, descent, and landing of the 2007 Mars Phoenix lander. Aerodynamics characteristics along with Monte Carlo analyses are also presented for launch and landing site opportunities.