Analysis of Online Composite Mirror Descent Algorithm.
Lei, Yunwen; Zhou, Ding-Xuan
2017-03-01
We study the convergence of the online composite mirror descent algorithm, which involves a mirror map to reflect the geometry of the data and a convex objective function consisting of a loss and a regularizer possibly inducing sparsity. Our error analysis provides convergence rates in terms of properties of the strongly convex differentiable mirror map and the objective function. For a class of objective functions with Hölder continuous gradients, the convergence rates of the excess (regularized) risk under polynomially decaying step sizes have the order [Formula: see text] after [Formula: see text] iterates. Our results improve the existing error analysis for the online composite mirror descent algorithm by avoiding averaging and removing boundedness assumptions, and they sharpen the existing convergence rates of the last iterate for online gradient descent without any boundedness assumptions. Our methodology mainly depends on a novel error decomposition in terms of an excess Bregman distance, refined analysis of self-bounding properties of the objective function, and the resulting one-step progress bounds.
Convergence Rates of Finite Difference Stochastic Approximation Algorithms
2016-06-01
The convergence rates of the Kiefer-Wolfowitz algorithm and the mirror descent algorithm are studied under various updating schemes using finite differences as gradient approximations. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite-difference gradient approximations.
On the efficiency of a randomized mirror descent algorithm in online optimization problems
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.
2015-04-01
A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and to propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with a sparse matrix.
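The deterministic baseline that the paper randomizes, mirror descent on the unit simplex with the entropy mirror map (the exponentiated-gradient update), can be sketched in a few lines. The linear toy objective and step size below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def entropic_mirror_descent(subgrad, x0, steps, eta):
    """Classical (deterministic) mirror descent on the probability simplex
    with the entropy mirror map: x <- x * exp(-eta * g), renormalized.
    `subgrad(x)` returns a subgradient of the objective at x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = subgrad(x)
        x = x * np.exp(-eta * g)
        x /= x.sum()          # Bregman "projection" back onto the simplex
    return x

# toy objective: f(x) = <c, x> on the simplex, minimized at the vertex
# corresponding to the smallest entry of c (here, index 1)
c = np.array([3.0, 1.0, 2.0])
x = entropic_mirror_descent(lambda x: c, np.ones(3) / 3, steps=200, eta=0.1)
```

The randomized version of the paper replaces the full subgradient at the projection stage with a single randomly chosen component; the deterministic skeleton above is the reference point for that modification.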
NASA Astrophysics Data System (ADS)
Dong, Bing; Ren, De-Qing; Zhang, Xi
2011-08-01
An adaptive optics (AO) system based on a stochastic parallel gradient descent (SPGD) algorithm is proposed to reduce speckle noise in the optical system of a stellar coronagraph in order to further improve the contrast. The principle of the SPGD algorithm is described briefly and a metric suitable for point-source imaging optimization is given. The feasibility and good performance of the SPGD algorithm are demonstrated on an experimental system featuring a 140-actuator deformable mirror and a Hartmann-Shack wavefront sensor. The SPGD-based AO is then applied to a liquid crystal array (LCA) based coronagraph to improve the contrast. The LCA can modulate the incoming light to generate a pupil apodization mask of any pattern. A circular stepped pattern is used in our preliminary experiment, and the image contrast improves from 10^-3 to 10^-4.5 at an angular distance of 2λ/D after correction by the SPGD-based AO.
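The SPGD principle referenced throughout these abstracts is compact enough to state directly: perturb all control channels in parallel by small random amounts, measure the change in a scalar quality metric, and step the controls in proportion to that change. The quadratic toy metric below is a stand-in assumption for a real image-quality metric, not the coronagraph's.

```python
import numpy as np

def spgd(metric, u, gamma=0.5, sigma=0.1, iters=1500, rng=None):
    """Stochastic parallel gradient descent (ascent form): every control
    channel is perturbed in parallel by a random +/-sigma, and the metric
    difference dJ between the two perturbed states drives the update."""
    rng = rng or np.random.default_rng(0)
    for _ in range(iters):
        delta = sigma * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + delta) - metric(u - delta)
        u = u + gamma * dJ * delta   # ascend the metric
    return u

# toy "system metric": peaked at u = target (stands in for image sharpness)
target = np.array([0.3, -0.2, 0.1])
metric = lambda u: -np.sum((u - target) ** 2)
u_opt = spgd(metric, np.zeros(3))
```

The gain `gamma` and perturbation amplitude `sigma` play exactly the roles the later abstracts describe as needing appropriate choice for convergence.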
Yang, Ping; Ning, Yu; Lei, Xiang; Xu, Bing; Li, Xinyang; Dong, Lizhi; Yan, Hu; Liu, Wenjing; Jiang, Wenhan; Liu, Lei; Wang, Chao; Liang, Xingbo; Tang, Xiaojun
2010-03-29
We present a slab laser amplifier beam cleanup experimental system based on a 39-actuator rectangular piezoelectric deformable mirror. Rather than using a wavefront sensor to measure distortions in the wavefront and then applying a conjugate wavefront to compensate for them, the system uses a stochastic parallel gradient descent algorithm to maximize the power contained within a far-field designated bucket. Experimental results demonstrate that at an output power of 335 W, more than 30% of the energy is concentrated in the 1x diffraction-limited area and the beam quality is greatly enhanced.
Algorithms for accelerated convergence of adaptive PCA.
Chatterjee, C; Kang, Z; Roychowdhury, V P
2000-01-01
We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.
Dynamic metrology and data processing for precision freeform optics fabrication and testing
NASA Astrophysics Data System (ADS)
Aftab, Maham; Trumper, Isaac; Huang, Lei; Choi, Heejoo; Zhao, Wenchuan; Graves, Logan; Oh, Chang Jin; Kim, Dae Wook
2017-06-01
Dynamic metrology holds the key to overcoming several challenging limitations of conventional optical metrology, especially with regards to precision freeform optical elements. We present two dynamic metrology systems: 1) adaptive interferometric null testing; and 2) instantaneous phase shifting deflectometry, along with an overview of a gradient data processing and surface reconstruction technique. The adaptive null testing method, utilizing a deformable mirror, adopts a stochastic parallel gradient descent search algorithm in order to dynamically create a null testing condition for unknown freeform optics. The single-shot deflectometry system implemented on an iPhone uses a multiplexed display pattern to enable dynamic measurements of time-varying optical components or optics in vibration. Experimental data, measurement accuracy / precision, and data processing algorithms are discussed.
Mirror gait retraining for the treatment of patellofemoral pain in female runners
Willy, Richard W.; Scholz, John P.; Davis, Irene S.
2012-01-01
Background Abnormal hip mechanics are often implicated in female runners with patellofemoral pain. We sought to evaluate a simple gait retraining technique, using a full-length mirror, in female runners with patellofemoral pain and abnormal hip mechanics. Transfer of the new motor skill to the untrained tasks of single leg squat and step descent was also evaluated. Methods Ten female runners with patellofemoral pain completed 8 sessions of mirror and verbal feedback on their lower extremity alignment during treadmill running. During the last 4 sessions, mirror and verbal feedback were progressively removed. Hip mechanics were assessed during running gait, a single leg squat, and a step descent, both pre- and post-retraining. Subjects returned to their normal running routines and analyses were repeated at 1 month and 3 months post-retraining. Data were analyzed via repeated measures analysis of variance. Findings Subjects reduced peaks of hip adduction, contralateral pelvic drop, and hip abduction moment during running (P<0.05, effect size=0.69–2.91). Skill transfer to single leg squatting and step descent was noted (P<0.05, effect size=0.91–1.35). At 1 and 3 months post-retraining, most mechanics were maintained in the absence of continued feedback. Subjects reported improvements in pain and function (P<0.05, effect size=3.81–7.61), and these improvements were maintained through 3 months post-retraining. Interpretation Mirror gait retraining was effective in improving mechanics and measures of pain and function. Skill transfer to the untrained tasks of squatting and step descent indicated that a higher level of motor learning had occurred. Extended follow-up is needed to determine the long term efficacy of this treatment. PMID:22917625
A piloted simulator evaluation of a ground-based 4-D descent advisor algorithm
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.; Erzberger, Heinz
1990-01-01
A ground-based, four-dimensional (4D) descent-advisor algorithm is under development at NASA-Ames. The algorithm combines detailed aerodynamic, propulsive, and atmospheric models with an efficient numerical integration scheme to generate 4D descent advisories. The ability of the 4D descent-advisor algorithm to provide adequate control of arrival time for aircraft not equipped with on-board 4D guidance systems is investigated. A piloted simulation was conducted to determine the precision with which the descent advisor could predict the 4D trajectories of typical straight-in descents flown by airline pilots under different wind conditions. The effects of errors in the estimation of wind and initial aircraft weight were also studied. A description of the descent advisor as well as the results of the simulation studies are presented.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1983-01-01
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Vicroy, D. D.; Simmon, D. A.
1985-01-01
A simple, airborne, flight-management descent algorithm was developed and programmed into a small programmable calculator. The algorithm may be operated in either a time mode or a speed mode. The time mode was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The speed mode was designed for planning fuel-conservative descents when time is not a consideration. The descent path for both modes was calculated for a constant descent Mach/airspeed schedule, with considerations given for gross weight, wind, wind gradient, and nonstandard temperature effects. Flight tests, using the algorithm on the programmable calculator, showed that the open-loop guidance could be useful to airline flight crews for planning and executing fuel-conservative descents.
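As a hypothetical illustration of the kind of arithmetic such a calculator-based planner performs (this is not the NASA algorithm, which also models the Mach/airspeed schedule, wind gradient, and nonstandard temperature), the top-of-descent distance for an idealized constant-rate, constant-ground-speed descent is just altitude to lose divided by descent rate, times ground speed:

```python
def top_of_descent_nm(cruise_alt_ft, fix_alt_ft, ground_speed_kt, descent_rate_fpm):
    """Hypothetical illustration: distance (nautical miles) from the
    top-of-descent point to the metering fix for a constant-rate descent.
        time [min]  = altitude to lose / descent rate
        distance    = ground speed * time
    """
    time_min = (cruise_alt_ft - fix_alt_ft) / descent_rate_fpm
    return ground_speed_kt * time_min / 60.0

# lose 25,000 ft at 2,000 ft/min while covering ground at 300 kt:
# 12.5 minutes of descent, started 62.5 NM before the fix
d = top_of_descent_nm(35000, 10000, 300, 2000)
```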
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vicroy, D.D.
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. An explanation and examples of how the algorithm is used, as well as a detailed flow chart and listing of the algorithm, are contained.
NASA Technical Reports Server (NTRS)
Vicroy, D. D.; Knox, C. E.
1983-01-01
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane is described.
NASA Astrophysics Data System (ADS)
Jiao Ling, Lin; Xiaoli, Yin; Huan, Chang; Xiaozhou, Cui; Yi-Lin, Guo; Huan-Yu, Liao; Chun-Yu, Gao; Guohua, Wu; Guang-Yao, Liu; Jin-Kun, Jiang; Qing-Hua, Tian
2018-02-01
Atmospheric turbulence limits the performance of orbital angular momentum-based free-space optical communication (FSO-OAM) systems. To compensate for the phase distortion induced by atmospheric turbulence, wavefront sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper, a new version of SPGD called MZ-SPGD is proposed, which combines the Z-SPGD based on the deformable mirror influence function and the M-SPGD based on the Zernike polynomials. Numerical simulations show that the hybrid method decreases convergence times markedly while achieving the same compensation effect as Z-SPGD and M-SPGD.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1980-01-01
A simple flight management descent algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three dimensional path with terminal area time constraints (four dimensional) for an airplane to make an idle thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithm is described. The results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1979-01-01
A flight management algorithm designed to improve the accuracy of delivering the airplane fuel-efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B-737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
NASA Technical Reports Server (NTRS)
Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.
1986-01-01
The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.
Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj
2012-07-03
We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
Gradient descent learning algorithm overview: a general dynamical systems perspective.
Baldi, P
1995-01-01
Gives a unified treatment of gradient descent learning algorithms for neural networks using a general framework of dynamical systems. This general approach organizes and simplifies all the known algorithms and results which have been originally derived for different problems (fixed point/trajectory learning), for different models (discrete/continuous), for different architectures (forward/recurrent), and using different techniques (backpropagation, variational calculus, adjoint methods, etc.). The general approach can also be applied to derive new algorithms. The author then briefly examines some of the complexity issues and limitations intrinsic to gradient descent learning. Throughout the paper, the author focuses on the problem of trajectory learning.
Fast Optimization for Aircraft Descent and Approach Trajectory
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John
2017-01-01
We address the problem of on-line scheduling of aircraft descent and approach trajectories. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data, and (ii) efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the optimal control toolbox General Pseudospectral Optimal Control Software. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.
A sampling algorithm for segregation analysis
Tier, Bruce; Henshall, John
2001-01-01
Methods for detecting Quantitative Trait Loci (QTL) without markers have generally used iterative peeling algorithms for determining genotype probabilities. These algorithms have considerable shortcomings in complex pedigrees. A Markov chain Monte Carlo (MCMC) method which samples the pedigree of the whole population jointly is described. Simultaneous sampling of the pedigree was achieved by sampling descent graphs using the Metropolis-Hastings algorithm. A descent graph describes the inheritance state of each allele and provides pedigrees guaranteed to be consistent with Mendelian sampling. Sampling descent graphs overcomes most, if not all, of the limitations incurred by iterative peeling algorithms. The algorithm was able to find the QTL in most of the simulated populations. However, when the QTL was not modeled or was not found, its effect was ascribed to the polygenic component. No QTL were detected when none were simulated. PMID:11742631
Regularization Paths for Cox's Proportional Hazards Model via Coordinate Descent.
Simon, Noah; Friedman, Jerome; Hastie, Trevor; Tibshirani, Rob
2011-03-01
We introduce a pathwise algorithm for the Cox proportional hazards model, regularized by convex combinations of ℓ1 and ℓ2 penalties (elastic net). Our algorithm fits via cyclical coordinate descent and employs warm starts to find a solution along a regularization path. We demonstrate the efficacy of our algorithm on real and simulated data sets, finding considerable speedups over competing methods.
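The cyclical coordinate-descent machinery is easiest to see on a plain lasso (squared-error) objective rather than the Cox partial likelihood; the update for each coordinate is a one-dimensional soft-thresholding step, and warm starts simply reuse the previous solution along the penalty path. The synthetic data below is an illustrative assumption.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * max(abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, iters=100):
    """Cyclical coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1.
    Each sweep updates one coefficient at a time via soft-thresholding,
    keeping the residual r = y - Xb up to date."""
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            # inner product with the partial residual (X_j's contribution restored)
            rho = X[:, j] @ r / n + col_sq[j] * b[j]
            b_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - b_new)
            b[j] = b_new
    return b

# synthetic illustration: recover a sparse coefficient vector
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([2.0, 0.0, -1.0])
b = lasso_cd(X, y, lam=0.05)
```

A regularization path would wrap `lasso_cd` in a loop over decreasing `lam`, passing the previous `b` in as the starting point (the warm start the abstract mentions).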
NASA Technical Reports Server (NTRS)
Knox, C. E.; Person, L. H., Jr.
1981-01-01
The NASA developed, implemented, and flight tested a flight management algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control. This algorithm provides a 3D path with time control (4D) for the TCV B-737 airplane to make an idle-thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms are described and flight test results are presented.
Coordinated Beamforming for MISO Interference Channel: Complexity Analysis and Efficient Algorithms
2010-01-01
The cyclic coordinate descent algorithm is also known as the nonlinear Gauss-Seidel iteration [32], and several studies of this type of algorithm exist. It can be shown that the BB (Barzilai-Borwein) gradient projection direction is always a descent direction, and the R-linear convergence of the BB method has been established. Convergence (to a KKT solution) of the inexact pricing algorithm for the MISO interference channel is also of interest, given questions about the convergence of the original pricing algorithm.
NASA Astrophysics Data System (ADS)
Zhou, Pu; Wang, Xiaolin; Li, Xiao; Chen, Zilum; Xu, Xiaojun; Liu, Zejin
2009-10-01
Coherent summation of fibre laser beams, which can be scaled to a relatively large number of elements, is simulated by using the stochastic parallel gradient descent (SPGD) algorithm. The applicability of this algorithm for coherent summation is analysed and its optimisation parameters and bandwidth limitations are studied.
The q-G method : A q-version of the Steepest Descent method for global optimization.
Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M
2015-01-01
In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and it is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions, and compared its performance with 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has a great potential for solving multimodal optimization problems.
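Jackson's q-derivative underlying the q-gradient can be written down directly. The sketch below fixes a single q rather than implementing the paper's scheme of three free parameters and a gradual shift from global exploration to local exploitation (an assumption made for brevity); it recovers plain steepest descent only in the limit q -> 1.

```python
import numpy as np

def q_gradient(f, x, q):
    """Componentwise q-gradient built from Jackson's derivative:
        D_q f(x)_i = (f(..., q*x_i, ...) - f(x)) / ((q - 1) * x_i)
    Requires x_i != 0; reduces to the classical gradient as q -> 1."""
    fx = f(x)
    g = np.empty_like(x)
    for i in range(len(x)):
        xq = x.copy()
        xq[i] = q * x[i]
        g[i] = (f(xq) - fx) / ((q - 1) * x[i])
    return g

def q_g_descent(f, x0, q=1.5, eta=0.1, iters=100):
    """q-version of steepest descent: step along the negative q-gradient."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = x - eta * q_gradient(f, x, q)
    return x

# on f(x) = ||x||^2 the q-gradient is (q+1)*x, so iterates contract toward 0
x_min = q_g_descent(lambda x: np.sum(x ** 2), [2.0, -1.5])
```

With q far from 1 the search direction at a point can differ substantially from the local gradient, which is the mechanism the abstract credits for escaping local minima.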
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.
1986-01-01
The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommend the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and the guidance and steering function are described.
Sobel, E.; Lange, K.
1996-01-01
The introduction of stochastic methods in pedigree analysis has enabled geneticists to tackle computations intractable by standard deterministic methods. Until now these stochastic techniques have worked by running a Markov chain on the set of genetic descent states of a pedigree. Each descent state specifies the paths of gene flow in the pedigree and the founder alleles dropped down each path. The current paper follows up on a suggestion by Elizabeth Thompson that genetic descent graphs offer a more appropriate space for executing a Markov chain. A descent graph specifies the paths of gene flow but not the particular founder alleles traveling down the paths. This paper explores algorithms for implementing Thompson's suggestion for codominant markers in the context of automatic haplotyping, estimating location scores, and computing gene-clustering statistics for robust linkage analysis. Realistic numerical examples demonstrate the feasibility of the algorithms. PMID:8651310
Yamazoe, Kenji; Mochi, Iacopo; Goldberg, Kenneth A.
2014-12-01
The wavefront retrieval by gradient descent algorithm that is typically applied to coherent or incoherent imaging is extended to retrieve a wavefront from a series of through-focus images by partially coherent illumination. For accurate retrieval, we modeled partial coherence as well as object transmittance into the gradient descent algorithm. However, this modeling increases the computation time due to the complexity of partially coherent imaging simulation that is repeatedly used in the optimization loop. To accelerate the computation, we incorporate not only the Fourier transform but also an eigenfunction decomposition of the image. As a demonstration, the extended algorithm is applied to retrieve a field-dependent wavefront of a microscope operated at extreme ultraviolet wavelength (13.4 nm). The retrieved wavefront qualitatively matches the expected characteristics of the lens design.
2017-01-01
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as real-number m-dimensional vectors, the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier for a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is run for more epochs using the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine a solution that is close to the global optimum. We conducted several experiments on hand-written character and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final solutions of the PSO algorithm, used to construct an ensemble model and individual classifiers, outperform the random approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks. PMID:29236718
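The global-search component of the scheme, PSO tracking personal bests (pbest) and a swarm best (gbest), can be sketched on a toy objective standing in for validation loss. The DNN training inner loop is omitted, and all parameter values here are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization (minimization). Each particle is
    pulled toward its personal best (pbest) and the swarm best (gbest)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # keep particles in bounds
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# toy surrogate for "validation loss over a 2-D hyperparameter space"
gbest, val = pso(lambda p: np.sum((p - 0.25) ** 2),
                 (np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
```

In the hybrid scheme the inner objective `f` would itself run a few epochs of gradient-descent training and return the resulting loss, which is what makes each PSO evaluation expensive.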
Adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope
NASA Astrophysics Data System (ADS)
Ma, Haotong; Hu, Haojun; Xie, Wenke; Zhao, Haichuan; Xu, Xiaojun; Chen, Jinbao
2013-08-01
We demonstrate adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope system based on the stochastic parallel gradient descent (SPGD) algorithm and dual phase-only liquid crystal spatial light modulators (LC-SLMs). Adaptive pre-compensation of the wavefront of the projected laser beam at the transmitter telescope is chosen to improve the power coupling efficiency. One phase-only LC-SLM adaptively optimizes the phase distribution of the projected laser beam and the other generates a turbulence phase screen. The intensity distributions of the dark hollow beam after passing through the turbulent atmosphere with and without adaptive beam shaping are analyzed in detail. The influence of propagation distance and aperture size of the Cassegrain telescope on coupling efficiency is investigated theoretically and experimentally. These studies show that the power coupling can be significantly improved by adaptive beam shaping. The technique can be used in optical communication, deep space optical communication and relay mirrors.
NASA Astrophysics Data System (ADS)
Yang, Ping; Yang, Ruo fu; Shen, Feng; Ao, Mingwu; Jiang, Wenhan
2009-05-01
Coherent combination is one of the most promising ways to realize high-power laser output. A three-laser-beam coherent combination system based on adaptive optics (AO) techniques has been set up in our laboratory. In this system, three 1064 nm laser beams are placed side-by-side and compressed by two reflective mirrors. An active segmented deformable mirror (DM) is used to compensate the optical path difference (OPD) among the three laser beams. The beams are overlapped onto a 2900 Hz CCD camera to form an interference pattern, and the peak intensity of the interference pattern is taken as the cost function to be optimized by a stochastic parallel gradient descent (SPGD) algorithm. The SPGD algorithm is realized on an RT-Linux dual-core industrial computer. A series of experiments has been accomplished, and experimental results show that both static distorted aberrations in the beams and active distorted aberrations (introduced by a hot iron, with a frequency of about 5 Hz) can be compensated successfully when the gain coefficients and the perturbation amplitude of SPGD are chosen appropriately, so that the three beams can be well combined. For controlling the phase of fiber lasers, the phase characteristics of beams passing through a Yb-doped dual-clad fiber amplifier are measured by investigating the interference pattern under different output powers. The frequency of phase fluctuation is evaluated by analyzing the fluctuation of power within a 90 μm aperture of the far-field focal spot. Experimental results show that the phase fluctuation frequencies of a laser beam transmitted through the fiber amplifier are mainly in the range of 100-1500 Hz. As a result, to control the phase fluctuation of beams passing through the fiber amplifier, the bandwidth of any potential phase control scheme must be greater than 1.5 kHz.
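The SPGD update used in the two beam-control abstracts above follows a simple pattern: dither all control channels simultaneously, measure the change in a scalar metric, and step each channel in proportion to that change times its own dither. A minimal sketch, with a toy quadratic metric standing in for the measured peak intensity of the interference pattern; the gain and dither amplitude are illustrative, not values from either paper:

```python
import random

def spgd_maximize(J, u, gain=0.5, sigma=0.1, iters=200, seed=0):
    """Stochastic parallel gradient descent (here in ascent form): all
    control channels are dithered simultaneously by random +/- sigma,
    and the measured two-sided change in the metric J drives the update."""
    rng = random.Random(seed)
    u = list(u)
    for _ in range(iters):
        delta = [sigma if rng.random() < 0.5 else -sigma for _ in u]
        j_plus = J([ui + di for ui, di in zip(u, delta)])
        j_minus = J([ui - di for ui, di in zip(u, delta)])
        dj = j_plus - j_minus
        u = [ui + gain * dj * di for ui, di in zip(u, delta)]
    return u

# Toy metric standing in for the measured peak intensity of the
# interference pattern: maximal when the control phases match `target`.
target = [0.3, -0.7, 1.1]
J = lambda u: -sum((ui - ti) ** 2 for ui, ti in zip(u, target))
u_opt = spgd_maximize(J, [0.0, 0.0, 0.0])
```

The appeal in hardware is that only the scalar metric needs to be measured; no wavefront sensor or gradient model is required.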
RES: Regularized Stochastic BFGS Algorithm
NASA Astrophysics Data System (ADS)
Mokhtari, Aryan; Ribeiro, Alejandro
2014-12-01
RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
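The structure of the method can be sketched on a toy noisy quadratic. This is a schematic simplification, not RES as published: the explicit eigenvalue regularization of RES is replaced here by a curvature check on the BFGS update plus a damped linear solve, which only gestures at the same safeguard.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.diag([1.0, 4.0, 9.0])                  # toy strongly convex objective 0.5 w^T A w
noisy_grad = lambda w: A @ w + 0.01 * rng.standard_normal(3)

def bfgs_update(B, s, y):
    """Standard BFGS matrix update applied to stochastic gradient
    differences; the curvature check is a crude stand-in for RES's
    explicit eigenvalue regularization."""
    sy = s @ y
    if sy > 1e-10:
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
    return B

w = np.array([2.0, -1.0, 1.0])
B = 10.0 * np.eye(3)                          # conservative initial curvature estimate
f0 = 0.5 * w @ A @ w
for t in range(200):
    g = noisy_grad(w)
    d = -np.linalg.solve(B + 0.1 * np.eye(3), g)  # regularized quasi-Newton direction
    s = (2.0 / (1 + t)) * d                       # diminishing stochastic step size
    w_new = w + s
    B = bfgs_update(B, s, noisy_grad(w_new) - g)
    w = w_new
```

The point of the sketch is the division of labor: stochastic gradients supply both the descent direction and the curvature pairs (s, y), while the regularization keeps the curvature estimate well-conditioned despite gradient noise.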
Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method
NASA Astrophysics Data System (ADS)
Sun, Yong; Meng, Zhaohai; Li, Fengting
2018-03-01
Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. The inversion is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of FTG data inversion. Compared with the conventional gradient method, the steepest descent gradient algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm has clear advantages. Simulated and field FTG data were used to show the application value of this new fast inversion method.
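The abstract does not specify the exact non-monotone scheme; the Barzilai-Borwein step size is the canonical non-monotone gradient method, so a sketch of it illustrates the idea. The objective below is a toy quadratic misfit standing in for the regularized FTG inversion objective, not the paper's formulation.

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=50, step0=1e-3):
    """Gradient descent with Barzilai-Borwein step sizes, the classic
    non-monotone gradient scheme: the objective need not decrease at
    every iterate, yet convergence is typically much faster than
    steepest descent with a line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    x_new = x - step0 * g            # tiny first step to bootstrap (s, y)
    for _ in range(iters):
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # BB step: least-squares fit of a scalar inverse curvature to (s, y)
        alpha = (s @ s) / (s @ y) if abs(s @ y) > 1e-12 else step0
        x, g = x_new, g_new
        x_new = x - alpha * g
    return x_new

# Toy regularized least-squares misfit standing in for the FTG objective
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b           # gradient of 0.5 x^T A x - b^T x
x = bb_gradient_descent(grad, [0.0, 0.0])
```

Each iteration costs one gradient evaluation and no line search, which is the source of the speedup over conventional steepest descent cited in the abstract.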
Feature Clustering for Accelerating Parallel Coordinate Descent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Chad; Tewari, Ambuj; Halappanavar, Mahantesh
2012-12-06
We demonstrate an approach for accelerating calculation of the regularization path for L1 sparse logistic regression problems. We show the benefit of feature clustering as a preconditioning step for parallel block-greedy coordinate descent algorithms.
Enhancements on the Convex Programming Based Powered Descent Guidance Algorithm for Mars Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, Lars; Scharf, Daniel P.; Wolf, Aron
2008-01-01
In this paper, we present enhancements on the powered descent guidance algorithm developed for Mars pinpoint landing. The guidance algorithm solves the powered descent minimum fuel trajectory optimization problem via a direct numerical method. Our main contribution is to formulate the trajectory optimization problem, which has nonconvex control constraints, as a finite dimensional convex optimization problem, specifically as a finite dimensional second order cone programming (SOCP) problem. SOCP is a subclass of convex programming, and there are efficient SOCP solvers with deterministic convergence properties. Hence, the resulting guidance algorithm can potentially be implemented onboard a spacecraft for real-time applications. Particularly, this paper discusses the algorithmic improvements obtained by: (i) Using an efficient approach to choose the optimal time-of-flight; (ii) Using a computationally inexpensive way to detect the feasibility/infeasibility of the problem due to the thrust-to-weight constraint; (iii) Incorporating the rotation rate of the planet into the problem formulation; (iv) Developing additional constraints on the position and velocity to guarantee no-subsurface flight between the time samples of the temporal discretization; (v) Developing a fuel-limited targeting algorithm; (vi) Presenting an initial result on developing an onboard table-lookup method to obtain almost fuel-optimal solutions in real time.
MER-DIMES: a planetary landing application of computer vision
NASA Technical Reports Server (NTRS)
Cheng, Yang; Johnson, Andrew; Matthies, Larry
2005-01-01
During the Mars Exploration Rovers (MER) landings, the Descent Image Motion Estimation System (DIMES) was used for horizontal velocity estimation. The DIMES algorithm combines measurements from a descent camera, a radar altimeter and an inertial measurement unit. To deal with large changes in scale and orientation between descent images, the algorithm uses altitude and attitude measurements to rectify the image data to a level ground plane. Feature selection and tracking are employed in the rectified data to compute the horizontal motion between images. Differences of motion estimates are then compared to inertial measurements to verify correct feature tracking. DIMES combines sensor data from multiple sources in a novel way to create a low-cost, robust and computationally efficient velocity estimation solution, and DIMES is the first use of computer vision to control a spacecraft during planetary landing. In this paper, the detailed implementation of the DIMES algorithm and the results from the two landings on Mars are presented.
Comparative analysis of algorithms for lunar landing control
NASA Astrophysics Data System (ADS)
Zhukov, B. I.; Likhachev, V. N.; Sazonov, V. V.; Sikharulidze, Yu. G.; Tuchin, A. G.; Tuchin, D. A.; Fedotov, V. P.; Yaroshevskii, V. S.
2015-11-01
For the descent from the pericenter of a prelanding circumlunar orbit, a comparison of three algorithms for the control of lander motion is performed. These algorithms use various combinations of terminal and programmed control in a trajectory comprising three parts: main braking, precision braking, and descent with constant velocity. In the first approximation, autonomous navigational measurements are taken into account, and an estimate of the disturbances generated by movement of the fuel in the tanks is obtained. Estimates of the accuracy of landing placement, fuel consumption, and satisfaction of the conditions for safe lunar landing are obtained.
Air-Traffic Controllers Evaluate The Descent Advisor
NASA Technical Reports Server (NTRS)
Tobias, Leonard; Volckers, Uwe; Erzberger, Heinz
1992-01-01
Report describes study of Descent Advisor algorithm: software automation aid intended to assist air-traffic controllers in spacing traffic and meeting specified times of arrival. Based partly on mathematical models of weather conditions and performances of aircraft, it generates suggested clearances, including top-of-descent points and speed-profile data to attain objectives. Study focused on operational characteristics with specific attention to how it can be used for prediction, spacing, and metering.
Chakrabartty, Shantanu; Shaga, Ravi K; Aono, Kenji
2013-04-01
Analog circuits that are calibrated using digital-to-analog converters (DACs) use a digital signal processor-based algorithm for real-time adaptation and programming of system parameters. In this paper, we first show that this conventional framework for adaptation yields suboptimal calibration properties because of artifacts introduced by quantization noise. We then propose a novel online stochastic optimization algorithm called noise-shaping or ΣΔ gradient descent, which can shape the quantization noise out of the frequency regions spanning the parameter adaptation trajectories. As a result, the proposed algorithms demonstrate superior parameter search properties compared to floating-point gradient methods and better convergence properties than conventional quantized gradient methods. In the second part of this paper, we apply the ΣΔ gradient descent algorithm to two examples of real-time digital calibration: 1) balancing and tracking of bias currents, and 2) frequency calibration of a band-pass Gm-C biquad filter biased in weak inversion. For each of these examples, the circuits have been prototyped in a 0.5-μm complementary metal-oxide-semiconductor process, and we demonstrate that the proposed algorithm is able to find the optimal solution even in the presence of spurious local minima, which are introduced by the nonlinear and non-monotonic response of calibration DACs.
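A minimal sketch of the noise-shaping idea: each parameter update is quantized to DAC resolution, and the quantization error is fed back into the next update, as in a first-order sigma-delta loop, so the error averages out along the slow adaptation trajectory. The objective, learning rate, and quantization level below are illustrative, not taken from the paper.

```python
def sigma_delta_gd(grad, w0, lr=0.05, level=0.1, iters=400):
    """Gradient descent in which each parameter update is quantized to
    DAC-level resolution; a first-order sigma-delta loop feeds the
    quantization error back so it is shaped out of the adaptation band."""
    w = list(w0)
    err = [0.0] * len(w)                       # accumulated quantization error
    for _ in range(iters):
        g = grad(w)
        for i in range(len(w)):
            desired = -lr * g[i] + err[i]      # ideal update plus fed-back error
            q = level * round(desired / level) # quantized (DAC-resolution) update
            err[i] = desired - q               # error feedback (noise shaping)
            w[i] += q
    return w

# Toy calibration objective: drive two parameters toward a target setting.
target = [0.33, -0.57]
grad = lambda w: [2 * (wi - ti) for wi, ti in zip(w, target)]
w = sigma_delta_gd(grad, [0.0, 0.0])
```

Because the parameters can only move in DAC-sized steps, the best achievable steady state hovers within one quantization level of the optimum; the error feedback ensures the hovering is unbiased rather than stuck.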
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
Smart-Divert Powered Descent Guidance to Avoid the Backshell Landing Dispersion Ellipse
NASA Technical Reports Server (NTRS)
Carson, John M.; Acikmese, Behcet
2013-01-01
A smart-divert capability has been added into the Powered Descent Guidance (PDG) software originally developed for Mars pinpoint and precision landing. The smart-divert algorithm accounts for the landing dispersions of the entry backshell, which separates from the lander vehicle at the end of the parachute descent phase and prior to powered descent. The smart-divert PDG algorithm utilizes the onboard fuel and vehicle thrust vectoring to mitigate landing error in an intelligent way: ensuring that the lander touches down with minimum-fuel usage at the minimum distance from the desired landing location that also avoids impact by the descending backshell. The smart-divert PDG software implements a computationally efficient, convex formulation of the powered-descent guidance problem to provide pinpoint or precision-landing guidance solutions that are fuel-optimal and satisfy physical thrust bound and pointing constraints, as well as position and speed constraints. The initial smart-divert implementation enforced a lateral-divert corridor parallel to the ground velocity vector; this was based on guidance requirements for MSL (Mars Science Laboratory) landings. This initial method was overly conservative since the divert corridor was infinite in the down-range direction despite the backshell landing inside a calculable dispersion ellipse. Basing the divert constraint instead on a local tangent to the backshell dispersion ellipse in the direction of the desired landing site provides a far less conservative constraint. The resulting enhanced smart-divert PDG algorithm avoids impact with the descending backshell and has reduced conservatism.
NASA Astrophysics Data System (ADS)
Roggemann, M.; Soehnel, G.; Archer, G.
Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used for compensating these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation, and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wave front reconstructors and post detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed loop adaptive optics system using a conventional continuous facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles ranging up to 70 degrees. We have also simulated satellite images, and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.
Development of an analytical guidance algorithm for lunar descent
NASA Astrophysics Data System (ADS)
Chomel, Christina Tvrdik
In recent years, NASA has indicated a desire to return humans to the moon. With NASA planning manned missions within the next couple of decades, the concept development for these lunar vehicles has begun. The guidance, navigation, and control (GN&C) computer programs that will perform the function of safely landing a spacecraft on the moon are part of that development. The lunar descent guidance algorithm takes the horizontally oriented spacecraft from orbital speeds hundreds of kilometers from the desired landing point to the landing point at an almost vertical orientation and very low speed. Existing lunar descent GN&C algorithms date back to the Apollo era, with little work available for implementation since then. Though these algorithms met the criteria of the 1960s, they are cumbersome today. At the basis of the lunar descent phase are two elements: the targeting, which generates a reference trajectory, and the real-time guidance, which forces the spacecraft to fly that trajectory. The Apollo algorithm utilizes a complex, iterative, numerical optimization scheme for developing the reference trajectory. The real-time guidance utilizes this reference trajectory in the form of a quartic, rather than a more general format, to force the real-time trajectory errors to converge to zero; however, there exist no guarantees under any conditions for this convergence. The proposed algorithm implements a purely analytical targeting algorithm used to generate two-dimensional trajectories "on-the-fly" or to retarget the spacecraft to another landing site altogether. It is based on the analytical solutions to the equations for speed, downrange, and altitude as a function of flight path angle and assumes two constant thrust acceleration curves.
The proposed real-time guidance algorithm has at its basis the three-dimensional non-linear equations of motion and a control law that is proven to converge under certain conditions through Lyapunov analysis to a reference trajectory formatted as a function of downrange, altitude, speed, and flight path angle. The two elements of the guidance algorithm are joined in Monte Carlo analysis to prove their robustness to initial state dispersions and mass and thrust errors. The robustness of the retargeting algorithm is also demonstrated.
Magnetotelluric inversion via reverse time migration algorithm of seismic data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ha, Taeyoung; Shin, Changsoo
2007-07-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.
NASA Astrophysics Data System (ADS)
Huang, Lei; Zhou, Chenlu; Gong, Mali; Ma, Xingkun; Bian, Qi
2016-07-01
A deformable mirror (DM) is a widely used wavefront corrector in adaptive optics systems, especially in astronomical, imaging and laser optics. A new DM structure, the 3D DM, is proposed, which has removable actuators and can correct different aberrations with different actuator arrangements. A 3D DM consists of several reflection mirrors; every mirror has a single actuator and is independent of the others. Two actuator arrangement algorithms are compared: the random disturbance algorithm (RDA) and the global arrangement algorithm (GAA). The correction effects of the two algorithms are analyzed and compared through numerical simulation. The simulation results show that a 3D DM with removable actuators can clearly improve the correction effects.
A pipeline leakage locating method based on the gradient descent algorithm
NASA Astrophysics Data System (ADS)
Li, Yulong; Yang, Fan; Ni, Na
2018-04-01
A pipeline leakage locating method based on the gradient descent algorithm is proposed in this paper. The method has low computational complexity, which makes it suitable for practical application. We built an experimental environment in a real underground pipeline network, and a large amount of real data was gathered over the past three months. Every leak point was verified by excavation. Results show that the positioning error is within 0.4 meters, the false alarm and missed alarm rates are both under 20%, and the computation time is under 5 seconds.
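The abstract gives no model details. Assuming the standard two-sensor negative-pressure-wave setup, where a leak at distance x from sensor A on a pipe of length L produces an arrival-time difference of (2x - L)/v between the end sensors, the gradient descent localization can be sketched as follows (the pipe length, wave speed, and learning rate are illustrative):

```python
def locate_leak(dt_measured, L, v, lr=2e5, iters=100):
    """Gradient descent on the squared arrival-time-difference residual.
    A leak at distance x from sensor A on a pipe of length L predicts a
    time difference dt = (x - (L - x)) / v = (2x - L) / v between the
    two end sensors (v = wave propagation speed in m/s)."""
    x = L / 2.0                                # start from the midpoint
    for _ in range(iters):
        resid = (2.0 * x - L) / v - dt_measured
        x -= lr * resid * (2.0 / v)            # lr * d(0.5 * resid^2)/dx
    return x

# Leak 120 m from sensor A on a 200 m pipe, wave speed 1000 m/s:
# measured time difference = (2*120 - 200) / 1000 = 0.04 s.
x_hat = locate_leak(0.04, L=200.0, v=1000.0)
```

With this one-parameter model the minimizer also has a closed form, x = (v·dt + L)/2; the iterative form is what generalizes to richer models (unknown wave speed, multiple sensor pairs) while keeping the per-step cost low.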
Nonuniformity correction for an infrared focal plane array based on diamond search block matching.
Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian
2016-05-01
In scene-based nonuniformity correction algorithms, artificial ghosting and image blurring severely degrade the correction quality. In this paper, an improved algorithm based on the diamond search block matching algorithm and an adaptive learning rate is proposed. First, accurate transform pairs between two adjacent frames are estimated by the diamond search block matching algorithm. Then, based on the error between the corresponding transform pairs, the gradient descent algorithm is applied to update the correction parameters. During gradient descent, the local standard deviation and a threshold are utilized to control the learning rate and avoid the accumulation of matching error. Finally, nonuniformity correction is realized by a linear model with the updated correction parameters. The performance of the proposed algorithm is thoroughly studied on four real infrared image sequences. Experimental results indicate that the proposed algorithm can reduce the nonuniformity with fewer ghosting artifacts in moving areas and can also overcome the problem of image blurring in static areas.
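The parameter update described above is, in essence, an LMS-style gradient step on the per-pixel linear correction model. A minimal sketch with a single toy pixel; the block matching and the local-standard-deviation control of the learning rate are omitted, and the data values are invented for illustration.

```python
def nuc_update(gain, offset, raw, matched, lr=0.5):
    """One gradient-descent update of the per-pixel linear correction
    y = gain * raw + offset; the error against the motion-compensated
    matched pixel from the adjacent frame drives the update."""
    e = gain * raw + offset - matched
    gain -= lr * e * raw     # d(e^2/2)/d gain
    offset -= lr * e         # d(e^2/2)/d offset
    return gain, offset

# Toy pixel whose true (matched) response is matched = 1.2 * raw + 0.5,
# i.e. the pixel needs gain 1.2 and offset 0.5 to agree with its neighbors.
pairs = [(0.2, 0.74), (0.8, 1.46), (0.5, 1.10)]
gain, offset = 1.0, 0.0
for raw, matched in pairs * 200:
    gain, offset = nuc_update(gain, offset, raw, matched)
```

In the full algorithm this update runs per pixel, and the learning rate lr is throttled in low-texture regions to keep block-matching errors from accumulating into the correction parameters.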
Multidither Adaptive Algorithms.
1977-01-01
(Scanned report; only table-of-contents fragments survive, covering deformable mirror mechanical properties, deformable mirror design and construction, influence-function profiles of a beryllium mirror, and the RADC mirror faceplate influence function.)
Trajectory Guidance for Mars Robotic Precursors: Aerocapture, Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Sostaric, Ronald R.; Zumwalt, Carlie; Garcia-Llama, Eduardo; Powell, Richard; Shidner, Jeremy
2011-01-01
Future crewed missions to Mars require improvements in landed mass capability beyond that which is possible using state-of-the-art Mars Entry, Descent, and Landing (EDL) systems. Current systems are capable of an estimated maximum landed mass of 1-1.5 metric tons (MT), while human Mars studies require 20-40 MT. A set of technologies were investigated by the EDL Systems Analysis (SA) project to assess the performance of candidate EDL architectures. A single architecture was selected for the design of a robotic precursor mission, entitled Exploration Feed Forward (EFF), whose objective is to demonstrate these technologies. In particular, inflatable aerodynamic decelerators (IADs) and supersonic retro-propulsion (SRP) have been shown to have the greatest mass benefit and extensibility to future exploration missions. In order to evaluate these technologies and develop the mission, candidate guidance algorithms have been coded into the simulation for the purposes of studying system performance. These guidance algorithms include aerocapture, entry, and powered descent. The performance of the algorithms for each of these phases in the presence of dispersions has been assessed using a Monte Carlo technique.
Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes
NASA Technical Reports Server (NTRS)
Spechler, Joshua A.; Hoppe, Daniel J.; Sigrist, Norbert; Shi, Fang; Seo, Byoung-Joon; Bikkannavar, Siddarayappa A.
2013-01-01
Segment mirror phasing, a critical step of segment mirror alignment, requires the ability to sense and correct the relative pistons between segments, from up to a few hundred microns down to a fraction of a wavelength, in order to bring the mirror system to its full diffraction capability. When sampling the aperture of a telescope, using auto-collimating flats (ACFs) is more economical. The performance of a telescope with a segmented primary mirror strongly depends on how well those primary mirror segments can be phased. One process for phasing primary mirror segments in the axial piston direction is dispersed fringe sensing (DFS), and DFS technology can also be used to co-phase the ACFs. DFS is essentially a signal fitting and processing operation and an elegant method of coarse phasing segmented mirrors. DFS accuracy depends on careful calibration of the system as well as other factors such as internal optical alignment, system wavefront errors, and detector quality. Novel improvements to the algorithm have led to substantial enhancements in DFS performance. The Advanced Dispersed Fringe Sensing (ADFS) algorithm is designed to reduce the sensitivity to calibration errors by determining the optimal fringe extraction line. Applying an angular extraction-line dithering procedure and combining this dithering process with an error function, while minimizing the phase term of the fitted signal, defines in essence the ADFS algorithm.
Simulation Test Of Descent Advisor
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.
1991-01-01
Report describes piloted-simulation test of Descent Advisor (DA), subsystem of larger automation system being developed to assist human air-traffic controllers and pilots. Focuses on results of piloted simulation, in which airline crews executed controller-issued descent advisories along standard curved-path arrival routes. Crews able to achieve arrival-time precision of plus or minus 20 seconds at metering fix. Analysis of errors generated in turns resulted in further enhancements of algorithm to increase accuracies of its predicted trajectories. Evaluations by pilots indicate general support for DA concept and provide specific recommendations for improvement.
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
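The sum-of-outer-products block coordinate descent can be sketched as follows. This is a schematic illustration of the closed-form updates the abstract refers to, with hard thresholding as the sparse-coefficient step (the closed-form solution for an l0 penalty) and a normalized least-squares step for each atom; the data here are toy sparse rank-one components rather than the paper's image patches.

```python
import numpy as np

def soup_dil(Y, J=3, lam=0.1, iters=20, seed=0):
    """Schematic sum-of-outer-products dictionary learning: approximate
    Y by sum_j d_j c_j^T, updating one rank-one pair (d_j, c_j) at a
    time by block coordinate descent, each update in closed form."""
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    D = rng.standard_normal((n, J))
    D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
    C = np.zeros((J, m))
    for _ in range(iters):
        for j in range(J):
            R = Y - D @ C + np.outer(D[:, j], C[j])  # residual without atom j
            c = R.T @ D[:, j]                        # dense coefficients
            c[np.abs(c) < lam] = 0.0                 # hard threshold (sparse step)
            d = R @ c                                # least-squares atom direction
            if np.linalg.norm(d) > 1e-12:
                D[:, j] = d / np.linalg.norm(d)
            C[j] = c
    return D, C

# Toy data: two sparse rank-one components plus a little noise.
rng = np.random.default_rng(1)
u = rng.standard_normal((8, 2))
u /= np.linalg.norm(u, axis=0)
coef = np.zeros((2, 30))
coef[0, :10] = 2.0
coef[1, 15:25] = -3.0
Y = u @ coef + 0.01 * rng.standard_normal((8, 30))
D, C = soup_dil(Y)
rel_err = np.linalg.norm(Y - D @ C) / np.linalg.norm(Y)
```

Each inner update touches only one rank-one pair against the current residual, which is what makes the per-iteration cost low compared with the NP-hard joint sparse coding step of conventional dictionary learning.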
[Motion control of moving mirror based on fixed-mirror adjustment in FTIR spectrometer].
Li, Zhong-bing; Xu, Xian-ze; Le, Yi; Xu, Feng-qiu; Li, Jun-wei
2012-08-01
The uniform motion of the moving mirror, the only constantly moving part in an FTIR spectrometer, and the alignment of the fixed mirror play a key role in the instrument: they affect the interference effect and the quality of the spectrogram, and may directly restrict the precision and resolution of the instrument. The present article focuses on the uniform motion of the moving mirror and the alignment of the fixed mirror. To improve the FTIR spectrometer, a maglev support system was designed for the moving mirror, and phase detection technology was adopted to adjust the tilt angle between the moving mirror and the fixed mirror. This paper also introduces an improved fuzzy PID control algorithm to obtain an accurate moving-mirror speed, realizing the control strategy in both hardware design and algorithm. The results show that the moving-mirror motion control system achieves sufficient accuracy and real-time performance, ensuring the uniform motion of the moving mirror and the alignment of the fixed mirror.
Design of automation tools for management of descent traffic
NASA Technical Reports Server (NTRS)
Erzberger, Heinz; Nedell, William
1988-01-01
The design of an automated air traffic control system based on a hierarchy of advisory tools for controllers is described. Compatibility of the tools with the human controller, a key objective of the design, is achieved by a judicious selection of tasks to be automated and careful attention to the design of the controller system interface. The design comprises three interconnected subsystems referred to as the Traffic Management Advisor, the Descent Advisor, and the Final Approach Spacing Tool. Each of these subsystems provides a collection of tools for specific controller positions and tasks. This paper focuses primarily on the Descent Advisor which provides automation tools for managing descent traffic. The algorithms, automation modes, and graphical interfaces incorporated in the design are described. Information generated by the Descent Advisor tools is integrated into a plan view traffic display consisting of a high-resolution color monitor. Estimated arrival times of aircraft are presented graphically on a time line, which is also used interactively in combination with a mouse input device to select and schedule arrival times. Other graphical markers indicate the location of the fuel-optimum top-of-descent point and the predicted separation distances of aircraft at a designated time-control point. Computer generated advisories provide speed and descent clearances which the controller can issue to aircraft to help them arrive at the feeder gate at the scheduled times or with specified separation distances. Two types of horizontal guidance modes, selectable by the controller, provide markers for managing the horizontal flightpaths of aircraft under various conditions. The entire system consisting of descent advisor algorithm, a library of aircraft performance models, national airspace system data bases, and interactive display software has been implemented on a workstation made by Sun Microsystems, Inc. 
It is planned to use this configuration in operational evaluations at an en route center.
Dang, C; Xu, L
2001-03-01
In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
Accelerating IMRT optimization by voxel sampling
NASA Astrophysics Data System (ADS)
Martin, Benjamin C.; Bortfeld, Thomas R.; Castañon, David A.
2007-12-01
This paper presents a new method for accelerating intensity-modulated radiation therapy (IMRT) optimization using voxel sampling. Rather than calculating the dose to the entire patient at each step in the optimization, the dose is only calculated for some randomly selected voxels. Those voxels are then used to calculate estimates of the objective and gradient which are used in a randomized version of a steepest descent algorithm. By selecting different voxels on each step, we are able to find an optimal solution to the full problem. We also present an algorithm to automatically choose the best sampling rate for each structure within the patient during the optimization. Seeking further improvements, we experimented with several other gradient-based optimization algorithms and found that the delta-bar-delta algorithm performs well despite the randomness. Overall, we were able to achieve approximately an order of magnitude speedup on our test case as compared to steepest descent.
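The voxel-sampling idea above (estimate the objective and gradient from a random subset of voxels at each step of a steepest descent scheme) can be sketched for an illustrative least-squares "dose" objective; the function and parameter names here are hypothetical, not from the paper:

```python
import numpy as np

def sampled_descent(A, target, x0, sample_frac=0.2, step=0.01, iters=500, seed=0):
    """Steepest descent on f(x) = ||A x - target||^2 / 2, estimating the
    gradient at each step from a random subset of rows ("voxels")."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    m = A.shape[0]
    k = max(1, int(sample_frac * m))
    for _ in range(iters):
        idx = rng.choice(m, size=k, replace=False)  # sample some voxels
        r = A[idx] @ x - target[idx]                # residual on the sample
        g = (m / k) * (A[idx].T @ r)                # unbiased gradient estimate
        x -= step * g
    return x
```

Because the sampled gradient is unbiased, changing which voxels are selected on each step still drives the iterates toward the full-problem optimum.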
NASA Technical Reports Server (NTRS)
Cheng, Yang
2007-01-01
This viewgraph presentation reviews the use of the Descent Image Motion Estimation System (DIMES) for the descent of a spacecraft onto the surface of Mars. In the past this system was used to assist in the landing of the MER spacecraft. The overall algorithm is reviewed, and views of the hardware and views from Spirit's descent are shown. On Spirit, had DIMES not been used, the impact velocity would have been at the limit of the airbag capability and Spirit may have bounced into Endurance Crater. By using DIMES, the velocity was reduced to well within the bounds of the airbag performance and Spirit arrived safely at Mars. Views from Opportunity's descent are also shown. The system to avoid and detect hazards is reviewed next. Landmark-based spacecraft pinpoint landing is also reviewed. A cartoon version of a pinpoint landing and the various points is shown. Mars's surface has a large number of craters, which are ideal landmarks. According to the literature on Martian cratering, 60% of the Martian surface is heavily cratered. The ideal crater landmarks for pinpoint landing are between 50 and 1000 meters in diameter, and the ideal altitude for position estimation should be greater than 2 km above the ground. The algorithms used to detect and match craters are reviewed.
Error analysis of stochastic gradient descent ranking.
Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan
2013-06-01
Ranking is an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in the error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis of the ranking error.
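A minimal sketch of a kernel-based stochastic gradient descent ranker with the least squares pairwise loss, in the spirit of the abstract; the Gaussian kernel, the step-size schedule, and all names are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def kernel_sgd_rank(X, y, T=3000, eta0=0.5, lam=1e-3, sigma=1.0, seed=0):
    """Stochastic gradient descent for pairwise least-squares ranking.
    The ranker is f(x) = sum_k alpha_k K(x_k, x); each step samples a pair
    (i, j) and descends the loss (f(x_i) - f(x_j) - (y_i - y_j))^2 / 2."""
    rng = np.random.default_rng(seed)
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian Gram matrix
    alpha = np.zeros(n)
    for t in range(1, T + 1):
        i, j = int(rng.integers(n)), int(rng.integers(n))
        if i == j:
            continue
        eta = eta0 / np.sqrt(t)                  # polynomially decaying step
        grad = (K[i] - K[j]) @ alpha - (y[i] - y[j])
        alpha *= 1.0 - eta * lam                 # regularization shrinkage
        alpha[i] -= eta * grad
        alpha[j] += eta * grad
    return lambda x: np.exp(-((X - x) ** 2).sum(-1) / (2 * sigma ** 2)) @ alpha
```

The decaying step size mirrors the role the step-size choice plays in the convergence rates discussed in the abstract.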
A Comparison of Techniques To Find Mirrored Hosts on the WWW.
ERIC Educational Resources Information Center
Bharat, Krishna; Broder, Andrei; Dean, Jeffrey; Henzinger, Monika R.
2000-01-01
Compares several "top-down" algorithms for identifying mirrored hosts on the Web. The algorithms operate on the basis of URL strings and linkage data: the type of information about Web pages easily available from Web proxies and crawlers. Results reveal that the best approach is a combination of five algorithms: on test data this…
Functional Equivalence Acceptance Testing of FUN3D for Entry Descent and Landing Applications
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Wood, William A.; Kleb, William L.; Alter, Stephen J.; Glass, Christopher E.; Padilla, Jose F.; Hammond, Dana P.; White, Jeffery A.
2013-01-01
The functional equivalence of the unstructured grid code FUN3D to the structured grid code LAURA (Langley Aerothermodynamic Upwind Relaxation Algorithm) is documented for applications of interest to the Entry, Descent, and Landing (EDL) community. Examples from an existing suite of regression tests are used to demonstrate the functional equivalence, encompassing various thermochemical models and vehicle configurations. Algorithm modifications required for the node-based unstructured grid code (FUN3D) to reproduce functionality of the cell-centered structured code (LAURA) are also documented. Challenges associated with computation on tetrahedral grids versus computation on structured-grid derived hexahedral systems are discussed.
Majorization Minimization by Coordinate Descent for Concave Penalized Generalized Linear Models
Jiang, Dingfeng; Huang, Jian
2013-01-01
Recent studies have demonstrated theoretical attractiveness of a class of concave penalties in variable selection, including the smoothly clipped absolute deviation and minimax concave penalties. The computation of the concave penalized solutions in high-dimensional models, however, is a difficult task. We propose a majorization minimization by coordinate descent (MMCD) algorithm for computing the concave penalized solutions in generalized linear models. In contrast to the existing algorithms that use local quadratic or local linear approximation to the penalty function, the MMCD seeks to majorize the negative log-likelihood by a quadratic loss, but does not use any approximation to the penalty. This strategy makes it possible to avoid the computation of a scaling factor in each update of the solutions, which improves the efficiency of coordinate descent. Under certain regularity conditions, we establish theoretical convergence property of the MMCD. We implement this algorithm for a penalized logistic regression model using the SCAD and MCP penalties. Simulation studies and a data example demonstrate that the MMCD works sufficiently fast for the penalized logistic regression in high-dimensional settings where the number of covariates is much larger than the sample size. PMID:25309048
Evaluating the accuracy performance of Lucas-Kanade algorithm in the circumstance of PIV application
NASA Astrophysics Data System (ADS)
Pan, Chong; Xue, Dong; Xu, Yang; Wang, JinJun; Wei, RunJie
2015-10-01
The Lucas-Kanade (LK) algorithm, widely used in the optical flow field, has recently received increasing attention from the PIV community due to its calculation efficiency under GPU acceleration. Although applications of this algorithm are continuously emerging, a systematic performance evaluation is still lacking. This forms the primary aim of the present work. Three warping schemes in the family of LK algorithms, forward/inverse/symmetric warping, are evaluated in a prototype flow consisting of a hierarchy of multiple two-dimensional vortices. Second-order Newton descent is also considered. The accuracy and efficiency of all these LK variants are investigated over a large domain of influential parameters. It is found that the constant displacement constraint, a necessary building block for GPU acceleration, is the most critical issue affecting the LK algorithm's accuracy, which can be somewhat ameliorated by using second-order Newton descent. Moreover, symmetric warping outperforms the other two warping schemes in accuracy, robustness to noise, convergence speed, and tolerance to displacement gradient, and might be the first choice when applying the LK algorithm to PIV measurement.
Fan, Bingfei; Li, Qingguo; Wang, Chao; Liu, Tao
2017-01-01
Magnetic and inertial sensors have been widely used to estimate the orientation of human segments due to their low cost, compact size and light weight. However, the accuracy of the estimated orientation is easily affected by external factors, especially when the sensor is used in an environment with magnetic disturbances. In this paper, we propose an adaptive method to improve the accuracy of orientation estimations in the presence of magnetic disturbances. The method is based on existing gradient descent algorithms, and it is performed prior to sensor fusion algorithms. The proposed method includes stationary state detection and magnetic disturbance severity determination. The stationary state detection makes this method immune to magnetic disturbances in stationary state, while the magnetic disturbance severity determination helps to determine the credibility of magnetometer data under dynamic conditions, so as to mitigate the negative effect of the magnetic disturbances. The proposed method was validated through experiments performed on a customized three-axis instrumented gimbal with known orientations. The error of the proposed method and the original gradient descent algorithms were calculated and compared. Experimental results demonstrate that in stationary state, the proposed method is completely immune to magnetic disturbances, and in dynamic conditions, the error caused by magnetic disturbance is reduced by 51.2% compared with original MIMU gradient descent algorithm. PMID:28534858
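The disturbance-severity idea above (judge the credibility of the magnetometer before fusing it, and down-weight it when the field looks disturbed) can be illustrated with a simple heuristic on the measured field magnitude; this stand-in is not the paper's actual criterion, and the reference norm and tolerance are illustrative:

```python
import numpy as np

def mag_weight(mag, ref_norm=50.0, tol=0.1):
    """Map the relative deviation of the measured field magnitude from the
    local reference norm to a credibility weight in [0, 1] that a fusion
    filter could apply to the magnetometer correction."""
    err = abs(np.linalg.norm(mag) - ref_norm) / ref_norm
    return float(np.clip(1.0 - err / tol, 0.0, 1.0))
```

A weight of 1 means the magnetometer is fully trusted; 0 means the gradient descent correction would rely on gyro/accelerometer data alone.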
On efficient randomized algorithms for finding the PageRank vector
NASA Astrophysics Data System (ADS)
Gasnikov, A. V.; Dmitriev, D. Yu.
2015-03-01
Two randomized methods are considered for finding the PageRank vector; in other words, the solution of the system p^T = p^T P with a stochastic n × n matrix P, where n ~ 10^7-10^9, is sought (in the class of probability distributions) with accuracy ε ≫ n^{-1}. Thus, the possibility of brute-force multiplication of P by the column is ruled out in the case of dense objects. The first method is based on the idea of Markov chain Monte Carlo algorithms. This approach is efficient when the iterative process p_{t+1}^T = p_t^T P quickly reaches a steady state. Additionally, it takes into account another specific feature of P, namely, the nonzero off-diagonal elements of P are equal in rows (this property is used to organize a random walk over the graph with the matrix P). Based on modern concentration-of-measure inequalities, new bounds for the running time of this method are presented that take into account the specific features of P. In the second method, the search for a ranking vector is reduced to finding the equilibrium in an antagonistic matrix game, where S_n(1) is the unit simplex in ℝ^n and I is the identity matrix. The arising problem is solved by applying a slightly modified Grigoriadis-Khachiyan algorithm (1995). This technique, like the Nazin-Polyak method (2009), is a randomized version of Nemirovski's mirror descent method. The difference is that randomization in the Grigoriadis-Khachiyan algorithm is used when the gradient is projected onto the simplex rather than when the stochastic gradient is computed. For sparse matrices P, the method proposed yields noticeably better results.
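The first method's random-walk idea can be sketched in a few lines: simulate many short trajectories of the chain and histogram where they end up. The walk count and length below are illustrative, not the paper's bounds:

```python
import numpy as np

def pagerank_mc(P, walks=5000, length=20, seed=0):
    """Monte Carlo estimate of the stationary distribution p^T = p^T P:
    run many short random walks on the chain and histogram the endpoints."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    cum = np.cumsum(P, axis=1)          # per-row cumulative transition probs
    counts = np.zeros(n)
    for _ in range(walks):
        s = int(rng.integers(n))        # uniform random start state
        for _ in range(length):
            s = min(int(np.searchsorted(cum[s], rng.random())), n - 1)
        counts[s] += 1
    return counts / walks
```

This is efficient exactly when the chain mixes quickly, matching the abstract's remark about the iterative process reaching a steady state fast.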
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call nonhomogeneous iterative coordinate descent (NH-ICD) uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1984-01-01
Concepts to save fuel while preserving airport capacity by combining time based metering with profile descent procedures were developed. A computer algorithm is developed to provide the flight crew with the information needed to fly from an entry fix to a metering fix and arrive there at a predetermined time, altitude, and airspeed. The flight from the metering fix to an aim point near the airport was calculated. The flight path is divided into several descent and deceleration segments. Descents are performed at constant Mach numbers or calibrated airspeed, whereas decelerations occur at constant altitude. The time and distance associated with each segment are calculated from point mass equations of motion for a clean configuration with idle thrust. Wind and nonstandard atmospheric properties have a large effect on the flight path. It is found that uncertainty in the descent Mach number has a large effect on the predicted flight time. Of the possible combinations of Mach number and calibrated airspeed for a descent, only small changes were observed in the fuel consumed.
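The segment bookkeeping described above (each descent segment contributes a time and an along-path distance toward the metering-fix arrival) can be illustrated with idealized constant-speed, constant-flight-path-angle kinematics; this toy model deliberately ignores the paper's point-mass equations, winds, and deceleration segments:

```python
import math

def descent_segment(alt0_m, alt1_m, tas_mps, gamma_deg):
    """Time (s) and along-path distance (m) for an idealized descent segment
    flown at constant true airspeed and a constant flight-path angle."""
    dh = alt0_m - alt1_m                              # altitude lost
    path = dh / math.sin(math.radians(gamma_deg))     # along-path length
    return path / tas_mps, path
```

For example, a 3-degree segment at 150 m/s losing 3000 m covers roughly 57 km of path in roughly 382 s; summing such segments (plus level decelerations) is the kind of bookkeeping the descent algorithm performs with its full point-mass model.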
Fizeau interferometric cophasing of segmented mirrors: experimental validation.
Cheetham, Anthony; Cvetojevic, Nick; Norris, Barnaby; Sivaramakrishnan, Anand; Tuthill, Peter
2014-06-02
We present an optical testbed demonstration of the Fizeau Interferometric Cophasing of Segmented Mirrors (FICSM) algorithm. FICSM allows a segmented mirror to be phased with a science imaging detector and three filters (selected among the normal science complement). It requires no specialised, dedicated wavefront sensing hardware. Applying random piston and tip/tilt aberrations of more than 5 wavelengths to a small segmented mirror array produced an initial unphased point spread function with an estimated Strehl ratio of 9% that served as the starting point for our phasing algorithm. After using the FICSM algorithm to cophase the pupil, we estimated a Strehl ratio of 94% based on a comparison between our data and simulated encircled energy metrics. Our final image quality is limited by the accuracy of our segment actuation, which yields a root mean square (RMS) wavefront error of 25 nm. This is the first hardware demonstration of coarse and fine phasing an 18-segment pupil with the James Webb Space Telescope (JWST) geometry using a single algorithm. FICSM can be implemented on JWST using any of its scientific imaging cameras, making it useful as a fall-back in the event that accepted phasing strategies encounter problems. We present an operational sequence that would co-phase such an 18-segment primary in 3 sequential iterations of the FICSM algorithm. Similar sequences can be readily devised for any segmented mirror.
Regularization Paths for Conditional Logistic Regression: The clogitL1 Package
Reid, Stephen; Tibshirani, Rob
2014-01-01
We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by. PMID:26257587
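The cyclic coordinate descent scheme of Friedman, Hastie, and Tibshirani that this package builds on can be sketched for the plain (unconditional) lasso: each coordinate is minimized exactly by soft-thresholding while the residual is kept up to date. This is an illustration of the general scheme under those simplifying assumptions, not the clogitL1 conditional-likelihood code:

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1.
    Each coordinate update is an exact univariate minimization via
    soft-thresholding, with the residual maintained incrementally."""
    n, p = X.shape
    b = np.zeros(p)
    r = y.copy()                                   # residual y - X b
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            rho = X[:, j] @ r / n + col_sq[j] * b[j]
            b_new = soft_threshold(rho, lam) / col_sq[j]
            r += X[:, j] * (b[j] - b_new)          # keep residual current
            b[j] = b_new
    return b
```

Warm starts along a decreasing sequence of lam values, plus the sequential strong rules cited in the abstract, are what turn this inner loop into a fast regularization-path solver.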
A modified three-term PRP conjugate gradient algorithm for optimization models.
Wu, Yanlin
2017-01-01
The nonlinear conjugate gradient (CG) algorithm is a very effective method for optimization, especially for large-scale problems, because of its low memory requirement and simplicity. Zhang et al. (IMA J. Numer. Anal. 26:629-649, 2006) first proposed a three-term CG algorithm based on the well-known Polak-Ribière-Polyak (PRP) formula for unconstrained optimization, where their method has the sufficient descent property without any line search technique. They proved global convergence under the Armijo line search, but the proof fails under the Wolfe line search technique. Inspired by their method, we make a further study and give a modified three-term PRP CG algorithm. The presented method possesses the following features: (1) the sufficient descent property holds without any line search technique; (2) the trust region property of the search direction is automatically satisfied; (3) the steplength is bounded from below; (4) global convergence is established under the Wolfe line search. Numerical results show that the new algorithm is more effective than the normal method.
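A sketch of a three-term PRP direction of the kind discussed above, paired with simple Armijo backtracking; by construction it satisfies the sufficient descent identity g_k^T d_k = -||g_k||^2 regardless of the line search. This follows the Zhang et al. form; the paper's modification differs, so treat this as an illustrative baseline:

```python
import numpy as np

def three_term_prp(f, grad, x0, iters=200, tol=1e-8):
    """Three-term PRP conjugate gradient (Zhang et al. form):
    d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1}, which gives
    g_k^T d_k = -||g_k||^2 exactly, so every direction is a descent
    direction. Step lengths come from Armijo backtracking."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):  # Armijo condition
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        y = g_new - g
        denom = g @ g                    # ||g_{k-1}||^2, the PRP denominator
        beta = (g_new @ y) / denom
        theta = (g_new @ d) / denom
        d = -g_new + beta * d - theta * y
        g = g_new
    return x
```

Expanding g_new @ d shows the beta and theta terms cancel, which is exactly the "sufficient descent without any line search technique" property the abstract highlights.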
Simulation results for a finite element-based cumulative reconstructor
NASA Astrophysics Data System (ADS)
Wagner, Roland; Neubauer, Andreas; Ramlau, Ronny
2017-10-01
Modern ground-based telescopes rely on adaptive optics (AO) systems to compensate for image degradation caused by atmospheric turbulence. Within an AO system, measurements of incoming light from guide stars are used to adjust deformable mirror(s) in real time to correct for atmospheric distortions. The incoming wavefront has to be derived from sensor measurements, and this intermediate result is then translated into the shape(s) of the deformable mirror(s). Rapid changes of the atmosphere lead to the need for fast wavefront reconstruction algorithms. We review a fast matrix-free algorithm, developed by Neubauer, that reconstructs the incoming wavefront from Shack-Hartmann measurements based on a finite element discretization of the telescope aperture. The method is enhanced by a domain decomposition ansatz. We show that this algorithm reaches the quality of standard approaches in end-to-end simulation while maintaining the speed of recently introduced solvers of linear-order complexity.
Inverting Image Data For Optical Testing And Alignment
NASA Technical Reports Server (NTRS)
Shao, Michael; Redding, David; Yu, Jeffrey W.; Dumont, Philip J.
1993-01-01
Data from images produced by slightly incorrectly figured concave primary mirror in telescope processed into estimate of spherical aberration of mirror, by use of algorithm finding nonlinear least-squares best fit between actual images and synthetic images produced by multiparameter mathematical model of telescope optical system. Estimated spherical aberration, in turn, converted into estimate of deviation of reflector surface from nominal precise shape. Algorithm devised as part of effort to determine error in surface figure of primary mirror of Hubble space telescope, so corrective lens designed. Modified versions of algorithm also used to find optical errors in other components of telescope or of other optical systems, for purposes of testing, alignment, and/or correction.
Algorithm for ion beam figuring of low-gradient mirrors.
Jiao, Changjun; Li, Shengyi; Xie, Xuhui
2009-07-20
Ion beam figuring technology for low-gradient mirrors is discussed. Ion beam figuring is a noncontact machining technique in which a beam of high-energy ions is directed toward a target workpiece to remove material in a predetermined and controlled fashion. Owing to this noncontact mode of material removal, problems associated with tool wear and edge effects, which are common in conventional contact polishing processes, are avoided. Based on the Bayesian principle, an iterative dwell time algorithm for planar mirrors is deduced from the computer-controlled optical surfacing (CCOS) principle. With the properties of the removal function, the shaping process of low-gradient mirrors can be approximated by the linear model for planar mirrors. With these discussions, the error surface figuring technology for low-gradient mirrors with a linear path is set up. With the near-Gaussian property of the removal function, the figuring process with a spiral path can be described by the conventional linear CCOS principle, and a Bayesian-based iterative algorithm can be used to deconvolute the dwell time. Moreover, the selection criterion of the spiral parameter is given. Ion beam figuring technology with a spiral scan path based on these methods can be used to figure mirrors with non-axis-symmetrical errors. Experiments were performed on SiC chemical vapor deposition planar and Zerodur paraboloid samples, and the final surface errors are all below λ/100.
NASA Technical Reports Server (NTRS)
Knox, C. E.
1984-01-01
A simple airborne flight management descent algorithm designed to define a flight profile subject to the constraints of using idle thrust, a clean airplane configuration (landing gear up, flaps zero, and speed brakes retracted), and fixed-time end conditions was developed and flight tested in the NASA TSRV B-737 research airplane. The research test flights, conducted in the Denver ARTCC automated time-based metering LFM/PD ATC environment, demonstrated that time guidance and control in the cockpit was acceptable to the pilots and ATC controllers and resulted in arrival of the airplane over the metering fix with standard deviations in airspeed error of 6.5 knots, in altitude error of 23.7 m (77.8 ft), and in arrival time accuracy of 12 sec. These accuracies indicated a good representation of airplane performance and wind modeling. Fuel savings will be obtained on a fleet-wide basis through a reduction of the time error dispersions at the metering fix and on a single-airplane basis by presenting the pilot with guidance for a fuel-efficient descent.
A Revised Trajectory Algorithm to Support En Route and Terminal Area Self-Spacing Concepts
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2010-01-01
This document describes an algorithm for the generation of a four dimensional trajectory. Input data for this algorithm are similar to an augmented Standard Terminal Arrival (STAR) with the augmentation in the form of altitude or speed crossing restrictions at waypoints on the route. This version of the algorithm accommodates descent Mach values that are different from the cruise Mach values. Wind data at each waypoint are also inputs into this algorithm. The algorithm calculates the altitude, speed, along path distance, and along path time for each waypoint.
Phase Retrieval Using a Genetic Algorithm on the Systematic Image-Based Optical Alignment Testbed
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
2003-01-01
NASA's Marshall Space Flight Center's Systematic Image-Based Optical Alignment (SIBOA) Testbed was developed to test phase retrieval algorithms and hardware techniques. Individuals working with the facility developed the idea of implementing phase retrieval by separating the determination of the tip/tilt of each mirror from the piston motion (or translation) of each mirror. Presented in this report is an algorithm that determines the optimal phase correction associated only with the piston motion of the mirrors. A description of the phase retrieval problem is first presented. The SIBOA Testbed is then described. A discrete Fourier transform (DFT) is necessary to transfer the incoming wavefront (or estimate of phase error) into the spatial frequency domain to compare it with the image. A method for reducing the DFT to seven scalar/matrix multiplications is presented. A genetic algorithm is then used to search for the phase error. The results of this new algorithm on a test problem are presented.
Research on particle swarm optimization algorithm based on optimal movement probability
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Han; He, Baofeng
2017-01-01
The particle swarm optimization (PSO) algorithm improves control precision and has great application value in fields such as neural network training and fuzzy system control. When the traditional particle swarm algorithm is used for training feed-forward neural networks, the search efficiency is low and the method easily falls into local convergence. An improved particle swarm optimization algorithm is therefore proposed based on error back-propagation gradient descent. Particles are ranked by fitness so that the optimization problem is considered as a whole, and the BP neural network is trained with error back-propagation gradient descent. Each particle updates its velocity and position according to its individual optimum and the global optimum, learning more from the social optimum and less from its own optimum, which helps particles avoid falling into local optima; using gradient information accelerates the local search ability of PSO and improves search efficiency. Simulation results show that the algorithm converges rapidly toward the global optimal solution in the initial stage and stays close to it thereafter, and that it achieves faster convergence and better search performance in the same running time, improving convergence speed and especially later-stage search efficiency.
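The hybrid update described above (standard PSO velocity/position updates combined with a gradient descent step) can be sketched as follows; this toy version applies the gradient step to the global best rather than running back-propagation on a network, so it is only a simplified stand-in for the paper's method:

```python
import numpy as np

def pso_gd(f, grad, dim, n_particles=20, iters=150, w=0.7, c1=1.5, c2=1.5,
           gd_step=0.05, seed=0):
    """Particle swarm optimization with a gradient-descent refinement of
    the global best on every iteration (hybridizing swarm search with
    local gradient information)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
        gbest -= gd_step * grad(gbest)          # local gradient refinement
        idx = pbest_val.argmin()
        if f(gbest) < pbest_val[idx]:
            pbest[idx], pbest_val[idx] = gbest.copy(), f(gbest)
    return gbest, f(gbest)
```

The swarm provides global exploration while the gradient step sharpens the current best, mirroring the abstract's claim that gradient information accelerates PSO's local search.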
Guidance and Control Algorithms for the Mars Entry, Descent and Landing Systems Analysis
NASA Technical Reports Server (NTRS)
Davis, Jody L.; Dwyer Cianciolo, Alicia M.; Powell, Richard W.; Shidner, Jeremy D.; Garcia-Llama, Eduardo
2010-01-01
The purpose of the Mars Entry, Descent and Landing Systems Analysis (EDL-SA) study was to identify feasible technologies that will enable human exploration of Mars, specifically to deliver large payloads to the Martian surface. This paper focuses on the methods used to guide and control two of the contending technologies, a mid-lift-to-drag (L/D) rigid aeroshell and a hypersonic inflatable aerodynamic decelerator (HIAD), through the entry portion of the trajectory. The Program to Optimize Simulated Trajectories II (POST2) is used to simulate and analyze the trajectories of the contending technologies and guidance and control algorithms. Three guidance algorithms are discussed in this paper: EDL theoretical guidance, Numerical Predictor-Corrector (NPC) guidance and Analytical Predictor-Corrector (APC) guidance. EDL-SA also considered two forms of control: bank angle control, similar to that used by Apollo and the Space Shuttle, and a center-of-gravity (CG) offset control. This paper presents the performance comparison of these guidance algorithms and summarizes the results as they impact the technology recommendations for future study.
NASA Astrophysics Data System (ADS)
Dou, Jiangpei; Ren, Deqing; Zhang, Xi; Zhu, Yongtian; Zhao, Gang; Wu, Zhen; Chen, Rui; Liu, Chengchao; Yang, Feng; Yang, Chao
2014-08-01
Almost all high-contrast imaging coronagraphs proposed until now are based on passive coronagraph optical components. Recently, Ren and Zhu proposed for the first time a coronagraph that integrates a liquid crystal array (LCA) for active pupil apodizing and a deformable mirror (DM) for phase corrections. Here, for demonstration purposes, we present the initial test result of a coronagraphic system based on two liquid crystal spatial light modulators (SLMs). In the system, one SLM serves as active pupil apodizer and amplitude corrector to suppress the diffraction light; the other SLM corrects the speckle noise caused by wave-front distortions. In this way, both amplitude and phase errors can be actively and efficiently compensated. In the test, we use the stochastic parallel gradient descent (SPGD) algorithm to control the two SLMs; it is based on point spread function (PSF) sensing and evaluation and is optimized for maximum contrast in the discovery area. The system has demonstrated a contrast of 10^-6 at an inner working angular distance of ~6.2 λ/D, making this a promising technique for the direct imaging of young exoplanets on ground-based telescopes.
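SPGD, as used above, needs no explicit gradient: all control channels are perturbed simultaneously, the scalar metric is measured twice, and the controls are stepped along the estimated gradient. A minimal sketch follows; the metric, channel count, and gains are illustrative assumptions (the paper's metric is the measured PSF contrast across two SLMs).

```python
import random

def spgd_maximize(metric, n_ctrl, iters=500, delta=0.05, gain=0.8, seed=1):
    """Stochastic parallel gradient descent: perturb every control channel
    at once, measure the metric change, and step along the estimated
    gradient. A generic sketch, not the authors' SLM controller."""
    rng = random.Random(seed)
    u = [0.0] * n_ctrl
    for _ in range(iters):
        # random +/-delta perturbation applied to all channels in parallel
        p = [delta if rng.random() < 0.5 else -delta for _ in range(n_ctrl)]
        j_plus = metric([ui + pi for ui, pi in zip(u, p)])
        j_minus = metric([ui - pi for ui, pi in zip(u, p)])
        dj = j_plus - j_minus
        u = [ui + gain * dj * pi for ui, pi in zip(u, p)]  # ascend the metric
    return u
```

Because only the scalar metric is sampled, the same loop works whether the "plant" is a simulation or real hardware behind a camera.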
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibility, which deteriorates the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed to cope with the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed optimization algorithm provided an improved result compared to the other existing methodologies in finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying the proposed optimization algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.
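Coordinate descent of the kind named above treats the simulator as a black box: one decision variable is varied at a time, a move is kept only if it improves the objective, and the step shrinks when a full sweep stalls. The sketch below shows that core loop on an analytic stand-in for the Hysys model; the function name and parameters are assumptions, and the paper's HMCD adds further modifications not reproduced here.

```python
def coordinate_descent(f, x0, step=0.5, shrink=0.5, iters=50):
    """Cyclic coordinate descent on a black-box objective: try a move in
    each decision variable separately, keep it if it improves f, and shrink
    the step when a full sweep brings no improvement. A generic sketch of
    the idea behind the paper's hybrid modified coordinate descent."""
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink  # refine the search around the current point
    return x, fx
```

The derivative-free structure is what lets such a method drive an external process simulator, where gradients are unavailable.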
Apollo LM guidance computer software for the final lunar descent.
NASA Technical Reports Server (NTRS)
Eyles, D.
1973-01-01
In all manned lunar landings to date, the lunar module Commander has taken partial manual control of the spacecraft during the final stage of the descent, below roughly 500 ft altitude. This report describes programs developed at the Charles Stark Draper Laboratory, MIT, for use in the LM's guidance computer during the final descent. At this time computational demands on the on-board computer are at a maximum, and particularly close interaction with the crew is necessary. The emphasis is on the design of the computer software rather than on justification of the particular guidance algorithms employed. After the computer and the mission have been introduced, the current configuration of the final landing programs and an advanced version developed experimentally by the author are described.
Mapped Landmark Algorithm for Precision Landing
NASA Technical Reports Server (NTRS)
Johnson, Andrew; Ansar, Adnan; Matthies, Larry
2007-01-01
A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching (see figure), which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
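The normalized-correlation matching at the heart of both sub-algorithms can be sketched directly. This is a brute-force, spatial-domain version (as in Mapped Landmark Refinement); the FFT variant computes the same correlation in the frequency domain. Function names are illustrative, and the report's C++ implementation is not reproduced.

```python
from math import sqrt

def ncc_match(image, template):
    """Locate a template inside a 2-D grayscale image by normalized
    cross-correlation in the spatial domain. Returns the (x, y) offset of
    the best-scoring window and its correlation score in [-1, 1]."""
    th, tw = len(template), len(template[0])
    tmean = sum(sum(r) for r in template) / (th * tw)
    tz = [[v - tmean for v in r] for r in template]
    tnorm = sqrt(sum(v * v for r in tz for v in r))
    best, best_pos = -2.0, (0, 0)
    for y in range(len(image) - th + 1):
        for x in range(len(image[0]) - tw + 1):
            win = [[image[y + j][x + i] for i in range(tw)] for j in range(th)]
            wmean = sum(sum(r) for r in win) / (th * tw)
            wz = [[v - wmean for v in r] for r in win]
            wnorm = sqrt(sum(v * v for r in wz for v in r))
            if wnorm == 0.0 or tnorm == 0.0:
                continue  # flat window: correlation undefined, skip it
            score = sum(a * b for rt, rw in zip(tz, wz)
                        for a, b in zip(rt, rw)) / (tnorm * wnorm)
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos, best
```

Mean subtraction and normalization make the score invariant to local brightness and contrast, which is why this measure is favored for matching descent images against an orbital map.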
3D-Web-GIS RFID location sensing system for construction objects.
Ko, Chien-Ho
2013-01-01
Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.
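The SA-plus-gradient-descent combination described above can be sketched for range-based 3-D localization: annealing stabilizes the search over the site, then gradient descent reduces the residual error. Reader layout, cooling schedule, and step sizes below are assumptions, not the paper's tuned values.

```python
import math, random

def locate(readers, dists, seed=2):
    """Estimate a 3-D tag position from reader positions and measured
    ranges. Simulated annealing explores, then gradient descent refines:
    a sketch of the paper's SA + gradient-descent combination."""
    rng = random.Random(seed)

    def err(p):
        return sum((math.dist(p, r) - d) ** 2 for r, d in zip(readers, dists))

    # SA stage: stabilize the search with a cooling random walk
    p = [rng.uniform(0, 10) for _ in range(3)]
    e, temp = err(p), 5.0
    for _ in range(2000):
        q = [pi + rng.gauss(0, temp * 0.2) for pi in p]
        eq = err(q)
        if eq < e or rng.random() < math.exp((e - eq) / max(temp, 1e-9)):
            p, e = q, eq
        temp *= 0.995
    # gradient-descent stage: reduce the error around the SA solution
    for _ in range(500):
        g = [0.0, 0.0, 0.0]
        for r, d in zip(readers, dists):
            dist = math.dist(p, r)
            if dist == 0.0:
                continue
            c = 2.0 * (dist - d) / dist  # d/dp of (||p - r|| - d)^2
            for k in range(3):
                g[k] += c * (p[k] - r[k])
        p = [pk - 0.05 * gk for pk, gk in zip(p, g)]
    return p
```

With four non-coplanar readers the range residual has a unique zero, so the refined estimate should land at the true tag position.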
Stochastic Spectral Descent for Discrete Graphical Models
Carlson, David; Hsieh, Ya-Ping; Collins, Edo; ...
2015-12-14
Interest in deep probabilistic graphical models has increased in recent years, due to their state-of-the-art performance on many machine learning applications. Such models are typically trained with the stochastic gradient method, which can take a significant number of iterations to converge. Since the computational cost of gradient estimation is prohibitive even for modestly sized models, training becomes slow and practically usable models are kept small. In this paper we propose a new, largely tuning-free algorithm to address this problem. Our approach derives novel majorization bounds based on the Schatten norm. Intriguingly, the minimizers of these bounds can be interpreted as gradient methods in a non-Euclidean space. We thus propose using a stochastic gradient method in non-Euclidean space. We both provide simple conditions under which our algorithm is guaranteed to converge, and demonstrate empirically that our algorithm leads to dramatically faster training and improved predictive ability compared to stochastic gradient descent for both directed and undirected graphical models.
Gradient descent for robust kernel-based regression
NASA Astrophysics Data System (ADS)
Guo, Zheng-Chu; Hu, Ting; Shi, Lei
2018-06-01
In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. The numerical experiments implemented on synthetic examples and a real data set also support our theoretical results.
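The gradient update in an RKHS works on the coefficients of the kernel expansion f(x) = Σ_j a_j K(x_j, x), with the iteration budget playing the role of the regularizer. The sketch below uses the plain least-squares loss for concreteness; the paper's robust windowing loss G and scale σ are not reproduced, and all names are assumptions.

```python
def kernel_gd(xs, ys, kernel, eta=0.1, steps=500):
    """Gradient descent on the least-squares empirical risk in an RKHS.
    The estimator f(x) = sum_j a_j K(x_j, x) is updated through its
    coefficient vector; stopping after a fixed number of steps acts as
    the regularizer. A plain-loss sketch of the paper's setting."""
    n = len(xs)
    K = [[kernel(xi, xj) for xj in xs] for xi in xs]
    a = [0.0] * n
    for _ in range(steps):
        f = [sum(K[i][j] * a[j] for j in range(n)) for i in range(n)]
        # gradient of (1/n) * sum_j (f_j - y_j)^2 with respect to a_i
        a = [a[i] - eta * (2.0 / n) * sum(K[i][j] * (f[j] - ys[j])
                                          for j in range(n))
             for i in range(n)]

    def predict(x):
        return sum(a[j] * kernel(xs[j], x) for j in range(n))
    return predict
```

Running more steps fits the training data more closely; the early-stopping analysis in the paper chooses the step count to balance this against overfitting.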
An annealed chaotic maximum neural network for bipartite subgraph problem.
Wang, Jiahai; Tang, Zheng; Wang, Ronglong
2004-04-01
In this paper, based on the maximum neural network, we propose a new parallel algorithm that can help the maximum neural network escape from local minima by including transient chaotic neurodynamics for the bipartite subgraph problem. The goal of the bipartite subgraph problem, which is an NP-complete problem, is to remove the minimum number of edges in a given graph such that the remaining graph is a bipartite graph. Lee et al. presented a parallel algorithm using the maximum neural model (winner-take-all neuron model) for this NP-complete problem. The maximum neural model always guarantees a valid solution and greatly reduces the search space without a burden on parameter tuning. However, the model has a tendency to converge to a local minimum easily because it is based on the steepest descent method. By adding a negative self-feedback to the maximum neural network, we propose a new parallel algorithm that introduces richer and more flexible chaotic dynamics and can prevent the network from getting stuck at local minima. After the chaotic dynamics vanishes, the proposed algorithm is then fundamentally governed by the gradient descent dynamics and usually converges to a stable equilibrium point. The proposed algorithm has the advantages of both the maximum neural network and the chaotic neurodynamics. A large number of instances have been simulated to verify the proposed algorithm. The simulation results show that our algorithm finds optimum or near-optimum solutions for the bipartite subgraph problem superior to those of the best existing parallel algorithms.
Hazard avoidance via descent images for safe landing
NASA Astrophysics Data System (ADS)
Yan, Ruicheng; Cao, Zhiguo; Zhu, Lei; Fang, Zhiwen
2013-10-01
In planetary or lunar landing missions, hazard avoidance is critical for landing safety. Therefore, it is very important to correctly detect hazards and effectively find a safe landing area during the last stage of descent. In this paper, we propose a passive-sensing-based HDA (hazard detection and avoidance) approach via descent images to lower the landing risk. In the hazard detection stage, a statistical probability model based on hazard similarity is adopted to evaluate the image and detect hazardous areas, so that a binary hazard image can be generated. Afterwards, a safety coefficient, which jointly utilizes the proportion of hazards in the local region and the hazard distribution inside it, is proposed to find potential regions with fewer hazards in the binary hazard image. By using the safety coefficient in a coarse-to-fine procedure and combining it with the local ISD (intensity standard deviation) measure, the safe landing area is determined. The algorithm is evaluated and verified with many simulated descent downward-looking images rendered from lunar orbital satellite images.
NASA Astrophysics Data System (ADS)
Zhang, Lei; Li, Dong; Liu, Yu; Liu, Jingxiao; Li, Jingsong; Yu, Benli
2017-11-01
We demonstrate the validity of the simultaneous reverse optimization reconstruction (SROR) algorithm in circular subaperture stitching interferometry (CSSI); the algorithm was previously proposed for non-null aspheric annular subaperture stitching interferometry (ASSI). The merits of the modified SROR algorithm in CSSI, such as automatic retrace error correction, no need for overlap, and even tolerance of missed coverage, are analyzed in detail in simulations and experiments. Meanwhile, a practical CSSI system is proposed for this demonstration. An optical wedge is employed to deflect the incident beam for subaperture scanning through its rotation and shift, instead of a six-axis motion-control system. The reference path can also provide variable Zernike defocus for each subaperture test, which decreases the fringe density. Experiments validating the SROR algorithm in this CSSI are implemented, with cross-validation by testing a paraboloidal mirror, a flat mirror, and an astigmatism mirror. It is an indispensable supplement to SROR application in general subaperture stitching interferometry.
Regression Analysis of Top of Descent Location for Idle-thrust Descents
NASA Technical Reports Server (NTRS)
Stell, Laurel; Bronsvoort, Jesper; McDonald, Greg
2013-01-01
In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. The independent variables cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also recorded or computed post-operations. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace. In particular, a model for TOD location that is linear in the independent variables would enable decision support tool human-machine interfaces for which a kinetic approach would be computationally too slow.
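The first-order model above is ordinary least squares on the recorded trajectory parameters. A self-contained sketch via the normal equations follows; the feature columns here are illustrative stand-ins for the paper's variables (altitudes, descent speed, wind), and the function name is an assumption.

```python
def fit_linear(X, y):
    """Ordinary least squares with an intercept, solved via the normal
    equations and Gaussian elimination with partial pivoting. A sketch of
    a first-order regression model like the paper's TOD-location model."""
    n, p = len(X), len(X[0])
    Xi = [[1.0] + list(row) for row in X]  # prepend intercept column
    m = p + 1
    # A = Xi^T Xi, b = Xi^T y
    A = [[sum(Xi[k][i] * Xi[k][j] for k in range(n)) for j in range(m)]
         for i in range(m)]
    b = [sum(Xi[k][i] * y[k] for k in range(n)) for i in range(m)]
    # forward elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    beta = [0.0] * m
    for i in range(m - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, m))) / A[i][i]
    return beta  # [intercept, coefficient_1, ..., coefficient_p]
```

Because the fit is a small linear solve, it could run online as new flights arrive, which is the "online learning" advantage the abstract notes over kinetic predictors.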
Finite element analyses of thin film active grazing incidence x-ray optics
NASA Astrophysics Data System (ADS)
Davis, William N.; Reid, Paul B.; Schwartz, Daniel A.
2010-09-01
The Chandra X-ray Observatory, with its sub-arc second resolution, has revolutionized X-ray astronomy by revealing an extremely complex X-ray sky and demonstrating the power of the X-ray window in exploring fundamental astrophysical problems. Larger area telescopes of still higher angular resolution promise further advances. We are engaged in the development of a mission concept, Generation-X, a 0.1 arc second resolution x-ray telescope with tens of square meters of collecting area, 500 times that of Chandra. To achieve these two requirements of imaging and area, we are developing a grazing incidence telescope comprised of many mirror segments. Each segment is an adjustable mirror that is a section of a paraboloid or hyperboloid, aligned and figure corrected in situ on-orbit. To that end, finite element analyses of thin glass mirrors are performed to determine influence functions for each actuator on the mirrors, in order to develop algorithms for correction of mirror deformations. The effects of several mirror mounting schemes are also studied. The finite element analysis results, combined with measurements made on prototype mirrors, will be used to further refine the correction algorithms.
Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent.
Hu, En-Liang; Kwok, James T
2015-09-01
Nonparametric kernel learning (NPKL) is a flexible approach to learn the kernel matrix directly without assuming any parametric form. It can be naturally formulated as a semidefinite program (SDP), which, however, is not very scalable. To address this problem, we propose the combined use of low-rank approximation and block coordinate descent (BCD). Low-rank approximation avoids the expensive positive semidefinite constraint in the SDP by replacing the kernel matrix variable with V^T V, where V is a low-rank matrix. The resultant nonlinear optimization problem is then solved by BCD, which optimizes each column of V sequentially. It can be shown that the proposed algorithm has nice convergence properties and low computational complexity. Experiments on a number of real-world data sets show that the proposed algorithm outperforms state-of-the-art NPKL solvers.
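The low-rank device above (writing the PSD variable as V^T V and sweeping over the columns of V) can be sketched on the simplest possible objective, Frobenius-norm approximation of a given symmetric matrix. The paper's actual NPKL objective and its per-block solver are not reproduced; names and step sizes are assumptions.

```python
import random

def bcd_lowrank(M, r, sweeps=2000, eta=0.002, seed=0):
    """Fit M ~ V^T V with V of rank r by block coordinate descent: each
    block is one column of V, updated by a gradient step while the other
    columns stay fixed. Parameterizing through V keeps V^T V positive
    semidefinite with no explicit SDP constraint."""
    n = len(M)
    rng = random.Random(seed)
    V = [[rng.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(r)]

    def col(j):
        return [V[k][j] for k in range(r)]

    for _ in range(sweeps):
        for j in range(n):  # one block = one column of V
            vj = col(j)
            g = [0.0] * r   # gradient of ||V^T V - M||_F^2 w.r.t. column j
            for l in range(n):
                vl = col(l)
                resid = sum(a * b for a, b in zip(vj, vl)) - M[j][l]
                for k in range(r):
                    g[k] += 4.0 * resid * vl[k]
            for k in range(r):
                V[k][j] = vj[k] - eta * g[k]
    return V
```

Each block update touches only one column, so the per-sweep cost stays low even as n grows, which is the scalability argument of the paper.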
Engineering description of the ascent/descent bet product
NASA Technical Reports Server (NTRS)
Seacord, A. W., II
1986-01-01
The Ascent/Descent output product is produced in the OPIP routine from three files which constitute its input. One of these, OPIP.IN, contains mission-specific parameters. Meteorological data, such as atmospheric wind velocities, temperatures, and density, are obtained from the second file, the Corrected Meteorological Data File (METDATA). The third file is the TRJATTDATA file, which contains the time-tagged state vectors that combine trajectory information from the Best Estimate of Trajectory (BET) filter, LBRET5, and Best Estimate of Attitude (BEA) derived from IMU telemetry. Each term in the two output data files (BETDATA and the Navigation Block, or NAVBLK) is defined. The description of the BETDATA file includes an outline of the algorithm used to calculate each term. To facilitate describing the algorithms, a nomenclature is defined. The description of the nomenclature includes a definition of the coordinate systems used. The NAVBLK file contains navigation input parameters. Each term in NAVBLK is defined and its source is listed. The production of NAVBLK requires only two computational algorithms. These two algorithms, which compute the terms DELTA and RSUBO, are described. Finally, the distribution of data in the NAVBLK records is listed.
Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.
Liu, Li; Lin, Weikai; Jin, Mingwu
2015-01-01
In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problems as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with the error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm.
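The two-stage strategy above (ART/POCS sweeps alternated with adaptive-step TV descent) can be sketched in one dimension. This is a toy version, not the authors' CT code: the adaptive rule here simply ties the TV step to the size of the change the ART stage produced, and all parameters are assumptions.

```python
def art_tv(A, y, n, iters=50, tv_frac=0.1, tv_steps=5):
    """Alternate ART (sequential projection onto each measurement
    hyperplane: the POCS stage) with steepest-descent steps on total
    variation. The TV step size is set adaptively from the change the
    ART stage produced, echoing the paper's automatic parameter tuning."""
    x = [0.0] * n
    for _ in range(iters):
        x_old = list(x)
        for row, yi in zip(A, y):  # ART / POCS stage
            dot = sum(a * b for a, b in zip(row, x))
            nrm = sum(a * a for a in row)
            x = [xj + (yi - dot) / nrm * aj for xj, aj in zip(x, row)]
        x = [max(xj, 0.0) for xj in x]  # non-negativity constraint
        change = sum((a - b) ** 2 for a, b in zip(x, x_old)) ** 0.5
        step = tv_frac * change        # adaptive TV step size
        for _ in range(tv_steps):      # steepest descent on TV
            g = [0.0] * n
            for j in range(n - 1):
                d = x[j + 1] - x[j]
                s = (d > 0) - (d < 0)  # sign of the difference
                g[j] -= s
                g[j + 1] += s
            gn = sum(v * v for v in g) ** 0.5
            if gn == 0.0 or step == 0.0:
                break
            x = [xj - step * gj / gn for xj, gj in zip(x, g)]
    return x
```

As the ART updates shrink, the TV step shrinks with them, so the iteration settles instead of oscillating between the two objectives.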
Methodology Development for the Reconstruction of the ESA Huygens Probe Entry and Descent Trajectory
NASA Astrophysics Data System (ADS)
Kazeminejad, B.
2005-01-01
The European Space Agency's (ESA) Huygens probe performed a successful entry and descent into Titan's atmosphere on January 14, 2005, and landed safely on the satellite's surface. A methodology was developed, implemented, and tested to reconstruct the Huygens probe trajectory from its various science and engineering measurements, which were performed during the probe's entry and descent to the surface of Titan, Saturn's largest moon. The probe trajectory reconstruction is an essential effort that has to be done as early as possible in the post-flight data analysis phase as it guarantees a correct and consistent interpretation of all the experiment data and furthermore provides a reference set of data for "ground-truthing" orbiter remote sensing measurements. The entry trajectory is reconstructed from the measured probe aerodynamic drag force, which also provides a means to derive the upper atmospheric properties like density, pressure, and temperature. The descent phase reconstruction is based upon a combination of various atmospheric measurements such as pressure, temperature, composition, speed of sound, and wind speed. A significant amount of effort was spent to outline and implement a least-squares trajectory estimation algorithm that provides a means to match the entry and descent trajectory portions in case of discontinuity. An extensive test campaign of the algorithm is presented which used the Huygens Synthetic Dataset (HSDS) developed by the Huygens Project Scientist Team at ESA/ESTEC as a test bed. This dataset comprises the simulated sensor output (and the corresponding measurement noise and uncertainty) of all the relevant probe instruments. The test campaign clearly showed that the proposed methodology is capable of utilizing all the relevant probe data, and will provide the best estimate of the probe trajectory once real instrument measurements from the actual probe mission are available. 
As a further test case using actual flight data the NASA Mars Pathfinder entry and descent trajectory and the space craft attitude was reconstructed from the 3-axis accelerometer measurements which are archived on the Planetary Data System. The results are consistent with previously published reconstruction efforts.
Recursive least-squares learning algorithms for neural networks
NASA Astrophysics Data System (ADS)
Lewis, Paul S.; Hwang, Jenq N.
1990-11-01
This paper presents the development of a pair of recursive least-squares (RLS) algorithms for online training of multilayer perceptrons, which are a class of feedforward artificial neural networks. These algorithms incorporate second-order information about the training error surface in order to achieve faster learning rates than are possible using first-order gradient descent algorithms such as the generalized delta rule. A least squares formulation is derived from a linearization of the training error function. Individual training pattern errors are linearized about the network parameters that were in effect when the pattern was presented. This permits the recursive solution of the least squares approximation, either via conventional RLS recursions or by recursive QR decomposition-based techniques. The computational complexity of the update is O(N^2), where N is the number of network parameters. This is due to the estimation of the N x N inverse Hessian matrix. Less computationally intensive approximations of the RLS algorithms can easily be derived by using only block diagonal elements of this matrix, thereby partitioning the learning into independent sets. A simulation example is presented in which a neural network is trained to approximate a two-dimensional Gaussian bump. In this example, RLS training required an order of magnitude fewer iterations on average (527) than did training with the generalized delta rule. BACKGROUND: Artificial neural networks (ANNs) offer an interesting and potentially useful paradigm for signal processing and pattern recognition. The majority of ANN applications employ the feed-forward multilayer perceptron (MLP) network architecture, in which network parameters are "trained" by a supervised learning algorithm employing the generalized delta rule (GDR) [1, 2]. The GDR algorithm approximates a fixed-step steepest descent algorithm using derivatives computed by error backpropagation.
The GDR algorithm is sometimes referred to as the backpropagation algorithm. However, in this paper we will use the term backpropagation to refer only to the process of computing error derivatives. While multilayer perceptrons provide a very powerful nonlinear modeling capability, GDR training can be very slow and inefficient. In linear adaptive filtering, the analog of the GDR algorithm is the least-mean-squares (LMS) algorithm. Steepest descent-based algorithms such as GDR or LMS are first order because they use only first derivative, or gradient, information about the training error to be minimized. To speed up the training process, second-order algorithms may be employed that take advantage of second derivative, or Hessian matrix, information. Second-order information can be incorporated into MLP training in different ways. In many applications, especially in the area of pattern recognition, the training set is finite. In these cases block learning can be applied using standard nonlinear optimization techniques [3, 4, 5].
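The RLS recursion at the core of the paper can be sketched in its classic linear-model form, where P is the recursively maintained inverse correlation (inverse Hessian) matrix whose update gives the O(N^2) cost per step. The paper applies the same recursion to a linearized MLP error; this sketch uses a plain linear model, and all names are assumptions.

```python
def rls_train(samples, dim, lam=1.0, delta=100.0):
    """Classic recursive least-squares (RLS) update for a linear model
    y = w . x. P tracks the inverse correlation matrix, so each update
    costs O(dim^2); lam is a forgetting factor (1.0 = growing window)."""
    w = [0.0] * dim
    P = [[delta if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for x, y in samples:
        Px = [sum(P[i][j] * x[j] for j in range(dim)) for i in range(dim)]
        denom = lam + sum(x[i] * Px[i] for i in range(dim))
        k = [v / denom for v in Px]                    # gain vector
        e = y - sum(w[i] * x[i] for i in range(dim))   # a priori error
        w = [w[i] + k[i] * e for i in range(dim)]
        P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(dim)]
             for i in range(dim)]
    return w
```

On noise-free data the estimate converges to the true weights after only a handful of samples, illustrating the fast convergence that motivates using RLS instead of first-order GDR/LMS updates.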
Advanced optical technologies for space exploration
NASA Astrophysics Data System (ADS)
Clark, Natalie
2007-09-01
NASA Langley Research Center is involved in the development of photonic devices and systems for space exploration missions. Photonic technologies of particular interest are those that can be utilized for in-space communication, remote sensing, guidance navigation and control, lunar descent and landing, and rendezvous and docking. NASA Langley has recently established a class-100 clean-room which serves as a Photonics Fabrication Facility for development of prototype optoelectronic devices for aerospace applications. In this paper we discuss our design, fabrication, and testing of novel active pixels, deformable mirrors, and liquid crystal spatial light modulators. Successful implementation of these intelligent optical devices and systems in space requires careful consideration of temperature and space radiation effects in inorganic and electronic materials. Applications including high bandwidth inertial reference units; lightweight, high precision star trackers for guidance, navigation, and control; deformable mirrors; wavefront sensing; and beam steering technologies are discussed. In addition, experimental results are presented which characterize their performance in space exploration systems.
Distributed Sensing and Shape Control of Piezoelectric Bimorph Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redmond, James M.; Barney, Patrick S.; Henson, Tammy D.
1999-07-28
As part of a collaborative effort between Sandia National Laboratories and the University of Kentucky to develop a deployable mirror for remote sensing applications, research in shape sensing and control algorithms that leverage the distributed nature of electron gun excitation for piezoelectric bimorph mirrors is summarized. A coarse shape sensing technique is developed that uses reflected light rays from the sample surface to provide discrete slope measurements. Estimates of surface profiles are obtained with a cubic spline curve fitting algorithm. Experiments on a PZT bimorph illustrate appropriate deformation trends as a function of excitation voltage. A parallel effort to effect desired shape changes through electron gun excitation is also summarized. A one-dimensional model-based algorithm is developed to correct profile errors in bimorph beams. A more useful two-dimensional algorithm is also developed that relies on measured voltage-curvature sensitivities to provide corrective excitation profiles for the top and bottom surfaces of bimorph plates. The two algorithms are illustrated using finite element models of PZT bimorph structures subjected to arbitrary disturbances. Corrective excitation profiles that yield desired parabolic forms are computed, and are shown to provide the necessary corrective action.
NASA Astrophysics Data System (ADS)
Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing
2015-08-01
Among the many wavefront control algorithms used in adaptive optics (AO) systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, and it offers excellent real-time performance and stability. However, as the number of wavefront sensor sub-apertures and deformable mirror actuators increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of AO systems. In this paper we apply an iterative wavefront control algorithm to high-resolution AO systems, in which the voltages of each actuator are obtained through iteration, yielding great advantages in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2)-O(n^3) for the direct gradient wavefront control algorithm but about O(n)-O(n^(3/2)) for the iterative algorithm, where n is the number of actuators in the AO system. The more sub-apertures and deformable mirror actuators there are, the more significant the advantage of the iterative wavefront control algorithm becomes. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
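The trade-off between the direct matrix reconstructor and an iterative solver can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the response matrix `R`, the slope vector, and all sizes are invented, and plain conjugate gradients stands in for whatever iteration scheme the authors use.

```python
import numpy as np

def direct_voltages(R, slopes):
    """Direct gradient control: v = R^+ s with a (pre-computable) pseudo-inverse.
    Every control step then costs a dense matrix-vector product, roughly O(n^2)."""
    return np.linalg.pinv(R) @ slopes

def iterative_voltages(R, slopes, iters=100, tol=1e-10):
    """Iterative control: solve the normal equations (R^T R) v = R^T s by
    conjugate gradients; each iteration is cheap when R is sparse."""
    A, b = R.T @ R, R.T @ slopes
    v = np.zeros_like(b)
    r = b - A @ v
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        v = v + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return v

rng = np.random.default_rng(0)
R = rng.standard_normal((40, 20))   # made-up slope/actuator response matrix
s = R @ rng.standard_normal(20)     # consistent slope measurements
assert np.allclose(iterative_voltages(R, s), direct_voltages(R, s), atol=1e-6)
```

For a dense R both routes cost about the same; the iterative route wins when R is sparse or when only a matrix-free product with R is available.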
NASA Astrophysics Data System (ADS)
Kingsbury, Lana K.; Atcheson, Paul D.
2004-10-01
The Northrop Grumman/Ball/Kodak team is building the JWST observatory, scheduled for launch in 2011. To develop the flight wavefront sensing and control (WFS&C) algorithms and software, Ball is designing and building a 1-meter-diameter, functionally accurate version of the JWST optical telescope element (OTE). This testbed telescope (TBT) will incorporate the same optical element control capability as the flight OTE. The secondary mirror will be controlled by a 6-degree-of-freedom (dof) hexapod, and each of the 18 segmented primary mirror assemblies will have 6-dof hexapod control as well as radius-of-curvature adjustment capability. In addition to the highly adjustable primary and secondary mirrors, the TBT will include a rigid tertiary mirror, 2 fold mirrors (to direct light into the TBT), and a very stable supporting structure. The configured telescope system's total residual wavefront error will be better than 175 nm RMS, double pass. The primary and secondary mirror hexapod assemblies provide 5 nm piston resolution, 0.0014 arcsec tilt resolution, 100 nm translation resolution, and 0.04497 arcsec clocking resolution. The supporting structure (specifically the secondary mirror support structure) is designed to ensure that the primary mirror segments will not change their despace position relative to the secondary mirror (spaced > 1 meter apart) by more than 500 nm within a one-hour period of ambient clean-room operation.
A Fast Deep Learning System Using GPU
2014-06-01
…widely used in data modeling until three decades later, when an efficient training algorithm for the RBM was invented by Hinton [3] and the computing power …can be trained using most optimization algorithms, such as BP, conjugate gradient descent (CGD), or Levenberg-Marquardt (LM). The advantage of this…
NASA Technical Reports Server (NTRS)
Dejarnette, F. R.
1984-01-01
Attention is given to a computer algorithm yielding the data required for a flight crew to navigate from an entry fix, about 100 nm from an airport, to a metering fix, and arrive there at a predetermined time, altitude, and airspeed. The flight path is divided into several descent and deceleration segments. Results for the case of a B-737 airliner indicate that wind and nonstandard atmospheric properties have a significant effect on the flight path and must be taken into account. While a range of combinations of Mach number and calibrated airspeed is possible for the descent segments leading to the metering fix, only small changes in the fuel consumed were observed for this range of combinations. A combination that is based on scheduling flexibility therefore seems preferable.
A network of spiking neurons for computing sparse representations in an energy efficient way
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.
2013-01-01
Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
Online learning in optical tomography: a stochastic approach
NASA Astrophysics Data System (ADS)
Chen, Ke; Li, Qin; Liu, Jian-Guo
2018-07-01
We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE from incoming-outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, in which the mismatch between computed and measured outgoing data is minimized with the same initial data and the RTE constraint. The memory and computation this requires, however, are typically prohibitive, especially in high-dimensional spaces, so smart iterative solvers that use only partial information in each step are called for. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch. It requires minimal memory and computation and advances quickly, and therefore serves the purpose well. In this paper we formulate the problem in both the nonlinear and the linearized settings, apply the SGD algorithm, and analyze its convergence performance.
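The linearized setting can be mimicked with an ordinary least-squares mismatch: each SGD step randomly selects one data pair and updates the parameter estimate from that pair alone. A hedged sketch, with a random matrix standing in for the linearized RTE forward map:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_param = 200, 10
A = rng.standard_normal((n_meas, n_param))   # stand-in for the linearized forward map
x_true = rng.standard_normal(n_param)        # "true" optical parameters
d = A @ x_true                               # measured outgoing data (noiseless)

x = np.zeros(n_param)
lr = 0.01
for _ in range(20000):
    i = rng.integers(n_meas)                 # randomly select one data pair
    g = (A[i] @ x - d[i]) * A[i]             # gradient of (1/2)(A_i x - d_i)^2
    x -= lr * g

assert np.linalg.norm(x - x_true) < 1e-2     # the random walk homes in on x_true
```

Each step touches one row of A, which is exactly the "minimal memory and computation" property the abstract exploits for the full RTE problem.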
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, this operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
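HDA's analog gradient-descent half has a familiar non-spiking analogue in iterative soft-thresholding (ISTA). The sketch below is only that analogue, not the spiking network: the dictionary `D`, sparsity weight `lam`, and test signal are all invented for illustration.

```python
import numpy as np

def ista(D, x, lam, steps=1000):
    """Iterative soft-thresholding: a gradient step on (1/2)||x - D a||^2
    followed by shrinkage: the analog-gradient half of an HDA-style scheme."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        a = a - D.T @ (D @ a - x) / L        # gradient-descent-like step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)               # unit-norm dictionary columns
a_true = np.zeros(60)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]       # a 3-sparse code
x = D @ a_true
a_hat = ista(D, x, lam=0.05)
assert np.linalg.norm(D @ a_hat - x) < 0.5           # good reconstruction
assert np.count_nonzero(np.abs(a_hat) > 0.1) <= 12   # and it stays sparse
```

In HDA, the coordinate-descent-like half replaces these dense analog updates with quantized spikes, which is where the bandwidth and energy savings come from.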
Massive parallelization of serial inference algorithms for a complex generalized linear model
Suchard, Marc A.; Simpson, Shawn E.; Zorych, Ivan; Ryan, Patrick; Madigan, David
2014-01-01
Following a series of high-profile drug safety disasters in recent years, many countries are redoubling their efforts to ensure the safety of licensed medical products. Large-scale observational databases such as claims databases or electronic health record systems are attracting particular attention in this regard, but present significant methodological and computational concerns. In this paper we show how high-performance statistical computation, including graphics processing units, relatively inexpensive highly parallel computing devices, can enable complex methods in large databases. We focus on optimization and massive parallelization of cyclic coordinate descent approaches to fit a conditioned generalized linear model involving tens of millions of observations and thousands of predictors in a Bayesian context. We find orders-of-magnitude improvement in overall run-time. Coordinate descent approaches are ubiquitous in high-dimensional statistics and the algorithms we propose open up exciting new methodological possibilities with the potential to significantly improve drug safety. PMID:25328363
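Cyclic coordinate descent itself is easy to exhibit on a small lasso problem: sweep the coordinates in order and apply the closed-form soft-threshold update to each one while maintaining a running residual. A toy sketch (the paper's conditioned GLM and its massive parallelization are not reproduced):

```python
import numpy as np

def cyclic_cd_lasso(X, y, lam, sweeps=200):
    """Cyclic coordinate descent for (1/2)||y - Xb||^2 + lam*||b||_1:
    visit one coordinate at a time and apply its closed-form update."""
    b = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ b                            # running residual
    for _ in range(sweeps):
        for j in range(X.shape[1]):
            r += X[:, j] * b[j]              # put coordinate j back
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * b[j]              # take it out again
    return b

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 8))
beta = np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0, 0.5, 0.0])
y = X @ beta                                 # noiseless toy data
b = cyclic_cd_lasso(X, y, lam=0.01)
assert np.linalg.norm(b - beta) < 0.05
```

The inner update touches one column of X at a time; it is this column-at-a-time structure that makes the method amenable to the GPU parallelization the paper describes.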
Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.
Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S
2004-01-01
MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along the orthogonal directions (r, φ, z) and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three-axis torque-balanced gradient coil for rat imaging was developed based on this method, with an efficiency of 2.13, 2.08, and 4.12 mT·m⁻¹·A⁻¹ along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. This method has also been applied to the design of a gradient coil for the human brain employing remote current return paths. The benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths. Copyright 2003 Wiley-Liss, Inc.
Piloted simulation of a ground-based time-control concept for air traffic control
NASA Technical Reports Server (NTRS)
Davis, Thomas J.; Green, Steven M.
1989-01-01
A concept for aiding air traffic controllers in efficiently spacing traffic and meeting scheduled arrival times at a metering fix was developed and tested in a real-time simulation. The automation aid, referred to as the ground-based 4-D descent advisor (DA), is based on accurate models of aircraft performance and weather conditions. The DA generates suggested clearances, including both top-of-descent-point and speed-profile data, for one or more aircraft in order to achieve specific time or distance separation objectives. The DA algorithm is used by the air traffic controller to resolve conflicts and issue advisories to arrival aircraft. A joint simulation was conducted using a piloted simulator and an advanced-concept air traffic control simulation to study the acceptability and accuracy of the DA automation aid from both the pilot's and the air traffic controller's perspectives. The results of the piloted simulation are examined. In the piloted simulation, airline crews executed controller-issued descent advisories along standard curved-path arrival routes and were able to achieve an arrival time precision of ±20 seconds at the metering fix. An analysis of errors generated in turns led to further enhancements of the algorithm to improve its predictive accuracy. Evaluations by pilots indicate general support for the concept and provide specific recommendations for improvement.
WS-BP: An efficient wolf search based back-propagation algorithm
NASA Astrophysics Data System (ADS)
Nawi, Nazri Mohd; Rehman, M. Z.; Khan, Abdullah
2015-05-01
Wolf Search (WS) is a heuristic-based optimization algorithm. Inspired by the preying and survival capabilities of wolves, this algorithm is highly capable of searching large spaces of candidate solutions. This paper investigates the use of the WS algorithm in combination with the back-propagation neural network (BPNN) algorithm to overcome the local minima problem and to improve convergence in gradient descent. The performance of the proposed Wolf Search based Back-Propagation (WS-BP) algorithm is compared with the Artificial Bee Colony Back-Propagation (ABC-BP), Bat Based Back-Propagation (Bat-BP), and conventional BPNN algorithms. Specifically, OR and XOR datasets are used for training the networks. The simulation results show that the WS-BP algorithm effectively avoids local minima and converges to the global minimum.
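The conventional BPNN baseline on XOR can be sketched directly; the wolf-search hybrid itself is not reproduced here, and the network size, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# XOR truth table, one of the datasets named in the comparison
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(4)
W1, b1 = rng.standard_normal((2, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, losses = 1.0, []
for _ in range(5000):
    h = sig(X @ W1 + b1)                     # forward pass
    out = sig(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    d_out = (out - y) * out * (1 - out)      # backprop through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)       # ...and through the hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

assert losses[-1] < losses[0]                # plain gradient descent does learn
```

It is precisely this baseline's sensitivity to initialization and local plateaus that motivates wrapping a global heuristic such as Wolf Search around the weight updates.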
A Gradient Taguchi Method for Engineering Optimization
NASA Astrophysics Data System (ADS)
Hwang, Shun-Fa; Wu, Jen-Chih; He, Rong-Song
2017-10-01
To balance robustness against convergence speed in optimization, a novel hybrid algorithm combining the Taguchi method and the steepest descent method is proposed in this work. The Taguchi method, using orthogonal arrays, can quickly find the optimum combination of the levels of various factors, even when the number of levels and/or factors is quite large. The algorithm is applied to the inverse determination of the elastic constants of three composite plates by combining numerical methods and vibration testing. For these problems, the proposed algorithm finds better elastic constants at lower computational cost. The proposed algorithm therefore exhibits good robustness and fast convergence compared with some hybrid genetic algorithms.
2016-01-01
This paper presents an algorithm, for use with a Portable Powered Ankle-Foot Orthosis (PPAFO), that can automatically detect changes in gait modes (level ground, ascent and descent of stairs or ramps), thus allowing for appropriate ankle actuation control during swing phase. An artificial neural network (ANN) algorithm used input signals from an inertial measurement unit and foot switches, that is, the vertical velocity and segment angle of the foot. Output from the ANN was filtered and adjusted to generate a final data set used to classify different gait modes. Five healthy male subjects walked with the PPAFO on the right leg for two test scenarios (walking over level ground and up and down stairs or a ramp; three trials per scenario). Success rate was quantified by the number of correctly classified steps with respect to the total number of steps. The results indicated that the proposed algorithm's success rate was high (99.3%, 100%, and 98.3% for level, ascent, and descent modes in the stairs scenario, respectively; 98.9%, 97.8%, and 100% in the ramp scenario). The proposed algorithm continuously detected each step's gait mode with faster timing and higher accuracy compared to a previous algorithm that used a decision tree based on maximizing the reliability of the mode recognition. PMID:28070188
NASA Technical Reports Server (NTRS)
Lahti, G. P.
1971-01-01
The method of steepest descent used in optimizing one-dimensional layered radiation shields is extended to multidimensional, multiconstraint situations. The multidimensional optimization algorithm and equations are developed for the case in which the dose constraint in any one direction depends only on the shield thicknesses in that direction and is independent of shield thicknesses in other directions. Expressions are derived for the one-, two-, and three-dimensional cases (one, two, and three constraints). The procedure is applicable to the optimization of shields with different dose constraints and layering arrangements in the principal directions.
NASA Technical Reports Server (NTRS)
Prevot, Thomas
2012-01-01
This paper describes the underlying principles and algorithms for computing the primary controller-managed spacing (CMS) tools developed at NASA for precisely spacing aircraft along efficient descent paths. The trajectory-based CMS tools include slot markers, delay indications, and speed advisories. These tools are one of three core NASA technologies integrated in NASA's ATM Technology Demonstration-1 (ATD-1), which will operationally demonstrate the feasibility of fuel-efficient, high-throughput arrival operations using Automatic Dependent Surveillance-Broadcast (ADS-B) and ground-based and airborne NASA technologies for precision scheduling and spacing.
Approximate solution of the p-median minimization problem
NASA Astrophysics Data System (ADS)
Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.
2016-09-01
A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.
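The discrete analog of steepest descent for the p-median problem can be sketched as a local swap search: repeatedly apply an improving swap (close one median, open another site) until none exists. An illustrative toy on random points; the paper's a priori performance bounds are not computed here.

```python
import itertools
import numpy as np

def pmedian_cost(C, medians):
    """Total cost with each client assigned to its cheapest open median."""
    return C[:, medians].min(axis=1).sum()

def swap_descent(C, p, seed=0):
    """Discrete descent: apply improving close-one/open-one swaps
    until the current set of medians is locally optimal."""
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    medians = list(rng.choice(n, size=p, replace=False))
    best = pmedian_cost(C, medians)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.product(range(p), range(n)):
            if j in medians:
                continue
            cand = medians.copy()
            cand[i] = j
            c = pmedian_cost(C, cand)
            if c < best - 1e-12:
                medians, best, improved = cand, c, True
    return medians, best

rng = np.random.default_rng(5)
pts = rng.random((25, 2))
C = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # client-site distances
meds, cost = swap_descent(C, p=3)

# The result is a local optimum: no single swap improves it.
for i, j in itertools.product(range(3), range(25)):
    if j not in meds:
        cand = meds.copy(); cand[i] = j
        assert pmedian_cost(C, cand) >= cost - 1e-12
```

Because the objective is supermodular, such gradient-style local moves admit exactly the kind of worst-case guarantees the paper derives.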
Multidither Adaptive Algorithms
1976-09-01
…Influence Function Profiles of Beryllium Mirror… Three-Dimensional View of Central-Actuator Influence for RADC Beryllium Mirror… Contour Lines… installed at this writing. Some preliminary tests on the mirror have determined its excursion sensitivity, influence function, and frequency response… The frequency response shows some unusual resonance behavior, but is usable to at least 30 kHz. The influence function has a shape given…
Quantifying Non-Equilibrium in Hypersonic Flows Using Entropy Generation
2007-03-01
…1) mirror fabrication, 2) mirror actuation, and 3) control algorithms, with a focus on the potential for future space-based applications… electrodes transported via a conducting electrolyte [19]. When placed under a voltage potential, cations in a polymer matrix immediately swell… has the potential to create an over-damped surface, preventing Wavescope from detecting any strains. The final step in the mirror fabrication is to…
NASA Technical Reports Server (NTRS)
2013-01-01
Topics covered include: Fully Integrated, Miniature, High-Frequency Flow Probe Utilizing MEMS Leadless SOI Technology; Nanoscale Surface Plasmonics Sensor With Nanofluidic Control; Advanced Dispersed Fringe Sensing Algorithm for Coarse Phasing Segmented Mirror Telescopes; Neural Network Back-Propagation Algorithm for Sensing Hypergols; Bulk Moisture and Salinity Sensor; Change-Based Satellite Monitoring Using Broad Coverage and Targetable Sensing; Circularly Polarized Microwave Antenna Element with Very Low Off-Axis Cross-Polarization; Ultra-Low Heat-Leak, High-Temperature Superconducting Current Leads for Space Applications; Flash Cracking Reactor for Waste Plastic Processing; An Automated Safe-to-Mate (ASTM) Tester; Wireless Chalcogenide Nanoionic-Based Radio-Frequency Switch; Compute Element and Interface Box for the Hazard Detection System; DOT Transmit Module; Composite Aerogel Multifoil Protective Shielding; Li-Ion Electrolytes with Improved Safety and Tolerance to High-Voltage Systems; Polymer-Reinforced, Non-Brittle, Lightweight Cryogenic Insulation; Controlled, Site-Specific Functionalization of Carbon Nanotubes with Diazonium Salts; Regenerable Sorbent for CO2 Removal; Sprayable Aerogel Bead Compositions With High Shear Flow Resistance and High Thermal Insulation Value; Lexan Linear Shaped Charge Holder with Magnets and Backing Plate; Robotic Ankle for Omnidirectional Rock Anchors; Wind, Wave, and Tidal Energy Without Power Conditioning; An Active Heater Control Concept to Meet IXO Type Mirror Module Thermal-Structural Distortion Requirement; Waterless Clothes-Cleaning Machine; Integrated Electrical Wire Insulation Repair System; LVGEMS Time-of-Flight Mass Spectrometry on Satellites; Surface Inspection Tool for Optical Detection of Surface Defects; Per-Pixel, Dual-Counter Scheme for Optical Communications; Certification-Based Process Analysis; Surface Navigation Using Optimized Waypoints and Particle Swarm Optimization; Smart-Divert Powered Descent 
Guidance to Avoid the Backshell Landing Dispersion Ellipse; Estimating Foreign-Object-Debris Density from Photogrammetry Data; Adaptive Sampling of Spatiotemporal Phenomena with Optimization Criteria; Building a 2.5D Digital Elevation Model From 2D Imagery; Eyes on the Earth 3D; Target Trailing With Safe Navigation for Maritime Autonomous Surface Vehicles; Adams-Based Rover Terramechanics and Mobility Simulator - ARTEMIS; ISTP CDF Skeleton Editor; Uplink Summary Generator (ULSGEN) Version 1.0; Robotics On-Board Trainer (ROBoT); Software Engineering Tools for Scientific Models; Automatic Data Filter Customization Using a Genetic Algorithm; Tracker Toolkit; Towards Efficient Scientific Data Management Using Cloud Storage; On a Formal Tool for Reasoning About Flight Software Cost Analysis; A Nanostructured Composites Thermal Switch Controls Internal and External Short Circuit in Lithium Ion Batteries; Spacecraft Crew Cabin Condensation Control; and Functional Near-Infrared Spectroscopy Signals Measure Neuronal Activity in the Cortex.
Ciaccio, Edward J; Micheli-Tzanakou, Evangelia
2007-07-01
Common-mode noise degrades cardiovascular signal quality and diminishes measurement accuracy. Filtering to remove noise components in the frequency domain often distorts the signal. Two adaptive noise canceling (ANC) algorithms were tested to adjust weighted reference signals for optimal subtraction from a primary signal. The update of the weight w was based upon the gradient term of the steepest descent equation: [see text], where the error ε is the difference between the primary and weighted reference signals. ∇ was estimated from Δε² and Δw without using a variable Δw in the denominator, which can cause instability. The Parallel Comparison (PC) algorithm computed Δε² using fixed finite differences ±Δw in parallel at each discrete time k. The ALOPEX algorithm computed Δε² × Δw from time k to k + 1 to estimate ∇, with a random number added to account for Δε² · Δw → 0 near the optimal weighting. Using simulated data, both algorithms stably converged to the optimal weighting within 50-2000 discrete sample points k, even with an SNR of 1:8 and weights initialized far from the optimal. Using a sharply pulsatile cardiac electrogram signal with added noise so that the SNR = 1:5, both algorithms exhibited stable convergence within 100 ms (100 sample points). Fourier spectral analysis revealed minimal distortion when comparing the signal without added noise to the ANC-restored signal. ANC algorithms based upon difference calculations can rapidly and stably converge to the optimal weighting in simulated and real cardiovascular data. Signal quality is restored with minimal distortion, increasing the accuracy of biophysical measurement.
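The Parallel Comparison idea, estimating the gradient from ε² evaluated at fixed offsets w ± Δw rather than dividing by a variable Δw, can be sketched for a one-weight canceler. The signals, step size, and offset below are invented for illustration, not taken from the paper.

```python
import numpy as np

k = np.arange(4000)
signal = np.sin(2 * np.pi * k / 100)          # stand-in "cardiac" signal
ref = np.sin(2 * np.pi * k / 7)               # noise reference channel
primary = signal + 0.8 * ref                  # primary = signal + scaled noise

w, dw, mu = 0.0, 0.05, 0.005
err2 = lambda w_, i: (primary[i] - w_ * ref[i]) ** 2
for i in range(len(k)):
    # PC-style update: evaluate epsilon^2 at fixed offsets w +/- dw in
    # parallel, so there is no variable-step denominator to blow up.
    grad_est = (err2(w + dw, i) - err2(w - dw, i)) / (2 * dw)
    w -= mu * grad_est

assert abs(w - 0.8) < 0.1                     # converges near the optimal weight
```

Since ε² is quadratic in w, the symmetric finite difference here is exact; the scheme's real value is that it stays stable when ε² is noisy or only measurable.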
Karayiannis, N B
2000-01-01
This paper presents the development and investigates the properties of ordered weighted learning vector quantization (LVQ) and clustering algorithms. These algorithms are developed by using gradient descent to minimize reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting aggregation operators that lead to admissible reformulation functions. Minimization of admissible reformulation functions based on ordered weighted aggregation operators produces a family of soft LVQ and clustering algorithms, which includes fuzzy LVQ and clustering algorithms as special cases. The proposed LVQ and clustering algorithms are used to perform segmentation of magnetic resonance (MR) images of the brain. The diagnostic value of the segmented MR images provides the basis for evaluating a variety of ordered weighted LVQ and clustering algorithms.
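Fuzzy c-means, cited above as a special case of the reformulation-function family, illustrates the alternating descent structure. A minimal sketch on synthetic 2-D data; the ordered weighted variants the paper develops are not reproduced.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Fuzzy c-means: alternate the membership and prototype updates,
    each of which decreases the clustering objective."""
    V = X[np.linspace(0, len(X) - 1, c).astype(int)].copy()  # initial prototypes
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        inv = (1.0 / d2) ** (1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)   # fuzzy memberships
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted prototype update
    return V, U

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)),    # two synthetic clusters
               rng.normal(3.0, 0.3, (50, 2))])
V, U = fuzzy_cmeans(X)
assert np.linalg.norm(V[0] - V[1]) > 2.0         # prototypes separate the clusters
```

Replacing the membership formula with an ordered weighted aggregation of the distances is, per the abstract, what generates the broader family of soft LVQ and clustering algorithms.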
NASA Astrophysics Data System (ADS)
Egron, Sylvain; Soummer, Rémi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Levecq, Olivier; Mazoyer, Johan; N'Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand
2017-09-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to study wavefront sensing and control for a segmented space telescope such as JWST. With the JWST Science and Operations Center co-located at STScI, JOST was developed both to provide a platform for staff training and to test alternate wavefront sensing and control strategies for independent validation or future improvements beyond the baseline operations. The design of JOST reproduces the physics of JWST's three-mirror anastigmat (TMA) using three custom aspheric lenses. It provides image quality similar to JWST's (80% Strehl ratio) over a field equivalent to a NIRCam module, but at 633 nm. An Iris AO segmented mirror stands in for the segmented primary mirror of JWST. Actuators allow us to control (1) the 18 segments of the segmented mirror in piston, tip, and tilt and (2) the second lens, which stands in for the secondary mirror, in tip, tilt, and x, y, z positions. We present the most recent experimental results for the segmented mirror alignment. Our implementation of the wavefront sensing (WFS) algorithms using phase diversity is tested both in simulation and experimentally. The wavefront control (WFC) algorithms, which rely on a linear model for the optical aberrations induced by misalignment of the secondary lens and the segmented mirror, are tested and validated both in simulations and experimentally. In this proceeding, we present the performance of the full active optics control loop in the presence of perturbations on the segmented mirror, and we detail the quality of the alignment correction.
Minimum-fuel turning climbout and descent guidance of transport jets
NASA Technical Reports Server (NTRS)
Neuman, F.; Kreindler, E.
1983-01-01
The complete flightpath optimization problem for minimum fuel consumption from takeoff to landing including the initial and final turns from and to the runway heading is solved. However, only the initial and final segments which contain the turns are treated, since the straight-line climbout, cruise, and descent problems have already been solved. The paths are derived by generating fields of extremals, using the necessary conditions of optimal control together with singular arcs and state constraints. Results show that the speed profiles for straight flight and turning flight are essentially identical except for the final horizontal accelerating or decelerating turns. The optimal turns require no abrupt maneuvers, and an approximation of the optimal turns could be easily integrated with present straight-line climb-cruise-descent fuel-optimization algorithms. Climbout at the optimal IAS rather than the 250-knot terminal-area speed limit would save 36 lb of fuel for the 727-100 aircraft.
ERIC Educational Resources Information Center
Végh, Ladislav
2016-01-01
The first data structure that first-year undergraduate students learn during the programming and algorithms courses is the one-dimensional array. For novice programmers, it might be hard to understand different algorithms on arrays (e.g. searching, mirroring, sorting algorithms), because the algorithms dynamically change the values of elements. In…
Design of Efficient Mirror Adder in Quantum- Dot Cellular Automata
NASA Astrophysics Data System (ADS)
Mishra, Prashant Kumar; Chattopadhyay, Manju K.
2018-03-01
Low power consumption is an essential requirement for portable multimedia systems using digital signal processing algorithms and architectures. Quantum-dot cellular automata (QCA) is an emerging nanotechnology for the development of high-performance, ultra-dense, low-power digital circuits. Several efficient QCA-based binary and decimal arithmetic circuits have been implemented; however, important improvements are still possible. This paper demonstrates mirror adder circuit design in QCA. We present a comparative study of mirror adder cells designed using the conventional CMOS technique and mirror adder cells designed using quantum-dot cellular automata. QCA-based mirror adders are better in terms of area by a factor of three.
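Whatever the substrate, a mirror adder cell realizes the same full-adder truth table: carry-out is the majority of the three inputs and sum is their parity. A quick functional check (the QCA cell layout itself, of course, is not something software can capture):

```python
def full_add(a, b, cin):
    """Full-adder truth function realized by a mirror adder cell:
    carry-out is the 3-input majority, sum is the parity."""
    cout = (a & b) | (b & cin) | (a & cin)   # majority gate
    s = a ^ b ^ cin                          # parity (XOR)
    return s, cout

# Exhaustive check against binary addition over all 8 input combinations
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_add(a, b, cin)
            assert s + 2 * cout == a + b + cin
```

The majority gate is the natural primitive in QCA, which is one reason adder structures built around majority logic map onto it so efficiently.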
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.
Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter
2012-08-01
An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is very susceptible to nonlinearities such as atmospheric turbulence, model uncertainties, and, of course, system failures; these systems therefore make a sensible testbed for evaluating fault-tolerant, adaptive flight control strategies. Within this work the concept of feedback linearization is combined with feed-forward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaptation laws for the network weights are used for online training. Within these adaptation laws, the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. By accounting for the system's stability, this robust online learning method offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures. Copyright © 2012 Elsevier Ltd. All rights reserved.
Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.
2008-01-01
This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.
A hybrid Gerchberg-Saxton-like algorithm for DOE and CGH calculation
NASA Astrophysics Data System (ADS)
Wang, Haichao; Yue, Weirui; Song, Qiang; Liu, Jingdan; Situ, Guohai
2017-02-01
The Gerchberg-Saxton (GS) algorithm is widely used in various disciplines of modern science and technology where phase retrieval is required. However, this legendary algorithm tends to stagnate after a few iterations. Many efforts have been made to improve this situation. Here we propose to introduce the strategies of gradient descent and weighting into the GS algorithm, and demonstrate it using two examples: design of a diffractive optical element (DOE) to achieve off-axis illumination in lithographic tools, and design of a computer generated hologram (CGH) for holographic display. Both numerical simulations and optical experiments are carried out for demonstration.
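As background for the modification proposed above, a minimal sketch of the classic GS iteration (the baseline, not the authors' weighted gradient-descent variant) might look like the following; the uniform source amplitude and toy far-field target are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def gerchberg_saxton(src_amp, tgt_amp, iters=200):
    """Classic GS iteration between two planes related by an FFT:
    in each plane, impose the known amplitude and keep the phase."""
    phase = rng.uniform(0, 2 * np.pi, src_amp.shape)
    for _ in range(iters):
        far = np.fft.fft2(src_amp * np.exp(1j * phase))
        far = tgt_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        near = np.fft.ifft2(far)
        phase = np.angle(near)                       # keep phase, reset amplitude
    return phase

# Toy problem: uniform illumination, off-center bright square in the far field.
src = np.ones((16, 16))
tgt = np.zeros((16, 16)); tgt[2:6, 3:7] = 1.0
tgt *= np.linalg.norm(np.fft.fft2(src)) / np.linalg.norm(tgt)  # Parseval scaling
phi = gerchberg_saxton(src, tgt)
err = np.linalg.norm(np.abs(np.fft.fft2(src * np.exp(1j * phi))) - tgt) / np.linalg.norm(tgt)
```

The residual amplitude error `err` typically plateaus after a few tens of iterations, which is the stagnation behavior the paper addresses.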
Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance
NASA Technical Reports Server (NTRS)
Jones, Brandon M.
2005-01-01
Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to induce an autonomous and intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
Identity-by-Descent-Based Phasing and Imputation in Founder Populations Using Graphical Models
Palin, Kimmo; Campbell, Harry; Wright, Alan F; Wilson, James F; Durbin, Richard
2011-01-01
Accurate knowledge of haplotypes, the combination of alleles co-residing on a single copy of a chromosome, enables powerful gene mapping and sequence imputation methods. Since humans are diploid, haplotypes must be derived from genotypes by a phasing process. In this study, we present a new computational model for haplotype phasing based on pairwise sharing of haplotypes inferred to be Identical-By-Descent (IBD). We apply the Bayesian network based model in a new phasing algorithm, called systematic long-range phasing (SLRP), that can capitalize on the close genetic relationships in isolated founder populations, and show with simulated and real genome-wide genotype data that SLRP substantially reduces the rate of phasing errors compared to previous phasing algorithms. Furthermore, the method accurately identifies regions of IBD, enabling linkage-like studies without pedigrees, and can be used to impute most genotypes with very low error rate. Genet. Epidemiol. 35:853-860, 2011. © 2011 Wiley Periodicals, Inc. PMID:22006673
Cosmic Microwave Background Mapmaking with a Messenger Field
NASA Astrophysics Data System (ADS)
Huffenberger, Kevin M.; Næss, Sigurd K.
2018-01-01
We apply a messenger field method to solve the linear minimum-variance mapmaking equation in the context of Cosmic Microwave Background (CMB) observations. In simulations, the method produces sky maps that converge significantly faster than those from a conjugate gradient descent algorithm with a diagonal preconditioner, even though the computational cost per iteration is similar. The messenger method recovers large scales in the map better than conjugate gradient descent, and yields a lower overall χ2. In the single, pencil beam approximation, each iteration of the messenger mapmaking procedure produces an unbiased map, and the iterations become more optimal as they proceed. A variant of the method can handle differential data or perform deconvolution mapmaking. The messenger method requires no preconditioner, but a high-quality solution needs a cooling parameter to control the convergence. We study the convergence properties of this new method and discuss how the algorithm is feasible for the large data sets of current and future CMB experiments.
NASA Astrophysics Data System (ADS)
Lavrinov, V. V.; Lavrinova, L. N.
2017-11-01
The statistically optimal control algorithm for the correcting mirror is formed by constructing a prediction of distortions of the optical signal and improves the time resolution of the adaptive optics system. The prediction of distortions is based on an analysis of the dynamics of changes in the optical inhomogeneities of the turbulent atmosphere or the evolution of phase fluctuations at the input aperture of the adaptive system. Dynamic properties of the system are manifested during the temporary transformation of the stresses controlling the mirror and are determined by the dynamic characteristics of the flexible mirror.
Incoherent beam combining based on the momentum SPGD algorithm
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng
2018-05-01
Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can improve the speed of convergence of the combining system efficiently. The analytical method is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is testified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the speed of the iteration, but also keeps the stability of the combining process. Therefore the feasibility of the proposed algorithm in the beam combining system is testified.
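The abstract does not give the update rule, but a plausible sketch of SPGD with a momentum term, applied to a toy metric (a quadratic stand-in for combined beam power; all names and constants here are assumptions for illustration), is:

```python
import numpy as np

rng = np.random.default_rng(0)

def spgd_momentum(metric, u, gain=0.5, amp=0.1, beta=0.9, iters=300):
    """Stochastic parallel gradient descent with a momentum term.

    metric: scalar performance function to maximize.
    All control channels are perturbed in parallel by +/- amp each step.
    """
    v = np.zeros_like(u)  # momentum accumulator
    for _ in range(iters):
        du = amp * rng.choice([-1.0, 1.0], size=u.shape)  # parallel perturbation
        dJ = metric(u + du) - metric(u - du)              # two-sided metric change
        v = beta * v + gain * dJ * du                     # momentum-smoothed step
        u = u + v
    return u

# Toy "combining" metric: peaks when all control phases align at zero.
target = lambda u: -np.sum(u**2)
u0 = rng.uniform(-1, 1, size=8)
u_opt = spgd_momentum(target, u0)
```

The momentum accumulator `v` averages successive stochastic steps, which is one way the convergence speed-up described in the abstract can arise.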
NASA Technical Reports Server (NTRS)
Partridge, James D.
2002-01-01
'NASA is preparing to launch the Next Generation Space Telescope (NGST). This telescope will be larger than the Hubble Space Telescope, be launched on an Atlas missile rather than the Space Shuttle, have a segmented primary mirror, and be placed in a higher orbit. All these differences pose significant challenges.' This effort addresses the challenge of implementing an algorithm, designed by Philip Olivier and members of SOMTC (Space Optics Manufacturing Technology Center), for aligning the segments of the primary mirror during the initial deployment. The implementation was to be performed on the SIBOA (Systematic Image Based Optical Alignment) testbed. Unfortunately, hardware/software issues concerning SIBOA and an extended time period for algorithm development prevented testing before the end of the study period. Properties of the digital camera were studied and understood, resulting in the current ability to select optimal settings regarding saturation. The study was successful in manually capturing several images of two stacked segments with various relative phases. These images can be used to calibrate the algorithm for future implementation. Currently the system is ready for testing.
NASA Astrophysics Data System (ADS)
Pinson, Robin Marie
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant (fuel) optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from ground control. The goal is to autonomously design the optimal powered descent trajectory onboard the spacecraft immediately prior to the descent burn for use during the burn. Compared to a planetary powered landing problem, the challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies, and low thrust vehicles. The nonlinear gravity fields cannot be represented by a constant gravity model nor a Newtonian model. The trajectory design algorithm needs to be robust and efficient to guarantee a designed trajectory and complete the calculations in a reasonable time frame. This research investigates the following questions: Can convex optimization be used to design the minimum propellant powered descent trajectory for a soft landing on an asteroid? Is this method robust and reliable to allow autonomy onboard the spacecraft without interaction from ground control? This research designed a convex optimization based method that rapidly generates the propellant optimal asteroid powered descent trajectory. The solution to the convex optimization problem is the thrust magnitude and direction, which designs and determines the trajectory. The propellant optimal problem was formulated as a second order cone program, a subset of convex optimization, through relaxation techniques by including a slack variable, change of variables, and incorporation of the successive solution method. 
Convex optimization solvers, especially second order cone programs, are robust, reliable, and are guaranteed to find the global minimum provided one exists. In addition, an outer optimization loop using Brent's method determines the optimal flight time corresponding to the minimum propellant usage over all flight times. Inclusion of additional trajectory constraints, solely vertical motion near the landing site and glide slope, were evaluated. Through a theoretical proof involving the Minimum Principle from Optimal Control Theory and the Karush-Kuhn-Tucker conditions it was shown that the relaxed problem is identical to the original problem at the minimum point. Therefore, the optimal solution of the relaxed problem is an optimal solution of the original problem, referred to as lossless convexification. A key finding is that this holds for all levels of gravity model fidelity. The designed thrust magnitude profiles were the bang-bang predicted by Optimal Control Theory. The first high fidelity gravity model employed was the 2x2 spherical harmonics model assuming a perfect triaxial ellipsoid and placement of the coordinate frame at the asteroid's center of mass and aligned with the semi-major axes. The spherical harmonics model is not valid inside the Brillouin sphere and this becomes relevant for irregularly shaped asteroids. Then, a higher fidelity model was implemented combining the 4x4 spherical harmonics gravity model with the interior spherical Bessel gravity model. All gravitational terms in the equations of motion are evaluated with the position vector from the previous iteration, creating the successive solution method. Methodology success was shown by applying the algorithm to three triaxial ellipsoidal asteroids with four different rotation speeds using the 2x2 gravity model. Finally, the algorithm was tested using the irregularly shaped asteroid, Castalia.
Parameter-tolerant design of high contrast gratings
NASA Astrophysics Data System (ADS)
Chevallier, Christyves; Fressengeas, Nicolas; Jacquet, Joel; Almuneau, Guilhem; Laaroussi, Youness; Gauthier-Lafaye, Olivier; Cerutti, Laurent; Genty, Frédéric
2015-02-01
This work is devoted to the design of high contrast grating mirrors taking into account the technological constraints and tolerances of fabrication. First, a global optimization algorithm was combined with a numerical analysis of grating structures (RCWA) to automatically design HCG mirrors. Then, the tolerances of the grating dimensions were precisely studied to develop a robust optimization algorithm with which high contrast gratings, exhibiting not only a high efficiency but also large tolerance values, could be designed. Finally, several structures integrating previously designed HCGs have been simulated to validate and illustrate the interest of such gratings.
Algorithms for Mathematical Programming with Emphasis on Bi-level Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldfarb, Donald; Iyengar, Garud
2014-05-22
The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.
A conjugate gradient method with descent properties under strong Wolfe line search
NASA Astrophysics Data System (ADS)
Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is one of the optimization methods that are often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to the existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration numbers and CPU time under strong Wolfe line search. Overall, this new method performs efficiently and is comparable to other well-known methods.
Precise Image-Based Motion Estimation for Autonomous Small Body Exploration
NASA Technical Reports Server (NTRS)
Johnson, Andrew Edie; Matthies, Larry H.
2000-01-01
We have developed and tested a software algorithm that enables onboard autonomous motion estimation near small bodies using descent camera imagery and laser altimetry. Through simulation and testing, we have shown that visual feature tracking can decrease uncertainty in spacecraft motion to a level that makes landing on small, irregularly shaped, bodies feasible. Possible future work will include qualification of the algorithm as a flight experiment for the Deep Space 4/Champollion comet lander mission currently under study at the Jet Propulsion Laboratory.
The Double Star Orbit Initial Value Problem
NASA Astrophysics Data System (ADS)
Hensley, Hagan
2018-04-01
Many precise algorithms exist to find a best-fit orbital solution for a double star system given a good enough initial value. Desmos is an online graphing calculator tool with extensive capabilities to support animations and defining functions. It can provide a useful visual means of analyzing double star data to arrive at a best guess approximation of the orbital solution. This is a necessary requirement before using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.
Discrete Analog Processing for Tracking and Guidance Control
1980-11-01
be called the multi-sample algorithm, satisfies the descent condition of Eq. (4.2.2.3). Thus, this descent algorithm will determine a coefficient vector a...
Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.
2012-12-01
We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing Voronoi sites associated with each processor/sub-domain along steepest descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.
Antunes, Gabriela; Faria da Silva, Samuel F; Simoes de Souza, Fabio M
2018-06-01
Mirror neurons fire action potentials both when the agent performs a certain behavior and watches someone performing a similar action. Here, we present an original mirror neuron model based on the spike-timing-dependent plasticity (STDP) between two morpho-electrical models of neocortical pyramidal neurons. Both neurons fired spontaneously with basal firing rate that follows a Poisson distribution, and the STDP between them was modeled by the triplet algorithm. Our simulation results demonstrated that STDP is sufficient for the rise of mirror neuron function between the pairs of neocortical neurons. This is a proof of concept that pairs of neocortical neurons associating sensory inputs to motor outputs could operate like mirror neurons. In addition, we used the mirror neuron model to investigate whether channelopathies associated with autism spectrum disorder could impair the modeled mirror function. Our simulation results showed that impaired hyperpolarization-activated cationic currents (Ih) affected the mirror function between the pairs of neocortical neurons coupled by STDP.
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second algorithm is proposed for solving nonlinear equations. The first method contains two aspects of information: function value and gradient value. The two methods both possess some good properties, as follows: (1) βk ≥ 0; (2) the search direction has the trust region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations. PMID:26502409
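A hedged sketch of a PRP-type method with the nonnegativity property (1) — here the standard PRP+ truncation with a simple Armijo backtracking as a stand-in for the paper's line-search-free constructions, not the authors' exact algorithms — might look like:

```python
import numpy as np

def prp_plus_cg(f, grad, x, iters=200, tol=1e-10):
    """Conjugate gradient with beta_k = max(0, PRP formula), so beta_k >= 0."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                     # safeguard: restart with steepest descent
            d = -g
        # Armijo backtracking (simplified stand-in for the paper's conditions)
        t, fx, gd = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * gd and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PRP+ truncation
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Demo on a strictly convex quadratic f(x) = 0.5 * x^T A x
A = np.diag([1.0, 4.0, 9.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_opt = prp_plus_cg(f, grad, np.array([1.0, 1.0, 1.0]))
```

The `max(0, ...)` truncation is what enforces βk ≥ 0; the restart guard keeps the direction a descent direction when the line search is inexact.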
Cyclic coordinate descent: A robotics algorithm for protein loop closure.
Canutescu, Adrian A; Dunbrack, Roland L
2003-05-01
In protein structure prediction, it is often the case that a protein segment must be adjusted to connect two fixed segments. This occurs during loop structure prediction in homology modeling as well as in ab initio structure prediction. Several algorithms for this purpose are based on the inverse Jacobian of the distance constraints with respect to dihedral angle degrees of freedom. These algorithms are sometimes unstable and fail to converge. We present an algorithm developed originally for inverse kinematics applications in robotics. In robotics, an end effector in the form of a robot hand must reach for an object in space by altering adjustable joint angles and arm lengths. In loop prediction, dihedral angles must be adjusted to move the C-terminal residue of a segment to superimpose on a fixed anchor residue in the protein structure. The algorithm, referred to as cyclic coordinate descent or CCD, involves adjusting one dihedral angle at a time to minimize the sum of the squared distances between three backbone atoms of the moving C-terminal anchor and the corresponding atoms in the fixed C-terminal anchor. The result is an equation in one variable for the proposed change in each dihedral. The algorithm proceeds iteratively through all of the adjustable dihedral angles from the N-terminal to the C-terminal end of the loop. CCD is suitable as a component of loop prediction methods that generate large numbers of trial structures. It succeeds in closing loops in a large test set 99.79% of the time, and fails occasionally only for short, highly extended loops. It is very fast, closing loops of length 8 in 0.037 sec on average.
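The per-joint closed-form update described above can be sketched for a planar kinematic chain (a toy 2D robotics illustration with invented link lengths and target, not the protein-backbone implementation):

```python
import numpy as np

def fk(angles, lengths):
    """Forward kinematics: positions of all joints of a planar chain."""
    pts = [np.zeros(2)]
    theta = 0.0
    for a, L in zip(angles, lengths):
        theta += a
        pts.append(pts[-1] + L * np.array([np.cos(theta), np.sin(theta)]))
    return pts

def ccd(angles, lengths, target, iters=50):
    """Cyclic coordinate descent: adjust one joint angle at a time so the
    end effector rotates toward the target (closed form per joint)."""
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            pivot, end = pts[i], pts[-1]
            # angle that rotates (end - pivot) onto (target - pivot)
            a1 = np.arctan2(*(end - pivot)[::-1])
            a2 = np.arctan2(*(target - pivot)[::-1])
            angles[i] += a2 - a1
    return angles

lengths = [1.0, 1.0, 1.0]
target = np.array([1.5, 1.0])
sol = ccd([0.3, 0.2, 0.1], lengths, target)
```

Each inner step solves a one-variable minimization exactly, which is why the method needs no Jacobian inverse and cannot diverge — the properties the abstract highlights.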
A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms
NASA Technical Reports Server (NTRS)
Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.
2005-01-01
We compare Genetic Algorithms (GAs) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GAs, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information-theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GAs on a diverse set of problems. To handle high-dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GAs in rate of descent, avoidance of trapping in false minima, and long-term optimization.
Applying Gradient Descent in Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Cui, Nan
2018-04-01
With the development of integrated circuits and computer science, people are becoming more interested in solving practical issues via information technologies. Along with that, a new subject called Artificial Intelligence (AI) has come up. One popular research interest in AI is recognition algorithms. In this paper, one of the most common algorithms for image recognition, Convolutional Neural Networks (CNNs), will be introduced. Understanding its theory and structure is of great significance for every scholar who is interested in this field. A Convolutional Neural Network is an artificial neural network that combines the mathematical method of convolution with a neural network. The hierarchical structure of a CNN provides it with reliable computing speed and a reasonable error rate. The most significant characteristics of CNNs are feature extraction, weight sharing and dimension reduction. Meanwhile, combined with the Back Propagation (BP) mechanism and the Gradient Descent (GD) method, CNNs have the ability to self-study and perform in-depth learning. Basically, BP provides backward feedback for enhancing reliability, and GD is used for the self-training process. This paper mainly discusses CNNs and the related BP and GD algorithms, including the basic structure and function of CNNs, details of each layer, the principles and features of BP and GD, and some examples in practice, with a summary at the end.
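The BP-plus-GD training loop described above can be illustrated with a minimal dense network (a sketch, not a full CNN; the architecture, data, and learning rate are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: regress y = x^2 on [-1, 1].
X = np.linspace(-1, 1, 64).reshape(-1, 1)
Y = X**2

# One hidden layer with tanh units, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.1
losses = []
for _ in range(500):
    # forward pass
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    E = P - Y                       # error signal fed backward
    losses.append(float(np.mean(E**2)))
    # backpropagation: chain rule, layer by layer
    gW2 = H.T @ E / len(X); gb2 = E.mean(0)
    dH = (E @ W2.T) * (1 - H**2)    # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    # gradient descent update
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

BP computes the gradients (the backward feedback), and GD applies them — the same division of labor the abstract ascribes to CNN training.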
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
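As a toy version of maximum entropy learning by steepest ascent of the log-likelihood — independent spins with a closed-form moment map, far simpler than the pairwise Ising models of the paper, and without the curvature rectification it proposes:

```python
import numpy as np

rng = np.random.default_rng(6)

# Independent-spin maximum entropy model p(x) ∝ exp(h·x), x_i ∈ {-1,+1}:
# its moments are m_i = tanh(h_i), so the fit can be checked exactly.
h_true = rng.uniform(-1, 1, size=10)
data = np.where(rng.random((5000, 10)) < 0.5 * (1 + np.tanh(h_true)), 1.0, -1.0)
m_data = data.mean(axis=0)

h = np.zeros(10)
for _ in range(500):
    m_model = np.tanh(h)              # model moments (closed form here)
    h += 0.5 * (m_data - m_model)     # steepest ascent on the log-likelihood
```

The gradient of the log-likelihood is exactly the moment mismatch `m_data - m_model`; in the paper's setting the model moments require Gibbs sampling, which is where the slow, curvature-limited dynamics analyzed above comes in.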
Effect of the influence function of deformable mirrors on laser beam shaping.
González-Núñez, Héctor; Béchet, Clémentine; Ayancán, Boris; Neichel, Benoit; Guesalaga, Andrés
2017-02-20
The continuous membrane stiffness of a deformable mirror propagates the deformation of the actuators beyond their neighbors. When phase-retrieval algorithms are used to determine the desired shape of these mirrors, this cross-coupling-also known as influence function (IF)-is generally disregarded. We study this problem via simulations and bench tests for different target shapes to gain further insight into the phenomenon. Sound modeling of the IF effect is achieved as highlighted by the concurrence between the modeled and experimental results. In addition, we observe that the actuators IF is a key parameter that determines the accuracy of the output light pattern. Finally, it is shown that in some cases it is possible to achieve better shaping by modifying the input irradiance of the phase-retrieval algorithm. The results obtained from this analysis open the door to further improvements in this type of beam-shaping systems.
Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed
NASA Technical Reports Server (NTRS)
Taylor, Jaime; Rakoczy, John; Steincamp, James
2003-01-01
Phase retrieval requires calculation of the real-valued phase of the pupil function from the image intensity distribution and characteristics of an optical system. Genetic algorithms (GAs) were used to solve two one-dimensional phase retrieval problems. A GA successfully estimated the coefficients of a polynomial expansion of the phase when the number of coefficients was correctly specified. A GA also successfully estimated the multiple phases of a segmented optical system analogous to the seven-mirror Systematic Image-Based Optical Alignment (SIBOA) testbed located at NASA's Marshall Space Flight Center. The SIBOA testbed was developed to investigate phase retrieval techniques. Tip/tilt and piston motions of the mirrors accomplish phase corrections. A constant phase over each mirror can be achieved by an independent tip/tilt correction; the phase correction term can then be factored out of the Discrete Fourier Transform (DFT), greatly reducing computations.
Powered ankle-foot prosthesis to assist level-ground and stair-descent gaits.
Au, Samuel; Berniker, Max; Herr, Hugh
2008-05-01
The human ankle varies impedance and delivers net positive work during the stance period of walking. In contrast, commercially available ankle-foot prostheses are passive during stance, causing many clinical problems for transtibial amputees, including non-symmetric gait patterns, higher gait metabolism, and poorer shock absorption. In this investigation, we develop and evaluate a myoelectric-driven, finite state controller for a powered ankle-foot prosthesis that modulates both impedance and power output during stance. The system employs both sensory inputs measured local to the external prosthesis, and myoelectric inputs measured from residual limb muscles. Using local prosthetic sensing, we first develop two finite state controllers to produce biomimetic movement patterns for level-ground and stair-descent gaits. We then employ myoelectric signals as control commands to manage the transition between these finite state controllers. To transition from level-ground to stairs, the amputee flexes the gastrocnemius muscle, triggering the prosthetic ankle to plantar flex at terminal swing, and initiating the stair-descent state machine algorithm. To transition back to level-ground walking, the amputee flexes the tibialis anterior muscle, triggering the ankle to remain dorsiflexed at terminal swing, and initiating the level-ground state machine algorithm. As a preliminary evaluation of clinical efficacy, we test the device on a transtibial amputee with both the proposed controller and a conventional passive-elastic control. We find that the amputee can robustly transition between the finite state controllers through direct muscle activation, allowing rapid transitioning from level-ground to stair walking patterns. Additionally, we find that the proposed finite state controllers result in a more biomimetic ankle response, producing net propulsive work during level-ground walking and greater shock absorption during stair descent. 
The results of this study highlight the potential of prosthetic leg controllers that exploit neural signals to trigger terrain-appropriate, local prosthetic leg behaviors.
Taming the Wild: A Unified Analysis of Hogwild!-Style Algorithms.
De Sa, Christopher; Zhang, Ce; Olukotun, Kunle; Ré, Christopher
2015-12-01
Stochastic gradient descent (SGD) is a ubiquitous algorithm for a variety of machine learning problems. Researchers and industry have developed several techniques to optimize SGD's runtime performance, including asynchronous execution and reduced precision. Our main result is a martingale-based analysis that enables us to capture the rich noise models that may arise from such techniques. Specifically, we use our new analysis in three ways: (1) we derive convergence rates for the convex case (Hogwild!) with relaxed assumptions on the sparsity of the problem; (2) we analyze asynchronous SGD algorithms for non-convex matrix problems including matrix completion; and (3) we design and analyze an asynchronous SGD algorithm, called Buckwild!, that uses lower-precision arithmetic. We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware.
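A minimal Hogwild!-style sketch — several threads updating shared weights without locks on a problem where each sample touches only a few coordinates (illustrative only; the problem, sizes, and step size are invented, and this is not the paper's analysis):

```python
import numpy as np
import threading

# Sparse noiseless least squares: 3 active features per sample (Hogwild!'s setting).
rng = np.random.default_rng(2)
d, n = 50, 2000
w_true = rng.normal(size=d)
idx = rng.integers(0, d, size=(n, 3))
vals = rng.normal(size=(n, 3))
y = np.einsum('ij,ij->i', vals, w_true[idx])

w = np.zeros(d)                        # shared parameter vector, no lock

def worker(samples, lr=0.05, epochs=5):
    for _ in range(epochs):
        for s in samples:
            j, v = idx[s], vals[s]
            err = v @ w[j] - y[s]      # sparse prediction error
            w[j] -= lr * err * v       # lock-free sparse SGD update

threads = [threading.Thread(target=worker, args=(range(k, n, 4),)) for k in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

Because each update touches only three coordinates, conflicting writes are rare — the sparsity condition under which the lock-free updates still converge.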
An Impact-Location Estimation Algorithm for Subsonic Uninhabited Aircraft
NASA Technical Reports Server (NTRS)
Bauer, Jeffrey E.; Teets, Edward
1997-01-01
An impact-location estimation algorithm is being used at the NASA Dryden Flight Research Center to support range safety for uninhabited aerial vehicle flight tests. The algorithm computes an impact location based on the descent rate, mass, and altitude of the vehicle and current wind information. The predicted impact location is continuously displayed on the range safety officer's moving map display so that the flightpath of the vehicle can be routed to avoid ground assets if the flight must be terminated. The algorithm easily adapts to different vehicle termination techniques and has been shown to be accurate to the extent required to support range safety for subsonic uninhabited aerial vehicles. This paper describes how the algorithm functions, how the algorithm is used at NASA Dryden, and how various termination techniques are handled by the algorithm. Other approaches to predicting the impact location and the reasons why they were not selected for real-time implementation are also discussed.
Wavefront Reconstruction and Mirror Surface Optimizationfor Adaptive Optics
2014-06-01
TERMS Wavefront reconstruction, Adaptive optics, Wavelets, Atmospheric turbulence, Branch points, Mirror surface optimization, Space telescope, Segmented...contribution adapts the proposed algorithm to work when branch points are present from significant atmospheric turbulence. An analysis of vector spaces...estimate the distortion of the collected light caused by the atmosphere and corrected by adaptive optics. A generalized orthogonal wavelet wavefront
Understanding and Optimizing Asynchronous Low-Precision Stochastic Gradient Descent
De Sa, Christopher; Feldman, Matthew; Ré, Christopher; Olukotun, Kunle
2018-01-01
Stochastic gradient descent (SGD) is one of the most popular numerical algorithms used in machine learning and other domains. Since this is likely to continue for the foreseeable future, it is important to study techniques that can make it run fast on parallel hardware. In this paper, we provide the first analysis of a technique called Buckwild! that uses both asynchronous execution and low-precision computation. We introduce the DMGC model, the first conceptualization of the parameter space that exists when implementing low-precision SGD, and show that it provides a way to both classify these algorithms and model their performance. We leverage this insight to propose and analyze techniques to improve the speed of low-precision SGD. First, we propose software optimizations that can increase throughput on existing CPUs by up to 11×. Second, we propose architectural changes, including a new cache technique we call an obstinate cache, that increase throughput beyond the limits of current-generation hardware. We also implement and analyze low-precision SGD on the FPGA, which is a promising alternative to the CPU for future SGD systems. PMID:29391770
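One ingredient of low-precision SGD schemes like Buckwild! is an unbiased (stochastic) rounding quantizer, which keeps the expected quantized update equal to the full-precision one. The sketch below is our own toy illustration on a 1-D quadratic, with an arbitrary 8-bit-style grid; it is not the paper's DMGC implementation.

```python
import math
import random

def stoch_round(x, scale, rng):
    """Unbiased rounding to the grid {k * scale}: round up with
    probability equal to the fractional position within the cell."""
    q = x / scale
    lo = math.floor(q)
    return (lo + (1 if rng.random() < q - lo else 0)) * scale

rng = random.Random(0)
scale = 1.0 / 128.0                  # 8-bit-style fixed-point grid (arbitrary)
w = 0.0
for _ in range(1000):
    g = 2.0 * (w - 1.5)              # gradient of f(w) = (w - 1.5)^2
    w = stoch_round(w - 0.05 * g, scale, rng)
print(w)   # hovers within a few grid steps of the minimizer 1.5
```

Because the rounding is unbiased, the iterate random-walks in a small neighborhood of the minimizer instead of locking onto a biased grid point.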
Energy minimization in medical image analysis: Methodologies and applications.
Zhao, Feng; Xie, Xianghua
2016-02-01
Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous method and discrete method. The former includes Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic algorithm-based method, while the latter covers graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, primal-dual method, and the multi-objective optimization method. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview on those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.
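As a concrete instance of the continuous methods surveyed, gradient descent on a simple denoising energy looks as follows; the 1-D signal, smoothness weight, and step size are our own toy choices, not taken from the survey.

```python
def denoise(f, lam=2.0, lr=0.05, steps=500):
    """Gradient descent on E(u) = sum_i (u_i - f_i)^2
    + lam * sum_i (u_{i+1} - u_i)^2."""
    u = list(f)
    n = len(u)
    for _ in range(steps):
        grad = [2.0 * (u[i] - f[i]) for i in range(n)]    # data-fidelity term
        for i in range(n):
            if i > 0:
                grad[i] += 2.0 * lam * (u[i] - u[i - 1])  # smoothness terms
            if i < n - 1:
                grad[i] -= 2.0 * lam * (u[i + 1] - u[i])
        u = [ui - lr * gi for ui, gi in zip(u, grad)]
    return u

noisy = [0.0, 0.1, -0.1, 1.2, 0.9, 1.1, 1.0, 0.05]
smooth = denoise(noisy)
print(smooth)   # a smoothed version of the noisy step signal
```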
Distributed Control by Lagrangian Steepest Descent
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2004-01-01
Often adaptive, distributed control can be viewed as an iterated game between independent players. The coupling between the players' mixed strategies, arising as the system evolves from one instant to the next, is determined by the system designer. Information theory tells us that the most likely joint strategy of the players, given a value of the expectation of the overall control objective function, is the minimizer of a Lagrangian function of the joint strategy. So the goal of the system designer is to speed the evolution of the joint strategy to that Lagrangian-minimizing point, lower the expected value of the control objective function, and repeat. Here we elaborate the theory of algorithms that do this using local descent procedures, and that thereby achieve efficient, adaptive, distributed control.
NASA Astrophysics Data System (ADS)
Jia, Ningning; Lam, Edmund Y.
2010-04-01
Inverse lithography technology (ILT) synthesizes photomasks by solving an inverse imaging problem through optimization of an appropriate functional. Much effort on ILT is dedicated to deriving superior masks at a nominal process condition. However, the lower k1 factor causes the mask to be more sensitive to process variations. Robustness to major process variations, such as focus and dose variations, is desired. In this paper, we consider the focus variation as a stochastic variable, and treat the mask design as a machine learning problem. The stochastic gradient descent approach, which is a useful tool in machine learning, is adopted to train the mask design. Compared with previous work, simulation shows that the proposed algorithm is effective in producing robust masks.
Fractional-order gradient descent learning of BP neural networks with Caputo derivative.
Wang, Jian; Wen, Yanqing; Gou, Yida; Ye, Zhenyun; Chen, Hua
2017-05-01
Fractional calculus has been found to be a promising area of research for information processing and modeling of some physical systems. In this paper, we propose a fractional gradient descent method for the backpropagation (BP) training of neural networks. In particular, the Caputo derivative is employed to evaluate the fractional-order gradient of the error defined as the traditional quadratic energy function. The monotonicity and weak (strong) convergence of the proposed approach are proved in detail. Two simulations have been implemented to illustrate the performance of presented fractional-order BP algorithm on three small datasets and one large dataset. The numerical simulations effectively verify the theoretical observations of this paper as well. Copyright © 2017 Elsevier Ltd. All rights reserved.
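A rough sketch of the idea, under our own simplifications: a first-order truncation of the Caputo fractional gradient of order 0 < α < 1 scales the ordinary gradient by |w − w0|^(1−α)/Γ(2−α), where w0 is the lower terminal of the Caputo derivative. The learning rate, order, and starting values below are arbitrary, and this is not the paper's exact BP scheme.

```python
import math

def caputo_gd(grad, w, w0, alpha=0.9, lr=0.1, steps=200):
    """Gradient descent with a truncated Caputo fractional gradient:
    g_frac(w) = grad(w) * |w - w0|**(1 - alpha) / Gamma(2 - alpha)."""
    c = 1.0 / math.gamma(2.0 - alpha)
    for _ in range(steps):
        w -= lr * grad(w) * c * abs(w - w0) ** (1.0 - alpha)
    return w

# Minimize f(w) = (w - 3)^2, starting at w = 0.5 with lower terminal w0 = 0.
w_star = caputo_gd(lambda w: 2.0 * (w - 3.0), w=0.5, w0=0.0)
print(w_star)   # approaches the minimizer w = 3
```

Note the starting point must differ from the terminal w0, since the fractional factor vanishes at w = w0.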
NASA Astrophysics Data System (ADS)
Egron, Sylvain; Lajoie, Charles-Philippe; Leboulleux, Lucie; N'Diaye, Mamadou; Pueyo, Laurent; Choquet, Élodie; Perrin, Marshall D.; Ygouf, Marie; Michau, Vincent; Bonnefois, Aurélie; Fusco, Thierry; Escolle, Clément; Ferrari, Marc; Hugot, Emmanuel; Soummer, Rémi
2016-07-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop experiment designed to study wavefront sensing and control for a segmented space telescope, including both commissioning and maintenance activities. JOST is complementary to existing testbeds for JWST (e.g. the Ball Aerospace Testbed Telescope TBT) given its compact scale and flexibility, ease of use, and colocation at the JWST Science and Operations Center. The design of JOST reproduces the physics of JWST's three-mirror anastigmat (TMA) using three custom aspheric lenses. It provides image quality similar to JWST's (80% Strehl ratio) over a field equivalent to a NIRCam module, but at 633 nm. An Iris AO segmented mirror stands in for the segmented primary mirror of JWST. Actuators allow us to control (1) the 18 segments of the segmented mirror in piston, tip, and tilt and (2) the second lens, which stands in for the secondary mirror, in tip, tilt and x, y, z positions. We present the full linear control alignment infrastructure developed for JOST, with an emphasis on multi-field wavefront sensing and control. Our implementation of the wavefront sensing (WFS) algorithms using phase diversity is experimentally tested. The wavefront control (WFC) algorithms, which rely on a linear model for optical aberrations induced by small misalignments of the three lenses, are tested and validated in simulations.
Segmented Mirror Telescope Model and Simulation
2011-06-01
mirror surface is treated as a grid of masses and springs. The actuators have surface-normal forces applied to individual masses. The equation to...are not widely treated in the literature. The required modifications for the wavefront reconstruction algorithm of a circular aperture to correctly...Zernike polynomials, which are particularly suitable to describe the common optical characterizations of astigmatism, coma, defocus and others [9]
Implementation of a Wavefront-Sensing Algorithm
NASA Technical Reports Server (NTRS)
Smith, Jeffrey S.; Dean, Bruce; Aronstein, David
2013-01-01
A computer program has been written as a unique implementation of an image-based wavefront-sensing algorithm reported in "Iterative-Transform Phase Retrieval Using Adaptive Diversity" (GSC-14879-1), NASA Tech Briefs, Vol. 31, No. 4 (April 2007), page 32. This software was originally intended for application to the James Webb Space Telescope, but is also applicable to other segmented-mirror telescopes. The software is capable of determining optical-wavefront information using, as input, a variable number of irradiance measurements collected in defocus planes about the best focal position. The software also uses input of the geometrical definition of the telescope exit pupil (otherwise denoted the pupil mask) to identify the locations of the segments of the primary telescope mirror. From the irradiance data and mask information, the software calculates an estimate of the optical wavefront (a measure of performance) of the telescope generally and across each primary mirror segment specifically. The software is capable of generating irradiance data, wavefront estimates, and basis functions for the full telescope and for each primary-mirror segment. Optionally, each of these pieces of information can be measured or computed outside of the software and incorporated during execution of the software.
Hybrid DFP-CG method for solving unconstrained optimization problems
NASA Astrophysics Data System (ADS)
Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa
2017-09-01
The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as an approximation of the Hessian for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
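The DFP update that supplies the inverse-Hessian approximation can be sketched on a toy 2-D quadratic as below; the fixed step size and the plain quasi-Newton loop are our simplifications, not the paper's hybrid DFP-CG search direction.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(m, v):
    return [dot(row, v) for row in m]

def dfp_update(h, s, y):
    """DFP inverse-Hessian update:
    H+ = H + s s^T / (s.y) - (H y)(H y)^T / (y.H y)."""
    hy = matvec(h, y)
    sy = dot(s, y)
    yhy = dot(y, hy)
    n = len(s)
    return [[h[i][j] + s[i] * s[j] / sy - hy[i] * hy[j] / yhy
             for j in range(n)] for i in range(n)]

# Minimize f(x) = x0^2 + 2 * x1^2, whose gradient is [2*x0, 4*x1].
grad = lambda x: [2.0 * x[0], 4.0 * x[1]]
x = [3.0, 1.0]
h = [[1.0, 0.0], [0.0, 1.0]]            # initial inverse-Hessian guess
for _ in range(100):
    g = grad(x)
    d = [-v for v in matvec(h, g)]      # quasi-Newton search direction
    x_new = [xi + 0.1 * di for xi, di in zip(x, d)]   # fixed small step
    s = [a - b for a, b in zip(x_new, x)]
    y = [a - b for a, b in zip(grad(x_new), g)]
    h = dfp_update(h, s, y)
    x = x_new
print(x)   # approaches the minimizer [0, 0]
```

On a quadratic, the curvature condition s·y > 0 holds automatically, so the update keeps h positive definite.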
NASA Astrophysics Data System (ADS)
Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr
2017-12-01
There are a number of powerful total variation (TV) regularization methods that have great promise in limited data cone-beam CT reconstruction with an enhancement of image quality. These promising TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms. An appropriate way of selecting the values for each individual parameter has been suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements the edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm is able to preserve the edges of the reconstructed images better with fewer sensitive parameters to tune.
A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.
Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar
2017-03-01
The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule computer-aided detection (CAD). In this paper, we describe a new CT lung CAD method which aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1 norm regularizer for heterogeneous feature fusion and selection at the feature-subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of the kernel weights and uses an accelerated method based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1 norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits remarkable advantages in both the heterogeneous feature-subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1 norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets and automatically prune the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse
A maximum entropy model provides the least constrained probability distribution that reproduces experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all directions of the parameter space. Through the learning of pairwise Ising models from recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
Narayanan, Shrikanth
2009-01-01
We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005
Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.
Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L
2017-10-01
The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller method. By using offline and online data rather than the mathematical system model, the PGADP algorithm improves control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q -function sequence converges to the optimal Q -function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q -function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
NASA Astrophysics Data System (ADS)
Liu, Jun; Huang, Wei; Fan, Hongjie
2016-02-01
A novel method for finding the initial structure parameters of an optical system via the genetic algorithm (GA) is proposed in this research. Usually, optical designers start their designs from the commonly used structures from a patent database; however, it is time consuming to modify the patented structures to meet the specification. A high-performance design result largely depends on the choice of the starting point. Accordingly, it would be highly desirable to be able to calculate the initial structure parameters automatically. In this paper, a method that combines a genetic algorithm and aberration analysis is used to determine an appropriate initial structure of an optical system. We use a three-mirror system as an example to demonstrate the validity and reliability of this method. On-axis and off-axis telecentric three-mirror systems are obtained based on this method.
Human-like machines: Transparency and comprehensibility.
Patrzyk, Piotr M; Link, Daniela; Marewski, Julian N
2017-01-01
Artificial intelligence algorithms seek inspiration from human cognitive systems in areas where humans outperform machines. But on what level should algorithms try to approximate human cognition? We argue that human-like machines should be designed to make decisions in transparent and comprehensible ways, which can be achieved by accurately mirroring human cognitive processes.
Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed
NASA Technical Reports Server (NTRS)
Rakoczy, John; Steincamp, James; Taylor, Jaime
2003-01-01
A reduced surrogate, one point crossover genetic algorithm with random rank-based selection was used successfully to estimate the multiple phases of a segmented optical system modeled on the seven-mirror Systematic Image-Based Optical Alignment testbed located at NASA's Marshall Space Flight Center.
FPGA-accelerated adaptive optics wavefront control
NASA Astrophysics Data System (ADS)
Mauch, S.; Reger, J.; Reinlein, C.; Appelfelder, M.; Goy, M.; Beckert, E.; Tünnermann, A.
2014-03-01
The speed of real-time adaptive optical systems is primarily restricted by the data-processing hardware and computational aspects. Furthermore, the application of mirror layouts with increasing numbers of actuators reduces the bandwidth (speed) of the system and, thus, the number of applicable control algorithms. This burden turns out to be a key impediment for deformable mirrors with a continuous mirror surface and highly coupled actuator influence functions. In this regard, specialized hardware is necessary for high-performance real-time control applications. Our approach to overcome this challenge is an adaptive optics system based on a Shack-Hartmann wavefront sensor (SHWFS) with a CameraLink interface. The data processing is based on a high-performance Intel Core i7 quad-core hard real-time Linux system. Employing a Xilinx Kintex-7 FPGA, a custom-developed PCIe card accelerates the analysis of the Shack-Hartmann wavefront sensor. A recently developed real-time capable spot detection algorithm evaluates the wavefront. The main features of the presented system are the reduction of latency and the acceleration of computation. For example, matrix multiplications, which in general are of complexity O(n³), are accelerated by using the DSP48 slices of the field-programmable gate array (FPGA), as well as by a novel hardware implementation of the SHWFS algorithm. Further benefits are the Streaming SIMD Extensions (SSE), which intensively use the parallelization capability of the processor to further reduce the latency and increase the bandwidth of the closed loop. Due to this approach, up to 64 actuators of a deformable mirror can be handled and controlled without noticeable restriction from computational burdens.
Design of Off-Axis PIAACMC Mirrors
NASA Technical Reports Server (NTRS)
Pluzhnik, Eugene; Guyon, Olivier; Belikov, Ruslan; Kern, Brian; Bendek, Eduardo
2015-01-01
The Phase-Induced Amplitude Apodization Complex Mask Coronagraph (PIAACMC) provides an efficient way to control diffraction propagation effects caused by the central obstruction/segmented mirrors of the telescope. PIAACMC can be optimized in a way that takes into account both chromatic diffraction effects caused by the telescope obstructed aperture and tip/tilt sensitivity of the coronagraph. As a result, unlike classic PIAA, the PIAACMC mirror shapes are often slightly asymmetric even for an on-axis configuration and require more care in calculating off-axis shapes when an off-axis configuration is preferred. A method to design off-axis PIAA mirror shapes given an on-axis mirror design is presented. The algorithm is based on geometrical ray tracing and is able to calculate off-axis PIAA mirror shapes for an arbitrary geometry of the input and output beams. The method is demonstrated using the third generation PIAACMC design for WFIRST-AFTA (Wide Field Infrared Survey Telescope-Astrophysics Focused Telescope Assets) telescope. Geometrical optics design issues related to the off-axis diffraction propagation effects are also discussed.
Integration of energy management concepts into the flight deck
NASA Technical Reports Server (NTRS)
Morello, S. A.
1981-01-01
The rapid rise of fuel costs has become a major concern of the commercial aviation industry, and it has become mandatory to seek means by which to conserve fuel. A research program was initiated in 1979 to investigate the integration of fuel-conservative energy/flight management computations and information into today's and tomorrow's flight deck. One completed effort within this program has been the development and flight testing of a fuel-efficient, time-based metering descent algorithm in a research cockpit environment. Research flights have demonstrated that time guidance and control in the cockpit was acceptable to both pilots and ATC controllers. Proper descent planning and energy management can save fuel for the individual aircraft as well as the fleet by helping to maintain a regularized flow into the terminal area.
Seismic noise attenuation using an online subspace tracking algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang
2018-02-01
We propose a new low-rank based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since it is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD) based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while saving half of the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
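A much-simplified rank-one version of online subspace tracking (essentially an Oja-style incremental gradient step; the full Grassmannian update used by such methods is more elaborate) can be sketched as:

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Stream noisy multiples of a fixed direction and track the 1-D subspace.
true_dir = normalize([1.0, 2.0, 2.0])
rng = random.Random(1)
u = normalize([1.0, 0.0, 0.0])          # initial subspace estimate
eta = 0.1                               # step size (arbitrary)
for _ in range(500):
    a = rng.uniform(-1.0, 1.0)
    v = [a * t + 0.01 * rng.gauss(0, 1) for t in true_dir]   # noisy sample
    w = sum(ui * vi for ui, vi in zip(u, v))     # projection weight
    r = [vi - w * ui for ui, vi in zip(u, v)]    # residual off the subspace
    u = normalize([ui + eta * w * ri for ui, ri in zip(u, r)])
print(abs(sum(ui * ti for ui, ti in zip(u, true_dir))))   # close to 1: aligned
```

Each sample costs only inner products and a rank-one correction, which is why such trackers avoid the repeated SVDs of TSVD-based methods.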
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
NASA Astrophysics Data System (ADS)
Bedi, Amrit Singh; Rajawat, Ketan
2018-05-01
Stochastic network optimization problems entail finding resource allocation policies that are optimal on average but must be designed in an online fashion. Such problems are ubiquitous in communication networks, where resources such as energy and bandwidth are divided among nodes to satisfy certain long-term objectives. This paper proposes an asynchronous incremental dual descent resource allocation algorithm that utilizes delayed stochastic gradients for carrying out its updates. The proposed algorithm is well-suited to heterogeneous networks as it allows the computationally challenged or energy-starved nodes to, at times, postpone the updates. The asymptotic analysis of the proposed algorithm is carried out, establishing dual convergence under both constant and diminishing step sizes. It is also shown that with a constant step size, the proposed resource allocation policy is asymptotically near-optimal. An application involving multi-cell coordinated beamforming is detailed, demonstrating the usefulness of the proposed algorithm.
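A synchronous, toy version of the dual-descent resource allocation being made asynchronous here might look as follows; the log-utility problem, step size, and variable names are our own illustrative choices.

```python
def dual_descent(n_users=4, c=2.0, alpha=0.05, steps=2000):
    """Allocate capacity c to maximize sum_i log(x_i) s.t. sum_i x_i <= c,
    by gradient descent on the dual variable lam (the resource 'price')."""
    lam = 1.0
    x = [1.0 / lam] * n_users
    for _ in range(steps):
        x = [1.0 / lam] * n_users                    # primal: argmax log x - lam*x
        lam = max(1e-6, lam - alpha * (c - sum(x)))  # dual gradient step
    return lam, x

lam, x = dual_descent()
print(lam, sum(x))   # price near n/c = 2.0; allocations sum to about c
```

In the paper's asynchronous variant, the dual step would use a delayed, stochastic version of the constraint gradient instead of the exact term above.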
NASA Astrophysics Data System (ADS)
Cao, Jingtai; Zhao, Xiaohui; Li, Zhaokun; Liu, Wei; Gu, Haijun
2017-11-01
The performance of free-space optical (FSO) communication systems is severely limited by atmospheric turbulence. Adaptive optics (AO) is a significant method for overcoming atmospheric disturbance. In particular, under strong scintillation, sensor-less AO systems play a major role in compensation. In this paper, a modified artificial fish school (MAFS) algorithm is proposed to compensate the aberrations in a sensor-less AO system. Both static and dynamic aberration compensation are analyzed, and the performance of FSO communication before and after compensation is compared. In addition, the MAFS algorithm is compared with the artificial fish school (AFS) algorithm, the stochastic parallel gradient descent (SPGD) algorithm and the simulated annealing (SA) algorithm. It is shown that the MAFS algorithm converges faster than the SPGD and SA algorithms and reaches a better convergence value than the AFS, SPGD and SA algorithms. The sensor-less AO system with the MAFS algorithm effectively increases the coupling efficiency at the receiving terminal with fewer iterations. In conclusion, the MAFS algorithm has great significance for sensor-less AO systems compensating atmospheric turbulence in FSO communication systems.
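For reference, the SPGD baseline against which MAFS is compared admits a compact sketch: perturb all control channels in parallel, measure the performance metric twice, and step along ΔJ·Δu. The quadratic metric below stands in for coupling efficiency and is our own toy choice.

```python
import random

def spgd(metric, u, gain=0.5, sigma=0.05, iters=2000, seed=0):
    """Stochastic parallel gradient descent (ascent on the metric)."""
    rng = random.Random(seed)
    for _ in range(iters):
        du = [sigma * rng.choice((-1.0, 1.0)) for _ in u]   # parallel perturbation
        j_plus = metric([ui + di for ui, di in zip(u, du)])
        j_minus = metric([ui - di for ui, di in zip(u, du)])
        dj = j_plus - j_minus
        u = [ui + gain * dj * di for ui, di in zip(u, du)]  # climb the metric
    return u

# Toy "coupling efficiency": peaked at a hidden target control vector.
target = [0.3, -0.2, 0.5]
metric = lambda u: -sum((ui - ti) ** 2 for ui, ti in zip(u, target))
u = spgd(metric, [0.0, 0.0, 0.0])
print(u)   # approaches [0.3, -0.2, 0.5]
```

SPGD needs only a scalar metric reading per perturbation, which is what makes it (and the population-based alternatives compared here) usable without a wavefront sensor.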
Adaptive optics using a MEMS deformable mirror for a segmented mirror telescope
NASA Astrophysics Data System (ADS)
Miyamura, Norihide
2017-09-01
For small-satellite remote sensing missions, a large-aperture telescope of more than 400 mm is required to realize observations with less than 1 m GSD. However, it is difficult or expensive to realize such a large-aperture telescope using a monolithic primary mirror with high surface accuracy, so segmented mirror telescopes should be studied, especially for small satellite missions. Generally, not only high accuracy of the optical surface but also high accuracy of the optical alignment is required for large-aperture telescopes. For segmented mirror telescopes, the alignment is more difficult and more important. In conventional systems, the optical alignment is adjusted before launch to achieve the desired imaging performance. However, it is difficult to adjust the alignment of large optics to high accuracy, and the thermal environment in orbit and vibration in the launch vehicle cause misalignments of the optics. We are developing an adaptive optics system using a MEMS deformable mirror for an earth-observing remote sensing sensor. An image-based adaptive optics system compensates the misalignments and wavefront aberrations of optical elements using the deformable mirror by feedback of observed images. We propose a control algorithm for the deformable mirror of a segmented mirror telescope that uses the observed image. Numerical simulation and experimental results show that misalignment and wavefront aberration of the segmented mirror telescope are corrected and image quality is improved.
NASA Astrophysics Data System (ADS)
Niu, Chaojun; Han, Xiang'e.
2015-10-01
Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free-space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness and few parameters to set. In this paper, we simulate the application of the improved ABC algorithm to correct the distorted wavefront and prove its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation in different turbulence conditions before and after correction. The results show that the ABC algorithm has a much faster correction speed than the DE algorithm and better correction capability for strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but not as effectively in weak turbulence.
Autonomous Navigation Results from the Mars Exploration Rover (MER) Mission
NASA Technical Reports Server (NTRS)
Maimone, Mark; Johnson, Andrew; Cheng, Yang; Willson, Reg; Matthies, Larry H.
2004-01-01
In January 2004, the Mars Exploration Rover (MER) mission landed two rovers, Spirit and Opportunity, on the surface of Mars. Several autonomous navigation capabilities were employed in space for the first time in this mission. In the Entry, Descent, and Landing (EDL) phase, both landers used a vision system called the Descent Image Motion Estimation System (DIMES) to estimate horizontal velocity during the last 2000 meters (m) of descent, by tracking features on the ground with a downlooking camera, in order to control retro-rocket firing to reduce horizontal velocity before impact. During surface operations, the rovers navigate autonomously using stereo vision for local terrain mapping and a local, reactive planning algorithm called Grid-based Estimation of Surface Traversability Applied to Local Terrain (GESTALT) for obstacle avoidance. In areas of high slip, stereo vision-based visual odometry has been used to estimate rover motion. As of mid-June, Spirit had traversed 3405 m, of which 1253 m were done autonomously; Opportunity had traversed 1264 m, of which 224 m were autonomous. These results have contributed substantially to the success of the mission and paved the way for increased levels of autonomy in future missions.
Controller evaluations of the descent advisor automation aid
NASA Technical Reports Server (NTRS)
Tobias, Leonard; Volckers, Uwe; Erzberger, Heinz
1989-01-01
An automation aid to assist air traffic controllers in efficiently spacing traffic and meeting arrival times at a fix has been developed at NASA Ames Research Center. The automation aid, referred to as the descent advisor (DA), is based on accurate models of aircraft performance and weather conditions. The DA generates suggested clearances, including both top-of-descent point and speed profile data, for one or more aircraft in order to achieve specific time or distance separation objectives. The DA algorithm is interfaced with a mouse-based, menu-driven controller display that allows the air traffic controller to interactively use its accurate predictive capability to resolve conflicts and issue advisories to arrival aircraft. This paper focuses on operational issues concerning the utilization of the DA, specifically, how the DA can be used for prediction, intrail spacing, and metering. In order to evaluate the DA, a real time simulation was conducted using both current and retired controller subjects. Controllers operated in teams of two, as they do in the present environment; issues of training and team interaction will be discussed. Evaluations by controllers indicated considerable enthusiasm for the DA aid, and provided specific recommendations for using the tool effectively.
Algorithm for Training a Recurrent Multilayer Perceptron
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Rais, Omar T.; Menon, Sunil K.; Atiya, Amir F.
2004-01-01
An improved algorithm has been devised for training a recurrent multilayer perceptron (RMLP) for optimal performance in predicting the behavior of a complex, dynamic, and noisy system multiple time steps into the future. [An RMLP is a computational neural network with self-feedback and cross-talk (both delayed by one time step) among neurons in hidden layers]. Like other neural-network-training algorithms, this algorithm adjusts network biases and synaptic-connection weights according to a gradient-descent rule. The distinguishing feature of this algorithm is a combination of global feedback (the use of predictions as well as the current output value in computing the gradient at each time step) and recursiveness. The recursive aspect of the algorithm lies in the inclusion of the gradient of predictions at each time step with respect to the predictions at the preceding time step; this recursion enables the RMLP to learn the dynamics. It has been conjectured that carrying the recursion to even earlier time steps would enable the RMLP to represent a noisier, more complex system.
Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Benford, Andrew; Tinker, Michael L.
2004-01-01
The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.
Scalable Multiplexed Ion Trap (SMIT) Program
2010-12-08
an integrated micromirror. The symmetric cross and the mirror trap had a number of complex design features. Both traps shaped the electrodes in...genetic algorithm. 6. Integrated micromirror. The Gen II linear trap (as well as the linear sections of the mirror and the cross) had a number of new...conventional imaging system constructed by off-the-shelf optical components and a micromirror located very close to the ion. A large fraction of photons
Shahsavari, Shadab; Rezaie Shirmard, Leila; Amini, Mohsen; Abedin Dokoosh, Farid
2017-01-01
Formulation of a nanoparticulate Fingolimod delivery system based on biodegradable poly(3-hydroxybutyrate-co-3-hydroxyvalerate) was optimized using artificial neural networks (ANNs). The concentrations of poly(3-hydroxybutyrate-co-3-hydroxyvalerate) and PVA and the amount of Fingolimod were taken as input values, and the particle size, polydispersity index, loading capacity, and entrapment efficiency as output data in the experimental design study. An in vitro release study was carried out for the best formulation according to statistical analysis. ANNs were employed to generate the best model to determine the relationships between these values. To specify the model with the best accuracy and proficiency for the in vitro release, multilayer perceptrons with different training algorithms were examined. Three training algorithms, Levenberg-Marquardt (LM), gradient descent, and Bayesian regularization, were employed for training the ANN models. It is demonstrated that the predictive ability of the training algorithms is in the order LM > gradient descent > Bayesian regularization. The optimum formulation was achieved by the LM training function with 15 hidden layers and 20 neurons; the transfer functions of the hidden layers and the output layer were tansig and purelin, respectively. The optimization process was developed by minimizing the error between the predicted and observed values of the training algorithm (about 0.0341).
Flight test experience using advanced airborne equipment in a time-based metered traffic environment
NASA Technical Reports Server (NTRS)
Morello, S. A.
1980-01-01
A series of test flights have demonstrated that time-based metering guidance and control was acceptable to pilots and air traffic controllers. The descent algorithm of the technique, with good representation of aircraft performance and wind modeling, yielded arrival time accuracy within 12 sec. It is expected that this will represent significant fuel savings (1) through a reduction of the time error dispersions at the metering fix for the entire fleet, and (2) for individual aircraft as well, through the presentation of guidance for a fuel-efficient descent. Air traffic controller workloads were also reduced, in keeping with the reduction of required communications resulting from the transfer of navigation responsibilities to pilots. A second series of test flights demonstrated that an existing flight management system could be modified to operate in the new mode.
NASA Technical Reports Server (NTRS)
Dutta, Soumyo; Way, David W.
2017-01-01
Mars 2020, the next planned U.S. rover mission to land on Mars, is based on the design of the successful 2012 Mars Science Laboratory (MSL) mission. Mars 2020 retains most of the entry, descent, and landing (EDL) sequences of MSL, including the closed-loop entry guidance scheme based on the Apollo guidance algorithm. However, unlike MSL, Mars 2020 will trigger the parachute deployment and descent sequence on a range trigger rather than the previously used velocity trigger. This difference will greatly reduce the landing ellipse sizes. Additionally, the relative contributions of each model to the total ellipse size have changed greatly due to the switch to the range trigger. This paper considers the effect on trajectory dispersions due to changing the trigger schemes and the contributions of these various models to trajectory and EDL performance.
NASA Technical Reports Server (NTRS)
Kopasakis, George
1997-01-01
Performance Seeking Control (PSC) attempts to find and control the process at the operating condition that will generate maximum performance. In this paper a nonlinear multivariable PSC methodology will be developed, utilizing the Fuzzy Model Reference Learning Control (FMRLC) and the method of Steepest Descent or Gradient (SDG). This PSC control methodology employs the SDG method to find the operating condition that will generate maximum performance. This operating condition is in turn passed to the FMRLC controller as a set point for the control of the process. The conventional SDG algorithm is modified in this paper in order for convergence to occur monotonically. For the FMRLC control, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for effective tuning of the FMRLC controller.
Overview of the Phoenix Entry, Descent and Landing System Architecture
NASA Technical Reports Server (NTRS)
Grover, Myron R., III; Cichy, Benjamin D.; Desai, Prasun N.
2008-01-01
NASA's Phoenix Mars Lander began its journey to Mars from Cape Canaveral, Florida in August 2007, but its journey to the launch pad began many years earlier in 1997 as NASA's Mars Surveyor Program 2001 Lander. In the intervening years, the entry, descent and landing (EDL) system architecture went through a series of changes, resulting in the system flown to the surface of Mars on May 25th, 2008. Some changes, such as entry velocity and landing site elevation, were the result of differences in mission design. Other changes, including the removal of hypersonic guidance, the reformulation of the parachute deployment algorithm, and the addition of the backshell avoidance maneuver, were driven by constant efforts to augment system robustness. An overview of the Phoenix EDL system architecture is presented along with rationales driving these architectural changes.
Developmental long trace profiler using optimally aligned mirror based pentaprism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, Samuel K; Morrison, Gregory Y.; Yashchuk, Valeriy V.
2010-07-21
A low-budget surface slope measuring instrument, the Developmental Long Trace Profiler (DLTP), was recently brought into operation at the Advanced Light Source Optical Metrology Laboratory [Nucl. Instr. and Meth. A 616, 212-223 (2010)]. The instrument is based on a precisely calibrated autocollimator and a movable pentaprism. The capability of the DLTP to achieve sub-microradian surface slope metrology has been verified via cross-comparison measurements with other high-performance slope measuring instruments when measuring the same high-quality test optics. In the present work, a further improvement of the DLTP is achieved by replacing the existing bulk pentaprism with a specially designed mirror based pentaprism. A mirror based pentaprism offers the possibility to eliminate systematic errors introduced by inhomogeneity of the optical material and fabrication imperfections of a bulk pentaprism. We provide the details of the mirror based pentaprism design and describe an original experimental procedure for precision mutual alignment of the mirrors. The algorithm of the alignment procedure and its efficiency are verified with rigorous ray tracing simulations. Results of measurements of a spherically curved test mirror and a flat test mirror using the original bulk pentaprism are compared with measurements using the new mirror based pentaprism, demonstrating the improved performance.
Segmented Mirror Image Degradation Due to Surface Dust, Alignment and Figure
NASA Technical Reports Server (NTRS)
Schreur, Julian J.
1999-01-01
In 1996 an algorithm was developed to include the effects of surface roughness in the calculation of the point spread function of a telescope mirror. This algorithm has been extended to include the effects of alignment errors and figure errors for the individual elements, and an overall contamination by surface dust. The final algorithm builds an array for a guard-banded pupil function of a mirror that may or may not have a central hole, a central reflecting segment, or an outer ring of segments. The central hole, central reflecting segment, and outer ring may be circular or polygonal, and the outer segments may have trimmed corners. The modeled point spread functions show that x-tilt and y-tilt, or the corresponding R-tilt and theta-tilt for a segment in an outer ring, is readily apparent for maximum wavefront errors of 0.1 lambda. A similar sized piston error is also apparent, but integral wavelength piston errors are not. Severe piston error introduces a focus error of the opposite sign, so piston could be adjusted to compensate for segments with varying focal lengths. Dust affects the image principally by decreasing the Strehl ratio, or peak intensity of the image. For an eight-meter telescope a 25% coverage by dust produced a scattered light intensity of 10(exp -9) of the peak intensity, a level well below detectability.
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land on asteroids are becoming popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site. The problem under investigation is how to design a fuel-optimal powered descent trajectory that can be quickly computed on-board the spacecraft, without interaction from ground control. An optimal trajectory designed immediately prior to the descent burn has many advantages. These advantages include the ability to use the actual vehicle starting state as the initial condition in the trajectory design and the ease of updating the landing target site if the original landing site is no longer viable. For long trajectories, the trajectory can be updated periodically by a redesign of the optimal trajectory based on current vehicle conditions to improve the guidance performance. One of the key drivers for being completely autonomous is the infrequent and delayed communication between ground control and the vehicle. Challenges that arise from designing an asteroid powered descent trajectory include complicated nonlinear gravity fields, small rotating bodies and low thrust vehicles. There are two previous studies that form the background to the current investigation. The first looked in-depth at applying convex optimization to a powered descent trajectory on Mars, with promising results [1, 2]. This showed that the powered descent equations of motion can be relaxed and formed into a convex optimization problem and that the optimal solution of the relaxed problem is indeed a feasible solution to the original problem. This analysis used a constant gravity field. The second applied a successive solution process to formulate a second order cone program that designs rendezvous and proximity operations trajectories [3, 4]. These trajectories included a Newtonian gravity model. The equivalence of the solutions between the relaxed and the original problem is theoretically established.
The proposed solution for designing the asteroid powered descent trajectory is to use convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process to design the fuel-optimal trajectory. The solution to the convex optimization problem is the thrust profile, magnitude and direction, that will yield the minimum-fuel trajectory for a soft landing at the target site, subject to various mission and operational constraints. The equations of motion are formulated in a rotating coordinate system and include a high fidelity gravity model. The vehicle's thrust magnitude can vary between maximum and minimum bounds during the burn. Constraints are also included to ensure that the vehicle does not run out of propellant or go below the asteroid's surface, and to enforce any vehicle pointing requirements. The equations of motion are discretized and propagated with the trapezoidal rule in order to produce equality constraints for the optimization problem. These equality constraints allow the optimization algorithm to solve the entire problem, without including a propagator inside the optimization algorithm.
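The trapezoidal discretization idea can be illustrated with a short sketch. This is a hypothetical 1-D example with invented values, not the authors' formulation: each pair of adjacent nodes is linked by an algebraic relation, which in the convex program would appear as a linear equality constraint rather than a forward loop.

```python
import numpy as np

# Hypothetical 1-D descent: x' = v, v' = u - g, discretized with the
# trapezoidal rule so the dynamics become relations linking consecutive
# nodes. Gravity, step size, and node count are illustrative only.
g, dt, N = 1.62, 1.0, 5
u = np.full(N, g)              # hover thrust: acceleration exactly cancels gravity

x, v = np.zeros(N), np.zeros(N)
x[0], v[0] = 100.0, 0.0
for k in range(N - 1):
    a0, a1 = u[k] - g, u[k + 1] - g
    v[k + 1] = v[k] + 0.5 * dt * (a0 + a1)          # trapezoidal velocity update
    x[k + 1] = x[k] + 0.5 * dt * (v[k] + v[k + 1])  # trapezoidal position update

# With hover thrust every trapezoidal relation holds with a constant state.
print(x[-1], v[-1])  # 100.0 0.0
```

In the optimization problem these same relations constrain the decision variables (states and thrusts at the nodes), so no numerical integrator runs inside the solver.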
Looking Back in Time: Building the James Webb Space Telescope (JWST) Optical Telescope Element
NASA Technical Reports Server (NTRS)
Feinberg, Lee
2016-01-01
When it launches in 2018, the James Webb Space Telescope (JWST) will look back in time at the earliest stars and galaxies forming in the universe. This talk will look back in time at the development of the JWST telescope. This will include a discussion of the design, technology development, mirror development, wave front sensing and control algorithms, lightweight cryogenic deployable structure, pathfinder telescope, and integration and test program evolution and status. The talk will provide the engineering answers on why the mirrors are made of Beryllium, why there are 18 segments, where and how the mirrors were made, how the mirrors get aligned using the main science camera, and how the telescope is being tested. It will also look back in time at the many dedicated people all over the country who helped build it.
NASA Astrophysics Data System (ADS)
Sanger, Gregory M.; Reid, Paul B.; Baker, Lionel R.
1990-11-01
Consideration is given to advanced optical fabrication, profilometry and thin films, and metrology. Particular attention is given to automation for optics manufacturing, 3D contouring on a numerically controlled grinder, laser-scanning lens configurations, a noncontact precision measurement system, novel noncontact profiler design for measuring synchrotron radiation mirrors, laser-diode technologies for in-process metrology, measurements of X-ray reflectivities of Au-coatings at several energies, platinum coating of an X-ray mirror for SR lithography, a Hilbert transform algorithm for fringe-pattern analysis, structural error sources during fabrication of the AXAF optical elements, an in-process mirror figure qualification procedure for large deformable mirrors, interferometric evaluation of lenslet arrays for 2D phase-locked laser diode sources, and manufacturing and metrology tooling for the solar-A soft X-ray telescope.
Performance of lightweight large C/SiC mirror
NASA Astrophysics Data System (ADS)
Yui, Yukari Y.; Goto, Ken; Kaneda, Hidehiro; Katayama, Haruyoshi; Kotani, Masaki; Miyamoto, Masashi; Naitoh, Masataka; Nakagawa, Takao; Saruwatari, Hideki; Suganuma, Masahiro; Sugita, Hiroyuki; Tange, Yoshio; Utsunomiya, Shin; Yamamoto, Yasuji; Yamawaki, Toshihiko
2017-11-01
Very lightweight mirrors will be required in the near future for both astronomical and earth science/observation missions. Silicon carbide is becoming one of the major materials applied especially to large and/or light space-borne optics, such as Herschel, GAIA, and SPICA. On the other hand, the technology of highly accurate optical measurement of large telescopes, especially in visible wavelengths or cryogenic circumstances, is also indispensable to realize such space-borne telescopes and hence successful missions. We have manufactured a very lightweight Φ=800mm mirror made of carbon reinforced silicon carbide composite that can be used to evaluate the homogeneity of the mirror substrate and to master and establish the ground testing method and techniques by assembling it as the primary mirror into an optical system. All other parts of the optics model are also made of the same material as the primary mirror. The composite material was assumed to be homogeneous from the mechanical tests of samples cut out from the various areas of the 800mm mirror green-body and the cryogenic optical measurement of the mirror surface deformation of a 160mm sample mirror that is also made from the same green-body as the 800mm mirror. The circumstance and condition of the optical testing facility has been confirmed to be capable of the highly precise optical measurements of large optical systems of horizontal light axis configuration. The stitching measurement method and the algorithm for analyzing the measurements are also under study.
Unified algorithm of cone optics to compute solar flux on central receiver
NASA Astrophysics Data System (ADS)
Grigoriev, Victor; Corsi, Clotilde
2017-06-01
Analytical algorithms to compute flux distribution on a central receiver are considered as a faster alternative to ray tracing. Many modifications exist, with HFLCAL and UNIZAR being the most recognized and verified. In this work, a generalized algorithm is presented which is valid for an arbitrary sun shape of radial symmetry. Heliostat mirrors can have a nonrectangular profile, and the effects of shading and blocking, strong defocusing and astigmatism can be taken into account. The algorithm is suitable for parallel computing and can benefit from hardware acceleration of polygon texturing.
Optical analysis of grazing incidence ring resonators for free-electron lasers
NASA Astrophysics Data System (ADS)
Gabardi, David R.; Shealy, David L.
1990-06-01
Two types of grazing incidence ring resonators for use with free-electron lasers have been investigated. These cavities utilize off-axis conical and flat mirrors and have been designed to operate in the extreme ultraviolet region of the spectrum. In this paper, a design algorithm that calculates the mirror parameters for propagation of Gaussian TEM mode beams in the two cavity types is presented. Results concerning the angular stability of each type are also shown.
Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module
NASA Technical Reports Server (NTRS)
Gay, Robert S.
2011-01-01
Orion Crew Module (CM) touchdown detection is critical to activating the post-landing sequence that safes the Reaction Control System (RCS) jets, ensures that the vehicle remains upright, and establishes communication with recovery forces. In order to accommodate safe landing of an unmanned vehicle or incapacitated crew, an onboard automated detection system is required. An Orion-specific touchdown detection algorithm was developed and evaluated to differentiate landing events from in-flight events. The proposed method will be used to initiate post-landing cutting of the parachute riser lines, to prevent CM rollover, and to terminate RCS jet firing prior to submersion. The RCS jets continue to fire until touchdown to maintain proper CM orientation with respect to the flight path and to limit impact loads, but have potentially hazardous consequences if submerged while firing. The time available after impact to cut risers and initiate the CM Up-righting System (CMUS) is measured in minutes, whereas the time from touchdown to RCS jet submersion is a function of descent velocity and sea state conditions, and is often less than one second. Evaluation of the detection algorithms was performed for in-flight events (e.g. descent under chutes) using high-fidelity rigid body analyses in the Decelerator Systems Simulation (DSS), whereas water impacts were simulated using a rigid finite element model of the Orion CM in LS-DYNA. Two touchdown detection algorithms were evaluated with various thresholds: acceleration-magnitude spike detection, and accumulated velocity change (over a given time window) spike detection. Data for both detection methods are acquired from an onboard Inertial Measurement Unit (IMU) sensor. The detection algorithms were tested with analytically generated in-flight and landing IMU data simulations. The acceleration spike detection proved to be faster while maintaining the desired safety margin.
Time to RCS jet submersion was predicted analytically across a series of simulated Orion landing conditions. This paper details the touchdown detection method chosen and the analysis used to support the decision.
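The two detection ideas can be contrasted with a toy sketch on synthetic IMU data. The sampling rate, thresholds, window length, and impact profile below are all invented for illustration; this is not the flight algorithm.

```python
import numpy as np

# Synthetic 100 Hz accelerometer trace with a short impact spike (invented).
dt = 0.01
accel = np.zeros(300)
accel[200:205] = 50.0                  # 5-sample touchdown spike, m/s^2

ACC_THRESH = 20.0                      # hypothetical acceleration threshold
DV_THRESH, WIN = 1.6, 10               # hypothetical delta-v threshold, window size

def detect_accel_spike(a):
    """First sample whose magnitude exceeds the spike threshold."""
    hits = np.nonzero(a > ACC_THRESH)[0]
    return int(hits[0]) if hits.size else None

def detect_delta_v(a):
    """First sample where accumulated velocity change over a sliding
    window exceeds the delta-v threshold."""
    for k in range(WIN, len(a)):
        if np.sum(a[k - WIN:k]) * dt > DV_THRESH:
            return k
    return None

t_acc, t_dv = detect_accel_spike(accel), detect_delta_v(accel)
print(t_acc, t_dv)   # the raw spike detector fires first, as the abstract reports
```

The accumulated-velocity detector must wait for the window integral to build up, which is why the acceleration-magnitude test triggers earlier on a sharp impact.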
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. 
Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce 10s of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
An Airborne Conflict Resolution Approach Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Mondoloni, Stephane; Conway, Sheila
2001-01-01
An airborne conflict resolution approach is presented that is capable of providing flight plans forecast to be conflict-free with both area and traffic hazards. This approach is capable of meeting constraints on the flight plan such as required times of arrival (RTA) at a fix. The conflict resolution algorithm is based upon a genetic algorithm, and can thus seek conflict-free flight plans meeting broader flight planning objectives such as minimum time, fuel or total cost. The method has been applied to conflicts occurring 6 to 25 minutes in the future in climb, cruise and descent phases of flight. The conflict resolution approach separates the detection, trajectory generation and flight rules function from the resolution algorithm. The method is capable of supporting pilot-constructed resolutions, cooperative and non-cooperative maneuvers, and also providing conflict resolution on trajectories forecast by an onboard FMC.
Li, Qu; Yao, Min; Yang, Jianhua; Xu, Ning
2014-01-01
Online friend recommendation is a fast-developing topic in web mining. In this paper, we used SVD matrix factorization to model user and item feature vectors and used stochastic gradient descent to update the parameters and improve accuracy. To tackle the cold start problem and data sparsity, we used a KNN model to influence the user feature vectors. At the same time, we used graph theory to partition communities with fairly low time and space complexity. Moreover, matrix factorization can combine online and offline recommendation. Experiments showed that the hybrid recommendation algorithm is able to recommend online friends with good accuracy.
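The factorization-plus-SGD idea can be sketched in a few lines. Dimensions, learning rate, regularization, and the ratings are all invented for illustration; this is a generic latent-factor sketch, not the paper's implementation.

```python
import numpy as np

# Latent-factor model: rating(u, i) ~ P[u] @ Q[i], trained by stochastic
# gradient descent on observed (user, item, rating) triples. All values invented.
rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 4
P = 0.1 * rng.standard_normal((n_users, k))   # user feature vectors
Q = 0.1 * rng.standard_normal((n_items, k))   # item feature vectors

ratings = [(0, 1, 5.0), (0, 3, 3.0), (2, 1, 4.0), (5, 7, 2.0), (9, 3, 4.0)]
lr, reg = 0.05, 0.02                           # step size and L2 regularization

for epoch in range(200):
    for u, i, r in ratings:
        err = r - P[u] @ Q[i]                  # prediction error for this triple
        P[u] += lr * (err * Q[i] - reg * P[u]) # SGD step on the user vector
        Q[i] += lr * (err * P[u] - reg * Q[i]) # SGD step on the item vector

rmse = np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in ratings]))
print(round(rmse, 3))  # training error shrinks toward zero
```

A KNN correction of the kind the abstract mentions would nudge a cold-start user's vector toward those of similar users before or during these updates.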
Railway obstacle detection algorithm using neural network
NASA Astrophysics Data System (ADS)
Yu, Mingyang; Yang, Peng; Wei, Sen
2018-05-01
Aiming at the difficulty of obstacle detection in outdoor railway scenes, a data-oriented method based on a neural network to detect image objects is proposed. First, we mark objects (such as people, trains, and animals) in images acquired from the Internet, and then use residual learning units to build a Fast R-CNN framework. Then, the neural network is trained to learn the target image characteristics using the stochastic gradient descent algorithm. Finally, the trained model is used to analyze an outdoor railway image; if it includes trains or other objects, an alert is issued. Experiments show that the correct warning rate reached 94.85%.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
A Simple Algorithm for the Metric Traveling Salesman Problem
NASA Technical Reports Server (NTRS)
Grimm, M. J.
1984-01-01
An algorithm was designed for a wire list net sort problem. A branch and bound algorithm for the metric traveling salesman problem is presented for this. The algorithm is a best-bound-first recursive descent where the bound is based on the triangle inequality. The bounded subsets are defined by the relative order of the first K of the N cities (i.e., a K-city subtour). When K equals N, the bound is the length of the tour. The algorithm is implemented as a one-page subroutine written in the C programming language for the VAX 11/750. Average execution times for randomly selected planar points using the Euclidean metric are 0.01, 0.05, 0.42, and 3.13 seconds for ten, fifteen, twenty, and twenty-five cities, respectively. Maximum execution times for a hundred cases are less than eleven times the averages. The speed of the algorithm is due to an initial ordering algorithm that is an N-squared operation. The algorithm also solves the related problem where the tour does not return to the starting city and the starting and/or ending cities may be specified. It is possible to extend the algorithm to solve a nonsymmetric problem satisfying the triangle inequality.
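A toy sketch of branch-and-bound tour search in the spirit of this abstract: extend a partial tour city by city and prune any branch whose partial length already exceeds the best complete tour found. The points and the (very simple) bound are invented for illustration; this is not the one-page C routine described above.

```python
import math

# Five invented planar cities; Euclidean distances satisfy the triangle
# inequality, so a partial tour length is a valid lower bound on its extensions.
pts = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 5)]
n = len(pts)
dist = [[math.dist(a, b) for b in pts] for a in pts]

best = [float("inf")]                 # length of the best complete tour so far

def search(tour, length):
    if length >= best[0]:             # bound: prune dominated partial tours
        return
    if len(tour) == n:                # complete tour: close the cycle
        best[0] = min(best[0], length + dist[tour[-1]][tour[0]])
        return
    for c in range(n):                # branch: try each unvisited city next
        if c not in tour:
            search(tour + [c], length + dist[tour[-1]][c])

search([0], 0.0)                      # fix the starting city to break symmetry
print(round(best[0], 3))              # 15.657
```

The paper's bound is stronger than the bare partial length used here, which is what makes its best-bound-first ordering effective at these problem sizes.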
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
NASA Astrophysics Data System (ADS)
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N^2) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as their response in the presence of noise and with various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT, with a total of 5402 actuators. These comparisons made on a common simulator make it possible to highlight the pros and cons of the various methods, and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
On Nonconvex Decentralized Gradient Descent
2016-08-01
and J. Bolte, On the convergence of the proximal algorithm for nonsmooth functions involving analytic features, Math. Program., 116: 5-16, 2009. [2] H...splitting, and regularized Gauss-Seidel methods, Math. Program., Ser. A, 137: 91-129, 2013. [3] P. Bianchi and J. Jakubowicz, Convergence of a multi-agent...subgradient method under random communication topologies, IEEE J. Sel. Top. Signal Process., 5:754-771, 2011. [11] A. Nedic and A. Ozdaglar, Distributed
Neural network explanation using inversion.
Saad, Emad W; Wunsch, Donald C
2007-01-01
An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion; i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm, that extracts rules, in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method, which extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
Performance study of LMS based adaptive algorithms for unknown system identification
NASA Astrophysics Data System (ADS)
Javed, Shazia; Ahmad, Noor Atinah
2014-07-01
Adaptive filtering techniques have gained much popularity in modeling the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the measured signal is assumed to be contaminated by output noise. Simulation results are recorded to compare performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rates of the improved LMS algorithms on their robustness and misalignment.
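The baseline LMS update and its normalized variant compared in this abstract can be sketched in a few lines. The toy system-identification experiment below (the signals and the 4-tap "unknown" system are invented for illustration, not taken from the paper) shows the normalized step size that distinguishes NLMS from plain LMS:

```python
import numpy as np

def identify(x, d, M=8, mu=0.05, normalized=False, eps=1e-8):
    """Identify an unknown FIR system from input x and measured output d."""
    w = np.zeros(M)                          # adaptive filter weights
    for n in range(M - 1, len(x)):
        u = x[n-M+1:n+1][::-1]               # [x[n], x[n-1], ..., x[n-M+1]]
        e = d[n] - w @ u                     # a-priori estimation error
        step = mu / (eps + u @ u) if normalized else mu
        w += step * e * u                    # LMS / NLMS weight update
    return w

# toy ASI experiment: recover a known 4-tap "unknown" system
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])          # the system to identify
x = rng.standard_normal(5000)                # random input signal
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = identify(x, d, M=4, mu=0.5, normalized=True)
print(np.round(w, 2))
```

The normalized step keeps the effective adaptation rate independent of the input power, which is the main source of NLMS's faster, more robust convergence noted in the study.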
Real-Time Adaptive Control of Flow-Induced Cavity Tones
NASA Technical Reports Server (NTRS)
Kegerise, Michael A.; Cabell, Randolph H.; Cattafesta, Louis N.
2004-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. The adaptive control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Controller performance was evaluated with a measure of output disturbance rejection and an input sensitivity transfer function. The results suggest that disturbances entering the cavity flow are colocated with the control input at the cavity leading edge. In that case, only tonal components of the cavity wall-pressure fluctuations can be suppressed and arbitrary broadband pressure reduction is not possible. In the control-algorithm development, the cavity dynamics are treated as linear and time invariant (LTI) for a fixed Mach number. The experimental results lend support to this treatment.
On the Use of a Range Trigger for the Mars Science Laboratory Entry Descent and Landing
NASA Technical Reports Server (NTRS)
Way, David W.
2011-01-01
In 2012, during the Entry, Descent, and Landing (EDL) of the Mars Science Laboratory (MSL) entry vehicle, a 21.5 m Viking-heritage, Disk-Gap-Band, supersonic parachute will be deployed at approximately Mach 2. The baseline algorithm for commanding this parachute deployment is a navigated planet-relative velocity trigger. This paper compares the performance of an alternative range-to-go trigger (sometimes referred to as the Smart Chute), which can significantly reduce the landing footprint size. Numerical Monte Carlo results, predicted by the MSL POST2 end-to-end EDL simulation, are corroborated and explained by applying propagation-of-uncertainty methods to develop an analytic estimate of the standard deviation of the Mach number. A negative correlation is shown to exist between the standard deviations of the wind velocity and the planet-relative velocity at parachute deploy, which mitigates the Mach number rise in the case of the range trigger.
Alignment and assembly process for primary mirror subsystem of a spaceborne telescope
NASA Astrophysics Data System (ADS)
Lin, Wei-Cheng; Chang, Shenq-Tsong; Chang, Sheng-Hsiung; Chang, Chen-Peng; Lin, Yu-Chuan; Chin, Chi-Chieh; Pan, Hsu-Pin; Huang, Ting-Ming
2015-11-01
In this study, a multispectral spaceborne Cassegrain telescope was developed. The telescope was equipped with a primary mirror with a 450-mm clear aperture made of Zerodur and lightweighted at a ratio of approximately 50% to meet both thermal and mass requirements. Reducing astigmatism was critical for this mirror. The astigmatism is caused by gravity effects, the bonding process, and deformation from mounting to the main structure of the telescope (main plate). This article presents the primary mirror alignment, the mechanical ground support equipment (MGSE), the assembly process, and the optical performance test used to assemble the primary mirror. A mechanically compensated shim is used as the interface between the bipod flexure and the main plate. The shim compensates for manufacturing errors in the components and for differences between local coplanarity errors, preventing stress while the bipod flexure is screwed to the main plate. After primary mirror assembly, an optical performance test method called the bench test, together with an algorithm, was used to analyze the astigmatism caused by gravity and by deformation from the mount or support. The tolerance conditions for the primary mirror assembly require the astigmatism caused by gravity and mounting-force deformation to be less than P-V 0.02 λ at 632.8 nm. The results demonstrated that the designed MGSE used in the alignment and assembly processes met the critical requirements for the primary mirror assembly of the telescope.
Using multifield measurements to eliminate alignment degeneracies in the JWST testbed telescope
NASA Astrophysics Data System (ADS)
Sabatke, Erin; Acton, Scott; Schwenker, John; Towell, Tim; Carey, Larkin; Shields, Duncan; Contos, Adam; Leviton, Doug
2007-09-01
The primary mirror of the James Webb Space Telescope (JWST) consists of 18 segments and is 6.6 meters in diameter. A sequence of commissioning steps is carried out at a single field point to align the segments. At that single field point, though, the segmented primary mirror can compensate for aberrations caused by misalignments of the remaining mirrors. The misalignments can be detected in the wavefronts of off-axis field points. The Multifield (MF) step in the commissioning process surveys five field points and uses a simple matrix multiplication to calculate corrected positions for the secondary and primary mirrors. A demonstration of the Multifield process was carried out on the JWST Testbed Telescope (TBT). The results show that the Multifield algorithm is capable of reducing the field dependency of the TBT to about 20 nm RMS, relative to the TBT design nominal field dependency.
E-ELT M5 field stabilisation unit scale 1 demonstrator design and performances evaluation
NASA Astrophysics Data System (ADS)
Casalta, J. M.; Barriga, J.; Ariño, J.; Mercader, J.; San Andrés, M.; Serra, J.; Kjelberg, I.; Hubin, N.; Jochum, L.; Vernet, E.; Dimmler, M.; Müller, M.
2010-07-01
The M5 Field Stabilization Unit (M5FU) for the European Extremely Large Telescope (E-ELT) is a fast correcting optical system that shall provide tip-tilt corrections for the telescope dynamic pointing errors and for the effects of atmospheric tip-tilt and wind disturbances. An M5FU scale 1 demonstrator (M5FU1D) is being built to assess the feasibility of the key elements (actuators, sensors, mirror, mirror interfaces) and the real-time control algorithm. The strict constraints (e.g. a tip-tilt control frequency range of 100 Hz, a 3 m elliptical mirror, a first mirror eigenfrequency of 300 Hz, a maximum tip/tilt range of +/- 30 arcsec, and a maximum tip-tilt error < 40 marcsec) have posed a major challenge in developing the M5FU Conceptual Design and its scale 1 demonstrator. The paper summarises the proposed design for the final unit and the demonstrator, and the measured performance compared to the applicable specifications.
Development of the segment alignment maintenance system (SAMS) for the Hobby-Eberly Telescope
NASA Astrophysics Data System (ADS)
Booth, John A.; Adams, Mark T.; Ames, Gregory H.; Fowler, James R.; Montgomery, Edward E.; Rakoczy, John M.
2000-07-01
A sensing and control system for maintaining optical alignment of the ninety-one 1-meter mirror segments forming the Hobby-Eberly Telescope (HET) primary mirror array is now under development. The Segment Alignment Maintenance System (SAMS) is designed to sense relative shear motion between each segment edge pair and calculate individual segment tip, tilt, and piston position errors. Error information is sent to the HET primary mirror control system, which corrects the physical position of each segment as often as once per minute. Development of SAMS is required to meet the optical image quality specifications for the telescope. Segment misalignment over time is thought to be due to thermal inhomogeneity within the steel mirror support truss. Challenging problems of sensor resolution, dynamic range, mechanical mounting, calibration, stability, robust algorithm development, and system integration must be overcome to achieve a successful operational solution.
NASA Astrophysics Data System (ADS)
Lin, Wei-Cheng; Chang, Shenq-Tsong; Yu, Zong-Ru; Lin, Yu-Chuan; Ho, Cheng-Fong; Huang, Ting-Ming; Chen, Cheng-Huan
2014-09-01
A Cassegrain telescope with a 450 mm clear aperture was developed for use in a spaceborne optical remote-sensing instrument. Self-weight deformation and thermal distortion were considered; to this end, Zerodur was used to manufacture the primary mirror. The lightweight scheme adopted a hexagonal cell structure yielding a lightweight ratio of 50%. In general, optical testing of a lightweight mirror is a critical technique during both the manufacturing and assembly processes. To prevent unexpected measurement errors that cause erroneous judgment, this paper proposes a novel and reliable analytical method for optical testing, called the bench test. The proposed algorithm was used to distinguish the manufacturing form error from surface deformation caused by mounting, support, and gravity effects during optical testing. The performance of the proposed bench test was compared with a conventional vertical setup for optical testing during the manufacturing process of the lightweight mirror.
Ma, Xingkun; Huang, Lei; Bian, Qi; Gong, Mali
2014-09-10
The wavefront correction ability of a deformable mirror with a multireflection waveguide was investigated and compared via simulations. By dividing a conventional actuator array into a multireflection waveguide consisting of single-actuator units, an arbitrary actuator pattern could be achieved. A stochastic parallel perturbation algorithm was proposed to find the optimal actuator pattern for a particular aberration. Compared with a conventional actuator array, the multireflection waveguide showed significant advantages in the correction of higher-order aberrations.
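The stochastic parallel perturbation idea can be illustrated with a minimal sketch (not the authors' implementation): all actuators are perturbed simultaneously with random signs, and the command vector is nudged against the observed cost change. The quadratic "aberration" cost and actuator count below are invented for illustration:

```python
import numpy as np

def spgd(cost, n_act, gain=0.5, sigma=0.1, iters=2000, seed=0):
    """Stochastic parallel gradient descent: perturb all actuators at once,
    estimate the descent direction from the cost change, and step downhill."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_act)                              # actuator commands
    for _ in range(iters):
        du = sigma * rng.choice([-1.0, 1.0], n_act)  # parallel +/- perturbation
        dJ = cost(u + du) - cost(u - du)             # two-sided cost difference
        u -= gain * dJ * du                          # SPGD update
    return u

# toy aberration: residual wavefront error quadratic in the actuator commands
target = np.array([0.3, -0.7, 0.5, 0.2, -0.1])       # commands that null the aberration
cost = lambda u: float(np.sum((u - target) ** 2))
u_opt = spgd(cost, n_act=5)
print(cost(u_opt))
```

Only two cost evaluations per iteration are needed regardless of the number of actuators, which is what makes this family of methods attractive for searching over arbitrary actuator patterns.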
Kurczynska, Monika; Kotulska, Malgorzata
2018-01-01
Mirror protein structures are often considered artifacts in protein structure modeling. However, they may soon become a new branch of biochemistry. Moreover, methods of protein structure reconstruction based on residue-residue contact maps need a methodology to differentiate between models of native and mirror orientation, especially regarding the reconstructed backbones. We analyzed 130 500 structural protein models obtained from contact maps of 1 305 SCOP domains belonging to all 7 structural classes. On average, equal numbers of native and mirror models were obtained among the 100 models generated for each domain. Since their structural features are often not sufficient for differentiating between the two types of model orientation, we proposed applying various energy terms (ETs) from PyRosetta to separate native and mirror models. To automate the differentiation procedure, the k-means clustering algorithm was applied. Using total energy did not produce appropriate clusters: the accuracy of the clustering for class A (all helices) was no more than 0.52. Therefore, we tested a series of k-means clusterings based on various combinations of ETs. Applying the two most differentiating ETs for each class yielded satisfactory results. To unify the method independently of structural class, the two best ETs of each class were considered together, and the final k-means clustering used three common ETs: the probability of an amino acid assuming certain values of the dihedral angles Φ and Ψ, Ramachandran preferences, and Coulomb interactions. The accuracies of clustering with these ETs ranged between 0.68 and 0.76, with sensitivity and selectivity between 0.68 and 0.87, depending on the structural class.
The method can be applied to all fully-automated tools for protein structure reconstruction based on contact maps, especially those analyzing big sets of models.
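The clustering step described above can be sketched with plain Lloyd's k-means on two per-model energy-term features. The feature values below are synthetic stand-ins for the PyRosetta energy terms, not real model data; they only mimic the situation where native models score lower than mirror models:

```python
import numpy as np

def kmeans2(X, iters=100, seed=0):
    """Lloyd's k-means with k=2 on per-model energy-term features."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), 2, replace=False)]  # init from data points
    for _ in range(iters):
        # assign each model to the nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == k].mean(0) for k in range(2)])
    return labels

# hypothetical two-ET feature vectors for 50 native and 50 mirror models
rng = np.random.default_rng(1)
native = rng.normal([-2.0, -1.0], 0.5, size=(50, 2))
mirror = rng.normal([1.0, 2.0], 0.5, size=(50, 2))
X = np.vstack([native, mirror])
labels = kmeans2(X)

# accuracy up to cluster-label permutation
true = np.array([0] * 50 + [1] * 50)
acc = max(np.mean(labels == true), np.mean(labels != true))
print(acc)
```

As in the paper, the quality of the separation depends entirely on how well the chosen energy terms spread the two model populations apart; the clustering itself is unsupervised.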
Advances in thermal control and performance of the MMT M1 mirror
NASA Astrophysics Data System (ADS)
Gibson, J. D.; Williams, G. G.; Callahan, S.; Comisso, B.; Ortiz, R.; Williams, J. T.
2010-07-01
Strategies for thermal control of the 6.5-meter diameter borosilicate honeycomb primary (M1) mirror at the MMT Observatory have included: 1) direct control of ventilation system chiller setpoints by the telescope operator, 2) semiautomated control of chiller setpoints, using a fixed offset from the ambient temperature, and 3) most recently, an automated temperature controller for conditioned air. Details of this automated controller, including the integration of multiple chillers, heat exchangers, and temperature/dew point sensors, are presented here. Constraints and sanity checks for thermal control are also discussed, including: 1) mirror and hardware safety, 2) aluminum coating preservation, and 3) optimization of M1 thermal conditions for science acquisition by minimizing both air-to-glass temperature differences, which cause mirror seeing, and internal glass temperature gradients, which cause wavefront errors. Consideration is given to special operating conditions, such as high dew and frost points. Precise temperature control of conditioned ventilation air as delivered to the M1 mirror cell is also discussed. The performance of the new automated controller is assessed and compared to previous control strategies. Finally, suggestions are made for further refinement of the M1 mirror thermal control system and related algorithms.
[Near infrared spectroscopy system structure with MOEMS scanning mirror array].
Luo, Biao; Wen, Zhi-Yu; Wen, Zhong-Quan; Chen, Li; Qian, Rong-Rong
2011-11-01
A method that uses a MOEMS mirror array optical structure to reduce the high cost of infrared spectrometers is presented in this paper. The method resolves the imaging irregularity that previously prevented MOEMS mirror arrays from being used in simple infrared spectrometers, and a new structure for spectral imaging was designed. According to the requirements on the imaging spot, the optical structure was designed and optimized with the optical design software ZEMAX and a standard aberration-specific optimization algorithm. It works from 900 to 1400 nm. The design analysis shows that with a light-source slit width of 50 μm, the spectrophotometric system achieves a theoretical resolution better than 6 nm, and the size of the available spot is 0.042 mm x 0.08 mm. Verification examples show that the design meets the requirements for imaging regularity and can be used for MOEMS mirror reflectance scanning, and that the proposed MOEMS mirror array spectrometer model is feasible. Finally, the relationship between the location of the detector and the maximum deflection angle of the micro-mirror was analyzed.
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration have been published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches are sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and adjoint gradient approaches. This leads to new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and adjoint gradient approaches are discussed. The computational load of the algorithms is evaluated for multichannel nonlinear active control systems. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
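As a simplified linear analogue of the filtered-x approach discussed above (the paper's controller is a neural network; here it is an FIR filter for brevity), a filtered-x LMS sketch looks like the following. The primary and secondary paths are invented toy filters, assumed known for the simulation:

```python
import numpy as np

def fxlms(x, d, s, L=16, mu=0.01):
    """Filtered-x LMS: adapt controller w so that its output, passed through
    the secondary path s, cancels the disturbance d at the error sensor."""
    w = np.zeros(L)                           # controller taps
    y = np.zeros(len(x))                      # control signal
    err = np.zeros(len(x))
    xf = np.convolve(x, s)[:len(x)]           # reference filtered by s
    for n in range(max(L, len(s)), len(x)):
        y[n] = w @ x[n-L+1:n+1][::-1]                 # controller output
        ys = s @ y[n-len(s)+1:n+1][::-1]              # through secondary path
        err[n] = d[n] + ys                            # residual at error sensor
        w -= mu * err[n] * xf[n-L+1:n+1][::-1]        # filtered-x gradient step
    return w, err

# toy problem: cancel a tonal disturbance propagating through a primary path
rng = np.random.default_rng(0)
n = 4000
x = np.sin(2 * np.pi * 0.05 * np.arange(n)) + 0.05 * rng.standard_normal(n)
p = np.array([0.0, 0.8, 0.4])                 # hypothetical primary path
s = np.array([0.6, 0.3])                      # hypothetical secondary path
d = np.convolve(x, p)[:n]
w, err = fxlms(x, d, s, L=8, mu=0.02)
print(np.mean(err[-500:] ** 2) / np.mean(d[-500:] ** 2))
```

Filtering the reference by the secondary path before the update is what keeps the gradient estimate aligned with the true error gradient; the paper's contribution is a recursive-least-squares counterpart of this update for the neural-network case.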
Pharmacogenomics of warfarin in populations of African descent
Suarez-Kurtz, Guilherme; Botton, Mariana R
2013-01-01
Warfarin is the most commonly prescribed oral anticoagulant worldwide despite its narrow therapeutic index and the notorious inter- and intra-individual variability in the dose required for the target clinical effect. Pharmacogenetic polymorphisms are major determinants of warfarin pharmacokinetics and pharmacodynamics and are included in several warfarin dosing algorithms. This review focuses on warfarin pharmacogenomics in sub-Saharan peoples, African Americans and admixed Brazilians. These ‘Black’ populations differ in several aspects, notably their extent of recent admixture with Europeans, a factor which impacts on the frequency distribution of pharmacogenomic polymorphisms relevant to warfarin dose requirement for the target clinical effect. Whereas a small number of polymorphisms in VKORC1 (3673G > A, rs9923231), CYP2C9 (alleles *2 and *3, rs1799853 and rs1057910, respectively) and arguably CYP4F2 (rs2108622) may capture most of the pharmacogenomic influence on warfarin dose variance in White populations, additional polymorphisms in these, and in other, genes (e.g. CALU rs339097) increase the predictive power of pharmacogenetic warfarin dosing algorithms in the Black populations examined. A personalized strategy for initiation of warfarin therapy, allowing for improved safety and cost-effectiveness for populations of African descent, must take into account their pharmacogenomic diversity, as well as socio-economic, cultural and medical factors. Accounting for this heterogeneity in algorithms that are ‘friendly’ enough to be adopted by warfarin prescribers worldwide requires gathering information from trials at different population levels, but also demands a critical appraisal of the racial/ethnic labels that are commonly used in the clinical pharmacology literature but do not accurately reflect genetic ancestry and population diversity. PMID:22676711
Intelligence system based classification approach for medical disease diagnosis
NASA Astrophysics Data System (ADS)
Sagir, Abdu Masanawa; Sathasivam, Saratha
2017-08-01
The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease often depends more on the physician's intuition, experience, and skill in comparing current indicators with previous ones than on the knowledge-rich data hidden in a database, which makes accurate diagnosis a crucial and challenging task. The goal is to predict patient condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework is described for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms, combining least-squares estimation with either a modified Levenberg-Marquardt or a gradient descent algorithm, that can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test sets from the Mammographic Mass and Haberman's Survival datasets of the University of California at Irvine (UCI) machine learning repository. Robustness is examined in terms of total accuracy, sensitivity, and specificity. In comparison, the proposed method achieves superior performance relative to the conventional gradient-descent-based ANFIS and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor at 2.80 GHz and 2.0 GB of RAM.
Testing large aspheric surfaces with complementary annular subaperture interferometric method
NASA Astrophysics Data System (ADS)
Hou, Xi; Wu, Fan; Lei, Baiping; Fan, Bin; Chen, Qiang
2008-07-01
The annular subaperture interferometric method provides an alternative solution for testing rotationally symmetric aspheric surfaces with low cost and flexibility. However, new challenges, particularly in the motion and algorithm components, appear when the method is applied to large aspheric surfaces with large departure in practical engineering. Based on our previously reported annular subaperture reconstruction algorithm using Zernike annular polynomials and a matrix method, and on experimental results for an approximately 130-mm diameter f/2 parabolic mirror, an experimental investigation of testing an approximately 302-mm diameter f/1.7 parabolic mirror with the complementary annular subaperture interferometric method is presented. We focus on full-aperture reconstruction accuracy and discuss some error effects and limitations of testing larger aspheric surfaces with the annular subaperture method. Some considerations on testing a sector segment with complementary sector subapertures are provided.
Modeling and analysis of the solar concentrator in photovoltaic systems
NASA Astrophysics Data System (ADS)
Mroczka, Janusz; Plachta, Kamil
2015-06-01
The paper presents Λ-ridge and V-trough concentrator systems with a low concentration ratio. Calculations and simulations have been made in a program created by the authors. The simulation results allow selection of the best parameters of the photovoltaic system: the opening angle between the surface of the photovoltaic module and the mirrors, the resolution of the tracking system, and the material for construction of the concentrator mirrors. The research shows the effect of each of these parameters on the efficiency of the photovoltaic system, as well as a method of surface modeling using the BRDF function. The parameters of the concentrator surface (e.g., surface roughness) were calculated using a new algorithm based on the BRDF function, which combines the Torrance-Sparrow and HTSG models. The simulation shows the change in voltage, current, and output power depending on the system parameters.
Towards the automatization of the Foucault knife-edge quantitative test
NASA Astrophysics Data System (ADS)
Rodríguez, G.; Villa, J.; Martínez, G.; de la Rosa, I.; Ivanov, R.
2017-08-01
Given the increasing need for simple, economical, and reliable methods and instruments for quality testing of optical surfaces such as mirrors and lenses, in recent years we resumed the study of the long-forgotten Foucault knife-edge test from the point of view of physical optics. This ultimately yielded a closed mathematical expression that directly relates the knife-edge position along the paraxial displacement axis to the observable irradiance pattern, which later allowed us to propose a quantitative methodology for estimating the wavefront error of an aspherical mirror with precision akin to interferometry. In this work, we present a further improved digital image processing algorithm in which the sigmoidal cost function for calculating the transition slope point of each associated intensity-illumination profile is replaced with a simplified version, making the estimation of the wavefront gradient markedly more stable and efficient. At the same time, the Fourier-based algorithm employed for gradient integration has been replaced with a regularized quadratic cost function that allows a considerably easier introduction of the region of interest (ROI); solved by means of a linear conjugate gradient method, this largely increases the overall accuracy and efficiency of the algorithm. This revised approach can be easily implemented on most single-board microcontrollers on the market, enabling a fully integrated automated test apparatus and opening a realistic path toward a stand-alone optical mirror analyzer prototype.
Wavelet-based edge correlation incorporated iterative reconstruction for undersampled MRI.
Hu, Changwei; Qu, Xiaobo; Guo, Di; Bao, Lijun; Chen, Zhong
2011-09-01
Undersampling k-space is an effective way to decrease acquisition time for MRI. However, aliasing artifacts introduced by undersampling may blur the edges of magnetic resonance images, which often contain important information for clinical diagnosis. Moreover, k-space data are often contaminated by noise of unknown intensity. To better preserve edge features while suppressing aliasing artifacts and noise, we present a new wavelet-based algorithm for undersampled MRI reconstruction. The algorithm poses image reconstruction as a standard optimization problem including an ℓ2 data fidelity term and an ℓ1 sparsity regularization term. Rather than manually setting the regularization parameter for the ℓ1 term, which is directly related to the threshold, an automatically estimated threshold adaptive to the noise intensity is introduced in the proposed algorithm. In addition, a prior matrix based on edge correlation in the wavelet domain is incorporated into the regularization term. Compared with the nonlinear conjugate gradient descent algorithm, the iterative shrinkage/thresholding algorithm, the fast iterative soft-thresholding algorithm and the iterative thresholding algorithm using an exponentially decreasing threshold, the proposed algorithm yields reconstructions with better edge recovery and noise suppression.
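The iterative shrinkage/thresholding family used as a baseline above can be sketched on a generic sparse-recovery problem standing in for undersampled k-space. The sensing matrix, sparsity level, and fixed threshold below are illustrative only (the paper's method adapts the threshold to the noise and adds an edge-correlation prior):

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Iterative shrinkage/thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the data-fidelity term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

# toy compressed-sensing problem standing in for undersampled acquisition
rng = np.random.default_rng(0)
m, n, k = 60, 128, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sensing operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y, lam=0.02)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The soft-threshold level lam/L is exactly the quantity the paper estimates automatically from the noise, rather than tuning by hand.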
Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Flight-Test Results
NASA Technical Reports Server (NTRS)
Brown, Nelson Andrew; Schaefer, Jacob Robert
2013-01-01
A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These results show that the algorithm has good performance in a relevant environment.
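The peak-seeking scheme, a steepest-descent step driven by a recursively estimated gradient, can be sketched as follows. This is a simplified stand-in, not the flight algorithm: a quadratic "fuel flow" surrogate replaces the real performance function, and a basic recursive (Kalman-style) estimator over dithered probes replaces the time-varying Kalman filter:

```python
import numpy as np

def peak_seek(f, x0, step=0.2, dither=0.05, probes=20, iters=60, seed=0):
    """Peak-seeking sketch: fit the local gradient of f with a Kalman-style
    recursive estimator over dithered probes, then take a descent step."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    n = len(x)
    for _ in range(iters):
        P = 100.0 * np.eye(n)              # covariance of the gradient estimate
        g = np.zeros(n)                    # gradient estimate
        f0 = f(x)
        for _ in range(probes):
            h = dither * rng.standard_normal(n)
            z = f(x + h) - f0              # measured change, approx h . grad
            K = P @ h / (h @ P @ h + 1e-3)     # Kalman gain (scalar measurement)
            g = g + K * (z - h @ g)            # measurement update
            P = P - np.outer(K, h) @ P         # covariance update
        x = x - step * g                   # steepest-descent trim step
    return x

# toy fuel-flow surrogate: quadratic bowl with its minimum at the optimal trim
opt = np.array([1.0, -0.5])
fuel = lambda u: 3.0 + float((u - opt) @ (u - opt))
x = peak_seek(fuel, [0.0, 0.0])
print(np.round(x, 2))
```

The essential structure matches the abstract: the performance function is never known analytically; only noisy probes of it drive the gradient estimate and the trim update.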
Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows
NASA Technical Reports Server (NTRS)
Palmer, Grant; Venkatapathy, Ethiraj
1995-01-01
Three solution algorithms (explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel) are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point of its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and the stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time of the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated, and grid dependency questions are explored.
NASA Astrophysics Data System (ADS)
Xiao, Ying; Michalski, Darek; Censor, Yair; Galvin, James M.
2004-07-01
The efficient delivery of intensity modulated radiation therapy (IMRT) depends on finding optimized beam intensity patterns that produce dose distributions meeting given constraints for the tumour as well as for any critical organs to be spared. Many optimization algorithms that are used for beamlet-based inverse planning are susceptible to large variations of neighbouring intensities. Accurately delivering an intensity pattern with a large number of extrema can prove impossible given the mechanical limitations of standard multileaf collimator (MLC) delivery systems. In this study, we apply Cimmino's simultaneous projection algorithm to the beamlet-based inverse planning problem, modelled mathematically as a system of linear inequalities. We show that using this method allows us to arrive at a smoother intensity pattern. According to our experimental observations, including nonlinear terms in the simultaneous projection algorithm to deal with dose-volume histogram (DVH) constraints does not compromise this property. The smoothness properties are compared with those from other optimization algorithms, including simulated annealing and the gradient descent method. The simultaneous property of these algorithms is ideally suited to parallel computing technologies.
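The simultaneous projection idea is compact enough to sketch. The following is a minimal NumPy illustration of Cimmino's method for a linear feasibility problem Ax <= b; a toy box constraint stands in for the dose-constraint inequalities, and the weighting and relaxation scheduling used in practice are simplified to equal weights:

```python
import numpy as np

def cimmino(A, b, x0, n_iters=500):
    """Cimmino's simultaneous projection method for the linear feasibility
    problem A x <= b. Each violated half-space projection is computed
    independently and the iterate moves to their weighted average, which
    is why the method parallelizes so naturally."""
    m = A.shape[0]
    w = np.full(m, 1.0 / m)                  # equal weights for simplicity
    norms2 = np.sum(A * A, axis=1)           # ||a_i||^2 for each row
    x = x0.astype(float)
    for _ in range(n_iters):
        viol = np.maximum(A @ x - b, 0.0)    # positive where a_i . x > b_i
        # projecting onto half-space i displaces x by -(viol_i/||a_i||^2) a_i
        x = x - A.T @ (w * viol / norms2)
    return x

# toy feasibility problem: the box 1 <= x1, x2 <= 3 written as A x <= b
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([3.0, -1.0, 3.0, -1.0])
x = cimmino(A, b, np.zeros(2))
```

Because every projection uses only the current iterate, the inner loop maps directly onto the parallel hardware the abstract alludes to.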
Algorithms for Maneuvering Spacecraft Around Small Bodies
NASA Technical Reports Server (NTRS)
Acikmese, A. Bechet; Bayard, David
2006-01-01
A document describes mathematical derivations and applications of autonomous guidance algorithms for maneuvering spacecraft in the vicinities of small astronomical bodies like comets or asteroids. These algorithms compute fuel- or energy-optimal trajectories for typical maneuvers by solving the associated optimal-control problems with relevant control and state constraints. In the derivations, these problems are converted from their original continuous (infinite-dimensional) forms to finite-dimensional forms through (1) discretization of the time axis and (2) spectral discretization of control inputs via a finite number of Chebyshev basis functions. In these doubly discretized problems, the Chebyshev coefficients are the variables. These problems are, variously, either convex programming problems or programming problems that can be convexified. The resulting discrete problems are convex parameter-optimization problems; this is desirable because one can take advantage of very efficient and robust algorithms that have been developed previously and are well established for solving such problems. These algorithms are fast, do not require initial guesses, and always converge to global optima. Following the derivations, the algorithms are demonstrated by applying them to numerical examples of flyby, descent-to-hover, and ascent-from-hover maneuvers.
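The double discretization can be illustrated in a few lines. This sketch, with a hypothetical coefficient vector `c`, shows a control history expanded in a finite Chebyshev basis on a discretized time axis; the map from coefficients to control values is linear, which is what keeps pointwise control constraints convex in the coefficients:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Doubly discretized control: a time grid t_k on [0, T], and the control
# u(t) expanded in a finite Chebyshev basis. The coefficients c are the
# variables of the resulting finite-dimensional optimization problem.
T = 100.0
t = np.linspace(0.0, T, 51)
tau = 2.0 * t / T - 1.0                  # map [0, T] onto [-1, 1]

c = np.array([0.3, -0.2, 0.1, 0.05])     # hypothetical Chebyshev coefficients
u = C.chebval(tau, c)                    # control history on the grid

# The map c -> u is linear, so constraints like |u(t_k)| <= u_max remain
# convex in c; the basis matrix can be built once for a convex solver.
Phi = C.chebvander(tau, len(c) - 1)      # satisfies Phi @ c == u
```

A convex programming solver would then optimize over `c` subject to linear or convexified dynamics and state constraints evaluated on the same grid.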
Peak-Seeking Optimization of Trim for Reduced Fuel Consumption: Flight-test Results
NASA Technical Reports Server (NTRS)
Brown, Nelson Andrew; Schaefer, Jacob Robert
2013-01-01
A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These results show that the algorithm has good performance in a relevant environment.
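The structure of the scheme can be sketched in a few lines. This is a minimal simulation, assuming a hypothetical quadratic surrogate for the fuel-flow map and simplified filter tuning; the flight implementation's excitation and trim-excursion logic differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic surrogate for the fuel-flow map (the real
# performance function is measured in flight); u_star plays the role of
# the unknown optimal trim setting.
u_star = np.array([2.0, -1.0])
def fuel_flow(u):
    d = u - u_star
    return d @ d + 0.01 * rng.standard_normal()       # noisy measurement

g_hat = np.zeros(2)          # Kalman state: local gradient estimate
P = np.eye(2)                # state covariance
Q = 0.1 * np.eye(2)          # random-walk noise: the gradient drifts as trim moves
R = 0.01                     # measurement noise variance

u = np.zeros(2)
alpha = 0.05                 # steepest-descent step size
for _ in range(500):
    du = 0.1 * rng.standard_normal(2)                 # small trim excursion
    dy = fuel_flow(u + du) - fuel_flow(u)             # measured change
    # Kalman update with scalar measurement model dy ~= g . du
    P = P + Q
    S = du @ P @ du + R
    K = P @ du / S
    g_hat = g_hat + K * (dy - du @ g_hat)
    P = P - np.outer(K, du) @ P
    u = u - alpha * g_hat                             # steepest-descent step
```

The time-varying filter lets the gradient estimate track the changing slope as the trim point moves, which is the essence of the peak-seeking loop.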
Advanced Wavefront Sensing and Control Testbed (AWCT)
NASA Technical Reports Server (NTRS)
Shi, Fang; Basinger, Scott A.; Diaz, Rosemary T.; Gappinger, Robert O.; Tang, Hong; Lam, Raymond K.; Sidick, Erkin; Hein, Randall C.; Rud, Mayer; Troy, Mitchell
2010-01-01
The Advanced Wavefront Sensing and Control Testbed (AWCT) is built as a versatile facility for developing and demonstrating, in hardware, future technologies for wavefront sensing and control algorithms for active optical systems. The testbed includes a source projector for a broadband point source and a suite of extended scene targets, a dispersed fringe sensor, a Shack-Hartmann camera, and an imaging camera capable of phase-retrieval wavefront sensing. The testbed also provides two easily accessible conjugated pupil planes which can accommodate active optical devices such as fast-steering mirrors, deformable mirrors, and segmented mirrors. In this paper, we describe the testbed optical design, testbed configurations and capabilities, as well as the initial results from the testbed hardware integration and tests.
Phase imaging using shifted wavefront sensor images.
Zhang, Zhengyun; Chen, Zhi; Rehman, Shakil; Barbastathis, George
2014-11-01
We propose a new approach to the complete retrieval of a coherent field (amplitude and phase) using the same hardware configuration as a Shack-Hartmann sensor but with two modifications: first, we add a transversally shifted measurement to resolve ambiguities in the measured phase; and second, we employ factored form descent (FFD), an inverse algorithm for coherence retrieval, with a hard rank constraint. We verified the proposed approach using both numerical simulations and experiments.
Learning Structured Classifiers with Dual Coordinate Ascent
2010-06-01
stochastic gradient descent (SGD) [LeCun et al., 1998], and the margin infused relaxed algorithm (MIRA) [Crammer et al., 2006]. This paper presents a...evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005...between two kinds of factors: hard constraint factors, which are used to rule out forbidden partial assignments by mapping them to zero potential values
Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy
NASA Astrophysics Data System (ADS)
Rendon, A.; Beck, J. C.; Lilge, Lothar
2008-02-01
Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms, which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, the treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution; hence, a good distance metric can improve the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high-dimensional space, where the data can be well represented by linear ML. Then, we reformulate the linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
NASA Astrophysics Data System (ADS)
Arias, E.; Florez, E.; Pérez-Torres, J. F.
2017-06-01
A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
Arias, E; Florez, E; Pérez-Torres, J F
2017-06-28
A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
Design factors and considerations for a time-based flight management system
NASA Technical Reports Server (NTRS)
Vicroy, D. D.; Williams, D. H.; Sorensen, J. A.
1986-01-01
Recent NASA Langley Research Center research to develop a technology data base from which an advanced Flight Management System (FMS) design might evolve is reviewed. In particular, the generation of fixed range cruise/descent reference trajectories which meet predefined end conditions of altitude, speed, and time is addressed. Results on the design and theoretical basis of the trajectory generation algorithm are presented, followed by a brief discussion of a series of studies that are being conducted to determine the accuracy requirements of the aircraft and weather models resident in the trajectory generation algorithm. Finally, studies to investigate the interface requirements between the pilot and an advanced FMS are considered.
Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.
Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban
2015-07-20
In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.
CP decomposition approach to blind separation for DS-CDMA system using a new performance index
NASA Astrophysics Data System (ADS)
Rouijel, Awatif; Minaoui, Khalid; Comon, Pierre; Aboutajdine, Driss
2014-12-01
In this paper, we present a canonical polyadic (CP) tensor decomposition isolating the scaling matrix. This has two major implications: (i) the problem conditioning shows up explicitly and could be controlled through a constraint on the so-called coherences and (ii) a performance criterion concerning the factor matrices can be exactly calculated and is more realistic than performance metrics used in the literature. Two new algorithms optimizing the CP decomposition based on gradient descent are proposed. This decomposition is illustrated by an application to direct-sequence code division multiple access (DS-CDMA) systems; computer simulations are provided and demonstrate the good behavior of these algorithms, compared to others in the literature.
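The gradient-descent core of a CP fit is easy to sketch. The following is a minimal rank-1 toy; the paper's algorithms handle higher ranks, isolate an explicit scaling matrix, and constrain coherences, none of which is reproduced here. The warm start near the truth keeps this deterministic sketch inside the attraction basin:

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(v):
    return v / np.linalg.norm(v)

# Toy rank-1 three-way tensor T = a (outer) b (outer) c, a stand-in for a
# received DS-CDMA tensor.
a_t, b_t, c_t = (unit(rng.standard_normal(n)) for n in (4, 5, 6))
T = np.einsum('i,j,k->ijk', a_t, b_t, c_t)

# plain gradient descent on 0.5 * ||a o b o c - T||^2
a = unit(a_t + 0.5 * rng.standard_normal(4))
b = unit(b_t + 0.5 * rng.standard_normal(5))
c = unit(c_t + 0.5 * rng.standard_normal(6))
lr = 0.1
for _ in range(2000):
    E = np.einsum('i,j,k->ijk', a, b, c) - T    # residual tensor
    ga = np.einsum('ijk,j,k->i', E, b, c)       # gradient w.r.t. a
    gb = np.einsum('ijk,i,k->j', E, a, c)
    gc = np.einsum('ijk,i,j->k', E, a, b)
    a, b, c = a - lr * ga, b - lr * gb, c - lr * gc

fit_err = np.linalg.norm(np.einsum('i,j,k->ijk', a, b, c) - T)
```

The individual factors are only identified up to scaling and sign exchanges, which is exactly why a performance criterion on the factor matrices, as the paper proposes, needs care.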
A modified conjugate gradient coefficient with inexact line search for unconstrained optimization
NASA Astrophysics Data System (ADS)
Aini, Nurul; Rivaie, Mohd; Mamat, Mustafa
2016-11-01
The conjugate gradient (CG) method is a line search algorithm best known for its wide application in solving unconstrained optimization problems. Its low memory requirements and global convergence properties make it one of the most preferred methods in real-life applications such as engineering and business. In this paper, we present a new CG method based on the AMR* and CD methods for solving unconstrained optimization problems. The resulting algorithm is proven to have both the sufficient descent and global convergence properties under an inexact line search. Numerical tests are conducted to assess the effectiveness of the new method in comparison to some previous CG methods. The results obtained indicate that our method is indeed superior.
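The shape of a CG method with an inexact line search can be sketched as follows. The classical Fletcher-Reeves coefficient stands in for the paper's AMR*/CD hybrid coefficient, which is not reproduced here; the Armijo backtracking is one common inexact line search:

```python
import numpy as np

def fr_cg(f, grad, x0, n_iters=200):
    """Nonlinear conjugate gradient with a backtracking (Armijo) inexact
    line search. The Fletcher-Reeves coefficient is used; a different CG
    coefficient would replace the `beta` line."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for k in range(n_iters):
        if np.linalg.norm(g) < 1e-12:
            break
        if g @ d >= 0:                      # safeguard against ascent directions
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:   # Armijo condition
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
        # periodic restart keeps the inexact search from jamming
        d = -g_new if (k + 1) % len(x0) == 0 else -g_new + beta * d
        x, g = x_new, g_new
    return x

# convex quadratic test problem: minimize 0.5 x'Ax - b'x, gradient Ax - b
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.ones(5)
x = fr_cg(lambda v: 0.5 * v @ A @ v - b @ v, lambda v: A @ v - b, np.zeros(5))
```

Only the gradient, the previous direction, and a handful of scalars are stored, which is the low-memory property the abstract highlights.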
NASA Astrophysics Data System (ADS)
Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro
1995-02-01
We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. 
The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
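The alternating-approximation strategy can be illustrated on a 1-D toy problem; the paper works with 2-D imagery and a more careful fixed-point selection, but the structure, truncated least-squares descents in each space with a non-negativity projection, is the same:

```python
import numpy as np

# Noise-free 1-D toy data: a sparse non-negative object blurred by a short PSF.
x_true = np.zeros(32)
x_true[[5, 12, 20]] = [1.0, 2.0, 1.5]
y_true = np.array([0.25, 0.5, 0.25])
data = np.convolve(x_true, y_true)

def resid(x, y):
    return np.convolve(x, y) - data

def descend(var, fixed, steps):
    """A truncated run of steepest-descent least-squares steps on one factor
    with the other held fixed; the number of steps plays the role of the
    regularization parameter, and non-negativity is enforced by projection."""
    lr = 0.5 / max(np.sum(np.abs(fixed)) ** 2, 1e-12)     # safe step size
    n, m = len(var), len(fixed)
    for _ in range(steps):
        r = np.convolve(var, fixed) - data
        g = np.convolve(r, fixed[::-1])[m - 1:m - 1 + n]  # adjoint of the blur
        var = np.maximum(var - lr * g, 0.0)
    return var

x = np.full(32, 0.1)                    # flat initial guesses
y = np.full(3, 1.0 / 3.0)
r0 = np.linalg.norm(resid(x, y))
for _ in range(30):                     # alternating approximations
    x = descend(x, y, steps=10)         # object-space descent
    y = descend(y, x, steps=10)         # PSF-space descent
    s = y.sum()                         # remove the scale ambiguity x*y = (s x)*(y/s)
    x, y = x * s, y / s
r1 = np.linalg.norm(resid(x, y))
```

Each alternating pass discards the previous approximation for the factor being updated, mirroring the fixed-point formulation described above.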
Algorithms for the optimization of RBE-weighted dose in particle therapy.
Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M
2013-01-21
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. For the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms in GSI's treatment planning system TRiP98, such as the BFGS algorithm and the conjugate gradient method, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence over iterations and computation time. We found that the Fletcher-Reeves variant of the conjugate gradient method is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, it applies the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
Algorithms for the optimization of RBE-weighted dose in particle therapy
NASA Astrophysics Data System (ADS)
Horcicka, M.; Meyer, C.; Buschbacher, A.; Durante, M.; Krämer, M.
2013-01-01
We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. For the dose calculation, carbon ions are considered and biological effects are calculated by the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms in GSI's treatment planning system TRiP98, such as the BFGS algorithm and the conjugate gradient method, in order to investigate their computational performance. We modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence over iterations and computation time. We found that the Fletcher-Reeves variant of the conjugate gradient method is the algorithm with the best computational performance. With this algorithm we could speed up computation times by a factor of 4 compared to the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization issues in particle therapy which might benefit from fast optimization solvers.
NASA Astrophysics Data System (ADS)
Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang
2018-05-01
Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shape of the object. In addition, a CT experiment with two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
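The adaptive-probability idea can be sketched on a toy problem. Here a OneMax objective stands in for the reconstruction merit, and the Srinivas-Patnaik-style adaptation (disturb fit individuals less, weak ones more) illustrates the role of Pc and Pm; the actual IAGA-SC additionally couples a sparsity constraint and neighborhood mutation:

```python
import random

random.seed(5)

L, POP, GENS = 20, 40, 150

def fitness(c):                          # OneMax stand-in merit function
    return sum(c)

def select(pop, fits):                   # binary tournament selection
    i, j = random.randrange(POP), random.randrange(POP)
    return pop[i] if fits[i] >= fits[j] else pop[j]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    fits = [fitness(c) for c in pop]
    f_max = max(fits)
    f_avg = sum(fits) / POP
    spread = max(f_max - f_avg, 1e-9)
    new_pop = [pop[fits.index(f_max)][:]]            # elitism
    while len(new_pop) < POP:
        p1, p2 = select(pop, fits), select(pop, fits)
        f_pair = max(fitness(p1), fitness(p2))
        # adaptive crossover probability: fitter pairs are disturbed less
        pc = (f_max - f_pair) / spread if f_pair >= f_avg else 1.0
        if random.random() < pc:
            cut = random.randrange(1, L)
            child = p1[:cut] + p2[cut:]
        else:
            child = p1[:]
        # adaptive per-bit mutation probability, same principle
        f_c = fitness(child)
        pm = (0.5 * (f_max - f_c) / spread if f_c >= f_avg else 0.5) / L
        child = [bit ^ (random.random() < pm) for bit in child]
        new_pop.append(child)
    pop = new_pop

best = max(fitness(c) for c in pop)
```

Shrinking Pc and Pm as an individual approaches the current best preserves good solutions while keeping exploration pressure on the rest of the population.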
Hazardous gas detection for FTIR-based hyperspectral imaging system using DNN and CNN
NASA Astrophysics Data System (ADS)
Kim, Yong Chan; Yu, Hyeong-Geun; Lee, Jae-Hoon; Park, Dong-Jo; Nam, Hyun-Woo
2017-10-01
Recently, a hyperspectral imaging system (HIS) with a Fourier Transform InfraRed (FTIR) spectrometer has been widely used due to its strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have already been studied, it is still difficult to detect target gases properly because of atmospheric interference substances and unclear characteristics of low concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). In both the DNN and CNN, spectral signal preprocessing, e.g., offset, noise, and baseline removal, are carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of the DNN with five layers, and it is trained by a stochastic gradient descent (SGD) algorithm (50 batch size) and dropout regularization (0.7 ratio). In the CNN algorithm, preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.
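The mini-batch SGD update at the heart of the training loop can be shown on a toy stand-in: a single logistic unit on synthetic "spectra" whose label is a fixed linear functional of the input. The five-layer DNN, dropout, and convolutional variants of the paper are omitted; only the 50-sample mini-batch gradient step is illustrated:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy two-class data standing in for preprocessed spectra: the label is a
# fixed linear functional of the input, so a linear classifier suffices.
n, d = 2000, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
labels = (X @ w_true > 0).astype(float)

w, b = np.zeros(d), 0.0
lr, batch = 0.1, 50                     # 50-sample mini-batches, as in the paper
for epoch in range(30):
    order = rng.permutation(n)          # reshuffle each epoch (stochasticity)
    for start in range(0, n, batch):
        idx = order[start:start + batch]
        p = 1.0 / (1.0 + np.exp(-(X[idx] @ w + b)))   # sigmoid output
        g = p - labels[idx]                           # logistic-loss gradient
        w -= lr * X[idx].T @ g / len(idx)
        b -= lr * g.mean()

acc = np.mean(((X @ w + b) > 0) == (labels > 0.5))
```

In the full network the same update is applied layer by layer via backpropagation, with dropout masking activations during the forward pass.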
Real-Time Feedback Control of Flow-Induced Cavity Tones. Part 2; Adaptive Control
NASA Technical Reports Server (NTRS)
Kegerise, M. A.; Cabell, R. H.; Cattafesta, L. N., III
2006-01-01
An adaptive generalized predictive control (GPC) algorithm was formulated and applied to the cavity flow-tone problem. The algorithm employs gradient descent to update the GPC coefficients at each time step. Past input-output data and an estimate of the open-loop pulse response sequence are all that is needed to implement the algorithm for application at fixed Mach numbers. Transient measurements made during controller adaptation revealed that the controller coefficients converged to a steady state in the mean, and this implies that adaptation can be turned off at some point with no degradation in control performance. When converged, the control algorithm demonstrated multiple Rossiter mode suppression at fixed Mach numbers ranging from 0.275 to 0.38. However, as in the case of fixed-gain GPC, the adaptive GPC performance was limited by spillover in sidebands around the suppressed Rossiter modes. The algorithm was also able to maintain suppression of multiple cavity tones as the freestream Mach number was varied over a modest range (0.275 to 0.29). Beyond this range, stable operation of the control algorithm was not possible due to the fixed plant model in the algorithm.
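The full GPC coefficient update requires the predictive-control machinery, but the underlying gradient-descent ingredient can be shown on the simplest related task: identifying an open-loop pulse response from past input-output data with an LMS-style update. The 4-tap plant below is hypothetical, not the cavity-flow dynamics of the experiment:

```python
import numpy as np

rng = np.random.default_rng(3)

h_true = np.array([0.5, -0.3, 0.2, 0.1])   # hypothetical open-loop pulse response
u = rng.standard_normal(3000)              # excitation signal
y = np.convolve(u, h_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))

h = np.zeros(4)                            # pulse-response estimate
mu = 0.05                                  # gradient-descent step size
for n in range(3, len(u)):
    phi = u[n - 3:n + 1][::-1]             # regressor [u[n], ..., u[n-3]]
    e = y[n] - h @ phi                     # one-step prediction error
    h = h + mu * e * phi                   # descend the squared error
```

As in the adaptive GPC, the coefficients converge in the mean, after which adaptation could be frozen with no loss of performance.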
On the use of harmony search algorithm in the training of wavelet neural networks
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2015-10-01
Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
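The basic harmony search loop is short enough to sketch. Here a sphere function stands in for the (negated) classification accuracy that would be evaluated by training a WNN; the paper's partitioned initialization is not reproduced:

```python
import random

random.seed(6)

def objective(x):                 # sphere function stands in for the cost
    return sum(v * v for v in x)  # (negated classification accuracy)

DIM, HMS = 4, 20                  # problem size and harmony memory size
HMCR, PAR, BW = 0.9, 0.3, 0.05    # memory rate, pitch-adjust rate, bandwidth
LO, HI = -5.0, 5.0

# harmony memory: an initial population of random solution vectors
hm = [[random.uniform(LO, HI) for _ in range(DIM)] for _ in range(HMS)]
costs = [objective(h) for h in hm]

for _ in range(5000):             # improvisations
    new = []
    for d in range(DIM):
        if random.random() < HMCR:                 # memory consideration
            v = hm[random.randrange(HMS)][d]
            if random.random() < PAR:              # pitch adjustment
                v += random.uniform(-BW, BW)
        else:                                      # random re-selection
            v = random.uniform(LO, HI)
        new.append(min(max(v, LO), HI))
    c = objective(new)
    worst = costs.index(max(costs))
    if c < costs[worst]:                           # replace the worst harmony
        hm[worst], costs[worst] = new, c

best = min(costs)
```

The memory consideration gives the population-based behaviour and the pitch adjustment gives the local search, which is exactly the combination the abstract describes.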
Optimization of Vertical Trajectories by the Harmony Search Method
NASA Astrophysics Data System (ADS)
Ruby, Margaux
In the face of global warming, solutions to reduce CO2 emissions are urgently needed. Trajectory optimization is one way to reduce fuel consumption during a flight. To determine the optimal aircraft trajectory, different algorithms have been developed. The goal of these algorithms is to minimize the total cost of a flight, which is directly related to fuel consumption and flight time. Another parameter, called the cost index, is included in the definition of flight cost. Fuel consumption is obtained from performance data for each flight phase. In this thesis, the phases of a complete flight, namely climb, cruise, and descent, are studied. "Step climbs", defined as 2,000-ft climbs during the cruise phase, are also studied. The algorithm developed in this thesis is a metaheuristic called harmony search, which combines two types of search: local search and population-based search. The algorithm is inspired by the behaviour of musicians in a concert, or more precisely by music's ability to find its best harmony, which in optimization terms is the lowest cost. Various inputs, such as the aircraft weight, the destination, the initial aircraft speed, and the number of iterations, must be supplied to the algorithm so that it can determine the optimal solution, defined as [climb speed, altitude, cruise speed, descent speed]. The algorithm was developed in MATLAB and tested for several destinations and several weights for a single aircraft type. 
For validation, the results of this algorithm were first compared with the results of an exhaustive search over all possible combinations. The exhaustive search provides the global optimum; the algorithm's solution must therefore come as close as possible to the exhaustive-search result to show that it yields near-globally-optimal results. A second comparison was made between the algorithm's results and those of the Flight Management System (FMS), an avionics system located in the aircraft cockpit that supplies the route to follow in order to optimize the trajectory. The goal is to show that the harmony search algorithm gives better results than the algorithm implemented in the FMS.
NASA Technical Reports Server (NTRS)
Brown, Nelson
2013-01-01
A peak-seeking control algorithm for real-time trim optimization for reduced fuel consumption has been developed by researchers at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center to address the goals of the NASA Environmentally Responsible Aviation project to reduce fuel burn and emissions. The peak-seeking control algorithm is based on a steepest-descent algorithm using a time-varying Kalman filter to estimate the gradient of a performance function of fuel flow versus control surface positions. In real-time operation, deflections of symmetric ailerons, trailing-edge flaps, and leading-edge flaps of an F/A-18 airplane are used for optimization of fuel flow. Results from six research flights are presented herein. The optimization algorithm found a trim configuration that required approximately 3 percent less fuel flow than the baseline trim at the same flight condition. This presentation also focuses on the design of the flight experiment and the practical challenges of conducting the experiment.
NASA Astrophysics Data System (ADS)
Shahri, Abbas; Mousavinaseri, Mahsasadat; Naderi, Shima; Espersson, Maria
2015-04-01
Application of Artificial Neural Networks (ANNs) in many areas of engineering, in particular to geotechnical engineering problems such as site characterization, has demonstrated some degree of success. The present paper aims to evaluate the feasibility of several types of ANN models to predict the clay sensitivity of soft clays from piezocone penetration test (CPTu) data. To this end, a research database of CPTu data from 70 test points around the Göta River near Lilla Edet in southwest Sweden, a highly landslide-prone area, was collected and used as input for the ANNs. The quick propagation, conjugate gradient descent, quasi-Newton, limited-memory quasi-Newton and Levenberg-Marquardt training algorithms were developed, tested and trained using the CPTu data to provide a comparison between the results of field investigation and ANN models for estimating clay sensitivity. Clay sensitivity is used in this study because of its relation to landslides in Sweden: a special high-sensitivity clay, namely quick clay, is considered the main cause of the landslides experienced in Sweden, as it has high sensitivity and is prone to sliding. The training and testing program started with a 3-2-1 ANN architecture. By testing several architectures and varying the hidden layers to obtain higher output resolution, the 3-4-4-3-1 architecture was adopted in this study. The tests showed that increasing the number of hidden layers up to 4 can improve the results, and the 3-4-4-3-1 ANN architecture provides a reliable and reasonable prediction of clay sensitivity. The obtained results showed that the conjugate gradient descent algorithm, with R2=0.897, has the best performance among the tested algorithms. Keywords: clay sensitivity, landslide, Artificial Neural Network
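As a hedged illustration of the kind of training compared in the abstract, the sketch below fits a small feedforward network with plain batch gradient descent on synthetic data; the actual study used CPTu field features and richer optimizers (conjugate gradient descent, quasi-Newton, Levenberg-Marquardt), and the data and architecture here are made up for demonstration.

```python
import numpy as np

# Toy stand-in: a 3-4-1 feedforward net trained with plain batch gradient
# descent; everything below is illustrative, not from the paper.
rng = np.random.default_rng(3)
X = rng.standard_normal((300, 3))
y = np.tanh(X @ np.array([1.0, -1.0, 0.5]))[:, None]  # synthetic target

W1 = 0.5 * rng.standard_normal((3, 4)); b1 = np.zeros(4)
W2 = 0.5 * rng.standard_normal((4, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(8000):
    H = np.tanh(X @ W1 + b1)          # hidden layer activations
    out = H @ W2 + b2                 # linear output layer
    err = out - y
    # Backpropagate the mean-squared error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)  # tanh' = 1 - tanh^2
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Second-order methods such as conjugate gradient or Levenberg-Marquardt typically reach a comparable fit in far fewer iterations, which is the motivation for the comparison in the paper.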
Powered Descent Trajectory Guidance and Some Considerations for Human Lunar Landing
NASA Technical Reports Server (NTRS)
Sostaric, Ronald R.
2007-01-01
The Autonomous Precision Landing and Hazard Detection and Avoidance Technology development (ALHAT) will enable an accurate (better than 100m) landing on the lunar surface. This technology will also permit autonomous (independent from ground) avoidance of hazards detected in real time. A preliminary trajectory guidance algorithm capable of supporting these tasks has been developed and demonstrated in simulations. Early results suggest that with expected improvements in sensor technology and lunar mapping, mission objectives are achievable.
2014-09-30
to establish the performance of algorithms detecting dives, strokes , clicks, respiration and gait changes. We have also found that a combination of...whale click count, total click count, vocal duration, SOC2 depth, EOC3 depth) Descent 40 bits (duration, vertical speed, stroke count 0...100 m, stroke count 100-400 m, OBDA4, sum sr35) Bottom 26 bits (movement index6, OBDA, jerk events7, median jerk depth) Ascent
Method and system for training dynamic nonlinear adaptive filters which have embedded memory
NASA Technical Reports Server (NTRS)
Rabinowitz, Matthew (Inventor)
2002-01-01
Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.
A study on the performance comparison of metaheuristic algorithms on the learning of neural networks
NASA Astrophysics Data System (ADS)
Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline
2017-08-01
The learning or training process of neural networks entails the task of finding the most optimal set of parameters, which includes translation vectors, dilation parameter, synaptic weights, and bias terms. Apart from the traditional gradient descent-based methods, metaheuristic methods can also be used for this learning purpose. Since the inception of genetic algorithm half a century ago, the last decade witnessed the explosion of a variety of novel metaheuristic algorithms, such as harmony search algorithm, bat algorithm, and whale optimization algorithm. Despite the proof of the no free lunch theorem in the discipline of optimization, a survey in the literature of machine learning gives contrasting results. Some researchers report that certain metaheuristic algorithms are superior to the others, whereas some others argue that different metaheuristic algorithms give comparable performance. As such, this paper aims to investigate if a certain metaheuristic algorithm will outperform the other algorithms. In this work, three metaheuristic algorithms, namely genetic algorithms, particle swarm optimization, and harmony search algorithm are considered. The algorithms are incorporated in the learning of neural networks and their classification results on the benchmark UCI machine learning data sets are compared. It is found that all three metaheuristic algorithms give similar and comparable performance, as captured in the average overall classification accuracy. The results corroborate the findings reported in the works done by previous researchers. Several recommendations are given, which include the need of statistical analysis to verify the results and further theoretical works to support the obtained empirical results.
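A minimal harmony search, one of the three metaheuristics compared in this abstract, can be sketched as follows. The memory size, rates, and bandwidth are generic textbook-style choices, not the paper's settings, and the objective is a toy sphere function rather than a neural-network loss.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=5000, seed=0):
    """Minimize f over a box with a basic harmony search.
    hms: harmony memory size; hmcr: memory-considering rate;
    par: pitch-adjusting rate; bw: bandwidth as a fraction of each range."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                # Take this component from a random harmony in memory...
                val = memory[rng.randrange(hms)][d]
                if rng.random() < par:
                    # ...and occasionally pitch-adjust it.
                    val += rng.uniform(-bw, bw) * (hi - lo)
            else:
                val = rng.uniform(lo, hi)  # fresh random improvisation
            new.append(min(hi, max(lo, val)))
        score = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if score < scores[worst]:  # replace the worst harmony if improved
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = harmony_search(sphere, [(-5.0, 5.0)] * 3)
```

When training a neural network, `f` would instead evaluate the network's loss for a candidate weight vector, which is how such metaheuristics are hooked into the learning process.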
Walking and talking the tree of life: Why and how to teach about biodiversity.
Ballen, Cissy J; Greene, Harry W
2017-03-01
Taxonomic details of diversity are an essential scaffolding for biology education, yet outdated methods for teaching the tree of life (TOL), as implied by textbook content and usage, are still commonly employed. Here, we show that the traditional approach only vaguely represents evolutionary relationships, fails to denote major events in the history of life, and relies heavily on memorizing near-meaningless taxonomic ranks. Conversely, a clade-based strategy (focused on common ancestry, monophyletic groups, and derived functional traits) is explicitly based on Darwin's "descent with modification," provides students with a rational system for organizing the details of biodiversity, and readily lends itself to active learning techniques. We advocate for a phylogenetic classification that mirrors the TOL, a pedagogical format of increasingly complex but always hierarchical presentations, and the adoption of active learning technologies and tactics.
Pharmacogenomics of warfarin in populations of African descent.
Suarez-Kurtz, Guilherme; Botton, Mariana R
2013-02-01
Warfarin is the most commonly prescribed oral anticoagulant worldwide despite its narrow therapeutic index and the notorious inter- and intra-individual variability in the dose required for the target clinical effect. Pharmacogenetic polymorphisms are major determinants of warfarin pharmacokinetics and pharmacodynamics and are included in several warfarin dosing algorithms. This review focuses on warfarin pharmacogenomics in sub-Saharan peoples, African Americans and admixed Brazilians. These 'Black' populations differ in several aspects, notably their extent of recent admixture with Europeans, a factor which impacts the frequency distribution of pharmacogenomic polymorphisms relevant to the warfarin dose required for the target clinical effect. Whereas a small number of polymorphisms in VKORC1 (3673G > A, rs9923231), CYP2C9 (alleles *2 and *3, rs1799853 and rs1057910, respectively) and arguably CYP4F2 (rs2108622) may capture most of the pharmacogenomic influence on warfarin dose variance in White populations, additional polymorphisms in these and in other genes (e.g. CALU rs339097) increase the predictive power of pharmacogenetic warfarin dosing algorithms in the Black populations examined. A personalized strategy for the initiation of warfarin therapy, allowing for improved safety and cost-effectiveness in populations of African descent, must take into account their pharmacogenomic diversity, as well as socioeconomic, cultural and medical factors. Accounting for this heterogeneity in algorithms that are 'friendly' enough to be adopted by warfarin prescribers worldwide requires gathering information from trials at different population levels, but also demands a critical appraisal of the racial/ethnic labels that are commonly used in the clinical pharmacology literature but do not accurately reflect genetic ancestry and population diversity. © 2012 The Authors. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.
Yao, Rui; Templeton, Alistair K; Liao, Yixiang; Turian, Julius V; Kiel, Krystyna D; Chu, James C H
2014-01-01
To validate an in-house optimization program that uses adaptive simulated annealing (ASA) and gradient descent (GD) algorithms and investigate features of physical dose and generalized equivalent uniform dose (gEUD)-based objective functions in high-dose-rate (HDR) brachytherapy for cervical cancer. Eight Syed/Neblett template-based cervical cancer HDR interstitial brachytherapy cases were used for this study. Brachytherapy treatment plans were first generated using inverse planning simulated annealing (IPSA). Using the same dwell positions designated in IPSA, plans were then optimized with both physical dose and gEUD-based objective functions, using both ASA and GD algorithms. Comparisons were made between plans both qualitatively and based on dose-volume parameters, evaluating each optimization method and objective function. A hybrid objective function was also designed and implemented in the in-house program. The ASA plans are higher on bladder V75% and D2cc (p=0.034) and lower on rectum V75% and D2cc (p=0.034) than the IPSA plans. The ASA and GD plans are not significantly different. The gEUD-based plans have higher homogeneity index (p=0.034), lower overdose index (p=0.005), and lower rectum gEUD and normal tissue complication probability (p=0.005) than the physical dose-based plans. The hybrid function can produce a plan with dosimetric parameters between the physical dose-based and gEUD-based plans. The optimized plans with the same objective value and dose-volume histogram could have different dose distributions. Our optimization program based on ASA and GD algorithms is flexible on objective functions, optimization parameters, and can generate optimized plans comparable with IPSA. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
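The gEUD objective referenced above has a standard closed form, gEUD = (mean of d_i^a)^(1/a); the small sketch below (with made-up dose values) shows how the exponent a trades off mean dose against hot spots.

```python
import numpy as np

def gEUD(doses, a):
    """Generalized equivalent uniform dose: (mean(d_i^a))^(1/a).
    a = 1 gives the mean dose; large positive a emphasizes hot spots
    (serial organs); large negative a emphasizes cold spots (targets)."""
    doses = np.asarray(doses, dtype=float)
    return float(np.mean(doses ** a) ** (1.0 / a))

# Made-up dose samples (Gy) for a small structure.
d = [50.0, 52.0, 48.0, 60.0]
mean_dose = gEUD(d, 1)   # plain mean, 52.5 Gy
serial = gEUD(d, 8)      # pulled toward the 60 Gy hot spot
```

In a gEUD-based plan objective, terms like these replace or supplement physical dose-volume penalties, which is what gives the homogeneity and overdose behavior reported in the abstract.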
Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.
Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron
2017-10-21
The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
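The Mueller potential mentioned above is a standard 2D test surface. As a hedged illustration, plain steepest descent on it (not the paper's local-global path algorithm) can be sketched as follows, using the commonly quoted Mueller-Brown parameters.

```python
import numpy as np

# Commonly quoted Mueller-Brown parameters for the 2D test potential.
A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])
y0 = np.array([0.0, 0.5, 1.5, 1.0])

def mueller(p):
    dx, dy = p[0] - x0, p[1] - y0
    return float(np.sum(A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)))

def grad(p):
    dx, dy = p[0] - x0, p[1] - y0
    e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
    return np.array([np.sum(e * (2 * a * dx + b * dy)),
                     np.sum(e * (b * dx + 2 * c * dy))])

def steepest_descent(p, step=1e-4, tol=1e-5, max_iter=200000):
    # A fixed small step keeps the iteration stable on this stiff surface.
    for _ in range(max_iter):
        g = grad(p)
        if np.linalg.norm(g) < tol:
            break
        p = p - step * g
    return p

minimum = steepest_descent(np.array([0.0, 0.0]))
```

A steepest-descent *path* (as computed by the paper's method) follows this vector field continuously between stationary points, rather than just terminating at the nearest minimum.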
Learning and optimization with cascaded VLSI neural network building-block chips
NASA Technical Reports Server (NTRS)
Duong, T.; Eberhardt, S. P.; Tran, M.; Daud, T.; Thakoor, A. P.
1992-01-01
To demonstrate the versatility of the building-block approach, two neural network applications were implemented on cascaded analog VLSI chips. Weights were implemented using 7-b multiplying digital-to-analog converter (MDAC) synapse circuits, with 31 x 32 and 32 x 32 synapses per chip. A novel learning algorithm compatible with analog VLSI was applied to the two-input parity problem. The algorithm combines dynamically evolving architecture with limited gradient-descent backpropagation for efficient and versatile supervised learning. To implement the learning algorithm in hardware, synapse circuits were paralleled for additional quantization levels. The hardware-in-the-loop learning system allocated 2-5 hidden neurons for parity problems. Also, a 7 x 7 assignment problem was mapped onto a cascaded 64-neuron fully connected feedback network. In 100 randomly selected problems, the network found optimal or good solutions in most cases, with settling times in the range of 7-100 microseconds.
Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai
2017-03-01
This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to do policy evaluation and policy improvement in the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. The gradient descent algorithm makes the update of critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.
NASA Astrophysics Data System (ADS)
Kisi, Ozgur; Shiri, Jalal
2012-06-01
Estimating sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration on rivers by using hydro-meteorological data. The daily rainfall, streamflow and suspended sediment concentration data from Eel River near Dos Rios, at California, USA are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models. Out of three algorithms, the Conjugate gradient algorithm was found to be better than the others.
Zhao, Tuo; Liu, Han
2016-01-01
We propose an accelerated path-following iterative shrinkage thresholding algorithm (APISTA) for solving high dimensional sparse nonconvex learning problems. The main difference between APISTA and the path-following iterative shrinkage thresholding algorithm (PISTA) is that APISTA exploits an additional coordinate descent subroutine to boost the computational performance. Such a modification, though simple, has profound impact: APISTA not only enjoys the same theoretical guarantee as that of PISTA, i.e., APISTA attains a linear rate of convergence to a unique sparse local optimum with good statistical properties, but also significantly outperforms PISTA in empirical benchmarks. As an application, we apply APISTA to solve a family of nonconvex optimization problems motivated by estimating sparse semiparametric graphical models. APISTA allows us to obtain new statistical recovery results which do not exist in the existing literature. Thorough numerical results are provided to back up our theory. PMID:28133430
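APISTA and PISTA build on the basic iterative shrinkage-thresholding step. The sketch below is plain ISTA for the lasso, without the path-following scheme or the coordinate descent subroutine that distinguish APISTA; the problem sizes and regularization level are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1 (the "shrinkage" step).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(X, y, lam, iters=500):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 with plain ISTA."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    w = np.zeros(d)
    for _ in range(iters):
        g = X.T @ (X @ w - y) / n       # gradient of the smooth part
        w = soft_threshold(w - g / L, lam / L)
    return w

# Illustrative sparse regression problem: 3 true nonzeros out of 50.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(200)
w_hat = ista(X, y, lam=0.1)
```

Path-following variants solve a sequence of such problems with a decreasing regularization parameter, warm-starting each from the previous solution.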
NASA Astrophysics Data System (ADS)
Motta, Mario; Zhang, Shiwei
2018-05-01
We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
3D Reconstruction in the Presence of Glass and Mirrors by Acoustic and Visual Fusion.
Zhang, Yu; Ye, Mao; Manocha, Dinesh; Yang, Ruigang
2017-07-06
We present a practical and inexpensive method to reconstruct 3D scenes that include transparent and mirror objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, which commonly include glass. These large structures are often invisible to cameras. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-sized reconstruction setting. Our simple hardware setup augments a regular depth camera with a single ultrasonic sensor, which is able to measure the distance to any object, including transparent surfaces. The key technical challenge is the sparse sampling rate from the acoustic sensor, which only takes one point measurement per frame. To address this challenge, we take advantage of the fact that the large scale glass structures in indoor environments are usually either piece-wise planar or simple parametric surfaces. Based on these assumptions, we have developed a novel sensor fusion algorithm that first segments the (hybrid) depth map into different categories such as opaque/transparent/infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. We validated our algorithms with a number of challenging cases, including multiple panes of glass, mirrors, and even a curved glass cabinet.
Sub-aperture stitching test of a cylindrical mirror with large aperture
NASA Astrophysics Data System (ADS)
Xue, Shuai; Chen, Shanyong; Shi, Feng; Lu, Jinfeng
2016-09-01
Cylindrical mirrors are key optics in high-end defense and scientific research equipment such as high-energy laser weapons and synchrotron radiation systems. However, surface error testing technology for such mirrors has developed slowly. As a result, their optical fabrication quality cannot meet requirements, and development of the associated equipment is hindered. A Computer-Generated Hologram (CGH) is commonly utilized as a null for testing cylindrical optics. However, since the fabrication process for large-aperture CGHs is not yet mature, the null test of large-aperture cylindrical optics is limited by the aperture of the CGH. Hence a CGH null test combined with a sub-aperture stitching method is proposed to break the CGH aperture limit for testing cylindrical optics, and the design of CGHs for testing cylindrical surfaces is analyzed. Moreover, because of their particular shape, the misalignment aberrations of cylindrical surfaces differ from those of rotationally symmetric surfaces, and existing stitching algorithms for rotationally symmetric surfaces cannot meet the requirements of stitching cylindrical surfaces. We therefore analyze the misalignment aberrations of cylindrical surfaces and study a stitching algorithm for measuring large-aperture cylindrical optics. Finally, we test a cylindrical mirror with a large aperture to verify the validity of the proposed method.
Maneuvering Rotorcraft Noise Prediction: A New Code for a New Problem
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Bres, Guillaume A.; Perez, Guillaume; Jones, Henry E.
2002-01-01
This paper presents the unique aspects of the development of an entirely new maneuver noise prediction code called PSU-WOPWOP. The main focus of the code is the aeroacoustic aspects of the maneuver noise problem, when the aeromechanical input data (namely aircraft and blade motion, and blade airloads) are provided. The PSU-WOPWOP noise prediction capability was developed for rotors in steady and transient maneuvering flight. Featuring an object-oriented design, the code allows great flexibility for complex rotor configurations and motions (including multiple rotors and full aircraft motion). The relative locations and numbers of hinges, flexures, and body motions can be arbitrarily specified to match any specific rotorcraft. An analysis of algorithm efficiency is performed for maneuver noise prediction, along with a description of the tradeoffs made specifically for the maneuvering noise problem. Noise predictions for the main rotor of a rotorcraft in steady descent, transient (arrested) descent, hover, and a mild "pop-up" maneuver are demonstrated.
Multi-Sensor Fusion for Enhanced Contextual Awareness of Everyday Activities with Ubiquitous Devices
Guiry, John J.; van de Ven, Pepijn; Nelson, John
2014-01-01
In this paper, the authors investigate the role that smart devices, including smartphones and smartwatches, can play in identifying activities of daily living. A feasibility study involving N = 10 participants was carried out to evaluate the devices' ability to differentiate between nine everyday activities. The activities examined include walking, running, cycling, standing, sitting, elevator ascents, elevator descents, stair ascents and stair descents. The authors also evaluated the ability of these devices to differentiate indoors from outdoors, with the aim of enhancing contextual awareness. Data from this study was used to train and test five well known machine learning algorithms: C4.5, CART, Naïve Bayes, Multi-Layer Perceptrons and finally Support Vector Machines. Both single and multi-sensor approaches were examined to better understand the role each sensor in the device can play in unobtrusive activity recognition. The authors found overall results to be promising, with some models correctly classifying up to 100% of all instances. PMID:24662406
Aerial projection of three-dimensional motion pictures by electro-holography and parabolic mirrors.
Kakue, Takashi; Nishitsuji, Takashi; Kawashima, Tetsuya; Suzuki, Keisuke; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2015-07-08
We demonstrate an aerial projection system for reconstructing 3D motion pictures based on holography. The system consists of an optical source, a spatial light modulator corresponding to a display and two parabolic mirrors. The spatial light modulator displays holograms calculated by computer and can reconstruct holographic motion pictures near the surface of the modulator. The two parabolic mirrors can project floating 3D images of the motion pictures formed by the spatial light modulator without mechanical scanning or rotating. In this demonstration, we used a phase-modulation-type spatial light modulator. The number of pixels and the pixel pitch of the modulator were 1,080 × 1,920 and 8.0 μm × 8.0 μm, respectively. The diameter, the height and the focal length of each parabolic mirror were 288 mm, 55 mm and 100 mm, respectively. We succeeded in aerially projecting 3D motion pictures of size ~2.5 mm(3) by this system constructed by the modulator and mirrors. In addition, by applying a fast computational algorithm for holograms, we achieved hologram calculations at ~12 ms per hologram with 4 CPU cores.
NASA Astrophysics Data System (ADS)
Nozka, L.; Hiklova, H.; Horvath, P.; Hrabovsky, M.; Mandat, D.; Palatka, M.; Pech, M.; Ridky, J.; Schovanek, P.
2018-05-01
We present results of the monitoring method we have used to characterize the deterioration of optical performance due to dust on the mirror segments produced for fluorescence detectors used in astrophysics experiments. The method is based on the measurement of scatter profiles of reflected light. The scatter profiles and the reflectivity of the mirror segments sufficiently describe the performance of the mirrors from the perspective of reconstruction algorithms. The method is demonstrated on our mirror segments installed in the framework of the Pierre Auger Observatory project. Although installed in air-conditioned buildings, both dust sedimentation and natural aging of the reflective layer deteriorate the optical throughput of the segments. In this paper, we summarize data from ten years of operation of the fluorescence detectors. During this time, we periodically measured in situ the scatter characteristics of the segment surface, represented by the specular reflectivity and the reflectivity of the diffuse component at a wavelength of 670 nm (measured by means of the optical scatter technique as well). These measurements were extended with full Bidirectional Reflectance Distribution Function (BRDF) profiles of selected segments measured in the laboratory. Cleaning procedures are also discussed.
Learning algorithms for human-machine interfaces.
Danziger, Zachary; Fishbach, Alon; Mussa-Ivaldi, Ferdinando A
2009-05-01
The goal of this study is to create and examine machine learning algorithms that adapt in a controlled and cadenced way to foster a harmonious learning environment between the user and the controlled device. To evaluate these algorithms, we have developed a simple experimental framework. Subjects wear an instrumented data glove that records finger motions. The high-dimensional glove signals remotely control the joint angles of a simulated planar two-link arm on a computer screen, which is used to acquire targets. A machine learning algorithm was applied to adaptively change the transformation between finger motion and the simulated robot arm. This algorithm was either LMS gradient descent or the Moore-Penrose (MP) pseudoinverse transformation. Both algorithms modified the glove-to-joint angle map so as to reduce the endpoint errors measured in past performance. The MP group performed worse than the control group (subjects not exposed to any machine learning), while the LMS group outperformed the control subjects. However, the LMS subjects failed to achieve better generalization than the control subjects, and after extensive training converged to the same level of performance as the control subjects. These results highlight the limitations of coadaptive learning using only endpoint error reduction.
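The two map-update rules compared in this study can be illustrated on a made-up linear glove-to-joint-angle problem: a one-shot Moore-Penrose pseudoinverse fit versus incremental LMS gradient descent driven by endpoint error. The dimensions, learning rate, and data below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up linear stand-in for the glove-to-joint-angle map:
# 10 glove signals -> 2 simulated joint angles.
G = rng.standard_normal((500, 10))                     # glove signals
W_true = rng.standard_normal((10, 2))
Q = G @ W_true + 0.05 * rng.standard_normal((500, 2))  # joint angles

# One-shot Moore-Penrose fit: W = G^+ Q.
W_mp = np.linalg.pinv(G) @ Q

# Incremental LMS gradient descent on per-sample endpoint error.
W_lms = np.zeros((10, 2))
eta = 0.01
for _ in range(20):                       # a few passes over the data
    for g, q in zip(G, Q):
        err = q - g @ W_lms               # endpoint error for this sample
        W_lms += eta * np.outer(g, err)   # LMS update
```

Both estimates approach the least-squares solution here; the study's point is that the *gradual* LMS adaptation interacts differently with a human learner than the abrupt pseudoinverse remapping.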
A fast summation method for oscillatory lattice sums
NASA Astrophysics Data System (ADS)
Denlinger, Ryan; Gimbutas, Zydrunas; Greengard, Leslie; Rokhlin, Vladimir
2017-02-01
We present a fast summation method for lattice sums of the type which arise when solving wave scattering problems with periodic boundary conditions. While there are a variety of effective algorithms in the literature for such calculations, the approach presented here is new and leads to a rigorous analysis of Wood's anomalies. These arise when illuminating a grating at specific combinations of the angle of incidence and the frequency of the wave, for which the lattice sums diverge. They were discovered by Wood in 1902 as singularities in the spectral response. The primary tools in our approach are the Euler-Maclaurin formula and a steepest descent argument. The resulting algorithm has super-algebraic convergence and requires only milliseconds of CPU time.
Using High Resolution Design Spaces for Aerodynamic Shape Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Li, Wu; Padula, Sharon
2004-01-01
This paper explains why high resolution design spaces encourage traditional airfoil optimization algorithms to generate noisy shape modifications, which lead to inaccurate linear predictions of aerodynamic coefficients and potential failure of descent methods. By using auxiliary drag constraints for a simultaneous drag reduction at all design points and the least shape distortion to achieve the targeted drag reduction, an improved algorithm generates relatively smooth optimal airfoils with no severe off-design performance degradation over a range of flight conditions, in high resolution design spaces parameterized by cubic B-spline functions. Simulation results using FUN2D in Euler flows are included to show the capability of the robust aerodynamic shape optimization method over a range of flight conditions.
Precise Image-Based Motion Estimation for Autonomous Small Body Exploration
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Matthies, Larry H.
1998-01-01
Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.
Development of a Sunspot Tracking System
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1998-01-01
Large solar flares produce a significant amount of energetic particles, which pose a hazard for human activity in space. In the hope of understanding flare mechanisms and thus better predicting solar flares, NASA's Marshall Space Flight Center (MSFC) developed an experimental vector magnetograph (EXVM) polarimeter to measure the Sun's magnetic field. The EXVM will be used to perform ground-based solar observations and will provide a proof of concept for the design of a similar instrument for the Japanese Solar-B space mission. The EXVM typically operates for a period of several minutes. During this time there is image motion due to atmospheric fluctuation and telescope wind loading. To optimize EXVM performance, an image motion compensation device (sunspot tracker) is needed. The sunspot tracker consists of two parts, an image motion determination system and an image deflection system. For image motion determination, a CCD or CID camera is used to digitize an image, then an algorithm is applied to determine the motion. This motion or error signal is sent to the image deflection system, which moves the image back to its original location. Both of these systems are under development. Two algorithms are available for sunspot tracking which require the use of only one row and one column of image data. To implement these algorithms, two identical independent systems are being developed, one system for each axis of motion. Two CID cameras have been purchased; the data from each camera will be used to determine image motion for each direction. The error signal generated by the tracking algorithm will be sent to an image deflection system consisting of an actuator and a mirror constrained to move about one axis. Magnetostrictive actuators were chosen over piezoelectrics to move the mirror because of their larger driving force and larger range of motion. The actuator and mirror mounts are currently under development.
NASA Tech Briefs, December 2011
NASA Technical Reports Server (NTRS)
2011-01-01
Topics covered include: 1) SNE Industrial Fieldbus Interface; 2) Composite Thermal Switch; 3) XMOS XC-2 Development Board for Mechanical Control and Data Collection; 4) Receiver Gain Modulation Circuit; 5) NEXUS Scalable and Distributed Next-Generation Avionics Bus for Space Missions; 6) Digital Interface Board to Control Phase and Amplitude of Four Channels; 7) CoNNeCT Baseband Processor Module; 8) Cryogenic 160-GHz MMIC Heterodyne Receiver Module; 9) Ka-Band, Multi-Gigabit-Per-Second Transceiver; 10) All-Solid-State 2.45-to-2.78-THz Source; 11) Onboard Interferometric SAR Processor for the Ka-Band Radar Interferometer (KaRIn); 12) Space Environments Testbed; 13) High-Performance 3D Articulated Robot Display; 14) Athena; 15) In Situ Surface Characterization; 16) Ndarts; 17) Cryo-Etched Black Silicon for Use as Optical Black; 18) Advanced CO2 Removal and Reduction System; 19) Correcting Thermal Deformations in an Active Composite Reflector; 20) Umbilical Deployment Device; 21) Space Mirror Alignment System; 22) Thermionic Power Cell To Harness Heat Energies for Geothermal Applications; 23) Graph Theory Roots of Spatial Operators for Kinematics and Dynamics; 24) Spacesuit Soft Upper Torso Sizing Systems; 25) Radiation Protection Using Single-Wall Carbon Nanotube Derivatives; 26) PMA-PhyloChip DNA Microarray to Elucidate Viable Microbial Community Structure; 27) Lidar Luminance Quantizer; 28) Distributed Capacitive Sensor for Sample Mass Measurement; 29) Base Flow Model Validation; 30) Minimum Landing Error Powered-Descent Guidance for Planetary Missions; 31) Framework for Integrating Science Data Processing Algorithms Into Process Control Systems; 32) Time Synchronization and Distribution Mechanisms for Space Networks; 33) Local Estimators for Spacecraft Formation Flying; 34) Software-Defined Radio for Space-to-Space Communications; 35) Reflective Occultation Mask for Evaluation of Occulter Designs for Planet Finding; and 36) Molecular Adsorber Coating
Horizon: A Proposal for Large Aperture, Active Optics in Geosynchronous Orbit
NASA Technical Reports Server (NTRS)
Chesters, Dennis; Jenstrom, Del
2000-01-01
In 1999, NASA's New Millennium Program called for proposals to validate new technology in high-earth orbit for the Earth Observing-3 (NMP EO3) mission to fly in 2003. In response, we proposed to test a large aperture, active optics telescope in geosynchronous orbit. This would flight-qualify new technologies for both Earth and Space science: 1) a future instrument with LANDSAT image resolution and radiometric quality watching continuously from geosynchronous station, and 2) the Next Generation Space Telescope (NGST) for deep space imaging. Six enabling technologies were to be flight-qualified: 1) a 3-meter, lightweight segmented primary mirror, 2) mirror actuators and mechanisms, 3) a deformable mirror, 4) coarse phasing techniques, 5) phase retrieval for wavefront control during stellar viewing, and 6) phase diversity for wavefront control during Earth viewing. Three enhancing technologies were to be flight-validated: 1) mirror deployment and latching mechanisms, 2) an advanced microcontroller, and 3) GPS at GEO. In particular, two wavefront sensing algorithms, phase retrieval by JPL and phase diversity by ERIM International, were to sense optical system alignment and focus errors, and to correct them using high-precision mirror mechanisms. Active corrections based on Earth scenes are challenging because phase diversity images must be collected from extended, dynamically changing scenes. In addition, an Earth-facing telescope in GEO orbit is subject to a powerful diurnal thermal and radiometric cycle not experienced by deep-space astronomy. The Horizon proposal was a bare-bones design for a lightweight large-aperture, active optical system that is a practical blend of science requirements, emerging technologies, budget constraints, launch vehicle considerations, orbital mechanics, optical hardware, phase-determination algorithms, communication strategy, computational burdens, and first-rate cooperation among earth and space scientists, engineers and managers.
This manuscript presents excerpts from the Horizon proposal's sections that describe the Earth science requirements, the structural-thermal-optical design, the wavefront sensing and control, and the on-orbit validation.
NASA Astrophysics Data System (ADS)
Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.
2015-09-01
Adaptive optics has been widely used in the field of astronomy to correct for atmospheric turbulence while viewing images of celestial bodies. The slightly distorted incoming wavefronts are typically sensed with a Shack-Hartmann sensor and then corrected with a deformable mirror. Although this approach has proven to be effective for astronomical purposes, a new approach must be developed when correcting for the deep turbulence experienced in ground-to-ground optical systems. We propose the use of a modified plenoptic camera as a wavefront sensor capable of accurately representing an incoming wavefront that has been significantly distorted by strong turbulence conditions (Cn^2 < 10^-13 m^(-2/3)). An intelligent correction algorithm can then be developed to reconstruct the perturbed wavefront and use this information to drive a deformable mirror capable of correcting the major distortions. After the large distortions have been corrected, a secondary mode utilizing more traditional adaptive optics algorithms can take over to fine-tune the wavefront correction. This two-stage algorithm can find use in free space optical communication systems, in directed energy applications, as well as for image correction purposes.
The Mars Science Laboratory Entry, Descent, and Landing Flight Software
NASA Technical Reports Server (NTRS)
Gostelow, Kim P.
2013-01-01
This paper describes the design, development, and testing of the EDL program from the perspective of the software engineer. We briefly cover the overall MSL flight software organization, and then the organization of EDL itself. We discuss the timeline, the structure of the GNC code (but not the algorithms as they are covered elsewhere in this conference) and the command and telemetry interfaces. Finally, we cover testing and the influence that testability had on the EDL flight software design.
Combined approach to the Hubble Space Telescope wave-front distortion analysis
NASA Astrophysics Data System (ADS)
Roddier, Claude; Roddier, Francois
1993-06-01
Stellar images taken by the HST at various focus positions have been analyzed to estimate wave-front distortion. Rather than using a single algorithm, we found that better results were obtained by combining the advantages of various algorithms. For the planetary camera, the most accurate algorithms consistently gave a spherical aberration of -0.290-micron rms with a maximum deviation of 0.005 micron. Evidence was found that the spherical aberration is essentially produced by the primary mirror. The illumination in the telescope pupil plane was reconstructed and evidence was found for a slight camera misalignment.
NASA Astrophysics Data System (ADS)
Lohmann, U.; Jahns, J.; Limmer, S.; Fey, D.
2011-01-01
We consider the implementation of a dynamic crossbar interconnect using planar-integrated free-space optics (PIFSO) and a digital micromirror device (DMD). Because of the 3D nature of free-space optics, this approach is able to solve geometrical problems with crossings of the signal paths that occur in waveguide-optical and electrical interconnections, especially for large numbers of connections. The DMD allows one to route the signals dynamically. Due to the large number of individual mirror elements in the DMD, different optical path configurations are possible, offering the opportunity to optimize the network configuration. The optimization is achieved by using an evolutionary algorithm to find the best values for a skewless parallel interconnection. Here, we present results and experimental examples for the use of the PIFSO/DMD setup.
JWST Wavefront Control Toolbox
NASA Technical Reports Server (NTRS)
Shin, Shahram Ron; Aronstein, David L.
2011-01-01
A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing a cost function, and it can be used for both constrained and unconstrained optimization. The control process involves 1 to 7 degrees-of-freedom perturbations per segment of the primary mirror, in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed using a "wrapper" approach, that handle the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.
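The influence-function control step at the heart of such a toolbox is, in its simplest unconstrained form, a least-squares solve; this sketch uses a random toy influence matrix, not the JWST model.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy influence matrix: wavefront response (100 samples) to 12 actuator DOFs.
A = rng.normal(size=(100, 12))
# An initial wavefront error that lies in the span of the actuators.
wavefront_error = A @ rng.normal(size=12)
# Unconstrained least-squares actuator commands minimizing ||error + A @ u||.
u = -np.linalg.pinv(A) @ wavefront_error
residual = wavefront_error + A @ u
print(np.linalg.norm(residual))  # essentially zero: the error is controllable
```

Constrained variants (actuator stroke limits, for instance) replace the pseudoinverse with a quadratic program, which is one of the optimization methods the abstract mentions.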
The CZCS geolocation algorithms
NASA Technical Reports Server (NTRS)
Wilson, W. H.; Smith, R. C.; Nolten, J. W.
1981-01-01
The Coastal Zone Color Scanner (CZCS) on board the Nimbus 7 satellite was designed to measure surface radiance upwelled from the ocean in 6 spectral bands. The CZCS spectrometer obtains its information from a rotating mirror and is timed to collect data when the mirror views the Earth surface between ca. 40 degrees to the left and right of the subsatellite track. Each scan is divided into 1968 picture elements (pixels) of 0.04 degrees each. In order to avoid directly reflected Sun glint, the rotating mirror shaft can be tilted so that the scan crosses the subsatellite track up to 20 degrees forward or aft of the point directly beneath the satellite. The CZCS is the first satellite-borne instrument to have this tilted-scan capability and therefore poses some new problems in locating the Earth surface position of viewed pixels.
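The scan geometry quoted above is self-consistent: 1968 pixels of 0.04 degrees each span about 78.7 degrees, i.e., roughly 40 degrees either side of the subsatellite track. A hypothetical helper (not CZCS processing code) makes the check explicit, assuming a swath centered on nadir.

```python
def pixel_scan_angle(pixel, n_pixels=1968, step_deg=0.04):
    """Scan angle (degrees) of a CZCS pixel relative to the subsatellite track,
    assuming the swath is centered on nadir (an assumption, not from the spec)."""
    half_swath = n_pixels * step_deg / 2.0   # 1968 * 0.04 / 2 = 39.36 degrees
    return -half_swath + (pixel + 0.5) * step_deg

print(pixel_scan_angle(0))      # about -39.34 (left edge of the swath)
print(pixel_scan_angle(1967))   # about +39.34 (right edge of the swath)
```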
On-sky performance of the Zernike phase contrast sensor for the phasing of segmented telescopes.
Surdej, Isabelle; Yaitskova, Natalia; Gonte, Frederic
2010-07-20
The Zernike phase contrast method is a novel technique to phase the primary mirrors of segmented telescopes. It has been tested on-sky on a unit telescope of the Very Large Telescope with a segmented mirror conjugated to the primary mirror to emulate a segmented telescope. The theoretical background of this sensor and the algorithm used to retrieve the piston, tip, and tilt information are described. The performance of the sensor as a function of parameters such as star magnitude, seeing, and integration time is discussed. The phasing accuracy has always been below 15 nm root mean square wavefront error under normal conditions of operation and the limiting star magnitude achieved on-sky with this sensor is 15.7 in the red, which would be sufficient to phase segmented telescopes in closed-loop during observations.
NASA Astrophysics Data System (ADS)
Shimansky, R. V.; Poleshchuk, A. G.; Korolkov, V. P.; Cherkashin, V. V.
2017-05-01
This paper presents a method of improving the accuracy of circular laser writing systems used to fabricate large-diameter diffractive optical elements in a polar coordinate system, along with the results of its use. An algorithm for correcting positioning errors of a circular laser writing system developed at the Institute of Automation and Electrometry, SB RAS, is proposed and tested. High-precision synthesized holograms fabricated by this method and the results of using these elements for testing the 6.5 m diameter aspheric mirror of the James Webb Space Telescope (JWST) are described.
Neocytolysis on descent from altitude: a newly recognized mechanism for the control of red cell mass
NASA Technical Reports Server (NTRS)
Rice, L.; Ruiz, W.; Driscoll, T.; Whitley, C. E.; Tapia, R.; Hachey, D. L.; Gonzales, G. F.; Alfrey, C. P.
2001-01-01
BACKGROUND: Studies of space-flight anemia have uncovered a physiologic process, neocytolysis, by which young red blood cells are selectively hemolyzed, allowing rapid adaptation when red cell mass is excessive for a new environment. OBJECTIVES: 1) To confirm that neocytolysis occurs in another situation of acute plethora, when high-altitude dwellers with polycythemia descend to sea level; and 2) to clarify the role of erythropoietin suppression. DESIGN: Prospective observational and interventional study. SETTING: Cerro de Pasco (4380 m) and Lima (sea level), Peru. PARTICIPANTS: Nine volunteers with polycythemia. INTERVENTIONS: Volunteers were transported to sea level; three received low-dose erythropoietin. MEASUREMENTS: Changes in red cell mass, hematocrit, hemoglobin concentration, reticulocyte count, ferritin level, serum erythropoietin, and enrichment of administered (13)C in heme. RESULTS: In six participants, red cell mass decreased by 7% to 10% within a few days of descent; this decrease was mirrored by a rapid increase in serum ferritin level. Reticulocyte production did not decrease, a finding that establishes a hemolytic mechanism. (13)C changes in circulating heme were consistent with hemolysis of young cells. Erythropoietin was suppressed, and administration of exogenous erythropoietin prevented the changes in red cell mass, serum ferritin level, and (13)C-heme. CONCLUSIONS: Neocytolysis and the role of erythropoietin are confirmed in persons with polycythemia who descend from high altitude. This may have implications that extend beyond space and altitude medicine to renal disease and other situations of erythropoietin suppression, hemolysis, and polycythemia.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
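The nonlinear weighted least-squares step at the core of such an estimator can be sketched generically with Gauss-Newton iteration; the exponential "atmosphere" model, weights, and numbers below are toy placeholders, not the paper's pressure model.

```python
import numpy as np

def gauss_newton_wls(f, jac, x0, y, W, iters=20):
    """Minimize (y - f(x))^T W (y - f(x)) by Gauss-Newton iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y - f(x)                 # residual to be reduced
        J = jac(x)                   # Jacobian of the model at x
        # Normal equations of the linearized weighted least-squares problem
        x = x + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x

# Toy "atmosphere": recover (surface value, scale height) from exponential data.
h = np.linspace(0.0, 10.0, 50)
true = np.array([1.2, 3.0])
f = lambda x: x[0] * np.exp(-h / x[1])
jac = lambda x: np.column_stack([np.exp(-h / x[1]),
                                 x[0] * h / x[1] ** 2 * np.exp(-h / x[1])])
y = f(true)                          # noiseless synthetic measurements
est = gauss_newton_wls(f, jac, [1.0, 2.0], y, np.eye(len(h)))
print(est)  # converges to [1.2, 3.0] on this noiseless toy problem
```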
Active Mirror Predictive and Requirements Verification Software (AMP-ReVS)
NASA Technical Reports Server (NTRS)
Basinger, Scott A.
2012-01-01
This software is designed to predict large active mirror performance at various stages in the fabrication lifecycle of the mirror. It was developed for 1-meter class powered mirrors for astronomical purposes, but is extensible to other geometries. The package accepts finite element model (FEM) inputs and laboratory measured data for large optical-quality mirrors with active figure control. It computes phenomenological contributions to the surface figure error using several built-in optimization techniques. These phenomena include stresses induced in the mirror by the manufacturing process and the support structure, the test procedure, high spatial frequency errors introduced by the polishing process, and other process-dependent deleterious effects due to light-weighting of the mirror. Then, depending on the maturity of the mirror, it either predicts the best surface figure error that the mirror will attain, or it verifies that the requirements for the error sources have been met once the best surface figure error has been measured. The unique feature of this software is that it ties together physical phenomenology with wavefront sensing and control techniques and various optimization methods including convex optimization, Kalman filtering, and quadratic programming to both generate predictive models and to do requirements verification. This software combines three distinct disciplines: wavefront control, predictive models based on FEM, and requirements verification using measured data in a robust, reusable code that is applicable to any large optics for ground and space telescopes. The software also includes state-of-the-art wavefront control algorithms that allow closed-loop performance to be computed. It allows for quantitative trade studies to be performed for optical systems engineering, including computing the best surface figure error under various testing and operating conditions. 
After the mirror manufacturing process and testing have been completed, the software package can be used to verify that the underlying requirements have been met.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, J.; Assoufid, L.; Macrander, A.
2007-01-01
Long trace profilers (LTPs) have been used at many synchrotron radiation laboratories worldwide for over a decade to measure surface slope profiles of long grazing-incidence x-ray mirrors. Phase measuring interferometers (PMIs) of the Fizeau type, on the other hand, are used by most mirror manufacturers to accomplish the same task. However, large mirrors whose dimensions exceed the aperture of the Fizeau interferometer require measurements to be carried out at grazing incidence, and aspheric optics require the use of a null lens. While an LTP provides a direct measurement of 1D slope profiles, PMIs measure area height profiles from which the slope can be obtained by a differentiation algorithm. Measurements from the two types of instruments have been found by us to be in good agreement, but to our knowledge there is no published work directly comparing the two instruments. This paper documents that comparison. We measured two different nominally flat mirrors with both the LTP in operation at the Advanced Photon Source (a type-II LTP) and a Fizeau-type PMI (Wyko model 6000). One mirror was 500 mm long and made of Zerodur, and the other mirror was 350 mm long and made of silicon. Slope error results with these instruments agree closely (3.11 ± 0.15 µrad for the LTP, and 3.11 ± 0.02 µrad for the Fizeau PMI) for the medium-quality Zerodur mirror with 3 µrad rms nominal slope error. A significant difference was observed with the much higher quality silicon mirror. For the Si mirror, the slope error is 0.39 ± 0.08 µrad from LTP measurements but 0.35 ± 0.01 µrad from PMI measurements. The standard deviations show that the Fizeau PMI has much better measurement repeatability.
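The differentiation algorithm that converts a PMI height profile into an LTP-style slope profile can be sketched with a central difference; the parabolic toy mirror below is illustrative only, not the labs' actual processing chain.

```python
import numpy as np

def height_to_slope(x, z):
    """Slope profile (dz/dx) from a 1D height profile via central differences."""
    return np.gradient(z, x)

# Toy mirror: a shallow parabola z = c*x^2, whose slope 2*c*x is known exactly.
c = 1e-6
x = np.linspace(0.0, 0.5, 501)     # 500 mm long mirror, coordinates in meters
z = c * x ** 2                     # height profile in meters
slope = height_to_slope(x, z)      # slope in radians (small-angle)
print(slope[250], 2 * c * x[250])  # central difference matches the analytic 2*c*x
```

In practice the differentiation step amplifies high-frequency measurement noise, which is one reason the two instruments' repeatability figures differ.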
The MINERVA Software Development Process
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.
2017-01-01
This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
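A minimal geo-containment check of the kind these algorithms formalize is the standard ray-casting point-in-polygon test; this is a simplified stand-in, not MINERVA's formally verified code.

```python
def inside_polygon(pt, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

containment_region = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(inside_polygon((5, 5), containment_region))    # True: still contained
print(inside_polygon((15, 5), containment_region))   # False: region exited
```

The MINERVA process would pair a specification of such a function with a machine-checked proof and numerical evaluation of both specification and implementation on stressing test cases.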
Optical Analysis of Grazing Incidence Ring Resonators for Free-Electron Lasers
NASA Astrophysics Data System (ADS)
Gabardi, David Richard
1990-08-01
The design of resonators for free-electron lasers (FELs) which are to operate in the soft x-ray/vacuum ultraviolet (XUV) region of the spectrum is complicated by the fact that, in this wavelength regime, normal incidence mirrors, which would otherwise be used for the construction of the resonators, generally have insufficient reflectivities for this purpose. However, the use of grazing incidence mirrors in XUV resonators offers the possibility of (1) providing sufficient reflectivity, (2) a lessening of the mirrors' thermal loads due to the projection of the laser beam onto an oblique surface, and (3) the preservation of the FEL's tunability. In this work, the behavior of resonators employing grazing incidence mirrors in ring type configurations is explored. In particular, two designs, each utilizing four off-axis conic mirrors and a number of flats, are examined. In order to specify the location, orientation, and surface parameters for the mirrors in these resonators, a design algorithm has been developed based upon the properties of Gaussian beam propagation. Two computer simulation methods are used to perform a vacuum stability analysis of the two resonator designs. The first method uses paraxial ray trace techniques with the resonators' thin lens analogues while the second uses the diffraction-based computer simulation code GLAD (General Laser Analysis and Design). The effects of mirror tilts and deviations in the mirror surface parameters are investigated for a number of resonators designed to propagate laser beams of various Rayleigh ranges. It will be shown that resonator stability decreases as the laser wavelength for which the resonator was designed is made smaller. In addition, resonator stability will also be seen to decrease as the amount of magnification the laser beam receives as it travels around the resonator is increased.
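The Gaussian-beam propagation underlying such a design algorithm is conventionally expressed with the complex beam parameter q and ABCD matrices; this sketch uses generic textbook elements, not the dissertation's grazing-incidence resonator.

```python
def propagate_q(q, abcd):
    """Propagate the complex beam parameter q through one ABCD element."""
    (A, B), (C, D) = abcd
    return (A * q + B) / (C * q + D)

def free_space(d):
    return ((1.0, d), (0.0, 1.0))

def thin_lens(f):
    return ((1.0, 0.0), (-1.0 / f, 1.0))

# Start at a waist with Rayleigh range zR = 1 m, so q = i*zR there.
q = 1j * 1.0
# Propagate 2 m, pass through an f = 1 m lens, then propagate 1 m more.
for elem in (free_space(2.0), thin_lens(1.0), free_space(1.0)):
    q = propagate_q(q, elem)
# Re(q) is the distance past the new waist; Im(q) is the new Rayleigh range.
print(q)  # (-0.5+0.5j): 0.5 m before a new waist with zR = 0.5 m
```

A resonator design algorithm of the kind described would chain such elements around the ring and require the round-trip q to reproduce itself.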
Walking and talking the tree of life: Why and how to teach about biodiversity
Ballen, Cissy J.; Greene, Harry W.
2017-01-01
Taxonomic details of diversity are an essential scaffolding for biology education, yet outdated methods for teaching the tree of life (TOL), as implied by textbook content and usage, are still commonly employed. Here, we show that the traditional approach only vaguely represents evolutionary relationships, fails to denote major events in the history of life, and relies heavily on memorizing near-meaningless taxonomic ranks. Conversely, a clade-based strategy—focused on common ancestry, monophyletic groups, and derived functional traits—is explicitly based on Darwin’s “descent with modification,” provides students with a rational system for organizing the details of biodiversity, and readily lends itself to active learning techniques. We advocate for a phylogenetic classification that mirrors the TOL, a pedagogical format of increasingly complex but always hierarchical presentations, and the adoption of active learning technologies and tactics. PMID:28319149
Autonomous Laser-Powered Vehicle
NASA Technical Reports Server (NTRS)
Stone, William C. (Inventor); Hogan, Bartholomew P. (Inventor)
2017-01-01
An autonomous laser-powered vehicle designed to autonomously penetrate through ice caps of substantial (e.g., kilometers) thickness by melting a path ahead of the vehicle as it descends. A high-powered laser beam is transmitted to the vehicle via an onboard bare-fiber spooler. After the beam enters through the dispersion optics, it expands into a cavity. A radiation shield limits backscatter radiation from heating the optics. The expanded beam enters the heat exchanger and is reflected by a dispersion mirror. Forward-facing beveled circular grooves absorb the reflected radiant energy, preventing the energy from being reflected back towards the optics. Microchannels along the inner circumference of the beam-dump heat exchanger maximize heat transfer. A sufficient amount of fiber is wound on the fiber spooler to permit not only a descent but also a sample-return mission by inverting the vehicle and melting its way back to the surface.
Dong, Bing; Li, Yan; Han, Xin-Li; Hu, Bin
2016-09-02
For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window leads to large amounts of dynamic aberration varying with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, aberration is captured using Lukosz modes, and we use the low-spatial-frequency content of the image spectral density as the metric function. Simulations show that aberrations induced by the conformal window are dominated by some low-order Lukosz modes. To optimize the dynamic correction, we can correct only the dominant Lukosz modes, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of a conformal window with a scanning rate of 10 degrees per second. A 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10^-5 with optimized correction and 1.427 × 10^-5 with un-optimized correction. We also demonstrate that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method.
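The SPGD baseline mentioned above can be sketched in a few lines; the quadratic "sharpness" metric, gain, perturbation size, and actuator count here are illustrative, not the paper's setup.

```python
import numpy as np

def spgd_step(u, metric, rng, delta=0.05, gain=0.5):
    """One SPGD step: perturb all actuators in parallel, estimate the metric
    gradient from the two-sided response, and descend."""
    p = rng.choice([-delta, delta], size=u.shape)  # random parallel perturbation
    dJ = metric(u + p) - metric(u - p)             # two-sided metric change
    return u - gain * dJ * p

# Toy "sharpness" metric: a quadratic bowl with optimum actuator vector u_opt.
u_opt = np.array([0.3, -0.2, 0.1])
metric = lambda u: np.sum((u - u_opt) ** 2)
rng = np.random.default_rng(1)
u = np.zeros(3)
for _ in range(500):
    u = spgd_step(u, metric, rng)
print(metric(u))  # decreases toward 0 as the actuators approach u_opt
```

SPGD needs only metric evaluations, not a wavefront sensor; the model-based scheme in the paper gains its speed by exploiting knowledge of which modes dominate.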
Predictability of Top of Descent Location for Operational Idle-Thrust Descents
NASA Technical Reports Server (NTRS)
Stell, Laurel L.
2010-01-01
To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its uncertainty models, commercial flights executed idle-thrust descents at a specified descent speed, and the recorded data included the specified descent speed profile, aircraft weight, and the winds entered into the FMS as well as the radar data. The FMS computed the intended descent path assuming idle thrust after top of descent (TOD), and the controllers and pilots then endeavored to allow the FMS to fly the descent to the meter fix with minimal human intervention. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location were extracted from the radar data. Using approximately 70 descents each in Boeing 757 and Airbus 319/320 aircraft, multiple regression estimated TOD location as a linear function of the available predictive factors. The cruise and meter fix altitudes, descent speed, and wind clearly improve goodness of fit. The aircraft weight improves fit for the Airbus descents but not for the B757. Except for a few statistical outliers, the residuals have absolute value less than 5 nmi. Thus, these predictive factors adequately explain the TOD location, which indicates the data do not include excessive noise.
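The multiple-regression fit described above can be sketched with ordinary least squares on synthetic data; the factor ranges and coefficients below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 70  # roughly the number of descents per aircraft type in the study
# Hypothetical predictive factors (invented ranges, for illustration only):
X = np.column_stack([
    rng.uniform(30, 40, n),     # cruise altitude (kft)
    rng.uniform(10, 15, n),     # meter-fix altitude (kft)
    rng.uniform(250, 310, n),   # descent speed (kt)
    rng.uniform(-50, 50, n),    # along-track wind (kt)
    np.ones(n),                 # intercept
])
beta_true = np.array([2.5, -1.8, 0.1, 0.15, 40.0])  # invented coefficients
tod = X @ beta_true + rng.normal(0, 2, n)  # TOD distance with ~2 nmi noise
# Multiple regression: least-squares estimate of the linear model.
beta_hat, *_ = np.linalg.lstsq(X, tod, rcond=None)
resid = tod - X @ beta_hat
print(np.max(np.abs(resid)))  # residuals stay within a few nmi
```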
Piecewise convexity of artificial neural networks.
Rister, Blaine; Rubin, Daniel L
2017-10-01
Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sparse Representation with Spatio-Temporal Online Dictionary Learning for Efficient Video Coding.
Dai, Wenrui; Shen, Yangmei; Tang, Xin; Zou, Junni; Xiong, Hongkai; Chen, Chang Wen
2016-07-27
Classical dictionary learning methods for video coding suffer from high computational complexity and impaired coding efficiency because they disregard the underlying distribution of the data. This paper proposes a spatio-temporal online dictionary learning (STOL) algorithm to speed up the convergence rate of dictionary learning with a guarantee of approximation error. The proposed algorithm incorporates stochastic gradient descent to form a dictionary of pairs of 3-D low-frequency and high-frequency spatio-temporal volumes. In each iteration of the learning process, it randomly selects one sample volume and updates the atoms of the dictionary by minimizing the expected cost, rather than optimizing the empirical cost over the complete training data as batch learning methods, e.g. K-SVD, do. Since the selected volumes are supposed to be i.i.d. samples from the underlying distribution, decomposition coefficients attained from the trained dictionary are desirable for sparse representation. Theoretically, it is proved that the proposed STOL achieves better approximation for sparse representation than K-SVD and maintains both structured sparsity and hierarchical sparsity. It is shown to outperform batch gradient descent methods (K-SVD) in convergence speed and computational complexity, and its upper bound for prediction error is asymptotically equal to the training error. With lower computational complexity, extensive experiments validate that the STOL-based coding scheme achieves performance improvements over H.264/AVC, HEVC, and existing super-resolution based methods in rate-distortion performance and visual quality.
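A minimal sketch of the online update at the heart of such a scheme, assuming a generic ISTA sparse-coding step and a plain stochastic gradient step on the dictionary; the paper's 3-D volume pairs and convergence guarantees are not reproduced here:

```python
import numpy as np

def sparse_code(D, x, lam=0.1, n_iter=50):
    """A few ISTA steps to get approximate sparse coefficients for one sample."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2 + 1e-12    # step size from the Lipschitz constant
    for _ in range(n_iter):
        a -= D.T @ (D @ a - x) / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

def online_step(D, x, lr=0.05, lam=0.1):
    """One online update: code a single random sample, then take a stochastic
    gradient step on the dictionary and renormalize the atoms."""
    a = sparse_code(D, x, lam)
    D = D - lr * np.outer(D @ a - x, a)      # grad of 0.5*||D a - x||^2 w.r.t. D
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12), a

rng = np.random.default_rng(1)
D_true = rng.normal(size=(16, 32))           # hidden generating dictionary
D_true /= np.linalg.norm(D_true, axis=0)
D = rng.normal(size=(16, 32))                # learned dictionary, random init
D /= np.linalg.norm(D, axis=0)
for _ in range(300):                         # one i.i.d. sample per iteration
    code = np.zeros(32)
    code[rng.choice(32, 3, replace=False)] = rng.normal(size=3)
    D, _ = online_step(D, D_true @ code)
```

The contrast with batch methods such as K-SVD is that each iteration touches a single sample rather than sweeping the whole training set.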
Wang, S W; Li, M; Yang, H F; Zhao, Y J; Wang, Y; Liu, Y
2016-04-18
To compare the accuracy of the iterative closest point (ICP) algorithm, the Procrustes analysis (PA) algorithm, and a landmark-independent method for constructing the mid-sagittal plane (MSP) from cone beam computed tomography (CBCT), and to provide a theoretical basis for establishing a coordinate system for CBCT images and for symmetry analysis. Ten patients were selected and scanned by CBCT before orthodontic treatment. The scan data were imported into Mimics 10.0 to reconstruct three-dimensional skulls, and the MSP of each skull was generated by the ICP algorithm, the PA algorithm, and the landmark-independent method. MSP extraction by the ICP or PA algorithm involved three steps. First, the 3D skull was processed with the reverse engineering software Geomagic Studio 2012 to obtain a mirror skull. Then, the original skull and its mirror were registered, by the ICP algorithm in Geomagic Studio 2012 and by the PA algorithm in NX Imageware 11.0, respectively. Finally, the registered data were merged into a new dataset to calculate the MSP of the original data in Geomagic Studio 2012. For the traditional landmark-dependent method, conducted in the software InVivoDental 5.0, the mid-sagittal plane was determined by sella (S), nasion (N), and basion (Ba). The distances from 9 pairs of symmetric anatomical landmarks to the three sagittal planes were measured, and the differences of their absolute values were compared. The one-way ANOVA test was used to analyze the differences among the 3 MSPs, and pairwise comparison was performed with the LSD method.
MSPs calculated by the three methods were acceptable for clinical analysis, as judged from the frontal view. However, there were significant differences among the distances from the 9 pairs of symmetric anatomical landmarks to the MSPs (F=10.932, P=0.001). The LSD test showed no significant difference between the ICP algorithm and the landmark-independent method (P=0.11), while there was a significant difference between the PA algorithm and the landmark-independent method (P=0.01). The mid-sagittal plane of a 3D skull can be generated based on the ICP or PA algorithm. There was no significant difference between the ICP algorithm and the landmark-independent method. For subjects with no evident asymmetry, the ICP algorithm is feasible for clinical analysis.
Vogel, Curtis R; Yang, Qiang
2006-08-21
We present two different implementations of the Fourier domain preconditioned conjugate gradient algorithm (FD-PCG) to efficiently solve the large structured linear systems that arise in optimal volume turbulence estimation, or tomography, for multi-conjugate adaptive optics (MCAO). We describe how to deal with several critical technical issues, including the cone coordinate transformation problem and sensor subaperture grid spacing. We also extend the FD-PCG approach to handle the deformable mirror fitting problem for MCAO.
Comparison of Structural Optimization Techniques for a Nuclear Electric Space Vehicle
NASA Technical Reports Server (NTRS)
Benford, Andrew
2003-01-01
The purpose of this paper is to utilize the optimization method of genetic algorithms (GA) for truss design on a nuclear propulsion vehicle. Genetic algorithms are a guided, random search that mirrors Darwin's theory of natural selection and survival of the fittest. To verify the GA's capabilities, other traditional optimization methods were used to compare the results obtained by the GA, first on simple 2-D structures, and eventually on full-scale 3-D truss designs.
Robust camera calibration for sport videos using court models
NASA Astrophysics Data System (ADS)
Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang
2003-12-01
We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computation times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
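A rough sketch of an SL0-style recovery loop using the hyperbolic tangent surrogate in place of the Gaussian, with plain gradient steps standing in for the paper's Newton direction; all parameter values are illustrative assumptions:

```python
import numpy as np

def sl0_tanh(A, y, sigma_min=0.01, decay=0.7, mu=1.0, inner=10):
    """SL0-style sparse recovery with a tanh surrogate for the l0 norm.
    Each inner step descends the smooth surrogate, then projects back
    onto the measurement constraint A x = y."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                            # minimum-l2-norm feasible start
    sigma = 2.0 * max(np.abs(x).max(), sigma_min)
    while sigma > sigma_min:
        for _ in range(inner):
            t = np.tanh(x ** 2 / (2.0 * sigma ** 2))
            x = x - mu * x * (1.0 - t ** 2)   # grad step on sum tanh(x^2/(2 sigma^2))
            x = x - A_pinv @ (A @ x - y)      # project back onto A x = y
        sigma *= decay                        # gradually sharpen the surrogate
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(20, 40))                 # underdetermined measurement matrix
x_true = np.zeros(40)
x_true[rng.choice(40, 3, replace=False)] = rng.normal(size=3)  # 3-sparse signal
y = A @ x_true
x_hat = sl0_tanh(A, y)
```

The projection after every gradient step keeps the iterate exactly feasible, which is what makes the graduated-smoothing scheme stable.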
Multigrid optimal mass transport for image registration and morphing
NASA Astrophysics Data System (ADS)
Rehman, Tauseef ur; Tannenbaum, Allen
2007-02-01
In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. It is a parameter-free method which utilizes all of the grayscale data in an image pair in a symmetric fashion, and no landmarks need to be specified for correspondence. In our work, we demonstrate a significant improvement in computation time when our algorithm is applied as compared to the originally proposed method by Haker et al. [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass-preserving map regarded as a 2D vector field. This involves inverting the Laplacian in each iteration, which is now computed using a full multigrid technique, resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolution framework. The algorithm was applied to 2D short-axis cardiac MRI images and brain MRI images for testing and comparison.
Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun
2017-01-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only gives more accurate, or at least comparable, estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498
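The coordinate descent inner step for a LASSO-penalized least-squares problem can be sketched as follows; the paper's EM algorithm wraps weighted negative binomial and logistic versions of this idea, which are not reproduced here:

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent with soft thresholding for the LASSO:
    minimize 0.5*||y - X beta||^2 + lam*||beta||_1, one coordinate at a time."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # residual excluding feature j
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Orthogonal toy design: each coefficient reduces to a soft-threshold of y_j,
# so small effects are driven exactly to zero (the variable-selection effect).
X = np.eye(4)
y = np.array([3.0, -2.0, 0.5, 0.0])
beta = lasso_cd(X, y, lam=1.0)        # beta == [2, -1, 0, 0]
```

SCAD and MCP replace the soft-threshold rule with their own thresholding operators, but the coordinate-wise sweep is the same.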
Object recognition in images via a factor graph model
NASA Astrophysics Data System (ADS)
He, Yong; Wang, Long; Wu, Zhaolin; Zhang, Haisu
2018-04-01
Object recognition in images suffers from a huge search space and uncertain object profiles. Recently, Bag-of-Words methods have been utilized to solve these problems, especially the 2-dimensional CRF (conditional random field) model. In this paper we suggest a method based on a general and flexible factor graph model, which can capture the long-range correlations in Bag-of-Words by constructing a network learning framework, in contrast to the lattice used in CRF. Furthermore, we explore a parameter learning algorithm based on gradient descent and loopy sum-product algorithms for the factor graph model. Experimental results on the Graz 02 dataset show that the recognition performance of our method in precision and recall is better than a state-of-the-art method and the original CRF model, demonstrating the effectiveness of the proposed method.
Agent Collaborative Target Localization and Classification in Wireless Sensor Networks
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of heterogeneous agent architecture for WSN in this paper. The proposed agent architecture views WSN as multi-agent systems and mobile agents are employed to reduce in-network communication. According to the architecture, an energy based acoustic localization algorithm is proposed. In localization, estimate of target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
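The steepest descent search for the target location can be sketched as follows, assuming energies follow an idealized inverse-square decay that is converted to range estimates; this is a simplification of the paper's energy-based formulation, and positions and values are made up:

```python
import numpy as np

sensors = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
target = np.array([2.0, 3.0])                 # unknown in practice
energies = 1.0 / np.sum((sensors - target) ** 2, axis=1)  # inverse-square model
ranges = 1.0 / np.sqrt(energies)              # energy readings converted to ranges

def loss_grad(s):
    """Squared range-mismatch loss and its gradient at estimate s."""
    diff = s - sensors
    dist = np.linalg.norm(diff, axis=1)
    err = dist - ranges
    grad = 2.0 * np.sum((err / dist)[:, None] * diff, axis=0)
    return np.sum(err ** 2), grad

s = sensors.mean(axis=0)                      # start from the sensor centroid
for _ in range(5000):
    _, g = loss_grad(s)
    if np.linalg.norm(g) < 1e-9:              # fixed threshold here; an adaptive
        break                                 # version would scale it with the
    s -= 0.05 * g                             # measurement noise level
```

The adaptive-termination idea in the abstract corresponds to loosening the stopping tolerance when the measurements are noisier, so the search does not chase noise.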
Peculiarities of the detection and identification of substance at long distance
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Trofimov, Vladislav V.; Tikhomirov, Vasily V.
2014-05-01
Nowadays, the detection and identification of dangerous substances at long distance (several meters, for example) using a THz pulse reflected from the object is an important problem. In this report we demonstrate the possibility of measuring a THz signal reflected from an investigated object placed in front of a flat metallic mirror. The distance between the flat mirror and the parabolic mirror is 3.5 meters. Therefore, at present our measurements contain features of both transmission and reflection modes. The reflecting mirror is used because of the weak average power of the femtosecond laser employed. Measurements were performed at room temperature and a humidity of about 60%. The aim of the investigation was the detection of a substance under real conditions. Chocolate and cookies were used as samples for identification. We also discuss modified correlation criteria for the detection and identification of various substances using pulsed THz signals in the transmission and reflection modes at short distances of about 30-40 cm. These criteria are integral criteria in time and are based on the SDA method. The proposed algorithms show both a high probability of substance identification and reliability in practice. We compare the P-spectrum and SDA methods in the paper and show that the P-spectrum method is a special case of the SDA method.
NASA Technical Reports Server (NTRS)
Shi, Fang; Basinger, Scott A.; Redding, David C.
2006-01-01
Dispersed Fringe Sensing (DFS) is an efficient and robust method for coarse phasing of a segmented primary mirror such as that of the James Webb Space Telescope (JWST). In this paper, modeling and simulations are used to study the effect of segmented-mirror aberrations on the fringe image, the DFS signals, and the DFS detection accuracy. The study has shown that, due to the pixelation spatial-filter effect of DFS signal extraction, the effect of wavefront error is reduced, and the DFS algorithm becomes more robust against wavefront aberration when a multi-trace DFS approach is used. We also studied the performance of the JWST Dispersed Hartmann Sensor (DHS) in the presence of wavefront aberrations caused by gravity sag, and we used scaled gravity sag to explore the relationship between JWST DHS performance and the level of wavefront aberration. This also includes the effect of line-of-sight jitter.
Walter, Jonathan P; Pandy, Marcus G
2017-10-01
The aim of this study was to perform multi-body, muscle-driven, forward-dynamics simulations of human gait using a 6-degree-of-freedom (6-DOF) model of the knee in tandem with a surrogate model of articular contact and force control. A forward-dynamics simulation incorporating position, velocity and contact force-feedback control (FFC) was used to track full-body motion capture data recorded for multiple trials of level walking and stair descent performed by two individuals with instrumented knee implants. Tibiofemoral contact force errors for FFC were compared against those obtained from a standard computed muscle control algorithm (CMC) with a 6-DOF knee contact model (CMC6); CMC with a 1-DOF translating hinge-knee model (CMC1); and static optimization with a 1-DOF translating hinge-knee model (SO). Tibiofemoral joint loads predicted by FFC and CMC6 were comparable for level walking; however, FFC produced more accurate results for stair descent. SO yielded reasonable predictions of joint contact loading for level walking, but significant differences between model and experiment were observed for stair descent. CMC1 produced the least accurate predictions of tibiofemoral contact loads for both tasks. Our findings suggest that reliable estimates of knee-joint loading may be obtained by incorporating position, velocity and force-feedback control with a multi-DOF model of joint contact in a forward-dynamics simulation of gait. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
Efficient algorithms for polyploid haplotype phasing.
He, Dan; Saha, Subrata; Finkers, Richard; Parida, Laxmi
2018-05-09
Inference of haplotypes, or the sequence of alleles along the same chromosome, is a fundamental problem in genetics and is a key component of many analyses, including admixture mapping, identification of regions of identity by descent, and imputation. Haplotype phasing based on sequencing reads has attracted much attention. Diploid haplotype phasing, where the two haplotypes are complementary, has been studied extensively. In this work, we focus on polyploid haplotype phasing, where we aim to phase more than two haplotypes at the same time from sequencing data. The problem is much more complicated, as the search space becomes much larger and the haplotypes need not be complementary any more. We propose two algorithms: (1) Poly-Harsh, a Gibbs-sampling-based algorithm which alternately samples haplotypes and read assignments to minimize the mismatches between the reads and the phased haplotypes, and (2) an efficient algorithm to concatenate haplotype blocks into contiguous haplotypes. Our experiments showed that our method improves the quality of the phased haplotypes over state-of-the-art methods. To our knowledge, our algorithm for haplotype block concatenation is the first to leverage the shared information across multiple individuals to construct contiguous haplotypes. Our experiments showed that it is both efficient and effective.
Optimization-based image reconstruction from sparse-view data in offset-detector CBCT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan
2013-01-01
The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.
Caroline Müllenbroich, M; McGhee, Ewan J; Wright, Amanda J; Anderson, Kurt I; Mathieson, Keith
2014-01-01
We have developed a nonlinear adaptive optics microscope utilizing a deformable membrane mirror (DMM) and demonstrated its use in compensating for system- and sample-induced aberrations. The optimum shape of the DMM was determined with a random search algorithm optimizing on either two photon fluorescence or second harmonic signals as merit factors. We present here several strategies to overcome photobleaching issues associated with lengthy optimization routines by adapting the search algorithm and the experimental methodology. Optimizations were performed on extrinsic fluorescent dyes, fluorescent beads loaded into organotypic tissue cultures and the intrinsic second harmonic signal of these cultures. We validate the approach of using these preoptimized mirror shapes to compile a robust look-up table that can be applied for imaging over several days and through a variety of tissues. In this way, the photon exposure to the fluorescent cells under investigation is limited to imaging. Using our look-up table approach, we show signal intensity improvement factors ranging from 1.7 to 4.1 in organotypic tissue cultures and freshly excised mouse tissue. Imaging zebrafish in vivo, we demonstrate signal improvement by a factor of 2. This methodology is easily reproducible and could be applied to many photon starved experiments, for example fluorescent life time imaging, or when photobleaching is a concern.
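A minimal sketch of a random search over mirror actuator values, with a toy quadratic merit function standing in for the two-photon fluorescence or second harmonic signal; the actuator count and step size are illustrative assumptions:

```python
import numpy as np

def random_search(merit, x0, step=0.1, iters=500, rng=None):
    """Accept a random perturbation of the actuator vector only if it
    improves the merit function (the image-quality signal in the microscope)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, best = x0.copy(), merit(x0)
    for _ in range(iters):
        cand = x + step * rng.normal(size=x.size)
        m = merit(cand)
        if m > best:                  # keep only improving mirror shapes
            x, best = cand, m
    return x, best

# Toy merit: the signal peaks when all actuator offsets are zero.
merit = lambda v: -np.sum(v ** 2)
x_opt, m_opt = random_search(merit, np.full(37, 0.5))
```

Because every evaluation costs photon exposure, the look-up-table strategy in the abstract amounts to running this search once per condition and then reusing the optimized shapes.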
NASA Astrophysics Data System (ADS)
Liu, Zhipeng; Zhang, Bin; Feng, Qi; Chen, Zhaoyang; Lin, Chengyou; Ding, Yingchun
2017-06-01
Focusing light through strongly scattering media plays an important role in biomedical imaging and therapy. Here, we experimentally demonstrate light focusing through a ZnO sample via binary amplitude optimization using a genetic algorithm. In the experiment, we use a Micro-Electro-Mechanical System (MEMS)-based digital micromirror device (DMD) operated in amplitude-only modulation mode. The DMD consists of 1920×1080 square mirrors that can be independently controlled to reflect light to a desired position. We control only 160 thousand mirrors, divided into 400 segments, to modulate light focusing through the scattering media using an advanced genetic algorithm. Light intensity at the target position is enhanced up to 50 ± 5 times the average speckle intensity. The diameter of the focusing spot can be varied from 7 μm to 70 μm at arbitrary positions, and multiple foci can be obtained simultaneously. The spatial arrangement of the multiple foci can be flexibly controlled. The advantage of DMDs lies in their switching speed of up to 30 kHz, which has the potential to generate a focus in an ultra-short period of time. Our work provides a reference for the study of the high-speed wavefront shaping required for in vivo tissue imaging.
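A sketch of a genetic algorithm evolving a 400-segment binary amplitude mask, with a toy coherent-sum intensity model standing in for the physical speckle measurement; the segment phases are random assumptions, not measured data:

```python
import numpy as np

rng = np.random.default_rng(2)
phases = rng.uniform(0.0, 2.0 * np.pi, 400)   # assumed phase per DMD segment

def intensity(mask):
    """Toy focus intensity: coherent sum of the fields of switched-on segments."""
    return np.abs(np.sum(mask * np.exp(1j * phases))) ** 2

def ga(pop_size=40, gens=60, p_mut=0.01):
    pop = rng.integers(0, 2, size=(pop_size, 400))      # random binary masks
    for _ in range(gens):
        fit = np.array([intensity(m) for m in pop])
        order = np.argsort(fit)[::-1]
        parents = pop[order[: pop_size // 2]]           # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, 400)
            child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
            child[rng.random(400) < p_mut] ^= 1         # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])            # elitist replacement
    fit = np.array([intensity(m) for m in pop])
    return pop[fit.argmax()], fit.max()

best_mask, best_I = ga()
baseline = np.mean([intensity(rng.integers(0, 2, 400)) for _ in range(100)])
```

Keeping the fitter half unchanged (elitism) guarantees the best mask never regresses between generations, which matters when each fitness evaluation is a physical camera measurement.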
Wavefront Control Toolbox for James Webb Space Telescope Testbed
NASA Technical Reports Server (NTRS)
Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin
2007-01-01
We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to the optical models of the James Webb Space Telescope (JWST) in general, and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments present on the segmented primary mirror or the monolithic secondary mirror. The optical models are implemented in the Zemax optical design program, and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical model. Iterative and non-iterative algorithms have been developed to converge to a local minimum of the root-mean-square (rms) wavefront error using a singular value decomposition of the control matrix of influence functions. The toolkit is highly modular and allows the user to choose the control strategies for the degrees of freedom to be adjusted on a given iteration and the wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows for trade-offs between the frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
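The SVD-based control step can be sketched as a truncated pseudoinverse solve of the influence-function matrix; the matrix below is a random stand-in for illustration, not a JWST model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_act = 200, 12
A = rng.normal(size=(n_pix, n_act))       # influence functions, one column per DOF
true_cmd = rng.normal(size=n_act)
wavefront = A @ true_cmd                  # measured wavefront error to correct

# SVD pseudoinverse with truncation of near-singular control modes, which
# prevents noise amplification along poorly sensed degrees of freedom.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_inv = np.where(s > 1e-8 * s[0], 1.0 / s, 0.0)
cmd = Vt.T @ (s_inv * (U.T @ wavefront))  # least-squares control solution

rms = np.sqrt(np.mean((wavefront - A @ cmd) ** 2))
```

Applying `-cmd` to the actuators would null the modeled wavefront error; in practice the influence functions are relinearized as the state changes, as the abstract notes.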
Optimum Strategies for Selecting Descent Flight-Path Angles
NASA Technical Reports Server (NTRS)
Wu, Minghong G. (Inventor); Green, Steven M. (Inventor)
2016-01-01
An information processing system and method for adaptively selecting an aircraft descent flight path for an aircraft are provided. The system receives flight adaptation parameters, including aircraft flight descent time period, aircraft flight descent airspace region, and aircraft flight descent flyability constraints. The system queries a plurality of flight data sources and retrieves flight information including any of winds and temperatures aloft data, airspace/navigation constraints, airspace traffic demand, and an airspace arrival delay model. The system calculates a set of candidate descent profiles, each defined by at least one of a flight path angle and a descent rate, and each including an aggregated total fuel consumption value for the aircraft following a calculated trajectory and a flyability constraints metric for the calculated trajectory. The system selects the best candidate descent profile having the least fuel consumption value while the flyability constraints metric remains within the aircraft flight descent flyability constraints.
A new modified conjugate gradient coefficient for solving system of linear equations
NASA Astrophysics Data System (ADS)
Hajar, N.; ‘Aini, N.; Shapiee, N.; Abidin, Z. Z.; Khadijah, W.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is an evolution of computational methods for solving unconstrained optimization problems. This approach is easy to implement due to its simplicity and has been proven effective in solving real-life applications. Although this field has received a copious amount of attention in recent years, some of the new variants of the CG algorithm cannot surpass the efficiency of the previous versions. Therefore, in this paper, a new CG coefficient which retains the sufficient descent and global convergence properties of the original CG methods is proposed. This new CG method is tested on a set of test functions under exact line search. Its performance is then compared to that of some of the well-known previous CG methods based on the number of iterations and CPU time. The results show that the new CG algorithm has the best efficiency amongst all the methods tested. This paper also includes an application of the new CG algorithm to solving large systems of linear equations.
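For reference, the classic linear conjugate gradient iteration for a symmetric positive-definite system is sketched below; the β update shown is the standard Fletcher-Reeves-style coefficient, which is the quantity that new CG variants such as the one proposed in the paper replace:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Classic CG for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual (negative gradient)
    d = r.copy()                       # initial search direction
    rs = r @ r
    for _ in range(max_iter or 10 * len(b)):
        Ad = A @ d
        alpha = rs / (d @ Ad)          # exact line search along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d      # beta = ||r_new||^2 / ||r||^2: the CG coefficient
        rs = rs_new
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])
rhs = np.array([1.0, 2.0])
sol = conjugate_gradient(M, rhs)       # exact answer is [1/11, 7/11]
```

On an n-dimensional SPD system, exact-arithmetic CG terminates in at most n iterations; the choice of β only matters for the nonlinear and inexact settings the paper targets.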
Analysis of Air Traffic Track Data with the AutoBayes Synthesis System
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin Philip; Cate, Karen; Lee, Alan G.
2010-01-01
The Next Generation Air Traffic System (NGATS) is aiming to provide substantial computer support for air traffic controllers. Algorithms for the accurate prediction of aircraft movements are of central importance for such software systems, but trajectory prediction has to work reliably in the presence of unknown parameters and uncertainties. We are using the AutoBayes program synthesis system to generate customized data analysis algorithms that process large sets of aircraft radar track data in order to estimate parameters and uncertainties. In this paper, we present how the tasks of finding structure in track data, estimating important parameters in climb trajectories, and detecting continuous descent approaches can be accomplished with compact task-specific AutoBayes specifications. We present an overview of the AutoBayes architecture and describe how its schema-based approach generates customized analysis algorithms, documented C/C++ code, and detailed mathematical derivations. Results of experiments with actual air traffic control data are discussed.
Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana
2016-01-01
With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
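The evolutionary layer of the proposed approach is not detailed in the abstract; a minimal sketch of the Kernel-Adatron component alone (hard-margin dual updates with an RBF kernel, clipped at zero) might look as follows:

```python
import numpy as np

def kernel_adatron(X, y, gamma=1.0, eta=0.1, epochs=100):
    # Kernel-Adatron sketch: additive updates on the dual variables alpha,
    # driven by the margin error (1 - y_i * z_i) and clipped at zero.
    # The evolutionary-algorithm layer of the paper is not reproduced here.
    K = np.exp(-gamma * np.sum((X[:, None] - X[None]) ** 2, axis=2))
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        margins = (alpha * y) @ K                      # z_i for all points
        alpha = np.maximum(0.0, alpha + eta * (1.0 - y * margins))
    return alpha, K
```

Prediction on the training kernel is `sign((alpha * y) @ K)`; an evolutionary wrapper would instead search over the alphas or the kernel hyperparameters.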
Accurate modeling of switched reluctance machine based on hybrid trained WNN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie
2014-04-15
According to the strongly nonlinear electromagnetic characteristics of the switched reluctance machine (SRM), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from the initial weights obtained by the improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of the SRM and verifies the effectiveness of the proposed modeling method.
Feasibility study of low-dose intra-operative cone-beam CT for image-guided surgery
NASA Astrophysics Data System (ADS)
Han, Xiao; Shi, Shuanghe; Bian, Junguo; Helm, Patrick; Sidky, Emil Y.; Pan, Xiaochuan
2011-03-01
Cone-beam computed tomography (CBCT) has been increasingly used during surgical procedures for providing accurate three-dimensional anatomical information for intra-operative navigation and verification. High-quality CBCT images are in general obtained through reconstruction from projection data acquired at hundreds of view angles, which is associated with a non-negligible amount of radiation exposure to the patient. In this work, we have applied a novel image-reconstruction algorithm, the adaptive-steepest-descent-POCS (ASD-POCS) algorithm, to reconstruct CBCT images from projection data at a significantly reduced number of view angles. Preliminary results from experimental studies involving both simulated data and real data show that images of comparable quality to those presently available in clinical image-guidance systems can be obtained by use of the ASD-POCS algorithm from a fraction of the projection data that are currently used. The result implies potential value of the proposed reconstruction technique for low-dose intra-operative CBCT imaging applications.
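A much-simplified 1-D sketch of the alternation at the heart of ASD-POCS (a data-consistency step in the POCS spirit followed by steepest descent on image total variation); the real algorithm uses CBCT projectors, adaptive step balancing and more constraints, none of which are reproduced here:

```python
import numpy as np

def asd_pocs_sketch(A, b, outer=20, tv_iters=3, tv_step=0.02):
    # Alternate (1) a Landweber-style data-consistency step with positivity
    # (the POCS part) and (2) a few steepest-descent steps on the 1-D total
    # variation of the image (the ASD part).
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        x = x + A.T @ (b - A @ x) / L        # pull toward A x = b
        x = np.maximum(x, 0.0)               # positivity constraint
        for _ in range(tv_iters):
            s = np.sign(np.diff(x))          # subgradient pieces of TV
            g = np.concatenate(([-s[0]], s[:-1] - s[1:], [s[-1]]))
            x = x - tv_step * g              # steepest descent on TV
    return x
```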
Wang, Tianyun; Lu, Xinfei; Yu, Xiaofei; Xi, Zhendong; Chen, Weidong
2014-01-01
In recent years, various applications regarding sparse continuous signal recovery, such as source localization, radar imaging, communication channel estimation, etc., have been addressed from the perspective of compressive sensing (CS) theory. However, there are two major defects that need to be tackled in any practical utilization. The first issue is the off-grid problem caused by the basis mismatch between arbitrarily located unknowns and the pre-specified dictionary, which would make conventional CS reconstruction methods degrade considerably. The second important issue is the urgent demand for low-complexity algorithms, especially when faced with the requirement of real-time implementation. In this paper, to deal with these two problems, we present three fast and accurate sparse reconstruction algorithms, termed HR-DCD, Hlog-DCD and Hlp-DCD, which are based on homotopy, dichotomous coordinate descent (DCD) iterations and non-convex regularizations, combined with the grid refinement technique. Experimental results are provided to demonstrate the effectiveness of the proposed algorithms and related analysis. PMID:24675758
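The dichotomous coordinate descent iteration underlying these methods can be sketched in its basic linear-system form (the homotopy and non-convex regularization layers of the paper are omitted): coordinates are updated with a ladder of halved step sizes, so each update needs only additions (bit shifts in fixed-point hardware).

```python
import numpy as np

def dcd_solve(R, b, H=1.0, Mb=20, max_updates=1000):
    # Dichotomous coordinate descent for R x = b with R symmetric positive
    # definite. H bounds |x_i|; Mb is the number of step-size halvings
    # ("bits" of precision).
    x = np.zeros(len(b))
    r = b.astype(float).copy()            # residual b - R x
    d = H                                 # current step size
    for _ in range(Mb):
        for _ in range(max_updates):
            k = int(np.argmax(np.abs(r)))     # leading coordinate
            if abs(r[k]) <= d * R[k, k] / 2:
                break                         # nothing passes the threshold
            s = d * np.sign(r[k])
            x[k] += s                         # addition-only update
            r -= s * R[:, k]                  # keep residual consistent
        d /= 2.0                              # refine the step size
    return x
```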
NASA Astrophysics Data System (ADS)
Kozynchenko, Alexander I.; Kozynchenko, Sergey A.
2017-03-01
In the paper, a problem of improving the efficiency of the particle-particle-particle-mesh (P3M) algorithm in computing the inter-particle electrostatic forces is considered. The particle-mesh (PM) part of the algorithm is modified in such a way that the space field equation is solved by the direct method of summation of potentials over the ensemble of particles lying not too close to a reference particle. For this purpose, a specific matrix "pattern" is introduced to describe the spatial field distribution of a single point charge, so the "pattern" contains pre-calculated potential values. This approach reduces the set of arithmetic operations performed in the innermost nested loop to addition and assignment operators and therefore decreases the running time substantially. The simulation model developed in C++ substantiates this view, showing decent accuracy, acceptable in particle beam calculations, together with improved speed performance.
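The pre-calculated "pattern" idea can be sketched as follows (a minimal 2-D toy, not the paper's C++ implementation): the 1/r potential of a unit charge is tabulated once, and each charge's far-field contribution is then accumulated by a shifted array addition.

```python
import numpy as np

# Pre-calculated "pattern": 1/r potential of a unit point charge on a mesh,
# computed once; the self-term at r = 0 is zeroed.
N = 33                                   # pattern size (odd, centred)
c = N // 2
yy, xx = np.mgrid[-c:c + 1, -c:c + 1]
r = np.hypot(xx, yy)
pattern = np.divide(1.0, r, out=np.zeros_like(r, dtype=float), where=r > 0)

def add_charge(phi, ix, iy, q=1.0):
    # Accumulate the charge's mesh potential by a shifted pattern addition:
    # the inner loop reduces to add-and-assign operations.
    H, W = phi.shape
    x0, x1 = max(0, ix - c), min(W, ix + c + 1)
    y0, y1 = max(0, iy - c), min(H, iy + c + 1)
    phi[y0:y1, x0:x1] += q * pattern[y0 - iy + c:y1 - iy + c,
                                     x0 - ix + c:x1 - ix + c]
```

The particle-particle (near-field) correction of P3M is not shown.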
Self-Organizing Hidden Markov Model Map (SOHMMM).
Ferles, Christos; Stafylopatis, Andreas
2013-12-01
A hybrid approach combining the Self-Organizing Map (SOM) and the Hidden Markov Model (HMM) is presented. The Self-Organizing Hidden Markov Model Map (SOHMMM) establishes a cross-section between the theoretic foundations and algorithmic realizations of its constituents. The respective architectures and learning methodologies are fused in an attempt to meet the increasing requirements imposed by the properties of deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein chain molecules. The fusion and synergy of the SOM unsupervised training and the HMM dynamic programming algorithms bring forth a novel on-line gradient descent unsupervised learning algorithm, which is fully integrated into the SOHMMM. Since the SOHMMM carries out probabilistic sequence analysis with little or no prior knowledge, it can have a variety of applications in clustering, dimensionality reduction and visualization of large-scale sequence spaces, and also, in sequence discrimination, search and classification. Two series of experiments based on artificial sequence data and splice junction gene sequences demonstrate the SOHMMM's characteristics and capabilities. Copyright © 2013 Elsevier Ltd. All rights reserved.
Evaluating and minimizing noise impact due to aircraft flyover
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Cook, G.
1979-01-01
Existing techniques were used to assess the noise impact on a community due to aircraft operation and to optimize the flight paths of an approaching aircraft with respect to the annoyance produced. Major achievements are: (1) the development of a population model suitable for determining the noise impact, (2) generation of a numerical computer code which uses this population model along with the steepest descent algorithm to optimize approach/landing trajectories, (3) implementation of this optimization code in several fictitious cases as well as for the community surrounding Patrick Henry International Airport, Virginia.
Adaptive conversion of a high-order mode beam into a near-diffraction-limited beam.
Zhao, Haichuan; Wang, Xiaolin; Ma, Haotong; Zhou, Pu; Ma, Yanxing; Xu, Xiaojun; Zhao, Yijun
2011-08-01
We present a new method for efficiently transforming a high-order mode beam into a nearly Gaussian beam with much higher beam quality. The method is based on modulation of the phases of the different lobes by the stochastic parallel gradient descent algorithm and coherent addition after phase flattening. We demonstrate the method by transforming an LP11 mode into a nearly Gaussian beam. The experimental results reveal that the power in the diffraction-limited bucket in the far field is increased by more than a factor of 1.5.
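The stochastic parallel gradient descent (SPGD) loop itself is simple and can be sketched on a toy two-beam combining metric (the metric and gains here are illustrative, not the paper's experimental values):

```python
import numpy as np

def spgd(u0, metric, gain=0.5, delta=0.1, iters=500, seed=0):
    # SPGD (used here to ascend the metric): dither all phase controls at
    # once with random +/- delta, probe the metric two-sidedly, and step
    # every channel in parallel in proportion to the measured change.
    rng = np.random.default_rng(seed)
    u = u0.astype(float).copy()
    for _ in range(iters):
        du = delta * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + du) - metric(u - du)
        u += gain * dJ * du
    return u

# Toy metric: power of two coherently added unit beams, maximal (= 4)
# when their phases are equal (flat phase).
J = lambda u: abs(np.exp(1j * u).sum()) ** 2
```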
Joint estimation of motion and illumination change in a sequence of images
NASA Astrophysics Data System (ADS)
Koo, Ja-Keoung; Kim, Hyo-Hun; Hong, Byung-Woo
2015-09-01
We present an algorithm that simultaneously computes optical flow and estimates illumination change from an image sequence in a unified framework. We propose an energy functional consisting of the conventional optical flow energy based on the Horn-Schunck method and an additional constraint designed to compensate for illumination changes. Any undesirable illumination change that occurs in the imaging procedure while the optical flow is being computed is treated as a nuisance factor. In contrast to the conventional optical flow algorithm based on the Horn-Schunck functional, which assumes the brightness constancy constraint, our algorithm is shown to be robust with respect to temporal illumination changes in the computation of optical flow. An efficient conjugate gradient descent technique is used as the numerical scheme in the optimization procedure. The experimental results obtained from the Middlebury benchmark dataset demonstrate the robustness and effectiveness of our algorithm. In addition, a comparative analysis of our algorithm and the Horn-Schunck algorithm is performed on an additional test dataset, constructed by applying a variety of synthetic bias fields to the original image sequences in the Middlebury benchmark dataset, to demonstrate that our algorithm outperforms the Horn-Schunck algorithm. The superior performance of the proposed method is observed in both qualitative visualizations and quantitative accuracy when compared to the Horn-Schunck optical flow algorithm, which easily yields poor results in the presence of small illumination changes that violate the brightness constancy constraint.
Shape regularized active contour based on dynamic programming for anatomical structure segmentation
NASA Astrophysics Data System (ADS)
Yu, Tianli; Luo, Jiebo; Singhal, Amit; Ahuja, Narendra
2005-04-01
We present a method to incorporate nonlinear shape prior constraints into segmenting different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics and enables building a single model for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), without the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods: global and local regularization. Global regularization is applied after each DP search to move the entire shape vector in the shape space, in a gradient descent fashion, toward the position of probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished by modifying the search space of the DP so that it only allows a certain amount of deformation of the local shape from the starting shape. Both regularization methods ensure consistency between the resulting shape and the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation.
Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints; and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
Estimating the degree of identity by descent in consanguineous couples.
Carr, Ian M; Markham, Sir Alexander F; Pena, Sérgio D J
2011-12-01
In some clinical and research settings, it is often necessary to identify the true level of "identity by descent" (IBD) between two individuals. However, as the individuals become more distantly related, it is increasingly difficult to accurately calculate this value. Consequently, we have developed a computer program that uses genome-wide SNP genotype data from related individuals to estimate the size and extent of IBD in their genomes. In addition, the software can compare a couple's IBD regions with either the autozygous regions of a relative affected by an autosomal recessive disease of unknown cause, or the IBD regions in the parents of the affected relative. It is then possible to calculate the probability of one of the couple's children suffering from the same disease. The software works by finding SNPs that exclude any possible IBD and then identifies regions that lack these SNPs, while exceeding a minimum size and number of SNPs. The accuracy of the algorithm was established by estimating the pairwise IBD between different members of a large pedigree with varying known coefficients of genetic relationship (CGR). © 2011 Wiley Periodicals, Inc.
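The exclusion logic described here can be sketched compactly (a toy reduction of the software's method: genotype tuples stand in for SNP calls, and the physical-size threshold is simplified to a marker count):

```python
def ibd_candidate_regions(gt1, gt2, min_snps=5):
    # A SNP where the two genotypes share no allele excludes IBD at that
    # locus; candidate IBD regions are runs free of such exclusions that
    # contain at least min_snps markers. The real software also enforces
    # a minimum physical region size.
    excluded = [not (set(a) & set(b)) for a, b in zip(gt1, gt2)]
    regions, start = [], 0
    for i, e in enumerate(excluded + [True]):   # sentinel closes last run
        if e:
            if i - start >= min_snps:
                regions.append((start, i - 1))
            start = i + 1
    return regions
```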
A Lagrange multiplier and Hopfield-type barrier function method for the traveling salesman problem.
Dang, Chuangyin; Xu, Lei
2002-02-01
A Lagrange multiplier and Hopfield-type barrier function method is proposed for approximating a solution of the traveling salesman problem. The method is derived from applications of Lagrange multipliers and a Hopfield-type barrier function and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the method searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that lower and upper bounds on variables are always satisfied automatically if the step length is a number between zero and one. At each iteration, the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the method converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the method seems more effective and efficient than the softassign algorithm.
Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Lu, Ping
2016-01-01
Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.
Liu, Xiaozheng; Yuan, Zhenming; Zhu, Junming; Xu, Dongrong
2013-12-07
The demons algorithm is a popular algorithm for non-rigid image registration because of its computational efficiency and simple implementation. The deformation forces of the classic demons algorithm are derived from image gradients by considering the deformation to decrease the intensity dissimilarity between images. However, methods using the difference in image intensity for medical image registration are easily affected by image artifacts, such as image noise, non-uniform imaging and partial volume effects. The gradient magnitude image is constructed from the local information of an image, so the difference in a gradient magnitude image can be regarded as more reliable and robust against these artifacts. Registering medical images by considering the differences in both image intensity and gradient magnitude is therefore a natural choice. In this paper, based on a diffeomorphic demons algorithm, we propose a chain-type diffeomorphic demons algorithm that combines the differences in both image intensity and gradient magnitude for medical image registration. Previous work has shown that the classic demons algorithm can be considered an approximation of a second-order gradient descent on the sum of the squared intensity differences. By optimizing the new dissimilarity criteria, we also present a set of new demons forces derived from the gradients of the image and of the gradient magnitude image. Controlled experiments confirm this advantage and show fast convergence.
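The classic (Thirion) demons force mentioned here, interpreted as an approximate second-order gradient descent step on the squared intensity differences, can be sketched as follows (the paper's chain-type variant adds an analogous gradient-magnitude term, not shown):

```python
import numpy as np

def demons_force(fixed, warped):
    # Classic demons force: intensity difference scaled by the fixed-image
    # gradient and normalised by |grad|^2 + diff^2, which bounds the step.
    diff = warped - fixed
    gy, gx = np.gradient(fixed)                # row- and column-direction gradients
    denom = gx ** 2 + gy ** 2 + diff ** 2
    denom = np.where(denom == 0, 1.0, denom)   # guard flat, matched pixels
    return -diff * gx / denom, -diff * gy / denom
```

In a full registration loop the force field is smoothed (e.g. Gaussian) and composed with the current deformation at every iteration.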
Fuel-Efficient Descent and Landing Guidance Logic for a Safe Lunar Touchdown
NASA Technical Reports Server (NTRS)
Lee, Allan Y.
2011-01-01
The landing of a crewed lunar lander on the surface of the Moon will be the climax of any Moon mission. At touchdown, the landing mechanism must absorb the load imparted on the lander due to the vertical component of the lander's touchdown velocity. Also, a large horizontal velocity must be avoided because it could cause the lander to tip over, risking the life of the crew. To be conservative, the worst-case lander's touchdown velocity is always assumed in designing the landing mechanism, making it very heavy. Fuel-optimal guidance algorithms for soft planetary landing have been studied extensively. In most of these studies, the lander is constrained to touchdown with zero velocity. With bounds imposed on the magnitude of the engine thrust, the optimal control solutions typically have a "bang-bang" thrust profile: the thrust magnitude "bangs" instantaneously between its maximum and minimum magnitudes. But the descent engine might not be able to throttle between its extremes instantaneously. There is also a concern about the acceptability of "bang-bang" control to the crew. In our study, the optimal control of a lander is formulated with a cost function that penalizes both the touchdown velocity and the fuel cost of the descent engine. In this formulation, there is not a requirement to achieve a zero touchdown velocity. Only a touchdown velocity that is consistent with the capability of the landing gear design is required. Also, since the nominal throttle level for the terminal descent sub-phase is well below the peak engine thrust, no bound on the engine thrust is used in our formulated problem. Instead of a bang-bang type solution, the optimal thrust generated is a continuous function of time. With this formulation, we can easily derive analytical expressions for the optimal thrust vector, touchdown velocity components, and other system variables. These expressions provide insights into the "physics" of the optimal landing and terminal descent maneuver.
These insights could help engineers to achieve a better "balance" between the conflicting needs of achieving a safe touchdown velocity, a low-weight landing mechanism, low engine fuel cost, and other design goals. In comparing the computed optimal control results with the preflight landing trajectory design of the Apollo-11 mission, we noted interesting similarities between the two missions.
Phase retrieval using regularization method in intensity correlation imaging
NASA Astrophysics Data System (ADS)
Li, Xiyu; Gao, Xin; Tang, Jia; Lu, Changming; Wang, Jianli; Wang, Bin
2014-11-01
The intensity correlation imaging (ICI) method can obtain high-resolution images with ground-based low-precision mirrors. In the imaging process, a phase retrieval algorithm must be used to reconstruct the object's image, but the algorithms now used (such as the hybrid input-output algorithm) are sensitive to noise and prone to stagnation. Moreover, the signal-to-noise ratio of intensity interferometry is low, especially when imaging astronomical objects. In this paper, we build the mathematical model of phase retrieval and simplify it into a constrained optimization problem for a multi-dimensional function. A new error function was designed from the noise distribution and prior information using the regularization method. The simulation results show that the regularization method improves the performance of the phase retrieval algorithm and yields better images, especially under low-SNR conditions.
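The baseline against which such regularized methods are compared can be sketched as classic error-reduction phase retrieval (the paper's regularized error function is not reproduced): alternately impose the measured Fourier magnitudes and the object-domain support/positivity constraints.

```python
import numpy as np

def error_reduction(mag, support, iters=100, seed=0):
    # Fienup-style error reduction: the Fourier-domain error is
    # non-increasing from iteration to iteration.
    rng = np.random.default_rng(seed)
    g = rng.random(mag.shape) * support          # random start in support
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = mag * np.exp(1j * np.angle(G))       # impose measured magnitudes
        g = np.real(np.fft.ifft2(G))
        g = np.maximum(g * support, 0.0)         # support + positivity
    return g

def mag_error(g, mag):
    # Fourier-magnitude mismatch used to monitor convergence.
    return np.linalg.norm(np.abs(np.fft.fft2(g)) - mag)
```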
Flight Management System Execution of Idle-Thrust Descents in Operations
NASA Technical Reports Server (NTRS)
Stell, Laurel L.
2011-01-01
To enable arriving aircraft to fly optimized descents computed by the flight management system (FMS) in congested airspace, ground automation must accurately predict descent trajectories. To support development of the trajectory predictor and its error models, commercial flights executed idle-thrust descents, and the recorded data includes the target speed profile and FMS intent trajectories. The FMS computes the intended descent path assuming idle thrust after top of descent (TOD), and any intervention by the controllers that alters the FMS execution of the descent is recorded so that such flights are discarded from the analysis. The horizontal flight path, cruise and meter fix altitudes, and actual TOD location are extracted from the radar data. Using more than 60 descents in Boeing 777 aircraft, the actual speeds are compared to the intended descent speed profile. In addition, three aspects of the accuracy of the FMS intent trajectory are analyzed: the meter fix crossing time, the TOD location, and the altitude at the meter fix. The actual TOD location is within 5 nmi of the intent location for over 95% of the descents. Roughly 90% of the time, the airspeed is within 0.01 of the target Mach number and within 10 KCAS of the target descent CAS, but the meter fix crossing time is only within 50 sec of the time computed by the FMS. Overall, the aircraft seem to be executing the descents as intended by the designers of the onboard automation.
VTOL shipboard letdown guidance system analysis
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Karmali, M. S.
1983-01-01
Alternative letdown guidance strategies are examined for landing a VTOL aircraft onboard a small aviation ship under adverse environmental conditions. Off-line computer simulation of the shipboard landing task is used to assess the relative merits of the proposed guidance schemes. The touchdown performance of a nominal constant rate of descent (CROD) letdown strategy serves as a benchmark for ranking the alternative letdown schemes. Analysis of ship motion time histories indicates the existence of an alternating sequence of quiescent and rough motions, called lulls and swells. A real-time lull/swell classification algorithm based on ship motion pattern features is developed. The classification algorithm is used to command a go/no-go signal indicating the initiation and termination of an acceptable landing window. Simulation results show that such a go/no-go pattern-based letdown guidance strategy improves touchdown performance.
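A toy version of the go/no-go idea can be sketched with a single feature (the paper's classifier uses richer ship-motion pattern features): declare a landing window open when the moving RMS of the ship heave drops below a threshold.

```python
import numpy as np

def landing_window(heave, win=50, thresh=0.1):
    # Moving RMS of the heave signal over a sliding window; "go" wherever
    # it falls below the threshold (a lull), "no go" otherwise (a swell).
    rms = np.sqrt(np.convolve(heave ** 2, np.ones(win) / win, mode="valid"))
    return rms < thresh
```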
Robotic Lunar Lander Development Project Status
NASA Technical Reports Server (NTRS)
Hammond, Monica; Bassler, Julie; Morse, Brian
2010-01-01
This slide presentation reviews the status of the development of a robotic lunar lander. The goal of the project is to perform engineering tests and risk reduction activities to support the development of a small lunar lander for lunar surface science. This includes: (1) risk reduction for the flight of the robotic lander (i.e., testing and analyzing various phases of the project); (2) incremental development of the robotic lander design, to demonstrate autonomous, controlled descent and landing on airless bodies, and the design of a thruster configuration for 1/6th of Earth's gravity; (3) flight demonstration testing of a cold gas test article; (4) warm gas testing of the robotic lander design; (5) development and testing of landing algorithms; (6) validation of the algorithms through analysis and test; and (7) tests of the flight propulsion system.
NASA Astrophysics Data System (ADS)
Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas
1992-07-01
Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate these problems by deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 KeV in 0.2 micrometers of PMMA resist on a silicon substrate.
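The iterative steepest-descent deconvolution can be sketched as follows (a minimal toy with a symmetric, normalised, origin-centred point spread function; the real scheme's constraints and stopping rules are not reproduced):

```python
import numpy as np

def dose_correction(target, psf, iters=200, step=1.0):
    # Projected steepest descent on ||target - psf * d||^2: find a
    # non-negative dose map d whose blur matches the target exposure.
    # A symmetric psf makes the blur operator self-adjoint.
    psf_hat = np.fft.fft2(psf)
    blur = lambda a: np.real(np.fft.ifft2(np.fft.fft2(a) * psf_hat))
    d = target.astype(float)
    for _ in range(iters):
        r = target - blur(d)                     # exposure residual
        d = np.maximum(0.0, d + step * blur(r))  # gradient step, clipped
    return d
```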
Efficient two-dimensional compressive sensing in MIMO radar
NASA Astrophysics Data System (ADS)
Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad
2017-12-01
Compressive sensing (CS) has been a way to lower the sampling rate, leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals, and then we propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using the gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods while having better performance in comparison with conventional methods such as a Gaussian random measurement matrix.
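The coherence-minimization step can be sketched in its simplest 1D form (the paper's 2D formulation and dictionary coupling are omitted): gradient descent drives the Gram matrix of the column-normalised sensing matrix toward the identity.

```python
import numpy as np

def design_measurement_matrix(m, n, iters=300, lr=0.02, seed=0):
    # Gradient descent on the coherence surrogate f(Phi) = ||Phi^T Phi - I||_F^2
    # with unit-norm columns re-imposed at every iteration.
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((m, n))
    for _ in range(iters):
        Phi /= np.linalg.norm(Phi, axis=0)     # unit-norm columns
        E = Phi.T @ Phi - np.eye(n)            # Gram mismatch
        Phi -= lr * 4.0 * Phi @ E              # grad of ||Phi^T Phi - I||_F^2
    Phi /= np.linalg.norm(Phi, axis=0)
    return Phi

def offgram_energy(Phi):
    # Frobenius norm of the off-diagonal Gram entries (coherence energy).
    G = Phi.T @ Phi
    return np.linalg.norm(G - np.diag(np.diag(G)))
```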
NASA Astrophysics Data System (ADS)
Aiyoshi, Eitaro; Masuda, Kazuaki
On the basis of market fundamentalism, new types of social systems with a market mechanism, such as electricity trading markets and carbon dioxide (CO2) emission trading markets, have been developed. However, there are few textbooks in science and technology that explain that Lagrange multipliers can be interpreted as market prices. This tutorial paper explains that (1) the steepest descent method for dual problems in optimization, and (2) the Gauss-Seidel method for solving the stationarity conditions of Lagrangian problems with market principles, can formulate the mechanism of market pricing, which works even in the information-oriented modern society. The authors expect readers to acquire basic knowledge of optimization theory and algorithms related to economics and to utilize them for designing the mechanisms of more complicated markets.
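The "Lagrange multiplier as market price" view can be sketched as dual (sub)gradient ascent: the price is the multiplier of the supply constraint, raised on excess demand and lowered on excess supply, while agents best-respond to the posted price. The two-agent utilities below are illustrative choices, not from the paper.

```python
import numpy as np

def market_clearing_price(supply=1.0, step=0.1, iters=500):
    # Steepest-ascent (tatonnement) price update for the dual of
    # max sum_i a_i*log(1 + x_i)  s.t.  sum_i x_i <= supply, x_i >= 0.
    a = np.array([2.0, 1.0])                  # agents' utility weights
    p = 0.5                                   # initial price (multiplier)
    for _ in range(iters):
        x = np.maximum(0.0, a / p - 1.0)      # best responses: a_i/(1+x_i) = p
        p = max(1e-6, p + step * (x.sum() - supply))  # raise price on excess demand
    return p, x
```

At the fixed point the multiplier clears the market: total demand equals supply.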
System Verification of MSL Skycrane Using an Integrated ADAMS Simulation
NASA Technical Reports Server (NTRS)
White, Christopher; Antoun, George; Brugarolas, Paul; Lih, Shyh-Shiuh; Peng, Chia-Yen; Phan, Linh; San Martin, Alejandro; Sell, Steven
2012-01-01
Mars Science Laboratory (MSL) will use the Skycrane architecture to execute final descent and landing maneuvers. The Skycrane phase uses closed-loop feedback control throughout the entire phase, starting with rover separation, through mobility deploy, and through touchdown, ending only when the bridles have completely slacked. The integrated ADAMS simulation described in this paper couples complex dynamical models created by the mechanical subsystem with actual GNC flight software algorithms that have been compiled and linked into ADAMS. These integrated simulations provide the project with the best means to verify key Skycrane requirements which have a tightly coupled GNC-Mechanical aspect to them. It also provides the best opportunity to validate the design of the algorithm that determines when to cut the bridles. The results of the simulations show the excellent performance of the Skycrane system.
Transient plasma estimation: a noise cancelling/identification approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Casper, T.; Kane, R.
1985-03-01
The application of a noise cancelling technique to extract energy storage information from sensors during fusion reactor experiments on the Tandem Mirror Experiment-Upgrade (TMX-U) at the Lawrence Livermore National Laboratory (LLNL) is examined. We show how this technique can be used to decrease the uncertainty in the corresponding sensor measurements used for diagnostics in both real-time and post-experimental environments. We analyze the performance of the algorithm on the sensor data and discuss the various tradeoffs. The suggested algorithm is designed using SIG, an interactive signal processing package developed at LLNL.
Landsat-5 bumper-mode geometric correction
Storey, James C.; Choate, Michael J.
2004-01-01
The Landsat-5 Thematic Mapper (TM) scan mirror was switched from its primary operating mode to a backup mode in early 2002 in order to overcome internal synchronization problems arising from long-term wear of the scan mirror mechanism. The backup bumper mode of operation removes the constraints on scan start and stop angles enforced in the primary scan angle monitor operating mode, requiring additional geometric calibration effort to monitor the active scan angles. It also eliminates scan timing telemetry used to correct the TM scan geometry. These differences require changes to the geometric correction algorithms used to process TM data. A mathematical model of the scan mirror's behavior when operating in bumper mode was developed. This model includes a set of key timing parameters that characterize the time-varying behavior of the scan mirror bumpers. To simplify the implementation of the bumper-mode model, the bumper timing parameters were recast in terms of the calibration and telemetry data items used to process normal TM imagery. The resulting geometric performance, evaluated over 18 months of bumper-mode operations, though slightly reduced from that achievable in the primary operating mode, is still within the Landsat specifications when the data are processed with the most up-to-date calibration parameters.
NASA Astrophysics Data System (ADS)
Deng, Shaoyong; Zhang, Shiqiang; He, Minbo; Zhang, Zheng; Guan, Xiaowei
2017-05-01
The positive-branch confocal unstable resonator with an inhomogeneous gain medium was studied for a typical high-energy DF laser system. The fast-changing process of the resonator's eigenmodes was coupled with the slow-changing process of the thermal deformation of the cavity mirrors. The influence of the mirrors' thermal deformation on the outcoupled beam quality and on the transmission loss of high-frequency components of the high-energy laser was computed. The simulations were performed with programs written in MATLAB and the GLAD software, combining finite-element analysis with the Fox-Li iteration algorithm. Effects of thermal distortion, misalignment of the cavity mirrors, and the inhomogeneous distribution of the gain medium were introduced to simulate the real physical conditions of the laser cavity. The wavefront distribution and beam quality (including RMS wavefront error, power in the bucket, Strehl ratio, diffraction limit β, beam-spot center position, spot size, and far-field intensity distribution) of the distorted outcoupled beam were studied. The conclusions of the simulation agree with the experimental results. This work provides a reference for the wavefront-correction range required by the adaptive-optics system of the internal beam path.
Jankovic, Marko; Ogawa, Hidemitsu
2004-10-01
Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful for tracking slow changes of correlations in the input data or for updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modifying one of the most famous PSA learning algorithms, the Subspace Learning Algorithm (SLA). The modification is based on the Time-Oriented Hierarchical Method (TOHM), which uses two distinct time scales. On the faster time scale, a PSA algorithm is responsible for the "behavior" of all output neurons. On the slower scale, output neurons compete for the fulfillment of their "own interests"; on this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper we briefly analyze how (and why) the time-oriented hierarchical method can be used to transform any existing neural-network PSA method into a PCA method.
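The SLA update that the paper modifies can be sketched generically as an Oja-style subspace rule; the toy data, learning rate, and variable names below are illustrative assumptions, not the authors' TOHM variant:

```python
import numpy as np

def sla_step(W, x, lr=0.01):
    """One update of the Subspace Learning Algorithm (Oja's subspace rule).

    W : (d, m) matrix whose columns span the estimated principal subspace.
    x : (d,) input sample.
    Update: W += lr * (x y^T - W y y^T) with y = W^T x, which drives the
    columns of W toward an orthonormal basis of the principal subspace.
    """
    y = W.T @ x                                   # project sample onto basis
    W += lr * (np.outer(x, y) - W @ np.outer(y, y))
    return W

# toy demo: input variance is concentrated in the first coordinate
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 1)) * 0.1
for _ in range(2000):
    x = np.array([rng.standard_normal() * 3.0,    # high-variance direction
                  rng.standard_normal() * 0.1,
                  rng.standard_normal() * 0.1])
    W = sla_step(W, x, lr=0.005)
w = W[:, 0] / np.linalg.norm(W[:, 0])             # learned unit direction
```

The single-column case reduces to Oja's rule, so `w` aligns with the dominant eigenvector of the input covariance.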
Haplotype assembly in polyploid genomes and identical by descent shared tracts.
Aguiar, Derek; Istrail, Sorin
2013-07-01
Genome-wide haplotype reconstruction from sequence data, or haplotype assembly, is at the center of major challenges in molecular biology and life sciences. For complex eukaryotic organisms like humans, the genome is vast and the population samples are growing so rapidly that algorithms processing high-throughput sequencing data must scale favorably in terms of both accuracy and computational efficiency. Furthermore, current models and methodologies for haplotype assembly (i) do not consider individuals sharing haplotypes jointly, which reduces the size and accuracy of assembled haplotypes, and (ii) are unable to model genomes having more than two sets of homologous chromosomes (polyploidy). Polyploid organisms are increasingly becoming the target of many research groups interested in the genomics of disease, phylogenetics, botany and evolution but there is an absence of theory and methods for polyploid haplotype reconstruction. In this work, we present a number of results, extensions and generalizations of compass graphs and our HapCompass framework. We prove the theoretical complexity of two haplotype assembly optimizations, thereby motivating the use of heuristics. Furthermore, we present graph theory-based algorithms for the problem of haplotype assembly using our previously developed HapCompass framework for (i) novel implementations of haplotype assembly optimizations (minimum error correction), (ii) assembly of a pair of individuals sharing a haplotype tract identical by descent and (iii) assembly of polyploid genomes. We evaluate our methods on 1000 Genomes Project, Pacific Biosciences and simulated sequence data. HapCompass is available for download at http://www.brown.edu/Research/Istrail_Lab/. Supplementary data are available at Bioinformatics online.
The Clark Phase-able Sample Size Problem: Long-Range Phasing and Loss of Heterozygosity in GWAS
NASA Astrophysics Data System (ADS)
Halldórsson, Bjarni V.; Aguiar, Derek; Tarpine, Ryan; Istrail, Sorin
A phase transition is taking place today. The amount of data generated by genome resequencing technologies is so large that in some cases it is now less expensive to repeat the experiment than to store the information generated by the experiment. In the next few years it is quite possible that millions of Americans will have been genotyped. The question then arises of how to make the best use of this information and jointly estimate the haplotypes of all these individuals. The premise of the paper is that long shared genomic regions (or tracts) are unlikely unless the haplotypes are identical by descent (IBD), in contrast to short shared tracts which may be identical by state (IBS). Here we estimate for populations, using the US as a model, what sample size of genotyped individuals would be necessary to have sufficiently long shared haplotype regions (tracts) that are identical by descent (IBD), at a statistically significant level. These tracts can then be used as input for a Clark-like phasing method to obtain a complete phasing solution of the sample. We estimate in this paper that for a population like the US and about 1% of the people genotyped (approximately 2 million), tracts of about 200 SNPs long are shared between pairs of individuals IBD with high probability which assures the Clark method phasing success. We show on simulated data that the algorithm will get an almost perfect solution if the number of individuals being SNP arrayed is large enough and the correctness of the algorithm grows with the number of individuals being genotyped.
End-to-end commissioning demonstration of the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Acton, D. Scott; Towell, Timothy; Schwenker, John; Shields, Duncan; Sabatke, Erin; Contos, Adam R.; Hansen, Karl; Shi, Fang; Dean, Bruce; Smith, Scott
2007-09-01
The one-meter Testbed Telescope (TBT) has been developed at Ball Aerospace to facilitate the design and implementation of the wavefront sensing and control (WFSC) capabilities of the James Webb Space Telescope (JWST). We have recently conducted an "end-to-end" demonstration of the flight commissioning process on the TBT. This demonstration started with the Primary Mirror (PM) segments and the Secondary Mirror (SM) in random positions, traceable to the worst-case flight deployment conditions. The commissioning process detected and corrected the deployment errors, resulting in diffraction-limited performance across the entire science FOV. This paper will describe the commissioning demonstration and the WFSC algorithms used at each step in the process.
Hysteresis compensation of piezoelectric deformable mirror based on Prandtl-Ishlinskii model
NASA Astrophysics Data System (ADS)
Ma, Jianqiang; Tian, Lei; Li, Yan; Yang, Zongfeng; Cui, Yuguo; Chu, Jiaru
2018-06-01
Hysteresis of piezoelectric deformable mirror (DM) reduces the closed-loop bandwidth and the open-loop correction accuracy of adaptive optics (AO) systems. In this work, a classical Prandtl-Ishlinskii (PI) model is employed to model the hysteresis behavior of a unimorph DM with 20 actuators. A modified control algorithm combined with the inverse PI model is developed for piezoelectric DMs. With the help of PI model, the hysteresis of the DM was reduced effectively from about 9% to 1%. Furthermore, open-loop regenerations of low-order aberrations with or without hysteresis compensation were carried out. The experimental results demonstrate that the regeneration accuracy with PI model compensation is significantly improved.
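A classical PI model of the kind employed here is a weighted superposition of play (backlash) operators. The sketch below is a generic illustration; the thresholds and weights are arbitrary choices, not parameters fitted to a deformable mirror:

```python
import numpy as np

def pi_model(u, radii, weights):
    """Classical Prandtl-Ishlinskii hysteresis model: the output is a
    weighted sum of play operators with thresholds `radii`.
    u : 1-D array of input values (e.g. actuator voltages)."""
    states = np.zeros(len(radii))          # play-operator memory
    out = np.empty_like(u, dtype=float)
    for t, ut in enumerate(u):
        # play operator: y_t = max(u_t - r, min(u_t + r, y_{t-1}))
        states = np.maximum(ut - radii, np.minimum(ut + radii, states))
        out[t] = weights @ states
    return out

radii = np.array([0.0, 0.5, 1.0])          # illustrative thresholds
weights = np.array([1.0, 0.5, 0.25])       # illustrative weights
u = np.concatenate([np.linspace(0, 2, 50), np.linspace(2, 0, 50)])
y = pi_model(u, radii, weights)
# hysteresis: the descending branch does not retrace the ascending one
```

Inverting this operator (the PI model admits an analytic inverse) yields the compensation input used in open-loop correction.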
Loop Mirror Laser Neural Network with a Fast Liquid-Crystal Display
NASA Astrophysics Data System (ADS)
Mos, Evert C.; Schleipen, Jean J. H. B.; de Waardt, Huug; Khoe, Djan G. D.
1999-07-01
In our laser neural network (LNN), all-optical threshold action is obtained by applying controlled optical feedback to a laser diode. Here an extended experimental LNN is presented with as many as 32 neurons and 12 inputs. In the setup we use a fast liquid-crystal display to implement an optical matrix-vector multiplier. This display, based on ferroelectric liquid-crystal material, enables us to present 125 training examples per second to the LNN. To maximize the optical feedback efficiency of the setup, a loop mirror is introduced. We use a delta-rule learning algorithm to train the network to perform a number of functions in the application area of telecommunication data switching.
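The delta-rule (Widrow-Hoff) update used for such training has the standard electronic form sketched below; the optical matrix-vector hardware is not modeled, and the toy task and constants are assumptions:

```python
import numpy as np

def delta_rule_train(X, T, lr=0.1, epochs=200):
    """Delta-rule training of a single linear layer: for each sample,
    the weight update is  lr * input * (target - output)."""
    rng = np.random.default_rng(1)
    W = rng.standard_normal((X.shape[1], T.shape[1])) * 0.1
    for _ in range(epochs):
        for x, t in zip(X, T):
            y = x @ W                         # linear neuron outputs
            W += lr * np.outer(x, t - y)      # delta rule
    return W

# learn a 2-input AND-like mapping; third input is a constant bias
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
T = np.array([[0.0], [0.0], [0.0], [1.0]])
W = delta_rule_train(X, T)
pred = X @ W          # thresholding pred at 0.5 recovers the AND function
```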
Low cost label-free live cell imaging for biological samples
NASA Astrophysics Data System (ADS)
Seniya, C.; Towers, C. E.; Towers, D. P.
2017-02-01
This paper reports the progress to develop a practical phase measuring microscope offering new capabilities in terms of phase measurement accuracy and quantification of cell:cell interactions over the longer term. A novel, low cost phase interference microscope for imaging live cells (label-free) is described. The method combines the Zernike phase contrast approach with a dual mirror design to enable phase modulation between the scattered and un-scattered optical fields. Two designs are proposed and demonstrated, one of which retains the common path nature of Zernike's original microscopy concept. In both setups the phase shift is simple to control via a piezoelectric driven mirror in the back focal plane of the imaging system. The approach is significantly cheaper to implement than those based on spatial light modulators (SLM) at approximately 20% of the cost. A quantitative assessment of the performance of a set of phase shifting algorithms is also presented, specifically with regard to broad bandwidth illumination in phase contrast microscopy. The simulation results show that the phase measurement accuracy is strongly dependent on the algorithm selected and the optical path difference in the sample.
Some aspects of SR beamline alignment
NASA Astrophysics Data System (ADS)
Gaponov, Yu. A.; Cerenius, Y.; Nygaard, J.; Ursby, T.; Larsson, K.
2011-09-01
Based on element-by-element alignment of the Synchrotron Radiation (SR) beamline optics and analysis of the alignment results, an optimized beamline alignment algorithm has been designed and developed. The alignment procedures were designed and developed for the MAX-lab I911-4 fixed-energy beamline. It has been shown that the intermediate information received during the monochromator alignment stage can be used for the correction of both the monochromator and the mirror without the subsequent alignment stages for the mirror, slits, sample holder, etc. Such an optimization of the beamline alignment procedures decreases the time necessary for the alignment and is particularly useful in the case of any instability of the beamline optical elements, the storage ring electron orbit, or the wiggler insertion device, which could result in instability of the angular and positional parameters of the SR beam. A general-purpose software package for manual, semi-automatic and automatic SR beamline alignment has been designed and developed using this algorithm. The TANGO control system is used as the middleware between the stand-alone beamline control applications BLTools and BPMonitor and the beamline equipment.
Dong, Bing; Li, Yan; Han, Xin-li; Hu, Bin
2016-01-01
For high-speed aircraft, a conformal window is used to optimize the aerodynamic performance. However, the local shape of the conformal window leads to large amounts of dynamic aberration varying with look angle. In this paper, a deformable mirror (DM) and model-based wavefront sensorless adaptive optics (WSLAO) are used for dynamic aberration correction of an infrared remote sensor equipped with a conformal window and scanning mirror. In model-based WSLAO, aberration is captured using Lukosz modes, and we use the low spatial frequency content of the image spectral density as the metric function. Simulations show that aberrations induced by the conformal window are dominated by a few low-order Lukosz modes. To optimize the dynamic correction, we can correct only the dominant Lukosz modes, and the image size can be minimized to reduce the time required to compute the metric function. In our experiment, a 37-channel DM is used to mimic the dynamic aberration of the conformal window at a scanning rate of 10 degrees per second. A 52-channel DM is used for correction. For a 128 × 128 image, the mean value of image sharpness during dynamic correction is 1.436 × 10^-5 with optimized correction and 1.427 × 10^-5 with un-optimized correction. We also demonstrate that model-based WSLAO can achieve convergence two times faster than the traditional stochastic parallel gradient descent (SPGD) method.
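The SPGD baseline that the model-based approach is compared against can be sketched generically: all control channels are perturbed simultaneously with random signs, and the measured change in the image metric drives the update. The quadratic "sharpness" metric below is a stand-in for a real image metric, and all constants are assumptions:

```python
import numpy as np

def spgd(metric, u, gain=0.5, perturb=0.05, iters=2000, seed=0):
    """Stochastic parallel gradient descent (ascent on the metric):
    probe metric(u + d) - metric(u - d) with a random +/-perturb vector d,
    then step along d in proportion to the measured change."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        d = perturb * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + d) - metric(u - d)   # two-sided metric probe
        u = u + gain * dJ * d                # ascend the metric
    return u

# toy metric peaked at an (unknown to the algorithm) best actuator vector
target = np.array([0.3, -0.2, 0.1, 0.4])
metric = lambda u: -np.sum((u - target) ** 2)
u_opt = spgd(metric, np.zeros(4))
```

Because each iteration needs only two metric evaluations regardless of the number of channels, SPGD scales well, but its convergence is slow compared with model-based correction, which is the point of the paper's comparison.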
Control algorithms for aerobraking in the Martian atmosphere
NASA Technical Reports Server (NTRS)
Ward, Donald T.; Shipley, Buford W., Jr.
1991-01-01
The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.
A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks
Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping
2013-01-01
In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis.
Field evaluation of flight deck procedures for flying CTAS descents
DOT National Transportation Integrated Search
1997-01-01
Flight deck descent procedures were developed for a field evaluation of the CTAS Descent Advisor conducted in the fall of 1995. During this study, CTAS descent clearances were issued to 185 commercial flights at Denver International Airport. Data col...
Time-Dependent Response Versus Scan Angle for MODIS Reflective Solar Bands
NASA Technical Reports Server (NTRS)
Sun, Junqiang; Xiong, Xiaoxiong; Angal, Amit; Chen, Hongda; Wu, Aisheng; Geng, Xu
2014-01-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) instruments currently operate onboard the National Aeronautics and Space Administration's (NASA) Terra and Aqua spacecraft, launched on December 18, 1999 and May 4, 2002, respectively. MODIS has 36 spectral bands, among which 20 are reflective solar bands (RSBs) covering a spectral range from 0.412 to 2.13 µm. The RSBs are calibrated on orbit using a solar diffuser (SD) and an SD stability monitor and with additional measurements from lunar observations via a space view (SV) port. Selected pseudo-invariant desert sites are also used to track the RSB on-orbit gain change, particularly for short-wavelength bands. MODIS views the Earth surface, SV, and the onboard calibrators using a two-sided scan mirror. The response versus scan angle (RVS) of the scan mirror was characterized prior to launch, and its changes are tracked using observations made at different angles of incidence from onboard SD, lunar, and Earth view (EV) measurements. These observations show that the optical properties of the scan mirror have experienced large wavelength-dependent degradation in both the visible and near infrared spectral regions. Algorithms have been developed to track the on-orbit RVS change using the calibrators and the selected desert sites. These algorithms have been applied to both Terra and Aqua MODIS Level 1B (L1B) to improve the EV data accuracy since L1B Collection 4, refined in Collection 5, and further improved in the latest Collection 6 (C6). In C6, two approaches have been used to derive the time-dependent RVS for MODIS RSB. The first approach relies on data collected from sensor onboard calibrators and mirror side ratios from EV observations. The second approach uses onboard calibrators and EV response trending from selected desert sites. This approach is mainly used for the bands with much larger changes in their time-dependent RVS, such as the Terra MODIS bands 1-4, 8, and 9 and the Aqua MODIS bands 8 and 9.
In this paper, the algorithms of these approaches are described, their performance is demonstrated, and their impact on L1B products is discussed. In general, the shorter-wavelength bands have experienced larger on-orbit RVS changes, which are typically mirror-side and detector dependent. The on-orbit RVS change due to degradation of band 8 can be as large as 35 percent for Terra MODIS and 20 percent for Aqua MODIS. An accurate characterization of the on-orbit RVS change is vital to maintaining the accuracy of the MODIS L1B products. The derived time-dependent RVS, implemented in C6, makes an important improvement to the quality of the MODIS L1B products.
Real-time path planning and autonomous control for helicopter autorotation
NASA Astrophysics Data System (ADS)
Yomchinda, Thanan
Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however, it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system that provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and the integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path-following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, the trajectory-following control law, and the autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high-fidelity simulations. Results indicate that the algorithms can operate in real time and that the integrated system can provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented, which will lead to a final hardware demonstration of the algorithms.
Vectorial mask optimization methods for robust optical lithography
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong; Arce, Gonzalo R.
2012-10-01
Continuous shrinkage of the critical dimension in integrated circuits impels the development of resolution enhancement techniques for low-k1 lithography. Recently, several pixelated optical proximity correction (OPC) and phase-shifting mask (PSM) approaches were developed under scalar imaging models to account for process variations. However, lithography systems with larger NA (NA > 0.6) are predominant at current technology nodes, rendering scalar models inadequate to describe the vector nature of the electromagnetic field that propagates through the optical lithography system. In addition, OPC and PSM algorithms based on scalar models can compensate for wavefront aberrations, but are incapable of mitigating polarization aberrations in practical lithography systems, which can only be dealt with under the vector model. To this end, we focus on developing robust pixelated gradient-based OPC and PSM optimization algorithms aimed at canceling defocus, dose variation, and wavefront and polarization aberrations under a vector model. First, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. A steepest descent algorithm is then used to iteratively optimize the mask patterns. Simulations show that the proposed algorithms can effectively improve the process windows of the optical lithography systems.
On the fusion of tuning parameters of fuzzy rules and neural network
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2017-08-01
Learning a fuzzy rule-based system with a neural network can lead to a precise and valuable understanding of many problems. Fuzzy logic offers a simple way to arrive at a definite conclusion based upon vague, ambiguous, imprecise, noisy, or missing input information. Conventional learning algorithms that tune the parameters of fuzzy rules from training input-output data usually end in a weak firing state, which weakens the fuzzy rules and makes them unreliable for a multiple-input fuzzy system. In this paper, we introduce a new learning algorithm for tuning the parameters of the fuzzy rules, alongside a radial basis function neural network (RBFNN), from training input-output data based on the gradient descent method. The new learning algorithm addresses the weak-firing problem of the conventional method. We illustrate the efficiency of the new learning algorithm by means of numerical examples; MATLAB R2014(a) was used to simulate the results. The results show that the new learning method has the advantage of training the fuzzy rules without tampering with the fuzzy rule table, which allows a membership function of a rule to be used more than once in the fuzzy rule base.
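A generic gradient-descent update for an RBFNN of the kind described can be sketched as follows; this is not the authors' fuzzy-rule coupling, and the toy target function, widths, and learning rate are assumptions:

```python
import numpy as np

def rbf(x, c, s):
    """Gaussian radial basis activations for a scalar input x."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

# fit y = sin(x) on [0, pi] with a small RBF network trained by
# gradient descent on the squared error (weights and centers updated)
X = np.linspace(0, np.pi, 40)
Y = np.sin(X)
c = np.linspace(0, np.pi, 5)       # centers
s = 0.5                            # fixed width
w = np.zeros(5)                    # output weights
lr = 0.05
for _ in range(3000):
    for x, y in zip(X, Y):
        phi = rbf(x, c, s)
        err = (w @ phi) - y
        w -= lr * err * phi                          # dE/dw
        c -= lr * err * w * phi * (x - c) / s ** 2   # dE/dc (chain rule)
pred = np.array([w @ rbf(x, c, s) for x in X])
```

In a neuro-fuzzy system the same chain-rule updates tune the membership-function parameters instead of generic RBF centers.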
A Dependable Localization Algorithm for Survivable Belt-Type Sensor Networks.
Zhu, Mingqiang; Song, Fei; Xu, Lei; Seo, Jung Taek; You, Ilsun
2017-11-29
As a key element, sensor networks are widely investigated by the Internet of Things (IoT) community. When massive numbers of devices are well connected, malicious attackers may deliberately propagate fake position information to confuse ordinary users and lower the network survivability in belt-type situations. However, most existing positioning solutions focus only on algorithm accuracy and do not consider any security aspects. In this paper, we propose a comprehensive scheme for node localization protection, which aims to improve energy efficiency, reliability, and accuracy. To handle unbalanced resource consumption, a node deployment mechanism is presented to satisfy the energy balancing strategy in resource-constrained scenarios. According to cooperative localization theory and network connectivity properties, a parameter estimation model is established. To achieve reliable estimates and eliminate large errors, an improved localization algorithm is created based on modified average hop distances. To further improve the algorithm, the node positioning accuracy is enhanced using the steepest descent method. Experimental simulations illustrate that the performance of the new scheme meets the stated targets. The results also demonstrate that it improves belt-type sensor networks' survivability in terms of anti-interference, network energy saving, etc.
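The steepest-descent position refinement mentioned above can be sketched generically as minimizing the sum of squared range residuals to known anchors; the anchor layout, ranges, and step size below are illustrative assumptions:

```python
import numpy as np

def refine_position(p0, anchors, dists, lr=0.1, iters=500):
    """Steepest-descent refinement of a node position estimate from
    (possibly hop-based) distance estimates to known anchors.
    Minimizes f(p) = sum_i (||p - a_i|| - d_i)^2."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        grad = np.zeros_like(p)
        for a, d in zip(anchors, dists):
            diff = p - a
            r = np.linalg.norm(diff)
            grad += 2 * (r - d) * diff / max(r, 1e-12)  # gradient of one term
        p -= lr * grad
    return p

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_p = np.array([3.0, 4.0])
dists = np.linalg.norm(anchors - true_p, axis=1)   # noiseless ranges
p_hat = refine_position([5.0, 5.0], anchors, dists)
```

In the paper's setting the input distances come from modified average hop counts rather than true ranges, so the descent corrects the coarse hop-based estimate.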
A different approach to estimate nonlinear regression model using numerical methods
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.
2017-11-01
This research paper is concerned with computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus are used to discuss the gradient algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article instead takes an analytical approach to the gradient algorithm methods. The paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].
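The Gauss-Newton iteration discussed above has the standard form sketched below; the exponential-decay model, starting values, and data are illustrative assumptions, not the paper's examples:

```python
import numpy as np

def gauss_newton(f, jac, beta0, x, y, iters=20):
    """Gauss-Newton for nonlinear least squares:
    beta <- beta + (J^T J)^{-1} J^T r, with residuals r = y - f(x, beta)
    and J the Jacobian of f with respect to beta."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(iters):
        r = y - f(x, beta)
        J = jac(x, beta)
        beta = beta + np.linalg.solve(J.T @ J, J.T @ r)
    return beta

# exponential decay model y = b0 * exp(-b1 * x)
f = lambda x, b: b[0] * np.exp(-b[1] * x)
jac = lambda x, b: np.column_stack([np.exp(-b[1] * x),
                                    -b[0] * x * np.exp(-b[1] * x)])
x = np.linspace(0, 4, 30)
y = f(x, [2.0, 0.7])                       # noiseless synthetic data
beta_hat = gauss_newton(f, jac, [1.5, 0.5], x, y)
```

Replacing the step with `lr * J.T @ r` recovers steepest descent on the same objective, which makes the trade-off between the two families concrete: Gauss-Newton uses curvature information from `J.T @ J` and converges in far fewer iterations near the solution.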
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider maximizing the likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only gives more accurate, or at least comparable, estimation but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
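The coordinate descent step used inside the EM algorithm above can be illustrated with a simpler stand-in: cyclic coordinate descent with soft-thresholding for a LASSO-penalized linear model (the paper applies it to penalized weighted negative binomial and logistic models; this sketch shows only the basic update, and all data below are synthetic assumptions):

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the L1 penalty."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]     # partial residual
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
beta_true = np.array([3.0, 0.0, -2.0, 0.0, 0.0])
y = X @ beta_true
b_hat = lasso_cd(X, y, lam=0.1)
```

The penalty shrinks the two large coefficients slightly toward zero and drives the truly-zero coefficients to (near) zero, which is the variable-selection effect the paper exploits.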
NASA Technical Reports Server (NTRS)
Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver
2012-01-01
Large aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm was recently developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part describes the full details of the algorithm validation process on the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of AWCT to ensure data accuracy and reliability is illustrated. Then, several carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, fiducial calibration using the Range-Gate Metrology technique is carried out, and a <10 nm, or <1%, algorithm accuracy is demonstrated.
Optimization of a mirror-based neutron source using differential evolution algorithm
NASA Astrophysics Data System (ADS)
Yurov, D. V.; Prikhodko, V. V.
2016-12-01
This study assesses the capabilities of the gas-dynamic trap (GDT) and the gas-dynamic multiple-mirror trap (GDMT) as potential neutron sources for subcritical hybrids. In mathematical terms, the problem is formulated as determining the global maximum of the fusion gain (Q_pl), the latter represented as a function of trap parameters. A differential evolution method is applied to perform the search. All calculations consider a neutron source configuration with a 20 m distance between the mirrors and 100 MW heating power. It is important to mention that the numerical study also takes into account a number of constraints on plasma characteristics, so as to ensure the physical credibility of the trap configurations found. According to the results obtained, the traps considered demonstrate fusion gain up to 0.2, depending on the constraints applied. This enables them to be used either as neutron sources within subcritical reactors for minor actinide incineration or as material-testing facilities.
NASA Technical Reports Server (NTRS)
2013-01-01
Topics covered include: Remote Data Access with IDL; Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters; Vectorized Rebinning Algorithm for Fast Data Down-Sampling; Display Provides Pilots with Real-Time Sonic-Boom Information; Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery; Monitoring and Acquisition Real-time System (MARS); Analog Signal Correlating Using an Analog-Based Signal Conditioning Front End; Micro-Textured Black Silicon Wick for Silicon Heat Pipe Array; Robust Multivariable Optimization and Performance Simulation for ASIC Design; Castable Amorphous Metal Mirrors and Mirror Assemblies; Sandwich Core Heat-Pipe Radiator for Power and Propulsion Systems; Apparatus for Pumping a Fluid; Cobra Fiber-Optic Positioner Upgrade; Improved Wide Operating Temperature Range of Li-Ion Cells; Non-Toxic, Non-Flammable, -80 C Phase Change Materials; Soft-Bake Purification of SWCNTs Produced by Pulsed Laser Vaporization; Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models; Hand-Based Biometric Analysis; The Next Generation of Cold Immersion Dry Suit Design Evolution for Hypothermia Prevention; Integrated Lunar Information Architecture for Decision Support Version 3.0 (ILIADS 3.0); Relay Forward-Link File Management Services (MaROS Phase 2); Two Mechanisms to Avoid Control Conflicts Resulting from Uncoordinated Intent; XTCE GOVSAT Tool Suite 1.0; Determining Temperature Differential to Prevent Hardware Cross-Contamination in a Vacuum Chamber; SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws; Remote Data Exploration with the Interactive Data Language (IDL); Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals; Partitioned-Interval Quantum Optical Communications Receiver; and Practical UAV Optical Sensor Bench with Minimal Adjustability.
NASA Astrophysics Data System (ADS)
Roberts, Randy S.; Bliss, Erlan S.; Rushford, Michael C.; Halpin, John M.; Awwal, Abdul A. S.; Leach, Richard R.
2014-09-01
The Advanced Radiographic Capability (ARC) at the National Ignition Facility (NIF) is a laser system designed to produce a sequence of short pulses used to backlight imploding fuel capsules. Laser pulses from a short-pulse oscillator are dispersed in wavelength into long, low-power pulses, injected into the NIF main laser for amplification, and then compressed into high-power pulses before being directed into the NIF target chamber. There, the laser pulses hit targets that produce x-rays used to backlight imploding fuel capsules. Compression of the ARC laser pulses is accomplished with a set of precision-surveyed optical gratings mounted inside vacuum vessels. The tilt of each grating is monitored by a measurement system consisting of a laser diode, camera, and crosshair, all mounted in a pedestal outside the vacuum vessel, and a mirror mounted on the back of a grating inside the vessel. The crosshair is mounted in front of the camera, and a diffraction pattern forms when it is illuminated by the laser diode beam reflected from the mirror. This diffraction pattern contains information about relative movements between the grating and the pedestal. Image analysis algorithms have been developed to determine these relative movements. In this paper we elaborate on features in the diffraction pattern and describe the image analysis algorithms used to monitor grating tilt changes. Experimental results are provided that indicate the high degree of sensitivity achieved by the tilt sensor and image analysis algorithms.
How to define pathologic pelvic floor descent in MR defecography during defecation?
Schawkat, Khoschy; Heinrich, Henriette; Parker, Helen L; Barth, Borna K; Mathew, Rishi P; Weishaupt, Dominik; Fox, Mark; Reiner, Caecilia S
2018-06-01
To assess the extent of pelvic floor descent during both the maximal straining phase and the defecation phase in healthy volunteers and in patients with pelvic floor disorders studied with MR defecography (MRD), and to define specific threshold values for pelvic floor descent during the defecation phase. Twenty-two patients (mean age 51 ± 19.4) with obstructed defecation and 20 healthy volunteers (mean age 33.4 ± 11.5) underwent 3.0T MRD in the supine position using midsagittal T2-weighted images. Two radiologists performed measurements relative to the pubococcygeal line (PCL) during straining and during defecation. To identify cutoff values of pelvic floor measurements for the diagnosis of pathologic pelvic floor descent [anterior, middle, and posterior compartments (AC, MC, PC)], receiver-operating characteristic (ROC) curves were plotted. Pelvic floor descent of all three compartments was significantly larger during defecation than at straining in both patients and healthy volunteers (p < 0.002). When grading pelvic floor descent in the straining phase, only two healthy volunteers (10%) showed moderate PC descent, which is considered pathologic. However, when applying the grading system during defecation, PC descent was overestimated, with 50% of the healthy volunteers (10 of 20) showing moderate PC descent. The AUC for PC measurements during defecation was 0.77 (p = 0.003) and suggests a cutoff value of 45 mm below the PCL to identify patients with pathologic PC descent. With the adapted cutoff, only 15% of healthy volunteers show pathologic PC descent during defecation. MRD measurements during straining and defecation can be used to differentiate patients with pelvic floor dysfunction from healthy volunteers. However, different cutoff values should be used during straining and during defecation to define normal or pathologic PC descent.
Evaluation of pelvic descent disorders by dynamic contrast roentgenography.
Takano, M; Hamada, A
2000-10-01
For precise diagnosis and rational treatment of the increasing number of patients with descent of intrapelvic organ(s) and anatomic plane(s), dynamic contrast roentgenography of multiple intrapelvic organs and planes is described. Sixty-six patients, consisting of 11 males, with a mean age (+/- standard deviation) of 65.6+/-14.2 years and with chief complaints of intrapelvic organ and perineal descent or defecation problems, were examined in this study. Dynamic contrast roentgenography was obtained by opacifying the ileum, urinary bladder, vagina, rectum, and the perineum. Films were taken at both squeeze and strain phases. On the films the lowest points of each organ and plane were plotted, and the distances from the standard line drawn at the upper surface of the sacrum were measured. The values were corrected to percentages according to the height of the sacrococcygeal bone of each patient. From these corrected values, organ or plane descents at strain and squeeze were diagnosed and graphically demonstrated as a descentgram in each patient. Among 17 cases with subjective symptoms of bladder descent, 9 cases (52.9 percent) showed roentgenographic descent. By the same token, among the cases with subjective feeling of descent of the vagina, uterus, peritoneum, perineum, rectum, and anus, roentgenographic descent was confirmed in 15 of 20 (75 percent), 7 of 9 (77.8 percent), 6 of 16 (37.5 percent), 33 of 33 (100 percent), 25 of 37 (67.6 percent), and 22 of 36 (61.6 percent), respectively. The descentgrams were divided into three patterns: anorectal descent type, female genital descent type, and total organ descent type. Dynamic contrast roentgenography and successive descentgraphy of multiple intrapelvic organs and planes are useful for objective diagnosis and rational treatment of patients with descent disorders of the intrapelvic organ(s) and plane(s).
Absolute measurements of large mirrors
NASA Astrophysics Data System (ADS)
Su, Peng
The ability to produce mirrors for large astronomical telescopes is limited by the accuracy of the systems used to test their surfaces. Typically the mirror surfaces are measured by comparing their actual shapes to a precision master, which may be created using combinations of mirrors, lenses, and holograms. The work presented here develops several optical testing techniques that do not rely on a large or expensive precision master reference surface; in a sense, these techniques provide absolute optical testing. The Giant Magellan Telescope (GMT) has been designed with a 350 m² collecting area provided by a 25 m diameter primary mirror made from seven independent circular mirror segments. These segments create an equivalent f/0.7 paraboloidal primary mirror consisting of a central segment and six outer segments. Each of the outer segments is 8.4 m in diameter and has an off-axis aspheric shape departing 14.5 mm from the best-fitting sphere. Much of the work in this dissertation is motivated by the need to measure the surfaces of such large mirrors accurately, without relying on a large or expensive precision reference surface. One method for absolute testing described in this dissertation uses multiple measurements relative to a reference surface that is located in different positions with respect to the test surface of interest. The test measurements are processed with an algorithm based on the maximum likelihood (ML) method. Methodologies were specifically developed for measuring large flat surfaces in the 2 m diameter range and for measuring the GMT primary mirror segments. For example, the optical figure of a 1.6 m flat mirror was determined to 2 nm rms accuracy using multiple 1 m sub-aperture measurements; the optical figure of the reference surface used in those sub-aperture measurements was also determined to the 2 nm level.
The optical test methodology for a 1.7 m off-axis parabola was evaluated by moving the mirror under test several times relative to the test system. The result was a separation of the errors of the optical test system from those of the mirror under test; this method proved accurate to 12 nm rms. Another absolute measurement technique discussed in this dissertation exploits the property of a paraboloidal surface of reflecting rays parallel to its optical axis to its focal point. We have developed a scanning pentaprism technique that exploits this geometry to measure off-axis paraboloidal mirrors such as the GMT segments. The technique was demonstrated on a 1.7 m diameter prototype and proved to have a precision of about 50 nm rms.
Performance Characterization of a Landmark Measurement System for ARRM Terrain Relative Navigation
NASA Technical Reports Server (NTRS)
Shoemaker, Michael A.; Wright, Cinnamon; Liounis, Andrew J.; Getzandanner, Kenneth M.; Van Eepoel, John M.; DeWeese, Keith D.
2016-01-01
This paper describes the landmark measurement system being developed for terrain relative navigation on NASA's Asteroid Redirect Robotic Mission (ARRM), and the results of a performance characterization study given realistic navigational and model errors. The system is called Retina, and is derived from the stereo-photoclinometry methods widely used on other small-body missions. The system is simulated using synthetic imagery of the asteroid surface, and discussion is given on various algorithmic design choices. Unlike other missions, ARRM's Retina is the first planned autonomous use of these methods during the close-proximity and descent phase of a mission.
Kurtosis Approach for Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
Deep kernel learning method for SAR image target recognition
NASA Astrophysics Data System (ADS)
Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao
2017-10-01
With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.
Analysis of various descent trajectories for a hypersonic-cruise, cold-wall research airplane
NASA Technical Reports Server (NTRS)
Lawing, P. L.
1975-01-01
The probable descent operating conditions for a hypersonic air-breathing research airplane were examined. Descents selected were cruise angle of attack, high dynamic pressure, high lift coefficient, turns, and descents with drag brakes. The descents were parametrically exercised and compared from the standpoint of cold-wall (367 K) aircraft heat load. The descent parameters compared were total heat load, peak heating rate, time to landing, time to end of heat pulse, and range. Trends in total heat load as a function of cruise Mach number, cruise dynamic pressure, angle-of-attack limitation, pull-up g-load, heading angle, and drag-brake size are presented.
NASA Technical Reports Server (NTRS)
1980-01-01
The results of three nonlinear Monte Carlo dispersion analyses for the Space Transportation System 1 Flight (STS-1) Orbiter Descent Operational Flight Profile, Cycle 3, are presented. Fifty randomly selected simulations are analyzed for each of the end-of-mission (EOM) descent, the abort-once-around (AOA) descent targeted to the steep target line, and the AOA descent targeted to the shallow target line. These analyses compare the flight environment with system and operational constraints on the flight environment and, in some cases, use simplified system models as an aid in assessing the STS-1 descent flight profile. In addition, descent flight envelopes are provided as a database for use by system specialists to determine flight readiness for STS-1. The results of these dispersion analyses supersede the results of the dispersion analysis previously documented.
Development of an Interval Management Algorithm Using Ground Speed Feedback for Delayed Traffic
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Swieringa, Kurt A.; Underwood, Matthew C.; Abbott, Terence; Leonard, Robert D.
2016-01-01
One of the goals of NextGen is to enable frequent use of Optimized Profile Descents (OPD) for aircraft, even during periods of peak traffic demand. NASA is currently testing three new technologies that enable air traffic controllers to use speed adjustments to space aircraft during arrival and approach operations. This allows an aircraft to remain close to its OPD. During the integration of these technologies, it was discovered that, due to a lack of accurate trajectory information for the leading aircraft, Interval Management aircraft were exhibiting poor behavior. NASA's Interval Management algorithm was modified to address the impact of inaccurate trajectory information, and a series of studies was performed to assess the impact of this modification. These studies show that the modification provided some improvement when the Interval Management system lacked accurate trajectory information for the leading aircraft.
Material parameter estimation with terahertz time-domain spectroscopy.
Dorney, T D; Baraniuk, R G; Mittleman, D M
2001-07-01
Imaging systems based on terahertz (THz) time-domain spectroscopy offer a range of unique modalities owing to the broad bandwidth, subpicosecond duration, and phase-sensitive detection of the THz pulses. Furthermore, the possibility exists for combining spectroscopic characterization or identification with imaging because the radiation is broadband in nature. To achieve this, we require novel methods for real-time analysis of THz waveforms. This paper describes a robust algorithm for extracting material parameters from measured THz waveforms. Our algorithm simultaneously obtains both the thickness and the complex refractive index of an unknown sample under certain conditions. In contrast, most spectroscopic transmission measurements require knowledge of the sample's thickness for an accurate determination of its optical parameters. Our approach relies on a model-based estimation, a gradient descent search, and the total variation measure. We explore the limits of this technique and compare the results with literature data for optical parameters of several different materials.
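The gradient descent search underlying such model-based estimation can be sketched generically. The two-parameter objective below (a "thickness" and an "index") is a toy stand-in for the waveform-mismatch cost; the variable names, objective, and finite-difference gradient are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def grad_descent(cost, x0, lr=0.1, iters=300, h=1e-6):
    """Minimize cost(x) by gradient descent, estimating the gradient
    with central finite differences (model-based parameter search)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (cost(x + e) - cost(x - e)) / (2 * h)
        x -= lr * g
    return x

# Toy objective: squared mismatch between "measured" parameters
# (thickness d*, index n*) and trial values -- a stand-in for a
# full THz waveform-fit objective.
target = np.array([0.5, 1.8])                 # hypothetical (d, n)
cost = lambda x: np.sum((x - target) ** 2)
est = grad_descent(cost, x0=[1.0, 1.0])
```

In the actual application, the cost would compare a propagation-model waveform against the measured THz waveform, possibly with a total-variation term, but the descent loop has the same shape.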
2014-01-01
Background Hemoglobin Shepherds Bush (Human Genome Variation Society name: HBB:c.224G > A) is an unstable hemoglobin variant resulting from a β 74 GGC to GAC mutation (Gly to Asp) that manifests clinically as hemolytic anemia or gall bladder disease due to chronic subclinical hemolysis. Case presentation We report a Pennsylvania family of English descent with this condition, first noticed in a 6-year-old female. The proband presented with splenomegaly, fatigue, dark urine and an elevated indirect bilirubin. Hemoglobin identification studies and subsequent genetic testing performed according to a systematic algorithm elucidated the diagnosis of Hb Shepherds Bush. Conclusions This is the first case of this rare hemoglobin variant identified in North America to our knowledge. It was identified using a systematic algorithm of diagnostic tests that should be followed whenever considering a rare hemoglobinopathy as part of the differential diagnosis. PMID:24428873
Active semi-supervised learning method with hybrid deep belief networks.
Zhou, Shusen; Chen, Qingcai; Wang, Xiaolong
2014-01-01
In this paper, we develop a novel semi-supervised learning algorithm called active hybrid deep belief networks (AHD) to address the semi-supervised sentiment classification problem with deep learning. First, we construct the first several hidden layers using restricted Boltzmann machines (RBM), which can quickly reduce the dimension and abstract the information of the reviews. Second, we construct the subsequent hidden layers using convolutional restricted Boltzmann machines (CRBM), which can abstract the information of reviews effectively. Third, the constructed deep architecture is fine-tuned by gradient-descent-based supervised learning with an exponential loss function. Finally, an active learning method is combined with the proposed deep architecture. We ran several experiments on five sentiment classification datasets and show that AHD is competitive with previous semi-supervised learning algorithms. Experiments are also conducted to verify the effectiveness of the proposed method with different numbers of labeled and unlabeled reviews.
Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction
Gregor, Jens; Fessler, Jeffrey A.
2015-01-01
Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security. PMID:26478906
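A SIRT-style iteration of the preconditioned-gradient-descent form discussed above can be sketched as follows. The tiny matrix is a stand-in for a real projection operator, and the diagonal inverse row-sum/column-sum preconditioners follow the common SIRT convention; this is an unregularized, unrelaxed sketch, not the paper's modified algorithm:

```python
import numpy as np

def sirt(A, b, iters=500):
    """SIRT as preconditioned gradient descent:
    x <- x + C A^T R (b - A x), with R and C the diagonal inverse
    row sums and column sums of the (nonnegative) system matrix A."""
    R = 1.0 / A.sum(axis=1)          # inverse row sums
    C = 1.0 / A.sum(axis=0)          # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

# Tiny consistent system standing in for a CT projection matrix.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x_hat = sirt(A, b)
```

For a consistent system the iterates converge to the exact solution; the paper's comparison concerns how this preconditioning relates to the separable-surrogate weighting used by SQS.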
Multi-Sensor Registration of Earth Remotely Sensed Imagery
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)
2001-01-01
Assuming that approximate registration is given to within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms in which the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, or mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4 m), Landsat-7/ETM+ (30 m), MODIS (500 m), and SeaWiFS (1000 m).
Gaussian diffusion sinogram inpainting for X-ray CT metal artifact reduction.
Peng, Chengtao; Qiu, Bensheng; Li, Ming; Guan, Yihui; Zhang, Cheng; Wu, Zhongyi; Zheng, Jian
2017-01-05
Metal objects implanted in the bodies of patients usually generate severe streaking artifacts in reconstructed images of X-ray computed tomography, which degrade the image quality and affect the diagnosis of disease. Therefore, it is essential to reduce these artifacts to meet the clinical demands. In this work, we propose a Gaussian diffusion sinogram inpainting metal artifact reduction algorithm based on prior images to reduce these artifacts for fan-beam computed tomography reconstruction. In this algorithm, prior information that originated from a tissue-classified prior image is used for the inpainting of metal-corrupted projections, and it is incorporated into a Gaussian diffusion function. The prior knowledge is particularly designed to locate the diffusion position and improve the sparsity of the subtraction sinogram, which is obtained by subtracting the prior sinogram of the metal regions from the original sinogram. The sinogram inpainting algorithm is implemented through an approach of diffusing prior energy and is then solved by gradient descent. The performance of the proposed metal artifact reduction algorithm is compared with two conventional metal artifact reduction algorithms, namely the interpolation metal artifact reduction algorithm and normalized metal artifact reduction algorithm. The experimental datasets used included both simulated and clinical datasets. By evaluating the results subjectively, the proposed metal artifact reduction algorithm causes fewer secondary artifacts than the two conventional metal artifact reduction algorithms, which lead to severe secondary artifacts resulting from impertinent interpolation and normalization. Additionally, the objective evaluation shows the proposed approach has the smallest normalized mean absolute deviation and the highest signal-to-noise ratio, indicating that the proposed method has produced the image with the best quality. 
For both the simulated and the clinical datasets, the proposed algorithm clearly reduced the metal artifacts.
Autonomous spacecraft landing through human pre-attentive vision.
Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E
2012-06-01
In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft onto extraterrestrial bodies. Providing spacecraft with high degrees of autonomy is a challenge for future space missions. To date, major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft with optimality principles that guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence, and timing of the retargeting.
Relative Terrain Imaging Navigation (RETINA) Tool for the Asteroid Redirect Robotic Mission (ARRM)
NASA Technical Reports Server (NTRS)
Wright, Cinnamon A.; Van Eepoel, John; Liounis, Andrew; Shoemaker, Michael; DeWeese, Keith; Getzandanner, Kenneth
2016-01-01
As a part of the NASA initiative to collect a boulder off of an asteroid and return it to Lunar orbit, the Satellite Servicing Capabilities Office (SSCO) and NASA GSFC are developing an on-board relative terrain imaging navigation algorithm for the Asteroid Redirect Robotic Mission (ARRM). After performing several flybys and dry runs to verify and refine the shape, spin, and gravity models and obtain centimeter level imagery, the spacecraft will descend to the surface of the asteroid to capture a boulder and return it to Lunar Orbit. The algorithm implements Stereophotoclinometry methods to register landmarks with images taken onboard the spacecraft, and use these measurements to estimate the position and orientation of the spacecraft with respect to the asteroid. This paper will present an overview of the ARRM GN&C system and concept of operations as well as a description of the algorithm and its implementation. These techniques will be demonstrated for the descent to the surface of the proposed asteroid of interest, 2008 EV5, and preliminary results will be shown.
NASA Astrophysics Data System (ADS)
Huang, Mingzhi; Zhang, Tao; Ruan, Jujun; Chen, Xiaohong
2017-01-01
A new efficient hybrid intelligent approach based on a fuzzy wavelet neural network (FWNN) was proposed for effectively modeling and simulating the biodegradation process of dimethyl phthalate (DMP) in an anaerobic/anoxic/oxic (AAO) wastewater treatment process. With the self-learning and memory abilities of neural networks (NN), the uncertainty-handling capacity of fuzzy logic (FL), the local-detail analysis of the wavelet transform (WT), and the global search of the genetic algorithm (GA), the proposed hybrid intelligent model can extract the dynamic behavior and complex interrelationships among various water quality variables. To find optimal values for the parameters of the proposed FWNN, a hybrid learning algorithm integrating improved genetic optimization and a gradient descent algorithm is employed. The results show that, compared with the NN model (optimized by GA) and the kinetic model, the proposed FWNN model has faster convergence, higher prediction performance, and smaller RMSE (0.080), MSE (0.0064), and MAPE (1.8158), with a higher R2 (0.9851), which illustrates that the FWNN model simulates effluent DMP more accurately than the mechanism model. PMID:28120889
Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime
2018-03-22
This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, this kind of robot is an underdetermined system: it admits an infinite number of solutions to the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD) and named natural-CCD is proposed to address this issue. It takes its name from the fact that it generates very harmonious robot movements and trajectories that also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
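The textbook CCD update that natural-CCD builds on can be sketched for a planar serial arm: sweep the joints from tip to base, rotating each joint so the joint-to-tip vector aligns with the joint-to-target vector. The per-joint scaling that distinguishes natural-CCD is omitted; link lengths and the target are illustrative:

```python
import math

def fk(angles, lengths):
    """Forward kinematics: joint positions of a planar serial arm."""
    x = y = th = 0.0
    pts = [(0.0, 0.0)]
    for a, L in zip(angles, lengths):
        th += a
        x += L * math.cos(th)
        y += L * math.sin(th)
        pts.append((x, y))
    return pts

def ccd_ik(angles, lengths, target, iters=100, tol=1e-6):
    """Plain CCD: rotate each joint (tip to base) to point the end
    effector at the target; repeat until convergence."""
    angles = list(angles)
    for _ in range(iters):
        if math.dist(fk(angles, lengths)[-1], target) < tol:
            break
        for j in range(len(angles) - 1, -1, -1):
            pts = fk(angles, lengths)
            jx, jy = pts[j]
            tx, ty = pts[-1]
            to_tip = math.atan2(ty - jy, tx - jx)
            to_target = math.atan2(target[1] - jy, target[0] - jx)
            angles[j] += to_target - to_tip  # align tip with target
    return angles

angles = ccd_ik([0.3, 0.3, 0.3], [1.0, 1.0, 1.0], (1.5, 1.5))
```

Each inner step is a closed-form one-dimensional rotation, which is why CCD scales cheaply to many degrees of freedom.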
NASA Astrophysics Data System (ADS)
Xu, Ye; Wang, Ling; Wang, Shengyao; Liu, Min
2014-09-01
In this article, an effective hybrid immune algorithm (HIA) is presented to solve the distributed permutation flow-shop scheduling problem (DPFSP). First, a decoding method is proposed to transfer a job permutation sequence to a feasible schedule considering both factory dispatching and job sequencing. Secondly, a local search with four search operators is presented based on the characteristics of the problem. Thirdly, a special crossover operator is designed for the DPFSP, and mutation and vaccination operators are also applied within the framework of the HIA to perform an immune search. The influence of parameter setting on the HIA is investigated based on the Taguchi method of design of experiment. Extensive numerical testing results based on 420 small-sized instances and 720 large-sized instances are provided. The effectiveness of the HIA is demonstrated by comparison with some existing heuristic algorithms and the variable neighbourhood descent methods. New best known solutions are obtained by the HIA for 17 out of 420 small-sized instances and 585 out of 720 large-sized instances.
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches, since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent, and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
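The dualization step can be illustrated with a toy version: a one-stage menu of actions stands in for the full DP sweep over primal variables, and bisection (a simple exponentially converging root-finder) tunes the dual variable until the risk bound is met. The action costs and failure probabilities below are invented for illustration:

```python
def dual_solve(actions, risk_bound, lo=0.0, hi=1000.0, iters=60):
    """actions: list of (cost, failure_prob). Inner problem: pick the action
    minimizing the Lagrangian cost + lam * failure_prob (stands in for the
    DP optimization of the primal variables). Outer problem: bisect on the
    dual variable lam until the risk bound is satisfied tightly."""
    def best(lam):
        return min(actions, key=lambda a: a[0] + lam * a[1])

    for _ in range(iters):
        lam = (lo + hi) / 2
        if best(lam)[1] > risk_bound:
            lo = lam   # too risky: raise the price of risk
        else:
            hi = lam   # feasible: try a cheaper price
    return best(hi)

# Hypothetical menu: cheap-but-risky through expensive-but-safe.
actions = [(1.0, 0.30), (2.0, 0.10), (5.0, 0.01)]
cost, risk = dual_solve(actions, risk_bound=0.05)
```

Because `hi` is only lowered when the corresponding policy is feasible, the returned policy always respects the risk bound.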
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Chen, Lin; Jia, Ji-Ting; Zhang, Qiong; Deng, Wan-Yu; Wei, Wei
2016-01-01
We propose a simple online learning algorithm especially suited for high-dimensional data. The algorithm, referred to as the online sequential projection vector machine (OSPVM), derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters, including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes, can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, which makes the algorithm easy to use in real applications. Performance comparisons were made on various high-dimensional classification problems for OSPVM against other fast online algorithms, including the budgeted stochastic gradient descent (BSGD) approach, the adaptive multihyperplane machine (AMM), the primal estimated subgradient solver (Pegasos), the online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD performed before OSELM). The results demonstrate the superior generalization performance and efficiency of OSPVM. PMID:27143958
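The "adaptive data mean update" named in the title amounts to a recursive mean that accepts one-by-one or chunk-by-chunk arrivals. A minimal sketch of that update alone (the full OSPVM also refreshes the projection vectors and network weights, which is omitted here):

```python
def update_mean(mean, count, chunk):
    """Recursive mean update for streaming data:
    new_mean = (count * mean + sum(chunk)) / (count + len(chunk)).
    mean: current per-feature mean; count: samples seen so far;
    chunk: list of new sample rows (a chunk of size 1 is one-by-one mode)."""
    total = count + len(chunk)
    new_mean = [
        (count * m + sum(row[i] for row in chunk)) / total
        for i, m in enumerate(mean)
    ]
    return new_mean, total

# Three 2-feature samples arriving as a chunk of two, then a chunk of one.
mean, n = [0.0, 0.0], 0
for chunk in ([[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0]]):
    mean, n = update_mean(mean, n, chunk)
```

The same formula covers both arrival modes, which is what lets data centering stay consistent as the stream grows.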
High-Speed Printing Process Characterization using the Lissajous Trajectory Method
NASA Astrophysics Data System (ADS)
Lee, Sangwon; Kim, Daekeun
2018-04-01
We present a novel stereolithographic three-dimensional (3D) printing process that uses Lissajous trajectories. By using Lissajous trajectories, this 3D printing process allows two laser-scanning mirrors to operate simultaneously at similar high-speed frequencies, and the printing speed can be faster than that of the raster scanning used in conventional stereolithography. In this paper, we first propose the basic theoretical background for this printing process based on Lissajous trajectories. We also characterize its printing conditions, such as printing size, laser spot size, and minimum printing resolution, with respect to the operating frequencies of the scanning mirrors and the capability of the laser modulation. Finally, we demonstrate simulation results for printing basic 2D shapes using a novel printing-process algorithm.
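A Lissajous scan is produced by driving the two mirrors with sinusoids at different frequencies. A minimal sketch of sampling such a trajectory, with unit scan amplitude and illustrative frequencies (not the paper's operating parameters):

```python
import math

def lissajous(fx, fy, phase, n, rate):
    """Sample a Lissajous scan: x = sin(2*pi*fx*t), y = sin(2*pi*fy*t + phase).
    fx, fy: the two mirror frequencies (Hz); rate: samples per second."""
    pts = []
    for k in range(n):
        t = k / rate
        pts.append((math.sin(2 * math.pi * fx * t),
                    math.sin(2 * math.pi * fy * t + phase)))
    return pts

# One second of a 3:4 pattern at 1 kHz sampling (illustrative values).
pts = lissajous(3.0, 4.0, 0.0, 1000, 1000.0)
```

For integer frequencies the pattern repeats with period 1/gcd(fx, fy), so the frequency ratio and phase control how densely the scan covers the build area.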
Phase retrieval using a modified Shack-Hartmann wavefront sensor with defocus.
Li, Changwei; Li, Bangming; Zhang, Sijiong
2014-02-01
This paper proposes a modified Shack-Hartmann wavefront sensor for phase retrieval. The sensor is revamped by placing a detector at a defocused plane before the focal plane of the lenslet array of the Shack-Hartmann sensor. The algorithm for phase retrieval is an optimization with initial Zernike coefficients calculated by the conventional phase reconstruction of the Shack-Hartmann sensor. Numerical simulations show that the proposed sensor permits sensitive, accurate phase retrieval. Furthermore, experiments tested the feasibility of phase retrieval using the proposed sensor. The surface irregularity for a flat mirror was measured by the proposed method and a Veeco interferometer, respectively. The irregularity for the mirror measured by the proposed method is in very good agreement with that measured using the Veeco interferometer.
Adaptive optics system application for solar telescope
NASA Astrophysics Data System (ADS)
Lukin, V. P.; Grigor'ev, V. M.; Antoshkin, L. V.; Botugina, N. N.; Emaleev, O. N.; Konyaev, P. A.; Kovadlo, P. G.; Krivolutskiy, N. P.; Lavrionova, L. N.; Skomorovski, V. I.
2008-07-01
The possibility of applying adaptive correction to ground-based solar astronomy is considered. Several experimental systems for image stabilization are described, along with the results of their tests. Drawing on several years of our own work and on worldwide experience in solar adaptive optics (AO), we expect to obtain first light by the end of 2008 for the first Russian low-order ANGARA solar AO system on the Big Solar Vacuum Telescope (BSVT), with a 37-subaperture Shack-Hartmann wavefront sensor based on our modified correlation-tracking algorithm, a DALSTAR video camera, a 37-element deformable bimorph mirror, and a home-made fast tip-tilt mirror with a separate correlation tracker. Daytime turbulence at the BSVT site is very strong, and we therefore plan to obtain a partial correction for part of the solar surface image.
Spectral radiation analyses of the GOES solar illuminated hexagonal cell scan mirror back
NASA Technical Reports Server (NTRS)
Fantano, Louis G.
1993-01-01
A ray tracing analytical tool has been developed for the simulation of spectral radiation exchange in complex systems. Algorithms are used to account for heat source spectral energy, surface directional radiation properties, and surface spectral absorptivity properties. This tool has been used to calculate the effective solar absorptivity of the geostationary operational environmental satellites (GOES) scan mirror in the calibration position. The development and design of Sounder and Imager instruments on board GOES is reviewed and the problem of calculating the effective solar absorptivity associated with the GOES hexagonal cell configuration is presented. The analytical methodology based on the Monte Carlo ray tracing technique is described and results are presented and verified by experimental measurements for selected solar incidence angles.
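One ingredient of such a tool, the spectral-energy weighting of rays, can be sketched in isolation: each Monte Carlo ray carries a wavelength drawn over the band, is weighted by the source spectrum, and is absorbed with probability given by the surface's spectral absorptivity. This omits the geometric multi-bounce tracing through the hexagonal cells; the `alpha` and `source` models below are illustrative stand-ins:

```python
import random

def effective_absorptivity(alpha, source, lam_lo, lam_hi, n=200000, seed=1):
    """MC estimate of effective absorptivity: launch rays at random
    wavelengths, weight each by the heat-source spectral energy, and
    count the absorbed fraction of the weighted energy.
    alpha(lam): surface spectral absorptivity in [0, 1];
    source(lam): relative source spectral energy;
    lam_lo, lam_hi: wavelength band (microns)."""
    rng = random.Random(seed)
    absorbed = weight = 0.0
    for _ in range(n):
        lam = rng.uniform(lam_lo, lam_hi)
        w = source(lam)            # spectral-energy weight of this ray
        weight += w
        if rng.random() < alpha(lam):
            absorbed += w
    return absorbed / weight

# Flat illustrative models: constant absorptivity, flat source spectrum.
est = effective_absorptivity(lambda lam: 0.3, lambda lam: 1.0, 0.25, 2.5)
```

With wavelength-dependent `alpha` and a solar `source` spectrum, the same estimator yields the spectrally weighted effective solar absorptivity; the geometric cavity effect would enter by replacing the single absorption test with a bounce loop.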
NASA Technical Reports Server (NTRS)
Give'on, Amir; Kern, Brian D.; Shaklan, Stuart
2011-01-01
In this paper we describe the reconstruction of the complex electric field from image-plane intensity measurements for high-contrast coronagraphic imaging. A deformable mirror (DM) surface is modified with pairs of complementary shapes to create diversity in the image plane of the science camera, where the intensity of the light is measured. Along with the electric field conjugation correction algorithm, this estimation method has been used in various high-contrast imaging testbeds to achieve the best contrasts to date, both in narrow-band and in broad-band light. We present the basic methodology of the estimation as an easy-to-follow list of steps, present results from HCIT, and raise several open questions we are confronted with when using this method.
Solar Thermal Concept Evaluation
NASA Technical Reports Server (NTRS)
Hawk, Clark W.; Bonometti, Joseph A.
1995-01-01
Concentrated solar thermal energy can be utilized in a variety of high-temperature applications in both terrestrial and space environments. In each application, knowledge of the heat-exchange interaction between the collector and the absorber is required. To understand this coupled mechanism, various concentrator types and geometries, as well as their relationship to the physical absorber mechanics, were investigated. To conduct experimental tests, various parts of a 5,000-watt thermal concentrator facility were made and evaluated, in anticipation of a larger NASA facility proposed for construction. Although much of the work centered on solar thermal propulsion for an upper stage (in the less-than-one-pound thrust range), the information generated and the facility's capabilities are applicable to material processing, power generation, and similar uses. The numerical calculations used to design the laboratory mirror and the procedure for evaluating other solar collectors are presented here. The mirror design is based on a hexagonal faceted system, which uses a spherical approximation to the parabolic surface. The work began with a few two-dimensional estimates and continued with a full three-dimensional numerical algorithm written in FORTRAN. This was compared to a full-geometry ray-trace program, BEAM 4, which optimizes the curvatures based on purely optical considerations. From the numerical results, the characteristics of a faceted concentrator were inferred. The numerical methodologies themselves were evaluated and categorized. As a result, the three-dimensional FORTRAN code was the method chosen to construct the mirrors, owing to its overall accuracy and superior results compared to the ray-trace program. This information is being used to fabricate and subsequently laser-map the actual mirror surfaces. Evaluation of concentrator mirrors, thermal applications, and scaling of the results from the 10-foot-diameter mirror to a much larger concentrator were studied.
Evaluations, recommendations, and pitfalls regarding the structure, materials, and facility design are presented.
Gait mode recognition and control for a portable-powered ankle-foot orthosis.
David Li, Yifan; Hsiao-Wecksler, Elizabeth T
2013-06-01
Ankle foot orthoses (AFOs) are widely used as assistive/rehabilitation devices to correct the gait of people with lower leg neuromuscular dysfunction and muscle weakness. We have developed a portable powered ankle-foot orthosis (PPAFO), which uses a pneumatic bi-directional rotary actuator powered by compressed CO2 to provide untethered dorsiflexor and plantarflexor assistance at the ankle joint. Since portability is a key to the success of the PPAFO as an assist device, it is critical to recognize and control for gait modes (i.e. level walking, stair ascent/descent). While manual mode switching is implemented in most powered orthotic/prosthetic device control algorithms, we propose an automatic gait mode recognition scheme by tracking the 3D position of the PPAFO from an inertial measurement unit (IMU). The control scheme was designed to match the torque profile of physiological gait data during different gait modes. Experimental results indicate that, with an optimized threshold, the controller was able to identify the position, orientation and gait mode in real time, and properly control the actuation. It was also illustrated that during stair descent, a mode-specific actuation control scheme could better restore gait kinematic and kinetic patterns, compared to using the level ground controller.
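The mode-recognition step described above reduces to thresholding an IMU-derived motion feature. A minimal sketch using vertical displacement per stride, with a hypothetical threshold value (the PPAFO's actual feature set and tuned thresholds are not reproduced here):

```python
def classify_gait(dz_per_stride, thresh=0.08):
    """Classify gait mode from estimated vertical displacement per stride
    (meters), as tracked by integrating IMU data. thresh is a hypothetical
    tuned value, not the PPAFO's actual parameter."""
    if dz_per_stride > thresh:
        return "stair_ascent"
    if dz_per_stride < -thresh:
        return "stair_descent"
    return "level_walking"
```

The controller would then select the mode-specific torque profile (e.g., earlier plantarflexor assistance during stair descent) based on the returned label.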
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
Product Distribution Theory for Control of Multi-Agent Systems
NASA Technical Reports Server (NTRS)
Lee, Chia Fan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for controlling multi-agent systems (MASs). First we review one motivation of PD theory: it is the information-theoretic extension of conventional full-rationality game theory to the case of bounded-rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. Accordingly, we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario the game is at equilibrium, i.e., the Lagrangian is optimized, when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
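Gradient descent on a maxent-style Lagrangian over a product distribution can be sketched for two binary agents: each agent holds an independent categorical distribution (parametrized by softmax logits), and we descend on expected cost minus temperature times entropy. The cost matrix, temperature, and numeric-gradient scheme below are illustrative, not the paper's setup:

```python
import math

G = [[3.0, 2.0], [2.0, 1.0]]   # hypothetical shared cost over two binary agents
T = 0.5                         # "temperature" of the bounded-rational agents

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def lagrangian(th1, th2):
    """Maxent Lagrangian: expected cost minus T times joint entropy of the
    product distribution q1 x q2."""
    q1, q2 = softmax(th1), softmax(th2)
    exp_cost = sum(q1[i] * q2[j] * G[i][j] for i in range(2) for j in range(2))
    entropy = -sum(p * math.log(p) for p in q1 + q2)
    return exp_cost - T * entropy

def grad_descent(steps=2000, lr=0.1, eps=1e-5):
    th1, th2 = [0.0, 0.0], [0.0, 0.0]
    for _ in range(steps):
        for th in (th1, th2):            # coordinate-wise numeric gradient
            for k in range(2):
                th[k] += eps
                up = lagrangian(th1, th2)
                th[k] -= 2 * eps
                down = lagrangian(th1, th2)
                th[k] += eps
                th[k] -= lr * (up - down) / (2 * eps)
    return softmax(th1), softmax(th2)

q1, q2 = grad_descent()
```

At equilibrium each agent's distribution is Boltzmann in its expected cost given the other agent, so both agents concentrate on the low-cost joint action while the entropy term keeps them mixed.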
Flight Evaluation of Center-TRACON Automation System Trajectory Prediction Process
NASA Technical Reports Server (NTRS)
Williams, David H.; Green, Steven M.
1998-01-01
Two flight experiments (Phase 1 in October 1992 and Phase 2 in September 1994) were conducted to evaluate the accuracy of the Center-TRACON Automation System (CTAS) trajectory prediction process. The Transport Systems Research Vehicle (TSRV) Boeing 737 based at Langley Research Center flew 57 arrival trajectories that included cruise and descent segments; at the same time, descent clearance advisories from CTAS were followed. Actual trajectories of the airplane were compared with the trajectories predicted by the CTAS trajectory synthesis algorithms and airplane Flight Management System (FMS). Trajectory prediction accuracy was evaluated over several levels of cockpit automation that ranged from a conventional cockpit to performance-based FMS vertical navigation (VNAV). Error sources and their magnitudes were identified and measured from the flight data. The major source of error during these tests was found to be the predicted winds aloft used by CTAS. The most significant effect related to flight guidance was the cross-track and turn-overshoot errors associated with conventional VOR guidance. FMS lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot error. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and airplane performance model errors.
The Yearly Variation in Fall-Winter Arctic Winter Vortex Descent
NASA Technical Reports Server (NTRS)
Schoeberl, Mark R.; Newman, Paul A.
1999-01-01
Using the change in HALOE methane profiles from early September to late March, we have estimated the minimum amount of diabatic descent within the polar vortex that takes place during the Arctic winter. The year-to-year variations result from year-to-year variations in stratospheric wave activity, which (1) modify the temperature of the vortex and thus the cooling rate and (2) reduce the apparent descent by mixing high amounts of methane into the vortex. The peak descent amounts from HALOE methane vary from 10 km to 14 km near the arrival altitude of 25 km. Using a diabatic trajectory calculation, we compare forward and backward trajectories over the course of the winter using UKMO assimilated stratospheric data. The forward calculation agrees fairly well with the observed descent. The backward calculation appears to be unable to produce the observed amount of descent, but this is only an apparent effect due to the density decrease of parcels with altitude. Finally, we show the results of unmixed-descent experiments, in which the parcels are fixed in latitude and longitude and allowed to descend based on the local cooling rate. Unmixed descent is found to always exceed mixed descent, because when normal parcel motion is included, the path-averaged cooling is always less than the cooling at a fixed polar point.
Aureole lidar: Design, operation, and comparison with in-situ measurements
NASA Astrophysics Data System (ADS)
Hooper, William P.; Jensen, D. R.
1992-07-01
In 1986, H. Berber and Hooper examined the signals that could be detected by an airborne lidar flying above the marine boundary layer (MBL). One signal (the aureole), formed from laser light returned to the receiver after a reflection off the ocean and forward scattering off aerosol particles, appeared to be both detectable and related to the optical depth of the MBL. Research has since been directed toward developing a practical instrument to measure the aureole and finding an algorithm to use the information. Unlike the lidar backscatter, which typically requires a telescope with a narrow field of view (0.5 mrad), the aureole signal occurs over a wide field of view (50 mrad). To accommodate these very different needs, a standard commercial Cassegrainian telescope was modified to yield a telescope with two focal planes. The secondary mirror was replaced by a lens whose front surface was half-silvered and curved to match the replaced mirror. Light reflecting off the lens focused behind the primary mirror. The back lens surface was curved to allow unreflected light to focus at the natural focus of the primary mirror. The focal plane behind the lens has a wide field of view. To calculate an extinction profile, the aureole optical-depth estimate is combined with the lidar backscatter profile.
James Webb Space Telescope Optical Simulation Testbed: Segmented Mirror Phase Retrieval Testing
NASA Astrophysics Data System (ADS)
Laginja, Iva; Egron, Sylvain; Brady, Greg; Soummer, Remi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Mazoyer, Johan; N’Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand
2018-01-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a hardware simulator designed to produce JWST-like images. A model of the JWST three-mirror anastigmat is realized with three lenses in the form of a Cooke triplet, which provides JWST-like optical quality over a field equivalent to a NIRCam module, and an Iris AO segmented mirror with hexagonal elements stands in for the JWST segmented primary. This setup successfully produces images extremely similar to NIRCam images from cryotesting in terms of the PSF morphology and sampling relative to the diffraction limit. The testbed is used for staff training of the wavefront sensing and control (WFS&C) team and for independent analysis of WFS&C scenarios for JWST. Algorithms like geometric phase retrieval (GPR) that may be used in flight, as well as potential upgrades to JWST WFS&C, will be explored. We report on the current status of the testbed after alignment, implementation of the segmented mirror, and testing of phase retrieval techniques. This optical bench complements other work at the Makidon laboratory at the Space Telescope Science Institute, including the investigation of coronagraphy for segmented-aperture telescopes. Beyond JWST, we intend to use JOST for WFS&C studies for future large segmented space telescopes such as LUVOIR.
Multi-conjugate AO for the European Solar Telescope
NASA Astrophysics Data System (ADS)
Montilla, I.; Béchet, C.; Le Louarn, M.; Tallon, M.; Sánchez-Capuchino, J.; Collados Vera, M.
2012-07-01
The European Solar Telescope (EST) will be a 4-meter-diameter world-class facility, optimized for studies of the magnetic coupling between the deep photosphere and upper chromosphere. It will specialize in high-spatial-resolution observations and has therefore been designed to incorporate an innovative built-in multi-conjugate adaptive optics (MCAO) system. It combines a narrow-field high-order sensor, which provides the information to correct the ground layer, and a wide-field low-order sensor for the high-altitude mirrors used in the MCAO mode. One of the challenging particularities of solar AO is that it has to correct the turbulence for a wide range of observing elevations, from zenith to almost the horizon. Also, seeing is usually worse in the daytime, and most science is done at visible wavelengths. Therefore, the system has to include a large number of high-altitude deformable mirrors; in the case of the EST, an arrangement of four high-altitude DMs is used. Controlling this many mirrors makes it necessary to use fast reconstruction algorithms to deal with the large number of degrees of freedom. For this reason, we have studied the performance of the Fractal Iterative Method (FrIM) and the Fourier Transform Reconstructor (FTR) applied to the EST MCAO case. Using OCTOPUS, the end-to-end simulator of the European Southern Observatory, we have performed several simulations with both algorithms and are able to reach the science requirement of a homogeneous Strehl ratio higher than 50% over the full 1 arcmin field of view.
NASA Astrophysics Data System (ADS)
Mazoyer, J.; Pueyo, L.; N'Diaye, M.; Fogarty, K.; Zimmerman, N.; Leboulleux, L.; St. Laurent, K. E.; Soummer, R.; Shaklan, S.; Norman, C.
2018-01-01
Future searches for bio-markers on habitable exoplanets will rely on telescope instruments that achieve extremely high contrast at small planet-to-star angular separations. Coronagraphy is a promising starlight suppression technique, providing excellent contrast and throughput for off-axis sources on clear apertures. However, the complexity of space- and ground-based telescope apertures continues to increase, owing to the combination of primary mirror segmentation, the secondary mirror, and its support structures. These discontinuities in the telescope aperture limit the coronagraph performance. In this paper, we present ACAD-OSM, a novel active method to correct for the diffractive effects of aperture discontinuities in the final image plane of a coronagraph. Active methods use one or several deformable mirrors that are controlled with an interaction matrix to correct for the aberrations in the pupil. However, they are often limited by the amount of aberrations introduced by aperture discontinuities. This algorithm relies on the recalibration of the interaction matrix during the correction process to overcome this limitation. We first describe the ACAD-OSM technique and compare it to the previous active methods for the correction of aperture discontinuities. We then show its performance in terms of contrast and off-axis throughput for static aperture discontinuities (segmentation, struts) and for some aberrations evolving over the life of the instrument (residual phase aberrations, artifacts in the aperture, misalignments in the coronagraph design). This technique can now reach the Earth-like planet detection threshold of 10^-10 contrast on any given aperture over at least a 10% spectral bandwidth, with several coronagraph designs.
Automatic toilet seat lowering apparatus
Guerty, Harold G.
1994-09-06
A toilet seat lowering apparatus includes a housing defining an internal cavity for receiving water from the water supply line to the toilet holding tank. A descent delay assembly of the apparatus can include a stationary dam member and a rotating dam member for dividing the internal cavity into an inlet chamber and an outlet chamber and controlling the intake and evacuation of water in a delayed fashion. A descent initiator is activated when the internal cavity is filled with pressurized water and automatically begins the lowering of the toilet seat from its upright position, which lowering is also controlled by the descent delay assembly. In an alternative embodiment, the descent initiator and the descent delay assembly can be combined in a piston linked to the rotating dam member and provided with a water channel for creating a resisting pressure to the advancing piston and thereby slowing the associated descent of the toilet seat.
A Self Contained Method for Safe and Precise Lunar Landing
NASA Technical Reports Server (NTRS)
Paschall, Stephen C., II; Brady, Tye; Cohanim, Babak; Sostaric, Ronald
2008-01-01
The return of humans to the Moon will require increased capability beyond that of the previous Apollo missions. Longer stay times and greater flexibility with regard to landing locations are among the many improvements planned. A descent and landing system that can land the vehicle more accurately than Apollo, with a greater ability to detect and avoid hazards, is essential to the development of a Lunar Outpost, and also for increasing the number of potentially reachable Lunar Sortie locations. This descent and landing system should allow landings in more challenging terrain and provide more flexibility with regard to mission timing and lighting considerations, while maintaining safety as the top priority. The lunar landing system under development by the ALHAT (Autonomous Landing and Hazard Avoidance Technology) project addresses this by providing terrain-relative navigation measurements to enhance global-scale precision, an onboard hazard-detection system to select safe landing locations, and an autonomous GNC (Guidance, Navigation, and Control) capability to process these measurements and safely direct the vehicle to the chosen landing location. This ALHAT landing system will enable safe and precise lunar landings without requiring lunar infrastructure in the form of navigation aids or a priori identified hazard-free landing locations. The safe landing capability provided by ALHAT uses onboard active sensing to detect hazards that are large enough to be a danger to the vehicle but too small to be detected from orbit, given currently planned orbital terrain resolution limits. Algorithms to interpret raw active sensor terrain data, generate hazard maps, identify safe sites, and recalculate new trajectories to those sites are included as part of the ALHAT system. These improvements to descent and landing will help contribute to repeated safe and precise landings for a wide variety of terrain on the Moon.
Orion MPCV Touchdown Detection Threshold Development and Testing
NASA Technical Reports Server (NTRS)
Daum, Jared; Gay, Robert
2013-01-01
A robust method of detecting Orion Multi-Purpose Crew Vehicle (MPCV) splashdown is necessary to ensure crew and hardware safety during descent and after touchdown. The proposed method uses a triple-redundant system to inhibit Reaction Control System (RCS) thruster firings, detach parachute risers from the vehicle, and transition to the post-landing segment of the Flight Software (FSW). The vehicle crew is the prime input for touchdown detection, followed by an autonomous FSW algorithm, and finally a strictly time-based backup timer. RCS thrusters must be inhibited before submersion in water to protect against possible damage due to firing these jets under water. In addition, neglecting to declare touchdown will not allow the vehicle to transition to post-landing activities such as activating the Crew Module Uprighting System (CMUS), resulting in possible loss of communication and difficult recovery. A previous AIAA paper, "Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module," concluded that a strictly Inertial Measurement Unit (IMU) based detection method using an acceleration spike algorithm had the highest safety margins and shortest detection times of the methods considered. That study utilized finite element simulations of vehicle splashdown, generated by LS-DYNA, which were expanded to a larger set of results using a Kriging surface fit. The study also used the Decelerator Systems Simulation (DSS) to generate flight dynamics during vehicle descent under parachutes. Prototype IMU and FSW MATLAB models provided the basis for initial algorithm development and testing. This paper documents an in-depth trade study, using the same dynamics data and MATLAB simulations as the earlier work, to further develop the acceleration detection method.
By studying the combined effects of data rate, filtering on the rotational acceleration correction, data persistence limits, and acceleration threshold values, an optimal configuration was determined. The lever arm calculation, which removes the centripetal acceleration caused by vehicle rotation, requires that the vehicle angular acceleration be derived from vehicle body rates, necessitating the addition of a second-order filter to smooth the data. It was determined that using 200 Hz data directly from the vehicle IMU outperforms the 40 Hz FSW data rate. Data persistence counter values and acceleration thresholds were balanced in order to meet desired safety and performance. The algorithm proved to exhibit ample safety margin against early detection while under parachutes, and adequate performance upon vehicle splashdown. Fall times from algorithm initiation were also studied, and a backup timer length was chosen to provide a large safety margin yet still trigger detection before CMUS inflation. This timer serves as a backup to the primary acceleration detection method. Additionally, these parameters were tested for safety on actual flight test data, demonstrating expected safety margins.
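The acceleration-threshold-with-persistence logic described above can be sketched in a few lines. The function name, threshold, and persistence values below are illustrative assumptions, not the flight parameters from the Orion study:

```python
def detect_touchdown(accel_mag, threshold=4.0, persistence=3):
    """Sketch of an acceleration-spike touchdown detector.

    Declares touchdown once the measured acceleration magnitude
    exceeds `threshold` for `persistence` consecutive samples; the
    persistence counter guards against single-sample noise spikes.
    All parameter values here are illustrative only.
    """
    count = 0
    for i, a in enumerate(accel_mag):
        count = count + 1 if a > threshold else 0
        if count >= persistence:
            return i  # sample index at which touchdown is declared
    return None  # no touchdown detected
```

Balancing `threshold` against `persistence` mirrors the trade described in the paper: a lower threshold detects splashdown sooner, while a longer persistence window protects against early detection under parachutes.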
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
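The nonlinear weighted least-squares step at the core of such an estimator can be sketched as a Gauss-Newton iteration. The function names, toy model, and weight matrix below are generic stand-ins, not the paper's surface-pressure model or covariances:

```python
import numpy as np

def weighted_least_squares(h_fun, jac_fun, z, W, x0, iters=10):
    """Toy Gauss-Newton solver for the nonlinear weighted least-squares
    problem min_x (z - h(x))^T W (z - h(x)); `h_fun` plays the role of
    the measurement model (e.g., a pressure distribution) and `z` the
    measurements. All names here are illustrative assumptions.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = z - h_fun(x)                  # measurement residual
        H = jac_fun(x)                    # sensitivity (Jacobian) matrix
        # normal equations: (H^T W H) dx = H^T W r
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ r)
        x = x + dx
    return x
```

The sensitivity matrix `H` is the same object whose column space drives the observability index mentioned above: directions of the state that barely change the measurements make `H^T W H` nearly singular.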
Online selective kernel-based temporal difference learning.
Chen, Xingguo; Gao, Yang; Wang, Ruili
2013-12-01
In this paper, an online selective kernel-based temporal difference (OSKTD) learning algorithm is proposed to deal with large-scale and/or continuous reinforcement learning problems. OSKTD includes two online procedures: online sparsification and parameter updating for the selective kernel-based value function. A new sparsification method (i.e., a kernel distance-based online sparsification method) is proposed based on selective ensemble learning, which is computationally less complex than other sparsification methods. With the proposed sparsification method, the sparsified dictionary of samples is constructed online by checking whether a sample needs to be added to the sparsified dictionary. In addition, based on local validity, a selective kernel-based value function is proposed to select the best samples from the sample dictionary for the selective kernel-based value function approximator. The parameters of the selective kernel-based value function are iteratively updated by using the temporal difference (TD) learning algorithm combined with the gradient descent technique. The complexity of the online sparsification procedure in the OSKTD algorithm is O(n). In addition, two typical experiments (Maze and Mountain Car) are used to compare with both traditional and up-to-date O(n) algorithms (GTD, GTD2, and TDC using the kernel-based value function), and the results demonstrate the effectiveness of our proposed algorithm. In the Maze problem, OSKTD converges to an optimal policy and converges faster than both traditional and up-to-date algorithms. In the Mountain Car problem, OSKTD converges, requires less computation time than other sparsification methods, reaches a better local optimum than the traditional algorithms, and converges much faster than the up-to-date algorithms. In addition, OSKTD reaches a competitive ultimate optimum compared with the up-to-date algorithms.
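One plausible reading of a kernel distance-based online sparsification check is sketched below: a sample enters the dictionary only if it is sufficiently far, in kernel distance, from every stored sample. The Gaussian kernel, the distance rule, and the threshold name `mu` are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(np.asarray(x) - np.asarray(y)) ** 2 / (2 * sigma ** 2))

def sparsify_online(samples, mu=0.5, sigma=1.0):
    """Illustrative online sparsification: add a sample only if its
    kernel distance ||phi(s) - phi(d)||^2 = 2 - 2*k(s, d) to every
    dictionary element exceeds the threshold `mu`. One pass over the
    stream, O(dictionary size) work per sample.
    """
    dictionary = []
    for s in samples:
        if all(2 - 2 * gaussian_kernel(s, d, sigma) > mu for d in dictionary):
            dictionary.append(s)
    return dictionary
```

The appeal of such a rule in an online RL setting is that redundant, near-duplicate states never enlarge the dictionary, keeping the per-step cost of the value-function update bounded.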
Testing for a slope-based decoupling algorithm in a woofer-tweeter adaptive optics system.
Cheng, Tao; Liu, WenJin; Yang, KangJian; He, Xin; Yang, Ping; Xu, Bing
2018-05-01
It is well known that using two or more deformable mirrors (DMs) can improve the compensation ability of an adaptive optics (AO) system. However, to maintain the stability of an AO system, the correlation between the multiple DMs must be suppressed during the correction. In this paper, we propose a slope-based decoupling algorithm to simultaneously control multiple DMs. To examine the validity and practicality of this algorithm, a typical woofer-tweeter (W-T) AO system was set up. For the W-T system, a theoretical model was simulated, and the results indicated that the presented algorithm can selectively make the woofer and tweeter correct different spatial-frequency aberrations and suppress the cross coupling between the dual DMs. At the same time, the experimental results for the W-T AO system were consistent with the simulation results, demonstrating in practice that this algorithm is practical for an AO system with dual DMs.
Preliminary Design and Analysis of the GIFTS Instrument Pointing System
NASA Technical Reports Server (NTRS)
Zomkowski, Paul P.
2003-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) instrument is the next-generation spectrometer for remote sensing weather satellites. The GIFTS instrument will be used to perform scans of the Earth's atmosphere by assembling a series of fields of view (FOVs) into a larger pattern. Realization of this process is achieved by step-scanning the instrument FOV in a contiguous fashion across any desired portion of the visible Earth. A 2.3 arc second pointing stability, with respect to the scanning instrument, must be maintained for the duration of the FOV scan. A star tracker producing attitude data at a 100 Hz rate will be used by the autonomous pointing algorithm to precisely track target FOVs on the surface of the Earth. The main objective is to validate the pointing algorithm in the presence of spacecraft disturbances and determine acceptable disturbance limits from expected noise sources. Proof-of-concept validation of the pointing system algorithm is carried out with a full system simulation developed using MATLAB Simulink. Models for the following components function within the full system simulation: inertial reference unit (IRU), attitude control system (ACS), reaction wheels, star tracker, and mirror controller. With the spacecraft orbital position and attitude maintained to within specified limits, the pointing algorithm receives quaternion, ephemeris, and initialization data that are used to construct the required mirror pointing commands at a 100 Hz rate. This comprehensive simulation will also aid in obtaining a thorough understanding of spacecraft disturbances and other sources of pointing system errors. Parameter sensitivity studies and disturbance analysis will be used to obtain limits of operability for the GIFTS instrument. The culmination of this simulation development and analysis will be used to validate the specified performance requirements outlined for this instrument.
Signal processing methods for MFE plasma diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.V.; Casper, T.; Kane, R.
1985-02-01
The application of various signal processing methods to extract energy storage information from plasma diamagnetism sensors during physics experiments on the Tandem Mirror Experiment-Upgrade (TMX-U) is discussed. We show how these processing techniques can be used to decrease the uncertainty in the corresponding sensor measurements. The algorithms suggested are implemented using SIG, an interactive signal processing package developed at LLNL.
Report to the Congress on the Strategic Defense Initiative, 1991
1991-05-01
ultraviolet, and infrared radiation-hardened charge-coupled device imagers, step-stare sensor signal processing algorithms, and processor... Demonstration Experiment (LODE) resolved central issues associated with wavefront sensing and control and the 4-meter Large Advanced Mirror Program (LAMP... [Figure captions: Firepond CO2 Imaging Radar Demonstration; IBSS and the Shuttle]
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase-shifting mask (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms have been developed to solve the inverse lithography problem; however, they are designed only for the nominal imaging parameters, without giving sufficient attention to process variations due to aberrations, defocus, and dose variation. The effects of process variations existing in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA > 0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. To tackle the above problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. In order to improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) are exploited during the optimization procedure.
An analysis of neural receptive field plasticity by point process adaptive filtering
Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor
2001-01-01
Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
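The instantaneous steepest-descent idea can be illustrated with a deliberately simplified, constant-rate point-process model: each time bin contributes a log likelihood n*log(lambda*dt) - lambda*dt, and the parameter follows its gradient. The parameterization, step size, and function name below are assumptions; real receptive-field models make the rate depend on the stimulus (e.g., the animal's position):

```python
import numpy as np

def adaptive_rate_tracker(spikes, dt=0.001, eps=0.05):
    """Toy instantaneous steepest-descent filter on a point-process
    log likelihood. With lambda = exp(theta), the gradient of the
    per-bin log likelihood in theta is (n - lambda*dt), so each bin
    nudges theta along that gradient and the estimate tracks the
    underlying firing rate on a bin-by-bin time scale.
    """
    theta = 0.0
    trace = []
    for n in spikes:                 # n = spike count in each bin
        lam = np.exp(theta)
        theta += eps * (n - lam * dt)
        trace.append(np.exp(theta))
    return np.array(trace)
```

Because the update uses only the current bin's observation, the estimate can follow nonstationary rates, which is the property the paper exploits to track place-field migration on a millisecond time scale.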
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to inaccurate imaging models and distortion elimination. The proposed calibration method compensates for system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the systemic LCD screen through a reflection off a markless flat mirror. An iterative algorithm is proposed to compensate for system distortion and optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error of a flat mirror can be reduced from 282 nm, obtained with the conventional calibration approach, to 69.7 nm by applying the proposed method.
Validation of Genome-Wide Prostate Cancer Associations in Men of African Descent
Chang, Bao-Li; Spangler, Elaine; Gallagher, Stephen; Haiman, Christopher A.; Henderson, Brian; Isaacs, William; Benford, Marnita L.; Kidd, LaCreis R.; Cooney, Kathleen; Strom, Sara; Ann Ingles, Sue; Stern, Mariana C.; Corral, Roman; Joshi, Amit D.; Xu, Jianfeng; Giri, Veda N.; Rybicki, Benjamin; Neslund-Dudas, Christine; Kibel, Adam S.; Thompson, Ian M.; Leach, Robin J.; Ostrander, Elaine A.; Stanford, Janet L.; Witte, John; Casey, Graham; Eeles, Rosalind; Hsing, Ann W.; Chanock, Stephen; Hu, Jennifer J.; John, Esther M.; Park, Jong; Stefflova, Klara; Zeigler-Johnson, Charnita; Rebbeck, Timothy R.
2010-01-01
Background Genome-wide association studies (GWAS) have identified numerous prostate cancer susceptibility alleles, but these loci have been identified primarily in men of European descent. There is limited information about the role of these loci in men of African descent. Methods We identified 7,788 prostate cancer cases and controls with genotype data for 47 GWAS-identified loci. Results We identified significant associations for SNP rs10486567 at JAZF1, rs10993994 at MSMB, rs12418451 and rs7931342 at 11q13, and rs5945572 and rs5945619 at NUDT10/11. These associations were in the same direction and of similar magnitude as those reported in men of European descent. Significance was attained at all reported prostate cancer susceptibility regions at chromosome 8q24, including associations reaching genome-wide significance in region 2. Conclusion We have validated in men of African descent the associations at some, but not all, prostate cancer susceptibility loci originally identified in European descent populations. This may be due to heterogeneity in genetic etiology or in the pattern of genetic variation across populations. Impact The genetic etiology of prostate cancer in men of African descent differs from that of men of European descent. PMID:21071540
Studies of the hormonal control of postnatal testicular descent in the rat.
Spencer, J R; Vaughan, E D; Imperato-McGinley, J
1993-03-01
Dihydrotestosterone is believed to control the transinguinal phase of testicular descent based on hormonal manipulation studies performed in postnatal rats. In the present study, these hormonal manipulation experiments were repeated, and the results were compared with those obtained using the antiandrogens flutamide and cyproterone acetate. 17 beta-estradiol completely blocked testicular descent, but testosterone and dihydrotestosterone were equally effective in reversing this inhibition. Neither flutamide nor cyproterone acetate prevented testicular descent in postnatal rats despite marked peripheral antiandrogenic action. Further analysis of the data revealed a correlation between testicular size and descent. Androgen receptor blockade did not produce a marked reduction in testicular size and consequently did not prevent testicular descent, whereas estradiol alone caused marked testicular atrophy and testicular maldescent. Reduction of the estradiol dosage or concomitant administration of androgens or human chorionic gonadotropin resulted in both increased testicular size and degree of descent. These data suggest that growth of the neonatal rat testis may contribute to its passage into the scrotum.
Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems
NASA Technical Reports Server (NTRS)
Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy
2007-01-01
High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10^-10 for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed-loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region of the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.
NASA Astrophysics Data System (ADS)
Swanson, C.; Jandovitz, P.; Cohen, S. A.
2018-02-01
We measured electron energy distribution functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek silicon drift detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.
NASA Astrophysics Data System (ADS)
Hast, J.; Okkonen, M.; Heikkinen, H.; Krehut, L.; Myllylä, R.
2006-06-01
A self-mixing interferometer is proposed to measure nanometre-scale optical path length changes in the interferometer's external cavity. As a light source, the developed technique uses a blue-emitting GaN laser diode. An external reflector, a silicon mirror driven by a piezo nanopositioner, is used to produce an interference signal, which is detected with the monitor photodiode of the laser diode. Changing the optical path length of the external cavity introduces a phase difference in the interference signal. This phase difference is detected using a signal processing algorithm based on Pearson's correlation coefficient and cubic spline interpolation techniques. The results show that the average deviation between the measured and actual displacements of the silicon mirror is 3.1 nm in the 0-110 nm displacement range. Moreover, the measured displacements follow the actual displacement of the silicon mirror linearly. Finally, the paper considers the effects produced by the temperature and current stability of the laser diode as well as dispersion effects in the external cavity of the interferometer. These reduce the sensor's measurement accuracy, especially in long-term measurements.
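Correlation-based phase detection can be sketched as a search over candidate shifts that maximizes Pearson's correlation coefficient between a model fringe and the measured signal. The function, signal forms, and shift grid below are illustrative, and the cubic-spline refinement used in the paper is omitted:

```python
import numpy as np

def phase_shift_by_correlation(sig, shifts):
    """Estimate the phase of a fringe-like signal by sliding a cosine
    template over candidate phase shifts and picking the shift that
    maximizes Pearson's correlation coefficient with the measurement.
    A spline interpolation around the best candidate, as in the paper,
    would refine this grid estimate further.
    """
    t = np.linspace(0, 2 * np.pi, sig.size)
    def corr(p):
        return np.corrcoef(np.cos(t + p), sig)[0, 1]
    return max(shifts, key=corr)
```

Using the correlation coefficient rather than raw amplitude makes the estimate insensitive to slow drifts in signal gain and offset, which matters for the long-term stability the paper discusses.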
Control code for laboratory adaptive optics teaching system
NASA Astrophysics Data System (ADS)
Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael
2017-09-01
By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available in the market. One such is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite to interface with the ThorLabs AO kit by using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as to support student engagement in an academic environment. We plan to make the code freely available to the community.
A multi-conjugate adaptive optics testbed using two MEMS deformable mirrors
NASA Astrophysics Data System (ADS)
Andrews, Jonathan R.; Martinez, Ty; Teare, Scott W.; Restaino, Sergio R.; Wilcox, Christopher C.; Santiago, Freddie; Payne, Don M.
2011-03-01
Adaptive optics (AO) systems are well demonstrated in the literature with both laboratory and real-world systems being developed. Some of these systems have employed MEMS deformable mirrors as their active corrective element. More recent work in AO for astronomical applications has focused on providing correction in more than one conjugate plane. Additionally, horizontal path AO systems are exploring correction in multiple conjugate planes. This provides challenges for a laboratory system as the aberrations need to be generated and corrected in more than one plane in the optical system. Our work with compact AO systems employing MEMS technology in addition to liquid crystal spatial light modulator (SLM) driven aberration generators has been scaled up to a two conjugate plane testbed. Using two SLM based aberration generators and two separate wavefront sensors, the system can apply correction with two MEMS deformable mirrors. The challenges in such a system are to properly match non-identical components and weight the correction algorithm for correcting in two planes. This paper demonstrates preliminary results and analysis with this system with wavefront data and residual error measurements.
Tilt angle measurement with a Gaussian-shaped laser beam tracking
NASA Astrophysics Data System (ADS)
Šarbort, Martin; Řeřucha, Šimon; Jedlička, Petr; Lazar, Josef; Číp, Ondrej
2014-05-01
We have addressed the challenge of stabilizing the angular tilt of a laser guiding mirror intended to route a laser beam with a high energy density. Such an application requires good angular accuracy as well as a large operating range, long-term stability, and absolute positioning. We have designed an instrument for such high-precision angular tilt measurement based on a triangulation method in which a laser beam with a Gaussian profile is reflected off the stabilized mirror and detected by an image sensor. As the angular deflection of the mirror causes a change of the beam spot position, the principal task is to measure the position on the image chip surface. We have employed a numerical analysis of the Gaussian intensity pattern which uses a nonlinear regression algorithm. The feasibility and performance of the method were tested by numerical modeling as well as experimentally. The experimental results indicate that the assembled instrument achieves a measurement error of 0.13 microradian in the range +/-0.65 degrees over a period of one hour. This corresponds to a dynamic range of 1:170,000.
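A lightweight stand-in for the nonlinear regression on the Gaussian intensity pattern is to fit a parabola to the logarithm of the profile, whose vertex gives the spot position. This sketch assumes a clean 1-D profile well above the noise floor and is not the authors' exact algorithm:

```python
import numpy as np

def gaussian_center_1d(x, intensity):
    """Estimate a beam-spot position from a Gaussian intensity profile.

    The log of a Gaussian is a parabola, so fitting a degree-2
    polynomial to log(intensity) and taking the vertex recovers the
    center with sub-pixel resolution. The 10% cut discards dim tail
    pixels, where the log transform would amplify noise.
    """
    mask = intensity > intensity.max() * 0.1   # ignore the dim tails
    c2, c1, _ = np.polyfit(x[mask], np.log(intensity[mask]), 2)
    return -c1 / (2 * c2)                      # vertex of the parabola
```

Sub-pixel spot localization of this kind is what turns a modest image sensor into a sub-microradian tilt gauge: the angular resolution is set by the position estimate divided by the triangulation arm length.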
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) technique of optimization, based on the steepest descent algorithm, is known to perform poorly and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RBA networks by using 2-D synthetic resistivity data and then finally applied them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are in good agreement with the results of existing inversion approaches. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
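The steepest-descent iteration whose weaknesses motivate the LMA and RBA alternatives can be sketched minimally; the learning rate and iteration count below are arbitrary illustrative choices:

```python
import numpy as np

def steepest_descent(grad, x0, lr=0.1, iters=200):
    """Minimal steepest-descent loop of the kind used in BP training:
    repeatedly step along the negative gradient with a fixed step
    size. On ill-conditioned objectives this zig-zags and stalls in
    local minima, which is the behavior criticized above and the gap
    that Levenberg-Marquardt's curvature-aware steps address.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - lr * grad(x)
    return x
```

On a well-conditioned quadratic the loop converges geometrically; the contrast with nonlinear, poorly scaled resistivity objectives is exactly why the paper reaches for second-order-like methods.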
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing, with several orders of magnitude speed-up compared to the exact optimization algorithms.
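The proximal descent iteration that such learned architectures unroll can be sketched as ISTA for the l1-regularized least-squares (lasso) problem: a gradient step on the data term followed by soft thresholding. The step-size rule and parameter names are standard textbook choices, not the paper's learned variants:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, step=None, iters=500):
    """Proximal descent (ISTA) for min_x ||Ax - b||^2 / 2 + lam*||x||_1:
    a gradient step on the smooth data term followed by the l1 proximal
    operator. Truncating this loop to a few iterations with learned
    matrices in place of A^T is the unrolling idea described above.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x
```

Each iteration is a linear map followed by a fixed pointwise nonlinearity, i.e., exactly one layer of a feed-forward network, which is what makes the pursuit process learnable with fixed complexity.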
Murad-Regadas, Sthela M; Pinheiro Regadas, Francisco Sergio; Rodrigues, Lusmar V; da Silva Vilarinho, Adjra; Buchen, Guilherme; Borges, Livia Olinda; Veras, Lara B; da Cruz, Mariana Murad
2016-12-01
Defecography is an established method of evaluating dynamic anorectal dysfunction, but conventional defecography does not allow for visualization of anatomic structures. The purpose of this study was to describe the use of dynamic 3-dimensional endovaginal ultrasonography for evaluating perineal descent in comparison with echodefecography (3-dimensional anorectal ultrasonography) and to study the relationship between perineal descent and symptoms and anatomic/functional abnormalities of the pelvic floor. This was a prospective study. The study was conducted at a large university tertiary care hospital. Consecutive female patients were eligible if they had pelvic floor dysfunction, obstructed defecation symptoms, and a score >6 on the Cleveland Clinic Florida Constipation Scale. Each patient underwent both echodefecography and dynamic 3-dimensional endovaginal ultrasonography to evaluate posterior pelvic floor dysfunction. Normal perineal descent was defined on echodefecography as puborectalis muscle displacement ≤2.5 cm; excessive perineal descent was defined as displacement >2.5 cm. Of 61 women, 29 (48%) had normal perineal descent; 32 (52%) had excessive perineal descent. Endovaginal ultrasonography identified 27 of the 29 patients in the normal group as having anorectal junction displacement ≤1 cm (mean = 0.6 cm; range, 0.1-1.0 cm) and a mean anorectal junction position of 0.6 cm (range, 0-2.3 cm) above the symphysis pubis during the Valsalva maneuver and correctly identified 30 of the 32 patients in the excessive perineal descent group. The κ statistic showed almost perfect agreement (κ = 0.86) between the 2 methods for categorization into the normal and excessive perineal descent groups. Perineal descent was not related to fecal or urinary incontinence or anatomic and functional factors (sphincter defects, pubovisceral muscle defects, levator hiatus area, grade II or III rectocele, intussusception, or anismus). 
The study did not include a control group without symptoms. Three-dimensional endovaginal ultrasonography is a reliable technique for assessment of perineal descent. Using this technique, excessive perineal descent can be defined as displacement of the anorectal junction >1 cm and/or its position below the symphysis pubis on Valsalva maneuver.
Adjoint shape optimization for fluid-structure interaction of ducted flows
NASA Astrophysics Data System (ADS)
Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.
2018-03-01
Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by the consequent use of the formal Lagrange calculus. Solutions of both primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.
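Once the adjoint solve has produced a gradient of the cost functional, the descent step itself is standard. Below is a minimal sketch of steepest descent with a backtracking line search, applied to a toy two-parameter cost standing in for the pressure-drop functional; the cost, its gradient, and all constants are illustrative assumptions:

```python
import numpy as np

def steepest_descent(cost, grad, q0, step0=1.0, tol=1e-8, max_iter=500):
    """Steepest descent with backtracking, as used once a (e.g. adjoint-based)
    gradient of the cost functional is available."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        g = grad(q)
        if np.linalg.norm(g) < tol:
            break
        step = step0
        while cost(q - step * g) >= cost(q):    # backtracking line search
            step *= 0.5
            if step < 1e-12:                    # no further decrease possible
                return q
        q = q - step * g
    return q

# Toy stand-in for a pressure-drop cost over two shape parameters.
cost = lambda q: (q[0] - 1.0) ** 2 + 10.0 * (q[1] + 0.5) ** 2
grad = lambda q: np.array([2.0 * (q[0] - 1.0), 20.0 * (q[1] + 0.5)])
q_opt = steepest_descent(cost, grad, q0=[0.0, 0.0])
```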
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qichun; Zhou, Jinglin; Wang, Hong
In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems, and a novel output feedback m-block backstepping controller with a linear estimator is designed, where gradient descent optimization is used to tune the design parameters of the controller. It has been shown that the trajectories of the closed-loop stochastic systems are bounded in probability and that the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed, and the effectiveness of the proposed method has been demonstrated using a simulated example.
Kurtosis Approach Nonlinear Blind Source Separation
NASA Technical Reports Server (NTRS)
Duong, Vu A.; Stubberud, Allen R.
2005-01-01
In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical requirements. The results of the simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher-order statistics.
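The kurtosis-driven gradient update can be illustrated for the purely linear case (no post-nonlinearity). The sketch below whitens a two-channel mixture and ascends the absolute excess kurtosis of a unit-norm projection; the mixing matrix, source distributions, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# Two non-Gaussian sources: sub-Gaussian (uniform) and super-Gaussian (Laplace).
s = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
A = np.array([[0.8, 0.3], [0.4, 0.9]])      # unknown mixing matrix (assumed here)
x = A @ s

# Whiten the mixtures so that unit-norm projections have unit variance.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E / np.sqrt(d)) @ E.T @ x

# Gradient ascent on |excess kurtosis| of w.z, projected back to the unit sphere.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(500):
    y = w @ z
    k = np.mean(y ** 4) - 3.0               # excess kurtosis (unit variance)
    grad = 4.0 * (z * y ** 3).mean(axis=1)  # d k / d w
    w += 0.1 * np.sign(k) * grad            # ascend |k| regardless of its sign
    w /= np.linalg.norm(w)
y = w @ z                                   # recovered source (up to sign/scale)
```

At an extremum of the absolute kurtosis the projection aligns with one of the independent sources, which is the statistical criterion the abstract refers to.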
An image morphing technique based on optimal mass preserving mapping.
Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen
2007-06-01
Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods.
Controlling laser driven protons acceleration using a deformable mirror at a high repetition rate
NASA Astrophysics Data System (ADS)
Noaman-ul-Haq, M.; Sokollik, T.; Ahmed, H.; Braenzel, J.; Ehrentraut, L.; Mirzaie, M.; Yu, L.-L.; Sheng, Z. M.; Chen, L. M.; Schnürer, M.; Zhang, J.
2018-03-01
We present results from a proof-of-principle experiment to optimize laser-driven proton acceleration by directly feeding back spectral information to a deformable mirror (DM) controlled by evolutionary algorithms (EAs). By irradiating a stable high-repetition-rate tape-driven target with ultra-intense pulses of intensity ∼1020 W/cm2, we optimize the maximum energy of the accelerated protons with fluctuations of less than ∼5% near the optimum value. Moreover, due to the spatio-temporal development of the sheath field, modulations in the spectrum are also observed. In particular, a prominent narrow peak with a spread of ∼15% (FWHM) is observed in the low-energy part of the spectrum. These results are helpful for developing the high-repetition-rate optimization techniques required for laser-driven ion accelerators.
GaAs/AlOx high-contrast grating mirrors for mid-infrared VCSELs
NASA Astrophysics Data System (ADS)
Almuneau, G.; Laaroussi, Y.; Chevallier, C.; Genty, F.; Fressengeas, N. s.; Cerutti, L.; Gauthier-Lafaye, Olivier
2015-02-01
Mid-infrared vertical cavity surface emitting lasers (MIR-VCSELs) are very attractive compact sources for spectroscopic measurements above 2 μm, relevant for molecule sensing in various application domains. A long-standing issue for long-wavelength VCSELs is the large structure thickness affecting the laser properties, compounded in the MIR by the tricky technological implementation of the antimonide alloy system. In this paper, we propose a new geometry for MIR-VCSELs including both lateral confinement by an oxide aperture and a high-contrast sub-wavelength grating mirror (HCG mirror) formed by the high-contrast AlOx/GaAs combination in place of the GaSb/AlAsSb top Bragg reflector. In addition to drastically simplifying the vertical stack, the HCG mirror allows the beam properties to be controlled through its design. The robust design of the HCG was ensured by an original optimization method based on a particle swarm optimization algorithm combined with an anti-optimization one, thus allowing large error tolerance in the nano-fabrication. Oxide-based electro-optical confinement has been adapted to mid-infrared lasers by using a metamorphic approach, with an (Al)GaAs layer epitaxially grown directly on the GaSb-based VCSEL bottom structure. This approach combines the advantages of the well-controlled oxidation of the AlAs layer and the efficient Sb-based gain media for mid-infrared emission. We finally present the results obtained on electrically pumped MIR-VCSEL structures, in which we included an oxide aperture for lateral confinement and an HCG as the high-reflectivity output mirror, both based on AlxOy/GaAs heterostructures.
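The particle swarm step used for the robust HCG design can be sketched generically. Below is a minimal PSO minimizing a toy quadratic stand-in for a grating figure of merit; the swarm size, inertia and acceleration coefficients, and the cost function itself are illustrative assumptions, and the paper's anti-optimization stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

def cost(p):
    """Toy stand-in for a grating figure of merit: distance from a target design."""
    return np.sum((p - np.array([0.7, 0.3])) ** 2, axis=-1)

n, dim = 30, 2
x = rng.uniform(0.0, 1.0, (n, dim))         # particle positions (e.g. duty cycle, depth)
v = np.zeros((n, dim))
pbest, pbest_cost = x.copy(), cost(x)       # personal bests
gbest = pbest[np.argmin(pbest_cost)].copy() # global best

for _ in range(100):
    r1, r2 = rng.random((2, n, dim))
    # Inertia + cognitive pull toward pbest + social pull toward gbest.
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    c = cost(x)
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = x[better], c[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()
```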
14 CFR 23.69 - Enroute climb/descent.
Code of Federal Regulations, 2010 CFR
2010-01-01
... climb/descent. (a) All engines operating. The steady gradient and rate of climb must be determined at.... The steady gradient and rate of climb/descent must be determined at each weight, altitude, and ambient...
Bernard, Olivier; Alata, Olivier; Francaux, Marc
2006-03-01
Modeling the non-steady-state O2 uptake (VO2) on-kinetics of high-intensity exercise in the time domain with empirical models is commonly performed with gradient-descent-based methods. However, these procedures may impair the confidence of the parameter estimation when the modeling functions are not continuously differentiable and when the estimation corresponds to an ill-posed problem. To cope with these problems, an implementation of simulated annealing (SA) methods was compared with the GRG2 algorithm (a gradient-descent method known for its robustness). Forty simulated VO2 on-responses were generated to mimic the real time course of transitions from light- to high-intensity exercise, with a signal-to-noise ratio of 20 dB. They were modeled twice with a discontinuous double-exponential function using both estimation methods. GRG2 significantly biased two estimated kinetic parameters of the first exponential (the time delay td1 and the time constant tau1) and impaired the precision (i.e., standard deviation) of the baseline A0, td1, and tau1 compared with SA. SA significantly improved the precision of the three parameters of the second exponential (the asymptotic increment A2, the time delay td2, and the time constant tau2). Nevertheless, td2 was significantly biased by both procedures, and the large confidence intervals of the second-component parameters limit their interpretation. To compare both algorithms on experimental data, 26 subjects each performed two transitions from 80 W to 80% maximal O2 uptake on a cycle ergometer, and O2 uptake was measured breath by breath. More than 88% of the kinetic parameter estimations done with the SA algorithm produced the lowest residual sum of squares between the experimental data points and the model. Repeatability coefficients were better with GRG2 for A1, although better with SA for A2 and tau2.
Our results demonstrate that the implementation of SA significantly improves the estimation of most of these kinetic parameters, although a large inaccuracy remains in estimating the parameter values of the second exponential.
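The appeal of annealing on such discontinuous models can be sketched directly. The following simulated-annealing fit uses a simplified mono-exponential model with a time delay (the paper fits a discontinuous double exponential); the cooling schedule, proposal scale, and true parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(p, t):
    """Discontinuous mono-exponential with a time delay (simplified stand-in
    for the double-exponential VO2 model)."""
    A0, A1, td, tau = p
    v = np.full_like(t, A0)
    on = t >= td
    v[on] += A1 * (1.0 - np.exp(-(t[on] - td) / tau))
    return v

def sse(p, t, y):
    return float(np.sum((y - model(p, t)) ** 2))

t = np.arange(0.0, 6.0, 0.05)
y = model(np.array([0.5, 2.0, 0.5, 0.8]), t)    # noise-free toy response

# Simulated annealing: Gaussian proposals, Metropolis acceptance, geometric
# cooling.  No gradient is needed, so the discontinuity at td poses no
# special difficulty.
p = np.array([0.0, 1.0, 1.0, 1.0])
cur = sse(p, t, y)
best, best_cost = p.copy(), cur
T = 1.0
for _ in range(6000):
    q = p + rng.normal(scale=0.1 * max(T, 0.01), size=4)
    q[3] = max(q[3], 1e-3)                      # keep tau positive
    c = sse(q, t, y)
    if c < cur or rng.random() < np.exp(-(c - cur) / T):
        p, cur = q, c                           # accept the proposal
        if cur < best_cost:
            best, best_cost = p.copy(), cur
    T *= 0.999
```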
High-Contrast Coronagraph Performance in the Presence of DM Actuator Defects
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Shaklan, Stuart; Cady, Eric
2015-01-01
Deformable Mirrors (DMs) are critical elements in high contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Occasionally DM actuators or their associated cables or electronics fail, requiring a wavefront control algorithm to compensate for actuators that may be displaced from their neighbors by hundreds of nanometers. We have carried out experiments on our High-Contrast Imaging Testbed (HCIT) to study the impact of failed actuators in partial fulfillment of the Terrestrial Planet Finder Coronagraph optical model validation milestone. We show that the wavefront control algorithm adapts to several broken actuators and maintains dark-hole contrast in broadband light.
Effects of flutamide and finasteride on rat testicular descent.
Spencer, J R; Torrado, T; Sanchez, R S; Vaughan, E D; Imperato-McGinley, J
1991-08-01
The endocrine control of descent of the testis in mammalian species is poorly understood. The androgen dependency of testicular descent was studied in the rat using an antiandrogen (flutamide) and an inhibitor of the enzyme 5 alpha-reductase (finasteride). Androgen receptor blockade inhibited testicular descent more effectively than inhibition of 5 alpha-reductase activity. Moreover, its inhibitory effect was limited to the outgrowth phase of the gubernaculum testis, particularly the earliest stages of outgrowth. Gubernacular size was also significantly reduced in fetuses exposed to flutamide during the outgrowth period. In contrast, androgen receptor blockade or 5 alpha-reductase inhibition applied after the initiation of gubernacular outgrowth or during the regression phase did not affect testicular descent. Successful inhibition of the development of epididymis and vas by prenatal flutamide did not correlate with ipsilateral testicular maldescent, suggesting that an intact epididymis is not required for descent of the testis. Plasma androgen assays confirmed significant inhibition of dihydrotestosterone formation in finasteride-treated rats. These data suggest that androgens, primarily testosterone, are required during the early phases of gubernacular outgrowth for subsequent successful completion of testicular descent.
Zhai, Chao; Alderisio, Francesco; Slowinski, Piotr; Tsaneva-Atanasova, Krasimira; di Bernardo, Mario
2018-03-01
The mirror game has been recently proposed as a simple, yet powerful paradigm for studying interpersonal interactions. It has been suggested that a virtual partner able to play the game with human subjects can be an effective tool to affect the underlying neural processes needed to establish the necessary connections between the players, and also to provide new clinical interventions for rehabilitation of patients suffering from social disorders. Inspired by the motor processes of the central nervous system (CNS) and the musculoskeletal system in the human body, in this paper we develop a novel interactive cognitive architecture based on nonlinear control theory to drive a virtual player (VP) to play the mirror game with a human player (HP) in different configurations. Specifically, we consider two cases: 1) the VP acts as leader and 2) the VP acts as follower. The crucial problem is to design a feedback control architecture capable of imitating and following or leading an HP in a joint action task. The movement of the end-effector of the VP is modeled by means of a feedback controlled Haken-Kelso-Bunz (HKB) oscillator, which is coupled with the observed motion of the HP measured in real time. To this aim, two types of control algorithms (adaptive control and optimal control) are used and implemented on the HKB model so that the VP can generate a human-like motion while satisfying certain kinematic constraints. A proof of convergence of the control algorithms is presented together with an extensive numerical and experimental validation of their effectiveness. A comparison with other existing designs is also discussed, showing the flexibility and the advantages of our control-based approach.
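The feedback-coupled oscillator at the core of the virtual player can be sketched numerically. The equation form below, x'' + (alpha*x^2 + beta*x'^2 - gamma)*x' + omega^2*x = u, is the commonly cited HKB end-effector model, here with a simple position-error coupling u = c*(x_h - x); all parameter values and the human trajectory are illustrative assumptions, not the paper's adaptive or optimal controllers:

```python
import numpy as np

# Illustrative simulation of a feedback-coupled HKB oscillator driving a
# virtual player's end-effector toward a (toy) human player's trajectory.
alpha, beta, gamma, omega, c = 1.0, 1.0, 1.0, 2.0, 5.0
dt, T = 0.001, 30.0
steps = int(T / dt)
x, v = 0.1, 0.0
xs = np.empty(steps)
for k in range(steps):
    x_h = np.sin(1.5 * k * dt)                  # assumed human player motion
    # HKB dynamics with nonlinear damping plus position-error coupling.
    a = -(alpha * x * x + beta * v * v - gamma) * v - omega ** 2 * x + c * (x_h - x)
    v += dt * a                                 # semi-implicit Euler integration
    x += dt * v
    xs[k] = x
```

The nonlinear damping term keeps the motion bounded and human-like in amplitude, while the coupling term entrains the virtual player to the observed trajectory.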
Ionosphere-magnetosphere coupling and convection
NASA Technical Reports Server (NTRS)
Wolf, R. A.; Spiro, R. W.
1984-01-01
The following international Magnetospheric Study quantitative models of observed ionosphere-magnetosphere events are reviewed: (1) a theoretical model of convection; (2) algorithms for deducing ionospheric current and electric-field patterns from sets of ground magnetograms and ionospheric conductivity information; and (3) empirical models of ionospheric conductances and polar cap potential drop. Research into magnetic-field-aligned electric fields is reviewed, particularly magnetic-mirror effects and double layers.
Schneider, Adrian; Pezold, Simon; Baek, Kyung-Won; Marinov, Dilyan; Cattin, Philippe C
2016-09-01
PURPOSE: During the past five decades, laser technology has emerged and is nowadays part of a great number of scientific and industrial applications. In the medical field, the integration of laser technology is on the rise and has already been widely adopted in contemporary medical applications. However, using a laser to cut bone and perform general osteotomy surgical tasks is new. In this paper, we describe a method to calibrate a laser-deflecting tilting mirror and integrate it into a sophisticated laser osteotome involving next-generation robots and optical tracking. METHODS: A mathematical model was derived that describes a controllable deflection mirror by the general projective transformation. This makes the application of well-known camera calibration methods possible. In particular, the direct linear transformation algorithm is applied to calibrate and integrate a laser-deflecting tilting mirror into the affine transformation chain of a surgical system. RESULTS: Experiments were performed on synthetically generated calibration input, and the calibration was tested with real data. The determined target registration errors at a working distance of 150 mm for both simulated input and real data agree with the declared noise level of the applied optical 3-D tracking system: the evaluation of the synthetic input showed an error of 0.4 mm, and the error with the real data was 0.3 mm.
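The direct linear transformation step can be sketched for the planar case: given point correspondences related by a projective transformation, DLT stacks two linear equations per correspondence and takes the SVD null vector as the transformation. The specific matrix and points below are illustrative assumptions:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transformation: estimate the 3x3 projective matrix H with
    dst ~ H @ src (homogeneous coordinates) from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two homogeneous linear equations.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)        # null vector = homography up to scale
    return H / H[2, 2]

# Synthetic check: generate correspondences from a known projective transform.
H_true = np.array([[1.1, 0.02, 3.0], [-0.05, 0.95, -2.0], [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [5, 3]], dtype=float)
h = np.c_[src, np.ones(5)] @ H_true.T
dst = h[:, :2] / h[:, 2:]
H = dlt_homography(src, dst)
```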
Optical design of free face reflective headlamps
NASA Astrophysics Data System (ADS)
Cen, Zhao Feng; Li, Xiao Tong; Deng, Shi Tao
2005-02-01
Headlamps are installed at the front of automobiles for road lighting, with illumination and anti-dazzle requirements governed by standards such as the ECE regulations. Free face reflective headlamps (FFR headlamps) are now increasingly applied, and the light-distribution design of the FFR mirror has become an important subject in the field of automobile parts. In this paper, the surface shape of FFR headlamps is analyzed and described as a multi-partition aspherical surface with a few simple parameters. According to the fundamental principles of geometrical optics, and using the theory of ray transmission with energy, millions of real rays emitted from the low-beam and high-beam filaments are traced, and the relative illumination intensity at a test screen 25 m from the automobile is obtained. The description of FFR mirrors is discussed, the algorithm for FFR headlamp design is presented, a flow chart is given, and light-distribution simulation software with a friendly interface is developed. In the light-distribution graphic interface of the software, the illumination area can be dragged to a given position while the parameters of the current partition of the FFR mirror are automatically updated. Using this software, FFR headlamps satisfying the criteria can be designed very quickly, and the 3-D coordinates of any point on the mirror can be obtained. This makes CAM of FFR headlamps easy.
Testicular descent related to growth hormone treatment.
Papadimitriou, Anastasios; Fountzoula, Ioanna; Grigoriadou, Despina; Christianakis, Stratos; Tzortzatou, Georgia
2003-01-01
An 8.7-year-old boy with cryptorchidism and growth hormone (GH) deficiency due to septo-optic dysplasia presented with testicular descent related to the commencement of hGH treatment. This case suggests a role for GH in testicular descent.
Aircraft Vortex Wake Descent and Decay under Real Atmospheric Effects
DOT National Transportation Integrated Search
1973-10-01
Aircraft vortex wake descent and decay in a real atmosphere is studied analytically. Factors relating to encounter hazard, wake generation, wake descent and stability, and atmospheric dynamics are considered. Operational equations for encounter hazar...
NASA Astrophysics Data System (ADS)
Golomazov, M. M.; Ivankov, A. A.
2016-12-01
Methods for calculating the aerodynamic impact of the Martian atmosphere on the "ExoMars-2018" descent module, developed to solve the problem of heat protection of the module during aerodynamic deceleration, are presented, together with the results of the investigation. The flow field and the radiative and convective heat exchange are calculated along the trajectory of the descent module until parachute system activation.
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1974-01-01
Apollo lunar-descent guidance transfers the Lunar Module from a near-circular orbit to touchdown, traversing a 17 deg central angle and a 15 km altitude in 11 min. A group of interactive programs in an onboard computer guide the descent, controlling attitude and the descent propulsion system throttle. A ground-based program pre-computes guidance targets. The concepts involved in this guidance are described. Explicit and implicit guidance are discussed, guidance equations are derived, and the earlier Apollo explicit equation is shown to be an inferior special case of the later implicit equation. Interactive guidance, by which the two-man crew selects a landing site in favorable terrain and directs the trajectory there, is discussed. Interactive terminal-descent guidance enables the crew to control the essentially vertical descent rate in order to land in minimum time with safe contact speed. The attitude maneuver routine uses concepts that make gimbal lock inherently impossible.
NASA Technical Reports Server (NTRS)
Smith, Charlee C., Jr.; Lovell, Powell M., Jr.
1954-01-01
An investigation is being conducted to determine the dynamic stability and control characteristics of a 0.13-scale flying model of the Convair XFY-1 vertically rising airplane. This paper presents the results of flight and force tests to determine the stability and control characteristics of the model in vertical descent and landings in still air. The tests indicated that landings, including vertical descents from altitudes representing up to 400 feet for the full-scale airplane and at rates of descent up to 15 or 20 feet per second (full scale), can be performed satisfactorily. Sustained vertical descent in still air will probably be more difficult to perform because of large random trim changes that become greater as the descent velocity increases. A slight steady head wind or cross wind might be sufficient to eliminate the random trim changes.
Swanson, C.; Jandovitz, P.; Cohen, S. A.
2018-02-27
We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV, spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM resolution at 5.9 keV. Finally, the algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.
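A Poisson-flavored spectrum inversion can be sketched with the standard MLEM/Richardson-Lucy iteration, which is the classical Poisson maximum-likelihood unfolding; the paper's Poisson-regularized algorithm differs in its regularization, so this is only a generic stand-in. The Gaussian detector response and toy line-plus-continuum spectrum are illustrative assumptions:

```python
import numpy as np

n = 30
# Toy detector response: each true energy bin spreads into its neighbors.
R = np.array([[np.exp(-0.5 * ((i - j) / 1.5) ** 2) for j in range(n)]
              for i in range(n)])
R /= R.sum(axis=0)                              # columns sum to 1 (counts conserved)

s_true = np.full(n, 5.0)                        # continuum ...
s_true[8] += 100.0                              # ... plus two emission lines
s_true[20] += 60.0
m = R @ s_true                                  # measured (noise-free) spectrum

# MLEM / Richardson-Lucy: multiplicative Poisson maximum-likelihood updates.
s = np.full(n, m.sum() / n)                     # flat, positive initial guess
for _ in range(500):
    s *= R.T @ (m / (R @ s)) / R.sum(axis=0)
```

The multiplicative form keeps the estimate nonnegative and conserves total counts, two properties that matter for pulse-height data; a regularized variant adds a penalty to suppress noise amplification when counting statistics are poor.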
Zou, Weiyao; Qi, Xiaofeng; Burns, Stephen A
2011-07-01
We implemented a Lagrange-multiplier (LM)-based damped least-squares (DLS) control algorithm in a woofer-tweeter dual deformable-mirror (DM) adaptive optics scanning laser ophthalmoscope (AOSLO). The algorithm uses data from a single Shack-Hartmann wavefront sensor to simultaneously correct large-amplitude low-order aberrations by a woofer DM and small-amplitude higher-order aberrations by a tweeter DM. We measured the in vivo performance of high resolution retinal imaging with the dual DM AOSLO. We compared the simultaneous LM-based DLS dual DM controller with both single DM controller, and a successive dual DM controller. We evaluated performance using both wavefront (RMS) and image quality metrics including brightness and power spectrum. The simultaneous LM-based dual DM AO can consistently provide near diffraction-limited in vivo routine imaging of human retina.
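The damped least-squares step at the core of such a controller can be sketched. The block below computes actuator commands minimizing ||phi - M a||^2 + lam*||a||^2 for a random toy influence matrix; it omits the Lagrange-multiplier constraint that splits corrections between woofer and tweeter, and all dimensions and values are illustrative assumptions:

```python
import numpy as np

def dls_command(M, phi, lam=1e-3):
    """Damped least-squares actuator command: minimizes
    ||phi - M a||^2 + lam * ||a||^2, the core of a DLS wavefront controller."""
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ phi)

rng = np.random.default_rng(3)
M = rng.standard_normal((40, 12))   # influence matrix: 40 sensor slopes, 12 actuators
a_true = rng.standard_normal(12)
phi = M @ a_true                    # measured wavefront (noise-free toy case)
a = dls_command(M, phi, lam=1e-6)
residual = np.linalg.norm(phi - M @ a) / np.linalg.norm(phi)
```

The damping term lam regularizes poorly sensed actuator modes; in a dual-DM system an additional constraint (e.g. via a Lagrange multiplier) decides how the correction is shared between the two mirrors.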
Multi-AUV Target Search Based on Bioinspired Neurodynamics Model in 3-D Underwater Environments.
Cao, Xiang; Zhu, Daqi; Yang, Simon X
2016-11-01
Target search in 3-D underwater environments is a challenge in multiple autonomous underwater vehicle (multi-AUV) exploration. This paper focuses on an effective strategy for multi-AUV target search in 3-D underwater environments with obstacles. First, the Dempster-Shafer theory of evidence is applied to extract environmental information from the sonar data and build a grid map of the underwater environment. Second, a topologically organized bioinspired neurodynamics model based on the grid map is constructed to represent the dynamic environment. The target globally attracts the AUVs through the dynamic neural activity landscape of the model, while the obstacles locally push the AUVs away to avoid collisions. Finally, the AUVs plan their search paths to the targets autonomously by a steepest gradient descent rule. The proposed algorithm deals with various situations, such as static target search, dynamic target search, and the breakdown of one or several AUVs in 3-D underwater environments with obstacles. The simulation results show that the proposed algorithm is capable of guiding multi-AUV teams to accomplish search tasks for multiple targets with higher efficiency and adaptability than other algorithms.
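The steepest-ascent rule on an activity landscape can be sketched in 2-D (the paper works in 3-D). In the toy below the target's activity diffuses through a grid, obstacles stay clamped negative, and the agent repeatedly moves to its highest-activity neighbor; the relaxation rule and grid layout are illustrative assumptions rather than the paper's shunting neurodynamics:

```python
import numpy as np

# Minimal 2-D stand-in for a bioinspired activity landscape search.
H, W = 10, 10
target, start = (8, 8), (0, 0)
obstacles = {(4, y) for y in range(2, 9)}   # a wall with gaps at both ends

act = np.zeros((H, W))
for _ in range(200):                        # relax the activity landscape
    new = act.copy()
    for i in range(H):
        for j in range(W):
            nb = [act[p, q] for p, q in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                  if 0 <= p < H and 0 <= q < W]
            new[i, j] = 0.5 * max(nb + [0.0])   # excitation propagates and decays
    new[target] = 1.0                           # target: clamped source of attraction
    for ob in obstacles:
        new[ob] = -1.0                          # obstacles: clamped repulsion
    act = new

pos, path = start, [start]
for _ in range(50):                         # steepest gradient ascent rule
    i, j = pos
    nbrs = [(p, q) for p, q in [(i-1, j), (i+1, j), (i, j-1), (i, j+1),
                                (i-1, j-1), (i-1, j+1), (i+1, j-1), (i+1, j+1)]
            if 0 <= p < H and 0 <= q < W]
    pos = max(nbrs, key=lambda c: act[c])   # move to the highest-activity neighbor
    path.append(pos)
    if pos == target:
        break
```

Because activity decays monotonically with obstacle-avoiding distance from the target and obstacles are clamped negative, the greedy ascent cannot get trapped or collide.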
Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle
2013-01-01
The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
Zhang, Yuwei; Cao, Zexing; Zhang, John Zenghui; Xia, Fei
2017-02-27
Construction of coarse-grained (CG) models of large biomolecules for multiscale simulations demands a rigorous definition of their CG sites. Several coarse-graining methods, such as simulated annealing and steepest descent (SASD) based on essential dynamics coarse-graining (ED-CG) or the stepwise local iterative optimization (SLIO) based on fluctuation maximization coarse-graining (FM-CG), have been developed for this purpose. However, the practical applications of methods such as SASD based on ED-CG are limited because they are too expensive. In this work, we extend the applicability of ED-CG by combining it with the SLIO algorithm. A comprehensive comparison of the optimized results and accuracy of various ED-CG-based algorithms shows that SLIO is both the fastest and the most accurate among them. ED-CG combined with SLIO gives converged results as the number of CG sites increases, demonstrating that it is another efficient method for coarse-graining large biomolecules. The construction of CG sites for the Ras protein using MD fluctuations demonstrates that CG sites derived from FM-CG accurately reflect the fluctuation properties of the secondary structures in Ras.
Pixel-By-Pixel Estimation of Scene Motion in Video
NASA Astrophysics Data System (ADS)
Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.
2017-05-01
The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame; these vectors form a shift vector field. As the estimated parameters of the vectors, the paper studies their projections and their polar parameters. Two methods for estimating the shift vector field are considered. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left; subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses correlation between rows to increase processing efficiency. It processes rows one after another, changing direction after each row, and uses the obtained values to form the resulting estimate. The paper studies two criteria for its formation: minimum of the gradient estimate and maximum of the correlation coefficient. Examples are given of experimental results of pixel-by-pixel estimation for a video with a moving object and of estimating a moving object's trajectory using the shift vector field.
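The stochastic-gradient shift estimation can be sketched for a single global shift (the paper estimates a per-pixel field). Each SGD step below samples one random pixel and follows a numerical gradient of its squared inter-frame difference; the smooth synthetic frames, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear interpolation of img at fractional coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x0 + 1] +
            dy * (1 - dx) * img[y0 + 1, x0] + dy * dx * img[y0 + 1, x0 + 1])

# Smooth synthetic frame and a second frame shifted by a sub-pixel amount.
frame0 = np.fromfunction(lambda i, j: np.sin(0.3 * i) + np.cos(0.2 * j), (40, 40))
true_shift = (1.3, -0.7)                        # (dy, dx) between the frames
frame1 = np.array([[bilinear(frame0, i + true_shift[0], j + true_shift[1])
                    for j in range(5, 35)] for i in range(5, 35)])

# Stochastic gradient descent on the shift estimate, one random pixel per step.
rng = np.random.default_rng(4)
shift, lr, eps = np.zeros(2), 0.2, 1e-4
for _ in range(4000):
    i, j = rng.integers(0, 30, size=2)          # random pixel of frame1

    def err(s):                                 # squared inter-frame difference
        d = bilinear(frame0, i + 5 + s[0], j + 5 + s[1]) - frame1[i, j]
        return d * d

    g = np.array([(err(shift + eps * e) - err(shift - eps * e)) / (2 * eps)
                  for e in np.eye(2)])          # central-difference gradient
    shift -= lr * g
```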
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of large amounts of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis remain open. In this study, we address the problem of fitting the parameters of a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can exhibit several local performance maxima. Hence, optimization strategies that cannot escape local performance maxima, such as the hill climbing algorithm, often terminate in a local maximum.
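The failure mode named in the abstract, hill climbing terminating at a local performance maximum, is easy to reproduce on a toy one-parameter "performance" function. The sketch below is illustrative only; the function and the random-restart remedy are assumptions of this note, not the authors' pipeline.

```python
import numpy as np

xs = np.linspace(0.0, 6.0, 601)
# toy "segmentation performance" over one parameter: a minor peak (~1.0 at x=1)
# and the global peak (~2.0 at x=4)
perf = np.exp(-(xs - 1.0) ** 2) + 2.0 * np.exp(-((xs - 4.0) / 0.5) ** 2)

def hill_climb(i):
    """Greedy ascent on the grid: move to a better neighbor until none exists."""
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(xs)]
        best_nbr = max(nbrs, key=lambda j: perf[j])
        if perf[best_nbr] <= perf[i]:
            return i
        i = best_nbr

stuck = hill_climb(0)                        # start at x=0 -> stalls on minor peak
rng = np.random.default_rng(0)
starts = rng.integers(0, len(xs), size=20)   # random restarts
best = max((hill_climb(int(i)) for i in starts), key=lambda i: perf[i])
print(perf[stuck], perf[best])
```

Greedy ascent from the left edge stalls on the minor peak, while a handful of random restarts reaches the global one; this is exactly the behavior that visual exploration of the parameter space is meant to expose.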
Pardo-Montero, Juan; Fenwick, John D
2010-06-01
The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. 
The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape comprising two regions: one where the dose to the target is close to prescription and trade-offs can be made between doses to the organs at risk and (small) changes in target dose, and one where very substantial rectal sparing is achieved at the cost of large target underdosage. Plans computed following the approach using a conformal arc and four blocked arcs generally lie close to the Pareto front, although distances of some plans from high-gradient regions of the Pareto front can be greater. Only around 12% of plans lie a relative Euclidean distance of 0.15 or greater from the Pareto front. Using the alternative distance measure of Craft ["Calculating and controlling the error of discrete representations of Pareto surfaces in convex multi-criteria optimization," Phys. Medica (to be published)], around 2/5 of plans lie more than 0.05 from the front. Computation of blocked arcs is quite fast, the algorithms requiring 35%-80% of the running time per iteration needed for conventional inverse plan computation. The geometry-based arc approach to multicriteria optimization of rotational therapy allows solutions to be obtained that lie close to the Pareto front. Both the image-reconstruction type and gradient-descent algorithms produce similar modulated arcs, the latter perhaps preferred because it is more easily implementable in standard treatment planning systems. Moderate unblocking provides a good way of dealing with OARs that abut the PTV. Optimization of geometry-based arcs is faster than usual inverse optimization of treatment plans, making this approach more rapid than an inverse-based Pareto front reconstruction.
Kidd, La Creis Renee; VanCleave, Tiva T.; Doll, Mark A.; Srivastava, Daya S.; Thacker, Brandon; Komolafe, Oyeyemi; Pihur, Vasyl; Brock, Guy N.; Hein, David W.
2011-01-01
Objective We evaluated the individual and combination effects of NAT1, NAT2 and tobacco smoking in a case-control study of 219 incident prostate cancer (PCa) cases and 555 disease-free men. Methods Allelic discriminations for 15 NAT1 and NAT2 loci were detected in germ-line DNA samples using TaqMan polymerase chain reaction (PCR) assays. Single gene, gene-gene and gene-smoking interactions were analyzed using logistic regression models and multi-factor dimensionality reduction (MDR) adjusted for age and subpopulation stratification. MDR involves a rigorous algorithm that has ample statistical power to assess and visualize gene-gene and gene-environment interactions using relatively small sample sizes (i.e., 200 cases and 200 controls). Results Despite the relatively high prevalence of NAT1*10/*10 (40.1%), NAT2 slow (30.6%), and NAT2 very slow acetylator genotypes (10.1%) among our study participants, these putative risk factors did not individually or jointly increase PCa risk among all subjects or a subset analysis restricted to tobacco smokers. Conclusion Our data do not support the use of N-acetyltransferase genetic susceptibilities as PCa risk factors among men of African descent; however, subsequent studies in larger sample populations are needed to confirm this finding. PMID:21709725
Understanding the Convolutional Neural Networks with Gradient Descent and Backpropagation
NASA Astrophysics Data System (ADS)
Zhou, XueFei
2018-04-01
With the development of computer technology, the applications of machine learning are more and more extensive, and machine learning is providing endless opportunities to develop new applications. One of those applications is image recognition using Convolutional Neural Networks (CNNs), one of the most common algorithms in image recognition; it is important for every scholar interested in this field to understand its theory and structure. CNNs are mainly used in computer identification, especially in voice and text recognition and related applications. A CNN utilizes a hierarchical structure with different layers to accelerate computation. In addition, the greatest features of CNNs are weight sharing and dimension reduction, which together account for the high effectiveness and efficiency of CNNs in terms of computing speed and error rate. With the help of other learning algorithms, CNNs can be used in several machine learning scenarios, especially in deep learning. Following a general introduction to the background and to CNNs, this paper focuses on summarizing how gradient descent and backpropagation work and how they contribute to the high performance of CNNs. Some practical applications are also discussed, and the last section presents conclusions and perspectives on future work.
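As a concrete illustration of the two mechanisms the paper summarizes, here is a minimal, framework-free sketch (an assumption of this note, not code from the paper) that trains a one-hidden-layer network by full-batch gradient descent, with the backward pass written out by hand. A CNN's training differs only in that the per-layer forward/backward formulas are convolutions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 32).reshape(-1, 1)   # inputs
y = X ** 2                                       # target function to fit

W1 = rng.normal(0.0, 0.5, (1, 8)); b1 = rng.normal(0.0, 0.5, 8)   # hidden layer
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)               # output layer
lr, losses = 0.1, []

for _ in range(5000):
    # forward pass
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # backward pass (chain rule, layer by layer)
    dpred = 2.0 * err / len(X)              # dLoss/dpred
    dW2 = H.T @ dpred; db2 = dpred.sum(0)
    dH = (dpred @ W2.T) * (1.0 - H ** 2)    # tanh' = 1 - tanh^2
    dW1 = X.T @ dH; db1 = dH.sum(0)
    # gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(losses[0], losses[-1])
```

The loss drops by orders of magnitude as the descent proceeds; weight sharing in a convolutional layer simply means many positions contribute gradients to the same small kernel.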
Siggs, Owen M.; Miosge, Lisa A.; Roots, Carla M.; Enders, Anselm; Bertram, Edward M.; Crockford, Tanya L.; Whittle, Belinda; Potter, Paul K.; Simon, Michelle M.; Mallon, Ann-Marie; Brown, Steve D. M.; Beutler, Bruce; Goodnow, Christopher C.; Lunter, Gerton; Cornall, Richard J.
2013-01-01
Forward genetics screens with N-ethyl-N-nitrosourea (ENU) provide a powerful way to illuminate gene function and generate mouse models of human disease; however, the identification of causative mutations remains a limiting step. Current strategies depend on conventional mapping, so the propagation of affected mice requires non-lethal screens; accurate tracking of phenotypes through pedigrees is complex and uncertain; out-crossing can introduce unexpected modifiers; and Sanger sequencing of candidate genes is inefficient. Here we show how these problems can be efficiently overcome using whole-genome sequencing (WGS) to detect the ENU mutations and then identify regions that are identical by descent (IBD) in multiple affected mice. In this strategy, we use a modification of the Lander-Green algorithm to isolate causative recessive and dominant mutations, even at low coverage, on a pure strain background. Analysis of the IBD regions also allows us to calculate the ENU mutation rate (1.54 mutations per Mb) and to model future strategies for genetic screens in mice. The introduction of this approach will accelerate the discovery of causal variants, permit broader and more informative lethal screens to be used, reduce animal costs, and herald a new era for ENU mutagenesis. PMID:23382690
Design and Analysis of Map Relative Localization for Access to Hazardous Landing Sites on Mars
NASA Technical Reports Server (NTRS)
Johnson, Andrew E.; Aaron, Seth; Cheng, Yang; Montgomery, James; Trawny, Nikolas; Tweddle, Brent; Vaughan, Geoffrey; Zheng, Jason
2016-01-01
Human and robotic planetary lander missions require accurate surface relative position knowledge to land near science targets or next to pre-deployed assets. In the absence of GPS, accurate position estimates can be obtained by automatically matching sensor data collected during descent to an on-board map. The Lander Vision System (LVS) that is being developed for Mars landing applications generates landmark matches in descent imagery and combines these with inertial data to estimate vehicle position, velocity and attitude. This paper describes recent LVS design work focused on making the map relative localization algorithms robust to challenging environmental conditions like bland terrain, appearance differences between the map and image and initial input state errors. Improved results are shown using data from a recent LVS field test campaign. This paper also fills a gap in analysis to date by assessing the performance of the LVS with data sets containing significant vertical motion including a complete data set from the Mars Science Laboratory mission, a Mars landing simulation, and field test data taken over multiple altitudes above the same scene. Accurate and robust performance is achieved for all data sets indicating that vertical motion does not play a significant role in position estimation performance.
A Descent Rate Control Approach to Developing an Autonomous Descent Vehicle
NASA Astrophysics Data System (ADS)
Fields, Travis D.
Circular parachutes have been used for aerial payload/personnel deliveries for over 100 years. In the past two decades, significant work has been done to improve the landing accuracies of cargo deliveries for humanitarian and military applications. This dissertation discusses the approach developed in which a circular parachute is used in conjunction with an electro-mechanical reefing system to manipulate the landing location. Rather than attempt to steer the autonomous descent vehicle directly, control of the landing location is accomplished by modifying the amount of time spent in a particular wind layer. Descent rate control is performed by reversibly reefing the parachute canopy. The first stage of the research investigated the use of a single actuation during descent (with periodic updates), in conjunction with a curvilinear target. Simulation results using real-world wind data are presented, illustrating the utility of the methodology developed. Additionally, hardware development and flight-testing of the single actuation autonomous descent vehicle are presented. The next phase of the research focuses on expanding the single actuation descent rate control methodology to incorporate a multi-actuation path-planning system. By modifying the parachute size throughout the descent, the controllability of the system greatly increases. The trajectory planning methodology developed provides a robust approach to accurately manipulate the landing location of the vehicle. The primary benefits of this system are the inherent robustness to release location errors and the ability to overcome vehicle uncertainties (mass, parachute size, etc.). A separate application of the path-planning methodology is also presented. An in-flight path-prediction system was developed for use in high-altitude ballooning by utilizing the path-planning methodology developed for descent vehicles. 
The developed onboard system improves landing location predictions in-flight using collected flight information during the ascent and descent. Simulation and real-world flight tests (using the developed low-cost hardware) demonstrate the significance of the improvements achievable when flying the developed system.
Miller, Vonda H; Jansen, Ben H
2008-12-01
Computer algorithms that match human performance in recognizing written text or spoken conversation remain elusive. The reasons why the human brain far exceeds any existing recognition scheme in its ability to generalize and to extract invariant characteristics relevant to category matching are not clear. However, it has been postulated that the dynamic distribution of brain activity (spatiotemporal activation patterns) is the mechanism by which stimuli are encoded and matched to categories. This research focuses on supervised learning for category discrimination in an oscillatory neural network model, with classification accomplished using a trajectory-based distance metric. Since the distance metric is differentiable, a supervised learning algorithm based on gradient descent is demonstrated. Classification of spatiotemporal frequency transitions and their relation to a priori assessed categories is shown, along with the improved classification results after supervised training. The results indicate that this spatiotemporal representation of stimuli and the associated distance metric are useful for simple pattern recognition tasks and that supervised learning improves classification results.
Multigrid one shot methods for optimal control problems: Infinite dimensional control
NASA Technical Reports Server (NTRS)
Arian, Eyal; Taasan, Shlomo
1994-01-01
The multigrid one shot method for optimal control problems, governed by elliptic systems, is introduced for the infinite dimensional control space. In this case, the control variable is a function whose discrete representation involves an increasing number of variables with grid refinement. The minimization algorithm uses Lagrange multipliers to calculate sensitivity gradients. A preconditioned gradient descent algorithm is accelerated by a set of coarse grids. It optimizes for different scales in the representation of the control variable on different discretization levels. An analysis which reduces the problem to the boundary is introduced. It is used to approximate the two-level asymptotic convergence rate, to determine the amplitude of the minimization steps, and to choose a high-pass filter to be used when necessary. The effectiveness of the method is demonstrated on a series of test problems. The new method enables the solution of optimal control problems at the cost of solving the corresponding analysis problems just a few times.
Graph Matching: Relax at Your Own Risk.
Lyzinski, Vince; Fishkind, Donniell E; Fiori, Marcelo; Vogelstein, Joshua T; Priebe, Carey E; Sapiro, Guillermo
2016-01-01
Graph matching-aligning a pair of graphs to minimize their edge disagreements-has received wide-spread attention from both theoretical and applied communities over the past several decades, including combinatorics, computer vision, and connectomics. Its attention can be partially attributed to its computational difficulty. Although many heuristics have previously been proposed in the literature to approximately solve graph matching, very few have any theoretical support for their performance. A common technique is to relax the discrete problem to a continuous problem, therefore enabling practitioners to bring gradient-descent-type algorithms to bear. We prove that an indefinite relaxation (when solved exactly) almost always discovers the optimal permutation, while a common convex relaxation almost always fails to discover the optimal permutation. These theoretical results suggest that initializing the indefinite algorithm with the convex optimum might yield improved practical performance. Indeed, experimental results illuminate and corroborate these theoretical findings, demonstrating that excellent results are achieved in both benchmark and real data problems by amalgamating the two approaches.
Mars Entry Atmospheric Data System Modelling and Algorithm Development
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; OKeefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.
2009-01-01
The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Renliang; Dogandžić, Aleksandar
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
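The proximal-gradient ingredient of such a block coordinate-descent scheme can be illustrated in isolation. The sketch below is generic nonnegative sparse recovery (ISTA-style, without the Nesterov acceleration or the active-set block of the paper) on a hypothetical linear model, not the polychromatic CT one; every name in it is an assumption of this note.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60))            # hypothetical linear measurement operator
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [3.0, 2.0, 1.5]    # sparse, nonnegative "density"
b = A @ x_true

lam = 0.1                                # sparsity penalty weight
t = 1.0 / np.linalg.norm(A, 2) ** 2      # step size 1/L, L = ||A||_2^2

x = np.zeros(60)
for _ in range(2000):
    g = A.T @ (A @ x - b)                # gradient of 0.5 * ||Ax - b||^2
    # prox of lam*||x||_1 plus nonnegativity: shifted soft threshold clipped at 0
    x = np.maximum(x - t * g - t * lam, 0.0)

print(np.flatnonzero(x > 0.1))   # recovered support
```

In the paper's setting this step would be accelerated (Nesterov) and alternated with an active-set update of the spectrum parameters, holding the other block fixed.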
Mars Pathfinder Atmospheric Entry Navigation Operations
NASA Technical Reports Server (NTRS)
Braun, R. D.; Spencer, D. A.; Kallemeyn, P. H.; Vaughan, R. M.
1997-01-01
On July 4, 1997, after traveling close to 500 million km, the Pathfinder spacecraft successfully completed entry, descent, and landing, coming to rest on the surface of Mars just 27 km from its target point. In the present paper, the atmospheric entry and approach navigation activities required in support of this mission are discussed. In particular, the flight software parameter update and landing site prediction analyses performed by the Pathfinder operations navigation team are described. A suite of simulation tools developed during Pathfinder's design cycle, but extendible to Pathfinder operations, are also presented. Data regarding the accuracy of the primary parachute deployment algorithm are extracted from the Pathfinder flight data, demonstrating that this algorithm performed as predicted. The increased probability of mission success through the software parameter update process is discussed. This paper also demonstrates the importance of modeling atmospheric flight uncertainties in the estimation of an accurate landing site. With these atmospheric effects included, the final landed ellipse prediction differs from the post-flight determined landing site by less than 0.5 km in downtrack.
A Space Affine Matching Approach to fMRI Time Series Analysis.
Chen, Liang; Zhang, Weishi; Liu, Hongbo; Feng, Shigang; Chen, C L Philip; Wang, Huili
2016-07-01
For fMRI time series analysis, an important challenge is to overcome the potential delay between the hemodynamic response signal and the cognitive stimuli signal, namely the same frequency but different phase (SFDP) problem. In this paper, a novel space affine matching feature is presented by introducing time domain and frequency domain features: the time domain feature is used to discern different stimuli, while the frequency domain feature eliminates the delay. We then propose a space affine matching (SAM) algorithm to match fMRI time series using the affine feature, in which a normal vector is estimated using gradient descent to explore the optimal time series matching. The experimental results illustrate that the SAM algorithm is insensitive to the delay between the hemodynamic response signal and the cognitive stimuli signal, and our approach significantly outperforms the GLM method when such a delay exists. The approach can help solve the SFDP problem in fMRI time series matching and is thus of great promise for revealing brain dynamics.
Improving the Incoherence of a Learned Dictionary via Rank Shrinkage.
Ubaru, Shashanka; Seghouane, Abd-Krim; Saad, Yousef
2017-01-01
This letter considers the problem of dictionary learning for sparse signal representation whose atoms have low mutual coherence. To learn such dictionaries, at each step, we first update the dictionary using the method of optimal directions (MOD) and then apply a dictionary rank shrinkage step to decrease its mutual coherence. In the rank shrinkage step, we first compute a rank 1 decomposition of the column-normalized least squares estimate of the dictionary obtained from the MOD step. We then shrink the rank of this learned dictionary by transforming the problem of reducing the rank to a nonnegative garrotte estimation problem and solving it using a path-wise coordinate descent approach. We establish theoretical results that show that the rank shrinkage step included will reduce the coherence of the dictionary, which is further validated by experimental results. Numerical experiments illustrating the performance of the proposed algorithm in comparison to various other well-known dictionary learning algorithms are also presented.
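For reference, the quantity being reduced here, the mutual coherence of a dictionary, is the largest absolute inner product between distinct column-normalized atoms. A minimal helper (an assumption of this note, not code from the letter):

```python
import numpy as np

def mutual_coherence(D):
    """Largest |<d_i, d_j>| over distinct column-normalized atoms of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)        # Gram matrix of normalized atoms
    np.fill_diagonal(G, 0.0)     # ignore self-products
    return float(G.max())

print(mutual_coherence(np.eye(4)))      # orthonormal atoms -> 0.0
D = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
print(mutual_coherence(D))              # overlapping atoms -> about 0.7071
```

Low coherence is what makes sparse codes over the dictionary stably recoverable, which is why the rank shrinkage step targets this quantity.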
An evaluation of descent strategies for TNAV-equipped aircraft in an advanced metering environment
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Schwab, R. W.; Groce, J. L.; Coote, M. A.
1986-01-01
The effects on system throughput and fleet fuel usage of arrival aircraft using three 4D RNAV descent strategies (cost-optimal, clean-idle Mach/CAS, and constant-descent-angle Mach/CAS), individually and in combination, were investigated in an advanced air traffic control metering environment. Results are presented for all mixtures of arrival traffic consisting of three Boeing commercial jet types and for all combinations of the three descent strategies, for a typical en route metering airport arrival distribution.
OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE
NASA Technical Reports Server (NTRS)
Lee, H.
1994-01-01
For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. 
If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt-Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang
2016-01-01
The misalignment between recorded in-focus and out-of-focus images in the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates, for the first time, the theoretical relationship between the image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first performs a spatial 2-D cross-correlation of the misaligned images, reducing the offset to 1 or 2 pixels and narrowing the search range for alignment. It then eliminates the need for subpixel fine alignment by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel, achieving adaptive correction. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm in improving the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
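The first, coarse step of such an alignment, locating the integer pixel offset between two images via cross-correlation, can be sketched as follows. This is a generic FFT-based implementation under a circular-shift assumption, not the paper's code.

```python
import numpy as np

def coarse_offset(a, b):
    """Integer (dy, dx) such that b is approximately a circularly shifted by (dy, dx)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices to signed shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -4), axis=(0, 1))   # simulate a misaligned channel
print(coarse_offset(img, shifted))             # -> (3, -4)
```

With the offset reduced to a pixel or two this way, the residual misalignment can be absorbed as tip-tilt terms in the out-of-focus channel's OTF, as the paper describes.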
Power Control and Optimization of Photovoltaic and Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Ghaffari, Azad
The power map and Maximum Power Point (MPP) of Photovoltaic (PV) and Wind Energy Conversion Systems (WECS) depend strongly on the system dynamics and on environmental parameters such as solar irradiance, temperature, and wind speed. Power-optimization algorithms for PV systems and WECS are collectively known as Maximum Power Point Tracking (MPPT) algorithms. Gradient-based Extremum Seeking (ES), a non-model-based MPPT algorithm, drives the system to its peak point along the steepest-ascent direction, regardless of changes in the system dynamics or variations in the environmental parameters. Since the shape of the power map defines the gradient vector, a close estimate of that shape is needed to create user-assignable transients in the MPPT algorithm. The Hessian gives a precise estimate of the power map in a neighborhood of the MPP. Estimates of the inverse Hessian and of the gradient vector are the key components of the Newton-based ES algorithm. Hence, we generate an estimate of the Hessian using our proposed perturbation matrix, and we introduce a dynamic estimator to calculate its inverse, an essential part of our algorithm. We present various simulations and experiments on micro-converter PV systems to verify the validity of the proposed algorithm. The ES scheme can also be combined with other control algorithms to achieve a desired closed-loop performance. The WECS dynamics are slow, which makes the response of ES-based MPPT slower still. Hence, we present a control scheme, extended from Field-Oriented Control (FOC), combined with feedback linearization to reduce the convergence time of the closed-loop system. The nonlinear control also prevents magnetic saturation of the stator of the Induction Generator (IG). Combined with the ES, the proposed control algorithm guarantees closed-loop robustness to a high level of parameter uncertainty in the IG dynamics. Simulation results verify the effectiveness of the proposed algorithm.
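The gradient-based ES loop described in this abstract (dither the input, demodulate the measured power, integrate toward the peak) can be sketched in a few lines. The toy power map, gains, and dither parameters below are illustrative assumptions, not the converter model or tuning used in the work:

```python
import math

def power_map(v):
    """Toy concave power map with its maximum power point at v = 30."""
    return 100.0 - 0.5 * (v - 30.0) ** 2

def extremum_seek(v0=20.0, a=0.5, omega=5.0, k=1.0, h=1.0, dt=0.01, steps=20000):
    """Gradient-based ES: dither, washout, demodulate, integrate."""
    v_hat = v0
    p_lp = power_map(v0)                 # low-pass state of the washout filter
    for n in range(steps):
        s = math.sin(omega * n * dt)
        p = power_map(v_hat + a * s)     # perturbed power measurement
        p_lp += h * dt * (p - p_lp)      # washout: strip the slow DC level
        grad_est = (p - p_lp) * s        # demodulation ~ (a/2) * gradient
        v_hat += k * dt * grad_est       # integrate toward the MPP
    return v_hat
```

On average the update behaves like gradient ascent with gain k·a/2, so the operating point settles near v = 30 without any model of the map, mirroring the non-model-based character of ES.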
Khachatryan, Naira; Medeiros, Felipe A.; Sharpsten, Lucie; Bowd, Christopher; Sample, Pamela A.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Weinreb, Robert N.; Miki, Atsuya; Hammel, Na’ama; Zangwill, Linda M.
2015-01-01
Purpose: To evaluate racial differences in the development of visual field (VF) damage in glaucoma suspects. Design: Prospective, observational cohort study. Methods: Six hundred thirty-six eyes from 357 glaucoma suspects with normal VF at baseline were included from the multicenter African Descent and Glaucoma Evaluation Study (ADAGES). Racial differences in the development of VF damage were examined using multivariable Cox proportional hazards models. Results: Thirty-one (25.4%) of 122 participants of African descent and 47 (20.0%) of 235 participants of European descent developed VF damage (p=0.078). In multivariable analysis, worse baseline VF mean deviation, higher mean arterial pressure during follow-up, and a race × mean intraocular pressure (IOP) interaction term were significantly associated with the development of VF damage, suggesting that racial differences in the risk of VF damage varied with IOP. At higher mean IOP levels, race was predictive of the development of VF damage even after adjusting for potentially confounding factors. At mean IOPs during follow-up of 22, 24, and 26 mmHg, the multivariable hazard ratios (95% CI) for the development of VF damage in subjects of African compared with European descent were 2.03 (1.15–3.57), 2.71 (1.39–5.29), and 3.61 (1.61–8.08), respectively. However, at lower mean IOP levels during follow-up (below 22 mmHg), African descent was not predictive of the development of VF damage. Conclusion: In this cohort of glaucoma suspects with similar access to treatment, multivariable analysis revealed that at higher mean IOP during follow-up, individuals of African descent were more likely to develop VF damage than individuals of European descent. PMID:25597839
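In a Cox model with a race × IOP interaction, the hazard ratio for race is not a single number but a function of IOP: HR(IOP) = exp(β_race + β_interact · IOP). The snippet below is illustrative only; its two coefficients are back-solved from the reported hazard ratios (2.03 at 22 mmHg and 2.71 at 24 mmHg), not taken from the study's fitted model:

```python
import math

# Back-solved from the reported HRs; illustrative, not the fitted model.
b_interact = (math.log(2.71) - math.log(2.03)) / 2.0  # race x IOP slope
b_race = math.log(2.03) - 22.0 * b_interact           # race main effect

def hazard_ratio(mean_iop):
    """HR (African vs. European descent) at a given mean follow-up IOP."""
    return math.exp(b_race + b_interact * mean_iop)
```

Evaluating at 26 mmHg reproduces a value close to the reported 3.61, showing how a single interaction coefficient makes the hazard ratio grow by a constant factor (here about 1.34) per 2 mmHg of mean IOP.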
van der Stoep, T
BACKGROUND: Compared with their share of the general population, ethnic minorities are overrepresented in forensic psychiatry. If these minorities are to be treated successfully, we need to know more about this group. So far, however, little is known about the differences in mental disorders and types of offences between patients of non-Dutch descent and patients of Dutch descent.
AIM: To take the first steps towards obtaining the information needed to provide customised care for patients of non-Dutch descent.
METHOD: Within a group of patients admitted to the forensic psychiatric centre Oostvaarderskliniek between 2001 and 2014, we examined differences between patients of Dutch and non-Dutch descent with regard to treatment, diagnosis and offences committed.
RESULTS: The treatment of patients of non-Dutch descent lasted longer than that of patients of Dutch descent (8.5 years versus 6.6 years). Furthermore, patients from ethnic minority groups were more often diagnosed with schizophrenia (49.1% versus 21.4%), but less often with pervasive developmental disorders or sexual disorders. Patients of non-Dutch descent were more often convicted of sexual offences against victims aged 16 years or older, whereas patients of Dutch descent were more often convicted of sexual offences against victims under 16.
CONCLUSION: There are differences between patients of Dutch and non-Dutch descent with regard to treatment duration, diagnosis and the offences they commit. Future research needs to investigate whether these results are representative of the entire field of forensic psychiatry and to discover the reasons for these differences.