Sample records for atmospheric compensation algorithm

  1. Assimilation of nontraditional datasets to improve atmospheric compensation

    NASA Astrophysics Data System (ADS)

    Kelly, Michael A.; Osei-Wusu, Kwame; Spisz, Thomas S.; Strong, Shadrian; Setters, Nathan; Gibson, David M.

    2012-06-01

    Detection and characterization of space objects require the capability to derive physical properties such as brightness temperature and reflectance. These quantities, together with trajectory and position, are often used to correlate an object from a catalogue of known characteristics. However, retrieval of these physical quantities can be hampered by the radiative obscuration of the atmosphere. Atmospheric compensation must therefore be applied to remove the radiative signature of the atmosphere from electro-optical (EO) collections and enable object characterization. The JHU/APL Atmospheric Compensation System (ACS) was designed to perform atmospheric compensation for long, slant-range paths at wavelengths from the visible to infrared. Atmospheric compensation is critically important for air- and ground-based sensors collecting at low elevations near the Earth's limb. It can be demonstrated that undetected thin, sub-visual cirrus clouds in the line of sight (LOS) can significantly alter retrieved target properties (temperature, irradiance). The ACS algorithm employs non-traditional cirrus datasets and slant-range atmospheric profiles to estimate and remove atmospheric radiative effects from EO/IR collections. Results are presented for a NASA-sponsored collection in the near-IR (NIR) during hypersonic reentry of the Space Shuttle during STS-132.
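
    The ACS implementation itself is not described in this record; as a rough illustration only, the sketch below shows the generic band-wise compensation step such a system performs once a radiative transfer model has supplied the slant-path transmittance and path radiance (both are assumed inputs here, not quantities taken from the source).

    ```python
    import numpy as np

    def compensate_radiance(l_sensor, tau_path, l_path):
        """Remove the atmospheric signature from an at-sensor radiance spectrum.

        l_sensor : at-sensor radiance per band [W / (m^2 sr um)]
        tau_path : slant-path transmittance per band (0..1), e.g. from an RT model
        l_path   : atmospheric path radiance per band [W / (m^2 sr um)]
        Returns the estimated target-leaving radiance per band.
        """
        l_sensor = np.asarray(l_sensor, dtype=float)
        l_path = np.asarray(l_path, dtype=float)
        tau = np.clip(np.asarray(tau_path, dtype=float), 1e-6, None)
        return (l_sensor - l_path) / tau

    # Illustration: an undetected thin cirrus layer lowers transmittance and adds
    # path radiance, so ignoring it biases the retrieved target radiance.
    l_sensor = np.array([5.2, 4.8, 4.1])
    clear = compensate_radiance(l_sensor, [0.82, 0.85, 0.88], [0.6, 0.5, 0.4])
    cirrus = compensate_radiance(l_sensor, [0.74, 0.78, 0.81], [0.9, 0.8, 0.7])
    print(clear, cirrus)
    ```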

  2. Retrieval of atmospheric properties from hyper and multispectral imagery with the FLAASH atmospheric correction algorithm

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald

    2005-10-01

    Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN(TM) radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.
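
    The forward model underlying this class of correction is the standard reflectance equation cited in the FLAASH literature; the sketch below inverts it under the simplifying assumption that the adjacency (spatially averaged) reflectance equals the pixel reflectance. The coefficient values are toy numbers, not MODTRAN output.

    ```python
    import numpy as np

    def invert_reflectance(l_sensor, A, B, S, l_atm):
        """Invert at-sensor radiance to surface reflectance for one band.

        Forward model (standard FLAASH-style form):
            L = A*rho/(1 - rho_e*S) + B*rho_e/(1 - rho_e*S) + L_a
        Simplification used here: the adjacency reflectance rho_e is taken equal
        to the pixel reflectance rho, which gives a closed-form per-pixel solve.
        A, B, S (spherical albedo) and l_atm would normally come from MODTRAN
        runs; the values below are illustrative only.
        """
        y = np.asarray(l_sensor, dtype=float) - l_atm
        return y / (A + B + y * S)

    print(invert_reflectance([42.0, 55.0, 61.0], A=80.0, B=30.0, S=0.12, l_atm=8.0))
    ```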

  3. A computerized compensator design algorithm with launch vehicle applications

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.

    1976-01-01

    This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant control systems with a plant possessing a single control input and multiple outputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, the compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.

  4. Linear phase conjugation for atmospheric aberration compensation

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Stappaerts, Eddy A.

    1998-01-01

    Atmospherically induced aberrations can seriously degrade laser performance, greatly affecting the beam that finally reaches the target. Lasers propagated over any distance in the atmosphere suffer from a significant decrease in fluence at the target due to these aberrations. This is especially so for propagation over long distances. It is due primarily to fluctuations in the atmosphere over the propagation path, and from platform motion relative to the intended aimpoint. Also, delivery of high fluence to the target typically requires low beam divergence; thus, atmospheric turbulence, platform motion, or both result in a lack of fine aimpoint control to keep the beam directed at the target. To improve both the beam quality and amount of laser energy delivered to the target, Northrop Grumman has developed the Active Tracking System (ATS), a novel linear phase conjugation aberration compensation technique. Utilizing a silicon spatial light modulator (SLM) as a dynamic wavefront reversing element, ATS undoes aberrations induced by the atmosphere, platform motion or both. ATS continually tracks the target as well as compensates for atmospheric and platform motion induced aberrations. This results in a high fidelity, near-diffraction limited beam delivered to the target.

  5. Atmospheric Compensation and Surface Temperature and Emissivity Retrieval with LWIR Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Pieper, Michael

    Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus, the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, which have a linear relationship between their ground radiances and at-aperture radiances determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fitting of the at-aperture radiance and estimated ground radiance is done to estimate the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are found by finding the temperature that provides the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions enabling their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affects the shape of atmospheric estimates and how the clear channel assumption affects their magnitude. Emissivity spectra for solids usually have some roughness. The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect
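
    A minimal sketch of the two interwoven steps described above, under idealized assumptions (perfect blackbody pixels for the ISAC-style fit, a brute-force temperature search for the smoothness-based TES); it is not the thesis code, and the spectra are synthetic.

    ```python
    import numpy as np

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann (SI)

    def planck(wl_um, temp_k):
        """Blackbody spectral radiance [W / (m^2 sr um)] at wavelengths in microns."""
        wl = wl_um * 1e-6
        return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * temp_k)) - 1.0) * 1e-6

    def isac_fit(l_aperture, l_ground_est):
        """ISAC-style per-band linear fit: L_aperture ~= tau * L_ground + L_up."""
        tau, l_up = np.polyfit(l_ground_est, l_aperture, 1)
        return tau, l_up

    def tes_smoothness(l_ground, wl_um, t_grid):
        """TES: pick the temperature whose emissivity estimate L_ground / B(T) is smoothest."""
        best = (np.inf, None, None)
        for t in t_grid:
            eps = l_ground / planck(wl_um, t)
            roughness = np.sum(np.diff(eps, 2) ** 2)     # squared second differences
            if roughness < best[0]:
                best = (roughness, t, eps)
        return best[1], best[2]

    wl = np.linspace(8.0, 12.0, 40)                      # LWIR band centres [um]

    # ISAC step: blackbody-like pixels at several temperatures, one atmosphere
    l_ground = np.array([planck(wl, t) for t in (285.0, 295.0, 305.0)])
    l_aperture = 0.85 * l_ground + 1.2                   # "true" tau = 0.85, L_up = 1.2
    print(isac_fit(l_aperture[:, 20], l_ground[:, 20]))  # fit for one band

    # TES step: recover temperature and emissivity of a 300 K, eps = 0.97 surface
    t_hat, eps_hat = tes_smoothness(0.97 * planck(wl, 300.0), wl, np.arange(290, 310, 0.1))
    print(t_hat, eps_hat.mean())
    ```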

  6. Novel wavelength diversity technique for high-speed atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Arrasmith, William W.; Sullivan, Sean F.

    2010-04-01

    The defense, intelligence, and homeland security communities are driving a need for software dominant, real-time or near-real time atmospheric turbulence compensated imagery. The development of parallel processing capabilities is finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion to name a few. A novel approach to the computationally intensive case of software dominant optical and near infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation based approach to the wavelength diversity methodology using a parallel processing architecture enabling high speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.

  7. Iterative reconstruction methods in atmospheric tomography: FEWHA, Kaczmarz and Gradient-based algorithm

    NASA Astrophysics Data System (ADS)

    Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.

    2014-07-01

    The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real-time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three step approach which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three step approach, the Kaczmarz algorithm and the Gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
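
    The Kaczmarz method named above for the tomography step is the classical row-action projection; a generic sketch on a toy overdetermined linear system is shown below (it is not the E-ELT/OCTOPUS implementation, and the matrix here is a stand-in for the wavefront-sensor-to-layer geometry).

    ```python
    import numpy as np

    def kaczmarz(A, b, sweeps=50, relax=1.0, x0=None):
        """Kaczmarz row-action iteration for A x = b.

        Each update projects the current iterate onto the hyperplane defined by
        one row of A; cycling once through all rows is one sweep.
        """
        m, n = A.shape
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        row_norm2 = np.einsum('ij,ij->i', A, A)
        for _ in range(sweeps):
            for i in range(m):
                if row_norm2[i] > 0:
                    x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
        return x

    # toy tomography-like system: more measurements (slopes) than unknowns (layer values)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 50))
    x_true = rng.standard_normal(50)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = kaczmarz(A, b, sweeps=20)
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```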

  8. Adaptive optics compensation of orbital angular momentum beams with a modified Gerchberg-Saxton-based phase retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun

    2017-12-01

    Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
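
    The SPP-GS variant with a distorted probe Gaussian beam is specific to the paper above, but its building block is the classic Gerchberg-Saxton iteration between two Fourier-conjugate planes, sketched below on synthetic data.

    ```python
    import numpy as np

    def gerchberg_saxton(source_amp, target_amp, iters=100):
        """Classic Gerchberg-Saxton phase retrieval between two Fourier planes.

        source_amp : known field amplitude in the input plane
        target_amp : measured amplitude (sqrt of intensity) in the focal plane
        Returns the retrieved input-plane phase.
        """
        phase = np.zeros_like(source_amp)
        for _ in range(iters):
            far = np.fft.fft2(source_amp * np.exp(1j * phase))
            far = target_amp * np.exp(1j * np.angle(far))   # impose measured amplitude
            near = np.fft.ifft2(far)
            phase = np.angle(near)                          # impose known source amplitude
        return phase

    # toy check: a random phase screen and the far-field amplitude it produces
    rng = np.random.default_rng(1)
    n = 64
    true_phase = 0.5 * rng.standard_normal((n, n))
    amp = np.ones((n, n))
    target = np.abs(np.fft.fft2(amp * np.exp(1j * true_phase)))
    est_phase = gerchberg_saxton(amp, target, iters=200)
    # The result satisfies both amplitude constraints (up to the usual GS
    # ambiguities); the SPP-GS scheme in the record builds on this iteration.
    ```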

  9. Atmospheric turbulence compensation in orbital angular momentum communications: Advances and perspectives

    NASA Astrophysics Data System (ADS)

    Li, Shuhui; Chen, Shi; Gao, Chunqing; Willner, Alan E.; Wang, Jian

    2018-02-01

    Orbital angular momentum (OAM)-carrying beams have recently generated considerable interest due to their potential use in communication systems to increase transmission capacity and spectral efficiency. For OAM-based free-space optical (FSO) links, a critical challenge is atmospheric turbulence, which distorts the helical wavefronts of OAM beams, leading to a decrease in received power, crosstalk between multiple channels, and impaired link performance. In this paper, we review recent advances in turbulence effects compensation techniques for OAM-based FSO communication links. First, basic concepts and theoretical models of atmospheric turbulence are introduced. Second, atmospheric turbulence effects on OAM beams are theoretically and experimentally investigated and discussed. Then, several typical turbulence compensation approaches, including both adaptive optics-based (optical domain) and signal processing-based (electrical domain) techniques, are presented. Finally, key challenges and perspectives of compensation of turbulence-distorted OAM links are discussed.

  10. A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.

    2005-01-01

    This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, adaptive, and state space predictors. The paper presents proof that the stochastic approximation algorithm can achieve the best compensation among all four adaptive predictors, and intensively investigates the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and state space predictor can achieve better compensation of transport delay than the McFarland predictor.

  11. Compensating Atmospheric Turbulence Effects at High Zenith Angles with Adaptive Optics Using Advanced Phase Reconstructors

    NASA Astrophysics Data System (ADS)

    Roggemann, M.; Soehnel, G.; Archer, G.

    Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used for compensating these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation, and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wave front reconstructors and post detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed loop adaptive optics system using a conventional continuous facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles ranging up to 70 degrees. We have also simulated satellite images, and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.
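
    The exponential and stochastic-parallel-gradient-descent reconstructors compared in this record are not reproduced here; the sketch below shows only the baseline least-squares reconstructor, applied in a toy closed-loop integrator under an assumed linear slope model s = G a.

    ```python
    import numpy as np

    def least_squares_reconstructor(G):
        """Least-squares reconstructor R for a Hartmann-sensor model s = G a + noise.

        G maps deformable-mirror actuator commands (or modal coefficients) to
        measured wavefront slopes; R = pinv(G) gives the minimum-norm least-squares
        estimate a_hat = R s used to close the AO loop.
        """
        return np.linalg.pinv(G)

    # toy closed-loop iteration with an integrator controller
    rng = np.random.default_rng(2)
    n_slopes, n_act = 128, 40
    G = rng.standard_normal((n_slopes, n_act))
    R = least_squares_reconstructor(G)

    a_turb = rng.standard_normal(n_act)        # turbulence expressed on the actuator basis
    a_dm = np.zeros(n_act)                     # current mirror commands
    gain = 0.5
    for _ in range(20):
        slopes = G @ (a_turb - a_dm) + 0.01 * rng.standard_normal(n_slopes)
        a_dm += gain * (R @ slopes)            # integrator update toward the residual
    print(np.linalg.norm(a_turb - a_dm) / np.linalg.norm(a_turb))
    ```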

  12. Modified artificial fish school algorithm for free space optical communication with sensor-less adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cao, Jingtai; Zhao, Xiaohui; Li, Zhaokun; Liu, Wei; Gu, Haijun

    2017-11-01

    The performance of free space optical (FSO) communication systems is severely limited by atmospheric turbulence. Adaptive optics (AO) is an important method for overcoming atmospheric disturbance. In particular, under strong scintillation, the sensor-less AO system plays a major role in compensation. In this paper, a modified artificial fish school (MAFS) algorithm is proposed to compensate the aberrations in the sensor-less AO system. Both static and dynamic aberration compensation are analyzed, and the performance of FSO communication before and after compensation is compared. In addition, the MAFS algorithm is compared with the artificial fish school (AFS) algorithm, the stochastic parallel gradient descent (SPGD) algorithm and the simulated annealing (SA) algorithm. It is shown that the MAFS algorithm has a higher convergence speed than the SPGD and SA algorithms, and reaches a better convergence value than the AFS, SPGD and SA algorithms. The sensor-less AO system with the MAFS algorithm effectively increases the coupling efficiency at the receiving terminal in fewer iterations. In conclusion, the MAFS algorithm has great significance for sensor-less AO systems compensating atmospheric turbulence in FSO communication systems.
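
    The MAFS algorithm itself is not given in the abstract; the sketch below implements the SPGD baseline it is compared against, driving a scalar quality metric (a stand-in for fiber-coupling efficiency) with paired random perturbations.

    ```python
    import numpy as np

    def spgd(metric, n_act, iters=1000, perturb=0.1, gain=10.0, rng=None):
        """Stochastic parallel gradient descent on a scalar quality metric.

        metric(u) returns e.g. coupling efficiency or Strehl for the control
        vector u; SPGD estimates the gradient from paired +/- perturbations.
        """
        rng = rng or np.random.default_rng()
        u = np.zeros(n_act)
        for _ in range(iters):
            delta = perturb * rng.choice([-1.0, 1.0], size=n_act)   # Bernoulli perturbation
            dj = metric(u + delta) - metric(u - delta)
            u += gain * dj * delta                                  # ascend the metric
        return u

    # toy metric: "coupling efficiency" peaked at an unknown aberration vector
    rng = np.random.default_rng(3)
    target = rng.standard_normal(20)
    metric = lambda u: np.exp(-np.sum((u - target) ** 2) / 20.0)
    u_hat = spgd(metric, n_act=20, iters=3000, rng=rng)
    print(metric(u_hat))
    ```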

  13. Heat Transport Compensation in Atmosphere and Ocean over the Past 22,000 Years

    PubMed Central

    Yang, Haijun; Zhao, Yingying; Liu, Zhengyu; Li, Qing; He, Feng; Zhang, Qiong

    2015-01-01

    The Earth’s climate has experienced dramatic changes over the past 22,000 years; however, the total meridional heat transport (MHT) of the climate system remains stable. A 22,000-year-long simulation using an ocean-atmosphere coupled model shows that the changes in atmosphere and ocean MHT are significant but tend to be out of phase in most regions, mitigating the total MHT change, which helps to maintain the stability of the Earth’s overall climate. A simple conceptual model is used to understand the compensation mechanism. The simple model can reproduce qualitatively the evolution and compensation features of the MHT over the past 22,000 years. We find that global energy conservation requires compensating changes in the atmosphere and ocean heat transports. The degree of compensation is mainly determined by the local climate feedback between surface temperature and net radiation flux at the top of the atmosphere. This study suggests that an internal mechanism may exist in the climate system, which might have played a role in constraining the global climate change over the past 22,000 years. PMID:26567710

  14. Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation

    PubMed Central

    Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu

    2015-01-01

    To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401

  15. Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation

    PubMed Central

    Munoz Diaz, Estefania; Caamano, Maria; Fuentes Sánchez, Francisco Javier

    2017-01-01

    The navigation of pedestrians based on inertial sensors, i.e., accelerometers and gyroscopes, has experienced a great growth over the last years. However, the noise of medium- and low-cost sensors causes a high error in the orientation estimation, particularly in the yaw angle. This error, called drift, is due to the bias of the z-axis gyroscope and other slow changing errors, such as temperature variations. We propose a seamless landmark-based drift compensation algorithm that only uses inertial measurements. The proposed algorithm adds great value to the state of the art, because the vast majority of the drift elimination algorithms apply corrections to the estimated position, but not to the yaw angle estimation. Instead, the presented algorithm computes the drift value and uses it to prevent yaw errors and therefore position errors. In order to achieve this goal, a detector of landmarks, i.e., corners and stairs, and an association algorithm have been developed. The results of the experiments show that it is possible to reliably detect corners and stairs using only inertial measurements, eliminating the need for the user to take any action, e.g., pressing a button. Associations between re-visited landmarks are successfully made taking into account the uncertainty of the position. After that, the drift is computed out of all associations and used during a post-processing stage to obtain a low-drifted yaw angle estimation, which leads to successfully drift-compensated trajectories. The proposed algorithm has been tested with quasi-error-free turn rate measurements introducing known biases and with medium-cost gyroscopes in 3D indoor and outdoor scenarios. PMID:28671622

  16. Respiratory motion compensation algorithm of ultrasound hepatic perfusion data acquired in free-breathing

    NASA Astrophysics Data System (ADS)

    Wu, Kaizhi; Zhang, Xuming; Chen, Guangxie; Weng, Fei; Ding, Mingyue

    2013-10-01

    Images acquired in free breathing using contrast-enhanced ultrasound exhibit a periodic motion that needs to be compensated for if accurate quantification of hepatic perfusion is to be performed. In this work, we present an algorithm to compensate the respiratory motion by effectively combining the PCA (Principal Component Analysis) method and the block matching method. The respiratory kinetics of the ultrasound hepatic perfusion image sequences was first extracted using the PCA method. Then, the optimal phase of the obtained respiratory kinetics was detected after normalizing the motion amplitude and determining the image subsequences of the original image sequences. The image subsequences were registered by the block matching method using cross-correlation as the similarity measure. Finally, the motion-compensated contrast images were acquired using the position mapping, and the algorithm was evaluated by comparing the TICs extracted from the original image sequences and the compensated image subsequences. Quantitative comparisons demonstrated that the average fitting error of the ROIs (regions of interest) was reduced from 10.9278 +/- 6.2756 to 5.1644 +/- 3.3431 after compensation.
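
    A minimal sketch of the first step described above, extracting a respiratory kinetic as the temporal scores of the leading principal component of the frame sequence; the block-matching registration and phase-selection steps are omitted, and the frames here are synthetic.

    ```python
    import numpy as np

    def respiratory_trace(frames):
        """Extract a 1-D respiratory kinetic from an image sequence via PCA.

        frames : array of shape (n_frames, H, W)
        Returns the temporal scores of the first principal component, whose
        quasi-periodic oscillation tracks the breathing motion.
        """
        n = frames.shape[0]
        X = frames.reshape(n, -1).astype(float)
        X -= X.mean(axis=0)                      # centre each pixel over time
        u, s, vt = np.linalg.svd(X, full_matrices=False)
        return u[:, 0] * s[0]                    # temporal scores of PC 1

    # toy sequence: a bright bar translating up and down with a breathing period
    n, h, w = 120, 32, 32
    frames = np.zeros((n, h, w))
    for t in range(n):
        row = int(16 + 6 * np.sin(2 * np.pi * t / 30.0))
        frames[t, row - 2:row + 2, :] = 1.0
    trace = respiratory_trace(frames)
    print(trace[:5])
    ```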

  17. An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation

    PubMed Central

    Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie

    2014-01-01

    In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of data 1.00,0.00,0.00 and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition of N subparts and reduces the training time to 1/N and memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical access of estimating the numbers of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a designed device for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to verify this novel algorithm. PMID:25232912

  18. Performance of synchronous optical receivers using atmospheric compensation techniques.

    PubMed

    Belmonte, Aniceto; Kahn, Joseph M.

    2008-09-01

    We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.

  19. An Online Tilt Estimation and Compensation Algorithm for a Small Satellite Camera

    NASA Astrophysics Data System (ADS)

    Lee, Da-Hyun; Hwang, Jai-hyuk

    2018-04-01

    In the case of a satellite camera designed to execute an Earth observation mission, even after a pre-launch precision alignment process has been carried out, misalignment will occur due to external factors during the launch and in the operating environment. In particular, for high-resolution satellite cameras, which require submicron accuracy for alignment between optical components, misalignment is a major cause of image quality degradation. To compensate for this, most high-resolution satellite cameras undergo a precise realignment process called refocusing before and during the operation process. However, conventional Earth observation satellites only execute refocusing upon de-space. Thus, in this paper, an online tilt estimation and compensation algorithm is proposed that can be utilized after de-space correction is executed. Although the sensitivity of the optical performance degradation due to the misalignment is highest in de-space, the MTF can be additionally increased by correcting tilt after refocusing. The algorithm proposed in this research can be used to estimate the amount of tilt that occurs by taking star images, and it can also be used to carry out automatic tilt corrections by employing a compensation mechanism that gives angular motion to the secondary mirror. Crucially, this algorithm is developed using an online processing system so that it can operate without communication with the ground.

  20. An Analytical Framework for the Steady State Impact of Carbonate Compensation on Atmospheric CO2

    NASA Astrophysics Data System (ADS)

    Omta, Anne Willem; Ferrari, Raffaele; McGee, David

    2018-04-01

    The deep-ocean carbonate ion concentration impacts the fraction of the marine calcium carbonate production that is buried in sediments. This gives rise to the carbonate compensation feedback, which is thought to restore the deep-ocean carbonate ion concentration on multimillennial timescales. We formulate an analytical framework to investigate the impact of carbonate compensation under various changes in the carbon cycle relevant for anthropogenic change and glacial cycles. Using this framework, we show that carbonate compensation amplifies by 15-20% changes in atmospheric CO2 resulting from a redistribution of carbon between the atmosphere and ocean (e.g., due to changes in temperature, salinity, or nutrient utilization). A counterintuitive result emerges when the impact of organic matter burial in the ocean is examined. The organic matter burial first leads to a slight decrease in atmospheric CO2 and an increase in the deep-ocean carbonate ion concentration. Subsequently, enhanced calcium carbonate burial leads to outgassing of carbon from the ocean to the atmosphere, which is quantified by our framework. Results from simulations with a multibox model including the minor acids and bases important for the ocean-atmosphere exchange of carbon are consistent with our analytical predictions. We discuss the potential role of carbonate compensation in glacial-interglacial cycles as an example of how our theoretical framework may be applied.

  1. Hyperspectral material identification on radiance data using single-atmosphere or multiple-atmosphere modeling

    NASA Astrophysics Data System (ADS)

    Mariano, Adrian V.; Grossmann, John M.

    2010-11-01

    Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
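
    All three variants above score pixels with the same ACE statistic; only the domain of the target signature changes. A generic sketch of the detector with scene-derived mean and covariance is shown below (the HYDICE processing details and subspace modeling for MACE are not reproduced).

    ```python
    import numpy as np

    def ace_detector(cube, target, mean=None, cov=None):
        """Adaptive Cosine Estimator score for each pixel spectrum.

        cube   : (n_pixels, n_bands) spectra
        target : (n_bands,) target signature in the same domain
                 (reflectance for RACE, modeled sensor-reaching radiance for SACE)
        """
        X = np.asarray(cube, dtype=float)
        mu = X.mean(axis=0) if mean is None else mean
        Xc = X - mu
        C = np.cov(Xc, rowvar=False) if cov is None else cov
        Cinv = np.linalg.pinv(C)
        s = np.asarray(target, dtype=float) - mu
        num = (Xc @ Cinv @ s) ** 2
        den = (s @ Cinv @ s) * np.einsum('ij,jk,ik->i', Xc, Cinv, Xc)
        return num / np.maximum(den, 1e-12)

    # toy scene: background spectra plus a few pixels containing the target signature
    rng = np.random.default_rng(4)
    bands = 30
    background = rng.multivariate_normal(np.full(bands, 1.0), 0.01 * np.eye(bands), size=500)
    target = np.linspace(0.5, 2.0, bands)
    scene = background.copy()
    scene[:5] = 0.7 * background[:5] + 0.3 * target
    scores = ace_detector(scene, target)
    print(scores[:5].mean(), scores[5:].mean())
    ```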

  2. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) to reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality-interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation using forward and backward MVs is applied by mixing both elegantly. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and smoothness constraint optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
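
    The true-motion estimator above adds smoothness constraints on top of block matching; the sketch below shows only the underlying exhaustive-search block matching with a sum-of-absolute-differences cost, on synthetic frames.

    ```python
    import numpy as np

    def block_match(prev, curr, block=8, search=7):
        """Exhaustive-search block matching between two frames.

        Returns an integer motion-vector field of shape (H//block, W//block, 2)
        minimizing the SAD within +/- `search` pixels. A true-motion estimator
        would impose smoothness constraints on top of this raw search.
        """
        H, W = prev.shape
        mvs = np.zeros((H // block, W // block, 2), dtype=int)
        for bi, y in enumerate(range(0, H - block + 1, block)):
            for bj, x in enumerate(range(0, W - block + 1, block)):
                ref = curr[y:y + block, x:x + block]
                best = (np.inf, 0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= H - block and 0 <= xx <= W - block:
                            sad = np.abs(prev[yy:yy + block, xx:xx + block] - ref).sum()
                            if sad < best[0]:
                                best = (sad, dy, dx)
                mvs[bi, bj] = best[1], best[2]
        return mvs

    # toy frames: a square shifted by (2, 3) pixels between prev and curr
    prev = np.zeros((64, 64)); prev[20:30, 20:30] = 1.0
    curr = np.roll(np.roll(prev, 2, axis=0), 3, axis=1)
    print(block_match(prev, curr)[2:4, 2:4])   # blocks covering the square recover the shift
    ```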

  3. New inverse synthetic aperture radar algorithm for translational motion compensation

    NASA Astrophysics Data System (ADS)

    Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.

    1991-10-01

    Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.

  4. A Novel Modified Omega-K Algorithm for Synthetic Aperture Imaging Lidar through the Atmosphere

    PubMed Central

    Guo, Liang; Xing, Mendao; Tang, Yu; Dan, Jing

    2008-01-01

    The spatial resolution of a conventional imaging lidar system is constrained by the diffraction limit of the telescope's aperture. The combination of lidar and synthetic aperture (SA) processing techniques may overcome the diffraction limit and pave the way for a higher resolution airborne or spaceborne remote sensor. For a lidar transmitting a frequency-modulated continuous-wave (FMCW) signal, the motion during the transmission of a sweep and the reception of the corresponding echo is expected to be one of the major problems. The modified Omega-K algorithm given here takes the continuous motion into account and can efficiently compensate for the Doppler shift induced by the continuous motion, as well as for the azimuth ambiguity caused by the low pulse recurrence frequency limited by the tunable laser. A phase screen (PS) distorted by atmospheric turbulence following the von Karman spectrum is then simulated using the Fourier transform in order to model the turbulence. Finally, the computer simulation shows the validity of the modified algorithm and indicates that, if the synthetic aperture length does not exceed the coherence length of the atmosphere for SAIL, the effect of the turbulence can be ignored. PMID:27879865
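
    The turbulence simulation step mentioned above is commonly implemented with the FFT phase-screen recipe sketched below: white complex Gaussian noise shaped by the square root of the von Karman phase spectrum and inverse-transformed. Normalization conventions differ slightly between implementations, so the amplitude scaling here is indicative rather than exact.

    ```python
    import numpy as np

    def von_karman_phase_screen(n, dx, r0, L0=25.0, rng=None):
        """Generate one von Karman turbulence phase screen [rad] by FFT filtering.

        n  : grid size (pixels), dx : grid spacing [m]
        r0 : Fried parameter [m],  L0 : outer scale [m]
        """
        rng = rng or np.random.default_rng()
        fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies [1/m]
        fxx, fyy = np.meshgrid(fx, fx, indexing='ij')
        f2 = fxx**2 + fyy**2
        psd = 0.023 * r0**(-5.0 / 3.0) * (f2 + 1.0 / L0**2)**(-11.0 / 6.0)
        psd[0, 0] = 0.0                                   # remove the piston term
        cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        # scale factor n/dx follows the common discrete recipe (N^2 * delta_f
        # combined with numpy's 1/N^2 ifft2 normalization)
        screen = np.fft.ifft2(cn * np.sqrt(psd)) * n / dx
        return np.real(screen)

    screen = von_karman_phase_screen(n=256, dx=0.02, r0=0.1)
    print(screen.std())
    ```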

  5. Laser beam projection with adaptive array of fiber collimators. II. Analysis of atmospheric compensation efficiency.

    PubMed

    Lachinova, Svetlana L; Vorontsov, Mikhail A

    2008-08-01

    We analyze the potential efficiency of laser beam projection onto a remote object in atmosphere with incoherent and coherent phase-locked conformal-beam director systems composed of an adaptive array of fiber collimators. Adaptive optics compensation of turbulence-induced phase aberrations in these systems is performed at each fiber collimator. Our analysis is based on a derived expression for the atmospheric-averaged value of the mean square residual phase error as well as direct numerical simulations. Operation of both conformal-beam projection systems is compared for various adaptive system configurations characterized by the number of fiber collimators, the adaptive compensation resolution, and atmospheric turbulence conditions.

  6. Improved compensation of atmospheric turbulence effects by multiple adaptive mirror systems.

    PubMed

    Shamir, J; Crowe, D G; Beletic, J W

    1993-08-20

    Optical wave-front propagation in a layered model for the atmosphere is analyzed by the use of diffraction theory, leading to a novel approach for utilizing artificial guide stars. Considering recent observations of layering in the atmospheric turbulence, the results of this paper indicate that, even for very large telescopes, a substantial enlargement of the compensated angular field of view is possible when two adaptive mirrors and four or five artificial guide stars are employed. The required number of guide stars increases as the thickness of the turbulent layers increases, converging to the conventional results at the limit of continuously turbulent atmosphere.

  7. Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.

    PubMed

    Zhang, Man; Wang, Guanyong; Zhang, Lei

    2017-10-26

    Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks concerning the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from declining robustness when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part must be extended, inevitably degrading efficiency and robustness. Herein, a frequency domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA handles the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse resolution image via the back-projection integral. Then, the sub-aperture images are straightforwardly fused together in the azimuth wavenumber domain to obtain a full resolution image. Moreover, the chirp-Z transform (CZT) is also introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By dispensing with the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed algorithm.

  8. Local motion compensation in image sequences degraded by atmospheric turbulence: a comparative analysis of optical flow vs. block matching methods

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.

    2016-10-01

    As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
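
    As an illustration of the optical-flow side of the comparison above, the sketch below estimates dense Farneback flow with OpenCV and warps the distorted frame back onto the reference; the parameter values are typical defaults, not those used in the paper.

    ```python
    import cv2
    import numpy as np

    def farneback_compensate(ref, distorted):
        """Warp `distorted` toward `ref` using dense Farneback optical flow.

        Both inputs are 8-bit grayscale frames. The estimated per-pixel flow plays
        the same role as block-matching motion vectors, but is dense and
        sub-pixel, at a higher computational cost.
        """
        # args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(ref, distorted, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = ref.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        return cv2.remap(distorted, map_x, map_y, cv2.INTER_LINEAR)

    # toy example: compensate a synthetically shifted frame back onto the reference
    ref = np.zeros((128, 128), np.uint8)
    cv2.circle(ref, (64, 64), 20, 255, -1)
    distorted = np.roll(ref, (3, -2), axis=(0, 1))
    compensated = farneback_compensate(ref, distorted)
    print(np.abs(compensated.astype(int) - ref.astype(int)).mean())
    ```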

  9. Modified compensation algorithm of lever-arm effect and flexural deformation for polar shipborne transfer alignment based on improved adaptive Kalman filter

    NASA Astrophysics Data System (ADS)

    Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei

    2017-09-01

    Due to the lever-arm effect and flexural deformation in the practical application of transfer alignment (TA), the TA performance is decreased. The existing polar TA algorithm only compensates a fixed lever-arm without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have some limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm of the lever-arm effect and flexural deformation is proposed to promote the accuracy and speed of the polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to promote the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results have demonstrated that the modified compensation algorithm based on the improved AKF for polar TA can effectively compensate the lever-arm effect and flexural deformation, and then improve the accuracy and speed of TA in the polar region.

  10. Sculling Compensation Algorithm for SINS Based on Two-Time Scale Perturbation Model of Inertial Measurements

    PubMed Central

    Wang, Lingling; Fu, Li

    2018-01-01

    In order to decrease the velocity sculling error under vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation system (SINS) using angular rate and specific force measurements as inputs is proposed in this paper. First, the sculling error formula in incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived and a gravitational search optimization method is used to determine the parameters in the two-time scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, which is different from the conventional algorithms simulated in a pure sculling circumstance. A series of test results demonstrate that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during velocity update with the advantage of less computation load compared with conventional algorithms. PMID:29346323

  11. Sensor Drift Compensation Algorithm based on PDF Distance Minimization

    NASA Astrophysics Data System (ADS)

    Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo

    2009-05-01

    In this paper, a new unsupervised classification algorithm is introduced for the compensation of sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method continues updating the adaptive Radial Basis Function Network (RBFN) weights in the testing phase based on minimizing the Euclidean distance between two Probability Density Functions (PDFs): one of a set of training phase output data and another of a set of testing phase output data. The outputs in the testing phase using the fixed weights of the RBFN are significantly dispersed and shifted from each target value, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to be concentrated significantly closer to their target values. This indicates that the proposed method can be effectively applied to an improved odor sensing system equipped with the capability of sensor drift compensation.

  12. Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark

    2016-01-01

    This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
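
    The sketch below illustrates the weighted nonlinear least-squares idea on a heavily simplified surrogate: a modified-Newtonian pressure model, a hypothetical port layout, and only four estimated states. The actual MSL/Mars 2020 formulation with table-lookup atmosphere models and prior covariances is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def estimate_atmosphere(p_meas, normals, v_inertial, sigma_p):
        """Weighted nonlinear least-squares fit of a freestream state to port pressures.

        Assumed measurement model (modified Newtonian):
            p_i = p_inf + q * max(n_i . v_hat_rel, 0)^2,
        where v_hat_rel is the unit wind-relative velocity. The inertial velocity
        comes from the navigation solution; the estimated states are
        [static pressure p_inf, dynamic pressure q, wind_x, wind_y].
        """
        def residuals(x):
            p_inf, q, wx, wy = x
            v_rel = v_inertial - np.array([wx, wy, 0.0])
            v_hat = v_rel / np.linalg.norm(v_rel)
            cos_t = np.clip(normals @ v_hat, 0.0, None)
            return (p_inf + q * cos_t**2 - p_meas) / sigma_p   # weighted residuals

        return least_squares(residuals, x0=[500.0, 5000.0, 0.0, 0.0]).x

    # hypothetical port layout: a cross of 10 ports on a blunt forebody (body-axis normals)
    ang = np.deg2rad([-30.0, -15.0, 0.0, 15.0, 30.0])
    normals = np.vstack([np.stack([np.cos(ang), np.sin(ang), np.zeros(5)], axis=1),
                         np.stack([np.cos(ang), np.zeros(5), np.sin(ang)], axis=1)])

    # synthesize pressures from a "true" state and try to recover it
    rng = np.random.default_rng(5)
    v_inertial = np.array([3000.0, 0.0, -500.0])               # m/s, from navigation
    p_inf_true, q_true, wind_true = 300.0, 8000.0, np.array([40.0, -60.0])
    v_hat = v_inertial - np.array([*wind_true, 0.0])
    v_hat /= np.linalg.norm(v_hat)
    p_meas = p_inf_true + q_true * np.clip(normals @ v_hat, 0, None)**2
    p_meas = p_meas + rng.normal(0.0, 2.0, size=len(p_meas))
    # note: the wind components are only weakly observable from one port layout,
    # echoing the observability discussion in the record
    print(estimate_atmosphere(p_meas, normals, v_inertial, sigma_p=2.0))
    ```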

  13. Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark

    2015-01-01

    This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.

  14. Atmospheric Correction Algorithm for Hyperspectral Remote Sensing of Ocean Color from Space

    DTIC Science & Technology

    2000-02-20

    Existing atmospheric correction algorithms for multichannel remote sensing of ocean color from space were designed for retrieving water-leaving...atmospheric correction algorithm for hyperspectral remote sensing of ocean color with the near-future Coastal Ocean Imaging Spectrometer. The algorithm uses

  15. The atmospheric correction algorithm for HY-1B/COCTS

    NASA Astrophysics Data System (ADS)

    He, Xianqiang; Bai, Yan; Pan, Delu; Zhu, Qiankun

    2008-10-01

    China launched its second ocean color satellite, HY-1B, which carries two remote sensors, on 11 April 2007. The Chinese Ocean Color and Temperature Scanner (COCTS) is the main sensor on HY-1B; it has not only eight visible and near-infrared wavelength bands similar to those of SeaWiFS, but also two more thermal infrared bands to measure the sea surface temperature. Therefore, COCTS has broad application potential, such as fishery resource protection and development, coastal monitoring and management, and marine pollution monitoring. Atmospheric correction is key to quantitative ocean color remote sensing. In this paper, the operational atmospheric correction algorithm for HY-1B/COCTS has been developed. First, based on PCOART, a vector radiative transfer numerical model of the coupled ocean-atmosphere system, the exact Rayleigh scattering look-up table (LUT), aerosol scattering LUT and atmospheric diffuse transmission LUT for HY-1B/COCTS were generated. Second, using the generated LUTs, the operational atmospheric correction algorithm for HY-1B/COCTS was developed. The algorithm has been validated using simulated spectral data generated by PCOART, and the results show that the error of the water-leaving reflectance retrieved by this algorithm is less than 0.0005, which meets the requirement for exact atmospheric correction in ocean color remote sensing. Finally, the algorithm has been applied to HY-1B/COCTS remote sensing data; the retrieved water-leaving radiances are consistent with the Aqua/MODIS results, and the corresponding ocean color remote sensing products, including chlorophyll concentration and total suspended particulate matter concentration, have been generated.

  16. Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana

    1989-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  17. Algorithm for atmospheric corrections of aircraft and satellite imagery

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Ferrare, R. A.; Kaufman, Y. J.; Markham, B. L.; Mattoo, S.

    1992-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 microns. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  18. Atmospheric Correction Algorithm for Hyperspectral Imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  19. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  20. Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation

    NASA Technical Reports Server (NTRS)

    Woodard, Stanley E.; Nagchaudhuri, Abhijit

    1998-01-01

    This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to identify the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
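
    A minimal sketch of the Filtered-X LMS update described above, assuming the secondary path (servo-mechanism to error sensor) has already been identified by the on-orbit LMS stage; the path coefficients, disturbance, tap count, and step size are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

# Hypothetical secondary path (servo-mechanism to error sensor); in the paper
# this transfer function is identified on-orbit with a plain LMS stage.
sec_path = np.array([0.6, 0.3, 0.1])

n, taps, mu = 5000, 16, 0.01
t = np.arange(n)
x = np.sin(2 * np.pi * 0.02 * t)               # periodic disturbance reference
d = np.convolve(x, [0.9, -0.4, 0.2])[:n]       # disturbance seen at the error sensor

w = np.zeros(taps)                             # adaptive transversal filter weights
xbuf = np.zeros(taps)                          # reference history
fxbuf = np.zeros(taps)                         # filtered-reference history
ybuf = np.zeros(len(sec_path))                 # actuation history
err = np.zeros(n)

for k in range(n):
    xbuf = np.roll(xbuf, 1)
    xbuf[0] = x[k]
    y = w @ xbuf                               # actuation command
    ybuf = np.roll(ybuf, 1)
    ybuf[0] = y
    e = d[k] - sec_path @ ybuf                 # residual pointing error
    fx = sec_path @ xbuf[:len(sec_path)]       # reference filtered by the path model
    fxbuf = np.roll(fxbuf, 1)
    fxbuf[0] = fx
    w += mu * e * fxbuf                        # Filtered-X LMS weight update
    err[k] = e

print("mean-square error, first vs last 500 samples:",
      np.mean(err[:500] ** 2), np.mean(err[-500:] ** 2))
```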

  1. Compensation for the orbital angular momentum of a vortex beam in turbulent atmosphere by adaptive optics

    NASA Astrophysics Data System (ADS)

    Li, Nan; Chu, Xiuxiang; Zhang, Pengfei; Feng, Xiaoxing; Fan, ChengYu; Qiao, Chunhong

    2018-01-01

    A method which can simultaneously compensate for the distorted orbital angular momentum and the distorted wavefront of a beam in atmospheric turbulence has been proposed. To confirm the validity of the method, an experimental setup for up-link propagation of a vortex beam in a turbulent atmosphere has been simulated. Simulation results show that both the distorted orbital angular momentum and the distorted wavefront caused by turbulence can be compensated by an adaptive optics system with the help of a cooperative beacon on the satellite. However, when the number of lenslets in the wavefront sensor (WFS) and of actuators on the deformable mirror (DM) is small, satisfactory results cannot be obtained.

  2. Genetic algorithm optimized triply compensated pulses in NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Manu, V. S.; Veglia, Gianluigi

    2015-11-01

    Sensitivity and resolution in NMR experiments are affected by inhomogeneities of both the external and RF magnetic fields, errors in pulse calibration, and offset effects due to the finite length of RF pulses. To remedy these problems, built-in compensation mechanisms for these experimental imperfections are often necessary. Here, we propose a new family of phase-modulated constant-amplitude broadband pulses with high compensation for RF inhomogeneity and heteronuclear coupling evolution. These pulses were optimized using a genetic algorithm (GA), a global optimization method inspired by natural evolution. The newly designed π and π/2 pulses belong to the 'type A' (or general rotors) symmetric composite pulses. These GA-optimized pulses are relatively short compared to other general rotors and can be used for excitation and inversion, as well as for refocusing in spin-echo experiments. The performance of the GA-optimized pulses was assessed in Magic Angle Spinning (MAS) solid-state NMR experiments using a crystalline U-13C, 15N NAVL peptide as well as U-13C, 15N microcrystalline ubiquitin. GA optimization of NMR pulse sequences opens a window for improving current experiments and designing new robust pulse sequences.
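
    The sketch below illustrates the optimization idea only: a plain generational genetic algorithm searching the segment phases of a composite π pulse for robustness against RF-amplitude errors, with the spin response modeled as ideal SU(2) rotations. The segment count, GA settings, and fidelity measure are toy assumptions, not the authors' actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def seg(theta, phi):
    """SU(2) rotation by angle theta about an axis at phase phi in the xy-plane."""
    axis = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def fitness(phases, scales=np.linspace(0.8, 1.2, 9)):
    """Average gate fidelity against an ideal pi_x rotation over RF-amplitude errors."""
    target = seg(np.pi, 0.0)
    total = 0.0
    for s in scales:
        U = I2
        for phi in phases:
            U = seg(s * np.pi, phi) @ U        # each segment is a nominal pi rotation
        total += abs(np.trace(target.conj().T @ U)) / 2
    return total / len(scales)

# Plain generational GA over the segment phases (toy settings).
n_seg, pop_size, gens, sigma = 3, 60, 120, 0.3
pop = rng.uniform(0, 2 * np.pi, size=(pop_size, n_seg))
for _ in range(gens):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]   # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(n_seg) < 0.5, a, b)       # uniform crossover
        child = child + rng.normal(0, sigma, n_seg)           # Gaussian mutation
        children.append(np.mod(child, 2 * np.pi))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(p) for p in pop])]
print("best phases (rad):", np.round(best, 3), " robust fidelity:", round(fitness(best), 4))
```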

  3. Single-dose volume regulation algorithm for a gas-compensated intrathecal infusion pump.

    PubMed

    Nam, Kyoung Won; Kim, Kwang Gi; Sung, Mun Hyun; Choi, Seong Wook; Kim, Dae Hyun; Jo, Yung Ho

    2011-01-01

    The internal pressures of medication reservoirs of gas-compensated intrathecal medication infusion pumps decrease when medication is discharged, and these discharge-induced pressure drops can decrease the volume of medication discharged. To prevent these reductions, the volumes discharged must be adjusted to maintain the required dosage levels. In this study, the authors developed an automatic control algorithm for an intrathecal infusion pump developed by the Korean National Cancer Center that regulates single-dose volumes. The proposed algorithm estimates the amount of medication remaining and adjusts control parameters automatically to maintain single-dose volumes at predetermined levels. Experimental results demonstrated that the proposed algorithm can regulate mean single-dose volumes with a variation of <3% and estimate the remaining medication volume with an accuracy of >98%.

  4. Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.

    PubMed

    Goldman, Geoffrey H

    2013-02-01

    A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of a time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the processing interval duration. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher frequency harmonics showed larger increases in peak amplitude, but these increases were significantly lower than the estimated upper bounds.
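
    A minimal sketch of the time-warping step, assuming the instantaneous frequency of the first harmonic has already been tracked (the paper uses a fixed-lag smoother for that step, which is not reproduced here); the sampling rate and Doppler wobble in the test are synthetic.

```python
import numpy as np

def remove_doppler(sig, f_inst, fs, f0=None):
    """Time-warp a signal so that a tracked harmonic with instantaneous
    frequency f_inst (Hz, one value per sample) becomes constant at f0."""
    if f0 is None:
        f0 = np.mean(f_inst)
    # warped time axis: tau(t) = (1/f0) * integral of f_inst dt
    tau = np.concatenate(([0.0], np.cumsum(f_inst[:-1]) / fs)) / f0
    tau_uniform = np.arange(len(sig)) / fs
    return np.interp(tau_uniform, tau, sig)

# Synthetic test: an 11 Hz tone with a slow sinusoidal Doppler wobble.
fs, n = 1000, 8000
t = np.arange(n) / fs
f_inst = 11.0 * (1 + 0.03 * np.sin(2 * np.pi * 0.2 * t))
sig = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)
focused = remove_doppler(sig, f_inst, fs, f0=11.0)

for name, y in [("raw", sig), ("compensated", focused)]:
    spec = np.abs(np.fft.rfft(y * np.hanning(n)))
    print(name, "peak spectral amplitude:", round(float(spec.max()), 1))
```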

  5. Control algorithms for aerobraking in the Martian atmosphere

    NASA Technical Reports Server (NTRS)

    Ward, Donald T.; Shipley, Buford W., Jr.

    1991-01-01

    The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.

  6. Advanced Control Algorithms for Compensating the Phase Distortion Due to Transport Delay in Human-Machine Systems

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead, while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for much longer delay and causes smaller gain error in low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in prediction. Though, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate the longest delay with the least gain distortion among the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner. In this manner the predictor can accurately provide the desired amount of prediction, while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline

  7. Atmospheric turbulence and sensor system effects on biometric algorithm performance

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy

    2015-05-01

    Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation biometrics sensor systems.

  8. Influence of measuring algorithm on shape accuracy in the compensating turning of high gradient thin-wall parts

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi

    2015-02-01

    To meet aerodynamic requirements, infrared domes and windows with conformal, thin-wall structures are becoming the trend for future high-speed aircraft. These parts usually have low stiffness, the cutting force changes along the axial position, and it is very difficult to meet the shape-accuracy requirement in a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by fluctuations in cutting force and changes in stiffness. In this paper, a contact measuring system with five degrees of freedom, built on an ultra-precision diamond lathe, is developed to achieve high-accuracy on-machine measurement of conformal thin-wall parts. For high-gradient surfaces, an optimization algorithm for the distribution of measuring points is designed using a data-screening method. The influence of sampling frequency on measurement error is analyzed, the best sampling frequency is found with a planning algorithm, the effects of environmental factors and fitting errors are kept within a low range, and the measuring accuracy of the conformal dome during on-machine measurement is greatly improved. For an MgF2 conformal dome with a high-gradient surface, compensating turning is implemented using the designed on-machine measuring algorithm. The shape error is less than 0.8 μm PV, a great improvement over the 3 μm PV before compensating turning, which verifies the correctness of the measuring algorithm.

  9. Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1987-01-01

    Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.

  10. An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander

    NASA Technical Reports Server (NTRS)

    Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.

    1998-01-01

    An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.

  11. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.

  12. Motion-compensated cone beam computed tomography using a conjugate gradient least-squares algorithm and electrical impedance tomography imaging motion data.

    PubMed

    Pengpen, T; Soleimani, M

    2015-06-13

    Cone beam computed tomography (CBCT) is an imaging modality that has been used in image-guided radiation therapy (IGRT). For applications such as lung radiation therapy, CBCT images are greatly affected by motion artefacts. This is mainly due to the low temporal resolution of CBCT. Recently, a dual modality of electrical impedance tomography (EIT) and CBCT has been proposed, in which the high temporal resolution EIT imaging system provides motion data to a motion-compensated algebraic reconstruction technique (ART)-based CBCT reconstruction software. The high computational time associated with ART, and indeed other variations of ART, makes them less practical for real applications. This paper develops a motion-compensated conjugate gradient least-squares (CGLS) algorithm for CBCT. A motion-compensated CGLS offers several advantages over ART-based methods, including possibilities for explicit regularization, rapid convergence and parallel computation. This paper for the first time demonstrates motion-compensated CBCT reconstruction using CGLS, and reconstruction results are shown for limited-data CBCT considering only a quarter of the full dataset. The proposed algorithm is tested using simulated motion data in generic motion-compensated CBCT as well as measured EIT data in dual EIT-CBCT imaging.
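
    For reference, a generic (non-motion-compensated) CGLS iteration looks like the sketch below; in the paper's setting the operator A would be the motion-compensated cone-beam projector rather than the small dense matrix used in this toy test.

```python
import numpy as np

def cgls(A, b, n_iter=50, x0=None):
    """Conjugate gradient least squares: iteratively minimizes ||A x - b||_2.
    A only needs matrix-vector products, so it also works with a projector
    wrapped as a scipy LinearOperator.  Generic sketch, not the authors'
    motion-compensated reconstruction code."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    norm_s_old = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = norm_s_old / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        norm_s_new = s @ s
        p = s + (norm_s_new / norm_s_old) * p
        norm_s_old = norm_s_new
    return x

# Tiny overdetermined test problem standing in for the projection system.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 50))
x_true = rng.normal(size=50)
b = A @ x_true + 0.01 * rng.normal(size=200)
x_rec = cgls(A, b, n_iter=60)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```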

  13. Network compensation for missing sensors

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.

    1991-01-01

    A network learning translation invariance algorithm to compute interpolation functions is presented. With one fixed receptive field, this algorithm can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights of output units affected by the loss.

  14. Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle

    2013-01-01

    The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.

  15. An observer-based compensator for distributed delays

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok

    1990-01-01

    This paper presents an algorithm for compensating delays that are distributed between the sensor(s), controller and actuator(s) within a control loop. This observer-based algorithm is specially suited to compensation of network-induced delays in integrated communication and control systems. The robustness of the algorithm relative to plant model uncertainties has been examined.

  16. Computational algorithms for simulations in atmospheric optics.

    PubMed

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.

  17. Motion compensation for ultra wide band SAR

    NASA Technical Reports Server (NTRS)

    Madsen, S.

    2001-01-01

    This paper describes an algorithm that combines wavenumber domain processing with a procedure that enables motion compensation to be applied as a function of target range and azimuth angle. First, data are processed with nominal motion compensation applied, partially focusing the image, then the motion compensation of individual subpatches is refined. The results show that the proposed algorithm is effective in compensating for deviations from a straight flight path, from both a performance and a computational efficiency point of view.

  18. Fourier domain preconditioned conjugate gradient algorithm for atmospheric tomography.

    PubMed

    Yang, Qiang; Vogel, Curtis R; Ellerbroek, Brent L

    2006-07-20

    By 'atmospheric tomography' we mean the estimation of a layered atmospheric turbulence profile from measurements of the pupil-plane phase (or phase gradients) corresponding to several different guide star directions. We introduce what we believe to be a new Fourier domain preconditioned conjugate gradient (FD-PCG) algorithm for atmospheric tomography, and we compare its performance against an existing multigrid preconditioned conjugate gradient (MG-PCG) approach. Numerical results indicate that on conventional serial computers, FD-PCG is as accurate and robust as MG-PCG, but it is from one to two orders of magnitude faster for atmospheric tomography on 30 m class telescopes. Simulations are carried out for both natural guide stars and for a combination of finite-altitude laser guide stars and natural guide stars to resolve tip-tilt uncertainty.

  19. Diagnostic Abilities of Variable and Enhanced Corneal Compensation Algorithms of GDx in Different Severities of Glaucoma.

    PubMed

    Yadav, Ravi K; Begum, Viquar U; Addepalli, Uday K; Senthil, Sirisha; Garudadri, Chandra S; Rao, Harsha L

    2016-02-01

    To compare the abilities of retinal nerve fiber layer (RNFL) parameters of variable corneal compensation (VCC) and enhanced corneal compensation (ECC) algorithms of scanning laser polarimetry (GDx) in detecting various severities of glaucoma. Two hundred and eighty-five eyes of 194 subjects from the Longitudinal Glaucoma Evaluation Study who underwent GDx VCC and ECC imaging were evaluated. Abilities of RNFL parameters of GDx VCC and ECC to diagnose glaucoma were compared using area under receiver operating characteristic curves (AUC), sensitivities at fixed specificities, and likelihood ratios. After excluding 5 eyes that failed to satisfy manufacturer-recommended quality parameters with ECC and 68 with VCC, 56 eyes of 41 normal subjects and 161 eyes of 121 glaucoma patients [36 eyes with preperimetric glaucoma, 52 eyes with early (MD>-6 dB), 34 with moderate (MD between -6 and -12 dB), and 39 with severe glaucoma (MD<-12 dB)] were included for the analysis. Inferior RNFL, average RNFL, and nerve fiber indicator parameters showed the best AUCs and sensitivities both with GDx VCC and ECC in diagnosing all severities of glaucoma. AUCs and sensitivities of all RNFL parameters were comparable between the VCC and ECC algorithms (P>0.20 for all comparisons). Likelihood ratios associated with the diagnostic categorization of RNFL parameters were comparable between the VCC and ECC algorithms. In scans satisfying the manufacturer-recommended quality parameters, which were significantly greater with ECC than VCC algorithm, diagnostic abilities of GDx ECC and VCC in glaucoma were similar.

  20. Simulating large atmospheric phase screens using a woofer-tweeter algorithm.

    PubMed

    Buscher, David F

    2016-10-03

    We describe an algorithm for simulating atmospheric wavefront perturbations over ranges of spatial and temporal scales spanning more than 4 orders of magnitude. An open-source implementation of the algorithm written in Python can simulate the evolution of the perturbations more than an order of magnitude faster than real time. Testing of the implementation using metrics appropriate to adaptive optics systems and long-baseline interferometers shows accuracies at the few percent level or better.
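
    For orientation, the sketch below generates a single Kolmogorov phase screen with the classic one-scale FFT spectral method; it is not the paper's woofer-tweeter algorithm, and the normalization should be verified against a structure-function test before quantitative use.

```python
import numpy as np

def fft_phase_screen(n, dx, r0, rng=None):
    """One Kolmogorov phase screen (radians), n x n samples with spacing dx (m)
    and Fried parameter r0 (m), via the single-scale FFT spectral method.
    Normalization follows the common recipe and is indicative only."""
    rng = np.random.default_rng() if rng is None else rng
    df = 1.0 / (n * dx)                          # frequency grid spacing (1/m)
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.sqrt(fxx ** 2 + fyy ** 2)
    f[0, 0] = np.inf                             # suppress the undefined piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)   # Kolmogorov phase PSD
    cn = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(psd / 2) * df
    return np.real(np.fft.ifft2(cn)) * n * n     # keep the real part as one screen

phi = fft_phase_screen(n=256, dx=0.02, r0=0.15, rng=np.random.default_rng(3))
print("screen rms (rad):", round(float(phi.std()), 2))
```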

  1. A Fault Location Algorithm for Two-End Series-Compensated Double-Circuit Transmission Lines Using the Distributed Parameter Line Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Ning; Gombos, Gergely; Mousavi, Mirrasoul J.

    A new fault location algorithm for two-end series-compensated double-circuit transmission lines utilizing unsynchronized two-terminal current phasors and local voltage phasors is presented in this paper. The distributed parameter line model is adopted to take into account the shunt capacitance of the lines. The mutual coupling between the parallel lines in the zero-sequence network is also considered. The boundary conditions under different fault types are used to derive the fault location formulation. The developed algorithm directly uses the local voltage phasors on the line side of the series compensation (SC) and metal oxide varistors (MOVs). However, when potential transformers are not installed on the line side of the SC and MOVs at the local terminal, these measurements can be calculated from the local terminal bus voltage and currents by estimating the voltages across the SC and MOVs. MATLAB SimPowerSystems is used to generate cases under diverse fault conditions to evaluate accuracy. The simulation results show that the proposed algorithm is qualified for practical implementation.

  2. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, whose behavior is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve good learning and generalization performance, a hybrid kernel function, constructed from a local Radial Basis Function (RBF) kernel and a global polynomial kernel, is incorporated into the LSSVM. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the LSSVM. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering application, the compensation results show that the proposed scheme outperforms the compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
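
    A minimal sketch of the hybrid-kernel LSSVM fit (RBF plus polynomial kernel, standard LSSVM linear system); the chaotic ions motion hyper-parameter search is omitted, and the kernel weights, regularization, and calibration data below are hypothetical.

```python
import numpy as np

def hybrid_kernel(X1, X2, gamma_rbf=0.5, degree=2, coef0=1.0, mix=0.7):
    """Weighted sum of a local RBF kernel and a global polynomial kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma_rbf * d2)
    poly = (X1 @ X2.T + coef0) ** degree
    return mix * rbf + (1 - mix) * poly

def lssvm_fit(X, y, reg=10.0, **kw):
    """LSSVM regression: solve [[0, 1^T], [1, K + I/reg]] [b; alpha] = [0; y]."""
    n = len(y)
    K = hybrid_kernel(X, X, **kw)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / reg
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias, alpha

def lssvm_predict(Xq, X, b, alpha, **kw):
    return hybrid_kernel(Xq, X, **kw) @ alpha + b

# Toy stand-in for the sensor calibration data: output drifts with temperature.
rng = np.random.default_rng(0)
T = np.sort(rng.uniform(-20, 80, 60))[:, None]          # temperature (deg C)
drift = 0.02 * T[:, 0] + 0.5 * np.sin(T[:, 0] / 15) + 0.05 * rng.normal(size=60)
b0, a0 = lssvm_fit(T, drift)
pred = lssvm_predict(T, T, b0, a0)
print("RMS residual of the fitted drift model:",
      round(float(np.sqrt(np.mean((pred - drift) ** 2))), 4))
```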

  3. Evaluation of atmospheric correction algorithms for processing SeaWiFS data

    NASA Astrophysics Data System (ADS)

    Ransibrahmanakul, Varis; Stumpf, Richard; Ramachandran, Sathyadev; Hughes, Kent

    2005-08-01

    To enable the production of the best chlorophyll products from SeaWiFS data, NOAA (Coastwatch and NOS) evaluated the various atmospheric correction algorithms by comparing the satellite-derived water reflectance for each algorithm with in situ data. Gordon and Wang (1994) introduced a method to correct for Rayleigh and aerosol scattering in the atmosphere so that water reflectance may be derived from the radiance measured at the top of the atmosphere. However, since the correction assumed near-infrared scattering to be negligible in coastal waters (an invalid assumption), the method overestimates the atmospheric contribution and consequently underestimates water reflectance for the lower wavelength bands on extrapolation. Several improved methods to estimate the near-infrared correction exist: Siegel et al. (2000); Ruddick et al. (2000); Stumpf et al. (2002) and Stumpf et al. (2003), where an absorbing aerosol correction is also applied along with an additional 1.01% calibration adjustment for the 412 nm band. The evaluation shows that the near-infrared correction developed by Stumpf et al. (2003) results in an overall minimum error for U.S. waters. As of July 2004, NASA (SEADAS) has selected this as the default method for the atmospheric correction used to produce chlorophyll products.

  4. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be the most efficient algorithm for solving single-objective optimal power flow problems. The CSA performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family. It is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, the SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost, and it is used to improve the voltage profile of the system. CSA gives better results than the genetic algorithm (GA) both with and without the SVC.
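
    The sketch below shows a generic Cuckoo Search loop with Lévy flights; the quadratic cost function is a toy stand-in for the real-power generation-cost objective, which in the paper is evaluated through a power flow solution with the SVC placed in the network.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta, size, rng):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, bounds, n_nests=25, pa=0.25, alpha=0.01,
                  beta=1.5, iters=300, rng=None):
    """Generic Cuckoo Search via Levy flights (toy settings)."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = bounds
    nests = rng.uniform(lo, hi, size=(n_nests, dim))
    fit = np.array([cost(x) for x in nests])
    n_abandon = max(1, int(pa * n_nests))
    for _ in range(iters):
        best = nests[np.argmin(fit)]
        # global random walk: Levy flight around the best nest
        step = alpha * levy_step(beta, (n_nests, dim), rng) * (nests - best)
        new = np.clip(nests + step, lo, hi)
        new_fit = np.array([cost(x) for x in new])
        improved = new_fit < fit
        nests[improved], fit[improved] = new[improved], new_fit[improved]
        # abandon a fraction pa of the worst nests and rebuild them randomly
        worst = np.argsort(fit)[-n_abandon:]
        nests[worst] = rng.uniform(lo, hi, size=(n_abandon, dim))
        fit[worst] = [cost(x) for x in nests[worst]]
    return nests[np.argmin(fit)], fit.min()

# Toy quadratic "cost" standing in for the real-power generation cost.
best_x, best_f = cuckoo_search(lambda x: np.sum((x - 1.5) ** 2), dim=4,
                               bounds=(-5.0, 5.0), rng=np.random.default_rng(1))
print("best solution:", np.round(best_x, 3), " cost:", round(float(best_f), 6))
```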

  5. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.

  6. Mars Entry Atmospheric Data System Modelling and Algorithm Development

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; OKeefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.

    2009-01-01

    The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL), Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.

  7. Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm

    NASA Technical Reports Server (NTRS)

    Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)

    2004-01-01

    In support of the NASA Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOLSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria, when safe and appropriate, to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.

  8. The Results of a Simulator Study to Determine the Effects on Pilot Performance of Two Different Motion Cueing Algorithms and Various Delays, Compensated and Uncompensated

    NASA Technical Reports Server (NTRS)

    Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.

    2003-01-01

    A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal control based algorithm. Also, the study included the effects of transport delays and the compensation thereof. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports analyses of the experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests which will be conducted during the spring of 2003. Therefore only three pilots were used. Nevertheless some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.

  9. Compensator improvement for multivariable control systems

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.; Gresham, L. L.

    1977-01-01

    A theory and the associated numerical technique are developed for an iterative design improvement of the compensation for linear, time-invariant control systems with multiple inputs and multiple outputs. A strict constraint algorithm is used in obtaining a solution of the specified constraints of the control design. The result of the research effort is the multiple input, multiple output Compensator Improvement Program (CIP). The objective of the Compensator Improvement Program is to modify in an iterative manner the free parameters of the dynamic compensation matrix so that the system satisfies frequency domain specifications. In this exposition, the underlying principles of the multivariable CIP algorithm are presented and the practical utility of the program is illustrated with space vehicle related examples.

  10. An adaptive compensation algorithm for temperature drift of micro-electro-mechanical systems gyroscopes using a strong tracking Kalman filter.

    PubMed

    Feng, Yibo; Li, Xisheng; Zhang, Xiaojuan

    2015-05-13

    We present an adaptive algorithm for a system integrated with micro-electro-mechanical systems (MEMS) gyroscopes and a compass to eliminate the influence of the environment, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model and changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment with the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope under changing temperature. The heading error is less than 0.6° in the static temperature experiment, and is kept within the range from -2° to 5° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm exhibits strong adaptability to a changing temperature, and performs significantly better than KF and MLR in compensating the temperature drift of the gyroscope and eliminating the influence of temperature variation.
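
    A simplified single-output sketch of the strong-tracking idea: a Kalman filter over drift-model parameters whose predicted covariance is inflated by a fading factor when the innovation grows. The drift model, fading-factor formula, and noise settings here are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def stkf_drift_track(temps, drifts, q=1e-6, r=1e-3, rho=0.95, beta=1.0):
    """Track the parameters of a linear temperature-drift model drift ~ a + b*T
    with a Kalman filter plus a scalar fading factor (strong tracking)."""
    x = np.zeros(2)                        # [a, b]
    P = np.eye(2)
    Q = q * np.eye(2)
    V = r                                  # running innovation variance estimate
    est = []
    for k, (T, z) in enumerate(zip(temps, drifts)):
        H = np.array([1.0, T])
        P_pred = P + Q                     # parameters modeled as a slow random walk
        innov = z - H @ x
        V = innov ** 2 if k == 0 else (rho * V + innov ** 2) / (1 + rho)
        M = H @ P_pred @ H
        lam = max(1.0, (V - beta * r) / M) if M > 0 else 1.0
        P_pred = lam * P_pred              # inflate covariance when innovations grow
        S = H @ P_pred @ H + r
        K = P_pred @ H / S
        x = x + K * innov
        P = (np.eye(2) - np.outer(K, H)) @ P_pred
        est.append(x.copy())
    return np.array(est)

# Synthetic run: drift coefficients change abruptly halfway through.
rng = np.random.default_rng(2)
T = rng.uniform(10, 40, 400)
true_a = np.where(np.arange(400) < 200, 0.05, 0.12)
true_b = np.where(np.arange(400) < 200, 0.002, 0.004)
z = true_a + true_b * T + 0.03 * rng.normal(size=400)
params = stkf_drift_track(T, z)
print("final [a, b] estimate:", np.round(params[-1], 4))
```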

  11. Impact of beacon wavelength on phase-compensation performance

    NASA Astrophysics Data System (ADS)

    Enterline, Allison A.; Spencer, Mark F.; Burrell, Derek J.; Brennan, Terry J.

    2017-09-01

    This study evaluates the effects of beacon-wavelength mismatch on phase-compensation performance. In general, beacon-wavelength mismatch occurs at the system level because the beacon-illuminator laser (BIL) and high-energy laser (HEL) are often at different wavelengths. Such is the case, for example, when using an aperture sharing element to isolate the beam-control sensor suite from the blinding nature of the HEL. With that said, this study uses the WavePlex Toolbox in MATLAB® to model ideal spherical wave propagation through various atmospheric-turbulence conditions. To quantify phase-compensation performance, we also model a nominal adaptive-optics (AO) system. We achieve correction from a Shack-Hartmann wavefront sensor and continuous-face-sheet deformable mirror using a least-squares phase reconstruction algorithm in the Fried geometry and a leaky integrator control law. To this end, we plot the power in the bucket metric as a function of BIL-HEL wavelength difference. Our initial results show that positive BIL-HEL wavelength differences achieve better phase compensation performance compared to negative BIL-HEL wavelength differences (i.e., red BILs outperform blue BILs). This outcome is consistent with past results.

  12. Fourier transform wavefront control with adaptive prediction of the atmosphere.

    PubMed

    Poyneer, Lisa A; Macintosh, Bruce A; Véran, Jean-Pierre

    2007-09-01

    Predictive Fourier control is a temporal power spectral density-based adaptive method for adaptive optics that predicts the atmosphere under the assumption of frozen flow. The predictive controller is based on Kalman filtering and a Fourier decomposition of atmospheric turbulence using the Fourier transform reconstructor. It provides a stable way to compensate for arbitrary numbers of atmospheric layers. For each Fourier mode, efficient and accurate algorithms estimate the necessary atmospheric parameters from closed-loop telemetry and determine the predictive filter, adjusting as conditions change. This prediction improves atmospheric rejection, leading to significant improvements in system performance. For a 48x48 actuator system operating at 2 kHz, five-layer prediction for all modes is achievable in under 2x10^9 floating-point operations/s.

  13. Adaptation of a Hyperspectral Atmospheric Correction Algorithm for Multi-spectral Ocean Color Data in Coastal Waters. Chapter 3

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.

    2003-01-01

    This SIMBIOS contract supports several activities over its three-year time-span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are making the model calculations to incorporate various absorbing aerosol models into tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed the algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.

  14. Aeromagnetic gradient compensation method for helicopter based on ɛ-support vector regression algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Peilin; Zhang, Qunying; Fei, Chunjiao; Fang, Guangyou

    2017-04-01

    Aeromagnetic gradients are typically measured by optically pumped magnetometers mounted on an aircraft. Any aircraft, particularly a helicopter, produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is essential, and least squares (LS) is the conventional method used for reducing interference levels. However, the LS approach to solving the aeromagnetic interference model has a few difficulties, one of which is handling multicollinearity. Therefore, we propose an aeromagnetic gradient compensation method, specifically targeted for helicopter use but applicable on any airborne platform, which is based on the ɛ-support vector regression algorithm. The structural risk minimization criterion intrinsic to the method avoids multicollinearity altogether. Local aeromagnetic anomalies can be retained, and platform-generated fields are suppressed simultaneously by constructing an appropriate loss function and kernel function. The method was tested using an unmanned helicopter and obtained improvement ratios of 12.7 and 3.5 in the vertical and horizontal gradient data, respectively. Both values are likely better than the conventional method would have produced on the same data, although a direct comparison was not possible. The validity of the proposed method is demonstrated by the experimental results.
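
    A rough sketch of the regression step using scikit-learn's ε-SVR; the attitude-derived features, synthetic interference, and evaluation are hypothetical stand-ins, and the real feature construction (e.g., a Tolles-Lawson-style model applied to gradient data) is more elaborate.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 2000
# Hypothetical attitude angles (roll, pitch, yaw) and simple direction-cosine
# features standing in for the real interference model terms.
attitude = rng.uniform(-0.3, 0.3, size=(n, 3))
features = np.hstack([np.cos(attitude), np.sin(attitude)])
interference = (features @ np.array([8.0, -3.0, 5.0, 2.0, -6.0, 1.5])
                + 0.4 * rng.normal(size=n))      # synthetic platform field (nT)
geology = 2.0 * np.sin(np.linspace(0, 20, n))    # anomaly signal we want to keep
measured = geology + interference

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(features, measured)                    # absorbs the attitude-correlated part
compensated = measured - model.predict(features)

resid = compensated - geology                    # interference left after compensation
print("interference std before/after compensation (nT):",
      round(float(interference.std()), 2),
      round(float((resid - resid.mean()).std()), 2))
```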

  15. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

    The primary focus of this proposed research is atmospheric correction algorithm evaluation and development and satellite sensor calibration and characterization. It is well known that atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, in order to obtain the required accuracy in the water-leaving signals derived from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) with the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I will briefly present and discuss these and some other research activities.

  16. Development of the atmospheric correction algorithm for the next generation geostationary ocean color sensor data

    NASA Astrophysics Data System (ADS)

    Lee, Kwon-Ho; Kim, Wonkook

    2017-04-01

    The Geostationary Ocean Color Imager-II (GOCI-II) is designed to focus on ocean environmental monitoring with better spatial (250 m for the local area and 1 km for the full disk) and spectral (13 bands) resolution than the currently operational GOCI-I. GOCI-II will be launched in 2018. This study presents the algorithm, currently under development, for atmospheric correction and retrieval of surface reflectance over land, optimized for the sensor's characteristics. We first derived top-of-atmosphere radiances as proxy data from a parameterized radiative transfer code for the 13 GOCI-II bands. Based on the proxy data, the algorithm performs cloud masking, gas absorption correction, aerosol inversion, and computation of the aerosol extinction correction. The retrieved surface reflectances are evaluated against the MODIS Level 2 surface reflectance product (MOD09). For the initial test period, the algorithm gave errors within 0.05 compared to MOD09. Further work will be carried out to fully implement the algorithm in the GOCI-II Ground Segment system (G2GS) algorithm development environment. The atmospherically corrected surface reflectance will be a standard GOCI-II product after launch.

  17. Results of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) Experiment

    NASA Technical Reports Server (NTRS)

    Wilson, K. E.; Leatherman, P. R.; Cleis, R.; Spinhirne, J.; Fugate, R. Q.

    1997-01-01

    Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes. Phase I of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) experiment demonstrated the first propagation of an atmosphere-compensated laser beam to the lunar retroreflectors. A 1.06-micron Nd:YAG laser beam was propagated through the full aperture of the 1.5-m telescope at the Starfire Optical Range (SOR), Kirtland Air Force Base, New Mexico, to the Apollo 15 retroreflector array at Hadley Rille. Laser guide-star adaptive optics were used to compensate turbulence-induced aberrations across the transmitter's 1.5-m aperture. A 3.5-m telescope, also located at the SOR, was used as a receiver for detecting the return signals. JPL-supplied Chebyshev polynomials of the retroreflector locations were used to develop tracking algorithms for the telescopes. At times we observed in excess of 100 photons returned from a single pulse when the outgoing beam from the 1.5-m telescope was corrected by the adaptive optics system. No returns were detected when the outgoing beam was uncompensated. The experiment was conducted from March through September 1994, during the first or last quarter of the Moon.

  18. An observer-based compensator for distributed delays in integrated control systems

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok

    1989-01-01

    This paper presents an algorithm for compensation of delays that are distributed within a control loop. The observer-based algorithm is especially suitable for compensating network-induced delays that are likely to occur in integrated control systems of the future generation aircraft. The robustness of the algorithm relative to uncertainties in the plant model have been examined.

  19. Refining atmosphere light to improve the dark channel prior algorithm

    NASA Astrophysics Data System (ADS)

    Gan, Ling; Li, Dagang; Zhou, Can

    2017-05-01

    The defogged image obtained with the dark channel prior algorithm has some shortcomings, such as color distortion, dim lighting, and loss of detail near the observer. The main reasons are that the atmospheric light is estimated as a single value and its change with scene depth is not considered. We therefore model the atmospheric light, one parameter of the defogging model. First, we discretize the atmospheric light into equivalent points and build a discrete model of the light. Second, we build several rough candidate models by analyzing the relationship between the atmospheric light and the medium transmission. Finally, by analyzing the results of many experiments qualitatively and quantitatively, we obtain the selected and optimized model. Although this method slightly increases the processing time, the evaluation metrics, histogram correlation coefficient and peak signal-to-noise ratio, are improved significantly and the defogged result conforms better to human vision. The color and the details near the observer in the defogged image are also better than those achieved by the original method.
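
    For context, the baseline dark channel prior pipeline with a single global atmospheric light (the quantity this paper proposes to refine) can be sketched as follows; the patch size, weights, and synthetic test image are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Baseline dark-channel-prior dehazing with one global atmospheric light A.
    img is float RGB in [0, 1]."""
    dark = minimum_filter(img.min(axis=2), size=patch)        # dark channel
    # atmospheric light: mean colour of the brightest 0.1% dark-channel pixels
    n_top = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n_top:], dark.shape)
    A = img[idx].mean(axis=0)
    # transmission estimate from the normalized dark channel
    norm_dark = minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(1.0 - omega * norm_dark, t0, 1.0)
    # invert the haze model I = J*t + A*(1 - t)
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0), t, A

# Usage on a synthetic hazy image (stand-in for a real photograph).
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, size=(120, 160, 3))
depth = np.linspace(0.2, 2.0, 160)[None, :, None]
t_true = np.exp(-1.2 * depth)
hazy = scene * t_true + 0.9 * (1 - t_true)
restored, t_est, A_est = dehaze_dark_channel(hazy)
print("estimated atmospheric light:", np.round(A_est, 3))
```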

  20. Turbulence compensation: an overview

    NASA Astrophysics Data System (ADS)

    van Eekeren, Adam W. M.; Schutte, Klamer; Dijk, Judith; Schwering, Piet B. W.; van Iersel, Miranda; Doelman, Niek J.

    2012-06-01

    In general, long range visual detection, recognition and identification are hampered by turbulence caused by atmospheric conditions. Much research has been devoted to the field of turbulence compensation. One of the main advantages of turbulence compensation is that it enables visual identification over larger distances. In many (military) scenarios this is of crucial importance. In this paper we give an overview of several software and hardware approaches to compensate for the visual artifacts caused by turbulence. These approaches are very diverse and range from the use of dedicated hardware, such as adaptive optics, to the use of software methods, such as deconvolution and lucky imaging. For each approach the pros and cons are given and it is indicated for which scenario this approach is useful. In more detail we describe the turbulence compensation methods TNO has developed in the last years and place them in the context of the different turbulence compensation approaches and TNO's turbulence compensation roadmap. Furthermore we look forward and indicate the upcoming challenges in the field of turbulence compensation.

  1. Enhancement and evaluation of an algorithm for atmospheric profiling continuity from Aqua to Suomi-NPP

    NASA Astrophysics Data System (ADS)

    Lipton, A.; Moncet, J. L.; Payne, V.; Lynch, R.; Polonsky, I. N.

    2017-12-01

    We will present recent results from an algorithm for producing climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. Developments to be presented include the impact of a radiance-based pre-classification method for the atmospheric background. In addition to improving retrieval performance, pre-classification has the potential to reduce the sensitivity of the retrievals to the climatological data from which the background estimate and its error covariance are derived. We will also discuss evaluation of a method for mitigating the effect of clouds on the radiances, and enhancements of the radiative transfer forward model.

  2. An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP

    NASA Astrophysics Data System (ADS)

    Moncet, J. L.

    2015-12-01

    We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. While OE algorithms typically implement regularization by using background estimates from

  3. Diversity in Detection Algorithms for Atmospheric Rivers: A Community Effort to Understand the Consequences

    NASA Astrophysics Data System (ADS)

    Shields, C. A.; Ullrich, P. A.; Rutz, J. J.; Wehner, M. F.; Ralph, M.; Ruby, L.

    2017-12-01

    Atmospheric rivers (ARs) are long, narrow filamentary structures that transport large amounts of moisture in the lower layers of the atmosphere, typically from subtropical regions to mid-latitudes. ARs play an important role in regional hydroclimate by supplying significant amounts of precipitation that can alleviate drought, or in extreme cases, produce dangerous floods. Accurately detecting, or tracking, ARs is important not only for weather forecasting, but is also necessary to understand how these events may change under global warming. Detection algorithms are used on both regional and global scales, most accurately with high-resolution datasets or model output. Different detection algorithms can produce different answers. Detection algorithms found in the current literature fall broadly into two categories: "time-stitching", where the AR is tracked with a Lagrangian approach through time and space; and "counting", where ARs are identified for a single point in time for a single location. Counting routines can be further subdivided into algorithms that use absolute thresholds with specific geometry, algorithms that use relative thresholds, algorithms based on statistics, and pattern recognition and machine learning techniques. With such a large diversity in detection codes, AR tracks and "counts" can vary widely from technique to technique. Uncertainty increases for future climate scenarios, where relative and absolute thresholding produce vastly different counts, simply due to the moister background state in a warmer world. In an effort to quantify the uncertainty associated with tracking algorithms, the AR detection community has come together to participate in ARTMIP, the Atmospheric River Tracking Method Intercomparison Project. Each participant will provide AR metrics to the greater group by applying their code to a common reanalysis dataset. MERRA2 data was chosen for both temporal and spatial resolution
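
    As a hedged sketch of the contrast the abstract draws between "counting" threshold families, the toy example below flags grid cells by an absolute integrated vapor transport (IVT) threshold versus a climatology-relative percentile; the field, threshold values, and grid are all hypothetical, not any ARTMIP participant's code.

    ```python
    import numpy as np

    def detect_absolute(ivt, threshold=250.0):
        """Flag cells whose integrated vapor transport exceeds a fixed value (kg m-1 s-1)."""
        return ivt >= threshold

    def detect_relative(ivt, percentile=85.0):
        """Flag cells above a percentile of the same field (climatology-relative)."""
        return ivt >= np.percentile(ivt, percentile)

    # Toy IVT field; in a moister (warmer) background state the absolute count
    # grows, while the relative count stays near the chosen percentile by construction.
    rng = np.random.default_rng(1)
    ivt_present = rng.gamma(shape=2.0, scale=100.0, size=(90, 180))
    ivt_warmer = 1.2 * ivt_present   # crude stand-in for a moister climate

    for name, field in [("present", ivt_present), ("warmer", ivt_warmer)]:
        frac_abs = detect_absolute(field).mean()
        frac_rel = detect_relative(field).mean()
        print(f"{name}: absolute fraction {frac_abs:.3f}  relative fraction {frac_rel:.3f}")
    ```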

  4. A Novel Speed Compensation Method for ISAR Imaging with Low SNR

    PubMed Central

    Liu, Yongxiang; Zhang, Shuanghui; Zhu, Dekang; Li, Xiang

    2015-01-01

    In this paper, two novel speed compensation algorithms for ISAR imaging under a low signal-to-noise ratio (SNR) condition are proposed, based on the cubic phase function (CPF) and the integrated cubic phase function (ICPF), respectively. These two algorithms can estimate the speed of the target directly from the wideband radar echo, which overcomes the limitations of dedicated speed measurement in a radar system. With the utilization of non-coherent accumulation, the ICPF-based speed compensation algorithm is robust to noise and can meet the requirement of speed compensation for ISAR imaging under a low SNR condition. Moreover, a fast searching implementation strategy, which consists of a coarse search and a precise search, has been introduced to decrease the computational burden of speed compensation based on CPF and ICPF. Experimental results based on radar data validate the effectiveness of the proposed algorithms. PMID:26225980
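
    As an illustrative sketch of the cubic phase function idea (not the authors' full ISAR speed-compensation chain), the snippet below estimates the chirp rate of a noisy linear-FM signal by locating the CPF peak at the central time instant; the signal parameters and SNR are hypothetical.

    ```python
    import numpy as np

    def cpf(signal, n0, omega_grid):
        """CPF(n0, omega) = sum_m s(n0+m) * s(n0-m) * exp(-j * omega * m^2)."""
        N = len(signal)
        half = min(n0, N - 1 - n0)
        m = np.arange(-half, half + 1)
        prod = signal[n0 + m] * signal[n0 - m]
        return np.array([np.sum(prod * np.exp(-1j * w * m ** 2)) for w in omega_grid])

    # Toy chirp: s(n) = exp(j(2*pi*f0*n + pi*k*n^2)); the CPF peaks at omega = 2*pi*k.
    N, f0, k = 257, 0.05, 2e-4
    n = np.arange(N)
    rng = np.random.default_rng(2)
    s = np.exp(1j * (2 * np.pi * f0 * n + np.pi * k * n ** 2))
    s = s + 0.5 * (rng.normal(size=N) + 1j * rng.normal(size=N))   # low-SNR condition

    omega_grid = np.linspace(0, 5e-3, 2001)
    spectrum = np.abs(cpf(s, n0=N // 2, omega_grid=omega_grid))
    k_hat = omega_grid[np.argmax(spectrum)] / (2 * np.pi)
    print(f"true chirp rate {k:.2e}, CPF estimate {k_hat:.2e}")
    ```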

  5. Atmospheric correction for hyperspectral ocean color sensors

    NASA Astrophysics Data System (ADS)

    Ibrahim, A.; Ahmad, Z.; Franz, B. A.; Knobelspiesse, K. D.

    2017-12-01

    NASA's heritage Atmospheric Correction (AC) algorithm for multi-spectral ocean color sensors is inadequate for the new generation of spaceborne hyperspectral sensors, such as NASA's first hyperspectral Ocean Color Instrument (OCI) onboard the anticipated Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite mission. The AC process must estimate and remove the atmospheric path radiance contribution due to the Rayleigh scattering by air molecules and by aerosols from the measured top-of-atmosphere (TOA) radiance. Further, it must also compensate for the absorption by atmospheric gases and correct for reflection and refraction of the air-sea interface. We present and evaluate an improved AC for hyperspectral sensors beyond the heritage approach by utilizing the additional spectral information of the hyperspectral sensor. The study encompasses a theoretical radiative transfer sensitivity analysis as well as a practical application of the Hyperspectral Imager for the Coastal Ocean (HICO) and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensors.

  6. Novel lidar algorithms for atmospheric slant-range visibility, planetary boundary layer height, meteorological phenomena and atmospheric layering measurements

    NASA Astrophysics Data System (ADS)

    Pantazis, Alexandros; Papayannis, Alexandros; Georgoussis, Georgios

    2018-04-01

    In this paper we present the development of novel algorithms and techniques implemented within the Laser Remote Sensing Laboratory (LRSL) of the National Technical University of Athens (NTUA), in collaboration with Raymetrics S.A., in order to incorporate them into a 3-Dimensional (3D) lidar. The lidar transmits at 355 nm in the eye-safe region, and the measurements are then transposed to the visual range at 550 nm, according to the World Meteorological Organization (WMO) and the International Civil Aviation Organization (ICAO) rules of daytime visibility. These algorithms are able to provide horizontal, slant and vertical visibility for control-tower aircraft controllers and meteorologists, but also from the pilot's point of view. Other algorithms are also provided for the detection of atmospheric layering in any given direction and vertical angle, along with the detection of the Planetary Boundary Layer Height (PBLH).

  7. Compensation of distributed delays in integrated communication and control systems

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Luck, Rogelio

    1991-01-01

    The concept, analysis, implementation, and verification of a method for compensating delays that are distributed between the sensors, controller, and actuators within a control loop are discussed. With the objective of mitigating the detrimental effects of these network induced delays, a predictor-controller algorithm was formulated and analyzed. Robustness of the delay compensation algorithm was investigated relative to parametric uncertainties in plant modeling. The delay compensator was experimentally verified on an IEEE 802.4 network testbed for velocity control of a DC servomotor.

  8. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    NASA Astrophysics Data System (ADS)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on the visible and infrared radiation which is strong enough to modify the reflected electromagnetic signal and at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes loss of brightness to the scene, as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction, which serves to remove the effects of molecular and aerosol scattering. In the present study, we have applied a fast atmospheric correction algorithm to IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and a simplified use of the Second Simulation of Satellite Signal in Solar Spectrum (6S) radiative transfer code, which is used to generate look-up-tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested for different IRS-P6 AWiFS false color composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover types, i.e., red soil, chickpea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and a very good match was found between measured surface reflectance and atmospherically corrected reflectance for all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with
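
    As a hedged sketch of the per-pixel inversion step that a 6S look-up-table correction of this kind typically performs, the snippet below applies the standard Lambertian coupling formula; the LUT values (path reflectance, transmittances, spherical albedo) are made-up placeholders, whereas a real implementation interpolates them from 6S runs indexed by aerosol optical depth, geometry, and band.

    ```python
    def toa_to_surface(rho_toa, rho_path, t_gas, t_scat, s_alb):
        """Invert rho_toa = t_gas * (rho_path + t_scat * rho_s / (1 - s_alb * rho_s))."""
        y = (rho_toa / t_gas - rho_path) / t_scat
        return y / (1.0 + s_alb * y)

    # Example: one AWiFS-like red-band pixel under a moderately turbid atmosphere
    # (all numbers hypothetical).
    print(toa_to_surface(rho_toa=0.18, rho_path=0.06, t_gas=0.95,
                         t_scat=0.80, s_alb=0.12))
    ```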

  9. A preliminary assessment of the Nimbus-7 CZCS atmospheric correction algorithm in a horizontally inhomogeneous atmosphere. [Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Gordon, H. R.

    1981-01-01

    To estimate the concentration of phytoplankton pigments in the oceans on the basis of Nimbus-7 CZCS imagery, it is necessary to remove the effects of the intervening atmosphere from the satellite imagery. The principal effect of the atmosphere is a loss in contrast caused by the addition of a substantial amount of radiance (path radiance) to that scattered out of the water. Gordon (1978) has developed a technique which shows considerable promise for removal of these atmospheric effects. Attention is given to the correction algorithm and its application to CZCS imagery. An alternative method under study for effecting the atmospheric correction requires a knowledge of the 'clear water' subsurface upwelled radiance as a function of solar angle and pigment concentration.

  10. Characterization of Properties of Earth Atmosphere from Multi-Angular Polarimetric Observations of Polder/Parasol Using GRASP Algorithm

    NASA Astrophysics Data System (ADS)

    Dubovik, O.; Litvinov, P.; Lapyonok, T.; Ducos, F.; Fuertes, D.; Huang, X.; Torres, B.; Aspetsberger, M.; Federspiel, C.

    2014-12-01

    The POLDER imager on board the PARASOL micro-satellite is the only satellite polarimeter to have provided an extensive (~9 year) record of detailed polarimetric observations of the Earth's atmosphere from space. POLDER/PARASOL registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. Such observations have very high sensitivity to the variability of the properties of the atmosphere and underlying surface and cannot be adequately interpreted using the look-up-table retrieval algorithms developed for analyzing the mono-viewing, intensity-only observations traditionally used in atmospheric remote sensing. Therefore, a new enhanced retrieval algorithm, GRASP (Generalized Retrieval of Aerosol and Surface Properties), has been developed and applied to the processing of PARASOL data. GRASP relies on highly optimized statistical fitting of the observations and derives a large number of unknowns for each observed pixel. The algorithm uses an elaborate model of the atmosphere and fully accounts for all multiple interactions of scattered solar light with aerosol, gases and the underlying surface. All calculations are performed during inversion and no look-up tables are used. The algorithm is very flexible in the utilization of various types of a priori constraints on the retrieved characteristics and in the parameterization of the surface-atmosphere system. It is also optimized for high-performance computation. The results of the PARASOL data processing will be presented with emphasis on the transferability and adaptability of the developed retrieval concept for processing polarimetric observations of other planets. For example, flexibility and possible alternatives in modeling the properties of aerosol polydisperse mixtures, particle composition and shape, surface reflectance, etc. will be discussed.

  11. The Computational Complexity, Parallel Scalability, and Performance of Atmospheric Data Assimilation Algorithms

    NASA Technical Reports Server (NTRS)

    Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)

    2001-01-01

    The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best-estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables; therefore, the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing parallel implementations of the algorithms that are both efficient, in terms of wall-clock time, and scalable.
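
    The teraword figure follows directly from the stated state size; as a short worked-arithmetic sketch under that assumption (~10^6 gridded variables, one word per matrix element):

    ```python
    # Back-of-the-envelope check of the covariance storage claim in the abstract.
    n_state = 10**6                 # assumed number of gridded state variables
    cov_elements = n_state ** 2     # full forecast-error covariance matrix
    print(f"{cov_elements:.1e} words")   # ~1e12 words, i.e. on the order of a teraword
    ```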

  12. Retrieving Atmospheric Temperature and Moisture Profiles from NPP CRIS/ATMS Sensors Using Crimss EDR Algorithm

    NASA Technical Reports Server (NTRS)

    Liu, X.; Kizer, S.; Barnet, C.; Dvakarla, M.; Zhou, D. K.; Larar, A. M.

    2012-01-01

    The Joint Polar Satellite System (JPSS) is a U.S. National Oceanic and Atmospheric Administration (NOAA) mission in collaboration with the U.S. National Aeronautics and Space Administration (NASA) and international partners. The NPP Cross-track Infrared Microwave Sounding Suite (CrIMSS) consists of the infrared (IR) Crosstrack Infrared Sounder (CrIS) and the microwave (MW) Advanced Technology Microwave Sounder (ATMS). The CrIS instrument is a hyperspectral interferometer, which measures high spectral and spatial resolution upwelling infrared radiances. The ATMS is a 22-channel radiometer similar to Advanced Microwave Sounding Units (AMSU) A and B. It measures top-of-atmosphere MW upwelling radiation and provides the capability of sounding below clouds. The CrIMSS Environmental Data Record (EDR) algorithm provides three EDRs, namely the atmospheric vertical temperature, moisture and pressure profiles (AVTP, AVMP and AVPP, respectively), with the lower tropospheric AVTP and the AVMP being JPSS Key Performance Parameters (KPPs). The operational CrIMSS EDR algorithm was originally designed to run on large IBM computers with a dedicated data management subsystem (DMS). We have ported the operational code to simple Linux systems by replacing DMS with appropriate interfaces. We also changed the interface of the operational code so that we can read data from both the CrIMSS science code and the operational code and be able to compare lookup tables, parameter files, and output results. The details of the CrIMSS EDR algorithm are described in reference [1]. We will present results of testing the CrIMSS EDR operational algorithm using proxy data generated from the Infrared Atmospheric Sounding Interferometer (IASI) satellite data and from the NPP CrIS/ATMS data.

  13. Delay compensation in integrated communication and control systems. I - Conceptual development and analysis

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok

    1990-01-01

    A procedure for compensating for the effects of distributed network-induced delays in integrated communication and control systems (ICCS) is proposed. The problem of analyzing systems with time-varying and possibly stochastic delays could be circumvented by use of a deterministic observer which is designed to perform under certain restrictive but realistic assumptions. The proposed delay-compensation algorithm is based on a deterministic state estimator and a linear state-variable-feedback control law. The deterministic observer can be replaced by a stochastic observer without any structural modifications of the delay compensation algorithm. However, if a feedforward-feedback control law is chosen instead of the state-variable feedback control law, the observer must be modified as a conventional nondelayed system would be. Under these circumstances, the delay compensation algorithm would be accordingly changed. The separation principle of the classical Luenberger observer holds true for the proposed delay compensator. The algorithm is suitable for ICCS in advanced aircraft, spacecraft, manufacturing automation, and chemical process applications.
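
    As a hedged sketch of the kind of model-based prediction a delay compensator of this type relies on (not the paper's exact observer), the snippet below propagates a state estimate forward across a known d-step network delay using the plant model, so the feedback law can act on a prediction of the current state; the matrices, inputs, and delay are hypothetical.

    ```python
    import numpy as np

    def predict_ahead(A, B, x_hat, u_history, d):
        """Return x_hat(k+d | k) from x_hat(k) and the last d applied inputs."""
        x = x_hat.copy()
        for u in u_history[-d:]:
            x = A @ x + B @ u        # roll the plant model forward one step per input
        return x

    A = np.array([[1.0, 0.1], [0.0, 0.95]])    # discretized DC-servo-like plant (assumed)
    B = np.array([[0.0], [0.1]])
    x_hat = np.array([1.0, 0.0])
    u_history = [np.array([0.2]), np.array([0.1]), np.array([0.0])]
    print(predict_ahead(A, B, x_hat, u_history, d=3))
    ```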

  14. An embedded processor for real-time atmospheric compensation

    NASA Astrophysics Data System (ADS)

    Bodnar, Michael R.; Curt, Petersen F.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-05-01

    Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured image information can be recovered using post-processing techniques, the computational complexity of such approaches has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded processing board. The end result is a lightweight, low-power image processing system that improves the quality of long-range imagery in real time and uses modular video I/O to provide a flexible interface to most common digital and analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase over a modern desktop PC was achieved in a form-factor that is compact, low-power, and field-deployable.

  15. An overview of turbulence compensation

    NASA Astrophysics Data System (ADS)

    Schutte, Klamer; van Eekeren, Adam W. M.; Dijk, Judith; Schwering, Piet B. W.; van Iersel, Miranda; Doelman, Niek J.

    2012-09-01

    In general, long range visual detection, recognition and identification are hampered by turbulence caused by atmospheric conditions. Much research has been devoted to the field of turbulence compensation. One of the main advantages of turbulence compensation is that it enables visual identification over larger distances. In many (military) scenarios this is of crucial importance. In this paper we give an overview of several software and hardware approaches to compensate for the visual artifacts caused by turbulence. These approaches are very diverse and range from the use of dedicated hardware, such as adaptive optics, to the use of software methods, such as deconvolution and lucky imaging. For each approach the pros and cons are given and it is indicated for which type of scenario this approach is useful. In more detail we describe the turbulence compensation methods TNO has developed in the last years and place them in the context of the different turbulence compensation approaches and TNO's turbulence compensation roadmap. Furthermore we look forward and indicate the upcoming challenges in the field of turbulence compensation.

  16. Intraocular scattering compensation in retinal imaging

    PubMed Central

    Christaras, Dimitrios; Ginis, Harilaos; Pennos, Alexandros; Artal, Pablo

    2016-01-01

    Intraocular scattering affects fundus imaging in a similar way as it affects vision; it causes a decrease in contrast which depends both on the intrinsic scattering of the eye and on the dynamic range of the image. Consequently, in cases where the absolute intensity in the fundus image is important, scattering can lead to a wrong estimation. In this paper, a setup capable of acquiring fundus images and objectively estimating intraocular scattering was built, and the acquired images were then used for scattering compensation in fundus imaging. The method consists of two parts: first, the individual's wide-angle Point Spread Function (PSF) is reconstructed at a specific wavelength; second, it is used within an enhancement algorithm on an acquired fundus image to compensate for scattering. As a proof of concept, a single-pass measurement with a scatter filter was carried out first and the complete algorithm of PSF reconstruction and scattering compensation was applied. The advantage of the single-pass test is that one can compare the reconstructed image with the original one and check its validity, thus testing the efficiency of the method. Following the test, the algorithm was applied to actual fundus images of human eyes, and the contrast of the image before and after compensation was compared. The comparison showed that, depending on the wavelength, contrast can be reduced by 8.6% under certain conditions. PMID:27867710

  17. Delay compensation in integrated communication and control systems. II - Implementation and verification

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok

    1990-01-01

    The implementation and verification of the delay-compensation algorithm are addressed. The delay compensator has been experimentally verified at an IEEE 802.4 network testbed for velocity control of a DC servomotor. The performance of the delay-compensation algorithm was also examined by combined discrete-event and continuous-time simulation of the flight control system of an advanced aircraft that uses the SAE (Society of Automotive Engineers) linear token passing bus for data communications.

  18. Atmospheric correction over case 2 waters with an iterative fitting algorithm: relative humidity effects.

    PubMed

    Land, P E; Haigh, J D

    1997-12-20

    In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Ångström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type and to the misattribution to aerosol of spectral effects actually caused by the water constituents. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot), rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, although the retrieval of relative humidity was not successful.
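
    As a minimal sketch of the Ångström power-law spectral dependence the abstract says is usually assumed for aerosol optical depth (reference wavelength, exponent, and band set below are hypothetical, not the paper's values):

    ```python
    import numpy as np

    def aerosol_optical_depth(wavelength_nm, tau_ref=0.15, lambda_ref_nm=550.0, alpha=1.3):
        """tau(lambda) = tau_ref * (lambda / lambda_ref) ** (-alpha)."""
        return tau_ref * (np.asarray(wavelength_nm) / lambda_ref_nm) ** (-alpha)

    bands = [412, 443, 490, 555, 670, 765, 865]    # SeaWiFS-like band centres (illustrative)
    print(dict(zip(bands, np.round(aerosol_optical_depth(bands), 3))))
    ```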

  19. Sub-picosecond timing fluctuation suppression in laser-based atmospheric transfer of microwave signal using electronic phase compensation

    NASA Astrophysics Data System (ADS)

    Chen, Shijun; Sun, Fuyu; Bai, Qingsong; Chen, Dawei; Chen, Qiang; Hou, Dong

    2017-10-01

    We demonstrated timing fluctuation suppression in outdoor laser-based atmospheric radio-frequency transfer over a 110 m one-way free-space link using an electronic phase compensation technique. Timing fluctuations and the Allan deviation are both measured to characterize the instability incurred by the transferred frequency during the transfer process. When transferring a 1 GHz microwave signal over the timing-fluctuation-suppressed transmission link, the total root-mean-square (rms) timing fluctuation was measured to be 920 femtoseconds over 5000 s, with fractional frequency instability on the order of 1 × 10-12 at 1 s and 2 × 10-16 at 1000 s. This atmospheric frequency transfer scheme with the timing fluctuation suppression technique can be used to rapidly establish an atomic-clock-based free-space frequency transmission link, since its stability is superior to that of commercial Cs and Rb clocks.
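
    As a hedged sketch of the Allan deviation statistic the abstract uses to characterize instability (computed here in its simple non-overlapping form from fractional-frequency samples; the sample rate and white-noise level are made up for illustration):

    ```python
    import numpy as np

    def allan_deviation(y, m):
        """Allan deviation at averaging time m * tau0 from fractional-frequency data y."""
        n_groups = len(y) // m
        y_avg = y[:n_groups * m].reshape(n_groups, m).mean(axis=1)   # tau-averaged frequency
        return np.sqrt(0.5 * np.mean(np.diff(y_avg) ** 2))

    rng = np.random.default_rng(4)
    y = 1e-12 * rng.normal(size=100_000)     # white-FM-like noise, tau0 = 1 s sampling assumed
    for m in (1, 10, 100, 1000):
        print(f"tau = {m:5d} s   sigma_y = {allan_deviation(y, m):.2e}")
    ```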

  20. An innovative approach to compensator design

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.; Mcdaniel, W. L., Jr.

    1973-01-01

    The computer-aided design of a compensator for a control system is considered from a frequency-domain point of view. The design technique developed is based on describing the open-loop frequency response by n discrete frequency points, which result in n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; then mathematical programming is used to improve all of those functions which have values below minimum standards. To do this, several definitions regarding the measurement of system performance in the frequency domain are given, e.g., relative stability, relative attenuation, proper phasing, etc. Next, theorems which govern the number of compensator coefficients necessary to make improvements in a certain number of functions are proved. After this, a mathematical programming tool for aiding in the solution of the problem is developed. This tool is called the constraint improvement algorithm. Then, for applying the constraint improvement algorithm, generalized gradients for the constraints are derived. Finally, the necessary theory is incorporated in a computer program called CIP (Compensator Improvement Program). The practical usefulness of CIP is demonstrated by two large system examples.

  1. Advanced Transport Delay Compensation Algorithms: Results of Delay Measurement and Piloted Performance Tests

    NASA Technical Reports Server (NTRS)

    Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.

    2007-01-01

    This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system of the Visual Motion Simulator (VMS) at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is moderately superior for short delays and significantly superior for long delays when compared to the McFarland compensator.

  2. Distortion correction and cross-talk compensation algorithm for use with an imaging spectrometer based spatially resolved diffuse reflectance system

    NASA Astrophysics Data System (ADS)

    Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.

    2016-12-01

    Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.

  3. The Algorithm Theoretical Basis Document for the GLAS Atmospheric Data Products

    NASA Technical Reports Server (NTRS)

    Palm, Stephen P.; Hart, William D.; Hlavka, Dennis L.; Welton, Ellsworth J.; Spinhirne, James D.

    2012-01-01

    The purpose of this document is to present a detailed description of the algorithm theoretical basis for each of the GLAS data products. This will be the final version of this document. The algorithms were initially designed and written based on the authors' prior experience with high-altitude lidar data from systems such as the Cloud and Aerosol Lidar System (CALS) and the Cloud Physics Lidar (CPL), both of which fly on the NASA ER-2 high altitude aircraft. These lidar systems have been employed in many field experiments around the world and algorithms have been developed to analyze these data for a number of atmospheric parameters. CALS data have been analyzed for cloud top height, thin cloud optical depth, cirrus cloud emittance (Spinhirne and Hart, 1990) and boundary layer depth (Palm and Spinhirne, 1987, 1998). The successor to CALS, the CPL, has also been extensively deployed in field missions since 2000, including the validation of GLAS and CALIPSO. The CALS and early CPL data sets also served as the basis for the construction of simulated GLAS data sets which were then used to develop and test the GLAS analysis algorithms.

  4. A Wave Diagnostics in Geophysics: Algorithmic Extraction of Atmosphere Disturbance Modes

    NASA Astrophysics Data System (ADS)

    Leble, S.; Vereshchagin, S.

    2018-04-01

    The problem of diagnostics in geophysics is discussed and a proposal based on the dynamic projection operator technique is formulated. The general exposition is demonstrated by the example of a symbolic algorithm for the wave and entropy modes in an exponentially stratified atmosphere. The novel technique is developed as a discrete version of the evolution operator and the corresponding projectors via the discrete Fourier transformation. Its explicit realization for directed modes in an exponential one-dimensional atmosphere is presented via the corresponding projection operators in their discrete version, in terms of matrices with a prescribed action on arrays formed from observation tables. A simulation based on an oppositely directed (upward and downward) wave-train solution is performed and the extraction of the modes from a mixture is illustrated.

  5. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, initially the image contrast is examined for a series of parameter combinations. The contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably in time. A few examples using this proposed procedure are presented.

  6. Algorithms to automate gap-dependent integral tuning for the 2.8-meter long horizontal field undulator with a dynamic force compensation mechanism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Joseph Z., E-mail: x@anl.gov; Vasserman, Isaac; Strelnikov, Nikita

    2016-07-27

    A 2.8-meter long horizontal field prototype undulator with a dynamic force compensation mechanism has been developed and tested at the Advanced Photon Source (APS) at Argonne National Laboratory (Argonne). The magnetic tuning of the undulator integrals has been automated and accomplished by applying magnetic shims. A detailed description of the algorithms and performance is reported.

  7. Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems

    NASA Technical Reports Server (NTRS)

    Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.

    2012-01-01

    Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.

  8. Algorithmic vs. finite difference Jacobians for infrared atmospheric radiative transfer

    NASA Astrophysics Data System (ADS)

    Schreier, Franz; Gimeno García, Sebastián; Vasquez, Mayte; Xu, Jian

    2015-10-01

    Jacobians, i.e. partial derivatives of the radiance and transmission spectrum with respect to the atmospheric state parameters to be retrieved from remote sensing observations, are important for the iterative solution of the nonlinear inverse problem. Finite difference Jacobians are easy to implement, but computationally expensive and possibly of dubious quality; on the other hand, analytical Jacobians are accurate and efficient, but the implementation can be quite demanding. GARLIC, our "Generic Atmospheric Radiation Line-by-line Infrared Code", utilizes algorithmic differentiation (AD) techniques to implement derivatives w.r.t. atmospheric temperature and molecular concentrations. In this paper, we describe our approach for differentiation of the high resolution infrared and microwave spectra and provide an in-depth assessment of finite difference approximations using "exact" AD Jacobians as a reference. The results indicate that the "standard" two-point finite differences with 1 K and 1% perturbation for temperature and volume mixing ratio, respectively, can exhibit substantial errors, and central differences are significantly better. However, these deviations do not transfer into the truncated singular value decomposition solution of a least squares problem. Nevertheless, AD Jacobians are clearly recommended because of the superior speed and accuracy.
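
    As a small numerical illustration of the abstract's point, the sketch below compares one-sided (forward) and central finite-difference Jacobians against the exact derivative for a toy nonlinear model; the model is hypothetical and is not GARLIC or its radiative transfer.

    ```python
    import numpy as np

    def forward_model(x):
        """Stand-in nonlinear model mapping a state vector to 'radiances'."""
        return np.array([np.exp(-x[0]) * x[1], x[0] ** 2 + np.sin(x[1])])

    def exact_jacobian(x):
        return np.array([[-np.exp(-x[0]) * x[1], np.exp(-x[0])],
                         [2 * x[0], np.cos(x[1])]])

    def fd_jacobian(f, x, h, central=False):
        n, m = len(f(x)), len(x)
        J = np.zeros((n, m))
        for j in range(m):
            e = np.zeros(m)
            e[j] = h
            if central:
                J[:, j] = (f(x + e) - f(x - e)) / (2 * h)   # O(h^2) accurate
            else:
                J[:, j] = (f(x + e) - f(x)) / h             # O(h) accurate
        return J

    x0 = np.array([1.0, 0.5])
    J_exact = exact_jacobian(x0)
    for h in (1e-1, 1e-3):
        fwd = np.abs(fd_jacobian(forward_model, x0, h) - J_exact).max()
        cen = np.abs(fd_jacobian(forward_model, x0, h, central=True) - J_exact).max()
        print(f"h={h:g}  forward-diff error {fwd:.2e}  central-diff error {cen:.2e}")
    ```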

  9. Characterization of Methane Emission Sources Using Genetic Algorithms and Atmospheric Transport Modeling

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Cervone, G.; Barkley, Z.; Lauvaux, T.; Deng, A.; Miles, N.; Richardson, S.

    2016-12-01

    Fugitive methane emission rates for the Marcellus shale area are estimated using a genetic algorithm that finds optimal weights to minimize the error between simulated and observed concentrations. The overall goal is to understand the relative contribution of methane from shale gas extraction. Methane sensors installed on four towers located in northeastern Pennsylvania have measured atmospheric concentrations since May 2015. Inverse Lagrangian dispersion model runs are performed from each of these tower locations for each hour of 2015. Simulated methane concentrations at each of the four towers are computed by multiplying the resulting footprints from the atmospheric simulations by thousands of emission sources grouped into 11 classes. The emission sources were identified using GIS techniques, and include conventional and unconventional wells, different types of compressor stations, pipelines, landfills, farming and wetlands. Initial estimates for each source are calculated based on emission factors from the EPA and a few regional studies. A genetic algorithm is then used to identify optimal emission rates for the 11 classes of methane emissions and to explore extreme events and spatial and temporal structures in the emissions associated with natural gas activities.
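
    As a very small genetic-algorithm sketch in the spirit of the abstract (not the authors' inversion), the snippet below searches for per-class emission scaling weights that minimize the mismatch between simulated concentrations (footprints times emissions) and observations; the footprints, class count, and GA settings are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_classes, n_obs = 11, 200
    footprints = rng.uniform(0, 1, size=(n_obs, n_classes))   # transport-model sensitivities
    w_true = rng.uniform(0.5, 2.0, size=n_classes)            # "true" emission scale factors
    obs = footprints @ w_true + 0.05 * rng.normal(size=n_obs)

    def fitness(w):
        return -np.mean((footprints @ w - obs) ** 2)          # higher is better

    pop = rng.uniform(0.1, 3.0, size=(60, n_classes))
    for generation in range(200):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-20:]]               # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(20)], parents[rng.integers(20)]
            mask = rng.random(n_classes) < 0.5                # uniform crossover
            child = np.where(mask, a, b) + 0.05 * rng.normal(size=n_classes)  # mutation
            children.append(np.clip(child, 0.0, None))
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(w) for w in pop])]
    print("recovered:", np.round(best, 2))
    print("true:     ", np.round(w_true, 2))
    ```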

  10. Assessment of Polarization Effect on Efficiency of Levenberg-Marquardt Algorithm in Case of Thin Atmosphere Over Black Surface

    NASA Technical Reports Server (NTRS)

    Korkin, S.; Lyapustin, A.

    2012-01-01

    The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane parallel atmosphere irradiated by a collimated, infinitely wide, monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields an analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines, the ratio of coarse and fine fractions, the atmospheric optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request.
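
    As a minimal Levenberg-Marquardt sketch on a toy two-parameter problem (a generic textbook damping rule and model, not the authors' five-parameter single-scattering atmosphere):

    ```python
    import numpy as np

    def lm_fit(residual, jacobian, x0, n_iter=50, lam=1e-2):
        """Damped Gauss-Newton iteration with a simple accept/reject damping update."""
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            r, J = residual(x), jacobian(x)
            step = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
            if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
                x, lam = x + step, lam * 0.5     # accept step, relax damping
            else:
                lam *= 2.0                       # reject step, increase damping
        return x

    # Toy problem: fit y = a * exp(-b * t) to noisy samples.
    t = np.linspace(0, 5, 40)
    rng = np.random.default_rng(6)
    y = 2.0 * np.exp(-0.7 * t) + 0.02 * rng.normal(size=t.size)

    residual = lambda p: p[0] * np.exp(-p[1] * t) - y
    jacobian = lambda p: np.column_stack([np.exp(-p[1] * t),
                                          -p[0] * t * np.exp(-p[1] * t)])
    print(np.round(lm_fit(residual, jacobian, x0=[1.0, 1.0]), 3))
    ```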

  11. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~ 2700 nodes and ~ 3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  12. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  13. A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System

    PubMed Central

    Zhou, Guanwu; Zhao, Yulong; Guo, Fangfang; Xu, Wenju

    2014-01-01

    Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed in LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a microcontroller unit (MCU) after calibration on a computer. Practical pressure measurement experiments are carried out to verify the system's performance. Temperature compensation is achieved over the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM achieves higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C respectively, after compensation. The experimental results demonstrate that the proposed system meets the temperature compensation and high-accuracy requirements of the sensor. PMID:25006998
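
    As a hedged sketch of an extreme learning machine used as a calibration map from (raw sensor output, temperature) to corrected pressure, mirroring the abstract's approach: the hidden-layer size, activation, input normalization, and the synthetic drift model below are all assumptions, not the paper's calibration data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def elm_train(X, y, n_hidden=40):
        W = rng.normal(size=(X.shape[1], n_hidden))       # random input weights (kept fixed)
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W + b)                            # hidden-layer outputs
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # analytic output weights
        return W, b, beta

    def elm_predict(model, X):
        W, b, beta = model
        return np.tanh(X @ W + b) @ beta

    # Synthetic calibration data: true pressure distorted by a temperature drift.
    p_true = rng.uniform(0, 10, size=500)                 # MPa
    temp = rng.uniform(-40, 85, size=500)                 # deg C
    raw = p_true * (1 + 0.002 * (temp - 25)) + 0.01 * temp
    X = np.column_stack([raw / 10.0, temp / 100.0])       # roughly unit-scaled inputs

    model = elm_train(X[:400], p_true[:400])
    err = elm_predict(model, X[400:]) - p_true[400:]
    print(f"max compensation error on held-out samples: {np.abs(err).max():.3f} MPa")
    ```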

  14. An End-to-End simulator for the development of atmospheric corrections and temperature - emissivity separation algorithms in the TIR spectral domain

    NASA Astrophysics Data System (ADS)

    Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas

    2017-04-01

    The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground truth data makes it necessary to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES) or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.

  15. Direct variational data assimilation algorithm for atmospheric chemistry data with transport and transformation model

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander

    2015-11-01

    Atmospheric chemistry dynamics is studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out "fine-grained" variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e., the same measurement data are assimilated into different parts of the split model. This design permits an efficient implementation owing to direct data assimilation algorithms for the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm, using in situ concentration measurements in a real-data scenario, are presented. In order to construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output and measurements from the Airbase database.

  16. High-resolution studies of the structure of the solar atmosphere using a new imaging algorithm

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita; Habbal, Shadia Rifai

    1991-01-01

    The results of the application of a new image restoration algorithm developed by Ayers and Dainty (1988) to the multiwavelength EUV/Skylab observations of the solar atmosphere are presented. The application of the algorithm makes it possible to reach a resolution better than 5 arcsec, and thus study the structure of the quiet sun on that spatial scale. The results show evidence for discrete looplike structures in the network boundary, 5-10 arcsec in size, at temperatures of 100,000 K.

  17. Extremum Seeking Control of Smart Inverters for VAR Compensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnold, Daniel; Negrete-Pincetic, Matias; Stewart, Emma

    2015-09-04

    Reactive power compensation is used by utilities to ensure customer voltages are within pre-defined tolerances and reduce system resistive losses. While much attention has been paid to model-based control algorithms for reactive power support and Volt Var Optimization (VVO), these strategies typically require relatively large communications capabilities and accurate models. In this work, a non-model-based control strategy for smart inverters is considered for VAR compensation. An Extremum Seeking control algorithm is applied to modulate the reactive power output of inverters based on real power information from the feeder substation, without an explicit feeder model. Simulation results using utility demand information confirm the ability of the control algorithm to inject VARs to minimize feeder head real power consumption. In addition, we show that the algorithm is capable of improving feeder voltage profiles and reducing reactive power supplied by the distribution substation.
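
    As a toy perturbation-based extremum-seeking loop in the spirit of the abstract (not the authors' controller): an inverter dithers its reactive-power setpoint and adapts it to minimize a measured feeder-head real power signal; the quadratic feeder model, gains, and dither settings are all hypothetical.

    ```python
    import numpy as np

    def feeder_head_power(q_var):
        """Hypothetical measured objective: losses minimized at q = 0.3 p.u."""
        return 1.0 + 4.0 * (q_var - 0.3) ** 2

    dt, a, omega, k = 0.01, 0.02, 5.0, 0.4   # step, dither amplitude/frequency, adaptation gain
    q_hat = 0.0                              # initial reactive-power setpoint estimate
    for step in range(40_000):
        t = step * dt
        dither = a * np.sin(omega * t)
        power = feeder_head_power(q_hat + dither)       # plant measurement
        grad_estimate = power * np.sin(omega * t)       # demodulate to estimate the gradient
        q_hat -= k * grad_estimate * dt                 # descend on the gradient estimate

    print(f"converged setpoint: {q_hat:.3f} p.u. (optimum 0.300)")
    ```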

  18. Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code

    NASA Astrophysics Data System (ADS)

    Phillips, William; Russwurm, George M.

    1999-02-01

    This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open path FTIR data. Among the problems that currently affect FTIR open path data quality are: the inability to obtain a true I0 (background) spectrum, spectral interference from atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open path data reduction problems. However, recent studies by one of the authors of the temperature and pressure effects on atmospheric absorption indicate there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial pressure correction has been incorporated into the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.

  19. Adaptive Beam Loading Compensation in Room Temperature Bunching Cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, J. P.; Chase, B. E.; Cullerton, E.

    In this paper we present the design, simulation, and proof-of-principle results of an optimization-based adaptive feedforward algorithm for beam-loading compensation in a high-impedance room-temperature cavity. We begin with an overview of prior developments in beam loading compensation. Then we discuss different techniques for adaptive beam loading compensation and why the use of Newton's method is of interest for this application. This is followed by simulation and initial experimental results of this method.

  20. An Adaptive Numeric Predictor-corrector Guidance Algorithm for Atmospheric Entry Vehicles. M.S. Thesis - MIT, Cambridge

    NASA Technical Reports Server (NTRS)

    Spratlin, Kenneth Milton

    1987-01-01

    An adaptive numeric predictor-corrector guidance is developed for atmospheric entry vehicles which utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented. The achievable operational footprint for expected worst-case dispersions is presented. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.

  1. Top-of-atmosphere radiative fluxes - Validation of ERBE scanner inversion algorithm using Nimbus-7 ERB data

    NASA Technical Reports Server (NTRS)

    Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri

    1992-01-01

    The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results is estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.

  2. Precision laser surveying instrument using atmospheric turbulence compensation by determining the absolute displacement between two laser beam components

    DOEpatents

    Veligdan, James T.

    1993-01-01

    Atmospheric effects on sighting measurements are compensated for by adjusting any sighting measurements using a correction factor that does not depend on atmospheric state conditions such as temperature, pressure, density or turbulence. The correction factor is accurately determined using a precisely measured physical separation between two color components of a light beam (or beams) that has been generated using either a two-color laser or two lasers that project different colored beams. The physical separation is precisely measured by fixing the position of a short beam pulse and measuring the physical separation between the two fixed-in-position components of the beam. This precisely measured physical separation is then used in a relationship that includes the indexes of refraction for each of the two colors of the laser beam in the atmosphere through which the beam is projected, thereby to determine the absolute displacement of one wavelength component of the laser beam from a straight line of sight for that projected component of the beam. This absolute displacement is useful to correct optical measurements, such as those developed in surveying measurements that are made in a test area that includes the same dispersion effects of the atmosphere on the optical measurements. The means and method of the invention are suitable for use with either single-ended or double-ended systems.

  3. Real-time embedded atmospheric compensation for long-range imaging using the average bispectrum speckle method

    NASA Astrophysics Data System (ADS)

    Curt, Petersen F.; Bodnar, Michael R.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-02-01

    While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.

  4. Optimal line drop compensation parameters under multi-operating conditions

    NASA Astrophysics Data System (ADS)

    Wan, Yuan; Li, Hang; Wang, Kai; He, Zhe

    2017-01-01

    Line Drop Compensation (LDC) is a main function of Reactive Current Compensation (RCC), which is developed to improve voltage stability. While LDC benefits voltage, it may degrade the small-disturbance rotor angle stability of the power system. In this paper, an intelligent algorithm combining a Genetic Algorithm (GA) with a Backpropagation Neural Network (BPNN) is proposed to optimize the LDC parameters. The proposed objective function accounts for the voltage deviation and the minimum damping ratio of power system oscillations under multiple operating conditions. A simulation based on the central area of the Jiangxi province power system is used to demonstrate the intelligent algorithm. The optimization results show that the coordinately optimized parameters meet the multi-operating-condition requirements and improve voltage stability as much as possible while guaranteeing an adequate damping ratio.

  5. An Improved Method of Heterogeneity Compensation for the Convolution / Superposition Algorithm

    NASA Astrophysics Data System (ADS)

    Jacques, Robert; McNutt, Todd

    2014-03-01

    Purpose: To improve the accuracy of convolution/superposition (C/S) in heterogeneous material by developing a new algorithm: heterogeneity compensated superposition (HCS). Methods: C/S has proven to be a good estimator of the dose deposited in a homogeneous volume. However, near heterogeneities electron disequilibrium occurs, leading to faster fall-off and re-buildup of dose. We propose to filter the actual patient density in a position- and direction-sensitive manner, allowing the dose deposited near interfaces to be increased or decreased relative to C/S. We implemented the effective density function as a multivariate first-order recursive filter and incorporated it into a GPU-accelerated, multi-energetic C/S implementation. We compared HCS against C/S using the ICCR 2000 Monte-Carlo accuracy benchmark, 23 similar accuracy benchmarks and 5 patient cases. Results: Multi-energetic HCS increased the dosimetric accuracy for the vast majority of voxels; in many cases near-Monte-Carlo accuracy was achieved. We defined the per-voxel error, %|mm, as the minimum of the distance to agreement in mm and the dosimetric percentage error relative to the maximum MC dose. HCS improved the average mean error by 0.79 %|mm for the patient volumes, reducing the average mean error from 1.93 %|mm to 1.14 %|mm. Very low densities (i.e., < 0.1 g/cm3) remained problematic, but may be solvable with a better filter function. Conclusions: HCS improved upon C/S's density-scaled heterogeneity correction with a position- and direction-sensitive density filter. This method significantly improved the accuracy of the GPU-based algorithm, approaching the accuracy of Monte Carlo based methods with runtimes of a few tenths of a second per beam. Acknowledgement: Funding for this research was provided by the NSF Cooperative Agreement EEC9731748, Elekta / IMPAC Medical Systems, Inc. and the Johns Hopkins University. James Satterthwaite provided the Monte Carlo benchmark simulations.
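    The core of HCS, as described above, is a first-order recursive filter applied to the patient density along the beam direction. The following is a minimal one-dimensional sketch of such a filter; the smoothing constant, toy density profile, and update rule are illustrative assumptions, not the published HCS parameters or implementation.

```python
import numpy as np

def recursive_density_filter(density, alpha):
    """First-order recursive (exponential) filter applied along the beam
    direction, so each voxel's effective density lags the true density.
    `alpha` (0..1) is a hypothetical smoothing constant, not the HCS value."""
    eff = np.empty_like(density)
    eff[0] = density[0]
    for i in range(1, len(density)):
        eff[i] = alpha * density[i] + (1.0 - alpha) * eff[i - 1]
    return eff

# Toy profile along one ray: water -> lung -> water interfaces (g/cm^3)
rho = np.array([1.0] * 20 + [0.26] * 30 + [1.0] * 20)
rho_eff = recursive_density_filter(rho, alpha=0.3)
print(rho_eff[18:25])  # effective density relaxes gradually across the interface
```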

  6. Topography-Dependent Motion Compensation: Application to UAVSAR Data

    NASA Technical Reports Server (NTRS)

    Jones, Cathleen E.; Hensley, Scott; Michel, Thierry

    2009-01-01

    The UAVSAR L-band synthetic aperture radar system has been designed for repeat track interferometry in support of Earth science applications that require high-precision measurements of small surface deformations over timescales from hours to years. Conventional motion compensation algorithms, which are based upon assumptions of a narrow beam and flat terrain, yield unacceptably large errors in areas with even moderate topographic relief, i.e., in most areas of interest. This often limits the ability to achieve sub-centimeter surface change detection over significant portions of an acquired scene. To reduce this source of error in the interferometric phase, we have implemented an advanced motion compensation algorithm that corrects for the scene topography and radar beam width. Here we discuss the algorithm used, its implementation in the UAVSAR data processor, and the improvement in interferometric phase and correlation achieved in areas with significant topographic relief.

  7. Assessment of Polarization Effect on Efficiency of Levenberg-Marquardt Algorithm in Case of Thin Atmosphere over Black Surface

    NASA Astrophysics Data System (ADS)

    Korkin, S.; Lyapustin, A.

    2012-12-01

    The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimization of a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane parallel atmosphere irradiated by a collimated, infinitely wide monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it allows analytical evaluation of the Jacobian matrix. Assuming the MISR observation geometry [5] as an example, the average scattering cosines, the ratio of coarse and fine fractions, the atmospheric optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal on Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7 - 29. [4]. Mishchenko MI, Travis LD
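    A minimal sketch of a damped Levenberg-Marquardt iteration of the kind described above is given below; the forward model (a two-parameter exponential) and the damping-update rule are textbook placeholders, not the single-scattering radiative transfer model or the Cramer's-rule solver used by the authors.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, n_iter=50):
    """Generic Levenberg-Marquardt iteration; the damping update is a common
    textbook variant, not necessarily the authors' implementation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)        # damped normal equations
        dx = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.7            # accept step, reduce damping
        else:
            lam *= 2.0                            # reject step, increase damping
    return x

# Toy problem: fit y = a * exp(-b * t) to noisy data
t = np.linspace(0, 5, 40)
y = 2.0 * np.exp(-0.8 * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
res = lambda p: p[0] * np.exp(-p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])
print(levenberg_marquardt(res, jac, x0=[1.0, 1.0]))  # expected to approach [2.0, 0.8]
```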

  8. A Deep Machine Learning Algorithm to Optimize the Forecast of Atmospherics

    NASA Astrophysics Data System (ADS)

    Russell, A. M.; Alliss, R. J.; Felton, B. D.

    Space-based applications from imaging to optical communications are significantly impacted by the atmosphere. Specifically, the occurrence of clouds and optical turbulence can determine whether a mission is a success or a failure. In the case of space-based imaging applications, clouds produce atmospheric transmission losses that can make it impossible for an electro-optical platform to image its target. Hence, accurate predictions of negative atmospheric effects are a high priority in order to facilitate the efficient scheduling of resources. This study seeks to revolutionize our understanding of and our ability to predict such atmospheric events through the mining of data from a high-resolution Numerical Weather Prediction (NWP) model. Specifically, output from the Weather Research and Forecasting (WRF) model is mined using a Random Forest (RF) ensemble classification and regression approach in order to improve the prediction of low cloud cover over the Haleakala summit of the Hawaiian island of Maui. RF techniques have a number of advantages including the ability to capture non-linear associations between the predictors (in this case physical variables from WRF such as temperature, relative humidity, wind speed and pressure) and the predictand (clouds), which becomes critical when dealing with the complex non-linear occurrence of clouds. In addition, RF techniques are capable of representing complex spatial-temporal dynamics to some extent. Input predictors to the WRF-based RF model are strategically selected based on expert knowledge and a series of sensitivity tests. Ultimately, three types of WRF predictors are chosen: local surface predictors, regional 3D moisture predictors and regional inversion predictors. A suite of RF experiments is performed using these predictors in order to evaluate the performance of the hybrid RF-WRF technique. The RF model is trained and tuned on approximately half of the input dataset and evaluated on the other half. The RF
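    As a hedged illustration of the classification approach described above, the sketch below trains a Random Forest on synthetic stand-ins for WRF predictors and evaluates it on a held-out half of the data; the feature set, labels, and hyperparameters are placeholders, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for WRF output: four generic predictors (e.g. temperature,
# relative humidity, wind speed, pressure) and a binary low-cloud label.
# The data are random placeholders, not Haleakala observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 0.8).astype(int)

# Train on roughly half of the dataset and evaluate on the other half
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
print(classification_report(y_test, rf.predict(X_test)))
```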

  9. Nonlinear Blind Compensation for Array Signal Processing Application

    PubMed Central

    Ma, Hong; Jin, Jiang; Zhang, Hua

    2018-01-01

    Recently, nonlinear blind compensation technique has attracted growing attention in array signal processing application. However, due to the nonlinear distortion stemming from array receiver which consists of multi-channel radio frequency (RF) front-ends, it is too difficult to estimate the parameters of array signal accurately. A novel nonlinear blind compensation algorithm aims at the nonlinearity mitigation of array receiver and its spurious-free dynamic range (SFDR) improvement, which will be more precise to estimate the parameters of target signals such as their two-dimensional directions of arrival (2-D DOAs). Herein, the suggested method is designed as follows: the nonlinear model parameters of any channel of RF front-end are extracted to synchronously compensate the nonlinear distortion of the entire receiver. Furthermore, a verification experiment on the array signal from a uniform circular array (UCA) is adopted to testify the validity of our approach. The real-world experimental results show that the SFDR of the receiver is enhanced, leading to a significant improvement of the 2-D DOAs estimation performance for weak target signals. And these results demonstrate that our nonlinear blind compensation algorithm is effective to estimate the parameters of weak array signal in concomitance with strong jammers. PMID:29690571

  10. Nearly arc-length tool path generation and tool radius compensation algorithm research in FTS turning

    NASA Astrophysics Data System (ADS)

    Zhao, Minghui; Zhao, Xuesen; Li, Zengqiang; Sun, Tao

    2014-08-01

    In the generation of non-rotationally symmetric microstructure surfaces by turning with a Fast Tool Servo (FTS), a non-uniform distribution of the interpolation data points leads to long processing cycles and poor surface quality. To improve this situation, a nearly arc-length tool path generation algorithm is proposed, which generates tool tip trajectory points at nearly equal arc lengths instead of the traditional equal-angle interpolation rule and adds tool radius compensation. All the interpolation points are equidistant in the radial direction because of the constant feed speed of the X slider, while the high-frequency tool radius compensation components appear in both the X and Z directions, which makes the X slider difficult to drive to follow the commanded motion due to its large mass. A Newton iterative method is used to calculate the coordinates of the neighboring contour tangent point, with the interpolation point's X position as the initial value; in this way the new Z coordinate is obtained and the high-frequency motion component in the X direction is decomposed into the Z direction. Taking as a test case a typical microstructure with a 4 μm PV value, composed of two 70 μm wavelength sine waves, the maximum profile error at an angle of fifteen degrees is less than 0.01 μm when turning with a diamond tool with a large radius of 80 μm. The sinusoidal grid was machined successfully on an ultra-precision lathe; the wavelength is 70.2278 μm and the Ra value is 22.81 nm, evaluated from data points obtained by filtering out the first five harmonics.
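    The tangent-point computation described above can be illustrated with a generic Newton iteration on the tangency (surface normal through the tool centre) condition; the sinusoidal surface, tool-centre placement, and parameter values below are assumptions for illustration only, not the paper's formulation.

```python
import numpy as np

# Sinusoidal microstructure along X (dimensions in micrometers); amplitude,
# wavelength and tool radius are illustrative, not the paper's exact values.
A, LAM, R = 2.0, 70.0, 80.0
z   = lambda x: A * np.sin(2 * np.pi * x / LAM)
dz  = lambda x: A * (2 * np.pi / LAM) * np.cos(2 * np.pi * x / LAM)
d2z = lambda x: -A * (2 * np.pi / LAM) ** 2 * np.sin(2 * np.pi * x / LAM)

def contact_point(xc, zc, x0, tol=1e-9, n_max=50):
    """Newton iteration for the surface point whose normal passes through the
    tool centre (xc, zc); x0 (the interpolated X position) is the initial guess."""
    x = x0
    for _ in range(n_max):
        f  = (x - xc) + (z(x) - zc) * dz(x)              # tangency condition f(x) = 0
        fp = 1.0 + dz(x) ** 2 + (z(x) - zc) * d2z(x)     # its derivative
        step = f / fp
        x -= step
        if abs(step) < tol:
            break
    return x, z(x)

# Tool centre placed R above the nominal interpolation point x0 = 10 um
x0 = 10.0
xc, zc = x0, z(x0) + R
print(contact_point(xc, zc, x0))   # contact point shifts away from x0 on the sloped surface
```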

  11. Density implications of shift compensation postprocessing in holographic storage systems

    NASA Astrophysics Data System (ADS)

    Menetrier, Laure; Burr, Geoffrey W.

    2003-02-01

    We investigate the effect of data page misregistration, and its subsequent correction in postprocessing, on the storage density of holographic data storage systems. A numerical simulation is used to obtain the bit-error rate as a function of hologram aperture, page misregistration, pixel fill factors, and Gaussian additive intensity noise. Postprocessing of simulated data pages is performed by a nonlinear pixel shift compensation algorithm [Opt. Lett. 26, 542 (2001)]. The performance of this algorithm is analyzed in the presence of noise by determining the achievable areal density. The impact of inaccurate measurements of page misregistration is also investigated. Results show that the shift-compensation algorithm can provide almost complete immunity to page misregistration, although at some penalty to the baseline areal density offered by a system with zero tolerance to misalignment.

  12. Branch Point Mitigation of Thermal Blooming Phase Compensation Instability

    DTIC Science & Technology

    2011-03-01

    Only fragments of this report are available (table-of-contents and abstract excerpts). They reference turbulence and high energy laser (HEL) beam phase compensation using adaptive optics, note that scintillation affects the HEL beam irradiance and that atmospheric advection causes turbulent eddies to travel across the HEL beam, and describe modeling with multiple atmospheric effects including extinction, thermal blooming, and optical turbulence, with the BPM providing both speed and accuracy.

  13. Algorithms and physical parameters involved in the calculation of model stellar atmospheres

    NASA Astrophysics Data System (ADS)

    Merlo, D. C.

    This contribution summarizes the Doctoral Thesis presented at Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba for the degree of PhD in Astronomy. We analyze some algorithms and physical parameters involved in the calculation of model stellar atmospheres, such as atomic partition functions, functional relations connecting gaseous and electronic pressure, molecular formation, temperature distribution, chemical compositions, Gaunt factors, atomic cross-sections and scattering sources, as well as computational codes for calculating models. Special attention is paid to the integration of the hydrostatic equation. We compare our results with those obtained by other authors, finding reasonable agreement. We devote particular effort to implementing methods that modify the originally adopted temperature distribution in the atmosphere in order to obtain constant energy flux throughout. We identify limitations and correct numerical instabilities. We integrate the transfer equation by solving directly the integral equation for the source function. As a by-product, we calculate updated atomic partition functions of the light elements. Also, we discuss and enumerate carefully selected formulae for the monochromatic absorption and dispersion of some atomic and molecular species. Finally, we obtain a flexible code to calculate model stellar atmospheres.

  14. Using ultrasound CBE imaging without echo shift compensation for temperature estimation.

    PubMed

    Tsui, Po-Hsiang; Chien, Yu-Ting; Liu, Hao-Li; Shu, Yu-Chen; Chen, Wen-Shiang

    2012-09-01

    Clinical trials have demonstrated that hyperthermia improves cancer treatments. Previous studies developed ultrasound temperature imaging methods, based on the changes in backscattered energy (CBE), to monitor temperature variations during hyperthermia. Echo shift, induced by increasing temperature, contaminates the CBE image, and its tracking and compensation should normally ensure that estimations of CBE at each pixel are correct. To obtain a simplified algorithm that would allow real-time computation of CBE images, this study evaluated the usefulness of CBE imaging without echo shift compensation in detecting temperature distributions. Experiments on phantoms, using different scatterer concentrations, and porcine livers were conducted to acquire raw backscattered data at temperatures ranging from 37°C to 45°C. Tissue samples of pork tenderloin were ablated in vitro by microwave irradiation to evaluate the feasibility of using the CBE image without compensation to monitor tissue ablation. CBE image construction was based on a ratio map obtained from the envelope image divided by the reference envelope image at 37°C. The experimental results demonstrated that the CBE image obtained without echo shift compensation has the ability to estimate temperature variations induced during uniform heating or tissue ablation. The magnitude of the CBE as a function of temperature obtained without compensation is stronger than that with compensation, implying that the CBE image without compensation is more sensitive for detecting temperature changes. These findings suggest that echo shift tracking and compensation may be unnecessary in practice, thus simplifying the algorithm required to implement real-time CBE imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
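    A minimal sketch of the ratio-map construction described above (current envelope image divided by the 37°C reference, with no echo-shift tracking or compensation) follows; the Rayleigh-distributed toy frames and the dB conversion are illustrative choices, not the study's data or exact definition.

```python
import numpy as np

def cbe_ratio_map(envelope, envelope_ref, eps=1e-12):
    """Per-pixel ratio of the current envelope image to the 37 degC reference
    envelope; no echo-shift tracking or compensation is applied. Squaring the
    envelopes would give a backscattered-energy ratio instead."""
    return (envelope + eps) / (envelope_ref + eps)

rng = np.random.default_rng(1)
env_ref = rng.rayleigh(scale=1.0, size=(128, 128))   # toy reference frame at 37 degC
env_hot = rng.rayleigh(scale=1.1, size=(128, 128))   # toy frame after heating
cbe_db = 20.0 * np.log10(cbe_ratio_map(env_hot, env_ref))
print(f"mean CBE = {cbe_db.mean():.2f} dB")          # rises as backscatter increases
```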

  15. 50 CFR 600.245 - Council member compensation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Council member compensation. 600.245 Section 600.245 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MAGNUSON-STEVENS ACT PROVISIONS Council Membership § 600...

  16. An innovative approach to compensator design

    NASA Technical Reports Server (NTRS)

    Mitchell, J. R.

    1972-01-01

    The primary goal is to present a computer-aided compensator design technique for control systems from a frequency-domain point of view. The approach is to describe the open-loop frequency response by n discrete frequency points, which yield n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; then mathematical programming is used to improve all of these functions which have values below minimum standards. To do this, several definitions regarding measurement of system performance in the frequency domain are given. Next, theorems which govern the number of compensator coefficients necessary to make improvements in a certain number of functions are proved. After this, a mathematical programming tool for aiding in the solution of the problem is developed. Then, to apply the constraint improvement algorithm, generalized gradients for the constraints are derived. Finally, the necessary theory is incorporated in a computer program called CIP (compensator improvement program).

  17. Transport delay compensation for computer-generated imagery systems

    NASA Technical Reports Server (NTRS)

    Mcfarland, Richard E.

    1988-01-01

    In the problem of pure transport delay in a low-pass system, a trade-off exists with respect to performance within and beyond a frequency bandwidth. When activity beyond the band is attenuated because of other considerations, this trade-off may be used to improve the performance within the band. Specifically, transport delay in computer-generated imagery systems is reduced to a manageable problem by recognizing frequency limits in vehicle activity and manual-control capacity. Based on these limits, a compensation algorithm has been developed for use in aircraft simulation at NASA Ames Research Center. For direct measurement of transport delays, a beam-splitter experiment is presented that accounts for the complete flight simulation environment. Values determined by this experiment are appropriate for use in the compensation algorithm. The algorithm extends the bandwidth of high-frequency flight simulation to well beyond that of normal pilot inputs. Within this bandwidth, the visual scene presentation manifests negligible gain distortion and phase lag. After a year of utilization, two minor exceptions to universal simulation applicability have been identified and subsequently resolved.

  18. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning

    PubMed Central

    Deng, Zhongliang

    2018-01-01

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization. PMID:29361718

  19. An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning.

    PubMed

    Deng, Zhongliang; Fu, Xiao; Wang, Hanhua

    2018-01-20

    Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization.

  20. Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design

    NASA Technical Reports Server (NTRS)

    Whorton, Mark; Buschek, Harald; Calise, Anthony J.

    1996-01-01

    Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.

  1. Aeromagnetic Compensation for UAVs

    NASA Astrophysics Data System (ADS)

    Naprstek, T.; Lee, M. D.

    2017-12-01

    Aeromagnetic data is one of the most widely collected types of data in exploration geophysics. With the continuing prevalence of unmanned air vehicles (UAVs) in everyday life there is a strong push for aeromagnetic data collection using UAVs. However, apart from the many political and legal barriers to overcome in the development of UAVs as aeromagnetic data collection platforms, there are also significant scientific hurdles, chief among which is magnetic compensation. This is a well-established process in manned aircraft, achieved through a combination of platform magnetic de-noising and compensation routines. However, not all of this protocol can be directly applied to UAVs due to fundamental differences in the platforms, most notably the decrease in scale, which places magnetometers significantly closer to the avionics. As such, the methodology must be suitably adjusted. The National Research Council of Canada has collaborated with Aeromagnetic Solutions Incorporated to develop a standardized approach to de-noising and compensating UAVs, which is accomplished through a series of static and dynamic experiments. On the ground, small static tests are conducted on individual components to determine their magnetization. If they are highly magnetic, they are removed, demagnetized, or characterized such that they can be accounted for in the compensation. Dynamic tests can include measuring specific components as they are powered on and off to assess their potential effect on airborne data. The UAV is then flown, and a modified compensation routine is applied. These modifications include utilizing onboard autopilot current sensors as additional terms in the compensation algorithm. This process has been applied with success to fixed-wing and rotary-wing platforms, with both a standard manned-aircraft magnetometer, as well as a new atomic magnetometer, much smaller in scale.
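    The idea of adding current-sensor terms to a least-squares compensation fit can be sketched generically as below; the regressor set, coefficient values, and simulated interference are assumptions for illustration and are not the NRC/Aeromagnetic Solutions routine or a full Tolles-Lawson model.

```python
import numpy as np

def fit_compensation(total_field, regressors):
    """Solve for interference coefficients c in total_field ~ regressors @ c by
    ordinary least squares; subtracting regressors @ c compensates the data."""
    c, *_ = np.linalg.lstsq(regressors, total_field, rcond=None)
    return c

rng = np.random.default_rng(2)
n = 2000
cosX, cosY, cosZ = rng.normal(0, 0.1, (3, n))      # attitude direction cosines (toy)
i_servo, i_motor = rng.normal(0, 1.0, (2, n))      # autopilot current sensors (toy)
regressors = np.column_stack([cosX, cosY, cosZ, i_servo, i_motor, np.ones(n)])

true_c = np.array([8.0, -5.0, 3.0, 1.5, 0.7, 50.0])           # nT per unit regressor
total_field = regressors @ true_c + rng.normal(0, 0.05, n)    # simulated interference
c_hat = fit_compensation(total_field, regressors)
compensated = total_field - regressors @ c_hat                # residual after compensation
print(np.round(c_hat, 2), compensated.std())
```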

  2. Compensating for pneumatic distortion in pressure sensing devices

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Leondes, Cornelius T.

    1990-01-01

    A technique of compensating for pneumatic distortion in pressure sensing devices was developed and verified. This compensation allows conventional pressure sensing technology to obtain improved unsteady pressure measurements. Pressure distortion caused by frictional attenuation and pneumatic resonance within the sensing system makes obtaining unsteady pressure measurements by conventional sensors difficult. Most distortion occurs within the pneumatic tubing which transmits pressure impulses from the aircraft's surface to the measurement transducer. To avoid pneumatic distortion, experiment designers mount the pressure sensor at the surface of the aircraft, (called in-situ mounting). In-situ transducers cannot always fit in the available space and sometimes pneumatic tubing must be run from the aircraft's surface to the pressure transducer. A technique to measure unsteady pressure data using conventional pressure sensing technology was developed. A pneumatic distortion model is reduced to a low-order, state-variable model retaining most of the dynamic characteristics of the full model. The reduced-order model is coupled with results from minimum variance estimation theory to develop an algorithm to compensate for the effects of pneumatic distortion. Both postflight and real-time algorithms are developed and evaluated using simulated and flight data.

  3. Actuator stiction compensation via variable amplitude pulses.

    PubMed

    Arifin, B M S; Munaro, C J; Angarita, O F B; Cypriano, M V G; Shah, S L

    2018-02-01

    A novel model free stiction compensation scheme is developed which eliminates the oscillations and also reduces valve movement, allowing good setpoint tracking and disturbance rejection. Pulses with varying amplitude are added to the controller output to overcome stiction and when the error becomes smaller than a specified limit, the compensation ceases and remains in a standby mode. The compensation re-starts as soon as the error exceeds the user specified threshold. The ability to cope with uncertainty in friction is a feature achieved by the use of pulses of varying amplitude. The algorithm has been evaluated via simulation and by application on an industrial DCS system interfaced to a pilot scale process with features identical to those found in industry including a valve positioner. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Evaluation and application of an algorithm for atmospheric profiling continuity from Aqua to Suomi-NPP

    NASA Astrophysics Data System (ADS)

    Lipton, A.; Moncet, J. L.; Lynch, R.; Payne, V.; Alvarado, M. J.

    2016-12-01

    We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. For analysis of satellite profiles over multi-decade periods, a concern is that the algorithm could respond inadequately to climate change if it uses a static background as a retrieval constraint, leading to retrievals that underestimate secular changes over extended periods of time and become biased toward an outdated climatology. We assessed the ability of our algorithm to respond appropriately to changes in temperature and water vapor profiles associated with climate change and, in particular, on the impact of using a climatological background in retrievals when the climatology is not static. We simulated a scenario wherein our algorithm processes 30 years of data from CrIS and ATMS (CrIMSS) with a static background based on data from the start of the 30-year period. We performed simulations using products from Coupled Model Intercomparison Project 5 (CMIP5), and in particular the "representative concentration pathways" midrange emissions (RCP4.5) scenario from the GISS-E2-R model. We will present results indicating that regularization using empirical orthogonal functions (EOFs) from a 30-year outdated covariance had a negligible effect on results. For temperature, the secular change is represented with high fidelity with the Cr

  5. Indoor positioning algorithm combined with angular vibration compensation and the trust region technique based on received signal strength-visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Li, Haoxu; Zhang, Xiaofeng; Wu, Rangzhong

    2017-05-01

    Indoor positioning using visible light communication has become a topic of intensive research in recent years. Because in practice the normal of the receiver deviates from that of the transmitter, positioning systems that require the receiver normal to be aligned with the transmitter normal suffer large positioning errors. Some algorithms take the angular vibrations into account; nevertheless, these positioning algorithms cannot meet the requirements of high accuracy or low complexity. A visible light positioning algorithm combined with angular vibration compensation is proposed. The angle information from the accelerometer or other angle acquisition devices is used to calculate the angle of incidence even when the receiver is not horizontal. Meanwhile, a received signal strength technique with high accuracy is employed to determine the location. Moreover, an eight-light-emitting-diode (LED) system model is provided to improve the accuracy. The simulation results show that the proposed system can achieve a low positioning error with low complexity, and the eight-LED system exhibits improved performance. Furthermore, trust region-based positioning is proposed to determine three-dimensional locations and achieves high accuracy in both the horizontal and the vertical components.
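    A hedged sketch of the angle-compensation idea follows: the Lambertian line-of-sight channel is inverted for distance using the incidence angle supplied by an accelerometer, so a tilted receiver no longer biases the range estimate. All parameter values and the single-LED geometry are illustrative assumptions, not the proposed eight-LED algorithm.

```python
import numpy as np

# Lambertian line-of-sight channel parameters (illustrative values only)
M_LAMB = 1.0          # Lambertian order of the LED
AREA   = 1e-4         # photodiode area, m^2
P_TX   = 1.0          # transmitted optical power, W

def distance_from_rss(p_rx, phi, psi):
    """Invert the Lambertian LOS model for the LED-receiver distance.
    phi: irradiance angle at the LED; psi: incidence angle at the receiver,
    taken from the accelerometer so a tilted receiver is still handled."""
    gain = (M_LAMB + 1) * AREA * np.cos(phi) ** M_LAMB * np.cos(psi) / (2 * np.pi)
    return np.sqrt(gain * P_TX / p_rx)

# Receiver tilted 10 degrees: compare uncompensated (psi = phi) vs compensated
phi = np.deg2rad(20.0)
psi_true = phi + np.deg2rad(10.0)
p_rx = (M_LAMB + 1) * AREA * P_TX * np.cos(phi) * np.cos(psi_true) / (2 * np.pi * 2.5 ** 2)
print(distance_from_rss(p_rx, phi, phi))       # biased: ignores the tilt
print(distance_from_rss(p_rx, phi, psi_true))  # compensated: recovers ~2.5 m
```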

  6. Fluid surface compensation in digital holographic microscopy for topography measurement

    NASA Astrophysics Data System (ADS)

    Lin, Li-Chien; Tu, Han-Yen; Lai, Xin-Ji; Wang, Sheng-Shiun; Cheng, Chau-Jern

    2012-06-01

    A novel technique is presented for surface compensation and topography measurement of a specimen in fluid medium by digital holographic microscopy (DHM). In the measurement, the specimen is preserved in a culture dish full of liquid culture medium and an environmental vibration induces a series of ripples to create a non-uniform background on the reconstructed phase image. A background surface compensation algorithm is proposed to account for this problem. First, we distinguish the cell image from the non-uniform background and a morphological image operation is used to reduce the noise effect on the background surface areas. Then, an adaptive sampling from the background surface is employed, taking dense samples from the high-variation area while leaving the smooth region mostly untouched. A surface fitting algorithm based on the optimal bi-cubic functional approximation is used to establish a whole background surface for the phase image. Once the background surface is found, the background compensated phase can be obtained by subtracting the estimated background from the original phase image. From the experimental results, the proposed algorithm performs effectively in removing the non-uniform background of the phase image and has the ability to obtain the specimen topography inside fluid medium under environmental vibrations.

  7. Compensating for telecommunication delays during robotic telerehabilitation.

    PubMed

    Consoni, Leonardo J; Siqueira, Adriano A G; Krebs, Hermano I

    2017-07-01

    Rehabilitation robotic systems may afford better care and telerehabilitation may extend the use and benefits of robotic therapy to the home. Data transmissions over distance are bound by intrinsic communication delays which can be significant enough to deem the activity unfeasible. Here we describe an approach that combines unilateral robotic telerehabilitation and serious games. This approach has a modular and distributed design that permits different types of robots to interact without substantial code changes. We demonstrate the approach through an online multiplayer game. Two users can remotely interact with each other with no force exchanges, while a smoothing and prediction algorithm compensates motions for the delay in the Internet connection. We demonstrate that this approach can successfully compensate for data transmission delays, even when testing between the United States and Brazil. This paper presents the initial experimental results, which highlight the performance degradation with increasing delays as well as improvements provided by the proposed algorithm, and discusses planned future developments.

  8. Retrieval Algorithm for Broadband Albedo at the Top of the Atmosphere

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Ho; Lee, Kyu-Tae; Kim, Bu-Yo; Zo, ll-Sung; Jung, Hyun-Seok; Rim, Se-Hun

    2018-05-01

    The objective of this study is to develop an algorithm that retrieves the broadband albedo at the top of the atmosphere (TOA albedo) for radiation budget and climate analysis of Earth's atmosphere using Geostationary Korea Multi-Purpose Satellite/Advanced Meteorological Imager (GK-2A/AMI) data. Because the GK-2A satellite will launch in 2018, we used data from the Japanese weather satellite Himawari-8 and onboard sensor Advanced Himawari Imager (AHI), which has similar sensor properties and observation area to those of GK-2A. TOA albedo was retrieved based on the reflectance and regression coefficients of shortwave channels 1 to 6 of AHI. The regression coefficients were calculated using the results of the radiative transfer model (SBDART) and ridge regression. SBDART was used to simulate the relationship between TOA albedo and the reflectance of each channel for each set of conditions (solar zenith angle, viewing zenith angle, relative azimuth angle, surface type, and absence/presence of clouds). The TOA albedo from Himawari-8/AHI was compared with that from the Clouds and the Earth's Radiant Energy System (CERES) sensor onboard the National Aeronautics and Space Administration (NASA) Terra satellite. The correlation coefficients between the two datasets from the week containing the first day of every month between 1st August 2015 and 1st July 2016 were high, ranging between 0.934 and 0.955, with the root mean square error in the 0.053-0.068 range.
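    A minimal sketch of the regression step described above, fitting ridge-regression coefficients that map shortwave channel reflectances to broadband TOA albedo for one geometry/surface/cloud bin, is shown below; the synthetic data and regularisation strength are placeholders, not the SBDART-derived coefficients.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-in for the SBDART lookup: six shortwave channel reflectances
# mapped to a broadband TOA albedo for one (geometry, surface, cloud) bin.
rng = np.random.default_rng(3)
refl = rng.uniform(0.0, 0.9, size=(2000, 6))               # channels 1-6
true_w = np.array([0.25, 0.20, 0.18, 0.15, 0.12, 0.10])
toa_albedo = refl @ true_w + rng.normal(0, 0.01, 2000)     # simulated broadband albedo

model = Ridge(alpha=1.0).fit(refl, toa_albedo)             # ridge-regression coefficients
print(np.round(model.coef_, 3), round(model.intercept_, 3))

# Applying the per-bin coefficients to a new scene's channel reflectances
scene = rng.uniform(0.0, 0.9, size=(1, 6))
print(float(model.predict(scene)[0]))
```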

  9. Assessment of Atmospheric Algorithms to Retrieve Vegetation in Natural Protected Areas Using Multispectral High Resolution Imagery

    PubMed Central

    Marcello, Javier; Eugenio, Francisco; Perdomo, Ulises; Medina, Anabella

    2016-01-01

    The precise mapping of vegetation covers in semi-arid areas is a complex task as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology to focus on this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods or as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, few comparative studies covering several methods have been carried out using high resolution data or applied specifically to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high resolution Worldview-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the parameterization influence on the final results of the correction, the aerosol model and its optical thickness being important parameters to be properly adjusted. The effects of corrections were studied in vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE of between 2% and 3%). Finally, an example of the importance of the atmospheric correction in the vegetation estimation in these natural areas is presented, allowing the robust mapping of species and the analysis of multitemporal variations related to the

  10. Assessment of Atmospheric Algorithms to Retrieve Vegetation in Natural Protected Areas Using Multispectral High Resolution Imagery.

    PubMed

    Marcello, Javier; Eugenio, Francisco; Perdomo, Ulises; Medina, Anabella

    2016-09-30

    The precise mapping of vegetation covers in semi-arid areas is a complex task as this type of environment consists of sparse vegetation mainly composed of small shrubs. The launch of high resolution satellites, with additional spectral bands and the ability to alter the viewing angle, offers a useful technology to focus on this objective. In this context, atmospheric correction is a fundamental step in the pre-processing of such remote sensing imagery and, consequently, different algorithms have been developed for this purpose over the years. They are commonly categorized as image-based methods or as more advanced physical models based on radiative transfer theory. Despite the relevance of this topic, few comparative studies covering several methods have been carried out using high resolution data or applied specifically to vegetation covers. In this work, the performance of five representative atmospheric correction algorithms (DOS, QUAC, FLAASH, ATCOR and 6S) has been assessed, using high resolution Worldview-2 imagery and field spectroradiometer data collected simultaneously, with the goal of identifying the most appropriate techniques. The study also included a detailed analysis of the parameterization influence on the final results of the correction, the aerosol model and its optical thickness being important parameters to be properly adjusted. The effects of corrections were studied in vegetation and soil sites belonging to different protected semi-arid ecosystems (high mountain and coastal areas). In summary, the superior performance of model-based algorithms, 6S in particular, has been demonstrated, achieving reflectance estimations very close to the in-situ measurements (RMSE of between 2% and 3%). Finally, an example of the importance of the atmospheric correction in the vegetation estimation in these natural areas is presented, allowing the robust mapping of species and the analysis of multitemporal variations related to the

  11. Estimation of precipitable water vapor of atmosphere using artificial neural network, support vector machine and multiple linear regression algorithm and their comparative study

    NASA Astrophysics Data System (ADS)

    Shastri, Niket; Pathak, Kamlesh

    2018-05-01

    The water vapor content in the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, which is a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms, namely artificial neural network, support vector machine and multiple linear regression, are used to predict precipitable water vapor. Comparative studies in terms of root mean square error and mean absolute error are also carried out for all the algorithms.
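    A hedged sketch of the comparison described above is given below, training an artificial neural network, a support vector machine, and multiple linear regression on synthetic stand-in predictors and scoring them by RMSE and MAE; the features, target, and hyperparameters are placeholders, not the study's GPS-derived dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Placeholder predictors (e.g. zenith wet delay, surface temperature, pressure)
# and a synthetic precipitable water vapour target.
rng = np.random.default_rng(4)
X = rng.normal(size=(3000, 3))
y = 6.0 * X[:, 0] + 1.5 * X[:, 1] - 0.8 * X[:, 2] + rng.normal(0, 0.5, 3000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    "SVM": SVR(C=10.0, epsilon=0.1),
    "MLR": LinearRegression(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: RMSE={rmse:.3f}  MAE={mae:.3f}")
```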

  12. Work unit compensation.

    PubMed

    Sodano, M J

    1991-01-01

    The author describes an innovative "work unit compensation" system that acts as an adjunct to existing personnel payment structures. The process, developed as a win-win alternative for both employees and their institution, includes a reward system for the entire department and insures a team atmosphere. The Community Medical Center in Toms River, New Jersey developed the plan which sets the four basic goals: to be fair, economical, lasting and transferable (FELT). The plan has proven to be a useful tool in retention and recruitment of qualified personnel.

  13. Compensators: An alternative IMRT delivery technique

    PubMed Central

    Chang, Sha X.; Cullip, Timothy J.; Deschesne, Katharin M.; Miller, Elizabeth P.; Rosenman, Julian G.

    2004-01-01

    Seven years of experience in compensator intensity‐modulated radiotherapy (IMRT) clinical implementation are presented. An inverse planning dose optimization algorithm was used to generate intensity modulation maps, which were delivered via either the compensator or segmental multileaf collimator (MLC) IMRT techniques. The in‐house developed compensator‐IMRT technique is presented with the focus on several design issues. The dosimetry of the delivery techniques was analyzed for several clinical cases. The treatment time for both delivery techniques on Siemens accelerators was retrospectively analyzed based on the electronic treatment record in LANTIS for 95 patients. We found that the compensator technique consistently took noticeably less time for treatment of equal numbers of fields compared to the segmental technique. The typical time needed to fabricate a compensator was 13 min, 3 min of which was manual processing. More than 80% of the approximately 700 compensators evaluated had a maximum deviation of less than 5% from the calculation in intensity profile. Seventy‐two percent of the patient treatment dosimetry measurements for 340 patients have an error of no more than 5%. The pros and cons of different IMRT compensator materials are also discussed. Our experience shows that the compensator‐IMRT technique offers robustness, excellent intensity modulation resolution, high treatment delivery efficiency, simple fabrication and quality assurance (QA) procedures, and the flexibility to be used in any teletherapy unit. PACS numbers: 87.53Mr, 87.53Tf PMID:15753937

  14. An Alternate Method to Springback Compensation for Sheet Metal Forming

    PubMed Central

    Omar, Badrul; Jusoff, Kamaruzaman

    2014-01-01

    The aim of this work is to improve the accuracy of cold-stamped products by accommodating springback. A numerical approach is presented that improves the accuracy of springback analysis and the die compensation process by combining the displacement adjustment (DA) method and the spring forward (SF) algorithm. This alternate hybrid method (HM) first applies the DA method and then the SF method, instead of using either method individually. The springback shape and the target part are used to optimize the die surfaces compensating springback. The hybrid method (HM) algorithm has been coded in Fortran and tested in two- and three-dimensional models. By implementing the HM, the springback error can be decreased and the dimensional deviation falls within the predefined tolerance range. PMID:25165738

  15. Temperature Effects and Compensation-Control Methods

    PubMed Central

    Xia, Dunzhu; Chen, Shuling; Wang, Shourong; Li, Hongsheng

    2009-01-01

    In the analysis of the effects of temperature on the performance of microgyroscopes, it is found that the resonant frequency of the microgyroscope decreases linearly as the temperature increases, and the quality factor changes drastically at low temperatures. Moreover, the zero bias changes greatly with temperature variations. To reduce the temperature effects on the microgyroscope, temperature compensation-control methods are proposed. In the first place, a BP (Back Propagation) neural network and polynomial fitting are utilized for building the temperature model of the microgyroscope. Considering the simplicity and real-time requirements, piecewise polynomial fitting is applied in the temperature compensation system. Then, an integral-separated PID (Proportion Integration Differentiation) control algorithm is adopted in the temperature control system, which stabilizes the temperature inside the microgyroscope in pursuit of its optimal performance. Experimental results reveal that the combination of microgyroscope temperature compensation and control methods is both realizable and effective in a miniaturized microgyroscope prototype. PMID:22408509
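    The integral-separated PID idea mentioned above can be sketched as follows: the integral term accumulates only while the error magnitude is below a separation threshold, limiting wind-up during large transients. The gains, threshold, and toy thermal plant are illustrative assumptions, not the authors' tuning.

```python
class IntegralSeparatedPID:
    """PID controller whose integral term is frozen while the error magnitude
    exceeds a separation threshold; gains and threshold are illustrative only."""

    def __init__(self, kp, ki, kd, threshold, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.threshold, self.dt = threshold, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        if abs(error) < self.threshold:          # integrate only near the setpoint
            self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order thermal plant driven toward a 60 degC setpoint
pid = IntegralSeparatedPID(kp=2.0, ki=0.5, kd=0.1, threshold=2.0, dt=0.1)
temp = 25.0
for _ in range(300):
    power = pid.update(60.0, temp)
    temp += 0.1 * (power - 0.05 * (temp - 25.0))   # crude heater-plus-loss model
print(round(temp, 2))                              # settles close to the setpoint
```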

  16. Command generator tracker based direct model reference adaptive tracking guidance for Mars atmospheric entry

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Peng, Yuming

    2012-01-01

    In order to accurately deliver an entry vehicle through the Martian atmosphere to the prescribed parachute deployment point, active Mars entry guidance is essential. This paper addresses the issue of Mars atmospheric entry guidance using the command generator tracker (CGT) based direct model reference adaptive control to reduce the adverse effect of the bounded uncertainties on atmospheric density and aerodynamic coefficients. Firstly, the nominal drag acceleration profile meeting a variety of constraints is planned off-line in the longitudinal plane as the reference model to track. Then, the CGT based direct model reference adaptive controller and the feed-forward compensator are designed to robustly track the aforementioned reference drag acceleration profile and to effectively reduce the downrange error. Afterwards, the heading alignment logic is adopted in the lateral plane to reduce the crossrange error. Finally, the validity of the guidance algorithm proposed in this paper is confirmed by Monte Carlo simulation analysis.

  17. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.

  18. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  19. Prostate implant reconstruction from C-arm images with motion-compensated tomosynthesis

    PubMed Central

    Dehghan, Ehsan; Moradi, Mehdi; Wen, Xu; French, Danny; Lobo, Julio; Morris, W. James; Salcudean, Septimiu E.; Fichtinger, Gabor

    2011-01-01

    Purpose: Accurate localization of prostate implants from several C-arm images is necessary for ultrasound-fluoroscopy fusion and intraoperative dosimetry. The authors propose a computational motion compensation method for tomosynthesis-based reconstruction that enables 3D localization of prostate implants from C-arm images despite C-arm oscillation and sagging. Methods: Five C-arm images are captured by rotating the C-arm around its primary axis, while measuring its rotation angle using a protractor or the C-arm joint encoder. The C-arm images are processed to obtain binary seed-only images from which a volume of interest is reconstructed. The motion compensation algorithm, iteratively, compensates for 2D translational motion of the C-arm by maximizing the number of voxels that project on a seed projection in all of the images. This obviates the need for C-arm full pose tracking traditionally implemented using radio-opaque fiducials or external trackers. The proposed reconstruction method is tested in simulations, in a phantom study and on ten patient data sets. Results: In a phantom implanted with 136 dummy seeds, the seed detection rate was 100% with a localization error of 0.86 ± 0.44 mm (Mean ± STD) compared to CT. For patient data sets, a detection rate of 99.5% was achieved in approximately 1 min per patient. The reconstruction results for patient data sets were compared against an available matching-based reconstruction method and showed relative localization difference of 0.5 ± 0.4 mm. Conclusions: The motion compensation method can successfully compensate for large C-arm motion without using radio-opaque fiducial or external trackers. Considering the efficacy of the algorithm, its successful reconstruction rate and low computational burden, the algorithm is feasible for clinical use. PMID:21992346

  20. Quantitative assessment of tumor angiogenesis using real-time motion-compensated contrast-enhanced ultrasound imaging

    PubMed Central

    Pysz, Marybeth A.; Guracar, Ismayil; Foygel, Kira; Tian, Lu; Willmann, Jürgen K.

    2015-01-01

    Purpose To develop and test a real-time motion compensation algorithm for contrast-enhanced ultrasound imaging of tumor angiogenesis on a clinical ultrasound system. Materials and methods The Administrative Institutional Panel on Laboratory Animal Care approved all experiments. A new motion correction algorithm measuring the sum of absolute differences in pixel displacements within a designated tracking box was implemented in a clinical ultrasound machine. In vivo angiogenesis measurements (expressed as percent contrast area) with and without motion compensated maximum intensity persistence (MIP) ultrasound imaging were analyzed in human colon cancer xenografts (n = 64) in mice. Differences in MIP ultrasound imaging signal with and without motion compensation were compared and correlated with displacements in x- and y-directions. The algorithm was tested in an additional twelve colon cancer xenograft-bearing mice with (n = 6) and without (n = 6) anti-vascular therapy (ASA-404). In vivo MIP percent contrast area measurements were quantitatively correlated with ex vivo microvessel density (MVD) analysis. Results MIP percent contrast area was significantly different (P < 0.001) with and without motion compensation. Differences in percent contrast area correlated significantly (P < 0.001) with x- and y-displacements. MIP percent contrast area measurements were more reproducible with motion compensation (ICC = 0.69) than without (ICC = 0.51) on two consecutive ultrasound scans. Following anti-vascular therapy, motion-compensated MIP percent contrast area significantly (P = 0.03) decreased by 39.4 ± 14.6 % compared to non-treated mice and correlated well with ex vivo MVD analysis (Rho = 0.70; P = 0.05). Conclusion Real-time motion-compensated MIP ultrasound imaging allows reliable and accurate quantification and monitoring of angiogenesis in tumors exposed to breathing-induced motion artifacts. PMID:22535383
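    A minimal sketch of sum-of-absolute-differences (SAD) tracking within a designated box, the core of the motion-correction step described above, is shown below; the tracking-box coordinates, search radius, and synthetic frames are illustrative assumptions, not the clinical system's implementation.

```python
import numpy as np

def sad_displacement(frame, reference, box, search=8):
    """Estimate the (dy, dx) shift of the tracking box between two frames by
    minimising the sum of absolute differences (SAD) over a small search range."""
    y0, y1, x0, x1 = box
    template = reference[y0:y1, x0:x1]
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            candidate = frame[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
            sad = np.abs(candidate.astype(float) - template.astype(float)).sum()
            if sad < best:
                best, best_shift = sad, (dy, dx)
    return best_shift

rng = np.random.default_rng(5)
ref = rng.integers(0, 255, size=(128, 128))
shifted = np.roll(ref, shift=(3, -2), axis=(0, 1))           # simulated breathing motion
print(sad_displacement(shifted, ref, box=(40, 80, 40, 80)))  # recovers the (3, -2) shift
```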

  1. Quantitative assessment of tumor angiogenesis using real-time motion-compensated contrast-enhanced ultrasound imaging.

    PubMed

    Pysz, Marybeth A; Guracar, Ismayil; Foygel, Kira; Tian, Lu; Willmann, Jürgen K

    2012-09-01

    To develop and test a real-time motion compensation algorithm for contrast-enhanced ultrasound imaging of tumor angiogenesis on a clinical ultrasound system. The Administrative Institutional Panel on Laboratory Animal Care approved all experiments. A new motion correction algorithm measuring the sum of absolute differences in pixel displacements within a designated tracking box was implemented in a clinical ultrasound machine. In vivo angiogenesis measurements (expressed as percent contrast area) with and without motion compensated maximum intensity persistence (MIP) ultrasound imaging were analyzed in human colon cancer xenografts (n = 64) in mice. Differences in MIP ultrasound imaging signal with and without motion compensation were compared and correlated with displacements in x- and y-directions. The algorithm was tested in an additional twelve colon cancer xenograft-bearing mice with (n = 6) and without (n = 6) anti-vascular therapy (ASA-404). In vivo MIP percent contrast area measurements were quantitatively correlated with ex vivo microvessel density (MVD) analysis. MIP percent contrast area was significantly different (P < 0.001) with and without motion compensation. Differences in percent contrast area correlated significantly (P < 0.001) with x- and y-displacements. MIP percent contrast area measurements were more reproducible with motion compensation (ICC = 0.69) than without (ICC = 0.51) on two consecutive ultrasound scans. Following anti-vascular therapy, motion-compensated MIP percent contrast area significantly (P = 0.03) decreased by 39.4 ± 14.6 % compared to non-treated mice and correlated well with ex vivo MVD analysis (Rho = 0.70; P = 0.05). Real-time motion-compensated MIP ultrasound imaging allows reliable and accurate quantification and monitoring of angiogenesis in tumors exposed to breathing-induced motion artifacts.

  2. Control optimization, stabilization and computer algorithms for aircraft applications

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.

  3. Automated hierarchical time gain compensation for in-vivo ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Hemmsen, Martin C.; Martins, Bo; Brandt, Andreas H.; Hansen, Kristoffer L.; Nielsen, Michael B.; Jensen, Jørgen A.

    2015-03-01

    Time gain compensation (TGC) is essential to ensure optimal image quality of clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution changes drastically and TGC becomes challenging. This paper presents an automated hierarchical TGC (AHTGC) algorithm that accurately adapts to the large attenuation variation between different types of tissues and structures. The algorithm relies on estimates of tissue attenuation, scattering strength, and noise level to gain a more quantitative understanding of the underlying tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC, were visualized side by side and evaluated by two radiologists in terms of image quality. A Wilcoxon signed-rank test was used to evaluate whether radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) is positive (p-value: 2.34 × 10⁻¹³) and estimated to be 1.01 (95% CI: 0.85; 1.16), favoring the data processed with the proposed AHTGC algorithm.
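    The record estimates tissue attenuation and noise to drive the gain hierarchy. As background, the elementary TGC step, amplifying echoes in proportion to the assumed round-trip attenuation at each depth, can be sketched as below; the attenuation coefficient, center frequency, and the purely depth-dependent gain are placeholder assumptions and not the AHTGC algorithm itself.

        import numpy as np

        def apply_tgc(envelope, depths_cm, alpha_db_cm_mhz=0.5, f_mhz=5.0):
            """Depth-dependent time gain compensation for envelope data.

            envelope        : 2D array (depth x lateral) of envelope samples
            depths_cm       : depth of each row in cm
            alpha_db_cm_mhz : assumed attenuation coefficient (dB/cm/MHz)
            f_mhz           : transmit center frequency in MHz
            """
            atten_db = 2.0 * alpha_db_cm_mhz * f_mhz * np.asarray(depths_cm)  # two-way loss
            gain = 10.0 ** (atten_db / 20.0)              # convert dB to linear amplitude gain
            return envelope * gain[:, None]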

  4. Time-varying delays compensation algorithm for powertrain active damping of an electrified vehicle equipped with an axle motor during regenerative braking

    NASA Astrophysics Data System (ADS)

    Zhang, Junzhi; Li, Yutong; Lv, Chen; Gou, Jinfang; Yuan, Ye

    2017-03-01

    The flexibility of the electrified powertrain system has a negative effect on the cooperative control performance between regenerative and hydraulic braking and on the active damping control performance. Meanwhile, the connections among sensors, controllers, and actuators are realized via network communication, i.e., a controller area network (CAN), which introduces time-varying delays and deteriorates the performance of the closed-loop control systems. The goal of this paper is therefore to develop a control algorithm that copes with all of these challenges. To this end, models of the stochastic network-induced time-varying delays, based on a real in-vehicle network topology, and of a flexible electrified powertrain were first built. To further enhance the performance of active damping and of the cooperative control of regenerative and hydraulic braking, a time-varying delay compensation algorithm for electrified powertrain active damping during regenerative braking was developed based on a predictive scheme. The augmented system is constructed and its H∞ performance is analyzed. Based on this analysis, the control gains are derived by solving a nonlinear minimization problem. Simulations and hardware-in-the-loop (HIL) tests were carried out to validate the effectiveness of the developed algorithm. The test results show that the active damping and cooperative control performances are enhanced significantly.

  5. Simultaneous Retrieval of Temperature, Water Vapor and Ozone Atmospheric Profiles from IASI: Compression, De-noising, First Guess Retrieval and Inversion Algorithms

    NASA Technical Reports Server (NTRS)

    Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high spectral resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. A neural network using the first guess information is then developed to retrieve temperature, water vapor, and ozone atmospheric profiles simultaneously. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres, including rare events.
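    A minimal sketch of the compression/de-noising step via Principal Component Analysis is given below, assuming an ensemble of training spectra (e.g., simulated IASI radiances) is available; the component count and the array names are illustrative only, and the first-guess pattern recognition and neural-network inversion are not shown.

        import numpy as np

        def pca_compress_denoise(train_spectra, obs_spectra, n_comp=100):
            """Project observed spectra onto the leading principal components of a
            training ensemble; the truncated reconstruction is the de-noised spectrum."""
            mean = train_spectra.mean(axis=0)
            # Principal directions via SVD of the centered training ensemble.
            _, _, vt = np.linalg.svd(train_spectra - mean, full_matrices=False)
            basis = vt[:n_comp]                           # leading components (n_comp x n_channels)
            scores = (obs_spectra - mean) @ basis.T       # compressed representation
            denoised = scores @ basis + mean              # truncated (de-noised) reconstruction
            return scores, denoised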

  6. An adaptive guidance algorithm for an aerodynamically assisted orbital plane change maneuver. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Blissit, J. A.

    1986-01-01

    Using analysis results from the post trajectory optimization program, an adaptive guidance algorithm is developed to compensate for density, aerodynamic, and thrust perturbations during an atmospheric orbital plane change maneuver. The maneuver offers increased mission flexibility along with potential fuel savings for future reentry vehicles. Although designed to guide a proposed NASA Entry Research Vehicle, the algorithm is sufficiently generic for a range of future entry vehicles. The plane change analysis provides insight suggesting a straightforward algorithm based on an optimized nominal command profile. Bank angle, angle of attack, and engine thrust level, ignition and cutoff times are modulated to adjust the vehicle's trajectory to achieve the desired end-conditions. A performance evaluation of the scheme demonstrates a capability to guide to within 0.05 degrees of the desired plane change and five nautical miles of the desired apogee altitude while maintaining heating constraints. The algorithm is tested under off-nominal conditions of ±30% density biases, two density profile models, ±15% aerodynamic uncertainty, and a 33% thrust loss, and for various combinations of these conditions.

  7. Compensation in the presence of deep turbulence using tiled-aperture architectures

    NASA Astrophysics Data System (ADS)

    Spencer, Mark F.; Brennan, Terry J.

    2017-05-01

    The presence of distributed-volume atmospheric aberrations or "deep turbulence" presents unique challenges for beam-control applications which look to sense and correct for disturbances found along the laser-propagation path. This paper explores the potential for branch-point-tolerant reconstruction algorithms and tiled-aperture architectures to correct for the branch cuts contained in the phase function due to deep-turbulence conditions. Using wave-optics simulations, the analysis aims to parameterize the fitting-error performance of tiled-aperture architectures operating in a null-seeking control loop with piston, tip, and tilt compensation of the individual optical beamlet trains. To evaluate fitting-error performance, the analysis plots normalized power in the bucket as a function of the Fried coherence diameter, the log-amplitude variance, and the number of subapertures for comparison purposes. Initial results show that tiled-aperture architectures with a large number of subapertures outperform filled-aperture architectures with continuous-face-sheet deformable mirrors.

  8. Transponder-aided joint calibration and synchronization compensation for distributed radar systems.

    PubMed

    Wang, Wen-Qin

    2015-01-01

    High-precision radiometric calibration and synchronization compensation must be provided for distributed radar systems because their transmitters and receivers are separate. This paper proposes transponder-aided joint radiometric calibration, motion compensation and synchronization for distributed radar remote sensing. As the transponder signal can be separated from the normal radar returns, it is used to calibrate the distributed radar for radiometry. Meanwhile, the distributed radar motion compensation and synchronization compensation algorithms are presented by utilizing the transponder signals. This method requires no hardware modifications to either the radar transmitter or the receiver and no change to the operating pulse repetition frequency (PRF). The distributed radar radiometric calibration and synchronization compensation require only one transponder, but the motion compensation requires six transponders because there are six independent variables in the distributed radar geometry. Furthermore, a maximum likelihood method is used to estimate the transponder signal parameters. The proposed methods are verified by simulation results.

  9. A computer program for borehole compensation of dual-detector density well logs

    USGS Publications Warehouse

    Scott, James Henry

    1978-01-01

    The computer program described in this report was developed for applying a borehole-rugosity and mudcake compensation algorithm to dual-density logs using the following information: the water level in the drill hole, hole diameter (from a caliper log if available, or the nominal drill diameter if not), and the two gamma-ray count rate logs from the near and far detectors of the density probe. The equations that represent the compensation algorithm and the calibration of the two detectors (for converting count rate to density) were derived specifically for a probe manufactured by Comprobe Inc. (5.4 cm O.D. dual-density-caliper); they are not applicable to other probes. However, equivalent calibration and compensation equations can be empirically determined for any other similar two-detector density probes and substituted in the computer program listed in this report. * Use of brand names in this report does not necessarily constitute endorsement by the U.S. Geological Survey.

  10. Dreaming of Atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2016-10-01

    Radiative transfer retrievals have become the standard in modelling of exoplanetary transmission and emission spectra. Analysing currently available observations of exoplanetary atmospheres often invokes large and correlated parameter spaces that can be difficult to map or constrain. To address these issues, we have developed the Tau-REx (tau-retrieval of exoplanets) retrieval and the RobERt spectral recognition algorithms. Tau-REx is a Bayesian atmospheric retrieval framework using Nested Sampling and cluster computing to fully map these large correlated parameter spaces. Nonetheless, data volumes can become prohibitively large and we must often select a subset of potential molecular/atomic absorbers in an atmosphere. In the era of open-source, automated and self-sufficient retrieval algorithms, such manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is built to address these issues. RobERt is a deep belief neural network (DBN) trained to accurately recognise molecular signatures for a wide range of planets, atmospheric thermal profiles and compositions. Using these deep neural networks, we work towards retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process. In this talk I will discuss how neural networks and Bayesian Nested Sampling can be used to solve highly degenerate spectral retrieval problems and what 'dreaming' neural networks can tell us about atmospheric characteristics.

  11. Robust control algorithms for Mars aerobraking

    NASA Technical Reports Server (NTRS)

    Shipley, Buford W., Jr.; Ward, Donald T.

    1992-01-01

    Four atmospheric guidance concepts have been adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. The first two offer improvements to the Analytic Predictor Corrector (APC) to increase its robustness to density variations. The second two are variations of a new Liapunov tracking exit phase algorithm, developed to guide the vehicle along a reference trajectory. These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. MARSGRAM is used to develop realistic atmospheres for the study. When square wave density pulses perturb the atmosphere, all four controllers are successful. The algorithms are tested against atmospheres where the inbound and outbound density functions are different. Square wave density pulses are again used, but only for the outbound leg of the trajectory. Additionally, sine waves are used to perturb the density function. The new algorithms are found to be more robust than any previously tested, and a Liapunov controller is selected as the most robust control algorithm examined overall.

  12. An unstructured-mesh finite-volume MPDATA for compressible atmospheric dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kühnlein, Christian, E-mail: christian.kuehnlein@ecmwf.int; Smolarkiewicz, Piotr K., E-mail: piotr.smolarkiewicz@ecmwf.int

    An advancement of the unstructured-mesh finite-volume MPDATA (Multidimensional Positive Definite Advection Transport Algorithm) is presented that formulates the error-compensative pseudo-velocity of the scheme to rely only on face-normal advective fluxes to the dual cells, in contrast to the full vector employed in previous implementations. This is essentially achieved by expressing the temporal truncation error underlying the pseudo-velocity in a form consistent with the flux-divergence of the governing conservation law. The development is especially important for integrating fluid dynamics equations on non-rectilinear meshes whenever face-normal advective mass fluxes are employed for transport compatible with mass continuity—the latter being essential for flux-form schemes. In particular, the proposed formulation enables large-time-step semi-implicit finite-volume integration of the compressible Euler equations using MPDATA on arbitrary hybrid computational meshes. Furthermore, it facilitates multiple error-compensative iterations of the finite-volume MPDATA and improved overall accuracy. The advancement combines straightforwardly with earlier developments, such as the nonoscillatory option, the infinite-gauge variant, and moving curvilinear meshes. A comprehensive description of the scheme is provided for a hybrid horizontally-unstructured vertically-structured computational mesh for efficient global atmospheric flow modelling. The proposed finite-volume MPDATA is verified using selected 3D global atmospheric benchmark simulations, representative of hydrostatic and non-hydrostatic flow regimes. Besides the added capabilities, the scheme retains fully the efficacy of established finite-volume MPDATA formulations.

  13. DREAMING OF ATMOSPHERES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldmann, I. P., E-mail: ingo@star.ucl.ac.uk

    Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the “dreams” of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.

  14. Dreaming of Atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, I. P.

    2016-04-01

    Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the “dreams” of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.

  15. Joint compensation scheme of polarization crosstalk, intersymbol interference, frequency offset, and phase noise based on cascaded Kalman filter

    NASA Astrophysics Data System (ADS)

    Zhang, Qun; Yang, Yanfu; Xiang, Qian; Zhou, Zhongqing; Yao, Yong

    2018-02-01

    A joint compensation scheme based on a cascaded Kalman filter is proposed, which can implement polarization tracking, channel equalization, frequency offset compensation, and phase noise compensation simultaneously. The experimental results show that the proposed algorithm can not only compensate multiple channel impairments simultaneously but also improve the polarization tracking capacity and accelerate convergence. The scheme has up to eight times faster convergence compared with radius-directed equalizer (RDE) + Max-FFT (maximum fast Fourier transform) + BPS (blind phase search) and can track polarization rotation 60 times and 15 times faster than RDE + Max-FFT + BPS and CMMA (cascaded multimodulus algorithm) + Max-FFT + BPS, respectively.

  16. CFO compensation method using optical feedback path for coherent optical OFDM system

    NASA Astrophysics Data System (ADS)

    Moon, Sang-Rok; Hwang, In-Ki; Kang, Hun-Sik; Chang, Sun Hyok; Lee, Seung-Woo; Lee, Joon Ki

    2017-07-01

    We investigate the feasibility of a carrier frequency offset (CFO) compensation method using an optical feedback path for a coherent optical orthogonal frequency division multiplexing (CO-OFDM) system. Recently proposed CFO compensation algorithms provide a wide CFO estimation range in the electrical domain. However, their practical compensation range is limited by the sampling rate of the analog-to-digital converter (ADC). This limitation has not drawn attention, since the ADC sampling rate was high enough compared to the data bandwidth and CFO in wireless OFDM systems. For CO-OFDM, the limitation is becoming visible because of increased data bandwidth, laser instability (i.e., large CFO), and ADC sampling rates kept low for cost reasons. To solve this problem and extend the practical CFO compensation range, we propose a CFO compensation method with an optical feedback path. By adding simple wavelength control of the local oscillator, the practical CFO compensation range can be extended to the sampling frequency range. The feasibility of the proposed method is experimentally investigated.

  17. A hybrid method for synthetic aperture ladar phase-error compensation

    NASA Astrophysics Data System (ADS)

    Hua, Zhili; Li, Hongping; Gu, Yongjian

    2009-07-01

    As a high resolution imaging sensor, synthetic aperture ladar (SAL) data contain phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank-one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and a hybrid method is built from them that can recover both the images and the point spread functions (PSFs) without any a priori information on the PSF, speeding up convergence through the choice of initialization. When integrated into a spotlight-mode SAL imaging model, all three methods effectively reduce the phase-error distortion. For each approach, the signal-to-noise ratio, root mean square error, and CPU time are computed, which show that the convergence rate of the hybrid method is improved because blind deconvolution starts from a more efficient initialization. Moreover, further analysis of the hybrid method shows that the weighting between ROPE and IBD is an important factor affecting the final result of the whole compensation process.

  18. Adaptive Fading Memory H∞ Filter Design for Compensation of Delayed Components in Self Powered Flux Detectors

    NASA Astrophysics Data System (ADS)

    Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol

    2015-08-01

    The paper deals with dynamic compensation of delayed Self Powered Flux Detectors (SPFDs) using a discrete-time H∞ filtering method to improve the response of SPFDs with significant delayed components, such as platinum and vanadium SPFDs. We also present a comparative study between Linear Matrix Inequality (LMI) based H∞ filtering and Algebraic Riccati Equation (ARE) based Kalman filtering methods with respect to their delay compensation capabilities. Finally, an improved recursive H∞ filter based on an adaptive fading memory technique is proposed, which provides improved performance over existing methods. Existing delay compensation algorithms do not account for the rate of change in the signal when determining the filter gain and therefore add significant noise during the delay compensation process. The proposed adaptive fading memory H∞ filter minimizes the overall noise very effectively while keeping the response time to a minimum. The recursive algorithm is easy to implement in real time compared to the LMI (or ARE) based solutions.
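    The fading-memory idea can be illustrated with an ordinary Kalman filter in which the predicted covariance is inflated by a factor lam ≥ 1, which discounts old data and shortens the effective memory; the record's recursive H∞ filter differs in its gain computation, so the sketch below is only an analogy, and the model matrices, noise values, and lam are placeholders.

        import numpy as np

        def fading_memory_kalman(measurements, A, H, Q, R, x0, P0, lam=1.02):
            """Kalman filter with a fading-memory factor lam >= 1: inflating the
            predicted covariance discounts old data, shortening the effective memory
            and speeding up the response to the prompt part of the detector signal."""
            x, P = x0.copy(), P0.copy()
            estimates = []
            for z in measurements:
                # Predict, with the covariance inflated by lam**2 (fading memory).
                x = A @ x
                P = lam**2 * (A @ P @ A.T) + Q
                # Update with the measured detector current z.
                S = H @ P @ H.T + R
                K = P @ H.T @ np.linalg.inv(S)
                x = x + K @ (z - H @ x)
                P = (np.eye(len(x)) - K @ H) @ P
                estimates.append(x.copy())
            return np.array(estimates)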

  19. Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis

    NASA Technical Reports Server (NTRS)

    Whorton, M.; Buschek, H.; Calise, A. J.

    1994-01-01

    A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.

  20. Status of the NPP and J1 NOAA Unique Combined Atmospheric Processing System (NUCAPS): recent algorithm enhancements geared toward validation and near real time users applications.

    NASA Astrophysics Data System (ADS)

    Gambacorta, A.; Nalli, N. R.; Tan, C.; Iturbide-Sanchez, F.; Wilson, M.; Zhang, K.; Xiong, X.; Barnet, C. D.; Sun, B.; Zhou, L.; Wheeler, A.; Reale, A.; Goldberg, M.

    2017-12-01

    The NOAA Unique Combined Atmospheric Processing System (NUCAPS) is the NOAA operational algorithm to retrieve thermodynamic and composition variables from hyperspectral thermal sounders such as CrIS, IASI and AIRS. The combined use of microwave sounders, such as ATMS, AMSU and MHS, enables full sounding of the atmospheric column under all-sky conditions. NUCAPS retrieval products are accessible in near real time (about a 1.5-hour delay) through the NOAA Comprehensive Large Array-data Stewardship System (CLASS). Since February 2015, NUCAPS retrievals have also been accessible via Direct Broadcast, with an unprecedented low latency of less than 0.5 hours. NUCAPS builds on a long-term, multi-agency investment in algorithm research and development. The uniqueness of this algorithm lies in a number of features that are key to providing highly accurate and stable atmospheric retrievals, suitable for real-time weather and air quality applications. Firstly, maximizing the use of the information content present in hyperspectral thermal measurements forms the foundation of the NUCAPS retrieval algorithm. Secondly, NUCAPS is a modular, namelist-driven design. It can process multiple hyperspectral infrared sounders (on Aqua, NPP, MetOp and the JPSS series) by means of the same retrieval software executable and underlying spectroscopy. Finally, a cloud-clearing algorithm and a synergetic use of microwave radiance measurements enable full vertical sounding of the atmosphere under all-sky regimes. As we transition toward improved hyperspectral missions, assessing retrieval skill and consistency across multiple platforms becomes a priority for real-time user applications. The focus of this presentation is a general introduction to the recent improvements in the delivery of the NUCAPS full spectral resolution upgrade and an overview of the lessons learned from the 2017 Hazardous Weather Testbed Spring Experiment. Test cases will be shown on the use of NPP and Met

  1. Motion-compensated speckle tracking via particle filtering

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu

    2015-07-01

    Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, high time consumption is still a significant drawback of this space-domain method. To seek a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle, superposed with global motion, throughout the region of interest (ROI). The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, and further improvement is necessary.
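    A single step of particle-filter displacement tracking with a SAD-based likelihood might look like the sketch below; the particle count, transition radius, and the exponential SAD-to-weight mapping are assumptions for illustration, resampling and iteration over frames are omitted, and the patch is assumed to lie well inside the image.

        import numpy as np

        def track_displacement(ref, cur, box, n_particles=200, radius=6.0, rng=None):
            """One particle-filter step estimating the 2D displacement of a speckle patch.
            Particles are candidate (dy, dx) shifts, weighted by a SAD-based likelihood."""
            rng = rng or np.random.default_rng(0)
            y0, y1, x0, x1 = box
            patch = ref[y0:y1, x0:x1].astype(float)
            particles = rng.uniform(-radius, radius, size=(n_particles, 2))
            weights = np.empty(n_particles)
            for i, (dy, dx) in enumerate(particles):
                dy, dx = int(round(dy)), int(round(dx))
                cand = cur[y0 + dy:y1 + dy, x0 + dx:x1 + dx].astype(float)
                sad = np.abs(patch - cand).mean()
                weights[i] = np.exp(-sad / 10.0)          # crude SAD-to-likelihood mapping
            weights /= weights.sum()
            # Posterior-mean displacement; systematic resampling would follow in a tracker.
            return weights @ particles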

  2. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
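    Without the regularization, frame-selection, and PSF-estimation stages, the maximum-likelihood estimate under a Poisson noise model reduces to the classical Richardson-Lucy iteration; a minimal single-frame sketch of that baseline (not the authors' multi-frame algorithm, and with an assumed known PSF) is:

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
            """Single-frame Richardson-Lucy deconvolution: the unregularized
            maximum-likelihood estimate under a Poisson noise model."""
            estimate = np.full_like(image, image.mean(), dtype=float)
            psf_flip = psf[::-1, ::-1]                    # adjoint of the blur operator
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / (blurred + eps)
                estimate *= fftconvolve(ratio, psf_flip, mode="same")
            return estimate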

  3. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  4. Optical-beam wavefront control based on the atmospheric backscatter signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banakh, V A; Razenkov, I A; Rostov, A P

    2015-02-28

    The feasibility of compensating for aberrations of the initial optical-beam wavefront by aperture sounding, based on the atmospheric backscatter signal from an additional laser source with a different wavelength, is experimentally studied. It is shown that an adaptive system based on this principle makes it possible to compensate for distortions of the initial beam wavefront on a surface path in the atmosphere. Specifically, the beam divergence decreases, while the level of the detected mean backscatter power from the additional laser source increases. (light scattering)

  5. Absorption cooling sources atmospheric emissions decrease by implementation of simple algorithm for limiting temperature of cooling water

    NASA Astrophysics Data System (ADS)

    Wojdyga, Krzysztof; Malicki, Marcin

    2017-11-01

    The constant drive to improve energy efficiency forces activities aimed at reducing energy consumption and hence the amount of pollutant emissions to the atmosphere. Cooling demand, both for air-conditioning and process cooling, plays an increasingly important role in the summer balance of the Polish electricity generation and distribution system. In recent years, demand for electricity during the summer months has been increasing steadily and significantly, leading to deficits in energy availability during particularly hot periods. This gives growing importance to, and interest in, trigeneration sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, most often an absorption chiller based on a lithium bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, either generated solely for them or recovered as waste energy. The publication presents a simple algorithm designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air conditioning by limiting the temperature of the cooling water, and its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental benefit has been rated for specific sources, enabling an evaluation and estimate of the effect of implementing the simple algorithm at sources existing nationally.

  6. Self-compensating tensiometer and method

    DOEpatents

    Hubbell, Joel M.; Sisson, James B.

    2003-01-01

    A pressure self-compensating tensiometer and method to in situ determine below grade soil moisture potential of earthen soil independent of changes in the volume of water contained within the tensiometer chamber, comprising a body having first and second ends, a porous material defining the first body end, a liquid within the body, a transducer housing submerged in the liquid such that a transducer sensor within the housing is kept below the working fluid level in the tensiometer and in fluid contact with the liquid and the ambient atmosphere.

  7. Neural Network Compensation for Frequency Cross-Talk in Laser Interferometry

    NASA Astrophysics Data System (ADS)

    Lee, Wooram; Heo, Gunhaeng; You, Kwanho

    The heterodyne laser interferometer acts as an ultra-precise measurement apparatus in semiconductor manufacturing. However, the periodic nonlinearity caused by frequency cross-talk is an obstacle to improving measurement accuracy at the nanometer scale. In order to minimize the nonlinearity error of the heterodyne interferometer, we propose a frequency cross-talk compensation algorithm using an artificial intelligence method. A feedforward neural network trained by back-propagation compensates for the nonlinearity error and regulates the output to minimize the difference from the reference signal. Experimental results demonstrate the improved accuracy through comparison with the position value from a capacitive displacement sensor.

  8. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. But the challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, in this paper we propose a motion-compensated total variation regularization approach that tries to fully exploit the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is a 3D spatial total variation minimization combined with a 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
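    The regularizer described here, 3D spatial total variation per phase plus 1D temporal total variation along the (motion-compensated) phase axis, can be sketched as follows; the weights mu_s and mu_t are placeholders, and the data-fidelity term and variable-splitting solver are omitted.

        import numpy as np

        def spatial_temporal_tv(volumes, mu_s=1.0, mu_t=1.0):
            """Regularization value for a motion-compensated 4D sequence:
            isotropic 3D spatial total variation summed over phases plus
            1D temporal total variation along the phase axis.

            volumes : array of shape (n_phases, nz, ny, nx)
            """
            dz = np.diff(volumes, axis=1)                 # spatial finite differences
            dy = np.diff(volumes, axis=2)
            dx = np.diff(volumes, axis=3)
            tv_s = np.sqrt(dz[:, :, :-1, :-1]**2 +
                           dy[:, :-1, :, :-1]**2 +
                           dx[:, :-1, :-1, :]**2).sum()
            tv_t = np.abs(np.diff(volumes, axis=0)).sum() # temporal differences across phases
            return mu_s * tv_s + mu_t * tv_t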

  9. Motion-compensated optical coherence tomography using envelope-based surface detection and Kalman-based prediction

    NASA Astrophysics Data System (ADS)

    Irsch, Kristina; Lee, Soohyun; Bose, Sanjukta N.; Kang, Jin U.

    2018-02-01

    We present an optical coherence tomography (OCT) imaging system that effectively compensates unwanted axial motion with micron-scale accuracy. The OCT system is based on a swept-source (SS) engine (1060-nm center wavelength, 100-nm full-width sweeping bandwidth, and 100-kHz repetition rate), with axial and lateral resolutions of about 4.5 and 8.5 microns, respectively. The SS-OCT system incorporates a distance sensing method utilizing an envelope-based surface detection algorithm. The algorithm locates the target surface from the B-scans, taking into account not just the first or highest peak but the entire signature of sequential A-scans. Subsequently, a Kalman filter is applied as a predictor to make up for system latencies, before sending the calculated position information to control a linear motor, adjusting and maintaining a fixed system-target distance. To test system performance, the motion-correction algorithm was compared to earlier, more basic peak-based surface detection methods and to performing no motion compensation. Results demonstrate increased robustness and reproducibility with the novel technique, particularly noticeable in multilayered tissues. Implementing such motion compensation into clinical OCT systems may thus improve the reliability of objective and quantitative information that can be extracted from OCT measurements.
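    The prediction step can be illustrated with a constant-velocity Kalman filter that extrapolates the detected surface depth over the known system latency; the process/measurement noise values below are placeholders, and the envelope-based surface detector itself is not shown.

        import numpy as np

        class SurfacePredictor:
            """Constant-velocity Kalman filter predicting the axial surface position
            one latency interval ahead of the latest measurement."""

            def __init__(self, dt, q=1.0, r=4.0):
                self.A = np.array([[1.0, dt], [0.0, 1.0]])    # state: [position, velocity]
                self.H = np.array([[1.0, 0.0]])               # we measure position only
                self.Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                                       [dt**3 / 2, dt**2]])   # white-acceleration process noise
                self.R = np.array([[r]])                      # measurement noise (placeholder)
                self.x = np.zeros(2)
                self.P = np.eye(2) * 1e3

            def update(self, z):
                # Predict to the current scan time, then fuse the detected surface depth z.
                self.x = self.A @ self.x
                self.P = self.A @ self.P @ self.A.T + self.Q
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + (K @ (z - self.H @ self.x)).ravel()
                self.P = (np.eye(2) - K @ self.H) @ self.P
                return self.x[0]

            def predict_ahead(self, latency):
                # Extrapolate over the known system latency before commanding the motor.
                return self.x[0] + self.x[1] * latency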

  10. Motion compensation for fully 4D PET reconstruction using PET superset data

    NASA Astrophysics Data System (ADS)

    Verhaeghe, J.; Gravel, P.; Mio, R.; Fukasawa, R.; Rosa-Neto, P.; Soucy, J.-P.; Thompson, C. J.; Reader, A. J.

    2010-07-01

    Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need, developing a motion compensation method suitable for fully 4D reconstruction methods which exploits an optical tracking system to measure the head motion along with PET superset data to store the motion compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme for motion compensated reconstruction from the superset data is required. This work proceeds to propose the corresponding time-dependent normalization modifications which are required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames as well as for parametric images of tracer uptake and volume of distribution for 18F-FDG obtained from Patlak analysis.

  11. Motion compensation for fully 4D PET reconstruction using PET superset data.

    PubMed

    Verhaeghe, J; Gravel, P; Mio, R; Fukasawa, R; Rosa-Neto, P; Soucy, J-P; Thompson, C J; Reader, A J

    2010-07-21

    Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need, developing a motion compensation method suitable for fully 4D reconstruction methods which exploits an optical tracking system to measure the head motion along with PET superset data to store the motion compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme for motion compensated reconstruction from the superset data is required. This work proceeds to propose the corresponding time-dependent normalization modifications which are required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames as well as for parametric images of tracer uptake and volume of distribution for (18)F-FDG obtained from Patlak analysis.

  12. Iterative motion compensation approach for ultrasonic thermal imaging

    NASA Astrophysics Data System (ADS)

    Fleming, Ioana; Hager, Gregory; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad

    2015-03-01

    As thermal imaging attempts to estimate very small tissue motion (on the order of tens of microns), it can be negatively influenced by signal decorrelation. A patient's breathing and cardiac cycle generate shifts in the RF signal patterns. Other sources of movement can be found outside the patient's body, such as transducer slippage or small vibrations due to environmental factors like electronic noise. Here, we build upon a robust displacement estimation method for ultrasound elastography and investigate an iterative motion compensation algorithm that can detect and remove non-heat-induced tissue motion at every step of the ablation procedure. The validation experiments are performed on laboratory-induced ablation lesions in ex vivo tissue. The ultrasound probe is either held by the operator's hand or supported by a robotic arm. We demonstrate the ability to detect and remove non-heat-induced tissue motion in both settings. We show that removing extraneous motion helps unmask the effects of heating. Our strain estimation curves closely mirror the temperature changes within the tissue. While previous results in the area of motion compensation were reported for experiments lasting less than 10 seconds, our algorithm was tested on experiments that lasted close to 20 minutes.

  13. Application of Static Var Compensator (SVC) With PI Controller for Grid Integration of Wind Farm Using Harmony Search

    NASA Astrophysics Data System (ADS)

    Keshta, H. E.; Ali, A. A.; Saied, E. M.; Bendary, F. M.

    2016-10-01

    Large-scale integration of wind turbine generators (WTGs) may have significant impacts on power system operation with respect to system frequency and bus voltages. This paper studies the effect of a Static Var Compensator (SVC) connected to a wind energy conversion system (WECS) on the voltage profile and the power generated by the induction generator (IG) in a wind farm. The paper also presents dynamic reactive power compensation using an SVC at the point of interconnection of the wind farm, where static compensation (a fixed capacitor bank) is unable to prevent voltage collapse. Moreover, the paper shows that advanced optimization techniques based on artificial intelligence (AI), such as the Harmony Search Algorithm (HS) and the Self-Adaptive Global Harmony Search Algorithm (SGHS), can be used instead of a conventional control method to tune the PI controller parameters for the SVC and the pitch angle. The paper also illustrates that the performance of the system with AI-based controllers is improved under different operating conditions. MATLAB/Simulink based simulation is utilized to demonstrate the application of the SVC in wind farm integration. Simulation is also carried out to investigate the enhancement in performance of the WECS achieved with a PI controller tuned by the Harmony Search Algorithm as compared to a conventional control method.

  14. Application of a self-compensation mechanism to a rotary-laser scanning measurement system

    NASA Astrophysics Data System (ADS)

    Guo, Siyang; Lin, Jiarui; Ren, Yongjie; Shi, Shendong; Zhu, Jigui

    2017-11-01

    In harsh environmental conditions, the relative orientations of transmitters of rotary-laser scanning measuring systems are easily influenced by low-frequency vibrations or creep deformation of the support structure. A self-compensation method that counters this problem is presented. This method is based on an improved workshop Measurement Positioning System (wMPS) with inclinometer-combined transmitters. A calibration method for the spatial rotation between the transmitter and inclinometer with an auxiliary horizontal reference frame is presented. It is shown that the calibration accuracy can be improved by a mechanical adjustment using a special bubble level. The orientation-compensation algorithm of the transmitters is described in detail. The feasibility of this compensation mechanism is validated by Monte Carlo simulations and experiments. The mechanism mainly provides a two-degrees-of-freedom attitude compensation.

  15. An Algorithm to Atmospherically Correct Visible and Thermal Airborne Imagery

    NASA Technical Reports Server (NTRS)

    Rickman, Doug L.; Luvall, Jeffrey C.; Schiller, Stephen; Arnold, James E. (Technical Monitor)

    2000-01-01

    The program Watts implements a system of physically based models, developed by the authors and described elsewhere, for the removal of atmospheric effects in multispectral imagery. The band range we treat covers the visible, near IR and the thermal IR. Input to the program begins with atmospheric models specifying transmittance and path radiance. The system also requires the sensor's spectral response curves and knowledge of the scanner's geometric definition. Radiometric characterization of the sensor during data acquisition is also necessary. While the authors contend that active calibration is critical for serious analytical efforts, we recognize that most remote sensing systems, either airborne or space borne, do not as yet attain that minimal level of sophistication. Therefore, Watts will also use semi-active calibration where necessary and available. All of the input is then reduced to common physical units. From this it is then practical to convert raw sensor readings into geophysically meaningful units. There are a large number of intricate details necessary to bring an algorithm of this type to fruition and to even use the program. Further, at this stage of development the authors are uncertain as to the optimal presentation or the minimal analytical techniques which users of this type of software must have. Therefore, Watts permits users to break out and analyze the input in various ways. Implemented in REXX under OS/2, the program is designed with attention to the probability that it will be ported to other systems and other languages. Further, as it is in REXX, it is relatively simple for anyone literate in any computer language to open the code and modify it to meet their needs. The authors have employed Watts in their research addressing precision agriculture and the urban heat island.
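    The core conversion the record describes, turning calibrated at-sensor radiance into surface-leaving quantities using modeled transmittance and path radiance, can be sketched with the standard first-order relations below; the Watts models include further terms (adjacency, downwelling radiance, sensor response weighting), so this is only an illustrative approximation with placeholder arguments.

        import numpy as np

        def surface_radiance(l_sensor, tau, l_path):
            """First-order atmospheric correction for a single band:
            L_sensor = tau * L_surface + L_path  =>  L_surface = (L_sensor - L_path) / tau."""
            return (np.asarray(l_sensor, dtype=float) - l_path) / tau

        def surface_reflectance(l_surface, e_solar, sun_zenith_deg, tau_down=1.0):
            """Approximate reflectance for a solar-reflective band, ignoring adjacency
            and multiple-scattering terms."""
            mu_s = np.cos(np.radians(sun_zenith_deg))
            return np.pi * l_surface / (tau_down * e_solar * mu_s)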

  16. Real time mitigation of atmospheric turbulence in long distance imaging using the lucky region fusion algorithm with FPGA and GPU hardware acceleration

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher Robert

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.

  17. Two-dimensional atmospheric transport and chemistry model - Numerical experiments with a new advection algorithm

    NASA Technical Reports Server (NTRS)

    Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.

    1990-01-01

    Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.

  18. STAR Algorithm Integration Team - Facilitating operational algorithm development

    NASA Astrophysics Data System (ADS)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  19. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine

    PubMed Central

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir

    2017-01-01

    As a solution with a high performance-to-cost ratio for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the working temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
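    The KELM used in this record has a closed-form training step, beta = (I/C + K)^-1 T, where K is the kernel matrix over the calibration samples; a minimal regression sketch with an RBF kernel is given below. The regularization parameter C and kernel width gamma are exactly the two hyperparameters the CSA/simplex search would tune; inputs would typically be (sensor reading, temperature, static pressure) and targets the reference differential pressure, though those details are assumptions here.

        import numpy as np

        def rbf_kernel(a, b, gamma):
            d2 = ((a[:, None, :] - b[None, :, :])**2).sum(-1)
            return np.exp(-gamma * d2)

        class KELM:
            """Kernel extreme learning machine for regression; training solves
            (I/C + K) beta = T in closed form."""

            def __init__(self, c=100.0, gamma=1.0):
                self.c, self.gamma = c, gamma

            def fit(self, x, t):
                self.x = np.asarray(x, dtype=float)
                k = rbf_kernel(self.x, self.x, self.gamma)
                self.beta = np.linalg.solve(np.eye(len(self.x)) / self.c + k,
                                            np.asarray(t, dtype=float))
                return self

            def predict(self, x_new):
                k = rbf_kernel(np.asarray(x_new, dtype=float), self.x, self.gamma)
                return k @ self.beta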

  20. Study on Temperature and Synthetic Compensation of Piezo-Resistive Differential Pressure Sensors by Coupled Simulated Annealing and Simplex Optimized Kernel Extreme Learning Machine.

    PubMed

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2017-04-19

    As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted over the working temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance on both the temperature compensation and synthetic compensation problems.
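
    For concreteness, a minimal sketch of the KELM regression core and the simplex tuning stage is given below (Python/NumPy with SciPy); the coupled simulated annealing stage that seeds the simplex search in the paper is omitted, and the calibration data are replaced by a synthetic stand-in, so this is an illustration of the approach rather than the authors' implementation.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.spatial.distance import cdist

      def kelm_fit(X, T, C, gamma):
          # Kernel ELM with an RBF kernel: output weights beta = (I/C + K)^-1 T
          K = np.exp(-gamma * cdist(X, X, "sqeuclidean"))
          return np.linalg.solve(np.eye(len(X)) / C + K, T)

      def kelm_predict(X_train, beta, gamma, X_new):
          return np.exp(-gamma * cdist(X_new, X_train, "sqeuclidean")) @ beta

      def cv_error(log_params, X, T, folds=5):
          # mean-squared cross-validation error for a (log C, log gamma) pair
          C, gamma = np.exp(log_params)
          idx, err = np.arange(len(X)), 0.0
          for f in range(folds):
              val = idx[f::folds]; trn = np.setdiff1d(idx, val)
              beta = kelm_fit(X[trn], T[trn], C, gamma)
              err += np.mean((kelm_predict(X[trn], beta, gamma, X[val]) - T[val]) ** 2)
          return err / folds

      # Synthetic stand-in for calibration data: inputs are (raw sensor output,
      # temperature, static pressure); the target is the reference differential pressure.
      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(200, 3))
      T = X[:, :1] + 0.3 * X[:, 1:2] * X[:, :1] + 0.05 * rng.normal(size=(200, 1))

      # Nelder-Mead simplex refinement of (log C, log gamma).
      res = minimize(cv_error, x0=np.log([10.0, 1.0]), args=(X, T), method="Nelder-Mead")
      C_opt, gamma_opt = np.exp(res.x)
      print("C =", round(C_opt, 3), " gamma =", round(gamma_opt, 3), " CV MSE =", round(res.fun, 6))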

  1. Constraining planetary atmospheric density: application of heuristic search algorithms to aerodynamic modeling of impact ejecta trajectories

    NASA Astrophysics Data System (ADS)

    Liu, Z. Y. C.; Shirzaei, M.

    2015-12-01

    Impact craters on the terrestrial planets are typically surrounded by a continuous ejecta blanket whose initial emplacement occurs via ballistic sedimentation. Following an impact event, a significant volume of material is ejected, and the falling debris surrounds the crater. Aerodynamics governs the flight paths and determines the spatial distribution of these ejecta. Thus, for planets with an atmosphere, the preserved ejecta deposits directly record the interaction of the ejecta with the atmosphere at the time of impact. In this study, we develop a new framework to establish links between the distribution of the ejecta, the age of the impact, and the properties of the local atmosphere. Given the radial extent of the continuous ejecta from the crater, an inverse aerodynamic modeling approach is employed to estimate the local atmospheric drag and density, as well as the lift forces, at the time of impact. Based on earlier studies, we incorporate reasonable value ranges for ejection angle, initial velocity, aerodynamic drag, and lift in the model. In order to solve the trajectory differential equations and obtain the best estimate of atmospheric density and the associated uncertainties, a genetic algorithm is applied. The method is validated using synthetic data sets as well as detailed maps of impact ejecta associated with five fresh martian and two lunar impact craters, with diameters of 20-50 m and 10-20 m, respectively. The estimated atmospheric density for the martian craters ranges from 0.014 to 0.028 kg/m3, consistent with the recent surface atmospheric density measurement of 0.015-0.020 kg/m3. This consistency indicates the robustness of the presented methodology. The inversion results for the lunar craters yield densities of 0.003-0.008 kg/m3, which suggests the inversion is accurate to the second decimal place. This framework will be applied to older martian craters with preserved ejecta blankets, which is expected to constrain the long-term evolution of the martian atmosphere.
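
    As a toy illustration of this inversion idea, the sketch below (Python with SciPy; all physical parameters are assumed placeholder values, and only the density is searched, whereas the study also inverts for ejection angle, velocity, drag and lift) integrates drag-affected ballistic trajectories and recovers the atmospheric density from an "observed" ejecta range with a minimal genetic algorithm.

      import numpy as np
      from scipy.integrate import solve_ivp

      G = 3.71                                  # martian surface gravity, m/s^2

      def ejecta_range(rho, v0, angle, cd_a_over_m=0.02):
          # down-range distance of a fragment launched at speed v0 and elevation angle
          # through air of density rho; cd_a_over_m lumps Cd * area / mass (assumed value)
          def rhs(t, y):
              x, z, vx, vz = y
              drag = 0.5 * rho * cd_a_over_m * np.hypot(vx, vz)
              return [vx, vz, -drag * vx, -G - drag * vz]
          hit = lambda t, y: y[1]
          hit.terminal, hit.direction = True, -1
          sol = solve_ivp(rhs, (0.0, 1e4), [0.0, 1e-6, v0*np.cos(angle), v0*np.sin(angle)],
                          events=hit, max_step=1.0)
          return sol.y_events[0][0][0]

      # "Observed" continuous-ejecta extent generated with a known density, then inverted.
      true_rho, v0, angle = 0.02, 80.0, np.deg2rad(45.0)
      observed = ejecta_range(true_rho, v0, angle)

      rng = np.random.default_rng(1)
      pop = rng.uniform(0.001, 0.1, 40)                            # candidate densities, kg/m^3
      for _ in range(30):
          misfit = np.array([abs(ejecta_range(r, v0, angle) - observed) for r in pop])
          parents = pop[np.argsort(misfit)[:10]]                   # selection of the 10 best
          pop = np.clip(np.repeat(parents, 4) * rng.normal(1.0, 0.05, 40), 1e-4, 0.2)  # mutation
      best = pop[np.argmin([abs(ejecta_range(r, v0, angle) - observed) for r in pop])]
      print("recovered density ~", round(best, 4), "kg/m3 (true:", true_rho, ")")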

  2. Algorithms for output feedback, multiple-model, and decentralized control problems

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.

  3. Field Programmable Gate Array Based Parallel Strapdown Algorithm Design for Strapdown Inertial Navigation Systems

    PubMed Central

    Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua

    2011-01-01

    A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure: all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Different from existing algorithms, the updating rate of the coning and sculling compensation is not tied to the number of gyro incremental angle samples and accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using more gyro incremental angle and accelerometer incremental velocity samples, in order to improve system accuracy. Then, in order to implement the new strapdown algorithm in a single FPGA chip, the parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. It is shown that this parallel strapdown algorithm on the FPGA platform can greatly decrease the execution time of the algorithm, meeting the real-time and high-precision requirements of the system in a high-dynamic environment, relative to the existing implementation on a DSP platform. PMID:22164058
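
    The coning correction itself has a well-known two-sample form; the snippet below (Python/NumPy) shows that classical building block purely as background for the compensation being parallelized here, not as the paper's generalized multi-rate algorithm.

      import numpy as np

      def coning_compensated_increment(alpha1, alpha2):
          # Classical two-sample coning compensation: with two consecutive gyro
          # incremental-angle vectors per attitude update, the equivalent rotation
          # vector is their sum plus the non-commutativity term (2/3) alpha1 x alpha2.
          return alpha1 + alpha2 + (2.0 / 3.0) * np.cross(alpha1, alpha2)

      # example: two gyro increments sampled during coning motion about the x axis
      a1 = np.array([2.0e-4,  1.0e-5, 0.0])      # rad
      a2 = np.array([2.0e-4, -1.0e-5, 0.0])
      print(coning_compensated_increment(a1, a2))  # the small z component is the coning correction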

  4. Motion compensation using origin ensembles in awake small animal positron emission tomography

    NASA Astrophysics Data System (ADS)

    Gillam, John E.; Angelis, Georgios I.; Kyme, Andre Z.; Meikle, Steven R.

    2017-02-01

    In emission tomographic imaging, the stochastic origin ensembles algorithm provides unique information regarding the detected counts given the measured data. Precision in both voxel and region-wise parameters may be determined for a single data set based on the posterior distribution of the count density, allowing uncertainty estimates to be attached to quantitative measures. Uncertainty estimates are of particular importance in awake animal neurological and behavioral studies, for which head motion, unique for each acquired data set, perturbs the measured data. Motion compensation can be conducted when rigid head pose is measured during the scan. However, errors in the pose measurements used for compensation can degrade the data and hence the quantitative outcomes. In this investigation motion compensation and detector resolution models were incorporated into the basic origin ensembles algorithm and an efficient approach to computation was developed. The approach was validated against maximum likelihood expectation maximization and tested using simulated data. The resultant algorithm was then used to analyse quantitative uncertainty in regional activity estimates arising from changes in pose measurement precision. Finally, the posterior covariance acquired from a single data set was used to describe correlations between regions of interest, providing information about pose measurement precision that may be useful in system analysis and design. The investigation demonstrates the use of origin ensembles as a powerful framework for evaluating the statistical uncertainty of voxel and regional estimates. While in this investigation rigid motion was considered in the context of awake animal PET, the extension to arbitrary motion may provide clinical utility where respiratory or cardiac motion perturbs the measured data.

  5. Characteristics of compensated hypogonadism in patients with sexual dysfunction.

    PubMed

    Corona, Giovanni; Maseroli, Elisa; Rastrelli, Giulia; Sforza, Alessandra; Forti, Gianni; Mannucci, Edoardo; Maggi, Mario

    2014-07-01

    In the last few years, a view that subclinical endocrine disorders represent milder forms of the clinically overt disease has emerged. Accordingly, it has been proposed that compensated hypogonadism represents a genuine clinical subset of late-onset hypogonadism. The aim of the present study is to investigate the associations of compensated hypogonadism with particular clinical and psychological characteristics of male subjects complaining of sexual dysfunction. After excluding documented genetic causes of hypogonadism, an unselected consecutive series of 4,173 patients consulting our unit for sexual dysfunction was studied. Compensated hypogonadism was identified according to the European Male Ageing study criteria: total testosterone ≥10.5 nmol/L and luteinizing hormone >9.4 U/L. Several hormonal, biochemical, and instrumental (penile Doppler ultrasound) parameters were studied, along with results of the Structured Interview on Erectile Dysfunction (SIEDY) and ANDROTEST. One hundred seventy (4.1%) subjects had compensated hypogonadism, whereas 827 (19.8%) had overt hypogonadism. After adjustment for confounding factors, no specific sexual symptoms were associated with compensated hypogonadism. However, compensated hypogonadism individuals more often reported psychiatric symptoms, as detected by Middlesex Hospital Questionnaire score, when compared with both eugonadal and overt hypogonadal subjects (adjusted odds ratios = 1.018 [1.005;1.031] and 1.014 [1.001;1.028], respectively; both P < 0.005). In addition, subjects with compensated or overt hypogonadism had an increased predicted risk of cardiovascular events (as assessed by Progetto Cuore risk algorithm) when compared with eugonadal individuals. Accordingly, mortality related to major adverse cardiovascular events (MACEs), but not MACE incidence, was significantly higher in subjects with both compensated and overt hypogonadism when compared with eugonadal subjects. The present data do not support

  6. A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors.

    PubMed

    Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li

    2009-09-28

    A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula describing the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived from atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful when applying DLCWFCs to atmospheric turbulence correction for large-aperture telescopes.

  7. Partial compensation interferometry measurement system for parameter errors of conicoid surface

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Li, Tengfei; Hu, Yao; Wang, Shaopu; Ning, Yan; Chen, Zhuo

    2018-06-01

    Surface parameters, such as the vertex radius of curvature and the conic constant, are used to describe the shape of an aspheric surface. Surface parameter errors (SPEs) are deviations in these parameters that affect the optical characteristics of an aspheric surface. Precise measurement of SPEs is critical in the evaluation of optical surfaces. In this paper, a partial compensation interferometry measurement system for the SPEs of a conicoid surface is proposed based on the theory of slope asphericity and the best compensation distance. The system is developed to measure the SPE-caused change in the best compensation distance and the SPE-caused surface shape change, and then calculate the SPEs with an iterative algorithm to improve accuracy. Experimental results indicate that the average relative measurement accuracy of the proposed system could be better than 0.02% for the vertex radius of curvature error and 2% for the conic constant error.

  8. Generalized algebraic scene-based nonuniformity correction algorithm.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2005-02-01

    A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization requires neither a blackbody calibration target nor a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
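
    A stripped-down sketch of the underlying idea, assuming the simplest case of a scene translating by exactly one pixel per frame (the published algorithm handles arbitrary one- or two-dimensional shifts), is given below in Python/NumPy on synthetic data: differences between registered frame pairs isolate the bias gradient, which is then integrated to recover the bias pattern up to an offset.

      import numpy as np

      rng = np.random.default_rng(0)
      H, W, T = 64, 96, 50

      # Synthetic scene that translates horizontally by one pixel per frame,
      # plus an additive fixed-pattern bias and a little temporal noise.
      base = np.cumsum(np.cumsum(rng.normal(size=(H, W + T)), 0), 1) / 50.0
      bias = rng.normal(scale=2.0, size=(H, W))
      frames = np.stack([base[:, t:t + W] + bias + 0.05 * rng.normal(size=(H, W))
                         for t in range(T)])

      # For a one-pixel shift, y_t(i, j) - y_{t+1}(i, j-1) = b(i, j) - b(i, j-1) + noise,
      # so averaging over frame pairs gives the horizontal bias gradient,
      # which a cumulative sum turns back into the bias (up to a per-row constant).
      grad = (frames[:-1, :, 1:] - frames[1:, :, :-1]).mean(axis=0)
      b_est = np.zeros((H, W))
      b_est[:, 1:] = np.cumsum(grad, axis=1)
      b_est -= b_est.mean(axis=1, keepdims=True)

      truth = bias - bias.mean(axis=1, keepdims=True)
      print("residual FPN std after correction:", round(float(np.std(truth - b_est)), 3))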

  9. An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP

    PubMed Central

    Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin

    2015-01-01

    In this study, the effects of non-sidelooking airborne radar clutter dispersion on space-time adaptive processing (STAP) are considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. In order to reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the parameters required to compensate for the misalignment of the clutter spectral centers. Simulation results demonstrating the effectiveness of the proposed algorithm are presented. PMID:26053755

  10. Motion Estimation and Compensation Strategies in Dynamic Computerized Tomography

    NASA Astrophysics Data System (ADS)

    Hahn, Bernadette N.

    2017-12-01

    A main challenge in computerized tomography consists in imaging moving objects. Temporal changes during the measuring process lead to inconsistent data sets, and applying standard reconstruction techniques causes motion artefacts which can severely impede reliable diagnostics. Therefore, novel reconstruction techniques are required which compensate for the dynamic behavior. This article builds on recent results from a microlocal analysis of the dynamic setting, which enable us to formulate efficient analytic motion compensation algorithms for contour extraction. Since these methods require information about the dynamic behavior, we further introduce a motion estimation approach which determines parameters of affine and certain non-affine deformations directly from measured motion-corrupted Radon data. Our methods are illustrated with numerical examples for both types of motion.

  11. Atmospheric correction over coastal waters using multilayer neural networks

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Li, W.; Charles, G.; Jamet, C.; Zibordi, G.; Schroeder, T.; Stamnes, K. H.

    2017-12-01

    Standard atmospheric correction (AC) algorithms work well in open ocean areas where the water inherent optical properties (IOPs) are correlated with pigmented particles. However, the IOPs of turbid coastal waters may independently vary with pigmented particles, suspended inorganic particles, and colored dissolved organic matter (CDOM). In turbid coastal waters standard AC algorithms often exhibit large inaccuracies that may lead to negative water-leaving radiances (Lw) or remote sensing reflectance (Rrs). We introduce a new atmospheric correction algorithm for coastal waters based on a multilayer neural network (MLNN) machine learning method. We use a coupled atmosphere-ocean radiative transfer model to simulate the Rayleigh-corrected radiance (Lrc) at the top of the atmosphere (TOA) and the Rrs just above the surface simultaneously, and train a MLNN to derive the aerosol optical depth (AOD) and Rrs directly from the TOA Lrc. The SeaDAS NIR algorithm, the SeaDAS NIR/SWIR algorithm, and the MODIS version of the Case 2 regional water - CoastColour (C2RCC) algorithm are included in the comparison with AERONET-OC measurements. The results show that the MLNN algorithm significantly improves retrieval of normalized Lw in blue bands (412 nm and 443 nm) and yields minor improvements in green and red bands. These results indicate that the MLNN algorithm is suitable for application in turbid coastal waters. Application of the MLNN algorithm to MODIS Aqua images in several coastal areas also shows that it is robust and resilient to contamination due to sunglint or adjacency effects of land and cloud edges. The MLNN algorithm is very fast once the neural network has been properly trained and is therefore suitable for operational use. A significant advantage of the MLNN algorithm is that it does not need SWIR bands, which implies significant cost reduction for dedicated OC missions. A recent effort has been made to extend the MLNN AC algorithm to extreme atmospheric conditions
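
    As a rough sketch of the retrieval step only, the example below trains a small multilayer network on a toy forward model (an assumed aerosol term plus transmitted water-leaving reflectance, standing in for the coupled atmosphere-ocean radiative transfer simulations used in the paper) to map Rayleigh-corrected TOA reflectances to AOD and Rrs jointly; scikit-learn's MLPRegressor is used for brevity.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n, wl = 5000, np.linspace(412.0, 869.0, 8)              # 8 visible/NIR bands, nm

      # Toy forward model: Rayleigh-corrected TOA reflectance = aerosol reflectance
      # (crude spectral power law in AOD) + diffuse transmittance * pi * Rrs + noise.
      aod = rng.uniform(0.01, 0.5, n)
      rrs = rng.uniform(0.0, 0.02, (n, wl.size))
      rho_aer = 0.1 * aod[:, None] * (wl / 550.0) ** -1.2
      trans = np.exp(-0.5 * aod)[:, None]
      rho_rc = rho_aer + np.pi * trans * rrs + 1e-4 * rng.normal(size=(n, wl.size))

      X, Y = rho_rc, np.column_stack([aod, rrs])              # retrieve AOD and Rrs together
      net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      net.fit(X[:4000], Y[:4000])
      pred = net.predict(X[4000:])
      print("AOD RMSE:", round(float(np.sqrt(np.mean((pred[:, 0] - aod[4000:]) ** 2))), 4))
      print("Rrs RMSE:", round(float(np.sqrt(np.mean((pred[:, 1:] - rrs[4000:]) ** 2))), 5))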

  12. EUV multilayer defect compensation (MDC) by absorber pattern modification: from theory to wafer validation

    NASA Astrophysics Data System (ADS)

    Pang, Linyong; Hu, Peter; Satake, Masaki; Tolani, Vikram; Peng, Danping; Li, Ying; Chen, Dongxue

    2011-11-01

    According to the ITRS roadmap, mask defects are among the top technical challenges to introduce extreme ultraviolet (EUV) lithography into production. Making a multilayer defect-free extreme ultraviolet (EUV) blank is not possible today, and is unlikely to happen in the next few years. This means that EUV must work with multilayer defects present on the mask. The method proposed by Luminescent is to compensate effects of multilayer defects on images by modifying the absorber patterns. The effect of a multilayer defect is to distort the images of adjacent absorber patterns. Although the defect cannot be repaired, the images may be restored to their desired targets by changing the absorber patterns. This method was first introduced in our paper at BACUS 2010, which described a simple pixel-based compensation algorithm using a fast multilayer model. The fast model made it possible to complete the compensation calculations in seconds, instead of days or weeks required for rigorous Finite Domain Time Difference (FDTD) simulations. Our SPIE 2011 paper introduced an advanced compensation algorithm using the Level Set Method for 2D absorber patterns. In this paper the method is extended to consider process window, and allow repair tool constraints, such as permitting etching but not deposition. The multilayer defect growth model is also enhanced so that the multilayer defect can be "inverted", or recovered from the top layer profile using a calibrated model.

  13. A dimension reduction method for flood compensation operation of multi-reservoir system

    NASA Astrophysics Data System (ADS)

    Jia, B.; Wu, S.; Fan, Z.

    2017-12-01

    Cooperative compensation operations of multiple reservoirs coping with an uncontrolled flood play a vital role in real-time flood mitigation. This paper proposes a reservoir flood compensation operation index (ResFCOI), formed from elements of flood control storage, flood inflow volume, flood transmission time, and cooperation operations period; it then establishes a flood cooperation compensation operations model of a multi-reservoir system, uses the ResFCOI to determine the computational order of the reservoirs, and finally applies the differential evolution algorithm to compute the flood compensation optimization for each single reservoir in turn, so that a dimension reduction method is formed to reduce computational complexity. The Shiguan River Basin, with two large reservoirs and an extensive uncontrolled flood area, is used as a case study. Results show that (a) the reservoirs' flood discharges and the uncontrolled flood are superimposed at Jiangjiaji Station such that the resulting flood peak flow is as small as possible; (b) cooperation compensation operations slightly increase the usage of flood storage capacity in the reservoirs when compared to rule-based operations; and (c) computing a cooperation compensation operations scheme takes 50 seconds on average. The dimension reduction method for guiding flood compensation operations of a multi-reservoir system allows each reservoir to adjust its flood discharge strategy dynamically according to the magnitude and pattern of the uncontrolled flood, so as to mitigate the downstream flood disaster.
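
    The sketch below (Python with SciPy) illustrates only the dimension-reduction idea: instead of optimizing both reservoirs' release schedules jointly, each reservoir is optimized in turn with differential evolution against the flow already committed downstream. The hydrographs, lags, storages, and six-hour release blocks are invented placeholder values, and the reservoirs are simply processed in index order rather than ranked by the ResFCOI.

      import numpy as np
      from scipy.optimize import differential_evolution

      T = 48                                                    # hourly time steps
      t = np.arange(T)
      uncontrolled = 800.0 * np.exp(-0.5 * ((t - 20) / 5.0) ** 2)          # m3/s
      inflows = [600.0 * np.exp(-0.5 * ((t - 14) / 4.0) ** 2),
                 500.0 * np.exp(-0.5 * ((t - 16) / 4.0) ** 2)]
      lags = [6, 4]                                             # routing lag to the station, h
      capacity = [30e6, 25e6]                                   # flood-control storage, m3

      def routed(release, lag):
          out = np.zeros(T)
          out[lag:] = release[:T - lag]
          return out

      def expand(blocks):                                       # 8 six-hour blocks -> hourly series
          return np.repeat(blocks, T // len(blocks))

      def penalized_peak(blocks, r, committed):
          # peak flow at the control station plus penalties for violating storage limits
          release = expand(blocks)
          storage = np.cumsum(inflows[r] - release) * 3600.0
          penalty = 1e3 * (max(0.0, storage.max() - capacity[r]) + max(0.0, -storage.min())) / 1e6
          return (committed + routed(release, lags[r])).max() + penalty

      committed = uncontrolled.copy()
      for r in range(2):                                        # dimension reduction: one reservoir at a time
          bounds = [(0.0, float(inflows[r].max()))] * 8
          res = differential_evolution(penalized_peak, bounds, args=(r, committed), seed=1, maxiter=200)
          committed = committed + routed(expand(res.x), lags[r])
          print(f"after reservoir {r}: peak flow {committed.max():.0f} m3/s")

      no_op = uncontrolled + sum(routed(q, l) for q, l in zip(inflows, lags))
      print(f"peak without compensation operations: {no_op.max():.0f} m3/s")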

  14. Birefringence dispersion compensation demodulation algorithm for polarized low-coherence interferometry.

    PubMed

    Wang, Shuang; Liu, Tiegen; Jiang, Junfeng; Liu, Kun; Yin, Jinde; Wu, Fan

    2013-08-15

    A demodulation algorithm based on the birefringence dispersion characteristics of a polarized low-coherence interferometer is proposed. With the birefringence dispersion parameter taken into account, a mathematical model of the polarized low-coherence interference fringes is established and used to extract the phase shift between the measured coherence envelope center and the zero-order fringe, which eliminates the interferometric 2π ambiguity in locating the zero-order fringe. A pressure measurement experiment using an optical fiber Fabry-Perot pressure sensor was carried out to verify the effectiveness of the proposed algorithm. The experimental results showed a demodulation precision of 0.077 kPa over a range of 210 kPa, a 23-fold improvement over the traditional envelope detection method.

  15. Mid-frequency MTF compensation of optical sparse aperture system.

    PubMed

    Zhou, Chenghao; Wang, Zhile

    2018-03-19

    Optical sparse aperture (OSA) systems can greatly improve the spatial resolution of an optical system. However, because their sub-apertures are dispersed and sparse, their mid-frequency modulation transfer function (MTF) is significantly lower than that of a single-aperture system. The main focus of this paper is the mid-frequency MTF compensation of optical sparse aperture systems. First, the mechanisms behind mid-frequency MTF reduction and loss in optical sparse aperture systems are analyzed. Taking the fill factor as the organizing parameter, a method for handling the mid-frequency MTF reduction that occurs at large fill factors and a method for compensating the mid-frequency MTF that is missing at small fill factors are given. For mid-frequency MTF reduction, a spatially variant image restoration method is proposed to recover the mid-frequency information in the image; for mid-frequency MTF loss, two images obtained by two systems are fused to compensate for the missing mid-frequency information in the optical sparse aperture image. The feasibility of both methods is analyzed, and numerical simulations of the systems and algorithms for the two cases are presented using Zemax and Matlab. The results demonstrate that these two methods can effectively compensate the mid-frequency MTF of an OSA system.

  16. PMD compensation in multilevel coded-modulation schemes with coherent detection using BLAST algorithm and iterative polarization cancellation.

    PubMed

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2008-09-15

    We present two PMD compensation schemes suitable for use in multilevel (M>or=2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second scheme is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded-OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding based OFDM by 1 dB at a BER of 10(-9) and provide two times higher spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.

  17. Experimental validation of phase-only pre-compensation over 494  m free-space propagation.

    PubMed

    Brady, Aoife; Berlich, René; Leonhard, Nina; Kopf, Teresa; Böttner, Paul; Eberhardt, Ramona; Reinlein, Claudia

    2017-07-15

    It is anticipated that ground-to-geostationary orbit (GEO) laser communication will benefit from pre-compensation of atmospheric turbulence for laser beam propagation through the atmosphere. Theoretical simulations and laboratory experiments have determined its feasibility; extensive free-space experimental validation has, however, yet to be fulfilled. Therefore, we designed and implemented an adaptive optical (AO)-box which pre-compensates an outgoing laser beam (uplink) using the measurements of an incoming beam (downlink). The setup was designed to approximate the baseline scenario over a horizontal test range of 0.5 km and consisted of a ground terminal with the AO-box and a simplified approximation of a satellite terminal. Our results confirmed that we could focus the uplink beam on the satellite terminal using AO under a point-ahead angle of 28 μrad. Furthermore, we demonstrated a considerable increase in the intensity received at the satellite. These results are further testimony to AO pre-compensation being a viable technique to enhance Earth-to-GEO optical communication.

  18. Automatic relative RPC image model bias compensation through hierarchical image matching for improving DEM quality

    NASA Astrophysics Data System (ADS)

    Noh, Myoung-Jong; Howat, Ian M.

    2018-02-01

    The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.

  19. A New Technique for Compensating Joint Limits in a Robot Manipulator

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan; Hickman, Andre; Guo, Ten-Huei

    1996-01-01

    A new robust, optimal, adaptive technique for compensating rate and position limits in the joints of a six degree-of-freedom elbow manipulator is presented. In this new algorithm, the unmet demand as a result of actuator saturation is redistributed among the remaining unsaturated joints. The scheme is used to compensate for inadequate path planning, problems such as joint limiting, joint freezing, or even obstacle avoidance, where a desired position and orientation are not attainable due to an unrealizable joint command. Once a joint encounters a limit, supplemental commands are sent to other joints to best track, according to a selected criterion, the desired trajectory.

  20. Hysteresis compensation of piezoelectric deformable mirror based on Prandtl-Ishlinskii model

    NASA Astrophysics Data System (ADS)

    Ma, Jianqiang; Tian, Lei; Li, Yan; Yang, Zongfeng; Cui, Yuguo; Chu, Jiaru

    2018-06-01

    Hysteresis of piezoelectric deformable mirror (DM) reduces the closed-loop bandwidth and the open-loop correction accuracy of adaptive optics (AO) systems. In this work, a classical Prandtl-Ishlinskii (PI) model is employed to model the hysteresis behavior of a unimorph DM with 20 actuators. A modified control algorithm combined with the inverse PI model is developed for piezoelectric DMs. With the help of PI model, the hysteresis of the DM was reduced effectively from about 9% to 1%. Furthermore, open-loop regenerations of low-order aberrations with or without hysteresis compensation were carried out. The experimental results demonstrate that the regeneration accuracy with PI model compensation is significantly improved.
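
    To make the model concrete, here is a small sketch (Python/NumPy) of a Prandtl-Ishlinskii model built from play operators, with a numerical (bisection-based) inverse used as the compensator; the thresholds and weights are assumed example values, and the paper's inverse PI model is replaced here by the simpler numerical inversion.

      import numpy as np

      class PrandtlIshlinskii:
          # Discrete PI hysteresis model: a weighted sum of backlash (play) operators.
          def __init__(self, thresholds, weights):
              self.r = np.asarray(thresholds, float)
              self.w = np.asarray(weights, float)
              self.state = np.zeros_like(self.r)                # play-operator memories

          def output(self, u, commit=False):
              s = np.maximum(u - self.r, np.minimum(u + self.r, self.state))
              if commit:
                  self.state = s
              return float(self.w @ s)

      def inverse_drive(model, target, lo=-10.0, hi=10.0, iters=40):
          # Compensator: find the drive whose model output equals the target.  With positive
          # weights the PI output is monotone in the current input, so bisection is sufficient.
          for _ in range(iters):
              mid = 0.5 * (lo + hi)
              if model.output(mid) < target:
                  lo = mid
              else:
                  hi = mid
          return 0.5 * (lo + hi)

      params = ([0.0, 0.5, 1.0, 1.5], [1.0, 0.4, 0.3, 0.2])      # assumed thresholds / weights
      plant, comp = PrandtlIshlinskii(*params), PrandtlIshlinskii(*params)

      desired = 3.0 * np.sin(np.linspace(0.0, 4.0 * np.pi, 400))
      achieved = []
      for d in desired:
          u = inverse_drive(comp, d)                             # feedforward hysteresis compensation
          comp.output(u, commit=True)                            # keep the compensator state in sync
          achieved.append(plant.output(u, commit=True))          # drive the (identical) plant model
      print("max open-loop tracking error:", round(float(np.max(np.abs(np.array(achieved) - desired))), 6))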

  1. The wavefront compensation of free space optics utilizing micro corner-cube-reflector arrays

    NASA Astrophysics Data System (ADS)

    You, Shengzui; Yang, Guowei; Li, Changying; Bi, Meihua; Fan, Bing

    2018-01-01

    The wavefront compensation effect of micro corner-cube-reflector arrays (MCCRAs) in modulating retroreflector (MRR) free-space optical (FSO) link is investigated theoretically and experimentally. Triangular aperture of MCCRAs has been optically characterized and studied in an indoor atmospheric turbulence channel. The use of the MCCRAs instead of a single corner-cube reflector (CCR) as the reflective device is found to improve dramatically the quality of the reflected beam spot. We draw a conclusion that the MCCRAs can in principle yield a powerful wavefront compensation in MRR FSO communication links.

  2. A homotopy algorithm for digital optimal projection control GASD-HADOC

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.

    1993-01-01

    The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard closed-form solutions exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such approach is a homotopy method based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.

  3. GIFTS SM EDU Data Processing and Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
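
    The radiometric calibration step can be illustrated with the standard two-point (ambient/hot blackbody) complex calibration; the sketch below (Python/NumPy/SciPy) uses a synthetic instrument with an assumed complex responsivity and offset, so it demonstrates the principle rather than the GIFTS SM EDU processing chain itself.

      import numpy as np
      from scipy.constants import h, c, k

      def planck_wn(sigma, T):
          # Planck radiance per unit wavenumber [W / (m^2 sr cm^-1)], sigma in cm^-1
          s = sigma * 100.0                                       # cm^-1 -> m^-1
          return 1e2 * 2.0 * h * c**2 * s**3 / np.expm1(h * c * s / (k * T))

      def calibrate(C_scene, C_hot, C_amb, sigma, T_hot, T_amb):
          # Two-point complex calibration: the complex ratio cancels the instrument
          # responsivity and offset; the blackbody Planck radiances set the absolute scale.
          B_hot, B_amb = planck_wn(sigma, T_hot), planck_wn(sigma, T_amb)
          return np.real((C_scene - C_amb) / (C_hot - C_amb)) * (B_hot - B_amb) + B_amb

      sigma = np.linspace(600.0, 1200.0, 601)                     # LWIR band, cm^-1
      resp = (2.0 + 0.3j) * np.exp(-((sigma - 900.0) / 400.0) ** 2)   # assumed complex responsivity
      offset = 0.02 * resp                                        # assumed instrument self-emission term
      T_hot, T_amb, T_scene = 330.0, 290.0, 260.0
      spectrum = lambda T: resp * planck_wn(sigma, T) + offset    # simulated uncalibrated spectra
      L = calibrate(spectrum(T_scene), spectrum(T_hot), spectrum(T_amb), sigma, T_hot, T_amb)
      print("max calibration error:", float(np.abs(L - planck_wn(sigma, T_scene)).max()))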

  4. The influence of leaf-atmosphere NH3(g) exchange on the isotopic composition of nitrogen in plants and the atmosphere.

    PubMed

    Johnson, Jennifer E; Berry, Joseph A

    2013-10-01

    The distribution of nitrogen isotopes in the biosphere has the potential to offer insights into the past, present and future of the nitrogen cycle, but it is challenging to unravel the processes controlling patterns of mixing and fractionation. We present a mathematical model describing a previously overlooked process: nitrogen isotope fractionation during leaf-atmosphere NH3(g) exchange. The model predicts that when leaf-atmosphere exchange of NH3(g) occurs in a closed system, the atmospheric reservoir of NH3(g) equilibrates at a concentration equal to the ammonia compensation point and an isotopic composition 8.1‰ lighter than nitrogen in protein. In an open system, when atmospheric concentrations of NH3(g) fall below or rise above the compensation point, protein can be isotopically enriched by net efflux of NH3(g) or depleted by net uptake. Comparison of model output with existing measurements in the literature suggests that this process contributes to variation in the isotopic composition of nitrogen in plants as well as NH3(g) in the atmosphere, and should be considered in future analyses of nitrogen isotope circulation. The matrix-based modelling approach that is introduced may be useful for quantifying isotope dynamics in other complex systems that can be described by first-order kinetics. © 2013 John Wiley & Sons Ltd.

  5. Fixman compensating potential for general branched molecules

    NASA Astrophysics Data System (ADS)

    Jain, Abhinandan; Kandel, Saugat; Wagner, Jeffrey; Larsen, Adrien; Vaidehi, Nagarajan

    2013-12-01

    The technique of constraining high frequency modes of molecular motion is an effective way to increase simulation time scale and improve conformational sampling in molecular dynamics simulations. However, it has been shown that constraints on higher frequency modes such as bond lengths and bond angles stiffen the molecular model, thereby introducing systematic biases in the statistical behavior of the simulations. Fixman proposed a compensating potential to remove such biases in the thermodynamic and kinetic properties calculated from dynamics simulations. Previous implementations of the Fixman potential have been limited to only short serial chain systems. In this paper, we present a spatial operator algebra based algorithm to calculate the Fixman potential and its gradient within constrained dynamics simulations for branched topology molecules of any size. Our numerical studies on molecules of increasing complexity validate our algorithm by demonstrating recovery of the dihedral angle probability distribution function for systems that range in complexity from serial chains to protein molecules. We observe that the Fixman compensating potential recovers the free energy surface of a serial chain polymer, thus annulling the biases caused by constraining the bond lengths and bond angles. The inclusion of Fixman potential entails only a modest increase in the computational cost in these simulations. We believe that this work represents the first instance where the Fixman potential has been used for general branched systems, and establishes the viability for its use in constrained dynamics simulations of proteins and other macromolecules.

  6. 38 CFR 21.3023 - Nonduplication; pension, compensation, and dependency and indemnity compensation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., compensation, and dependency and indemnity compensation. 21.3023 Section 21.3023 Pensions, Bonuses, and... Nonduplication; pension, compensation, and dependency and indemnity compensation. (a) Child; age 18. A child who... dependency and indemnity compensation based on school attendance must elect whether he or she will receive...

  7. 38 CFR 21.3023 - Nonduplication; pension, compensation, and dependency and indemnity compensation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., compensation, and dependency and indemnity compensation. 21.3023 Section 21.3023 Pensions, Bonuses, and... Nonduplication; pension, compensation, and dependency and indemnity compensation. (a) Child; age 18. A child who... dependency and indemnity compensation based on school attendance must elect whether he or she will receive...

  8. 38 CFR 21.3023 - Nonduplication; pension, compensation, and dependency and indemnity compensation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., compensation, and dependency and indemnity compensation. 21.3023 Section 21.3023 Pensions, Bonuses, and... Nonduplication; pension, compensation, and dependency and indemnity compensation. (a) Child; age 18. A child who... dependency and indemnity compensation based on school attendance must elect whether he or she will receive...

  9. 38 CFR 21.3023 - Nonduplication; pension, compensation, and dependency and indemnity compensation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., compensation, and dependency and indemnity compensation. 21.3023 Section 21.3023 Pensions, Bonuses, and... Nonduplication; pension, compensation, and dependency and indemnity compensation. (a) Child; age 18. A child who... dependency and indemnity compensation based on school attendance must elect whether he or she will receive...

  10. 38 CFR 21.3023 - Nonduplication; pension, compensation, and dependency and indemnity compensation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., compensation, and dependency and indemnity compensation. 21.3023 Section 21.3023 Pensions, Bonuses, and... Nonduplication; pension, compensation, and dependency and indemnity compensation. (a) Child; age 18. A child who... dependency and indemnity compensation based on school attendance must elect whether he or she will receive...

  11. Compensation of significant parametric uncertainties using sliding mode online learning

    NASA Astrophysics Data System (ADS)

    Schnetter, Philipp; Kruger, Thomas

    An augmented nonlinear inverse dynamics (NID) flight control strategy using sliding mode online learning for a small unmanned aircraft system (UAS) is presented. Because parameter identification for this class of aircraft often is not valid throughout the complete flight envelope, aerodynamic parameters used for model-based control strategies may show significant deviations. For the concept of feedback linearization this leads to inversion errors that, in combination with the distinctive susceptibility of small UAS to atmospheric turbulence, pose a demanding control task for these systems. In this work an adaptive flight control strategy using feedforward neural networks for counteracting such nonlinear effects is augmented with the concept of sliding mode control (SMC). SMC learning is derived from variable structure theory and considers a neural network and its training as a control problem. It is shown that by dynamically calculating the learning rates, stability can be guaranteed, thus increasing the robustness against external disturbances and system failures. With the resulting higher speed of convergence, a wide range of simultaneously occurring disturbances can be compensated. The SMC-based flight controller is tested and compared to the standard gradient descent (GD) backpropagation algorithm under the influence of significant model uncertainties and system failures.

  12. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1995-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear-sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.

  13. TIGER: Development of Thermal Gradient Compensation Algorithms and Techniques

    NASA Technical Reports Server (NTRS)

    Hereford, James; Parker, Peter A.; Rhew, Ray D.

    2004-01-01

    In a wind tunnel facility, the direct measurement of forces and moments induced on the model are performed by a force measurement balance. The measurement balance is a precision-machined device that has strain gages at strategic locations to measure the strain (i.e., deformations) due to applied forces and moments. The strain gages convert the strain (and hence the applied force) to an electrical voltage that is measured by external instruments. To address the problem of thermal gradients on the force measurement balance NASA-LaRC has initiated a research program called TIGER - Thermally-Induced Gradients Effects Research. The ultimate goals of the TIGER program are to: (a) understand the physics of the thermally-induced strain and its subsequent impact on load measurements and (b) develop a robust thermal gradient compensation technique. This paper will discuss the impact of thermal gradients on force measurement balances, specific aspects of the TIGER program (the design of a special-purpose balance, data acquisition and data analysis challenges), and give an overall summary.

  14. Single neural adaptive controller and neural network identifier based on PSO algorithm for spherical actuators with 3D magnet array

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Lu; Zhu, Bo; Zhang, Jingying; Jiao, Zongxia

    2017-10-01

    A permanent magnet spherical actuator (PMSA) is a multi-variable, inter-axis coupled nonlinear system, which unavoidably complicates its motion control implementation. Uncertainties such as external load, ball-bearing friction torque, and manufacturing errors also influence motion performance significantly. Therefore, the objective of this paper is to propose a controller based on a single neural adaptive (SNA) algorithm and a neural network (NN) identifier optimized with a particle swarm optimization (PSO) algorithm to improve the motion stability of a PMSA with three-dimensional magnet arrays. The dynamic model and computed torque model are formulated for the spherical actuator, and a dynamic decoupling control algorithm is developed. By utilizing the global-optimization property of the PSO algorithm, the NN identifier is trained to avoid locally optimal solutions and achieve high-precision compensation of the uncertainties. The employment of the SNA controller helps to reduce the effect of compensation errors and keeps the system stable even if there is a difference between the compensations and the uncertainties due to external disturbances. A simulation model is established, and experiments are conducted on the research prototype to validate the proposed control algorithm. The amplitude of the parameter perturbation is set to 5%, 10%, and 15%, respectively. The strong robustness of the proposed hybrid algorithm is validated by the extensive simulation data. It is shown that the proposed algorithm can effectively compensate for the influence of uncertainties and eliminate the effect of inter-axis couplings of the spherical actuator.

  15. Application of distance-dependent resolution compensation and post-reconstruction filtering for myocardial SPECT

    NASA Astrophysics Data System (ADS)

    Hutton, Brian F.; Lau, Yiu H.

    1998-06-01

    Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset size 2 and 4, with/without 3D compensation for detector response (CDR). Also post-reconstruction filtering (PRF) was performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than improved noise with no reduction in recovery coefficient for myocardium but the effect was less when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.

  16. Health Insurance Costs and Employee Compensation: Evidence from the National Compensation Survey.

    PubMed

    Anand, Priyanka

    2017-12-01

    This paper examines the relationship between rising health insurance costs and employee compensation. I estimate the extent to which total compensation decreases with a rise in health insurance costs and decompose these changes in compensation into adjustments in wages, non-health fringe benefits, and employee contributions to health insurance premiums. I examine this relationship using the National Compensation Survey, a panel dataset on compensation and health insurance for a sample of establishments across the USA. I find that total hourly compensation is reduced by $0.52 for each dollar increase in health insurance costs. This reduction in total compensation is primarily in the form of higher employee premium contributions, and there is no evidence of a change in wages and non-health fringe benefits. These findings show that workers are absorbing at least part of the increase in health insurance costs through lower compensation and highlight the importance of examining total compensation, and not just wages, when examining the relationship between health insurance costs and employee compensation. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Inhomogeneity compensation for MR brain image segmentation using a multi-stage FCM-based approach.

    PubMed

    Szilágyi, László; Szilágyi, Sándor M; Dávid, László; Benyó, Zoltán

    2008-01-01

    Intensity inhomogeneity or intensity non-uniformity (INU) is an undesired phenomenon that represents the main obstacle for MR image segmentation and registration methods. Various techniques have been proposed to eliminate or compensate for the INU, most of which are embedded into clustering algorithms. This paper proposes a multiple-stage fuzzy c-means (FCM) based algorithm for the estimation and compensation of the slowly varying additive or multiplicative noise, supported by a pre-filtering technique for Gaussian and impulse noise elimination. The slowly varying behavior of the bias or gain field is assured by a smoothing filter that performs context-dependent averaging based on a morphological criterion. Experiments using 2-D synthetic phantoms and real MR images show that the proposed method provides accurate segmentation. The produced segmentation and fuzzy membership values can serve as excellent support for 3-D registration and segmentation techniques.

  18. Noise-cancellation-based nonuniformity correction algorithm for infrared focal-plane arrays.

    PubMed

    Godoy, Sebastián E; Pezoa, Jorge E; Torres, Sergio N

    2008-10-10

    The spatial fixed-pattern noise (FPN) inherently generated in infrared (IR) imaging systems severely compromises the quality of the acquired imagery, even making such images inappropriate for some applications. The FPN refers to the inability of the photodetectors in the focal-plane array to render a uniform output image when a uniform-intensity scene is being imaged. We present a noise-cancellation-based algorithm that compensates for the additive component of the FPN. The proposed method relies on the assumption that a source of noise correlated to the additive FPN is available to the IR camera. An important feature of the algorithm is that all the calculations are reduced to a simple equation, which allows for the bias compensation of the raw imagery. The algorithm performance is tested using real IR image sequences and is compared to some classical methodologies. (c) 2008 Optical Society of America

  19. A Novel Algorithm Combining Finite State Method and Genetic Algorithm for Solving Crude Oil Scheduling Problem

    PubMed Central

    Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun

    2014-01-01

    A hybrid optimization algorithm combining the finite state method (FSM) and the genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that we can generate promising substructures or partial solutions by using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for conducting simulations. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031

  20. An effective temperature compensation approach for ultrasonic hydrogen sensors

    NASA Astrophysics Data System (ADS)

    Tan, Xiaolong; Li, Min; Arsad, Norhana; Wen, Xiaoyan; Lu, Haifei

    2018-03-01

    Hydrogen is a promising clean energy resource with wide application prospects; however, leakage of hydrogen gas poses a serious safety issue, so measurement of its concentration is of great significance. In a traditional approach to ultrasonic hydrogen sensing, a temperature drift of 0.1 °C results in a concentration error of about 250 ppm, which is intolerable for trace-level gas sensing. In order to eliminate the influence of temperature drift, we propose a feasible approach, named the linear compensation algorithm, which utilizes the linear relationship between the pulse count and temperature to compensate for the pulse count error (ΔN) caused by temperature drift. Experimental results demonstrate that our proposed approach is capable of improving the measurement accuracy and can easily detect sub-100 ppm hydrogen concentrations under variable temperature conditions.
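
    A minimal numerical sketch of the linear compensation idea (Python/NumPy, with invented drift slope and counts, not the authors' calibration data) is shown below: the temperature dependence of the pulse count is fitted once at zero hydrogen concentration, and subsequent readings have that drift subtracted before being interpreted as a hydrogen-induced count change ΔN.

      import numpy as np

      rng = np.random.default_rng(0)
      temps = np.linspace(15.0, 35.0, 21)                 # calibration temperatures, deg C
      drift_slope, n_ref = 12.0, 50_000                   # assumed counts per deg C and nominal count
      baseline = n_ref + drift_slope * (temps - 25.0) + rng.normal(0.0, 1.0, temps.size)

      # Fit the linear temperature dependence once (zero-hydrogen calibration) ...
      k_fit, n0_fit = np.polyfit(temps - 25.0, baseline, 1)

      def compensated_delta_n(n_measured, temp_c):
          # ... then subtract the predicted drift at run time; what remains is attributed to hydrogen.
          return n_measured - (n0_fit + k_fit * (temp_c - 25.0))

      reading = n_ref + drift_slope * (28.3 - 25.0) + 40.0   # a reading with a 40-count hydrogen shift
      print("compensated dN ~", round(float(compensated_delta_n(reading, 28.3)), 1), "counts")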

  1. 38 CFR 3.351 - Special monthly dependency and indemnity compensation, death compensation, pension and spouse's...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. 3.351..., Compensation, and Dependency and Indemnity Compensation Ratings for Special Purposes § 3.351 Special monthly dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. (a...

  2. 38 CFR 3.351 - Special monthly dependency and indemnity compensation, death compensation, pension and spouse's...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. 3.351..., Compensation, and Dependency and Indemnity Compensation Ratings for Special Purposes § 3.351 Special monthly dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. (a...

  3. 38 CFR 3.351 - Special monthly dependency and indemnity compensation, death compensation, pension and spouse's...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. 3.351..., Compensation, and Dependency and Indemnity Compensation Ratings for Special Purposes § 3.351 Special monthly dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. (a...

  4. 38 CFR 3.351 - Special monthly dependency and indemnity compensation, death compensation, pension and spouse's...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. 3.351..., Compensation, and Dependency and Indemnity Compensation Ratings for Special Purposes § 3.351 Special monthly dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. (a...

  5. 38 CFR 3.351 - Special monthly dependency and indemnity compensation, death compensation, pension and spouse's...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. 3.351..., Compensation, and Dependency and Indemnity Compensation Ratings for Special Purposes § 3.351 Special monthly dependency and indemnity compensation, death compensation, pension and spouse's compensation ratings. (a...

  6. Calibrating a Soil-Vegetation-Atmosphere system with a genetical algorithm

    NASA Astrophysics Data System (ADS)

    Schneider, S.; Jacques, D.; Mallants, D.

    2009-04-01

    Model prediction accuracy is well known to be highly sensitive to the quality of model calibration. It is also known that quantifying soil hydraulic parameters in a Soil-Vegetation-Atmosphere (SVA) system is a highly non-linear parameter estimation problem, and that robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are well suited to such complex parameter optimization problems. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the north of Belgium (Campine region). Throughfall and other meteorological data, as well as water contents at different soil depths, were recorded over one year at a daily time step. The water table level, which varies between 95 and 170 cm, was recorded every 0.5 hours. Based on the profile description, four soil layers were distinguished in the podzol and used for the numerical simulation with the HYDRUS-1D model (Šimůnek et al., 2005). For the inversion procedure the MYGA program (Yedder, 2002), an elitist GA, was used. Optimization was based on the water content measurements made at depths of 10, 20, 40, 50, 60, 70, 90, 110, and 120 cm to estimate parameters describing the unsaturated hydraulic properties of the different soil layers. Comparison between the modeled and measured water contents shows good agreement over the simulated year. The impacts of short, intensive rainfall events on soil water content are also well reproduced. Prediction errors average 5%, which is considered a good result. A. Ben Haj Yedder, Numerical optimization and optimal control (molecular chemistry applications), PhD thesis, École Nationale des Ponts et Chaussées, 2002. Šimůnek, J., M. Th. van Genuchten, and M. Šejna, The HYDRUS-1D software package for simulating the one-dimensional movement
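
    As an illustration of this kind of evolutionary inversion, the sketch below fits van Genuchten retention parameters to synthetic water-content data with SciPy's differential_evolution (an evolutionary optimizer used here in place of the MYGA code); the parameter bounds, the synthetic data and the noise level are assumptions.

        import numpy as np
        from scipy.optimize import differential_evolution

        def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
            """Water content as a function of suction head h (cm), with m = 1 - 1/n."""
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

        # Synthetic "measurements" generated from known parameters plus noise (assumption).
        h_obs = np.array([10., 30., 60., 100., 200., 400., 800.])
        true = (0.05, 0.40, 0.02, 1.8)
        rng = np.random.default_rng(0)
        theta_obs = van_genuchten_theta(h_obs, *true) + rng.normal(0, 0.005, h_obs.size)

        def rmse(params):
            return np.sqrt(np.mean((van_genuchten_theta(h_obs, *params) - theta_obs) ** 2))

        # Bounds for theta_r, theta_s, alpha, n (assumed plausible ranges for a sandy soil).
        bounds = [(0.0, 0.15), (0.30, 0.55), (0.001, 0.2), (1.1, 3.0)]
        result = differential_evolution(rmse, bounds, seed=1, tol=1e-8)
        print("estimated parameters:", result.x, "RMSE:", result.fun)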

  7. Adaptive compensation of aberrations in ultrafast 3D microscopy using a deformable mirror

    NASA Astrophysics Data System (ADS)

    Sherman, Leah R.; Albert, O.; Schmidt, Christoph F.; Vdovin, Gleb V.; Mourou, Gerard A.; Norris, Theodore B.

    2000-05-01

    3D imaging using a multiphoton scanning confocal microscope is ultimately limited by aberrations of the system. We describe a system to adaptively compensate the aberrations with a deformable mirror. We have increased the transverse scanning range of the microscope by a factor of three through compensation of off-axis aberrations. We have also significantly increased the longitudinal scanning depth through compensation of spherical aberrations arising from penetration into the sample. Our correction is based on a genetic algorithm that uses the second-harmonic or two-photon fluorescence signal excited in the sample by femtosecond pulses as the enhancement parameter. This allows us to globally optimize the wavefront without a wavefront measurement. To improve the speed of the optimization we use Zernike polynomials as the basis for correction. Corrections can be stored in a database for look-up with future samples.
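
    A minimal sketch of this kind of sensor-less optimization is given below: an evolutionary loop perturbs a small set of Zernike coefficients and keeps mutations that increase a feedback signal. The measure_signal() function is a placeholder for the two-photon or second-harmonic intensity returned by the instrument, and the Gaussian model inside it is purely illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        N_MODES = 8                                      # number of Zernike modes driven (assumed)
        TRUE_ABERRATION = rng.normal(0, 0.3, N_MODES)    # hidden aberration, for simulation only

        def measure_signal(coeffs):
            """Placeholder for the two-photon / SHG signal read from the microscope.
            Here the signal peaks when the applied coefficients cancel the aberration."""
            residual = coeffs + TRUE_ABERRATION
            return np.exp(-np.sum(residual ** 2))

        def optimize_wavefront(generations=300, sigma=0.05):
            """(1+1) evolutionary search over Zernike coefficients, no wavefront sensor."""
            best = np.zeros(N_MODES)
            best_signal = measure_signal(best)
            for _ in range(generations):
                candidate = best + rng.normal(0, sigma, N_MODES)   # mutate all modes
                s = measure_signal(candidate)
                if s > best_signal:                                # keep only improvements
                    best, best_signal = candidate, s
            return best, best_signal

        coeffs, signal = optimize_wavefront()
        print("signal after optimization:", round(signal, 3))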

  8. Clinical evaluation of 4D PET motion compensation strategies for treatment verification in ion beam therapy

    NASA Astrophysics Data System (ADS)

    Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia

    2016-06-01

    A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represents a major limitation of this technique, especially in presence of target motion. The purpose of the study is to investigate two different 4D PET motion compensation strategies towards the recovery of the whole count statistics for improved image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, embedding the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate the noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by relying on quantification of PET activity and ion range difference, typically yielding similar results. The study demonstrated that treatment verification of moving targets could be accomplished by relying on the whole count statistics image quality, as obtained from the application of 4D PET motion compensation strategies. In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra

  9. Locating hazardous gas leaks in the atmosphere via modified genetic, MCMC and particle swarm optimization algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming

    2017-05-01

    Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) and has the potential to crossover with eliminated individuals from the population, following the selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling, utilizing a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed convergence particle swarm optimization algorithm with several bad-performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by motionless sensors, and the last stage is based on data from movable robots with sensors. The measurement error adaptability and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source within 1.0 m of the source for different leak source locations, with measurement error standard deviation smaller than 2.0.
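
    The middle stage above (MCMC sampling of the source posterior) can be sketched with a Metropolis sampler on a deliberately simplified, isotropic dispersion model. The plume model, sensor layout and noise level below are assumptions for illustration and do not reproduce the authors' three-stage method.

        import numpy as np

        rng = np.random.default_rng(0)

        def plume(x_src, y_src, q, xs, ys):
            """Very simplified ground-level dispersion: concentration ~ q / (2*pi*r^2)."""
            r2 = (xs - x_src) ** 2 + (ys - y_src) ** 2 + 1.0   # +1 avoids the singularity
            return q / (2.0 * np.pi * r2)

        # Stationary sensors (assumed layout) and synthetic noisy readings from a "true" source.
        xs = np.array([0., 10., 20., 0., 20.])
        ys = np.array([0., 0., 0., 15., 15.])
        true_src = (12.0, 6.0, 50.0)
        obs = plume(*true_src, xs, ys) + rng.normal(0, 0.01, xs.size)
        sigma_obs = 0.01

        def log_post(theta):
            x, y, q = theta
            if q <= 0:
                return -np.inf
            resid = obs - plume(x, y, q, xs, ys)
            return -0.5 * np.sum((resid / sigma_obs) ** 2)

        def metropolis(n_iter=20000, step=(0.5, 0.5, 2.0)):
            theta = np.array([5.0, 5.0, 10.0])           # initial guess (e.g. from a GA stage)
            lp = log_post(theta)
            samples = []
            for _ in range(n_iter):
                prop = theta + rng.normal(0, step)
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
                    theta, lp = prop, lp_prop
                samples.append(theta.copy())
            return np.array(samples[n_iter // 2:])       # discard burn-in

        post = metropolis()
        print("posterior mean (x, y, q):", post.mean(axis=0))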

  10. [Vestibular compensation studies]. [Vestibular Compensation and Morphological Studies

    NASA Technical Reports Server (NTRS)

    Perachio, Adrian A. (Principal Investigator)

    1996-01-01

    The following topics are reported: neurophysiological studies on MVN neurons during vestibular compensation; effects of spinal cord lesions on VNC neurons during compensation; a closed-loop vestibular compensation model for horizontally canal-related MVN neurons; spatiotemporal convergence in VNC neurons; contributions of irregularly firing vestibular afferents to linear and angular VOR's; application to flight studies; metabolic measures in vestibular neurons; immediate early gene expression following vestibular stimulation; morphological studies on primary afferents, central vestibular pathways, vestibular efferent projection to the vestibular end organs, and three-dimensional morphometry and imaging.

  11. Multi-Angle Implementation of Atmospheric Correction for MODIS (MAIAC). Part 3: Atmospheric Correction

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Hilker, T.; Hall, F.; Sellers, P.; Tucker, J.; Korkin, S.

    2012-01-01

    This paper describes the atmospheric correction (AC) component of the Multi-Angle Implementation of Atmospheric Correction algorithm (MAIAC) which introduces a new way to compute parameters of the Ross-Thick Li-Sparse (RTLS) Bi-directional reflectance distribution function (BRDF), spectral surface albedo and bidirectional reflectance factors (BRF) from satellite measurements obtained by the Moderate Resolution Imaging Spectroradiometer (MODIS). MAIAC uses a time series and spatial analysis for cloud detection, aerosol retrievals and atmospheric correction. It implements a moving window of up to 16 days of MODIS data gridded to 1 km resolution in a selected projection. The RTLS parameters are computed directly by fitting the cloud-free MODIS top of atmosphere (TOA) reflectance data stored in the processing queue. The RTLS retrieval is applied when the land surface is stable or changes slowly. In case of rapid or large magnitude change (as for instance caused by disturbance), MAIAC follows the MODIS operational BRDF/albedo algorithm and uses a scaling approach where the BRDF shape is assumed stable but its magnitude is adjusted based on the latest single measurement. To assess the stability of the surface, MAIAC features a change detection algorithm which analyzes relative change of reflectance in the Red and NIR bands during the accumulation period. To adjust for the reflectance variability with the sun-observer geometry and allow comparison among different days (view geometries), the BRFs are normalized to the fixed view geometry using the RTLS model. An empirical analysis of MODIS data suggests that the RTLS inversion remains robust when the relative change of geometry-normalized reflectance stays below 15%. This first of two papers introduces the algorithm, a second, companion paper illustrates its potential by analyzing MODIS data over a tropical rainforest and assessing errors and uncertainties of MAIAC compared to conventional MODIS products.
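
    The surface-stability test described above can be reduced to a simple decision rule: if the relative change of geometry-normalized reflectance over the accumulation window exceeds roughly 15%, fall back to scaling a previously retrieved BRDF shape. The function below is a schematic illustration of that rule, not the MAIAC implementation.

        import numpy as np

        def surface_change(brf_series):
            """Relative change of geometry-normalized reflectance over the moving window."""
            brf = np.asarray(brf_series, dtype=float)
            return np.abs(brf[-1] - brf[0]) / max(brf[0], 1e-6)

        def choose_brdf_strategy(red_brf, nir_brf, threshold=0.15):
            """Full RTLS inversion if the surface is stable, otherwise scale the stored BRDF shape."""
            if surface_change(red_brf) < threshold and surface_change(nir_brf) < threshold:
                return "full_rtls_inversion"
            return "scale_stored_brdf_shape"

        # Example: a disturbance seen in the NIR band triggers the scaling approach.
        print(choose_brdf_strategy(red_brf=[0.08, 0.081, 0.079], nir_brf=[0.30, 0.31, 0.40]))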

  12. Compensation of non-ideal beam splitter polarization distortion effect in Michelson interferometer

    NASA Astrophysics Data System (ADS)

    Liu, Yeng-Cheng; Lo, Yu-Lung; Liao, Chia-Chi

    2016-02-01

    A composite optical structure consisting of two quarter-wave plates and a single half-wave plate is proposed for compensating for the polarization distortion induced by a non-ideal beam splitter in a Michelson interferometer. In the proposed approach, the optimal orientations of the optical components within the polarization compensator are determined using a genetic algorithm (GA) such that the beam splitter can be treated as a free-space medium and modeled using a unit Mueller matrix accordingly. Two implementations of the proposed polarization controller are presented. In the first case, the compensator is placed in the output arm of Michelson interferometer such that the state of polarization of the interfered output light is equal to that of the input light. However, in this configuration, the polarization effects induced by the beam splitter in the two arms of the interferometer structure cannot be separately addressed. Consequently, in the second case, compensator structures are placed in the Michelson interferometer for compensation on both the scanning and reference beams. The practical feasibility of the proposed approach is introduced by considering a Mueller polarization-sensitive (PS) optical coherence tomography (OCT) structure with three polarization controllers in the input, reference and sample arms, respectively. In general, the results presented in this study show that the proposed polarization controller provides an effective and experimentally-straightforward means of compensating for the polarization distortion effects induced by the non-ideal beam splitters in Michelson interferometers and Mueller PS-OCT structures.

  13. Improvement and implementation for Canny edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Qiu, Yue-hong

    2015-07-01

    Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the defects of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel intensity similarity is used to smooth the image instead of a Gaussian filter, preserving edge features while removing noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library under Visual Studio 2010, and experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.
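
    A minimal sketch in the spirit of the improvement is shown below (using a current OpenCV build rather than 2.4.0): a bilateral filter replaces Gaussian smoothing, and the Otsu threshold sets the dual Canny thresholds. The filter sizes and the high/low threshold ratio are assumptions, and the four-direction gradient templates of the paper are not reproduced here.

        import cv2
        import numpy as np

        def improved_canny(gray):
            """Canny with bilateral pre-filtering and Otsu-derived dual thresholds."""
            # Edge-preserving smoothing instead of a Gaussian filter.
            smoothed = cv2.bilateralFilter(gray, 7, 50, 50)
            # Otsu's method gives an adaptive high threshold; the low threshold is half of it.
            high, _ = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            return cv2.Canny(smoothed, 0.5 * high, high)

        # Synthetic noisy test image: a bright square on a dark background.
        img = np.full((128, 128), 40, dtype=np.uint8)
        img[32:96, 32:96] = 200
        noise = np.random.default_rng(0).integers(0, 20, img.shape, dtype=np.uint8)
        edges = improved_canny(cv2.add(img, noise))
        print("edge pixels found:", int((edges > 0).sum()))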

  14. A robust H.264/AVC video watermarking scheme with drift compensation.

    PubMed

    Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as much as possible. In addition, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, the scheme achieves excellent imperceptibility with a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.

  15. Influence of Evaporating Droplets in the Turbulent Marine Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Peng, Tianze; Richter, David

    2017-12-01

    Sea-spray droplets ejected into the marine atmospheric boundary layer take part in a series of complex transport processes. By capturing the air-droplet coupling and feedback, we focus on how droplets modify the total heat transfer across a turbulent boundary layer. We implement a high-resolution Eulerian-Lagrangian algorithm with varied droplet size and mass loading in a turbulent open-channel flow, revealing that the influence from evaporating droplets varies for different dynamic and thermodynamic characteristics of droplets. Droplets that both respond rapidly to the ambient environment and have long suspension times are able to modify the latent and sensible heat fluxes individually, however the competing signs of this modification lead to an overall weak effect on the total heat flux. On the other hand, droplets with a slower thermodynamic response to the environment are less subjected to this compensating effect. This indicates a potential to enhance the total heat flux, but the enhancement is highly dependent on the concentration and suspension time.

  16. GIFTS SM EDU Level 1B Algorithms

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.

    2007-01-01

    The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for the detector nonlinearity distortion, followed by the complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can then estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. The correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we developed an efficient method of generating pixel performance assessments. In addition, a
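
    The radiometric core of such a procedure is a standard two-point (ambient/hot blackbody) calibration: the spectral responsivity is the ratio of the blackbody spectra difference to the Planck radiance difference, and scene spectra are calibrated against it. The sketch below is schematic, with assumed temperatures and synthetic spectra; it omits the nonlinearity correction, filtering, decimation and phase-correction stages described above.

        import numpy as np

        H = 6.62607015e-34   # Planck constant [J s]
        C = 2.99792458e8     # speed of light [m/s]
        KB = 1.380649e-23    # Boltzmann constant [J/K]

        def planck_radiance(wavenumber_cm1, temp_k):
            """Planck spectral radiance per unit wavenumber, B = 2*h*c^2*nu^3 / (exp(h*c*nu/kT) - 1)."""
            nu = wavenumber_cm1 * 100.0                  # convert cm^-1 to m^-1
            return (2.0 * H * C**2 * nu**3) / (np.exp(H * C * nu / (KB * temp_k)) - 1.0)

        def calibrate_scene(spec_scene, spec_abb, spec_hbb, wn, t_abb=293.0, t_hbb=333.0):
            """Two-point calibration of a scene spectrum against ambient (ABB) and hot (HBB) references."""
            b_abb = planck_radiance(wn, t_abb)
            b_hbb = planck_radiance(wn, t_hbb)
            responsivity = (spec_hbb - spec_abb) / (b_hbb - b_abb)   # counts per radiance unit
            return (spec_scene - spec_abb) / responsivity + b_abb

        # Example with synthetic uncalibrated spectra (gain g and offset o are assumptions).
        wn = np.linspace(700.0, 1400.0, 256)                         # LWIR band, cm^-1
        g, o = 3.0e20, 5.0
        spec_abb = g * planck_radiance(wn, 293.0) + o
        spec_hbb = g * planck_radiance(wn, 333.0) + o
        spec_scene = g * planck_radiance(wn, 280.0) + o
        print(np.allclose(calibrate_scene(spec_scene, spec_abb, spec_hbb, wn),
                          planck_radiance(wn, 280.0)))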

  17. Study on compensation algorithm of head skew in hard disk drives

    NASA Astrophysics Data System (ADS)

    Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan

    2011-10-01

    In hard disk drives (HDDs), head skew among multiple heads is pre-calibrated during the manufacturing process. In real high-capacity storage applications, the head stack may be tilted due to environmental change, resulting in additional head skew errors from the outer diameter (OD) to the inner diameter (ID). If these errors are below the preset threshold for power-on recalibration, the current strategy may not detect them, and drive performance in severe environments will be degraded. In this paper, in-the-field compensation of small DC head skew variation across the stroke is proposed, using a zone table. Test results are provided demonstrating its effectiveness in reducing observer error and enhancing drive performance through accurate prediction of DC head skew.

  18. Compensator design for improved counterbalancing in high speed atomic force microscopy

    PubMed Central

    Bozchalooi, I. S.; Youcef-Toumi, K.; Burns, D. J.; Fantner, G. E.

    2011-01-01

    High speed atomic force microscopy can provide the possibility of many new scientific observations and applications ranging from nano-manufacturing to the study of biological processes. However, the limited imaging speed has been an imperative drawback of the atomic force microscopes. One of the main reasons behind this limitation is the excitation of the AFM dynamics at high scan speeds, severely undermining the reliability of the acquired images. In this research, we propose a piezo based, feedforward controlled, counter actuation mechanism to compensate for the excited out-of-plane scanner dynamics. For this purpose, the AFM controller output is properly filtered via a linear compensator and then applied to a counter actuating piezo. An effective algorithm for estimating the compensator parameters is developed. The information required for compensator design is extracted from the cantilever deflection signal, hence eliminating the need for any additional sensors. The proposed approach is implemented and experimentally evaluated on the dynamic response of a custom made AFM. It is further assessed by comparing the imaging performance of the AFM with and without the application of the proposed technique and in comparison with the conventional counterbalancing methodology. The experimental results substantiate the effectiveness of the method in significantly improving the imaging performance of AFM at high scan speeds. PMID:22128989

  19. Compensator design for improved counterbalancing in high speed atomic force microscopy

    NASA Astrophysics Data System (ADS)

    Bozchalooi, I. S.; Youcef-Toumi, K.; Burns, D. J.; Fantner, G. E.

    2011-11-01

    High speed atomic force microscopy can provide the possibility of many new scientific observations and applications ranging from nano-manufacturing to the study of biological processes. However, the limited imaging speed has been an imperative drawback of the atomic force microscopes. One of the main reasons behind this limitation is the excitation of the AFM dynamics at high scan speeds, severely undermining the reliability of the acquired images. In this research, we propose a piezo based, feedforward controlled, counter actuation mechanism to compensate for the excited out-of-plane scanner dynamics. For this purpose, the AFM controller output is properly filtered via a linear compensator and then applied to a counter actuating piezo. An effective algorithm for estimating the compensator parameters is developed. The information required for compensator design is extracted from the cantilever deflection signal, hence eliminating the need for any additional sensors. The proposed approach is implemented and experimentally evaluated on the dynamic response of a custom made AFM. It is further assessed by comparing the imaging performance of the AFM with and without the application of the proposed technique and in comparison with the conventional counterbalancing methodology. The experimental results substantiate the effectiveness of the method in significantly improving the imaging performance of AFM at high scan speeds.

  20. Rationalizing vaccine injury compensation.

    PubMed

    Mello, Michelle M

    2008-01-01

    Legislation recently adopted by the United States Congress provides producers of pandemic vaccines with near-total immunity from civil lawsuits without making individuals injured by those vaccines eligible for compensation through the Vaccine Injury Compensation Program. The unusual decision not to provide an alternative mechanism for compensation is indicative of a broader problem of inconsistency in the American approach to vaccine-injury compensation policy. Compensation policies have tended to reflect political pressures and economic considerations more than any cognizable set of principles. This article identifies a set of ethical principles bearing on the circumstances in which vaccine injuries should be compensated, both inside and outside public health emergencies. A series of possible bases for compensation rules, some grounded in utilitarianism and some nonconsequentialist, are discussed and evaluated. Principles of fairness and reasonableness are found to constitute the strongest bases. An ethically defensible compensation policy grounded in these principles would make a compensation fund available to all individuals with severe injuries and to individuals with less-severe injuries whenever the vaccination was required by law or professional duty.

  1. Translation compensation and micro-Doppler extraction for precession ballistic targets with a wideband terahertz radar

    NASA Astrophysics Data System (ADS)

    Yang, Qi; Deng, Bin; Wang, Hongqiang; Zhang, Ye; Qin, Yuliang

    2018-01-01

    Imaging, classification, and recognition of ballistic targets in midcourse have long been a focus of radar research for military applications. However, the high-velocity translation of ballistic targets causes the range profile and Doppler spectrum to shift, tilt, and fold, effects that are especially severe in the terahertz region. Therefore, a two-step translation compensation method based on envelope alignment is presented. The coarse compensation is based on the traditional envelope alignment algorithm of inverse synthetic aperture radar imaging, and the fine compensation relies on distance fitting. A wideband imaging radar system with a carrier frequency of 0.32 THz is then introduced, and an experiment on a precession missile model is carried out. After translation compensation with the proposed method, range profiles and micro-Doppler distributions unaffected by translation are obtained, providing an important foundation for high-resolution imaging and micro-Doppler extraction with terahertz radar.
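
    The two-step idea (coarse envelope alignment followed by fine compensation of the translation trend) can be sketched as follows: each range profile is aligned to a reference by cross-correlation, and the integer shifts are then smoothed with a low-order polynomial fit standing in for the distance fitting. Data, shift model and parameters are assumptions.

        import numpy as np

        def coarse_shifts(profiles, ref_index=0):
            """Integer range-bin shift of each profile relative to a reference, via cross-correlation."""
            ref = profiles[ref_index]
            n = ref.size
            shifts = []
            for p in profiles:
                corr = np.correlate(p, ref, mode="full")          # lags from -(n-1) .. n-1
                shifts.append(np.argmax(corr) - (n - 1))
            return np.array(shifts, dtype=float)

        def fine_shifts(shifts, deg=2):
            """Fine step: fit a smooth translation trend over slow time (stand-in for distance fitting)."""
            t = np.arange(shifts.size)
            coeffs = np.polyfit(t, shifts, deg)
            return np.polyval(coeffs, t)

        def align(profiles, shifts):
            """Apply the (rounded) estimated shifts to realign the range profiles."""
            return np.array([np.roll(p, -int(round(s))) for p, s in zip(profiles, shifts)])

        # Synthetic demo: a point scatterer drifting quadratically across range bins.
        n_pulse, n_bin = 64, 256
        profiles = np.zeros((n_pulse, n_bin))
        drift = (0.02 * np.arange(n_pulse) ** 2).astype(int)
        for i in range(n_pulse):
            profiles[i, 80 + drift[i]] = 1.0
        aligned = align(profiles, fine_shifts(coarse_shifts(profiles)))
        print("peak spread after alignment:", np.ptp(aligned.argmax(axis=1)), "bins")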

  2. Atmospheric transformation of multispectral remote sensor data. [Great Lakes

    NASA Technical Reports Server (NTRS)

    Turner, R. E. (Principal Investigator)

    1977-01-01

    The author has identified the following significant results. The effects of earth's atmosphere were accounted for, and a simple algorithm, based upon a radiative transfer model, was developed to determine the radiance at earth's surface free of atmospheric effects. Actual multispectral remote sensor data for Lake Erie and associated optical thickness data were used to demonstrate the effectiveness of the atmospheric transformation algorithm. The basic transformation was general in nature and could be applied to the large scale processing of multispectral aircraft or satellite remote sensor data.

  3. Extracting atmospheric turbulence and aerosol characteristics from passive imagery

    NASA Astrophysics Data System (ADS)

    Reinhardt, Colin N.; Wayne, D.; McBryde, K.; Cauble, G.

    2013-09-01

    Obtaining accurate, precise and timely information about the local atmospheric turbulence and extinction conditions and aerosol/particulate content remains a difficult problem with incomplete solutions. It has important applications in areas such as optical and IR free-space communications, imaging systems performance, and the propagation of directed energy. The capability to utilize passive imaging data to extract parameters characterizing atmospheric turbulence and aerosol/particulate conditions would represent a valuable addition to the current piecemeal toolset for atmospheric sensing. Our research investigates an application of fundamental results from optical turbulence theory and aerosol extinction theory combined with recent advances in image-quality-metrics (IQM) and image-quality-assessment (IQA) methods. We have developed an algorithm which extracts important parameters used for characterizing atmospheric turbulence and extinction along the propagation channel, such as the refractive-index structure parameter Cn2, the Fried atmospheric coherence width r0, and the atmospheric extinction coefficient βext, from passive image data. We will analyze the algorithm performance using simulations based on modeling with turbulence modulation transfer functions. An experimental field campaign was organized and data were collected from passive imaging through turbulence of Siemens star resolution targets over several short littoral paths in Point Loma, San Diego, under various turbulence intensity conditions. We present initial results of the algorithm's effectiveness using this field data and compare against measurements taken concurrently with other standard atmospheric characterization equipment. We also discuss some of the challenges encountered with the algorithm, tasks currently in progress, and approaches planned for improving the performance in the near future.
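
    For reference, the plane-wave relation commonly used to connect the path-integrated turbulence strength to the Fried parameter is sketched below; the wavelength, path length and the assumption of constant Cn2 along the path are illustrative, and this is not the authors' IQM-based estimator.

        import numpy as np

        def fried_parameter(cn2, path_length_m, wavelength_m=0.55e-6):
            """Plane-wave Fried parameter for constant Cn^2 along a horizontal path:
               r0 = (0.423 * k^2 * Cn2 * L) ** (-3/5), with k = 2*pi/lambda."""
            k = 2.0 * np.pi / wavelength_m
            return (0.423 * k**2 * cn2 * path_length_m) ** (-3.0 / 5.0)

        def cn2_from_r0(r0_m, path_length_m, wavelength_m=0.55e-6):
            """Invert the same relation to recover the structure parameter from an r0 estimate."""
            k = 2.0 * np.pi / wavelength_m
            return r0_m ** (-5.0 / 3.0) / (0.423 * k**2 * path_length_m)

        # Example: moderate littoral turbulence over a 1 km path (values are assumptions).
        r0 = fried_parameter(cn2=1e-14, path_length_m=1000.0)
        print(f"r0 = {r0*100:.1f} cm, Cn2 recovered = {cn2_from_r0(r0, 1000.0):.2e} m^-2/3")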

  4. Efficient nonlinear equalizer for intra-channel nonlinearity compensation for next generation agile and dynamically reconfigurable optical networks.

    PubMed

    Malekiha, Mahdi; Tselniker, Igor; Plant, David V

    2016-02-22

    In this work, we propose and experimentally demonstrate a novel low-complexity technique for fiber nonlinearity compensation. We achieved a transmission distance of 2818 km for a 32-GBaud dual-polarization 16QAM signal. For efficient implementation, and to facilitate integration with conventional digital signal processing (DSP) approaches, we independently compensate fiber nonlinearities after linear impairment equalization. Therefore, this algorithm can be easily implemented in currently deployed transmission systems after using linear DSP. The proposed equalizer operates at one sample per symbol and requires only one computation step. The structure of the algorithm is based on a first-order perturbation model with quantized perturbation coefficients. Also, it does not require any prior calculation or detailed knowledge of the transmission system. We identified common symmetries between perturbation coefficients to avoid duplicate and unnecessary operations. In addition, we use only a few adaptive filter coefficients by grouping multiple nonlinear terms and dedicating only one adaptive nonlinear filter coefficient to each group. Finally, the complexity of the proposed algorithm is lower than that of previously studied nonlinear equalizers by more than one order of magnitude.

  5. Polarimetric Remote Sensing of Atmospheric Particulate Pollutants

    NASA Astrophysics Data System (ADS)

    Li, Z.; Zhang, Y.; Hong, J.

    2018-04-01

    Atmospheric particulate pollutants not only reduce atmospheric visibility, change the energy balance of the troposphere, but also affect human and vegetation health. For monitoring the particulate pollutants, we establish and develop a series of inversion algorithms based on polarimetric remote sensing technology which has unique advantages in dealing with atmospheric particulates. A solution is pointed out to estimate the near surface PM2.5 mass concentrations from full remote sensing measurements including polarimetric, active and infrared remote sensing technologies. It is found that the mean relative error of PM2.5 retrieved by full remote sensing measurements is 35.5 % in the case of October 5th 2013, improved to a certain degree compared to previous studies. A systematic comparison with the ground-based observations further indicates the effectiveness of the inversion algorithm and reliability of results. A new generation of polarized sensors (DPC and PCF), whose observation can support these algorithms, will be onboard GF series satellites and launched by China in the near future.

  6. Variability of the atmospheric turbulence in the region lake of Baykal

    NASA Astrophysics Data System (ADS)

    Botygina, N. N.; Kopylov, E. A.; Lukin, V. P.; Kovadlo, P. G.; Shihovcev, A. Yu.

    2015-11-01

    The Fried parameter was estimated from micrometeorological and optical measurements in the atmospheric surface layer near Lake Baikal, at the Baikal Astrophysical Observatory. From the NCEP/NCAR Reanalysis archive, the vertical distribution of temperature fluctuations was obtained, revealing the atmospheric layers with the most pronounced turbulence. Astronomical seeing conditions in winter and summer were compared. When solar optical radiation is recorded with ground-based telescopes, the effects of atmospheric turbulence must be compensated. Atmospheric turbulence reduces the angular resolution of the observed objects and distorts the structure of the acquired images. To improve image quality, and ideally approach the diffraction-limited angular resolution, an adaptive optics system must be implemented and used. The specific challenge of image correction with adaptive optics is that it is necessary not only to compensate for the random jitter of the image as a whole, but also to correct the geometry of individual parts of the image. Estimates of the atmospheric coherence radius (Fried parameter) are of interest not only for site-testing studies but also form the basis for the efficient operation of adaptive optical systems.

  7. Reactive power compensator

    DOEpatents

    El-Sharkawi, Mohamed A.; Venkata, Subrahmanyam S.; Chen, Mingliang; Andexler, George; Huang, Tony

    1992-01-01

    A system and method for determining and providing reactive power compensation for an inductive load. A reactive power compensator (50,50') monitors the voltage and current flowing through each of three distribution lines (52a, 52b, 52c), which are supplying three-phase power to one or more inductive loads. Using signals indicative of the current on each of these lines when the voltage waveform on the line crosses zero, the reactive power compensator determines a reactive power compensator capacitance that must be connected to the lines to maintain a desired VAR level, power factor, or line voltage. Alternatively, an operator can manually select a specific capacitance for connection to each line, or the capacitance can be selected based on a time schedule. The reactive power compensator produces control signals, which are coupled through optical fibers (102/106) to a switch driver (110, 110') to select specific compensation capacitors (112) for connection to each line. The switch driver develops triggering signals that are supplied to a plurality of series-connected solid state switches (350), which control charge current in one direction with respect to ground for each compensation capacitor. During each cycle, current flows from ground to charge the capacitors as the voltage on the line begins to go negative from its positive peak value. The triggering signals are applied to gate the solid state switches into a conducting state when the potential on the lines and on the capacitors reaches a negative peak value, thereby minimizing both the potential difference across, and the charge current through, the switches when they begin to conduct. Any harmonic distortion on the potential and current carried by the lines is filtered out from the current and potential signals used by the reactive power compensator so that it does not affect the determination of the required reactive compensation.
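
    The sizing step performed by such a compensator can be sketched with the standard power-factor-correction relations; the line voltage, frequency and load values below are illustrative assumptions, and the switching, zero-crossing sensing and harmonic filtering described in the patent are not modeled.

        import math

        def required_var(p_load_w, pf_actual, pf_target):
            """Reactive power (VAR) the capacitors must supply to raise the power factor."""
            phi1 = math.acos(pf_actual)
            phi2 = math.acos(pf_target)
            return p_load_w * (math.tan(phi1) - math.tan(phi2))

        def compensation_capacitance(q_var, v_line_rms, freq_hz=60.0):
            """Per-phase capacitance delivering q_var at the given RMS voltage: C = Q / (2*pi*f*V^2)."""
            return q_var / (2.0 * math.pi * freq_hz * v_line_rms ** 2)

        # Example: 50 kW inductive load per phase at 0.78 PF, corrected to 0.95 on a 277 V phase.
        q = required_var(50e3, 0.78, 0.95)
        c = compensation_capacitance(q, 277.0)
        print(f"required compensation: {q/1000:.1f} kVAR -> {c*1e6:.0f} uF per phase")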

  8. Reactive Power Compensator.

    DOEpatents

    El-Sharkawi, M.A.; Venkata, S.S.; Chen, M.; Andexler, G.; Huang, T.

    1992-07-28

    A system and method for determining and providing reactive power compensation for an inductive load. A reactive power compensator (50,50') monitors the voltage and current flowing through each of three distribution lines (52a, 52b, 52c), which are supplying three-phase power to one or more inductive loads. Using signals indicative of the current on each of these lines when the voltage waveform on the line crosses zero, the reactive power compensator determines a reactive power compensator capacitance that must be connected to the lines to maintain a desired VAR level, power factor, or line voltage. Alternatively, an operator can manually select a specific capacitance for connection to each line, or the capacitance can be selected based on a time schedule. The reactive power compensator produces control signals, which are coupled through optical fibers (102/106) to a switch driver (110, 110') to select specific compensation capacitors (112) for connection to each line. The switch driver develops triggering signals that are supplied to a plurality of series-connected solid state switches (350), which control charge current in one direction with respect to ground for each compensation capacitor. During each cycle, current flows from ground to charge the capacitors as the voltage on the line begins to go negative from its positive peak value. The triggering signals are applied to gate the solid state switches into a conducting state when the potential on the lines and on the capacitors reaches a negative peak value, thereby minimizing both the potential difference across, and the charge current through, the switches when they begin to conduct. Any harmonic distortion on the potential and current carried by the lines is filtered out from the current and potential signals used by the reactive power compensator so that it does not affect the determination of the required reactive compensation. 26 figs.

  9. A Robust H.264/AVC Video Watermarking Scheme with Drift Compensation

    PubMed Central

    Sun, Tanfeng; Zhou, Yue; Shi, Yun-Qing

    2014-01-01

    A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of the watermark as much as possible. In addition, the discrete cosine transform (DCT), with its energy compaction property, is applied to the motion vector residual group, which ensures robustness against intentional attacks. According to the experimental results, the scheme achieves excellent imperceptibility with a low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression. PMID:24672376

  10. Comparing performance of many-core CPUs and GPUs for static and motion compensated reconstruction of C-arm CT data.

    PubMed

    Hofmann, Hannes G; Keck, Benjamin; Rohkohl, Christopher; Hornegger, Joachim

    2011-01-01

    Interventional reconstruction of 3-D volumetric data from C-arm CT projections is a computationally demanding task. Hardware optimization is not an option but mandatory for interventional image processing and, in particular, for image reconstruction due to the high demands on performance. Several groups have published fast analytical 3-D reconstruction on highly parallel hardware such as GPUs to mitigate this issue. The authors show that the performance of modern CPU-based systems is in the same order as current GPUs for static 3-D reconstruction and outperforms them for a recent motion compensated (3-D+time) image reconstruction algorithm. This work investigates two algorithms: Static 3-D reconstruction as well as a recent motion compensated algorithm. The evaluation was performed using a standardized reconstruction benchmark, RABBITCT, to get comparable results and two additional clinical data sets. The authors demonstrate for a parametric B-spline motion estimation scheme that the derivative computation, which requires many write operations to memory, performs poorly on the GPU and can highly benefit from modern CPU architectures with large caches. Moreover, on a 32-core Intel Xeon server system, the authors achieve linear scaling with the number of cores used and reconstruction times almost in the same range as current GPUs. Algorithmic innovations in the field of motion compensated image reconstruction may lead to a shift back to CPUs in the future. For analytical 3-D reconstruction, the authors show that the gap between GPUs and CPUs became smaller. It can be performed in less than 20 s (on-the-fly) using a 32-core server.

  11. Algorithm Estimates Microwave Water-Vapor Delay

    NASA Technical Reports Server (NTRS)

    Robinson, Steven E.

    1989-01-01

    Accuracy equals or exceeds that of conventional linear algorithms. The "profile" algorithm is an improved algorithm that uses water-vapor-radiometer data to produce estimates of microwave delays caused by water vapor in the troposphere. It does not require site-specific or weather-dependent empirical parameters other than standard meteorological data, latitude, and altitude, used in conjunction with published standard atmospheric data. The basic premise of the profile algorithm is that the wet-path delay is closely approximated by the solution to a simplified version of the nonlinear delay problem, generated numerically from each radiometer observation and the simultaneous meteorological data.

  12. The controllability of the aeroassist flight experiment atmospheric skip trajectory

    NASA Technical Reports Server (NTRS)

    Wood, R.

    1989-01-01

    The Aeroassist Flight Experiment (AFE) will be the first vehicle to simulate a return from geosynchronous orbit, deplete energy during an aerobraking maneuver, and navigate back out of the atmosphere to a low Earth orbit. It will gather scientific data necessary for future Aeroassisted Orbital Transfer Vehicles (AOTVs). Critical to mission success is the ability of the atmospheric guidance to accurately attain a targeted post-aeropass orbital apogee while nulling inclination errors and compensating for dispersions in state, aerodynamic, and atmospheric parameters. The ability to satisfy mission constraints was investigated as a function of atmospheric entry-interface (EI) conditions, guidance gains, and trajectory. The results of the investigation are presented, emphasizing the adverse effects of dispersed atmospheres on trajectory controllability.

  13. Reactive power compensating system

    DOEpatents

    Williams, Timothy J.; El-Sharkawi, Mohamed A.; Venkata, Subrahmanyam S.

    1987-01-01

    The reactive power of an induction machine is compensated by providing fixed capacitors on each phase line for the minimum compensation required, sensing the current on one line at the time its voltage crosses zero to determine the actual compensation required for each phase, and selecting switched capacitors on each line to provide the balance of the compensation required.

  14. Effects of Atmospheric Water and Surface Wind on Passive Microwave Retrievals of Sea Ice Concentration: a Simulation Study

    NASA Astrophysics Data System (ADS)

    Shin, D.; Chiu, L. S.; Clemente-Colon, P.

    2006-05-01

    The atmospheric effects on the retrieval of sea ice concentration from passive microwave sensors are examined using simulated data typical for the Arctic summer. The simulation includes atmospheric contributions of cloud liquid water, water vapor and surface wind on the microwave signatures. A plane parallel radiative transfer model is used to compute brightness temperatures at SSM/I frequencies over surfaces that contain open water, first-year (FY) ice and multi-year (MY) ice and their combinations. Synthetic retrievals in this study use the NASA Team (NT) algorithm for the estimation of sea ice concentrations. This study shows that if the satellite sensor's field of view is filled with only FY ice the retrieval is not much affected by the atmospheric conditions due to the high contrast between emission signals from FY ice surface and the signals from the atmosphere. Pure MY ice concentration is generally underestimated due to the low MY ice surface emissivity that results in the enhancement of emission signals from the atmospheric parameters. Simulation results in marginal ice areas also show that the atmospheric effects from cloud liquid water, water vapor and surface wind tend to degrade the accuracy at low sea ice concentration. FY ice concentration is overestimated and MY ice concentration is underestimated in the presence of atmospheric water and surface wind at low ice concentration. This compensating effect reduces the retrieval uncertainties of total (FY and MY) ice concentration. Over marginal ice zones, our results suggest that strong surface wind is more important than atmospheric water in contributing to the retrieval errors of total ice concentrations in the normal ranges of these variables.

  15. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.

  16. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system's measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to an inaccurate imaging model and distortion elimination. The proposed calibration method compensates for system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen via reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error for a flat mirror can be reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.

  17. An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST

    NASA Astrophysics Data System (ADS)

    Hang, Xu; Jun, Zhao

    2018-05-01

    Adaptive angle-Doppler compensation methods extract the requisite information adaptively from the data itself, thus avoiding the performance degradation caused by inertial system errors. However, such methods require estimation and eigendecomposition of the sample covariance matrix, which has high computational complexity and limits real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the spectral center position of the dominant eigenvector of each range cell, avoiding the computational burden of covariance matrix estimation and eigendecomposition; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show the proposed method can effectively reduce the clutter non-homogeneity of airborne bistatic radar, and its performance is similar to that of eigendecomposition-based algorithms, while the computational load is markedly reduced and the method is easier to implement.
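
    A compact sketch of the PAST recursion (Yang's projection approximation subspace tracking), which tracks a dominant subspace without explicit eigendecomposition, is given below; the data model, subspace dimension and forgetting factor are illustrative assumptions, and the angle-Doppler compensation built on top of it is not reproduced.

        import numpy as np

        def past_track(snapshots, r, beta=0.97, seed=0):
            """Track an r-dimensional dominant subspace of streaming snapshots x(t) (PAST, Yang 1995).
            snapshots: sequence of complex vectors of length n. Returns the final n-by-r basis W."""
            n = np.asarray(snapshots[0]).size
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
            W, _ = np.linalg.qr(W)                      # orthonormal initial guess
            P = np.eye(r, dtype=complex)                # inverse correlation of the projections
            for x in snapshots:
                x = np.asarray(x).reshape(n)
                y = W.conj().T @ x                      # project snapshot onto current subspace
                h = P @ y
                g = h / (beta + np.vdot(y, h))          # gain vector
                P = (P - np.outer(g, y.conj()) @ P) / beta
                e = x - W @ y                           # projection residual
                W = W + np.outer(e, g.conj())           # rank-one subspace update
            return W

        # Demo: snapshots drawn from a rank-2 process plus noise (assumed model).
        rng = np.random.default_rng(1)
        A = rng.standard_normal((16, 2)) + 1j * rng.standard_normal((16, 2))
        data = [A @ (rng.standard_normal(2) + 1j * rng.standard_normal(2))
                + 0.05 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
                for _ in range(500)]
        W = past_track(data, r=2)
        print("tracked basis shape:", W.shape)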

  18. Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Jain, S. C.

    1984-01-01

    Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance, which can dominate radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudo-optical depth method; and (4) the residual component algorithm.

  19. Determination of in-flight AVIRIS spectral, radiometric, spatial and signal-to-noise characteristics using atmospheric and surface measurements from the vicinity of the rare-earth-bearing carbonatite at Mountain Pass, California

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Vane, Gregg; Conel, James E.

    1988-01-01

    An assessment of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) performance was made for a flight over Mountain Pass, California, July 30, 1987. The flight data were reduced to reflectance using an empirical algorithm which compensates for solar, atmospheric and instrument factors. AVIRIS data in conjunction with surface and atmospheric measurements acquired concurrently were used to develop an improved spectral calibration. An accurate in-flight radiometric calibration was also performed using the LOWTRAN 7 radiative transfer code together with measured surface reflectance and atmospheric optical depths. A direct comparison with coincident Thematic Mapper imagery of Mountain Pass was used to demonstrate the high spatial resolution and good geometric performance of AVIRIS. The in-flight instrument noise was independently determined with two methods which showed good agreement. A signal-to-noise ratio was calculated using data from a uniform playa. This ratio was scaled to the AVIRIS reference radiance model, which provided a basis for comparison with laboratory and other in-flight signal-to-noise determinations.

  20. Acquisition of control skill with delayed and compensated displays.

    PubMed

    Ricard, G L

    1995-09-01

    The difficulty of mastering a two-axis, compensatory, manual control task was manipulated by introducing transport delays into the feedback loop of the controlled element. Realistic aircraft dynamics were used. Subjects' display was a simulation of an "inside-out" artificial horizon instrument perturbed by atmospheric turbulence. The task was to maintain straight and level flight, and delays tested were representative of those found in current training simulators. Delay compensations in the form of first-order lead and first-order lead/lag transfer functions, along with an uncompensated condition, were factorially combined with added delays. Subjects were required to meet a relatively strict criterion for performance. Control activity showed no differences during criterion performance, but the trials needed to achieve the criterion were linearly related to the magnitude of the delay and the compensation condition. These data were collected in the context of aircraft attitude control, but the results can be applied to the simulation of other vehicles, to remote manipulation, and to maneuvering in graphical environments.

  1. 38 CFR 3.4 - Compensation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Compensation. 3.4 Section 3.4 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.4 Compensation. (a) Compensation. This term...

  2. 38 CFR 3.4 - Compensation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Compensation. 3.4 Section 3.4 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.4 Compensation. (a) Compensation. This term...

  3. 38 CFR 3.4 - Compensation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Compensation. 3.4 Section 3.4 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.4 Compensation. (a) Compensation. This term...

  4. 38 CFR 3.4 - Compensation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Compensation. 3.4 Section 3.4 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.4 Compensation. (a) Compensation. This term...

  5. 38 CFR 3.4 - Compensation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Compensation. 3.4 Section 3.4 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.4 Compensation. (a) Compensation. This term...

  6. Method and system for enabling real-time speckle processing using hardware platforms

    NASA Technical Reports Server (NTRS)

    Ortiz, Fernando E. (Inventor); Kelmelis, Eric (Inventor); Durbano, James P. (Inventor); Curt, Peterson F. (Inventor)

    2012-01-01

    An accelerator for the speckle atmospheric compensation algorithm may enable real-time speckle processing of video feeds, allowing the speckle algorithm to be applied in numerous real-time applications. The accelerator may be implemented in various forms, including hardware, software, and/or machine-readable media.

  7. Experimental investigation of a moving averaging algorithm for motion perpendicular to the leaf travel direction in dynamic MLC target tracking.

    PubMed

    Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul

    2011-07-01

    In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for both target motion parallel and perpendicular to the leaf travel direction) and no compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26%-100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7-1.1 mm for real-time tracking, and 3.7-7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0%-23% for real-time tracking, and 10%-47% for no compensation. The delivery efficiency of
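
    The moving-average idea for the perpendicular motion component can be sketched simply: the tracked position is smoothed over a trailing window before being sent to the MLC, trading geometric fidelity for fewer beam holds. The window length, sampling rate and synthetic trace below are assumptions, not the values used in the study.

        import numpy as np

        def moving_average(position_mm, window_samples):
            """Causal moving average of the target position perpendicular to MLC leaf travel."""
            x = np.asarray(position_mm, dtype=float)
            out = np.empty_like(x)
            for i in range(x.size):
                start = max(0, i - window_samples + 1)
                out[i] = x[start:i + 1].mean()          # average over the trailing window
            return out

        # Synthetic respiratory-like trace sampled at 25 Hz with a slow baseline drift (assumptions).
        fs = 25.0
        t = np.arange(0, 60, 1.0 / fs)
        trace = 5.0 * np.sin(2 * np.pi * 0.25 * t) + 0.03 * t
        smoothed = moving_average(trace, window_samples=int(2.0 * fs))   # 2 s window
        print("RMS smoothing residual: %.2f mm" % np.sqrt(np.mean((trace - smoothed) ** 2)))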

  8. Residual motion compensation in ECG-gated interventional cardiac vasculature reconstruction

    NASA Astrophysics Data System (ADS)

    Schwemmer, C.; Rohkohl, C.; Lauritsch, G.; Müller, K.; Hornegger, J.

    2013-06-01

    Three-dimensional reconstruction of cardiac vasculature from angiographic C-arm CT (rotational angiography) data is a major challenge. Motion artefacts corrupt image quality, reducing usability for diagnosis and guidance. Many state-of-the-art approaches depend on retrospective ECG-gating of projection data for image reconstruction. A trade-off has to be made regarding the size of the ECG-gating window. A large temporal window is desirable to avoid undersampling. However, residual motion will occur in a large window, causing motion artefacts. We present an algorithm to correct for residual motion. Our approach is based on a deformable 2D-2D registration between the forward projection of an initial, ECG-gated reconstruction, and the original projection data. The approach is fully automatic and does not require any complex segmentation of vasculature, or landmarks. The estimated motion is compensated for during the backprojection step of a subsequent reconstruction. We evaluated the method using the publicly available CAVAREV platform and on six human clinical datasets. We found a better visibility of structure, reduced motion artefacts, and increased sharpness of the vessels in the compensated reconstructions compared to the initial reconstructions. At the time of writing, our algorithm outperforms the leading result of the CAVAREV ranking list. For the clinical datasets, we found an average reduction of motion artefacts by 13 ± 6%. Vessel sharpness was improved by 25 ± 12% on average.

  9. L1-2 minimization for exact and stable seismic attenuation compensation

    NASA Astrophysics Data System (ADS)

    Wang, Yufeng; Ma, Xiong; Zhou, Hui; Chen, Yangkang

    2018-06-01

    Frequency-dependent amplitude absorption and phase velocity dispersion, linked by the causality-imposed Kramers-Kronig relations, inevitably degrade the quality of seismic data. Seismic attenuation compensation is an important processing approach for enhancing signal resolution and fidelity; it can be performed on either pre-stack or post-stack data to mitigate the amplitude absorption and phase dispersion effects resulting from the intrinsic anelasticity of subsurface media. Inversion-based compensation with an L1 norm constraint, motivated by the sparsity of the reflectivity series, enjoys better stability than traditional inverse Q filtering. However, constrained L1 minimization, serving as the convex relaxation of the literal L0 sparsity count, may not give the sparsest solution when the kernel matrix is severely ill-conditioned. Recently, non-convex metrics for compressed sensing have attracted considerable research interest. In this paper, we propose a nearly unbiased approximation of the vector sparsity, denoted as L1-2 minimization, for exact and stable seismic attenuation compensation. The non-convex L1-2 penalty can be decomposed into two convex subproblems via the difference-of-convex algorithm, and each subproblem can be solved efficiently by the alternating direction method of multipliers. The superior performance of the proposed compensation scheme based on the L1-2 metric over the conventional L1 penalty is further demonstrated by both synthetic and field examples.
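
    A minimal numerical sketch of the difference-of-convex idea described above, under simplifying assumptions: the inner convex L1 subproblem is solved here with a plain proximal-gradient (ISTA) loop rather than ADMM, and the matrix, regularization weight and iteration counts are illustrative rather than the paper's attenuation-compensation formulation.

        import numpy as np

        def ista_l1(A, b, lam, x0, lin, n_iter=200):
            """Minimize 0.5||Ax-b||^2 + lam*||x||_1 - <lin, x> by proximal gradient."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2
            x = x0.copy()
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b) - lin
                z = x - step * grad
                x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
            return x

        def l1_minus_l2(A, b, lam, n_outer=20):
            """DCA for min 0.5||Ax-b||^2 + lam*(||x||_1 - ||x||_2)."""
            x = np.zeros(A.shape[1])
            for _ in range(n_outer):
                # Linearize the concave part -lam*||x||_2 around the current iterate.
                nrm = np.linalg.norm(x)
                lin = lam * x / nrm if nrm > 0 else np.zeros_like(x)
                x = ista_l1(A, b, lam, x, lin)
            return x

        # Toy usage: recover a sparse spike train from noiseless random measurements.
        rng = np.random.default_rng(1)
        A = rng.standard_normal((60, 120))
        x_true = np.zeros(120)
        x_true[[7, 40, 90]] = [1.5, -2.0, 1.0]
        x_hat = l1_minus_l2(A, b=A @ x_true, lam=0.05)
        print(np.round(x_hat[[7, 40, 90]], 2))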

  10. Compensator design for improved counterbalancing in high speed atomic force microscopy.

    PubMed

    Bozchalooi, I S; Youcef-Toumi, K; Burns, D J; Fantner, G E

    2011-11-01

    High speed atomic force microscopy can enable many new scientific observations and applications, ranging from nano-manufacturing to the study of biological processes. However, limited imaging speed has been a major drawback of atomic force microscopes. One of the main reasons behind this limitation is the excitation of the AFM dynamics at high scan speeds, which severely undermines the reliability of the acquired images. In this research, we propose a piezo-based, feedforward-controlled, counter-actuation mechanism to compensate for the excited out-of-plane scanner dynamics. For this purpose, the AFM controller output is properly filtered via a linear compensator and then applied to a counter-actuating piezo. An effective algorithm for estimating the compensator parameters is developed. The information required for compensator design is extracted from the cantilever deflection signal, hence eliminating the need for any additional sensors. The proposed approach is implemented and experimentally evaluated on the dynamic response of a custom-made AFM. It is further assessed by comparing the imaging performance of the AFM with and without the application of the proposed technique, and in comparison with the conventional counterbalancing methodology. The experimental results substantiate the effectiveness of the method in significantly improving the imaging performance of AFM at high scan speeds. © 2011 American Institute of Physics

  11. A Comparison of Two Skip Entry Guidance Algorithms

    NASA Technical Reports Server (NTRS)

    Rea, Jeremy R.; Putnam, Zachary R.

    2007-01-01

    The Orion capsule vehicle will have a Lift-to-Drag ratio (L/D) of 0.3-0.35. For an Apollo-like direct entry into the Earth's atmosphere from a lunar return trajectory, this L/D will give the vehicle a maximum range of about 2500 nm and a maximum crossrange of 216 nm. In order to fly longer ranges, the vehicle lift must be used to loft the trajectory such that the aerodynamic forces are decreased. A Skip-Trajectory results if the vehicle leaves the sensible atmosphere and a second entry occurs downrange of the atmospheric exit point. The Orion capsule is required to have landing site access (either on land or in water) inside the Continental United States (CONUS) for lunar returns at any time during the lunar month. This requirement means the vehicle must be capable of flying ranges of at least 5500 nm. For the L/D of the vehicle, this is only possible with the use of a guided Skip-Trajectory. A skip entry guidance algorithm is necessary to achieve this requirement. Two skip entry guidance algorithms have been developed: the Numerical Skip Entry Guidance (NSEG) algorithm was developed at NASA/JSC and PredGuid was developed at Draper Laboratory. A comparison of these two algorithms is presented in this paper. Each algorithm has been implemented in a high-fidelity, 6-degree-of-freedom simulation called the Advanced NASA Technology Architecture for Exploration Studies (ANTARES). NASA and Draper engineers have completed several Monte Carlo analyses in order to compare the performance of each algorithm in various stress states. Each algorithm has been tested for entry-to-target ranges including direct entries and skip entries of varying length. Dispersions have been included on the initial entry interface state, vehicle mass properties, vehicle aerodynamics, atmosphere, and Reaction Control System (RCS). Performance criteria include miss distance to the target, RCS fuel usage, maximum g-loads and heat rates for the first and second entry, total heat load, and control

  12. Compensation for first-order polarization-mode dispersion by using a novel tunable compensator

    NASA Astrophysics Data System (ADS)

    Qiu, Feng; Ning, Tigang; Pei, Shanshan; Xing, Yujun; Jian, Shuisheng

    2005-01-01

    Polarization-related impairments have become a critical issue for high-data-rate optical systems, particularly when considering polarization-mode dispersion (PMD). Consequently, compensation of PMD, especially first-order PMD, is necessary to maintain adequate performance in long-haul systems at bit rates of 10 Gb/s or beyond. In this paper, we successfully demonstrated automatic and tunable compensation for first-order polarization-mode dispersion. Furthermore, we report a statistical assessment of this tunable compensator at 10 Gbit/s. Experimental results, including bit error rate measurements, agree well with theory, demonstrating the compensator's efficiency at 10 Gbit/s. The first-order PMD was at most 274 ps before compensation and below 7 ps after compensation.

  13. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    PubMed

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for the determination of breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed using data from a previously conducted human study. It is concluded that these features in combination significantly reduce the random error compared to the signal averaging algorithm taken alone.
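
    A minimal sketch of the dilution-compensation idea described above, under assumptions: the measured alcohol signal is averaged and then scaled by the ratio of an assumed alveolar (end-expiratory) CO2 level to the measured CO2, so that a diluted breath sample yields an undiluted estimate. The reference CO2 value, units and sensor readings are illustrative, and the paper's weighting and personalization steps are not reproduced.

        import numpy as np

        ALVEOLAR_CO2_PERCENT = 4.8   # assumed end-expiratory CO2 level used as reference

        def compensated_breath_alcohol(alcohol_samples, co2_samples):
            """Estimate breath alcohol from a diluted sample using CO2 as a dilution tracer."""
            alcohol = np.mean(alcohol_samples)        # signal averaging
            co2 = np.mean(co2_samples)
            dilution = ALVEOLAR_CO2_PERCENT / co2     # how much the breath was diluted
            return alcohol * dilution

        # A breath diluted roughly threefold still returns the undiluted estimate.
        print(compensated_breath_alcohol(np.array([0.050, 0.052, 0.049]),
                                         np.array([1.60, 1.58, 1.62])))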

  14. Relationship between Scanning Laser Polarimetry with Enhanced Corneal Compensation and with Variable Corneal Compensation

    PubMed Central

    Kim, Kyung Hoon; Choi, Jaewan; Lee, Chang Hwan; Cho, Beom-Jin; Kook, Michael S.

    2008-01-01

    Purpose To evaluate the structure-function relationships between retinal sensitivity measured by Humphrey visual field analyzer (HVFA) and the retinal nerve fiber layer (RNFL) thickness measured by scanning laser polarimetry (SLP) with variable corneal compensation (VCC) and enhanced corneal compensation (ECC) in glaucomatous and healthy eyes. Methods Fifty-three eyes with an atypical birefringence pattern (ABP) based on SLP-VCC (28 glaucomatous eyes and 25 normal healthy eyes) were enrolled in this cross-sectional study. RNFL thickness was measured by both VCC and ECC techniques, and the visual field was examined by HVFA with 24-2 full-threshold program. The relationships between RNFL measurements in superior and inferior sectors and corresponding retinal mean sensitivity were sought globally and regionally with linear regression analysis in each group. Coefficients of the determination were calculated and compared between VCC and ECC techniques. Results In eyes with ABP, R2 values for the association between SLP parameters and retinal sensitivity were 0.06-0.16 with VCC, whereas they were 0.21-0.48 with ECC. The association of RNFL thickness with retinal sensitivity was significantly better with ECC than with VCC in 5 out of 8 regression models between SLP parameters and HVF parameters (P<0.05). Conclusions The strength of the structure-function association was higher with ECC than with VCC in eyes with ABP, which suggests that the ECC algorithm is a better approach for evaluating the structure-function relationship in eyes with ABP. PMID:18323701

  15. Seasonal and interannual variations of top-of-atmosphere irradiance and cloud cover over polar regions derived from the CERES data set

    NASA Astrophysics Data System (ADS)

    Kato, Seiji; Loeb, Norman G.; Minnis, Patrick; Francis, Jennifer A.; Charlock, Thomas P.; Rutan, David A.; Clothiaux, Eugene E.; Sun-Mack, Szedung

    2006-10-01

    The daytime cloud fraction derived by the Clouds and the Earth's Radiant Energy System (CERES) cloud algorithm using Moderate Resolution Imaging Spectroradiometer (MODIS) radiances over the Arctic from March 2000 through February 2004 increases at a rate of 0.047 per decade. The trend is significant at an 80% confidence level. The corresponding top-of-atmosphere (TOA) shortwave irradiances derived from CERES radiance measurements show a less significant trend during this period. These results suggest that the influence of reduced Arctic sea ice cover on TOA reflected shortwave radiation is reduced by the presence of clouds and possibly compensated by the increase in cloud cover. The cloud fraction and TOA reflected shortwave irradiance over the Antarctic show no significant trend during the same period.

  16. Compensation Chemistry

    ERIC Educational Resources Information Center

    Roady, Celia

    2008-01-01

    Congress, the news media, and the Internal Revenue Service (IRS) continue to cast a wary eye on the compensation of nonprofit leaders. Hence, any college or university board that falls short of IRS expectations in its procedures for setting the president's compensation is putting the president, other senior officials, and board members at…

  17. Design of capacity incentive and energy compensation for demand response programs

    NASA Astrophysics Data System (ADS)

    Liu, Zhoubin; Cui, Wenqi; Shen, Ran; Hu, Yishuang; Wu, Hui; Ye, Chengjin

    2018-02-01

    Variability and uncertainties caused by renewable energy sources have called for a large amount of balancing services. Demand side resources (DSRs) can be a good alternative to traditional generating units for providing balancing services. In areas where the electricity market has not been fully established, e.g., China, DSRs can help balance the power system through incentive-based demand response programs. However, there is a lack of information about the interruption cost of consumers in these areas, making it hard to determine a rational amount of capacity incentive and energy compensation for the participants of demand response programs. This paper proposes an algorithm to calculate the amount of capacity incentive and energy compensation for demand response programs when information about interruption cost is lacking. Available statistical information on interruption cost in referenced areas is selected as the reference data. The interruption cost of the targeted area is converted from the referenced area by product per electricity consumption. On this basis, the capacity incentive and energy compensation are obtained to minimize the payment to consumers. Moreover, the loss of consumers is guaranteed to be covered by the revenue they earn from load serving entities.
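
    A toy illustration of the conversion step described above, with placeholder numbers: the interruption cost known for a referenced area is scaled by the ratio of product (economic output) per unit of electricity consumption in the targeted area to that in the referenced area, and the energy compensation is then set to at least cover the converted loss.

        # All figures are placeholders, not values from the paper.
        ref_interruption_cost = 12.0     # $/kWh interrupted, known for the referenced area
        ref_output_per_kwh = 3.0         # $ of product per kWh in the referenced area
        target_output_per_kwh = 4.5      # $ of product per kWh in the targeted area

        # Convert the interruption cost to the targeted area by product per
        # electricity consumption.
        target_interruption_cost = ref_interruption_cost * (target_output_per_kwh / ref_output_per_kwh)

        # Set the energy compensation so that consumers' losses are covered by the
        # load serving entity while the total payment is kept as small as possible.
        energy_compensation = target_interruption_cost
        print(f"Converted interruption cost: {target_interruption_cost:.2f} $/kWh")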

  18. Refractive Index Compensation in Over-Determined Interferometric Systems

    PubMed Central

    Lazar, Josef; Holá, Miroslava; Číp, Ondřej; Čížek, Martin; Hrabina, Jan; Buchta, Zdeněk

    2012-01-01

    We present an interferometric technique based on a differential interferometry setup for measurement under atmospheric conditions. The key limiting factor in any interferometric dimensional measurement are fluctuations of the refractive index of air representing a dominating source of uncertainty when evaluated indirectly from the physical parameters of the atmosphere. Our proposal is based on the concept of an over-determined interferometric setup where a reference length is derived from a mechanical frame made from a material with a very low thermal coefficient. The technique allows one to track the variations of the refractive index of air on-line directly in the line of the measuring beam and to compensate for the fluctuations. The optical setup consists of three interferometers sharing the same beam path where two measure differentially the displacement while the third evaluates the changes in the measuring range, acting as a tracking refractometer. The principle is demonstrated in an experimental setup. PMID:23202037

  19. Refractive index compensation in over-determined interferometric systems.

    PubMed

    Lazar, Josef; Holá, Miroslava; Číp, Ondřej; Čížek, Martin; Hrabina, Jan; Buchta, Zdeněk

    2012-10-19

    We present an interferometric technique based on a differential interferometry setup for measurement under atmospheric conditions. The key limiting factor in any interferometric dimensional measurement are fluctuations of the refractive index of air representing a dominating source of uncertainty when evaluated indirectly from the physical parameters of the atmosphere. Our proposal is based on the concept of an over-determined interferometric setup where a reference length is derived from a mechanical frame made from a material with a very low thermal coefficient. The technique allows one to track the variations of the refractive index of air on-line directly in the line of the measuring beam and to compensate for the fluctuations. The optical setup consists of three interferometers sharing the same beam path where two measure differentially the displacement while the third evaluates the changes in the measuring range, acting as a tracking refractometer. The principle is demonstrated in an experimental setup.

  20. Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner

    NASA Technical Reports Server (NTRS)

    Tanis, Fred J.

    1984-01-01

    A series of experiments was conducted in the Great Lakes to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite-measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band. The atmospheric correction algorithm was designed to extract the needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.

  1. Hierarchy compensation of non-homogeneous intermittent atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Redondo, Jose M.; Mahjoub, Otman B.; Cantalapiedra, Inma R.

    2010-05-01

    In this work, a study is carried out of both the internal turbulence energy cascade intermittency, evaluated from wind speed series in the atmospheric boundary layer, and the role of external or forcing intermittency based on the flatness (Vindel et al. 2008). The degree of intermittency in the stratified ABL flow (Cuxart et al. 2000) can be studied as the deviation, from the linear form, of the absolute scaling exponents of the structure functions, generalizing to non-isotropic and non-homogeneous turbulence, even in non-inertial ranges (in the Kolmogorov-Kraichnan sense) where the scaling exponents are not constant. The degree of intermittency, evaluated in the non-local quasi-inertial range, is explained from the variation with scale of the energy transfer as well as the dissipation. The scale-to-scale transfer and the structure function scaling exponents are calculated, and from these the intermittency parameters. The turbulent diffusivity could also be estimated and compared with Richardson's law. Two-point correlations and time-lag calculations are used to investigate the temporal and spatial integral length scales obtained from both Lagrangian and Eulerian correlations and functions, and these results are compared with both theoretical and laboratory data. We develop a theoretical description of how to measure the different levels of intermittency following (Mahjoub et al. 1998, 2000) and the role of locality in higher-order exponents of structure function analysis. Vindel J.M., Yague C. and Redondo J.M. (2008) Structure function analysis and intermittency in the ABL. Nonlin. Processes Geophys., 15, 915-929. Cuxart J, Yague C, Morales G, Terradellas E, Orbe J, Calvo J, Fernández A, Soler M R, Infante C, Buenestado P, Espinalt A, Joergensen H E, Rees J M, Vilá J, Redondo J M, Cantalapiedra R and Conangla L (2000): Stable atmospheric boundary-layer experiment in Spain (Sables 98): a report, Boundary-Layer Meteorology 96, 337-370 Mahjoub O

  2. Whiplash and the compensation hypothesis.

    PubMed

    Spearing, Natalie M; Connelly, Luke B

    2011-12-01

    Review article. To explain why the evidence that compensation-related factors lead to worse health outcomes is not compelling, either in general, or in the specific case of whiplash. There is a common view that compensation-related factors lead to worse health outcomes ("the compensation hypothesis"), despite the presence of important, and unresolved sources of bias. The empirical evidence on this question has ramifications for the design of compensation schemes. Using studies on whiplash, this article outlines the methodological problems that impede attempts to confirm or refute the compensation hypothesis. Compensation studies are prone to measurement bias, reverse causation bias, and selection bias. Errors in measurement are largely due to the latent nature of whiplash injuries and health itself, a lack of clarity over the unit of measurement (specific factors, or "compensation"), and a lack of appreciation for the heterogeneous qualities of compensation-related factors and schemes. There has been a failure to acknowledge and empirically address reverse causation bias, or the likelihood that poor health influences the decision to pursue compensation: it is unclear if compensation is a cause or a consequence of poor health, or both. Finally, unresolved selection bias (and hence, confounding) is evident in longitudinal studies and natural experiments. In both cases, between-group differences have not been addressed convincingly. The nature of the relationship between compensation-related factors and health is unclear. Current approaches to testing the compensation hypothesis are prone to several important sources of bias, which compromise the validity of their results. Methods that explicitly test the hypothesis and establish whether or not a causal relationship exists between compensation factors and prolonged whiplash symptoms are needed in future studies.

  3. Atmospheric infrared sounder

    NASA Technical Reports Server (NTRS)

    Rosenkranz, Philip, W.; Staelin, David, H.

    1995-01-01

    This report summarizes the activities of two Atmospheric Infrared Sounder (AIRS) team members during the first half of 1995. Changes to the microwave first-guess algorithm have separated processing of Advanced Microwave Sounding Unit A (AMSU-A) from AMSU-B data so that the different spatial resolutions of the two instruments may eventually be considered. Two-layer cloud simulation data was processed with this algorithm. The retrieved water vapor column densities and liquid water are compared. The information content of AIRS data was applied to AMSU temperature profile retrievals in clear and cloudy atmospheres. The significance of this study for AIRS/AMSU processing lies in the improvement attributable to spatial averaging and in the good results obtained with a very simple algorithm when all of the channels are used. Uncertainty about the availability of either a Microwave Humidity Sensor (MHS) or AMSU-B for EOS has motivated consideration of possible low-cost alternative designs for a microwave humidity sensor. One possible configuration would have two local oscillators (compared to three for MHS) at 118.75 and 183.31 GHz. Retrieval performances of the two instruments were compared in a memorandum titled 'Comparative Analysis of Alternative MHS Configurations', which is attached.

  4. Pattern Recognition Application of Support Vector Machine for Fault Classification of Thyristor Controlled Series Compensated Transmission Lines

    NASA Astrophysics Data System (ADS)

    Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup

    2016-06-01

    The application of series compensation in extra high voltage (EHV) transmission lines makes the protection task difficult for engineers, due to alterations in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults, associated with the TCSC control. This paper presents a pattern recognition based fault-type identification approach using a support vector machine. The scheme uses only half a cycle of post-fault data of the three phase currents to accomplish the task. The change in current signal features during a fault has been considered as the discriminatory measure. The scheme developed in this paper is tested over a large set of fault data with variation in system and fault parameters. These fault cases have been generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The developed algorithm has proved well suited for implementation on TCSC-compensated lines owing to its improved accuracy and speed.

  5. Improved Determination of Surface and Atmospheric Temperatures Using Only Shortwave AIRS Channels: The AIRS Version 6 Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena

    2010-01-01

    AIRS was launched on EOS Aqua on May 4, 2002 together with AMSU-A and HSB to form a next generation polar orbiting infrared and microwave atmospheric sounding system (Pagano et al 2003). The theoretical approach used to analyze AIRS/AMSU/HSB data in the presence of clouds in the AIRS Science Team Version 3 at-launch algorithm, and that used in the Version 4 post-launch algorithm, have been published previously. Significant theoretical and practical improvements have been made in the analysis of AIRS/AMSU data since the Version 4 algorithm. Most of these have already been incorporated in the AIRS Science Team Version 5 algorithm (Susskind et al 2010), now being used operationally at the Goddard DISC. The AIRS Version 5 retrieval algorithm contains three significant improvements over Version 4. Improved physics in Version 5 allowed for use of AIRS clear column radiances (R(sub i)) in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations were used primarily in the generation of clear column radiances (R(sub i)) for all channels. This new approach allowed for the generation of accurate Quality Controlled values of R(sub i) and T(p) under more stressing cloud conditions. Secondly, Version 5 contained a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 contained for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Susskind et al 2010 show that Version 5 AIRS Only soundings are only slightly degraded from the AIRS/AMSU soundings, even at large fractional cloud

  6. Motion Compensation in Extremity Cone-Beam CT Using a Penalized Image Sharpness Criterion

    PubMed Central

    Sisniega, A.; Stayman, J. W.; Yorkston, J.; Siewerdsen, J. H.; Zbijewski, W.

    2017-01-01

    Cone-beam CT (CBCT) for musculoskeletal imaging would benefit from a method to reduce the effects of involuntary patient motion. In particular, the continuing improvement in spatial resolution of CBCT may enable tasks such as quantitative assessment of bone microarchitecture (0.1 mm – 0.2 mm detail size), where even subtle, sub-mm motion blur might be detrimental. We propose a purely image based motion compensation method that requires no fiducials, tracking hardware or prior images. A statistical optimization algorithm (CMA-ES) is used to estimate a motion trajectory that optimizes an objective function consisting of an image sharpness criterion augmented by a regularization term that encourages smooth motion trajectories. The objective function is evaluated using a volume of interest (VOI, e.g. a single bone and surrounding area) where the motion can be assumed to be rigid. More complex motions can be addressed by using multiple VOIs. Gradient variance was found to be a suitable sharpness metric for this application. The performance of the compensation algorithm was evaluated in simulated and experimental CBCT data, and in a clinical dataset. Motion-induced artifacts and blurring were significantly reduced across a broad range of motion amplitudes, from 0.5 mm to 10 mm. Structure Similarity Index (SSIM) against a static volume was used in the simulation studies to quantify the performance of the motion compensation. In studies with translational motion, the SSIM improved from 0.86 before compensation to 0.97 after compensation for 0.5 mm motion, from 0.8 to 0.94 for 2 mm motion and from 0.52 to 0.87 for 10 mm motion (~70% increase). Similar reduction of artifacts was observed in a benchtop experiment with controlled translational motion of an anthropomorphic hand phantom, where SSIM (against a reconstruction of a static phantom) improved from 0.3 to 0.8 for 10 mm motion. Application to a clinical dataset of a lower extremity showed dramatic reduction of streaks
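
    A sketch of the kind of penalized sharpness objective described above, under assumptions: gradient variance serves as the sharpness metric over a VOI, a quadratic roughness term discourages jerky motion trajectories, and the motion-compensated reconstruction routine is a placeholder to be supplied to a black-box optimizer such as CMA-ES.

        import numpy as np

        def gradient_variance(volume):
            """Sharpness metric: variance of the gradient magnitude of a 3-D volume."""
            gx, gy, gz = np.gradient(volume.astype(float))
            gmag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
            return gmag.var()

        def objective(motion_params, reconstruct_voi, beta=1e-2):
            """Penalized sharpness objective to be minimized by a black-box optimizer.

            motion_params:   flattened per-view rigid motion (assumed 6 DoF per view)
            reconstruct_voi: placeholder for a motion-compensated VOI reconstruction routine
            """
            volume = reconstruct_voi(motion_params)
            # Regularization term that encourages smooth motion trajectories.
            roughness = np.sum(np.diff(motion_params.reshape(-1, 6), axis=0) ** 2)
            return -gradient_variance(volume) + beta * roughness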

  7. Entry vehicle performance analysis and atmospheric guidance algorithm for precision landing on Mars. M.S. Thesis - Massachusetts Inst. of Technology

    NASA Technical Reports Server (NTRS)

    Dieriam, Todd A.

    1990-01-01

    Future missions to Mars may require pin-point landing precision, possibly on the order of tens of meters. The ability to reach a target while meeting a dynamic pressure constraint to ensure safe parachute deployment is complicated at Mars by low atmospheric density, high atmospheric uncertainty, and the desire to employ only bank angle control. The vehicle aerodynamic performance requirements and guidance necessary for 0.5 to 1.5 lift drag ratio vehicle to maximize the achievable footprint while meeting the constraints are examined. A parametric study of the various factors related to entry vehicle performance in the Mars environment is undertaken to develop general vehicle aerodynamic design requirements. The combination of low lift drag ratio and low atmospheric density at Mars result in a large phugoid motion involving the dynamic pressure which complicates trajectory control. Vehicle ballistic coefficient is demonstrated to be the predominant characteristic affecting final dynamic pressure. Additionally, a speed brake is shown to be ineffective at reducing the final dynamic pressure. An adaptive precision entry atmospheric guidance scheme is presented. The guidance uses a numeric predictor-corrector algorithm to control downrange, an azimuth controller to govern crossrange, and analytic control law to reduce the final dynamic pressure. Guidance performance is tested against a variety of dispersions, and the results from selected tests are presented. Precision entry using bank angle control only is demonstrated to be feasible at Mars.

  8. An 'adding' algorithm for the Markov chain formalism for radiation transfer

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.

    1979-01-01

    An adding algorithm is presented, that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.

  9. Five-dimensional motion compensation for respiratory and cardiac motion with cone-beam CT of the thorax region

    NASA Astrophysics Data System (ADS)

    Sauppe, Sebastian; Hahn, Andreas; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc

    2016-03-01

    We propose an adapted version of our previously published five-dimensional (5D) motion compensation (MoCo) algorithm, developed for micro-CT imaging of small animals, to provide for the first time motion artifact-free 5D cone-beam CT (CBCT) images from a conventional flat detector-based CBCT scan of clinical patients. Image quality of retrospectively respiratory- and cardiac-gated volumes from flat detector CBCT scans is deteriorated by severe sparse-projection artifacts. These artifacts further complicate motion estimation, which is required for MoCo image reconstruction. For high-quality 5D CBCT images at the same x-ray dose and the same number of projections as today's 3D CBCT, we developed a double MoCo approach based on motion vector fields (MVFs) for respiratory and cardiac motion. In a first step, our previously published four-dimensional (4D) artifact-specific cyclic motion-compensation (acMoCo) approach is applied to compensate for the respiratory patient motion. With this information, a cyclic phase-gated deformable heart registration algorithm is applied to the respiratory motion-compensated 4D CBCT data, resulting in cardiac MVFs. We apply these MVFs to double-gated images, thereby obtaining respiratory and cardiac motion-compensated 5D CBCT images. Our 5D MoCo approach was applied to patient data acquired with the TrueBeam 4D CBCT system (Varian Medical Systems). The double MoCo approach turned out to be very efficient and removed nearly all streak artifacts because it makes use of 100% of the projection data for each reconstructed frame. The 5D MoCo patient data show fine details and no motion blurring, even in regions close to the heart where motion is fastest.

  10. Seismic random noise removal by delay-compensation time-frequency peak filtering

    NASA Astrophysics Data System (ADS)

    Yu, Pengjun; Li, Yue; Lin, Hongbo; Wu, Ning

    2017-06-01

    Over the past decade, there has been increasing interest in time-frequency peak filtering (TFPF) due to its outstanding performance in suppressing non-stationary and strong seismic random noise. The traditional approach, based on time-windowing, achieves local linearity and meets the unbiased estimation condition. However, traditional TFPF (including improved variants with alterable window lengths) can hardly resolve the contradiction between removing noise and recovering the seismic signal, and this is most evident at wave crests and troughs, even for alterable window lengths (WL). To improve the efficiency of the algorithm, TFPF has subsequently been applied in the time-space domain, for example in the Radon domain and the radial-trace domain. The time-space transforms provide a reduced-frequency input that lowers the TFPF error and stretches the desired signal along a certain direction, so the time-space development brings an improvement by both enhancing reflection events and attenuating noise. It still proves limited in application because the direction must be matched as a straight line or quadratic curve. As a result, waveform distortion and false seismic events may appear when processing records of complex strata. The main emphasis in this article is placed on extending the applicability of time-space TFPF. The reconstructed signal in delay-compensation TFPF, which is generated according to the similarity among the reflection events, overcomes the limitation of fitting the direction to a curve. Moreover, the reconstructed signal meets the TFPF linearity unbiased-estimation condition and integrates signal preservation with noise attenuation. Experiments on both a synthetic model and field data indicate that delay-compensation TFPF performs better than conventional filtering algorithms.

  11. Digital watermarking algorithm research of color images based on quaternion Fourier transform

    NASA Astrophysics Data System (ADS)

    An, Mali; Wang, Weijiang; Zhao, Zhen

    2013-10-01

    A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by the QFFT, the watermark image is processed by compression and quantization coding, and then the processed watermark image is embedded into the components of the transformed original image. The scheme achieves embedding and blind extraction of the watermark image. The experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image enhancement attacks than the traditional QIM algorithm.
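
    For context, a minimal sketch of conventional quantization index modulation on real-valued transform coefficients; the quaternion Fourier transform and the paper's distortion-compensated variant are not reproduced, and the quantization step is an illustrative choice.

        import numpy as np

        def qim_embed(coeffs, bits, delta=8.0):
            """Embed one bit per coefficient by quantizing onto one of two offset lattices."""
            coeffs = np.asarray(coeffs, dtype=float)
            bits = np.asarray(bits)
            q = np.round((coeffs - bits * delta / 2.0) / delta)
            return q * delta + bits * delta / 2.0

        def qim_extract(coeffs, delta=8.0):
            """Recover bits by checking which lattice each coefficient lies closer to."""
            coeffs = np.asarray(coeffs, dtype=float)
            d0 = np.abs(coeffs - np.round(coeffs / delta) * delta)
            d1 = np.abs(coeffs - (np.round((coeffs - delta / 2.0) / delta) * delta + delta / 2.0))
            return (d1 < d0).astype(int)

        bits = np.array([1, 0, 1, 1, 0])
        marked = qim_embed(np.array([13.2, -4.7, 25.1, 0.3, 9.9]), bits)
        assert np.array_equal(qim_extract(marked), bits)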

  12. Correcting Satellite Image Derived Surface Model for Atmospheric Effects

    NASA Technical Reports Server (NTRS)

    Emery, William; Baldwin, Daniel

    1998-01-01

    This project was a continuation of the project entitled "Resolution Earth Surface Features from Repeat Moderate Resolution Satellite Imagery". In the previous study, a Bayesian Maximum Posterior Estimate (BMPE) algorithm was used to obtain a composite series of repeat imagery from the Advanced Very High Resolution Radiometer (AVHRR). The spatial resolution of the resulting composite was significantly greater than the 1 km resolution of the individual AVHRR images. The BMPE algorithm utilized a simple, no-atmosphere geometrical model for the shortwave radiation budget at the Earth's surface. A necessary assumption of the algorithm is that all non-geometrical parameters remain static over the compositing period. This assumption is of course violated by temporal variations in both the surface albedo and the atmospheric medium. The effect of the albedo variations is expected to be minimal since these variations occur on a fairly long time scale compared to the compositing period; however, the atmospheric variability occurs on a relatively short time scale and can be expected to cause significant errors in the surface reconstruction. The current project proposed to incorporate an atmospheric correction into the BMPE algorithm for the purpose of investigating the effects of a variable atmosphere on the surface reconstructions. Once the atmospheric effects were determined, the investigation could be extended to include corrections for various cloud effects, including shortwave radiation through thin cirrus clouds. The original proposal was written for a three-year project, funded one year at a time. The first year of the project focused on developing an understanding of atmospheric corrections and choosing an appropriate correction model. Several models were considered and the list was narrowed to the two best suited. These were the 5S and 6S shortwave radiation models developed at NASA/GODDARD and tested extensively with data from the AVHRR instrument. Although the 6S model

  13. Anisotropic Scattering Shadow Compensation Method for Remote Sensing Image with Consideration of Terrain

    NASA Astrophysics Data System (ADS)

    Wang, Qiongjie; Yan, Li

    2016-06-01

    With the rapid development of sensor networks and earth observation technology, a large quantity of high-resolution remote sensing data is available. However, the influence of shadow has become increasingly significant, because the higher resolution reveals more complex and detailed land cover, especially under shadow. Shadow areas usually have lower intensity and fuzzy boundaries, which make the images hard to interpret automatically. In this paper, a simple and effective shadow (including soft shadow) detection and compensation method is proposed based on normal data, a Digital Elevation Model (DEM) and the sun position. First, we use a high-accuracy DEM and the sun position to rebuild the geometric relationship between the surface and the sun at the time the image was acquired, and obtain the hard shadow boundary and the sky view factor (SVF) of each pixel. An anisotropic scattering assumption is adopted to determine the soft shadow factor, which is mainly affected by diffuse radiation. Finally, a simple radiation transmission model is used to compensate the shadow area. Compared with spectral detection methods, our detection method has a strict theoretical basis, gives reliable compensation results, and is less affected by image quality. The compensation strategy can effectively improve the radiation intensity of shadow areas, reduce the information loss caused by shadow and improve the robustness and efficiency of classification algorithms.
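
    A much-simplified sketch of the compensation step described above, under an assumed two-component irradiance model: a sunlit pixel receives direct plus full diffuse irradiance, while a shadowed pixel receives only the SVF-weighted diffuse part, so the shadowed pixel is rescaled by the ratio of the two. The irradiance values are placeholders and the paper's anisotropic-scattering soft-shadow factor is not reproduced.

        def compensate_shadow(dn, svf, e_direct, e_diffuse):
            """Brighten a shadowed pixel value under a simple radiation-transmission model."""
            sunlit_irradiance = e_direct + e_diffuse    # what a sunlit pixel receives
            shadow_irradiance = svf * e_diffuse         # sky-view-factor weighted diffuse only
            return dn * sunlit_irradiance / shadow_irradiance

        # Placeholder numbers: a shadowed pixel of DN 40 with SVF 0.7.
        print(compensate_shadow(dn=40.0, svf=0.7, e_direct=800.0, e_diffuse=120.0))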

  14. Restoration algorithms for imaging through atmospheric turbulence

    DTIC Science & Technology

    2017-02-18

    the Fourier spectrum of each frame. The reconstructed image is then obtained by taking the inverse Fourier transform of the average of all processed ... with weights $w_i(\xi) = G_\sigma(|\mathcal{F}(v_i)(\xi)|^p) / \sum_{j=1}^{M} G_\sigma(|\mathcal{F}(v_j)(\xi)|^p)$, where $\mathcal{F}$ denotes the Fourier transform ($\xi$ are the frequencies) and $G_\sigma$ is a Gaussian filter of ... a combination of SIFT [26] and ORSA [14] algorithms) in order to remove affine transformations (translations, rotations and homothety). The authors

  15. Atmospheric Models for Aerocapture

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Duvall, Aleta L.; Keller, Vernon W.

    2004-01-01

    There are eight destinations in the Solar System with sufficient atmosphere for aerocapture to be a viable aeroassist option - Venus, Earth, Mars, Jupiter, Saturn and its moon Titan, Uranus, and Neptune. Engineering-level atmospheric models for four of these targets (Earth, Mars, Titan, and Neptune) have been developed for NASA to support systems analysis studies of potential future aerocapture missions. Development of a similar atmospheric model for Venus has recently commenced. An important capability of all of these models is their ability to simulate quasi-random density perturbations for Monte Carlo analyses in developing guidance, navigation and control algorithms, and for thermal systems design. Similarities and differences among these atmospheric models are presented, with emphasis on the recently developed Neptune model and on planned characteristics of the Venus model. Example applications for aerocapture are also presented and illustrated. Recent updates to the Titan atmospheric model are discussed, in anticipation of applications for trajectory and atmospheric reconstruction of the Huygens Probe entry at Titan.

  16. Closed-loop endo-atmospheric ascent guidance for reusable launch vehicle

    NASA Astrophysics Data System (ADS)

    Sun, Hongsheng

    This dissertation focuses on the development of a closed-loop endo-atmospheric ascent guidance algorithm for the 2nd generation reusable launch vehicle. Special attention has been given to the issues that affect viability, complexity and reliability in on-board implementation. The algorithm is called once every guidance update cycle to recalculate the optimal solution based on the current flight condition, taking into account atmospheric effects and path constraints. This differs from traditional ascent guidance algorithms, which operate in a simple open-loop mode inside the atmosphere and later switch to a closed-loop vacuum ascent guidance scheme. The classical finite difference method is shown to be well suited for fast solution of the constrained optimal three-dimensional ascent problem. The initial guesses for the solutions are generated using an analytical vacuum optimal ascent guidance algorithm. A homotopy method is employed to gradually introduce the aerodynamic forces and thereby generate the optimal solution from the optimal vacuum solution. The vehicle chosen for this study is the Lockheed Martin X-33 lifting-body reusable launch vehicle. To verify the algorithm presented in this dissertation, a series of open-loop and closed-loop tests was performed for three different missions. Wind effects were also studied in the closed-loop simulations. For comparison, the solutions for the same missions were also obtained by two independent optimization codes. The results clearly establish the feasibility of closed-loop endo-atmospheric ascent guidance of rocket-powered launch vehicles. ATO cases were also tested to assess the adaptability of the algorithm to autonomously incorporate the abort modes.

  17. Nonlinear Decoupling Control With ANFIS-Based Unmodeled Dynamics Compensation for a Class of Complex Industrial Processes.

    PubMed

    Zhang, Yajun; Chai, Tianyou; Wang, Hong; Wang, Dianhui; Chen, Xinkai

    2018-06-01

    Complex industrial processes are multivariable and generally exhibit strong coupling among their control loops together with heavily nonlinear behavior. These properties make it very difficult to obtain an accurate model, so conventional and data-driven control methods are difficult to apply. Using a twin-tank level control system as an example, a novel multivariable decoupling control algorithm with adaptive neural-fuzzy inference system (ANFIS)-based unmodeled dynamics (UD) compensation is proposed in this paper for a class of complex industrial processes. First, a nonlinear multivariable decoupling controller with UD compensation is introduced. Different from existing methods, a decomposition estimation algorithm using ANFIS is employed to estimate the UD, and the desired estimation and decoupling control effects are achieved. Second, the proposed method does not require the complicated switching mechanism that has been commonly used in the literature. This significantly simplifies the obtained decoupling algorithm and its realization. Third, based on some new lemmas and theorems, conditions for the stability and convergence of the closed-loop system are analyzed to show the uniform boundedness of all the variables. This is followed by a summary of experimental tests on a heavily coupled nonlinear twin-tank system that demonstrate the effectiveness and practicability of the proposed method.

  18. The effect of static cyclotorsion compensation on refractive and visual outcomes using the Schwind Amaris laser platform for the correction of high astigmatism.

    PubMed

    Aslanides, Ioannis M; Toliou, Georgia; Padroni, Sara; Arba Mosquera, Samuel; Kolli, Sai

    2011-06-01

    To compare the refractive and visual outcomes obtained with the Schwind Amaris excimer laser in patients with high astigmatism (>1 D) with and without the static cyclotorsion compensation (SCC) algorithm available with this new laser platform. 70 consecutive eyes with ≥1 D of astigmatism were randomized to treatment with compensation of static cyclotorsion (SCC group, 35 eyes) or without it (control group, 35 eyes). A previously validated optimized aspheric ablation algorithm profile was used in every case. All patients underwent LASIK with a microkeratome-cut flap. The SCC and control groups did not differ preoperatively in terms of refractive error, magnitude of astigmatism, or cardinal or oblique astigmatism. Following treatment, the average deviation from target was SEq +0.16 D (SD ±0.52 D, range -0.98 D to +1.71 D) in the SCC group compared to +0.46 D (SD ±0.61 D, range -0.25 D to +2.35 D) in the control group, which was statistically significant (p<0.05). Following treatment, the average astigmatism was 0.24 D (SD ±0.28 D, range -1.01 D to 0.00 D) in the SCC group compared to 0.46 D (SD ±0.42 D, range -1.80 D to 0.00 D) in the control group, which was highly statistically significant (p<0.005). There was no statistical difference in postoperative uncorrected vision when the aspheric algorithm was used, although there was a trend towards a greater number of lines gained in the SCC group. This study shows that static cyclotorsion is accurately compensated for by the Schwind Amaris laser platform. Compensation of static cyclotorsion in patients with moderate astigmatism produces significantly better refractive and astigmatic outcomes than no compensation. Copyright © 2011 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  19. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

    The problem of assimilating contact concentration measurement data is considered for convection-diffusion-reaction models originating from atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimizer of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure of the resulting analysis and reduces the need to calculate model error covariance matrices that are sought within the conventional approach to data assimilation. The advantage comes at the cost of solving an adjoint problem. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate is an upper bound acts as the assimilation parameter. The solution obtained can be used as the initial guess for data assimilation algorithms that assimilate
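
    A toy single-step sketch of the variational idea described above, under strong simplifications: the model state enters as a Tikhonov-style background, the control acts additively on the state rather than through a source term propagated by a convection-diffusion-reaction operator, and the regularization parameter is fixed instead of being chosen by the Morozov discrepancy principle.

        import numpy as np

        def assimilate_step(x_model, H, y, alpha):
            """Find the control r minimizing alpha*||r||^2 + ||H(x_model + r) - y||^2."""
            n = x_model.size
            misfit = y - H @ x_model
            # Normal equations: (H^T H + alpha I) r = H^T misfit
            r = np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ misfit)
            return x_model + r

        # Three grid nodes, contact (point) measurements at the first and third node.
        x_model = np.array([1.0, 2.0, 3.0])
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0]])
        print(assimilate_step(x_model, H, y=np.array([1.4, 2.5]), alpha=0.5))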

  20. Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.

    PubMed

    Hibino, Kenichi; Kim, Yangjin

    2016-08-10

    In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error.
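
    As a rough numerical illustration of the two error components discussed above (this is not the paper's Z-transform momentum analysis), the sketch below applies the conventional 4-step algorithm to fringes whose phase shifts contain an assumed quadratic nonlinearity and splits the resulting phase error into its spatially uniform (mean) and spatially varying parts.

        import numpy as np

        def four_step_phase(frames):
            """Standard 4-step phase-shifting estimator (shifts of 0, 90, 180, 270 degrees)."""
            i1, i2, i3, i4 = frames
            return np.arctan2(i4 - i2, i1 - i3)

        # Simulate a quadratic phase-shift nonlinearity and measure the resulting error.
        true_phase = np.linspace(-np.pi, np.pi, 721, endpoint=False)
        nominal_shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
        eps = 0.05                                    # assumed nonlinearity coefficient
        actual_shifts = nominal_shifts + eps * nominal_shifts ** 2 / np.pi

        frames = [1.0 + 0.8 * np.cos(true_phase + s) for s in actual_shifts]
        err = np.angle(np.exp(1j * (four_step_phase(frames) - true_phase)))

        print(f"spatially uniform error : {err.mean():+.4f} rad")
        print(f"spatially varying error : {err.std():.4f} rad (rms)")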

  1. Middle atmosphere project: A radiative heating and cooling algorithm for a numerical model of the large scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Wehrbein, W. M.; Leovy, C. B.

    1981-01-01

    A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as is cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.

  2. A new root-based direction-finding algorithm

    NASA Astrophysics Data System (ADS)

    Wasylkiwskyj, Wasyl; Kopriva, Ivica; DoroslovačKi, Miloš; Zaghloul, Amir I.

    2007-04-01

    Polynomial-rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements, provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC (RM)), wherein the estimation of the directions of arrival (DOA) requires the computation of the roots of a (2N-2)-order polynomial, where N is the number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L is the number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.
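
    A compact sketch of conventional Root-MUSIC for a uniform linear array, for comparison with the modified root polynomial approach summarized above (which is not reproduced here); the array size, source angles, SNR and snapshot count are illustrative.

        import numpy as np

        def root_music(X, n_sources, d_over_lambda=0.5):
            """Conventional Root-MUSIC DOA estimation for an N-element uniform linear array."""
            N = X.shape[0]
            R = X @ X.conj().T / X.shape[1]                # sample covariance matrix
            _, vecs = np.linalg.eigh(R)                    # eigenvalues in ascending order
            En = vecs[:, : N - n_sources]                  # noise subspace
            C = En @ En.conj().T
            # Coefficients of the (2N-2)-order root polynomial: diagonal sums of C.
            coeffs = np.array([np.trace(C, offset=k) for k in range(N - 1, -N, -1)])
            roots = np.roots(coeffs)
            roots = roots[np.abs(roots) < 1.0]             # keep the roots inside the unit circle
            closest = roots[np.argsort(np.abs(np.abs(roots) - 1.0))[:n_sources]]
            return np.degrees(np.arcsin(np.angle(closest) / (2.0 * np.pi * d_over_lambda)))

        # Two sources at -20 and +35 degrees seen by an 8-element half-wavelength-spaced array.
        rng = np.random.default_rng(0)
        N, snapshots = 8, 400
        doas = np.radians([-20.0, 35.0])
        A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(doas)))
        S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
        noise = 0.1 * (rng.standard_normal((N, snapshots)) + 1j * rng.standard_normal((N, snapshots)))
        print(np.sort(root_music(A @ S + noise, n_sources=2)))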

  3. No-fault compensation in New Zealand: harmonizing injury compensation, provider accountability, and patient safety.

    PubMed

    Bismark, Marie; Paterson, Ron

    2006-01-01

    In 1974 New Zealand jettisoned a tort-based system for compensating medical injuries in favor of a government-funded compensation system. Although the system retained some residual fault elements, it essentially barred medical malpractice litigation. Reforms in 2005 expanded eligibility for compensation to all "treatment injuries," creating a true no-fault compensation system. Compared with a medical malpractice system, the New Zealand system offers more-timely compensation to a greater number of injured patients and more-effective processes for complaint resolution and provider accountability. The unfinished business lies in realizing its full potential for improving patient safety.

  4. Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Zuo, Chao; Tao, Tianyang; Hu, Yan; Zhang, Minliang; Chen, Qian; Gu, Guohua

    2018-04-01

    Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurements. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even though a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove the artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the N-step phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy by more than 95% for objects in motion.
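
    For reference, the standard N-step least-squares phase estimator that phase-shifting profilometry builds on (the motion-compensation, unwrapping-correction and outlier-removal steps of the paper are not reproduced); the fringe parameters in the example are illustrative.

        import numpy as np

        def n_step_phase(frames):
            """Wrapped phase from N fringe images captured with shifts 2*pi*k/N, k = 0..N-1."""
            frames = np.asarray(frames, dtype=float)
            shifts = 2.0 * np.pi * np.arange(frames.shape[0]) / frames.shape[0]
            num = np.tensordot(np.sin(shifts), frames, axes=1)
            den = np.tensordot(np.cos(shifts), frames, axes=1)
            return -np.arctan2(num, den)

        # Synthetic 4-step example on a horizontal phase ramp.
        phi = np.linspace(0.0, 2.0 * np.pi, 256)[None, :] * np.ones((128, 1))
        frames = [128.0 + 100.0 * np.cos(phi + 2.0 * np.pi * k / 4.0) for k in range(4)]
        recovered = n_step_phase(frames)
        print(np.allclose(np.angle(np.exp(1j * (recovered - phi))), 0.0, atol=1e-9))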

  5. Atmospheric Models for Aerocapture

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Duval, Aleta; Keller, Vernon W.

    2003-01-01

    There are eight destinations in the Solar System with sufficient atmosphere for aerocapture to be a viable aeroassist option - Venus, Earth, Mars, Jupiter, Saturn and its moon Titan, Uranus, and Neptune. Engineering-level atmospheric models for four of these targets (Earth, Mars, Titan, and Neptune) have been developed for NASA to support systems analysis studies of potential future aerocapture missions. Development of a similar atmospheric model for Venus has recently commenced. An important capability of all of these models is their ability to simulate quasi-random density perturbations for Monte Carlo analyses in developing guidance, navigation and control algorithms, and for thermal systems design. Similarities and differences among these atmospheric models are presented, with emphasis on the recently developed Neptune model and on planned characteristics of the Venus model. Example applications for aerocapture are also presented and illustrated. Recent updates to the Titan atmospheric model, in anticipation of applications for trajectory and atmospheric reconstruction of the Huygens Probe entry at Titan, are discussed. Recent updates to the Mars atmospheric model, in support of ongoing Mars aerocapture systems analysis studies, are also presented.

  6. Technical Note: Modification of the standard gain correction algorithm to compensate for the number of used reference flat frames in detector performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.

    2011-12-15

    Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the propagated noise from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames used on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that using the suggested gain correction algorithm, a minimum number of reference flat frames (i.e., down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would have by using the conventional method and a very large number of frames and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
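
    A minimal sketch of the conventional "gold standard" gain correction that the technical note modifies, assuming M reference flat frames stacked along the first axis (the compensated variant proposed in the paper is not reproduced here):

    ```python
    import numpy as np

    def gain_correct(raw, flats):
        """Standard flat-field gain correction using the average of M reference flat frames."""
        flat_avg = np.mean(flats, axis=0)               # pixel-wise average flat (M frames)
        return raw * (flat_avg.mean() / flat_avg)       # normalise out the fixed-pattern gain
    ```

    As the abstract notes, the residual noise of this correction depends on M, which is exactly the dependence the modified algorithm removes.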

  7. Towards frameless maskless SRS through real-time 6DoF robotic motion compensation

    NASA Astrophysics Data System (ADS)

    Belcher, Andrew H.; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D.

    2017-12-01

    Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient’s skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system’s effectiveness in maintaining the target’s 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system’s effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. The system’s success with volunteers has demonstrated its capability for implementation with frameless and

  8. Towards frameless maskless SRS through real-time 6DoF robotic motion compensation.

    PubMed

    Belcher, Andrew H; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D

    2017-11-13

    Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient's skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system's effectiveness in maintaining the target's 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system's effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. The system's success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS

  9. Compensation seeking and disability after injury: the role of compensation-related stress and mental health.

    PubMed

    O'Donnell, Meaghan L; Grant, Genevieve; Alkemade, Nathan; Spittal, Matthew; Creamer, Mark; Silove, Derrick; McFarlane, Alexander; Bryant, Richard A; Forbes, David; Studdert, David M

    2015-08-01

    Claiming for compensation after injury is associated with poor health outcomes. This study examined the degree to which compensation-related stress predicts long-term disability and the mental health factors that contribute to this relationship. In a longitudinal, multisite cohort study, 332 injury patients (who claimed for compensation) recruited from April 2004 to February 2006 were assessed during hospitalization and at 3 and 72 months after injury. Posttraumatic stress, depression, and anxiety symptoms (using the Mini-International Neuropsychiatric Interview) were assessed at 3 months; compensation-related stress and disability levels (using the World Health Organization Disability Assessment Schedule II) were assessed at 72 months. A significant direct relationship was found between levels of compensation-related stress and levels of long-term disability (β = 0.35, P < .001). Three-month posttraumatic stress symptoms had a significant relationship with compensation-related stress (β = 0.29, P < .001) as did 3-month depression symptoms (β = 0.39, P < .001), but 3-month anxiety symptoms did not. A significant indirect relationship was found for posttraumatic stress symptoms and disability via compensation stress (β = 0.099, P = .001) and for depression and disability via compensation stress (β = 0.136, P < .001). Stress associated with seeking compensation is significantly related to long-term disability. Posttraumatic stress and depression symptoms increase the perception of stress associated with the claims process, which in turn is related to higher levels of long-term disability. Early interventions targeting those at risk for compensation-related stress may decrease long-term costs for compensation schemes. © Copyright 2015 Physicians Postgraduate Press, Inc.

  10. Analysis and compensation of an aircraft simulator control loading system with compliant linkage. [using hydraulic equipment

    NASA Technical Reports Server (NTRS)

    Johnson, P. R.; Bardusch, R. E.

    1974-01-01

    A hydraulic control loading system for aircraft simulation was analyzed to find the causes of undesirable low frequency oscillations and loading effects in the output. The hypothesis of mechanical compliance in the control linkage was substantiated by comparing the behavior of a mathematical model of the system with previously obtained experimental data. A compensation scheme based on the minimum integral of the squared difference between desired and actual output was shown to be effective in reducing the undesirable output effects. The structure of the proposed compensation was computed by use of a dynamic programming algorithm and a linear state space model of the fixed elements in the system.

  11. Coastal Zone Color Scanner atmospheric correction - Influence of El Chichon

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1988-01-01

    The addition of an El Chichon-like aerosol layer in the stratosphere is shown to have very little effect on the basic CZCS atmospheric correction algorithm. The additional stratospheric aerosol is found to increase the total radiance exiting the atmosphere, thereby increasing the probability that the sensor will saturate. It is suggested that in the absence of saturation the correction algorithm should perform as well as in the absence of the stratospheric layer.

  12. An Improved Algorithm for Retrieving Surface Downwelling Longwave Radiation from Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.

    2006-01-01

    Retrieving surface longwave radiation from space has been a difficult task since the surface downwelling longwave radiation (SDLW) is an integration of radiation emitted by the entire atmosphere, while radiation emitted from the upper atmosphere is absorbed before reaching the surface. It is particularly problematic when thick clouds are present since thick clouds will virtually block all the longwave radiation from above, while satellites observe atmospheric emissions mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions except for those regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas that were covered with ice clouds. An improved version of the algorithm was developed that prevents the large errors in the SDLW at low water vapor amounts. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths measured from the Cloud and the Earth's Radiant Energy System (CERES) satellites to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.

  13. A study of digital gyro compensation loops. [data conversion routines and breadboard models

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The feasibility is discussed of replacing existing state-of-the-art analog gyro compensation loops with digital computations. This was accomplished by designing appropriate compensation loops for the dry tuned TDF gyro, selecting appropriate data conversion and processing techniques and algorithms, and breadboarding the design for laboratory evaluation. A breadboard design was established in which one axis of a Teledyne tuned-gimbal TDF gyro was caged digitally while the other was caged using conventional analog electronics. The digital loop was designed analytically to closely resemble the analog loop in performance. The breadboard was subjected to various static and dynamic tests in order to establish the relative stability characteristics and frequency responses of the digital and analog loops. Several variations of the digital loop configuration were evaluated. The results were favorable.

  14. Compensating the intensity fall-off effect in cone-beam tomography by an empirical weight formula.

    PubMed

    Chen, Zikuan; Calhoun, Vince D; Chang, Shengjiang

    2008-11-10

    The Feldkamp-Davis-Kress (FDK) algorithm is widely adopted for cone-beam reconstruction due to its one-dimensional filtered backprojection structure and parallel implementation. In a reconstruction volume, the conspicuous cone-beam artifact manifests as intensity fall-off along the longitudinal direction (the gantry rotation axis). This effect is inherent to circular cone-beam tomography due to the fact that a cone-beam dataset acquired from circular scanning fails to meet the data sufficiency condition for volume reconstruction. Upon observations of the intensity fall-off phenomenon associated with the FDK reconstruction of a ball phantom, we propose an empirical weight formula to compensate for the fall-off degradation. Specifically, a reciprocal cosine can be used to compensate the voxel values along the longitudinal direction during three-dimensional backprojection reconstruction, in particular for boosting the values of voxels at positions with large cone angles. The intensity degradation within the z plane, albeit insignificant, can also be compensated by using the same weight formula through a parameter for radial distance dependence. Computer simulations and phantom experiments are presented to demonstrate the compensation effectiveness of the fall-off effect inherent in circular cone-beam tomography.
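
    As a rough illustration only (the paper's empirical weight formula, including its radial-distance dependence, is not given in the abstract), a reciprocal-cosine longitudinal weight applied during backprojection could be sketched as:

    ```python
    import numpy as np

    def fall_off_weight(z, source_to_isocenter):
        """Reciprocal-cosine boost for voxels at longitudinal offset z (large cone angles)."""
        cone_angle = np.arctan2(z, source_to_isocenter)   # angle out of the central scan plane
        return 1.0 / np.cos(cone_angle)                   # > 1 away from the central slice
    ```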

  15. MAMS: High resolution atmospheric moisture/surface properties

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; Guillory, Anthony R.; Suggs, Ron; Atkinson, Robert J.; Carlson, Grant S.

    1991-01-01

    Multispectral Atmospheric Mapping Sensor (MAMS) data collected from a number of U2/ER2 aircraft flights were used to investigate atmospheric and surface (land) components of the hydrologic cycle. Algorithms were developed to retrieve surface and atmospheric geophysical parameters which describe the variability of atmospheric moisture, its role in cloud and storm development, and the influence of surface moisture and heat sources on convective activity. Techniques derived with MAMS data are being applied to existing satellite measurements to show their applicability to regional and large process studies and their impact on operational forecasting.

  16. Carrier Compensation Induced by Thermal Annealing in Al-Doped ZnO Films

    PubMed Central

    Koida, Takashi; Kaneko, Tetsuya; Shibata, Hajime

    2017-01-01

    This study investigated carrier compensation induced by thermal annealing in sputtered ZnO:Al (Al2O3: 0.25, 0.5, 1.0, and 2.0 wt %) films. The films were post-annealed in a N2 atmosphere at low (1 × 10−23 atm) and high (1 × 10−4 atm) oxygen partial pressures (PO2). In ZnO:Al films with low Al contents (i.e., 0.25 wt %), the carrier density (n) began to decrease at annealing temperatures (Ta) of 600 °C at low PO2. At higher PO2 and/or Al contents, n values began to decrease significantly at lower Ta (ca. 400 °C). In addition, Zn became desorbed from the films during heating in a high vacuum (i.e., <1 × 10−7 Pa). These results suggest the following: (i) Zn interstitials and Zn vacancies are created in the ZnO lattice during post-annealing treatments, thereby leading to carrier compensation by acceptor-type Zn vacancies; (ii) The compensation behavior is significantly enhanced for ZnO:Al films with high Al contents. PMID:28772501

  17. Experimental verification of a two-dimensional respiratory motion compensation system with ultrasound tracking technique in radiation therapy.

    PubMed

    Ting, Lai-Lei; Chuang, Ho-Chiao; Liao, Ai-Ho; Kuo, Chia-Chun; Yu, Hsiao-Wei; Zhou, Yi-Liang; Tien, Der-Chi; Jeng, Shiu-Chen; Chiou, Jeng-Fong

    2018-05-01

    This study proposed a respiratory motion compensation system (RMCS) combined with an ultrasound image tracking algorithm (UITA) to compensate for respiration-induced tumor motion during radiotherapy, and to address the problem of inaccurate radiation dose delivery caused by respiratory movement. This study used an ultrasound imaging system to monitor respiratory movements combined with the proposed UITA and RMCS for tracking and compensation of the respiratory motion. Respiratory motion compensation was performed using prerecorded human respiratory motion signals and also sinusoidal signals. A linear accelerator was used to deliver radiation doses to GAFchromic EBT3 dosimetry film, and the conformity index (CI), root-mean-square error, compensation rate (CR), and planning target volume (PTV) were used to evaluate the tracking and compensation performance of the proposed system. Human respiratory pattern signals were captured using the UITA and compensated by the RMCS, which yielded CR values of 34-78%. In addition, the maximum coronal area of the PTV ranged from 85.53 mm² to 351.11 mm² (uncompensated), which was reduced to between 17.72 mm² and 66.17 mm² after compensation, with an area reduction ratio of up to 90%. In real-time monitoring of the respiration compensation state, the CI values for 85% and 90% isodose areas increased to 0.7 and 0.68, respectively. The proposed UITA and RMCS can reduce the movement of the tracked target relative to the LINAC in radiation therapy, thereby reducing the required size of the PTV margin and increasing the effect of the radiation dose received by the treatment target. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. A novel control architecture for physiological tremor compensation in teleoperated systems.

    PubMed

    Ghorbanian, A; Zareinejad, M; Rezaei, S M; Sheikhzadeh, H; Baghestan, K

    2013-09-01

    Telesurgery delivers surgical care to a 'remote' patient by means of robotic manipulators. When accurate positioning of the surgeon's tool is required, as in microsurgery, physiological tremor causes unwanted imprecision during a surgical operation. Accurate estimation/compensation of physiological tremor in teleoperation systems has been shown to improve performance during telesurgery. A new control architecture is proposed for estimation and compensation of physiological tremor in the presence of communication time delays. This control architecture guarantees stability with satisfactory transparency. In addition, the proposed method can be used for applications that require modifications in transmitted signals through communication channels. Stability of the bilateral tremor-compensated teleoperation is preserved by extending the bilateral teleoperation to the equivalent trilateral Dual-master/Single-slave teleoperation. The bandlimited multiple Fourier linear combiner (BMFLC) algorithm is employed for real-time estimation of the operator's physiological tremor. Two kinds of stability analysis are employed. In the model-based controller, Llewellyn's Criterion is used to analyze the teleoperation absolute stability. In the second method, a nonmodel-based controller is proposed and the stability of the time-delayed teleoperated system is proved by employing a Lyapunov function. Experimental results are presented to validate the effectiveness of the new control architecture. The tremorous motion is measured by an accelerometer to be compensated in real time. In addition, a Needle-Insertion setup is proposed as a slave robot for the application of brachytherapy, in which the needle penetrates at the desired position. The slave performs the desired task in two classes of environments (free motion of the slave and in the soft tissue). Experiments show that the proposed control architecture effectively compensates for the user's tremorous motion and the slave follows only the
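
    A minimal single-axis sketch of the BMFLC estimator mentioned above; the 6-14 Hz tremor band, bin count, and LMS step size are illustrative assumptions, and the teleoperation control architecture itself is not reproduced.

    ```python
    import numpy as np

    def bmflc_estimate(signal, fs, f_lo=6.0, f_hi=14.0, n_bins=17, mu=0.01):
        """Estimate tremor with a band-limited multiple Fourier linear combiner (LMS weight update)."""
        freqs = np.linspace(f_lo, f_hi, n_bins)           # assumed physiological tremor band
        w = np.zeros(2 * n_bins)                          # sin/cos weights for each frequency bin
        est = np.zeros(len(signal))
        for k, y in enumerate(signal):
            t = k / fs
            x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                                np.cos(2 * np.pi * freqs * t)])
            est[k] = w @ x                                # current tremor estimate
            w += 2.0 * mu * x * (y - est[k])              # LMS adaptation of the weights
        return est
    ```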

  19. The American compensation phenomenon.

    PubMed

    Bale, A

    1990-01-01

    In this article, the author defines the occupational safety and health domain, characterizes the distinct compensation phenomenon in the United States, and briefly reviews important developments in the last decade involving Karen Silkwood, intentional torts, and asbestos litigation. He examines the class conflict over the value and meaning of work-related injuries and illnesses involved in the practical activity of making claims and turning them into money through compensation inquiries. Juries, attributions of fault, and medicolegal discourse play key roles in the compensation phenomenon. This article demonstrates the extensive, probing inquiry through workers' bodies constituted by the American compensation phenomenon into the moral basis of elements of the system of production.

  20. A metal artifact reduction algorithm in CT using multiple prior images by recursive active contour segmentation

    PubMed Central

    Nam, Haewon

    2017-01-01

    We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside a field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades the image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. Multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images. The metal trace region of the original sinogram is replaced by the linearly combined sinogram of the prior images. Then, an additional correction in the metal trace region is performed to compensate for the residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
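
    A minimal sketch of the linear-interpolation pre-correction (MAR-LI) step on which the proposed method builds its prior images; the array shapes and mask convention are assumptions.

    ```python
    import numpy as np

    def inpaint_metal_trace(sinogram, metal_mask):
        """Fill the metal-trace region of a sinogram by 1-D linear interpolation along each detector row."""
        out = sinogram.copy()
        cols = np.arange(sinogram.shape[1])
        for i in range(sinogram.shape[0]):                # loop over projection angles
            bad = metal_mask[i]                           # True where the metal trace corrupts data
            if bad.any() and (~bad).any():
                out[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
        return out
    ```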

  1. Separation of Atmospheric and Surface Spectral Features in Mars Global Surveyor Thermal Emission Spectrometer (TES) Spectra

    NASA Technical Reports Server (NTRS)

    Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.

    2000-01-01

    We present two algorithms for the separation of spectral features caused by atmospheric and surface components in Thermal Emission Spectrometer (TES) data. One algorithm uses radiative transfer and successive least squares fitting to find spectral shapes first for atmospheric dust, then for water-ice aerosols, and then, finally, for surface emissivity. A second independent algorithm uses a combination of factor analysis, target transformation, and deconvolution to simultaneously find dust, water ice, and surface emissivity spectral shapes. Both algorithms have been applied to TES spectra, and both find very similar atmospheric and surface spectral shapes. For TES spectra taken during aerobraking and science phasing periods in nadir-geometry these two algorithms give meaningful and usable surface emissivity spectra that can be used for mineralogical identification.

  2. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli B.; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.

  3. Neuronal Mechanism for Compensation of Longitudinal Chromatic Aberration-Derived Algorithm.

    PubMed

    Barkan, Yuval; Spitzer, Hedva

    2018-01-01

    The human visual system faces many challenges, among them the need to overcome the imperfections of its optics, which degrade the retinal image. One of the most dominant limitations is longitudinal chromatic aberration (LCA), which causes short wavelengths (blue light) to be focused in front of the retina with consequent blurring of the retinal chromatic image. The perceived visual appearance, however, does not display such chromatic distortions. The intriguing question, therefore, is how the perceived visual appearance of a sharp and clear chromatic image is achieved despite the imperfections of the ocular optics. To address this issue, we propose a neural mechanism and computational model, based on the unique properties of the S-cone pathway. The model suggests that the visual system overcomes LCA through two known properties of the S channel: (1) omitting the contribution of the S channel from the high-spatial-resolution pathway (utilizing only the L and M channels); and (2) having large and coextensive receptive fields that correspond to the small bistratified cells. Here, we use computational simulations of our model on real images to show how integrating these two basic principles can provide a significant compensation for LCA. Further support for the proposed neuronal mechanism is given by the ability of the model to predict an enigmatic visual phenomenon of large color shifts as part of the assimilation effect.

  4. Neuronal Mechanism for Compensation of Longitudinal Chromatic Aberration-Derived Algorithm

    PubMed Central

    Barkan, Yuval; Spitzer, Hedva

    2018-01-01

    The human visual system faces many challenges, among them the need to overcome the imperfections of its optics, which degrade the retinal image. One of the most dominant limitations is longitudinal chromatic aberration (LCA), which causes short wavelengths (blue light) to be focused in front of the retina with consequent blurring of the retinal chromatic image. The perceived visual appearance, however, does not display such chromatic distortions. The intriguing question, therefore, is how the perceived visual appearance of a sharp and clear chromatic image is achieved despite the imperfections of the ocular optics. To address this issue, we propose a neural mechanism and computational model, based on the unique properties of the S-cone pathway. The model suggests that the visual system overcomes LCA through two known properties of the S channel: (1) omitting the contribution of the S channel from the high-spatial-resolution pathway (utilizing only the L and M channels); and (2) having large and coextensive receptive fields that correspond to the small bistratified cells. Here, we use computational simulations of our model on real images to show how integrating these two basic principles can provide a significant compensation for LCA. Further support for the proposed neuronal mechanism is given by the ability of the model to predict an enigmatic visual phenomenon of large color shifts as part of the assimilation effect. PMID:29527525

  5. A Managerial Approach to Compensation

    ERIC Educational Resources Information Center

    Wolfe, Arthur V.

    1975-01-01

    The article examines the major external forces constraining equitable employee compensation, sets forth the classical employee compensation assumptions, suggests somewhat more realistic employee compensation assumptions, and proposes guidelines based on analysis of these external constraints and assumptions. (Author)

  6. Theoretical algorithms for satellite-derived sea surface temperatures

    NASA Astrophysics Data System (ADS)

    Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.

    1989-03-01

    Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
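
    The "linear equations" referred to above are typically of the generic multichannel form (a sketch; the coefficients are regression constants fitted to the transmission-model simulations, not values taken from the paper):

    \[
    \mathrm{SST} = a_0 + \sum_i a_i T_i ,
    \]

    where the $T_i$ are brightness temperatures in the selected channels (e.g., 3.7, 11, and 12 μm); a familiar two-channel special case is the split-window form $\mathrm{SST} = a_0 + a_1 T_{11} + a_2 (T_{11} - T_{12})$.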

  7. Compensation of hospital-based physicians.

    PubMed Central

    Steinwald, B

    1983-01-01

    This study is concerned with methods of compensating hospital-based physicians (HBPs) in five medical specialties: anesthesiology, pathology, radiology, cardiology, and emergency medicine. Data on 2232 nonfederal, short-term general hospitals came from a mail questionnaire survey conducted in Fall 1979. The data indicate that numerous compensation methods exist but these methods, without much loss of precision, can be reduced to salary, percentage of department revenue, and fee-for-service. When HBPs are compensated by salary or percentage methods, most patient billing is conducted by the hospital. In contrast, most fee-for-service HBPs bill their patients directly. Determinants of HBP compensation methods are investigated via multinomial logit analysis. This analysis indicates that choice of HBP compensation method is sensitive to a number of hospital characteristics and attributes of both the hospital and physicians' services markets. The empirical findings are discussed in light of past conceptual and empirical research on physician compensation, and current policy issues in the health services sector. PMID:6841112

  8. Soil-plant-atmosphere ammonia exchange associated with calluna vulgaris and deschampsia flexuosa

    NASA Astrophysics Data System (ADS)

    Schjoerring, Jan K.; Husted, Søren; Poulsen, Mette M.

    Ammonia fluxes and compensation points at atmospheric NH3 concentrations corresponding to those occurring under natural growth conditions (0-26 nmol NH3 (mol air)⁻¹) were measured for canopies of two species native to heathland in N.W. Europe, viz. Calluna vulgaris (L.) Hull and Deschampsia flexuosa (L.) Trin. The NH3 compensation point in 2 yr-old C. vulgaris plants, in which current year's shoots had just started growing, was below the detection limit (0.1 nmol mol⁻¹ at 8°C). Fifty days later, when current year's shoots were elongating and flowers developed, the NH3 compensation point was approximately 6±2.0 nmol mol⁻¹ at 22°C (0.8±0.3 nmol mol⁻¹ at 8°C). The plants in which the shoot tips had just started growing were characterized by a low N concentration in the shoot dry matter (5.8 mg N g⁻¹ shoot dry weight) and a low photosynthetic CO2 assimilation compared to the flowering plants, in which the average dry matter N concentration in old shoots and woody stems was 7.4 and in new shoots 9.5 mg N g⁻¹ shoot dry weight. Plant-atmosphere NH3 fluxes in C. vulgaris responded approximately linearly to changes in the atmospheric NH3 concentration. The maximum net absorption rate at 26 nmol NH3 mol⁻¹ air was 12 nmol NH3 m⁻² ground surface s⁻¹ (equivalent to 13.3 pmol NH3 g⁻¹ shoot dry matter s⁻¹). Ammonia absorption in Deschampsia flexuosa plants increased approximately linearly with increasing NH3 concentrations up to 20 nmol mol⁻¹. The maximum NH3 absorption was 8.5 nmol m⁻² ground surface s⁻¹ (30.4 pmol g⁻¹ shoot dry weight s⁻¹). The NH3 compensation point at 24°C was 3.0±1.1, and at 31°C 7.5±0.6 nmol (mol air)⁻¹. These values correspond to an NH3 compensation point of 0.45±0.15 at 8°C. The soil used for cultivation of C. vulgaris (peat soil with pH 6.9) initially adsorbed NH3 at a rate which exceeded the absorption by the plant canopy. During a 24 d period following the harvest of the plants soil NH3 adsorption declined and the

  9. Atmospheric Correction for Satellite Ocean Color Radiometry

    NASA Technical Reports Server (NTRS)

    Mobley, Curtis D.; Werdell, Jeremy; Franz, Bryan; Ahmad, Ziauddin; Bailey, Sean

    2016-01-01

    This tutorial is an introduction to atmospheric correction in general and also documentation of the atmospheric correction algorithms currently implemented by the NASA Ocean Biology Processing Group (OBPG) for processing ocean color data from satellite-borne sensors such as MODIS and VIIRS. The intended audience is graduate students or others who are encountering this topic for the first time. The tutorial is in two parts. Part I discusses the generic atmospheric correction problem. The magnitude and nature of the problem are first illustrated with numerical results generated by a coupled ocean-atmosphere radiative transfer model. That code allows the various contributions (Rayleigh and aerosol path radiance, surface reflectance, water-leaving radiance, etc.) to the top-of-the-atmosphere (TOA) radiance to be separated out. Particular attention is then paid to the definition, calculation, and interpretation of the so-called "exact normalized water-leaving radiance" and its equivalent reflectance. Part I ends with chapters on the calculation of direct and diffuse atmospheric transmittances, and on how vicarious calibration is performed. Part II then describes one by one the particular algorithms currently used by the OBPG to effect the various steps of the atmospheric correction process, viz. the corrections for absorption and scattering by gases and aerosols, Sun and sky reflectance by the sea surface and whitecaps, and finally corrections for sensor out-of-band response and polarization effects. One goal of the tutorial, guided by teaching needs, is to distill the results of dozens of papers published over several decades of research in atmospheric correction for ocean color remote sensing.
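
    The separation of contributions described in Part I is commonly written (a sketch of the standard formulation, not a quotation from the tutorial) as

    \[
    L_t = L_r + \left[ L_a + L_{ra} \right] + T\,L_g + t\,L_{wc} + t\,L_w ,
    \]

    where $L_r$ is the Rayleigh path radiance, $L_a + L_{ra}$ the aerosol and Rayleigh-aerosol interaction radiance, $L_g$ the Sun-glint radiance with direct transmittance $T$, $L_{wc}$ the whitecap radiance, $L_w$ the water-leaving radiance, and $t$ the diffuse transmittance; all terms depend on wavelength.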

  10. Atmospheric Models for Aeroentry and Aeroassist

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Duvall, Aleta; Keller, Vernon W.

    2005-01-01

    Eight destinations in the Solar System have sufficient atmosphere for aeroentry, aeroassist, or aerobraking/aerocapture: Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune, plus Saturn's moon Titan. Engineering-level atmospheric models for Earth, Mars, Titan, and Neptune have been developed for use in NASA's systems analysis studies of aerocapture applications. Development has begun on a similar atmospheric model for Venus. An important capability of these models is simulation of quasi-random perturbations for Monte Carlo analyses in developing guidance, navigation and control algorithms, and for thermal systems design. Characteristics of these atmospheric models are compared, and example applications for aerocapture are presented. Recent Titan atmospheric model updates are discussed, in anticipation of applications for trajectory and atmospheric reconstruct of Huygens Probe entry at Titan. Recent and planned updates to the Mars atmospheric model, in support of future Mars aerocapture systems analysis studies, are also presented.

  11. Atmospheric Models for Aeroentry and Aeroassist

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Duvall, Aleta; Keller, Vernon W.

    2004-01-01

    Eight destinations in the Solar System have sufficient atmosphere for aeroentry, aeroassist, or aerobraking/aerocapture: Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune, plus Saturn's moon Titan. Engineering-level atmospheric models for Earth, Mars, Titan, and Neptune have been developed for use in NASA's systems analysis studies of aerocapture applications. Development has begun on a similar atmospheric model for Venus. An important capability of these models is simulation of quasi-random perturbations for Monte Carlo analyses in developing guidance, navigation and control algorithms, and for thermal systems design. Characteristics of these atmospheric models are compared, and example applications for aerocapture are presented. Recent Titan atmospheric model updates are discussed, in anticipation of applications for trajectory and atmospheric reconstruct of Huygens Probe entry at Titan. Recent and planned updates to the Mars atmospheric model, in support of future Mars aerocapture systems analysis studies, are also presented.

  12. A Study of Dispersion Compensation of Polarization Multiplexing-Based OFDM-OCDMA for Radio-over-Fiber Transmissions

    PubMed Central

    Yen, Chih-Ta; Chen, Wen-Bin

    2016-01-01

    Chromatic dispersion from optical fiber is the most important problem that produces temporal skews and destroys the rectangular structure of code patterns in the spectral-amplitude-coding-based optical code-division multiple-access (SAC-OCDMA) system. Thus, the balanced detection scheme cannot perfectly cancel multiple access interference (MAI), and the system performance is degraded. Orthogonal frequency-division multiplexing (OFDM) is the fastest developing technology in the academic and industrial fields of wireless transmission. In this study, the radio-over-fiber system is realized by integrating OFDM and OCDMA via a polarization multiplexing scheme. The electronic dispersion compensation (EDC) equalizer element of OFDM integrated with the dispersion compensation fiber (DCF) is used in the proposed radio-over-fiber (RoF) system, which can efficiently suppress the influence of chromatic dispersion over long-haul transmission distances. A combination of 10 km-long single-mode fiber (SMF) and 4 km-long DCF is used to verify the compensation scheme through the corresponding equalizer algorithms and constellation diagrams. In the simulation results, the proposed dispersion mechanism successfully compensates for the dispersion from the SMF, and the system performance with the dispersion equalizer is greatly improved. PMID:27618042

  13. 75 FR 32293 - Nonduplication; Pension, Compensation, and Dependency and Indemnity Compensation; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-08

    ... DEPARTMENT OF VETERANS AFFAIRS 38 CFR Part 21 Nonduplication; Pension, Compensation, and Dependency and Indemnity Compensation; Correction AGENCY: Department of Veterans Affairs. ACTION: Correcting amendment. SUMMARY: This document corrects the Department of Veterans Affairs (VA) regulation that governs...

  14. 38 CFR 3.459 - Death compensation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Death compensation. 3.459 Section 3.459 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation Apportionments § 3.459 Death compensation. (a) Death...

  15. 38 CFR 3.459 - Death compensation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Death compensation. 3.459 Section 3.459 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation Apportionments § 3.459 Death compensation. (a) Death...

  16. 38 CFR 3.459 - Death compensation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Death compensation. 3.459 Section 3.459 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation Apportionments § 3.459 Death compensation. (a) Death...

  17. 38 CFR 3.459 - Death compensation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Death compensation. 3.459 Section 3.459 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation Apportionments § 3.459 Death compensation. (a) Death...

  18. 38 CFR 3.459 - Death compensation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Death compensation. 3.459 Section 3.459 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation Apportionments § 3.459 Death compensation. (a) Death...

  19. 48 CFR 970.2270 - Unemployment compensation.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Unemployment compensation... Unemployment compensation. (a) Each state has its own unemployment compensation system to provide payments to... unemployment compensation benefits through a payroll tax on employers. Most DOE contractors are subject to the...

  20. 48 CFR 970.2270 - Unemployment compensation.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Unemployment compensation... Unemployment compensation. (a) Each state has its own unemployment compensation system to provide payments to... unemployment compensation benefits through a payroll tax on employers. Most DOE contractors are subject to the...

  1. 48 CFR 970.2270 - Unemployment compensation.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Unemployment compensation... Unemployment compensation. (a) Each state has its own unemployment compensation system to provide payments to... unemployment compensation benefits through a payroll tax on employers. Most DOE contractors are subject to the...

  2. 48 CFR 970.2270 - Unemployment compensation.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Unemployment compensation... Unemployment compensation. (a) Each state has its own unemployment compensation system to provide payments to... unemployment compensation benefits through a payroll tax on employers. Most DOE contractors are subject to the...

  3. Study on improved Ip-iq APF control algorithm and its application in micro grid

    NASA Astrophysics Data System (ADS)

    Xie, Xifeng; Shi, Hua; Deng, Haiyingv

    2018-01-01

    In order to enhance the tracking speed and accuracy of harmonic detection by the ip-iq algorithm, a novel ip-iq control algorithm based on instantaneous reactive power theory is presented. The improved algorithm adds a lead-correction link to adjust the zero point of the detection system, and fuzzy self-tuning adaptive PI control is introduced to dynamically adjust the DC-link voltage, which meets the requirements of harmonic compensation in the micro grid. Simulation and experimental results verify that the proposed method is feasible and effective in the micro grid.
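
    For orientation, a minimal numpy sketch of the basic ip-iq fundamental/harmonic separation on which the improved algorithm builds (the locked PLL phase, 50 Hz grid frequency, and period-average low-pass are simplifying assumptions; the lead-correction link and fuzzy self-tuning PI loop are not reproduced):

    ```python
    import numpy as np

    def ipiq_harmonics(i_abc, fs, f1=50.0):
        """Split three-phase load currents (3 x n) into fundamental and harmonic parts (basic ip-iq method)."""
        n = i_abc.shape[1]
        wt = 2 * np.pi * f1 * np.arange(n) / fs                 # PLL assumed locked to the grid phase
        k = np.sqrt(2.0 / 3.0)
        s = np.vstack([np.sin(wt), np.sin(wt - 2*np.pi/3), np.sin(wt + 2*np.pi/3)])
        c = np.vstack([np.cos(wt), np.cos(wt - 2*np.pi/3), np.cos(wt + 2*np.pi/3)])
        ip = k * np.sum(s * i_abc, axis=0)                      # instantaneous active current
        iq = -k * np.sum(c * i_abc, axis=0)                     # instantaneous reactive current
        T = int(round(fs / f1))
        avg = np.ones(T) / T
        ip_dc = np.convolve(ip, avg, mode='same')               # period average ~ fundamental (DC) part
        iq_dc = np.convolve(iq, avg, mode='same')
        i_fund = k * (s * ip_dc - c * iq_dc)                    # inverse transform: fundamental currents
        return i_abc - i_fund                                   # harmonic reference for the APF
    ```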

  4. Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft

    NASA Astrophysics Data System (ADS)

    Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei

    2018-05-01

    Facing the complex aerodynamic environment of the Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial-state uncertainties, and aerodynamic parameter uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for external disturbances and uncertainties attenuation. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and integral control scheme (5.8 km).

  5. The Federal Employees' Compensation Act.

    ERIC Educational Resources Information Center

    Nordlund, Willis J.

    1991-01-01

    The 1916 Federal Employees' Compensation Act is still the focal point around which the federal workers compensation program works today. The program has gone through many changes on its way to becoming a modern means of compensating workers for job-related injury, disease, and death. (Author)

  6. 29 CFR 525.6 - Compensable time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Compensable time. 525.6 Section 525.6 Labor Regulations... WITH DISABILITIES UNDER SPECIAL CERTIFICATES § 525.6 Compensable time. Individuals employed subject to this part must be compensated for all hours worked. Compensable time includes not only those hours...

  7. 29 CFR 525.6 - Compensable time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 3 2014-07-01 2014-07-01 false Compensable time. 525.6 Section 525.6 Labor Regulations... WITH DISABILITIES UNDER SPECIAL CERTIFICATES § 525.6 Compensable time. Individuals employed subject to this part must be compensated for all hours worked. Compensable time includes not only those hours...

  8. 29 CFR 525.6 - Compensable time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 3 2012-07-01 2012-07-01 false Compensable time. 525.6 Section 525.6 Labor Regulations... WITH DISABILITIES UNDER SPECIAL CERTIFICATES § 525.6 Compensable time. Individuals employed subject to this part must be compensated for all hours worked. Compensable time includes not only those hours...

  9. Mars Pathfinder Atmospheric Entry Navigation Operations

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Spencer, D. A.; Kallemeyn, P. H.; Vaughan, R. M.

    1997-01-01

    On July 4, 1997, after traveling close to 500 million km, the Pathfinder spacecraft successfully completed entry, descent, and landing, coming to rest on the surface of Mars just 27 km from its target point. In the present paper, the atmospheric entry and approach navigation activities required in support of this mission are discussed. In particular, the flight software parameter update and landing site prediction analyses performed by the Pathfinder operations navigation team are described. A suite of simulation tools developed during Pathfinder's design cycle, but extendible to Pathfinder operations, are also presented. Data regarding the accuracy of the primary parachute deployment algorithm are extracted from the Pathfinder flight data, demonstrating that this algorithm performed as predicted. The increased probability of mission success through the software parameter update process is discussed. This paper also demonstrates the importance of modeling atmospheric flight uncertainties in the estimation of an accurate landing site. With these atmospheric effects included, the final landed ellipse prediction differs from the post-flight determined landing site by less than 0.5 km downtrack.

  10. Motion compensation in digital subtraction angiography using graphics hardware.

    PubMed

    Deuerling-Zheng, Yu; Lell, Michael; Galant, Adam; Hornegger, Joachim

    2006-07-01

    An inherent disadvantage of digital subtraction angiography (DSA) is its sensitivity to patient motion, which causes artifacts in the subtraction images. These artifacts can often reduce the diagnostic value of this technique. Automated, fast and accurate motion compensation is therefore required. To cope with this requirement, we first examine a method explicitly designed to detect local motions in DSA. Then, we implement a motion compensation algorithm by means of block matching on modern graphics hardware. Both methods search for maximal local similarity by evaluating a histogram-based measure. In this context, we are the first to map an optimizing search strategy onto graphics hardware while parallelizing block matching. Moreover, we provide an innovative method for creating histograms on graphics hardware with vertex texturing and frame buffer blending. It turns out that both methods can effectively correct the artifacts in most cases, while the hardware implementation of block matching performs much faster: the displacements of two 1024 x 1024 images can be calculated at 3 frames/s with integer precision or 2 frames/s with sub-pixel precision. Preliminary clinical evaluation indicates that the computation with integer precision could already be sufficient.
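
    A minimal sketch of the block matching step described above, using a simple sum-of-squared-differences criterion on the CPU rather than the paper's histogram-based measure and graphics-hardware implementation:

    ```python
    import numpy as np

    def match_block(ref_block, live, top, left, search=8):
        """Exhaustive block matching: integer shift of ref_block within the live image minimising SSD."""
        h, w = ref_block.shape
        best_cost, best = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y0, x0 = top + dy, left + dx
                if y0 < 0 or x0 < 0 or y0 + h > live.shape[0] or x0 + w > live.shape[1]:
                    continue                                # candidate window falls outside the image
                cand = live[y0:y0 + h, x0:x0 + w].astype(float)
                cost = np.sum((cand - ref_block) ** 2)      # sum of squared differences
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
        return best                                         # displacement of the best-matching block
    ```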

  11. Hybrid flower pollination algorithm strategies for t-way test suite generation.

    PubMed

    Nasser, Abdullah B; Zamli, Kamal Z; Alsewari, AbdulRahman A; Ahmed, Bestoun S

    2018-01-01

    The application of meta-heuristic algorithms for t-way testing has recently become prevalent. Consequently, many useful meta-heuristic algorithms have been developed on the basis of the implementation of t-way strategies (where t indicates the interaction strength). Mixed results have been reported in the literature to highlight the fact that no single strategy appears to be superior compared with other configurations. The hybridization of two or more algorithms can enhance the overall search capabilities, that is, by compensating the limitation of one algorithm with the strength of others. Thus, hybrid variants of the flower pollination algorithm (FPA) are proposed in the current work. Four hybrid variants of FPA are considered by combining FPA with other algorithmic components. The experimental results demonstrate that FPA hybrids overcome the problems of slow convergence in the original FPA and offer statistically superior performance compared with existing t-way strategies in terms of test suite size.

  12. Hybrid flower pollination algorithm strategies for t-way test suite generation

    PubMed Central

    Zamli, Kamal Z.; Alsewari, AbdulRahman A.

    2018-01-01

    The application of meta-heuristic algorithms for t-way testing has recently become prevalent. Consequently, many useful meta-heuristic algorithms have been developed on the basis of the implementation of t-way strategies (where t indicates the interaction strength). Mixed results have been reported in the literature to highlight the fact that no single strategy appears to be superior compared with other configurations. The hybridization of two or more algorithms can enhance the overall search capabilities, that is, by compensating the limitation of one algorithm with the strength of others. Thus, hybrid variants of the flower pollination algorithm (FPA) are proposed in the current work. Four hybrid variants of FPA are considered by combining FPA with other algorithmic components. The experimental results demonstrate that FPA hybrids overcome the problems of slow convergence in the original FPA and offer statistically superior performance compared with existing t-way strategies in terms of test suite size. PMID:29718918

  13. DOLWD Division of Workers' Compensation

    Science.gov Websites

    ' Compensation Act (Act). The Act provides for the payment by employers or their insurance carriers of medical -related medical and disability benefits. Workers' Compensation also requires the payment of benefits to Workforce Development, Workers' Compensation Division, Medical Services Review Committee will meet June 15

  14. 22 CFR 96.34 - Compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Financial and Risk Management § 96.34 Compensation. (a) The agency or person does not compensate any... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Compensation. 96.34 Section 96.34 Foreign Relations DEPARTMENT OF STATE LEGAL AND RELATED SERVICES ACCREDITATION OF AGENCIES AND APPROVAL OF PERSONS...

  15. Ni-MH battery charger with a compensator for electric vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, H.W.; Han, C.S.; Kim, C.S.

    1996-09-01

    The development of a high-performance battery and safe and reliable charging methods are two important factors for commercialization of electric vehicles (EV). Hyundai and Ovonic together spent many years researching the optimum charging method for the Ni-MH battery. This paper presents in detail the results of an intensive experimental analysis performed by Hyundai in collaboration with Ovonic. An on-board Ni-MH battery charger and its controller, which are designed to use a standard home electricity supply, are described. In addition, a 3-step constant-current recharger with a temperature and battery-aging compensator is proposed. This has a multi-loop algorithm function to detect the 80% and fully charged states and carry out equalization charging control. The algorithm is focused on safety, reliability, efficiency, charging speed and thermal management (maintaining uniform temperatures within a battery pack). It is also designed to minimize the necessity for user input.

  16. Analysis and compensation of reference frequency mismatch in multiple-frequency feedforward active noise and vibration control system

    NASA Astrophysics Data System (ADS)

    Liu, Jinxin; Chen, Xuefeng; Yang, Liangdong; Gao, Jiawei; Zhang, Xingwu

    2017-11-01

    In the field of active noise and vibration control (ANVC), a considerable part of unwanted noise and vibration results from rotating machines, making the spectrum of the response signal multi-frequency. Narrowband filtered-x least mean square (NFXLMS) is a very popular algorithm for suppressing such noise and vibration. It performs well because a priori knowledge of the fundamental frequency of the noise source (called the reference frequency) is used. However, if this prior knowledge is inaccurate, the control performance is dramatically degraded. This phenomenon is called reference frequency mismatch (RFM). In this paper, a novel narrowband ANVC algorithm with an orthogonal pair-wise reference frequency regulator is proposed to compensate for the RFM problem. Firstly, the RFM phenomenon in traditional NFXLMS is closely investigated both analytically and numerically. The results show that RFM changes the parameter estimation problem of the adaptive controller into a parameter tracking problem. Then, adaptive sinusoidal oscillators with output rectification are introduced as the reference frequency regulator to compensate for the RFM problem. The simulation results show that the proposed algorithm can dramatically suppress multiple-frequency noise and vibration with an improved convergence rate, whether or not there is RFM. Finally, case studies using experimental data are conducted under the conditions of no, small, and large RFM. The shaft radial run-out signal of a rotor test platform is applied to simulate the primary noise, and an IIR model identified from a real steel structure is applied to simulate the secondary path. The results further verify the robustness and effectiveness of the proposed algorithm.
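
    A minimal single-tone sketch of the narrowband FXLMS update discussed above; the reference frequency, step size, and a known secondary-path impulse response are assumptions, and the proposed orthogonal pair-wise reference frequency regulator is not reproduced.

    ```python
    import numpy as np

    def nfxlms_residual(d, fs, f_ref, sec_path, mu=1e-3):
        """Single-tone narrowband FXLMS: adapt sin/cos weights so the secondary-path output cancels d."""
        s = np.asarray(sec_path, dtype=float)        # secondary-path impulse response (assumed known)
        a = b = 0.0                                  # in-phase / quadrature controller weights
        y_hist = np.zeros(len(s))                    # recent controller outputs
        xs_hist = np.zeros(len(s))                   # recent sine references (for filtered-x)
        xc_hist = np.zeros(len(s))                   # recent cosine references
        e = np.zeros(len(d))
        for n in range(len(d)):
            t = n / fs
            xs, xc = np.sin(2 * np.pi * f_ref * t), np.cos(2 * np.pi * f_ref * t)
            y = a * xs + b * xc                      # controller output for this sample
            y_hist = np.roll(y_hist, 1); y_hist[0] = y
            e[n] = d[n] - s @ y_hist                 # residual at the error sensor
            xs_hist = np.roll(xs_hist, 1); xs_hist[0] = xs
            xc_hist = np.roll(xc_hist, 1); xc_hist[0] = xc
            a += mu * e[n] * (s @ xs_hist)           # filtered-x LMS weight updates
            b += mu * e[n] * (s @ xc_hist)
        return e
    ```

    If the assumed f_ref differs from the true machine frequency, the weights must track a rotating phasor rather than converge to constants, which is the RFM degradation the paper analyzes.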

  17. Compensation: How to Apply

    MedlinePlus

    ... assist them in completing their claims. Claims for Dependency and Indemnity Compensation made by surviving spouses or ... Benefits or VA Form 21-534a, Application for Dependency and Indemnity Compensation by a Surviving Spouse or ...

  18. A primer for workers' compensation.

    PubMed

    Bible, Jesse E; Spengler, Dan M; Mir, Hassan R

    2014-07-01

    A physician's role within a workers' compensation injury extends far beyond just evaluation and treatment, with several socioeconomic and psychological factors at play compared with similar injuries occurring outside of the workplace. Although workers' compensation statutes vary among states, all have several basic features with the overall goal of returning the injured worker to maximal function in the shortest time period, with the least residual disability and shortest time away from work. This educational review aims to help physicians unfamiliar with the workers' compensation process accomplish these goals. The streamlined review addresses why workers' compensation is necessary; what workers' compensation covers; progression after work injury; impairment and maximum medical improvement, including how to use the sixth edition of the American Medical Association's (AMA) Guides to the Evaluation of Permanent Impairment (Guides); completion of the work injury claim after impairment rating; independent medical evaluation; and causation. In the "no-fault" workers' compensation system, physicians play a key role in progressing the claim along and, more importantly, getting the injured worker back to work as soon as safely possible. Physicians should remain familiar with the workers' compensation process, along with how to properly use the AMA Guides. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Deferred Compensation Becomes More Common

    ERIC Educational Resources Information Center

    June, Audrey Williams

    2006-01-01

    A key part of the compensation package for some college and university presidents is money that they do not receive in their paychecks. Formally known as deferred compensation, such payments can take many forms, including supplemental retirement pay, severance pay, or even bonuses. With large institutions leading the way, deferred compensation has…

  20. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  1. Registration Methods for IVUS: Transversal and Longitudinal Transducer Motion Compensation.

    PubMed

    Talou, Gonzalo D Maso; Blanco, Pablo J; Larrabide, Ignacio; Bezerra, Cristiano Guedes; Lemos, Pedro A; Feijoo, Raul A

    2017-04-01

    Intravascular ultrasound (IVUS) is a fundamental imaging technique for atherosclerotic plaque assessment, interventional guidance, and, ultimately, as a tissue characterization tool. The studies acquired by this technique present the spatial description of the vessel during the cardiac cycle; however, the study frames are not properly sorted. As gating methods deal only with the cardiac-phase classification of the frames, the gated studies lack motion compensation between vessel and catheter. In this study, we develop registration strategies to arrange the vessel data into its rightful spatial sequence. Registration is performed by compensating longitudinal and transversal relative motion between vessel and catheter. Transversal motion is identified through maximum likelihood estimator optimization, while longitudinal motion is estimated by a neighborhood similarity estimator among the study frames. A strongly coupled implementation is proposed to compensate for both motion components at once. Loosely coupled implementations (DLT and DTL) decouple the registration process, resulting in more computationally efficient algorithms at the expense of the size of the set of candidate solutions. The DTL outperforms the DLT and coupled implementations in terms of accuracy by factors of 1.9 and 1.4, respectively. Sensitivity analysis shows that perivascular tissue must be considered to obtain the best registration outcome. Evidence suggests that the method is able to measure axial strain along the vessel wall. The proposed registration sorts the IVUS frames by spatial location, which is crucial for a correct interpretation of the vessel wall kinematics along the cardiac phases.

  2. Gmti Motion Compensation

    DOEpatents

    Doerry, Armin W.

    2004-07-20

    Movement of a GMTI radar during a coherent processing interval over which a set of radar pulses are processed may cause defocusing of a range-Doppler map in the video signal. This problem may be compensated by varying waveform or sampling parameters of each pulse to compensate for distortions caused by variations in viewing angles from the radar to the target.

  3. The TROPOMI surface UV algorithm

    NASA Astrophysics Data System (ADS)

    Lindfors, Anders V.; Kujanpää, Jukka; Kalakoski, Niilo; Heikkilä, Anu; Lakkala, Kaisa; Mielonen, Tero; Sneep, Maarten; Krotkov, Nickolay A.; Arola, Antti; Tamminen, Johanna

    2018-02-01

    The TROPOspheric Monitoring Instrument (TROPOMI) is the only payload of the Sentinel-5 Precursor (S5P), which is a polar-orbiting satellite mission of the European Space Agency (ESA). TROPOMI is a nadir-viewing spectrometer measuring in the ultraviolet, visible, near-infrared, and shortwave infrared that provides near-global daily coverage. Among other things, TROPOMI measurements will be used for calculating the UV radiation reaching the Earth's surface. Thus, the TROPOMI surface UV product will contribute to the monitoring of UV radiation by providing daily information on the prevailing UV conditions over the globe. The TROPOMI UV algorithm builds on the heritage of the Ozone Monitoring Instrument (OMI) and the Satellite Application Facility for Atmospheric Composition and UV Radiation (AC SAF) algorithms. This paper provides a description of the algorithm that will be used for estimating surface UV radiation from TROPOMI observations. The TROPOMI surface UV product includes the following UV quantities: the UV irradiance at 305, 310, 324, and 380 nm; the erythemally weighted UV; and the vitamin-D weighted UV. Each of these is available as (i) daily dose or daily accumulated irradiance, (ii) overpass dose rate or irradiance, and (iii) local noon dose rate or irradiance. In addition, all quantities are available corresponding to actual cloud conditions and as clear-sky values, which otherwise correspond to the same conditions but assume a cloud-free atmosphere. This yields 36 UV parameters altogether. The TROPOMI UV algorithm has been tested using input based on OMI and the Global Ozone Monitoring Experiment-2 (GOME-2) satellite measurements. These preliminary results indicate that the algorithm is functioning according to expectations.

  4. A new version of Stochastic-parallel-gradient-descent algorithm (SPGD) for phase correction of a distorted orbital angular momentum (OAM) beam

    NASA Astrophysics Data System (ADS)

    Jiao Ling, Lin; Xiaoli, Yin; Huan, Chang; Xiaozhou, Cui; Yi-Lin, Guo; Huan-Yu, Liao; Chun-Yu, Gao; Guohua, Wu; Guang-Yao, Liu; Jin-Kun, Jiang; Qing-Hua, Tian

    2018-02-01

    Atmospheric turbulence limits the performance of orbital angular momentum-based free-space optical communication (FSO-OAM) systems. In order to compensate for the phase distortion induced by atmospheric turbulence, wavefront sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper a new version of SPGD called MZ-SPGD is proposed, which combines the Z-SPGD based on the deformable mirror influence function and the M-SPGD based on the Zernike polynomials. Numerical simulations show that the hybrid method converges markedly faster while achieving the same compensation effect as Z-SPGD and M-SPGD.
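
    For readers unfamiliar with the underlying optimizer, the basic (non-hybrid) SPGD iteration can be illustrated with a toy quality metric: perturb all control channels at once with a random bipolar pattern, probe the metric on both sides of the perturbation, and step the controls in proportion to the measured metric difference. The gain, perturbation size, and quadratic stand-in metric below are illustrative assumptions; the sketch is not the MZ-SPGD variant of the paper.

```python
# Basic SPGD iteration on a toy quality metric (all values illustrative).
import numpy as np

rng = np.random.default_rng(1)
n_modes = 20
distortion = rng.normal(0.0, 1.0, n_modes)   # unknown aberration coefficients
u = np.zeros(n_modes)                        # corrector command vector

def quality(u):
    # Toy metric: maximal (zero) when the residual phase vanishes.
    return -np.sum((distortion + u) ** 2)

gain, sigma = 0.5, 0.05
for _ in range(3000):
    delta = sigma * rng.choice([-1.0, 1.0], n_modes)   # random bipolar perturbation
    dJ = quality(u + delta) - quality(u - delta)       # two-sided metric probe
    u += gain * dJ * delta                             # SPGD ascent step

print("residual phase variance:", np.mean((distortion + u) ** 2))
```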

  5. Ocean observations with EOS/MODIS: Algorithm development and post launch studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1996-01-01

    An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm is nearly complete. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. Simple algorithms, such as subtracting the reflectance at 1380 nm from the visible and near-infrared bands, can significantly reduce the error, but only if the diffuse transmittance of the aerosol layer is taken into account. The atmospheric correction code has been modified for use with absorbing aerosols. Tests of the code showed that, in contrast to non-absorbing aerosols, the retrievals were strongly influenced by the vertical structure of the aerosol, even when the candidate aerosol set was restricted to a set appropriate to the absorbing aerosol. This will further complicate the problem of atmospheric correction in an atmosphere with strongly absorbing aerosols. Our whitecap radiometer system and solar aureole camera were both tested at sea and performed well. Investigation of a technique to remove the effects of residual instrument polarization sensitivity was initiated and applied to an instrument possessing approximately 3-4 times the polarization sensitivity expected for MODIS. Preliminary results suggest that for such an instrument, elimination of the polarization effect is possible at the required level of accuracy by estimating the polarization of the top-of-atmosphere radiance to be that expected for a pure Rayleigh scattering atmosphere. This may be of significance for design of a follow-on MODIS instrument. W. M. Balch participated in two month-long cruises to the Arabian Sea, measuring coccolithophore abundance, production, and optical properties. A thorough understanding of the relationship between calcite abundance and light scatter, in situ, will provide the basis for a generic suspended calcite algorithm.

  6. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
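
    To make the modelling point concrete, the sketch below sets up a toy version of such a problem: the counts follow a Poisson law whose mean is the exponential of a cumulative linear operator applied to a non-negative extinction profile, and the profile is recovered by maximizing the Poisson log-likelihood under a non-negativity bound. A generic bound-constrained optimizer stands in for the paper's KKT-derived iterations, and the geometry, scales, and noise level are invented.

```python
# Toy constrained Poisson maximum-likelihood retrieval (illustrative setup).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 40
z = np.linspace(0.0, 4.0, n)                 # range grid, invented units
dz = z[1] - z[0]
K = np.tril(np.ones((n, n))) * dz            # cumulative path-integration operator
x_true = 0.05 + 0.4 * np.exp(-((z - 2.0) ** 2) / 0.3)   # "true" extinction profile
c = 1.0e5                                    # toy signal scale
counts = rng.poisson(c * np.exp(-K @ x_true)).astype(float)

def neg_log_like(x):
    mu = c * np.exp(-K @ x)                  # Poisson mean for this profile
    return np.sum(mu - counts * np.log(mu))

def grad(x):
    mu = c * np.exp(-K @ x)
    return K.T @ (counts - mu)               # gradient of the negative log-likelihood

res = minimize(neg_log_like, x0=np.full(n, 0.1), jac=grad,
               bounds=[(0.0, None)] * n, method="L-BFGS-B")
print("relative reconstruction error:",
      np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true))
```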

  7. Rapid measurement and compensation method of eccentricity in automatic profile measurement of the ICF capsule.

    PubMed

    Li, Shaobai; Wang, Yun; Wang, Qi; Ma, Xianxian; Wang, Longxiao; Zhao, Weiqian; Zhang, Xusheng

    2018-05-10

    In this paper, we propose a new measurement and compensation method for the eccentricity of the inertial confinement fusion (ICF) capsule, which combines computer vision and the laser differential confocal method to align the capsule in rotation measurement. This technique measures the eccentricity of the capsule by obtaining the sub-pixel profile with a moment-based algorithm, then performs preliminary alignment with a two-dimensional adjustment. Next, we use the laser differential confocal sensor to measure the height data of the equatorial surface of the capsule as it is rotated, and finally obtain and compensate for the remaining eccentricity. This is a non-contact, automatic, rapid, high-precision measurement and compensation technique for capsule eccentricity. Theoretical analyses and preliminary experiments indicate that the maximum measurable eccentricity of the proposed method is 1.8 mm for a capsule with a diameter of 1 mm, and that it can reduce the eccentricity to less than 0.5 μm in 30 s.

  8. Assessing state-of-the-art capabilities for probing the atmospheric boundary layer: The XPIA field campaign

    DOE PAGES

    Lundquist, Julie K.; Wilczak, James M.; Ashton, Ryan; ...

    2017-03-07

    To assess current capabilities for measuring flow within the atmospheric boundary layer, including within wind farms, the U.S. Dept. of Energy sponsored the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign at the Boulder Atmospheric Observatory (BAO) in spring 2015. Herein, we summarize the XPIA field experiment, highlight novel measurement approaches, and quantify uncertainties associated with these measurement methods. Line-of-sight velocities measured by scanning lidars and radars exhibit close agreement with tower measurements, despite differences in measurement volumes. Virtual towers of wind measurements, from multiple lidars or radars, also agree well with tower and profiling lidar measurements. Estimates of winds over volumes from scanning lidars and radars are in close agreement, enabling assessment of spatial variability. Strengths of the radar systems used here include high scan rates, large domain coverage, and availability during most precipitation events, but they struggle at times to provide data during periods with limited atmospheric scatterers. In contrast, for the deployment geometry tested here, the lidars have slower scan rates and less range, but provide more data during non-precipitating atmospheric conditions. Microwave radiometers provide temperature profiles with approximately the same uncertainty as Radio-Acoustic Sounding Systems (RASS). Using a motion platform, we assess motion-compensation algorithms for lidars to be mounted on offshore platforms. Finally, we highlight cases for validation of mesoscale or large-eddy simulations, providing information on accessing the archived dataset. We conclude that modern remote sensing systems provide a generational improvement in observational capabilities, enabling resolution of fine-scale processes critical to understanding inhomogeneous boundary-layer flows.

  9. Assessing state-of-the-art capabilities for probing the atmospheric boundary layer: The XPIA field campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundquist, Julie K.; Wilczak, James M.; Ashton, Ryan

    To assess current capabilities for measuring flow within the atmospheric boundary layer, including within wind farms, the U.S. Dept. of Energy sponsored the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign at the Boulder Atmospheric Observatory (BAO) in spring 2015. Herein, we summarize the XPIA field experiment, highlight novel measurement approaches, and quantify uncertainties associated with these measurement methods. Line-of-sight velocities measured by scanning lidars and radars exhibit close agreement with tower measurements, despite differences in measurement volumes. Virtual towers of wind measurements, from multiple lidars or radars, also agree well with tower and profiling lidar measurements. Estimates of winds over volumes from scanning lidars and radars are in close agreement, enabling assessment of spatial variability. Strengths of the radar systems used here include high scan rates, large domain coverage, and availability during most precipitation events, but they struggle at times to provide data during periods with limited atmospheric scatterers. In contrast, for the deployment geometry tested here, the lidars have slower scan rates and less range, but provide more data during non-precipitating atmospheric conditions. Microwave radiometers provide temperature profiles with approximately the same uncertainty as Radio-Acoustic Sounding Systems (RASS). Using a motion platform, we assess motion-compensation algorithms for lidars to be mounted on offshore platforms. Finally, we highlight cases for validation of mesoscale or large-eddy simulations, providing information on accessing the archived dataset. We conclude that modern remote sensing systems provide a generational improvement in observational capabilities, enabling resolution of fine-scale processes critical to understanding inhomogeneous boundary-layer flows.

  10. Assessing State-of-the-Art Capabilities for Probing the Atmospheric Boundary Layer: The XPIA Field Campaign

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundquist, Julie K.; Wilczak, James M.; Ashton, Ryan

    The synthesis of new measurement technologies with advances in high-performance computing provides an unprecedented opportunity to advance our understanding of the atmosphere, particularly with regard to the complex flows in the atmospheric boundary layer. To assess current measurement capabilities for quantifying features of atmospheric flow within wind farms, the U.S. Dept. of Energy sponsored the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign at the Boulder Atmospheric Observatory (BAO) in spring 2015. Herein, we summarize the XPIA field experiment design, highlight novel approaches to boundary-layer measurements, and quantify measurement uncertainties associated with these experimental methods. Line-of-sight velocities measured by scanning lidars and radars exhibit close agreement with tower measurements, despite differences in measurement volumes. Virtual towers of wind measurements, from multiple lidars or dual radars, also agree well with tower and profiling lidar measurements. Estimates of winds over volumes, conducted with rapid lidar scans, agree with those from scanning radars, enabling assessment of spatial variability. Microwave radiometers provide temperature profiles within and above the boundary layer with approximately the same uncertainty as operational remote sensing measurements. Using a motion platform, we assess motion-compensation algorithms for lidars to be mounted on offshore platforms. Finally, we highlight cases that could be useful for validation of large-eddy simulations or mesoscale numerical weather prediction, providing information on accessing the archived dataset. We conclude that modern remote sensing systems provide a generational improvement in observational capabilities, enabling resolution of refined processes critical to understanding inhomogeneous boundary-layer flows such as those found in wind farms.

  11. Profiling atmospheric water vapor by microwave radiometry

    NASA Technical Reports Server (NTRS)

    Wang, J. R.; Wilheit, T. T.; Szejwach, G.; Gesell, L. H.; Nieman, R. A.; Niver, D. S.; Krupp, B. M.; Gagliano, J. A.; King, J. L.

    1983-01-01

    High-altitude microwave radiometric observations at frequencies near 92 and 183.3 GHz were used to study the potential of retrieving atmospheric water vapor profiles over both land and water. An algorithm based on an extended Kalman-Bucy filter was implemented and applied for the water vapor retrieval. The results show great promise for atmospheric water vapor profiling by microwave radiometry, heretofore not attainable at lower frequencies.

  12. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.

    1997-01-01

    Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling, and Deschamps; however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction of the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.

  13. Binarization algorithm for document image with complex background

    NASA Astrophysics Data System (ADS)

    Miao, Shaojun; Lu, Tongwei; Min, Feng

    2015-12-01

    The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in text images, binarization is a very difficult problem. This paper presents an improved binarization algorithm, which can be divided into several steps. First, a background approximation is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, contrast compensation is performed to reduce the impact of uneven lighting and improve the contrast of the original image. Third, the first derivative of the compensated image is calculated to obtain an average threshold value, and edge detection is performed. Fourth, the stroke width of the text is estimated by measuring distances between edge pixels; the final stroke width is the most frequent distance in the histogram. Fifth, a window size is derived from the final stroke width, and a local threshold estimation approach binarizes the image. Finally, small noise is removed with morphological operators. Experimental results show that the proposed method can effectively remove the noise caused by a complex background and varying light.
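
    As a rough illustration of this pipeline, the sketch below chains together the stages named above (background fit, contrast compensation, a crude stroke-width estimate, local thresholding with a stroke-width-sized window, and morphological cleanup) using OpenCV and NumPy. The helper name, all parameters, and several shortcuts (an Otsu-based stroke-width proxy instead of the edge-distance histogram, mean adaptive thresholding) are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative document-binarization pipeline; parameters are placeholders.
import cv2
import numpy as np

def binarize_document(gray):
    """gray: uint8 single-channel document image."""
    h, w = gray.shape
    # 1) Approximate the slowly varying background with a 2-D quadratic surface
    #    fitted by least squares on a subsampled grid.
    step = 8
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    A = np.column_stack([np.ones(xs.size), xs.ravel(), ys.ravel(),
                         xs.ravel() ** 2, ys.ravel() ** 2, (xs * ys).ravel()])
    c, *_ = np.linalg.lstsq(A, gray[::step, ::step].ravel().astype(float), rcond=None)
    Y, X = np.mgrid[0:h, 0:w]
    background = c[0] + c[1] * X + c[2] * Y + c[3] * X ** 2 + c[4] * Y ** 2 + c[5] * X * Y

    # 2) Contrast compensation: divide out the background, then sharpen the text
    #    with an edge-preserving bilateral filter.
    comp = np.clip(255.0 * gray / (background + 1e-6), 0, 255).astype(np.uint8)
    comp = cv2.bilateralFilter(comp, d=9, sigmaColor=50, sigmaSpace=7)

    # 3) Crude stroke-width estimate from the distance transform of a rough text
    #    mask (the paper uses a histogram of edge-pixel distances instead).
    _, rough = cv2.threshold(comp, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(rough, cv2.DIST_L2, 3)
    stroke = max(2, int(2 * np.median(dist[rough > 0]))) if rough.any() else 15

    # 4) Local thresholding with a window size tied to the stroke width.
    block = 4 * stroke + 1                        # odd block size
    binary = cv2.adaptiveThreshold(comp, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, block, 10)

    # 5) Morphological opening to remove small speckle noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```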

  14. An improved NAS-RIF algorithm for image restoration

    NASA Astrophysics Data System (ADS)

    Gao, Weizhe; Zou, Jianhua; Xu, Rong; Liu, Changhai; Li, Hengnian

    2016-10-01

    Space optical images are inevitably degraded by atmospheric turbulence, errors of the optical system, and motion. In order to recover the true image, a novel nonnegativity and support constraints recursive inverse filtering (NAS-RIF) algorithm is proposed to restore the degraded image. First, the image noise is reduced by a Contourlet denoising algorithm. Second, reliable estimation of the object support region is used to accelerate convergence; an optimal threshold segmentation technique is introduced to improve the object support region. Finally, an object-construction constraint and a logarithm function are added to enhance the stability of the algorithm. Experimental results demonstrate that the proposed algorithm increases the PSNR and improves the quality of the restored images. The convergence speed of the proposed algorithm is faster than that of the original NAS-RIF algorithm.

  15. LAWS simulation: Sampling strategies and wind computation algorithms

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D. A.; Wood, S. A.; Houston, S. H.

    1989-01-01

    In general, work has continued on developing and evaluating algorithms designed to manage the Laser Atmospheric Wind Sounder (LAWS) lidar pulses and to compute the horizontal wind vectors from the line-of-sight (LOS) measurements. These efforts fall into three categories: Improvements to the shot management and multi-pair algorithms (SMA/MPA); observing system simulation experiments; and ground-based simulations of LAWS.

  16. A Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide vertical profiles of atmospheric properties. However, noise in the lidar signal is unavoidable, and it complicates the extraction of further information. Every de-noising method has its own characteristics and limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm based on signal segmentation and reconstruction is proposed to enhance the SNR of a ground-based lidar signal. The signal segmentation, the keystone of the algorithm, splits the lidar signal into three parts, which are processed by different de-noising methods according to their characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and on a real dual field-of-view lidar signal shows the feasibility of the universal de-noising algorithm.

  17. Synthesis of atmospheric turbulence point spread functions by sparse and redundant representations

    NASA Astrophysics Data System (ADS)

    Hunt, Bobby R.; Iler, Amber L.; Bailey, Christopher A.; Rucci, Michael A.

    2018-02-01

    Atmospheric turbulence is a fundamental problem in imaging through long slant ranges, horizontal-range paths, or uplooking astronomical cases through the atmosphere. An essential characterization of atmospheric turbulence is the point spread function (PSF). Turbulence images can be simulated to study basic questions, such as image quality and image restoration, by synthesizing PSFs of desired properties. In this paper, we report on a method to synthesize PSFs of atmospheric turbulence. The method uses recent developments in sparse and redundant representations. From a training set of measured atmospheric PSFs, we construct a dictionary of "basis functions" that characterize the atmospheric turbulence PSFs. A PSF can be synthesized from this dictionary by a properly weighted combination of dictionary elements. We disclose an algorithm to synthesize PSFs from the dictionary. The algorithm can synthesize PSFs in three orders of magnitude less computing time than conventional wave optics propagation methods. The resulting PSFs are also shown to be statistically representative of the turbulence conditions that were used to construct the dictionary.
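
    The dictionary-and-sparse-coding workflow described here maps naturally onto off-the-shelf tools. The toy below learns an over-complete dictionary from randomly generated Gaussian-blob "PSFs" (a placeholder for measured turbulence PSFs) with scikit-learn, then synthesizes a new PSF as a sparse, OMP-selected combination of atoms. Nothing here reproduces the authors' training data, dictionary size, or weighting scheme.

```python
# Toy PSF synthesis by dictionary learning and sparse coding (placeholder data).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

rng = np.random.default_rng(2)
size = 16

def toy_psf():
    # Stand-in for a measured turbulence PSF: a randomly shifted, randomly
    # widened Gaussian spot, flattened to a vector.
    y, x = np.mgrid[0:size, 0:size] - size / 2
    cx, cy = rng.normal(0.0, 1.5, 2)
    s = rng.uniform(1.0, 3.0)
    psf = np.exp(-(((x - cx) ** 2 + (y - cy) ** 2) / (2 * s ** 2)))
    return (psf / psf.sum()).ravel()

train = np.array([toy_psf() for _ in range(500)])

# Learn an over-complete dictionary of PSF "building blocks".
dico = MiniBatchDictionaryLearning(n_components=64, alpha=0.5, batch_size=32,
                                   random_state=0).fit(train)

# Synthesize: sparse-code a target PSF on the dictionary, then rebuild it as a
# weighted sum of a few atoms.
coder = SparseCoder(dictionary=dico.components_,
                    transform_algorithm="omp", transform_n_nonzero_coefs=8)
target = toy_psf()
weights = coder.transform(target[None, :])
synthetic = weights @ dico.components_
print("reconstruction error:", np.linalg.norm(synthetic - target))
```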

  18. Weighted nonnegative tensor factorization for atmospheric tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Carmona-Ballester, David; Trujillo-Sevilla, Juan M.; Bonaque-González, Sergio; Gómez-Cárdenes, Óscar; Rodríguez-Ramos, José M.

    2018-06-01

    Context. Increasing the area on the sky over which atmospheric turbulence can be corrected is a matter of wide interest in astrophysics, especially with a new generation of extremely large telescopes (ELTs) arriving in the near future. Aims: In this study we tested whether a method for visual representation in three-dimensional displays, the weighted nonnegative tensor factorization (WNTF), is able to improve the quality of the atmospheric tomography (AT) reconstruction compared to a more standard method such as a randomized Kaczmarz algorithm. Methods: A total of 1000 different atmospheres were simulated and recovered by both methods. Recovery was computed for two and three layers and for four different constellations of laser guide stars (LGS). The goodness of both methods was tested by means of the radial average of the Strehl ratio across the field of view of a telescope of 8 m diameter with a sky coverage of 97.8 arcsec. Results: The proposed method significantly outperformed the Kaczmarz algorithm in all tested cases (p ≤ 0.05). For WNTF, the three-layer configuration provided better outcomes, but there was no clear relation between the LGS constellation and the quality of the Strehl ratio maps. Conclusions: The WNTF method is a novel technique in astronomy, and its use to recover atmospheric turbulence profiles was proposed and tested. It showed better reconstruction quality than a conventional Kaczmarz algorithm, independently of the number and height of the recovered atmospheric layers and of the constellation of laser guide stars used. The WNTF method was shown to be a useful tool in highly ill-posed AT problems, where classical algorithms have difficulty producing high-Strehl maps.
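
    For reference, the baseline the WNTF method is compared against can be summarized in a few lines. The snippet below implements a plain randomized Kaczmarz iteration for a generic linear system A x = b, with rows drawn in proportion to their squared norm; the matrix and data are random placeholders rather than an atmospheric tomography operator.

```python
# Randomized Kaczmarz iteration on a generic linear system (placeholder data).
import numpy as np

def randomized_kaczmarz(A, b, n_iter=20000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms2 = np.sum(A ** 2, axis=1)
    probs = row_norms2 / row_norms2.sum()        # rows picked with prob. proportional to squared norm
    for _ in range(n_iter):
        i = rng.choice(m, p=probs)
        # Project the current iterate onto the hyperplane defined by row i.
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 100))
x_true = rng.normal(size=100)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```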

  19. Asbestos-related occupational cancers compensated under the Industrial Accident Compensation Insurance in Korea.

    PubMed

    Ahn, Yeon-Soon; Kang, Seong-Kyu

    2009-04-01

    Compensation for asbestos-related cancers occurring in occupationally exposed workers is a global issue, and it is also an issue in Korea. To provide basic information regarding compensation for workers exposed to asbestos, 60 cases of asbestos-related occupational lung cancer and mesothelioma compensated over 15 years, from 1993 (the year the first case was compensated) to 2007, by the Korea Labor Welfare Corporation (KLWC) are described. The characteristics of the cases were analyzed using the KLWC electronic data and the epidemiologic investigation data conducted by the Occupational Safety and Health Research Institute (OSHRI) of the Korea Occupational Safety and Health Agency (KOSHA). The KLWC approved compensation for 41 cases of lung cancer and 19 cases of mesothelioma. Males accounted for 91.7% (55 cases) of the approved cases. The most common age group was 50-59 yr (45.0%). The mean duration of asbestos exposure for lung cancer and mesothelioma cases was 19.2 and 16.0 yr, respectively. The mean latency period for lung cancer and mesothelioma cases was 22.1 and 22.6 yr, respectively. The major industries associated with mesothelioma cases were shipbuilding and maintenance (4 cases) and manufacture of asbestos textiles (3 cases). The major industries associated with lung cancer cases were shipbuilding and maintenance (7 cases), construction (6 cases), and manufacture of basic metals (4 cases). The statistics pertaining to asbestos-related occupational cancers in Korea differ from those of other developed countries in that more cases of mesothelioma were compensated than lung cancer cases. Also, the mean latency period for disease onset was shorter than reported by existing epidemiologic studies; this discrepancy may be related to the short history of occupational asbestos use in Korea. Considering the current Korean use of asbestos, the number of compensated cases in Korea is expected to increase in the future, though not as much as in developed countries.

  20. Bjerknes Compensation in Meridional Heat Transport under Freshwater Forcing and the Role of Climate Feedback

    NASA Astrophysics Data System (ADS)

    Wen, Qin

    2017-04-01

    Using a coupled Earth climate model, freshwater experiments are performed to study the Bjerknes compensation (BJC) between meridional atmosphere heat transport (AHT) and meridional ocean heat transport (OHT). Freshwater hosing in the North Atlantic weakens the Atlantic meridional overturning circulation (AMOC) and thus reduces the northward OHT in the Atlantic significantly, leading to a cooling (warming) in surface layer in the Northern (Southern) Hemisphere. This results in an enhanced Hadley Cell and northward AHT. Meanwhile, the OHT in the Indo-Pacific is increased in response to the Hadley Cell change, partially offsetting the reduced OHT in the Atlantic. Two compensations occur here: compensation between the AHT and the Atlantic OHT, and that between the Indo-Pacific OHT and the Atlantic OHT. The AHT change compensates the OHT change very well in the extratropics, while the former overcompensates the latter in the tropics due to the Indo-Pacific change. The BJC can be understood from the viewpoint of large-scale circulation change. However, the intrinsic mechanism of BJC is related to the climate feedback of Earth system. Our coupled model experiments confirm that the occurrence of BJC is an intrinsic requirement of local energy balance, and local climate feedback determines the extent of BJC, consistent with previous theoretical results. Even during the transient period of climate change in the model, the BJC is well established when the ocean heat storage is slowly varying and its change is weaker than the net heat flux changes at the ocean surface and the top of the atmosphere. The BJC can be deduced from the local climate feedback. Under the freshwater forcing, the overcompensation in the tropics (undercompensation in the extratropics) is mainly caused by the positive longwave feedback related to cloud (negative longwave feedback related to surface temperature change). Different dominant feedbacks determine different BJC scenarios in different regions

  1. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the event of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to that of the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides better sampling efficiency by reusing all the generated samples.
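
    The core idea of refitting the proposal from all previously generated samples can be conveyed with a deliberately simplified adaptive importance sampler. The toy below uses an invented distance-decay forward model, a flat prior, and single-Gaussian proposal weighting instead of the full AMIS mixture weights, so it captures the spirit of the approach rather than the paper's algorithm; all sensor positions and noise levels are made up.

```python
# Simplified adaptive importance sampling for a toy 2-D source-location problem.
import numpy as np

rng = np.random.default_rng(3)
sensors = rng.uniform(-5, 5, size=(10, 2))       # hypothetical sensor positions
true_src = np.array([1.5, -2.0])

def forward(src):
    # Toy forward model: measured concentration decays with sensor-source distance.
    d = np.linalg.norm(sensors - src, axis=1)
    return 1.0 / (1.0 + d ** 2)

sigma_obs = 0.1
obs = forward(true_src) + sigma_obs * rng.standard_normal(len(sensors))

def log_like(src):
    return -np.sum((obs - forward(src)) ** 2) / (2 * sigma_obs ** 2)

mean, cov = np.zeros(2), 9.0 * np.eye(2)          # broad initial proposal
all_s, all_lw = [], []
for _ in range(6):
    draws = rng.multivariate_normal(mean, cov, size=1000)
    diff = draws - mean
    log_q = (-0.5 * np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
             - 0.5 * np.log(np.linalg.det(2 * np.pi * cov)))
    log_w = np.array([log_like(x) for x in draws]) - log_q    # flat prior assumed
    all_s.append(draws); all_lw.append(log_w)
    s = np.vstack(all_s)
    w = np.exp(np.concatenate(all_lw) - np.max(np.concatenate(all_lw)))
    w /= w.sum()
    mean = w @ s                                   # refit proposal from all samples
    cov = np.cov(s.T, aweights=w) + 1e-2 * np.eye(2)

print("posterior mean estimate:", mean, " true source:", true_src)
```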

  2. Markerless motion estimation for motion-compensated clinical brain imaging

    NASA Astrophysics Data System (ADS)

    Kyme, Andre Z.; Se, Stephen; Meikle, Steven R.; Fulton, Roger R.

    2018-05-01

    Motion-compensated brain imaging can dramatically reduce the artifacts and quantitative degradation associated with voluntary and involuntary subject head motion during positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT). However, motion-compensated imaging protocols are not in widespread clinical use for these modalities. A key reason for this seems to be the lack of a practical motion tracking technology that allows for smooth and reliable integration of motion-compensated imaging protocols in the clinical setting. We seek to address this problem by investigating the feasibility of a highly versatile optical motion tracking method for PET, SPECT and CT geometries. The method requires no attached markers, relying exclusively on the detection and matching of distinctive facial features. We studied the accuracy of this method in 16 volunteers in a mock imaging scenario by comparing the estimated motion with an accurate marker-based method used in applications such as image guided surgery. A range of techniques to optimize performance of the method were also studied. Our results show that the markerless motion tracking method is highly accurate (<2 mm discrepancy against a benchmarking system) on an ethnically diverse range of subjects and, moreover, exhibits lower jitter and estimation of motion over a greater range than some marker-based methods. Our optimization tests indicate that the basic pose estimation algorithm is very robust but generally benefits from rudimentary background masking. Further marginal gains in accuracy can be achieved by accounting for non-rigid motion of features. Efficiency gains can be achieved by capping the number of features used for pose estimation provided that these features adequately sample the range of head motion encountered in the study. These proof-of-principle data suggest that markerless motion tracking is amenable to motion-compensated brain imaging and holds

  3. Flies compensate for unilateral wing damage through modular adjustments of wing and body kinematics.

    PubMed

    Muijres, Florian T; Iwasaki, Nicole A; Elzinga, Michael J; Melis, Johan M; Dickinson, Michael H

    2017-02-06

    Using high-speed videography, we investigated how fruit flies compensate for unilateral wing damage, in which loss of area on one wing compromises both weight support and roll torque equilibrium. Our results show that flies control for unilateral damage by rolling their body towards the damaged wing and by adjusting the kinematics of both the intact and damaged wings. To compensate for the reduction in vertical lift force due to damage, flies elevate wingbeat frequency. Because this rise in frequency increases the flapping velocity of both wings, it has the undesired consequence of further increasing roll torque. To compensate for this effect, flies increase the stroke amplitude and advance the timing of pronation and supination of the damaged wing, while making the opposite adjustments on the intact wing. The resulting increase in force on the damaged wing and decrease in force on the intact wing function to maintain zero net roll torque. However, the bilaterally asymmetrical pattern of wing motion generates a finite lateral force, which flies balance by maintaining a constant body roll angle. Based on these results and additional experiments using a dynamically scaled robotic fly, we propose a simple bioinspired control algorithm for asymmetric wing damage.

  4. Postglacial Terrestrial Carbon Dynamics and Atmospheric CO2

    NASA Astrophysics Data System (ADS)

    Prentice, C. I.; Harrison, S. P.; Kaplan, J. O.

    2002-12-01

    Combining PMIP climate model results from the last glacial maximum (LGM) with biome modelling indicates the involvement of both cold, dry climate and physiological effects of low atmospheric CO2 in reducing tree cover on the continents. Further results with the LPJ dynamic vegetation model agree with independent evidence for greatly reduced terrestrial carbon storage at LGM, and suggest that terrestrial carbon storage continued to increase during the Holocene. These results point to predominantly oceanic explanations for preindustrial changes in atmospheric CO2, although land changes after the LGM may have contributed indirectly by reducing the aeolian marine Fe source and (on a longer time scale) by triggering CaCO3 compensation in the ocean.

  5. Polarization-based compensation of astigmatism.

    PubMed

    Chowdhury, Dola Roy; Bhattacharya, Kallol; Chakraborty, Ajay K; Ghosh, Raja

    2004-02-01

    One approach to aberration compensation in an imaging system is to introduce a suitable phase mask at its aperture plane. We utilize this principle for the compensation of astigmatism. A suitable polarization mask placed at the aperture plane, together with a polarizer-retarder combination at the input of the imaging system, provides compensating polarization-induced phase steps at different quadrants of the aperture masked by different polarizers. The aberrant phase can be considerably compensated by the proper choice of the polarization mask and suitable selection of the polarization parameters involved. The results presented here bear out our theoretical expectations.

  6. The digital implementation of control compensators: The coefficient wordlength issue

    NASA Technical Reports Server (NTRS)

    Moroney, P.; Willsky, A. S.; Houpt, P. K.

    1979-01-01

    A number of mathematical procedures exist for designing discrete-time compensators. However, the digital implementation of these designs, with a microprocessor for example, has not received nearly as thorough an investigation. The finite-precision nature of the digital hardware makes it necessary to choose an algorithm (computational structure) that will perform well enough with regard to the initial objectives of the design. This paper describes a procedure for estimating the required fixed-point coefficient wordlength for any given computational structure for the implementation of a single-input single-output LQG design. The results are compared to the actual number of bits necessary to achieve a specified performance index.
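
    The kind of question the paper addresses can be explored numerically with a simple experiment: quantize the coefficients of a discrete-time filter to a given number of fractional bits and compare the quantized frequency response with the ideal one. The filter below is an arbitrary Butterworth stand-in, not an LQG compensator, and the rounding model is a plain fixed-point round-to-nearest assumption.

```python
# Coefficient-wordlength experiment on a stand-in discrete-time filter.
import numpy as np
from scipy import signal

b, a = signal.butter(4, 0.2)                  # stand-in compensator (direct form)

def quantize(coeffs, bits):
    scale = 2.0 ** bits
    return np.round(coeffs * scale) / scale   # round-to-nearest fixed point

w, h_ref = signal.freqz(b, a)
for bits in (6, 8, 10, 12):
    _, h_q = signal.freqz(quantize(b, bits), quantize(a, bits))
    err_db = 20 * np.log10(np.max(np.abs(h_q - h_ref)) + 1e-16)
    print(f"{bits:2d} fractional bits: worst-case response error {err_db:6.1f} dB")
```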

  7. Design of static synchronous series compensator based damping controller employing invasive weed optimization algorithm.

    PubMed

    Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul

    2014-01-01

    This paper proposes the design of a Static Synchronous Series Compensator (SSSC) based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of the Invasive Weed Optimization (IWO) technique. A conventional PI controller is used as the SSSC damping controller, taking the rotor speed deviation as its input. The damping controller parameters are tuned with IWO using a cost function based on the time integral of the absolute error. The performance of the IWO-based controller is compared with that of a Particle Swarm Optimization (PSO) based controller. Time-domain simulation results are presented, and the performance of the controllers under different loading conditions and fault scenarios is studied in order to illustrate the effectiveness of the IWO-based design approach.
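
    The cost function being minimized by the optimizer is straightforward to sketch. The snippet below simulates the unit-step response of a toy closed loop formed by a PI controller and an illustrative second-order plant (a stand-in for the SMIB dynamics, which are not reproduced here) and integrates the absolute tracking error; an optimizer such as IWO or PSO would then search the (Kp, Ki) plane to minimize this value. The plant, gains, and helper name are assumptions.

```python
# Integral-of-absolute-error cost for PI gains on a stand-in plant.
import numpy as np
from scipy import signal

def iae_cost(kp, ki, t_end=10.0, n=2000):
    plant = signal.TransferFunction([1.0], [1.0, 0.4, 4.0])    # illustrative plant
    pi_ctrl = signal.TransferFunction([kp, ki], [1.0, 0.0])    # C(s) = kp + ki/s
    num = np.polymul(plant.num, pi_ctrl.num)
    den = np.polyadd(np.polymul(plant.den, pi_ctrl.den), num)  # closed loop G*C/(1+G*C)
    t = np.linspace(0.0, t_end, n)
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    err = 1.0 - y                                 # unit-step tracking error
    return np.sum(np.abs(err)) * (t[1] - t[0])    # time integral of |error|

# Any optimizer (IWO, PSO, ...) would search (Kp, Ki) to minimize this cost.
for kp, ki in [(0.5, 0.5), (2.0, 1.0), (5.0, 2.0)]:
    print(f"Kp={kp:3.1f}, Ki={ki:3.1f}:  IAE = {iae_cost(kp, ki):.3f}")
```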

  8. Airborne Wind Profiling Algorithms for the Pulsed 2-Micron Coherent Doppler Lidar at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.; Ray, Taylor J.

    2013-01-01

    Two versions of airborne wind profiling algorithms for the pulsed 2-micron coherent Doppler lidar system at NASA Langley Research Center in Virginia are presented. Each algorithm utilizes a different number of line-of-sight (LOS) lidar returns while compensating for the adverse effects of the differing coordinate systems of the aircraft and the Earth. One of the two algorithms, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), estimates wind products using two LOSs; the other algorithm utilizes five LOSs. The airborne lidar data were acquired during NASA's Genesis and Rapid Intensification Processes (GRIP) campaign in 2010. The wind profile products from the two algorithms are compared with dropsonde data to validate their results.
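
    The geometric core of a two-LOS retrieval is a small linear solve: each radial velocity is the projection of the horizontal wind onto the corresponding look direction. The sketch below assumes a negligible vertical wind and perfect attitude knowledge, with made-up azimuths, elevation, and velocities; it is a geometry illustration, not the APOLO processing chain.

```python
# Horizontal wind from two line-of-sight Doppler measurements (toy geometry).
import numpy as np

def horizontal_wind(v_los, azimuths_deg, elevation_deg):
    """Solve v_los_i = cos(el) * (u*sin(az_i) + v*cos(az_i)) for (u, v)."""
    az = np.radians(azimuths_deg)
    el = np.radians(elevation_deg)
    G = np.cos(el) * np.column_stack([np.sin(az), np.cos(az)])
    uv, *_ = np.linalg.lstsq(G, np.asarray(v_los), rcond=None)
    return uv      # (u eastward, v northward) in the units of v_los

# Example: two looks 40 degrees apart in azimuth, 30 degrees below horizontal.
u, v = horizontal_wind(v_los=[6.2, 3.1], azimuths_deg=[30.0, 70.0], elevation_deg=-30.0)
print(f"u = {u:.2f} m/s, v = {v:.2f} m/s")
```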

  9. Temperature-independent fiber-Bragg-grating-based atmospheric pressure sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiguo; Shen, Chunyan; Li, Luming

    2018-03-01

    Atmospheric pressure is a key quantity for altitude measurement in modern aircraft; moreover, it is an indispensable parameter in meteorological telemetry systems. With growing public concern about the weather, accurate and convenient atmospheric pressure measurements can provide strong support for meteorological analysis. However, electronic atmospheric pressure sensors currently in use suffer from several shortcomings. After analysis and discussion, we propose an innovative structural design in which a vacuum membrane box and a temperature-independent strain sensor, based on an equal-strength cantilever beam structure and fiber Bragg grating (FBG) sensors, are used. We provide experimental verification that the atmospheric pressure sensor has a simple structure, requires no external power supply, offers automatic temperature compensation, and achieves high sensitivity. The sensor system exhibits good repeatability and a sensitivity of up to 100 nm/MPa, as well as the desired hysteresis behavior.

  10. Extended use of two crossed Babinet compensators for wavefront sensing in adaptive optics

    NASA Astrophysics Data System (ADS)

    Paul, Lancelot; Kumar Saxena, Ajay

    2010-12-01

    An extended use of two crossed Babinet compensators as a wavefront sensor for adaptive optics applications is proposed. This method is based on the lateral shearing interferometry technique in two directions. A single record of the fringes in a pupil plane provides the information about the wavefront. Theoretical simulations based on this approach for various atmospheric conditions and other errors of optical surfaces are provided for a better understanding of the method. Derivation of the results from a laboratory experiment using simulated atmospheric conditions demonstrates the steps involved in data analysis and wavefront evaluation. It is shown that this method offers a higher degree of freedom in the choice of subapertures and detectors, and can be suitably adopted for real-time wavefront sensing for adaptive optics.

  11. How to avoid deferred-compensation troubles.

    PubMed

    Freeman, Todd I

    2005-06-01

    Executive compensation packages have long included stock options and deferred compensation plans in order to compete for talent. Last year, Congress passed a law in response to the Enron debacle, in which executives were perceived to be protecting their deferred compensation at the expense of employees, creditors, and investors. The new law is designed to protect companies and their shareholders from being raided by the very executives that guided the company to financial ruin. Physicians who are part owners of medical practices need to know about the changes in the law regarding deferred compensation and how to avoid costly tax penalties. This article discusses how the changes affect medical practices as well as steps physician-owned clinics can take to avoid the risk of penalty, such as freezing deferred compensation and creating a new deferred compensation plan.

  12. Array signal recovery algorithm for a single-RF-channel DBF array

    NASA Astrophysics Data System (ADS)

    Zhang, Duo; Wu, Wen; Fang, Da Gang

    2016-12-01

    An array signal recovery algorithm based on sparse signal reconstruction theory is proposed for a single-RF-channel digital beamforming (DBF) array. A single-RF-channel antenna array is a low-cost antenna array in which signals are obtained from all antenna elements by only one microwave digital receiver. The spatially parallel array signals are converted into time-sequence signals, which are then sampled by the system. The proposed algorithm uses these time-sequence samples to recover the original parallel array signals by exploiting the second-order sparse structure of the array signals. Additionally, an optimization method based on the artificial bee colony (ABC) algorithm is proposed to improve the reconstruction performance. Using the proposed algorithm, the motion compensation problem for the single-RF-channel DBF array can be solved effectively, and the angle and Doppler information for the target can be simultaneously estimated. The effectiveness of the proposed algorithms is demonstrated by the results of numerical simulations.

  13. Rapid calculation of radiative heating rates and photodissociation rates in inhomogeneous multiple scattering atmospheres

    NASA Technical Reports Server (NTRS)

    Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.

    1989-01-01

    The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates all may be calculated with the simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.

  14. Description of algorithms for processing Coastal Zone Color Scanner (CZCS) data

    NASA Technical Reports Server (NTRS)

    Zion, P. M.

    1983-01-01

    The algorithms for processing coastal zone color scanner (CZCS) data to geophysical units (pigment concentration) are described. Current public domain information for processing these data is summarized. Calibration, atmospheric correction, and bio-optical algorithms are presented. Three CZCS data processing implementations are compared.

  15. Image-classification-based global dimming algorithm for LED backlights in LCDs

    NASA Astrophysics Data System (ADS)

    Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv

    2015-07-01

    Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of the input images. The parameters for the backlight dimming level and pixel compensation adapt to the image class. Simulation results show that the classification-based dimming algorithm improves power reduction by 86.13% compared with dimming without classification, with almost the same display quality. A prototype has been developed; no distortions are perceived when playing videos, and the practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.
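
    A minimal version of the two-stage idea (pick a backlight level from image statistics, then compensate the pixel values so the displayed luminance is roughly preserved) is sketched below. The three-way classification rule, the gamma value, the backlight levels, and the helper name are invented placeholders, not the parameters of the paper.

```python
# Toy image-classification-based global dimming with pixel compensation.
import numpy as np

def global_dimming(frame, gamma=2.2):
    lum = frame.mean(axis=2) / 255.0                 # normalized luminance
    mean_l, max_l = lum.mean(), lum.max()
    # Crude "classification": dark or low-peak images tolerate more dimming.
    if max_l < 0.6:
        backlight = 0.5
    elif mean_l < 0.35:
        backlight = 0.7
    else:
        backlight = 0.9
    # Pixel compensation: raise LCD transmittance to offset the dimmer
    # backlight, clipping where the panel saturates.
    comp = np.clip(frame / 255.0 / (backlight ** (1.0 / gamma)), 0.0, 1.0)
    return backlight, (comp * 255).astype(np.uint8)

frame = (np.random.default_rng(0).random((480, 640, 3)) * 180).astype(np.uint8)
level, compensated = global_dimming(frame)
print("backlight level:", level)
```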

  16. Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker.

    PubMed

    Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun

    2016-10-12

    The low-frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements from the two test star trackers, a new approach, Fourier analysis combined with the Vondrak filter method (FAVF), is proposed in this paper. First, the LFE of the two test star trackers' attitude measurements are analyzed and extracted by the FAVF method; remarkable orbital reproducibility is found in both star trackers' attitude measurements. Then, by using this reproducibility of the LFE, the two star trackers' LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm are demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals is reduced from [27.95'', 25.14'', 82.43''], 3σ, to [16.12'', 15.89'', 53.27''], 3σ.
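
    Because the LFE is reported to repeat with the orbit, its dominant part can be captured by fitting harmonics of the orbital frequency to the attitude residuals. The sketch below does exactly that on synthetic residuals with an invented orbital period and noise level; it stands in for the Fourier-analysis part of FAVF and omits the Vondrak filtering step.

```python
# Extract an orbit-periodic low-frequency error from synthetic attitude residuals.
import numpy as np

orbit_period = 5400.0                        # seconds, hypothetical
t = np.arange(0.0, 5 * orbit_period, 10.0)
rng = np.random.default_rng(4)
lfe_true = (20 * np.sin(2 * np.pi * t / orbit_period + 0.3)
            + 8 * np.sin(4 * np.pi * t / orbit_period - 1.1))
residuals = lfe_true + 5 * rng.standard_normal(t.size)   # arcsec, toy noise

# Least-squares fit of sine/cosine pairs at harmonics of the orbital frequency.
cols = [np.ones_like(t)]
for k in (1, 2, 3):
    cols += [np.sin(2 * np.pi * k * t / orbit_period),
             np.cos(2 * np.pi * k * t / orbit_period)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)
lfe_est = A @ coef

print("RMS before compensation:", residuals.std())
print("RMS after compensation :", (residuals - lfe_est).std())
```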

  17. SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell

    PubMed Central

    González-Partida, José-Tomás; Almorox-González, Pablo; Burgos-García, Mateo; Dorta-Naranjo, Blas-Pablo

    2008-01-01

    This paper presents an experimental Synthetic Aperture Radar (SAR) system that is under development in the Universidad Politécnica de Madrid. The system uses Linear Frequency Modulated Continuous Wave (LFM-CW) radar with a two antenna configuration for transmission and reception. The radar operates in the millimeter-wave band with a maximum transmitted bandwidth of 2 GHz. The proposed system is being developed for Unmanned Aerial Vehicle (UAV) operation. Motion errors in UAV operation can be critical. Therefore, this paper proposes a method for focusing SAR images with movement errors larger than the resolution cell. Typically, this problem is solved using two processing steps: first, coarse motion compensation based on the information provided by an Inertial Measuring Unit (IMU); and second, fine motion compensation for the residual errors within the resolution cell based on the received raw data. The proposed technique tries to focus the image without using data of an IMU. The method is based on a combination of the well known Phase Gradient Autofocus (PGA) for SAR imagery and typical algorithms for translational motion compensation on Inverse SAR (ISAR). This paper shows the first real experiments for obtaining high resolution SAR images using a car as a mobile platform for our radar. PMID:27879884

  18. SAR System for UAV Operation with Motion Error Compensation beyond the Resolution Cell.

    PubMed

    González-Partida, José-Tomás; Almorox-González, Pablo; Burgos-Garcia, Mateo; Dorta-Naranjo, Blas-Pablo

    2008-05-23

    This paper presents an experimental Synthetic Aperture Radar (SAR) system that is under development in the Universidad Politécnica de Madrid. The system uses Linear Frequency Modulated Continuous Wave (LFM-CW) radar with a two antenna configuration for transmission and reception. The radar operates in the millimeter-wave band with a maximum transmitted bandwidth of 2 GHz. The proposed system is being developed for Unmanned Aerial Vehicle (UAV) operation. Motion errors in UAV operation can be critical. Therefore, this paper proposes a method for focusing SAR images with movement errors larger than the resolution cell. Typically, this problem is solved using two processing steps: first, coarse motion compensation based on the information provided by an Inertial Measuring Unit (IMU); and second, fine motion compensation for the residual errors within the resolution cell based on the received raw data. The proposed technique tries to focus the image without using data of an IMU. The method is based on a combination of the well known Phase Gradient Autofocus (PGA) for SAR imagery and typical algorithms for translational motion compensation on Inverse SAR (ISAR). This paper shows the first real experiments for obtaining high resolution SAR images using a car as a mobile platform for our radar.

  19. Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Conboy, B. (Technical Monitor)

    1999-01-01

    Significant accomplishments made during the present reporting period include: 1) Installed spectral optimization algorithm in the SeaDas image processing environment and successfully processed SeaWiFS imagery. The results were superior to the standard SeaWiFS algorithm (the MODIS prototype) in a turbid atmosphere off the US East Coast, but similar in a clear (typical) oceanic atmosphere; 2) Inverted ACE-2 LIDAR measurements coupled with sun photometer-derived aerosol optical thickness to obtain the vertical profile of aerosol optical thickness. The profile was validated with simultaneous aircraft measurements; and 3) Obtained LIDAR and CIMEL measurements of typical maritime and mineral dust-dominated marine atmosphere in the U.S. Virgin Islands. Contemporaneous SeaWiFS imagery were also acquired.

  20. X-Chromosome dosage compensation.

    PubMed

    Meyer, Barbara J

    2005-06-25

    In mammals, flies, and worms, sex is determined by distinctive regulatory mechanisms that cause males (XO or XY) and females (XX) to differ in their dose of X chromosomes. In each species, an essential X chromosome-wide process called dosage compensation ensures that somatic cells of either sex express equal levels of X-linked gene products. The strategies used to achieve dosage compensation are diverse, but in all cases, specialized complexes are targeted specifically to the X chromosome(s) of only one sex to regulate transcript levels. In C. elegans, this sex-specific targeting of the dosage compensation complex (DCC) is controlled by the same developmental signal that establishes sex, the ratio of X chromosomes to sets of autosomes (X:A signal). Molecular components of this chromosome counting process have been defined. Following a common step of regulation, sex determination and dosage compensation are controlled by distinct genetic pathways. C. elegans dosage compensation is implemented by a protein complex that binds both X chromosomes of hermaphrodites to reduce transcript levels by one-half. The dosage compensation complex resembles the conserved 13S condensin complex required for both mitotic and meiotic chromosome resolution and condensation, implying the recruitment of ancient proteins to the new task of regulating gene expression. Within each C. elegans somatic cell, one of the DCC components also participates in the separate mitotic/meiotic condensin complex. Other DCC components play pivotal roles in regulating the number and distribution of crossovers during meiosis. The strategy by which C. elegans X chromosomes attract the condensin-like DCC is known. Small, well-dispersed X-recognition elements act as entry sites to recruit the dosage compensation complex and to nucleate spreading of the complex to X regions that lack recruitment sites. In this manner, a repressed chromatin state is spread in cis over short or long distances, thus establishing the

  1. Stochastic speckle noise compensation in optical coherence tomography using non-stationary spline-based speckle noise modelling.

    PubMed

    Cameron, Andrew; Lui, Dorothy; Boroomand, Ameneh; Glaister, Jeffrey; Wong, Alexander; Bizheva, Kostadinka

    2013-01-01

    Optical coherence tomography (OCT) allows for non-invasive 3D visualization of biological tissue at cellular level resolution. Often hindered by speckle noise, the visualization of important biological tissue details in OCT that can aid disease diagnosis can be improved by speckle noise compensation. A challenge with handling speckle noise is its inherent non-stationary nature, where the underlying noise characteristics vary with the spatial location. In this study, an innovative speckle noise compensation method is presented for handling the non-stationary traits of speckle noise in OCT imagery. The proposed approach centers on a non-stationary spline-based speckle noise modeling strategy to characterize the speckle noise. The novel method was applied to ultra high-resolution OCT (UHROCT) images of the human retina and corneo-scleral limbus acquired in-vivo that vary in tissue structure and optical properties. Test results showed improved performance of the proposed novel algorithm compared to a number of previously published speckle noise compensation approaches in terms of higher signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and better overall visual assessment.

  2. Stochastic speckle noise compensation in optical coherence tomography using non-stationary spline-based speckle noise modelling

    PubMed Central

    Cameron, Andrew; Lui, Dorothy; Boroomand, Ameneh; Glaister, Jeffrey; Wong, Alexander; Bizheva, Kostadinka

    2013-01-01

    Optical coherence tomography (OCT) allows for non-invasive 3D visualization of biological tissue at cellular level resolution. Often hindered by speckle noise, the visualization of important biological tissue details in OCT that can aid disease diagnosis can be improved by speckle noise compensation. A challenge with handling speckle noise is its inherent non-stationary nature, where the underlying noise characteristics vary with the spatial location. In this study, an innovative speckle noise compensation method is presented for handling the non-stationary traits of speckle noise in OCT imagery. The proposed approach centers on a non-stationary spline-based speckle noise modeling strategy to characterize the speckle noise. The novel method was applied to ultra high-resolution OCT (UHROCT) images of the human retina and corneo-scleral limbus acquired in-vivo that vary in tissue structure and optical properties. Test results showed improved performance of the proposed novel algorithm compared to a number of previously published speckle noise compensation approaches in terms of higher signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and better overall visual assessment. PMID:24049697

  3. MBE growth of strain-compensated InGaAs/InAlAs/InP quantum cascade lasers

    NASA Astrophysics Data System (ADS)

    Gutowski, P.; Sankowska, I.; Karbownik, P.; Pierścińska, D.; Serebrennikova, O.; Morawiec, M.; Pruszyńska-Karbownik, E.; Gołaszewska-Malec, K.; Pierściński, K.; Muszalski, J.; Bugajski, M.

    2017-05-01

    We investigate growth conditions for strain-compensated In0.67Ga0.33As/In0.36Al0.64As/InP quantum cascade lasers (QCLs) grown by solid-source molecular beam epitaxy (SSMBE). An extensive discussion of the growth procedures is presented. The technology was first elaborated for the In0.53Ga0.47As/In0.52Al0.48As material system lattice-matched to InP. After that, QCLs with a lattice-matched active region were grown to validate the design and the obtained material quality. The next step was elaboration of the growth process, and especially of the growth preparation procedures, for strain-compensated active regions. The grown structures were examined by HRXRD, AFM, and TEM techniques. The on-line implementation of the obtained results in subsequent growth runs was crucial for achieving room-temperature operation of 4.4-μm lasers. For uncoated devices with a Fabry-Perot resonator, up to 250 mW of optical power per facet at 300 K was obtained under pulsed conditions. The paper focuses on the MBE technology and presents the developed algorithm for strain-compensated QCL growth.

  4. 20 CFR 211.14 - Maximum creditable compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Maximum creditable compensation. 211.14... CREDITABLE RAILROAD COMPENSATION § 211.14 Maximum creditable compensation. Maximum creditable compensation for calendar years after 1984 is the maximum annual taxable wage base defined in section 3231(e)(2)(B...

  5. Recursive algorithms for bias and gain nonuniformity correction in infrared videos.

    PubMed

    Pipa, Daniel R; da Silva, Eduardo A B; Pagliari, Carla L; Diniz, Paulo S R

    2012-12-01

    Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN) that degrades image quality, which is also known as spatial nonuniformity. FPN is still a serious problem, despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-square and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, producing recovered images with higher fidelity.
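
    To make the recursive scene-based idea concrete, here is a minimal Python/NumPy sketch of a streaming bias/gain correction. The paper's recursive least-squares and affine-projection updates are replaced by a simpler normalised-LMS step, and the use of a local spatial mean of the corrected frame as the "desired" signal is a common scene-based heuristic assumed here, not taken from the paper.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def scene_based_nuc(frames, mu=0.05):
          """Streaming per-pixel bias/gain nonuniformity correction (sketch).

          frames: iterable of 2-D raw IRFPA frames.
          Yields corrected frames while adapting per-pixel gain and offset.
          """
          gain = offset = None
          for y in frames:
              y = np.asarray(y, dtype=float)
              if gain is None:
                  gain = np.ones_like(y)
                  offset = np.zeros_like(y)
              x_hat = gain * y + offset                 # corrected frame
              desired = uniform_filter(x_hat, size=5)   # local spatial mean as target
              err = desired - x_hat
              gain += mu * err * y / (y * y + 1.0)      # normalised gradient step
              offset += mu * err
              yield x_hat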

  6. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method. Namely, layers which lie deeper within the specimen are relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical and based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used for the calculation of correction factors for each section and a new compensated section series is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels where measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired in CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
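
    The robust-estimation idea can be illustrated with a short Python/NumPy sketch that fits an exponential decay to a z-stack by iteratively reweighted least squares, downweighting voxels that deviate from the decay model. The exponential model, the Huber weighting, and the whole-stack fit are illustrative assumptions; the paper's incremental, motion-estimation-style formulation is not reproduced.

      import numpy as np

      def robust_depth_correction(stack, n_iter=10, k=1.345):
          """Robust exponential attenuation correction for a CLSM z-stack.

          stack: 3-D array (z, y, x). Fits log(I) ~ b0 - a*z with iteratively
          reweighted least squares (Huber weights), so voxels deviating from
          the decay model are downweighted, then rescales each section.
          """
          nz = stack.shape[0]
          z = np.repeat(np.arange(nz, dtype=float), stack[0].size)
          I = stack.reshape(nz, -1).ravel()
          mask = I > 0
          z, logI = z[mask], np.log(I[mask])
          w = np.ones_like(logI)
          for _ in range(n_iter):
              A = np.column_stack([np.ones_like(z), -z])
              sw = np.sqrt(w)
              coef, *_ = np.linalg.lstsq(A * sw[:, None], logI * sw, rcond=None)
              r = logI - A @ coef
              s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
              ratio = np.abs(r) / s
              w = np.where(ratio <= k, 1.0, k / ratio)   # Huber weights
          a = coef[1]                                     # estimated decay rate
          return stack * np.exp(a * np.arange(nz))[:, None, None]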

  7. Quality Assessment of Collection 6 MODIS Atmospheric Science Products

    NASA Astrophysics Data System (ADS)

    Manoharan, V. S.; Ridgway, B.; Platnick, S. E.; Devadiga, S.; Mauoka, E.

    2015-12-01

    Since the launch of the NASA Terra and Aqua satellites in December 1999 and May 2002, respectively, atmosphere and land data acquired by the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on-board these satellites have been reprocessed five times at the MODAPS (MODIS Adaptive Processing System) located at NASA GSFC. The global land and atmosphere products use science algorithms developed by the NASA MODIS science team investigators. MODAPS completed Collection 6 reprocessing of MODIS Atmosphere science data products in April 2015 and is currently generating the Collection 6 products using the latest version of the science algorithms. This reprocessing has generated one of the longest time series of consistent data records for understanding cloud, aerosol, and other constituents in the earth's atmosphere. It is important to carefully evaluate and assess the quality of this data and remove any artifacts to maintain a useful climate data record. Quality Assessment (QA) is an integral part of the processing chain at MODAPS. This presentation will describe the QA approaches and tools adopted by the MODIS Land/Atmosphere Operational Product Evaluation (LDOPE) team to assess the quality of MODIS operational Atmospheric products produced at MODAPS. Some of the tools include global high resolution images, time series analysis and statistical QA metrics. The new high resolution global browse images with pan and zoom have provided the ability to perform QA of products in real time through synoptic QA on the web. This global browse generation has been useful in identifying production error, data loss, and data quality issues from calibration error, geolocation error and algorithm performance. A time series analysis for various science datasets in the Level-3 monthly product was recently developed for assessing any long term drifts in the data arising from instrument errors or other artifacts. This presentation will describe and discuss some test cases from the

  8. SST algorithm based on radiative transfer model

    NASA Astrophysics Data System (ADS)

    Mat Jafri, Mohd Z.; Abdullah, Khiruddin; Bahari, Alui

    2001-03-01

    An algorithm for measuring sea surface temperature (SST) without recourse to in-situ data for calibration has been proposed. The algorithm, which is based on the infrared signal recorded by the satellite sensor, is composed of three terms, namely, the surface emission, the up-welling radiance emitted by the atmosphere, and the down-welling atmospheric radiance reflected at the sea surface. This algorithm requires the transmittance values of thermal bands. The angular dependence of the transmittance function was modeled using the MODTRAN code. Radiosonde data were used with the MODTRAN code. The expression of transmittance as a function of zenith view angle was obtained for each channel through regression of the MODTRAN output. The Ocean Color Temperature Scanner (OCTS) data from the Advanced Earth Observation Satellite (ADEOS) were used in this study. The study area covers the seas of the North West of Peninsular Malaysia region. The in-situ data (ship-collected SST values) were used for verification of the results. Cloud-contaminated pixels were masked out using the standard procedures which have been applied to the Advanced Very High Resolution Radiometer (AVHRR) data. The cloud-free pixels at the in-situ sites were extracted for analysis. The OCTS data were then substituted in the proposed algorithm. The appropriate transmittance value for each channel was then assigned in the calculation. Assessment of the accuracy was made by observing the correlation and the rms deviations between the computed and the ship-collected values. The results were also compared with the results from the OCTS multi-channel sea surface temperature algorithm. The comparison produced high correlation values. The performance of this algorithm is comparable with the established OCTS algorithm. The effect of emissivity on the retrieved SST values was also investigated. An SST map was generated and contoured manually.
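
    A minimal Python sketch of the three-term single-channel inversion described above follows. The transmittance tau(theta) is assumed to come from the MODTRAN-derived regression and the atmospheric path radiances are taken as given; the Planck constants are the standard values for spectral radiance per micrometre, and the channel's effective wavelength is an assumed input.

      import numpy as np

      C1 = 1.191042e8    # 2*h*c^2, in W um^4 m^-2 sr^-1
      C2 = 1.4387752e4   # h*c/k_B, in um K

      def inverse_planck(L, wl_um):
          """Brightness temperature (K) from spectral radiance L at wavelength wl_um."""
          return C2 / (wl_um * np.log(1.0 + C1 / (wl_um**5 * L)))

      def retrieve_sst(L_sensor, tau, L_up, L_down, emissivity, wl_um):
          """Single-channel SST retrieval from the three-term radiative budget:
          L_sensor = tau*eps*B(Ts) + L_up + tau*(1 - eps)*L_down.
          tau is the view-angle-dependent transmittance (e.g. from a MODTRAN
          regression); L_up and L_down are the atmospheric path terms."""
          surf = (L_sensor - L_up - tau * (1.0 - emissivity) * L_down) / (tau * emissivity)
          return inverse_planck(surf, wl_um)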

  9. Submesoscale-selective compensation of fronts in a salinity-stratified ocean.

    PubMed

    Spiro Jaeger, Gualtiero; Mahadevan, Amala

    2018-02-01

    Salinity, rather than temperature, is the leading influence on density in some regions of the world's upper oceans. In the Bay of Bengal, heavy monsoonal rains and runoff generate strong salinity gradients that define density fronts and stratification in the upper ~50 m. Ship-based observations made in winter reveal that fronts exist over a wide range of length scales, but at O(1)-km scales, horizontal salinity gradients are compensated by temperature to alleviate about half the cross-front density gradient. Using a process study ocean model, we show that scale-selective compensation occurs because of surface cooling. Submesoscale instabilities cause density fronts to slump, enhancing stratification along-front. Specifically for salinity fronts, the surface mixed layer (SML) shoals on the less saline side, correlating sea surface salinity (SSS) with SML depth at O(1)-km scales. When losing heat to the atmosphere, the shallower and less saline SML experiences a larger drop in temperature compared to the adjacent deeper SML on the salty side of the front, thus correlating sea surface temperature (SST) with SSS at the submesoscale. This compensation of submesoscale fronts can diminish their strength and thwart the forward cascade of energy to smaller scales. During winter, salinity fronts that are dynamically submesoscale experience larger temperature drops, appearing in satellite-derived SST as cold filaments. In freshwater-influenced regions, cold filaments can mark surface-trapped layers insulated from deeper nutrient-rich waters, unlike in other regions, where they indicate upwelling of nutrient-rich water and enhanced surface biological productivity.

  10. Submesoscale-selective compensation of fronts in a salinity-stratified ocean

    PubMed Central

    Spiro Jaeger, Gualtiero; Mahadevan, Amala

    2018-01-01

    Salinity, rather than temperature, is the leading influence on density in some regions of the world’s upper oceans. In the Bay of Bengal, heavy monsoonal rains and runoff generate strong salinity gradients that define density fronts and stratification in the upper ~50 m. Ship-based observations made in winter reveal that fronts exist over a wide range of length scales, but at O(1)-km scales, horizontal salinity gradients are compensated by temperature to alleviate about half the cross-front density gradient. Using a process study ocean model, we show that scale-selective compensation occurs because of surface cooling. Submesoscale instabilities cause density fronts to slump, enhancing stratification along-front. Specifically for salinity fronts, the surface mixed layer (SML) shoals on the less saline side, correlating sea surface salinity (SSS) with SML depth at O(1)-km scales. When losing heat to the atmosphere, the shallower and less saline SML experiences a larger drop in temperature compared to the adjacent deeper SML on the salty side of the front, thus correlating sea surface temperature (SST) with SSS at the submesoscale. This compensation of submesoscale fronts can diminish their strength and thwart the forward cascade of energy to smaller scales. During winter, salinity fronts that are dynamically submesoscale experience larger temperature drops, appearing in satellite-derived SST as cold filaments. In freshwater-influenced regions, cold filaments can mark surface-trapped layers insulated from deeper nutrient-rich waters, unlike in other regions, where they indicate upwelling of nutrient-rich water and enhanced surface biological productivity. PMID:29507874

  11. CEO Compensation and Hospital Financial Performance

    PubMed Central

    Reiter, Kristin L.; Sandoval, Guillermo A.; Brown, Adalsteinn D.; Pink, George H.

    2010-01-01

    Growing interest in pay-for-performance and the level of CEO pay raises questions about the link between performance and compensation in the health sector. This study compares the compensation of non-profit hospital Chief Executive Officers (CEOs) in Ontario, Canada to the three longest reported and most used measures of hospital financial performance. Our sample consisted of 132 CEOs from 92 hospitals between 1999 and 2006. Unbalanced panel data were analyzed using fixed effects regression. Results suggest that CEO compensation was largely unrelated to hospital financial performance. Inflation-adjusted salaries appeared to increase over time independent of hospital performance, and hospital size was positively correlated with CEO compensation. The apparent upward trend in salary despite some declines in financial performance challenges the fundamental assumption underlying this paper, that is, financial performance is likely linked to CEO compensation in Ontario. Further research is needed to understand long-term performance related to compensation incentives. PMID:19605619

  12. CEO compensation and hospital financial performance.

    PubMed

    Reiter, Kristin L; Sandoval, Guillermo A; Brown, Adalsteinn D; Pink, George H

    2009-12-01

    Growing interest in pay-for-performance and the level of chief executive officers' (CEOs') pay raises questions about the link between performance and compensation in the health sector. This study compares the compensation of nonprofit hospital CEOs in Ontario, Canada to the three longest reported and most used measures of hospital financial performance. Our sample consisted of 132 CEOs from 92 hospitals between 1999 and 2006. Unbalanced panel data were analyzed using fixed effects regression. Results suggest that CEO compensation was largely unrelated to hospital financial performance. Inflation-adjusted salaries appeared to increase over time independent of hospital performance, and hospital size was positively correlated with CEO compensation. The apparent upward trend in salary despite some declines in financial performance challenges the fundamental assumption underlying this article, that is, financial performance is likely linked to CEO compensation in Ontario. Further research is needed to understand long-term performance related to compensation incentives.

  13. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-transfer models or hydrological models, high-resolution atmospheric forcing data is needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline-interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400m horizontal grid spacing). A novel automatic search-algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO-model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling step 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations, and a fully coupled model system.
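
    The three disaggregation steps lend themselves to a compact Python/NumPy sketch, shown below. The bi-quadratic spline corresponds to step 1; the single linear predictor standing in for the "deterministic" rules of step 2 and the column-wise AR(1) noise of step 3 are simplifying assumptions, since the actual scheme derives its rules with an automatic search algorithm.

      import numpy as np
      from scipy.interpolate import RectBivariateSpline

      def disaggregate(coarse, hires_surface, slope, noise_std, rho=0.8, factor=10):
          """Three-step disaggregation of a coarse atmospheric field (sketch).

          coarse        : 2-D low-resolution field.
          hires_surface : high-resolution surface predictor, shape = coarse.shape * factor.
          slope         : assumed regression coefficient linking surface anomalies
                          to anomalies of the atmospheric variable.
          noise_std, rho: amplitude and lag-1 autocorrelation of the AR(1) noise.
          """
          ny, nx = coarse.shape
          yf = np.linspace(0, ny - 1, ny * factor)
          xf = np.linspace(0, nx - 1, nx * factor)
          # Step 1: bi-quadratic spline interpolation (kx = ky = 2).
          step1 = RectBivariateSpline(np.arange(ny), np.arange(nx), coarse, kx=2, ky=2)(yf, xf)
          # Step 2: "deterministic" correction from the high-resolution predictor.
          step2 = step1 + slope * (hires_surface - hires_surface.mean())
          # Step 3: column-wise AR(1) noise restores the unresolved subgrid variability.
          eps = np.random.normal(0.0, noise_std * np.sqrt(1 - rho**2), step2.shape)
          noise = np.zeros_like(step2)
          noise[:, 0] = np.random.normal(0.0, noise_std, step2.shape[0])
          for j in range(1, step2.shape[1]):
              noise[:, j] = rho * noise[:, j - 1] + eps[:, j]
          return step2 + noise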

  14. Automated aberration compensation in high numerical aperture systems for arbitrary laser modes (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hering, Julian; Waller, Erik H.; von Freymann, Georg

    2017-02-01

    Since a large number of optical systems and devices are based on differently shaped focal intensity distributions (point-spread-functions, PSF), the PSF's quality is crucial for the application's performance. E.g., optical tweezers, optical potentials for trapping of ultracold atoms as well as stimulated-emission-depletion (STED) based microscopy and lithography rely on precisely controlled intensity distributions. However, especially in high numerical aperture (NA) systems, such complex laser modes are easily distorted by aberrations leading to performance losses. Although different approaches addressing phase retrieval algorithms have been recently presented [1-3], fast and automated aberration compensation for a broad variety of complex-shaped PSFs in high NA systems is still missing. Here, we report on a Gerchberg-Saxton-based algorithm (GSA) [4] for automated aberration correction of arbitrary PSFs, especially for high NA systems. Deviations between the desired target intensity distribution and the three-dimensionally (3D) scanned experimental focal intensity distribution are used to calculate a correction phase pattern. The target phase distribution plus the correction pattern are displayed on a phase-only spatial-light-modulator (SLM). Focused by a high NA objective, experimental 3D scans of several intensity distributions allow for characterization of the algorithm's performance: aberrations are reliably identified and compensated within less than 10 iterations. References 1. B. M. Hanser, M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, "Phase-retrieved pupil functions in wide-field fluorescence microscopy," J. of Microscopy 216(1), 32-48 (2004). 2. A. Jesacher, A. Schwaighofer, S. Fürhapter, C. Maurer, S. Bernet, and M. Ritsch-Marte, "Wavefront correction of spatial light modulators using an optical vortex image," Opt. Express 15(9), 5801-5808 (2007). 3. A. Jesacher and M. J. Booth, "Parallel direct laser writing in three dimensions with spatially dependent
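
    For readers unfamiliar with the underlying iteration, a scalar two-plane Gerchberg-Saxton loop is sketched below in Python/NumPy. It only illustrates the basic amplitude-constraint exchange via FFTs; the paper's version works on full 3D focal scans with a high-NA focusing model and derives the SLM correction pattern from the retrieved phase, none of which is reproduced here.

      import numpy as np

      def gerchberg_saxton(measured_intensity, pupil_amplitude, n_iter=20):
          """Scalar two-plane Gerchberg-Saxton phase retrieval (minimal sketch).

          measured_intensity : focal-plane intensity (a single plane only).
          pupil_amplitude    : known field amplitude in the pupil/SLM plane.
          Returns the retrieved pupil phase; its negative can serve as a
          correction pattern on a phase-only SLM.
          """
          target_amp = np.sqrt(measured_intensity)
          phase = np.zeros_like(pupil_amplitude)
          for _ in range(n_iter):
              pupil = pupil_amplitude * np.exp(1j * phase)
              focal = np.fft.fftshift(np.fft.fft2(pupil))         # propagate to focus
              focal = target_amp * np.exp(1j * np.angle(focal))   # impose measured amplitude
              pupil = np.fft.ifft2(np.fft.ifftshift(focal))       # propagate back
              phase = np.angle(pupil)                             # keep phase, reset amplitude
          return phase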

  15. 20 CFR 336.4 - Base year compensation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Base year compensation. 336.4 Section 336.4... DURATION OF NORMAL AND EXTENDED BENEFITS Normal Benefits § 336.4 Base year compensation. (a) Formula. For the purposes of this part, an employee's base year compensation includes any compensation in excess of...

  16. 20 CFR 336.4 - Base year compensation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Base year compensation. 336.4 Section 336.4... DURATION OF NORMAL AND EXTENDED BENEFITS Normal Benefits § 336.4 Base year compensation. (a) Formula. For the purposes of this part, an employee's base year compensation includes any compensation in excess of...

  17. 20 CFR 336.4 - Base year compensation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Base year compensation. 336.4 Section 336.4... DURATION OF NORMAL AND EXTENDED BENEFITS Normal Benefits § 336.4 Base year compensation. (a) Formula. For the purposes of this part, an employee's base year compensation includes any compensation in excess of...

  18. 20 CFR 336.4 - Base year compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Base year compensation. 336.4 Section 336.4... DURATION OF NORMAL AND EXTENDED BENEFITS Normal Benefits § 336.4 Base year compensation. (a) Formula. For the purposes of this part, an employee's base year compensation includes any compensation in excess of...

  19. 7 CFR 930.133 - Compensation rate.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 8 2014-01-01 2014-01-01 false Compensation rate. 930.133 Section 930.133 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (MARKETING AGREEMENTS... Regulations § 930.133 Compensation rate. A compensation rate of $250 per meeting shall be paid to the public...

  20. Lithium compensation for full cell operation

    DOEpatents

    Xiao, Jie; Zheng, Jianming; Chen, Xilin; Lu, Dongping; Liu, Jun; Jiguang, Jiguang

    2016-05-17

    Disclosed herein are embodiments of a lithium-ion battery system comprising an anode, an anode current collector, and a layer of lithium metal in contact with the current collector, but not in contact with the anode. The lithium compensation layer dissolves into the electrolyte to compensate for the loss of lithium ions during usage of the full cell. The specific placement of the lithium compensation layer, such that there is no direct physical contact between the lithium compensation layer and the anode, provides certain advantages.

  1. Human vision-based algorithm to hide defective pixels in LCDs

    NASA Astrophysics Data System (ADS)

    Kimpe, Tom; Coulier, Stefaan; Van Hoey, Gert

    2006-02-01

    Producing displays without pixel defects or repairing defective pixels is technically not possible at this moment. This paper presents a new approach to solving this problem: defects are made invisible to the user by image processing algorithms based on characteristics of the human eye. The performance of the new algorithm has been evaluated using two different methods. First, the theoretical response of the human eye was analyzed for a series of images, both before and after applying the defective pixel compensation algorithm. These results show that it is indeed possible to mask a defective pixel. The second method was a psycho-visual test in which users were asked whether or not a defective pixel could be perceived. The results of these user tests also confirm the value of the new algorithm. Our "defective pixel correction" algorithm can be implemented very efficiently and cost-effectively as a pixel-data processing algorithm inside the display, for instance in an FPGA, a DSP or a microprocessor. The described techniques are valid for both monochrome and color displays, ranging from high-quality medical displays to consumer LCD TV applications.
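
    The basic masking idea can be sketched as follows in Python/NumPy: the luminance that a dead (always-off) pixel should have shown is redistributed over its neighbours so that the low-pass image the eye perceives is approximately preserved. The 3x3 kernel is a purely hypothetical stand-in for the human-eye weighting derived in the paper, and border pixels are not handled.

      import numpy as np

      # Hypothetical neighbour weighting; a stand-in for a kernel derived from
      # a human-eye point-spread-function model.
      EYE_KERNEL = np.array([[1., 2., 1.],
                             [2., 0., 2.],
                             [1., 2., 1.]])
      EYE_KERNEL /= EYE_KERNEL.sum()

      def hide_dead_pixel(frame, i, j, max_val=255):
          """Boost the neighbours of a dead pixel (i, j) so that the locally
          low-pass-filtered luminance stays approximately unchanged.
          Minimal sketch; assumes (i, j) is not on the image border."""
          out = frame.astype(float).copy()
          lost = out[i, j]              # luminance the dead pixel should have shown
          out[i, j] = 0.0               # the defect itself renders black
          out[i - 1:i + 2, j - 1:j + 2] += lost * EYE_KERNEL
          np.clip(out, 0, max_val, out=out)
          return out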

  2. Worker's Compensation: Will College and University Professors Be Compensated for Mental Injuries Caused by Work-Related Stress?

    ERIC Educational Resources Information Center

    Hasty, Keith N.

    1991-01-01

    The extent to which college faculty may recover compensation for debilitating mental illness resulting from stressful work-related activities is discussed. General requirements for worker's compensation claims, compensability of stress-related mental and physical illnesses, applicability of these standards to college faculty, and the current state…

  3. Analysis of nonlocal thermodynamic equilibrium CO 4.7 μm fundamental, isotopic, and hot band emissions measured by the Michelson Interferometer for Passive Atmospheric Sounding on Envisat

    NASA Astrophysics Data System (ADS)

    Funke, B.; López-Puertas, M.; Bermejo-Pantaleón, D.; von Clarmann, T.; Stiller, G. P.; HöPfner, M.; Grabowski, U.; Kaufmann, M.

    2007-06-01

    Nonlocal thermodynamic equilibrium (non-LTE) simulations of the 12C16O(1 → 0) fundamental band, the 12C16O(2 → 1) hot band, and the isotopic 13C16O(1 → 0) band performed with the Generic Radiative Transfer and non-LTE population Algorithm (GRANADA) and the Karlsruhe Optimized and Precise Radiative Transfer Algorithm (KOPRA) have been compared to spectrally resolved 4.7 μm radiances measured by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). The performance of the non-LTE simulation has been assessed in terms of band radiance ratios in order to avoid a compensation of possible non-LTE model errors by retrieval errors in the CO abundances inferred from MIPAS data with the same non-LTE algorithms. The agreement with the measurements is within 5% for the fundamental band and within 10% for the hot band. Simulated 13C16O radiances agree with the measurements within the instrumental noise error. Solar reflectance at the surface or clouds has been identified as an important additional excitation mechanism for the CO(2) state. The study represents a thorough validation of the non-LTE scheme used in the retrieval of CO abundances from MIPAS data.

  4. A Deep Learning Algorithm of Neural Network for the Parameterization of Typhoon-Ocean Feedback in Typhoon Forecast Models

    NASA Astrophysics Data System (ADS)

    Jiang, Guo-Qing; Xu, Jing; Wei, Jun

    2018-04-01

    Two algorithms based on machine learning neural networks are proposed—the shallow learning (S-L) and deep learning (D-L) algorithms—that can potentially be used in atmosphere-only typhoon forecast models to provide flow-dependent typhoon-induced sea surface temperature cooling (SSTC) for improving typhoon predictions. The major challenge of existing SSTC algorithms in forecast models is how to accurately predict SSTC induced by an upcoming typhoon, which requires information not only from historical data but more importantly also from the target typhoon itself. The S-L algorithm consists of a single layer of neurons with mixed atmospheric and oceanic factors. Such a structure is found to be unable to represent correctly the physical typhoon-ocean interaction. It tends to produce an unstable SSTC distribution, for which any perturbations may lead to changes in both SSTC pattern and strength. The D-L algorithm extends the neural network to a 4 × 5 neuron matrix with atmospheric and oceanic factors being separated in different layers of neurons, so that the machine learning can determine the roles of atmospheric and oceanic factors in shaping the SSTC. Therefore, it produces a stable crescent-shaped SSTC distribution, with its large-scale pattern determined mainly by atmospheric factors (e.g., winds) and small-scale features by oceanic factors (e.g., eddies). Sensitivity experiments reveal that the D-L algorithms reduce maximum wind intensity errors by 60-70% for four case study simulations, compared to their atmosphere-only model runs.

  5. Genetic algorithm applied to a Soil-Vegetation-Atmosphere system: Sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk

    2010-05-01

    Numerical models are of precious help for predicting water fluxes in the vadose zone and more specifically in Soil-Vegetation-Atmosphere (SVA) systems. For such simulations, robust models and representative soil hydraulic parameters are required. Calibration of unsaturated hydraulic properties is known to be a difficult optimization problem due to the high non-linearity of the water flow equations. Therefore, robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms and specifically genetic algorithms (GAs) are very well suited for those complex parameter optimization problems. Additionally, GAs offer the opportunity to assess the confidence in the hydraulic parameter estimations, because of the large number of model realizations. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the Campine region in the north of Belgium. Throughfall and other meteorological data and water contents at different soil depths have been recorded during one year at a daily time step in two lysimeters. The water table level, which varies between 95 and 170 cm, has been recorded at 0.5-hour intervals. The leaf area index was measured as well at selected time moments during the year in order to evaluate the energy which reaches the soil and to deduce the potential evaporation. Water contents at several depths have been recorded. Based on the profile description, five soil layers have been distinguished in the podzol. Two models have been used for simulating water fluxes: (i) a mechanistic model, the HYDRUS-1D model, which solves the Richards' equation, and (ii) a compartmental model, which treats the soil profile as a bucket into which water flows until its maximum capacity is reached. A global sensitivity analysis (Morris' one-at-a-time sensitivity analysis) was run prior to the calibration, in order to check the sensitivity in the chosen parameter search space. For

  6. Wind profiling based on the optical beam intensity statistics in a turbulent atmosphere.

    PubMed

    Banakh, Victor A; Marakasov, Dimitrii A

    2007-10-01

    Reconstruction of the wind profile from the statistics of intensity fluctuations of an optical beam propagating in a turbulent atmosphere is considered. The equations for the spatiotemporal correlation function and the spectrum of weak intensity fluctuations of a Gaussian beam are obtained. The algorithms of wind profile retrieval from the spatiotemporal intensity spectrum are described and the results of end-to-end computer experiments on wind profiling based on the developed algorithms are presented. It is shown that the developed algorithms allow retrieval of the wind profile from the turbulent optical beam intensity fluctuations with acceptable accuracy in many practically feasible laser measurements set up in the atmosphere.

  7. Water Quality Monitoring for Lake Constance with a Physically Based Algorithm for MERIS Data.

    PubMed

    Odermatt, Daniel; Heege, Thomas; Nieke, Jens; Kneubühler, Mathias; Itten, Klaus

    2008-08-05

    A physically based algorithm is used for automatic processing of MERIS level 1B full resolution data. The algorithm is originally used with input variables for optimization with different sensors (i.e. channel recalibration and weighting), aquatic regions (i.e. specific inherent optical properties) or atmospheric conditions (i.e. aerosol models). For operational use, however, a lake-specific parameterization is required, representing an approximation of the spatio-temporal variation in atmospheric and hydrooptic conditions, and accounting for sensor properties. The algorithm performs atmospheric correction with a LUT for at-sensor radiance, and a downhill simplex inversion of chl-a, sm and y from subsurface irradiance reflectance. These outputs are enhanced by a selective filter, which makes use of the retrieval residuals. Regular chl-a sampling measurements by the Lake's protection authority coinciding with MERIS acquisitions were used for parameterization, training and validation.

  8. Sun compensation by bees.

    PubMed

    Gould, J L

    1980-02-01

    In both their navigation and dance communication, bees are able to compensate for the sun's movement. When foragers are prevented from seeing the sun for 2 hours, they compensate by extrapolation, using the sun's rate of movement when last observed. These and other data suggest a time-averaging processing strategy in honey bee orientation.

  9. Implications for high speed research: The relationship between sonic boom signature distortion and atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Sparrow, Victor W.; Gionfriddo, Thomas A.

    1994-01-01

    In this study there were two primary tasks. The first was to develop an algorithm for quantifying the distortion in a sonic boom. Such an algorithm should be somewhat automatic, with minimal human intervention. Once the algorithm was developed, it was used to test the hypothesis that the cause of a sonic boom distortion was due to atmospheric turbulence. This hypothesis testing was the second task. Using readily available sonic boom data, we statistically tested whether there was a correlation between the sonic boom distortion and the distance a boom traveled through atmospheric turbulence.

  10. Testing trivializing maps in the Hybrid Monte Carlo algorithm

    PubMed Central

    Engel, Georg P.; Schaefer, Stefan

    2011-01-01

    We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^(N-1) model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733

  11. The Goddard Profiling Algorithm (GPROF): Description and Current Applications

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea

    2004-01-01

    Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
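
    As a toy illustration of the Bayesian estimation approach mentioned above, the Python/NumPy sketch below computes a posterior-mean profile as a weighted average over a retrieval database, with Gaussian observation errors. The database arrays, the diagonal error model, and the variable names are assumptions; GPROF's actual database construction and weighting are more elaborate.

      import numpy as np

      def bayesian_retrieval(tb_obs, tb_db, profiles_db, obs_error_std):
          """Database-weighted Bayesian retrieval (sketch).

          tb_obs       : observed brightness temperatures, shape (n_channels,).
          tb_db        : simulated brightness temperatures per database entry,
                         shape (n_entries, n_channels).
          profiles_db  : geophysical profiles per entry, shape (n_entries, n_levels).
          obs_error_std: per-channel observation + modelling error, shape (n_channels,).
          Returns the posterior-mean profile.
          """
          resid = (tb_db - tb_obs) / obs_error_std
          log_w = -0.5 * np.sum(resid**2, axis=1)
          log_w -= log_w.max()          # numerical stabilisation before exponentiation
          w = np.exp(log_w)
          w /= w.sum()
          return w @ profiles_db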

  12. Near-Continuous Profiling of Temperature, Moisture, and Atmospheric Stability Using the Atmospheric Emitted Radiance Interferometer (AERI).

    NASA Astrophysics Data System (ADS)

    Feltz, W. F.; Smith, W. L.; Howell, H. B.; Knuteson, R. O.; Woolf, H.; Revercomb, H. E.

    2003-05-01

    The Department of Energy Atmospheric Radiation Measurement Program (ARM) has funded the development and installation of five ground-based atmospheric emitted radiance interferometer (AERI) systems at the Southern Great Plains (SGP) site. The purpose of this paper is to provide an overview of the AERI instrument, improvement of the AERI temperature and moisture retrieval technique, new profiling utility, and validation of high-temporal-resolution AERI-derived stability indices important for convective nowcasting. AERI systems have been built at the University of Wisconsin-Madison, Madison, Wisconsin, and deployed in the Oklahoma-Kansas area collocated with National Oceanic and Atmospheric Administration 404-MHz wind profilers at Lamont, Vici, Purcell, and Morris, Oklahoma, and Hillsboro, Kansas. The AERI systems produce absolutely calibrated atmospheric infrared emitted radiances at one-wavenumber resolution from 3 to 20 μm at less than 10-min temporal resolution. The instruments are robust, are automated in the field, and are monitored via the Internet in near-real time. The infrared radiances measured by the AERI systems contain meteorological information about the vertical structure of temperature and water vapor in the planetary boundary layer (PBL; 0-3 km). A mature temperature and water vapor retrieval algorithm has been developed over a 10-yr period that provides vertical profiles at less than 10-min temporal resolution to 3 km in the PBL. A statistical retrieval is combined with the hourly Geostationary Operational Environmental Satellite (GOES) sounder water vapor or Rapid Update Cycle, version 2, numerical weather prediction (NWP) model profiles to provide a nominal hybrid first guess of temperature and moisture to the AERI physical retrieval algorithm. The hourly satellite or NWP data provide a best estimate of the atmospheric state in the upper PBL; the AERI radiances provide the mesoscale temperature and moisture profile correction in the PBL to the

  13. Log amplifier with pole-zero compensation

    DOEpatents

    Brookshier, W.

    1985-02-08

    A logarithmic amplifier circuit provides pole-zero compensation for improved stability and response time over 6-8 decades of input signal frequency. The amplifier circuit includes a first operational amplifier with a first feedback loop which includes a second, inverting operational amplifier in a second feedback loop. The compensated output signal is provided by the second operational amplifier with the log elements, i.e., resistors, and the compensating capacitors in each of the feedback loops having equal values so that each break point is offset by a compensating break point or zero.

  14. Log amplifier with pole-zero compensation

    DOEpatents

    Brookshier, William

    1987-01-01

    A logarithmic amplifier circuit provides pole-zero compensation for improved stability and response time over 6-8 decades of input signal frequency. The amplifier circuit includes a first operational amplifier with a first feedback loop which includes a second, inverting operational amplifier in a second feedback loop. The compensated output signal is provided by the second operational amplifier with the log elements, i.e., resistors, and the compensating capacitors in each of the feedback loops having equal values so that each break point or pole is offset by a compensating break point or zero.
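
    The pole-zero cancellation idea can be written compactly as a transfer-function relation; the LaTeX sketch below is an illustrative rendering under assumed element names (R_l, C_l for a log resistor and its associated capacitance, R_c, C_c for the compensating network), not notation taken from the patents.

      % Illustrative loop-gain factor for one log segment: the pole at
      % 1/(R_l C_l) is offset by a compensating zero at 1/(R_c C_c).
      \[
        L_k(s) \;=\; K_k\,\frac{1 + s\,R_{c,k} C_{c,k}}{1 + s\,R_{l,k} C_{l,k}},
        \qquad
        R_{c,k} C_{c,k} \approx R_{l,k} C_{l,k}
        \;\Rightarrow\;
        L_k(s) \approx K_k .
      \]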

  15. Does CEO compensation impact patient satisfaction?

    PubMed

    Akingbola, Kunle; van den Berg, Herman A

    2015-01-01

    This study examines the relationship between CEO compensation and patient satisfaction in Ontario, Canada. The purpose of this paper is to determine what impact hospital CEO compensation has on hospital patient satisfaction. The analyses in this study were based on data of 261 CEO-hospital-year observations in a sample of 103 nonprofit hospitals. A number of linear regressions were conducted, with patient satisfaction as the dependent variable and CEO compensation as the independent variable of interest. Controlling variables included hospital size, type of hospital, and frequency of adverse clinical outcomes. CEO compensation does not significantly influence hospital patient satisfaction. Both patient satisfaction and CEO compensation appear to be driven primarily by hospital size. Patient satisfaction decreases, while CEO compensation increases, with the number of acute care beds in a hospital. In addition, CEO compensation does not even appear to moderate the influence of hospital size on patient satisfaction. There are several limitations to this study. First, observations of CEO-hospital-years in which annual nominal CEO compensation was below $100,000 were excluded, as they were not publicly available. Second, this research was limited to a three-year range. Third, this study related the compensation of individual CEOs to a measure of performance based on a multitude of patient satisfaction surveys. Finally, this research is restricted to not-for-profit hospitals in Ontario, Canada. The findings seem to suggest that hospital directors seeking to improve patient satisfaction may find their efforts frustrated if they focus exclusively on the hospital CEO. The findings highlight the need for further research on how CEOs may, through leading and supporting those hospital clinicians and staff that interact more closely with patients, indirectly enhance patient satisfaction. To the best of the authors' knowledge, no research has examined the relationship between

  16. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    PubMed Central

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the parameter dynamic adaptive adjusting process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
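
    For orientation, a basic cuckoo search with Lévy flights and a simple uniform-crossover step is sketched below in Python/NumPy. The dynamic adaptation of the step size and abandonment probability via Rechenberg's 1/5 rule, and the refractivity forward model itself, are omitted; the cost function, bounds, and parameter defaults are placeholders.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(dim, beta=1.5):
          """Lévy-distributed step via Mantegna's algorithm."""
          sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                   (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = np.random.normal(0, sigma, dim)
          v = np.random.normal(0, 1, dim)
          return u / np.abs(v) ** (1 / beta)

      def cuckoo_search(cost, bounds, n_nests=25, n_iter=200, pa=0.25, alpha=0.01):
          """Basic cuckoo search with a uniform-crossover step (sketch)."""
          lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
          dim = lo.size
          nests = lo + (hi - lo) * np.random.rand(n_nests, dim)
          fit = np.array([cost(x) for x in nests])
          for _ in range(n_iter):
              best = nests[np.argmin(fit)]
              for i in range(n_nests):
                  # Lévy-flight move biased towards the current best nest.
                  cand = nests[i] + alpha * levy_step(dim) * (nests[i] - best)
                  # GA-style uniform crossover with a randomly chosen nest.
                  mate = nests[np.random.randint(n_nests)]
                  mask = np.random.rand(dim) < 0.5
                  cand = np.clip(np.where(mask, cand, mate), lo, hi)
                  f = cost(cand)
                  if f < fit[i]:
                      nests[i], fit[i] = cand, f
              # Abandon a fraction pa of the worst nests and rebuild them randomly.
              n_bad = max(1, int(pa * n_nests))
              worst = np.argsort(fit)[-n_bad:]
              nests[worst] = lo + (hi - lo) * np.random.rand(n_bad, dim)
              fit[worst] = [cost(x) for x in nests[worst]]
          k = np.argmin(fit)
          return nests[k], fit[k]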

  17. Poster — Thur Eve — 58: Dosimetric validation of electronic compensation for radiotherapy treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gräfe, James; Khan, Rao; Meyer, Tyler

    2014-08-15

    In this study we investigate the deliverability of dosimetric plans generated by the irregular surface compensator (ISCOMP) algorithm for 6 MV photon beams in Eclipse (Varian Medical System, CA). In contrast to physical tissue compensation, the electronic ISCOMP uses MLCs to dynamically modulate the fluence of a photon beam in order to deliver a uniform dose at a user defined plane in tissue. This method can be used to shield critical organs that are located within the treatment portal or improve dose uniformity by tissue compensation in inhomogeneous regions. Three site specific plans and a set of test fields were evaluated using the γ-metric of 3%/3 mm on Varian EPID, MapCHECK, and Gafchromic EBT3 film with a clinical tolerance of >95% passing rates. Point dose measurements with an NRCC calibrated ionization chamber were also performed to verify the absolute dose delivered. In all cases the MapCHECK measured plans met the gamma criteria. The mean passing rate for the six EBT3 film field measurements was 96.2%, with only two fields at 93.4 and 94.0% passing rates. The EPID plans passed for fields encompassing the central ∼10 × 10 cm² region of the detector; however for larger fields and greater off-axis distances discrepancies were observed and attributed to the profile corrections and modeling of backscatter in the portal dose calculation. The magnitude of the average percentage difference for 21 ion chamber point dose measurements and 17 different fields was 1.4 ± 0.9%, and the maximum percentage difference was −3.3%. These measurements qualify the algorithm for routine clinical use subject to the same pre-treatment patient specific QA as IMRT.

  18. Towards the Mitigation of Correlation Effects in the Analysis of Hyperspectral Imagery with Extensions to Robust Parameter Design

    DTIC Science & Technology

    2012-08-01

    [Fragments of the report's table of contents: Normalized Difference Vegetation Index (NDVI); Methodology; Atmospheric Compensation.] …anomaly detection algorithms are contrasted and implemented, and explains the use of the Normalized Difference Vegetation Index (NDVI) in post

  19. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays.

    PubMed

    Orżanowski, Tomasz

    2016-01-01

    This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm which is easy to implement in hardware. The proposed NUC algorithm is based on the linear correction scheme with a useful method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction consists in using the pixel response change, determined at the actual operating conditions relative to the reference ones by means of a shutter, to compensate for the temporal drift of the pixel offsets. Moreover, it also permits removal of any optics shading effect in the output image. To show the efficiency of the proposed NUC algorithm, some test results for a microbolometer IRFPA are presented.
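
    A minimal Python/NumPy rendering of the shutter-based offset update is given below. It assumes the usual linear (gain/offset) correction model; the function and variable names, and the exact form in which the shutter response change enters the update, are illustrative rather than copied from the paper.

      import numpy as np

      def update_offsets(offset_ref, shutter_ref, shutter_now):
          """Refresh per-pixel offsets from a closed-shutter (flat-field) frame.

          offset_ref  : offsets from the reference two-point calibration.
          shutter_ref : pixel responses to the closed shutter at calibration time.
          shutter_now : pixel responses to the closed shutter at current conditions.
          The per-pixel offset drift is taken as the response change on the
          uniform shutter scene (simplified form of the update idea).
          """
          return offset_ref + (shutter_now - shutter_ref)

      def correct_frame(raw, gain, offset):
          """Standard linear (two-point) nonuniformity correction of one raw frame."""
          return gain * (np.asarray(raw, dtype=float) - offset)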

  20. Atmospheric turbulence profiling with unknown power spectral density

    NASA Astrophysics Data System (ADS)

    Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny

    2018-04-01

    Adaptive optics (AO) is a technology in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows the retrieval of information about the atmosphere from telescope data is so-called SLODAR, where the atmospheric turbulence profile is estimated based on correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the joint estimation problem of the turbulence profile above ground simultaneously with the unknown power spectral density at the ground is ill-posed and propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Also, our methods can accurately locate local perturbations in non-Kolmogorov power spectral densities.
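
    To indicate what the correlation-based SLODAR geometry looks like in code, a toy Python sketch follows: the slope maps of two guide stars are cross-correlated, and a correlation peak displaced by d sub-apertures along the separation axis corresponds to a layer at an altitude of roughly d times the sub-aperture pitch divided by the angular star separation. The normalisation and the fitting of theoretical response functions used in real SLODAR (and the paper's non-Kolmogorov extension) are not included.

      import numpy as np
      from scipy.signal import correlate2d

      def slodar_profile(slopes_a, slopes_b, subap_pitch, star_sep_rad):
          """Toy SLODAR-style layer strengths versus altitude from two
          Shack-Hartmann slope maps (one per guide star). Sketch only."""
          xcorr = correlate2d(slopes_a - slopes_a.mean(),
                              slopes_b - slopes_b.mean(), mode='same')
          n = slopes_a.shape[1]
          strength = xcorr[xcorr.shape[0] // 2, :]        # cut along the separation axis
          offsets = np.arange(n) - n // 2                 # displacement in sub-apertures
          heights = offsets * subap_pitch / star_sep_rad  # corresponding altitudes (m)
          return heights, strength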

  1. High precision tracking of a piezoelectric nano-manipulator with parameterized hysteresis compensation

    NASA Astrophysics Data System (ADS)

    Yan, Peng; Zhang, Yangming

    2018-06-01

    High performance scanning of nano-manipulators is widely deployed in various precision engineering applications such as SPM (scanning probe microscope), where trajectory tracking of sophisticated reference signals is a challenging control problem. The situation is further complicated when the rate-dependent hysteresis of the piezoelectric actuators and the stress-stiffening-induced nonlinear stiffness of the flexure mechanism are considered. In this paper, a novel control framework is proposed to achieve high precision tracking of a piezoelectric nano-manipulator subjected to hysteresis and stiffness nonlinearities. An adaptive parameterized rate-dependent Prandtl-Ishlinskii model is constructed and the corresponding adaptive inverse-model-based online compensation is derived. Meanwhile, a robust adaptive control architecture is further introduced to improve the tracking accuracy and robustness of the compensated system, where the parametric uncertainties of the nonlinear dynamics can be well eliminated by on-line estimation. Comparative experimental studies of the proposed control algorithm are conducted on a PZT-actuated nano-manipulating stage, where hysteresis modeling accuracy and excellent tracking performance are demonstrated in real-time implementations, with significant improvement over existing results.
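
    The classical building block behind such compensators is the Prandtl-Ishlinskii superposition of play operators, sketched below in Python/NumPy. The rate-dependent, adaptively parameterized version used in the paper (and its inverse, which is what is actually applied as a feedforward compensator) extends this basic model; the thresholds and weights here are free parameters that would be identified from data.

      import numpy as np

      def play_operator(u, r, z0=0.0):
          """Discrete play (backlash) operator with threshold r applied to a signal u."""
          z = np.empty(len(u), dtype=float)
          prev = z0
          for k, uk in enumerate(u):
              prev = max(uk - r, min(uk + r, prev))
              z[k] = prev
          return z

      def prandtl_ishlinskii(u, thresholds, weights):
          """Classical (rate-independent) Prandtl-Ishlinskii hysteresis model:
          a weighted superposition of play operators with different thresholds."""
          return sum(w * play_operator(u, r) for w, r in zip(weights, thresholds))

    In a compensation scheme of this kind, an approximate inverse of the identified model is applied to the desired trajectory before it is sent to the actuator, so that the cascade of inverse model and hysteretic actuator is close to the identity map.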

  2. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVille, R.E.L., E-mail: rdeville@illinois.edu; Riemer, N., E-mail: nriemer@illinois.edu; West, M., E-mail: mwest@illinois.edu

    2011-09-20

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  3. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    NASA Astrophysics Data System (ADS)

    DeVille, R. E. L.; Riemer, N.; West, M.

    2011-09-01

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently-developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number and mass-based quantities of relevance for atmospheric sciences applications.

  4. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 4; Determination of surface and atmosphere fluxes and temporally and spatially averaged products (subsystems 5-12); Determination of surface and atmosphere fluxes and temporally and spatially averaged products

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps of TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.

  5. Intensity compensation for on-line detection of defects on fruit

    NASA Astrophysics Data System (ADS)

    Wen, Zhiqing; Tao, Yang

    1997-10-01

    A machine-vision sorting system was developed that utilizes the difference in light reflectance of fruit surfaces to distinguish defective from good apples. To accommodate the spherical reflectance characteristics of fruit with curved surfaces such as apples, a spherical transform algorithm was developed that converts the original image to a non-radiant image without losing defective segments on the fruit. To prevent high-quality dark-colored fruit from being classified into the defective class and to increase the defect detection rate for light-colored fruit, an intensity compensation method using maximum propagation was used. Experimental results demonstrated the effectiveness of the method based on maximum propagation and spherical transform for on-line detection of defects on apples.
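
    For orientation only, the sketch below shows a generic flat-field style normalization that removes the slow shading caused by a curved fruit surface while keeping small dark defect segments; it is a simple stand-in for the same goal, not the paper's spherical transform or maximum-propagation method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def compensate_curvature(img, sigma=25):
    """Divide the image by a heavily smoothed copy of itself so that the
    slow intensity fall-off toward the fruit's edge (curved-surface
    shading) is removed while small, dark defect segments are preserved."""
    shading = gaussian_filter(np.asarray(img, dtype=float), sigma)
    shading = np.maximum(shading, 1e-6)   # avoid division by zero
    flat = img / shading
    return flat / flat.max()              # rescale to [0, 1]
```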

  6. Refinement of the CALIOP cloud mask algorithm

    NASA Astrophysics Data System (ADS)

    Katagiri, Shuichiro; Sato, Kaori; Ohta, Kohei; Okamoto, Hajime

    2018-04-01

    A modified cloud mask algorithm was applied to the CALIOP data to improve its ability to detect clouds in the lower atmosphere. In this algorithm, we also adopt full-attenuation discrimination and residual-noise estimation using the data obtained at an altitude of 40 km to avoid contamination by stratospheric aerosols. The new cloud mask shows an increase in the lower cloud fraction. The results were also compared with data from a PML ground observation.

  7. Vertical vibration analysis for elevator compensating sheave

    NASA Astrophysics Data System (ADS)

    Watanabe, Seiji; Okawa, Takeya; Nakazawa, Daisuke; Fukui, Daiki

    2013-07-01

    Most elevators installed in tall buildings include compensating ropes to balance the rope tension between the car and the counterweight. The compensating ropes are tensioned by the compensating sheave, which is installed at the bottom of the elevator shaft. The compensating sheave is suspended only by the compensating ropes; therefore, it can move vertically while the car is traveling. This paper presents an elevator dynamic model for evaluating the vertical motion of the compensating sheave. In particular, behavior in emergency cases, such as brake activation and buffer strike, was investigated to evaluate the maximum upward motion of the sheave. The simulation results were validated by experiments, and the factor with the greatest influence on the sheave's vertical motion was identified.

  8. A compensation method of lever arm effect for tri-axis hybrid inertial navigation system based on fiber optic gyro

    NASA Astrophysics Data System (ADS)

    Liu, Zengjun; Wang, Lei; Li, Kui; Gao, Jiaxin

    2017-05-01

    The hybrid inertial navigation system (HINS) is a new kind of inertial navigation system (INS) that combines the advantages of platform INS, strap-down INS and rotational INS. HINS has a physical platform to isolate angular motion, as a platform INS does; it also uses strap-down attitude algorithms and applies the rotation modulation technique. A tri-axis HINS has three gimbals to isolate angular motion on a dynamic base, so the system can reduce the effects of angular motion and improve positioning precision. However, angular motion affects the compensation of some error parameters, especially the lever arm effect. The lever arm effect, caused by position offsets between the accelerometers and the rotation center, cannot be ignored because of the rapid rotation of the inertial measurement unit (IMU), and it produces fluctuations and step changes in the HINS velocity. This paper first analyzes the influence of angular motion on lever arm effect compensation and then proposes a compensation method for the lever arm effect based on the photoelectric encoders on a dynamic base. Turntable experiments show that, after compensation, the fluctuations and step changes in the velocity curve disappear.
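
    The kinematic term being compensated is the standard lever-arm acceleration, omega_dot x r + omega x (omega x r); the sketch below removes it from a measured specific force. The encoder-driven estimation of omega and r described in the paper is not reproduced here, and the numerical values are only illustrative.

```python
import numpy as np

def lever_arm_correction(f_imu, omega, omega_dot, r):
    """Remove the lever-arm acceleration from an accelerometer triad
    mounted a distance r from the rotation center.

    f_imu     : measured specific force at the accelerometer [m/s^2]
    omega     : angular rate of the rotating IMU frame [rad/s]
    omega_dot : angular acceleration [rad/s^2]
    r         : lever-arm vector from rotation center to accelerometer [m]

    The correction is the standard kinematic term
        a_lever = omega_dot x r + omega x (omega x r),
    which grows quickly with rotation rate and therefore matters for a
    rapidly rotating IMU."""
    a_lever = np.cross(omega_dot, r) + np.cross(omega, np.cross(omega, r))
    return f_imu - a_lever

# illustrative values: 10 deg/s rotation about z, 5 cm lever arm along x
omega = np.array([0.0, 0.0, np.deg2rad(10.0)])
r = np.array([0.05, 0.0, 0.0])
print(lever_arm_correction(np.array([0.0, 0.0, 9.81]), omega, np.zeros(3), r))
```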

  9. 75 FR 5499 - Claims for Compensation; Death Gratuity Under the Federal Employees' Compensation Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... for Compensation; Death Gratuity Under the Federal Employees' Compensation Act AGENCY: Office of... Labor (DOL) published an interim final rule in order to administer the death gratuity created by section... provides a death gratuity payment to eligible survivors of federal employees and non-appropriated fund...

  10. Flies compensate for unilateral wing damage through modular adjustments of wing and body kinematics

    PubMed Central

    Iwasaki, Nicole A.; Elzinga, Michael J.; Melis, Johan M.; Dickinson, Michael H.

    2017-01-01

    Using high-speed videography, we investigated how fruit flies compensate for unilateral wing damage, in which loss of area on one wing compromises both weight support and roll torque equilibrium. Our results show that flies control for unilateral damage by rolling their body towards the damaged wing and by adjusting the kinematics of both the intact and damaged wings. To compensate for the reduction in vertical lift force due to damage, flies elevate wingbeat frequency. Because this rise in frequency increases the flapping velocity of both wings, it has the undesired consequence of further increasing roll torque. To compensate for this effect, flies increase the stroke amplitude and advance the timing of pronation and supination of the damaged wing, while making the opposite adjustments on the intact wing. The resulting increase in force on the damaged wing and decrease in force on the intact wing function to maintain zero net roll torque. However, the bilaterally asymmetrical pattern of wing motion generates a finite lateral force, which flies balance by maintaining a constant body roll angle. Based on these results and additional experiments using a dynamically scaled robotic fly, we propose a simple bioinspired control algorithm for asymmetric wing damage. PMID:28163885

  11. Multichannel loudness compensation method based on segmented sound pressure level for digital hearing aids

    NASA Astrophysics Data System (ADS)

    Liang, Ruiyu; Xi, Ji; Bao, Yongqiang

    2017-07-01

    To improve the performance of gain compensation based on the three-segment sound pressure level (SPL) in hearing aids, an improved multichannel loudness compensation method based on an eight-segment SPL was proposed. First, a uniform cosine modulated filter bank was designed. Adjacent channels with low or gradual slopes were then adaptively merged to obtain the corresponding non-uniform cosine modulated filter bank according to the audiogram of the hearing-impaired person. Second, the input speech was decomposed into sub-band signals and the SPL of every sub-band signal was computed. Meanwhile, the audible SPL range from 0 dB SPL to 120 dB SPL was equally divided into eight segments. Based on these segments, a refined prescription formula was designed to compute a more detailed compensation gain from the audiogram and the computed SPL. Finally, the enhanced signal was synthesized. Objective experiments showed that the signals decomposed by the cosine modulated filter bank have little distortion, and that the hearing aid speech perception index (HASPI) and hearing aid speech quality index (HASQI) increased by 0.083 and 0.082 on average, respectively. Subjective experiments showed that the proposed algorithm can effectively improve speech recognition for six hearing-impaired listeners.
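
    A toy sketch of the segmented-SPL idea: the sub-band SPL is computed, assigned to one of eight 15-dB segments spanning 0-120 dB SPL, and a per-segment gain is applied. The gain table and breakpoints below are placeholders; a real fitting would derive them from the listener's audiogram.

```python
import numpy as np

def band_spl(x, ref=2e-5):
    """Sound pressure level of one sub-band signal (RMS re 20 uPa)."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2)) + 1e-12
    return 20 * np.log10(rms / ref)

def segment_gain(spl, gains):
    """Pick a gain from an 8-segment table covering 0-120 dB SPL in 15-dB
    steps (placeholder breakpoints and gains, not a real prescription)."""
    edges = np.arange(0, 121, 15)                           # 0, 15, ..., 120 dB
    idx = np.clip(np.searchsorted(edges, spl, side="right") - 1, 0, 7)
    return gains[idx]

def compensate_band(x, gains):
    """Apply the segment gain (in dB) to one sub-band signal."""
    g_db = segment_gain(band_spl(x), gains)
    return np.asarray(x) * 10 ** (g_db / 20.0)

# example: 35 dB of gain for very quiet sounds tapering to 0 dB for loud ones
gains = np.array([35, 30, 25, 20, 15, 10, 5, 0], dtype=float)
tone = 0.02 * np.sin(2 * np.pi * 1000 * np.arange(0, 0.1, 1 / 16000))
louder = compensate_band(tone, gains)
```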

  12. 7 CFR 930.133 - Compensation rate.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 8 2013-01-01 2013-01-01 false Compensation rate. 930.133 Section 930.133 Agriculture... Regulations § 930.133 Compensation rate. A compensation rate of $250 per meeting shall be paid to the public... meeting rate. For example, if a Board meeting is convened and lasts one or two days or only four hours...

  13. 7 CFR 930.133 - Compensation rate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 8 2011-01-01 2011-01-01 false Compensation rate. 930.133 Section 930.133 Agriculture... Regulations § 930.133 Compensation rate. A compensation rate of $250 per meeting shall be paid to the public... meeting rate. For example, if a Board meeting is convened and lasts one or two days or only four hours...

  14. 7 CFR 930.133 - Compensation rate.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 8 2012-01-01 2012-01-01 false Compensation rate. 930.133 Section 930.133 Agriculture... Regulations § 930.133 Compensation rate. A compensation rate of $250 per meeting shall be paid to the public... meeting rate. For example, if a Board meeting is convened and lasts one or two days or only four hours...

  15. 23 CFR 751.15 - Just compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 23 Highways 1 2010-04-01 2010-04-01 false Just compensation. 751.15 Section 751.15 Highways... AND ACQUISITION § 751.15 Just compensation. (a) Just compensation shall be paid the owner for the... nonconforming junkyard as provided in § 751.11 must pertain at the time of the taking or removal in order to...

  16. 23 CFR 751.15 - Just compensation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... nonconforming junkyard as provided in § 751.11 must pertain at the time of the taking or removal in order to... 23 Highways 1 2011-04-01 2011-04-01 false Just compensation. 751.15 Section 751.15 Highways... AND ACQUISITION § 751.15 Just compensation. (a) Just compensation shall be paid the owner for the...

  17. A frequency dependent preconditioned wavelet method for atmospheric tomography

    NASA Astrophysics Data System (ADS)

    Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny

    2013-12-01

    Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation of telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes computationally very demanding, and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on a wavelet parametrization of the turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi-conjugate adaptive optics (MCAO) system simulated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory, we demonstrate the robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
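
    A bare-bones preconditioned conjugate gradient loop, to make explicit where a frequency-dependent preconditioner enters: `A` and `M_inv` are placeholder callables standing in for the tomography operator and the wavelet-domain preconditioner, and none of this reproduces the paper's actual operators.

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-6, max_iter=50):
    """Preconditioned conjugate gradient for A x = b.

    A     : callable returning A @ x (e.g. the tomography operator)
    M_inv : callable applying the preconditioner, e.g. a diagonal scaling
            of wavelet coefficients by frequency band."""
    x = np.zeros_like(b)
    r = b - A(x)
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# tiny example: diagonal system with a Jacobi (diagonal) preconditioner
d = np.array([4.0, 2.0, 1.0, 0.5])
b = np.ones(4)
print(preconditioned_cg(lambda v: d * v, b, lambda v: v / d))   # -> b / d
```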

  18. Analysis Of AVIRIS Data From LEO-15 Using Tafkaa Atmospheric Correction

    NASA Technical Reports Server (NTRS)

    Montes, Marcos J.; Gao, Bo-Cai; Davis, Curtiss O.; Moline, Mark

    2004-01-01

    We previously developed an algorithm named Tafkaa for atmospheric correction of remote sensing ocean color data from aircraft and satellite platforms. The algorithm allows quick atmospheric correction of hyperspectral data using lookup tables generated with a modified version of Ahmad & Fraser's vector radiative transfer code. During the past few years we have extended the capabilities of the code. Current modifications include the ability to account for within-scene variation in solar geometry (important for very long scenes) and view geometry (important for wide fields of view). Additionally, versions of Tafkaa have been made for a variety of multi-spectral sensors, including SeaWiFS and MODIS. In this proceeding we present some initial results of atmospheric correction of AVIRIS data from the 2001 July Hyperspectral Coastal Ocean Dynamics Experiment (HyCODE) at LEO-15.

  19. Implementation of a rapid correction algorithm for adaptive optics using a plenoptic sensor

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2016-09-01

    Adaptive optics relies on the accuracy and speed of a wavefront sensor in order to provide quick corrections to distortions in the optical system. In the weaker atmospheric turbulence often encountered in astronomical applications, a traditional Shack-Hartmann sensor has proved to be very effective. However, in the stronger atmospheric turbulence often encountered near the surface of the Earth, turbulence no longer solely causes small tilts in the wavefront. Instead, lasers passing through strong or "deep" atmospheric turbulence encounter beam breakup, which results in interference effects and discontinuities in the incoming wavefront. In these situations, a Shack-Hartmann sensor can no longer effectively determine the shape of the incoming wavefront. We propose a wavefront reconstruction and correction algorithm based on the plenoptic sensor. The plenoptic sensor's design allows it to match and exceed the wavefront sensing capabilities of a Shack-Hartmann sensor for our application. Novel wavefront reconstruction algorithms can take advantage of the plenoptic sensor to provide the rapid wavefront reconstruction needed for real-time turbulence correction. To test the integrity of the plenoptic sensor and its reconstruction algorithms, we use artificially generated turbulence in a lab-scale environment to simulate the structure and speed of outdoor atmospheric turbulence. By analyzing the performance of our system with and without the closed-loop plenoptic sensor adaptive optics system, we show that the plenoptic sensor is effective in mitigating lab-generated atmospheric turbulence in real time.

  20. The dependence of the anisoplanatic Strehl of a compensated beam on the beacon distribution

    NASA Astrophysics Data System (ADS)

    Stroud, P.

    1992-02-01

    There are several applications for lasers where the effect of atmospheric turbulence is strong enough to require wavefront compensation, and the compensation can be made by an adaptive optics (AO) system which processes light returned from the target itself. The distribution of the target return light produces limitations to the performance of the AO system. The primary intent of this documentation is to present the new results of an analysis of the anisoplanatic effects arising from target return beacon geometries. It will also lay out the assumptions and steps in the analysis, so that the results can be validated or extended. The intent is to provide a self-consistent notation, simple physical interpretations of the mathematical formulations, and enough detail to reduce the investment of time required to become acquainted or reacquainted with the physics of laser propagation through turbulence, at a level needed to analyze anisoplanatic effects. A general formulation has been developed to calculate the anisoplanatic Strehl of a compensated beam for any beacon distribution and turbulence profile. Numerical calculations are also shown for several beacon geometries and turbulence profiles. The key result is that the spread of the beacon distribution has a much less deleterious effect than does the offset of the beacon centroid from the aimpoint.

  1. Retrieval with Infrared Atmospheric Sounding Interferometer and Validation during JAIVEx

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Taylor, Jonathan P.; Schluessel, Peter; Strow, L. Larrabee; Mango, Stephen A.

    2008-01-01

    A state-of-the-art IR-only retrieval algorithm has been developed for IASI, AIRS, and NAST-I, consisting of an all-season, global EOF physical regression followed by a 1-D Var physical iterative retrieval. The benefits of this retrieval are atmospheric structure produced at single-FOV horizontal resolution (approx. 15 km for IASI and AIRS), accurate profiles above cloud (at least) or down to the surface, surface parameters, and/or cloud microphysical parameters. An initial case study and validation indicate that surface, cloud, and atmospheric structure (including the TBL) are well captured by IASI and AIRS measurements. Coincident dropsondes during the IASI and AIRS overpasses are used to validate atmospheric conditions, and accurate retrievals are obtained with the expected vertical resolution. JAIVEx has provided the data needed to validate the retrieval algorithm and its products, which allows us to assess the instrument's ability and performance. Retrievals with global coverage are under investigation for a detailed retrieval assessment. It is greatly desired that these products be used for testing their impact on atmospheric data assimilation and/or numerical weather prediction.

  2. Theoretical Study of Watershed Eco-Compensation Standards

    NASA Astrophysics Data System (ADS)

    Yan, Dandan; Fu, Yicheng; Liu, Biu; Sha, Jinxia

    2018-01-01

    Watershed eco-compensation is an effective way to solve conflicts over water allocation and ecological destruction problems in the exploitation of water resources. Despite increasing interest in the topic, previous research has neglected the effect of water quality and lacked a systematic calculation method. In this study we reviewed and analyzed the current literature and proposed a theoretical framework to improve the calculation of eco-compensation standards. Considering the perspectives of river, forest and wetland ecosystems, the benefit compensation standard was determined from the input-output relationship. Based on the opportunity costs of limiting development and on water conservation losses, the eco-compensation standard was calculated. To address shortcomings in eco-compensation implementation, improvements to the calculation and implementation of the compensation standard were proposed.

  3. An Automated Algorithm for Identifying and Tracking Transverse Waves in Solar Images

    NASA Astrophysics Data System (ADS)

    Weberg, Micah J.; Morton, Richard J.; McLaughlin, James A.

    2018-01-01

    Recent instrumentation has demonstrated that the solar atmosphere supports omnipresent transverse waves, which could play a key role in energizing the solar corona. Large-scale studies are required in order to build up an understanding of the general properties of these transverse waves. To help facilitate this, we present an automated algorithm for identifying and tracking features in solar images and extracting the wave properties of any observed transverse oscillations. We test and calibrate our algorithm using a set of synthetic data, which includes noise and rotational effects. The results indicate an accuracy of 1%–2% for displacement amplitudes and 4%–10% for wave periods and velocity amplitudes. We also apply the algorithm to data from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory and find good agreement with previous studies. Of note, we find that 35%–41% of the observed plumes exhibit multiple wave signatures, which indicates either the superposition of waves or multiple independent wave packets observed at different times within a single structure. The automated methods described in this paper represent a significant improvement on the speed and quality of direct measurements of transverse waves within the solar atmosphere. This algorithm unlocks a wide range of statistical studies that were previously impractical.
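
    As a simplified stand-in for the wave-property extraction step, the sketch below fits a sinusoid plus linear trend to a feature's transverse displacement time series and reports the displacement amplitude, period and derived velocity amplitude; the model form and initial guesses are assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import curve_fit

def oscillation(t, amp, period, phase, trend):
    """Transverse displacement model: a sinusoid on a linear trend."""
    return amp * np.sin(2 * np.pi * t / period + phase) + trend * t

def fit_wave(t, displacement):
    """Least-squares fit returning displacement amplitude, period and the
    derived velocity amplitude 2*pi*amp/period."""
    p0 = [np.std(displacement), (t[-1] - t[0]) / 3.0, 0.0, 0.0]
    (amp, period, phase, trend), _ = curve_fit(oscillation, t, displacement, p0=p0)
    return abs(amp), abs(period), 2 * np.pi * abs(amp) / abs(period)

# synthetic example: 150 km amplitude, 180 s period, weak linear trend
t = np.linspace(0, 600, 200)
y = 150 * np.sin(2 * np.pi * t / 180 + 0.3) + 0.05 * t
print(fit_wave(t, y))
```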

  4. The "perfect storm" in compensation: convergence of events leads to a greater need to review compensation strategies.

    PubMed

    Jones, Robert B

    2004-01-01

    The recent unprecedented convergence of significant strategic events in the compensation arena has created the need for ongoing and extensive compensation planning. This article reviews the events leading to this point, describes the implications of the results from a recent Aon study with WorldatWork, and suggests what employers can do to successfully navigate the "perfect storm" in compensation.

  5. Atmosphere Assessment for MARS Science Laboratory Entry, Descent and Landing Operations

    NASA Technical Reports Server (NTRS)

    Cianciolo, Alicia D.; Cantor, Bruce; Barnes, Jeff; Tyler, Daniel, Jr.; Rafkin, Scot; Chen, Allen; Kass, David; Mischna, Michael; Vasavada, Ashwin R.

    2013-01-01

    On August 6, 2012, the Mars Science Laboratory rover, Curiosity, successfully landed on the surface of Mars. The Entry, Descent and Landing (EDL) sequence was designed using atmospheric conditions estimated from mesoscale numerical models. The models, developed by two independent organizations (Oregon State University and the Southwest Research Institute), were validated against observations at Mars from three prior years. In the weeks and days before entry, the MSL "Council of Atmospheres" (CoA), a group of atmospheric scientists and modelers, instrument experts and EDL simulation engineers, evaluated the latest Mars data from orbiting assets including the Mars Reconnaissance Orbiter's Mars Color Imager (MARCI) and Mars Climate Sounder (MCS), as well as Mars Odyssey's Thermal Emission Imaging System (THEMIS). The observations were compared to the mesoscale models developed for EDL performance simulation to determine if a spacecraft parameter update was necessary prior to entry. This paper summarizes the daily atmosphere observations and comparison to the performance simulation atmosphere models. Options to modify the atmosphere model in the simulation to compensate for atmosphere effects are also presented. Finally, a summary of the CoA decisions and recommendations to the MSL project in the days leading up to EDL is provided.

  6. A retrieval algorithm of hydrometer profile for submillimeter-wave radiometer

    NASA Astrophysics Data System (ADS)

    Liu, Yuli; Buehler, Stefan; Liu, Heguang

    2017-04-01

    Vertical profiles of particle microphysics are vital for estimating climate feedbacks. This paper proposes a new algorithm to retrieve profiles of hydrometeor parameters (i.e., ice, snow, rain, liquid cloud, graupel) based on passive submillimeter-wave measurements. These parameters include water content and particle size. The first part of the algorithm builds the database and retrieves the integrated quantities. The database is built with the Atmospheric Radiative Transfer Simulator (ARTS), which uses atmospheric data to simulate the corresponding brightness temperatures. A neural network, trained on the precalculated database, is developed to retrieve the water path for each particle type. The second part of the algorithm analyzes the statistical relationship between the water path and the vertical parameter profiles. Because of the strong dependence between vertical layers in the profiles, the Principal Component Analysis (PCA) technique is applied. The third part of the algorithm uses the forward model explicitly to retrieve the hydrometeor profiles. A cost function is calculated in each iteration, and the Differential Evolution (DE) algorithm is used to adjust the parameter values during the evolutionary process. The performance of this algorithm will be verified on both the simulation database and measurement data by comparing retrieved profiles with the initial ones. Results show that this algorithm has the ability to retrieve the hydrometeor profiles efficiently. The combination of ARTS and an optimization algorithm can give much better results than the commonly used database approach. Meanwhile, the concept that ARTS can be used explicitly in the retrieval process shows great potential in providing solutions to other retrieval problems.
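
    A minimal sketch of the third stage: Differential Evolution is used to minimize a chi-square cost between observed and simulated brightness temperatures. The `forward_model` callable stands in for ARTS, and the noise model is a placeholder.

```python
import numpy as np
from scipy.optimize import differential_evolution

def retrieve_profile(y_obs, forward_model, bounds, noise_std=1.0):
    """Fit hydrometeor profile parameters by minimizing a chi-square cost
    between observed and simulated brightness temperatures.

    forward_model : callable mapping a parameter vector to simulated
                    brightness temperatures (standing in for ARTS here)
    bounds        : list of (low, high) tuples, one per parameter."""
    def cost(x):
        resid = (forward_model(x) - y_obs) / noise_std
        return np.sum(resid ** 2)

    result = differential_evolution(cost, bounds, maxiter=200, tol=1e-6)
    return result.x, result.fun

# toy check: the "forward model" is linear and the true parameters are recovered
H = np.array([[1.0, 2.0], [0.5, 1.5], [2.0, 0.3]])
y = H @ np.array([0.8, 1.2])
x_hat, final_cost = retrieve_profile(y, lambda x: H @ x, [(0, 5), (0, 5)])
print(x_hat, final_cost)
```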

  7. An Estimation of Turbulent Kinetic Energy and Energy Dissipation Rate Based on Atmospheric Boundary Layer Similarity Theory

    NASA Technical Reports Server (NTRS)

    Han, Jongil; Arya, S. Pal; Shaohua, Shen; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)

    2000-01-01

    Algorithms are developed to extract atmospheric boundary layer profiles for turbulence kinetic energy (TKE) and energy dissipation rate (EDR), with data from a meteorological tower as input. The profiles are based on similarity theory and scalings for the atmospheric boundary layer. The calculated profiles of EDR and TKE are required to match the observed values at 5 and 40 m. The algorithms are coded for operational use and yield plausible profiles over the diurnal variation of the atmospheric boundary layer.
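
    For concreteness, the sketch below evaluates two textbook neutral-surface-layer similarity expressions, eps = u*^3 / (kappa z) for EDR and a TKE estimate from typical velocity-variance ratios; these are standard forms, not necessarily the coefficients or stability corrections adopted in the report.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def edr_neutral(u_star, z):
    """Energy dissipation rate in the neutral surface layer,
    eps = u*^3 / (kappa z); stability functions would multiply this
    in the general case."""
    return u_star ** 3 / (KAPPA * np.asarray(z, dtype=float))

def tke_neutral(u_star):
    """TKE from typical neutral-surface-layer velocity variances,
    sigma_u ~ 2.4 u*, sigma_v ~ 1.9 u*, sigma_w ~ 1.25 u*
    (textbook ratios; the report's own coefficients may differ)."""
    return 0.5 * (2.4 ** 2 + 1.9 ** 2 + 1.25 ** 2) * u_star ** 2

# example: friction velocity 0.3 m/s, heights 5 m and 40 m
print(edr_neutral(0.3, [5.0, 40.0]), tke_neutral(0.3))
```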

  8. Atmospheric correction of the ocean color observations of the medium resolution imaging spectrometer (MERIS)

    NASA Astrophysics Data System (ADS)

    Antoine, David; Morel, Andre

    1997-02-01

    An algorithm is proposed for the atmospheric correction of the ocean color observations by the MERIS instrument. The principle of the algorithm, which accounts for all multiple scattering effects, is presented. The algorithm is then tested, and its accuracy assessed in terms of errors in the retrieved marine reflectances.

  9. On-sky Closed-loop Correction of Atmospheric Dispersion for High-contrast Coronagraphy and Astrometry

    NASA Astrophysics Data System (ADS)

    Pathak, P.; Guyon, O.; Jovanovic, N.; Lozi, J.; Martinache, F.; Minowa, Y.; Kudo, T.; Kotani, T.; Takami, H.

    2018-02-01

    Adaptive optics (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of telescope elevation angle. The look-up table-based correction of atmospheric dispersion results in imperfect compensation leading to the presence of residual dispersion in the point spread function (PSF) and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast while employing high-performance coronagraphs or can compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion by directly using science path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) that has a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. We have demonstrated with the Subaru Coronagraphic Extreme AO (SCExAO) system on-sky closed-loop correction of residual dispersion to <1 mas across H-band. This work will aid in the direct detection of habitable exoplanets with upcoming extremely large telescopes (ELTs) and also provide a diagnostic tool to test the performance of instruments which require sub-milliarcsecond correction.

  10. Low-Frequency Error Extraction and Compensation for Attitude Measurements from STECE Star Tracker

    PubMed Central

    Lai, Yuwang; Gu, Defeng; Liu, Junhong; Li, Wenping; Yi, Dongyun

    2016-01-01

    The low frequency errors (LFE) of star trackers are the most penalizing errors for high-accuracy satellite attitude determination. Two test star trackers have been mounted on the Space Technology Experiment and Climate Exploration (STECE) satellite, a small satellite mission developed by China. To extract and compensate the LFE of the attitude measurements from the two test star trackers, a new approach, Fourier analysis combined with the Vondrak filter method (FAVF), is proposed in this paper. Firstly, the LFE of the two test star trackers' attitude measurements are analyzed and extracted by the FAVF method. Remarkable orbital reproducibility features are found in both test star trackers' attitude measurements. Then, by using the reproducibility feature of the LFE, the two star trackers' LFE patterns are estimated effectively. Finally, based on the actual LFE pattern results, this paper presents a new LFE compensation strategy. The validity and effectiveness of the proposed LFE compensation algorithm are demonstrated by the significant improvement in the consistency between the two test star trackers. The root mean square (RMS) of the relative Euler angle residuals is reduced from [27.95′′, 25.14′′, 82.43′′], 3σ to [16.12′′, 15.89′′, 53.27′′], 3σ. PMID:27754320
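
    A simplified stand-in for the Fourier-analysis half of the approach: the reproducible, orbit-synchronous part of an attitude residual series is estimated by least-squares fitting sine/cosine terms at the orbital frequency and its harmonics. The Vondrak filtering step is omitted, and all names and values are illustrative.

```python
import numpy as np

def fit_orbital_harmonics(t, residual, orbit_period, n_harmonics=3):
    """Least-squares fit of sine/cosine terms at the orbital frequency and
    its first few harmonics to an attitude residual time series; the fitted
    series is an estimate of the reproducible low-frequency error."""
    omega0 = 2 * np.pi / orbit_period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * omega0 * t), np.sin(k * omega0 * t)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return A @ coeffs, coeffs

# synthetic example: two ~90-minute orbits with orbit-synchronous error plus noise
t = np.linspace(0, 2 * 5400.0, 400)
true = 20 * np.sin(2 * np.pi * t / 5400.0) + 5 * np.cos(4 * np.pi * t / 5400.0)
noisy = true + np.random.default_rng(0).normal(0, 2, t.size)
lfe_estimate, _ = fit_orbital_harmonics(t, noisy, orbit_period=5400.0)
```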

  11. Hadronic energy resolution of a highly granular scintillator-steel hadron calorimeter using software compensation techniques

    NASA Astrophysics Data System (ADS)

    Adloff, C.; Blaha, J.; Blaising, J.-J.; Drancourt, C.; Espargilière, A.; Gaglione, R.; Geffroy, N.; Karyotakis, Y.; Prast, J.; Vouters, G.; Francis, K.; Repond, J.; Smith, J.; Xia, L.; Baldolemar, E.; Li, J.; Park, S. T.; Sosebee, M.; White, A. P.; Yu, J.; Buanes, T.; Eigen, G.; Mikami, Y.; Watson, N. K.; Goto, T.; Mavromanolakis, G.; Thomson, M. A.; Ward, D. R.; Yan, W.; Benchekroun, D.; Hoummada, A.; Khoulaki, Y.; Benyamna, M.; Cârloganu, C.; Fehr, F.; Gay, P.; Manen, S.; Royer, L.; Blazey, G. C.; Dyshkant, A.; Lima, J. G. R.; Zutshi, V.; Hostachy, J.-Y.; Morin, L.; Cornett, U.; David, D.; Falley, G.; Gadow, K.; Göttlicher, P.; Günter, C.; Hermberg, B.; Karstensen, S.; Krivan, F.; Lucaci-Timoce, A.-I.; Lu, S.; Lutz, B.; Morozov, S.; Morgunov, V.; Reinecke, M.; Sefkow, F.; Smirnov, P.; Terwort, M.; Vargas-Trevino, A.; Feege, N.; Garutti, E.; Marchesini, I.; Ramilli, M.; Eckert, P.; Harion, T.; Kaplan, A.; Schultz-Coulon, H.-Ch; Shen, W.; Stamen, R.; Tadday, A.; Bilki, B.; Norbeck, E.; Onel, Y.; Wilson, G. W.; Kawagoe, K.; Dauncey, P. D.; Magnan, A.-M.; Wing, M.; Salvatore, F.; Calvo Alamillo, E.; Fouz, M.-C.; Puerta-Pelayo, J.; Balagura, V.; Bobchenko, B.; Chadeeva, M.; Danilov, M.; Epifantsev, A.; Markin, O.; Mizuk, R.; Novikov, E.; Rusinov, V.; Tarkovsky, E.; Kirikova, N.; Kozlov, V.; Smirnov, P.; Soloviev, Y.; Buzhan, P.; Dolgoshein, B.; Ilyin, A.; Kantserov, V.; Kaplin, V.; Karakash, A.; Popova, E.; Smirnov, S.; Kiesling, C.; Pfau, S.; Seidel, K.; Simon, F.; Soldner, C.; Szalay, M.; Tesar, M.; Weuste, L.; Bonis, J.; Bouquet, B.; Callier, S.; Cornebise, P.; Doublet, Ph; Dulucq, F.; Faucci Giannelli, M.; Fleury, J.; Li, H.; Martin-Chassard, G.; Richard, F.; de la Taille, Ch; Pöschl, R.; Raux, L.; Seguin-Moreau, N.; Wicek, F.; Anduze, M.; Boudry, V.; Brient, J.-C.; Jeans, D.; Mora de Freitas, P.; Musat, G.; Reinhard, M.; Ruan, M.; Videau, H.; Bulanek, B.; Zacek, J.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Ruzicka, P.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; Belhorma, B.; Ghazlane, H.; Takeshita, T.; Uozumi, S.; Sauer, J.; Weber, S.; Zeitnitz, C.

    2012-09-01

    The energy resolution of a highly granular 1 m3 analogue scintillator-steel hadronic calorimeter is studied using charged pions with energies from 10 GeV to 80 GeV at the CERN SPS. The energy resolution for single hadrons is determined to be approximately 58%/√E/GeV. This resolution is improved to approximately 45%/√E/GeV with software compensation techniques. These techniques take advantage of the event-by-event information about the substructure of hadronic showers which is provided by the imaging capabilities of the calorimeter. The energy reconstruction is improved either with corrections based on the local energy density or by applying a single correction factor to the event energy sum derived from a global measure of the shower energy density. The application of the compensation algorithms to geant4 simulations yields resolution improvements comparable to those observed for real data.
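
    The sketch below shows the general shape of local software compensation, reweighting hits according to their energy density so that dense, electromagnetic-like deposits count less and diffuse, hadronic-like deposits count more; the two-level weights and the density threshold are arbitrary illustration values, not the calibration used in the paper.

```python
import numpy as np

def software_compensation(hit_energies, cell_volume, w_lo=1.3, w_hi=0.8,
                          rho_split=0.003):
    """Reweight calorimeter hits by their local energy density and return
    the compensated shower energy sum. Hits above the density threshold
    (electromagnetic-like) are weighted down; hits below it (hadronic-like)
    are weighted up."""
    energies = np.asarray(hit_energies, dtype=float)
    rho = energies / cell_volume                   # energy density per cell
    weights = np.where(rho > rho_split, w_hi, w_lo)
    return np.sum(weights * energies)

# example: a handful of hits (GeV) in cells of 27 cm^3
hits = np.array([0.5, 0.02, 0.03, 1.2, 0.01])
print(software_compensation(hits, cell_volume=27.0))
```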

  12. Atmospheric, Cloud, and Surface Parameters Retrieved from Satellite Ultra-spectral Infrared Sounder Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Yang, Ping; Schluessel, Peter; Strow, Larrabee

    2007-01-01

    An advanced retrieval algorithm with a fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. This physical inversion scheme has been developed, dealing with cloudy as well as cloud-free radiance observed with ultraspectral infrared sounders, to simultaneously retrieve surface, atmospheric thermodynamic, and cloud microphysical parameters. A fast radiative transfer model, which applies to the clouded atmosphere, is used for atmospheric profile and cloud parameter retrieval. A one-dimensional (1-d) variational multivariable inversion solution is used to improve an iterative background state defined by an eigenvector-regression-retrieval. The solution is iterated in order to account for non-linearity in the 1-d variational solution. This retrieval algorithm is applied to the MetOp satellite Infrared Atmospheric Sounding Interferometer (IASI) launched on October 19, 2006. IASI possesses an ultra-spectral resolution of 0.25 cm^-1 and a spectral coverage from 645 to 2760 cm^-1. Preliminary retrievals of atmospheric soundings, surface properties, and cloud optical/microphysical properties with the IASI measurements are obtained and presented.
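
    For reference, a single iteration of the standard 1-D Var (optimal estimation) update is sketched below; the operational implementation, covariances and convergence tests behind the retrieval described above are of course more involved, so this is a generic form rather than the authors' exact scheme.

```python
import numpy as np

def onedvar_step(x, x_a, y, F, K, S_a_inv, S_e_inv):
    """One Gauss-Newton iteration of the standard 1-D Var update.

    x        : current state (e.g. T/q profile plus cloud parameters)
    x_a      : background state (e.g. from the regression retrieval)
    y        : observed radiances
    F, K     : forward-model value F(x) and Jacobian at x
    S_a_inv, S_e_inv : inverse background and observation error covariances."""
    A = K.T @ S_e_inv @ K + S_a_inv
    b = K.T @ S_e_inv @ (y - F + K @ (x - x_a))
    return x_a + np.linalg.solve(A, b)

# toy linear forward model F(x) = K x, one iteration from the background
K = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
x_a = np.zeros(2)
y = K @ np.array([1.0, -0.5])
x1 = onedvar_step(x_a, x_a, y, K @ x_a, K, np.eye(2), np.eye(3))
print(x1)
```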

  13. Work on Planetary Atmospheres and Planetary Atmosphere Probes

    NASA Technical Reports Server (NTRS)

    Seiff, Alvin; Lester, Peter

    1999-01-01

    instrument performance, although performed greater than 5 years prior to Jupiter encounter. Capability of decoding the science data from the Experiment Data Record to be provided at encounter was developed and exercised using the tape recording of the first Cruise Checkout data. A team effort was organized to program the selection and combination of data words defining pressure, temperature, acceleration, turbulence, and engineering quantities; to apply decalibration algorithms to convert readings from digital numbers to physical quantities; and to organize the data into a suitable printout. A paper on the Galileo Atmosphere Structure Instrument was written and submitted for publication in a special issue of Space Science Reviews. At the Journal editor's request, the grantee reviewed other Probe instrument papers submitted for this special issue. Calibration data were carefully taken for all experiment sensors and accumulated over a period of 10 years. The data were analyzed, fitted with algorithms, and summarized in a calibration report for use in analyzing and interpreting data returned from Jupiter's atmosphere. The sensors included were the primary science pressure, temperature, and acceleration sensors, and the supporting engineering temperature sensors. This report was distributed to experiment coinvestigators and the Probe Project Office.

  14. A Regularized Neural Net Approach for Retrieval of Atmospheric and Surface Temperatures with the IASI Instrument

    NASA Technical Reports Server (NTRS)

    Aires, F.; Chedin, A.; Scott, N. A.; Rossow, W. B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    In this paper, a fast atmospheric and surface temperature retrieval algorithm is developed for the high resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. This algorithm is constructed on the basis of a neural network technique that has been regularized by the introduction of a priori information. The performance of the resulting fast and accurate inverse radiative transfer model is presented for a large diversified dataset of radiosonde atmospheres including rare events. Two configurations are considered: a tropical-airmass specialized scheme and an all-air-masses scheme.
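
    A toy illustration of a regularized neural-network retrieval on synthetic data, using an L2 weight penalty as the regularizer; the paper instead regularizes by introducing a priori information, so this is a generic stand-in rather than the authors' scheme, and the data here are random placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Fake training set: radiance vectors (100 channels) mapped to temperature
# profiles (40 levels). Real training would use simulated IASI spectra.
rng = np.random.default_rng(0)
radiances = rng.normal(size=(500, 100))
profiles = rng.normal(size=(500, 40))

# alpha is the L2 penalty that regularizes the network weights.
net = MLPRegressor(hidden_layer_sizes=(64, 64), alpha=1e-3, max_iter=500)
net.fit(radiances, profiles)

# Once trained, the network acts as a fast inverse radiative transfer model.
retrieved_profile = net.predict(radiances[:1])
```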

  15. Dosage Compensation of the Sex Chromosomes

    PubMed Central

    Disteche, Christine M.

    2013-01-01

    Differentiated sex chromosomes evolved because of suppressed recombination once sex became genetically controlled. In XX/XY and ZZ/ZW systems, the heterogametic sex became partially aneuploid after degeneration of the Y or W. Often, aneuploidy causes abnormal levels of gene expression throughout the entire genome. Dosage compensation mechanisms evolved to restore balanced expression of the genome. These mechanisms include upregulation of the heterogametic chromosome as well as repression in the homogametic sex. Remarkably, strategies for dosage compensation differ between species. In organisms where more is known about molecular mechanisms of dosage compensation, specific protein complexes containing noncoding RNAs are targeted to the X chromosome. In addition, the dosage-regulated chromosome often occupies a specific nuclear compartment. Some genes escape dosage compensation, potentially resulting in sex-specific differences in gene expression. This review focuses on dosage compensation in mammals, with comparisons to fruit flies, nematodes, and birds. PMID:22974302

  16. Data Processing for Atmospheric Phase Interferometers

    NASA Technical Reports Server (NTRS)

    Acosta, Roberto J.; Nessel, James A.; Morabito, David D.

    2009-01-01

    This paper presents a detailed discussion of calibration procedures used to analyze data recorded from a two-element atmospheric phase interferometer (API) deployed at Goldstone, California. In addition, we describe the data products derived from those measurements that can be used for site intercomparison and atmospheric modeling. Simulated data is used to demonstrate the effectiveness of the proposed algorithm and as a means for validating our procedure. A study of the effect of block size filtering is presented to justify our process for isolating atmospheric fluctuation phenomena from other system-induced effects (e.g., satellite motion, thermal drift). A simulated 24 hr interferometer phase data time series is analyzed to illustrate the step-by-step calibration procedure and desired data products.
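
    A minimal sketch of the block-size filtering idea: subtract a running mean over a chosen block length to strip slow system effects (satellite motion, thermal drift) and keep the short-term atmospheric phase fluctuation. The block length and the RMS statistic are illustrative choices, not the paper's calibration procedure.

```python
import numpy as np

def isolate_fluctuations(phase, block_size):
    """High-pass the interferometer phase by subtracting a running mean over
    `block_size` samples; the block size sets the cutoff between slow
    system-induced effects and atmospheric fluctuations."""
    kernel = np.ones(block_size) / block_size
    slow = np.convolve(phase, kernel, mode="same")
    return phase - slow

def phase_rms(phase, block_size):
    """RMS of the residual phase, a simple site-comparison statistic."""
    return np.std(isolate_fluctuations(phase, block_size))

# synthetic example: slow drift plus fast fluctuation plus noise
t = np.arange(0, 3600.0, 1.0)
phase = 0.01 * t + 2.0 * np.sin(2 * np.pi * t / 30.0) + np.random.default_rng(0).normal(0, 0.2, t.size)
print(phase_rms(phase, block_size=300))
```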

  17. Estimating Top-of-Atmosphere Thermal Infrared Radiance Using MERRA-2 Atmospheric Data

    NASA Astrophysics Data System (ADS)

    Kleynhans, Tania

    Spaceborne thermal infrared sensors have been extensively used for environmental research as well as cross-calibration of other thermal sensing systems. Thermal infrared data from satellites such as Landsat and Terra/MODIS have limited temporal resolution (with a repeat cycle of 1 to 2 days for Terra/MODIS, and 16 days for Landsat). Thermal instruments with finer temporal resolution on geostationary satellites have limited utility for cross-calibration due to their large view angles. Reanalysis atmospheric data are available on a global spatial grid at three-hour intervals, making them a potential alternative to existing satellite image data. This research explores using the Modern-Era Retrospective analysis for Research and Applications, Version 2 (MERRA-2) reanalysis data product to predict top-of-atmosphere (TOA) thermal infrared radiance globally at time scales finer than available satellite data. The MERRA-2 data product provides global atmospheric data every three hours from 1980 to the present. Due to the high temporal resolution of the MERRA-2 data product, opportunities for novel research and applications are presented. While MERRA-2 has been used in renewable energy and hydrological studies, this work seeks to leverage the model to predict TOA thermal radiance. Two approaches have been followed, namely a physics-based approach and a supervised learning approach, using Terra/MODIS band 31 thermal infrared data as reference. The first physics-based model uses forward modeling to predict TOA thermal radiance. The second model infers the presence of clouds from the MERRA-2 atmospheric data before applying an atmospheric radiative transfer model. The last physics-based model parameterized the previous model to minimize computation time. The second approach applied four different supervised learning algorithms to the atmospheric data. The algorithms included a linear least squares regression model, a non-linear support vector regression (SVR) model, a multi

  18. 28 CFR 301.318 - Civilian compensation laws distinguished.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Civilian compensation laws distinguished... Civilian compensation laws distinguished. The Inmate Accident Compensation system is not obligated to... under civilian workmen's compensation laws in that hospitalization is usually completed prior to the...

  19. Is there a clinical benefit with a smooth compensator design compared with a plunged compensator design for passive scattered protons?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabibian, Art A., E-mail: art.tabibian@gmail.com; Powers, Adam; Dolormente, Keith

    In proton therapy, passive scattered proton plans use compensators to conform the dose to the distal surface of the planning volume. These devices are custom made from acrylic or wax for each treatment field using either a plunge-drilled or smooth-milled compensator design. The purpose of this study was to investigate whether there is a clinical benefit to generating passive scattered proton radiation treatment plans with the smooth compensator design. We generated 4 plans with different techniques using the smooth compensators. We chose 5 sites and 5 patients for each site to adequately sample the range of dosimetric effects. The plans were compared and evaluated using multicriteria (MCA) plan quality metrics for plan assessment and comparison, using the Quality Reports [EMR] technology by Canis Lupus LLC. The average absolute difference in dosimetric metrics from the plunged-depth plan ranged from −4.7 to +3.0, and the average absolute performance results ranged from −6.6% to +3%. The manually edited smooth compensator plan yielded the best dosimetric metric, +3.0, and performance, +3.0%, compared to the plunged-depth plan. It was also superior to the other smooth compensator plans. Our results indicate that there are multiple approaches to achieving plans with smooth compensators similar to the plunged-depth plans. The smooth compensators with manual compensator edits yielded equal or better target coverage and normal tissue (NT) doses compared with the other smooth compensator techniques. Further studies are under investigation to evaluate the robustness of the smooth compensator design.

  20. Defining Compensable Injury in Biomedical Research.

    PubMed

    Larkin, Megan E

    2015-01-01

    Biomedical research provides a core social good by enabling medical progress. In the twenty-first century alone, this includes reducing transmission of HIV/AIDS, developing innovative therapies for cancer patients, and exploring the possibilities of personalized medicine. In order to continue to advance medical science, research relies on the voluntary participation of human subjects. Because research is inherently uncertain, unintended harm is an inevitable part of the research enterprise. Currently, injured research participants in the United States must turn to the “litigation lottery” of the tort system in search of compensation. This state of affairs fails research participants, who are too often left uncompensated for devastating losses, and makes the United States an outlier in the international community. In spite of forty years’ worth of Presidential Commissions and other respected voices calling for the development of a no-fault compensation system, no progress has been made to date. One of the reasons for this lack of progress is the failure to develop a coherent ethical basis for an obligation to provide compensation for research related injuries. This problem is exacerbated by the lack of a clear definition of “compensable injury” in the biomedical research context. This article makes a number of important contributions to the scholarship in this growing field. To begin, it examines compensation systems already in existence and concludes that there are four main definitional elements that must be used to define “compensable injury.” Next, it examines the justifications that have been put forth as the basis for an ethical obligation to provide compensation, and settles on retrospective nonmaleficence and distributive and compensatory justice as the most salient and persuasive. Finally, it uses the regulatory elements and the justifications discussed in the first two sections to develop a well-rounded definition of “compensable injury