NASA Astrophysics Data System (ADS)
Cao, Jingtai; Zhao, Xiaohui; Li, Zhaokun; Liu, Wei; Gu, Haijun
2017-11-01
The performance of free space optical (FSO) communication systems is severely limited by atmospheric turbulence. Adaptive optics (AO) is an important method for overcoming atmospheric disturbance; under strong scintillation in particular, sensor-less AO systems play a major role in compensation. In this paper, a modified artificial fish school (MAFS) algorithm is proposed to compensate the aberrations in a sensor-less AO system. Compensation of both static and dynamic aberrations is analyzed, and the performance of FSO communication before and after compensation is compared. In addition, the MAFS algorithm is compared with the artificial fish school (AFS) algorithm, the stochastic parallel gradient descent (SPGD) algorithm and the simulated annealing (SA) algorithm. It is shown that the MAFS algorithm converges faster than the SPGD and SA algorithms, and reaches a better convergence value than the AFS, SPGD and SA algorithms. The sensor-less AO system with the MAFS algorithm effectively increases the coupling efficiency at the receiving terminal with fewer iterations. In conclusion, the MAFS algorithm is of great significance for sensor-less AO systems compensating atmospheric turbulence in FSO communication systems.
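As a point of reference for the comparison above, the baseline SPGD optimizer used in sensor-less AO can be sketched as follows. This is a minimal illustration, not the authors' MAFS implementation; the toy coupling-efficiency metric and all parameter values are assumptions.

```python
import numpy as np

def spgd(metric, u, gain=2.0, sigma=0.05, iters=1000, seed=0):
    """Stochastic parallel gradient descent: perturb all actuator
    channels at once and step along the measured metric change."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        du = sigma * rng.choice([-1.0, 1.0], size=u.shape)  # parallel +/- dither
        dJ = metric(u + du) - metric(u - du)                # two-sided metric difference
        u = u + gain * dJ * du                              # gradient-ascent update
    return u

# Toy "coupling efficiency" that peaks when the control u cancels aberration a.
a = np.array([0.3, -0.2, 0.5, 0.1])
metric = lambda u: np.exp(-np.sum((u - a) ** 2))
u_final = spgd(metric, np.zeros(4))
```

The key property motivating SPGD in sensor-less AO is that only a scalar metric (here the toy coupling efficiency) is measured; no wavefront sensor is needed.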
Assimilation of nontraditional datasets to improve atmospheric compensation
NASA Astrophysics Data System (ADS)
Kelly, Michael A.; Osei-Wusu, Kwame; Spisz, Thomas S.; Strong, Shadrian; Setters, Nathan; Gibson, David M.
2012-06-01
Detection and characterization of space objects require the capability to derive physical properties such as brightness temperature and reflectance. These quantities, together with trajectory and position, are often used to correlate an object from a catalogue of known characteristics. However, retrieval of these physical quantities can be hampered by the radiative obscuration of the atmosphere. Atmospheric compensation must therefore be applied to remove the radiative signature of the atmosphere from electro-optical (EO) collections and enable object characterization. The JHU/APL Atmospheric Compensation System (ACS) was designed to perform atmospheric compensation for long, slant-range paths at wavelengths from the visible to infrared. Atmospheric compensation is critically important for air- and ground-based sensors collecting at low elevations near the Earth's limb. It can be demonstrated that undetected thin, sub-visual cirrus clouds in the line of sight (LOS) can significantly alter retrieved target properties (temperature, irradiance). The ACS algorithm employs non-traditional cirrus datasets and slant-range atmospheric profiles to estimate and remove atmospheric radiative effects from EO/IR collections. Results are presented for a NASA-sponsored collection in the near-IR (NIR) during hypersonic reentry of the Space Shuttle during STS-132.
NASA Astrophysics Data System (ADS)
Mariano, Adrian V.; Grossmann, John M.
2010-11-01
Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
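All three detectors above (SACE, MACE, RACE) share the ACE statistic, which measures the squared cosine of the angle between the pixel and target spectra after whitening by the background covariance. A minimal sketch with synthetic data follows; the band count and spectra are assumptions, not data from the HYDICE experiment.

```python
import numpy as np

def ace(x, s, mu, cov_inv):
    """Adaptive Cosine Estimator: squared cosine between the target and
    pixel spectra in the background-whitened space (1 = perfect match)."""
    xc, sc = x - mu, s - mu
    num = (sc @ cov_inv @ xc) ** 2
    den = (sc @ cov_inv @ sc) * (xc @ cov_inv @ xc)
    return num / den

rng = np.random.default_rng(1)
bg = rng.normal(size=(500, 6))                      # background pixels, 6 bands
mu, cov_inv = bg.mean(axis=0), np.linalg.inv(np.cov(bg.T))
s = mu + np.array([1.0, 2.0, 0.0, 1.0, 0.0, 1.0])   # target spectrum
pixel = mu + 0.8 * (s - mu)                         # sub-pixel target, same direction
score_target = ace(pixel, s, mu, cov_inv)
score_bg = ace(bg[0], s, mu, cov_inv)
```

Because ACE depends only on direction after whitening, a sub-pixel target lying along the target spectrum scores 1 regardless of its fill fraction, which is why the statistic suits both reflectance- and radiance-domain variants.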
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald
2005-10-01
Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.
Method and system for enabling real-time speckle processing using hardware platforms
NASA Technical Reports Server (NTRS)
Ortiz, Fernando E. (Inventor); Kelmelis, Eric (Inventor); Durbano, James P. (Inventor); Curt, Peterson F. (Inventor)
2012-01-01
An accelerator for the speckle atmospheric compensation algorithm may enable real-time speckle processing of video feeds that may enable the speckle algorithm to be applied in numerous real-time applications. The accelerator may be implemented in various forms, including hardware, software, and/or machine-readable media.
NASA Astrophysics Data System (ADS)
Chang, Huan; Yin, Xiao-li; Cui, Xiao-zhou; Zhang, Zhi-chao; Ma, Jian-xin; Wu, Guo-hua; Zhang, Li-jia; Xin, Xiang-jun
2017-12-01
Practical orbital angular momentum (OAM)-based free-space optical (FSO) communications commonly experience serious performance degradation and crosstalk due to atmospheric turbulence. In this paper, we propose a wave-front sensorless adaptive optics (WSAO) system with a modified Gerchberg-Saxton (GS)-based phase retrieval algorithm to correct distorted OAM beams. We use the spatial phase perturbation (SPP) GS algorithm with a distorted probe Gaussian beam as the only input. The principle and parameter selections of the algorithm are analyzed, and the performance of the algorithm is discussed. The simulation results show that the proposed adaptive optics (AO) system can significantly compensate for distorted OAM beams in single-channel or multiplexed OAM systems, which provides new insights into adaptive correction systems using OAM beams.
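The classic Gerchberg-Saxton iteration underlying the proposed SPP-GS variant alternates between two Fourier-conjugate planes, keeping the computed phase while enforcing the known amplitudes. A minimal sketch follows; the 32×32 test phase and uniform probe amplitude are assumptions, and the paper's spatial-phase-perturbation refinement is not reproduced.

```python
import numpy as np

def gerchberg_saxton(near_amp, far_amp, iters=200, seed=0):
    """Classic Gerchberg-Saxton loop: alternate between the two planes,
    keeping the computed phase but enforcing the known amplitudes."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, near_amp.shape)
    for _ in range(iters):
        far = np.fft.fft2(near_amp * np.exp(1j * phase))
        near = np.fft.ifft2(far_amp * np.exp(1j * np.angle(far)))
        phase = np.angle(near)
    return phase

n = 32
near_amp = np.ones((n, n))                  # uniform probe-beam amplitude
true_phase = np.fromfunction(
    lambda y, x: 0.3 * np.sin(2 * np.pi * x / n) + 0.2 * np.cos(2 * np.pi * y / n),
    (n, n))
far_amp = np.abs(np.fft.fft2(near_amp * np.exp(1j * true_phase)))
rec = gerchberg_saxton(near_amp, far_amp)
resid = np.abs(np.fft.fft2(near_amp * np.exp(1j * rec))) - far_amp
rel_err = np.linalg.norm(resid) / np.linalg.norm(far_amp)
```

Each projection step is non-expansive, so the amplitude mismatch is non-increasing from iteration to iteration; the recovered phase is consistent with the measured intensities rather than guaranteed unique.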
NASA Astrophysics Data System (ADS)
Roggemann, M.; Soehnel, G.; Archer, G.
Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used for compensating these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation, and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wave front reconstructors and post detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed loop adaptive optics system using a conventional continuous facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles ranging up to 70 degrees. We have also simulated satellite images, and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.
NASA Astrophysics Data System (ADS)
Huebner, Claudia S.
2016-10-01
As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
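A minimal exhaustive-search (ES) block matcher of the kind compared in this paper can be sketched as follows; the block size, search radius, and sum-of-absolute-differences (SAD) criterion are typical choices assumed here.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive-search block matching: for each block of `ref`, find the
    displacement in `cur` (within +/-search pixels) minimizing the SAD."""
    H, W = ref.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            patch = ref[by:by + block, bx:bx + block]
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(cur[y:y + block, x:x + block] - patch).sum()
                        if sad < best:
                            best, best_v = sad, (dy, dx)
            vectors[(by, bx)] = best_v
    return vectors

rng = np.random.default_rng(2)
frame = rng.uniform(size=(32, 32))
shifted = np.roll(frame, shift=(2, 3), axis=(0, 1))   # global motion of (2, 3)
mv = block_match(frame, shifted)
```

The O(blocks × search²) cost of this exhaustive search is exactly what ARPS-style patterns and the optical-flow alternatives discussed above try to avoid.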
NASA Astrophysics Data System (ADS)
Lin, Jiao-Ling; Yin, Xiaoli; Chang, Huan; Cui, Xiao-zhou; Guo, Yi-Lin; Liao, Huan-Yu; Gao, Chun-Yu; Wu, Guo-hua; Liu, Guang-Yao; Jiang, Jin-Kun; Tian, Qing-Hua
2018-02-01
Atmospheric turbulence limits the performance of orbital angular momentum-based free-space optical communication (FSO-OAM) systems. To compensate the phase distortion induced by atmospheric turbulence, wavefront sensorless adaptive optics (WSAO) has been proposed and studied in recent years. In this paper, a new version of SPGD called MZ-SPGD is proposed, combining the Z-SPGD based on the deformable mirror influence function and the M-SPGD based on Zernike polynomials. Numerical simulations show that the hybrid method decreases convergence times markedly while achieving the same compensation effect as Z-SPGD and M-SPGD.
Simulation results for a finite element-based cumulative reconstructor
NASA Astrophysics Data System (ADS)
Wagner, Roland; Neubauer, Andreas; Ramlau, Ronny
2017-10-01
Modern ground-based telescopes rely on adaptive optics (AO) systems to compensate image degradation caused by atmospheric turbulence. Within an AO system, measurements of incoming light from guide stars are used to adjust deformable mirror(s) in real time to correct for atmospheric distortions. The incoming wavefront has to be derived from sensor measurements, and this intermediate result is then translated into the shape(s) of the deformable mirror(s). Rapid changes of the atmosphere lead to the need for fast wavefront reconstruction algorithms. We review a fast matrix-free algorithm developed by Neubauer that reconstructs the incoming wavefront from Shack-Hartmann measurements based on a finite element discretization of the telescope aperture. The method is enhanced by a domain decomposition ansatz. We show that this algorithm reaches the quality of standard approaches in end-to-end simulations while maintaining the speed of recently introduced linear-complexity solvers.
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the matrix-vector multiplication (MVM). In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and a gradient-based method have been developed. We present a detailed comparison of our reconstructors in terms of both quality and speed in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
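Of the solvers named above, the Kaczmarz method is the simplest to sketch: it cycles through the rows of the linear system and projects the current iterate onto each row's hyperplane. The following generic illustration runs on a synthetic consistent system, not the authors' tomography operator.

```python
import numpy as np

def kaczmarz(A, b, sweeps=200):
    """Kaczmarz iteration: cyclically project the iterate onto the
    hyperplane of each row equation a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 20))   # overdetermined, consistent synthetic system
x_true = rng.normal(size=20)
b = A @ x_true
x_rec = kaczmarz(A, b)
```

Each update touches a single row, which is what makes the method attractive for the very large, sparse systems arising in atmospheric tomography.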
2012-08-01
Anomaly detection algorithms are contrasted and implemented, and the use of the Normalized Difference Vegetation Index (NDVI) following atmospheric compensation is explained.
Fourier transform wavefront control with adaptive prediction of the atmosphere.
Poyneer, Lisa A; Macintosh, Bruce A; Véran, Jean-Pierre
2007-09-01
Predictive Fourier control is a temporal power spectral density-based adaptive method for adaptive optics that predicts the atmosphere under the assumption of frozen flow. The predictive controller is based on Kalman filtering and a Fourier decomposition of atmospheric turbulence using the Fourier transform reconstructor. It provides a stable way to compensate for arbitrary numbers of atmospheric layers. For each Fourier mode, efficient and accurate algorithms estimate the necessary atmospheric parameters from closed-loop telemetry and determine the predictive filter, adjusting as conditions change. This prediction improves atmospheric rejection, leading to significant improvements in system performance. For a 48×48 actuator system operating at 2 kHz, five-layer prediction for all modes is achievable in under 2×10^9 floating-point operations/s.
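The paper's Kalman-filter controller is more sophisticated, but the frozen-flow premise it relies on is easy to illustrate: each spatial Fourier mode of the turbulent phase advances as a phasor rotation, so its next complex amplitude can be predicted from telemetry. The estimator and synthetic signal below are assumptions made for the sketch.

```python
import numpy as np

def predict_mode(history, dt=1.0):
    """Estimate one Fourier mode's frozen-flow temporal frequency from
    telemetry and predict its next complex amplitude.

    Under frozen flow the mode evolves as a[t+dt] = exp(1j*omega*dt)*a[t]."""
    h = np.asarray(history)
    steps = h[1:] / h[:-1]                  # one-step phasor increments
    omega = np.angle(steps.mean()) / dt     # averaged rotation rate
    return np.exp(1j * omega * dt) * h[-1]

# Synthetic wind-driven mode rotating at omega = 0.3 rad/step.
t = np.arange(50)
mode = 2.0 * np.exp(1j * 0.3 * t)
pred = predict_mode(mode)
true_next = 2.0 * np.exp(1j * 0.3 * 50)
```

Predicting the mode one frame ahead is what cancels the servo lag that otherwise dominates the residual in a conventional integrator-only AO loop.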
Extraction of incident irradiance from LWIR hyperspectral imagery
NASA Astrophysics Data System (ADS)
Lahaie, Pierre
2014-10-01
The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: atmospheric compensation (AC) and temperature and emissivity separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. The extraction of the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor and the atmosphere. Another difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we defined a spectral mean operator that is used to filter the ground-leaving radiance, together with a computation of the downwelling irradiance from MODTRAN. A user selects a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be smooth, so that the only spectrally fast-varying component is the downwelling irradiance. Using these assumptions we built an algorithm to estimate the downwelling irradiance. The algorithm is applied to all the selected pixels, and the estimated irradiance is the average over the spectral channels of the resulting computation. The algorithm performs well in simulation, and results are shown for errors in the assumed emissivity and in the atmospheric profiles. Sensor noise mainly influences the required number of pixels.
NASA Astrophysics Data System (ADS)
Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew
2017-08-01
Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.
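The smoothness assumption these TES algorithms share can be sketched as a brute-force temperature search: for each candidate temperature, divide the ground radiance by the Planck function and keep the temperature whose implied emissivity is smoothest. A minimal illustration follows; the LWIR band, temperature grid, and second-difference roughness measure are assumptions.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wl_um, T):
    """Blackbody spectral radiance at wavelength wl_um (micrometers)."""
    wl = wl_um * 1e-6
    return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1.0)

def tes_smooth(radiance, wl_um, T_grid):
    """Smoothness-based TES: pick the temperature whose implied
    emissivity L / B(T) has the smallest second-difference roughness."""
    best_T, best_r = None, np.inf
    for T in T_grid:
        eps = radiance / planck(wl_um, T)
        rough = np.abs(np.diff(eps, 2)).sum()
        if rough < best_r:
            best_r, best_T = rough, T
    return best_T, radiance / planck(wl_um, best_T)

wl = np.linspace(8.0, 12.0, 100)              # LWIR band, micrometers
true_T = 300.0
true_eps = 0.90 + 0.0025 * (wl - 8.0)         # smooth solid emissivity
L = true_eps * planck(wl, true_T)             # noise-free ground radiance
T_hat, eps_hat = tes_smooth(L, wl, np.arange(290.0, 310.5, 0.5))
```

With a wrong temperature the Planck ratio imprints curvature on the implied emissivity, which is exactly the temperature-induced emissivity error the abstract's performance model analyzes.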
Method to analyze remotely sensed spectral data
Stork, Christopher L. [Albuquerque, NM]; Van Benthem, Mark H. [Middletown, DE]
2009-02-17
A fast and rigorous multivariate curve resolution (MCR) algorithm is applied to remotely sensed spectral data. The algorithm is applicable in the solar-reflective spectral region, comprising the visible to the shortwave infrared (approximately 0.4 to 2.5 µm), the midwave infrared, and the thermal emission spectral region, comprising the thermal infrared (approximately 8 to 15 µm). For example, employing minimal a priori knowledge, notably non-negativity constraints on the extracted endmember profiles and a constant abundance constraint for the atmospheric upwelling component, MCR can be used to successfully compensate thermal infrared hyperspectral images for atmospheric upwelling and, thereby, transmittance effects. Further, MCR can accurately estimate the relative spectral absorption coefficients and thermal contrast distribution of a gas plume component near the minimum detectable quantity.
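The non-negativity-constrained factorization at the heart of MCR can be illustrated with a simple multiplicative-update non-negative matrix factorization, a common stand-in for alternating constrained least squares. The synthetic abundances, endmembers, and update scheme below are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def nmf(X, k, iters=2000, seed=0):
    """Multiplicative-update NMF: factor X ~= W @ H with W, H >= 0,
    a minimal stand-in for MCR with non-negativity constraints."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.uniform(0.1, 1.0, (n, k))
    H = rng.uniform(0.1, 1.0, (k, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update spectra, stay >= 0
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update abundances, stay >= 0
    return W, H

rng = np.random.default_rng(4)
abund = rng.uniform(size=(100, 3))    # per-pixel abundances
endm = rng.uniform(size=(3, 40))      # endmember spectra
X = abund @ endm                      # noise-free mixed pixels
W, H = nmf(X, 3)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Because the updates are element-wise ratios of non-negative quantities, the factors can never go negative, which is the property the abstract exploits for physically meaningful endmember profiles.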
NASA Astrophysics Data System (ADS)
Pieper, Michael
Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance, and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, whose ground radiances and at-aperture radiances have a linear relationship determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fit of the at-aperture radiance against the estimated ground radiance yields the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra of solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are found by searching for the temperature that yields the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions enabling their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affects the shape of the atmospheric estimates and how the clear-channel assumption affects their magnitude. Emissivity spectra of solids usually have some roughness.
The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using an incorrect temperature, and the errors caused by sensor noise and wavelength calibration. The way these errors interact determines the overall TES performance. Since the AC and TES processes are interwoven, any errors in AC are transferred to TES and to the final temperature and emissivity estimates. Combining the two models, shape errors caused by the blackbody assumption are transferred to the emissivity estimates, while magnitude errors from the clear-channel assumption are compensated by TES temperature-induced emissivity errors. The ability of the temperature-induced error to compensate for such atmospheric errors makes it difficult to determine the correct atmospheric parameters for a scene. With these models we are able to determine the expected quality of estimated emissivity spectra based on the quality of blackbody-like materials on the ground, the emissivity of the materials being searched for, and the properties of the sensor. The quality of material emissivity spectra is a key factor in determining detection performance for a material in a scene.
Results of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) Experiment
NASA Technical Reports Server (NTRS)
Wilson, K. E.; Leatherman, P. R.; Cleis, R.; Spinhirne, J.; Fugate, R. Q.
1997-01-01
Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes. Phase I of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) experiment demonstrated the first propagation of an atmosphere-compensated laser beam to the lunar retroreflectors. A 1.06-micron Nd:YAG laser beam was propagated through the full aperture of the 1.5-m telescope at the Starfire Optical Range (SOR), Kirtland Air Force Base, New Mexico, to the Apollo 15 retroreflector array at Hadley Rille. Laser guide-star adaptive optics were used to compensate turbulence-induced aberrations across the transmitter's 1.5-m aperture. A 3.5-m telescope, also located at the SOR, was used as a receiver for detecting the return signals. JPL-supplied Chebyshev polynomials of the retroreflector locations were used to develop tracking algorithms for the telescopes. At times we observed in excess of 100 photons returned from a single pulse when the outgoing beam from the 1.5-m telescope was corrected by the adaptive optics system. No returns were detected when the outgoing beam was uncompensated. The experiment was conducted from March through September 1994, during the first or last quarter of the Moon.
Temperature - Emissivity Separation Assessment in a Sub-Urban Scenario
NASA Astrophysics Data System (ADS)
Moscadelli, M.; Diani, M.; Corsini, G.
2017-10-01
In this paper, a methodology for evaluating the effectiveness of different TES strategies is presented. The methodology takes into account the specific material of interest in the monitored scenario, the sensor characteristics, and errors in the atmospheric compensation step. It is proposed in order to predict and analyse algorithm performance during the planning of a remote sensing mission aimed at discovering specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To address the TES problem, three state-of-the-art algorithms and a recently proposed one are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear Piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by the proposed methodology are compared and discussed.
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science-instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
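The on-orbit LMS identification step described above can be sketched as adapting FIR filter weights until they reproduce an unknown transfer function's response. The toy plant, excitation, and step size below are assumptions for illustration.

```python
import numpy as np

def lms_identify(x, d, taps=4, mu=0.05):
    """LMS system identification: adapt FIR weights w so that w * x
    tracks the measured response d (the transversal-filter estimate
    of the servo-to-sensor transfer function)."""
    w = np.zeros(taps)
    buf = np.zeros(taps)                 # delay line of recent inputs
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        e = d[n] - w @ buf               # instantaneous error
        w += 2 * mu * e * buf            # stochastic-gradient update
    return w

rng = np.random.default_rng(5)
h_true = np.array([0.5, -0.3, 0.2, 0.1])     # unknown plant impulse response
x = rng.normal(size=5000)                     # broadband excitation
d = np.convolve(x, h_true)[:len(x)]           # plant output (noise-free)
w = lms_identify(x, d)
```

Filtered-X LMS extends this scheme by pre-filtering the reference signal through the identified plant model before the weight update, which is what lets it cancel the disturbance through the servo path.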
A Novel Modified Omega-K Algorithm for Synthetic Aperture Imaging Lidar through the Atmosphere
Guo, Liang; Xing, Mengdao; Tang, Yu; Dan, Jing
2008-01-01
The spatial resolution of a conventional imaging lidar system is constrained by the diffraction limit of the telescope's aperture. Combining lidar with synthetic aperture (SA) processing techniques may overcome the diffraction limit and pave the way for higher-resolution airborne or spaceborne remote sensors. For a lidar transmitting a frequency-modulated continuous-wave (FMCW) signal, the motion during the transmission of a sweep and the reception of the corresponding echo is expected to be one of the major problems. The given modified Omega-K algorithm takes this continuous motion into account, efficiently compensating for the Doppler shift it induces and for the azimuth ambiguity due to the low pulse recurrence frequency limited by the tunable laser. A phase screen (PS) distorted by atmospheric turbulence following the von Karman spectrum is then simulated using the Fourier transform method. Finally, computer simulation shows the validity of the modified algorithm and that, if the synthetic aperture length does not exceed the atmospheric coherence length for SAIL, the effect of the turbulence can be ignored. PMID:27879865
Atmospheric correction for hyperspectral ocean color sensors
NASA Astrophysics Data System (ADS)
Ibrahim, A.; Ahmad, Z.; Franz, B. A.; Knobelspiesse, K. D.
2017-12-01
NASA's heritage Atmospheric Correction (AC) algorithm for multi-spectral ocean color sensors is inadequate for the new generation of spaceborne hyperspectral sensors, such as NASA's first hyperspectral Ocean Color Instrument (OCI) onboard the anticipated Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite mission. The AC process must estimate and remove the atmospheric path radiance contribution due to the Rayleigh scattering by air molecules and by aerosols from the measured top-of-atmosphere (TOA) radiance. Further, it must also compensate for the absorption by atmospheric gases and correct for reflection and refraction of the air-sea interface. We present and evaluate an improved AC for hyperspectral sensors beyond the heritage approach by utilizing the additional spectral information of the hyperspectral sensor. The study encompasses a theoretical radiative transfer sensitivity analysis as well as a practical application of the Hyperspectral Imager for the Coastal Ocean (HICO) and the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensors.
Numerical system for monitoring pressurized equipment
NASA Astrophysics Data System (ADS)
Dobra, Remus; Pasculescu, Dragos; Boca, Maria Loredana; Moldovan, Lucian
2016-12-01
Electrical devices for operation in potentially explosive atmospheres are designed and built in accordance with the European standard EN 50015:1995 (pressurized enclosure "p"). Type of protection p, which uses a protective gas in the housing, is intended to prevent the formation of an explosive atmosphere within it by maintaining an overpressure relative to the surrounding atmosphere and, where appropriate, by dilution. Research conducted on pressurized encapsulation aimed at developing new procedures for determining the pressurization parameters that allow safe use of electrical appliances. Pressurization with compensation for losses, which maintains overpressure inside the enclosure when the outlets are closed, is achieved by feeding protective gas in an amount sufficient to fully compensate for the inevitable losses from the pressurized housing and its associated pipework. The conditions and measures required for appliances and equipment that are potential ignition sources in explosive atmospheres are detailed in SR EN 50016/2000. For the pressurized encapsulation mode of protection, electrical equipment can be kept safe by the overpressure created inside it and in its air supply pipes. The paper presents a modern method to determine the parameters of electrical equipment with pressurized enclosures. For controlling such equipment, a specific algorithm has been developed and laboratory tested.
40 CFR 1065.275 - N2O measurement devices.
Code of Federal Regulations, 2013 CFR
2013-07-01
... infrared (NDIR) analyzer. You may use an NDIR analyzer that has compensation algorithms that are functions... any compensation algorithm is 0% (that is, no bias high and no bias low), regardless of the... has compensation algorithms that are functions of other gaseous measurements and the engine's known or...
NASA Astrophysics Data System (ADS)
Li, Shuang; Peng, Yuming
2012-01-01
In order to accurately deliver an entry vehicle through the Martian atmosphere to the prescribed parachute deployment point, active Mars entry guidance is essential. This paper addresses the issue of Mars atmospheric entry guidance using the command generator tracker (CGT) based direct model reference adaptive control to reduce the adverse effect of the bounded uncertainties on atmospheric density and aerodynamic coefficients. Firstly, the nominal drag acceleration profile meeting a variety of constraints is planned off-line in the longitudinal plane as the reference model to track. Then, the CGT based direct model reference adaptive controller and the feed-forward compensator are designed to robustly track the aforementioned reference drag acceleration profile and to effectively reduce the downrange error. Afterwards, the heading alignment logic is adopted in the lateral plane to reduce the crossrange error. Finally, the validity of the guidance algorithm proposed in this paper is confirmed by Monte Carlo simulation analysis.
An observer-based compensator for distributed delays
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok
1990-01-01
This paper presents an algorithm for compensating delays that are distributed between the sensor(s), controller and actuator(s) within a control loop. This observer-based algorithm is specially suited to compensation of network-induced delays in integrated communication and control systems. The robustness of the algorithm relative to plant model uncertainties has been examined.
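The core of a model-based delay compensator, propagating the latest state estimate across the known delay by replaying buffered control inputs through the plant model, can be sketched as follows (the matrices and buffering scheme are illustrative, not the paper's exact observer):

```python
import numpy as np

def predict_over_delay(x_est, u_buffer, A, B):
    """Propagate a state estimate forward through the delayed samples by
    replaying buffered control inputs through the discrete plant model
    x[k+1] = A x[k] + B u[k]. A minimal sketch of model-based
    delay compensation; one buffered input per sample of delay."""
    x = np.asarray(x_est, dtype=float)
    for u in u_buffer:
        x = A @ x + B @ np.atleast_1d(u)
    return x
```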
Impact of beacon wavelength on phase-compensation performance
NASA Astrophysics Data System (ADS)
Enterline, Allison A.; Spencer, Mark F.; Burrell, Derek J.; Brennan, Terry J.
2017-09-01
This study evaluates the effects of beacon-wavelength mismatch on phase-compensation performance. In general, beacon-wavelength mismatch occurs at the system level because the beacon-illuminator laser (BIL) and high-energy laser (HEL) are often at different wavelengths. Such is the case, for example, when using an aperture sharing element to isolate the beam-control sensor suite from the blinding nature of the HEL. With that said, this study uses the WavePlex Toolbox in MATLAB® to model ideal spherical wave propagation through various atmospheric-turbulence conditions. To quantify phase-compensation performance, we also model a nominal adaptive-optics (AO) system. We achieve correction from a Shack-Hartmann wavefront sensor and continuous-face-sheet deformable mirror using a least-squares phase reconstruction algorithm in the Fried geometry and a leaky integrator control law. To this end, we plot the power in the bucket metric as a function of BIL-HEL wavelength difference. Our initial results show that positive BIL-HEL wavelength differences achieve better phase compensation performance compared to negative BIL-HEL wavelength differences (i.e., red BILs outperform blue BILs). This outcome is consistent with past results.
A computerized compensator design algorithm with launch vehicle applications
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1976-01-01
This short paper presents a computerized algorithm for the design of compensators for large launch vehicles. The algorithm is applicable to the design of compensators for linear, time-invariant, control systems with a plant possessing a single control input and multioutputs. The achievement of frequency response specifications is cast into a strict constraint mathematical programming format. An improved solution algorithm for solving this type of problem is given, along with the mathematical necessities for application to systems of the above type. A computer program, compensator improvement program (CIP), has been developed and applied to a pragmatic space-industry-related example.
Application of digital image processing techniques to astronomical imagery, 1979
NASA Technical Reports Server (NTRS)
Lorre, J. J.
1979-01-01
Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.
NASA Astrophysics Data System (ADS)
Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei
2017-09-01
Due to the lever-arm effect and flexural deformation in practical transfer alignment (TA), TA performance is degraded. The existing polar TA algorithm compensates only a fixed lever-arm, without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm for the lever-arm effect and flexural deformation is proposed to improve the accuracy and speed of polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to enhance the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results demonstrate that the modified compensation algorithm based on the improved AKF can effectively compensate the lever-arm effect and flexural deformation, improving the accuracy and speed of TA in the polar region.
An embedded processor for real-time atmospheric compensation
NASA Astrophysics Data System (ADS)
Bodnar, Michael R.; Curt, Petersen F.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.
2009-05-01
Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured image information can be recovered using post-processing techniques, the computational complexity of such approaches has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded processing board. The end result is a lightweight, low-power image processing system that improves the quality of long-range imagery in real time, and uses modular video I/O to provide a flexible interface to most common digital and analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase over a modern desktop PC was achieved in a form factor that is compact, low-power, and field-deployable.
An observer-based compensator for distributed delays in integrated control systems
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok
1989-01-01
This paper presents an algorithm for compensation of delays that are distributed within a control loop. The observer-based algorithm is especially suitable for compensating network-induced delays that are likely to occur in integrated control systems of future-generation aircraft. The robustness of the algorithm relative to uncertainties in the plant model has been examined.
Network compensation for missing sensors
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1991-01-01
A network learning translation-invariance algorithm to compute interpolation functions is presented. With one fixed receptive field, this algorithm can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when enough sensors remain to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights of output units affected by the loss.
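As a crude stand-in for the learned interpolation functions, a lost sensor's reading can be reconstructed from its working neighbours (purely illustrative: the paper's network learns these weights from translated images, whereas here they are fixed at 1/2 each):

```python
import numpy as np

def compensate_lost_sensor(samples, lost_idx):
    """Replace a lost sensor's reading with the mean of its two nearest
    working neighbours, a fixed-weight stand-in for the learned linear
    interpolation weights described in the abstract."""
    out = np.array(samples, dtype=float)
    out[lost_idx] = 0.5 * (out[lost_idx - 1] + out[lost_idx + 1])
    return out
```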
Estimation of surface temperature in remote pollution measurement experiments
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for the 'basic atmosphere' and several modifications of this caused by deviations of the various atmospheric and surface parameters from their base model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of simultaneous variation of two parameters are also examined. Use of these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.
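The additive correction scheme the abstract describes can be written compactly; the coefficient and deviation values below are invented for illustration, standing in for the analytic relations fitted from the radiative transfer model:

```python
def surface_temperature(t_brightness, deviations, coeffs):
    """Estimate surface temperature by adding one linear correction per
    atmospheric/surface parameter deviation from the base-model values
    to the measured effective brightness temperature (coefficients are
    illustrative, not the paper's fitted relations)."""
    correction = sum(c * d for c, d in zip(coeffs, deviations))
    return t_brightness + correction
```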
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok
1990-01-01
A procedure for compensating for the effects of distributed network-induced delays in integrated communication and control systems (ICCS) is proposed. The problem of analyzing systems with time-varying and possibly stochastic delays could be circumvented by use of a deterministic observer which is designed to perform under certain restrictive but realistic assumptions. The proposed delay-compensation algorithm is based on a deterministic state estimator and a linear state-variable-feedback control law. The deterministic observer can be replaced by a stochastic observer without any structural modifications of the delay compensation algorithm. However, if a feedforward-feedback control law is chosen instead of the state-variable feedback control law, the observer must be modified as a conventional nondelayed system would be. Under these circumstances, the delay compensation algorithm would be accordingly changed. The separation principle of the classical Luenberger observer holds true for the proposed delay compensator. The algorithm is suitable for ICCS in advanced aircraft, spacecraft, manufacturing automation, and chemical process applications.
Motion compensation for ultra wide band SAR
NASA Technical Reports Server (NTRS)
Madsen, S.
2001-01-01
This paper describes an algorithm that combines wavenumber domain processing with a procedure that enables motion compensation to be applied as a function of target range and azimuth angle. First, data are processed with nominal motion compensation applied, partially focusing the image, then the motion compensation of individual subpatches is refined. The results show that the proposed algorithm is effective in compensating for deviations from a straight flight path, from both a performance and a computational efficiency point of view.
A Novel Speed Compensation Method for ISAR Imaging with Low SNR
Liu, Yongxiang; Zhang, Shuanghui; Zhu, Dekang; Li, Xiang
2015-01-01
In this paper, two novel speed compensation algorithms for ISAR imaging under a low signal-to-noise ratio (SNR) condition are proposed, based on the cubic phase function (CPF) and the integrated cubic phase function (ICPF), respectively. These two algorithms estimate the speed of the target directly from the wideband radar echo, overcoming the limitation of speed measurement in the radar system. With the utilization of non-coherent accumulation, the ICPF-based speed compensation algorithm is robust to noise and can meet the requirement of speed compensation for ISAR imaging under a low SNR condition. Moreover, a fast search strategy, consisting of a coarse search and a precise search, is introduced to decrease the computational burden of speed compensation based on CPF and ICPF. Experimental results based on radar data validate the effectiveness of the proposed algorithms. PMID:26225980
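The cubic phase function can be sketched directly from its definition: for a signal with quadratic phase a2*n^2, the product s(n0+m)s(n0-m) carries the m-dependence exp(j*2*a2*m^2), so |CPF| peaks at the trial rate 2*a2. The grid and normalisation below are our choices, not the paper's:

```python
import numpy as np

def cpf_peak(signal, n0, omegas):
    """Evaluate |CPF(n0, w)| = |sum_m s(n0+m) s(n0-m) exp(-j w m^2)|
    over a grid of trial chirp rates and return the maximising rate.
    A minimal sketch of the CPF estimator named in the abstract."""
    m_max = min(n0, len(signal) - 1 - n0)
    m = np.arange(m_max + 1)
    prod = signal[n0 + m] * signal[n0 - m]
    mags = [abs(np.sum(prod * np.exp(-1j * w * m**2))) for w in omegas]
    return omegas[int(np.argmax(mags))]
```

For speed compensation the estimated chirp rate maps to a radial-velocity estimate through the radar waveform parameters, which the two proposed algorithms then refine.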
Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design
NASA Technical Reports Server (NTRS)
Whorton, Mark; Buschek, Harald; Calise, Anthony J.
1996-01-01
Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.
NASA Technical Reports Server (NTRS)
Blissit, J. A.
1986-01-01
Using analysis results from the post trajectory optimization program, an adaptive guidance algorithm is developed to compensate for density, aerodynamic and thrust perturbations during an atmospheric orbital plane change maneuver. The maneuver offers increased mission flexibility along with potential fuel savings for future reentry vehicles. Although designed to guide a proposed NASA Entry Research Vehicle, the algorithm is sufficiently generic for a range of future entry vehicles. The plane change analysis provides insight suggesting a straightforward algorithm based on an optimized nominal command profile. Bank angle, angle of attack, and engine thrust level, ignition and cutoff times are modulated to adjust the vehicle's trajectory to achieve the desired end-conditions. A performance evaluation of the scheme demonstrates a capability to guide to within 0.05 degrees of the desired plane change and five nautical miles of the desired apogee altitude while maintaining heating constraints. The algorithm is tested under off-nominal conditions of ±30% density biases, two density profile models, ±15% aerodynamic uncertainty, and a 33% thrust loss, and for various combinations of these conditions.
Novel wavelength diversity technique for high-speed atmospheric turbulence compensation
NASA Astrophysics Data System (ADS)
Arrasmith, William W.; Sullivan, Sean F.
2010-04-01
The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. The development of parallel processing capabilities is finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
An approximate, maximum terminal velocity descent to a point
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eisler, G.R.; Hull, D.G.
1987-01-01
No closed form control solution exists for maximizing the terminal velocity of a hypersonic glider at an arbitrary point. As an alternative, this study uses neighboring extremal theory to provide a sampled-data feedback law to guide the vehicle to a constrained ground range and altitude. The guidance algorithm is divided into two parts: 1) computation of a nominal, approximate, maximum-terminal-velocity trajectory to a constrained final altitude and computation of the resulting unconstrained groundrange, and 2) computation of the neighboring extremal control perturbation at the sample value of flight path angle to compensate for changes in the approximate physical model and enable the vehicle to reach the on-board computed groundrange. The trajectories are characterized by glide and dive flight to the target to minimize the time spent in the denser parts of the atmosphere. The proposed on-line scheme successfully brings the final altitude and range constraints together, as well as compensating for differences in flight model, atmosphere, and aerodynamics at the expense of guidance update computation time. Comparison with an independent, parameter optimization solution for the terminal velocity is excellent. 6 refs., 3 figs.
A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.
2005-01-01
This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive, and State Space predictors. The paper presents proof that the stochastic approximation algorithm achieves the best compensation among all four adaptive predictors, and investigates in depth the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and state space predictor achieve better compensation of transport delay than the McFarland predictor.
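The shared principle behind all such transport-delay predictors is lead compensation, extrapolating the vehicle state ahead by the known delay. A first-order sketch of that principle only (it is not the McFarland, adaptive, or state-space predictor itself):

```python
def extrapolate_ahead(y_now, y_prev, dt, delay):
    """Predict a signal 'delay' seconds ahead from its current value and
    finite-difference slope. Real simulator predictors are more
    elaborate; this shows only the shared lead-compensation idea."""
    slope = (y_now - y_prev) / dt
    return y_now + slope * delay
```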
NASA Astrophysics Data System (ADS)
Funke, B.; López-Puertas, M.; Bermejo-Pantaleón, D.; von Clarmann, T.; Stiller, G. P.; Höpfner, M.; Grabowski, U.; Kaufmann, M.
2007-06-01
Nonlocal thermodynamic equilibrium (non-LTE) simulations of the 12C16O(1 → 0) fundamental band, the 12C16O(2 → 1) hot band, and the isotopic 13C16O(1 → 0) band performed with the Generic Radiative Transfer and non-LTE population Algorithm (GRANADA) and the Karlsruhe Optimized and Precise Radiative Transfer Algorithm (KOPRA) have been compared to spectrally resolved 4.7 μm radiances measured by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). The performance of the non-LTE simulation has been assessed in terms of band radiance ratios in order to avoid a compensation of possible non-LTE model errors by retrieval errors in the CO abundances inferred from MIPAS data with the same non-LTE algorithms. The agreement with the measurements is within 5% for the fundamental band and within 10% for the hot band. Simulated 13C16O radiances agree with the measurements within the instrumental noise error. Solar reflectance at the surface or clouds has been identified as an important additional excitation mechanism for the CO(2) state. The study represents a thorough validation of the non-LTE scheme used in the retrieval of CO abundances from MIPAS data.
Heat Transport Compensation in Atmosphere and Ocean over the Past 22,000 Years
Yang, Haijun; Zhao, Yingying; Liu, Zhengyu; Li, Qing; He, Feng; Zhang, Qiong
2015-01-01
The Earth’s climate has experienced dramatic changes over the past 22,000 years; however, the total meridional heat transport (MHT) of the climate system remains stable. A 22,000-year-long simulation using an ocean-atmosphere coupled model shows that the changes in atmosphere and ocean MHT are significant but tend to be out of phase in most regions, mitigating the total MHT change, which helps to maintain the stability of the Earth’s overall climate. A simple conceptual model is used to understand the compensation mechanism. The simple model can reproduce qualitatively the evolution and compensation features of the MHT over the past 22,000 years. We find that the global energy conservation requires the compensation changes in the atmosphere and ocean heat transports. The degree of compensation is mainly determined by the local climate feedback between surface temperature and net radiation flux at the top of the atmosphere. This study suggests that an internal mechanism may exist in the climate system, which might have played a role in constraining the global climate change over the past 22,000 years. PMID:26567710
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok
1990-01-01
The implementation and verification of the delay-compensation algorithm are addressed. The delay compensator has been experimentally verified at an IEEE 802.4 network testbed for velocity control of a DC servomotor. The performance of the delay-compensation algorithm was also examined by combined discrete-event and continuous-time simulation of the flight control system of an advanced aircraft that uses the SAE (Society of Automotive Engineers) linear token passing bus for data communications.
Compensation in the presence of deep turbulence using tiled-aperture architectures
NASA Astrophysics Data System (ADS)
Spencer, Mark F.; Brennan, Terry J.
2017-05-01
The presence of distributed-volume atmospheric aberrations or "deep turbulence" presents unique challenges for beam-control applications which look to sense and correct for disturbances found along the laser-propagation path. This paper explores the potential for branch-point-tolerant reconstruction algorithms and tiled-aperture architectures to correct for the branch cuts contained in the phase function due to deep-turbulence conditions. Using wave-optics simulations, the analysis aims to parameterize the fitting-error performance of tiled-aperture architectures operating in a null-seeking control loop with piston, tip, and tilt compensation of the individual optical beamlet trains. To evaluate fitting-error performance, the analysis plots normalized power in the bucket as a function of the Fried coherence diameter, the log-amplitude variance, and the number of subapertures for comparison purposes. Initial results show that tiled-aperture architectures with a large number of subapertures outperform filled-aperture architectures with continuous-face-sheet deformable mirrors.
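The normalised power-in-the-bucket metric used in such performance plots is straightforward to compute from a far-field intensity frame; the pixel geometry and default centring below are illustrative choices:

```python
import numpy as np

def power_in_bucket(intensity, radius, center=None):
    """Fraction of total power falling inside a circular 'bucket' of the
    given pixel radius, centred by default on the frame centre."""
    ny, nx = intensity.shape
    cy, cx = center if center is not None else (ny // 2, nx // 2)
    y, x = np.ogrid[:ny, :nx]
    mask = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    return float(intensity[mask].sum() / intensity.sum())
```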
Design and realization of adaptive optical principle system without wavefront sensing
NASA Astrophysics Data System (ADS)
Wang, Xiaobin; Niu, Chaojun; Guo, Yaxing; Han, Xiang'e.
2018-02-01
In this paper, we focus on performance improvement of the free space optical communication system and carry out research on wavefront-sensorless adaptive optics. We use a phase-only liquid crystal spatial light modulator (SLM) as the wavefront corrector. The optical intensity distribution of the distorted wavefront is detected by a CCD. We develop a wavefront controller based on ARM and software based on the Linux operating system; the wavefront controller drives both the CCD camera and the wavefront corrector. There are two SLMs in the experimental system: one simulates atmospheric turbulence and the other compensates the wavefront distortion. The experimental results show that the performance quality metric (the total gray value of 25 pixels) increases from 3037 to 4863 after 200 iterations. It is thus demonstrated that our wavefront-sensorless adaptive optics system based on the SPGD algorithm performs well in compensating wavefront distortion.
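The SPGD update at the heart of such a wavefront-sensorless loop fits in a few lines. The gain and perturbation amplitude below are illustrative, and the quadratic metric in the usage example is a synthetic stand-in for the camera's gray-value metric:

```python
import numpy as np

def spgd_step(u, metric, gain=0.5, amp=0.1, rng=None):
    """One stochastic parallel gradient descent iteration: perturb all
    control channels simultaneously with a random bipolar pattern,
    measure the metric change, and step along the estimated gradient."""
    if rng is None:
        rng = np.random.default_rng()
    delta = amp * rng.choice([-1.0, 1.0], size=len(u))
    d_metric = metric(u + delta) - metric(u - delta)
    return u + gain * d_metric * delta
```

Repeating this step climbs the metric without ever measuring the wavefront itself, which is why SPGD suits sensorless adaptive optics.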
40 CFR 1065.250 - Nondispersive infrared analyzer.
Code of Federal Regulations, 2013 CFR
2013-07-01
... has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no bias high and...
40 CFR 1065.284 - Zirconia (ZrO2) analyzer.
Code of Federal Regulations, 2012 CFR
2012-07-01
... that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no...
40 CFR 1065.250 - Nondispersive infra-red analyzer.
Code of Federal Regulations, 2011 CFR
2011-07-01
... analyzer that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no...
40 CFR 1065.284 - Zirconia (ZrO2) analyzer.
Code of Federal Regulations, 2011 CFR
2011-07-01
... that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no...
40 CFR 1065.284 - Zirconia (ZrO2) analyzer.
Code of Federal Regulations, 2013 CFR
2013-07-01
... that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no...
40 CFR 1065.250 - Nondispersive infrared analyzer.
Code of Federal Regulations, 2012 CFR
2012-07-01
... has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no bias high and...
40 CFR 1065.284 - Zirconia (ZrO2) analyzer.
Code of Federal Regulations, 2010 CFR
2010-07-01
... that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no...
40 CFR 1065.250 - Nondispersive infra-red analyzer.
Code of Federal Regulations, 2010 CFR
2010-07-01
... analyzer that has compensation algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no...
40 CFR 1065.272 - Nondispersive ultraviolet analyzer.
Code of Federal Regulations, 2013 CFR
2013-07-01
... in § 1065.307. You may use a NDUV analyzer that has compensation algorithms that are functions of... compensation algorithm is 0% (that is, no bias high and no bias low), regardless of the uncompensated signal's...
40 CFR 1065.272 - Nondispersive ultraviolet analyzer.
Code of Federal Regulations, 2012 CFR
2012-07-01
... in § 1065.307. You may use a NDUV analyzer that has compensation algorithms that are functions of... compensation algorithm is 0% (that is, no bias high and no bias low), regardless of the uncompensated signal's...
40 CFR 1065.369 - H2O, CO, and CO2 interference verification for photoacoustic alcohol analyzers.
Code of Federal Regulations, 2014 CFR
2014-07-01
... compensation algorithms that utilize measurements of other gases to meet this interference verification, simultaneously conduct these other measurements to test the compensation algorithms during the analyzer...
Compensation of distributed delays in integrated communication and control systems
NASA Technical Reports Server (NTRS)
Ray, Asok; Luck, Rogelio
1991-01-01
The concept, analysis, implementation, and verification of a method for compensating delays that are distributed between the sensors, controller, and actuators within a control loop are discussed. With the objective of mitigating the detrimental effects of these network induced delays, a predictor-controller algorithm was formulated and analyzed. Robustness of the delay compensation algorithm was investigated relative to parametric uncertainties in plant modeling. The delay compensator was experimentally verified on an IEEE 802.4 network testbed for velocity control of a DC servomotor.
40 CFR 1065.272 - Nondispersive ultraviolet analyzer.
Code of Federal Regulations, 2011 CFR
2011-07-01
... in § 1065.307. You may use a NDUV analyzer that has compensation algorithms that are functions of... compensation algorithm is 0.0% (that is, no bias high and no bias low), regardless of the uncompensated signal...
40 CFR 1065.272 - Nondispersive ultraviolet analyzer.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in § 1065.307. You may use a NDUV analyzer that has compensation algorithms that are functions of... compensation algorithm is 0.0% (that is, no bias high and no bias low), regardless of the uncompensated signal...
NASA Technical Reports Server (NTRS)
Green, Robert O.; Vane, Gregg; Conel, James E.
1988-01-01
An assessment of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) performance was made for a flight over Mountain Pass, California, July 30, 1987. The flight data were reduced to reflectance using an empirical algorithm which compensates for solar, atmospheric and instrument factors. AVIRIS data in conjunction with surface and atmospheric measurements acquired concurrently were used to develop an improved spectral calibration. An accurate in-flight radiometric calibration was also performed using the LOWTRAN 7 radiative transfer code together with measured surface reflectance and atmospheric optical depths. A direct comparison with coincident Thematic Mapper imagery of Mountain Pass was used to demonstrate the high spatial resolution and good geometric performance of AVIRIS. The in-flight instrument noise was independently determined with two methods which showed good agreement. A signal-to-noise ratio was calculated using data from a uniform playa. This ratio was scaled to the AVIRIS reference radiance model, which provided a basis for comparison with laboratory and other in-flight signal-to-noise determinations.
Differential phase-shift keying and channel equalization in free space optical communication system
NASA Astrophysics Data System (ADS)
Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Wan, Xiongfeng; Xu, Chenlu
2018-01-01
We present the performance benefits of differential phase-shift keying (DPSK) modulation in mitigating the influence of atmospheric turbulence, especially for coherent free space optical (FSO) communication at high communication rates. An analytic expression for the detected signal is derived, from which the homodyne detection efficiency is calculated to indicate the performance of wavefront compensation. Because laser pulses also suffer from atmospheric scattering by clouds, intersymbol interference (ISI) in the high-speed FSO communication link is analyzed. Correspondingly, a channel equalization method, a binormalized modified constant modulus algorithm based on set-membership filtering (SM-BNMCMA), is proposed to solve the ISI problem. Finally, through comparison with existing channel equalization methods, its benefits in both ISI elimination and convergence speed are verified. The research findings have theoretical significance for high-speed FSO communication systems.
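As context for the SM-BNMCMA equalizer, the baseline constant modulus algorithm (CMA) tap update it builds on can be sketched as follows; the tap length, step size, and target modulus are illustrative, and this plain-gradient form omits the set-membership and binormalization refinements of the paper:

```python
import numpy as np

def cma_equalize(x, num_taps=5, mu=1e-3, radius=1.0):
    """Baseline constant modulus algorithm: adapt FIR equalizer taps so
    the output modulus approaches a constant target radius, using the
    stochastic-gradient error term e = y * (R - |y|^2)."""
    w = np.zeros(num_taps, dtype=complex)
    w[num_taps // 2] = 1.0                  # centre-spike initialisation
    y_out = []
    for k in range(num_taps, len(x)):
        xk = x[k - num_taps:k][::-1]        # regressor, most recent first
        y = np.dot(w, xk)
        e = y * (radius - np.abs(y) ** 2)   # CMA error term
        w = w + mu * e * np.conj(xk)
        y_out.append(y)
    return np.array(y_out), w
```

Set-membership variants update the taps only when the error leaves a bound, which is the source of the convergence-speed gains the abstract reports.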
NASA Astrophysics Data System (ADS)
Kato, Seiji; Loeb, Norman G.; Minnis, Patrick; Francis, Jennifer A.; Charlock, Thomas P.; Rutan, David A.; Clothiaux, Eugene E.; Sun-Mack, Szedung
2006-10-01
The daytime cloud fraction derived by the Clouds and the Earth's Radiant Energy System (CERES) cloud algorithm using Moderate Resolution Imaging Spectroradiometer (MODIS) radiances over the Arctic from March 2000 through February 2004 increases at a rate of 0.047 per decade. The trend is significant at an 80% confidence level. The corresponding top-of-atmosphere (TOA) shortwave irradiances derived from CERES radiance measurements show a less significant trend during this period. These results suggest that the influence of reduced Arctic sea ice cover on TOA reflected shortwave radiation is reduced by the presence of clouds and possibly compensated by the increase in cloud cover. The cloud fraction and TOA reflected shortwave irradiance over the Antarctic show no significant trend during the same period.
NASA Astrophysics Data System (ADS)
Li, Shuhui; Chen, Shi; Gao, Chunqing; Willner, Alan E.; Wang, Jian
2018-02-01
Orbital angular momentum (OAM)-carrying beams have recently generated considerable interest due to their potential use in communication systems to increase transmission capacity and spectral efficiency. For OAM-based free-space optical (FSO) links, a critical challenge is the atmospheric turbulence that will distort the helical wavefronts of OAM beams leading to the decrease of received power, introducing crosstalk between multiple channels, and impairing link performance. In this paper, we review recent advances in turbulence effects compensation techniques for OAM-based FSO communication links. First, basic concepts of atmospheric turbulence and theoretical model are introduced. Second, atmospheric turbulence effects on OAM beams are theoretically and experimentally investigated and discussed. Then, several typical turbulence compensation approaches, including both adaptive optics-based (optical domain) and signal processing-based (electrical domain) techniques, are presented. Finally, key challenges and perspectives of compensation of turbulence-distorted OAM links are discussed.
Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis
NASA Technical Reports Server (NTRS)
Whorton, M.; Buschek, H.; Calise, A. J.
1994-01-01
A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.
An unstructured-mesh finite-volume MPDATA for compressible atmospheric dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kühnlein, Christian, E-mail: christian.kuehnlein@ecmwf.int; Smolarkiewicz, Piotr K., E-mail: piotr.smolarkiewicz@ecmwf.int
An advancement of the unstructured-mesh finite-volume MPDATA (Multidimensional Positive Definite Advection Transport Algorithm) is presented that formulates the error-compensative pseudo-velocity of the scheme to rely only on face-normal advective fluxes to the dual cells, in contrast to the full vector employed in previous implementations. This is essentially achieved by expressing the temporal truncation error underlying the pseudo-velocity in a form consistent with the flux-divergence of the governing conservation law. The development is especially important for integrating fluid dynamics equations on non-rectilinear meshes whenever face-normal advective mass fluxes are employed for transport compatible with mass continuity—the latter being essential for flux-form schemes. In particular, the proposed formulation enables large-time-step semi-implicit finite-volume integration of the compressible Euler equations using MPDATA on arbitrary hybrid computational meshes. Furthermore, it facilitates multiple error-compensative iterations of the finite-volume MPDATA and improved overall accuracy. The advancement combines straightforwardly with earlier developments, such as the nonoscillatory option, the infinite-gauge variant, and moving curvilinear meshes. A comprehensive description of the scheme is provided for a hybrid horizontally-unstructured vertically-structured computational mesh for efficient global atmospheric flow modelling. The proposed finite-volume MPDATA is verified using selected 3D global atmospheric benchmark simulations, representative of hydrostatic and non-hydrostatic flow regimes. Besides the added capabilities, the scheme retains fully the efficacy of established finite-volume MPDATA formulations.
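The pseudo-velocity construction is easiest to see in one dimension. Below is a minimal sketch of second-order MPDATA on a periodic 1D grid (my own simplified discretization, not the paper's unstructured-mesh finite-volume formulation): a donor-cell upwind pass, followed by a corrective upwind pass whose face "velocity" is built from the leading truncation error of the first.

```python
import numpy as np

def upwind_flux(psi, c):
    """Donor-cell flux through face i+1/2 given the Courant number c at that face."""
    return np.maximum(c, 0.0) * psi + np.minimum(c, 0.0) * np.roll(psi, -1)

def mpdata_step(psi, c, iord=2, eps=1e-15):
    """One MPDATA step on a periodic grid; c = u*dt/dx at faces i+1/2.
    iord=1 is plain upwind; each extra pass advects with an error-compensative
    pseudo-velocity derived from the previous pass's truncation error."""
    c = np.full_like(psi, c, dtype=float) if np.isscalar(c) else c
    for _ in range(iord):
        f = upwind_flux(psi, c)
        psi = psi - (f - np.roll(f, 1))            # flux-form (conservative) update
        psi_r = np.roll(psi, -1)
        # antidiffusive pseudo-velocity built from the leading truncation error
        c = (np.abs(c) - c ** 2) * (psi_r - psi) / (psi_r + psi + eps)
    return psi
```

Because the update is in flux form, mass is conserved to roundoff, and for Courant numbers below one the scheme preserves positivity of the transported field.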
Lachinova, Svetlana L; Vorontsov, Mikhail A
2008-08-01
We analyze the potential efficiency of laser beam projection onto a remote object in atmosphere with incoherent and coherent phase-locked conformal-beam director systems composed of an adaptive array of fiber collimators. Adaptive optics compensation of turbulence-induced phase aberrations in these systems is performed at each fiber collimator. Our analysis is based on a derived expression for the atmospheric-averaged value of the mean square residual phase error as well as direct numerical simulations. Operation of both conformal-beam projection systems is compared for various adaptive system configurations characterized by the number of fiber collimators, the adaptive compensation resolution, and atmospheric turbulence conditions.
An Analytical Framework for the Steady State Impact of Carbonate Compensation on Atmospheric CO2
NASA Astrophysics Data System (ADS)
Omta, Anne Willem; Ferrari, Raffaele; McGee, David
2018-04-01
The deep-ocean carbonate ion concentration impacts the fraction of the marine calcium carbonate production that is buried in sediments. This gives rise to the carbonate compensation feedback, which is thought to restore the deep-ocean carbonate ion concentration on multimillennial timescales. We formulate an analytical framework to investigate the impact of carbonate compensation under various changes in the carbon cycle relevant for anthropogenic change and glacial cycles. Using this framework, we show that carbonate compensation amplifies by 15-20% the changes in atmospheric CO2 resulting from a redistribution of carbon between the atmosphere and ocean (e.g., due to changes in temperature, salinity, or nutrient utilization). A counterintuitive result emerges when the impact of organic matter burial in the ocean is examined. Organic matter burial first leads to a slight decrease in atmospheric CO2 and an increase in the deep-ocean carbonate ion concentration. Subsequently, enhanced calcium carbonate burial leads to outgassing of carbon from the ocean to the atmosphere, which is quantified by our framework. Results from simulations with a multibox model including the minor acids and bases important for the ocean-atmosphere exchange of carbon are consistent with our analytical predictions. We discuss the potential role of carbonate compensation in glacial-interglacial cycles as an example of how our theoretical framework may be applied.
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
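The Poisson maximum-likelihood principle underlying such restoration is classically realized by the Richardson-Lucy iteration; the sketch below extends it to multiple frames by averaging the multiplicative updates (1D, circular convolution, no regularization or frame selection — only the core idea the paper builds on, not the authors' algorithm).

```python
import numpy as np

def cconv(a, h):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(h)))

def ccorr(a, h):
    """Circular correlation (the adjoint of cconv)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(h))))

def multiframe_rl(frames, psfs, n_iter=100, eps=1e-12):
    """Multi-frame Richardson-Lucy: ML estimate under a Poisson noise model,
    with the multiplicative update averaged over the frames."""
    est = np.full_like(frames[0], float(frames[0].mean()))
    for _ in range(n_iter):
        ratio = np.zeros_like(est)
        for y, h in zip(frames, psfs):
            ratio += ccorr(y / (cconv(est, h) + eps), h)
        est = est * ratio / len(frames)        # nonnegativity is preserved
    return est
```

The multiplicative form keeps the estimate nonnegative, which is one reason the Poisson ML formulation suits photon-limited AO imagery.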
NASA Astrophysics Data System (ADS)
Wu, Kaizhi; Zhang, Xuming; Chen, Guangxie; Weng, Fei; Ding, Mingyue
2013-10-01
Images acquired during free breathing using contrast-enhanced ultrasound exhibit a periodic motion that must be compensated for if accurate quantification of hepatic perfusion is to be performed. In this work, we present an algorithm that compensates for the respiratory motion by effectively combining principal component analysis (PCA) and block matching. The respiratory kinetics of the ultrasound hepatic perfusion image sequences were first extracted using the PCA method. Then, the optimal phase of the obtained respiratory kinetics was detected after normalizing the motion amplitude and determining the image subsequences of the original image sequences. The image subsequences were registered by the block matching method using cross-correlation as the similarity measure. Finally, the motion-compensated contrast images were acquired by using the position mapping, and the algorithm was evaluated by comparing the TICs (time-intensity curves) extracted from the original image sequences and the compensated image subsequences. Quantitative comparisons demonstrated that the average fitting error estimated over ROIs (regions of interest) was reduced from 10.9278 +/- 6.2756 to 5.1644 +/- 3.3431 after compensation.
Compensator improvement for multivariable control systems
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.; Gresham, L. L.
1977-01-01
A theory and the associated numerical technique are developed for an iterative design improvement of the compensation for linear, time-invariant control systems with multiple inputs and multiple outputs. A strict constraint algorithm is used in obtaining a solution of the specified constraints of the control design. The result of the research effort is the multiple input, multiple output Compensator Improvement Program (CIP). The objective of the Compensator Improvement Program is to modify in an iterative manner the free parameters of the dynamic compensation matrix so that the system satisfies frequency domain specifications. In this exposition, the underlying principles of the multivariable CIP algorithm are presented and the practical utility of the program is illustrated with space vehicle related examples.
Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation
Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu
2015-01-01
To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401
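To illustrate how multiple-point temperature gradients enter the compensation model, the sketch below uses pairwise gradients as regression features, with plain ridge regression standing in for the paper's PSO-tuned SVM (a deliberate simplification; all function names are mine).

```python
import numpy as np

def gradient_features(temps):
    """temps: (N, P) temperatures at P block locations ->
    feature matrix of the raw temperatures plus all pairwise gradients."""
    P = temps.shape[1]
    i, j = np.triu_indices(P, k=1)
    return np.hstack([temps, temps[:, i] - temps[:, j]])

def fit_bias_model(temps, bias, lam=1e-3):
    """Ridge regression from temperature-gradient features to gyro bias."""
    X = np.hstack([gradient_features(temps), np.ones((temps.shape[0], 1))])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ bias)

def predict_bias(w, temps):
    X = np.hstack([gradient_features(temps), np.ones((temps.shape[0], 1))])
    return X @ w
```

Subtracting the predicted bias from the gyro output is then the compensation step; the paper's SVM plays the same role with a nonlinear hypothesis class.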
Wang, Lingling; Fu, Li
2018-01-01
In order to decrease the velocity sculling error in vibration environments, a new sculling error compensation algorithm for strapdown inertial navigation systems (SINS), using angular rate and specific force measurements as inputs, is proposed in this paper. First, the sculling error formula in the incremental velocity update is analytically derived in terms of the angular rate and specific force. Next, two-time scale perturbation models of the angular rate and specific force are constructed. The new sculling correction term is derived, and a gravitational search optimization method is used to determine the parameters in the two-time scale perturbation models. Finally, the performance of the proposed algorithm is evaluated in a stochastic real sculling environment, unlike conventional algorithms, which are simulated in a pure sculling environment. A series of test results demonstrate that the new sculling compensation algorithm can achieve balanced real/pseudo sculling correction performance during the velocity update, with the advantage of a lower computational load than conventional algorithms. PMID:29346323
Inverting Monotonic Nonlinearities by Entropy Maximization
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either on a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal to noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261
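The Gaussianization idea that MaxEnt generalizes fits in a few lines: rank-map the distorted observation onto Gaussian quantiles, which blindly undoes any monotonic distortion up to an affine ambiguity (a toy numpy-only sketch, not the paper's polynomial or neural-network MaxEnt parameterization):

```python
import numpy as np

def gaussianize(z, rng):
    """Blind inversion of a monotonic distortion by rank-matching the
    observation to sampled Gaussian quantiles (numpy-only stand-in for
    the exact normal quantile function)."""
    ranks = np.argsort(np.argsort(z))              # rank of each sample
    ref = np.sort(rng.standard_normal(len(z)))     # empirical Gaussian quantiles
    return ref[ranks]

rng = np.random.default_rng(0)
s = rng.standard_normal(5000) + rng.standard_normal(5000)   # sum of two sources
z = np.tanh(0.8 * s)                                        # monotonic distortion
s_hat = gaussianize(z, rng)                                 # ~ s up to scale
```

Because a monotonic map preserves ranks, the recovered signal is an affine-equivalent copy of the undistorted mixture, which a linear separation stage can then process.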
Modeling Self-Referencing Interferometers with Extended Beacons and Strong Turbulence
2011-09-01
identified then typically compensated. These results not only serve to address problems when using adaptive optics to correct for strong turbulence ... compensating for distortions due to atmospheric turbulence with adaptive optics (AO) [70, 84]. AO typically compensates for atmospheric distortions ... used in Chapter VII to discuss how strong atmospheric turbulence and extended beacons affect the performance of an SRI. Additionally, it enumerates the
NASA Technical Reports Server (NTRS)
Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.
2003-01-01
A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms: the NASA adaptive algorithm and a new optimal control based algorithm. The study also included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports the analysis of the experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore, only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gust, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.
A hybrid method for synthetic aperture ladar phase-error compensation
NASA Astrophysics Data System (ADS)
Hua, Zhili; Li, Hongping; Gu, Yongjian
2009-07-01
As a high resolution imaging sensor, synthetic aperture ladar produces data containing phase errors whose sources include uncompensated platform motion and atmospheric turbulence distortion. Two previously devised methods, the rank one phase-error estimation (ROPE) algorithm and iterative blind deconvolution (IBD), are reexamined, and from them a hybrid method is built that can recover both the images and the PSFs without any a priori information on the PSF, speeding up convergence through the choice of initialization. Integrated into a spotlight-mode SAL imaging model, all three methods effectively reduce the phase-error distortion. For each approach, the signal to noise ratio, root mean square error, and CPU time are computed, from which we can see that the convergence rate of the hybrid method is improved because of a more efficient initialization of the blind deconvolution. Moreover, in a further discussion of the hybrid method, the weight distribution between ROPE and IBD is found to be an important factor affecting the final result of the whole compensation process.
Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li
2009-09-28
A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula describing the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived based on atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful for applying DLCWFCs to atmospheric turbulence correction in large-aperture telescopes.
Dikbas, Salih; Altunbasak, Yucel
2013-08-01
In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by blending the forward and backward predictions. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and the smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
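An explicit smoothness constraint on block matching can be sketched as a SAD cost plus a penalty on deviation from a causal neighbor's motion vector (the simple predictor and parameter values are illustrative, not the paper's formulation):

```python
import numpy as np

def block_match(prev, curr, block=8, search=4, lam=0.5):
    """Block matching with an explicit smoothness term: SAD cost plus
    lam * L1 distance of the candidate MV from a causal neighbor's MV."""
    H, W = prev.shape
    mvs = np.zeros((H // block, W // block, 2), int)
    for bi in range(H // block):
        for bj in range(W // block):
            y, x = bi * block, bj * block
            ref = curr[y:y + block, x:x + block]
            # causal predictor: MV of the block above, else the one to the left
            pred = mvs[bi - 1, bj] if bi else mvs[bi, max(bj - 1, 0)]
            best, best_cost = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue
                    cand = prev[yy:yy + block, xx:xx + block]
                    cost = (np.abs(cand - ref).sum()
                            + lam * (abs(dy - pred[0]) + abs(dx - pred[1])))
                    if cost < best_cost:
                        best_cost, best = cost, (dy, dx)
            mvs[bi, bj] = best
    return mvs
```

The penalty biases each block toward its neighbors' motion, which is what pushes the estimate toward the true projected motion rather than the minimum-residual MV.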
Linear phase conjugation for atmospheric aberration compensation
NASA Astrophysics Data System (ADS)
Grasso, Robert J.; Stappaerts, Eddy A.
1998-01-01
Atmospherically induced aberrations can seriously degrade laser performance, greatly affecting the beam that finally reaches the target. Lasers propagated over any distance in the atmosphere suffer a significant decrease in fluence at the target due to these aberrations, especially over long propagation paths. This is due primarily to fluctuations in the atmosphere over the propagation path and to platform motion relative to the intended aimpoint. In addition, delivery of high fluence to the target typically requires low beam divergence; thus, atmospheric turbulence, platform motion, or both result in a lack of fine aimpoint control to keep the beam directed at the target. To improve both the beam quality and the amount of laser energy delivered to the target, Northrop Grumman has developed the Active Tracking System (ATS), a novel linear phase conjugation aberration compensation technique. Utilizing a silicon spatial light modulator (SLM) as a dynamic wavefront reversing element, ATS undoes aberrations induced by the atmosphere, platform motion, or both. ATS continually tracks the target while compensating for atmospheric and platform motion induced aberrations. The result is a high fidelity, near-diffraction-limited beam delivered to the target.
NASA Astrophysics Data System (ADS)
Yan, Liang; Zhang, Lu; Zhu, Bo; Zhang, Jingying; Jiao, Zongxia
2017-10-01
Permanent magnet spherical actuator (PMSA) is a multivariable, inter-axis coupled nonlinear system, which unavoidably compromises its motion control implementation. Uncertainties such as external load, ball-bearing friction torque, and manufacturing errors also influence motion performance significantly. Therefore, the objective of this paper is to propose a controller based on a single neural adaptive (SNA) algorithm and a neural network (NN) identifier optimized with a particle swarm optimization (PSO) algorithm to improve the motion stability of PMSA with three-dimensional magnet arrays. The dynamic model and computed torque model are formulated for the spherical actuator, and a dynamic decoupling control algorithm is developed. By utilizing the global-optimization property of the PSO algorithm, the NN identifier is trained to avoid locally optimal solutions and achieve high-precision compensation of uncertainties. The employment of the SNA controller helps to reduce the effect of compensation errors and render the system stable, even if there is a difference between the compensations and uncertainties due to external disturbances. A simulation model is established, and experiments are conducted on the research prototype to validate the proposed control algorithm. The amplitude of the parameter perturbation is set to 5%, 10%, and 15%, respectively. The strong robustness of the proposed hybrid algorithm is validated by extensive simulation data, which show that the proposed algorithm can effectively compensate for the influence of uncertainties and eliminate the effect of inter-axis couplings of the spherical actuator.
Yoon, Jai-Woong; Sawant, Amit; Suh, Yelin; Cho, Byung-Chul; Suh, Tae-Suk; Keall, Paul
2011-07-01
In dynamic multileaf collimator (MLC) motion tracking with complex intensity-modulated radiation therapy (IMRT) fields, target motion perpendicular to the MLC leaf travel direction can cause beam holds, which increase beam delivery time by up to a factor of 4. As a means to balance delivery efficiency and accuracy, a moving average algorithm was incorporated into a dynamic MLC motion tracking system (i.e., moving average tracking) to account for target motion perpendicular to the MLC leaf travel direction. The experimental investigation of the moving average algorithm compared with real-time tracking and no-compensation beam delivery is described. The properties of the moving average algorithm were measured and compared with those of real-time tracking (dynamic MLC motion tracking accounting for target motion both parallel and perpendicular to the leaf travel direction) and no-compensation beam delivery. The algorithm was investigated using a synthetic motion trace with a baseline drift and four patient-measured 3D tumor motion traces representing regular and irregular motions with varying baseline drifts. Each motion trace was reproduced by a moving platform. The delivery efficiency, geometric accuracy, and dosimetric accuracy were evaluated for conformal, step-and-shoot IMRT, and dynamic sliding window IMRT treatment plans using the synthetic and patient motion traces. The dosimetric accuracy was quantified via a gamma-test with a 3%/3 mm criterion. The delivery efficiency ranged from 89 to 100% for moving average tracking, 26 to 100% for real-time tracking, and 100% (by definition) for no compensation. The root-mean-square geometric error ranged from 3.2 to 4.0 mm for moving average tracking, 0.7 to 1.1 mm for real-time tracking, and 3.7 to 7.2 mm for no compensation. The percentage of dosimetric points failing the gamma-test ranged from 4 to 30% for moving average tracking, 0 to 23% for real-time tracking, and 10 to 47% for no compensation.
The delivery efficiency of moving average tracking was up to four times higher than that of real-time tracking and approached the efficiency of no compensation for all cases. The geometric and dosimetric accuracy of the moving average algorithm fell between those of real-time tracking and no compensation, with approximately half the percentage of dosimetric points failing the gamma-test compared with no compensation.
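The smoothing at the heart of moving average tracking can be sketched as a causal window mean applied to the perpendicular component of the target trace (illustrative only; the window length and how the MLC consumes the smoothed positions are set by the tracking system, not this sketch):

```python
import numpy as np

def moving_average_positions(traj, window):
    """Causal moving average of a target position trace: the leaves follow
    the smoothed perpendicular motion instead of the raw motion, trading
    geometric accuracy for fewer beam holds."""
    out = np.empty_like(traj, dtype=float)
    for k in range(len(traj)):
        out[k] = traj[max(0, k - window + 1):k + 1].mean()
    return out
```

Smoothing attenuates the high-frequency breathing excursions (fewer beam holds) while still following baseline drift, which matches the intermediate accuracy reported above.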
An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation
Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie
2014-01-01
In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of the data and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition into N subparts and reduces the training time to 1/N and the memory cost to 1/N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical estimation of the number of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a device designed for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to test this novel algorithm. PMID:25232912
Extremum Seeking Control of Smart Inverters for VAR Compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Daniel; Negrete-Pincetic, Matias; Stewart, Emma
2015-09-04
Reactive power compensation is used by utilities to ensure customer voltages are within pre-defined tolerances and reduce system resistive losses. While much attention has been paid to model-based control algorithms for reactive power support and Volt Var Optimization (VVO), these strategies typically require relatively large communications capabilities and accurate models. In this work, a non-model-based control strategy for smart inverters is considered for VAR compensation. An Extremum Seeking control algorithm is applied to modulate the reactive power output of inverters based on real power information from the feeder substation, without an explicit feeder model. Simulation results using utility demand information confirm the ability of the control algorithm to inject VARs to minimize feeder head real power consumption. In addition, we show that the algorithm is capable of improving feeder voltage profiles and reducing reactive power supplied by the distribution substation.
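The perturb-and-demodulate principle can be sketched directly: dither the inverter's VAR setpoint sinusoidally, correlate the measured feeder-head power with the dither to estimate the local gradient, and descend it, with no feeder model (the quadratic objective and all gains below are illustrative, not the paper's feeder simulation):

```python
import numpy as np

def es_step(J, q, a=0.1, k=0.3, N=32):
    """One extremum-seeking iteration: dither the setpoint q over one period,
    demodulate the measured objective J against the dither to estimate dJ/dq,
    then take a descent step."""
    grad = 0.0
    for n in range(N):
        s = np.sin(2 * np.pi * n / N)
        grad += J(q + a * s) * s
    grad *= 2.0 / (a * N)            # correlation -> gradient estimate
    return q - k * grad

# hypothetical feeder-head real power with a minimum at q_var = -1.5
feeder_power = lambda q: 4.0 + (q + 1.5) ** 2
q = 0.0
for _ in range(50):
    q = es_step(feeder_power, q)     # q converges toward the minimizer
```

For a quadratic objective the demodulated correlation recovers the exact gradient, so the loop behaves like gradient descent driven purely by measurements.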
Branch Point Mitigation of Thermal Blooming Phase Compensation Instability
2011-03-01
Turbulence ... 2.5 High Energy Laser Beam Phase Compensation using Adaptive Optics ... that scintillates the HEL beam irradiance. Atmospheric advection causes turbulent eddies to travel across the HEL beam, distorting the target ... with multiple atmospheric effects including extinction, thermal blooming, and optical turbulence. Using the BPM provides both speed and accuracy and
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability, and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function (RBF) neural network, particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM), and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
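KELM training itself (the part whose regularization parameter C and kernel parameter the CSA/Nelder-Mead search tunes) reduces to a closed-form regularized kernel solve; a compact sketch with an RBF kernel on synthetic data follows (parameter values and the toy calibration function are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine:
    beta = (K + I/C)^{-1} y, prediction = K(x, X) @ beta."""
    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma
    def fit(self, X, y):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
        return self
    def predict(self, X):
        return rbf_kernel(X, self.X, self.gamma) @ self.beta
```

Because training is a single linear solve, the outer CSA/simplex search only has to evaluate candidate (C, gamma) pairs, which is what makes KELM attractive for sensor compensation.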
NASA Astrophysics Data System (ADS)
Zhang, Qun; Yang, Yanfu; Xiang, Qian; Zhou, Zhongqing; Yao, Yong
2018-02-01
A joint compensation scheme based on a cascaded Kalman filter is proposed, which can implement polarization tracking, channel equalization, frequency offset compensation, and phase noise compensation simultaneously. The experimental results show that the proposed algorithm not only compensates multiple channel impairments simultaneously but also improves the polarization tracking capacity and accelerates convergence. The scheme converges up to eight times faster than the radius-directed equalizer (RDE) + Max-FFT (maximum fast Fourier transform) + BPS (blind phase search) scheme, and can track polarization rotation 60 times and 15 times faster than RDE + Max-FFT + BPS and CMMA (cascaded multimodulus algorithm) + Max-FFT + BPS, respectively.
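As an illustration of the building block used above, here is a single-stage scalar Kalman filter tracking phase noise alone; the paper cascades several such stages with equalization and polarization tracking, and the noise variances below are invented for the demo, not taken from the experiment.

```python
import numpy as np

def kalman_phase_track(z, q=1e-4, r=1e-2):
    """Track a slowly drifting carrier phase with a 1-D Kalman filter.

    z : noisy phase measurements (rad); q, r : process / measurement variance.
    """
    x, p = z[0], 1.0                 # initial state estimate and covariance
    est = np.empty_like(z)
    for k, zk in enumerate(z):
        p = p + q                    # predict (random-walk phase model)
        g = p / (p + r)              # Kalman gain
        x = x + g * (zk - x)         # update with the innovation
        p = (1 - g) * p
        est[k] = x
    return est

rng = np.random.default_rng(1)
true_phase = np.cumsum(rng.normal(0, 0.01, 2000))   # random-walk phase drift
meas = true_phase + rng.normal(0, 0.1, 2000)        # noisy observations
est = kalman_phase_track(meas, q=1e-4, r=1e-2)
```

The filtered phase error is substantially smaller than the raw measurement error, which is the property the cascaded scheme exploits stage by stage.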
NASA Astrophysics Data System (ADS)
Zhao, Runchen; Ientilucci, Emmett J.
2017-05-01
Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Spectral remote sensing systems can be used to identify targets, for example, without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model and a processing model. Instead of using mean surface reflectance only as input to the model, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature/emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.
Pengpen, T; Soleimani, M
2015-06-13
Cone beam computed tomography (CBCT) is an imaging modality that has been used in image-guided radiation therapy (IGRT). For applications such as lung radiation therapy, CBCT images are greatly affected by motion artefacts, mainly due to the low temporal resolution of CBCT. Recently, a dual modality of electrical impedance tomography (EIT) and CBCT has been proposed, in which the high temporal resolution EIT imaging system provides motion data to a motion-compensated algebraic reconstruction technique (ART)-based CBCT reconstruction software. The high computational time associated with ART, and indeed other variations of ART, makes it less practical for real applications. This paper develops a motion-compensated conjugate gradient least-squares (CGLS) algorithm for CBCT. A motion-compensated CGLS offers several advantages over ART-based methods, including possibilities for explicit regularization, rapid convergence and parallel computation. This paper demonstrates, for the first time, motion-compensated CBCT reconstruction using CGLS, and reconstruction results are shown for limited-data CBCT considering only a quarter of the full dataset. The proposed algorithm is tested using simulated motion data in generic motion-compensated CBCT as well as measured EIT data in dual EIT-CBCT imaging. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
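CGLS itself is compact and matrix-free, needing only products with the system matrix and its transpose, which is part of what makes it attractive against ART. A generic sketch on a dense stand-in for the projection operator follows (the CBCT projector and motion model of the paper are not reproduced here).

```python
import numpy as np

def cgls(A, b, n_iter=50):
    """Conjugate gradient least squares for min ||A x - b||_2.

    Matrix-free friendly: only products with A and A.T are required, so A
    could equally be a (motion-compensated) projection operator.
    """
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r                      # normal-equations residual
    p = s.copy()
    norm_s_old = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = norm_s_old / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        norm_s_new = s @ s
        if norm_s_new < 1e-20:       # converged
            break
        p = s + (norm_s_new / norm_s_old) * p
        norm_s_old = norm_s_new
    return x

# Small overdetermined test system standing in for the CBCT projector.
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_rec = cgls(A, b, n_iter=60)
```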
Imaging spectrometer measurement of water vapor in the 400 to 2500 nm spectral region
NASA Technical Reports Server (NTRS)
Green, Robert O.; Roberts, Dar A.; Conel, James E.; Dozier, Jeff
1995-01-01
The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) measures the total upwelling spectral radiance from 400 to 2500 nm sampled at 10 nm intervals. The instrument acquires spectral data at an altitude of 20 km above sea level, as images of 11 by up to 100 km at 17x17 meter spatial sampling. We have developed a nonlinear spectral fitting algorithm coupled with a radiative transfer code to derive the total path water vapor from the spectrum measured for each spatial element in an AVIRIS image. The algorithm compensates for variation in the surface spectral reflectance and atmospheric aerosols. It uses water vapor absorption bands centered at 940 nm, 1040 nm, and 1380 nm. We analyze data sets with water vapor abundances ranging from 1 to 40 precipitable millimeters. In one data set, the total path water vapor varies from 7 to 21 mm over a distance of less than 10 km. We have analyzed a time series of five images acquired at 12 minute intervals; these show spatially heterogeneous changes of advected water vapor of 25 percent over 1 hour. The algorithm determines water vapor for images with a range of ground covers, including bare rock and soil, sparse to dense vegetation, snow and ice, open water, and clouds. The precision of the water vapor determination approaches one percent. However, the precision is sensitive to the absolute abundance and the absorption strength of the atmospheric water vapor band analyzed. We have evaluated the accuracy of the algorithm by comparison with several surface-based determinations of water vapor at the time of the AVIRIS data acquisition. The agreement between the AVIRIS-measured water vapor and the water vapor measured in situ by surface radiometer and surface interferometer is 5 to 10 percent.
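A crude precursor of such retrievals is the continuum-interpolated band ratio (CIBR) at the 940 nm band: the in-band radiance divided by a continuum interpolated from window channels on either side. The sketch below is not the paper's nonlinear fitting algorithm, only a toy illustration on a synthetic spectrum; the window wavelengths and band depth are assumptions.

```python
import numpy as np

def cibr(wl, radiance, band=940.0, left=865.0, right=1040.0):
    """Continuum-interpolated band ratio: in-band radiance divided by a
    linear continuum interpolated between two window channels."""
    L = lambda w: np.interp(w, wl, radiance)
    f = (band - left) / (right - left)         # linear interpolation weight
    continuum = (1 - f) * L(left) + f * L(right)
    return L(band) / continuum

# Toy spectrum: flat continuum with a Gaussian water-vapor band at 940 nm.
wl = np.arange(400.0, 2500.0, 10.0)            # 10 nm sampling, as in AVIRIS
depth = 0.3                                    # assumed fractional band depth
radiance = 10.0 * (1 - depth * np.exp(-0.5 * ((wl - 940.0) / 20.0) ** 2))
ratio = cibr(wl, radiance)                     # ~ 1 - depth for this toy case
```

In a real retrieval, the ratio (or the full band shape) would be inverted to precipitable water via a radiative transfer code such as the one used in the paper.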
NASA Astrophysics Data System (ADS)
Li, Nan; Chu, Xiuxiang; Zhang, Pengfei; Feng, Xiaoxing; Fan, ChengYu; Qiao, Chunhong
2018-01-01
A method that can simultaneously compensate for the distorted orbital angular momentum and the distorted wavefront of a beam in atmospheric turbulence has been proposed. To confirm the validity of the method, an experimental setup for up-link propagation of a vortex beam in a turbulent atmosphere has been simulated. Simulation results show that both the distorted orbital angular momentum and the distorted wavefront of a beam due to turbulence can be compensated by an adaptive optics system with the help of a cooperative beacon at the satellite. However, when the number of lenslets of the wavefront sensor (WFS) and actuators of the deformable mirror (DM) is small, satisfactory results cannot be obtained.
Further development of the dynamic gas temperature measurement system. Volume 1: Technical efforts
NASA Technical Reports Server (NTRS)
Elmore, D. L.; Robinson, W. W.; Watkins, W. B.
1986-01-01
A compensated dynamic gas temperature thermocouple measurement method was experimentally verified. Dynamic gas temperature signals from a flow passing through a chopped-wheel signal generator and an atmospheric pressure laboratory burner were measured by the dynamic temperature sensor and other fast-response sensors. Compensated data from dynamic temperature sensor thermoelements were compared with fast-response sensors. Results from the two experiments are presented as time-dependent waveforms and spectral plots. Comparisons between compensated dynamic temperature sensor spectra and a commercially available optical fiber thermometer compensated spectra were made for the atmospheric burner experiment. Increases in precision of the measurement method require optimization of several factors, and directions for further work are identified.
The Sensitivity of SeaWiFS Ocean Color Retrievals to Aerosol Amount and Type
NASA Technical Reports Server (NTRS)
Kahn, Ralph A.; Sayer, Andrew M.; Ahmad, Ziauddin; Franz, Bryan A.
2016-01-01
As atmospheric reflectance dominates top-of-the-atmosphere radiance over ocean, atmospheric correction is a critical component of ocean color retrievals. This paper explores the operational Sea-viewing Wide Field-of-View Sensor (SeaWiFS) algorithm atmospheric correction with approximately 13,000 coincident surface-based aerosol measurements. Aerosol optical depth at 440 nm (AOD(sub 440)) is overestimated for AOD below approximately 0.1-0.15 and is increasingly underestimated at higher AOD; also, single-scattering albedo (SSA) appears overestimated when the actual value is less than approximately 0.96. AOD(sub 440) and its spectral slope tend to be overestimated preferentially for coarse-mode particles. Sensitivity analysis shows that changes in these factors lead to systematic differences in derived ocean water-leaving reflectance (Rrs) at 440 nm. The standard SeaWiFS algorithm compensates for AOD anomalies in the presence of nonabsorbing, medium-size-dominated aerosols. However, at low AOD and with absorbing aerosols, in situ observations and previous case studies demonstrate that retrieved Rrs is sensitive to spectral AOD and possibly also SSA anomalies. Stratifying the dataset by aerosol-type proxies shows the dependence of the AOD anomaly and resulting Rrs patterns on aerosol type, though the correlation with the SSA anomaly is too subtle to be quantified with these data. Retrieved chlorophyll-a concentrations (Chl) are affected in a complex way by Rrs differences, and these effects occur preferentially at high and low Chl values. Absorbing aerosol effects are likely to be most important over biologically productive waters near coasts and along major aerosol transport pathways. These results suggest that future ocean color spacecraft missions aiming to cover the range of naturally occurring and anthropogenic aerosols, especially at wavelengths shorter than 440 nm, will require better aerosol amount and type constraints.
Optical-beam wavefront control based on the atmospheric backscatter signal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banakh, V A; Razenkov, I A; Rostov, A P
2015-02-28
The feasibility of compensating for aberrations of the optical-beam initial wavefront by aperture sounding, based on the atmospheric backscatter signal from an additional laser source with a different wavelength, is experimentally studied. It is shown that an adaptive system based on this principle makes it possible to compensate for distortions of the initial beam wavefront on a near-surface atmospheric path. Specifically, the beam divergence decreases, while the level of the detected mean backscatter power from the additional laser source increases.
Density implications of shift compensation postprocessing in holographic storage systems
NASA Astrophysics Data System (ADS)
Menetrier, Laure; Burr, Geoffrey W.
2003-02-01
We investigate the effect of data page misregistration, and its subsequent correction in postprocessing, on the storage density of holographic data storage systems. A numerical simulation is used to obtain the bit-error rate as a function of hologram aperture, page misregistration, pixel fill factors, and Gaussian additive intensity noise. Postprocessing of simulated data pages is performed by a nonlinear pixel shift compensation algorithm [Opt. Lett. 26, 542 (2001)]. The performance of this algorithm is analyzed in the presence of noise by determining the achievable areal density. The impact of inaccurate measurements of page misregistration is also investigated. Results show that the shift-compensation algorithm can provide almost complete immunity to page misregistration, although at some penalty to the baseline areal density offered by a system with zero tolerance to misalignment.
Optimal line drop compensation parameters under multi-operating conditions
NASA Astrophysics Data System (ADS)
Wan, Yuan; Li, Hang; Wang, Kai; He, Zhe
2017-01-01
Line Drop Compensation (LDC) is a main function of Reactive Current Compensation (RCC), which is developed to improve voltage stability. While LDC benefits voltage regulation, it may degrade the small-disturbance rotor angle stability of the power system. In this paper, an intelligent algorithm combining a Genetic Algorithm (GA) and a Backpropagation Neural Network (BPNN) is proposed to optimize the parameters of LDC. The proposed objective function takes into consideration the voltage deviation and the minimal damping ratio of power system oscillation under multiple operating conditions. A simulation based on the power system of the middle area of Jiangxi province is used to demonstrate the intelligent algorithm. The optimization result shows that the coordinately optimized parameters can meet the multi-operating-condition requirements and improve voltage stability as much as possible while guaranteeing a sufficient damping ratio.
A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System
Zhou, Guanwu; Zhao, Yulong; Guo, Fangfang; Xu, Wenju
2014-01-01
Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed on LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU) after calibration in the computer. Practical pressure measurement experiments are carried out to verify the system's performance. The temperature compensation is solved in the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM acquires higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13%FS, 0.15%FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirement of the sensor. PMID:25006998
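The ELM calibration step used in the system above can be sketched directly: a random hidden layer followed by an analytic least-squares solve for the output weights, which is why training is fast enough for batch compensation. The drift model and data below are synthetic stand-ins for the pressure calibration set, not the paper's measurements.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, rng=None):
    """Extreme learning machine: random hidden layer, least-squares output."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random weights
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # random feature map
    beta = np.linalg.lstsq(H, y, rcond=None)[0]      # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy calibration: raw sensor output drifts nonlinearly with temperature;
# learn the map (raw reading, temperature) -> true pressure.
rng = np.random.default_rng(3)
raw = rng.uniform(0, 1, 400)
T = rng.uniform(-40, 85, 400) / 85.0                 # normalized temperature
true_p = raw + 0.05 * np.sin(4 * T) * raw            # hypothetical drift model
X = np.column_stack([raw, T])
model = elm_train(X[:300], true_p[:300], n_hidden=40, rng=rng)
pred = elm_predict(model, X[300:])
```

Because only `beta` is learned, the trained model is small enough to port to an MCU, as done in the paper.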
An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning
Deng, Zhongliang
2018-01-01
Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization. PMID:29361718
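The core idea of body-shadowing compensation, adding the estimated body loss back into the RSS before inverting the path-loss model, can be sketched as follows. The log-distance path-loss constants and the 8 dB body loss are illustrative assumptions, not values from the paper, and the IMU detection step is reduced to a boolean flag.

```python
import math

def rss_to_distance(rss, p0=-45.0, n=2.5, body_blocked=False, body_loss=8.0):
    """Invert a log-distance path-loss model; when an IMU heuristic flags the
    body between user and beacon, add the shadowing loss back first.

    All constants (p0 dBm at 1 m, exponent n, 8 dB body loss) are
    illustrative assumptions.
    """
    if body_blocked:
        rss += body_loss                     # undo the body attenuation
    return 10 ** ((p0 - rss) / (10 * n))     # invert rss = p0 - 10 n log10(d)

# A beacon 4 m away: model-consistent reading vs. body-shadowed reading.
rss_clear = -45.0 - 25 * math.log10(4.0)
rss_shadowed = rss_clear - 8.0               # body adds ~8 dB of loss
d_naive = rss_to_distance(rss_shadowed)                      # overestimates
d_comp = rss_to_distance(rss_shadowed, body_blocked=True)    # recovers ~4 m
```

The ranging bias removed here is what propagates into the positioning error that the BP-BEC algorithm compensates.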
Zheng, Hai-ming; Li, Guang-jie; Wu, Hao
2015-06-01
Differential optical absorption spectroscopy (DOAS) is a commonly used atmospheric pollution monitoring method. Denoising the monitored spectral data improves inversion accuracy. Fourier transform filtering can effectively remove noise from the spectral data, but the algorithm itself introduces errors. In this paper, a chirp-z transform method is put forward. By locally refining the Fourier transform spectrum, it retains the denoising effect of the Fourier transform while compensating for the algorithm's error, further improving inversion accuracy. The paper studies the concentration retrieval of SO2 and NO2. The results show that simple division causes larger errors and is not very stable, and that the chirp-z transform is more accurate than the Fourier transform. Frequency spectrum analysis shows that the Fourier transform cannot resolve the distortion and weakening of the characteristic absorption spectrum, whereas the chirp-z transform can finely reconstruct specific regions of the frequency spectrum.
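The chirp-z transform underlying the method can be implemented with Bluestein's algorithm in a few lines; it evaluates the z-transform on a spiral of points, letting one zoom into a narrow band of the spectrum (the "local refining" used above). This generic sketch is not the authors' code; with default parameters it reduces to the DFT, which also serves as a correctness check.

```python
import numpy as np

def czt(x, m=None, w=None, a=1.0):
    """Chirp z-transform via Bluestein's algorithm: evaluates the
    z-transform at points a * w**(-k), k = 0..m-1."""
    n = len(x)
    m = n if m is None else m
    w = np.exp(-2j * np.pi / m) if w is None else w
    k = np.arange(max(n, m))
    chirp = w ** (k ** 2 / 2.0)
    L = 1 << int(np.ceil(np.log2(n + m - 1)))      # FFT length for the conv.
    y = np.zeros(L, complex)
    y[:n] = x * a ** -np.arange(n) * chirp[:n]
    v = np.zeros(L, complex)
    v[:m] = 1 / chirp[:m]
    v[L - n + 1:] = 1 / chirp[n - 1:0:-1]          # wrap negative indices
    g = np.fft.ifft(np.fft.fft(y) * np.fft.fft(v)) # fast circular convolution
    return g[:m] * chirp[:m]

# Sanity check input: with default parameters the CZT equals the DFT.
x = np.random.default_rng(4).standard_normal(64)
```

Choosing `a` and `w` to span only the frequency band of a gas's characteristic absorption structure gives the finely sampled local spectrum used for error compensation.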
NASA Technical Reports Server (NTRS)
Swickrath, Michael J.; Anderson, Molly; McMillin, Summer; Broerman, Craig
2012-01-01
Monitoring carbon dioxide (CO2) concentration within a spacecraft or spacesuit is critically important to ensuring the safety of the crew. Carbon dioxide uniquely absorbs light at wavelengths of 3.95 micrometers and 4.26 micrometers. As a result, non-dispersive infrared (NDIR) spectroscopy can be employed as a reliable and inexpensive method for the quantification of CO2 within the atmosphere. A multitude of commercial off-the-shelf (COTS) NDIR sensors exist for CO2 quantification. The COTS sensors provide reasonable accuracy as long as the measurements are attained under conditions close to the calibration conditions of the sensor (typically 21.1 C (70.0 F) and 1 atmosphere). However, as pressure deviates from atmospheric to the pressures associated with a spacecraft (8.0-10.2 pounds per square inch absolute (psia)) or spacesuit (4.1-8.0 psia), the error in the measurement grows increasingly large. In addition to pressure and temperature dependencies, the infrared transmissivity through a volume of gas also depends on the composition of the gas. As the composition is not known a priori, accurate sub-ambient detection must rely on iterative sensor compensation techniques. This manuscript describes the development of recursive compensation algorithms for sub-ambient detection of CO2 with COTS NDIR sensors. In addition, the source of the exponential loss in accuracy is developed theoretically. The basis of the loss can be explained through thermal, Doppler, and Lorentz broadening effects that arise as a result of the temperature, pressure, and composition of the gas mixture under analysis. This manuscript provides an approach to employing COTS sensors at sub-ambient conditions and may also lend insight into designing future NDIR sensors for aerospace application.
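A minimal sketch of a recursive compensation of the kind described: because the correction factor itself depends on the (unknown) true concentration, the corrected reading is found by fixed-point iteration. The pressure-sensitivity model, exponent, self-broadening coefficient and pressures below are invented for illustration, not the paper's calibration.

```python
def compensate_co2(c_raw_ppm, p_kpa, p_cal=101.3, alpha=0.7, beta=5e-6):
    """Recursively correct a raw NDIR CO2 reading for sub-ambient pressure.

    Hypothetical sensitivity model: the sensor's effective response scales
    as (p / p_cal)**alpha with a small self-broadening term beta * c; both
    constants are illustrative assumptions.
    """
    c = c_raw_ppm
    for _ in range(20):                       # fixed-point iteration
        gain = (p_kpa / p_cal) ** alpha * (1 + beta * c)
        c_next = c_raw_ppm / gain             # undo the response roll-off
        if abs(c_next - c) < 1e-6:            # converged
            break
        c = c_next
    return c

# At a spacesuit-like pressure (~55 kPa) the raw reading underestimates CO2.
c_true = compensate_co2(3000.0, 55.0)
```

The iteration converges in a few steps because the composition-dependent part of the gain is a small correction on top of the pressure term.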
Coherent beam combining of collimated fiber array based on target-in-the-loop technique
NASA Astrophysics Data System (ADS)
Li, Xinyang; Geng, Chao; Zhang, Xiaojun; Rao, Changhui
2011-11-01
Coherent beam combining (CBC) of fiber array is a promising way to generate high power and high quality laser beams. The target-in-the-loop (TIL) technique might be an effective way to ensure atmosphere propagation compensation without wavefront sensors. In this paper, we present very recent research work about CBC of a collimated fiber array using the TIL technique at the Key Lab on Adaptive Optics (KLAO), CAS. A novel Adaptive Fiber Optics Collimator (AFOC) composed of a phase-locking module and a tip/tilt control module was developed. A CBC experimental setup of a three-element fiber array was established. Feedback control is realized using the stochastic parallel gradient descent (SPGD) algorithm. CBC based on TIL with simultaneous piston and tip/tilt correction is demonstrated. Beam pointing, to position or sweep the combined spot on the target, was also achieved through the TIL technique. The goal of our work is to achieve multi-element CBC for long-distance transmission in the atmosphere.
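The SPGD feedback loop named above can be sketched in a few lines: perturb all control channels in parallel with random bipolar dithers, measure the metric change, and step along the estimated gradient. The metric here (normalized on-axis intensity of ideal equal-amplitude channels) and all gains are illustrative stand-ins for the experimental system.

```python
import numpy as np

def spgd_phase_lock(metric, n_ch, gain=30.0, delta=0.05, n_iter=3000, seed=5):
    """Stochastic parallel gradient descent (ascent) on a scalar metric."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-np.pi, np.pi, n_ch)          # control phases (rad)
    for _ in range(n_iter):
        du = delta * rng.choice([-1.0, 1.0], n_ch)  # bipolar perturbation
        dj = metric(u + du) - metric(u - du)        # two-sided metric change
        u += gain * dj * du                         # parallel gradient step
    return u

# Metric: normalized on-axis combined intensity of n equal-amplitude beams
# (1.0 when all channels are phase-locked).
def combined_intensity(phases):
    return np.abs(np.exp(1j * phases).sum()) ** 2 / len(phases) ** 2

u = spgd_phase_lock(combined_intensity, n_ch=3)
```

In the TIL configuration, `combined_intensity` would be replaced by the photodetector signal returned from the target, so no wavefront sensor is needed.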
Influence of Evaporating Droplets in the Turbulent Marine Atmospheric Boundary Layer
NASA Astrophysics Data System (ADS)
Peng, Tianze; Richter, David
2017-12-01
Sea-spray droplets ejected into the marine atmospheric boundary layer take part in a series of complex transport processes. By capturing the air-droplet coupling and feedback, we focus on how droplets modify the total heat transfer across a turbulent boundary layer. We implement a high-resolution Eulerian-Lagrangian algorithm with varied droplet size and mass loading in a turbulent open-channel flow, revealing that the influence from evaporating droplets varies for different dynamic and thermodynamic characteristics of droplets. Droplets that both respond rapidly to the ambient environment and have long suspension times are able to modify the latent and sensible heat fluxes individually; however, the competing signs of these modifications lead to an overall weak effect on the total heat flux. On the other hand, droplets with a slower thermodynamic response to the environment are less subject to this compensating effect. This indicates a potential to enhance the total heat flux, but the enhancement is highly dependent on the concentration and suspension time.
Stream Temperature Estimation From Thermal Infrared Images
NASA Astrophysics Data System (ADS)
Handcock, R. N.; Kay, J. E.; Gillespie, A.; Naveh, N.; Cherkauer, K. A.; Burges, S. J.; Booth, D. B.
2001-12-01
Stream temperature is an important water quality indicator in the Pacific Northwest where endangered fish populations are sensitive to elevated water temperature. Cold water refugia are essential for the survival of threatened salmon when events such as the removal of riparian vegetation result in elevated stream temperatures. Regional assessment of stream temperatures is limited by sparse sampling of temperatures in both space and time. If critical watersheds are to be properly managed it is necessary to have spatially extensive temperature measurements of known accuracy. Remotely sensed thermal infrared (TIR) imagery can be used to derive spatially distributed estimates of the skin temperature (top 100 nm) of streams. TIR imagery has long been used to estimate skin temperatures of the ocean, where split-window techniques have been used to compensate for atmospheric effects. Streams are a more complex environment because 1) most are unresolved in typical TIR images, and 2) the near-bank environment of stream corridors may consist of tall trees or hot rocks and soils that irradiate the stream surface. As well as compensating for atmospheric effects, key problems to solve in estimating stream temperatures include both subpixel unmixing and multiple scattering. Additionally, fine resolution characteristics of the stream surface, such as evaporative cooling due to wind and water surface roughness, will affect measurements of radiant skin temperatures with TIR devices. We apply these corrections across the Green River and Yakima River watersheds in Washington State to assess the accuracy of remotely sensed stream surface temperature estimates made using fine resolution TIR imagery from a ground-based sensor (FLIR), medium resolution data from the airborne MASTER sensor, and coarse-resolution data from the Terra-ASTER satellite. We use linear spectral mixture analysis to isolate the fraction of land-leaving radiance originating from unresolved streams.
To compensate the data for atmospheric effects we combine radiosonde profiles with a physically based radiative transfer model (MODTRAN) and an in-scene relative correction adapted from the ISAC algorithm. Laboratory values for water emissivities are used as a baseline estimate of stream emissivities. Emitted radiance reflected by trees in the stream near-bank environment is estimated from the height and canopy temperature, using a radiosity model.
Improved compensation of atmospheric turbulence effects by multiple adaptive mirror systems.
Shamir, J; Crowe, D G; Beletic, J W
1993-08-20
Optical wave-front propagation in a layered model for the atmosphere is analyzed by the use of diffraction theory, leading to a novel approach for utilizing artificial guide stars. Considering recent observations of layering in the atmospheric turbulence, the results of this paper indicate that, even for very large telescopes, a substantial enlargement of the compensated angular field of view is possible when two adaptive mirrors and four or five artificial guide stars are employed. The required number of guide stars increases as the thickness of the turbulent layers increases, converging to the conventional results at the limit of continuously turbulent atmosphere.
Li, Zong-Tao; Wu, Tie-Jun; Lin, Can-Long; Ma, Long-Hua
2011-01-01
A new generalized optimum strapdown algorithm with coning and sculling compensation is presented, in which the position, velocity and attitude updating operations are carried out based on a single-speed structure in which all computations are executed at a single updating rate that is sufficiently high to accurately account for high-frequency angular rate and acceleration rectification effects. Different from existing algorithms, the updating rate of the coning and sculling compensation is unrelated to the number of gyro incremental angle samples and the number of accelerometer incremental velocity samples. When the output sampling rate of the inertial sensors remains constant, this algorithm allows the updating rate of the coning and sculling compensation to be increased while using larger numbers of gyro incremental angle and accelerometer incremental velocity samples, in order to improve system accuracy. Then, in order to implement the new strapdown algorithm in a single FPGA chip, a parallelization of the algorithm is designed and its computational complexity is analyzed. The performance of the proposed parallel strapdown algorithm is tested on the Xilinx ISE 12.3 software platform and the FPGA device XC6VLX550T hardware platform using fighter flight data. It is shown that this parallel strapdown algorithm on the FPGA platform greatly decreases the execution time of the algorithm, meeting the real-time and high-precision requirements of the system in highly dynamic environments, relative to the existing implementation on the DSP platform. PMID:22164058
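The classical two-sample coning compensation behind such algorithms forms the attitude rotation vector from the two incremental-angle samples plus a cross-product correction, phi = dth1 + dth2 + (2/3) dth1 x dth2. A minimal sketch (the angular-rate profile and step sizes below are illustrative, not from the paper) compares compensated and uncompensated updates against a finely integrated reference.

```python
import numpy as np

def quat_mul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def rotvec_to_quat(v):
    a = np.linalg.norm(v)
    if a < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(a / 2)], np.sin(a / 2) * v / a])

def coning_update(q, dth1, dth2):
    # Two-sample update: summed increments plus the coning correction term.
    phi = dth1 + dth2 + (2.0 / 3.0) * np.cross(dth1, dth2)
    return quat_mul(q, rotvec_to_quat(phi))

def omega(t):  # direction-changing body rate (rad/s), illustrative
    return np.array([np.sin(5 * t), np.cos(5 * t), 0.3])

h, fine = 0.1, 100                       # coarse interval, fine sub-steps
dt = h / 2 / fine
q_ref = np.array([1.0, 0.0, 0.0, 0.0])   # finely integrated reference
q_con = q_ref.copy()                     # two-sample with compensation
q_sum = q_ref.copy()                     # two-sample without compensation
for k in range(20):                      # 2 s of motion
    t0 = k * h
    ts1 = t0 + dt * (np.arange(fine) + 0.5)     # midpoints, first half
    ts2 = ts1 + h / 2                           # midpoints, second half
    dth1 = sum(omega(t) for t in ts1) * dt      # gyro incremental angles
    dth2 = sum(omega(t) for t in ts2) * dt
    q_con = coning_update(q_con, dth1, dth2)
    q_sum = quat_mul(q_sum, rotvec_to_quat(dth1 + dth2))
    for t in np.concatenate([ts1, ts2]):        # reference attitude
        q_ref = quat_mul(q_ref, rotvec_to_quat(omega(t) * dt))

err = lambda p, q: 2 * np.arccos(min(1.0, abs(float(p @ q))))
e_con, e_sum = err(q_con, q_ref), err(q_sum, q_ref)
```

The cross-product term removes the leading noncommutativity error, so `e_con` is well below `e_sum` for this direction-changing rate profile.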
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Joseph Z., E-mail: x@anl.gov; Vasserman, Isaac; Strelnikov, Nikita
2016-07-27
A 2.8-meter long horizontal field prototype undulator with a dynamic force compensation mechanism has been developed and tested at the Advanced Photon Source (APS) at Argonne National Laboratory (Argonne). The magnetic tuning of the undulator integrals has been automated and accomplished by applying magnetic shims. A detailed description of the algorithms and performance is reported.
An Online Tilt Estimation and Compensation Algorithm for a Small Satellite Camera
NASA Astrophysics Data System (ADS)
Lee, Da-Hyun; Hwang, Jai-hyuk
2018-04-01
In the case of a satellite camera designed to execute an Earth observation mission, even after a pre-launch precision alignment process has been carried out, misalignment will occur due to external factors during the launch and in the operating environment. In particular, for high-resolution satellite cameras, which require submicron accuracy for alignment between optical components, misalignment is a major cause of image quality degradation. To compensate for this, most high-resolution satellite cameras undergo a precise realignment process called refocusing before and during the operation process. However, conventional Earth observation satellites execute refocusing only for de-space errors. Thus, in this paper, an online tilt estimation and compensation algorithm that can be utilized after de-space correction is proposed. Although the sensitivity of the optical performance degradation due to misalignment is highest for de-space, the MTF can be additionally increased by correcting tilt after refocusing. The algorithm proposed in this research can be used to estimate the amount of tilt that occurs by taking star images, and it can also be used to carry out automatic tilt corrections by employing a compensation mechanism that gives angular motion to the secondary mirror. Crucially, this algorithm is developed as an online processing system so that it can operate without communication with the ground.
Pysz, Marybeth A.; Guracar, Ismayil; Foygel, Kira; Tian, Lu; Willmann, Jürgen K.
2015-01-01
Purpose To develop and test a real-time motion compensation algorithm for contrast-enhanced ultrasound imaging of tumor angiogenesis on a clinical ultrasound system. Materials and methods The Administrative Institutional Panel on Laboratory Animal Care approved all experiments. A new motion correction algorithm measuring the sum of absolute differences in pixel displacements within a designated tracking box was implemented in a clinical ultrasound machine. In vivo angiogenesis measurements (expressed as percent contrast area) with and without motion compensated maximum intensity persistence (MIP) ultrasound imaging were analyzed in human colon cancer xenografts (n = 64) in mice. Differences in MIP ultrasound imaging signal with and without motion compensation were compared and correlated with displacements in x- and y-directions. The algorithm was tested in an additional twelve colon cancer xenograft-bearing mice with (n = 6) and without (n = 6) anti-vascular therapy (ASA-404). In vivo MIP percent contrast area measurements were quantitatively correlated with ex vivo microvessel density (MVD) analysis. Results MIP percent contrast area was significantly different (P < 0.001) with and without motion compensation. Differences in percent contrast area correlated significantly (P < 0.001) with x- and y-displacements. MIP percent contrast area measurements were more reproducible with motion compensation (ICC = 0.69) than without (ICC = 0.51) on two consecutive ultrasound scans. Following anti-vascular therapy, motion-compensated MIP percent contrast area significantly (P = 0.03) decreased by 39.4 ± 14.6 % compared to non-treated mice and correlated well with ex vivo MVD analysis (Rho = 0.70; P = 0.05). Conclusion Real-time motion-compensated MIP ultrasound imaging allows reliable and accurate quantification and monitoring of angiogenesis in tumors exposed to breathing-induced motion artifacts. PMID:22535383
Fault-tolerant nonlinear adaptive flight control using sliding mode online learning.
Krüger, Thomas; Schnetter, Philipp; Placzek, Robin; Vörsmann, Peter
2012-08-01
An expanded nonlinear model inversion flight control strategy using sliding mode online learning for neural networks is presented. The proposed control strategy is implemented for a small unmanned aircraft system (UAS). This class of aircraft is highly susceptible to nonlinearities such as atmospheric turbulence, model uncertainties, and system failures, which makes it a suitable testbed for evaluating fault-tolerant, adaptive flight control strategies. In this work the concept of feedback linearization is combined with feedforward neural networks to compensate for inversion errors and other nonlinear effects. Backpropagation-based adaption laws of the network weights are used for online training. Within these adaption laws the standard gradient descent backpropagation algorithm is augmented with the concept of sliding mode control (SMC). Implemented as a learning algorithm, this nonlinear control strategy treats the neural network as a controlled system and allows a stable, dynamic calculation of the learning rates. While guaranteeing the system's stability, this robust online learning method offers a higher speed of convergence, especially in the presence of external disturbances. The SMC-based flight controller is tested and compared with the standard gradient descent backpropagation algorithm in the presence of system failures. Copyright © 2012 Elsevier Ltd. All rights reserved.
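The idea of a dynamically calculated learning rate can be illustrated on a toy problem. The update law below is a simplified stand-in (a normalized, error-driven rate on a single linear neuron), not the paper's actual SMC adaption law; the gain constant and the plant are hypothetical.

```python
import numpy as np

# Toy stand-in for state-dependent online learning: a single linear neuron
# trained sample-by-sample, with the learning rate computed each step from
# the current input and output error rather than fixed in advance.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])          # "plant" parameters to identify
X = rng.normal(size=(200, 2))
y = X @ w_true                          # noiseless training targets

w = np.zeros(2)
lam = 0.5                               # hypothetical gain constant
for xi, yi in zip(X, y):
    e = (w @ xi) - yi                   # output error (the sliding variable)
    eta = lam / (1.0 + xi @ xi)         # state-dependent learning rate
    w -= eta * e * xi                   # gradient step with adaptive rate
print(np.round(w, 3))                   # converges to the true weights
```

Because the rate shrinks automatically for large inputs, the recursion stays stable without hand-tuning, which is the practical appeal of state-dependent rates over a fixed gradient-descent step.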
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
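One simple way to see the weight-estimation problem: for a single broadband, noise-like source, the maximum-SNR combining weights are (up to a phase) the dominant eigenvector of the sample covariance matrix. The sketch below uses hypothetical channel gains and noise levels; it illustrates the principle, not the DSN hardware's actual processing chain.

```python
import numpy as np

# Estimate combining weights for a 7-channel array feed as the dominant
# eigenvector of the sample covariance of the received samples.
rng = np.random.default_rng(2)
n_ch, n_samp = 7, 5000
a = rng.normal(size=n_ch) + 1j * rng.normal(size=n_ch)    # unknown channel gains
a /= np.linalg.norm(a)
s = rng.normal(size=n_samp)                               # thermal-like source signal
noise = 0.3 * (rng.normal(size=(n_ch, n_samp))
               + 1j * rng.normal(size=(n_ch, n_samp)))
x = np.outer(a, s) + noise                                # received samples

R = (x @ x.conj().T) / n_samp                             # sample covariance
vals, vecs = np.linalg.eigh(R)
w = vecs[:, -1]                                           # dominant eigenvector
alignment = float(abs(w.conj() @ a))                      # ≈ 1 when weights match gains
print(round(alignment, 2))
```

The eigenvector computation involves only matrix accumulation and a small eigendecomposition, which is consistent with the abstract's point that the weights can be obtained with simple PC-implementable operations.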
NASA Astrophysics Data System (ADS)
Keshta, H. E.; Ali, A. A.; Saied, E. M.; Bendary, F. M.
2016-10-01
Large-scale integration of wind turbine generators (WTGs) may have significant impacts on power system operation with respect to system frequency and bus voltages. This paper studies the effect of a Static Var Compensator (SVC) connected to a wind energy conversion system (WECS) on the voltage profile and on the power generated by the induction generator (IG) in a wind farm. The paper also presents dynamic reactive power compensation using an SVC at the point of interconnection of the wind farm in cases where static compensation (a fixed capacitor bank) is unable to prevent voltage collapse. Moreover, this paper shows that advanced optimization techniques based on artificial intelligence (AI), such as the Harmony Search Algorithm (HS) and the Self-Adaptive Global Harmony Search Algorithm (SGHS), can be used instead of a conventional control method to tune the parameters of the PI controllers for the SVC and the pitch angle, and that the performance of the system with AI-based controllers is improved under different operating conditions. MATLAB/Simulink based simulation is utilized to demonstrate the application of the SVC in wind farm integration, and to investigate the performance enhancement of the WECS achieved with a PI controller tuned by the Harmony Search Algorithm as compared to a conventional control method.
Cardiac motion correction based on partial angle reconstructed images in x-ray CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr
2015-05-15
Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even at the current rotation speed of the equipment. In response, many algorithms have been developed to compensate for remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates for the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than the short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for the whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model.
The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of the two conjugate PAR images. To evaluate the proposed algorithm, digital XCAT and physical dynamic cardiac phantom datasets are used. The XCAT phantom datasets were generated with heart rates of 70 and 100 bpm, respectively, assuming a system rotation time of 300 ms. A physical dynamic cardiac phantom was scanned using a slowly rotating XCT system so that the effective heart rate would be 70 bpm for a system rotation speed of 300 ms. Results: In the XCAT phantom experiment, motion-compensated 3D images obtained from the proposed algorithm show coronary arteries with fewer motion artifacts for all phases. Moreover, object boundaries contaminated by motion are well restored. Even though object positions and boundary shapes are still somewhat different from the ground truth in some cases, the visibility of the coronary arteries is improved noticeably and motion artifacts are reduced considerably. The physical phantom study also shows that the visual quality of motion-compensated images is greatly improved. Conclusions: The authors propose a novel PAR image-based cardiac motion estimation and compensation algorithm. The algorithm requires an angular scan range of less than 360°. The excellent performance of the proposed algorithm is illustrated using digital XCAT and physical dynamic cardiac phantom datasets.
Atmospheric Correction Algorithm for Hyperspectral Remote Sensing of Ocean Color from Space
2000-02-20
Existing atmospheric correction algorithms for multichannel remote sensing of ocean color from space were designed for retrieving water-leaving...atmospheric correction algorithm for hyperspectral remote sensing of ocean color with the near-future Coastal Ocean Imaging Spectrometer. The algorithm uses
Topography-Dependent Motion Compensation: Application to UAVSAR Data
NASA Technical Reports Server (NTRS)
Jones, Cathleen E.; Hensley, Scott; Michel, Thierry
2009-01-01
The UAVSAR L-band synthetic aperture radar system has been designed for repeat track interferometry in support of Earth science applications that require high-precision measurements of small surface deformations over timescales from hours to years. Conventional motion compensation algorithms, which are based upon assumptions of a narrow beam and flat terrain, yield unacceptably large errors in areas with even moderate topographic relief, i.e., in most areas of interest. This often limits the ability to achieve sub-centimeter surface change detection over significant portions of an acquired scene. To reduce this source of error in the interferometric phase, we have implemented an advanced motion compensation algorithm that corrects for the scene topography and radar beam width. Here we discuss the algorithm used, its implementation in the UAVSAR data processor, and the improvement in interferometric phase and correlation achieved in areas with significant topographic relief.
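A back-of-envelope calculation shows why uncompensated motion matters at these precision levels: a line-of-sight position error dr maps to a repeat-pass interferometric phase of phi = 4*pi*dr/lambda. The wavelength below is an approximate L-band value.

```python
import numpy as np

# Residual repeat-pass interferometric phase from an uncompensated
# line-of-sight position error dr: phi = 4*pi*dr/lambda.
lam = 0.2379                                   # L-band wavelength, m (approximate)

def residual_phase_deg(dr_m):
    return float(np.degrees(4.0 * np.pi * dr_m / lam))

print(round(residual_phase_deg(0.001), 1))     # a 1 mm error → ~3 degrees of phase
```

Millimeter-scale position errors thus produce phase errors of several degrees, which is why the topography- and beam-width-dependent corrections described above are needed for sub-centimeter change detection.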
NASA Astrophysics Data System (ADS)
Shin, D.; Chiu, L. S.; Clemente-Colon, P.
2006-05-01
The atmospheric effects on the retrieval of sea ice concentration from passive microwave sensors are examined using simulated data typical for the Arctic summer. The simulation includes atmospheric contributions of cloud liquid water, water vapor and surface wind on the microwave signatures. A plane parallel radiative transfer model is used to compute brightness temperatures at SSM/I frequencies over surfaces that contain open water, first-year (FY) ice and multi-year (MY) ice and their combinations. Synthetic retrievals in this study use the NASA Team (NT) algorithm for the estimation of sea ice concentrations. This study shows that if the satellite sensor's field of view is filled with only FY ice the retrieval is not much affected by the atmospheric conditions due to the high contrast between emission signals from FY ice surface and the signals from the atmosphere. Pure MY ice concentration is generally underestimated due to the low MY ice surface emissivity that results in the enhancement of emission signals from the atmospheric parameters. Simulation results in marginal ice areas also show that the atmospheric effects from cloud liquid water, water vapor and surface wind tend to degrade the accuracy at low sea ice concentration. FY ice concentration is overestimated and MY ice concentration is underestimated in the presence of atmospheric water and surface wind at low ice concentration. This compensating effect reduces the retrieval uncertainties of total (FY and MY) ice concentration. Over marginal ice zones, our results suggest that strong surface wind is more important than atmospheric water in contributing to the retrieval errors of total ice concentrations in the normal ranges of these variables.
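The NASA Team style retrieval driven by these brightness temperatures rests on two ratios: a polarization ratio PR(19) and a spectral gradient ratio GR(37/19), which distinguish open water, FY ice, and MY ice. The sketch below shows the ratio computation only; the tie points and the mixed pixel are invented illustrative numbers, not the published NT constants.

```python
import numpy as np

# Schematic NASA Team (NT) style ratios from SSM/I brightness temperatures.
def nt_ratios(tb19v, tb19h, tb37v):
    pr = (tb19v - tb19h) / (tb19v + tb19h)   # polarization ratio PR(19)
    gr = (tb37v - tb19v) / (tb37v + tb19v)   # spectral gradient ratio GR(37/19)
    return pr, gr

# Hypothetical pure-surface tie points (K): (TB19V, TB19H, TB37V).
tie = {"water": (185.0, 110.0, 205.0),
       "fy":    (250.0, 235.0, 245.0),
       "my":    (225.0, 200.0, 190.0)}

# A pixel that is 60% FY ice, 30% MY ice, 10% open water (linear mixing).
mix = (0.6 * np.array(tie["fy"]) + 0.3 * np.array(tie["my"])
       + 0.1 * np.array(tie["water"]))
pr, gr = nt_ratios(*mix)
print(round(pr, 3), round(gr, 3))
```

Atmospheric water and surface wind perturb the three brightness temperatures differently, shifting PR and GR and hence the retrieved FY/MY partition, which is the error mechanism the simulation study quantifies.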
Adaptive free-space optical communications through turbulence using self-healing Bessel beams
Li, Shuhui; Wang, Jian
2017-01-01
We present a scheme to realize obstruction- and turbulence-tolerant free-space orbital angular momentum (OAM) multiplexing link by using self-healing Bessel beams accompanied by adaptive compensation techniques. Compensation of multiple 16-ary quadrature amplitude modulation (16-QAM) data carrying Bessel beams through emulated atmospheric turbulence and obstructions is demonstrated. The obtained experimental results indicate that the compensation scheme can effectively reduce the inter-channel crosstalk, improve the bit-error rate (BER) performance, and recuperate the nondiffracting property of Bessel beams. The proposed scheme might be used in future high-capacity OAM links which are affected by atmospheric turbulence and obstructions. PMID:28230076
NASA Astrophysics Data System (ADS)
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2015-08-01
The paper deals with dynamic compensation of delayed Self-Powered Flux Detectors (SPFDs) using a discrete-time H∞ filtering method, to improve the response of SPFDs with significant delayed components such as platinum and vanadium SPFDs. We also present a comparative study between Linear Matrix Inequality (LMI) based H∞ filtering and Algebraic Riccati Equation (ARE) based Kalman filtering with respect to their delay compensation capabilities. Finally, an improved recursive H∞ filter based on an adaptive fading memory technique is proposed, which outperforms the existing methods. The existing delay compensation algorithms do not account for the rate of change of the signal when determining the filter gain and therefore add significant noise during the delay compensation process. The proposed adaptive fading memory H∞ filter minimizes the overall noise very effectively while keeping the response time at minimum values. The recursive algorithm is easy to implement in real time as compared to the LMI (or ARE) based solutions.
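The fading-memory mechanism can be shown with a much simpler stand-in than the paper's H∞ formulation: a Kalman-style recursion whose prior covariance is inflated by alpha² every step, applied to a detector modeled as a first-order lag. All parameters below are illustrative.

```python
import numpy as np

# Fading-memory recursion: inflating the covariance each step keeps the
# gain up, so the estimate of the undelayed input tracks changes quickly.
dt, tau = 0.1, 5.0             # sample time and detector time constant (s)
a = 1.0 - dt / tau             # lag model: y[k+1] = a*y[k] + (1 - a)*u[k]

def estimate_input(y, alpha=1.05, r=1e-4):
    """Recursively estimate the undelayed input u from the lagged signal y."""
    u_hat, p, out = y[0], 1.0, []
    for k in range(1, len(y)):
        p = alpha ** 2 * p                               # fading-memory inflation
        innov = y[k] - (a * y[k - 1] + (1 - a) * u_hat)  # measurement residual
        h = 1.0 - a
        gain = p * h / (h * h * p + r)
        u_hat += gain * innov
        p = (1.0 - gain * h) * p
        out.append(u_hat)
    return np.array(out)

u = np.concatenate([np.zeros(50), np.ones(150)])         # step in the true flux
y = np.zeros(len(u))
for k in range(len(u) - 1):
    y[k + 1] = a * y[k] + (1.0 - a) * u[k]               # lagged detector output
u_est = estimate_input(y)
print(round(float(u_est[-1]), 3))                        # recovers the step level
```

The forgetting factor alpha trades noise rejection against response time, which is the same trade-off the adaptive fading memory H∞ filter manages automatically.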
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2016-01-01
This paper describes an algorithm for atmospheric state estimation based on a coupling between inertial navigation and flush air data-sensing pressure measurements. The navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to estimate the atmosphere using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-lookup form, along with simplified models propagated along the trajectory within the algorithm to aid the solution. Thus, the method is a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content. The algorithm is applied to the design of the pressure measurement system for the Mars 2020 mission. A linear covariance analysis is performed to assess estimator performance. The results indicate that the new estimator produces more precise estimates of atmospheric states than existing algorithms.
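The nonlinear weighted least-squares step can be sketched with a schematic surface pressure model. The cos² pressure law, port angles, and truth values below are invented for illustration; they stand in for the actual modeled pressure distribution, not reproduce it.

```python
import numpy as np

# Gauss-Newton weighted least squares in the FADS spirit: port pressures
# p_i = ps + q*cos^2(theta_i - alpha) are inverted for static pressure ps,
# dynamic pressure q, and flow angle alpha.
theta = np.radians([-40.0, -20.0, 0.0, 20.0, 40.0])    # hypothetical port angles
ps_t, q_t, al_t = 500.0, 250.0, np.radians(4.0)        # "truth" state
p_meas = ps_t + q_t * np.cos(theta - al_t) ** 2        # noiseless measurements

def model(x):
    ps, q, al = x
    return ps + q * np.cos(theta - al) ** 2

def jac(x):
    _, q, al = x
    c, s = np.cos(theta - al), np.sin(theta - al)
    return np.column_stack([np.ones_like(theta), c ** 2, 2.0 * q * c * s])

W = np.eye(len(theta))                                 # measurement weights
x = np.array([400.0, 200.0, 0.0])                      # initial guess
for _ in range(10):                                    # Gauss-Newton iterations
    J, r = jac(x), p_meas - model(x)
    x += np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
print(np.round([x[0], x[1], np.degrees(x[2])], 3))     # recovers ps, q, alpha
```

In the flight algorithm, the weights W and the prior estimates come from the simplified atmosphere models propagated along the trajectory, which is what makes the scheme a reduced-order Kalman filter rather than a batch fit.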
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, Ning; Gombos, Gergely; Mousavi, Mirrasoul J.
A new fault location algorithm for two-end series-compensated double-circuit transmission lines utilizing unsynchronized two-terminal current phasors and local voltage phasors is presented in this paper. The distributed parameter line model is adopted to take into account the shunt capacitance of the lines. The mutual coupling between the parallel lines in the zero-sequence network is also considered. The boundary conditions under different fault types are used to derive the fault location formulation. The developed algorithm directly uses the local voltage phasors on the line side of the series compensation (SC) and metal oxide varistors (MOVs). However, when potential transformers are not installed on the line side of the SC and MOVs at the local terminal, these measurements can be calculated from the local terminal bus voltage and currents by estimating the voltages across the SC and MOVs. MATLAB SimPowerSystems is used to generate cases under diverse fault conditions to evaluate accuracy. The simulation results show that the proposed algorithm is qualified for practical implementation.
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi
2015-02-01
In order to meet aerodynamic requirements, infrared domes and windows with conformal, thin-wall structures are becoming the development trend for future high-speed aircraft. However, these parts usually have low stiffness, the cutting force changes along the axial position, and it is very difficult to meet the shape accuracy requirement in a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by the fluctuation of the cutting force and the change in stiffness. In this paper, on the basis of an ultra-precision diamond lathe, a contact measuring system with five DOFs is developed to achieve high-accuracy on-machine measurement of conformal thin-wall parts. For high-gradient surfaces, an optimizing algorithm for the distribution of measuring points is designed using a data screening method. The influence of sampling frequency on measuring errors is analyzed, the best sampling frequency is found using a planning algorithm, the effects of environmental factors and fitting errors are controlled within a low range, and the measuring accuracy of the conformal dome is greatly improved in the process of on-machine measurement. For an MgF2 conformal dome with a high-gradient surface, compensating turning is implemented using the designed on-machine measuring algorithm. The resulting shape error is less than PV 0.8 μm, greatly superior to the PV 3 μm before compensating turning, which verifies the correctness of the measuring algorithm.
Intraocular scattering compensation in retinal imaging
Christaras, Dimitrios; Ginis, Harilaos; Pennos, Alexandros; Artal, Pablo
2016-01-01
Intraocular scattering affects fundus imaging in a similar way as it affects vision: it causes a decrease in contrast which depends both on the intrinsic scattering of the eye and on the dynamic range of the image. Consequently, in cases where the absolute intensity in the fundus image is important, scattering can lead to a wrong estimation. In this paper, a setup capable of acquiring fundus images and objectively estimating intraocular scattering was built, and the acquired images were then used for scattering compensation in fundus imaging. The method consists of two parts: first, the individual's wide-angle Point Spread Function (PSF) at a specific wavelength is reconstructed; it is then used within an enhancement algorithm on an acquired fundus image to compensate for scattering. As a proof of concept, a single-pass measurement with a scatter filter was carried out first, and the complete algorithm of PSF reconstruction and scattering compensation was applied. The advantage of the single-pass test is that one can compare the reconstructed image with the original one, thus testing the efficiency of the method. Following this test, the algorithm was applied to actual fundus images of human eyes, and the contrast of the image before and after compensation was compared. The comparison showed that, depending on the wavelength, contrast can be reduced by 8.6% under certain conditions. PMID:27867710
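The enhancement step can be illustrated in one dimension: model the recorded image as the true image convolved with a wide-angle PSF (a sharp core plus a broad scatter halo), then undo the halo by regularized inverse filtering. The PSF shape, halo strength, and regularization below are made-up numbers, not the paper's reconstructed PSF.

```python
import numpy as np

# 1-D scatter-compensation sketch: blur with core+halo PSF, then apply a
# Tikhonov-regularized inverse filter in the Fourier domain.
n = 256
x = np.arange(n) - n // 2
psf = np.exp(-(x / 1.5) ** 2) + 0.05 * np.exp(-np.abs(x) / 40.0)  # core + halo
psf /= psf.sum()

truth = np.zeros(n)
truth[96:160] = 1.0                                  # a bright fundus feature
H = np.fft.fft(np.fft.ifftshift(psf))                # PSF transfer function
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * H))

eps = 1e-3                                           # regularization strength
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H)
                               / (np.abs(H) ** 2 + eps)))

def contrast(img):
    """Michelson contrast of the image."""
    return (img.max() - img.min()) / (img.max() + img.min() + 1e-12)

print(round(contrast(blurred), 3), round(contrast(restored), 3))
```

The halo redistributes light from the bright feature into the background, raising the floor and lowering contrast; dividing out the transfer function (with regularization to avoid noise amplification) restores it, mirroring the compensation applied to the fundus images.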
Fluid surface compensation in digital holographic microscopy for topography measurement
NASA Astrophysics Data System (ADS)
Lin, Li-Chien; Tu, Han-Yen; Lai, Xin-Ji; Wang, Sheng-Shiun; Cheng, Chau-Jern
2012-06-01
A novel technique is presented for surface compensation and topography measurement of a specimen in fluid medium by digital holographic microscopy (DHM). In the measurement, the specimen is preserved in a culture dish full of liquid culture medium and an environmental vibration induces a series of ripples to create a non-uniform background on the reconstructed phase image. A background surface compensation algorithm is proposed to account for this problem. First, we distinguish the cell image from the non-uniform background and a morphological image operation is used to reduce the noise effect on the background surface areas. Then, an adaptive sampling from the background surface is employed, taking dense samples from the high-variation area while leaving the smooth region mostly untouched. A surface fitting algorithm based on the optimal bi-cubic functional approximation is used to establish a whole background surface for the phase image. Once the background surface is found, the background compensated phase can be obtained by subtracting the estimated background from the original phase image. From the experimental results, the proposed algorithm performs effectively in removing the non-uniform background of the phase image and has the ability to obtain the specimen topography inside fluid medium under environmental vibrations.
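The background-compensation idea can be sketched compactly: sample the phase only where there is no specimen, fit a smooth low-order surface to those samples, and subtract the fitted surface from the whole image. The example uses a bi-quadratic surface as a stand-in for the paper's optimal bi-cubic fit, and all values are synthetic.

```python
import numpy as np

# Fit a smooth surface to background-only pixels and subtract it.
n = 128
yy, xx = np.mgrid[0:n, 0:n] / n
background = 0.8 * xx ** 2 - 0.5 * yy + 0.3 * xx * yy    # ripple-like trend
specimen = np.zeros((n, n))
specimen[40:80, 50:90] = 1.2                             # cell phase step
phase = background + specimen

mask = specimen == 0                                     # background pixels only

def basis(u, v):                                         # bi-quadratic terms
    return np.column_stack([np.ones_like(u), u, v, u ** 2, v ** 2, u * v])

coef, *_ = np.linalg.lstsq(basis(xx[mask], yy[mask]), phase[mask], rcond=None)
fitted = (basis(xx.ravel(), yy.ravel()) @ coef).reshape(n, n)
compensated = phase - fitted

print(round(float(np.abs(compensated[mask]).max()), 6),  # background ≈ flat 0
      round(float(compensated[60, 60]), 6))              # specimen step preserved
```

Because the surface is fitted only to background samples, the specimen's phase step survives the subtraction intact, which is exactly what the topography measurement needs.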
Leaf Uptake of Nitrogen Dioxide (NO2) Under Different Environmental Conditions.
NASA Astrophysics Data System (ADS)
Chaparro-Suarez, I.; Thielmann, A.; Meixner, F. X.; Kesselmeier, J.
2005-12-01
The chemical budget of ozone in the troposphere is largely determined by the concentration of NOx (NO, NO2) within a photostationary equilibrium. It is well known that the atmospheric concentration is strongly influenced by the bi-directional exchange of NO2. However, there is some debate about the magnitude of the compensation point. Therefore, we investigated the uptake of atmospheric NO2 by trees in relation to atmospheric NO2 concentrations. Using the dynamic chamber technique and a sensitive and specific NO analyzer (CLD 780, Eco Physics), we measured the uptake of NO2 by four different tree species (Betula pendula, Fagus sylvatica, Quercus ilex and Pinus sylvestris) under field and laboratory conditions. Simultaneous measurements of CO2 exchange and transpiration were performed to track photosynthesis and stomatal conductance. Depending on tree species, we found the exchange to be controlled by very low NO2 compensation points, sometimes reaching zero values (no emission) under laboratory conditions. In the field, a high compensation point was observed for European beech (Fagus sylvatica), which is understood to be a result of complex atmospheric conditions.
Performance of synchronous optical receivers using atmospheric compensation techniques.
Belmonte, Aniceto; Kahn, Joseph
2008-09-01
We model the impact of atmospheric turbulence-induced phase and amplitude fluctuations on free-space optical links using synchronous detection. We derive exact expressions for the probability density function of the signal-to-noise ratio in the presence of turbulence. We consider the effects of log-normal amplitude fluctuations and Gaussian phase fluctuations, in addition to local oscillator shot noise, for both passive receivers and those employing active modal compensation of wave-front phase distortion. We compute error probabilities for M-ary phase-shift keying, and evaluate the impact of various parameters, including the ratio of receiver aperture diameter to the wave-front coherence diameter, and the number of modes compensated.
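The modeled channel is easy to probe numerically: log-normal amplitude fading, Gaussian residual phase error, and additive receiver noise applied to a coherently detected symbol. The Monte Carlo sketch below uses BPSK as a stand-in for the paper's general M-ary PSK analysis, and the turbulence strengths and SNR are illustrative.

```python
import numpy as np

# Monte Carlo BER for coherent BPSK over a turbulence-like channel:
# log-normal amplitude, Gaussian phase error, Gaussian receiver noise.
rng = np.random.default_rng(4)
n = 200_000
sigma_chi, sigma_phi, snr0 = 0.1, 0.3, 10.0               # illustrative strengths

bits = rng.integers(0, 2, n)
amp = np.exp(rng.normal(-sigma_chi ** 2, sigma_chi, n))   # log-normal fading
phase = rng.normal(0.0, sigma_phi, n)                     # residual phase error
signal = (2 * bits - 1) * amp * np.cos(phase)             # faded BPSK symbol
noise = rng.normal(0.0, 1.0 / np.sqrt(2.0 * snr0), n)     # receiver noise
ber = float(np.mean((signal + noise > 0).astype(int) != bits))
print(ber)
```

The paper derives the corresponding error probabilities in closed form; a simulation like this is mainly useful for sanity-checking such expressions and for exploring the effect of modal compensation, which would shrink sigma_phi.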
Processing Ultra Wide Band Synthetic Aperture Radar Data with Motion Detectors
NASA Technical Reports Server (NTRS)
Madsen, Soren Norvang
1996-01-01
Several issues make the processing of ultra wide band (UWB) SAR data acquired from an airborne platform difficult. The character of UWB data invalidates many of the usual SAR batch processing techniques, leading to the application of wavenumber domain type processors...This paper suggests and evaluates an algorithm which combines a wavenumber domain processing algorithm with a motion compensation procedure that enables motion compensation to be applied as a function of target range and azimuth angle.
NASA Astrophysics Data System (ADS)
Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee
2017-07-01
This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be highly efficient for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling the voltage magnitudes of buses by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. The CSA gives better results than a genetic algorithm (GA) both without and with the SVC.
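The core of Cuckoo Search is Lévy-flight steps around the current best nest plus random abandonment of poor nests. The minimal sketch below minimizes the sphere function as a stand-in for an actual OPF generation-cost objective; population size, step scale, and abandonment fraction are illustrative defaults.

```python
import math
import numpy as np

# Minimal Cuckoo Search: Lévy flights toward the best nest, greedy
# acceptance, and random abandonment of a fraction pa of nests.
rng = np.random.default_rng(5)

def levy(dim, beta=1.5):
    """Draw a heavy-tailed Lévy-flight step (Mantegna's algorithm)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cost(x):
    return float(np.sum(x ** 2))        # stand-in for a generation-cost objective

n_nests, dim, pa = 15, 4, 0.25
nests = rng.uniform(-5, 5, (n_nests, dim))
fit = np.array([cost(x) for x in nests])
f0 = fit.min()
for _ in range(500):
    best = nests[fit.argmin()].copy()
    for i in range(n_nests):            # Lévy-flight phase
        cand = nests[i] + 0.01 * levy(dim) * (nests[i] - best)
        if cost(cand) < fit[i]:
            nests[i], fit[i] = cand, cost(cand)
    for i in range(n_nests):            # abandonment phase
        if rng.random() < pa:
            j, k = rng.integers(0, n_nests, 2)
            cand = nests[i] + rng.random() * (nests[j] - nests[k])
            if cost(cand) < fit[i]:
                nests[i], fit[i] = cand, cost(cand)
print(round(float(fit.min()), 6))
```

In an OPF setting the decision vector would hold generator setpoints and the SVC susceptance, and `cost` would run a power flow; the search mechanics stay the same.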
Compensation of significant parametric uncertainties using sliding mode online learning
NASA Astrophysics Data System (ADS)
Schnetter, Philipp; Kruger, Thomas
An augmented nonlinear inverse dynamics (NID) flight control strategy using sliding mode online learning for a small unmanned aircraft system (UAS) is presented. Because parameter identification for this class of aircraft often is not valid throughout the complete flight envelope, aerodynamic parameters used for model-based control strategies may show significant deviations. For the concept of feedback linearization this leads to inversion errors that, in combination with the distinctive susceptibility of small UAS to atmospheric turbulence, pose a demanding control task for these systems. In this work an adaptive flight control strategy using feedforward neural networks for counteracting such nonlinear effects is augmented with the concept of sliding mode control (SMC). SMC-learning is derived from variable structure theory; it considers a neural network and its training as a control problem. It is shown that by the dynamic calculation of the learning rates, stability can be guaranteed, thus increasing the robustness against external disturbances and system failures. With the resulting higher speed of convergence, a wide range of simultaneously occurring disturbances can be compensated. The SMC-based flight controller is tested and compared to the standard gradient descent (GD) backpropagation algorithm under the influence of significant model uncertainties and system failures.
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing from August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1973-01-01
The design of a computer-aided compensator for a control system is considered from a frequency domain point of view. The design technique developed is based on describing the open-loop frequency response by n discrete frequency points, which result in n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; then mathematical programming is used to improve all of the functions whose values are below minimum standards. To do this, several definitions regarding the measurement of system performance in the frequency domain are given, e.g., relative stability, relative attenuation, proper phasing, etc. Next, theorems which govern the number of compensator coefficients necessary to make improvements in a certain number of functions are proved. After this, a mathematical programming tool for aiding in the solution of the problem, called the constraint improvement algorithm, is developed. For applying the constraint improvement algorithm, generalized gradients for the constraints are derived. Finally, the necessary theory is incorporated in a computer program called CIP (Compensator Improvement Program). The practical usefulness of CIP is demonstrated by two large system examples.
Adaptive Beam Loading Compensation in Room Temperature Bunching Cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Chase, B. E.; Cullerton, E.
In this paper we present the design, simulation, and proof-of-principle results of an optimization-based adaptive feedforward algorithm for beam-loading compensation in a high-impedance room temperature cavity. We begin with an overview of prior developments in beam loading compensation. We then discuss different techniques for adaptive beam loading compensation and why the use of Newton's method is of interest for this application. This is followed by simulation and initial experimental results of this method.
NASA Astrophysics Data System (ADS)
Noh, Myoung-Jong; Howat, Ian M.
2018-02-01
The quality and efficiency of automated Digital Elevation Model (DEM) extraction from stereoscopic satellite imagery is critically dependent on the accuracy of the sensor model used for co-locating pixels between stereo-pair images. In the absence of ground control or manual tie point selection, errors in the sensor models must be compensated for with increased matching search-spaces, increasing both the computation time and the likelihood of spurious matches. Here we present an algorithm for automatically determining and compensating the relative bias in Rational Polynomial Coefficients (RPCs) between stereo-pairs utilizing hierarchical, sub-pixel image matching in object space. We demonstrate the algorithm using a suite of image stereo-pairs from multiple satellites over a range of stereo-photogrammetrically challenging polar terrains. Besides providing a validation of the effectiveness of the algorithm for improving DEM quality, experiments with prescribed sensor model errors yield insight into the dependence of DEM characteristics and quality on relative sensor model bias. This algorithm is included in the Surface Extraction through TIN-based Search-space Minimization (SETSM) DEM extraction software package, which is the primary software used for the U.S. National Science Foundation ArcticDEM and Reference Elevation Model of Antarctica (REMA) products.
Feng, Yibo; Li, Xisheng; Zhang, Xiaojuan
2015-05-13
We present an adaptive algorithm for a system integrating micro-electro-mechanical systems (MEMS) gyroscopes and a compass, designed to eliminate environmental influence, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model with changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment with the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope under changing temperature. The heading error is less than 0.6° in the static temperature experiment and remains within the range of −2° to 5° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm exhibits strong adaptability to changing temperature and performs significantly better than KF and MLR in compensating the temperature drift of a gyroscope and eliminating the influence of temperature variation.
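The distinguishing feature of a strong tracking Kalman filter is a fading factor that inflates the predicted covariance so the filter keeps adapting to a changing environment. A minimal one-state sketch of that mechanism, tracking a slowly varying gyro bias, is shown below; the model and tuning values (q, r, lam) are illustrative, not the paper's.

```python
def stkf_track(z_seq, q=1e-4, r=0.01, lam=1.05):
    """Fading-memory ("strong tracking") scalar Kalman filter.

    z_seq : sequence of noisy bias observations (e.g. drift referenced
            to the compass heading); lam > 1 is the fading factor that
    inflates the predicted variance so old data are gradually forgotten.
    """
    x, p = 0.0, 1.0            # bias estimate and its variance
    out = []
    for z in z_seq:
        p = lam * p + q        # faded covariance prediction
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # measurement update
        p = (1 - k) * p
        out.append(x)
    return out
```

With lam = 1 this reduces to an ordinary Kalman filter; the fading factor keeps the steady-state gain bounded away from zero, which is what lets the drift-model parameters follow a changing temperature.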
Automated hierarchical time gain compensation for in-vivo ultrasound imaging
NASA Astrophysics Data System (ADS)
Moshavegh, Ramin; Hemmsen, Martin C.; Martins, Bo; Brandt, Andreas H.; Hansen, Kristoffer L.; Nielsen, Michael B.; Jensen, Jørgen A.
2015-03-01
Time gain compensation (TGC) is essential to ensure optimal image quality in clinical ultrasound scans. When large fluid collections are present within the scan plane, the attenuation distribution changes drastically and TGC compensation becomes challenging. This paper presents an automated hierarchical TGC (AHTGC) algorithm that accurately adapts to the large attenuation variation between different types of tissues and structures. The algorithm relies on estimates of tissue attenuation, scattering strength, and noise level to gain a more quantitative understanding of the underlying tissue and the ultrasound signal strength. The proposed algorithm was applied to a set of 44 in vivo abdominal movie sequences, each containing 15 frames. Matching pairs of in vivo sequences, unprocessed and processed with the proposed AHTGC, were visualized side by side and evaluated by two radiologists in terms of image quality. The Wilcoxon signed-rank test was used to evaluate whether radiologists preferred the processed sequences or the unprocessed data. The results indicate that the average visual analogue scale (VAS) is positive (p-value: 2.34 × 10^-13) and estimated to be 1.01 (95% CI: 0.85; 1.16), favoring the data processed with the proposed AHTGC algorithm.
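The basic principle behind any attenuation-adaptive TGC can be sketched simply: per-depth gain compensates the two-way attenuation accumulated along the beam, and fluid segments (attenuation near zero) contribute no extra gain. The toy below is an assumption-laden illustration of that idea only, not the AHTGC algorithm.

```python
def tgc_gain_db(depths_cm, alpha_db, f0_mhz):
    """Cumulative TGC gain (dB) at each depth sample.

    depths_cm : increasing depth of each segment boundary (cm),
    alpha_db  : estimated attenuation of each segment (dB/cm/MHz);
                a fluid segment would have alpha near 0,
    f0_mhz    : transmit center frequency (MHz).
    """
    gain, g, prev = [], 0.0, 0.0
    for d, a in zip(depths_cm, alpha_db):
        g += 2.0 * a * f0_mhz * (d - prev)  # two-way loss over segment
        gain.append(g)
        prev = d
    return gain
```

A segment-wise attenuation estimate (as opposed to a single global slope) is what lets the gain curve flatten across a fluid collection instead of over-brightening the tissue behind it.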
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, whose nature is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. A calibration experiment is conducted and the resulting temperature data are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures: maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
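A hybrid kernel of the kind described is typically a convex combination of the local RBF kernel and the global polynomial kernel. The sketch below shows that combination; the mixing weight w, bandwidth sigma, and degree d are the hyper-parameters that the paper tunes with the chaotic ions motion algorithm, fixed here at illustrative values.

```python
import math

def hybrid_kernel(x, y, w=0.7, sigma=1.0, d=2):
    """Convex combination of a local RBF kernel and a global
    polynomial kernel: k(x,y) = w*k_rbf + (1-w)*k_poly."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    rbf = math.exp(-sq / (2.0 * sigma ** 2))      # local: decays with distance
    dot = sum(a * b for a, b in zip(x, y))
    poly = (dot + 1.0) ** d                       # global: grows with correlation
    return w * rbf + (1.0 - w) * poly
```

Since both components are valid positive semi-definite kernels and w ∈ [0, 1], the combination is itself a valid kernel and can be dropped directly into the LSSVM kernel matrix.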
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), suffer from declining robustness when strong motion errors are present in the coarse-focused image: in order to capture the complete motion blurring function within each image block, both the block size and the overlapped part require extension, inevitably degrading efficiency and robustness. Herein, a frequency-domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA disposes of the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. After that, the azimuth wavenumber spectrum is partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. The sub-aperture images are then directly fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By abandoning the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed method.
Landmark-Based Drift Compensation Algorithm for Inertial Pedestrian Navigation
Munoz Diaz, Estefania; Caamano, Maria; Fuentes Sánchez, Francisco Javier
2017-01-01
The navigation of pedestrians based on inertial sensors, i.e., accelerometers and gyroscopes, has experienced great growth over the last years. However, the noise of medium- and low-cost sensors causes a high error in the orientation estimation, particularly in the yaw angle. This error, called drift, is due to the bias of the z-axis gyroscope and other slowly changing errors, such as temperature variations. We propose a seamless landmark-based drift compensation algorithm that only uses inertial measurements. The proposed algorithm adds great value to the state of the art, because the vast majority of drift elimination algorithms apply corrections to the estimated position, but not to the yaw angle estimation. Instead, the presented algorithm computes the drift value and uses it to prevent yaw errors and therefore position errors. In order to achieve this goal, a detector of landmarks, i.e., corners and stairs, and an association algorithm have been developed. The results of the experiments show that it is possible to reliably detect corners and stairs using only inertial measurements, eliminating the need for the user to take any action, e.g., pressing a button. Associations between re-visited landmarks are successfully made taking into account the uncertainty of the position. After that, the drift is computed from all associations and used during a post-processing stage to obtain a low-drift yaw angle estimation, leading to successfully drift-compensated trajectories. The proposed algorithm has been tested with quasi-error-free turn rate measurements introducing known biases and with medium-cost gyroscopes in 3D indoor and outdoor scenarios. PMID:28671622
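The essence of the landmark-based correction can be sketched in a few lines: when the same landmark is re-visited, the yaw discrepancy accumulated between the two visits, divided by the elapsed time, yields a constant drift-rate estimate that is subtracted in a post-processing pass. Detection and association are assumed already done here; function names and values are illustrative, not the paper's implementation.

```python
import math

def drift_rate(yaw_first, yaw_revisit, t_first, t_revisit, full_turns=0):
    """Estimate a constant yaw drift rate (rad/s) from one association.

    At a re-visited corner the true heading should repeat (up to whole
    turns), so any residual yaw difference is attributed to drift.
    """
    dyaw = yaw_revisit - yaw_first - 2.0 * math.pi * full_turns
    return dyaw / (t_revisit - t_first)

def compensate_yaw(yaws, times, rate):
    """Post-processing pass: remove the linear drift from the yaw log."""
    return [y - rate * t for y, t in zip(yaws, times)]
```

In practice the paper fuses the rates from all associations (weighted by position uncertainty) rather than using a single pair as in this toy.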
Amplitude and phase controlled adaptive optics system
NASA Astrophysics Data System (ADS)
Pham, Ich; Ma, Sam
2006-06-01
An adaptive optics (AO) system is used to control the deformable mirror (DM) actuators to compensate for the optical effects introduced by turbulence in the Earth's atmosphere and distortions produced by the optical elements between the distant object and its local sensor. The typical AO system commands the DM actuators while minimizing the measured wave front (WF) phase error. This is known as the phase conjugator system, which does not work well under strong scintillation conditions because both amplitude and phase are corrupted along the propagation path. In order to compensate for the wave front amplitude, a dual-DM field conjugator system may be used, in which the first DM compensates for the amplitude and the second for the phase. The amplitude controller requires the mapping from the DM1 actuator command to the DM2 intensity. This can be obtained from either a calibration routine or an intensity transport equation, which relates the phase to the intensity. Instead of a dual DM, a single Spatial Light Modulator (SLM) may control the amplitude and phase independently. The technique uses a spatial carrier frequency; the resulting intensity is related to the carrier modulation, while the phase is the average carrier phase. The dynamical AO performance using the carrier modulation is limited by the actuator frequency response and not by the computational load of the controller algorithm. Simulations of the proposed field conjugator systems show significant improvement in on-axis performance compared to the phase conjugator system.
40 CFR 1065.270 - Chemiluminescent detector.
Code of Federal Regulations, 2011 CFR
2011-07-01
... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no bias high and no bias low...
40 CFR 1065.270 - Chemiluminescent detector.
Code of Federal Regulations, 2012 CFR
2012-07-01
... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no bias high and no bias low...
40 CFR 1065.270 - Chemiluminescent detector.
Code of Federal Regulations, 2013 CFR
2013-07-01
... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0% (that is, no bias high and no bias low...
40 CFR 1065.270 - Chemiluminescent detector.
Code of Federal Regulations, 2010 CFR
2010-07-01
... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no bias high and no bias low...
40 CFR 1065.372 - NDUV analyzer HC and H2O interference verification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... compensation algorithms that utilize measurements of other gases to meet this interference verification, simultaneously conduct such measurements to test the algorithms during the analyzer interference verification. (c...
Using ultrasound CBE imaging without echo shift compensation for temperature estimation.
Tsui, Po-Hsiang; Chien, Yu-Ting; Liu, Hao-Li; Shu, Yu-Chen; Chen, Wen-Shiang
2012-09-01
Clinical trials have demonstrated that hyperthermia improves cancer treatments. Previous studies developed ultrasound temperature imaging methods, based on the changes in backscattered energy (CBE), to monitor temperature variations during hyperthermia. Echo shift, induced by increasing temperature, contaminates the CBE image, and its tracking and compensation are normally required to ensure that the CBE estimate at each pixel is correct. To obtain a simplified algorithm that would allow real-time computation of CBE images, this study evaluated the usefulness of CBE imaging without echo shift compensation in detecting temperature distributions. Experiments on phantoms, using different scatterer concentrations, and on porcine livers were conducted to acquire raw backscattered data at temperatures ranging from 37°C to 45°C. Tissue samples of pork tenderloin were ablated in vitro by microwave irradiation to evaluate the feasibility of using the uncompensated CBE image to monitor tissue ablation. CBE image construction was based on a ratio map obtained by dividing the envelope image by the reference envelope image at 37°C. The experimental results demonstrated that the CBE image obtained without echo shift compensation is able to estimate temperature variations induced during uniform heating or tissue ablation. The magnitude of the CBE as a function of temperature obtained without compensation is stronger than that with compensation, implying that the uncompensated CBE image has better sensitivity for detecting temperature. These findings suggest that echo shift tracking and compensation may be unnecessary in practice, thus simplifying the algorithm required to implement real-time CBE imaging. Copyright © 2012 Elsevier B.V. All rights reserved.
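The uncompensated CBE map described above reduces to a pixel-wise energy ratio between the current envelope image and the 37 °C reference, expressed in dB. A minimal sketch of that computation follows; the epsilon guard and the dB convention are assumptions for illustration, not details from the paper.

```python
import math

def cbe_map(env_t, env_ref, eps=1e-12):
    """Pixel-wise CBE map in dB, with no echo-shift tracking.

    env_t   : 2D envelope image at the current temperature,
    env_ref : 2D reference envelope image (e.g. at 37 degrees C).
    Backscattered energy is the squared envelope, so the ratio of
    squared values gives the change in backscattered energy.
    """
    return [[10.0 * math.log10((a * a + eps) / (b * b + eps))
             for a, b in zip(row_t, row_r)]
            for row_t, row_r in zip(env_t, env_ref)]
```

Skipping echo-shift tracking means each pixel is compared against the reference at the same coordinates, which is exactly the simplification whose validity the study evaluates.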
Prostate implant reconstruction from C-arm images with motion-compensated tomosynthesis
Dehghan, Ehsan; Moradi, Mehdi; Wen, Xu; French, Danny; Lobo, Julio; Morris, W. James; Salcudean, Septimiu E.; Fichtinger, Gabor
2011-01-01
Purpose: Accurate localization of prostate implants from several C-arm images is necessary for ultrasound-fluoroscopy fusion and intraoperative dosimetry. The authors propose a computational motion compensation method for tomosynthesis-based reconstruction that enables 3D localization of prostate implants from C-arm images despite C-arm oscillation and sagging. Methods: Five C-arm images are captured by rotating the C-arm around its primary axis, while measuring its rotation angle using a protractor or the C-arm joint encoder. The C-arm images are processed to obtain binary seed-only images from which a volume of interest is reconstructed. The motion compensation algorithm, iteratively, compensates for 2D translational motion of the C-arm by maximizing the number of voxels that project on a seed projection in all of the images. This obviates the need for C-arm full pose tracking traditionally implemented using radio-opaque fiducials or external trackers. The proposed reconstruction method is tested in simulations, in a phantom study and on ten patient data sets. Results: In a phantom implanted with 136 dummy seeds, the seed detection rate was 100% with a localization error of 0.86 ± 0.44 mm (Mean ± STD) compared to CT. For patient data sets, a detection rate of 99.5% was achieved in approximately 1 min per patient. The reconstruction results for patient data sets were compared against an available matching-based reconstruction method and showed relative localization difference of 0.5 ± 0.4 mm. Conclusions: The motion compensation method can successfully compensate for large C-arm motion without using radio-opaque fiducial or external trackers. Considering the efficacy of the algorithm, its successful reconstruction rate and low computational burden, the algorithm is feasible for clinical use. PMID:21992346
High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization
NASA Astrophysics Data System (ADS)
Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan
2017-04-01
Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction, however, is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts with conventional analytical algorithms. To address this problem, we propose a motion-compensated total variation regularization approach that tries to fully exploit the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
NASA Astrophysics Data System (ADS)
Irsch, Kristina; Lee, Soohyun; Bose, Sanjukta N.; Kang, Jin U.
2018-02-01
We present an optical coherence tomography (OCT) imaging system that effectively compensates unwanted axial motion with micron-scale accuracy. The OCT system is based on a swept-source (SS) engine (1060-nm center wavelength, 100-nm full-width sweeping bandwidth, and 100-kHz repetition rate), with axial and lateral resolutions of approximately 4.5 and 8.5 microns, respectively. The SS-OCT system incorporates a distance-sensing method utilizing an envelope-based surface detection algorithm. The algorithm locates the target surface from the B-scans, taking into account not just the first or highest peak but the entire signature of sequential A-scans. Subsequently, a Kalman filter is applied as a predictor to make up for system latencies before sending the calculated position information to control a linear motor, adjusting and maintaining a fixed system-target distance. To test system performance, the motion-correction algorithm was compared to earlier, more basic peak-based surface detection methods and to performing no motion compensation. Results demonstrate increased robustness and reproducibility with the novel technique, particularly noticeable in multilayered tissues. Implementing such motion compensation into clinical OCT systems may thus improve the reliability of objective and quantitative information that can be extracted from OCT measurements.
Mission and Navigation Design for the 2009 Mars Science Laboratory Mission
NASA Technical Reports Server (NTRS)
D'Amario, Louis A.
2008-01-01
NASA's Mars Science Laboratory mission will launch the next mobile science laboratory to Mars in the fall of 2009, with arrival at Mars occurring in the summer of 2010. A heat shield, parachute, and rocket-powered descent stage, including a sky crane, will be used to land the rover safely on the surface of Mars. The direction of the atmospheric entry vehicle lift vector will be controlled by a hypersonic entry guidance algorithm to compensate for entry trajectory errors and counteract atmospheric and aerodynamic dispersions. The key challenges for mission design are (1) to develop a launch/arrival strategy that provides communications coverage during the Entry, Descent, and Landing phase, either from an X-band direct-to-Earth link or from an Ultra High Frequency link to the Mars Reconnaissance Orbiter, for landing latitudes between 30 deg North and 30 deg South, while satisfying mission constraints on Earth departure energy and Mars atmospheric entry speed, and (2) to generate Earth-departure targets for the Atlas V-541 launch vehicle for the specified launch/arrival strategy. The launch/arrival strategy employs a 30-day baseline launch period and a 27-day extended launch period with varying arrival dates at Mars. The key challenges for navigation design are (1) to deliver the spacecraft to the atmospheric entry interface point (Mars radius of 3522.2 km) with an inertial entry flight path angle error of +/- 0.20 deg (3 sigma), (2) to provide knowledge of the entry state vector accurate to +/- 2.8 km (3 sigma) in position and +/- 2.0 m/s (3 sigma) in velocity for initializing the entry guidance algorithm, and (3) to ensure a 99% probability of successful delivery at Mars with respect to available cruise stage propellant. Orbit determination is accomplished via ground processing of multiple complementary radiometric data types: Doppler, range, and Delta-Differential One-way Ranging (a Very Long Baseline Interferometry measurement).
The navigation strategy makes use of up to five interplanetary trajectory correction maneuvers to achieve entry targeting requirements. The requirements for cruise propellant usage and atmospheric entry targeting and knowledge are met with ample margins.
A homotopy algorithm for digital optimal projection control GASD-HADOC
NASA Technical Reports Server (NTRS)
Collins, Emmanuel G., Jr.; Richter, Stephen; Davis, Lawrence D.
1993-01-01
The linear-quadratic-gaussian (LQG) compensator was developed to facilitate the design of control laws for multi-input, multi-output (MIMO) systems. The compensator is computed by solving two algebraic equations for which standard solution methods exist. Unfortunately, the minimal dimension of an LQG compensator is almost always equal to the dimension of the plant and can thus often violate practical implementation constraints on controller order. This deficiency is especially highlighted when considering control design for high-order systems such as flexible space structures, and it motivated the development of techniques that enable the design of optimal controllers whose dimension is less than that of the design plant. One such technique is a homotopy approach based on the optimal projection equations that characterize the necessary conditions for optimal reduced-order control. Homotopy algorithms have global convergence properties and hence do not require that the initializing reduced-order controller be close to the optimal reduced-order controller to guarantee convergence. However, the homotopy algorithm previously developed for solving the optimal projection equations has sublinear convergence properties, and the convergence slows at higher authority levels and may fail. A new homotopy algorithm for synthesizing optimal reduced-order controllers for discrete-time systems is described. Unlike the previous homotopy approach, the new algorithm is a gradient-based, parameter optimization formulation and was implemented in MATLAB. The results reported may offer the foundation for a reliable approach to optimal, reduced-order controller design.
An Alternate Method to Springback Compensation for Sheet Metal Forming
Omar, Badrul; Jusoff, Kamaruzaman
2014-01-01
The aim of this work is to improve the accuracy of cold-stamped products by accommodating springback. This is a numerical approach that improves the accuracy of springback analysis and the die compensation process by combining the displacement adjustment (DA) method and the spring forward (SF) algorithm. This alternate hybrid method (HM) is conducted by first employing the DA method followed by the SF method, instead of either the DA or SF method individually. The springback shape and the target part are used to optimize the die surfaces compensating springback. The hybrid method (HM) algorithm has been coded in Fortran and tested on two- and three-dimensional models. By implementing the HM, the springback error can be decreased and the dimensional deviation falls within the predefined tolerance range. PMID:25165738
A computer program for borehole compensation of dual-detector density well logs
Scott, James Henry
1978-01-01
The computer program described in this report was developed for applying a borehole-rugosity and mudcake compensation algorithm to dual-density logs using the following information: the water level in the drill hole, hole diameter (from a caliper log if available, or the nominal drill diameter if not), and the two gamma-ray count rate logs from the near and far detectors of the density probe. The equations that represent the compensation algorithm and the calibration of the two detectors (for converting count rate to density) were derived specifically for a probe manufactured by Comprobe Inc. (5.4 cm O.D. dual-density-caliper); they are not applicable to other probes. However, equivalent calibration and compensation equations can be empirically determined for any other similar two-detector density probe and substituted in the computer program listed in this report. * Use of brand names in this report does not necessarily constitute endorsement by the U.S. Geological Survey.
50 CFR 600.245 - Council member compensation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false Council member compensation. 600.245 Section 600.245 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MAGNUSON-STEVENS ACT PROVISIONS Council Membership § 600...
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.
2003-01-01
This SIMBIOS contract supports several activities over its three-year time span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are performing model calculations to incorporate various absorbing aerosol models into the tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.
Noise-cancellation-based nonuniformity correction algorithm for infrared focal-plane arrays.
Godoy, Sebastián E; Pezoa, Jorge E; Torres, Sergio N
2008-10-10
The spatial fixed-pattern noise (FPN) inherently generated in infrared (IR) imaging systems severely compromises the quality of the acquired imagery, even making such images inappropriate for some applications. The FPN refers to the inability of the photodetectors in the focal-plane array to render a uniform output image when a uniform-intensity scene is being imaged. We present a noise-cancellation-based algorithm that compensates for the additive component of the FPN. The proposed method relies on the assumption that a source of noise correlated with the additive FPN is available to the IR camera. An important feature of the algorithm is that all the calculations reduce to a simple equation, which allows for the bias compensation of the raw imagery. The algorithm performance is tested using real IR image sequences and is compared to some classical methodologies. (c) 2008 Optical Society of America
Joint source-channel coding for motion-compensated DCT-based SNR scalable video.
Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K
2002-01-01
In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
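At its core, the allocation problem described above picks, for each scalable layer, an operating point that splits rate between source and channel coding so that total rate meets the budget and modeled end-to-end distortion is minimal. The brute-force sketch below illustrates that selection over small candidate sets; each candidate is a hypothetical (total rate, modeled distortion) pair, not an H.263/RCPC operating point from the paper.

```python
import itertools

def allocate(layers, budget):
    """Exhaustive joint source-channel rate allocation.

    layers : one list per scalable layer of candidate operating points,
             each a (total_rate, modeled_distortion) pair derived from
             the universal rate-distortion characteristics.
    budget : total available bit rate.
    Returns (chosen_points, total_distortion) or None if infeasible.
    """
    best = None
    for combo in itertools.product(*layers):
        rate = sum(r for r, _ in combo)
        dist = sum(d for _, d in combo)
        if rate <= budget and (best is None or dist < best[1]):
            best = (combo, dist)
    return best
```

Exhaustive search is exponential in the number of layers; with larger candidate sets one would switch to a Lagrangian or dynamic-programming allocation, but the feasibility-plus-minimum-distortion criterion is the same.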
Single-dose volume regulation algorithm for a gas-compensated intrathecal infusion pump.
Nam, Kyoung Won; Kim, Kwang Gi; Sung, Mun Hyun; Choi, Seong Wook; Kim, Dae Hyun; Jo, Yung Ho
2011-01-01
The internal pressures of medication reservoirs of gas-compensated intrathecal medication infusion pumps decrease when medication is discharged, and these discharge-induced pressure drops can decrease the volume of medication discharged. To prevent these reductions, the volumes discharged must be adjusted to maintain the required dosage levels. In this study, the authors developed an automatic control algorithm for an intrathecal infusion pump developed by the Korean National Cancer Center that regulates single-dose volumes. The proposed algorithm estimates the amount of medication remaining and adjusts control parameters automatically to maintain single-dose volumes at predetermined levels. Experimental results demonstrated that the proposed algorithm can regulate mean single-dose volumes with a variation of <3% and estimate the remaining medication volume with an accuracy of >98%. © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
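The mechanism behind the discharge-induced pressure drop is simply gas expansion: as medication leaves the reservoir, the compensating gas occupies more volume and its pressure falls, reducing flow per unit of valve-open time. The toy below sketches one way such a controller could scale the dose, assuming isothermal (Boyle's law) gas behavior and flow proportional to pressure; all parameters are hypothetical and this is not the pump's actual control law.

```python
def open_time(base_time, p0, gas_v0, delivered):
    """Valve-open time per dose, scaled to hold dose volume constant.

    base_time : open time that delivers the target dose at pressure p0,
    p0        : initial gas pressure (arbitrary units),
    gas_v0    : initial gas volume,
    delivered : cumulative medication volume already discharged
                (the gas now fills gas_v0 + delivered).
    Assumes isothermal expansion (p*V = const) and flow rate ~ pressure.
    """
    p = p0 * gas_v0 / (gas_v0 + delivered)  # Boyle's law: current pressure
    return base_time * p0 / p               # longer opening at lower pressure
```

Estimating the remaining medication from the same gas-law relation is what lets the algorithm adjust its parameters automatically as the reservoir empties.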
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.M.W.; Frey, E.C.; Lalush, D.S.
1996-12-31
We investigated methods to accurately reconstruct 180° truncated TCT and SPECT projection data obtained from a right-angle dual-camera SPECT system for myocardial SPECT with attenuation compensation. The 180° data reconstruction methods would permit substantial savings in transmission data acquisition time. Simulation data from the 3D MCAT phantom and clinical data from large patients were used in the evaluation study. Different transmission reconstruction methods, including the FBP, transmission ML-EM, transmission ML-SA, and BIT algorithms, with and without using the body contour as support, were used in the TCT image reconstructions. The accuracy of both the TCT and attenuation-compensated SPECT images was evaluated for different degrees of truncation and noise levels. We found that using the FBP-reconstructed TCT images resulted in higher count density in the left ventricular (LV) wall of the attenuation-compensated SPECT images. The LV wall count densities obtained using the iteratively reconstructed TCT images with and without support were similar to each other and were more accurate than those using the FBP. However, the TCT images obtained with support show fewer image artifacts than those without support. Among the iterative reconstruction algorithms, the ML-SA algorithm provides the most accurate reconstruction but is the slowest. The BIT algorithm is the fastest but shows the most image artifacts. We conclude that accurate attenuation-compensated images can be obtained with truncated 180° data from large patients using a right-angle dual-camera SPECT system.
Nonlinear Blind Compensation for Array Signal Processing Application
Ma, Hong; Jin, Jiang; Zhang, Hua
2018-01-01
Recently, the nonlinear blind compensation technique has attracted growing attention in array signal processing applications. However, due to the nonlinear distortion introduced by an array receiver consisting of multi-channel radio frequency (RF) front-ends, it is difficult to estimate the parameters of the array signal accurately. A novel nonlinear blind compensation algorithm is proposed that mitigates the nonlinearity of the array receiver and improves its spurious-free dynamic range (SFDR), allowing more precise estimation of target-signal parameters such as their two-dimensional directions of arrival (2-D DOAs). The method is designed as follows: the nonlinear model parameters of a single channel of the RF front-end are extracted and used to synchronously compensate the nonlinear distortion of the entire receiver. A verification experiment on array signals from a uniform circular array (UCA) is used to test the validity of the approach. The real-world experimental results show that the SFDR of the receiver is enhanced, leading to a significant improvement in 2-D DOA estimation performance for weak target signals. These results demonstrate that the nonlinear blind compensation algorithm is effective for estimating the parameters of weak array signals in the presence of strong jammers. PMID:29690571
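The basic idea of compensating a front-end nonlinearity with a post-inverse can be sketched in a minimal single-channel form. This is only a toy: the paper's algorithm is blind and applies one channel's extracted model across the whole multi-channel receiver, whereas here a cubic distortion coefficient is simply assumed known:

```python
import numpy as np

# Minimal single-channel sketch of post-inverse nonlinearity compensation.
# The cubic coefficient a3 is assumed known here; the paper estimates the
# model blindly and applies it to the entire array receiver.
a3 = 0.05                                    # third-order distortion coefficient
t = np.linspace(0, 1, 4096, endpoint=False)
x = np.sin(2 * np.pi * 50 * t)               # clean tone
y = x + a3 * x**3                            # distorted front-end output
z = y - a3 * y**3                            # first-order post-inverse

# Residual error after compensation is much smaller than the raw distortion.
err_before = np.max(np.abs(y - x))
err_after = np.max(np.abs(z - x))
```

The first-order post-inverse cancels the cubic term exactly and leaves only higher-order residuals of size O(a3^2), which is the mechanism behind the SFDR improvement reported above.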
Experimental validation of phase-only pre-compensation over 494 m free-space propagation.
Brady, Aoife; Berlich, René; Leonhard, Nina; Kopf, Teresa; Böttner, Paul; Eberhardt, Ramona; Reinlein, Claudia
2017-07-15
It is anticipated that ground-to-geostationary orbit (GEO) laser communication will benefit from pre-compensation of atmospheric turbulence for laser beam propagation through the atmosphere. Theoretical simulations and laboratory experiments have determined its feasibility; extensive free-space experimental validation has, however, yet to be performed. Therefore, we designed and implemented an adaptive optics (AO)-box which pre-compensates an outgoing laser beam (uplink) using the measurements of an incoming beam (downlink). The setup was designed to approximate the baseline scenario over a horizontal test range of 0.5 km and consisted of a ground terminal with the AO-box and a simplified approximation of a satellite terminal. Our results confirmed that we could focus the uplink beam on the satellite terminal using AO under a point-ahead angle of 28 μrad. Furthermore, we demonstrated a considerable increase in the intensity received at the satellite. These results are further testimony that AO pre-compensation is a viable technique to enhance Earth-to-GEO optical communication.
An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.
1998-01-01
An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.
Malekiha, Mahdi; Tselniker, Igor; Plant, David V
2016-02-22
In this work, we propose and experimentally demonstrate a novel low-complexity technique for fiber nonlinearity compensation. We achieved a transmission distance of 2818 km for a 32-GBaud dual-polarization 16QAM signal. For efficient implementation, and to facilitate integration with conventional digital signal processing (DSP) approaches, we independently compensate fiber nonlinearities after linear impairment equalization. Therefore, this algorithm can be easily implemented in currently deployed transmission systems after linear DSP. The proposed equalizer operates at one sample per symbol and requires only one computation step. The structure of the algorithm is based on a first-order perturbation model with quantized perturbation coefficients, and it does not require any prior calculation or detailed knowledge of the transmission system. We identified common symmetries between perturbation coefficients to avoid duplicate and unnecessary operations. In addition, we use only a few adaptive filter coefficients by grouping multiple nonlinear terms and dedicating a single adaptive nonlinear filter coefficient to each group. Finally, the complexity of the proposed algorithm is more than an order of magnitude lower than that of previously studied nonlinear equalizers.
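The first-order perturbation correction described above is a weighted triplet sum over neighboring symbols of the form C[m, n] · A[k+m] A[k+n] conj(A[k+m+n]). The coefficient values, their quantization, and the grouping into shared adaptive taps are the paper's contribution and are not reproduced here; the sketch below uses random small stand-in coefficients purely to exercise the indexing and the m↔n symmetry that the authors exploit:

```python
import numpy as np

def perturbation_correction(A, C, M):
    """First-order perturbation triplet sum over a (2M+1)^2 coefficient grid.

    Sketch only: real coefficients come from the perturbation model; random
    stand-ins are used here. The triplet A[k+m]A[k+n]conj(A[k+m+n]) is
    symmetric in (m, n), so C[m, n] = C[n, m] halves the distinct taps.
    """
    N = len(A)
    corr = np.zeros(N, dtype=complex)
    for m in range(-M, M + 1):
        for n in range(-M, M + 1):
            s = np.zeros(N, dtype=complex)
            for k in range(N):
                if 0 <= k + m < N and 0 <= k + n < N and 0 <= k + m + n < N:
                    s[k] = A[k + m] * A[k + n] * np.conj(A[k + m + n])
            corr += C[m + M, n + M] * s
    return A - corr

# Demo with a random symbol sequence and small stand-in coefficients.
rng = np.random.default_rng(2)
A = rng.standard_normal(32) + 1j * rng.standard_normal(32)
C = 1e-3 * rng.standard_normal((5, 5))
corrected = perturbation_correction(A, C, M=2)
```

Because the triplet term is symmetric under swapping m and n, applying C or its transpose yields the identical correction, which is exactly the duplicate-operation saving the abstract mentions.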
Transport delay compensation for computer-generated imagery systems
NASA Technical Reports Server (NTRS)
Mcfarland, Richard E.
1988-01-01
In the problem of pure transport delay in a low-pass system, a trade-off exists with respect to performance within and beyond a frequency bandwidth. When activity beyond the band is attenuated because of other considerations, this trade-off may be used to improve the performance within the band. Specifically, transport delay in computer-generated imagery systems is reduced to a manageable problem by recognizing frequency limits in vehicle activity and manual-control capacity. Based on these limits, a compensation algorithm has been developed for use in aircraft simulation at NASA Ames Research Center. For direct measurement of transport delays, a beam-splitter experiment is presented that accounts for the complete flight simulation environment. Values determined by this experiment are appropriate for use in the compensation algorithm. The algorithm extends the bandwidth of high-frequency flight simulation to well beyond that of normal pilot inputs. Within this bandwidth, the visual scene presentation manifests negligible gain distortion and phase lag. After a year of utilization, two minor exceptions to universal simulation applicability have been identified and subsequently resolved.
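The in-band versus out-of-band trade-off described above is the classic behavior of a lead compensator: phase lead inside the band is bought with gain amplification beyond it. A minimal sketch with illustrative constants (not values from this report):

```python
import numpy as np

# Continuous-time lead/lag compensator H(s) = (1 + a*T*s) / (1 + T*s).
# With a > 1 it provides phase lead, used to offset transport delay, at the
# cost of gain amplification at higher frequencies.
a, T = 4.0, 0.05                     # illustrative values only
w = np.logspace(-1, 3, 400)          # rad/s
H = (1 + 1j * w * a * T) / (1 + 1j * w * T)
phase_deg = np.degrees(np.angle(H))
gain = np.abs(H)

w_peak = w[np.argmax(phase_deg)]     # max lead occurs near 1/(T*sqrt(a))
```

The maximum lead is arcsin((a-1)/(a+1)), about 36.9 degrees for a = 4, occurring at w = 1/(T*sqrt(a)) = 10 rad/s, while the high-frequency gain approaches a, the amplification that must be attenuated elsewhere for the trade-off to pay off.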
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead, while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for a much longer delay and causes smaller gain error at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in prediction. Although the Sobiski/Cardullo predictor, a state space filter, can theoretically compensate for the longest delay with the least gain distortion of the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner. As a result, the predictor can accurately provide the desired amount of prediction, while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests.
Theoretical analyses of data from offline simulations with time delay compensation show that both novel predictors effectively suppress the large spikes caused by the McFarland compensator. The phase errors of the three predictors are not significant. The adaptive predictor yields greater gain errors than the McFarland predictor for short delays (96 and 138 ms), but shows smaller errors for long delays (186 and 282 ms). The advantage of the adaptive predictor becomes more obvious for a longer time delay. Conversely, the state space predictor results in substantially smaller gain error than the other two predictors for all four delay cases.
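The mechanism of delay compensation by prediction can be sketched with a plain constant-velocity Kalman filter that extrapolates the tracked state ahead by the transport delay. This is far simpler than the report's adaptive stochastic-approximation predictor, but it shows the principle:

```python
import numpy as np

def kalman_predictor(z, dt, delay_steps, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter extrapolating `delay_steps` ahead.

    Minimal sketch of delay compensation by prediction; the report's
    adaptive predictor updates its weights online instead.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
    H = np.array([[1.0, 0.0]])                   # observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.zeros((2, 1))
    P = np.eye(2)
    Fd = np.linalg.matrix_power(F, delay_steps)  # extrapolation over the delay
    out = []
    for zk in z:
        x = F @ x                                 # predict
        P = F @ P @ F.T + Q
        y = zk - (H @ x)[0, 0]                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T / S[0, 0]                     # Kalman gain
        x = x + K * y                             # update
        P = (np.eye(2) - K @ H) @ P
        out.append((H @ Fd @ x)[0, 0])            # position predicted past the delay
    return np.array(out)

# Demo: a ramp observed with small noise; predict 10 samples (one delay) ahead.
dt, delay = 0.01, 10
t = np.arange(0, 5, dt)
truth = 2.0 * t
rng = np.random.default_rng(0)
z = truth + 0.01 * rng.standard_normal(t.size)
pred = kalman_predictor(z, dt, delay)
```

After the filter converges, the extrapolated output tracks the future value of the signal far better than the raw delayed measurement does, which is the gain-versus-spike behavior the report quantifies for its more sophisticated predictors.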
NASA Technical Reports Server (NTRS)
Karmarkar, J. S.
1972-01-01
Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.
40 CFR 1065.280 - Paramagnetic and magnetopneumatic O2 detection analyzers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... algorithms that are functions of other gaseous measurements and the engine's known or assumed fuel properties. The target value for any compensation algorithm is 0.0% (that is, no bias high and no bias low...
NASA Technical Reports Server (NTRS)
Swickrath, Michael J.; Anderson, Molly; McMillin, Summer; Boerman, Craig
2011-01-01
Monitoring carbon dioxide (CO2) concentration within a spacecraft or spacesuit is critically important to ensuring the safety of the crew. Carbon dioxide uniquely absorbs light at wavelengths of 3.95 micrometers and 4.26 micrometers. As a result, non-dispersive infrared (NDIR) spectroscopy can be employed as a reliable and inexpensive method for the quantification of CO2 within the atmosphere. A multitude of commercial-off-the-shelf (COTS) NDIR sensors exist for CO2 quantification. The COTS sensors provide reasonable accuracy so long as the measurements are attained under conditions close to the calibration conditions of the sensor (typically 21.1 C and 1 atm). However, as pressure deviates from atmospheric to the pressures associated with a spacecraft (8.0-10.2 PSIA) or spacesuit (4.1-8.0 PSIA), the error in the measurement grows increasingly large. In addition to pressure and temperature dependencies, the infrared transmissivity through a volume of gas also depends on the composition of the gas. As the composition is not known a priori, accurate sub-ambient detection must rely on iterative sensor compensation techniques. This manuscript describes the development of recursive compensation algorithms for sub-ambient detection of CO2 with COTS NDIR sensors. In addition, the basis of the exponential loss in accuracy is developed theoretically considering thermal, Doppler, and Lorentz broadening effects which arise as a result of the temperature, pressure, and composition of the gas mixture under analysis. As a result, this manuscript provides an approach to employing COTS sensors at sub-ambient conditions and may also lend insight into designing future NDIR sensors for aerospace application.
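The leading-order term of such a compensation is the ideal-gas number-density correction: absorption scales with the absorber number density, which is proportional to x·P/T, so a sensor calibrated at 1 atm under-reads at the sub-ambient cabin and suit pressures quoted above. The sketch below shows only that leading term (function and constant names are mine; the Doppler and Lorentz broadening contributions handled by the manuscript's recursive scheme are omitted):

```python
# First-order pressure/temperature compensation for an NDIR CO2 reading.
# Leading ideal-gas term only: absorber number density scales as x*P/T,
# so a reading calibrated at (P0, T0) must be rescaled by (P0/P)*(T/T0).
# Line-broadening corrections from the paper's recursive scheme are omitted.
P0_KPA = 101.325   # calibration pressure (1 atm)
T0_K = 294.25      # calibration temperature (21.1 C)

def compensate_co2(x_raw_ppm, p_kpa, t_k):
    """Map a raw reading (calibrated at P0, T0) to the true mole fraction."""
    return x_raw_ppm * (P0_KPA / p_kpa) * (t_k / T0_K)

# Example: a hypothetical suit atmosphere at 29.6 kPa (about 4.3 psia), 21.1 C.
suit_ppm = compensate_co2(1000.0, 29.6, 294.25)
```

At roughly one-third of an atmosphere, a raw reading of 1000 ppm corresponds to more than three times that true mole fraction, which is the "increasingly large" error the abstract describes.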
Generalized algebraic scene-based nonuniformity correction algorithm.
Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott
2005-02-01
A generalization of a recently developed algebraic scene-based nonuniformity correction algorithm for focal plane array (FPA) sensors is presented. The new technique uses pairs of image frames exhibiting arbitrary one- or two-dimensional translational motion to compute compensator quantities that are then used to remove nonuniformity in the bias of the FPA response. Unlike its predecessor, the generalization does not require the use of either a blackbody calibration target or a shutter. The algorithm has a low computational overhead, lending itself to real-time hardware implementation. The high-quality correction ability of this technique is demonstrated through application to real IR data from both cooled and uncooled infrared FPAs. A theoretical and experimental error analysis is performed to study the accuracy of the bias compensator estimates in the presence of two main sources of error.
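The algebraic idea can be sketched in 1-D for an exactly one-pixel scene shift: differencing shifted pixel pairs between the two frames cancels the scene and isolates detector-to-detector bias differences, which a cumulative sum integrates back into the bias pattern up to an unobservable global offset. (The published algorithm handles arbitrary 1-D or 2-D translations; this is the simplest instance.)

```python
import numpy as np

# 1-D sketch of algebraic scene-based bias estimation with a one-pixel shift.
rng = np.random.default_rng(1)
n = 128
scene = rng.uniform(0.0, 1.0, n + 1)          # true radiance along the scan
bias = rng.normal(0.0, 0.2, n)                # fixed-pattern bias per detector

frame1 = scene[1:] + bias                     # detectors see s[1..n]
frame2 = scene[:-1] + bias                    # scene shifted by one pixel

diff = frame2[1:] - frame1[:-1]               # equals bias[1:] - bias[:-1]
bias_est = np.concatenate(([0.0], np.cumsum(diff)))
bias_est += bias.mean() - bias_est.mean()     # fix the unobservable offset
```

With noiseless frames the recovered bias is exact, so subtracting it restores the scene; in practice noise accumulates along the chain, which is why the paper analyzes the estimator's error sources.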
NASA Technical Reports Server (NTRS)
Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.
2000-01-01
We present two algorithms for the separation of spectral features caused by atmospheric and surface components in Thermal Emission Spectrometer (TES) data. One algorithm uses radiative transfer and successive least squares fitting to find spectral shapes first for atmospheric dust, then for water-ice aerosols, and finally for surface emissivity. A second, independent algorithm uses a combination of factor analysis, target transformation, and deconvolution to simultaneously find dust, water-ice, and surface emissivity spectral shapes. Both algorithms have been applied to TES spectra, and both find very similar atmospheric and surface spectral shapes. For TES spectra taken in nadir geometry during the aerobraking and science phasing periods, these two algorithms give meaningful and usable surface emissivity spectra that can be used for mineralogical identification.
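The least-squares separation step can be sketched with synthetic Gaussian stand-ins for the dust and ice shapes (these are not TES-derived shapes, and the wavenumber grid and amplitudes are invented): fit the known atmospheric shapes to the observed spectrum and take the residual as the surface emissivity contribution.

```python
import numpy as np

# Sketch of spectral-shape separation by linear least squares.
# Dust/ice shapes are synthetic Gaussian stand-ins, not actual TES shapes.
wn = np.linspace(200, 1600, 300)                      # wavenumber grid, cm^-1
dust = np.exp(-((wn - 1075) / 120.0) ** 2)            # stand-in dust shape
ice = np.exp(-((wn - 825) / 80.0) ** 2)               # stand-in water-ice shape
surface = 0.02 * np.sin(wn / 150.0)                   # weak surface signature

observed = 0.6 * dust + 0.3 * ice + surface           # synthetic mixed spectrum
A = np.column_stack([dust, ice])
coeffs, *_ = np.linalg.lstsq(A, observed, rcond=None)
surface_est = observed - A @ coeffs                   # residual = surface term
```

Because the surface signature is weak and only partially correlated with the atmospheric shapes, the fitted dust and ice abundances land close to the true mixing coefficients and the residual closely tracks the surface term; the real algorithms add radiative transfer and iterate over successive components.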
Algorithms for output feedback, multiple-model, and decentralized control problems
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.
High-resolution spectrometer/interferometer
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.; Norton, R. H.; Schindler, R. A.
1980-01-01
Modified double-pass interferometer has several features that maximize its resolution. Proposed for rocket-borne probes of upper atmosphere, it includes cat's-eye retroreflectors in both arms, wedge-shaped beam splitter, and wedged optical-path compensator. Advantages are full tilt compensation, minimal spectrum "channeling," easy tunability, maximum fringe contrast, and even two-sided interferograms.
Shading correction assisted iterative cone-beam CT reconstruction
NASA Astrophysics Data System (ADS)
Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye
2017-11-01
Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and cause deterioration of the piecewise constant property assumed in reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method that is referred to as the shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and updated in the iterative process whenever necessary. When the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper.
Unlike existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging in the clinic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Won; Stein, Michael L.; Wang, Jiali
Climate models robustly imply that some significant change in precipitation patterns will occur. Models consistently project that the intensity of individual precipitation events increases by approximately 6-7%/K, following the increase in atmospheric water content, but that total precipitation increases by a lesser amount (2-3%/K in the global average). Some other aspect of precipitation events must then change to compensate for this difference. We develop here a new methodology for identifying individual rainstorms and studying their physical characteristics - including starting location, intensity, spatial extent, duration, and trajectory - that allows us to identify that compensating mechanism. We apply this technique to precipitation over the contiguous U.S. from both radar-based data products and high-resolution model runs simulating 100 years of business-as-usual warming. In model studies, we find that the dominant compensating mechanism is a reduction of storm size. In summer, rainstorms become more intense but smaller; in winter, rainstorm shrinkage still dominates, but storms also become less numerous and shorter in duration. These results imply that flood impacts from climate change will be less severe than would be expected from changes in precipitation intensity alone. We show also that projected changes are smaller than model-observation biases, implying that the best means of incorporating them into impact assessments is via "data-driven simulations" that apply model-projected changes to observational data. We therefore develop a simulation algorithm that statistically describes model changes in precipitation characteristics and adjusts data accordingly, and show that, especially for summertime precipitation, it outperforms simulation approaches that do not include spatial information.
Ocean observations with EOS/MODIS: Algorithm development and post launch studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1995-01-01
An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm was carried out. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. The development of a multi-layer Monte Carlo radiative transfer code that includes polarization by molecular and aerosol scattering and wind-induced sea surface roughness has been completed. Comparison tests with an existing two-layer successive order of scattering code suggest that both codes are capable of producing top-of-atmosphere radiances with errors usually less than 0.1 percent. An initial set of simulations to study the effects of ignoring the polarization of the ocean-atmosphere light field, in both the development of the atmospheric correction algorithm and the generation of the lookup tables used for operation of the algorithm, has been completed. An algorithm was developed that can be used to invert the radiance exiting the top and bottom of the atmosphere to yield the columnar optical properties of the atmospheric aerosol under clear sky conditions over the ocean, for aerosol optical thicknesses as large as 2. The algorithm is capable of retrievals with such large optical thicknesses because all significant orders of multiple scattering are included.
Devalla, Sripad Krishna; Chin, Khai Sing; Mari, Jean-Martial; Tun, Tin A; Strouthidis, Nicholas G; Aung, Tin; Thiéry, Alexandre H; Girard, Michaël J A
2018-01-01
To develop a deep learning approach to digitally stain optical coherence tomography (OCT) images of the optic nerve head (ONH). A horizontal B-scan was acquired through the center of the ONH using OCT (Spectralis) for one eye of each of 100 subjects (40 healthy and 60 glaucoma). All images were enhanced using adaptive compensation. A custom deep learning network was then designed and trained with the compensated images to digitally stain (i.e., highlight) six tissue layers of the ONH. The accuracy of our algorithm was assessed (against manual segmentations) using the dice coefficient, sensitivity, specificity, intersection over union (IU), and accuracy. We studied the effect of compensation and the number of training images, and compared performance between glaucoma and healthy subjects. For images it had not previously assessed, our algorithm was able to digitally stain the retinal nerve fiber layer + prelamina, the RPE, all other retinal layers, the choroid, and the peripapillary sclera and lamina cribrosa. For all tissues, the dice coefficient, sensitivity, specificity, IU, and accuracy (mean) were 0.84 ± 0.03, 0.92 ± 0.03, 0.99 ± 0.00, 0.89 ± 0.03, and 0.94 ± 0.02, respectively. Our algorithm performed significantly better when compensated images were used for training (P < 0.001). Besides offering good reliability, digital staining also performed well on OCT images of both glaucoma and healthy individuals. Our deep learning algorithm can simultaneously stain the neural and connective tissues of the ONH, offering a framework to automatically measure multiple key structural parameters of the ONH that may be critical to improve glaucoma management.
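The dice coefficient and IU reported above are standard overlap measures on binary segmentation masks: dice = 2|A ∩ B| / (|A| + |B|) and IU (Jaccard) = |A ∩ B| / |A ∪ B|. A minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a = a.astype(bool); b = b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def iou(a, b):
    """Intersection over union (Jaccard index): |A∩B| / |A∪B|."""
    a = a.astype(bool); b = b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

Both range from 0 (no overlap) to 1 (identical masks); dice is always at least as large as IU for the same pair, which is worth remembering when comparing scores across papers.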
Hofmann, Hannes G; Keck, Benjamin; Rohkohl, Christopher; Hornegger, Joachim
2011-01-01
Interventional reconstruction of 3-D volumetric data from C-arm CT projections is a computationally demanding task. Hardware optimization is not optional but mandatory for interventional image processing and, in particular, for image reconstruction due to the high demands on performance. Several groups have published fast analytical 3-D reconstruction on highly parallel hardware such as GPUs to mitigate this issue. The authors show that the performance of modern CPU-based systems is of the same order as current GPUs for static 3-D reconstruction and outperforms them for a recent motion compensated (3-D+time) image reconstruction algorithm. This work investigates two algorithms: static 3-D reconstruction as well as a recent motion compensated algorithm. The evaluation was performed using a standardized reconstruction benchmark, RabbitCT, to obtain comparable results, plus two additional clinical data sets. The authors demonstrate for a parametric B-spline motion estimation scheme that the derivative computation, which requires many write operations to memory, performs poorly on the GPU and can highly benefit from modern CPU architectures with large caches. Moreover, on a 32-core Intel Xeon server system, the authors achieve linear scaling with the number of cores used and reconstruction times almost in the same range as current GPUs. Algorithmic innovations in the field of motion compensated image reconstruction may lead to a shift back to CPUs in the future. For analytical 3-D reconstruction, the authors show that the gap between GPUs and CPUs has become smaller. It can be performed in less than 20 s (on-the-fly) using a 32-core server.
New inverse synthetic aperture radar algorithm for translational motion compensation
NASA Astrophysics Data System (ADS)
Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.
1991-10-01
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
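The traditional entropy focus measure mentioned above rewards images that concentrate energy in few pixels: a well-focused ISAR image has low entropy, a smeared one high entropy, and autofocus searches motion parameters that minimize it. A minimal sketch of the entropy measure (the proposed burst derivative measure, which the paper offers as a computationally cheaper alternative, is not reproduced here):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized intensity distribution.

    The traditional ISAR focus measure: sharper images concentrate energy
    in fewer pixels and therefore have lower entropy.
    """
    p = np.abs(img).ravel()
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A focused point target vs. the same energy smeared over 8 pixels by defocus.
focused = np.zeros((32, 32)); focused[16, 16] = 1.0
smeared = np.zeros((32, 32)); smeared[16, 12:20] = 1.0 / 8.0
```

A single bright pixel gives entropy 0; energy spread uniformly over N pixels gives entropy log(N), so minimizing entropy over candidate motion-compensation parameters drives the image toward focus.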
Compensating for pneumatic distortion in pressure sensing devices
NASA Technical Reports Server (NTRS)
Whitmore, Stephen A.; Leondes, Cornelius T.
1990-01-01
A technique of compensating for pneumatic distortion in pressure sensing devices was developed and verified. This compensation allows conventional pressure sensing technology to obtain improved unsteady pressure measurements. Pressure distortion caused by frictional attenuation and pneumatic resonance within the sensing system makes obtaining unsteady pressure measurements by conventional sensors difficult. Most distortion occurs within the pneumatic tubing which transmits pressure impulses from the aircraft's surface to the measurement transducer. To avoid pneumatic distortion, experiment designers mount the pressure sensor at the surface of the aircraft, (called in-situ mounting). In-situ transducers cannot always fit in the available space and sometimes pneumatic tubing must be run from the aircraft's surface to the pressure transducer. A technique to measure unsteady pressure data using conventional pressure sensing technology was developed. A pneumatic distortion model is reduced to a low-order, state-variable model retaining most of the dynamic characteristics of the full model. The reduced-order model is coupled with results from minimum variance estimation theory to develop an algorithm to compensate for the effects of pneumatic distortion. Both postflight and real-time algorithms are developed and evaluated using simulated and flight data.
CSI, optimal control, and accelerometers: Trials and tribulations
NASA Technical Reports Server (NTRS)
Benjamin, Brian J.; Sesak, John R.
1994-01-01
New results concerning optimal design with accelerometers are presented. These results show that the designer must be concerned with the stability properties of two Linear Quadratic Gaussian (LQG) compensators, one of which does not explicitly appear in the closed-loop system dynamics. The new concepts of virtual and implemented compensators are introduced to cope with these subtleties. The virtual compensator appears in the closed-loop system dynamics and the implemented compensator appears in control electronics. The stability of one compensator does not guarantee the stability of the other. For strongly stable (robust) systems, both compensators should be stable. The presence of controlled and uncontrolled modes in the system results in two additional forms of the compensator with corresponding terms that are of like form, but opposite sign, making simultaneous stabilization of both the virtual and implemented compensator difficult. A new design algorithm termed sensor augmentation is developed that aids stabilization of these compensator forms by incorporating a static augmentation term associated with the uncontrolled modes in the design process.
Robust control algorithms for Mars aerobraking
NASA Technical Reports Server (NTRS)
Shipley, Buford W., Jr.; Ward, Donald T.
1992-01-01
Four atmospheric guidance concepts have been adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. The first two offer improvements to the Analytic Predictor Corrector (APC) to increase its robustness to density variations. The second two are variations of a new Liapunov tracking exit phase algorithm, developed to guide the vehicle along a reference trajectory. These four new controllers are tested using a six-degree-of-freedom computer simulation to evaluate their robustness. MARSGRAM is used to develop realistic atmospheres for the study. When square wave density pulses perturb the atmosphere, all four controllers are successful. The algorithms are tested against atmospheres where the inbound and outbound density functions are different. Square wave density pulses are again used, but only for the outbound leg of the trajectory. Additionally, sine waves are used to perturb the density function. The new algorithms are found to be more robust than any previously tested, and a Liapunov controller is selected as the most robust of all the control algorithms examined.
Inhomogeneity compensation for MR brain image segmentation using a multi-stage FCM-based approach.
Szilágyi, László; Szilágyi, Sándor M; Dávid, László; Benyó, Zoltán
2008-01-01
Intensity inhomogeneity or intensity non-uniformity (INU) is an undesired phenomenon that represents the main obstacle for MR image segmentation and registration methods. Various techniques have been proposed to eliminate or compensate for the INU, most of which are embedded into clustering algorithms. This paper proposes a multiple-stage fuzzy c-means (FCM) based algorithm for the estimation and compensation of the slowly varying additive or multiplicative noise, supported by a pre-filtering technique for Gaussian and impulse noise elimination. The slowly varying behavior of the bias or gain field is assured by a smoothing filter that performs a context-dependent averaging, based on a morphological criterion. Experiments using 2-D synthetic phantoms and real MR images show that the proposed method provides accurate segmentation. The produced segmentation and fuzzy membership values can serve as excellent support for 3-D registration and segmentation techniques.
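The FCM core that the multi-stage method builds on can be sketched on 1-D intensities. This omits the paper's actual contributions (bias-field estimation and the morphological smoothing filter), and the quantile initialization is my own simplification for determinism:

```python
import numpy as np

def fcm(data, c=3, m=2.0, n_iter=100):
    """Standard fuzzy c-means on 1-D intensities.

    Sketch of the clustering core only; the paper's INU estimation and
    compensation stages wrap around updates like these.
    """
    centers = np.quantile(data, np.linspace(0.1, 0.9, c))  # simple init
    u = np.full((c, data.size), 1.0 / c)
    for _ in range(n_iter):
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)                  # fuzzy membership update
        um = u ** m
        centers = (um @ data) / um.sum(axis=1)     # weighted center update
    return np.sort(centers), u

# Demo: three well-separated intensity classes with mild noise.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(mu, 0.1, 200) for mu in (0.0, 5.0, 10.0)])
centers, u = fcm(data)
```

Unlike hard k-means, every sample retains a graded membership in every class, and it is exactly these fuzzy membership values that the abstract highlights as useful downstream support for registration and segmentation.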
Compensating for telecommunication delays during robotic telerehabilitation.
Consoni, Leonardo J; Siqueira, Adriano A G; Krebs, Hermano I
2017-07-01
Rehabilitation robotic systems may afford better care, and telerehabilitation may extend the use and benefits of robotic therapy to the home. Data transmissions over distance are bound by intrinsic communication delays, which can be large enough to render the activity unfeasible. Here we describe an approach that combines unilateral robotic telerehabilitation and serious games. This approach has a modular and distributed design that permits different types of robots to interact without substantial code changes. We demonstrate the approach through an online multiplayer game. Two users can remotely interact with each other with no force exchanges, while a smoothing and prediction algorithm compensates the motions for the delay in the Internet connection. We demonstrate that this approach can successfully compensate for data transmission delays, even when testing between the United States and Brazil. This paper presents the initial experimental results, which highlight the performance degradation with increasing delays as well as the improvements provided by the proposed algorithm, and discusses planned future developments.
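A minimal sketch of the smoothing-and-prediction idea (not the authors' algorithm): exponentially smooth incoming position samples, estimate velocity by finite differences, and dead-reckon forward by the measured delay. The smoothing factor and sample period are illustrative assumptions.

```python
def make_predictor(alpha=1.0, dt=0.1):
    """Returns a step function that smooths incoming position samples
    (exponential smoothing with factor alpha), estimates velocity by
    finite differences, and extrapolates ahead by the current delay."""
    state = {"x": None}
    def step(sample, delay):
        if state["x"] is None:
            state["x"] = sample                         # first sample seeds the state
        prev = state["x"]
        state["x"] = alpha * sample + (1.0 - alpha) * prev  # smooth
        vel = (state["x"] - prev) / dt                  # velocity estimate
        return state["x"] + vel * delay                 # dead-reckoned "now"
    return step

p = make_predictor(alpha=1.0, dt=0.1)
p(0.0, 0.3)            # warm-up sample
pred = p(0.1, 0.3)     # ramp of slope 1, 0.3 s delay -> predicts 0.4
```

For a constant-velocity motion this predictor exactly cancels the delay; noisier motion would call for a smaller alpha, trading lag for smoothness.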
Sensor Drift Compensation Algorithm based on PDF Distance Minimization
NASA Astrophysics Data System (ADS)
Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo
2009-05-01
In this paper, a new unsupervised classification algorithm is introduced to compensate for sensor drift effects in an odor sensing system using a conducting polymer sensor array. The proposed method continues updating the adaptive Radial Basis Function Network (RBFN) weights in the testing phase by minimizing the Euclidean distance between two probability density functions (PDFs): one of a set of training-phase output data and one of a set of testing-phase output data. The outputs in the testing phase obtained using the fixed weights of the RBFN are significantly dispersed and shifted from their target values, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to concentrate significantly closer to their target values again. This indicates that the proposed method can be effectively applied to give an odor sensing system the capability of sensor drift compensation.
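The PDF-distance criterion can be sketched as follows; a brute-force search for a scalar shift stands in for the paper's adaptive RBFN weight update, and the Gaussian drift model is a toy assumption.

```python
import numpy as np

def pdf_distance(a, b, bins=50, lo=-5.0, hi=10.0):
    """Squared Euclidean distance between two histogram-estimated PDFs."""
    pa, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    return float(np.sum((pa - pb) ** 2) * (edges[1] - edges[0]))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5000)       # training-phase outputs
drifted = rng.normal(1.5, 1.0, 5000)     # testing-phase outputs after drift

# brute-force search for the shift that minimizes the PDF distance,
# standing in for the paper's adaptive RBFN weight update
shifts = np.linspace(-3.0, 3.0, 121)
best = min(shifts, key=lambda s: pdf_distance(train, drifted - s))
```

The recovered shift approximates the injected drift of 1.5, illustrating why matching output PDFs re-centers drifted responses on their targets.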
Aslanides, Ioannis M; Toliou, Georgia; Padroni, Sara; Arba Mosquera, Samuel; Kolli, Sai
2011-06-01
To compare the refractive and visual outcomes using the Schwind Amaris excimer laser in patients with high astigmatism (>1 D) with and without the static cyclotorsion compensation (SCC) algorithm available with this new laser platform. Seventy consecutive eyes with ≥1 D astigmatism were randomized to treatment with compensation of static cyclotorsion (SCC group, 35 eyes) or without it (control group, 35 eyes). A previously validated optimized aspheric ablation algorithm profile was used in every case. All patients underwent LASIK with a microkeratome-cut flap. The SCC and control groups did not differ preoperatively in terms of refractive error, magnitude of astigmatism, or cardinal and oblique astigmatism. Following treatment, the average deviation from target was SEq +0.16 D (SD ±0.52 D, range -0.98 D to +1.71 D) in the SCC group compared to +0.46 D (SD ±0.61 D, range -0.25 D to +2.35 D) in the control group, a statistically significant difference (p<0.05). Following treatment, average astigmatism was 0.24 D (SD ±0.28 D, range -1.01 D to 0.00 D) in the SCC group compared to 0.46 D (SD ±0.42 D, range -1.80 D to 0.00 D) in the control group, a highly statistically significant difference (p<0.005). There was no statistical difference in postoperative uncorrected vision when the aspheric algorithm was used, although there was a trend toward an increased number of lines gained in the SCC group. This study shows that static cyclotorsion is accurately compensated for by the Schwind Amaris laser platform. Compensation of static cyclotorsion in patients with moderate astigmatism produces a significant improvement in refractive and astigmatic outcomes compared with uncompensated treatment. Copyright © 2011 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
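The geometry behind static cyclotorsion compensation can be sketched as follows. The sign convention (excyclotorsion positive) is an assumption, and the residual-astigmatism formula is the standard double-angle vector result, not a detail taken from this study.

```python
import math

def compensate_axis(planned_axis_deg, cyclotorsion_deg):
    """Deliver the cylinder treatment on the intended corneal meridian
    by rotating the planned axis by the measured static cyclotorsion
    (excyclotorsion taken as positive; sign convention is an assumption)."""
    return (planned_axis_deg - cyclotorsion_deg) % 180.0

def residual_cylinder(cyl_d, axis_error_deg):
    """Astigmatism left uncorrected by a pure rotation error, from the
    double-angle vector model: |R| = 2 * C * sin(error)."""
    return 2.0 * cyl_d * math.sin(math.radians(axis_error_deg))
```

For example, an uncompensated 15 degree cyclotorsion leaves roughly half of a 1 D cylinder uncorrected (2·sin 15° ≈ 0.52 D), which is why axis registration matters for high astigmatism.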
Uncooperative target-in-the-loop performance with backscattered speckle-field effects
NASA Astrophysics Data System (ADS)
Kansky, Jan E.; Murphy, Daniel V.
2007-09-01
Systems utilizing target-in-the-loop (TIL) techniques for adaptive optics phase compensation rely on a metric sensor to perform a hill climbing algorithm that maximizes the far-field Strehl ratio. In uncooperative TIL, the metric signal is derived from the light backscattered from a target. In cases where the target is illuminated with a laser with sufficiently long coherence length, the potential exists for the validity of the metric sensor to be compromised by speckle-field effects. We report experimental results from a scaled laboratory designed to evaluate TIL performance in atmospheric turbulence and thermal blooming conditions where the metric sensors are influenced by varying degrees of backscatter speckle. We compare performance of several TIL configurations and metrics for cases with static speckle, and for cases with speckle fluctuations within the frequency range that the TIL system operates. The roles of metric sensor filtering and system bandwidth are discussed.
Adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope
NASA Astrophysics Data System (ADS)
Ma, Haotong; Hu, Haojun; Xie, Wenke; Zhao, Haichuan; Xu, Xiaojun; Chen, Jinbao
2013-08-01
We demonstrate adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope system, based on the stochastic parallel gradient descent (SPGD) algorithm and dual phase-only liquid crystal spatial light modulators (LC-SLMs). Adaptive pre-compensation of the projected laser beam's wavefront at the transmitter telescope is chosen to improve the power coupling efficiency. One phase-only LC-SLM adaptively optimizes the phase distribution of the projected laser beam while the other generates the turbulence phase screen. The intensity distributions of the dark hollow beam after passing through the turbulent atmosphere with and without adaptive beam shaping are analyzed in detail. The influence of propagation distance and Cassegrain-telescope aperture size on coupling efficiency is investigated theoretically and experimentally. These studies show that the power coupling can be significantly improved by adaptive beam shaping. The technique can be used in optical communication, deep space optical communication, and relay mirrors.
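The SPGD update used here can be sketched on a toy quadratic metric; the LC-SLM hardware loop is replaced by a synthetic "aberration" vector, and the gain and perturbation amplitude are illustrative assumptions.

```python
import numpy as np

def spgd(metric, u0, gain=0.5, perturb=0.05, n_iter=2000, seed=0):
    """Stochastic parallel gradient descent: perturb every control
    channel with random bipolar steps in parallel, measure the metric
    change, and step along the estimated gradient (ascent)."""
    rng = np.random.default_rng(seed)
    u = np.array(u0, dtype=float)
    for _ in range(n_iter):
        du = perturb * rng.choice([-1.0, 1.0], size=u.shape)
        dj = metric(u + du) - metric(u - du)   # two-sided metric probe
        u += gain * dj * du                    # parallel gradient step
    return u

# toy metric peaking when the controls cancel a fixed "aberration"
aberr = np.array([0.7, -0.3, 0.5])
metric = lambda u: -float(np.sum((u + aberr) ** 2))
u_opt = spgd(metric, np.zeros(3))
```

The controls converge to the negative of the aberration vector, i.e. the compensating phase, using only scalar metric measurements, which is what makes SPGD attractive for sensor-less loops.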
Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.
Goldman, Geoffrey H
2013-02-01
A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near-constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of the time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the processing interval duration. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher-frequency harmonics showed larger increases in peak amplitude, though these remained significantly below the estimated upper bounds.
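The time-warping idea can be sketched as follows: given the tracked instantaneous frequency, resample the signal at instants of uniform phase advance so the first harmonic becomes a constant-frequency tone. The synthetic 11 Hz signal and its modulation parameters are illustrative, not the rotorcraft data.

```python
import numpy as np

fs = 1000.0
t = np.arange(0.0, 4.0, 1.0 / fs)
# synthetic first harmonic near 11 Hz with a slow Doppler-like modulation
f_inst = 11.0 * (1.0 + 0.05 * np.sin(2.0 * np.pi * 0.5 * t))
phase = 2.0 * np.pi * np.cumsum(f_inst) / fs
sig = np.sin(phase)

# time warping: resample at instants of uniform phase advance, which
# removes the tracked Doppler shift and refocuses the spectral peak
uniform_phase = np.linspace(phase[0], phase[-1], len(t))
warped = np.interp(uniform_phase, phase, sig)

peak_before = np.max(np.abs(np.fft.rfft(sig)))
peak_after = np.max(np.abs(np.fft.rfft(warped)))
```

After warping, the energy smeared across the modulated band collapses back into a single spectral peak, which is the focusing effect the abstract quantifies.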
Motion compensation for fully 4D PET reconstruction using PET superset data
NASA Astrophysics Data System (ADS)
Verhaeghe, J.; Gravel, P.; Mio, R.; Fukasawa, R.; Rosa-Neto, P.; Soucy, J.-P.; Thompson, C. J.; Reader, A. J.
2010-07-01
Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need, developing a motion compensation method suitable for fully 4D reconstruction methods which exploits an optical tracking system to measure the head motion along with PET superset data to store the motion compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme for motion compensated reconstruction from the superset data is required. This work proceeds to propose the corresponding time-dependent normalization modifications which are required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames as well as for parametric images of tracer uptake and volume of distribution for 18F-FDG obtained from Patlak analysis.
CEMERLL: The Propagation of an Atmosphere-Compensated Laser Beam to the Apollo 15 Lunar Array
NASA Technical Reports Server (NTRS)
Fugate, R. Q.; Leatherman, P. R.; Wilson, K. E.
1997-01-01
Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes.
Transponder-aided joint calibration and synchronization compensation for distributed radar systems.
Wang, Wen-Qin
2015-01-01
High-precision radiometric calibration and synchronization compensation must be provided for distributed radar systems because their transmitters and receivers are separate. This paper proposes transponder-aided joint radiometric calibration, motion compensation, and synchronization for distributed radar remote sensing. As the transponder signal can be separated from the normal radar returns, it is used to calibrate the distributed radar for radiometry. Meanwhile, distributed radar motion compensation and synchronization compensation algorithms are presented that utilize the transponder signals. This method requires no hardware modifications to either the normal radar transmitter or the receiver, and no change to the operating pulse repetition frequency (PRF). The distributed radar radiometric calibration and synchronization compensation require only one transponder, but the motion compensation requires six transponders because there are six independent variables in the distributed radar geometry. Furthermore, a maximum likelihood method is used to estimate the transponder signal parameters. The proposed methods are verified by simulation results.
Atmospheric correction over coastal waters using multilayer neural networks
NASA Astrophysics Data System (ADS)
Fan, Y.; Li, W.; Charles, G.; Jamet, C.; Zibordi, G.; Schroeder, T.; Stamnes, K. H.
2017-12-01
Standard atmospheric correction (AC) algorithms work well in open ocean areas where the water inherent optical properties (IOPs) are correlated with pigmented particles. However, the IOPs of turbid coastal waters may independently vary with pigmented particles, suspended inorganic particles, and colored dissolved organic matter (CDOM). In turbid coastal waters standard AC algorithms often exhibit large inaccuracies that may lead to negative water-leaving radiances (Lw) or remote sensing reflectance (Rrs). We introduce a new atmospheric correction algorithm for coastal waters based on a multilayer neural network (MLNN) machine learning method. We use a coupled atmosphere-ocean radiative transfer model to simulate the Rayleigh-corrected radiance (Lrc) at the top of the atmosphere (TOA) and the Rrs just above the surface simultaneously, and train a MLNN to derive the aerosol optical depth (AOD) and Rrs directly from the TOA Lrc. The SeaDAS NIR algorithm, the SeaDAS NIR/SWIR algorithm, and the MODIS version of the Case 2 regional water - CoastColour (C2RCC) algorithm are included in the comparison with AERONET-OC measurements. The results show that the MLNN algorithm significantly improves retrieval of normalized Lw in blue bands (412 nm and 443 nm) and yields minor improvements in green and red bands. These results indicate that the MLNN algorithm is suitable for application in turbid coastal waters. Application of the MLNN algorithm to MODIS Aqua images in several coastal areas also shows that it is robust and resilient to contamination due to sunglint or adjacency effects of land and cloud edges. The MLNN algorithm is very fast once the neural network has been properly trained and is therefore suitable for operational use. A significant advantage of the MLNN algorithm is that it does not need SWIR bands, which implies significant cost reduction for dedicated OC missions. 
A recent effort has been made to extend the MLNN AC algorithm to extreme atmospheric conditions (i.e., heavily polluted continental aerosols) over coastal areas by including additional aerosol and ocean models to generate the training dataset. Preliminary tests show very good results. Results of applying the extended MLNN algorithm to VIIRS images over the Yellow Sea and East China Sea areas with extreme atmospheric and marine conditions will be provided.
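As a toy stand-in for the MLNN regression (the real network is trained on coupled radiative-transfer simulations of multispectral TOA radiance; the one-input scalar mapping below is assumed purely for illustration), a single-hidden-layer network with manual backprop:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, (256, 1))   # stand-in "Rayleigh-corrected TOA radiance"
y = np.sin(2.0 * x) / 2.0             # stand-in "Rrs" target (synthetic truth)

# one hidden tanh layer, manual backprop on mean-squared error
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)              # hidden activations
    pred = h @ W2 + b2
    err = pred - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean(err ** 2))
```

Once trained, evaluating such a network is just two matrix products, which is why the abstract can describe the MLNN as very fast in operational use.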
Eliminating "Hotspots" in Digital Image Processing
NASA Technical Reports Server (NTRS)
Salomon, P. M.
1984-01-01
Signals from defective picture elements rejected. Image processing program for use with charge-coupled device (CCD) or other mosaic imager augmented with algorithm that compensates for common type of electronic defect. Algorithm prevents false interpretation of "hotspots". Used for robotics, image enhancement, image analysis and digital television.
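A minimal sketch of this kind of defect compensation (the 3x3 window and robust MAD threshold are assumptions for illustration, not the documented algorithm): flag pixels that deviate strongly from their local median and replace them with it.

```python
import numpy as np

def suppress_hotspots(img, thresh=5.0):
    """Replace pixels that deviate from their 3x3 neighbourhood median
    by more than `thresh` robust scatters (MAD) with that median."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # 3x3 neighbourhood median via stacked shifts (no SciPy required)
    stack = np.stack([padded[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)])
    med = np.median(stack, axis=0)
    mad = np.median(np.abs(img - med)) + 1e-12
    out = img.copy()
    hot = np.abs(img - med) > thresh * mad     # defective-pixel mask
    out[hot] = med[hot]
    return out

frame = np.full((5, 5), 10.0)
frame[2, 2] = 1000.0               # simulated defective "hotspot"
clean = suppress_hotspots(frame)
```

The hotspot is replaced by its neighbourhood median while normal pixels pass through unchanged, preventing the false interpretation the abstract describes.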
NASA Astrophysics Data System (ADS)
Cappon, Derek J.; Farrell, Thomas J.; Fang, Qiyin; Hayward, Joseph E.
2016-12-01
Optical spectroscopy of human tissue has been widely applied within the field of biomedical optics to allow rapid, in vivo characterization and analysis of the tissue. When designing an instrument of this type, an imaging spectrometer is often employed to allow for simultaneous analysis of distinct signals. This is especially important when performing spatially resolved diffuse reflectance spectroscopy. In this article, an algorithm is presented that allows for the automated processing of 2-dimensional images acquired from an imaging spectrometer. The algorithm automatically defines distinct spectrometer tracks and adaptively compensates for distortion introduced by optical components in the imaging chain. Crosstalk resulting from the overlap of adjacent spectrometer tracks in the image is detected and subtracted from each signal. The algorithm's performance is demonstrated in the processing of spatially resolved diffuse reflectance spectra recovered from an Intralipid and ink liquid phantom and is shown to increase the range of wavelengths over which usable data can be recovered.
Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu
2017-01-01
In order to improve the accuracy of the ultrasonic phased array focusing time delay, starting from an analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm is proposed so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we derive the general formula for an arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions are reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. To address the known shortcomings of the CIC filter, we add a compensation filter: the compensated CIC filter's pass band is flatter, its transition band becomes steeper, and its stop band attenuation increases. Finally, we verify the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection. PMID:29023385
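A minimal interpolating CIC sketch (serial, single channel; the paper's 8x parallel decomposition and its compensation filter are not reproduced): comb stages run at the low rate, a zero-stuffing expander raises the rate by R, and integrator stages run at the high rate.

```python
import numpy as np

def cic_interpolate(x, R=8, N=3, M=1):
    """N-stage interpolating CIC filter: comb (differentiator) stages at
    the low rate, zero-stuffing expander by R, then integrator stages at
    the high rate. DC gain for a constant input is (R*M)**N / R."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                                  # comb stages
        y = y - np.concatenate(([0.0] * M, y[:-M]))
    up = np.zeros(len(y) * R)
    up[::R] = y                                         # expander
    for _ in range(N):                                  # integrator stages
        up = np.cumsum(up)
    return up

up = cic_interpolate(np.ones(50))   # constant input, 8x interpolation
```

For a constant input the output settles at the DC gain (8^3 / 8 = 64 here), and the droop this structure introduces in the pass band is what the paper's compensation filter flattens.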
Lundquist, Julie K.; Wilczak, James M.; Ashton, Ryan; ...
2017-03-07
To assess current capabilities for measuring flow within the atmospheric boundary layer, including within wind farms, the U.S. Dept. of Energy sponsored the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign at the Boulder Atmospheric Observatory (BAO) in spring 2015. Herein, we summarize the XPIA field experiment, highlight novel measurement approaches, and quantify uncertainties associated with these measurement methods. Line-of-sight velocities measured by scanning lidars and radars exhibit close agreement with tower measurements, despite differences in measurement volumes. Virtual towers of wind measurements, from multiple lidars or radars, also agree well with tower and profiling lidar measurements. Estimates of winds over volumes from scanning lidars and radars are in close agreement, enabling assessment of spatial variability. Strengths of the radar systems used here include high scan rates, large domain coverage, and availability during most precipitation events, but they struggle at times to provide data during periods with limited atmospheric scatterers. In contrast, for the deployment geometry tested here, the lidars have slower scan rates and less range, but provide more data during non-precipitating atmospheric conditions. Microwave radiometers provide temperature profiles with approximately the same uncertainty as Radio-Acoustic Sounding Systems (RASS). Using a motion platform, we assess motion-compensation algorithms for lidars to be mounted on offshore platforms. As a result, we highlight cases for validation of mesoscale or large-eddy simulations, providing information on accessing the archived dataset. We conclude that modern remote sensing systems provide a generational improvement in observational capabilities, enabling resolution of fine-scale processes critical to understanding inhomogeneous boundary-layer flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundquist, Julie K.; Wilczak, James M.; Ashton, Ryan
The synthesis of new measurement technologies with advances in high performance computing provides an unprecedented opportunity to advance our understanding of the atmosphere, particularly with regard to the complex flows in the atmospheric boundary layer. To assess current measurement capabilities for quantifying features of atmospheric flow within wind farms, the U.S. Dept. of Energy sponsored the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign at the Boulder Atmospheric Observatory (BAO) in spring 2015. Herein, we summarize the XPIA field experiment design, highlight novel approaches to boundary-layer measurements, and quantify measurement uncertainties associated with these experimental methods. Line-of-sight velocities measured by scanning lidars and radars exhibit close agreement with tower measurements, despite differences in measurement volumes. Virtual towers of wind measurements, from multiple lidars or dual radars, also agree well with tower and profiling lidar measurements. Estimates of winds over volumes, conducted with rapid lidar scans, agree with those from scanning radars, enabling assessment of spatial variability. Microwave radiometers provide temperature profiles within and above the boundary layer with approximately the same uncertainty as operational remote sensing measurements. Using a motion platform, we assess motion-compensation algorithms for lidars to be mounted on offshore platforms. Finally, we highlight cases that could be useful for validation of large-eddy simulations or mesoscale numerical weather prediction, providing information on accessing the archived dataset. We conclude that modern remote sensing systems provide a generational improvement in observational capabilities, enabling resolution of refined processes critical to understanding inhomogeneous boundary-layer flows.
GIFTS SM EDU Data Processing and Algorithms
NASA Technical Reports Server (NTRS)
Tian, Jialin; Johnson, David G.; Reisse, Robert A.; Gazarik, Michael J.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three Focal Plane Arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration stage. The calibration procedures can be subdivided into three stages. In the pre-calibration stage, a phase correction algorithm is applied to the decimated and filtered complex interferogram. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected blackbody reference spectra. In the radiometric calibration stage, we first compute the spectral responsivity based on the previous results, from which, the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. During the post-processing stage, we estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. We then implement a correction scheme that compensates for the effect of fore-optics offsets. Finally, for off-axis pixels, the FPA off-axis effects correction is performed. To estimate the performance of the entire FPA, we developed an efficient method of generating pixel performance assessments. In addition, a random pixel selection scheme is designed based on the pixel performance evaluation.
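The radiometric calibration step can be sketched as the standard two-point blackbody calibration (a simplified real-valued version; the actual FTS calibration operates on phase-corrected complex spectra):

```python
import numpy as np

def two_point_calibrate(scene, s_abb, s_hbb, b_abb, b_hbb):
    """Per-channel two-point calibration: the ABB and HBB spectra with
    known Planck radiances fix the responsivity (slope) and offset."""
    resp = (s_hbb - s_abb) / (b_hbb - b_abb)   # counts per radiance unit
    return b_abb + (scene - s_abb) / resp

# synthetic check: counts = 10 * radiance + 3 in every channel
b_abb = np.array([1.0, 1.0]); b_hbb = np.array([2.0, 3.0])
s_abb = 10.0 * b_abb + 3.0; s_hbb = 10.0 * b_hbb + 3.0
radiance = two_point_calibrate(10.0 * np.array([1.5, 2.0]) + 3.0,
                               s_abb, s_hbb, b_abb, b_hbb)
```

Given the two reference views, the instrument gain and offset drop out and the scene counts map back to radiance, which is the quantity the retrieval algorithms then consume.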
Yadav, Ravi K; Begum, Viquar U; Addepalli, Uday K; Senthil, Sirisha; Garudadri, Chandra S; Rao, Harsha L
2016-02-01
To compare the abilities of retinal nerve fiber layer (RNFL) parameters of the variable corneal compensation (VCC) and enhanced corneal compensation (ECC) algorithms of scanning laser polarimetry (GDx) in detecting various severities of glaucoma, 285 eyes of 194 subjects from the Longitudinal Glaucoma Evaluation Study who underwent GDx VCC and ECC imaging were evaluated. Abilities of RNFL parameters of GDx VCC and ECC to diagnose glaucoma were compared using areas under the receiver operating characteristic curves (AUC), sensitivities at fixed specificities, and likelihood ratios. After excluding 5 eyes that failed to satisfy manufacturer-recommended quality parameters with ECC and 68 with VCC, 56 eyes of 41 normal subjects and 161 eyes of 121 glaucoma patients [36 eyes with preperimetric glaucoma, 52 with early (MD>-6 dB), 34 with moderate (MD between -6 and -12 dB), and 39 with severe glaucoma (MD<-12 dB)] were included in the analysis. The inferior RNFL, average RNFL, and nerve fiber indicator parameters showed the best AUCs and sensitivities, both with GDx VCC and ECC, in diagnosing all severities of glaucoma. AUCs and sensitivities of all RNFL parameters were comparable between the VCC and ECC algorithms (P>0.20 for all comparisons), as were the likelihood ratios associated with their diagnostic categorizations. In scans satisfying the manufacturer-recommended quality parameters, whose number was significantly greater with the ECC than the VCC algorithm, the diagnostic abilities of GDx ECC and VCC in glaucoma were similar.
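The AUC comparison rests on the Mann-Whitney formulation, sketched here with toy RNFL values; that a lower value indicates glaucoma is an assumption about the parameter's direction for illustration.

```python
def auc(normals, glaucoma):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a randomly chosen glaucomatous eye has a lower
    RNFL value than a randomly chosen normal eye (ties count 0.5)."""
    pairs = [(g < n) + 0.5 * (g == n) for n in normals for g in glaucoma]
    return sum(pairs) / len(pairs)
```

Perfectly separated groups give an AUC of 1.0, and identical distributions give 0.5, which is the chance-level baseline against which the VCC and ECC parameters are judged.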
NASA Astrophysics Data System (ADS)
Guo, Liwen
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, the simulation transport delay remains a problem. Because of the limitations of the three prominent existing delay compensators---the lead/lag filter, the McFarland compensator and the Sobiski/Cardullo predictor---new approaches to compensating the transport delay in a flight simulator have been developed. The first novel compensator is an adaptive predictor that uses the Kalman filter algorithm in a unique manner so that the predictor can accurately provide the desired amount of prediction, significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, the stochastic approximation algorithm is shown mathematically to achieve the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison to the McFarland predictor and to no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. Four metrics---the glide slope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper rating of the handling qualities---were employed for the analyses.
The overall analyses show that while the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator, the state space predictor is moderately superior to the McFarland compensator for short delay and significantly superior for long delay. The state space predictor also achieves better compensation than the adaptive predictor. The results of the evaluation of these predictors' effectiveness in the piloted tests agree with those of the theoretical offline tests conducted with the recorded simulation aircraft states.
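The idea of a state-space delay predictor can be sketched as follows: propagate a discrete-time reference model of the vehicle dynamics forward by the known transport delay, so the visual system renders the predicted rather than the stale state. This is a minimal sketch under assumed values; the 2-state double-integrator model, frame time, and delay are illustrative, not the dissertation's actual aircraft model.

```python
import numpy as np

# Illustrative state-space delay predictor: advance a discrete-time
# reference model of the vehicle dynamics by the known transport delay.
# The 2-state model (double integrator driven by pilot input) is an
# assumption for illustration only.

dt = 0.010           # 10 ms frame time (assumed)
delay_steps = 5      # 50 ms of transport delay to compensate (assumed)

A = np.array([[1.0, dt],
              [0.0, 1.0]])     # position/rate kinematics
B = np.array([[0.0], [dt]])    # pilot input enters as an acceleration

def predict(x, u, steps=delay_steps):
    """Propagate the state 'steps' frames ahead, holding the input constant."""
    for _ in range(steps):
        x = A @ x + B @ u
    return x

x0 = np.array([[0.0], [1.0]])  # zero position, 1 unit/s rate
u0 = np.array([[0.0]])
x_pred = predict(x0, u0)
print(round(x_pred[0, 0], 6))  # -> 0.05 (5 frames of 1 unit/s drift)
```

In practice the filter state would be formed from the operator's control input and measured aircraft states, as the abstract describes, rather than from an assumed model.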
Nam, Haewon
2017-01-01
We propose a novel metal artifact reduction (MAR) algorithm for CT images that completes a corrupted sinogram along the metal trace region. When metal implants are located inside the field of view, they create a barrier to the transmitted X-ray beam due to the high attenuation of metals, which significantly degrades image quality. To fill in the metal trace region efficiently, the proposed algorithm uses multiple prior images with residual error compensation in sinogram space. The multiple prior images are generated by applying a recursive active contour (RAC) segmentation algorithm to the pre-corrected image acquired by MAR with linear interpolation, where the number of prior images is controlled by RAC depending on the object complexity. A sinogram basis is then acquired by forward projection of the prior images, and the metal trace region of the original sinogram is replaced by a linear combination of the prior-image sinograms. An additional correction in the metal trace region is then performed to compensate for residual errors caused by non-ideal data acquisition conditions. The performance of the proposed MAR algorithm is compared with MAR with linear interpolation and with the normalized MAR algorithm using simulated and experimental data. The results show that the proposed algorithm outperforms the other MAR algorithms, especially when the object is complex with multiple bone objects. PMID:28604794
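The baseline the method builds on, MAR with linear interpolation (MAR-LI), can be sketched in a few lines: each projection row's metal-trace bins are replaced by values interpolated from the uncorrupted bins on either side. The sinogram and trace mask below are synthetic, for illustration only.

```python
import numpy as np

# Minimal MAR-LI sketch: fill the metal trace of each projection row
# by linear interpolation across its boundaries.

def mar_linear_interp(sino, metal_trace):
    """sino: (n_angles, n_bins); metal_trace: boolean mask, same shape."""
    out = sino.copy()
    bins = np.arange(sino.shape[1])
    for i in range(sino.shape[0]):
        bad = metal_trace[i]
        if bad.any():
            out[i, bad] = np.interp(bins[bad], bins[~bad], sino[i, ~bad])
    return out

sino = np.tile(np.linspace(0.0, 1.0, 8), (3, 1))  # smooth synthetic sinogram
trace = np.zeros_like(sino, dtype=bool)
trace[:, 3:5] = True                              # metal trace to be replaced
fixed = mar_linear_interp(sino, trace)
print(np.allclose(fixed, sino))                   # -> True (linear data: exact fill)
```

The proposed algorithm improves on this baseline by replacing the trace with a linear combination of forward-projected prior images and then correcting the residual; the sketch only shows the interpolation step the priors are built from.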
Variability of the atmospheric turbulence in the region of Lake Baikal
NASA Astrophysics Data System (ADS)
Botygina, N. N.; Kopylov, E. A.; Lukin, V. P.; Kovadlo, P. G.; Shihovcev, A. Yu.
2015-11-01
The Fried parameter was estimated from micrometeorological and optical measurements in the atmospheric surface layer in the Lake Baikal area, at the Baikal Astrophysical Observatory. From the NCEP/NCAR Reanalysis archive, vertical distributions of temperature pulsations were obtained, and the atmospheric layers with the strongest turbulence were identified. Astronomical seeing conditions in winter and summer were compared. When registering optical radiation from the Sun with ground-based telescopes, there is a need to compensate for the effects of atmospheric turbulence. Atmospheric turbulence reduces the angular resolution of observed objects and distorts the structure of the obtained images. To improve image quality, and ideally to approach the angular resolution limited only by diffraction, an adaptive optics system must be implemented. The specific challenge of image correction using adaptive optics is that it is necessary not only to compensate for the random jitter of the image as a whole, but also to correct the geometry of individual parts of the image. Estimates of the atmospheric coherence radius (Fried parameter) are of interest not only for astronomical site-testing research, but also form the basis for the efficient operation of adaptive optical systems.
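The Fried parameter quoted above follows from the integrated turbulence profile via the standard plane-wave expression. A minimal sketch, assuming illustrative Cn^2 integrals (the study's actual measured values are not reproduced here):

```python
import numpy as np

# Textbook plane-wave Fried parameter from an integrated Cn^2 profile:
# r0 = [0.423 * k^2 * sec(z) * integral(Cn^2 dh)]^(-3/5), in metres.

def fried_parameter(cn2_integral, wavelength=500e-9, zenith_angle=0.0):
    k = 2.0 * np.pi / wavelength          # optical wavenumber
    return (0.423 * k**2 * cn2_integral / np.cos(zenith_angle)) ** (-3.0 / 5.0)

r0_weak = fried_parameter(1e-13)    # integral(Cn^2 dh) in m^(1/3), illustrative
r0_strong = fried_parameter(4e-13)  # stronger turbulence gives smaller r0
print(round(r0_weak * 100, 1), "cm")   # -> 32.0 cm
```

A smaller r0 means a smaller effective diffraction-limited aperture, which is why seeing comparisons between seasons are naturally expressed through this quantity.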
NASA Technical Reports Server (NTRS)
Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)
2000-01-01
Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance, it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.
An overview of turbulence compensation
NASA Astrophysics Data System (ADS)
Schutte, Klamer; van Eekeren, Adam W. M.; Dijk, Judith; Schwering, Piet B. W.; van Iersel, Miranda; Doelman, Niek J.
2012-09-01
In general, long-range visual detection, recognition, and identification are hampered by turbulence caused by atmospheric conditions. Much research has been devoted to the field of turbulence compensation. One of the main advantages of turbulence compensation is that it enables visual identification over larger distances, which in many (military) scenarios is of crucial importance. In this paper we give an overview of several software and hardware approaches to compensating for the visual artifacts caused by turbulence. These approaches are very diverse and range from dedicated hardware, such as adaptive optics, to software methods, such as deconvolution and lucky imaging. For each approach the pros and cons are given, and it is indicated for which type of scenario the approach is useful. In more detail we describe the turbulence compensation methods TNO has developed in recent years and place them in the context of the different turbulence compensation approaches and TNO's turbulence compensation roadmap. Finally, we look ahead and indicate the upcoming challenges in the field of turbulence compensation.
Turbulence compensation: an overview
NASA Astrophysics Data System (ADS)
van Eekeren, Adam W. M.; Schutte, Klamer; Dijk, Judith; Schwering, Piet B. W.; van Iersel, Miranda; Doelman, Niek J.
2012-06-01
In general, long-range visual detection, recognition, and identification are hampered by turbulence caused by atmospheric conditions. Much research has been devoted to the field of turbulence compensation. One of the main advantages of turbulence compensation is that it enables visual identification over larger distances, which in many (military) scenarios is of crucial importance. In this paper we give an overview of several software and hardware approaches to compensating for the visual artifacts caused by turbulence. These approaches are very diverse and range from dedicated hardware, such as adaptive optics, to software methods, such as deconvolution and lucky imaging. For each approach the pros and cons are given, and it is indicated for which scenario the approach is useful. In more detail we describe the turbulence compensation methods TNO has developed in recent years and place them in the context of the different turbulence compensation approaches and TNO's turbulence compensation roadmap. Finally, we look ahead and indicate the upcoming challenges in the field of turbulence compensation.
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1984-01-01
A prime obstacle to the widespread use of adaptive control is the degradation of performance and possible instability resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model-building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state space realizations from the inexact, multivariable transfer functions that result from the identification process. A number of potential adaptive control applications for this approach are illustrated using computer simulations. Results indicated that, when speed of adaptation and plant stability are not critical, the proposed schemes converge and enhance system performance.
An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP
Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin
2015-01-01
In this study, the effects of non-sidelooking airborne radar clutter dispersion on space-time adaptive processing (STAP) are considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. To reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the parameters required for compensating the clutter spectral center misalignment. Simulation results are presented to demonstrate the effectiveness of the proposed algorithm. PMID:26053755
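The weight computation that such compensation schemes feed into can be sketched with the classical minimum-variance distortionless-response form, w = R^-1 s / (s^H R^-1 s), where R is the clutter-plus-noise covariance and s the space-time steering vector. The dimensions and synthetic covariance below are illustrative only, not the paper's EAADC processing chain.

```python
import numpy as np

# Toy STAP weight computation: suppress a synthetic clutter line while
# keeping unit gain on the target steering vector.

N = 8                                            # space-time degrees of freedom
s = np.exp(1j * 2 * np.pi * 0.10 * np.arange(N)) / np.sqrt(N)   # target steering

# Synthetic covariance: one strong clutter line plus a noise floor.
c = np.exp(1j * 2 * np.pi * 0.30 * np.arange(N))[:, None]
R = 100.0 * (c @ c.conj().T) + np.eye(N)

w = np.linalg.solve(R, s)
w = w / (s.conj() @ w)          # normalize for unit gain on the target steering
print(round(abs(s.conj() @ w), 6))               # -> 1.0 (distortionless response)
```

The point of angle-Doppler compensation is to align the clutter spectral centers across range before R is estimated, so that a single adaptive weight set like this suppresses clutter effectively.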
Phase 2 development of Great Lakes algorithms for Nimbus-7 coastal zone color scanner
NASA Technical Reports Server (NTRS)
Tanis, Fred J.
1984-01-01
A series of experiments was conducted in the Great Lakes to evaluate the application of the NIMBUS-7 Coastal Zone Color Scanner (CZCS). Atmospheric and water optical models were used to relate surface and subsurface measurements to satellite-measured radiances. Absorption and scattering measurements were reduced to obtain a preliminary optical model for the Great Lakes. Algorithms were developed for geometric correction, correction for Rayleigh and aerosol path radiance, and prediction of chlorophyll-a pigment and suspended mineral concentrations. The atmospheric correction algorithm developed compared favorably with existing algorithms and was the only algorithm found to adequately predict the radiance variations in the 670 nm band; it was designed to extract the needed algorithm parameters from the CZCS radiance values. The Gordon/NOAA ocean algorithms could not be demonstrated to work for Great Lakes waters. Predicted values of chlorophyll-a concentration compared favorably with expected and measured data for several areas of the Great Lakes.
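The path-radiance correction step described above reduces, at its core, to subtracting Rayleigh and aerosol contributions from the sensor radiance and correcting for diffuse transmittance. A minimal numeric sketch, with all values invented for illustration (not CZCS calibration numbers):

```python
# Path-radiance removal sketch: the water-leaving radiance is what remains
# after subtracting Rayleigh and aerosol path radiance from the sensor
# radiance and dividing by the diffuse transmittance.

L_total = 2.80     # sensor radiance in a visible band (arbitrary units)
L_rayleigh = 1.10  # computed from viewing geometry and surface pressure
L_aerosol = 0.95   # extrapolated from the 670 nm band, where water is dark
T_diffuse = 0.85   # diffuse transmittance along the viewing path

L_water = (L_total - L_rayleigh - L_aerosol) / T_diffuse
print(round(L_water, 3))   # -> 0.882
```

The hard part, as the abstract notes, is estimating the aerosol term: over Case 2 waters like the Great Lakes the 670 nm band is not reliably dark, which is why the standard Gordon/NOAA extrapolation failed there.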
Acceleration and torque feedback for robotic control - Experimental results
NASA Technical Reports Server (NTRS)
McInroy, John E.; Saridis, George N.
1990-01-01
Gross motion control of robotic manipulators typically requires significant on-line computations to compensate for nonlinear dynamics due to gravity, Coriolis, centripetal, and friction nonlinearities. One controller proposed by Luo and Saridis avoids these computations by feeding back joint acceleration and torque. This study implements the controller on a Puma 600 robotic manipulator. Joint acceleration measurement is obtained by measuring linear accelerations of each joint, and deriving a computationally efficient transformation from the linear measurements to the angular accelerations. Torque feedback is obtained by using the previous torque sent to the joints. The implementation has stability problems on the Puma 600 due to the extremely high gains inherent in the feedback structure. Since these high gains excite frequency modes in the Puma 600, the algorithm is modified to decrease the gain inherent in the feedback structure. The resulting compensator is stable and insensitive to high frequency unmodeled dynamics. Moreover, a second compensator is proposed which uses acceleration and torque feedback, but still allows nonlinear terms to be fed forward. Thus, by feeding the increment in the easily calculated gravity terms forward, improved responses are obtained. Both proposed compensators are implemented, and the real time results are compared to those obtained with the computed torque algorithm.
NASA Astrophysics Data System (ADS)
Lv, Zeqian; Xu, Xiaohai; Yan, Tianhao; Cai, Yulong; Su, Yong; Zhang, Qingchuan
2018-01-01
In the measurement of plate specimens, traditional two-dimensional (2D) digital image correlation (DIC) is challenged by two aspects: (1) the slant optical axis (misalignment of the optical camera axis and the object surface) and (2) out-of-plane motions (including translations and rotations) of the specimens. These produce measurement errors in the results measured by 2D DIC, especially when the out-of-plane motions are large. To solve this problem, a novel compensation method has been proposed to correct the unsatisfactory results. The proposed compensation method consists of three main parts: (1) a pre-calibration step is used to determine the intrinsic parameters and lens distortions; (2) a compensation panel (a rigid panel with several markers located at known positions) is mounted on the specimen to track the specimen's motion, so that the relative coordinate transformation between the compensation panel and the 2D DIC setup can be calculated using the coordinate transform algorithm; (3) three-dimensional world coordinates of measuring points on the specimen are reconstructed via the coordinate transform algorithm and used to calculate deformations. Simulations have been carried out to validate the proposed compensation method. Results show that when the extensometer length is 400 pixels, the strain accuracy reaches 10 με whether out-of-plane translations (less than 1/200 of the object distance) or out-of-plane rotations (rotation angle less than 5°) occur. The proposed compensation method yields good results even when the out-of-plane translation reaches several percent of the object distance or the out-of-plane rotation angle reaches tens of degrees. The proposed compensation method has also been applied in tensile experiments to obtain high-accuracy results.
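One standard way to realize the "coordinate transform algorithm" step, recovering the rigid motion of the marker panel from point correspondences, is the Kabsch/Procrustes method via SVD. This is a hedged sketch: the paper does not specify its exact solver, and the marker layout and test motion below are invented for illustration.

```python
import numpy as np

# Kabsch/Procrustes: least-squares rigid rotation R and translation t
# mapping the panel's known marker positions onto measured coordinates.

def rigid_transform(P, Q):
    """R, t such that Q ~ R @ P + t; P and Q are (3, n) point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

P = np.array([[0, 1, 0, 1], [0, 0, 1, 1], [0, 0, 0, 1]], dtype=float)  # markers
theta = np.deg2rad(5.0)                            # small out-of-plane rotation
R_true = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(theta), 0.0, np.cos(theta)]])
t_true = np.array([[0.02], [0.00], [0.10]])        # includes out-of-plane shift
R_est, t_est = rigid_transform(P, R_true @ P + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # -> True True
```

Once R and t are known for each frame, the out-of-plane motion they encode can be removed from the reconstructed measuring-point coordinates before strains are computed.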
NASA Astrophysics Data System (ADS)
Antoine, David; Morel, Andre
1997-02-01
An algorithm is proposed for the atmospheric correction of ocean color observations by the MERIS instrument. The principle of the algorithm, which accounts for all multiple scattering effects, is presented. The algorithm is then tested, and its accuracy assessed in terms of errors in the retrieved marine reflectances.
NASA Astrophysics Data System (ADS)
Pang, Linyong; Hu, Peter; Satake, Masaki; Tolani, Vikram; Peng, Danping; Li, Ying; Chen, Dongxue
2011-11-01
According to the ITRS roadmap, mask defects are among the top technical challenges for introducing extreme ultraviolet (EUV) lithography into production. Making a multilayer defect-free EUV blank is not possible today, and is unlikely to happen in the next few years. This means that EUV lithography must work with multilayer defects present on the mask. The method proposed by Luminescent is to compensate for the effects of multilayer defects on images by modifying the absorber patterns. The effect of a multilayer defect is to distort the images of adjacent absorber patterns. Although the defect cannot be repaired, the images may be restored to their desired targets by changing the absorber patterns. This method was first introduced in our paper at BACUS 2010, which described a simple pixel-based compensation algorithm using a fast multilayer model. The fast model made it possible to complete the compensation calculations in seconds, instead of the days or weeks required for rigorous Finite-Difference Time-Domain (FDTD) simulations. Our SPIE 2011 paper introduced an advanced compensation algorithm using the level set method for 2D absorber patterns. In this paper the method is extended to consider process window and to allow repair tool constraints, such as permitting etching but not deposition. The multilayer defect growth model is also enhanced so that the multilayer defect can be "inverted", i.e., recovered from the top-layer profile using a calibrated model.
Recursive algorithms for bias and gain nonuniformity correction in infrared videos.
Pipa, Daniel R; da Silva, Eduardo A B; Pagliari, Carla L; Diniz, Paulo S R
2012-12-01
Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN), also known as spatial nonuniformity, that degrades image quality. FPN remains a serious problem despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-squares and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, producing recovered images with higher fidelity.
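The joint gain/bias estimation can be illustrated with a toy single-pixel recursive least-squares (RLS) loop: find (g, b) such that the corrected value g*y + b tracks a reference d. In a true scene-based method the reference would be derived from the data itself (e.g. a local spatial mean); here the known simulated scene stands in for it, and the detector gain/bias values are invented for illustration.

```python
import numpy as np

# Toy single-pixel RLS gain/bias estimation: learn the correction
# w = [g, b] so that g*y + b matches the reference scene value.

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.0, 2000)   # true irradiance samples
raw = 1.8 * scene + 0.3               # pixel output with fixed-pattern gain/bias

w = np.zeros(2)                       # [g, b] correction estimate
P = 1e3 * np.eye(2)                   # inverse correlation matrix
lam = 0.999                           # forgetting factor
for y, d in zip(raw, scene):
    phi = np.array([y, 1.0])
    gain = P @ phi / (lam + phi @ P @ phi)     # RLS gain vector
    w = w + gain * (d - w @ phi)               # update toward the reference
    P = (P - np.outer(gain, phi @ P)) / lam

print(np.round(w, 3))   # -> approx [0.556, -0.167], the inverse of (1.8, 0.3)
```

The forgetting factor lets the estimate track slow drift in the detector response, which is what makes continuous (rather than one-shot calibrated) compensation possible.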
Accelerating Dust Storm Simulation by Balancing Task Allocation in Parallel Computing Environment
NASA Astrophysics Data System (ADS)
Gui, Z.; Yang, C.; XIA, J.; Huang, Q.; YU, M.
2013-12-01
Dust storms have serious negative impacts on the environment, human health, and assets. Continuing global climate change has increased the frequency and intensity of dust storms in the past decades. To better understand and predict the distribution, intensity, and structure of dust storms, a series of dust storm models have been developed, such as the Dust Regional Atmospheric Model (DREAM), the NMM meteorological module (NMM-dust), and the Chinese Unified Atmospheric Chemistry Environment for Dust (CUACE/Dust). The development and application of these models have contributed significantly to both scientific research and daily life. However, dust storm simulation is a data- and computing-intensive process: a simulation of a single dust storm event may take hours or even days to run, which seriously impacts the timeliness of prediction and potential applications. To speed up the process, high performance computing is widely adopted. By partitioning a large study area into small subdomains according to their geographic location and executing them on different computing nodes in parallel, the computing performance can be significantly improved. Since spatiotemporal correlations exist in the geophysical process of dust storm simulation, each subdomain allocated to a node needs to communicate with geographically adjacent subdomains to exchange data. Inappropriate allocations may introduce imbalanced task loads and unnecessary communication among computing nodes. The task allocation method is therefore the key factor that may determine the feasibility of parallelization. The allocation algorithm needs to carefully balance the computing cost and communication cost of each computing node to minimize total execution time and reduce overall communication cost for the entire system. This presentation introduces two algorithms for such allocation and compares them with an evenly distributed allocation method.
Specifically: (1) to obtain optimized solutions, a quadratic programming based modeling method is proposed. This algorithm performs well with a small number of computing tasks; however, its efficiency decreases significantly as the numbers of subdomains and computing nodes increase. (2) To compensate for this performance decrease on large-scale tasks, a K-means clustering based algorithm is introduced. Instead of seeking optimal solutions, this method obtains relatively good feasible solutions within acceptable time; however, it may introduce imbalanced communication among nodes or node-isolated subdomains. This research shows that both algorithms have their own strengths and weaknesses for task allocation. A combination of the two algorithms is under study to obtain better performance. Keywords: Scheduling; Parallel Computing; Load Balance; Optimization; Cost Model
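The K-means idea can be sketched directly: cluster subdomains by their geographic centers so that spatially adjacent subdomains, which exchange halo data every time step, tend to land on the same computing node. The 8x8 grid and node count below are illustrative, and a plain Lloyd's iteration stands in for whatever clustering implementation the presentation actually used.

```python
import numpy as np

# Allocate an 8x8 grid of subdomains to k computing nodes by clustering
# subdomain centers with Lloyd's K-means (no external dependencies).

rng = np.random.default_rng(2)
centers = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
k = 4                                        # number of computing nodes

means = centers[rng.choice(len(centers), size=k, replace=False)]
for _ in range(50):
    dist = np.linalg.norm(centers[:, None] - means[None], axis=2)
    label = dist.argmin(axis=1)              # nearest node for each subdomain
    means = np.array([centers[label == c].mean(axis=0) if (label == c).any()
                      else means[c] for c in range(k)])

sizes = np.bincount(label, minlength=k)      # subdomains per node
print(sorted(int(s) for s in sizes))
```

As the abstract notes, plain K-means optimizes spatial compactness but not load balance, so cluster sizes can be uneven; that is the weakness the proposed combination with the quadratic programming formulation is meant to address.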
CFO compensation method using optical feedback path for coherent optical OFDM system
NASA Astrophysics Data System (ADS)
Moon, Sang-Rok; Hwang, In-Ki; Kang, Hun-Sik; Chang, Sun Hyok; Lee, Seung-Woo; Lee, Joon Ki
2017-07-01
We investigate the feasibility of a carrier frequency offset (CFO) compensation method using an optical feedback path for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. Recently proposed CFO compensation algorithms provide a wide CFO estimation range in the electrical domain. However, their practical compensation range is limited by the sampling rate of the analog-to-digital converter (ADC). This limitation has not drawn attention in wireless OFDM systems, since the ADC sampling rate there is high enough compared to the data bandwidth and CFO. For CO-OFDM, the limitation is becoming visible because of increased data bandwidth, laser instability (i.e., large CFO), and ADC sampling rates kept low for cost reasons. To solve this problem and extend the practical CFO compensation range, we propose a CFO compensation method with an optical feedback path. By adding simple wavelength control of the local oscillator, the practical CFO compensation range can be extended to the full sampling frequency range. The feasibility of the proposed method is investigated experimentally.
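The electrical-domain range limitation the paper works around can be seen in the classic cyclic-prefix correlation estimator, which only resolves fractional offsets of |eps| < 0.5 subcarrier spacings. This sketch is illustrative (the paper does not specify this particular estimator); symbol sizes and the injected offset are invented.

```python
import numpy as np

# Cyclic-prefix CFO estimator: the CP repeats N samples later, so
# rx[i+N] * conj(rx[i]) carries phase 2*pi*eps for |eps| < 0.5.

rng = np.random.default_rng(3)
N, Ncp = 64, 16
eps = 0.20                                  # true CFO, in subcarrier spacings

sym = np.fft.ifft(rng.choice([-1.0, 1.0], N) + 0j)  # one BPSK OFDM symbol
tx = np.concatenate([sym[-Ncp:], sym])              # prepend cyclic prefix
n = np.arange(N + Ncp)
rx = tx * np.exp(2j * np.pi * eps * n / N)          # channel applies the CFO

corr = np.sum(rx[N:N + Ncp] * np.conj(rx[:Ncp]))
eps_hat = np.angle(corr) / (2.0 * np.pi)
rx_fixed = rx * np.exp(-2j * np.pi * eps_hat * n / N)
print(round(eps_hat, 6))                            # -> 0.2
```

Offsets beyond this fractional range wrap around the phase measurement, which is exactly the regime where the proposed optical feedback (steering the local oscillator wavelength) takes over.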
NASA Astrophysics Data System (ADS)
Jiang, Guo-Qing; Xu, Jing; Wei, Jun
2018-04-01
Two algorithms based on machine learning neural networks, the shallow learning (S-L) and deep learning (D-L) algorithms, are proposed that can potentially be used in atmosphere-only typhoon forecast models to provide flow-dependent typhoon-induced sea surface temperature cooling (SSTC) for improving typhoon predictions. The major challenge for existing SSTC algorithms in forecast models is how to accurately predict the SSTC induced by an upcoming typhoon, which requires information not only from historical data but, more importantly, from the target typhoon itself. The S-L algorithm consists of a single layer of neurons with mixed atmospheric and oceanic factors. Such a structure is found to be unable to correctly represent the physical typhoon-ocean interaction: it tends to produce an unstable SSTC distribution, for which any perturbation may change both the SSTC pattern and strength. The D-L algorithm extends the neural network to a 4 × 5 neuron matrix with atmospheric and oceanic factors separated in different layers of neurons, so that the machine learning can determine the roles of atmospheric and oceanic factors in shaping the SSTC. It therefore produces a stable crescent-shaped SSTC distribution, with its large-scale pattern determined mainly by atmospheric factors (e.g., winds) and small-scale features by oceanic factors (e.g., eddies). Sensitivity experiments reveal that the D-L algorithm reduces maximum wind intensity errors by 60-70% in four case study simulations, compared to the atmosphere-only model runs.
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
This report summarizes the results of delay measurement and piloted performance tests conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system of the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison with the McFarland predictor and with the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. Glideslope and touchdown errors, power spectral density of the pilot control inputs, the NASA Task Load Index, and Cooper-Harper ratings of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is moderately superior to the McFarland compensator for short delay and significantly superior for long delay.
A microprocessor application to a strapdown laser gyro navigator
NASA Technical Reports Server (NTRS)
Giardina, C.; Luxford, E.
1980-01-01
The replacement of analog circuit control loops for laser gyros (path length control, cross-axis temperature compensation loops, dither servo, and current regulators) with digital filters residing in microcomputers is addressed. In addition to the control loops, a discussion is given of applying the microprocessor hardware to compensation for coning and sculling motion, where simple algorithms are processed at high speeds to compensate component output data (digital pulses) for linear and angular vibration motions. Highlights are given of the methodology and system approaches used in replacing the differential equations describing the analog system with the mechanized difference equations of the microprocessor. Standard one-for-one frequency domain techniques are employed in replacing analog transfer functions by their transform counterparts. Direct digital design techniques are also discussed, along with their associated benefits. Time and memory loading analyses are summarized, as well as signal and microprocessor architecture. Trade-offs in algorithm, mechanization, time/memory loading, accuracy, and microprocessor architecture are also given.
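The high-rate coning compensation referred to above can be sketched with the classic two-sample algorithm, which sums successive gyro angle increments and adds a cross-product correction term. This is a hedged illustration, not the report's actual mechanization: the 2/3 coefficient is the standard two-sample value from the strapdown literature, and the increment values are invented.

```python
import numpy as np

# Two-sample coning compensation sketch: the cross-product term captures
# the non-commutativity of rotations that pure increment summation misses.

def coning_update(dtheta1, dtheta2):
    """Rotation-vector increment over two gyro samples with coning term."""
    return dtheta1 + dtheta2 + (2.0 / 3.0) * np.cross(dtheta1, dtheta2)

d1 = np.array([1.0e-3, 2.0e-4, 0.0])    # angle increments (rad) per sample
d2 = np.array([9.0e-4, -1.0e-4, 5.0e-5])
beta = coning_update(d1, d2)

# With parallel increments the correction vanishes: no coning motion.
print(np.allclose(coning_update(d1, 2.0 * d1), 3.0 * d1))   # -> True
```

Because the correction is a few multiplies and adds per sample, it suits exactly the "simple algorithms processed at high speeds" role the abstract assigns to the microprocessor.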
The wavefront compensation of free space optics utilizing micro corner-cube-reflector arrays
NASA Astrophysics Data System (ADS)
You, Shengzui; Yang, Guowei; Li, Changying; Bi, Meihua; Fan, Bing
2018-01-01
The wavefront compensation effect of micro corner-cube-reflector arrays (MCCRAs) in a modulating retroreflector (MRR) free-space optical (FSO) link is investigated theoretically and experimentally. The triangular aperture of the MCCRAs has been optically characterized and studied in an indoor atmospheric turbulence channel. Using MCCRAs instead of a single corner-cube reflector (CCR) as the reflective device is found to dramatically improve the quality of the reflected beam spot. We conclude that MCCRAs can in principle provide powerful wavefront compensation in MRR FSO communication links.
Soil-plant-atmosphere ammonia exchange associated with Calluna vulgaris and Deschampsia flexuosa
NASA Astrophysics Data System (ADS)
Schjoerring, Jan K.; Husted, Søren; Poulsen, Mette M.
Ammonia fluxes and compensation points at atmospheric NH3 concentrations corresponding to those occurring under natural growth conditions (0-26 nmol NH3 mol^-1 air) were measured for canopies of two species native to heathland in N.W. Europe, viz. Calluna vulgaris (L.) Hull and Deschampsia flexuosa (L.) Trin. The NH3 compensation point in 2-yr-old C. vulgaris plants, in which the current year's shoots had just started growing, was below the detection limit (0.1 nmol mol^-1 at 8°C). Fifty days later, when the current year's shoots were elongating and flowers developed, the NH3 compensation point was approximately 6±2.0 nmol mol^-1 at 22°C (0.8±0.3 nmol mol^-1 at 8°C). The plants in which the shoot tips had just started growing were characterized by a low N concentration in the shoot dry matter (5.8 mg N g^-1 shoot dry weight) and a low photosynthetic CO2 assimilation compared with the flowering plants, in which the average dry matter N concentration was 7.4 mg N g^-1 shoot dry weight in old shoots and woody stems and 9.5 mg N g^-1 in new shoots. Plant-atmosphere NH3 fluxes in C. vulgaris responded approximately linearly to changes in the atmospheric NH3 concentration. The maximum net absorption rate at 26 nmol NH3 mol^-1 air was 12 nmol NH3 m^-2 ground surface s^-1 (equivalent to 13.3 pmol NH3 g^-1 shoot dry matter s^-1). Ammonia absorption in Deschampsia flexuosa plants increased approximately linearly with increasing NH3 concentrations up to 20 nmol mol^-1. The maximum NH3 absorption was 8.5 nmol m^-2 ground surface s^-1 (30.4 pmol g^-1 shoot dry weight s^-1). The NH3 compensation point was 3.0±1.1 nmol mol^-1 air at 24°C and 7.5±0.6 nmol mol^-1 air at 31°C. These values correspond to an NH3 compensation point of 0.45±0.15 at 8°C. The soil used for cultivation of C. vulgaris (peat soil with pH 6.9) initially adsorbed NH3 at a rate which exceeded the absorption by the plant canopy. During a 24 d period following the harvest of the plants, soil NH3 adsorption declined and the soil NH3 compensation point increased from below the detection limit to 8.0±1.8 nmol NH3 mol^-1 air (22°C). No detectable NH3 exchange took place between the D. flexuosa soil (sandy soil with pH 6.8) and the atmosphere.
Zhang, Yajun; Chai, Tianyou; Wang, Hong; Wang, Dianhui; Chen, Xinkai
2018-06-01
Complex industrial processes are multivariable and generally exhibit strong coupling among their control loops together with heavily nonlinear behavior. This makes it very difficult to obtain an accurate model, so conventional and data-driven control methods are difficult to apply. Using a twin-tank level control system as an example, a novel multivariable decoupling control algorithm with adaptive neural-fuzzy inference system (ANFIS)-based unmodeled dynamics (UD) compensation is proposed in this paper for a class of complex industrial processes. First, a nonlinear multivariable decoupling controller with UD compensation is introduced. Unlike existing methods, a decomposition estimation algorithm using ANFIS is employed to estimate the UD, and the desired estimation and decoupling control effects are achieved. Second, the proposed method does not require the complicated switching mechanism commonly used in the literature, which significantly simplifies the decoupling algorithm and its realization. Third, based on some new lemmas and theorems, conditions for the stability and convergence of the closed-loop system are analyzed to show the uniform boundedness of all variables. Finally, experimental tests on a heavily coupled nonlinear twin-tank system demonstrate the effectiveness and practicability of the proposed method.
Motion compensation using origin ensembles in awake small animal positron emission tomography
NASA Astrophysics Data System (ADS)
Gillam, John E.; Angelis, Georgios I.; Kyme, Andre Z.; Meikle, Steven R.
2017-02-01
In emission tomographic imaging, the stochastic origin ensembles algorithm provides unique information regarding the detected counts given the measured data. Precision in both voxel-wise and region-wise parameters may be determined for a single data set based on the posterior distribution of the count density, allowing uncertainty estimates to be allocated to quantitative measures. Uncertainty estimates are of particular importance in awake animal neurological and behavioral studies, for which head motion, unique to each acquired data set, perturbs the measured data. Motion compensation can be conducted when rigid head pose is measured during the scan. However, errors in the pose measurements used for compensation can degrade the data and hence quantitative outcomes. In this investigation, motion compensation and detector resolution models were incorporated into the basic origin ensembles algorithm, and an efficient approach to computation was developed. The approach was validated against maximum-likelihood expectation maximization and tested using simulated data. The resulting algorithm was then used to analyse quantitative uncertainty in regional activity estimates arising from changes in pose measurement precision. Finally, the posterior covariance acquired from a single data set was used to describe correlations between regions of interest, providing information about pose measurement precision that may be useful in system analysis and design. The investigation demonstrates the use of origin ensembles as a powerful framework for evaluating the statistical uncertainty of voxel and regional estimates. While rigid motion was considered here in the context of awake animal PET, the extension to arbitrary motion may provide clinical utility where respiratory or cardiac motion perturbs the measured data.
NASA Astrophysics Data System (ADS)
Chen, Shijun; Sun, Fuyu; Bai, Qingsong; Chen, Dawei; Chen, Qiang; Hou, Dong
2017-10-01
We demonstrate timing fluctuation suppression in outdoor laser-based atmospheric radio-frequency transfer over a 110 m one-way free-space link using an electronic phase compensation technique. Timing fluctuations and the Allan deviation are both measured to characterize the instability incurred by the transferred frequency during the transfer process. When transferring a 1 GHz microwave signal over the fluctuation-suppressed transmission link, the total root-mean-square (rms) timing fluctuation was measured to be 920 femtoseconds over 5000 s, with a fractional frequency instability on the order of 1 × 10-12 at 1 s and 2 × 10-16 at 1000 s. This atmospheric frequency transfer scheme with timing fluctuation suppression can be used to rapidly establish an atomic-clock-based free-space frequency transmission link, since its stability is superior to that of commercial Cs and Rb clocks.
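Instability figures like those quoted above can be computed from timing (phase) data with the overlapping Allan deviation. A minimal Python sketch, assuming evenly sampled time-error samples `x` in seconds (the function name and the white-frequency-noise test signal are illustrative, not from the paper):

```python
import numpy as np

def overlapping_adev(x, tau0, m):
    """Overlapping Allan deviation from phase (time-error) samples x [s],
    with sample interval tau0 [s] and averaging factor m (tau = m * tau0)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    if n < 2 * m + 1:
        raise ValueError("not enough samples")
    # second differences of the phase data at stride m
    d2 = x[2 * m:] - 2.0 * x[m:n - m] + x[:n - 2 * m]
    avar = np.sum(d2 ** 2) / (2.0 * (m * tau0) ** 2 * (n - 2 * m))
    return np.sqrt(avar)

# White frequency noise: adev at tau0 should equal the per-sample sigma
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1e-12, 100000)   # fractional frequency samples
x = np.cumsum(y) * 1.0               # phase data, tau0 = 1 s
```

Sweeping `m` over powers of two gives the familiar log-log stability plot from which statements like "1 × 10-12 at 1 s" are read off.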
Broadband Phase Spectroscopy over Turbulent Air Paths
NASA Astrophysics Data System (ADS)
Giorgetta, Fabrizio R.; Rieker, Gregory B.; Baumann, Esther; Swann, William C.; Sinclair, Laura C.; Kofler, Jon; Coddington, Ian; Newbury, Nathan R.
2015-09-01
Broadband atmospheric phase spectra are acquired with a phase-sensitive dual-frequency-comb spectrometer by implementing adaptive compensation for the strong decoherence from atmospheric turbulence. The compensation is possible due to the pistonlike behavior of turbulence across a single spatial-mode path combined with the intrinsic frequency stability and high sampling speed associated with dual-comb spectroscopy. The atmospheric phase spectrum is measured across 2 km of air at each of the 70 000 comb teeth spanning 233 cm-1 across hundreds of near-infrared rovibrational resonances of CO2, CH4, and H2O with submilliradian uncertainty, corresponding to a 10-13 refractive index sensitivity. Trace gas concentrations extracted directly from the phase spectrum reach 0.7 ppm uncertainty, demonstrated here for CO2. While conventional broadband spectroscopy only measures intensity absorption, this approach enables measurement of the full complex susceptibility even in practical open path sensing.
Black Carbon Measurements From Ireland's Transboundary Network (TXB)
NASA Astrophysics Data System (ADS)
Spohn, T. K.; Martin, D.; O'Dowd, C. D. D.
2017-12-01
Black Carbon (BC) is carbonaceous aerosol formed by incomplete fossil fuel combustion. Named for its light-absorbing properties, it acts to trap heat in the atmosphere, thus behaving like a greenhouse gas, and is considered a strong, short-lived climate forcer by the Intergovernmental Panel on Climate Change (IPCC). Carbonaceous aerosols from biomass burning (BB) such as forest fires and residential wood burning, also known as brown carbon, affect ultraviolet (UV) light absorption in the atmosphere as well. In 2016 a three-node black carbon monitoring network was established in Ireland as part of a Transboundary Monitoring Network (TXB). The three sites (Mace Head, Malin Head, and Carnsore Point) are coastal locations on opposing sides of the country, and offer the opportunity to assess typical northern-hemispheric background concentrations as well as national and European pollution events. The instruments deployed in this network (Magee Scientific AE33) facilitate elimination of the changes in response due to 'aerosol loading' effects and a real-time calculation of the 'loading compensation' parameter, which offers insights into aerosol optical properties. Additionally, these instruments have an inbuilt algorithm which estimates the difference in absorption in the ultraviolet wavelengths (mostly by brown carbon) and the near-infrared wavelengths (only by black carbon). Presented here are the first results of the BC measurements from the three Irish stations, including instrument validation, seasonal variation, and local, regional, and transboundary influences based on air mass trajectories as well as concurrent in-situ observations (meteorological parameters, particle number, and aerosol composition). A comparison of the instrumental algorithm to off-line sensitivity calculations will also be made to assess the contribution of biomass burning to BC pollution events.
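The 'loading compensation' mentioned above can be illustrated with the standard attenuation-based correction form used in the aethalometer literature; a minimal sketch, assuming a multiplicative correction of the form BC/(1 - k·ATN) (the function name and the numbers below are made up for the demo):

```python
def loading_compensate(bc_raw, atn, k):
    """Compensate an attenuation-based black carbon reading for the filter
    loading effect: as filter attenuation ATN builds up, the raw BC reading
    is biased low, so divide by (1 - k * ATN)."""
    return bc_raw / (1.0 - k * atn)

# Illustrative numbers: 1000 ng/m^3 raw reading at ATN = 50 with k = 0.005
bc = loading_compensate(1000.0, 50.0, 0.005)
```

In dual-spot instruments the parameter k is estimated in real time from two filter spots with different loadings, which is what makes the compensation automatic.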
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, X.D.; Tsui, B.M.W.; Gregoriou, G.K.
The goal of the investigation was to study the effectiveness of corrective reconstruction methods in cardiac SPECT using a realistic phantom and to qualitatively and quantitatively evaluate the reconstructed images using bull's-eye plots. A 3D mathematical phantom which realistically models the anatomical structures of the cardiac-torso region of patients was used. The phantom allows simulation of both the attenuation distribution and the uptake of radiopharmaceuticals in different organs. Also, the phantom can be easily modified to simulate different genders and variations in patient anatomy. Two-dimensional projection data were generated from the phantom and included the effects of attenuation and detector response blurring. The reconstruction methods used in the study included the conventional filtered backprojection (FBP) with no attenuation compensation, the first-order Chang algorithm, an iterative filtered backprojection (IFBP) algorithm, the weighted least-squares conjugate gradient algorithm, and the ML-EM algorithm with non-uniform attenuation compensation. The transaxial reconstructed images were rearranged into short-axis slices from which bull's-eye plots of the count density distribution in the myocardium were generated.
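The first-order Chang compensation named above multiplies each reconstructed pixel by the inverse of its attenuation factor averaged over all projection angles. A minimal sketch for a point inside a uniformly attenuating circular outline (the geometry, function name, and parameter values are illustrative assumptions, not the study's actual non-uniform maps):

```python
import numpy as np

def chang_factor(d, R, mu, n_angles=180):
    """First-order Chang correction factor for a point at distance d from the
    center of a uniformly attenuating disk of radius R (attenuation mu per
    unit length): 1 / mean over angles of exp(-mu * path length to boundary)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    # distance from the point to the disk boundary along each ray direction
    proj = d * np.cos(phi)
    L = -proj + np.sqrt(proj ** 2 + R ** 2 - d ** 2)
    return 1.0 / np.mean(np.exp(-mu * L))
```

The corrected image is the uncorrected FBP reconstruction multiplied pixel-wise by this factor; at the disk center every path length equals R, so the factor reduces to exp(mu * R).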
Automatic Road Gap Detection Using Fuzzy Inference System
NASA Astrophysics Data System (ADS)
Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.
2011-09-01
Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics in the field. In this area, most research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods. Although most research is focused on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms aimed at refining road detection results are not as well developed. In this article, the main aim is to design an intelligent method to detect and compensate for road gaps remaining in the early results of road detection algorithms. The proposed algorithm consists of the following main steps: 1) Short gap coverage: in this step, a multi-scale morphological operation is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: in this step, the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system. For this purpose, a knowledge base consisting of expert rules is designed and fired on gap candidates from the road detection results. 3) Long gap coverage: in this stage, detected long gaps are compensated by two strategies, linear and polynomial: shorter gaps are filled by line fitting, while longer ones are compensated by polynomial fitting. 4) Accuracy assessment: in order to evaluate the obtained results, some accuracy assessment criteria are proposed. These criteria are obtained by comparing the obtained results with truly compensated ones produced by a human expert. The complete evaluation of the obtained results with their technical discussion constitutes the material of the full paper.
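The two fitting strategies of step 3 can be sketched as follows; the length threshold and the polynomial degree are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fill_gap(x_known, y_known, x_gap, short_threshold=10.0):
    """Interpolate road centerline points across a gap: line fit for short
    gaps, cubic polynomial fit for longer ones (illustrative choices)."""
    x_gap = np.asarray(x_gap, dtype=float)
    gap_len = float(np.max(x_gap) - np.min(x_gap))
    deg = 1 if gap_len <= short_threshold else 3
    coeffs = np.polyfit(x_known, y_known, deg)
    return np.polyval(coeffs, x_gap)
```

For example, road points flanking a short gap on a straight segment are bridged by the fitted line through the known points on either side.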
Application of a self-compensation mechanism to a rotary-laser scanning measurement system
NASA Astrophysics Data System (ADS)
Guo, Siyang; Lin, Jiarui; Ren, Yongjie; Shi, Shendong; Zhu, Jigui
2017-11-01
In harsh environmental conditions, the relative orientations of transmitters of rotary-laser scanning measuring systems are easily influenced by low-frequency vibrations or creep deformation of the support structure. A self-compensation method that counters this problem is presented. This method is based on an improved workshop Measurement Positioning System (wMPS) with inclinometer-combined transmitters. A calibration method for the spatial rotation between the transmitter and inclinometer with an auxiliary horizontal reference frame is presented. It is shown that the calibration accuracy can be improved by a mechanical adjustment using a special bubble level. The orientation-compensation algorithm of the transmitters is described in detail. The feasibility of this compensation mechanism is validated by Monte Carlo simulations and experiments. The mechanism mainly provides a two-degrees-of-freedom attitude compensation.
Canceling the momentum in a phase-shifting algorithm to eliminate spatially uniform errors.
Hibino, Kenichi; Kim, Yangjin
2016-08-10
In phase-shifting interferometry, phase modulation nonlinearity causes both spatially uniform and nonuniform errors in the measured phase. Conventional linear-detuning error-compensating algorithms only eliminate the spatially variable error component. The uniform error is proportional to the inertial momentum of the data-sampling weight of a phase-shifting algorithm. This paper proposes a design approach to cancel the momentum by using characteristic polynomials in the Z-transform space and shows that an arbitrary M-frame algorithm can be modified to a new (M+2)-frame algorithm that acquires new symmetry to eliminate the uniform error.
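As a concrete reference point, the classic four-step algorithm below recovers the phase exactly when the phase shifts are exact; detuning the shifts introduces exactly the kind of errors that error-compensating designs address. This is a generic illustration of synchronous phase-shifting detection, not the paper's momentum-cancelling (M+2)-frame construction:

```python
import numpy as np

def four_step_phase(I):
    """Classic 4-step phase-shifting estimate (shifts 0, pi/2, pi, 3*pi/2):
    phase = atan2(I4 - I2, I1 - I3)."""
    I1, I2, I3, I4 = I
    return np.arctan2(I4 - I2, I1 - I3)

# Simulated fringe intensities for a known phase, exact quarter-wave shifts
phi_true = 0.7
shifts = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
I = 1.0 + 0.6 * np.cos(phi_true + shifts)
```

Scaling `shifts` by, say, 1.02 models phase-modulation detuning, and the resulting phase error contains both a spatially uniform part (the paper's target) and a phase-dependent part.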
A technique for global monitoring of net solar irradiance at the ocean surface. I - Model
NASA Technical Reports Server (NTRS)
Frouin, Robert; Chertock, Beth
1992-01-01
An accurate long-term (84-month) climatology of net surface solar irradiance over the global oceans from Nimbus-7 earth radiation budget (ERB) wide-field-of-view planetary-albedo data is generated via an algorithm based on radiative transfer theory. Net surface solar irradiance is computed as the difference between the top-of-atmosphere incident solar irradiance (known) and the sum of the solar irradiance reflected back to space by the earth-atmosphere system (observed) and the solar irradiance absorbed by atmospheric constituents (modeled). It is shown that the effects of clouds and clear-atmosphere constituents can be decoupled on a monthly time scale, which makes it possible to directly apply the algorithm with monthly averages of ERB planetary-albedo data. Compared theoretically with the algorithm of Gautier et al. (1980), the present algorithm yields higher solar irradiance values in clear and thin cloud conditions and lower values in thick cloud conditions.
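The irradiance decomposition described above is, schematically (function and parameter names are invented for the demo; the real algorithm models the absorbed term with radiative transfer rather than a fixed fraction):

```python
def net_surface_irradiance(e_toa, albedo_planetary, absorbed_fraction):
    """Net surface solar irradiance [W/m^2] as the top-of-atmosphere incident
    irradiance minus the part reflected back to space (observed planetary
    albedo) and the part absorbed by atmospheric constituents (modeled)."""
    reflected = albedo_planetary * e_toa
    absorbed = absorbed_fraction * e_toa
    return e_toa - reflected - absorbed
```

With monthly averaged planetary albedo as input, this is the budget the algorithm evaluates per grid cell.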
Apparatus for controlling air/fuel ratio for internal combustion engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kato, K.; Mizuno, T.
1986-07-08
This patent describes an apparatus for controlling the air-fuel ratio of an air-fuel mixture to be supplied to an internal combustion engine having an intake passage, an exhaust passage, and an exhaust gas recirculation passage for recirculating exhaust gases in the exhaust passage to the intake passage. The apparatus consists of: (a) means for sensing the rotational speed of the engine; (b) means for sensing intake pressure in the intake passage; (c) means for sensing atmospheric pressure; (d) means for enabling and disabling exhaust gas recirculation through the exhaust gas recirculation passage in accordance with the operating condition of the engine; (e) means for determining the required amount of fuel in accordance with the sensed rotational speed and the sensed intake pressure; (f) means for determining, when exhaust gas recirculation is enabled, a first correction value in accordance with the sensed rotational speed, the sensed intake pressure and the sensed atmospheric pressure, the first correction value being used for correcting the fuel amount so as to compensate for the decrease of fuel due to the performance of exhaust gas recirculation and also to compensate for the change in atmospheric pressure; (g) means for determining, when exhaust gas recirculation is disabled, a second correction value in accordance with the atmospheric pressure, the second correction value being used so as to compensate for the change in atmospheric pressure; (h) means for correcting the required amount of fuel by the first correction value and the second correction value when exhaust gas recirculation is enabled and disabled, respectively; and (i) means for supplying the engine with the corrected amount of fuel.
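Items (f) through (i) amount to selecting one of two correction values depending on whether EGR is active and applying it to the base fuel amount. A minimal sketch; the multiplicative form and all names are assumptions, since the patent text does not specify the arithmetic:

```python
def corrected_fuel(base_fuel, egr_enabled, first_corr, second_corr):
    """Apply the EGR-dependent correction value to the required fuel amount.
    first_corr compensates for EGR-induced fuel decrease plus atmospheric
    pressure change; second_corr compensates for pressure change only."""
    corr = first_corr if egr_enabled else second_corr
    return base_fuel * corr
```

In the apparatus, `base_fuel` would itself come from a speed/intake-pressure map, and the two correction values from their respective sensor-driven maps.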
Johnson, Jennifer E; Berry, Joseph A
2013-10-01
The distribution of nitrogen isotopes in the biosphere has the potential to offer insights into the past, present and future of the nitrogen cycle, but it is challenging to unravel the processes controlling patterns of mixing and fractionation. We present a mathematical model describing a previously overlooked process: nitrogen isotope fractionation during leaf-atmosphere NH3(g) exchange. The model predicts that when leaf-atmosphere exchange of NH3(g) occurs in a closed system, the atmospheric reservoir of NH3(g) equilibrates at a concentration equal to the ammonia compensation point and an isotopic composition 8.1‰ lighter than nitrogen in protein. In an open system, when atmospheric concentrations of NH3(g) fall below or rise above the compensation point, protein can be isotopically enriched by net efflux of NH3(g) or depleted by net uptake. Comparison of model output with existing measurements in the literature suggests that this process contributes to variation in the isotopic composition of nitrogen in plants as well as NH3(g) in the atmosphere, and should be considered in future analyses of nitrogen isotope circulation. The matrix-based modelling approach that is introduced may be useful for quantifying isotope dynamics in other complex systems that can be described by first-order kinetics. © 2013 John Wiley & Sons Ltd.
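The closed-system prediction, with the atmospheric reservoir relaxing to the compensation point, can be sketched as first-order kinetics. The rate constant and concentrations below are illustrative, and the paper's actual model tracks isotopologues with a matrix formulation rather than this single scalar pool:

```python
def equilibrate_nh3(c0, chi_comp, k, dt=0.01, steps=5000):
    """Closed-system leaf-atmosphere NH3(g) exchange sketch: the atmospheric
    concentration c relaxes at first order toward the ammonia compensation
    point chi_comp with rate constant k (forward Euler integration)."""
    c = c0
    for _ in range(steps):
        c += k * (chi_comp - c) * dt
    return c
```

Net flux vanishes once the atmospheric concentration reaches the compensation point, which is the equilibrium the model's isotopic offset of 8.1‰ is evaluated at.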
Validation Studies of the Accuracy of Various SO2 Gas Retrievals in the Thermal InfraRed (8-14 μm)
NASA Astrophysics Data System (ADS)
Gabrieli, A.; Wright, R.; Lucey, P. G.; Porter, J. N.; Honniball, C.; Garbeil, H.; Wood, M.
2016-12-01
Quantifying hazardous SO2 in the atmosphere and in volcanic plumes is important for public health and volcanic eruption prediction. Remote sensing measurements of the spectral radiance of plumes contain information on the abundance of SO2. However, in order to convert such measurements into SO2 path-concentrations, reliable inversion algorithms are needed. Various techniques can be employed to derive SO2 path-concentrations. The first approach employs a Partial Least Squares Regression model trained using MODTRAN5 simulations for a variety of plume and atmospheric conditions. Radiances at many spectral wavelengths (8-14 μm) were used in the algorithm. The second algorithm uses measurements inside and outside the SO2 plume. Measurements in the plume-free region (background sky) make it possible to remove background atmospheric conditions and any instrumental effects. After atmospheric and instrumental effects are removed, MODTRAN5 is used to fit the SO2 spectral feature and obtain SO2 path-concentrations. The two inversion algorithms described above can be compared with the inversion algorithm for SO2 retrievals developed by Prata and Bernardo (2014). Their approach employs three wavelengths to characterize the plume temperature, the atmospheric background, and the SO2 path-concentration. The accuracy of these various techniques requires further investigation in terms of the effects of different atmospheric background conditions. Validating these inversion algorithms is challenging because ground truth measurements are very difficult to obtain. However, if the three separate inversion algorithms provide similar SO2 path-concentrations for actual measurements with various background conditions, then this increases confidence in the results. Measurements of sky radiance when looking through SO2-filled gas cells were collected with a Thermal Hyperspectral Imager (THI) under various atmospheric background conditions. These data were processed using the three inversion approaches, which were tested for convergence on the known SO2 gas cell path-concentrations. For this study, the inversion algorithms were modified to account for the gas cell configuration. Results from these studies will be presented, as well as results from SO2 gas plume measurements at Kīlauea volcano, Hawai'i.
The atmospheric correction algorithm for HY-1B/COCTS
NASA Astrophysics Data System (ADS)
He, Xianqiang; Bai, Yan; Pan, Delu; Zhu, Qiankun
2008-10-01
China launched its second ocean color satellite, HY-1B, on 11 April 2007, carrying two remote sensors. The Chinese Ocean Color and Temperature Scanner (COCTS) is the main sensor on HY-1B; it has not only eight visible and near-infrared wavelength bands similar to SeaWiFS, but also two additional thermal infrared bands to measure sea surface temperature. Therefore, COCTS has broad application potential, such as fishery resource protection and development, coastal monitoring and management, and marine pollution monitoring. Atmospheric correction is the key to quantitative ocean color remote sensing. In this paper, the operational atmospheric correction algorithm for HY-1B/COCTS has been developed. Firstly, based on the vector radiative transfer numerical model of the coupled ocean-atmosphere system (PCOART), the exact Rayleigh scattering look-up table (LUT), aerosol scattering LUT and atmosphere diffuse transmission LUT for HY-1B/COCTS have been generated. Secondly, using the generated LUTs, the operational atmospheric correction algorithm for HY-1B/COCTS has been developed. The algorithm has been validated using simulated spectral data generated by PCOART, and the results show that the error of the water-leaving reflectance retrieved by this algorithm is less than 0.0005, which meets the accuracy requirement of atmospheric correction for ocean color remote sensing. Finally, the algorithm has been applied to HY-1B/COCTS remote sensing data; the retrieved water-leaving radiances are consistent with Aqua/MODIS results, and the corresponding ocean color remote sensing products have been generated, including chlorophyll concentration and total suspended particulate matter concentration.
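Per band, the correction pipeline described above reduces to subtracting the LUT-derived Rayleigh and aerosol terms from the top-of-atmosphere signal and dividing by the diffuse transmittance. A schematic sketch in reflectance units; the symbols are the standard ocean-color terms and the numbers are invented, not values from the paper:

```python
def water_leaving_reflectance(rho_t, rho_r, rho_a, t_diffuse):
    """Schematic ocean-color atmospheric correction for one band:
    rho_t      - top-of-atmosphere reflectance (measured)
    rho_r      - Rayleigh reflectance (from the Rayleigh LUT)
    rho_a      - aerosol reflectance (from the aerosol LUT)
    t_diffuse  - diffuse transmittance (from the transmission LUT)."""
    return (rho_t - rho_r - rho_a) / t_diffuse
```

In practice each of the three atmospheric terms is interpolated from the precomputed LUTs as a function of viewing/solar geometry and aerosol model.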
28 CFR 79.32 - Criteria for eligibility for claims by onsite participants.
Code of Federal Regulations, 2010 CFR
2010-07-01
... following: (a) That the claimant was present onsite at any time during a period of atmospheric nuclear testing; (b) That the claimant was a participant during that period in the atmospheric detonation of a nuclear device; and (c) That after such participation, the claimant contracted a specified compensable...
Analysis of Technology for Solid State Coherent Lidar
NASA Technical Reports Server (NTRS)
Amzajerdian, Farzin
1997-01-01
Over the past few years, considerable advances have been made in diode-pumped, eye-safe, solid-state lasers and wide-bandwidth semiconductor detectors operating in the near-infrared region. These advances have created new possibilities for the development of low-cost, reliable, and compact coherent lidar systems for measurements of atmospheric winds and aerosol backscattering from a space-based platform. The work performed by the UAH personnel concentrated on the design and analysis of solid-state pulsed coherent lidar systems capable of measuring atmospheric winds from space, and on designing and performing laboratory experiments and measurements in support of solid-state laser radar remote sensing systems which are to be designed, deployed, and used by NASA to measure atmospheric processes and constituents. A lidar testbed system was designed and analyzed by considering the major space operational and environmental requirements and their associated physical constraints. The lidar optical system includes a wedge scanner and the compact telescope designed by the UAH personnel. The other major optical components included in the design and analyses were: polarizing beam splitter, routing mirrors, wave plates, signal beam derotator, and lag angle compensator. The testbed lidar optical train was designed and analyzed, and different design options for mounting and packaging the lidar subsystems, components, and support structure were investigated. All the optical components are to be mounted in a stress-free and stable manner to allow easy integration and alignment, and long-term stability. This lidar system is also intended to be used for evaluating the performance of various lidar subsystems and components that are to be integrated into a flight unit, and for demonstrating the integrity of the signal processing algorithms by performing actual atmospheric measurements from a ground station.
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground truth data obliges developers to work with simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up, consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES), or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES, and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
NASA Astrophysics Data System (ADS)
Liu, Jinxin; Chen, Xuefeng; Yang, Liangdong; Gao, Jiawei; Zhang, Xingwu
2017-11-01
In the field of active noise and vibration control (ANVC), a considerable part of unwelcome noise and vibration results from rotational machines, making the spectrum of the response signal multi-frequency. Narrowband filtered-x least mean square (NFXLMS) is a very popular algorithm to suppress such noise and vibration. It performs well because a priori knowledge of the fundamental frequency of the noise source (called the reference frequency) is adopted. However, if this prior knowledge is inaccurate, the control performance is dramatically degraded. This phenomenon is called reference frequency mismatch (RFM). In this paper, a novel narrowband ANVC algorithm with an orthogonal pair-wise reference frequency regulator is proposed to compensate for the RFM problem. Firstly, the RFM phenomenon in traditional NFXLMS is closely investigated both analytically and numerically. The results show that RFM changes the parameter estimation problem of the adaptive controller into a parameter tracking problem. Then, adaptive sinusoidal oscillators with output rectification are introduced as the reference frequency regulator to compensate for the RFM problem. The simulation results show that the proposed algorithm can dramatically suppress multiple-frequency noise and vibration with an improved convergence rate whether or not there is RFM. Finally, case studies using experimental data are conducted under conditions of no, small and large RFM. The shaft radial run-out signal of a rotor test platform is applied to simulate the primary noise, and an IIR model identified from a real steel structure is applied to simulate the secondary path. The results further verify the robustness and effectiveness of the proposed algorithm.
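The narrowband idea can be stripped down to adapting two weights on an in-phase/quadrature reference pair at the assumed reference frequency. The sketch below is plain narrowband LMS with an identity secondary path rather than full NFXLMS; when `omega` matches the disturbance frequency the tone is cancelled, and detuning `omega` reproduces the RFM degradation discussed above (all names and values are illustrative):

```python
import numpy as np

def narrowband_lms(d, omega, mu=0.01):
    """Single-tone narrowband adaptive canceller: two weights on a sin/cos
    reference pair at per-sample angular frequency omega; returns the
    residual error sequence e[n] = d[n] - w . x[n]."""
    w = np.zeros(2)
    e = np.empty_like(d)
    for n in range(d.size):
        x = np.array([np.sin(omega * n), np.cos(omega * n)])
        e[n] = d[n] - w @ x
        w += 2.0 * mu * e[n] * x          # LMS weight update
    return e

omega = 0.2 * np.pi                       # assumed reference frequency
n = np.arange(4000)
d = 1.5 * np.sin(omega * n + 0.4)         # tonal disturbance, no RFM
e = narrowband_lms(d, omega)
```

A multi-frequency disturbance is handled by running one such weight pair per harmonic, which is the structure NFXLMS builds on.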
An 'adding' algorithm for the Markov chain formalism for radiation transfer
NASA Technical Reports Server (NTRS)
Esposito, L. W.
1979-01-01
An adding algorithm is presented, that extends the Markov chain method and considers a preceding calculation as a single state of a new Markov chain. This method takes advantage of the description of the radiation transport as a stochastic process. Successive application of this procedure makes calculation possible for any optical depth without increasing the size of the linear system used. It is determined that the time required for the algorithm is comparable to that for a doubling calculation for homogeneous atmospheres. For an inhomogeneous atmosphere the new method is considerably faster than the standard adding routine. It is concluded that the algorithm is efficient, accurate, and suitable for smaller computers in calculating the diffuse intensity scattered by an inhomogeneous planetary atmosphere.
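The adding step itself, combining two layers while summing the geometric series of interreflections, can be sketched in scalar form. Scalar r/t stand in here for the operator (matrix) quantities of the Markov chain formalism, and the doubling loop illustrates how optical depth grows without enlarging the linear system; the numbers are invented:

```python
def add_layers(r1, t1, r2, t2):
    """Combine two scattering layers (scalar reflection/transmission):
    the factor 1/(1 - r1*r2) sums all orders of interreflection."""
    denom = 1.0 - r1 * r2
    r12 = r1 + t1 * r2 * t1 / denom
    t12 = t1 * t2 / denom
    return r12, t12

# Doubling: repeatedly combine a thin homogeneous layer with itself,
# so optical depth grows by 2**10 while the "system size" stays fixed
r, t = 0.05, 0.95          # conservative thin layer (r + t = 1)
for _ in range(10):
    r, t = add_layers(r, t, r, t)
```

For conservative (non-absorbing) scattering, r + t remains 1 after every doubling, which is a convenient correctness check on the recursion.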
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.; Gionfriddo, Thomas A.
1994-01-01
In this study there were two primary tasks. The first was to develop an algorithm for quantifying the distortion in a sonic boom. Such an algorithm should be somewhat automatic, with minimal human intervention. Once the algorithm was developed, it was used to test the hypothesis that sonic boom distortion is caused by atmospheric turbulence. This hypothesis testing was the second task. Using readily available sonic boom data, we statistically tested whether there was a correlation between the sonic boom distortion and the distance a boom traveled through atmospheric turbulence.
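At its core, the second task is a correlation test between a distortion metric and a turbulence path length. A generic Pearson-coefficient sketch (variable names are illustrative; the study's actual metric and data are not reproduced here):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between a boom distortion metric x and the
    distance y the boom propagated through the turbulent layer."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))
```

A value near +1 would support the hypothesis that longer turbulent paths produce more distortion; significance would then be judged against the sample size.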
NASA Astrophysics Data System (ADS)
Shastri, Niket; Pathak, Kamlesh
2018-05-01
The water vapor content of the atmosphere plays a very important role in climate. In this paper the application of GPS signals in meteorology is discussed, a useful technique for estimating the precipitable water vapor of the atmosphere. Various algorithms such as artificial neural networks, support vector machines and multiple linear regression are used to predict precipitable water vapor. Comparative studies in terms of root-mean-square error and mean absolute error are also carried out for all the algorithms.
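Of the three regressors compared, multiple linear regression is the simplest to sketch. Below, precipitable water vapor is fit from hypothetical predictors and scored with the two error metrics named in the abstract; the predictor names, coefficients, and data are invented for the demo:

```python
import numpy as np

def fit_mlr(X, y):
    """Multiple linear regression by least squares; returns coefficients
    (intercept first) for predicting precipitable water vapor."""
    A = np.hstack([np.ones((X.shape[0], 1)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def rmse_mae(y_true, y_pred):
    """Root-mean-square error and mean absolute error."""
    err = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))

# Hypothetical predictors: zenith wet delay, surface temperature, pressure
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (50, 3))
y = 1.0 + X @ np.array([6.5, 0.3, -0.2])      # synthetic noiseless PWV
coef = fit_mlr(X, y)
pred = np.hstack([np.ones((50, 1)), X]) @ coef
```

The same RMSE/MAE scoring applies unchanged to the neural network and support vector machine predictions, which is what makes the comparison across algorithms direct.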
Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery
NASA Technical Reports Server (NTRS)
Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana
1989-01-01
A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
Algorithm for atmospheric corrections of aircraft and satellite imagery
NASA Technical Reports Server (NTRS)
Fraser, R. S.; Ferrare, R. A.; Kaufman, Y. J.; Markham, B. L.; Mattoo, S.
1992-01-01
A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 microns. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
Using a focal-plane array to estimate antenna pointing errors
NASA Technical Reports Server (NTRS)
Zohar, S.; Vilnrotter, V. A.
1991-01-01
The use of extra collecting horns in the focal plane of an antenna as a means of determining the Direction of Arrival (DOA) of the signal impinging on it, provided it is within the antenna beam, is considered. Our analysis yields a relatively simple algorithm to extract the DOA from the horns' outputs. An algorithm is also developed which, in effect, measures the thermal noise of the horns' signals and determines its effect on the uncertainty of the extracted DOA parameters. Both algorithms were implemented in software and tested on simulated data. Based on these tests, it is concluded that this is a viable approach to DOA determination. Though the results obtained are of general applicability, the particular motivation for the present work is their application to the pointing of a mechanically deformed antenna. It is anticipated that the pointing algorithm for a deformed antenna could be obtained as a small perturbation of the algorithm developed for an undeformed antenna. In this context, it should be pointed out that, with a deformed antenna, the array of horns and its associated circuitry constitute the main part of the deformation-compensation system. In this case, the proposed pointing system may be viewed as an additional task carried out by the deformation-compensation hardware.
NASA Astrophysics Data System (ADS)
Sauppe, Sebastian; Hahn, Andreas; Brehm, Marcus; Paysan, Pascal; Seghers, Dieter; Kachelrieß, Marc
2016-03-01
We propose an adapted version of our previously published five-dimensional (5D) motion compensation (MoCo) algorithm, developed for micro-CT imaging of small animals, to provide for the first time motion artifact-free 5D cone-beam CT (CBCT) images from a conventional flat detector-based CBCT scan of clinical patients. Image quality of retrospectively respiratory- and cardiac-gated volumes from flat detector CBCT scans is deteriorated by severe sparse-projection artifacts. These artifacts further complicate motion estimation, which is required for MoCo image reconstruction. To obtain high-quality 5D CBCT images at the same x-ray dose and the same number of projections as today's 3D CBCT, we developed a double MoCo approach based on motion vector fields (MVFs) for respiratory and cardiac motion. In a first step, our already published four-dimensional (4D) artifact-specific cyclic motion-compensation (acMoCo) approach is applied to compensate for respiratory patient motion. With this information, a cyclic phase-gated deformable heart registration algorithm is applied to the respiratory motion-compensated 4D CBCT data, resulting in cardiac MVFs. We apply these MVFs to double-gated images, thereby obtaining respiratory and cardiac motion-compensated 5D CBCT images. Our 5D MoCo approach was applied to patient data acquired with the TrueBeam 4D CBCT system (Varian Medical Systems). The double MoCo approach turned out to be very efficient and removed nearly all streak artifacts, since it makes use of 100% of the projection data for each reconstructed frame. The 5D MoCo patient data show fine details and no motion blurring, even in regions close to the heart where motion is fastest.
Scene-based nonuniformity correction technique for infrared focal-plane arrays.
Liu, Yong-Jin; Zhu, Hong; Zhao, Yi-Gong
2009-04-20
A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors, which can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem and both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect.
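The line-fitting step of such a scene-based correction can be sketched as follows. This is an illustrative least-squares version, not the authors' exact update rule, and it assumes the true-scene estimate is already available from the interframe prediction:

```python
import numpy as np

def update_gain_bias(scene_est, observed):
    """Per-detector least-squares line fit: observed ~ gain * scene + bias.

    scene_est, observed: arrays of shape (n_frames, n_detectors).
    Returns per-detector gain and bias estimates.
    """
    x_mean = scene_est.mean(axis=0)
    y_mean = observed.mean(axis=0)
    cov = ((scene_est - x_mean) * (observed - y_mean)).mean(axis=0)
    var = ((scene_est - x_mean) ** 2).mean(axis=0)
    gain = cov / var
    bias = y_mean - gain * x_mean
    return gain, bias

def correct(observed, gain, bias):
    """Compensated output: invert the per-detector linear response."""
    return (observed - bias) / gain
```

Running the fit over a sliding window of frames, rather than the whole sequence, is what allows temporal drifts in the nonuniformity parameters to be tracked.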
Kanematsu, Nobuyuki
2011-03-07
A broad-beam-delivery system for radiotherapy with protons or ions often employs multiple collimators and a range-compensating filter, which offer complex and potentially useful beam customization. It is however difficult for conventional pencil-beam algorithms to deal with fine structures of these devices due to beam-size growth during transport. This study aims to avoid the difficulty with a novel computational model. The pencil beams are initially defined at the range-compensating filter with angular-acceptance correction for upstream collimation followed by stopping and scattering. They are individually transported with possible splitting near the aperture edge of a downstream collimator to form a sharp field edge. The dose distribution for a carbon-ion beam was calculated and compared with existing experimental data. The penumbra sizes of various collimator edges agreed between them to a submillimeter level. This beam-customization model will be used in the greater framework of the pencil-beam splitting algorithm for accurate and efficient patient dose calculation.
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Smith, W. T.
1990-01-01
Surface errors on parabolic reflector antennas degrade the overall performance of the antenna. Space antenna structures are difficult to build, deploy and control. They must maintain a nearly perfect parabolic shape in a harsh environment and must be lightweight. Electromagnetic compensation for surface errors in large space reflector antennas can be used to supplement mechanical compensation. Electromagnetic compensation for surface errors in large space reflector antennas has been the topic of several research studies. Most of these studies try to correct the focal plane fields of the reflector near the focal point and, hence, compensate for the distortions over the whole radiation pattern. An alternative approach to electromagnetic compensation is presented. The proposed technique uses pattern synthesis to compensate for the surface errors. The pattern synthesis approach uses a localized algorithm in which pattern corrections are directed specifically towards portions of the pattern requiring improvement. The pattern synthesis technique does not require knowledge of the reflector surface. It uses radiation pattern data to perform the compensation.
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by a frame-selection technique, and multi-frame blind deconvolution is then performed. No a priori knowledge is required except for the positivity constraint in the blind deconvolution. The use of multi-frame images improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with a 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
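The frame-selection step can be illustrated with a simple sharpness score; gradient energy is one common choice, though the abstract does not specify the authors' exact criterion:

```python
import numpy as np

def sharpness(frame):
    """Image-gradient energy, a common frame-quality metric."""
    gy, gx = np.gradient(frame.astype(float))
    return float((gx ** 2 + gy ** 2).sum())

def select_frames(frames, keep_fraction=0.3):
    """Keep the sharpest fraction of an AO closed-loop frame series."""
    scores = [sharpness(f) for f in frames]
    order = np.argsort(scores)[::-1]          # best (highest score) first
    n_keep = max(1, int(len(frames) * keep_fraction))
    return [frames[i] for i in order[:n_keep]]
```

The retained frames would then be passed together to the multi-frame blind deconvolution stage.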
NASA Technical Reports Server (NTRS)
Liu, Xu; Larar, Allen M.; Zhou, Daniel K.; Kizer, Susan H.; Wu, Wan; Barnet, Christopher; Divakarla, Murty; Guo, Guang; Blackwell, Bill; Smith, William L.;
2011-01-01
Different methods for retrieving atmospheric profiles in the presence of clouds from hyperspectral satellite remote sensing data will be described. We will present results from the JPSS cloud-clearing algorithm and NASA Langley cloud retrieval algorithm.
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.; Ray, Taylor J.
2013-01-01
Two versions of airborne wind profiling algorithms for the pulsed 2-micron coherent Doppler lidar system at NASA Langley Research Center in Virginia are presented. Each algorithm utilizes a different number of line-of-sight (LOS) lidar returns while compensating for the adverse effects of the different coordinate systems of the aircraft and the Earth. One of the two algorithms, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), estimates wind products using two LOSs; the other utilizes five LOSs. The airborne lidar data were acquired during NASA's Genesis and Rapid Intensification Processes (GRIP) campaign in 2010. The wind profile products from the two algorithms are compared with dropsonde data to validate their results.
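Once the aircraft attitude has been removed, a two-LOS retrieval reduces to a 2x2 linear solve for the horizontal wind components. A sketch, neglecting the vertical wind; the angle conventions here are illustrative, not APOLO's own:

```python
import numpy as np

def wind_from_two_los(v_r, azimuths_deg, elevation_deg):
    """Solve for horizontal wind (u east, v north) from two line-of-sight
    radial velocities, assuming negligible vertical wind:

        v_r[i] = u * cos(el) * sin(az_i) + v * cos(el) * cos(az_i)
    """
    el = np.radians(elevation_deg)
    az = np.radians(np.asarray(azimuths_deg, float))
    A = np.column_stack([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az)])
    u, v = np.linalg.solve(A, np.asarray(v_r, float))
    return u, v
```

A five-LOS version would replace the exact solve with a least-squares fit over the over-determined system, which averages down the per-shot noise.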
USDA-ARS?s Scientific Manuscript database
Postharvest management of apple fruit ripening using controlled atmosphere (CA) cold storage can be enhanced as CA oxygen concentration is decreased to close to the anaerobic compensation point (ACP). Monitoring fruit chlorophyll fluorescence is one technology available to assess fruit response to ...
GIFTS SM EDU Level 1B Algorithms
NASA Technical Reports Server (NTRS)
Tian, Jialin; Gazarik, Michael J.; Reisse, Robert A.; Johnson, David G.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high-resolution spectral imager designed to measure infrared (IR) radiances using a Fourier transform spectrometer (FTS). The GIFTS instrument employs three focal plane arrays (FPAs), which gather measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the GIFTS SM EDU Level 1B algorithms involved in the calibration. The GIFTS Level 1B calibration procedures can be subdivided into four blocks. In the first block, the measured raw interferograms are first corrected for the detector nonlinearity distortion, followed by the complex filtering and decimation procedure. In the second block, a phase correction algorithm is applied to the filtered and decimated complex interferograms. The resulting imaginary part of the spectrum contains only the noise component of the uncorrected spectrum. Additional random noise reduction can be accomplished by applying a spectral smoothing routine to the phase-corrected spectrum. The phase correction and spectral smoothing operations are performed on a set of interferogram scans for both ambient and hot blackbody references. To continue with the calibration, we compute the spectral responsivity based on the previous results, from which the calibrated ambient blackbody (ABB), hot blackbody (HBB), and scene spectra can be obtained. We can now estimate the noise equivalent spectral radiance (NESR) from the calibrated ABB and HBB spectra. The correction schemes that compensate for the fore-optics offsets and off-axis effects are also implemented. In the third block, we developed an efficient method of generating pixel performance assessments.
In addition, a random pixel selection scheme is designed based on the pixel performance evaluation. Finally, in the fourth block, the single pixel algorithms are applied to the entire FPA.
NASA Astrophysics Data System (ADS)
Gianoli, Chiara; Kurz, Christopher; Riboldi, Marco; Bauer, Julia; Fontana, Giulia; Baroni, Guido; Debus, Jürgen; Parodi, Katia
2016-06-01
A clinical trial named PROMETHEUS is currently ongoing for inoperable hepatocellular carcinoma (HCC) at the Heidelberg Ion Beam Therapy Center (HIT, Germany). In this framework, 4D PET-CT datasets are acquired shortly after the therapeutic treatment to compare the irradiation-induced PET image with a Monte Carlo PET prediction resulting from the simulation of treatment delivery. The extremely low count statistics of this measured PET image represent a major limitation of the technique, especially in the presence of target motion. The purpose of this study is to investigate two different 4D PET motion compensation strategies for recovering the whole count statistics and improving the image quality of the 4D PET-CT datasets for PET-based treatment verification. The well-known 4D-MLEM reconstruction algorithm, which embeds the motion compensation in the reconstruction process of 4D PET sinograms, was compared to a recently proposed pre-reconstruction motion compensation strategy, which operates in the sinogram domain by applying the motion compensation to the 4D PET sinograms. With reference to phantom and patient datasets, advantages and drawbacks of the two 4D PET motion compensation strategies were identified. The 4D-MLEM algorithm was strongly affected by inverse inconsistency of the motion model but demonstrated the capability to mitigate noise-break-up effects. Conversely, the pre-reconstruction warping showed less sensitivity to inverse inconsistency but also more noise in the reconstructed images. The comparison was performed by quantifying PET activity and ion range differences, typically yielding similar results. The study demonstrated that treatment verification of moving targets can be accomplished with the image quality afforded by the whole count statistics, as obtained from the application of 4D PET motion compensation strategies.
In particular, the pre-reconstruction warping was shown to represent a promising choice when combined with intra-reconstruction smoothing.
NASA Astrophysics Data System (ADS)
Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.
In Earth observation, the atmosphere has a non-negligible influence on visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction to remove the effects of molecular and aerosol scattering. In the present study, we have applied a fast atmospheric correction algorithm to IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and a simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth to correct the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false color composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover types, i.e., red soil, chickpea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and we found a very good match between measured surface reflectance and atmospherically corrected reflectance for all spectral bands.
Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with the aggregated ground measurements, which showed a very good correlation of 0.96 in all four spectral bands (green, red, NIR and SWIR). To quantify the accuracy of the proposed method in estimating surface reflectance, the root mean square error (RMSE) associated with the method was evaluated; the analysis of ground-measured versus retrieved AWiFS reflectance yielded small RMSE values for all four spectral bands. EOS TERRA/AQUA MODIS-derived AOD exhibited a very good correlation of 0.92, and the datasets provide an effective means of carrying out atmospheric corrections in an operational way. Keywords: atmospheric correction, 6S code, MODIS, spectroradiometer, sun photometer
NASA Astrophysics Data System (ADS)
Fedonin, O. N.; Petreshin, D. I.; Ageenko, A. V.
2018-03-01
In this article, the issue of increasing the accuracy of a CNC lathe by compensating for the static and dynamic errors of the machine is investigated. An algorithm and a diagnostic system for a CNC machine tool are considered which allow the machine's errors to be determined for compensation. The results of experimental studies on diagnosing and improving the accuracy of a CNC lathe are presented.
Resolution Enhancement In Ultrasonic Imaging By A Time-Varying Filter
NASA Astrophysics Data System (ADS)
Ching, N. H.; Rosenfeld, D.; Braun, M.
1987-09-01
The study reported here investigates the use of a time-varying filter to compensate for the spreading of ultrasonic pulses due to the frequency dependence of attenuation by tissues. The effect of this pulse spreading is to degrade progressively the axial resolution with increasing depth. The form of compensation required to correct for this effect is impossible to realize exactly. A novel time-varying filter utilizing a bank of bandpass filters is proposed as a realizable approximation of the required compensation. The performance of this filter is evaluated by means of a computer simulation. The limits of its application are discussed. Apart from improving the axial resolution, and hence the accuracy of axial measurements, the compensating filter could be used in implementing tissue characterization algorithms based on attenuation data.
Actuator stiction compensation via variable amplitude pulses.
Arifin, B M S; Munaro, C J; Angarita, O F B; Cypriano, M V G; Shah, S L
2018-02-01
A novel model free stiction compensation scheme is developed which eliminates the oscillations and also reduces valve movement, allowing good setpoint tracking and disturbance rejection. Pulses with varying amplitude are added to the controller output to overcome stiction and when the error becomes smaller than a specified limit, the compensation ceases and remains in a standby mode. The compensation re-starts as soon as the error exceeds the user specified threshold. The ability to cope with uncertainty in friction is a feature achieved by the use of pulses of varying amplitude. The algorithm has been evaluated via simulation and by application on an industrial DCS system interfaced to a pilot scale process with features identical to those found in industry including a valve positioner. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
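The standby/pulse logic can be sketched as a small state machine. The threshold, initial amplitude, and decay schedule below are illustrative stand-ins for the paper's tuning rules, not its actual design:

```python
class PulseCompensator:
    """Model-free stiction compensation sketch: add pulses of varying
    (here, shrinking) amplitude to the controller output while the control
    error is large, and go to standby once the error is inside the
    threshold. All constants are illustrative."""

    def __init__(self, threshold, a0, decay=0.8):
        self.threshold = threshold
        self.a0 = a0              # initial pulse amplitude
        self.decay = decay        # amplitude reduction per pulse
        self.amplitude = a0
        self.standby = True

    def output(self, error):
        """Supplemental signal to add to the controller output."""
        if abs(error) <= self.threshold:
            self.standby = True
            self.amplitude = self.a0      # re-arm for the next excursion
            return 0.0
        self.standby = False
        pulse = self.amplitude if error > 0 else -self.amplitude
        self.amplitude *= self.decay      # varying amplitude copes with
        return pulse                      # uncertain friction levels
```

Shrinking the pulse amplitude once movement starts is what limits valve travel while still breaking static friction on the first pulses.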
Djordjevic, Ivan B; Xu, Lei; Wang, Ting
2008-09-15
We present two PMD compensation schemes suitable for use in multilevel (M>=2) block-coded modulation schemes with coherent detection. The first scheme is based on a BLAST-type polarization-interference cancellation scheme, and the second is based on iterative polarization cancellation. Both schemes use LDPC codes as channel codes. The proposed PMD compensation schemes are evaluated by employing coded OFDM and coherent detection. When used in combination with girth-10 LDPC codes, these schemes outperform polarization-time-coding-based OFDM by 1 dB at a BER of 10^-9 and provide twice the spectral efficiency. The proposed schemes perform comparably and are able to compensate even 1200 ps of differential group delay with negligible penalty.
Exact and Monte carlo resampling procedures for the Wilcoxon-Mann-Whitney and Kruskal-Wallis tests.
Berry, K J; Mielke, P W
2000-12-01
Exact and Monte Carlo resampling FORTRAN programs are described for the Wilcoxon-Mann-Whitney rank sum test and the Kruskal-Wallis one-way analysis of variance for ranks test. The program algorithms compensate for tied values and do not depend on asymptotic approximations for probability values, unlike most algorithms contained in PC-based statistical software packages.
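The mid-rank treatment of ties plus Monte Carlo resampling at the core of such a program can be sketched in a few lines; this illustrates the approach, it is not a port of the FORTRAN code:

```python
import random

def midranks(values):
    """Assign ranks, averaging (mid-ranks) over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1              # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_pvalue(x, y, n_resamples=10000, seed=0):
    """Two-sided Monte Carlo resampling p-value for the
    Wilcoxon-Mann-Whitney rank-sum statistic, ties handled by mid-ranks."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    ranks = midranks(pooled)
    observed = sum(ranks[: len(x)])
    mean = len(x) * (len(pooled) + 1) / 2      # null expectation of the sum
    count = 0
    idx = list(range(len(pooled)))
    for _ in range(n_resamples):
        rng.shuffle(idx)                       # random relabeling of groups
        w = sum(ranks[i] for i in idx[: len(x)])
        if abs(w - mean) >= abs(observed - mean):
            count += 1
    return count / n_resamples
```

An exact version would enumerate all group assignments instead of sampling them, which is the distinction the programs described above draw between their exact and Monte Carlo modes.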
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-01
In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.
NASA Astrophysics Data System (ADS)
Wang, Jin; Li, Haoxu; Zhang, Xiaofeng; Wu, Rangzhong
2017-05-01
Indoor positioning using visible light communication has become a topic of intensive research in recent years. Because the normal of the receiver always deviates from that of the transmitter in application, the positioning systems which require that the normal of the receiver be aligned with that of the transmitter have large positioning errors. Some algorithms take the angular vibrations into account; nevertheless, these positioning algorithms cannot meet the requirement of high accuracy or low complexity. A visible light positioning algorithm combined with angular vibration compensation is proposed. The angle information from the accelerometer or other angle acquisition devices is used to calculate the angle of incidence even when the receiver is not horizontal. Meanwhile, a received signal strength technique with high accuracy is employed to determine the location. Moreover, an eight-light-emitting-diode (LED) system model is provided to improve the accuracy. The simulation results show that the proposed system can achieve a low positioning error with low complexity, and the eight-LED system exhibits improved performance. Furthermore, trust region-based positioning is proposed to determine three-dimensional locations and achieves high accuracy in both the horizontal and the vertical components.
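The received-signal-strength model underlying such a system is the Lambertian line-of-sight channel, with the incidence angle computed from the accelerometer-measured receiver normal rather than assumed vertical. A sketch, with illustrative detector area and geometry:

```python
import numpy as np

def channel_gain(m, led_pos, rx_pos, rx_normal, area=1e-4):
    """Lambertian LOS channel gain for visible light positioning.

    m: Lambertian order of the LED (assumed to point straight down).
    rx_normal: receiver normal, e.g. from an accelerometer, which
    compensates for tilt/angular vibration of the device.
    """
    vec = np.asarray(rx_pos, float) - np.asarray(led_pos, float)
    d = np.linalg.norm(vec)
    cos_phi = -vec[2] / d                       # emission angle at the LED
    n = np.asarray(rx_normal, float)
    n = n / np.linalg.norm(n)
    cos_psi = float(n @ (-vec / d))             # incidence angle at the PD
    if cos_phi <= 0.0 or cos_psi <= 0.0:
        return 0.0                              # outside the field of view
    return (m + 1) * area / (2 * np.pi * d ** 2) * cos_phi ** m * cos_psi
```

With the tilt-corrected incidence angle known, each measured power yields a distance estimate, and three or more LEDs (eight in the enhanced model above) allow the position to be found by trilateration or least squares.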
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael
2014-01-14
In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.
NASA Astrophysics Data System (ADS)
Waldmann, I. P.
2016-04-01
Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the “dreams” of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.
Coastal Zone Color Scanner atmospheric correction algorithm - Multiple scattering effects
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1987-01-01
Errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm are analyzed. The analysis is based on radiative transfer computations in model atmospheres, in which the aerosols and molecules are distributed vertically in an exponential manner, with most of the aerosol scattering located below the molecular scattering. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, making it possible to determine the errors along typical CZCS scan lines. Information provided by the analysis makes it possible to judge the efficacy of the current algorithm with the current sensor and to estimate the impact of the algorithm-induced errors on a variety of applications.
Neural Network Compensation for Frequency Cross-Talk in Laser Interferometry
NASA Astrophysics Data System (ADS)
Lee, Wooram; Heo, Gunhaeng; You, Kwanho
The heterodyne laser interferometer acts as an ultra-precise measurement apparatus in semiconductor manufacturing. However, the periodic nonlinearity caused by frequency cross-talk is an obstacle to achieving high measurement accuracy at the nanometer scale. To minimize the nonlinearity error of the heterodyne interferometer, we propose a frequency cross-talk compensation algorithm based on an artificial intelligence method. A feedforward neural network trained by back-propagation compensates for the nonlinearity error, minimizing the difference from the reference signal. Experimental results demonstrate the improved accuracy through comparison with position values from a capacitive displacement sensor.
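A feedforward network trained by back-propagation to fit a periodic error profile can be sketched as below. The sine target stands in for the measured cross-talk nonlinearity, and the architecture, seed, and learning rate are illustrative, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the periodic nonlinearity error measured against a reference.
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = 0.1 * np.sin(2 * x)

# One hidden layer, tanh activation, trained by plain back-propagation.
W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

initial_loss = float(((predict(x) - y) ** 2).mean())

for _ in range(3000):
    h = np.tanh(x @ W1 + b1)                          # forward pass
    err = (h @ W2 + b2) - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)  # output-layer grads
    dh = (err @ W2.T) * (1.0 - h ** 2)                # back-propagated error
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

final_loss = float(((predict(x) - y) ** 2).mean())
```

Once trained, the network's prediction is subtracted from the interferometer reading to cancel the position-dependent nonlinearity.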
A New Technique for Compensating Joint Limits in a Robot Manipulator
NASA Technical Reports Server (NTRS)
Litt, Jonathan; Hickman, Andre; Guo, Ten-Huei
1996-01-01
A new robust, optimal, adaptive technique for compensating rate and position limits in the joints of a six-degree-of-freedom elbow manipulator is presented. In this new algorithm, the unmet demand resulting from actuator saturation is redistributed among the remaining unsaturated joints. The scheme is used to compensate for inadequate path planning and for problems such as joint limiting, joint freezing, or even obstacle avoidance, where a desired position and orientation are not attainable due to an unrealizable joint command. Once a joint encounters a limit, supplemental commands are sent to the other joints to best track the desired trajectory according to a selected criterion.
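The redistribution idea can be sketched as an iterative clamp-and-share rule. This is a deliberate simplification of the optimal, adaptive scheme described above; the weights stand in for the selected tracking criterion:

```python
import numpy as np

def redistribute(command, limits, weights=None):
    """Clamp saturated joint commands and redistribute the unmet demand
    among the unsaturated joints (illustrative sketch, not the paper's
    exact optimal law). limits are symmetric magnitude bounds."""
    cmd = np.asarray(command, float).copy()
    lim = np.asarray(limits, float)
    w = np.ones_like(cmd) if weights is None else np.asarray(weights, float)
    for _ in range(len(cmd)):                 # repeat until nothing saturates
        excess = np.clip(np.abs(cmd) - lim, 0.0, None) * np.sign(cmd)
        cmd = np.clip(cmd, -lim, lim)
        free = np.abs(cmd) < lim
        if not excess.any() or not free.any():
            break
        # share the total unmet demand among the unsaturated joints
        cmd[free] += excess.sum() * w[free] / w[free].sum()
    return cmd
```

In the actual manipulator the sharing weights would come from the task-space tracking criterion, so that the supplemental joint motions best reproduce the desired end-effector trajectory.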
NASA Astrophysics Data System (ADS)
Wang, Shuang; Liu, Tiegen; Jiang, Junfeng; Liu, Kun; Yin, Jinde; Wu, Fan; Zhao, Bofu; Xue, Lei; Mei, Yunqiao; Wu, Zhenhai
2013-12-01
We present an effective method to compensate for the spatial-frequency nonlinearity in a polarized low-coherence interferometer with a location-dependent dispersion element. Through the use of the location-dependent dispersive characteristics, the method establishes the exact relationship between wave number and discrete Fourier transform (DFT) serial number. The jump errors of the traditional absolute-phase algorithm are also avoided with nonlinearity compensation. We carried out experiments with an optical fiber Fabry-Perot (F-P) pressure sensing system to verify the effectiveness. The demodulation error is less than 0.139 kPa over the range of 170 kPa when using our nonlinearity compensation process in the demodulation.
NASA Astrophysics Data System (ADS)
Chen, Hao; Zhang, Xinggan; Bai, Yechao; Tang, Lan
2017-01-01
In inverse synthetic aperture radar (ISAR) imaging, migration through resolution cells (MTRCs) occurs when the rotation angle of the moving target is large, thereby degrading image resolution. To solve this problem, an ISAR imaging method based on segmented preprocessing is proposed. In this method, the echoes of a large rotating target are divided into several small segments, and each segment can generate a low-resolution image without MTRCs. Then, each low-resolution image is rotated back to the original position. After image registration and phase compensation, a high-resolution image can be obtained. Simulation and real experiments show that the proposed algorithm can deal with radar systems with different range and cross-range resolutions and significantly compensate for the MTRCs.
Ground-to-space optical power transfer. [using laser propulsion for orbit transfer
NASA Technical Reports Server (NTRS)
Mevers, G. E.; Hayes, C. L.; Soohoo, J. F.; Stubbs, R. M.
1978-01-01
Using laser radiation as the energy input to a rocket, it is possible to consider the transfer of large payloads economically between low initial orbits and higher energy orbits. In this paper we will discuss the results of an investigation to use a ground-based High Energy Laser (HEL) coupled to an adaptive antenna to transmit multi-megawatts of power to a satellite in low-earth orbit. Our investigation included diffraction effects, atmospheric transmission efficiency, adaptive compensation for atmospheric turbulence effects, including the servo bandwidth requirements for this correction, and the adaptive compensation for thermal blooming. For these evaluations we developed vertical profile models of atmospheric absorption, strength of optical turbulence (C_N^2), wind, temperature, and other parameters necessary to calculate system performance. Our atmospheric investigations were performed for CO2, the ^12C^18O_2 isotope, CO and DF wavelengths. For all of these considerations, output antenna locations of both sea level and mountain top (3.5 km above sea level) were used. Several adaptive system concepts were evaluated with a multiple source phased array concept being selected. This system uses an adaption technique of phase locking independent laser oscillators. When both system losses and atmospheric effects were assessed, the results predicted an overall power transfer efficiency of slightly greater than 50%.
NASA Astrophysics Data System (ADS)
Shi, Chong; Nakajima, Teruyuki
2018-03-01
Retrieval of aerosol optical properties and water-leaving radiance over the ocean is challenging, since the latter accounts for only ~10 % of the satellite-observed signal and is easily influenced by atmospheric scattering. The task is even more difficult in turbid coastal waters owing to optically complex oceanic substances or high aerosol loading. To address these problems, we present an optimization approach for the simultaneous determination of aerosol optical thickness (AOT) and normalized water-leaving radiance (nLw) from multispectral satellite measurements. In this algorithm, a coupled atmosphere-ocean radiative transfer model combined with a comprehensive bio-optical oceanic module is used to jointly simulate the satellite-observed reflectance at the top of the atmosphere and the water-leaving radiance just above the ocean surface. An optimal estimation method is then adopted to retrieve AOT and nLw iteratively. The algorithm is validated using Aerosol Robotic Network - Ocean Color (AERONET-OC) products selected from eight OC sites distributed over different waters, consisting of observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument that covered glint and non-glint conditions. Results show good consistency between retrieved and in situ measurements at each site. More accurate AOTs are obtained with the simultaneous retrieval method, particularly at shorter wavelengths and under sunglint conditions: the averaged percentage difference (APD) of retrieved AOT is generally reduced by approximately 10 % in visible bands compared with the standard atmospheric correction (AC) scheme, because all spectral measurements are used jointly to increase the information content in the AOT inversion, and the wind speed is simultaneously retrieved to compensate for the specular reflectance error of the rough-ocean-surface model.
For the retrieval of nLw, atmospheric overcorrection is avoided, yielding a significant improvement in the inversion of nLw at 412 nm. Furthermore, the simultaneous retrieval approach generally gives better estimates of the band ratios nLw(443) / nLw(554) and nLw(488) / nLw(554), with lower root mean square errors and relative differences than the standard AC approach in comparison to the AERONET-OC products, and the APD of retrieved Chl decreases by about 5 %. On the other hand, the standard AC scheme yields a more accurate retrieval of nLw at 488 nm, prompting further optimization of the oceanic bio-optical module of the current model.
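The iterative optimal estimation step can be illustrated with a generic Gauss-Newton update of the kind commonly used in such retrievals. This is a sketch under stated assumptions, not the paper's coupled radiative transfer code: `F` stands in for the forward model, `K` for its Jacobian, and `x_a`, `S_a`, `S_e` are generic prior-state and covariance placeholders.

```python
import numpy as np

def optimal_estimation(F, K, y, x_a, S_a, S_e, n_iter=20):
    """Gauss-Newton optimal estimation: iteratively fit state x to
    measurements y, regularized by the prior mean x_a and covariance S_a,
    with measurement-error covariance S_e."""
    Sa_inv, Se_inv = np.linalg.inv(S_a), np.linalg.inv(S_e)
    x = x_a.copy()
    for _ in range(n_iter):
        Kx = K(x)                        # Jacobian at the current state
        g = Kx.T @ Se_inv @ (y - F(x)) - Sa_inv @ (x - x_a)
        H = Kx.T @ Se_inv @ Kx + Sa_inv  # approximate Hessian of the cost
        x = x + np.linalg.solve(H, g)
    return x
```

In the paper the state vector holds AOT, nLw and wind speed jointly, which is what lets the spectral measurements share information across parameters.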
Study on improved Ip-iq APF control algorithm and its application in micro grid
NASA Astrophysics Data System (ADS)
Xie, Xifeng; Shi, Hua; Deng, Haiying
2018-01-01
To enhance the tracking velocity and accuracy of harmonic detection by the ip-iq algorithm, a novel ip-iq control algorithm based on instantaneous reactive power theory is presented. The improved algorithm adds a lead-correction link to adjust the zero point of the detection system, and fuzzy self-tuning adaptive PI control is introduced to dynamically adjust the DC-link voltage, meeting the harmonic-compensation requirements of the micro grid. Simulation and experimental results verify that the proposed method is feasible and effective in a micro grid.
NASA Technical Reports Server (NTRS)
Tsay, Si-Chee; Stamnes, Knut; Wiscombe, Warren; Laszlo, Istvan; Einaudi, Franco (Technical Monitor)
2000-01-01
This update reports a state-of-the-art discrete ordinate algorithm for monochromatic unpolarized radiative transfer in non-isothermal, vertically inhomogeneous, but horizontally homogeneous media. The physical processes included are Planckian thermal emission, scattering with arbitrary phase function, absorption, and surface bidirectional reflection. The system may be driven by parallel or isotropic diffuse radiation incident at the top boundary, as well as by internal thermal sources and thermal emission from the boundaries. Radiances, fluxes, and mean intensities are returned at user-specified angles and levels. DISORT has enjoyed considerable popularity in the atmospheric science and other communities since its introduction in 1988. Several new DISORT features are described in this update: intensity correction algorithms designed to compensate for the delta-M forward-peak scaling and obtain accurate intensities even in low orders of approximation; a more general surface bidirectional reflection option; and an exponential-linear approximation of the Planck function allowing more accurate solutions in the presence of large temperature gradients. DISORT has been designed to be an exemplar of good scientific software as well as a program of intrinsic utility. An extraordinary effort has been made to make it numerically well-conditioned, error-resistant, and user-friendly, and to take advantage of robust existing software tools. A thorough test suite is provided to verify the program both against published results, and for consistency where there are no published results. This careful attention to software design has been just as important in DISORT's popularity as its powerful algorithmic content.
Atmospheric transformation of multispectral remote sensor data. [Great Lakes
NASA Technical Reports Server (NTRS)
Turner, R. E. (Principal Investigator)
1977-01-01
The author has identified the following significant results. The effects of the earth's atmosphere were accounted for, and a simple algorithm, based upon a radiative transfer model, was developed to determine the radiance at the earth's surface free of atmospheric effects. Actual multispectral remote sensor data for Lake Erie and associated optical thickness data were used to demonstrate the effectiveness of the atmospheric transformation algorithm. The basic transformation was general in nature and could be applied to the large-scale processing of multispectral aircraft or satellite remote sensor data.
NASA Technical Reports Server (NTRS)
Han, Jongil; Arya, S. Pal; Shaohua, Shen; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)
2000-01-01
Algorithms are developed to extract atmospheric boundary layer profiles for turbulence kinetic energy (TKE) and energy dissipation rate (EDR), with data from a meteorological tower as input. The profiles are based on similarity theory and scalings for the atmospheric boundary layer. The calculated profiles of EDR and TKE are required to match the observed values at 5 and 40 m. The algorithms are coded for operational use and yield plausible profiles over the diurnal variation of the atmospheric boundary layer.
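The idea of forcing a scaling-law profile through tower observations at two levels can be illustrated with a deliberately simplified sketch. The power-law shape below is an assumption for illustration only; the paper's TKE and EDR profiles use similarity-theory forms, which differ.

```python
import numpy as np

def matched_profile(z, z1, v1, z2, v2):
    """Toy power-law profile v(z) = a * z**b fitted so it passes exactly
    through tower observations (z1, v1) and (z2, v2), e.g. the 5 m and
    40 m levels, standing in for similarity-theory TKE/EDR shapes."""
    b = np.log(v2 / v1) / np.log(z2 / z1)   # exponent from the two levels
    a = v1 / z1**b                          # amplitude from the lower level
    return a * np.asarray(z, float)**b
```

The two observations pin down the two free parameters, so the resulting profile reproduces the tower values exactly, mirroring the paper's requirement that the calculated profiles match the observed values at 5 and 40 m.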
NASA Astrophysics Data System (ADS)
Dubovik, O.; Litvinov, P.; Lapyonok, T.; Ducos, F.; Fuertes, D.; Huang, X.; Torres, B.; Aspetsberger, M.; Federspiel, C.
2014-12-01
The POLDER imager on board the PARASOL micro-satellite is the only satellite polarimeter to have provided an extensive (~9-year) record of detailed polarimetric observations of the Earth's atmosphere from space. POLDER / PARASOL registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. Such observations are highly sensitive to the variability of the properties of the atmosphere and underlying surface, and cannot be adequately interpreted using the look-up-table retrieval algorithms developed for analyzing the mono-viewing, intensity-only observations traditionally used in atmospheric remote sensing. Therefore, a new enhanced retrieval algorithm, GRASP (Generalized Retrieval of Aerosol and Surface Properties), has been developed and applied to the processing of PARASOL data. GRASP relies on highly optimized statistical fitting of the observations and derives a large number of unknowns for each observed pixel. The algorithm uses an elaborate model of the atmosphere and fully accounts for all multiple interactions of scattered solar light with aerosol, gases and the underlying surface. All calculations are performed during the inversion; no look-up tables are used. The algorithm is very flexible in its use of various types of a priori constraints on the retrieved characteristics and in the parameterization of the surface-atmosphere system. It is also optimized for high-performance computing. The results of the PARASOL data processing will be presented, with emphasis on the transferability and adaptability of the developed retrieval concept to processing polarimetric observations of other planets. For example, flexibility and possible alternatives in modeling the properties of aerosol polydisperse mixtures, particle composition and shape, surface reflectance, etc. will be discussed.
NASA Astrophysics Data System (ADS)
Lee, Kwon-Ho; Kim, Wonkook
2017-04-01
The Geostationary Ocean Color Imager-II (GOCI-II) is designed to focus on ocean environmental monitoring with better spatial resolution (250 m for local and 1 km for full disk) and spectral resolution (13 bands) than the current operational GOCI-I mission. GOCI-II will be launched in 2018. This study presents an algorithm, currently under development, for atmospheric correction and retrieval of surface reflectance over land, optimized for the sensor's characteristics. We first derived top-of-atmosphere radiances in the 13 GOCI-II bands as proxy data from a parameterized radiative transfer code. Based on the proxy data, the algorithm performs cloud masking, gas-absorption correction, aerosol inversion, and aerosol extinction correction. The retrieved surface reflectances are evaluated against the MODIS level-2 surface reflectance products (MOD09). For the initial test period, the algorithm gave errors within 0.05 compared to MOD09. Further work will fully implement the algorithm in the GOCI-II Ground Segment system (G2GS) development environment. This atmospherically corrected surface reflectance product will be a standard GOCI-II product after launch.
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long-range applications and degraded imaging environments. Biometric technologies used for long-range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade the image quality of electro-optic and thermal imaging systems and, in the case of biometrics, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of next-generation biometric sensor systems.
The controllability of the aeroassist flight experiment atmospheric skip trajectory
NASA Technical Reports Server (NTRS)
Wood, R.
1989-01-01
The Aeroassist Flight Experiment (AFE) will be the first vehicle to simulate a return from geosynchronous orbit, deplete energy during an aerobraking maneuver, and navigate back out of the atmosphere to a low Earth orbit. It will gather scientific data necessary for future Aeroassisted Orbital Transfer Vehicles (AOTVs). Critical to mission success is the ability of the atmospheric guidance to accurately attain a targeted post-aeropass orbital apogee while nulling inclination errors and compensating for dispersions in state, aerodynamic, and atmospheric parameters. In trying to satisfy mission constraints, atmospheric entry-interface (EI) conditions, guidance gains, and trajectory parameters were investigated. The results of the investigation are presented, emphasizing the adverse effects of dispersed atmospheres on trajectory controllability.
EO-1 analysis applicable to coastal characterization
NASA Astrophysics Data System (ADS)
Burke, Hsiao-hua K.; Misra, Bijoy; Hsu, Su May; Griffin, Michael K.; Upham, Carolyn; Farrar, Kris
2003-09-01
The EO-1 satellite is part of NASA's New Millennium Program (NMP). It carries three imaging sensors: the multispectral Advanced Land Imager (ALI), Hyperion, and the Atmospheric Corrector. Hyperion is a high-resolution hyperspectral imager capable of resolving 220 spectral bands (from 0.4 to 2.5 microns) at 30 m resolution; the instrument images a 7.5 km by 100 km land area per scene. Hyperion has been the only space-borne HSI data source since the launch of EO-1 in late 2000. The discussion begins with the unique capabilities of hyperspectral sensing for coastal characterization: (1) most ocean feature algorithms are semi-empirical retrievals, and HSI has all the spectral bands needed to provide legacy with previous sensors and to explore new information; (2) coastal features are more complex than those of the deep ocean, and their coupled effects are best resolved with HSI; and (3) with contiguous spectral coverage, atmospheric compensation can be done with more accuracy and confidence, especially since atmospheric aerosol effects are most pronounced in the visible region, where coastal features lie. EO-1 data of Chesapeake Bay from 19 February 2002 are analyzed. It is first illustrated that hyperspectral data inherently provide more information for feature extraction than multispectral data, even though Hyperion has a lower SNR than ALI. Chlorophyll retrievals are also shown; the results compare favorably with data from other sources. The analysis illustrates the potential value of Hyperion (and HSI in general) data for coastal characterization. Future measurement requirements (airborne and space-borne) are also discussed.
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Lau, Yiu H.
1998-06-01
Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on the use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for … data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset sizes 2 and 4, with/without 3D compensation for detector response (CDR). Post-reconstruction filtering (PRF) was also performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-…). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than … improved noise with no reduction in the recovery coefficient for myocardium, but the effect was smaller when CDR was incorporated in the reconstruction. CDR alone provided better results than PRF without CDR. The results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
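A minimal OSEM update of the kind compared against EM above can be sketched as follows. This is a hedged toy with an explicit dense system matrix; a real implementation uses projector/backprojector operators, and detector-response compensation (CDR) amounts to folding the distance-dependent blur into that system model.

```python
import numpy as np

def osem(A, y, n_subsets, n_iter, x0=None):
    """Ordered-subsets EM for emission tomography, y ~ Poisson(A x).
    Each sub-iteration applies the multiplicative EM update using only
    one subset of the projection rows, which accelerates convergence."""
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.copy()
    # interleaved row subsets (a common, simple ordering choice)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            ratio = y[idx] / np.maximum(As @ x, 1e-12)   # measured / predicted
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```

With one subset this reduces exactly to standard EM; with S subsets each full iteration costs the same but makes roughly S times the progress, which is the acceleration the abstract exploits.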
NASA Astrophysics Data System (ADS)
Clements, J. M.; Sellers, E. W.; Ryan, D. B.; Caves, K.; Collins, L. M.; Throckmorton, C. S.
2016-12-01
Objective. Dry electrodes have an advantage over gel-based ‘wet’ electrodes by providing quicker set-up time for electroencephalography recording; however, the potentially poorer contact can result in noisier recordings. We examine the impact that this may have on brain-computer interface communication and potential approaches for mitigation. Approach. We present a performance comparison of wet and dry electrodes for use with the P300 speller system in both healthy participants and participants with communication disabilities (ALS and PLS), and investigate the potential for a data-driven dynamic data collection algorithm to compensate for the lower signal-to-noise ratio (SNR) in dry systems. Main results. Performance results from sixteen healthy participants obtained in the standard static data collection environment demonstrate a substantial loss in accuracy with the dry system. Using a dynamic stopping algorithm, performance may have been improved by collecting more data in the dry system for ten healthy participants and eight participants with communication disabilities; however, the algorithm did not fully compensate for the lower SNR of the dry system. An analysis of the wet and dry system recordings revealed that delta and theta frequency band power (0.1-4 Hz and 4-8 Hz, respectively) are consistently higher in dry system recordings across participants, indicating that transient and drift artifacts may be an issue for dry systems. Significance. Using dry electrodes is desirable for reduced set-up time; however, this study demonstrates that online performance is significantly poorer than for wet electrodes for users with and without disabilities. We test a new application of dynamic stopping algorithms to compensate for poorer SNR. Dynamic stopping improved dry system performance; however, further signal processing efforts are likely necessary for full mitigation.
Coastal Zone Color Scanner atmospheric correction - Influence of El Chichon
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Castano, Diego J.
1988-01-01
The addition of an El Chichon-like aerosol layer in the stratosphere is shown to have very little effect on the basic CZCS atmospheric correction algorithm. The additional stratospheric aerosol is found to increase the total radiance exiting the atmosphere, thereby increasing the probability that the sensor will saturate. It is suggested that in the absence of saturation the correction algorithm should perform as well as in the absence of the stratospheric layer.
ECG-gated interventional cardiac reconstruction for non-periodic motion.
Rohkohl, Christopher; Lauritsch, Günter; Biller, Lisa; Hornegger, Joachim
2010-01-01
The 3-D reconstruction of cardiac vasculature using C-arm CT is an active and challenging field of research. In interventional environments, patients often have arrhythmic heart signals or cannot hold their breath during the complete data acquisition. This important group of patients cannot be reconstructed with current approaches, which strongly depend on a high degree of cardiac motion periodicity to work properly. In last year's MICCAI contribution, a first algorithm was presented that is able to estimate non-periodic 4-D motion patterns. However, to some degree that algorithm still depends on periodicity, as it requires a prior image obtained using a simple ECG-gated reconstruction. In this work we aim to solve this problem by developing a motion-compensated ECG-gating algorithm. It is built upon a 4-D time-continuous affine motion model capable of compactly describing highly non-periodic motion patterns. A stochastic optimization scheme is derived which minimizes the error between the measured projection data and the forward projection of the motion-compensated reconstruction. For evaluation, the algorithm is applied to 5 datasets of the left coronary arteries of patients who ignored the breath-hold command and/or had arrhythmic heart signals during the data acquisition. By applying the developed algorithm, the average visibility of the vessel segments could be increased by 27%. The results show that the proposed algorithm provides excellent reconstruction quality in cases where classical approaches fail. The algorithm is highly parallelizable, and a clinically feasible runtime of under 4 minutes is achieved using modern graphics card hardware.
Closed-loop endo-atmospheric ascent guidance for reusable launch vehicle
NASA Astrophysics Data System (ADS)
Sun, Hongsheng
This dissertation focuses on the development of a closed-loop endo-atmospheric ascent guidance algorithm for the 2nd-generation reusable launch vehicle. Special attention has been given to issues that affect viability, complexity and reliability in on-board implementation. The algorithm is called once every guidance update cycle to recalculate the optimal solution based on the current flight condition, taking into account atmospheric effects and path constraints. This differs from traditional ascent guidance algorithms, which operate in a simple open-loop mode inside the atmosphere and later switch to a closed-loop vacuum ascent guidance scheme. The classical finite difference method is shown to be well suited for fast solution of the constrained optimal three-dimensional ascent problem. The initial guesses for the solutions are generated using an analytical vacuum optimal ascent guidance algorithm. A homotopy method is employed to gradually introduce the aerodynamic forces and continue the optimal vacuum solution to the full optimal solution. The vehicle chosen for this study is the Lockheed Martin X-33 lifting-body reusable launch vehicle. To verify the algorithm presented in this dissertation, a series of open-loop and closed-loop tests are performed for three different missions. Wind effects are also studied in the closed-loop simulations. For comparison, solutions for the same missions are also obtained by two independent optimization software packages. The results clearly establish the feasibility of closed-loop endo-atmospheric ascent guidance of rocket-powered launch vehicles. Abort cases are also tested to assess the algorithm's ability to autonomously incorporate abort modes.
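The homotopy (continuation) strategy, warm-starting each solve from the previous solution while a parameter walks the "aerodynamic" term in from 0 to 1, can be sketched on a scalar root-finding toy. This is purely illustrative; the dissertation applies the idea to a constrained optimal control problem solved by finite differences, and all names below are hypothetical.

```python
import numpy as np

def newton(f, fp, x0, tol=1e-12, maxit=50):
    """Plain scalar Newton iteration."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def homotopy_solve(f0, f0p, g, gp, x_vac, n_steps=10):
    """Continuation: start from the 'vacuum' solution x_vac of f0(x)=0
    and walk lambda from 0 to 1, introducing the extra term g so each
    Newton solve starts from the previous converged root."""
    x = x_vac
    for lam in np.linspace(0.0, 1.0, n_steps + 1)[1:]:
        f = lambda t, lam=lam: f0(t) + lam * g(t)
        fp = lambda t, lam=lam: f0p(t) + lam * gp(t)
        x = newton(f, fp, x)
    return x
```

The payoff is robustness: Newton can diverge from a poor initial guess, but each small increment of lambda keeps the guess inside the convergence basin of the next problem.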
Digital watermarking algorithm research of color images based on quaternion Fourier transform
NASA Astrophysics Data System (ADS)
An, Mali; Wang, Weijiang; Zhao, Zhen
2013-10-01
A watermarking algorithm for color images based on the quaternion Fourier transform (QFFT) and an improved quantization index modulation (QIM) algorithm is proposed in this paper. The original image is transformed by QFFT, the watermark image is processed by compression and quantization coding, and the processed watermark image is then embedded into components of the transformed original image, achieving embedding and blind extraction of the watermark. Experimental results show that the watermarking algorithm based on the improved QIM algorithm with distortion compensation achieves a good tradeoff between invisibility and robustness, and better robustness against Gaussian noise, salt-and-pepper noise, JPEG compression, cropping, filtering and image-enhancement attacks than the traditional QIM algorithm.
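Basic QIM with distortion compensation can be sketched for a single coefficient as follows. The step size, compensation factor, and dither choice are illustrative assumptions, not the paper's settings; the paper additionally operates on QFFT components of the whole image rather than scalar values.

```python
import numpy as np

DELTA = 8.0   # quantizer step (assumed value)
ALPHA = 0.8   # distortion-compensation factor in (0, 1]

def qim_embed(c, bit):
    """Embed one bit in a coefficient with dithered QIM plus distortion
    compensation: retain a (1 - ALPHA) fraction of the quantization error
    to reduce the embedding distortion."""
    d = bit * DELTA / 2.0                       # bit-dependent dither
    q = DELTA * np.round((c - d) / DELTA) + d   # nearest lattice point
    return q + (1.0 - ALPHA) * (c - q)

def qim_extract(s):
    """Blind extraction: pick the bit whose dithered lattice is closest."""
    dists = []
    for bit in (0, 1):
        d = bit * DELTA / 2.0
        q = DELTA * np.round((s - d) / DELTA) + d
        dists.append(abs(s - q))
    return int(np.argmin(dists))
```

Extraction needs no access to the original coefficient, which is the "blind" property the abstract claims; decoding stays correct as long as the residual error plus channel noise stays below roughly DELTA / 4.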
Extended use of two crossed Babinet compensators for wavefront sensing in adaptive optics
NASA Astrophysics Data System (ADS)
Paul, Lancelot; Kumar Saxena, Ajay
2010-12-01
An extended use of two crossed Babinet compensators as a wavefront sensor for adaptive optics applications is proposed. This method is based on the lateral shearing interferometry technique in two directions. A single record of the fringes in a pupil plane provides the information about the wavefront. The theoretical simulations based on this approach for various atmospheric conditions and other errors of optical surfaces are provided for better understanding of this method. Derivation of the results from a laboratory experiment using simulated atmospheric conditions demonstrates the steps involved in data analysis and wavefront evaluation. It is shown that this method has a higher degree of freedom in terms of subapertures and on the choice of detectors, and can be suitably adopted for real-time wavefront sensing for adaptive optics.
Extracting atmospheric turbulence and aerosol characteristics from passive imagery
NASA Astrophysics Data System (ADS)
Reinhardt, Colin N.; Wayne, D.; McBryde, K.; Cauble, G.
2013-09-01
Obtaining accurate, precise and timely information about local atmospheric turbulence, extinction conditions and aerosol/particulate content remains a difficult problem with incomplete solutions. It has important applications in areas such as optical and IR free-space communications, imaging system performance, and the propagation of directed energy. The capability to use passive imaging data to extract parameters characterizing atmospheric turbulence and aerosol/particulate conditions would be a valuable addition to the current piecemeal toolset for atmospheric sensing. Our research applies fundamental results from optical turbulence theory and aerosol extinction theory combined with recent advances in image-quality-metric (IQM) and image-quality-assessment (IQA) methods. We have developed an algorithm which extracts important parameters used for characterizing atmospheric turbulence and extinction along the propagation channel, such as the refractive-index structure parameter Cn2, the Fried atmospheric coherence width r0, and the atmospheric extinction coefficient βext, from passive image data. We analyze the algorithm's performance using simulations based on modeling with turbulence modulation transfer functions. An experimental field campaign was organized, and data were collected from passive imaging through turbulence of Siemens star resolution targets over several short littoral paths in Point Loma, San Diego, under a variety of turbulence intensities. We present initial results of the algorithm's effectiveness using this field data and compare against measurements taken concurrently with other standard atmospheric characterization equipment. We also discuss some of the challenges encountered with the algorithm, tasks currently in progress, and approaches planned for improving performance in the near future.
NASA Astrophysics Data System (ADS)
Waldmann, Ingo
2016-10-01
Radiative transfer retrievals have become the standard in modelling exoplanetary transmission and emission spectra. Analysing currently available observations of exoplanetary atmospheres often invokes large and correlated parameter spaces that can be difficult to map or constrain. To address these issues, we have developed the Tau-REx (tau-retrieval of exoplanets) retrieval and the RobERt spectral recognition algorithms. Tau-REx is a Bayesian atmospheric retrieval framework using Nested Sampling and cluster computing to fully map these large correlated parameter spaces. Nonetheless, data volumes can become prohibitively large and we must often select a subset of potential molecular/atomic absorbers in an atmosphere. In the era of open-source, automated and self-sufficient retrieval algorithms, such manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is built to address these issues. RobERt is a deep belief network (DBN) trained to accurately recognise molecular signatures for a wide range of planets, atmospheric thermal profiles and compositions. Using these deep neural networks, we work towards retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used in the quantitative stage of the retrieval process. In this talk I will discuss how neural networks and Bayesian Nested Sampling can be used to solve highly degenerate spectral retrieval problems, and what 'dreaming' neural networks can tell us about atmospheric characteristics.
Sigint Application for Polymorphous Computing Architecture (PCA): Wideband DF
2006-08-01
Polymorphous Computing Architecture (PCA) program as stated by Robert Graybill is to Develop the computing foundation for agile systems by establishing...ubiquitous MUSIC algorithm rely upon an underlying narrowband signal model [8]. In this case, narrowband means that the signal bandwidth is less than...a wideband DF algorithm is needed to compensate for this model inadequacy. Among the various wideband DF techniques available, the coherent signal
Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle
2013-01-01
The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory, Entry, Descent, and Landing Instrumentation project. These sensors are a system of seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. Such data allows for the aerodynamics to become decoupled from the assumed atmospheric properties, allowing for enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms that are utilized for this purpose. The data processing algorithms include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5th, 2012.
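The core estimation step, fitting a modeled surface-pressure distribution to the measured port pressures, can be illustrated with a toy modified-Newtonian model. The port layout, the cos² pressure model, and the grid-search fit below are all illustrative assumptions, not the MEADS flight algorithms.

```python
import numpy as np

# Port clock angles along the forebody meridian (hypothetical layout)
PORT_ANGLES = np.radians([-40, -20, -10, 0, 10, 20, 40])

def model_pressures(alpha, q_inf):
    """Toy modified-Newtonian model: p_i = q_inf * cos^2(theta_i - alpha),
    so the pressure distribution shifts with angle of attack alpha and
    scales with dynamic pressure q_inf."""
    return q_inf * np.cos(PORT_ANGLES - alpha) ** 2

def fit_alpha_q(p_meas, alphas=np.radians(np.linspace(-15, 15, 601))):
    """Grid-search the angle of attack; for each candidate, the dynamic
    pressure follows by linear least squares against the pressure shape."""
    best = (None, None, np.inf)
    for a in alphas:
        shape = np.cos(PORT_ANGLES - a) ** 2
        q = float(shape @ p_meas) / float(shape @ shape)
        resid = float(np.sum((p_meas - q * shape) ** 2))
        if resid < best[2]:
            best = (a, q, resid)
    return best[0], best[1]
```

The same separation, nonlinear search over flow angles with the freestream quantities recovered linearly, is a common structure for pressure-based air data systems, which is why the sketch uses it.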
Cassini versus Saturn Illustration
2017-04-04
As depicted in this illustration, Cassini will plunge into Saturn's atmosphere on Sept. 15, 2017. Using its attitude control thrusters, the spacecraft will work to keep its antenna pointed at Earth while it sends its final data, including the composition of Saturn's upper atmosphere. The atmospheric torque will quickly become stronger than what the thrusters can compensate for, and after that point, Cassini will begin to tumble. When this happens, its radio connection to Earth will be severed, ending the mission. Following loss of signal, the spacecraft will burn up like a meteor in Saturn's upper atmosphere. https://photojournal.jpl.nasa.gov/catalog/PIA21440
50 CFR 600.245 - Council member compensation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE MAGNUSON-STEVENS ACT PROVISIONS Council Membership § 600... direct line item on a contractual basis without deductions being made for Social Security or Federal and...
Partial compensation interferometry measurement system for parameter errors of conicoid surface
NASA Astrophysics Data System (ADS)
Hao, Qun; Li, Tengfei; Hu, Yao; Wang, Shaopu; Ning, Yan; Chen, Zhuo
2018-06-01
Surface parameters, such as the vertex radius of curvature and conic constant, describe the shape of an aspheric surface. Surface parameter errors (SPEs) are deviations of these parameters that affect the optical characteristics of an aspheric surface. Precise measurement of SPEs is critical in the evaluation of optical surfaces. In this paper, a partial compensation interferometry measurement system for the SPEs of a conicoid surface is proposed, based on the theory of slope asphericity and the best compensation distance. The system measures the SPE-caused change in best compensation distance and the SPE-caused surface shape change, and then calculates the SPEs with an iterative algorithm for improved accuracy. Experimental results indicate that the average relative measurement accuracy of the proposed system can be better than 0.02% for the vertex radius of curvature error and 2% for the conic constant error.
Adaptive compensation of aberrations in ultrafast 3D microscopy using a deformable mirror
NASA Astrophysics Data System (ADS)
Sherman, Leah R.; Albert, O.; Schmidt, Christoph F.; Vdovin, Gleb V.; Mourou, Gerard A.; Norris, Theodore B.
2000-05-01
3D imaging using a multiphoton scanning confocal microscope is ultimately limited by aberrations of the system. We describe a system that adaptively compensates these aberrations with a deformable mirror. We have increased the transverse scanning range of the microscope by a factor of three through compensation of off-axis aberrations. We have also significantly increased the longitudinal scanning depth by compensating the spherical aberrations introduced by penetration into the sample. Our correction is based on a genetic algorithm that uses the second-harmonic or two-photon fluorescence signal excited in the sample by femtosecond pulses as the enhancement parameter. This allows us to globally optimize the wavefront without a wavefront measurement. To improve the speed of the optimization we use Zernike polynomials as the basis for correction. Corrections can be stored in a database for look-up with future samples.
Simulating large atmospheric phase screens using a woofer-tweeter algorithm.
Buscher, David F
2016-10-03
We describe an algorithm for simulating atmospheric wavefront perturbations over ranges of spatial and temporal scales spanning more than 4 orders of magnitude. An open-source implementation of the algorithm written in Python can simulate the evolution of the perturbations more than an order of magnitude faster than real time. Testing of the implementation using metrics appropriate to adaptive optics systems and long-baseline interferometers shows accuracies at the few-percent level or better.
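As an illustration of the "tweeter" half of such a woofer-tweeter scheme, the sketch below generates a single FFT-based von Kármán phase screen with NumPy. This is not the paper's open-source implementation; the grid sizes, outer scale, and normalization constant are illustrative assumptions, and a separate low-order "woofer" screen would supply the large-scale power the FFT grid cannot represent.

```python
import numpy as np

def fft_phase_screen(n, dx, r0, L0=25.0, seed=0):
    """One FFT-realized von Karman phase screen (radians).
    n: grid size, dx: sample spacing [m], r0: Fried parameter [m],
    L0: outer scale [m]. Normalization is illustrative."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx)
    fX, fY = np.meshgrid(fx, fx)
    f2 = fX**2 + fY**2 + 1.0 / L0**2            # von Karman outer-scale roll-off
    psd = 0.023 * r0**(-5.0 / 3.0) * f2**(-11.0 / 6.0)
    psd[0, 0] = 0.0                             # remove piston (DC) term
    df = 1.0 / (n * dx)                         # frequency-grid spacing
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.fft.ifft2(cn * np.sqrt(psd) * df * n * n).real
```

Because the DC bin is zeroed, the returned screen is piston-free by construction.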
Scientific impact of MODIS C5 calibration degradation and C6+ improvements
NASA Astrophysics Data System (ADS)
Lyapustin, A.; Wang, Y.; Xiong, X.; Meister, G.; Platnick, S.; Levy, R.; Franz, B.; Korkin, S.; Hilker, T.; Tucker, J.; Hall, F.; Sellers, P.; Wu, A.; Angal, A.
2014-12-01
The Collection 6 (C6) MODIS (Moderate Resolution Imaging Spectroradiometer) land and atmosphere data sets are scheduled for release in 2014. C6 contains significant revisions of the calibration approach to account for sensor aging. This analysis documents the presence of systematic temporal trends in the visible and near-infrared (500 m) bands of the Collection 5 (C5) MODIS Terra and, to a lesser extent, in MODIS Aqua geophysical data sets. Sensor degradation is largest in the blue band (B3) of the MODIS sensor on Terra and decreases with wavelength. Calibration degradation causes negative global trends in multiple MODIS C5 products including the dark target algorithm's aerosol optical depth over land and Ångström exponent over the ocean, global liquid water and ice cloud optical thickness, as well as surface reflectance and vegetation indices, including the normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI). As the C5 production will be maintained for another year in parallel with C6, one objective of this paper is to raise awareness of the calibration-related trends for the broad MODIS user community. The new C6 calibration approach removes major calibration trends in the Level 1B (L1B) data. This paper also introduces an enhanced C6+ calibration of the MODIS data set which includes an additional polarization correction (PC) to compensate for the increased polarization sensitivity of MODIS Terra since about 2007, as well as detrending and Terra-Aqua cross-calibration over quasi-stable desert calibration sites. The PC algorithm, developed by the MODIS ocean biology processing group (OBPG), removes residual scan angle, mirror side and seasonal biases from aerosol and surface reflectance (SR) records along with spectral distortions of SR.
Using the multiangle implementation of atmospheric correction (MAIAC) algorithm over deserts, we have also developed a detrending and cross-calibration method which removes residual decadal trends on the order of several tenths of 1% of the top-of-atmosphere (TOA) reflectance in the visible and near-infrared MODIS bands B1-B4, and provides a good consistency between the two MODIS sensors. MAIAC analysis over the southern USA shows that the C6+ approach removed an additional negative decadal trend of Terra ΔNDVI ~ 0.01 as compared to Aqua data. This change is particularly important for analysis of vegetation dynamics and trends in the tropics, e.g., Amazon rainforest, where the morning orbit of Terra provides considerably more cloud-free observations compared to the afternoon Aqua measurements.
Scientific Impact of MODIS C5 Calibration Degradation and C6+ Improvements
NASA Technical Reports Server (NTRS)
Lyapustin, A.; Wang, Y.; Xiong, X.; Meister, G.; Platnick, S.; Levy, R.; Franz, B.; Korkin, S.; Hilker, T.; Tucker, J.;
2014-01-01
The Collection 6 (C6) MODIS (Moderate Resolution Imaging Spectroradiometer) land and atmosphere data sets are scheduled for release in 2014. C6 contains significant revisions of the calibration approach to account for sensor aging. This analysis documents the presence of systematic temporal trends in the visible and near-infrared (500 m) bands of the Collection 5 (C5) MODIS Terra and, to a lesser extent, in MODIS Aqua geophysical data sets. Sensor degradation is largest in the blue band (B3) of the MODIS sensor on Terra and decreases with wavelength. Calibration degradation causes negative global trends in multiple MODIS C5 products including the dark target algorithm's aerosol optical depth over land and Ångström exponent over the ocean, global liquid water and ice cloud optical thickness, as well as surface reflectance and vegetation indices, including the normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI). As the C5 production will be maintained for another year in parallel with C6, one objective of this paper is to raise awareness of the calibration-related trends for the broad MODIS user community. The new C6 calibration approach removes major calibration trends in the Level 1B (L1B) data. This paper also introduces an enhanced C6+ calibration of the MODIS data set which includes an additional polarization correction (PC) to compensate for the increased polarization sensitivity of MODIS Terra since about 2007, as well as detrending and Terra-Aqua cross-calibration over quasi-stable desert calibration sites. The PC algorithm, developed by the MODIS ocean biology processing group (OBPG), removes residual scan angle, mirror side and seasonal biases from aerosol and surface reflectance (SR) records along with spectral distortions of SR.
Using the multiangle implementation of atmospheric correction (MAIAC) algorithm over deserts, we have also developed a detrending and cross-calibration method which removes residual decadal trends on the order of several tenths of 1% of the top-of-atmosphere (TOA) reflectance in the visible and near-infrared MODIS bands B1-B4, and provides a good consistency between the two MODIS sensors. MAIAC analysis over the southern USA shows that the C6+ approach removed an additional negative decadal trend of Terra ΔNDVI ≈ 0.01 as compared to Aqua data. This change is particularly important for analysis of vegetation dynamics and trends in the tropics, e.g., Amazon rainforest, where the morning orbit of Terra provides considerably more cloud-free observations compared to the afternoon Aqua measurements.
NASA Astrophysics Data System (ADS)
Vieira, V. M. N. C. S.; Sahlée, E.; Jurus, P.; Clementi, E.; Pettersson, H.; Mateus, M.
2015-09-01
Earth-system and regional models forecasting climate change and its impacts simulate atmosphere-ocean gas exchanges using classical yet overly simple generalizations that rely on wind speed as the sole mediator, while neglecting factors such as sea-surface agitation, atmospheric stability, current drag with the bottom, rain and surfactants. These factors have been shown to be fundamental for accurate estimates, particularly in the coastal ocean, where a significant part of the atmosphere-ocean greenhouse gas exchange occurs. We include several of these factors in a customizable algorithm proposed as the basis for novel couplers of the atmospheric and oceanographic model components. We tested its performance with measured and simulated data from the European coastal ocean and found that our algorithm forecasts greenhouse gas exchanges largely different from those forecast by the generalization currently in use. Our algorithm allows vectorized calculation and parallel processing, improving computational speed roughly 12× on a single CPU core, an essential feature for Earth-system model applications.
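For context, the wind-speed-only baseline the paper extends can be sketched in a few vectorized lines. This is a minimal sketch assuming the common Wanninkhof (2014) quadratic parameterization and standard unit conversions; it is not the paper's customizable algorithm, which adds stability, agitation, and other mediators on top of such a baseline.

```python
import numpy as np

def k_wanninkhof(u10, sc):
    """Wind-speed-only gas transfer velocity [cm/h]:
    k = 0.251 * U10^2 * (Sc/660)^-0.5 (the 'too simple' generalization)."""
    return 0.251 * np.asarray(u10) ** 2 * (np.asarray(sc) / 660.0) ** -0.5

def co2_flux(k_cmh, k0_mol_l_atm, dpco2_uatm):
    """Air-sea CO2 flux [mol m^-2 d^-1] from transfer velocity, solubility
    K0 [mol/(L atm)] and delta-pCO2 [uatm]; all inputs broadcastable, so the
    computation is fully vectorized over model grids."""
    k_md = k_cmh * 0.01 * 24.0                    # cm/h -> m/day
    return k_md * k0_mol_l_atm * 1000.0 * dpco2_uatm * 1e-6
```

Because both functions accept arrays, an entire coupler grid can be evaluated in one call, which is the vectorization property the abstract highlights.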
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Wick, Gary; Bosilovich, Michael G.
2005-01-01
Remote sensing methodologies for turbulent heat fluxes over oceans depend on driving bulk formulations of fluxes with measured surface winds and estimated near-surface thermodynamics from microwave sensors of the Special Sensor Microwave Imager (SSM/I) heritage. We will review recent work with a number of SSM/I-based algorithms and investigate the ability of current data sets to document global, tropical ocean-averaged evaporation changes in association with El Niño and La Niña SST changes. We show that in addition to interannual signals, latent heat flux increases over the period since late 1987 ranging from approximately 0.1 to 0.6 mm/day are present; these represent trends 2 to 3 times larger than the NCEP Reanalysis. Since atmospheric storage cannot account for the difference, and since compensating evapotranspiration changes over land are highly unlikely to be this large, these evaporation estimates cannot be reconciled with ocean precipitation records such as those produced by the Global Precipitation Climatology Project (GPCP). The reasons for the disagreement include less than adequate intercalibration between SSM/I sensors providing winds and water vapor for driving the algorithms, biases due to the assumption that column-integrated water vapor mirrors near-surface water vapor variations, and other factors as well. The reanalyses have their own problems with spin-up during assimilation, lack of constraining input data at the ocean surface, and amplitude of synoptic transients.
Using random forests to diagnose aviation turbulence.
Williams, John K
Atmospheric turbulence poses a significant hazard to aviation, with severe encounters costing airlines millions of dollars per year in compensation, aircraft damage, and delays due to required post-event inspections and repairs. Moreover, attempts to avoid turbulent airspace cause flight delays and en route deviations that increase air traffic controller workload, disrupt schedules of air crews and passengers and use extra fuel. For these reasons, the Federal Aviation Administration and the National Aeronautics and Space Administration have funded the development of automated turbulence detection, diagnosis and forecasting products. This paper describes a methodology for fusing data from diverse sources and producing a real-time diagnosis of turbulence associated with thunderstorms, a significant cause of weather delays and turbulence encounters that is not well-addressed by current turbulence forecasts. The data fusion algorithm is trained using a retrospective dataset that includes objective turbulence reports from commercial aircraft and collocated predictor data. It is evaluated on an independent test set using several performance metrics including receiver operating characteristic curves, which are used for FAA turbulence product evaluations prior to their deployment. A prototype implementation fuses data from Doppler radar, geostationary satellites, a lightning detection network and a numerical weather prediction model to produce deterministic and probabilistic turbulence assessments suitable for use by air traffic managers, dispatchers and pilots. The algorithm is scheduled to be operationally implemented at the National Weather Service's Aviation Weather Center in 2014.
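The data-fusion workflow described above (train on collocated predictors and objective aircraft reports, evaluate with ROC curves) can be sketched with scikit-learn. Everything here is synthetic and illustrative: the four random features stand in for radar, satellite, lightning, and NWP predictors, and the hidden logistic "truth" stands in for turbulence occurrence; this is not the operational feature set or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 600
# Synthetic 'fused' predictors (stand-ins for radar/satellite/lightning/NWP).
X = rng.standard_normal((n, 4))
p_turb = 1.0 / (1.0 + np.exp(-(1.5 * X[:, 0] + X[:, 1])))  # hidden truth model
y = (rng.random(n) < p_turb).astype(int)                    # 'aircraft reports'

# Train on the first 400 cases, evaluate on an independent test set,
# producing probabilistic diagnoses scored by ROC AUC as in the paper.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:400], y[:400])
prob = rf.predict_proba(X[400:])[:, 1]
auc = roc_auc_score(y[400:], prob)
```

The probabilistic output (`predict_proba`) maps directly onto the deterministic/probabilistic assessments the abstract mentions; thresholding it yields the deterministic product.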
A distributed automatic target recognition system using multiple low resolution sensors
NASA Astrophysics Data System (ADS)
Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj
2008-04-01
In this paper, we propose a multi-agent system which uses swarming techniques to perform high-accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can cooperatively share the information from low-resolution images of different looks and use this information to perform high-accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates the processing capabilities, combines detection reporting with live video exchange, and adds swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bit shifts and additions, yet achieves a 16X pixel resolution enhancement, and is moreover parallelizable. We develop advanced, adaptive particle-filtering-based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address this, using homography decomposition and ground plane surface estimation.
NASA Technical Reports Server (NTRS)
Dave, J. V.
1976-01-01
Two computer algorithms are described. These algorithms were used for computing the azimuth-independent component of the intensity of the monochromatic radiation emerging at the top of a pseudo-spherical atmosphere with an arbitrary vertical distribution of ozone, and with any arbitrary height distribution of up to two different kinds of aerosol. This atmospheric model was assumed to rest on a surface obeying Lambert's law of reflection.
Wind profiling based on the optical beam intensity statistics in a turbulent atmosphere.
Banakh, Victor A; Marakasov, Dimitrii A
2007-10-01
Reconstruction of the wind profile from the statistics of intensity fluctuations of an optical beam propagating in a turbulent atmosphere is considered. The equations for the spatiotemporal correlation function and the spectrum of weak intensity fluctuations of a Gaussian beam are obtained. The algorithms of wind profile retrieval from the spatiotemporal intensity spectrum are described and the results of end-to-end computer experiments on wind profiling based on the developed algorithms are presented. It is shown that the developed algorithms allow retrieval of the wind profile from the turbulent optical beam intensity fluctuations with acceptable accuracy in many practically feasible laser measurements set up in the atmosphere.
NASA Technical Reports Server (NTRS)
Powell, Richard W.
1998-01-01
This paper describes the development and evaluation of a numerical roll reversal predictor-corrector guidance algorithm for the atmospheric flight portion of the Mars Surveyor Program 2001 Orbiter and Lander missions. The Lander mission utilizes direct entry and has a demanding requirement to deploy its parachute within 10 km of the target deployment point. The Orbiter mission utilizes aerocapture to achieve a precise captured orbit with a single atmospheric pass. Detailed descriptions of these predictor-corrector algorithms are given. Also, results of three and six degree-of-freedom Monte Carlo simulations which include navigation, aerodynamics, mass properties and atmospheric density uncertainties are presented.
NASA Technical Reports Server (NTRS)
Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.
1989-01-01
The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates all may be calculated with the simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.
NASA Technical Reports Server (NTRS)
Aires, F.; Chedin, A.; Scott, N. A.; Rossow, W. B.; Hansen, James E. (Technical Monitor)
2001-01-01
In this paper, a fast atmospheric and surface temperature retrieval algorithm is developed for the high-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. This algorithm is constructed on the basis of a neural network technique that has been regularized by the introduction of a priori information. The performance of the resulting fast and accurate inverse radiative transfer model is presented for a large diversified dataset of radiosonde atmospheres including rare events. Two configurations are considered: a tropical-airmass specialized scheme and an all-air-masses scheme.
Investigation of ammonia air-surface exchange processes in a ...
Recent assessments of atmospheric deposition in North America note the increasing importance of reduced (NHx = NH3 + NH4+) forms of nitrogen (N) relative to oxidized forms. This shift in the composition of inorganic nitrogen deposition has both ecological and policy implications. Deposition budgets developed from inferential models applied at the landscape scale, as well as regional and global chemical transport models, indicate that NH3 dry deposition contributes a significant portion of inorganic N deposition in many areas. However, the bidirectional NH3 flux algorithms employed in these models have not been extensively evaluated for North American conditions (e.g., atmospheric chemistry, meteorology, biogeochemistry). Further understanding of the processes controlling NH3 air-surface exchange in natural systems is critically needed. Based on preliminary results from the Southern Appalachian Nitrogen Deposition Study (SANDS), this presentation examines processes of NH3 air-surface exchange in a deciduous montane forest at the Coweeta Hydrologic Laboratory in western North Carolina. A combination of measurements and modeling is used to investigate net fluxes of NH3 above the forest and sources and sinks of NH3 within the canopy and forest floor. Measurements of biogeochemical NH4+ pools are used to characterize the emission potential and NH3 compensation points of canopy foliage (i.e., green vegetation), leaf litter, and soil, and their relation to NH3 fluxes.
NASA Technical Reports Server (NTRS)
Johnson, P. R.; Bardusch, R. E.
1974-01-01
A hydraulic control loading system for aircraft simulation was analyzed to find the causes of undesirable low frequency oscillations and loading effects in the output. The hypothesis of mechanical compliance in the control linkage was substantiated by comparing the behavior of a mathematical model of the system with previously obtained experimental data. A compensation scheme based on the minimum integral of the squared difference between desired and actual output was shown to be effective in reducing the undesirable output effects. The structure of the proposed compensation was computed by use of a dynamic programming algorithm and a linear state space model of the fixed elements in the system.
Hysteresis compensation of piezoelectric deformable mirror based on Prandtl-Ishlinskii model
NASA Astrophysics Data System (ADS)
Ma, Jianqiang; Tian, Lei; Li, Yan; Yang, Zongfeng; Cui, Yuguo; Chu, Jiaru
2018-06-01
Hysteresis of piezoelectric deformable mirror (DM) reduces the closed-loop bandwidth and the open-loop correction accuracy of adaptive optics (AO) systems. In this work, a classical Prandtl-Ishlinskii (PI) model is employed to model the hysteresis behavior of a unimorph DM with 20 actuators. A modified control algorithm combined with the inverse PI model is developed for piezoelectric DMs. With the help of PI model, the hysteresis of the DM was reduced effectively from about 9% to 1%. Furthermore, open-loop regenerations of low-order aberrations with or without hysteresis compensation were carried out. The experimental results demonstrate that the regeneration accuracy with PI model compensation is significantly improved.
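The classical PI model mentioned above is a weighted superposition of play (backlash) operators. The sketch below implements that forward model with NumPy; the thresholds, weights, and 20-actuator context are not taken from the paper, and the inverse-model compensation step (forming the analytical inverse PI model and cascading it before the DM) is omitted for brevity.

```python
import numpy as np

def pi_play(u, r, w, z0=None):
    """Classical Prandtl-Ishlinskii hysteresis model: weighted sum of play
    operators with thresholds r[i] >= 0 and weights w[i], run over the
    input sequence u. Returns the hysteretic output sequence."""
    r, w = np.asarray(r, float), np.asarray(w, float)
    z = np.zeros_like(r) if z0 is None else np.asarray(z0, float).copy()
    out = np.empty(len(u))
    for k, uk in enumerate(u):
        z = np.clip(z, uk - r, uk + r)   # play-operator state update
        out[k] = w @ z                   # weighted superposition
    return out
```

With a single zero-threshold operator the model reduces to the identity; nonzero thresholds open a hysteresis loop, which is what the inverse model is built to cancel.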
NASA Astrophysics Data System (ADS)
Yashvantrai Vyas, Bhargav; Maheshwari, Rudra Prakash; Das, Biswarup
2016-06-01
Application of series compensation in extra high voltage (EHV) transmission lines makes the protection job difficult for engineers, due to alteration in system parameters and measurements. The problem is amplified by the inclusion of electronically controlled compensation such as thyristor controlled series compensation (TCSC), as it produces harmonics and rapid changes in system parameters during faults associated with TCSC control. This paper presents a pattern-recognition-based fault type identification approach using a support vector machine. The scheme uses only half-cycle post-fault data of the three phase currents to accomplish the task. The change in current signal features during a fault has been taken as the discriminatory measure. The developed scheme is tested over a large set of fault data with variation in system and fault parameters. These fault cases have been generated with PSCAD/EMTDC on a 400 kV, 300 km transmission line model. The developed algorithm has proved well suited for implementation on TCSC-compensated lines, with improved accuracy and speed.
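The classification pipeline (half-cycle three-phase current features fed to an SVM) can be sketched as follows. The features, the two synthetic fault classes, and the signal model are illustrative assumptions, not the paper's feature set or its PSCAD/EMTDC data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

def halfcycle_features(i_abc):
    """Simple features from half-cycle three-phase currents: per-phase RMS
    and peak magnitude (a stand-in for the paper's change-in-signal features)."""
    i = np.asarray(i_abc)
    return np.concatenate([np.sqrt((i ** 2).mean(axis=1)), np.abs(i).max(axis=1)])

def make_case(fault):
    """Synthetic half cycle at 50 Hz: class 0 ~ phase-a fault (a elevated),
    class 1 ~ b-c fault (b and c elevated). Illustrative only."""
    t = np.linspace(0.0, 0.01, 100)
    base = np.sin(2 * np.pi * 50 * t + np.array([[0.0], [-2.094], [2.094]]))
    gain = np.array([5.0, 1.0, 1.0]) if fault == 0 else np.array([1.0, 4.0, 4.0])
    return gain[:, None] * base + 0.1 * rng.standard_normal((3, 100))

X = np.array([halfcycle_features(make_case(c)) for c in [0, 1] * 40])
y = np.array([0, 1] * 40)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```

An RBF-kernel SVM with feature standardization is a common default for this kind of small, well-separated feature space.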
NASA Astrophysics Data System (ADS)
Yang, Qi; Deng, Bin; Wang, Hongqiang; Zhang, Ye; Qin, Yuliang
2018-01-01
Imaging, classification, and recognition techniques for ballistic targets in midcourse have always been a focus of research in the radar field for military applications. However, the high-velocity translation of ballistic targets subjects the range profile and Doppler distribution to translation, slope, and folding, effects that are especially severe in the terahertz region. Therefore, a two-step translation compensation method based on envelope alignment is presented. The rough compensation is based on the traditional envelope alignment algorithm of inverse synthetic aperture radar imaging, and the fine compensation is supported by distance fitting. Then, a wideband imaging radar system with a carrier frequency of 0.32 THz is introduced, and an experiment on a precession missile model is carried out. After translation compensation with the proposed method, range profiles and micro-Doppler distributions unaffected by translation are obtained, providing an important foundation for high-resolution imaging and micro-Doppler extraction with terahertz radar.
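The rough step, envelope alignment, can be sketched as aligning each range profile to a reference by the circular lag that maximizes FFT-based cross-correlation of the envelopes. This is a generic ISAR envelope-alignment sketch under the assumption of integer circular shifts; the paper's fine compensation by distance fitting is not included.

```python
import numpy as np

def envelope_align(profiles):
    """Align each complex/real range profile to the first one by the
    circular lag maximizing cross-correlation of the envelopes
    (rough translation compensation; fine fitting would follow)."""
    profiles = np.asarray(profiles)
    ref_spec = np.conj(np.fft.fft(np.abs(profiles[0])))
    aligned = [profiles[0]]
    for p in profiles[1:]:
        xc = np.fft.ifft(np.fft.fft(np.abs(p)) * ref_spec).real
        lag = int(np.argmax(xc))          # shift of p relative to reference
        aligned.append(np.roll(p, -lag))  # undo the estimated shift
    return np.array(aligned)
```

Subpixel refinement (e.g., fitting the estimated shifts against slow time) would then remove the residual drift that integer lags cannot capture.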
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The primary goal is to present a computer-aided compensator design technique for control systems from a frequency-domain point of view. The thesis behind this technique is to describe the open-loop frequency response by n discrete frequency points, which yield n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; mathematical programming is then used to improve all of the functions whose values fall below minimum standards. To this end, several definitions for measuring the performance of a system in the frequency domain are given. Next, theorems are proved which govern the number of compensator coefficients necessary to make improvements in a given number of functions. After this, a mathematical programming tool for aiding in the solution of the problem is developed. Then, to apply the constraint improvement algorithm, generalized gradients for the constraints are derived. Finally, the necessary theory is incorporated in a computer program called CIP (compensator improvement program).
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor and ozone atmospheric profile retrieval algorithm is developed for the high spectral resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using the first-guess information is developed to retrieve simultaneously the temperature, water vapor and ozone atmospheric profiles. The performance of the resulting fast and accurate inverse model is evaluated with a large diversified data set of radiosonde atmospheres including rare events.
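The PCA compression/de-noising step can be sketched as a truncated SVD of the mean-removed spectra matrix. This is a generic sketch of the technique, not the IASI processing chain; the number of retained components k is an assumption.

```python
import numpy as np

def pca_compress(spectra, k):
    """Compress and de-noise a (n_obs x n_channels) matrix of spectra by
    projecting onto the leading k principal components."""
    mu = spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(spectra - mu, full_matrices=False)
    coeffs = U[:, :k] * s[:k]          # compressed representation (scores)
    recon = coeffs @ Vt[:k] + mu       # de-noised reconstruction
    return coeffs, recon
```

The low-dimensional scores are what a pattern-recognition first-guess search or a neural network would consume, in place of the full channel vector.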
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Wang, Menghua
1992-01-01
The first step in the Coastal Zone Color Scanner (CZCS) atmospheric-correction algorithm is the computation of the Rayleigh-scattering (RS) contribution, L sub r, to the radiance leaving the top of the atmosphere over the ocean. In the present algorithm, L sub r is computed by assuming that the ocean surface is flat. Calculations of the radiance leaving an RS atmosphere overlying a rough Fresnel-reflecting ocean are presented to evaluate the radiance error caused by the flat-ocean assumption. Simulations are carried out to evaluate the error incurred when the CZCS-type algorithm is applied to a realistic ocean in which the surface is roughened by the wind. In situations where there is no direct sun glitter, it is concluded that the error induced by ignoring the Rayleigh-aerosol interaction is usually larger than that caused by ignoring the surface roughness. This suggests that, in refining algorithms for future sensors, more effort should be focused on dealing with the Rayleigh-aerosol interaction than on the roughness of the sea surface.
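A single-scattering, flat-ocean sketch of the Rayleigh radiance computation is given below, combining the direct path with sky- and sun-reflected paths weighted by unpolarized Fresnel reflectance. The sign conventions and the omission of multiple scattering, polarization, and ozone transmittance are simplifying assumptions of this sketch, not the operational CZCS code.

```python
import numpy as np

def rayleigh_phase(cos_theta):
    """Rayleigh scattering phase function (unpolarized)."""
    return 0.75 * (1.0 + cos_theta ** 2)

def fresnel_rho(theta, n=1.34):
    """Unpolarized Fresnel reflectance of a flat sea surface (index n);
    theta in radians, theta > 0."""
    thp = np.arcsin(np.sin(theta) / n)                 # refracted angle (Snell)
    rs = np.sin(theta - thp) / np.sin(theta + thp)
    rp = np.tan(theta - thp) / np.tan(theta + thp)
    return 0.5 * (rs ** 2 + rp ** 2)

def rayleigh_radiance(f0, tau_r, th_s, th_v, dphi):
    """Single-scattering Rayleigh radiance over a *flat* ocean (the
    assumption the paper evaluates), normalized by extraterrestrial
    irradiance f0 and Rayleigh optical thickness tau_r."""
    cm = -np.cos(th_s) * np.cos(th_v) - np.sin(th_s) * np.sin(th_v) * np.cos(dphi)
    cp = +np.cos(th_s) * np.cos(th_v) - np.sin(th_s) * np.sin(th_v) * np.cos(dphi)
    p = rayleigh_phase(cm) + (fresnel_rho(th_v) + fresnel_rho(th_s)) * rayleigh_phase(cp)
    return f0 * tau_r * p / (4.0 * np.pi * np.cos(th_v))
```

Replacing the Fresnel terms with a wind-roughened surface reflectance is exactly the refinement whose importance the paper weighs against the Rayleigh-aerosol interaction.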
NASA Astrophysics Data System (ADS)
Lipton, A.; Moncet, J. L.; Payne, V.; Lynch, R.; Polonsky, I. N.
2017-12-01
We will present recent results from an algorithm for producing climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. Developments to be presented include the impact of a radiance-based pre-classification method for the atmospheric background. In addition to improving retrieval performance, pre-classification has the potential to reduce the sensitivity of the retrievals to the climatological data from which the background estimate and its error covariance are derived. We will also discuss evaluation of a method for mitigating the effect of clouds on the radiances, and enhancements of the radiative transfer forward model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waldmann, I. P., E-mail: ingo@star.ucl.ac.uk
Here, we introduce the RobERt (Robotic Exoplanet Recognition) algorithm for the classification of exoplanetary emission spectra. Spectral retrieval of exoplanetary atmospheres frequently requires the preselection of molecular/atomic opacities to be defined by the user. In the era of open-source, automated, and self-sufficient retrieval algorithms, manual input should be avoided. User-dependent input could, in worst-case scenarios, lead to incomplete models and biases in the retrieval. The RobERt algorithm is based on deep-belief neural (DBN) networks trained to accurately recognize molecular signatures for a wide range of planets, atmospheric thermal profiles, and compositions. Reconstructions of the learned features, also referred to as the "dreams" of the network, indicate good convergence and an accurate representation of molecular features in the DBN. Using these deep neural networks, we work toward retrieval algorithms that themselves understand the nature of the observed spectra, are able to learn from current and past data, and make sensible qualitative preselections of atmospheric opacities to be used for the quantitative stage of the retrieval process.
Development and Evaluation of Algorithms for Breath Alcohol Screening.
Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael
2016-04-01
Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper is focusing on algorithms for the determination of breath alcohol concentration in diluted breath samples using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed by using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone.
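The core dilution-compensation idea (scale each alcohol reading by the ratio of assumed alveolar CO2 to measured CO2, then average) can be sketched as follows. The alveolar CO2 constant and the weighting scheme are illustrative assumptions; the paper's personalization features are not modeled.

```python
import numpy as np

ALVEOLAR_CO2 = 4.8  # vol-% end-expiratory CO2, an assumed reference value

def brac_estimate(alc_ppm, co2_pct, weights=None):
    """Dilution-compensated breath alcohol estimate: each sample's alcohol
    reading is scaled by ALVEOLAR_CO2 / measured CO2 (undoing the dilution),
    then combined by (optionally weighted) averaging."""
    alc = np.asarray(alc_ppm, float)
    co2 = np.asarray(co2_pct, float)
    comp = alc * (ALVEOLAR_CO2 / co2)       # per-sample dilution correction
    return np.average(comp, weights=weights)
```

Weighting samples by, e.g., CO2 level or sensor noise is one way the weighted variant can reduce random error relative to plain averaging.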
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied in onboard vision systems of aircraft, including small and unmanned aircraft.
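The first two parts (geometric compensation, then detection) can be sketched for the simplest case of pure integer camera translation: estimate the global shift by exhaustive block search, compensate it, and threshold the frame difference. Real systems use richer transformation models (affine/homography) and subpixel estimation; this sketch is illustrative only.

```python
import numpy as np

def global_shift(prev, cur, max_shift=8):
    """Estimate integer global translation between frames by exhaustive
    search minimizing the sum of absolute differences over a central block."""
    h, w = prev.shape
    best, best_sad = (0, 0), np.inf
    c = cur[max_shift:h - max_shift, max_shift:w - max_shift]
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            p = prev[max_shift + dy:h - max_shift + dy,
                     max_shift + dx:w - max_shift + dx]
            sad = np.abs(c - p).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

def moving_mask(prev, cur, thresh=20):
    """Detect movers: compensate the estimated camera shift on the previous
    frame, then threshold the absolute frame difference."""
    dy, dx = global_shift(prev, cur)
    comp = np.roll(np.roll(prev, -dy, axis=0), -dx, axis=1)
    return np.abs(cur.astype(int) - comp.astype(int)) > thresh
```

With the camera motion compensated, any pixels that still differ strongly between frames belong to independently moving objects, which a tracker can then follow.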
System design of the annular suspension and pointing system /ASPS/
NASA Technical Reports Server (NTRS)
Cunningham, D. C.; Gismondi, T. P.; Wilson, G. W.
1978-01-01
This paper presents the control system design for the Annular Suspension and Pointing System. Actuator sizing and configuration of the system are explained, and the control laws developed for linearizing and compensating the magnetic bearings, roll induction motor and gimbal torquers are given. Decoupling, feedforward and error compensation for the vernier and gimbal controllers are developed. The algorithm for computing the strapdown attitude reference is derived, and the allowable sampling rates, time delays and quantization of control signals are specified.
Adaptive Control of Four-Leg VSC Based DSTATCOM in Distribution System
NASA Astrophysics Data System (ADS)
Singh, Bhim; Arya, Sabha Raj
2014-01-01
This work discusses the experimental performance of a four-leg Distribution Static Compensator (DSTATCOM) using an adaptive-filter-based approach. The approach is used to estimate the reference supply currents by extracting the fundamental active power components of the three-phase distorted load currents. This control algorithm is implemented on an assembled DSTATCOM for harmonics elimination, neutral current compensation and load balancing under nonlinear loads. Experimental results are discussed, and it is observed that the DSTATCOM is an effective solution, performing satisfactorily under load dynamics.
Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.
Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron
2017-09-01
During image-guided prostate biopsy, needles are targeted at tissues suspicious of cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when an MR-transrectal ultrasound (TRUS) fusion approach is used to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system for use in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would allow registrations to be performed quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance in both modes of clinical implementation, i.e., user-initiated and continuous motion compensation, on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively.
The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding registration errors of 0.4 ± 0.3 mm, 0.2 ± 0.4 mm, and 0.8 ± 0.5°. The continuous method performed registrations significantly faster (P < 0.05) than the user-initiated method, with observed computation times of 35 ± 8 ms, 43 ± 16 ms, and 27 ± 5 ms for in-plane, out-of-plane, and roll motions, respectively, and corresponding registration errors of 0.2 ± 0.3 mm, 0.7 ± 0.4 mm, and 0.8 ± 1.0°. The presented method encourages real-time implementation of motion compensation algorithms in prostate biopsy with clinically acceptable registration errors. Continuous motion compensation demonstrated submillimeter and subdegree registration accuracy with computation times < 50 ms. An image registration technique approaching the frame rate of an ultrasound system offers the key advantage of smooth integration into the clinical workflow. In addition, this technique could be used for a variety of image-guided interventional procedures to treat and diagnose patients by improving targeting accuracy. © 2017 American Association of Physicists in Medicine.
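The intensity-based registration described above pairs normalized cross-correlation with Powell's method. A minimal sketch of that combination on a synthetic 2D translation (not the authors' implementation; the image, shift values, and tolerances are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
fixed = gaussian_filter(rng.random((64, 64)), 2.0)   # smooth synthetic frame
true_shift = (1.8, -2.4)                             # simulated patient motion (pixels)
moving = nd_shift(fixed, true_shift, order=3, mode='nearest')

def cost(t):
    # negative NCC between the fixed frame and the motion-corrected moving frame
    corrected = nd_shift(moving, (-t[0], -t[1]), order=3, mode='nearest')
    return -ncc(fixed, corrected)

res = minimize(cost, x0=[0.0, 0.0], method='Powell')  # res.x approximates true_shift
```

Powell's method needs no gradients, which suits the interpolated, non-analytic NCC surface; downsampling (as in the paper) would simply shrink the arrays before the same optimization.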
Water Quality Monitoring for Lake Constance with a Physically Based Algorithm for MERIS Data.
Odermatt, Daniel; Heege, Thomas; Nieke, Jens; Kneubühler, Mathias; Itten, Klaus
2008-08-05
A physically based algorithm is used for automatic processing of MERIS level 1B full-resolution data. The algorithm was originally designed to be tuned for different sensors (i.e. channel recalibration and weighting), aquatic regions (i.e. specific inherent optical properties) and atmospheric conditions (i.e. aerosol models). For operational use, however, a lake-specific parameterization is required that approximates the spatio-temporal variation in atmospheric and hydro-optical conditions and accounts for sensor properties. The algorithm performs atmospheric correction with a LUT for at-sensor radiance, followed by a downhill simplex inversion of chl-a, sm and y from subsurface irradiance reflectance. These outputs are enhanced by a selective filter that makes use of the retrieval residuals. Regular chl-a sampling measurements by the lake's protection authority coinciding with MERIS acquisitions were used for parameterization, training and validation.
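The inversion step above fits water constituents to subsurface reflectance with a downhill simplex. A toy sketch of the idea, using an invented two-band bio-optical model (the coefficients and the `forward` model are illustrative assumptions, not the actual MERIS parameterization) and SciPy's Nelder-Mead simplex:

```python
import numpy as np
from scipy.optimize import minimize

# invented two-band model: R = bb / (a + bb), with absorption a = a_w + chl*a_chl
# and backscatter bb = bb_w + sm*bb_sm (coefficients are illustrative)
a_w = np.array([0.04, 0.30])
a_chl = np.array([0.05, 0.01])
bb_w = np.array([0.002, 0.001])
bb_sm = np.array([0.01, 0.008])

def forward(chl, sm):
    a = a_w + chl * a_chl
    bb = bb_w + sm * bb_sm
    return bb / (a + bb)

observed = forward(3.0, 1.5)          # synthetic "measured" reflectance

def residual(p):
    chl, sm = p
    # scaled so the solver's default tolerances resolve the minimum sharply
    return 1e6 * np.sum((forward(chl, sm) - observed) ** 2)

fit = minimize(residual, x0=[1.0, 1.0], method='Nelder-Mead')  # fit.x ≈ (3.0, 1.5)
```

The real algorithm inverts chl-a, sm and y against spectral irradiance reflectance; the structure (forward model plus simplex minimization of the residual) is the same.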
NASA Astrophysics Data System (ADS)
Pantazis, Alexandros; Papayannis, Alexandros; Georgoussis, Georgios
2018-04-01
In this paper we present novel algorithms and techniques developed within the Laser Remote Sensing Laboratory (LRSL) of the National Technical University of Athens (NTUA), in collaboration with Raymetrics S.A., for incorporation into a 3-Dimensional (3D) lidar. The lidar transmits at 355 nm in the eye-safe region, and the measurements are then transposed to the visual range at 550 nm, according to the World Meteorological Organization (WMO) and International Civil Aviation Organization (ICAO) rules of daytime visibility. These algorithms provide horizontal, slant and vertical visibility for tower aircraft controllers and meteorologists, as well as from the pilot's point of view. Further algorithms are provided for the detection of atmospheric layering in any given direction and vertical angle, along with detection of the Planetary Boundary Layer Height (PBLH).
Banakh, V A; Marakasov, D A
2007-08-01
Reconstruction of a wind profile based on the statistics of plane-wave intensity fluctuations in a turbulent atmosphere is considered. The algorithm for wind profile retrieval from the spatiotemporal spectrum of plane-wave weak intensity fluctuations is described, and the results of end-to-end computer experiments on wind profiling based on the developed algorithm are presented. It is shown that the reconstructing algorithm allows retrieval of a wind profile from turbulent plane-wave intensity fluctuations with acceptable accuracy.
NASA Astrophysics Data System (ADS)
Gambacorta, A.; Nalli, N. R.; Tan, C.; Iturbide-Sanchez, F.; Wilson, M.; Zhang, K.; Xiong, X.; Barnet, C. D.; Sun, B.; Zhou, L.; Wheeler, A.; Reale, A.; Goldberg, M.
2017-12-01
The NOAA Unique Combined Atmospheric Processing System (NUCAPS) is the NOAA operational algorithm to retrieve thermodynamic and composition variables from hyperspectral thermal sounders such as CrIS, IASI and AIRS. The combined use of microwave sounders, such as ATMS, AMSU and MHS, enables full sounding of the atmospheric column under all-sky conditions. NUCAPS retrieval products are accessible in near real time (about 1.5 hours delay) through the NOAA Comprehensive Large Array-data Stewardship System (CLASS). Since February 2015, NUCAPS retrievals have also been accessible via Direct Broadcast, with an unprecedented low latency of less than 0.5 hours. NUCAPS builds on a long-term, multi-agency investment in algorithm research and development. The uniqueness of this algorithm consists in a number of features that are key to providing highly accurate and stable atmospheric retrievals, suitable for real-time weather and air quality applications. First, maximizing the use of the information content present in hyperspectral thermal measurements forms the foundation of the NUCAPS retrieval algorithm. Second, NUCAPS has a modular, namelist-driven design: it can process multiple hyperspectral infrared sounders (on Aqua, NPP, MetOp and the JPSS series) by means of the same retrieval software executable and underlying spectroscopy. Finally, a cloud-clearing algorithm and a synergistic use of microwave radiance measurements enable full vertical sounding of the atmosphere under all-sky regimes. As we transition toward improved hyperspectral missions, assessing retrieval skill and consistency across multiple platforms becomes a priority for real-time user applications. The focus of this presentation is a general introduction to the recent improvements in the delivery of the NUCAPS full-spectral-resolution upgrade and an overview of the lessons learned from the 2017 Hazardous Weather Testbed Spring Experiment.
Test cases will be shown on the use of NPP and MetOp NUCAPS under pre-convective, capping inversion and dry layer intrusion events.
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.
2006-01-01
Retrieving surface longwave radiation from space has been a difficult task since the surface downwelling longwave radiation (SDLW) is an integration of radiation emitted by the entire atmosphere, while radiation emitted from the upper atmosphere is absorbed before reaching the surface. Retrieval is particularly problematic when thick clouds are present, since thick clouds virtually block all longwave radiation from above, while satellites observe atmospheric emission mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with the surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite its simplicity, their algorithm performed very well for most geographical regions except those where atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas covered with ice clouds. An improved version of the algorithm was developed that prevents large errors in the SDLW at low water vapor amounts. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths measured by the Clouds and the Earth's Radiant Energy System (CERES) satellites to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.
Wavelet methods in multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Helin, T.; Yudytskiy, M.
2013-08-01
Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitations of atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on the locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by computing the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.
NASA Technical Reports Server (NTRS)
Emmitt, G. D.; Wood, S. A.; Morris, M.
1990-01-01
Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.
Disk-integrated reflection light curves of planets
NASA Astrophysics Data System (ADS)
Garcia Munoz, A.
2014-03-01
The light scattered by a planet's atmosphere contains valuable information on the planet's composition and aerosol content. Typically, the interpretation of that information requires elaborate radiative transport models accounting for the absorption and scattering processes undergone by the star's photons on their passage through the atmosphere. I have been working on a particular family of algorithms based on Backward Monte Carlo (BMC) integration for solving the multiple-scattering problem in atmospheric media. BMC algorithms statistically simulate the photon trajectories in the reverse order to that in which they actually occur, i.e. they trace photons from the detector through the atmospheric medium and onward to the illumination source, following probability laws dictated by the medium's optical properties. BMC algorithms are versatile: they can handle diverse viewing and illumination geometries, and can readily accommodate various physical phenomena. As will be shown, BMC algorithms are very well suited to predicting magnitudes integrated over a planet's disk (whether uniform or not). Disk-integrated magnitudes are relevant in the current context of exploration of extrasolar planets because spatial resolution of these objects will not be technologically feasible in the near future. I have been working on various predictions of the disk-integrated properties of planets that demonstrate the capabilities of the BMC algorithm. These cases include the variability of the Earth's integrated signal caused by diurnal and seasonal changes in surface reflectance and cloudiness, or by sporadic injection of large amounts of volcanic particles into the atmosphere. Since the implemented BMC algorithm includes a polarization mode, these examples also serve to illustrate the potential of polarimetry in the characterization of both Solar System and extrasolar planets.
The work is complemented with the analysis of disk-integrated photometric observations of Earth and Venus drawn from various sources.
NASA Technical Reports Server (NTRS)
Gordon, H. R.
1981-01-01
To estimate the concentration of phytoplankton pigments in the oceans on the basis of Nimbus-7 CZCS imagery, it is necessary to remove the effects of the intervening atmosphere from the satellite imagery. The principal effect of the atmosphere is a loss in contrast caused by the addition of a substantial amount of radiance (path radiance) to that scattered out of the water. Gordon (1978) has developed a technique which shows considerable promise for removal of these atmospheric effects. Attention is given to the correction algorithm and its application to CZCS imagery. An alternate method under study for effecting the atmospheric correction requires a knowledge of 'clear water' subsurface upwelled radiance as a function of solar angle and pigment concentration.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2011-01-01
The Goddard DISC has generated products derived from AIRS/AMSU-A observations, starting from September 2002 when the AIRS instrument became stable, using the AIRS Science Team Version-5 retrieval algorithm. The AIRS Science Team Version-6 retrieval algorithm will be finalized in September 2011. This paper describes some of the significant improvements contained in the Version-6 retrieval algorithm, compared to that used in Version-5, with an emphasis on the improvement of atmospheric temperature profiles, ocean and land surface skin temperatures, and ocean and land surface spectral emissivities. AIRS contains 2378 spectral channels covering portions of the spectral region 650 cm(sup -1) (15.38 micrometers) - 2665 cm(sup -1) (3.752 micrometers). These spectral regions contain significant absorption features from two CO2 absorption bands, the 15 micrometers (longwave) CO2 band and the 4.3 micrometers (shortwave) CO2 absorption band. There are also two atmospheric window regions, the 12 micrometer - 8 micrometer (longwave) window and the 4.17 micrometer - 3.75 micrometer (shortwave) window. Historically, determination of surface and atmospheric temperatures from satellite observations was performed primarily using observations in the longwave window and CO2 absorption regions. According to cloud-clearing theory, more accurate soundings of both surface skin and atmospheric temperatures can be obtained under partial cloud cover if one uses observations in longwave channels to determine coefficients which generate cloud-cleared radiances R(sup ^)(sub i) for all channels, and uses R(sup ^)(sub i) only from shortwave channels in the determination of surface and atmospheric temperatures. This procedure is now being used in the AIRS Version-6 retrieval algorithm. Results are presented for both daytime and nighttime conditions showing improved Version-6 surface and atmospheric soundings under partial cloud cover.
NASA Astrophysics Data System (ADS)
Liu, Lei; Guo, Rui; Wu, Jun-an
2017-02-01
Crosstalk is a main factor in wrong distance measurements by ultrasonic sensors, and the problem becomes more difficult to deal with under Doppler effects. In this paper, crosstalk reduction with Doppler shifts on small platforms is addressed, and a fast echo matching algorithm (FEMA) is proposed on the basis of chaotic sequences and pulse coding technology, then verified by applying it to match practical echoes. Finally, we discuss how to select both better mapping methods for chaotic sequences and algorithm parameters for a higher achievable maximum of the cross-correlation peaks. The results indicate the following: logistic mapping is preferred for generating good chaotic sequences, with high autocorrelation even when the length is very limited; FEMA can not only match echoes and calculate distance accurately, with an error degree mostly below 5%, but also incurs nearly the same computational cost for static or kinematic ranging, much lower than that of direct Doppler compensation (DDC) with the same frequency compensation step; the sensitivity to threshold selection and the performance of FEMA depend significantly on the achievable maximum of the cross-correlation peaks, and a higher peak is preferred, which can be taken as a criterion for algorithm parameter optimization under practical conditions.
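The logistic-map coding and cross-correlation peak detection mentioned above can be sketched as follows; the code length, delay, and noise level are illustrative, not values from the paper:

```python
import numpy as np

def logistic_code(x0, n, mu=4.0):
    """Binary excitation code from the chaotic logistic map x <- mu x (1 - x)."""
    x = x0
    bits = np.empty(n)
    for i in range(n):
        x = mu * x * (1.0 - x)
        bits[i] = 1.0 if x >= 0.5 else -1.0
    return bits

code = logistic_code(0.37, 256)

# crude echo model: an attenuated, delayed copy of the code buried in noise
rng = np.random.default_rng(3)
echo = np.zeros(1024)
delay = 300
echo[delay:delay + code.size] = 0.5 * code
echo += 0.2 * rng.standard_normal(echo.size)

# the chaotic code's sharp autocorrelation makes the true delay stand out
xcorr = np.correlate(echo, code, mode='valid')
est_delay = int(np.argmax(xcorr))
```

Because each sensor can be assigned a distinct chaotic code with low cross-correlation, the peak identifies a sensor's own echo and rejects crosstalk; Doppler handling (the FEMA part) would additionally search over compressed/stretched versions of the code.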
NASA Astrophysics Data System (ADS)
Mahto, Tarkeshwar; Mukherjee, V.
2016-09-01
In the present work, a two-area thermal-hybrid interconnected power system is considered, consisting of a thermal unit in one area and a hybrid wind-diesel unit in the other. Capacitive energy storage (CES) and CES with a static synchronous series compensator (SSSC) are connected to the studied two-area model to compensate for varying load demand, intermittent output power and area frequency oscillation. A novel quasi-opposition harmony search (QOHS) algorithm is proposed and applied to tune the tunable parameters of the studied power system model. The simulation study reveals that inclusion of a CES unit in both areas yields excellent damping of frequency and tie-line power deviations. The simulation results further reveal that inclusion of the SSSC is not viable from either a technical or an economic point of view, as no considerable improvement in transient performance is noted with its inclusion in the tie-line of the studied model. The results presented in this paper demonstrate the potential of the proposed QOHS algorithm and show its effectiveness and robustness for solving frequency and power drift problems of the studied power systems. A binary-coded genetic algorithm is used for comparison.
Veligdan, James T.
1993-01-01
Atmospheric effects on sighting measurements are compensated for by adjusting any sighting measurements using a correction factor that does not depend on atmospheric state conditions such as temperature, pressure, density or turbulence. The correction factor is accurately determined using a precisely measured physical separation between two color components of a light beam (or beams) that has been generated using either a two-color laser or two lasers that project different colored beams. The physical separation is precisely measured by fixing the position of a short beam pulse and measuring the physical separation between the two fixed-in-position components of the beam. This precisely measured physical separation is then used in a relationship that includes the indexes of refraction for each of the two colors of the laser beam in the atmosphere through which the beam is projected, thereby determining the absolute displacement of one wavelength component of the laser beam from a straight line of sight for that projected component of the beam. This absolute displacement is useful to correct optical measurements, such as those developed in surveying measurements made in a test area that includes the same dispersion effects of the atmosphere on the optical measurements. The means and method of the invention are suitable for use with either single-ended or double-ended systems.
CNES-NASA Studies of the Mars Sample Return Orbiter Aerocapture Phase
NASA Technical Reports Server (NTRS)
Fraysse, H.; Powell, R.; Rousseau, S.; Striepe, S.
2000-01-01
A Mars Sample Return (MSR) mission has been proposed as a joint CNES (Centre National d'Etudes Spatiales) and NASA effort in the ongoing Mars Exploration Program. The MSR mission is designed to return the first samples of Martian soil to Earth. The primary elements of the mission are a lander, rover, ascent vehicle, orbiter, and an Earth entry vehicle. The Orbiter has been allocated only 2700 kg on the launch phase to perform its part of the mission. This mass restriction has led to the decision to use an aerocapture maneuver at Mars for the orbiter. Aerocapture replaces the initial propulsive capture maneuver with a single atmospheric pass. This atmospheric pass will result in the proper apoapsis, but a periapsis raise maneuver is required at the first apoapsis. The use of aerocapture reduces the total mass requirement by approx. 45% for the same payload. This mission will be the first to use the aerocapture technique. Because the spacecraft is flying through the atmosphere, guidance algorithms must be developed that will autonomously provide the proper commands to reach the desired orbit while not violating any of the design parameters (e.g. maximum deceleration, maximum heating rate, etc.). The guidance algorithm must be robust enough to account for uncertainties in delivery states, atmospheric conditions, mass properties, control system performance, and aerodynamics. To study this very critical phase of the mission, a joint CNES-NASA technical working group has been formed. This group is composed of atmospheric trajectory specialists from CNES, NASA Langley Research Center and NASA Johnson Space Center. This working group is tasked with developing and testing guidance algorithms, as well as cross-validating CNES and NASA flight simulators for the Mars atmospheric entry phase of this mission. The final result will be a recommendation to CNES on the algorithm to use, and an evaluation of the flight risks associated with the algorithm. 
This paper will describe the aerocapture phase of the MSR mission, the main principles of the guidance algorithms that are under development, the atmospheric entry simulators developed for the evaluations, the process for the evaluations, and preliminary results from the evaluations.
Shen, Gang; Zhu, Zhencai; Zhao, Jinsong; Zhu, Weidong; Tang, Yu; Li, Xiang
2017-03-01
This paper focuses on an application of an electro-hydraulic force tracking controller that combines an offline designed feedback controller (ODFC) with an online adaptive compensator in order to improve the force tracking performance of an electro-hydraulic force servo system (EHFS). A proportional-integral controller has been employed, and a parameter-based force closed-loop transfer function of the EHFS is identified by a continuous system identification algorithm. Taking the identified system model as the nominal plant model, an H ∞ offline design method is employed to establish an optimized feedback controller with consideration of the performance, control effort, and robustness of the EHFS. To overcome the disadvantages of the offline designed controller and cope with the varying dynamics of the EHFS, an online adaptive compensator with a normalized least-mean-square algorithm is cascaded to the force closed-loop system of the EHFS compensated by the ODFC. Comparative experiments carried out on a real-time EHFS using xPC rapid prototyping technology show that the proposed controller yields better force tracking performance. Copyright © 2016. Published by Elsevier Ltd.
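A minimal sketch of the normalized least-mean-square (NLMS) adaptation named above, here identifying a toy FIR plant rather than the EHFS dynamics (the filter length, step size, and plant are illustrative assumptions):

```python
import numpy as np

def nlms(x, d, taps=8, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt weights w so that w . x_window tracks the desired d."""
    w = np.zeros(taps)
    y = np.zeros_like(d)
    for n in range(taps, len(x)):
        xw = x[n - taps + 1:n + 1][::-1]      # most recent samples first
        y[n] = w @ xw
        e = d[n] - y[n]                       # tracking error
        w += mu * e * xw / (eps + xw @ xw)    # step normalized by input power
    return y, w

rng = np.random.default_rng(7)
x = rng.standard_normal(4000)                 # excitation signal
h_true = np.array([0.6, -0.3, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0])
d = np.convolve(x, h_true)[:len(x)]           # "unknown plant" output to be tracked
y, w = nlms(x, d)                             # w converges toward h_true
```

The normalization by `xw @ xw` is what makes the step size robust to input power changes, which is why NLMS suits a plant with varying dynamics; in the paper the compensator runs in cascade with the H∞-designed ODFC rather than alone.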
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-06-01
Statistical SPECT reconstruction can be very time-consuming, especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. An ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between the GPU and CPU implementations was minor. At its best, the GPU implementation was 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size, 3-bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstruction shows great promise as an everyday clinical reconstruction tool.
A holistic calibration method with iterative distortion compensation for stereo deflectometry
NASA Astrophysics Data System (ADS)
Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian
2018-07-01
This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error to the system due to inaccurate imaging models and distortion elimination. The proposed calibration method compensates system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from fringe patterns displayed on the system's LCD screen through reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of stereo deflectometry. The PV (peak-to-valley) measurement error for a flat mirror is reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.
NASA Astrophysics Data System (ADS)
Wu, Peilin; Zhang, Qunying; Fei, Chunjiao; Fang, Guangyou
2017-04-01
Aeromagnetic gradients are typically measured by optically pumped magnetometers mounted on an aircraft. Any aircraft, particularly a helicopter, produces significant levels of magnetic interference. Therefore, aeromagnetic compensation is essential, and least squares (LS) is the conventional method used for reducing interference levels. However, the LS approach to solving the aeromagnetic interference model has a few difficulties, one of which is handling multicollinearity. We therefore propose an aeromagnetic gradient compensation method, specifically targeted at helicopter use but applicable to any airborne platform, based on the ɛ-support vector regression algorithm. The structural risk minimization criterion intrinsic to the method avoids multicollinearity altogether. Local aeromagnetic anomalies can be retained while platform-generated fields are suppressed simultaneously by constructing an appropriate loss function and kernel function. The method was tested using an unmanned helicopter and achieved improvement ratios of 12.7 and 3.5 in the vertical and horizontal gradient data, respectively, values that likely exceed what the conventional method would have achieved on the same data, had a suitable comparison been possible. The validity of the proposed method is demonstrated by the experimental results.
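The compensation idea can be sketched with scikit-learn's ε-SVR on invented data: the measured field is regressed on attitude-like platform features (including a nearly collinear pair, the situation that troubles least squares), and the fitted platform interference is subtracted while the slowly varying field of interest survives as the residual. All names, coefficients, and signal scales here are illustrative assumptions, not the paper's model or data:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(11)
n = 600
f = rng.standard_normal((n, 3))                      # attitude/current proxies
# add a near-duplicate of the first feature to mimic multicollinearity
feats = np.column_stack([f, f[:, 0] + 1e-3 * rng.standard_normal(n)])

interference = 40 * f[:, 0] - 25 * f[:, 1] + 10 * f[:, 2]   # platform field
geo = 50000 + 5 * np.sin(np.linspace(0, 20, n))             # field to preserve
measured = geo + interference + 0.5 * rng.standard_normal(n)

# epsilon-SVR regresses the (centered) measurement on platform features; the
# structural-risk penalty keeps the collinear columns from blowing up the fit
svr = SVR(kernel='linear', C=1000.0, epsilon=0.1)
svr.fit(feats, measured - measured.mean())
compensated = measured - svr.predict(feats)          # residual retains geo signal
```

The slowly varying `geo` term is uncorrelated with the features, so the SVR absorbs only the platform interference; with an LS fit the near-collinear columns would yield unstable coefficients, whereas the ε-insensitive, regularized fit stays well behaved.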
Genetic algorithm optimized triply compensated pulses in NMR spectroscopy
NASA Astrophysics Data System (ADS)
Manu, V. S.; Veglia, Gianluigi
2015-11-01
Sensitivity and resolution in NMR experiments are affected by magnetic field inhomogeneities (both external and RF), errors in pulse calibration, and offset effects due to the finite length of RF pulses. To remedy these problems, built-in compensation mechanisms for these experimental imperfections are often necessary. Here, we propose a new family of phase-modulated, constant-amplitude broadband pulses with high compensation for RF inhomogeneity and heteronuclear coupling evolution. These pulses were optimized using a genetic algorithm (GA), a global optimization method inspired by natural evolutionary processes. The newly designed π and π/2 pulses belong to the 'type A' (or general rotor) symmetric composite pulses. These GA-optimized pulses are relatively short compared to other general rotors and can be used for excitation and inversion, as well as for refocusing in spin-echo experiments. The performance of the GA-optimized pulses was assessed in Magic Angle Spinning (MAS) solid-state NMR experiments using a crystalline U-13C,15N NAVL peptide as well as U-13C,15N microcrystalline ubiquitin. GA optimization of NMR pulse sequences opens a window for improving current experiments and designing new robust pulse sequences.
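As a sketch of how a GA can search for compensated pulses, the toy below evolves the phases of a 90-180-90 composite inversion pulse to maximize fidelity averaged over RF-amplitude errors; the GA settings, error model, and fixed flip angles are illustrative assumptions, not the authors' optimization:

```python
import numpy as np

# Pauli matrices; an RF pulse of flip angle theta about an axis with phase phi
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pulse(theta, phi):
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

THETAS = np.array([np.pi / 2, np.pi, np.pi / 2])   # fixed 90-180-90 amplitudes
SCALES = np.linspace(0.7, 1.3, 13)                 # +/-30% RF inhomogeneity

def fitness(phases):
    """Inversion fidelity |<1|U|0>|^2 averaged over RF-amplitude errors."""
    total = 0.0
    for s in SCALES:
        U = I2
        for th, ph in zip(THETAS, phases):
            U = pulse(s * th, ph) @ U
        total += abs(U[1, 0]) ** 2
    return total / len(SCALES)

rng = np.random.default_rng(2)
pop = rng.uniform(0, 2 * np.pi, (40, 3))           # population of phase triples
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]   # elitist truncation selection
    pa = parents[rng.integers(0, 20, 20)]
    pb = parents[rng.integers(0, 20, 20)]
    mask = rng.random((20, 3)) < 0.5               # uniform crossover
    kids = np.where(mask, pa, pb) + rng.normal(0, 0.2, (20, 3))  # mutation
    pop = np.vstack([parents, kids])
best = max(pop, key=fitness)
```

The evolved phase triple should outperform a plain π pulse, whose inversion fidelity degrades quadratically with RF miscalibration; the real work optimizes longer phase-modulated sequences against both RF inhomogeneity and coupling evolution.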
Iterative motion compensation approach for ultrasonic thermal imaging
NASA Astrophysics Data System (ADS)
Fleming, Ioana; Hager, Gregory; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad
2015-03-01
As thermal imaging attempts to estimate very small tissue motion (on the order of tens of microns), it can be negatively influenced by signal decorrelation. A patient's breathing and cardiac cycle generate shifts in the RF signal patterns. Other sources of movement can be found outside the patient's body, such as transducer slippage or small vibrations due to environmental factors like electronic noise. Here, we build upon a robust displacement estimation method for ultrasound elastography and investigate an iterative motion compensation algorithm that can detect and remove non-heat-induced tissue motion at every step of the ablation procedure. The validation experiments are performed on laboratory-induced ablation lesions in ex vivo tissue. The ultrasound probe is either held by the operator's hand or supported by a robotic arm. We demonstrate the ability to detect and remove non-heat-induced tissue motion in both settings, and show that removing extraneous motion helps unmask the effects of heating. Our strain estimation curves closely mirror the temperature changes within the tissue. While previous results in the area of motion compensation were reported for experiments lasting less than 10 seconds, our algorithm was tested on experiments lasting close to 20 minutes.
Improvement and implementation for Canny edge detection algorithm
NASA Astrophysics Data System (ADS)
Yang, Tao; Qiu, Yue-hong
2015-07-01
Edge detection is necessary for image segmentation and pattern recognition. In this paper, an improved Canny edge detection approach is proposed to address the shortcomings of the traditional algorithm. A modified bilateral filter with a compensation function based on pixel-intensity similarity judgment is used to smooth the image instead of a Gaussian filter, which preserves edge features while removing noise effectively. To reduce sensitivity to noise in the gradient calculation, the algorithm uses gradient templates in four directions. Finally, the Otsu algorithm adaptively obtains the dual thresholds. The algorithm was implemented with the OpenCV 2.4.0 library in Visual Studio 2010, and experimental analysis shows that the improved algorithm detects edge details more effectively and with greater adaptability.
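The adaptive dual-threshold step can be sketched as follows. The code computes Otsu's threshold by maximizing between-class variance over a gray-level histogram and derives the low Canny threshold as half the high one, a common heuristic; the paper's exact dual-threshold rule may differ.

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance (Otsu)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # background pixel count so far
    sum0 = 0.0  # background intensity sum so far
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                            # background mean
        m1 = (total_sum - sum0) / (total - w0)    # foreground mean
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Dual thresholds for Canny hysteresis: high = Otsu, low = high / 2
# (an illustrative heuristic, not necessarily the paper's rule).
pixels = [10] * 500 + [200] * 500   # synthetic bimodal "image"
high = otsu_threshold(pixels)
low = high // 2
```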
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
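The scaling of bit-error rate with scintillation can be illustrated numerically. The sketch below averages an on-off-keying BER over a unit-mean log-normal irradiance distribution, a standard weak-turbulence model; the BER convention and SNR scaling are one common textbook choice, not necessarily the formulation used in the paper.

```python
import math

def ber_ook(snr, irradiance=1.0):
    # One common OOK convention: BER = 0.5 * erfc(SNR * I / (2 * sqrt(2))).
    return 0.5 * math.erfc(snr * irradiance / (2.0 * math.sqrt(2.0)))

def mean_ber_lognormal(snr, sigma2, n=2000):
    """Average BER over a log-normal irradiance PDF with unit mean.

    ln I ~ Normal(-sigma2/2, sigma2), so E[I] = 1 (mean power conserved).
    Plain trapezoidal quadrature in u = ln I over +/- 6 sigma.
    """
    sigma = math.sqrt(sigma2)
    mu = -sigma2 / 2.0
    lo, hi = mu - 6 * sigma, mu + 6 * sigma
    du = (hi - lo) / n
    acc = 0.0
    for k in range(n + 1):
        u = lo + k * du
        pdf = math.exp(-(u - mu) ** 2 / (2 * sigma2)) / (sigma * math.sqrt(2 * math.pi))
        w = 0.5 if k in (0, n) else 1.0
        acc += w * ber_ook(snr, math.exp(u)) * pdf
    return acc * du
```

Since the erfc-based BER is convex in the irradiance, Jensen's inequality guarantees that scintillation raises the average BER above its no-turbulence value, which is exactly the penalty that adaptive optics compensation recovers.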
Intelligent Optical Systems Using Adaptive Optics
NASA Technical Reports Server (NTRS)
Clark, Natalie
2012-01-01
Until recently, the phrase adaptive optics generally conjured images of large deformable mirrors being integrated into telescopes to compensate for atmospheric turbulence. However, the development of smaller, cheaper devices has sparked interest for other aerospace and commercial applications. Variable focal length lenses, liquid crystal spatial light modulators, tunable filters, phase compensators, polarization compensators, and deformable mirrors are becoming increasingly useful for other imaging applications including guidance navigation and control (GNC), coronagraphs, foveated imaging, situational awareness, autonomous rendezvous and docking, non-mechanical zoom, phase diversity, and enhanced multi-spectral imaging. The active components presented here allow flexibility in the optical design, increasing performance. In addition, the intelligent optical systems presented offer advantages in size, weight, and radiation tolerance.
Inan, Omer T; Baran Pouyan, Maziyar; Javaid, Abdul Q; Dowling, Sean; Etemadi, Mozziyar; Dorier, Alexis; Heller, J Alex; Bicen, A Ozan; Roy, Shuvo; De Marco, Teresa; Klein, Liviu
2018-01-01
Remote monitoring of patients with heart failure (HF) using wearable devices can allow patient-specific adjustments to treatments and thereby potentially reduce hospitalizations. We aimed to assess HF state using wearable measurements of electrical and mechanical aspects of cardiac function in the context of exercise. Patients with compensated (outpatient) and decompensated (hospitalized) HF were fitted with a wearable ECG and seismocardiogram sensing patch. Patients stood at rest for an initial recording, performed a 6-minute walk test, and then stood at rest for 5 minutes of recovery. The protocol was performed at the time of outpatient visit or at 2 time points (admission and discharge) during an HF hospitalization. To assess patient state, we devised a method based on comparing the similarity of the structure of seismocardiogram signals after exercise compared with rest using graph mining (graph similarity score). We found that the graph similarity score can assess HF patient state and correlates with clinical improvement in 45 patients (13 decompensated, 32 compensated). A significant difference was found between the groups in the graph similarity score metric (44.4±4.9 [decompensated HF] versus 35.2±10.5 [compensated HF]; P <0.001). In the 6 decompensated patients with longitudinal data, we found a significant change in graph similarity score from admission (decompensated) to discharge (compensated; 44±4.1 [admitted] versus 35±3.9 [discharged]; P <0.05). Wearable technologies recording cardiac function and machine learning algorithms can assess compensated and decompensated HF states by analyzing cardiac response to submaximal exercise. These techniques can be tested in the future to track the clinical status of outpatients with HF and their response to pharmacological interventions. © 2018 American Heart Association, Inc.
A spherical aberration-free microscopy system for live brain imaging.
Ue, Yoshihiro; Monai, Hiromu; Higuchi, Kaori; Nishiwaki, Daisuke; Tajima, Tetsuya; Okazaki, Kenya; Hama, Hiroshi; Hirase, Hajime; Miyawaki, Atsushi
2018-06-02
The high-resolution in vivo imaging of mouse brain for quantitative analysis of fine structures, such as dendritic spines, requires objectives with high numerical apertures (NAs) and long working distances (WDs). However, this imaging approach is often hampered by spherical aberration (SA) that results from the mismatch of refractive indices in the optical path and becomes more severe with increasing depth of the target from the brain surface. Although a revolving objective correction collar has been designed to compensate for SA, its adjustment requires manual operation and is inevitably accompanied by considerable focal shift, making it difficult to acquire the best image of a given fluorescent object. To solve these problems, we have created an objective-attached device and formulated a fast iterative algorithm for the realization of an automatic SA compensation system. The device coordinates the collar rotation and the Z-position of an objective, enabling correction collar adjustment while stably focusing on a target. The algorithm provides the best adjustment on the basis of the calculated contrast of acquired images. Together, they enable the system to compensate for SA at a given depth. As proof of concept, we applied the SA compensation system to in vivo two-photon imaging with a 25 × water-immersion objective (NA, 1.05; WD, 2 mm). It effectively reduced SA regardless of location, allowing quantitative and reproducible analysis of fine structures of YFP-labeled neurons in the mouse cerebral cortical layers. Interestingly, although the cortical structure was optically heterogeneous along the z-axis, the refractive index of each layer could be assessed on the basis of the compensation degree. It was also possible to make fully corrected three-dimensional reconstructions of YFP-labeled neurons in live brain samples.
Our SA compensation system, called Deep-C, is expected to bring out the best in all correction-collar-equipped objectives for imaging deep regions of heterogeneous tissues. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the RFC (refractivity from clutter) technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic parameter-adaptation operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor is used to control the dynamic parameter-adaptation process, and the crossover operation of the genetic algorithm is utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and superior overall performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm provides an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
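For orientation, a minimal standard cuckoo search with Mantegna Lévy flights is sketched below on a toy objective. The paper's DACS-CO additionally integrates dynamic parameter adaptation (Rechenberg's 1/5 criterion with a learning factor) and genetic crossover, which are not reproduced here; the sphere function stands in for the radar-clutter misfit.

```python
import math
import random

random.seed(1)

def sphere(x):
    # Stand-in objective; the paper minimizes the misfit between observed
    # and simulated radar clutter, which is not reproduced here.
    return sum(v * v for v in x)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-stable step length.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(dim=3, n_nests=20, iters=300, pa=0.25, alpha=0.01):
    nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=sphere)
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight around the current nest, scaled by its
            # distance from the best nest; greedy acceptance.
            new = [nests[i][d] + alpha * levy_step() * (nests[i][d] - best[d])
                   for d in range(dim)]
            if sphere(new) < sphere(nests[i]):
                nests[i] = new
        # Abandon a fraction pa of the worst nests (randomly rebuilt).
        nests.sort(key=sphere)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(nests + [best], key=sphere)
    return best

best = cuckoo_search()
```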
Correcting Satellite Image Derived Surface Model for Atmospheric Effects
NASA Technical Reports Server (NTRS)
Emery, William; Baldwin, Daniel
1998-01-01
This project was a continuation of the project entitled "Resolution Earth Surface Features from Repeat Moderate Resolution Satellite Imagery". In the previous study, a Bayesian Maximum Posterior Estimate (BMPE) algorithm was used to obtain a composite series of repeat imagery from the Advanced Very High Resolution Radiometer (AVHRR). The spatial resolution of the resulting composite was significantly greater than the 1 km resolution of the individual AVHRR images. The BMPE algorithm utilized a simple, no-atmosphere geometrical model for the short-wave radiation budget at the Earth's surface. A necessary assumption of the algorithm is that all non-geometrical parameters remain static over the compositing period. This assumption is of course violated by temporal variations in both the surface albedo and the atmospheric medium. The effect of the albedo variations is expected to be minimal, since the variations are on a fairly long time scale compared to the compositing period; however, the atmospheric variability occurs on a relatively short time scale and can be expected to cause significant errors in the surface reconstruction. The current project proposed to incorporate an atmospheric correction into the BMPE algorithm for the purpose of investigating the effects of a variable atmosphere on the surface reconstructions. Once the atmospheric effects were determined, the investigation could be extended to include corrections for various cloud effects, including short-wave radiation through thin cirrus clouds. The original proposal was written for a three-year project, funded one year at a time. The first year of the project focused on developing an understanding of atmospheric corrections and choosing an appropriate correction model. Several models were considered and the list was narrowed to the two best suited. These were the 5S and 6S shortwave radiation models developed at NASA/Goddard and tested extensively with data from the AVHRR instrument.
Although the 6S model was a successor to the 5S and slightly more advanced, the 5S was selected because outputs from the individual components comprising the short-wave radiation budget were more easily separated. The separation was necessary since neither the 5S nor the 6S included geometrical corrections for terrain, a fundamental constituent of the BMPE algorithm. The 5S correction code was incorporated into the BMPE algorithm and many sensitivity studies were performed.
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Conboy, B. (Technical Monitor)
1999-01-01
Significant accomplishments made during the present reporting period include: 1) Installed the spectral optimization algorithm in the SeaDAS image processing environment and successfully processed SeaWiFS imagery. The results were superior to the standard SeaWiFS algorithm (the MODIS prototype) in a turbid atmosphere off the US East Coast, but similar in a clear (typical) oceanic atmosphere; 2) Inverted ACE-2 LIDAR measurements coupled with sun-photometer-derived aerosol optical thickness to obtain the vertical profile of aerosol optical thickness. The profile was validated with simultaneous aircraft measurements; and 3) Obtained LIDAR and CIMEL measurements of typical maritime and mineral dust-dominated marine atmospheres in the U.S. Virgin Islands. Contemporaneous SeaWiFS imagery was also acquired.
Meteorological correction of optical beam refraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lukin, V.P.; Melamud, A.E.; Mironov, V.L.
1986-02-01
At the present time laser reference systems (LRSs) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRSs constantly increase, so that a study of error sources and means of considering and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm obtained is compared to regression equations obtained by processing an experimental data base. It is shown that within admissible accuracy limits the refraction correction algorithm permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.
NASA Technical Reports Server (NTRS)
Spratlin, Kenneth Milton
1987-01-01
An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles that utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired, so as to reduce the need for algorithm redesign with each new vehicle; adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented, along with the achievable operational footprint for expected worst-case dispersions. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
Temperature Effects and Compensation-Control Methods
Xia, Dunzhu; Chen, Shuling; Wang, Shourong; Li, Hongsheng
2009-01-01
In the analysis of the effects of temperature on the performance of microgyroscopes, it is found that the resonant frequency of the microgyroscope decreases linearly as the temperature increases, and the quality factor changes drastically at low temperatures. Moreover, the zero bias changes greatly with temperature variations. To reduce the temperature effects on the microgyroscope, temperature compensation-control methods are proposed. First, a BP (Back Propagation) neural network and polynomial fitting are utilized for building the temperature model of the microgyroscope. Considering simplicity and real-time requirements, piecewise polynomial fitting is applied in the temperature compensation system. Then, an integral-separated PID (Proportion Integration Differentiation) control algorithm is adopted in the temperature control system, which stabilizes the temperature inside the microgyroscope so that it achieves its optimal performance. Experimental results reveal that the combination of microgyroscope temperature compensation and control methods is both realizable and effective in a miniaturized microgyroscope prototype. PMID:22408509
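The integral-separated PID idea can be sketched as follows: the integral term is frozen while the error is large, which avoids windup during large temperature transients. The gains, separation threshold, and toy first-order thermal plant below are illustrative choices, not values from the paper.

```python
def make_integral_separated_pid(kp, ki, kd, threshold):
    """Integral-separated PID: the integral only accumulates when
    |error| < threshold, preventing windup during large transients."""
    state = {"integral": 0.0, "prev_err": 0.0}

    def step(setpoint, measurement, dt):
        err = setpoint - measurement
        if abs(err) < threshold:          # integral separation
            state["integral"] += err * dt
        deriv = (err - state["prev_err"]) / dt
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv

    return step

# Drive a toy first-order thermal plant toward 40 degrees C.
pid = make_integral_separated_pid(kp=2.0, ki=0.5, kd=0.1, threshold=5.0)
temp = 20.0
for _ in range(500):
    u = pid(40.0, temp, dt=0.1)
    temp += 0.1 * (u - 0.2 * (temp - 20.0))  # heater input minus heat loss
```

The proportional term alone would settle with a steady-state offset; the integral term, enabled only once the error is inside the separation band, removes that offset without having accumulated during the initial 20-degree transient.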
Self-compensating tensiometer and method
Hubbell, Joel M.; Sisson, James B.
2003-01-01
A pressure self-compensating tensiometer and method to in situ determine below grade soil moisture potential of earthen soil independent of changes in the volume of water contained within the tensiometer chamber, comprising a body having first and second ends, a porous material defining the first body end, a liquid within the body, a transducer housing submerged in the liquid such that a transducer sensor within the housing is kept below the working fluid level in the tensiometer and in fluid contact with the liquid and the ambient atmosphere.
Sodano, M J
1991-01-01
The author describes an innovative "work unit compensation" system that acts as an adjunct to existing personnel payment structures. The process, developed as a win-win alternative for both employees and their institution, includes a reward system for the entire department and ensures a team atmosphere. The Community Medical Center in Toms River, New Jersey, developed the plan, which sets four basic goals: to be fair, economical, lasting and transferable (FELT). The plan has proven to be a useful tool in retention and recruitment of qualified personnel.
NASA Technical Reports Server (NTRS)
Korkin, S.; Lyapustin, A.
2012-01-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimizing a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves the optical parameters of a thin (single-scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]; in our case it yields analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines and the ratio of coarse and fine fractions, the atmospheric optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request.
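A minimal Levenberg-Marquardt sketch in the spirit of the abstract, here for a two-parameter exponential model so that the damped normal equations can be solved by Cramer's rule, as the authors do for their analogous 5x5 system. The model, data, and damping schedule are illustrative.

```python
import math

def levenberg_marquardt(f, jac, xs, ys, p0, iters=50, lam=1e-3):
    """Minimal LM for a 2-parameter model f(x, a, b) with analytic Jacobian."""
    a, b = p0
    def sse(a, b):
        return sum((y - f(x, a, b)) ** 2 for x, y in zip(xs, ys))
    for _ in range(iters):
        # Build the normal equations J^T J d = J^T r at (a, b).
        h11 = h12 = h22 = g1 = g2 = 0.0
        for x, y in zip(xs, ys):
            r = y - f(x, a, b)
            j1, j2 = jac(x, a, b)
            h11 += j1 * j1; h12 += j1 * j2; h22 += j2 * j2
            g1 += j1 * r;   g2 += j2 * r
        # Damped 2x2 system (H + lam*diag(H)) d = g via Cramer's rule.
        a11, a22 = h11 * (1 + lam), h22 * (1 + lam)
        det = a11 * a22 - h12 * h12
        if det == 0:
            break
        d1 = (g1 * a22 - g2 * h12) / det
        d2 = (a11 * g2 - h12 * g1) / det
        if sse(a + d1, b + d2) < sse(a, b):
            a, b = a + d1, b + d2
            lam *= 0.5    # accept: lean toward Gauss-Newton
        else:
            lam *= 10.0   # reject: lean toward gradient descent
    return a, b

# Recover (amplitude, decay) of y = a*exp(-b*x) from noise-free samples.
f = lambda x, a, b: a * math.exp(-b * x)
jac = lambda x, a, b: (math.exp(-b * x), -a * x * math.exp(-b * x))
xs = [0.1 * i for i in range(30)]
ys = [f(x, 2.0, 1.3) for x in xs]
a_fit, b_fit = levenberg_marquardt(f, jac, xs, ys, p0=(1.0, 0.5))
```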
Evaluation of atmospheric correction algorithms for processing SeaWiFS data
NASA Astrophysics Data System (ADS)
Ransibrahmanakul, Varis; Stumpf, Richard; Ramachandran, Sathyadev; Hughes, Kent
2005-08-01
To enable the production of the best chlorophyll products from SeaWiFS data, NOAA (CoastWatch and NOS) evaluated the various atmospheric correction algorithms by comparing the satellite-derived water reflectance for each algorithm with in situ data. Gordon and Wang (1994) introduced a method to correct for Rayleigh and aerosol scattering in the atmosphere so that water reflectance may be derived from the radiance measured at the top of the atmosphere. However, since the correction assumed near-infrared scattering from the water to be negligible, an assumption that is invalid in coastal waters, the method overestimates the atmospheric contribution and consequently underestimates water reflectance for the lower wavelength bands on extrapolation. Several improved methods to estimate the near-infrared correction exist: Siegel et al. (2000); Ruddick et al. (2000); Stumpf et al. (2002); and Stumpf et al. (2003), where an absorbing aerosol correction is also applied along with an additional 1.01% calibration adjustment for the 412 nm band. The evaluation shows that the near-infrared correction developed by Stumpf et al. (2003) results in an overall minimum error for U.S. waters. As of July 2004, NASA (SeaDAS) has selected this as the default method for the atmospheric correction used to produce chlorophyll products.
Implementation of a rapid correction algorithm for adaptive optics using a plenoptic sensor
NASA Astrophysics Data System (ADS)
Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.
2016-09-01
Adaptive optics relies on the accuracy and speed of a wavefront sensor in order to provide quick corrections to distortions in the optical system. In the weaker atmospheric turbulence often encountered in astronomical applications, a traditional Shack-Hartmann sensor has proved to be very effective. However, in the stronger atmospheric turbulence often encountered near the surface of the Earth, turbulence no longer causes only small tilts in the wavefront. Instead, lasers passing through strong or "deep" atmospheric turbulence encounter beam breakup, which results in interference effects and discontinuities in the incoming wavefront. In these situations, a Shack-Hartmann sensor can no longer effectively determine the shape of the incoming wavefront. We propose a wavefront reconstruction and correction algorithm based on the plenoptic sensor, whose design allows it to match and exceed the wavefront sensing capabilities of a Shack-Hartmann sensor for our application. Novel wavefront reconstruction algorithms can take advantage of the plenoptic sensor to provide the rapid wavefront reconstruction necessary for real-time turbulence correction. To test the integrity of the plenoptic sensor and its reconstruction algorithms, we use artificially generated turbulence in a lab-scale environment to simulate the structure and speed of outdoor atmospheric turbulence. By analyzing the performance of our system with and without the closed-loop plenoptic-sensor adaptive optics system, we show that the plenoptic sensor is effective in mitigating real-time lab-generated atmospheric turbulence.
Analysis and design of a high power laser adaptive phased array transmitter
NASA Technical Reports Server (NTRS)
Mevers, G. E.; Soohoo, J. F.; Winocur, J.; Massie, N. A.; Southwell, W. H.; Brandewie, R. A.; Hayes, C. L.
1977-01-01
The feasibility of delivering substantial quantities of optical power to a satellite in low Earth orbit from a ground-based high energy laser (HEL) coupled to an adaptive antenna was investigated. Diffraction effects, atmospheric transmission efficiency, adaptive compensation for atmospheric turbulence effects, including the servo bandwidth requirements for this correction, and adaptive compensation for thermal blooming were examined. To evaluate possible HEL sources, atmospheric investigations were performed for the CO2, (C-12)(O-18)2 isotope, CO and DF wavelengths using output antenna locations at both sea level and mountain top. Results indicate that excellent atmospheric and adaptation efficiency can be obtained for mountain-top operation with a (C-12)(O-18)2 isotope laser operating at 9.1 um, or a CO laser operating single line (P10) at about 5.0 um, which was a close second in the evaluation. Four adaptive power transmitter system concepts were generated and evaluated, based on overall system efficiency, reliability, size and weight, advanced technology requirements, and potential cost. A multiple-source phased array was selected for detailed conceptual design. The system uses a unique adaptation technique of phase-locking independent laser oscillators, which allows it to be both relatively inexpensive and highly reliable, with a predicted overall power transfer efficiency of 53%.
NASA Astrophysics Data System (ADS)
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm's input data are the concentrations of the released substance, arriving on-line from a distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probabilistic distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the source starting position (x,y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts), and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
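The ABC idea can be sketched with plain rejection sampling: draw source parameters from the prior, run the forward model, and keep only draws whose simulated sensor concentrations match the observations within a tolerance. The forward model below is a toy 1-D decay law, not SCIPUFF, and only two of the paper's seven parameters (release rate and position) are estimated.

```python
import math
import random

random.seed(2)

def forward(q, x_src, sensors):
    # Toy 1-D "dispersion" model: concentration decays exponentially
    # with distance from the source (NOT the SCIPUFF forward model).
    return [q * math.exp(-abs(s - x_src)) for s in sensors]

def distance(sim, obs):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sim, obs)))

def abc_rejection(obs, sensors, n_samples=20000, eps=0.3):
    """Rejection ABC: keep prior draws whose simulated concentrations
    fall within eps of the observations."""
    accepted = []
    for _ in range(n_samples):
        q = random.uniform(0.1, 5.0)    # prior on release rate
        x = random.uniform(-5.0, 5.0)   # prior on source position
        if distance(forward(q, x, sensors), obs) < eps:
            accepted.append((q, x))
    return accepted

sensors = [-2.0, -1.0, 0.0, 1.0, 2.0]
obs = forward(2.0, 0.5, sensors)        # synthetic "measurements"
posterior = abc_rejection(obs, sensors)
q_mean = sum(q for q, _ in posterior) / len(posterior)
x_mean = sum(x for _, x in posterior) / len(posterior)
```

The sequential variant in the paper replaces the fixed prior with the previous posterior as each new batch of concentrations arrives, shrinking the tolerance over time.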
A satellite AOT derived from the ground sky transmittance measurements
NASA Astrophysics Data System (ADS)
Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Tan, K. C.; Wong, C. J.; Saleh, N. Mohd.
2008-10-01
The optical properties of aerosols such as smoke from burning vary due to aging processes, and these particles reach larger sizes at high concentrations. The objectives of this study are to develop and evaluate an algorithm for estimating atmospheric optical thickness (AOT) from Landsat TM images. This study measured the sky transmittance at the ground using a handheld spectroradiometer over a wide wavelength spectrum to retrieve atmospheric optical thickness. The in situ measurements of atmospheric transmittance were collected simultaneously with the acquisition of the remotely sensed satellite data. The digital numbers for the three visible bands corresponding to the in situ locations were extracted and then converted into reflectance values. The reflectance measured from the satellite was reduced by the surface reflectance to obtain the atmospheric reflectance. These atmospheric reflectance values were used for calibration of the AOT algorithm. This study developed an empirical method to estimate the AOT values from the sky transmittance values. Finally, an AOT map was generated using the proposed algorithm and colour-coded for visual interpretation.
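The satellite-side arithmetic can be sketched as follows: convert a digital number to top-of-atmosphere reflectance with the standard formula rho = pi * L * d^2 / (ESUN * cos(theta_s)), then subtract the ground-measured surface reflectance. The calibration coefficients, DN, and surface value below are illustrative, not the study's data.

```python
import math

def dn_to_toa_reflectance(dn, gain, bias, esun, d, theta_s_deg):
    """Convert a Landsat TM digital number to top-of-atmosphere reflectance.

    gain/bias: band radiometric calibration coefficients (illustrative here),
    esun: band mean solar exoatmospheric irradiance [W/m^2/um],
    d: Earth-Sun distance in astronomical units,
    theta_s_deg: solar zenith angle in degrees.
    """
    radiance = gain * dn + bias
    return (math.pi * radiance * d ** 2) / (esun * math.cos(math.radians(theta_s_deg)))

rho_toa = dn_to_toa_reflectance(dn=87, gain=0.762824, bias=-1.52,
                                esun=1983.0, d=1.0, theta_s_deg=30.0)
# Atmospheric reflectance = TOA reflectance minus surface reflectance,
# as in the study's calibration of the AOT algorithm.
rho_surface = 0.05   # stand-in for the in situ surface measurement
rho_atm = rho_toa - rho_surface
```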
Motion Estimation and Compensation Strategies in Dynamic Computerized Tomography
NASA Astrophysics Data System (ADS)
Hahn, Bernadette N.
2017-12-01
A main challenge in computerized tomography consists in imaging moving objects. Temporal changes during the measuring process lead to inconsistent data sets, and applying standard reconstruction techniques causes motion artefacts which can severely impair a reliable diagnosis. Therefore, novel reconstruction techniques are required which compensate for the dynamic behavior. This article builds on recent results from a microlocal analysis of the dynamic setting, which enable us to formulate efficient analytic motion compensation algorithms for contour extraction. Since these methods require information about the dynamic behavior, we further introduce a motion estimation approach which determines parameters of affine and certain non-affine deformations directly from measured motion-corrupted Radon data. Our methods are illustrated with numerical examples for both types of motion.
A dimension reduction method for flood compensation operation of multi-reservoir system
NASA Astrophysics Data System (ADS)
Jia, B.; Wu, S.; Fan, Z.
2017-12-01
Cooperative compensation operations of multiple reservoirs coping with uncontrolled floods play a vital role in real-time flood mitigation. This paper proposes a reservoir flood compensation operation index (ResFCOI), formed from elements of flood control storage, flood inflow volume, flood transmission time, and cooperative operations period. It then establishes a cooperative flood compensation operations model of a multi-reservoir system, uses the ResFCOI to determine the computational order of the reservoirs, and finally applies the differential evolution algorithm to compute each reservoir's flood compensation optimization in turn, so that a dimension-reduction method is formed to reduce computational complexity. The Shiguan River Basin, with two large reservoirs and an extensive uncontrolled flood area, is used as a case study. Results show that (a) the reservoirs' flood discharges and the uncontrolled flood are superimposed at Jiangjiaji Station, while the resulting flood peak flow is kept as small as possible; (b) cooperative compensation operations slightly increase the usage of flood storage capacity in reservoirs, compared to rule-based operations; and (c) computing a cooperative compensation operations scheme takes 50 seconds on average. The dimension-reduction method for guiding flood compensation operations of a multi-reservoir system allows each reservoir to adjust its flood discharge strategy dynamically according to the magnitude and pattern of the uncontrolled flood, so as to mitigate downstream flood disasters.
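A minimal DE/rand/1/bin sketch of the kind applied to each reservoir in turn under the dimension-reduction scheme. The objective below is a toy release-schedule surrogate (minimize peak outflow while releasing a fixed volume via a soft penalty), not the paper's flood-routing model; population size, F, and CR are illustrative.

```python
import random

random.seed(3)

def differential_evolution(objective, bounds, pop_size=30, f=0.8, cr=0.9, iters=200):
    """Standard DE/rand/1/bin with bound clipping and greedy selection."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(dim)
            trial = []
            for d in range(dim):
                if random.random() < cr or d == j_rand:
                    v = a[d] + f * (b[d] - c[d])   # differential mutation
                    lo, hi = bounds[d]
                    v = min(max(v, lo), hi)        # clip to bounds
                else:
                    v = pop[i][d]                  # crossover keeps parent gene
                trial.append(v)
            if objective(trial) <= objective(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=objective)

# Toy stand-in for one reservoir's release schedule over 5 periods:
# minimize the peak release while discharging a fixed total volume.
def peak_with_penalty(releases, volume=10.0):
    return max(releases) + abs(sum(releases) - volume)

best = differential_evolution(peak_with_penalty, bounds=[(0.0, 5.0)] * 5)
```

The analytic optimum of this surrogate is an equal split (2.0 per period, objective value 2.0), which makes it a convenient smoke test for the optimizer.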
Testing trivializing maps in the Hybrid Monte Carlo algorithm
Engel, Georg P.; Schaefer, Stefan
2011-01-01
We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CPN−1 model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied. PMID:21969733
Development of a Sunspot Tracking System
NASA Technical Reports Server (NTRS)
Taylor, Jaime R.
1998-01-01
Large solar flares produce a significant amount of energetic particles which pose a hazard for human activity in space. In the hope of understanding flare mechanisms and thus better predicting solar flares, NASA's Marshall Space Flight Center (MSFC) developed an experimental vector magnetograph (EXVM) polarimeter to measure the Sun's magnetic field. The EXVM will be used to perform ground-based solar observations and will provide a proof of concept for the design of a similar instrument for the Japanese Solar-B space mission. The EXVM typically operates for a period of several minutes, during which there is image motion due to atmospheric fluctuation and telescope wind loading. To optimize EXVM performance, an image motion compensation device (sunspot tracker) is needed. The sunspot tracker consists of two parts, an image motion determination system and an image deflection system. For image motion determination, a CCD or CID camera is used to digitize an image, then an algorithm is applied to determine the motion. This motion or error signal is sent to the image deflection system, which moves the image back to its original location. Both of these systems are under development. Two algorithms are available for sunspot tracking which require the use of only one row and one column of image data. To implement these algorithms, two identical independent systems are being developed, one for each axis of motion. Two CID cameras have been purchased; the data from each camera will be used to determine image motion for each direction. The error signal generated by the tracking algorithm will be sent to an image deflection system consisting of an actuator and a mirror constrained to move about one axis. Magnetostrictive actuators were chosen over piezoelectric actuators to move the mirror because of their larger driving force and larger range of motion. The actuator and mirror mounts are currently under development.
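The one-row tracking idea can be sketched as a 1-D cross-correlation shift estimate: slide the current row profile against a reference row and take the lag that maximizes the overlap-normalized correlation. The synthetic sunspot profile and normalization choice below are illustrative, not the MSFC algorithms themselves.

```python
def shift_1d(reference, current, max_shift=10):
    """Estimate the integer pixel shift between two 1-D intensity profiles
    (e.g. one CCD row) by maximizing their cross-correlation."""
    n = len(reference)
    best_s, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = 0.0
        count = 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:
                score += reference[i] * current[j]
                count += 1
        score /= count   # normalize by overlap length
        if score > best_score:
            best_score, best_s = score, s
    return best_s

# A dark "sunspot" dip on a bright background, shifted by 3 pixels.
row = [100.0] * 64
for i in range(28, 36):
    row[i] = 20.0
shifted = [100.0] * 64
for i in range(31, 39):
    shifted[i] = 20.0
err = shift_1d(row, shifted)   # error signal for the deflection mirror
```

Running the same estimator on one row and one column gives the two axis error signals the abstract describes, one per deflection actuator.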
High-resolution studies of the structure of the solar atmosphere using a new imaging algorithm
NASA Technical Reports Server (NTRS)
Karovska, Margarita; Habbal, Shadia Rifai
1991-01-01
The results of the application of a new image restoration algorithm developed by Ayers and Dainty (1988) to the multiwavelength EUV/Skylab observations of the solar atmosphere are presented. The application of the algorithm makes it possible to reach a resolution better than 5 arcsec, and thus study the structure of the quiet sun on that spatial scale. The results show evidence for discrete looplike structures in the network boundary, 5-10 arcsec in size, at temperatures of 100,000 K.
Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description
Schmidt, Gail; Jenkerson, Calli B.; Masek, Jeffrey; Vermote, Eric; Gao, Feng
2013-01-01
The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm, which was implemented by the USGS.
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1997-01-01
Significant accomplishments made during the present reporting period are as follows: (1) We developed a new method for identifying the presence of absorbing aerosols and, simultaneously, performing atmospheric correction. The algorithm consists of optimizing the match between the top-of-atmosphere radiance spectrum and the result of models of both the ocean and aerosol optical properties; (2) We developed an algorithm for providing an accurate computation of the diffuse transmittance of the atmosphere given an aerosol model. A module for inclusion into the MODIS atmospheric-correction algorithm was completed; (3) We acquired reflectance data for oceanic whitecaps during a cruise on the RV Ka'imimoana in the Tropical Pacific (Manzanillo, Mexico to Honolulu, Hawaii). The reflectance spectrum of whitecaps was found to be similar to that for breaking waves in the surf zone measured by Frouin, Schwindling and Deschamps; however, the drop in augmented reflectance from 670 to 860 nm was not as great, and the magnitude of the augmented reflectance was significantly less than expected; and (4) We developed a method for the approximate correction for the effects of the MODIS polarization sensitivity. The correction, however, requires adequate characterization of the polarization sensitivity of MODIS prior to launch.
Orżanowski, Tomasz
2016-01-01
This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm that is easy to implement in hardware. The proposed NUC algorithm is based on a linear correction scheme with a practical method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction uses the change in each pixel's response, determined at the actual operating conditions relative to reference conditions by means of a shutter, to compensate for the temporal drift of the pixel offsets. Moreover, it also removes any optics shading effect from the output image. To show the efficiency of the proposed NUC algorithm, test results for a microbolometer IRFPA are presented.
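The linear correction scheme with shutter-based offset refresh described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function and variable names are mine, pixels are modeled as flat lists rather than 2-D arrays, and the two-point calibration step is a generic textbook addition.

```python
def build_nuc(low_frames, high_frames, t_low, t_high):
    """Per-pixel gain/offset from two uniform-temperature calibration frames.

    Classic two-point linear NUC: choose g, o per pixel so that
    g * raw + o maps the pixel's raw response onto the flat reference scale.
    """
    gains, offsets = [], []
    for lo, hi in zip(low_frames, high_frames):
        g = (t_high - t_low) / (hi - lo)   # per-pixel gain
        o = t_low - g * lo                 # per-pixel offset
        gains.append(g)
        offsets.append(o)
    return gains, offsets

def update_offsets(offsets, gains, shutter_raw, shutter_ref):
    """Shutter-based offset refresh (the drift-compensation step described
    above): each pixel's response change to the closed shutter, relative to
    the reference shutter frame, is folded into its offset coefficient."""
    return [o - g * (cur - ref)
            for o, g, cur, ref in zip(offsets, gains, shutter_raw, shutter_ref)]

def correct(raw, gains, offsets):
    """Apply the linear correction to one raw frame."""
    return [g * r + o for g, r, o in zip(gains, raw, offsets)]
```

After a shutter-based `update_offsets` call, a pixel whose raw output has drifted still maps back to the same corrected value, which is the drift compensation the abstract describes.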
Hybrid flower pollination algorithm strategies for t-way test suite generation.
Nasser, Abdullah B; Zamli, Kamal Z; Alsewari, AbdulRahman A; Ahmed, Bestoun S
2018-01-01
The application of meta-heuristic algorithms for t-way testing has recently become prevalent. Consequently, many useful meta-heuristic algorithms have been developed on the basis of the implementation of t-way strategies (where t indicates the interaction strength). Mixed results have been reported in the literature to highlight the fact that no single strategy appears to be superior compared with other configurations. The hybridization of two or more algorithms can enhance the overall search capabilities, that is, by compensating the limitation of one algorithm with the strength of others. Thus, hybrid variants of the flower pollination algorithm (FPA) are proposed in the current work. Four hybrid variants of FPA are considered by combining FPA with other algorithmic components. The experimental results demonstrate that FPA hybrids overcome the problems of slow convergence in the original FPA and offer statistically superior performance compared with existing t-way strategies in terms of test suite size.
NASA Astrophysics Data System (ADS)
Nakhostin, M.; Hitomi, K.
2012-05-01
The energy resolution of thallium bromide (TlBr) detectors is significantly limited by the charge-trapping effect and ballistic deficit, both caused by the slow charge collection time. A digital pulse processing algorithm has been developed to compensate for the charge-trapping effect while minimizing ballistic deficit. The algorithm is examined using a 1 mm thick TlBr detector, and an excellent energy resolution of 3.37% at 662 keV is achieved at room temperature. The pulse processing algorithms are presented in recursive form, suitable for real-time implementation.
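As an illustration of the kind of recursive real-time pulse shaping filter referenced above, here is a textbook trapezoidal shaper in single-accumulator form. It is a generic sketch for an ideal step pulse, not the paper's trapping-compensation algorithm; real detector pulses decay exponentially and need an additional pole-zero correction term, omitted here.

```python
def trapezoidal_shaper(v, k, l):
    """Recursive trapezoidal pulse shaper (single-accumulator form).

    For an ideal step input the output ramps up over k samples, holds a
    flat top of height k for (l - k) samples, then ramps back down.  The
    flat top is what makes trapezoidal shaping tolerant of ballistic
    deficit: the peak is reached even for slowly collected charge.
    """
    assert l > k > 0
    get = lambda i: v[i] if i >= 0 else 0   # zero-padded history
    s, out = 0, []
    for n in range(len(v)):
        # four-tap kernel difference, then running accumulation
        d = get(n) - get(n - k) - get(n - l) + get(n - k - l)
        s += d
        out.append(s)
    return out
```

Feeding a unit step through the shaper with k = 3, l = 5 produces the expected ramp / flat-top / ramp-down shape.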
Robust mosaics of close-range high-resolution images
NASA Astrophysics Data System (ADS)
Song, Ran; Szymanski, John E.
2008-03-01
This paper presents a robust algorithm which relies only on the information contained within the captured images for the construction of massive composite mosaic images from close-range and high-resolution originals, such as those obtained when imaging architectural and heritage structures. We first apply the Harris algorithm to extract a selection of corners and then employ both the intensity correlation and the spatial correlation between the corresponding corners to match them. Then we estimate the eight-parameter projective transformation matrix using a genetic algorithm. Lastly, image fusion using a weighted blending function together with intensity compensation produces an effective seamless mosaic image.
Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft
NASA Astrophysics Data System (ADS)
Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei
2018-05-01
Facing the complex aerodynamic environment of Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial states uncertainties and aerodynamic parameters uncertainties are the main problems. The composite strategy is designed to solve these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for external disturbances and uncertainties attenuation. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and integral control scheme (5.8 km).
NASA Astrophysics Data System (ADS)
Pathak, P.; Guyon, O.; Jovanovic, N.; Lozi, J.; Martinache, F.; Minowa, Y.; Kudo, T.; Kotani, T.; Takami, H.
2018-02-01
Adaptive optics (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of telescope elevation angle. The look-up table-based correction of atmospheric dispersion results in imperfect compensation leading to the presence of residual dispersion in the point spread function (PSF) and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast while employing high-performance coronagraphs or can compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion by directly using science path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) that has a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. We have demonstrated with the Subaru Coronagraphic Extreme AO (SCExAO) system on-sky closed-loop correction of residual dispersion to <1 mas across H-band. This work will aid in the direct detection of habitable exoplanets with upcoming extremely large telescopes (ELTs) and also provide a diagnostic tool to test the performance of instruments which require sub-milliarcsecond correction.
Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm
NASA Technical Reports Server (NTRS)
Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)
2004-01-01
In support of the NASA Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles for key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria when safe and appropriate to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.
Completely automated open-path FT-IR spectrometry.
Griffiths, Peter R; Shao, Limin; Leytem, April B
2009-01-01
Atmospheric analysis by open-path Fourier-transform infrared (OP/FT-IR) spectrometry has been possible for over two decades but has not been widely used because of the limitations of the software of commercial instruments. In this paper, we describe the current state-of-the-art of the hardware and software that constitutes a contemporary OP/FT-IR spectrometer. We then describe advances that have been made in our laboratory that have enabled many of the limitations of this type of instrument to be overcome. These include not having to acquire a single-beam background spectrum that compensates for absorption features in the spectra of atmospheric water vapor and carbon dioxide. Instead, an easily measured "short path-length" background spectrum is used for calculation of each absorbance spectrum that is measured over a long path-length. To accomplish this goal, the algorithm used to calculate the concentrations of trace atmospheric molecules was changed from classical least-squares regression (CLS) to partial least-squares regression (PLS). For calibration, OP/FT-IR spectra are measured in pristine air over a wide variety of path-lengths, temperatures, and humidities, ratioed against a short-path background, and converted to absorbance; the reference spectrum of each analyte is then multiplied by randomly selected coefficients and added to these background spectra. Automatic baseline correction for small molecules with resolved rotational fine structure, such as ammonia and methane, is effected using wavelet transforms. A novel method of correcting for the effect of the nonlinear response of mercury cadmium telluride detectors is also incorporated. Finally, target factor analysis may be used to detect the onset of a given pollutant when its concentration exceeds a certain threshold. In this way, the concentration of atmospheric species has been obtained from OP/FT-IR spectra measured at intervals of 1 min over a period of many hours with no operator intervention.
Fourier domain preconditioned conjugate gradient algorithm for atmospheric tomography.
Yang, Qiang; Vogel, Curtis R; Ellerbroek, Brent L
2006-07-20
By 'atmospheric tomography' we mean the estimation of a layered atmospheric turbulence profile from measurements of the pupil-plane phase (or phase gradients) corresponding to several different guide star directions. We introduce what we believe to be a new Fourier domain preconditioned conjugate gradient (FD-PCG) algorithm for atmospheric tomography, and we compare its performance against an existing multigrid preconditioned conjugate gradient (MG-PCG) approach. Numerical results indicate that on conventional serial computers, FD-PCG is as accurate and robust as MG-PCG, but it is from one to two orders of magnitude faster for atmospheric tomography on 30 m class telescopes. Simulations are carried out for both natural guide stars and for a combination of finite-altitude laser guide stars and natural guide stars to resolve tip-tilt uncertainty.
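A generic preconditioned conjugate gradient skeleton shows where a Fourier-domain preconditioner would plug in. This is a minimal sketch, not the FD-PCG implementation of the paper: the FFT-based circulant preconditioner is replaced by an arbitrary callable (exercised below with simple Jacobi scaling), and all names are mine.

```python
def pcg(matvec, b, precond, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for a symmetric positive
    definite system A x = b given as a matrix-vector product.

    In FD-PCG the `precond` callable would apply an FFT, divide by the
    spectrum of a circulant approximation of A, and apply the inverse
    FFT; any SPD approximate inverse works in this skeleton.
    """
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A*0
    z = precond(r)
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = precond(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x
```

The speed advantage reported in the abstract comes entirely from the preconditioner: an FFT-diagonal solve costs O(n log n) per iteration and clusters the spectrum so that far fewer iterations are needed.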
Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha
2012-11-01
Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.
Refractive Index Compensation in Over-Determined Interferometric Systems
Lazar, Josef; Holá, Miroslava; Číp, Ondřej; Čížek, Martin; Hrabina, Jan; Buchta, Zdeněk
2012-01-01
We present an interferometric technique based on a differential interferometry setup for measurement under atmospheric conditions. The key limiting factor in any interferometric dimensional measurement is the fluctuation of the refractive index of air, which represents a dominant source of uncertainty when evaluated indirectly from the physical parameters of the atmosphere. Our proposal is based on the concept of an over-determined interferometric setup where a reference length is derived from a mechanical frame made from a material with a very low thermal coefficient. The technique allows one to track the variations of the refractive index of air on-line directly in the line of the measuring beam and to compensate for the fluctuations. The optical setup consists of three interferometers sharing the same beam path where two measure differentially the displacement while the third evaluates the changes in the measuring range, acting as a tracking refractometer. The principle is demonstrated in an experimental setup. PMID:23202037
NASA Technical Reports Server (NTRS)
Gordon, H. R.; Brown, J. W.; Clark, D. K.; Brown, O. B.; Evans, R. H.; Broenkow, W. W.
1983-01-01
The processing algorithms used for relating the apparent color of the ocean observed with the Coastal-Zone Color Scanner on Nimbus-7 to the concentration of phytoplankton pigments (principally the pigment responsible for photosynthesis, chlorophyll-a) are developed and discussed in detail. These algorithms are applied to the shelf and slope waters of the Middle Atlantic Bight and also to Sargasso Sea waters. In all, four images are examined, and the resulting pigment concentrations are compared to continuous measurements made along ship tracks. The results suggest that over the 0.08-1.5 mg/cu m range, the error in the retrieved pigment concentration is of the order of 30-40% for a variety of atmospheric turbidities. In three direct comparisons between ship-measured and satellite-retrieved values of the water-leaving radiance, the atmospheric correction algorithm retrieved the water-leaving radiance with an average error of about 10%. This atmospheric correction algorithm does not require any surface measurements for its application.
Atmospheric correction of SeaWiFS imagery for turbid coastal and inland waters.
Ruddick, K G; Ovidio, F; Rijkeboer, M
2000-02-20
The standard SeaWiFS atmospheric correction algorithm, designed for open ocean water, has been extended for use over turbid coastal and inland waters. Failure of the standard algorithm over turbid waters can be attributed to invalid assumptions of zero water-leaving radiance for the near-infrared bands at 765 and 865 nm. In the present study these assumptions are replaced by the assumptions of spatial homogeneity of the 765:865-nm ratios for aerosol reflectance and for water-leaving reflectance. These two ratios are imposed as calibration parameters after inspection of the Rayleigh-corrected reflectance scatterplot. The performance of the new algorithm is demonstrated for imagery of Belgian coastal waters and yields physically realistic water-leaving radiance spectra. A preliminary comparison with in situ radiance spectra for the Dutch Lake Markermeer shows significant improvement over the standard atmospheric correction algorithm. An analysis is made of the sensitivity of results to the choice of calibration parameters, and perspectives for application of the method to other sensors are briefly discussed.
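The two homogeneity assumptions reduce the turbid-water NIR correction to a 2x2 linear system per scene. The sketch below is an illustrative decomposition only, with hypothetical ratio values and variable names of my choosing; in the paper the two calibration parameters are chosen by inspecting the Rayleigh-corrected reflectance scatterplot.

```python
def split_nir(rc765, rc865, eps_aer, alpha_wat):
    """Split Rayleigh-corrected NIR reflectance into aerosol and water parts.

    Assumes spatially homogeneous band ratios (the two calibration
    parameters described above):
        rho_a(765) / rho_a(865) = eps_aer
        rho_w(765) / rho_w(865) = alpha_wat
    so the two band equations
        rc765 = eps_aer * a + alpha_wat * w
        rc865 = a + w
    can be solved for a = rho_a(865) and w = rho_w(865).
    """
    a = (rc765 - alpha_wat * rc865) / (eps_aer - alpha_wat)
    w = rc865 - a
    return a, w
```

Once the aerosol part is isolated at 865 nm, the standard open-ocean aerosol extrapolation toward the visible bands can proceed as usual.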
Accurate beacon positioning method for satellite-to-ground optical communication.
Wang, Qiang; Tong, Ling; Yu, Siyuan; Tan, Liying; Ma, Jing
2017-12-11
In satellite laser communication systems, accurate positioning of the beacon is essential for establishing a steady laser communication link. For satellite-to-ground optical communication, the main influencing factors on the acquisition of the beacon are background noise and atmospheric turbulence. In this paper, we consider the influence of background noise and atmospheric turbulence on the beacon in satellite-to-ground optical communication, and propose a new locating algorithm for the beacon, which takes the correlation coefficients obtained by curve fitting of the image data as weights. By performing a long-distance laser communication experiment (11.16 km), we verified the feasibility of this method. Both simulation and experiment showed that the new algorithm can accurately obtain the position of the centroid of the beacon. Furthermore, for the distortion of the light spot through atmospheric turbulence, the locating accuracy of the new algorithm was 50% higher than that of the conventional gray centroid algorithm. This new approach will be beneficial for the design of satellite-to-ground optical communication systems.
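The conventional gray centroid baseline mentioned above, and the idea of weighting it, can be sketched in a few lines. This is a simplified illustration: the paper derives its weights from curve-fit correlation coefficients, whereas here the weights are simply an input array, and all names are mine.

```python
def weighted_centroid(image, weights=None):
    """Centroid of a spot image (row-major list of lists).

    With weights=None this is the conventional gray (intensity)
    centroid.  Passing per-pixel weights -- e.g. correlation
    coefficients from a curve fit, as in the algorithm described
    above -- down-weights turbulence-distorted pixels.
    """
    rows, cols = len(image), len(image[0])
    if weights is None:
        weights = [[1.0] * cols for _ in range(rows)]
    m = sx = sy = 0.0
    for y in range(rows):
        for x in range(cols):
            w = image[y][x] * weights[y][x]   # weighted intensity
            m += w
            sx += w * x
            sy += w * y
    return sx / m, sy / m
```

On a symmetric spot both variants agree; down-weighting one side of the spot pulls the estimate toward the trusted pixels, which is the intended effect under turbulence-induced distortion.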
NASA Technical Reports Server (NTRS)
Hlavka, Dennis L.; Palm, S. P.; Welton, E. J.; Hart, W. D.; Spinhirne, J. D.; McGill, M.; Mahesh, A.; Starr, David OC. (Technical Monitor)
2001-01-01
The Geoscience Laser Altimeter System (GLAS) is scheduled for launch on the ICESat satellite as part of the NASA EOS mission in 2002. GLAS will be used to perform high resolution surface altimetry and will also provide a continuously operating atmospheric lidar to profile clouds, aerosols, and the planetary boundary layer with horizontal and vertical resolution of 175 and 76.8 m, respectively. GLAS is the first active satellite atmospheric profiler to provide global coverage. Data products include direct measurements of the heights of aerosol and cloud layers, and the optical depth of transmissive layers. In this poster we provide an overview of the GLAS atmospheric data products, present a simulated GLAS data set, and show results from the simulated data set using the GLAS data processing algorithm. Optical results from the ER-2 Cloud Physics Lidar (CPL), which uses many of the same processing algorithms as GLAS, show algorithm performance with real atmospheric conditions during the Southern African Regional Science Initiative (SAFARI 2000).
Polarimetric Remote Sensing of Atmospheric Particulate Pollutants
NASA Astrophysics Data System (ADS)
Li, Z.; Zhang, Y.; Hong, J.
2018-04-01
Atmospheric particulate pollutants not only reduce atmospheric visibility and change the energy balance of the troposphere, but also affect human and vegetation health. For monitoring these particulate pollutants, we establish and develop a series of inversion algorithms based on polarimetric remote sensing technology, which has unique advantages in dealing with atmospheric particulates. A solution is presented to estimate near-surface PM2.5 mass concentrations from full remote sensing measurements, including polarimetric, active and infrared remote sensing technologies. It is found that the mean relative error of PM2.5 retrieved by full remote sensing measurements is 35.5% for the case of October 5th, 2013, an improvement over previous studies. A systematic comparison with ground-based observations further indicates the effectiveness of the inversion algorithm and the reliability of the results. A new generation of polarized sensors (DPC and PCF), whose observations can support these algorithms, will be onboard GF series satellites launched by China in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, W; Jung, J; Kang, Y
Purpose: To quantitatively analyze the influence that image processing for Moire elimination has in digital radiography, by comparing images acquired with an optimized anti-scatter grid only against images acquired with software processing paired with a misaligned low-frequency grid. Methods: A special phantom, which does not create scattered radiation, was used to acquire non-grid reference images without any grids. One set of images was acquired with an optimized grid aligned to the pixels of the detector, and another set was acquired with a misaligned low-frequency grid paired with a Moire elimination processing algorithm. The X-ray technique used was based on consideration of the Bucky factor derived from the non-grid reference images. For evaluation, we compared the pixel intensity of the images acquired with grids to that of the reference images. Results: Compared to the image acquired with the optimized grid, images acquired with the Moire elimination processing algorithm showed 10 to 50% lower mean contrast values in the ROI. Severe distortion was found when the object's thickness measured 7 or fewer pixels; in this case, the contrast value measured from images acquired with the Moire elimination processing algorithm was under 30% of that taken from the reference image. Conclusion: This study shows the potential risk of Moire compensation images in diagnosis. Images acquired with a misaligned low-frequency grid result in Moire noise, and the Moire compensation processing algorithm used to remove this noise actually caused image distortion. As a result, fractures and/or calcifications which are present in only a few pixels may not be diagnosed properly. In future work, we plan to evaluate images acquired without a grid but based entirely on image processing, and the potential risks this poses.
Binarization algorithm for document image with complex background
NASA Astrophysics Data System (ADS)
Miao, Shaojun; Lu, Tongwei; Min, Feng
2015-12-01
The most important step in image preprocessing for Optical Character Recognition (OCR) is binarization. Due to the complex background or varying light in the text image, binarization is a very difficult problem. This paper presents an improved binarization algorithm. The algorithm can be divided into several steps. First, the background approximation is obtained by polynomial fitting, and the text is sharpened using a bilateral filter. Second, image contrast compensation is done to reduce the impact of light and improve the contrast of the original image. Third, the first derivative of the pixels in the compensated image is calculated to get the average value of the threshold, then edge detection is performed. Fourth, the stroke width of the text is estimated by measuring the distance between edge pixels. The final stroke width is determined by choosing the most frequent distance in the histogram. Fifth, according to the value of the final stroke width, the window size is calculated, then a local threshold estimation approach is used to binarize the image. Finally, small noise is removed using morphological operators. The experimental results show that the proposed method can effectively remove the noise caused by complex background and varying light.
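The final local-threshold step can be sketched as below. This is a deliberately simplified illustration, assuming a plain local-mean threshold: the paper additionally performs contrast compensation and seeds the threshold from edge-pixel derivatives, and the window size would come from the stroke-width estimate rather than being passed in directly.

```python
def binarize(gray, window):
    """Local-threshold binarization of a grayscale image
    (row-major list of lists of intensities).

    Each pixel is compared with the mean of its window x window
    neighborhood (clipped at the image border); dark text maps to 0,
    background to 255.  In the pipeline described above, `window` is
    derived from the estimated stroke width of the text.
    """
    h, w = len(gray), len(gray[0])
    r = window // 2
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [gray[j][i]
                    for j in range(max(0, y - r), min(h, y + r + 1))
                    for i in range(max(0, x - r), min(w, x + r + 1))]
            thresh = sum(vals) / len(vals)        # local mean threshold
            row.append(0 if gray[y][x] < thresh else 255)
        out.append(row)
    return out
```

Because the threshold is computed per neighborhood, a slowly varying background or uneven illumination shifts the local mean along with the text, which is what makes local thresholding robust where a single global threshold fails.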
Adaptive optics based non-null interferometry for optical free form surfaces test
NASA Astrophysics Data System (ADS)
Zhang, Lei; Zhou, Sheng; Li, Jingsong; Yu, Benli
2018-03-01
An adaptive optics based non-null interferometry (ANI) method is proposed for optical free-form surface testing, in which an open-loop deformable mirror (DM) is employed as a reflective compensator to compensate various low-order aberrations flexibly. The residual wavefront aberration is treated by the multi-configuration ray tracing (MCRT) algorithm. The MCRT algorithm is based on simultaneous ray tracing through multiple system models, in which each model has a different DM surface deformation. With the MCRT algorithm, the final figure error can be extracted together with the surface misalignment aberration correction after the initial system calibration. A flexible test for free-form surfaces is achieved with high accuracy, without an auxiliary device for DM deformation monitoring. Experiments proving the feasibility, repeatability and high accuracy of the ANI were carried out to test a bi-conic surface and a paraboloidal surface, with a highly stable ALPAO DM88. The accuracy of the final test result of the paraboloidal surface was better than λ/20 PV. It is a successful attempt in the research of flexible optical free-form surface metrology and would have enormous potential in future applications with the development of DM technology.
Semiparametric modeling: Correcting low-dimensional model error in parametric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Tyrus, E-mail: thb11@psu.edu; Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, 503 Walker Building, University Park, PA 16802-5013
2016-03-01
In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
Study on transient beam loading compensation for China ADS proton linac injector II
NASA Astrophysics Data System (ADS)
Gao, Zheng; He, Yuan; Wang, Xian-Wu; Chang, Wei; Zhang, Rui-Feng; Zhu, Zheng-Long; Zhang, Sheng-Hu; Chen, Qi; Powers, Tom
2016-05-01
Significant transient beam loading effects were observed during beam commissioning tests of prototype II of the injector for the accelerator driven sub-critical (ADS) system, which took place at the Institute of Modern Physics, Chinese Academy of Sciences, between October and December 2014. During these tests experiments were performed with continuous wave (CW) operation of the cavities with pulsed beam current, and the system was configured to make use of a prototype digital low level radio frequency (LLRF) controller. The system was originally operated in pulsed mode with a simple proportional-integral-derivative (PID) feedback control algorithm, which was not able to maintain the desired gradient regulation during pulsed 10 mA beam operations. A unique simple transient beam loading compensation method which made use of a combination of proportional and integral (PI) feedback and feedforward control algorithm was implemented in order to significantly reduce the beam induced transient effect in the cavity gradients. The superconducting cavity field variation was reduced to less than 1.7% after turning on this control algorithm. The design and experimental results of this system are presented in this paper. Supported by National Natural Science Foundation of China (91426303, 11525523)
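The idea behind combining PI feedback with feedforward can be sketched on a toy first-order cavity model. This is not the ADS injector's actual transfer function or controller: the model, gains, and beam timing below are all illustrative choices of mine, meant only to show why a feedforward term keyed to the known beam current removes the transient that feedback alone reacts to after the fact.

```python
def simulate(kff, steps=80, dt=0.125, tau=1.0, c=1.0, kp=2.0, ki=0.5):
    """Pulsed-beam cavity regulation with PI feedback plus feedforward.

    Toy first-order cavity:  dv/dt = (u - v)/tau - c * i_beam,
    where v is the cavity field, u the drive, and i_beam the beam
    current (a pulse).  With kff = c * tau the feedforward term cancels
    the beam-loading term exactly, so the field never dips; with
    kff = 0 the PI loop only responds after the error appears.
    Returns the worst field deviation from the setpoint.
    """
    v, setpoint = 1.0, 1.0
    integ = setpoint / ki            # pre-loaded so u == v before the pulse
    worst = 0.0
    for n in range(steps):
        i_beam = 0.5 if 20 <= n < 50 else 0.0    # pulsed beam current
        e = setpoint - v
        integ += e * dt
        u = kp * e + ki * integ + kff * i_beam   # feedback + feedforward
        v += dt * ((u - v) / tau - c * i_beam)
        worst = max(worst, abs(setpoint - v))
    return worst
```

Running the same loop with and without the feedforward term makes the benefit concrete: the feedback-only controller shows a transient dip at beam turn-on, while the compensated loop holds the field flat.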
NASA Astrophysics Data System (ADS)
Zhang, Junzhi; Li, Yutong; Lv, Chen; Gou, Jinfang; Yuan, Ye
2017-03-01
The flexibility of the electrified powertrain system degrades both the cooperative control performance between regenerative and hydraulic braking and the active damping control performance. Meanwhile, the connections among sensors, controllers, and actuators are realized via network communication, i.e., a controller area network (CAN), which introduces time-varying delays and deteriorates the performance of the closed-loop control systems. The goal of this paper is therefore to develop a control algorithm that copes with all of these challenges. To this end, models of the stochastic network-induced time-varying delays, based on a real in-vehicle network topology, and of a flexible electrified powertrain were first built. To further enhance the performance of active damping and of the cooperative control of regenerative and hydraulic braking, a time-varying delay compensation algorithm for electrified powertrain active damping during regenerative braking was developed based on a predictive scheme. The augmented system is constructed and its H∞ performance is analyzed. Based on this analysis, the control gains are derived by solving a nonlinear minimization problem. Simulations and hardware-in-the-loop (HIL) tests were carried out to validate the effectiveness of the developed algorithm. The test results show that the active damping and cooperative control performances are enhanced significantly.
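The core of a predictive delay-compensation scheme is to advance the last state received over the network through a known plant model, replaying the control inputs buffered during the delay interval. A scalar sketch follows; the plant coefficients, delay, and input history are invented for illustration and the real algorithm operates on the full powertrain model with stochastic delays:

```python
def predict_state(x, u_history, a, b, delay_steps):
    """Predictive delay compensation: advance the last state received
    over the network through the known plant model x' = a*x + b*u,
    replaying the buffered control inputs sent during the delay."""
    for u in u_history[-delay_steps:]:
        x = a * x + b * u
    return x

# scalar plant with a 3-sample network delay (values are illustrative)
a, b, delay = 0.9, 0.1, 3
x_delayed = 1.0                      # state measured 3 samples ago
u_hist = [0.5, -0.2, 0.3, 0.1, 0.4]  # buffered control inputs

# ground truth: what the plant actually did during the delayed samples
x_true = x_delayed
for u in u_hist[-delay:]:
    x_true = a * x_true + b * u

x_pred = predict_state(x_delayed, u_hist, a, b, delay)
print(abs(x_pred - x_true))  # 0.0 -- prediction matches the plant exactly
```

When the plant model is exact, the prediction recovers the current state exactly; model mismatch and stochastic delay lengths are what the paper's H∞ analysis addresses.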
[Design and implementation of real-time continuous glucose monitoring instrument].
Huang, Yonghong; Liu, Hongying; Tian, Senfu; Jia, Ziru; Wang, Zi; Pi, Xitian
2017-12-01
Real-time continuous glucose monitoring can help diabetics control blood sugar levels within the normal range. However, in practical monitoring, the output of a real-time continuous glucose monitoring system is susceptible to glucose sensor and environmental noise, which influences the measurement accuracy of the system. To address this problem, a dual-calibration algorithm is proposed in this paper that combines a moving-window double-layer filtering algorithm with a real-time self-compensating calibration algorithm, realizing signal drift compensation for the current data. A real-time continuous glucose monitoring instrument based on this study was designed, consisting of an adjustable excitation voltage module, a current-voltage converter module, a microprocessor and a wireless transceiver module. For portability, the size of the device was only 40 mm × 30 mm × 5 mm and its weight was only 30 g. In addition, a communication command code algorithm was designed to ensure the security and integrity of data transmission. Results of in vitro experiments showed that current detection by the device worked effectively. A 5-hour monitoring of blood glucose level in vivo showed that the device could continuously monitor blood glucose in real time. The relative error of the monitoring results of the designed device ranged from 2.22% to 7.17% when compared to a portable blood glucose meter.
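A minimal sketch of a moving-window double-layer filter with drift self-compensation follows. The window sizes, the linear drift model, and the synthetic current signal are all illustrative assumptions, not the instrument's actual algorithm (which re-estimates drift from calibration points):

```python
import numpy as np

def moving_avg(x, w):
    """Simple edge-padded moving average: one filtering layer."""
    pad = np.pad(x, (w // 2, w - w // 2 - 1), mode='edge')
    return np.convolve(pad, np.ones(w) / w, mode='valid')

def double_layer_filter(current, w1=5, w2=9):
    """Two cascaded moving windows: the first suppresses spikes,
    the second smooths residual noise (window sizes illustrative)."""
    return moving_avg(moving_avg(current, w1), w2)

def drift_compensate(current, t, drift_per_hour):
    """Self-compensation: subtract an estimated linear sensor drift
    (assumed known here; in practice re-estimated from calibrations)."""
    return current - drift_per_hour * t

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 300)                       # 5-hour monitoring
true_sig = 100 + 20 * np.sin(t)                  # "true" sensor current
meas = true_sig + 2.0 * t + rng.normal(0, 3, t.size)  # drift + noise
est = double_layer_filter(drift_compensate(meas, t, 2.0))
err = np.abs(est - true_sig).mean()
print(err)
```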
NASA Astrophysics Data System (ADS)
Loughman, Robert; Bhartia, Pawan K.; Chen, Zhong; Xu, Philippe; Nyaku, Ernest; Taha, Ghassan
2018-05-01
The theoretical basis of the Ozone Mapping and Profiler Suite (OMPS) Limb Profiler (LP) Version 1 aerosol extinction retrieval algorithm is presented. The algorithm uses an assumed bimodal lognormal aerosol size distribution to retrieve aerosol extinction profiles at 675 nm from OMPS LP radiance measurements. A first-guess aerosol extinction profile is updated by iteration using the Chahine nonlinear relaxation method, based on comparisons between the measured radiance profile at 675 nm and the radiance profile calculated by the Gauss-Seidel limb-scattering (GSLS) radiative transfer model for a spherical-shell atmosphere. This algorithm is discussed in the context of previous limb-scattering aerosol extinction retrieval algorithms, and the most significant error sources are enumerated. The retrieval algorithm is limited primarily by uncertainty about the aerosol phase function. Horizontal variations in aerosol extinction, which violate the spherical-shell atmosphere assumed in the version 1 algorithm, may also limit the quality of the retrieved aerosol extinction profiles significantly.
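The Chahine nonlinear relaxation step updates the extinction guess at each level by the ratio of measured to computed radiance. The sketch below uses a simple monotonic stand-in for the forward model; a real retrieval would run a radiative transfer model such as GSLS at each iteration:

```python
import numpy as np

def forward_radiance(extinction):
    """Stand-in forward model: a monotonic map from the extinction
    profile to a radiance profile (a real retrieval would run an RTM
    such as GSLS here)."""
    return 1.0 - np.exp(-2.0 * extinction)

def chahine_retrieve(y_meas, x0, n_iter=30):
    """Chahine relaxation: multiply the current guess at each level by
    the ratio of measured to computed radiance at that level."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x * (y_meas / forward_radiance(x))
    return x

x_true = np.array([0.05, 0.12, 0.30, 0.18, 0.07])   # extinction profile
y_meas = forward_radiance(x_true)                    # synthetic measurement
x_ret = chahine_retrieve(y_meas, x0=np.full(5, 0.1))
print(np.allclose(x_ret, x_true, atol=1e-3))  # True
```

The multiplicative update needs no Jacobian, which is why it suits retrievals where the forward model is an expensive black box.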
Coherent Detection of High-Rate Optical PPM Signals
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor; Fernandez, Michela Munoz
2006-01-01
A method of coherent detection of high-rate pulse-position modulation (PPM) on a received laser beam has been conceived as a means of reducing the deleterious effects of noise and atmospheric turbulence in free-space optical communication using focal-plane detector array technologies. In comparison with a receiver based on direct detection of the intensity modulation of a PPM signal, a receiver based on the present method of coherent detection performs well at much higher background levels. In principle, the coherent-detection receiver can exhibit quantum-limited performance despite atmospheric turbulence. The key components of such a receiver include standard receiver optics, a laser that serves as a local oscillator, a focal-plane array of photodetectors, and a signal-processing and data-acquisition assembly needed to sample the focal-plane fields and reconstruct the pulsed signal prior to detection. The received PPM-modulated laser beam and the local-oscillator beam are focused onto the photodetector array, where they are mixed in the detection process. The two lasers are of the same or nearly the same frequency. If the two lasers are of different frequencies, then the coherent detection process is characterized as heterodyne and, using traditional heterodyne-detection terminology, the difference between the two laser frequencies is denoted the intermediate frequency (IF). If the two laser beams are of the same frequency and remain aligned in phase, then the coherent detection process is characterized as homodyne (essentially, heterodyne detection at zero IF). As a result of the inherent squaring operation of each photodetector, the output current includes an IF component that contains the signal modulation. The amplitude of the IF component is proportional to the product of the local-oscillator signal amplitude and the PPM signal amplitude. 
Hence, by using a sufficiently strong local-oscillator signal, one can make the PPM-modulated IF signal strong enough to overcome thermal noise in the receiver circuits: this is what makes it possible to achieve near-quantum-limited detection in the presence of strong background. Following quantum-limited coherent detection, the outputs of the individual photodetectors are automatically aligned in phase by use of one or more adaptive array compensation algorithms [e.g., the least-mean-square (LMS) algorithm]. Then the outputs are combined and the resulting signal is processed to extract the high-rate information, as though the PPM signal were received by a single photodetector. In a continuing series of experiments to test this method (see Fig. 1), the local oscillator has a wavelength of 1,064 nm, and another laser is used as a signal transmitter at a slightly different wavelength to establish an IF of about 6 MHz. There are 16 photodetectors in a 4 × 4 focal-plane array; the detector outputs are digitized at a sampling rate of 25 MHz, and the signals in digital form are combined by use of the LMS algorithm. Convergence of the adaptive combining algorithm in the presence of simulated atmospheric turbulence for optical PPM signals has already been demonstrated in the laboratory; the combined output is shown in Fig. 2(a), and Fig. 2(b) shows the behavior of the phase of the combining weights as a function of time (or samples). We observe that the phase of the weights has a sawtooth shape due to the continuously changing phase in the down-converted output, which is not exactly at zero frequency. Detailed performance analysis of this coherent free-space optical communication system in the presence of simulated atmospheric turbulence is currently under way.
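The adaptive phase alignment via the LMS algorithm can be sketched as follows: each of 16 detector outputs carries the IF tone with its own turbulence-induced phase, and complex LMS weights learn to co-phase them against a clean reference. The noise level, step size, and reference construction are illustrative assumptions rather than the experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_det, n_samp = 16, 4000
fs, f_if = 25e6, 6e6                      # sample rate and IF (from text)
t = np.arange(n_samp) / fs

# each detector sees the common IF tone with its own turbulence-induced
# phase, plus circular complex receiver noise
phases = rng.uniform(0, 2 * np.pi, n_det)
x = (np.exp(1j * (2 * np.pi * f_if * t[None, :] + phases[:, None]))
     + 0.5 * (rng.standard_normal((n_det, n_samp))
              + 1j * rng.standard_normal((n_det, n_samp))))

ref = np.exp(1j * 2 * np.pi * f_if * t)   # clean IF reference tone
w = np.ones(n_det, dtype=complex) / n_det # combining weights
mu = 1e-3                                 # LMS step size (assumed)
for k in range(n_samp):
    y = np.vdot(w, x[:, k])               # combined output, w^H x
    e = ref[k] - y
    w += mu * x[:, k] * np.conj(e)        # complex LMS update

# after convergence the weights co-phase the detector outputs
tail = np.array([np.vdot(w, x[:, k]) for k in range(n_samp - 500, n_samp)])
err_end = np.abs(ref[-500:] - tail).mean()
print(err_end)
```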
An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP
NASA Astrophysics Data System (ADS)
Moncet, J. L.
2015-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. 
While OE algorithms typically implement regularization by using background estimates from climatological or numerical forecast data, those sources are problematic for climate applications due to the imprint of biases from past climate analyses or from model error.
Cutti, Andrea Giovanni; Cappello, Angelo; Davalli, Angelo
2006-01-01
Soft tissue artefact is the dominant error source for upper extremity motion analyses that use skin-mounted markers, especially for humeral axial rotation. A new in vivo technique is presented that is based on the definition of a humerus bone-embedded frame that is almost "artefact free" but is influenced by elbow orientation when measuring humeral axial rotation, and on an algorithm designed to resolve this kinematic coupling. The technique was validated in vivo in a study of six healthy subjects who performed five arm-movement tasks. For each task, the similarity between a gold-standard pattern and the axial rotation pattern before and after application of the compensation algorithm was evaluated in terms of explained variance, gain, phase and offset. In addition, the root mean square error between the patterns was used as a global similarity estimator. After application, for four out of five tasks, the patterns were highly correlated and in phase, with almost equal gain and limited offset; the root mean square error decreased from the original 9 degrees to 3 degrees. The proposed technique appears to help compensate for the soft tissue artefact affecting axial rotation. A further development is also proposed to make the technique effective for the pure prono-supination task as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiyko, V V; Kislov, V I; Ofitserov, E N
2015-08-31
In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, in the case where a Hartmann sensor is used as the wavefront sensor, the required number of actuators in the absence of a noise component in the sensor error turns out to be 1.5–2.5 times smaller than the number of counts, and the difference grows with increasing measurement noise.
Low-Cost Ultrasonic Distance Sensor Arrays with Networked Error Correction
Dai, Hongjun; Zhao, Shulin; Jia, Zhiping; Chen, Tianzhou
2013-01-01
Distance has been one of the basic factors in manufacturing and control fields, and ultrasonic distance sensors have been widely used as a low-cost measuring tool. However, the propagation of ultrasonic waves is greatly affected by environmental factors such as temperature, humidity and atmospheric pressure. To solve the problem of inaccurate measurement, which is significant in industry, this paper presents a novel ultrasonic distance sensor model using networked error correction (NEC) trained on experimental data. This is more accurate than other existing approaches because it uses information from indirect association with neighboring sensors, which has not been considered before. The NEC technique, focusing on optimization of the relationship of the topological structure of sensor arrays, is implemented to compensate for erroneous measurements caused by the environment. We apply the maximum likelihood method to determine the optimal fusion data set and use a neighbor discovery algorithm to identify neighbor nodes quickly. Furthermore, we adopt the NEC optimization algorithm, which takes full advantage of the correlation coefficients of neighboring sensors. The experimental results demonstrate that the ranging errors of the NEC system are within 2.20%; furthermore, the mean absolute percentage error is reduced to 0.01% after three iterations of this method, which means that the proposed method performs extremely well. The optimized method of distance measurement we propose, with the capability of NEC, would bring a significant advantage to intelligent industrial automation. PMID:24013491
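Under a Gaussian error model, the maximum-likelihood fusion of a node's reading with its neighbors' reduces to inverse-variance weighting. A minimal sketch with invented per-sensor variances (the paper's full method additionally discovers neighbors and exploits their correlation structure):

```python
import numpy as np

def ml_fuse(readings, variances):
    """Maximum-likelihood fusion of a node's reading with its
    neighbors' under a Gaussian error model: inverse-variance weights."""
    w = 1.0 / np.asarray(variances)
    return np.sum(w * np.asarray(readings)) / np.sum(w)

rng = np.random.default_rng(3)
true_dist = 2.000                                    # metres
variances = np.array([0.010, 0.020, 0.015, 0.020])   # node + 3 neighbors
raw_err, fused_err = [], []
for _ in range(200):
    readings = true_dist + rng.normal(0.0, np.sqrt(variances))
    raw_err.append(abs(readings[0] - true_dist))             # node alone
    fused_err.append(abs(ml_fuse(readings, variances) - true_dist))
print(np.mean(raw_err), np.mean(fused_err))
```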
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Olson, William S.; Shie, Chung-Lin; L'Ecuyer, Tristan S.; Tao, Wei-Kuo
2009-01-01
In this study, satellite passive microwave sensor observations from the TRMM Microwave Imager (TMI) are utilized to make estimates of latent + eddy sensible heating rates (Q1-QR) in regions of precipitation. The TMI heating algorithm (TRAIN) is calibrated, or "trained" using relatively accurate estimates of heating based upon spaceborne Precipitation Radar (PR) observations collocated with the TMI observations over a one-month period. The heating estimation technique is based upon a previously described Bayesian methodology, but with improvements in supporting cloud-resolving model simulations, an adjustment of precipitation echo tops to compensate for model biases, and a separate scaling of convective and stratiform heating components that leads to an approximate balance between estimated vertically-integrated condensation and surface precipitation. Estimates of Q1-QR from TMI compare favorably with the PR training estimates and show only modest sensitivity to the cloud-resolving model simulations of heating used to construct the training data. Moreover, the net condensation in the corresponding annual mean satellite latent heating profile is within a few percent of the annual mean surface precipitation rate over the tropical and subtropical oceans where the algorithm is applied. Comparisons of Q1 produced by combining TMI Q1-QR with independently derived estimates of QR show reasonable agreement with rawinsonde-based analyses of Q1 from two field campaigns, although the satellite estimates exhibit heating profile structure with sharper and more intense heating peaks than the rawinsonde estimates.
NASA Technical Reports Server (NTRS)
Liu, X.; Kizer, S.; Barnet, C.; Dvakarla, M.; Zhou, D. K.; Larar, A. M.
2012-01-01
The Joint Polar Satellite System (JPSS) is a U.S. National Oceanic and Atmospheric Administration (NOAA) mission in collaboration with the U.S. National Aeronautics and Space Administration (NASA) and international partners. The NPP Cross-track Infrared Microwave Sounding Suite (CrIMSS) consists of the infrared (IR) Cross-track Infrared Sounder (CrIS) and the microwave (MW) Advanced Technology Microwave Sounder (ATMS). The CrIS instrument is a hyperspectral interferometer that measures upwelling infrared radiances at high spectral and spatial resolution. The ATMS is a 22-channel radiometer similar to the Advanced Microwave Sounding Units (AMSU) A and B. It measures top-of-atmosphere MW upwelling radiation and provides the capability to sound below clouds. The CrIMSS Environmental Data Record (EDR) algorithm provides three EDRs, namely the atmospheric vertical temperature, moisture and pressure profiles (AVTP, AVMP and AVPP, respectively), with the lower-tropospheric AVTP and the AVMP being JPSS Key Performance Parameters (KPPs). The operational CrIMSS EDR algorithm was originally designed to run on large IBM computers with a dedicated data management subsystem (DMS). We have ported the operational code to simple Linux systems by replacing the DMS with appropriate interfaces. We also changed the interface of the operational code so that we can read data from both the CrIMSS science code and the operational code and compare lookup tables, parameter files, and output results. The details of the CrIMSS EDR algorithm are described in reference [1]. We will present results of testing the CrIMSS EDR operational algorithm using proxy data generated from Infrared Atmospheric Sounding Interferometer (IASI) satellite data and from NPP CrIS/ATMS data.
Ocean observations with EOS/MODIS: Algorithm development and post launch studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.
1996-01-01
An investigation of the influence of stratospheric aerosol on the performance of the atmospheric correction algorithm is nearly complete. The results indicate how the performance of the algorithm is degraded if the stratospheric aerosol is ignored. Use of the MODIS 1380 nm band to effect a correction for stratospheric aerosols was also studied. Simple algorithms such as subtracting the reflectance at 1380 nm from the visible and near infrared bands can significantly reduce the error; however, only if the diffuse transmittance of the aerosol layer is taken into account. The atmospheric correction code has been modified for use with absorbing aerosols. Tests of the code showed that, in contrast to non absorbing aerosols, the retrievals were strongly influenced by the vertical structure of the aerosol, even when the candidate aerosol set was restricted to a set appropriate to the absorbing aerosol. This will further complicate the problem of atmospheric correction in an atmosphere with strongly absorbing aerosols. Our whitecap radiometer system and solar aureole camera were both tested at sea and performed well. Investigation of a technique to remove the effects of residual instrument polarization sensitivity was initiated and applied to an instrument possessing (approx.) 3-4 times the polarization sensitivity expected for MODIS. Preliminary results suggest that for such an instrument, elimination of the polarization effect is possible at the required level of accuracy by estimating the polarization of the top-of-atmosphere radiance to be that expected for a pure Rayleigh scattering atmosphere. This may be of significance for design of a follow-on MODIS instrument. W.M. Balch participated on two month-long cruises to the Arabian sea, measuring coccolithophore abundance, production, and optical properties. A thorough understanding of the relationship between calcite abundance and light scatter, in situ, will provide the basis for a generic suspended calcite algorithm.
Synthesis of atmospheric turbulence point spread functions by sparse and redundant representations
NASA Astrophysics Data System (ADS)
Hunt, Bobby R.; Iler, Amber L.; Bailey, Christopher A.; Rucci, Michael A.
2018-02-01
Atmospheric turbulence is a fundamental problem in imaging through long slant ranges, horizontal-range paths, or uplooking astronomical cases through the atmosphere. An essential characterization of atmospheric turbulence is the point spread function (PSF). Turbulence images can be simulated to study basic questions, such as image quality and image restoration, by synthesizing PSFs of desired properties. In this paper, we report on a method to synthesize PSFs of atmospheric turbulence. The method uses recent developments in sparse and redundant representations. From a training set of measured atmospheric PSFs, we construct a dictionary of "basis functions" that characterize the atmospheric turbulence PSFs. A PSF can be synthesized from this dictionary by a properly weighted combination of dictionary elements. We disclose an algorithm to synthesize PSFs from the dictionary. The algorithm can synthesize PSFs in three orders of magnitude less computing time than conventional wave optics propagation methods. The resulting PSFs are also shown to be statistically representative of the turbulence conditions that were used to construct the dictionary.
A study of digital gyro compensation loops. [data conversion routines and breadboard models
NASA Technical Reports Server (NTRS)
1975-01-01
The feasibility of replacing existing state-of-the-art analog gyro compensation loops with digital computation is discussed. This was accomplished by designing appropriate compensation loops for the dry tuned two-degree-of-freedom (TDF) gyro, selecting appropriate data conversion and processing techniques and algorithms, and breadboarding the design for laboratory evaluation. A breadboard design was established in which one axis of a Teledyne tuned-gimbal TDF gyro was caged digitally while the other was caged using conventional analog electronics. The digital loop was designed analytically to closely resemble the analog loop in performance. The breadboard was subjected to various static and dynamic tests in order to establish the relative stability characteristics and frequency responses of the digital and analog loops. Several variations of the digital loop configuration were evaluated. The results were favorable.
NASA Astrophysics Data System (ADS)
Kim, Wonhee; Chen, Xu; Lee, Youngwoo; Chung, Chung Choo; Tomizuka, Masayoshi
2018-05-01
A discrete-time backstepping control algorithm is proposed for reference tracking of systems affected by both broadband disturbances at low frequencies and narrow-band disturbances at high frequencies. A discrete-time disturbance observer (DOB), constructed from infinite impulse response filters, is applied to compensate for the narrow-band disturbances at high frequencies. A discrete-time nonlinear damping backstepping controller with an augmented observer is proposed to track the desired output and to compensate for the low-frequency broadband disturbances, working alongside the disturbance observer that rejects the narrow-band high-frequency disturbances. This combination has the merit of simultaneously compensating for both broadband disturbances at low frequencies and narrow-band disturbances at high frequencies. The performance of the proposed method is validated via experiments.
A robust H.264/AVC video watermarking scheme with drift compensation.
Jiang, Xinghao; Sun, Tanfeng; Zhou, Yue; Wang, Wan; Shi, Yun-Qing
2014-01-01
A robust H.264/AVC video watermarking scheme for copyright protection with self-adaptive drift compensation is proposed. In our scheme, motion vector residuals of macroblocks with the smallest partition size are selected to hide copyright information in order to hold visual impact and distortion drift to a minimum. Drift compensation is also implemented to reduce the influence of watermark to the most extent. Besides, discrete cosine transform (DCT) with energy compact property is applied to the motion vector residual group, which can ensure robustness against intentional attacks. According to the experimental results, this scheme gains excellent imperceptibility and low bit-rate increase. Malicious attacks with different quantization parameters (QPs) or motion estimation algorithms can be resisted efficiently, with 80% accuracy on average after lossy compression.
Research on the range side lobe suppression method for modulated stepped frequency radar signals
NASA Astrophysics Data System (ADS)
Liu, Yinkai; Shan, Tao; Feng, Yuan
2018-05-01
The magnitude of the time-domain range sidelobes of a modulated stepped frequency radar affects the imaging quality of inverse synthetic aperture radar (ISAR). In this paper, the cause of high sidelobes in modulated stepped frequency radar imaging in a real environment is analyzed first. Then, chaos particle swarm optimization (CPSO) is used to select the amplitude and phase compensation factors according to a minimum-sidelobe criterion. Finally, the compensated one-dimensional range images are obtained. Experimental results show that the amplitude-phase compensation method based on the CPSO algorithm can effectively reduce the peak sidelobe level of one-dimensional range images, outperforming common sidelobe suppression methods and preventing weak scattering points from being masked by the high sidelobes of strong scattering points.
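The optimization step can be sketched as follows: candidate phase-compensation factors are scored by the peak sidelobe of the resulting range profile and refined by a swarm. The sketch uses plain particle swarm optimization (the paper's chaotic variant adds chaotic re-initialization on top of this), phase factors only, and a synthetic 16-step single-scatterer signal; all swarm settings are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 16                                    # frequency steps
phase_err = rng.uniform(-0.5, 0.5, N)     # unknown phase errors (rad)
steps = np.exp(1j * phase_err)            # returns of a single scatterer

def peak_sidelobe(phi):
    """Cost: apply candidate phase compensation phi, form the range
    profile by IFFT, return peak sidelobe relative to the mainlobe."""
    prof = np.abs(np.fft.ifft(steps * np.exp(-1j * phi)))
    return np.sort(prof)[-2] / prof.max()

# plain PSO over the phase factors; swarm settings are assumptions
n_p, iters = 30, 150
pos = rng.uniform(-1.0, 1.0, (n_p, N))
pos[0] = 0.0                              # include "no compensation"
vel = np.zeros((n_p, N))
pbest = pos.copy()
pbest_f = np.array([peak_sidelobe(p) for p in pos])
g = pbest[pbest_f.argmin()].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_p, N))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([peak_sidelobe(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[pbest_f.argmin()].copy()

print(peak_sidelobe(g), peak_sidelobe(np.zeros(N)))
```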
NASA Astrophysics Data System (ADS)
Loughman, R. P.; Bhartia, P. K.; Moy, L.; Kramarova, N. A.; Wargan, K.
2016-12-01
Many remote sensing techniques used to monitor the Earth's upper atmosphere fall into the broad category of "limb viewing" (LV) measurements, which includes any method for which the line of sight (LOS) fails to intersect the surface. Occultation, limb emission and limb scattering (LS) measurements are all LV methods that offer strong sensitivity to changes in the atmosphere near the tangent point of the LOS, due to the enhanced geometric path through the tangent layer (where the concentration also typically peaks, for most atmospheric species). But many of the retrieval algorithms used to interpret LV measurements assume that the atmosphere consists of "spherical shells", in which the atmospheric properties vary only with altitude (creating a 1D atmosphere). This assumption simplifies the analysis, but at the possible price of misinterpreting measurements made in the real atmosphere. In this presentation, we focus on the problem of LOS inhomogeneity for LS measurements made by the OMPS Limb Profiler (LP) instrument during the 2015 ozone hole period. The GSLS radiative transfer model (RTM) used in the default OMPS LP algorithms assumes a spherical-shell atmosphere defined at levels spaced 1 km apart, with extinction coefficients assumed to vary linearly with height between levels. Several recent improvements enable an updated single-scattering version of the GSLS RTM to ingest 3D MERRA-2 analysis fields (including temperature, pressure, and ozone concentration) when creating the model atmosphere, by introducing flexible altitude grids, flexible atmospheric specification along the LOS, and improved treatment of the radiative transfer within each atmospheric layer. As a result, the effect of LOS inhomogeneity on the current (1D) OMPS LP retrieval algorithm can now be studied theoretically, using realistic 3D atmospheric profiles. This work also represents a step towards enabling OMPS LP data to be ingested as part of future data assimilation efforts.
Compensators: An alternative IMRT delivery technique
Chang, Sha X.; Cullip, Timothy J.; Deschesne, Katharin M.; Miller, Elizabeth P.; Rosenman, Julian G.
2004-01-01
Seven years of experience in the clinical implementation of compensator intensity-modulated radiotherapy (IMRT) is presented. An inverse planning dose optimization algorithm was used to generate intensity modulation maps, which were delivered via either the compensator or segmental multileaf collimator (MLC) IMRT techniques. The in-house developed compensator-IMRT technique is presented with a focus on several design issues. The dosimetry of the delivery techniques was analyzed for several clinical cases. The treatment time for both delivery techniques on Siemens accelerators was retrospectively analyzed, based on the electronic treatment records in LANTIS, for 95 patients. We found that the compensator technique consistently took noticeably less treatment time than the segmental technique for equal numbers of fields. The typical time needed to fabricate a compensator was 13 min, 3 min of which was manual processing. More than 80% of the approximately 700 compensators evaluated had a maximum deviation in intensity profile of less than 5% from the calculation. Seventy-two percent of the patient treatment dosimetry measurements for 340 patients had an error of no more than 5%. The pros and cons of different IMRT compensator materials are also discussed. Our experience shows that the compensator-IMRT technique offers robustness, excellent intensity modulation resolution, high treatment delivery efficiency, simple fabrication and quality assurance (QA) procedures, and the flexibility to be used in any teletherapy unit. PACS numbers: 87.53Mr, 87.53Tf PMID:15753937
Image compression using quad-tree coding with morphological dilation
NASA Astrophysics Data System (ADS)
Wu, Jiaji; Jiang, Weiwei; Jiao, Licheng; Wang, Lei
2007-11-01
In this paper, we propose a new algorithm that integrates a morphological dilation operation into quad-tree coding, so that each technique compensates for the other's drawbacks. The new algorithm can not only quickly find the significant seed coefficient for dilation but also overcome the block-boundary limitation of quad-tree coding. We also make full use of both within-subband and cross-subband correlation to avoid the expensive cost of representing insignificant coefficients. Experimental results show that our algorithm outperforms SPECK and SPIHT. Without using any arithmetic coding, our algorithm achieves good performance at low computational cost, and it is well suited to mobile devices and scenarios with strict real-time requirements.
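The two complementary operations — quad-tree descent to a significant seed coefficient and morphological dilation that grows the significant cluster across block boundaries — can be sketched on a magnitude map as follows. This is a toy illustration of the two mechanisms, not the codec itself:

```python
import numpy as np

def quadtree_seed(mag, thr, r0=0, c0=0):
    """Quad-tree descent: recursively split the first quadrant whose
    maximum exceeds thr until a single significant coefficient (seed)."""
    h, w = mag.shape
    if h == 1 and w == 1:
        return (r0, c0) if mag[0, 0] >= thr else None
    hh, hw = max(h // 2, 1), max(w // 2, 1)
    for rs, cs in [(0, 0), (0, hw), (hh, 0), (hh, hw)]:
        blk = mag[rs:rs + hh, cs:cs + hw]
        if blk.size and blk.max() >= thr:
            return quadtree_seed(blk, thr, r0 + rs, c0 + cs)
    return None

def dilate_cluster(mag, seed, thr):
    """Grow the significant cluster from the seed by repeated
    8-neighbour dilation, ignoring quad-tree block boundaries."""
    h, w = mag.shape
    cluster, frontier = {seed}, [seed]
    while frontier:
        r, c = frontier.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                p = (r + dr, c + dc)
                if (0 <= p[0] < h and 0 <= p[1] < w
                        and p not in cluster and mag[p] >= thr):
                    cluster.add(p)
                    frontier.append(p)
    return cluster

mag = np.zeros((8, 8))
mag[2:5, 3:6] = [[9, 7, 0], [6, 8, 5], [0, 7, 6]]  # one coefficient cluster
seed = quadtree_seed(mag, thr=4)
print(seed, sorted(dilate_cluster(mag, seed, thr=4)))
```

The quad-tree reaches the seed in O(log n) splits, and the dilation then captures the whole cluster even where it straddles quadrant boundaries.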
Fault Location Based on Synchronized Measurements: A Comprehensive Survey
Al-Mohammed, A. H.; Abido, M. A.
2014-01-01
This paper presents a comprehensive survey of transmission and distribution fault location algorithms that utilize synchronized measurements. Algorithms based on two-end synchronized measurements and fault location algorithms for three-terminal and multiterminal lines are reviewed. Series capacitors equipped with metal oxide varistors (MOVs), when installed on a transmission line, create certain problems for line fault locators, and therefore fault location on series-compensated lines is discussed. The paper reports work carried out on adaptive fault location algorithms aiming at better fault location accuracy. Work on fault location in power system networks, although limited, is also summarized. Additionally, nonstandard high-frequency fault location techniques based on the wavelet transform are discussed. Finally, the paper highlights areas for future research. PMID:24701191
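For the two-end synchronized case the survey covers, the classic short-line estimator reduces to a single formula. This sketch assumes ideal GPS-synchronized phasors, a lumped series impedance, and no shunt capacitance; it is an illustration of the principle, not any specific algorithm from the survey:

```python
def fault_distance(Vs, Is, Vr, Ir, Z):
    """Per-unit fault distance m from the sending end S, short-line model.
    The fault-point voltage seen from both ends must agree:
        Vs - m*Z*Is = Vr - (1 - m)*Z*Ir
    which solves to the expression below. All inputs are synchronized
    complex phasors; Z is the total series impedance of the line."""
    m = (Vs - Vr + Z * Ir) / (Z * (Is + Ir))
    return m.real
```

Because both terminals contribute measurements, the estimate needs no assumption about the fault resistance, which is the main advantage of two-end methods over one-end impedance methods.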
Image-classification-based global dimming algorithm for LED backlights in LCDs
NASA Astrophysics Data System (ADS)
Qibin, Feng; Huijie, He; Dong, Han; Lei, Zhang; Guoqiang, Lv
2015-07-01
Backlight dimming can help LCDs reduce power consumption and improve the contrast ratio (CR). With fixed parameters, a dimming algorithm cannot achieve satisfactory results for all kinds of images. This paper introduces an image-classification-based global dimming algorithm. The proposed classification method, designed specifically for backlight dimming, is based on the luminance and CR of the input images. The parameters for the backlight dimming level and pixel compensation adapt to the image classification. Simulation results show that the classification-based dimming algorithm delivers an 86.13% improvement in power reduction compared with dimming without classification, with almost the same display quality. A prototype was developed, and no distortions are perceived when playing videos. The practical average power reduction of the prototype TV is 18.72% compared with a common TV without dimming.
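A minimal sketch of the idea (the class thresholds and per-class dimming levels below are invented placeholders, not the paper's values): classify the frame by luminance and CR, pick a global backlight level for that class, and compensate pixel values by the inverse of the level:

```python
import numpy as np

def classify(img):
    """Crude image classes from mean luminance and contrast ratio.
    The thresholds are assumed placeholders, not the paper's."""
    mean = img.mean()
    cr = (img.max() + 1.0) / (img.min() + 1.0)
    if mean < 64 and cr > 8:
        return "dark-high-contrast"
    if mean < 64:
        return "dark"
    return "bright"

# Assumed per-class backlight levels in [0, 1]; a real design would tune these.
DIMMING = {"dark-high-contrast": 0.5, "dark": 0.4, "bright": 0.9}

def dim_and_compensate(img):
    """Pick a global backlight level for the image class, then scale pixel
    values by its inverse so perceived luminance is preserved (clipped at
    the panel's maximum, which is where dimming artifacts come from)."""
    b = DIMMING[classify(img)]
    out = np.clip(img / b, 0, 255)
    return b, out.astype(np.uint8)
```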
NOAA-NASA Coastal Zone Color Scanner reanalysis effort.
Gregg, Watson W; Conkright, Margarita E; O'Reilly, John E; Patt, Frederick S; Wang, Menghua H; Yoder, James A; Casey, Nancy W
2002-03-20
Satellite observations of global ocean chlorophyll span more than two decades. However, incompatibilities between processing algorithms prevent us from quantifying natural variability. We applied a comprehensive reanalysis to the Coastal Zone Color Scanner (CZCS) archive, called the National Oceanic and Atmospheric Administration and National Aeronautics and Space Administration (NOAA-NASA) CZCS reanalysis (NCR) effort. NCR consisted of (1) algorithm improvement (AI), where CZCS processing algorithms were improved with modernized atmospheric correction and bio-optical algorithms and (2) blending where in situ data were incorporated into the CZCS AI to minimize residual errors. Global spatial and seasonal patterns of NCR chlorophyll indicated remarkable correspondence with modern sensors, suggesting compatibility. The NCR permits quantitative analyses of interannual and interdecadal trends in global ocean chlorophyll.
Polarimeter Blind Deconvolution Using Image Diversity
2007-09-01
significant presence when imaging through turbulence and its ease of production in the laboratory. An innovative algorithm for detection and estimation... 1.2.2.2 Atmospheric Turbulence. Atmospheric turbulence spatially distorts the wavefront as light passes through it and causes blurring of images in an... intensity image. Various values of β are used in the experiments. The optimal β value varied with the input and the algorithm. The hybrid seemed to
Theoretical algorithms for satellite-derived sea surface temperatures
NASA Astrophysics Data System (ADS)
Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.
1989-03-01
Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
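Such linear SST algorithms have the general split-window form below, with an optional night-time variant using the 3.7 μm channel; the coefficients shown are placeholders, not the regression values derived in the paper from the transmission models:

```python
def sst_split_window(t11, t12, a0=-1.0, a1=1.0, a2=2.5):
    """Split-window SST (K) from 11 and 12 micron brightness temperatures.
    The channel difference corrects for water-vapour absorption; a0, a1, a2
    are placeholder coefficients, not the paper's fitted values."""
    return a0 + a1 * t11 + a2 * (t11 - t12)

def sst_triple_window(t37, t11, t12, b0=-1.0, b1=0.9, b2=0.1, b3=1.0):
    """Night-time triple-window form adding the 3.7 micron channel
    (again with placeholder coefficients). The low-noise 3.7 micron channel
    is what the abstract identifies as giving the best satellite-derived SST."""
    return b0 + b1 * t11 + b2 * t12 + b3 * (t37 - t12)
```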
Ting, Lai-Lei; Chuang, Ho-Chiao; Liao, Ai-Ho; Kuo, Chia-Chun; Yu, Hsiao-Wei; Zhou, Yi-Liang; Tien, Der-Chi; Jeng, Shiu-Chen; Chiou, Jeng-Fong
2018-05-01
This study proposed a respiratory motion compensation system (RMCS) combined with an ultrasound image tracking algorithm (UITA) to compensate for respiration-induced tumor motion during radiotherapy and to address the inaccurate radiation dose delivery caused by respiratory movement. An ultrasound imaging system was used to monitor respiratory movements, combined with the proposed UITA and RMCS for tracking and compensating the respiratory motion. Respiratory motion compensation was performed using prerecorded human respiratory motion signals as well as sinusoidal signals. A linear accelerator was used to deliver radiation doses to GAFchromic EBT3 dosimetry film, and the conformity index (CI), root-mean-square error, compensation rate (CR), and planning target volume (PTV) were used to evaluate the tracking and compensation performance of the proposed system. Human respiratory pattern signals were captured using the UITA and compensated by the RMCS, which yielded CR values of 34-78%. In addition, the maximum coronal area of the PTV ranged from 85.53 mm² to 351.11 mm² (uncompensated), which was reduced to 17.72-66.17 mm² after compensation, an area reduction ratio of up to 90%. In real-time monitoring of the respiration compensation state, the CI values for the 85% and 90% isodose areas increased to 0.7 and 0.68, respectively. The proposed UITA and RMCS can reduce the movement of the tracked target relative to the LINAC in radiation therapy, thereby reducing the required PTV margin and increasing the effect of the radiation dose received by the treatment target. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Curt, Petersen F.; Bodnar, Michael R.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.
2009-02-01
While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.
[Carbonyl compounds emission and uptake by plant: Research progress].
Li, Jian; Cai, Jing; Yan, Liu-Shui; Li, Ling-Na; Tao, Min
2013-02-01
This paper reviewed research on the emission and uptake of carbonyl compounds by plants, and discussed the compensation point of the bidirectional exchange of carbonyl compounds between plants and the atmosphere. Uptake through leaf stomata and the cuticle is the principal pathway by which plants purify air of aldehydes. After entering plant leaves, most carbonyl compounds can be metabolized by endoenzymes into organic acids, carbohydrates, amino acids, carbon dioxide, etc. The direction of exchange of carbonyl compounds between plants and the atmosphere can be preliminarily predicted from the compensation point and the ambient carbonyl compound concentrations. The paper also summarized analytical methods such as DNPH/HPLC/UV and PFPH/GC/MS used to determine carbonyl compounds emitted from plants or present in plant leaves. Future research interests were pointed out, e.g., improving and optimizing the analytical methods for determining carbonyl compounds emitted from plants and extending the research to whole systems (e.g., the plant-soil system), enlarging the number of carbonyl compound species detected from plants, screening plant species that can effectively metabolize the pollutants, and popularizing phytoremediation techniques for atmospheric
Li, Shaobai; Wang, Yun; Wang, Qi; Ma, Xianxian; Wang, Longxiao; Zhao, Weiqian; Zhang, Xusheng
2018-05-10
In this paper, we propose a new measurement and compensation method for the eccentricity of the inertial confinement fusion (ICF) capsule, which combines computer vision and the laser differential confocal method to align the capsule in rotation measurement. The technique measures the eccentricity of the capsule by obtaining the sub-pixel profile with a moment-based algorithm, then performs a preliminary alignment with a two-dimensional adjustment. Next, we use the laser differential confocal sensor to measure the height data of the equatorial surface of the capsule as it is rotated, and ultimately obtain and compensate the remaining eccentricity. This is a non-contact, automatic, rapid, high-precision eccentricity measurement and compensation technique for the capsule. Theoretical analyses and preliminary experiments indicate that the maximum measurement range of eccentricity of the proposed method is 1.8 mm for a capsule with a diameter of 1 mm, and that it can reduce the eccentricity to less than 0.5 μm in 30 s.
NASA Astrophysics Data System (ADS)
Han, Ke-Zhen; Feng, Jian; Cui, Xiaohong
2017-10-01
This paper considers the fault-tolerant optimised tracking control (FTOTC) problem for unknown discrete-time linear systems. A research scheme is proposed on the basis of data-based parity space identification, reinforcement learning, and residual compensation techniques. The main characteristic of this scheme lies in parity-space-identification-based simultaneous tracking control and residual compensation. The technical approach consists of four main components: a subspace-aided method to design an observer-based residual generator; a reinforcement Q-learning approach to solve the optimised tracking control policy; robust H∞ theory to achieve noise attenuation; and fault estimation triggered by the residual generator to perform fault compensation. To clarify the design and implementation procedures, an integrated algorithm is constructed to link these four functional units. Detailed analysis and proofs are subsequently given to establish the guaranteed FTOTC performance of the proposed scheme. Finally, a case simulation is provided to verify its effectiveness.
An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST
NASA Astrophysics Data System (ADS)
Hang, Xu; Jun, Zhao
2018-05-01
Adaptive angle-Doppler compensation methods extract the requisite information adaptively from the data itself, thus avoiding the performance degradation caused by inertial system errors. However, such methods require estimation and eigendecomposition of the sample covariance matrix, which has high computational complexity and limits real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the position of the spectral center of the maximum eigenvector of each range cell, avoiding the computational burden of covariance matrix estimation and eigendecomposition; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show that the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar clutter; its performance is similar to that of eigendecomposition algorithms, but the computational load is markedly reduced and the method is easy to realize.
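The core PAST recursion (Yang's projection approximation subspace tracking, on which the method builds) is a cheap rank-one update per snapshot; this generic sketch shows the subspace tracker itself, not the paper's full angle-Doppler compensation chain:

```python
import numpy as np

def past_update(W, P, x, beta=0.97):
    """One PAST iteration. W: n x r current subspace estimate; P: r x r
    inverse correlation matrix; x: new data snapshot; beta: forgetting
    factor. Returns the updated (W, P). No covariance matrix is formed and
    no eigendecomposition is performed, which is the point of PAST."""
    y = W.conj().T @ x
    h = P @ y
    g = h / (beta + y.conj().T @ h)
    P = (P - np.outer(g, h.conj())) / beta   # rank-one RLS-style update
    e = x - W @ y                            # projection-approximation residual
    W = W + np.outer(e, g.conj())
    return W, P
```

Fed successive snapshots, W converges to (a basis of) the dominant signal subspace, from which quantities such as the maximum eigenvector's spectral center can be estimated per range cell.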
Linear time-invariant controller design for two-channel decentralized control systems
NASA Technical Reports Server (NTRS)
Desoer, Charles A.; Gundes, A. Nazli
1987-01-01
This paper analyzes a linear time-invariant two-channel decentralized control system with a 2 x 2 strictly proper plant. It presents an algorithm for the algebraic design of a class of decentralized compensators which stabilize the given plant.
Force-sensed interface for control and training space robot
NASA Astrophysics Data System (ADS)
Moiseev, O. S.; Sarsadskikh, A. S.; Povalyaev, N. D.; Gorbunov, V. I.; Kulakov, F. M.; Vasilev, V. V.
2018-05-01
A method of positional and force-torque control of robots is proposed. Prototypes of the system and the master handle have been created. Algorithms for bias estimation and gravity compensation of the force-torque sensor, and for force-torque trajectory correction, are described.
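A common form of such bias estimation and gravity compensation can be sketched as follows (the frame conventions, function names, and the averaging shortcut for the calibration are assumptions, not the authors' exact formulation):

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity acceleration in the world frame

def compensate(f_meas, R_ws, mass, bias):
    """Remove the static sensor bias and the tool's weight from a force
    reading. f_meas: force in the sensor frame; R_ws rotates world vectors
    into the sensor frame (known from the arm's joint angles); mass: tool
    mass in kg. What remains is the external contact force."""
    return f_meas - bias - R_ws @ (mass * G)

def estimate_bias(readings, rotations, mass):
    """Bias estimate from several static, contact-free poses: in each pose
    the reading is bias + gravity in the sensor frame, so averaging the
    gravity-corrected readings recovers the bias (a shortcut for the usual
    least-squares calibration)."""
    return np.mean([f - R @ (mass * G) for f, R in zip(readings, rotations)],
                   axis=0)
```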
High Performance Compression of Science Data
NASA Technical Reports Server (NTRS)
Storer, James A.; Carpentieri, Bruno; Cohn, Martin
1994-01-01
Two papers make up the body of this report. One presents a single-pass adaptive vector quantization algorithm that learns a codebook of variable size and shape entries; the authors present experiments on a set of test images showing that with no training or prior knowledge of the data, for a given fidelity, the compression achieved typically equals or exceeds that of the JPEG standard. The second paper addresses motion compensation, one of the most effective techniques used in interframe data compression. A parallel block-matching algorithm for estimating interframe displacement of blocks with minimum error is presented. The algorithm is designed for a simple parallel architecture to process video in real time.
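The block-matching step in the second paper can be sketched as an exhaustive minimum-error search per block; because each block's search is independent, the blocks map naturally onto parallel processing elements. This is a generic SAD-based sketch, not the paper's specific parallel architecture:

```python
import numpy as np

def best_match(ref, cur, by, bx, bs=8, search=4):
    """Exhaustive block matching: find the displacement (dy, dx) of the
    bs x bs block at (by, bx) in `cur` that minimizes the sum of absolute
    differences (SAD) against the reference frame `ref`, within a
    +/- search window. Each block is independent of the others, which is
    what makes the search embarrassingly parallel."""
    block = cur[by:by + bs, bx:bx + bs]
    best, best_sad = (0, 0), np.inf
    H, W = ref.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and y + bs <= H and 0 <= x and x + bs <= W:
                sad = np.abs(ref[y:y + bs, x:x + bs] - block).sum()
                if sad < best_sad:
                    best_sad, best = sad, (dy, dx)
    return best, best_sad
```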
Fuzzy logic applications to control engineering
NASA Astrophysics Data System (ADS)
Langari, Reza
1993-12-01
This paper presents the results of a project presently under way at Texas A&M which focuses on the use of fuzzy logic in integrated control of manufacturing systems. The specific problems investigated here include diagnosis of critical tool wear in machining of metals via a neuro-fuzzy algorithm, as well as compensation of friction in mechanical positioning systems via an adaptive fuzzy logic algorithm. The results indicate that fuzzy logic in conjunction with conventional algorithmic based approaches or neural nets can prove useful in dealing with the intricacies of control/monitoring of manufacturing systems and can potentially play an active role in multi-modal integrated control systems of the future.
Blind deconvolution post-processing of images corrected by adaptive optics
NASA Astrophysics Data System (ADS)
Christou, Julian C.
1995-08-01
Experience with the adaptive optics system at the Starfire Optical Range has shown that the point spread function is non-uniform and varies both spatially and temporally, as well as being object dependent. Because of this, standard linear and non-linear deconvolution algorithms have difficulty deconvolving out the point spread function. In this paper we demonstrate the application of a blind deconvolution algorithm to adaptive-optics-compensated data, for which a separately measured point spread function is not needed.
NASA Technical Reports Server (NTRS)
Nisenson, P.; Papaliolios, C.
1983-01-01
An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate of the speckle average as a function of source brightness. An illustration of the effects of photon noise on the image recovery process is included.
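One concrete instance of such a bias correction, shown here as an assumed sketch for the power spectrum rather than the paper's full Knox-Thompson treatment: for photon-counting frames the raw average power spectrum is biased upward by the mean photon count per frame, and subtracting the measured count removes the bias:

```python
import numpy as np

def unbiased_power_spectrum(frames):
    """Average power spectrum of photon-counting frames with the photon-noise
    bias removed. For Poisson photon statistics, E[|FFT(frame)|^2] contains
    an additive bias equal to the mean photon count per frame; subtracting
    the measured mean count compensates it."""
    ps = np.zeros(frames[0].shape)
    nbar = 0.0
    for f in frames:
        ps += np.abs(np.fft.fft2(f)) ** 2
        nbar += f.sum()
    n = len(frames)
    return ps / n - nbar / n
```

The single-photon case makes the bias term concrete: one detected photon yields |FFT|² = 1 at every frequency, all of which is bias, so the corrected spectrum is zero.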
In-vivo study of blood flow in capillaries using μPIV method
NASA Astrophysics Data System (ADS)
Kurochkin, Maxim A.; Fedosov, Ivan V.; Tuchin, Valery V.
2014-01-01
A digital optical system for intravital capillaroscopy has been developed. It implements a particle image velocimetry (PIV) based approach for measuring red blood cell velocity in an individual capillary of the human nailfold. We propose a digital real-time stabilization technique to compensate for the impact of involuntary finger movements on the measurement results. The image stabilization algorithm is based on correlation-based feature tracking. The efficiency of the designed image stabilization algorithm was demonstrated experimentally.
Latest experience in design of piezoelectric-driven fine-steering mirrors
NASA Astrophysics Data System (ADS)
Marth, Harry; Donat, Michael; Pohlhammer, Charles F.
1992-01-01
The European Southern Observatory (ESO) requested Physik Instrumente (PI) to develop a system to compensate for atmospherically induced image jitter in astronomical telescopes. The product, designated S-380 by PI, is a sophisticated adaptive optic system using closed loop piezoelectric actuators and momentum compensation to significantly improve telescope resolution during long integrations by correcting for image jitter in real time. Optimizing the design of this system involved solving several interdependent problems, including: (1) selection of the motion system, (2) arrangement of the pivot points and actuators, (3) momentum compensation, and (4) selection of the sensor system. This paper presents the trade-offs leading to the final design of the S-380 system, some supporting technical analysis, and ongoing efforts at PI to provide fast tilting platforms for larger mirrors.
Tang, Bo-Hui; Wu, Hua-; Li, Zhao-Liang; Nerry, Françoise
2012-07-30
This work addressed the validation of the MODIS-derived bidirectional reflectivity retrieval algorithm in the mid-infrared (MIR) channel, proposed by Tang and Li [Int. J. Remote Sens. 29, 4907 (2008)], with ground-measured data collected from a field campaign that took place in June 2004 at the ONERA (Office National d'Etudes et de Recherches Aérospatiales) center of Fauga-Mauzac, on the PIRRENE (Programme Interdisciplinaire de Recherche sur la Radiométrie en Environnement Extérieur) experiment site [Opt. Express 15, 12464 (2007)]. The leaving-surface spectral radiances measured by a BOMEM (MR250 Series) Fourier transform interferometer were used to calculate the ground brightness temperatures by combining the inversion of the Planck function with the spectral response functions of MODIS channels 22 and 23, and then to estimate the ground brightness temperature without the contribution of the solar direct beam and the bidirectional reflectivity by using Tang and Li's proposed algorithm. In parallel, the simultaneously measured atmospheric profiles were used to obtain the atmospheric parameters and then to calculate the ground brightness temperature without the contribution of the solar direct beam, based on the atmospheric radiative transfer equation in the MIR region. Comparison of the brightness temperatures obtained by these two methods indicated that the root mean square error (RMSE) between the estimates from Tang and Li's algorithm and from the atmospheric radiative transfer equation is 1.94 K. In addition, comparison of the hemispherical-directional reflectances derived by Tang and Li's algorithm with those obtained from the field measurements showed an RMSE of 0.011, which indicates that Tang and Li's algorithm is feasible for retrieving the bidirectional reflectivity in the MIR channel from MODIS data.
Seismic random noise removal by delay-compensation time-frequency peak filtering
NASA Astrophysics Data System (ADS)
Yu, Pengjun; Li, Yue; Lin, Hongbo; Wu, Ning
2017-06-01
Over the past decade, there has been increasing awareness of time-frequency peak filtering (TFPF) due to its outstanding performance in suppressing non-stationary and strong seismic random noise. The traditional approach, based on time-windowing, achieves local linearity and meets the condition for unbiased estimation. However, traditional TFPF (including improved algorithms with alterable window lengths) can hardly resolve the contradiction between removing noise and recovering the seismic signal; this is most obvious at wave crests and troughs, even with alterable window lengths (WL). To improve the efficiency of the algorithm, TFPF has subsequently been applied in the time-space domain, for example in the Radon domain and the radial trace domain. The time-space transforms provide a reduced-frequency input that lowers the TFPF error and stretch the desired signal along a certain direction, so these developments both enhance reflection events and attenuate noise. The approach nevertheless remains limited in application, because the direction must be matched by a straight line or a quadratic curve; as a result, waveform distortion and false seismic events may appear when processing records from complex strata. The main emphasis in this article is placed on extending the applicability of time-space TFPF. The reconstructed signal in delay-compensation TFPF, generated according to the similarity among reflection events, overcomes the limitation of direction-curve fitting. Moreover, the reconstructed signal meets the TFPF linearity condition for unbiased estimation and integrates signal preservation with noise attenuation. Experiments on both a synthetic model and field data indicate that delay-compensation TFPF outperforms conventional filtering algorithms.
NASA Technical Reports Server (NTRS)
Lyapustin, A.; Wang, Y.; Laszlo, I.; Hilker, T.; Hall, F.; Sellers, P.; Tucker, J.; Korkin, S.
2012-01-01
This paper describes the atmospheric correction (AC) component of the Multi-Angle Implementation of Atmospheric Correction algorithm (MAIAC) which introduces a new way to compute parameters of the Ross-Thick Li-Sparse (RTLS) Bi-directional reflectance distribution function (BRDF), spectral surface albedo and bidirectional reflectance factors (BRF) from satellite measurements obtained by the Moderate Resolution Imaging Spectroradiometer (MODIS). MAIAC uses a time series and spatial analysis for cloud detection, aerosol retrievals and atmospheric correction. It implements a moving window of up to 16 days of MODIS data gridded to 1 km resolution in a selected projection. The RTLS parameters are computed directly by fitting the cloud-free MODIS top of atmosphere (TOA) reflectance data stored in the processing queue. The RTLS retrieval is applied when the land surface is stable or changes slowly. In case of rapid or large magnitude change (as for instance caused by disturbance), MAIAC follows the MODIS operational BRDF/albedo algorithm and uses a scaling approach where the BRDF shape is assumed stable but its magnitude is adjusted based on the latest single measurement. To assess the stability of the surface, MAIAC features a change detection algorithm which analyzes relative change of reflectance in the Red and NIR bands during the accumulation period. To adjust for the reflectance variability with the sun-observer geometry and allow comparison among different days (view geometries), the BRFs are normalized to the fixed view geometry using the RTLS model. An empirical analysis of MODIS data suggests that the RTLS inversion remains robust when the relative change of geometry-normalized reflectance stays below 15%. This first of two papers introduces the algorithm, a second, companion paper illustrates its potential by analyzing MODIS data over a tropical rainforest and assessing errors and uncertainties of MAIAC compared to conventional MODIS products.
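Once the Ross-Thick and Li-Sparse kernel values are computed for each sun-view geometry, the RTLS fit itself is a linear least-squares problem. A minimal sketch (the kernel computation is omitted and assumed done elsewhere; this is the model form, not MAIAC's full queue-based implementation):

```python
import numpy as np

def fit_rtls(rho, k_vol, k_geo):
    """Least-squares fit of the RTLS kernel weights (f_iso, f_vol, f_geo).
    rho: cloud-free surface reflectances accumulated over the sliding queue
    for one pixel; k_vol, k_geo: Ross-Thick and Li-Sparse kernel values at
    the corresponding sun-view geometries (assumed precomputed).
    Model: rho = f_iso + f_vol * k_vol + f_geo * k_geo."""
    A = np.column_stack([np.ones_like(rho), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(A, rho, rcond=None)
    return coeffs
```

With the fitted weights, the BRF at any fixed view geometry follows from evaluating the same linear model, which is how observations from different days can be normalized to a common geometry.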
Ocean and atmosphere feedbacks affecting AMOC hysteresis in a GCM
NASA Astrophysics Data System (ADS)
Jackson, L. C.; Smith, R. S.; Wood, R. A.
2017-07-01
Theories suggest that the Atlantic Meridional Overturning Circulation (AMOC) can exhibit a hysteresis where, for a given input of fresh water into the north Atlantic, there are two possible states: one with a strong overturning in the north Atlantic (on) and the other with a reverse Atlantic cell (off). A previous study showed hysteresis of the AMOC for the first time in a coupled general circulation model (Hawkins et al. in Geophys Res Lett. doi: 10.1029/2011GL047208, 2011). In this study we show that the hysteresis found by Hawkins et al. (2011) is sensitive to the method with which the fresh water input is compensated. If this compensation is applied throughout the volume of the global ocean, rather than at the surface, the region of hysteresis is narrower and the off states are very different: when the compensation is applied at the surface, a strong Pacific overturning cell and a strong Atlantic reverse cell develops; when the compensation is applied throughout the volume there is little change in the Pacific and only a weak Atlantic reverse cell develops. We investigate the mechanisms behind the transitions between the on and off states in the two experiments, and find that the difference in hysteresis is due to the different off states. We find that the development of the Pacific overturning cell results in greater atmospheric moisture transport into the North Atlantic, and also is likely responsible for a stronger Atlantic reverse cell. These both act to stabilize the off state of the Atlantic overturning.
Wang, Menghua; Shi, Wei; Jiang, Lide
2012-01-16
A regional near-infrared (NIR) ocean normalized water-leaving radiance (nLw(λ)) model is proposed for atmospheric correction for ocean color data processing in the western Pacific region, including the Bohai Sea, Yellow Sea, and East China Sea. Our motivation for this work is to derive ocean color products in the highly turbid western Pacific region using the Geostationary Ocean Color Imager (GOCI) onboard the South Korean Communication, Ocean, and Meteorological Satellite (COMS). GOCI has eight spectral bands from 412 to 865 nm but does not have the shortwave infrared (SWIR) bands that are needed for satellite ocean color remote sensing in turbid ocean regions. Based on a regional empirical relationship between the NIR nLw(λ) and the diffuse attenuation coefficient at 490 nm (Kd(490)), which is derived from long-term measurements with the Moderate-resolution Imaging Spectroradiometer (MODIS) on the Aqua satellite, an iterative scheme with the NIR-based atmospheric correction algorithm has been developed. Results from MODIS-Aqua measurements show that ocean color products in the region derived from the proposed NIR-corrected atmospheric correction algorithm match well with those from the SWIR atmospheric correction algorithm. Thus, the proposed atmospheric correction method provides an alternative for ocean color data processing for GOCI (and other ocean color satellite sensors without SWIR bands) in the turbid ocean regions of the Bohai Sea, Yellow Sea, and East China Sea, although the SWIR-based atmospheric correction approach is still much preferred. The proposed atmospheric correction methodology can also be applied to other turbid coastal regions.
Determination of boundary layer top on the basis of the characteristics of atmospheric particles
NASA Astrophysics Data System (ADS)
Liu, Boming; Ma, Yingying; Gong, Wei; Zhang, Ming; Yang, Jian
2018-04-01
The planetary boundary layer (PBL) is the lowest layer of the atmosphere and is directly influenced by the Earth's surface; it also responds to surface forcing. Determining the PBL top is significant for environmental and climate research, and the PBL height can serve as an input parameter for further data processing with atmospheric models. Traditional detection algorithms are susceptible to errors associated with the vertical distribution of aerosol concentrations. To overcome this limitation, a maximum difference search (MDS) algorithm is proposed to calculate the top of the boundary layer based on differences in particle characteristics. The PBL tops obtained by the MDS algorithm under different convection states were compared with those from conventional methods. Experimental results demonstrated that the MDS method can determine the top of the boundary layer precisely. The proposed algorithm can also calculate the PBL top accurately under weak convection conditions, where the traditional methods cannot be applied. Finally, experimental data from June 2015 to December 2015 were analysed to verify the reliability of the MDS algorithm. The correlation coefficients R² (RMSE) between the results of the MDS algorithm and radiosonde measurements were 0.53 (115 m), 0.79 (141 m) and 0.96 (43 m) under weak, moderate and strong convection, respectively. These findings indicate that the proposed method possesses good feasibility and stability.
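Reduced to its simplest reading (a real implementation would smooth the profile and handle the convection-dependent cases the paper describes), a maximum difference search places the PBL top at the level where the particle signal drops fastest:

```python
import numpy as np

def mds_pbl_top(heights, profile):
    """Maximum difference search in its simplest form: the PBL top is taken
    where the particle signal decreases fastest between adjacent levels.
    heights and profile are matched 1-D arrays, lowest level first."""
    diff = np.diff(profile)       # negative where the signal drops with height
    i = int(np.argmin(diff))      # index of the largest drop
    return 0.5 * (heights[i] + heights[i + 1])
```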
Accelerated simulation of stochastic particle removal processes in particle-resolved aerosol models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, J.H.; Michelotti, M.D.; Riemer, N.
2016-10-01
Stochastic particle-resolved methods have proven useful for simulating multi-dimensional systems such as composition-resolved aerosol size distributions. While particle-resolved methods have substantial benefits for highly detailed simulations, these techniques suffer from high computational cost, motivating efforts to improve their algorithmic efficiency. Here we formulate an algorithm for accelerating particle removal processes by aggregating particles of similar size into bins. We present the Binned Algorithm for particle removal processes and analyze its performance with application to the atmospherically relevant process of aerosol dry deposition. We show that the Binned Algorithm can dramatically improve the efficiency of particle removals, particularly for low removal rates, and that computational cost is reduced without introducing additional error. In simulations of aerosol particle removal by dry deposition under atmospherically relevant conditions, we demonstrate about a 50-fold increase in algorithm efficiency.
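The binning idea can be sketched as bound-and-thin rejection sampling: bound each bin's removal probability by its fastest member, draw candidates against the bound, and thin them so the exact per-particle probability is preserved. The rate model and bin count below are placeholders, and the paper's Binned Algorithm is more elaborate than this sketch:

```python
import numpy as np

def binned_removal_step(diameters, rate_fn, dt, n_bins=8, rng=None):
    """One accelerated removal step over a particle population. Particles are
    grouped into size bins; per bin, candidates are drawn binomially against
    the bin's maximum removal probability, and each candidate is accepted
    with p_i / p_bound so the exact probability 1 - exp(-rate*dt) is kept.
    rate_fn stands in for a dry-deposition rate model; for low rates very
    few candidates are drawn, which is where the speedup comes from."""
    if rng is None:
        rng = np.random.default_rng()
    edges = np.linspace(diameters.min(), diameters.max() * (1 + 1e-12),
                        n_bins + 1)
    keep = np.ones(diameters.size, dtype=bool)
    for b in range(n_bins):
        idx = np.where((diameters >= edges[b]) & (diameters < edges[b + 1]))[0]
        if idx.size == 0:
            continue
        p_bound = 1.0 - np.exp(-rate_fn(diameters[idx]).max() * dt)
        candidates = idx[rng.random(idx.size) < p_bound]
        for i in candidates:
            p_i = 1.0 - np.exp(-rate_fn(diameters[i]) * dt)
            if rng.random() < p_i / p_bound:   # thinning keeps exactness
                keep[i] = False
    return keep
```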
Towards frameless maskless SRS through real-time 6DoF robotic motion compensation.
Belcher, Andrew H; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D
2017-11-13
Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient's skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system's effectiveness in maintaining the target's 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system's effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. 
The system's success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS treatments, potentially able to achieve the same or better treatment accuracies compared to traditional frame-based approaches.
Adaptive piezoelectric sensoriactuator
NASA Technical Reports Server (NTRS)
Clark, Jr., Robert L. (Inventor); Vipperman, Jeffrey S. (Inventor); Cole, Daniel G. (Inventor)
1996-01-01
An adaptive algorithm implemented in digital or analog form is used in conjunction with a voltage-controlled amplifier to compensate for the feedthrough capacitance of a piezoelectric sensoriactuator. The mechanical response of the piezoelectric sensoriactuator is resolved from the electrical response by adaptively altering the gain imposed on the electrical circuit used for compensation. For wideband, stochastic input disturbances, the feedthrough capacitance of the sensoriactuator can be identified on-line, providing a means of implementing direct-rate-feedback control in analog hardware. The device is capable of on-line system health monitoring, since a quasi-stable dynamic capacitance is indicative of sustained health of the piezoelectric element.
Wavelength scanning digital interference holography for high-resolution ophthalmic imaging
NASA Astrophysics Data System (ADS)
Potcoava, Mariana C.; Kim, M. K.; Kay, Christine N.
2009-02-01
An improved digital interference holography (DIH) technique suitable for fundus imaging is proposed. The technique incorporates a dispersion compensation algorithm to compensate for the unknown axial length of the eye. Using this instrument, we successfully acquired tomographic fundus images in the human eye with an axial resolution better than 5 μm. The optic nerve head, together with the surrounding retinal vasculature, was reconstructed. We were able to quantify a depth of 84 μm between the retinal fiber and the retinal pigmented epithelium layers. DIH provides high-resolution 3D information that could potentially aid in guiding glaucoma diagnosis and treatment.
Consideration of computer limitations in implementing on-line controls. M.S. Thesis
NASA Technical Reports Server (NTRS)
Roberts, G. K.
1976-01-01
A formal statement of the optimal control problem is formulated that includes the discretization interval as an optimization parameter, and this is extended to include selection of a control algorithm as part of the optimization procedure. The performance of a scalar linear system is shown to depend on the discretization interval. Discrete-time versions of the output feedback regulator and an optimal compensator are developed, and these results are used to present an example of a system for which fast partial-state-feedback control minimizes a quadratic cost better than either full-state-feedback control or a compensator.
A Universal De-Noising Algorithm for Ground-Based LIDAR Signal
NASA Astrophysics Data System (ADS)
Ma, Xin; Xiang, Chengzhi; Gong, Wei
2016-06-01
Ground-based lidar, working as an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide vertical atmospheric profiles. However, noise in a lidar signal is unavoidable, which leads to difficulties and complexities when searching for more information. Every de-noising method has its own characteristics and limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm based on signal segmentation and reconstruction is proposed to enhance the SNR of a ground-based lidar signal. Signal segmentation, the keystone of the algorithm, divides the lidar signal into three parts, which are processed by different de-noising methods according to their own characteristics. Signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of tests on simulated signals and on real dual field-of-view lidar signals shows the feasibility of the universal de-noising algorithm.
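The segment-and-splice structure can be sketched as below. The abstract does not specify the per-segment de-noising methods, so simple moving averages of growing width stand in for them here; the three-way split and the end-to-end splice are the points being illustrated.

```python
def denoise_segmented(signal, b1, b2):
    """Split a lidar return into near/middle/far range segments,
    de-noise each with a method matched to its SNR (placeholder:
    moving averages whose width grows with range, since the far,
    noisier part of the return tolerates heavier smoothing), then
    splice the sections end to end."""
    def movavg(x, w):
        half = w // 2
        return [sum(x[max(0, i - half):i + half + 1])
                / len(x[max(0, i - half):i + half + 1])
                for i in range(len(x))]
    near, mid, far = signal[:b1], signal[b1:b2], signal[b2:]
    return movavg(near, 3) + movavg(mid, 7) + movavg(far, 15)
```

The splice is trivial here; a real implementation would also blend the segment boundaries to avoid steps where the smoothing width changes.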
Land, P E; Haigh, J D
1997-12-20
In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Ångström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type, and to spectral effects actually caused by the water contents being attributed to aerosol spectral variations. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to the aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot), rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 sea-viewing wide field-of-view sensor data, although the retrieval of relative humidity was not successful.
NASA Technical Reports Server (NTRS)
Ramanathan, V.; Callis, L. B.; Boughner, R. E.
1976-01-01
A radiative-convective model is proposed for estimating the sensitivity of the atmospheric radiative heating rates and atmospheric and surface temperatures to perturbations in the concentration of O3 and NO2 in the stratosphere. Contributions to radiative energy transfer within the atmosphere from H2O, CO2, O3, and NO2 are considered. It is found that the net solar radiation absorbed by the earth-atmosphere system decreases with a reduction in O3; if the reduction of O3 is accompanied by an increase in NO2, there is a compensating effect due to solar absorption by NO2. The surface temperature and atmospheric temperature decrease with decreasing stratospheric O3. Another major conclusion is the strong sensitivity of surface temperature to the vertical distribution of O3 within the atmosphere. The results should be considered as reflecting the sensitivity of the proposed model rather than the sensitivity of the actual earth-atmosphere system.
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
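The role of the preconditioner in cutting iteration counts can be illustrated generically. The sketch below uses a plain Jacobi (diagonal) preconditioner on a badly scaled test system, standing in for the paper's frequency-dependent wavelet preconditioner, which it does not reproduce.

```python
import numpy as np

def pcg(A, b, M_inv_diag, iters=500, tol=1e-10):
    """Preconditioned conjugate gradients with a diagonal
    preconditioner M^{-1} = diag(M_inv_diag)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1        # converged after k+1 iterations
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, iters

# badly scaled diagonal system: Jacobi preconditioning solves it in
# one step, while plain CG (identity preconditioner) needs many
d = np.logspace(0, 4, 100)
A, b = np.diag(d), np.ones(100)
x_pre, it_pre = pcg(A, b, 1.0 / d)
x_plain, it_plain = pcg(A, b, np.ones(100))
```

The contrast between `it_pre` and `it_plain` is the effect the paper exploits: a preconditioner adapted to the spectrum of the tomography operator keeps the iteration count low (three, in their MCAO simulations).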
Zhang, T; Gordon, H R
1997-04-20
We report a sensitivity analysis for the algorithm presented by Gordon and Zhang [Appl. Opt. 34, 5552 (1995)] for inverting the radiance exiting the top and bottom of the atmosphere to yield the aerosol-scattering phase function [P(θ)] and single-scattering albedo (ω0). The study of the algorithm's sensitivity to radiometric calibration errors, mean-zero instrument noise, sea-surface roughness, the curvature of the Earth's atmosphere, the polarization of the light field, and incorrect assumptions regarding the vertical structure of the atmosphere indicates that the retrieved ω0 has excellent stability even for very large values (~2) of the aerosol optical thickness; however, the error in the retrieved P(θ) strongly depends on the measurement error and on the assumptions made in the retrieval algorithm. The retrieved phase functions in the blue are usually poor compared with those in the near infrared.
Duan, Qian-Qian; Yang, Gen-Ke; Pan, Chang-Chun
2014-01-01
A hybrid optimization algorithm combining the finite state method (FSM) and a genetic algorithm (GA) is proposed to solve the crude oil scheduling problem. The FSM and GA are combined to take advantage of each method and compensate for the deficiencies of the individual methods. In the proposed algorithm, the finite state method makes up for the weakness of the GA, which has poor local search ability. The heuristic returned by the FSM can guide the GA towards good solutions. The idea behind this is that promising substructures or partial solutions can be generated by using the FSM. Furthermore, the FSM can guarantee that the entire solution space is uniformly covered. Therefore, the combination of the two algorithms has better global performance than the existing GA or FSM operated individually. Finally, a real-life crude oil scheduling problem from the literature is used for simulation. The experimental results validate that the proposed method outperforms the state-of-the-art GA method. PMID:24772031
Quantitative Robust Control Engineering: Theory and Applications
2006-09-01
[30]. Gutman, P. O., Baril, C., Neuman, L. (1994), An algorithm for computing value sets of uncertain transfer functions in factored real form... linear compensation design for saturating unstable uncertain plants. Int. J. Control, Vol. 44, pp. 1137-1146. [90]. Oldak, S., Baril, C. and Gutman
Software Cost-Estimation Model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1985-01-01
Software Cost Estimation Model SOFTCOST provides automated resource and schedule model for software development. Combines several cost models found in open literature into one comprehensive set of algorithms. Compensates for nearly fifty implementation factors relative to size of task, inherited baseline, organizational and system environment and difficulty of task.
Huang, Chung-Yuan; Wen, Tzai-Hung
2014-01-01
Immediate treatment with an automated external defibrillator (AED) increases out-of-hospital cardiac arrest (OHCA) patient survival potential. While considerable attention has been given to determining optimal public AED locations, spatial and temporal factors such as time of day and distance from emergency medical services (EMS) are understudied. Here we describe a geocomputational genetic algorithm with a new stirring operator (GANSO) that considers spatial and temporal cardiac arrest occurrence factors when assessing the feasibility of using Taipei 7-Eleven stores as installation locations for AEDs. Our model is based on two AED conveyance modes, walking/running and driving, involving service distances of 100 and 300 meters, respectively. Our results suggest different AED allocation strategies involving convenience stores in urban settings. In commercial areas, such installations can compensate for temporal gaps in EMS locations when responding to nighttime OHCA incidents. In residential areas, store installations can compensate for long distances from fire stations, where AEDs are currently held in Taipei.
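The underlying site-selection problem can be sketched with a plain genetic algorithm for maximum coverage. This is a generic GA sketch only: the paper's GANSO adds a "stirring" operator and weighs spatio-temporal EMS factors that are not modelled here, and all names below are illustrative.

```python
import random

def ga_coverage(candidates, incidents, radius, k, pop=30, gens=60, seed=0):
    """Choose k sites from candidate (x, y) locations to cover the
    most incident points within `radius`, via an elitist GA with
    union crossover and point mutation."""
    rng = random.Random(seed)

    def covered(sites):
        r2 = radius * radius
        return sum(1 for px, py in incidents
                   if any((px - candidates[s][0]) ** 2
                          + (py - candidates[s][1]) ** 2 <= r2
                          for s in sites))

    popl = [rng.sample(range(len(candidates)), k) for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=covered, reverse=True)
        popl = popl[:pop // 2]                  # elitist selection
        while len(popl) < pop:
            a, b = rng.sample(popl[:10], 2)
            genes = list(set(a) | set(b))       # union crossover
            rng.shuffle(genes)
            child = genes[:k]
            while len(child) < k:               # refill if parents overlapped
                g = rng.randrange(len(candidates))
                if g not in child:
                    child.append(g)
            if rng.random() < 0.2:              # point mutation: swap a site
                child[rng.randrange(k)] = rng.randrange(len(candidates))
            popl.append(child)
    best = max(popl, key=covered)
    return best, covered(best)
```

On a toy instance with one useful candidate site the GA should recover it; the real model would replace the Euclidean coverage test with mode-dependent (walking vs. driving) service distances.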
Ni-MH battery charger with a compensator for electric vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, H.W.; Han, C.S.; Kim, C.S.
1996-09-01
The development of a high-performance battery and of safe and reliable charging methods are two important factors for commercialization of electric vehicles (EVs). Hyundai and Ovonic together spent many years researching the optimum charging method for the Ni-MH battery. This paper presents in detail the results of intensive experimental analysis performed by Hyundai in collaboration with Ovonic. An on-board Ni-MH battery charger and its controller, designed to run from a standard home electricity supply, are described. In addition, a three-step constant-current recharger with a temperature and battery-aging compensator is proposed. It has a multi-loop algorithm to detect the 80% and fully charged states and to carry out equalization charging control. The algorithm is focused on safety, reliability, efficiency, charging speed and thermal management (maintaining uniform temperatures within a battery pack). It is also designed to minimize the necessity for user input.
Wang, Jun; Zheng, Jiao; Lu, Hong; Yan, Qing; Wang, Li; Liu, Jingjing; Hua, Dengxin
2017-11-01
Atmospheric temperature is one of the important parameters describing the atmospheric state, and its monitoring supports better understanding of atmospheric dynamics, thermodynamics, transmission, and radiation. Most detection approaches to atmospheric temperature monitoring are based on rotational Raman scattering. In this paper, we present a fine-filter method based on wavelength division multiplexing, incorporating a fiber Bragg grating in the visible spectrum for the rotational Raman scattering spectrum. To achieve high-precision remote sensing, the strong background noise is filtered out by using secondary cascaded light paths. Detection intensity and the signal-to-noise ratio are improved by increasing the utilization rate of the return signal from the atmosphere. Passive temperature compensation is employed to reduce the temperature sensitivity of the fiber Bragg grating. In addition, the proposed method provides a feasible solution for a filter system with the merits of miniaturization, high anti-interference, and high stability on a space-based platform.
NASA Astrophysics Data System (ADS)
Bellotti, A.; Steffes, P. G.
2016-12-01
The Juno Microwave Radiometer (MWR) has six channels ranging from 1.36 to 50 cm and the ability to peer deep into the Jovian atmosphere. An artificial neural network algorithm has been developed to rapidly perform inversion for the deep abundance of ammonia, the deep abundance of water vapor, and atmospheric "stretch" (a parameter that reflects the deviation from a wet adiabat in the higher atmosphere). This algorithm is trained using simulated emissions at the six wavelengths computed with the Juno atmospheric microwave radiative transfer (JAMRT) model presented by Oyafuso et al. (this meeting). By exploiting the emission measurements conducted at six wavelengths and at various incidence angles, the neural network can provide preliminary results to a useful precision hundreds of times faster than conventional methods, quickly providing important insights into the variability and structure of the Jovian atmosphere.
NASA Astrophysics Data System (ADS)
Niu, Chaojun; Han, Xiang'e.
2015-10-01
Adaptive optics (AO) technology is an effective way to alleviate the effect of turbulence on free space optical communication (FSO). A new adaptive compensation method can be used without a wavefront sensor. The artificial bee colony (ABC) algorithm is a population-based heuristic evolutionary algorithm inspired by the intelligent foraging behaviour of the honeybee swarm, with the advantages of simplicity, a good convergence rate, robustness, and few parameters to set. In this paper, we simulate the application of the improved ABC algorithm to correct the distorted wavefront and prove its effectiveness. We then simulate the application of the ABC algorithm, the differential evolution (DE) algorithm and the stochastic parallel gradient descent (SPGD) algorithm to the FSO system and analyze their wavefront correction capabilities by comparing the coupling efficiency, the error rate and the intensity fluctuation in different turbulence conditions before and after correction. The results show that the ABC algorithm has a much faster correction speed than the DE algorithm and better correction capability for strong turbulence than the SPGD algorithm. Intensity fluctuation can be effectively reduced in strong turbulence, but less effectively in weak turbulence.
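A minimal ABC optimizer can be sketched as follows. This is a generic textbook-style ABC (employed and onlooker phases merged for brevity, global best memorized), not the improved variant of the paper; in sensor-less AO the metric would be, e.g., fiber coupling efficiency evaluated after applying x as deformable-mirror commands, which is mocked here by a toy quadratic.

```python
import random

def abc_maximize(metric, dim, n_bees=10, iters=100, limit=10, seed=1):
    """Artificial bee colony maximizing metric(x) over [-1, 1]^dim."""
    rng = random.Random(seed)
    foods = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_bees)]
    fit = [metric(x) for x in foods]
    trials = [0] * n_bees
    best_x, best_f = max(zip(foods, fit), key=lambda p: p[1])
    for _ in range(iters):
        for i in range(n_bees):
            j = rng.randrange(dim)          # perturb one coordinate
            k = rng.choice([m for m in range(n_bees) if m != i])
            cand = foods[i][:]
            cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
            f = metric(cand)
            if f > fit[i]:                  # greedy acceptance
                foods[i], fit[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
            if fit[i] > best_f:
                best_x, best_f = foods[i][:], fit[i]
            if trials[i] > limit:           # scout: abandon exhausted source
                foods[i] = [rng.uniform(-1, 1) for _ in range(dim)]
                fit[i] = metric(foods[i])
                trials[i] = 0
    return best_x, best_f

# toy "Strehl-like" metric peaked at the zero-aberration point
x, f = abc_maximize(lambda v: -sum(c * c for c in v), dim=5)
```

The colony drives the metric towards its maximum at the origin; a real sensor-less AO loop would evaluate the metric on hardware after each candidate correction.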
Mars Entry Atmospheric Data System Modelling and Algorithm Development
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; OKeefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.
2009-01-01
The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL), Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. Secondary objectives are to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with processing flight data after Mars entry in 2012.
Chroma-preserved luma controlling technique using YCbCr color space
NASA Astrophysics Data System (ADS)
Lee, Sooyeon; Kwak, Youngshin; Kim, Youn Jin
2013-02-01
The YCbCr color space, composed of luma and chrominance components, is preferred for its ease of image processing. However, the non-orthogonality between YCbCr components induces an unwanted perceived chroma change when luma values are adjusted. In this study, a new method was designed to compensate for the unwanted chroma change generated by a luma change. For six different YCC_hue angles, data points named `Original data' were generated with uniformly distributed luma and Cb, Cr values. Weight values were then applied to the luma values of the `Original data' set, resulting in a `Test data' set, followed by calculation of `new YCC_chroma' values having minimum CIECAM02 ΔC between the original and test data. Finally, a mathematical model was developed to predict the amount of YCC_chroma adjustment needed to compensate for CIECAM02 chroma changes. This model was implemented in a luma-controlling algorithm that holds perceived chroma constant. The performance was tested numerically using data points and images. Comparing CIECAM02 ΔC between `Original data' and `Test data', the result after compensation was improved by 51.69% over that before compensation. When the new model was applied to test images, there was a 32.03% improvement.
Yen, Chih-Ta; Chen, Wen-Bin
2016-01-01
Chromatic dispersion from optical fiber is the most important problem that produces temporal skews and destroys the rectangular structure of code patterns in the spectral-amplitude-coding-based optical code-division multiple-access (SAC-OCDMA) system. Thus, the balance detection scheme does not work perfectly to cancel multiple access interference (MAI) and the system performance will be degraded. Orthogonal frequency-division multiplexing (OFDM) is the fastest developing technology in the academic and industrial fields of wireless transmission. In this study, a radio-over-fiber system is realized by integrating OFDM and OCDMA via a polarization multiplexing scheme. The electronic dispersion compensation (EDC) equalizer element of OFDM, integrated with dispersion compensation fiber (DCF), is used in the proposed radio-over-fiber (RoF) system, which can efficiently suppress the chromatic dispersion influence over long-haul transmission distances. A set of length differences for 10 km-long single-mode fiber (SMF) and 4 km-long DCF is used to verify the compensation scheme by relative equalizer algorithms and constellation diagrams. In the simulation results, the proposed dispersion mechanism successfully compensates the dispersion from the SMF and the system performance with the dispersion equalizer is highly improved. PMID:27618042
Compensating the intensity fall-off effect in cone-beam tomography by an empirical weight formula.
Chen, Zikuan; Calhoun, Vince D; Chang, Shengjiang
2008-11-10
The Feldkamp-Davis-Kress (FDK) algorithm is widely adopted for cone-beam reconstruction due to its one-dimensional filtered backprojection structure and parallel implementation. In a reconstruction volume, the conspicuous cone-beam artifact manifests as intensity fall-off along the longitudinal direction (the gantry rotation axis). This effect is inherent to circular cone-beam tomography because a cone-beam dataset acquired from circular scanning fails to meet the data sufficiency condition for volume reconstruction. Based on observations of the intensity fall-off phenomenon in the FDK reconstruction of a ball phantom, we propose an empirical weight formula to compensate for the fall-off degradation. Specifically, a reciprocal cosine can be used to compensate the voxel values along the longitudinal direction during three-dimensional backprojection reconstruction, in particular boosting the values of voxels at positions with large cone angles. The intensity degradation within a z plane, albeit insignificant, can also be compensated by using the same weight formula through a parameter for radial-distance dependence. Computer simulations and phantom experiments are presented to demonstrate the effectiveness of compensating the fall-off effect inherent in circular cone-beam tomography.
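The reciprocal-cosine weighting can be sketched as follows. The cone-angle geometry used here and the omission of the paper's radial-distance parameter are our simplifications; the sketch only illustrates the shape of the boost.

```python
import math

def falloff_weight(z, source_to_voxel_xy):
    """Empirical fall-off compensation for circular cone-beam FDK:
    boost a voxel by the reciprocal cosine of its cone angle, so
    voxels far from the mid-plane (large |z|) get larger weights."""
    cone_angle = math.atan2(abs(z), source_to_voxel_xy)
    return 1.0 / math.cos(cone_angle)

# mid-plane voxels (z = 0) are untouched; off-plane voxels are boosted
print(falloff_weight(0.0, 500.0))   # -> 1.0
print(falloff_weight(50.0, 500.0))  # slightly above 1
```

Applied during backprojection, such a weight counteracts the characteristic dimming of slices far from the scan plane without altering the mid-plane, where circular FDK is already exact.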
NASA Astrophysics Data System (ADS)
Liu, Zengjun; Wang, Lei; Li, Kui; Gao, Jiaxin
2017-05-01
Hybrid inertial navigation system (HINS) is a new kind of inertial navigation system (INS) that combines the advantages of platform INS, strap-down INS and rotational INS. HINS has a physical platform to isolate angular motion as a platform INS does, while also using strap-down attitude algorithms and applying the rotation modulation technique. A tri-axis HINS has three gimbals to isolate angular motion on a dynamic base, so the system can reduce the effects of angular motion and improve positioning precision. However, angular motion affects the compensation of some error parameters, especially the lever arm effect. The lever arm effect, caused by position offsets between the accelerometers and the rotation center, cannot be ignored due to the rapid rotation of the inertial measurement unit (IMU), and it causes fluctuations and stages in the velocity output of HINS. The influences of angular motion on lever arm effect compensation are analyzed first in this paper, and then a compensation method for the lever arm effect based on photoelectric encoders on a dynamic base is proposed. Results of turntable experiments show that, after compensation, the fluctuations and stages in the velocity curve disappear.
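The lever arm term itself follows the standard strapdown formula, a_lever = ω̇ × r + ω × (ω × r). The sketch below applies that textbook correction; the variable names are ours, and the paper's encoder-driven estimation of the rotation terms is not reproduced.

```python
import numpy as np

def lever_arm_correct(f_measured, omega, omega_dot, r):
    """Subtract the acceleration an accelerometer senses because it
    sits at offset r from the rotation centre:
        a_lever = omega_dot x r + omega x (omega x r)
    (Euler term plus centripetal term, all in the body frame)."""
    a_lever = np.cross(omega_dot, r) + np.cross(omega, np.cross(omega, r))
    return np.asarray(f_measured, dtype=float) - a_lever
```

For example, a pure spin at 2 rad/s about z with a 0.1 m offset along x produces a centripetal term of magnitude ω²r = 0.4 m/s² along -x, which the correction removes from the specific-force reading.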
NASA Astrophysics Data System (ADS)
Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander
2015-11-01
Atmospheric chemistry dynamics are studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out "fine-grained" variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e., the same measurement data are assimilated to different parts of the split model. This design has an efficient implementation due to the direct data assimilation algorithms for the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm, using in situ concentration measurements in a real-data scenario, are presented. To construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output, and measurements from the Airbase database.
NASA Astrophysics Data System (ADS)
Woeger, Friedrich; Rimmele, Thomas
2009-10-01
We analyze the effect of anisoplanatic atmospheric turbulence on the measurement accuracy of an extended-source Hartmann-Shack wavefront sensor (HSWFS). We have numerically simulated an extended-source HSWFS, using a scenery of the solar surface that is imaged through anisoplanatic atmospheric turbulence and imaging optics. Solar extended-source HSWFSs often use cross-correlation algorithms in combination with subpixel shift finding algorithms to estimate the wavefront gradient, two of which were tested for their effect on the measurement accuracy. We find that the measurement error of an extended-source HSWFS is governed mainly by the optical geometry of the HSWFS, employed subpixel finding algorithm, and phase anisoplanatism. Our results show that effects of scintillation anisoplanatism are negligible when cross-correlation algorithms are used.
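A common subpixel shift estimator for such cross-correlation peaks is three-point parabolic interpolation around the integer maximum. Whether this specific estimator matches the variants tested in the paper is an assumption; it is shown here only as a representative example of the class.

```python
def subpixel_shift(c_left, c_peak, c_right):
    """Fit a parabola through the correlation maximum and its two
    neighbours and return the fractional offset of the vertex from
    the integer peak position (in pixels, range roughly [-0.5, 0.5])."""
    denom = c_left - 2.0 * c_peak + c_right
    if denom == 0.0:
        return 0.0  # flat top: no subpixel refinement possible
    return 0.5 * (c_left - c_right) / denom
```

A symmetric peak yields zero offset, and samples drawn from a true parabola are recovered exactly; the choice of estimator matters because, as the abstract notes, it directly affects the HSWFS measurement error.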
Infrared remote sensing of the vertical and horizontal distribution of clouds
NASA Technical Reports Server (NTRS)
Chahine, M. T.; Haskins, R. D.
1982-01-01
An algorithm has been developed to derive the horizontal and vertical distribution of clouds from the same set of infrared radiance data used to retrieve atmospheric temperature profiles. The method leads to the determination of the vertical atmospheric temperature structure and the cloud distribution simultaneously, providing information on heat sources and sinks, storage rates and transport phenomena in the atmosphere. Experimental verification of this algorithm was obtained using the 15-micron data measured by the NOAA-VTPR temperature sounder. After correcting for water vapor emission, the results show that the cloud cover derived from 15-micron data is less than that obtained from visible data.
Zhou, Xinpeng; Wei, Guohua; Wu, Siliang; Wang, Dawei
2016-03-11
This paper proposes a three-dimensional inverse synthetic aperture radar (ISAR) imaging method for high-speed targets at short range using an impulse radar. According to the requirements for high-speed target measurement at short range, this paper establishes a single-input multiple-output (SIMO) antenna array and further proposes a missile motion parameter estimation method based on impulse radar. By analyzing the motion geometry of the warhead scattering center after translational compensation, this paper derives the receiving antenna position and the time delay after translational compensation, and thus overcomes the shortcomings of conventional translational compensation methods. By analyzing the motion characteristics of the missile, this paper estimates the missile's rotation angle and rotation matrix by establishing a new coordinate system. Simulation results validate the performance of the proposed algorithm.
Strapdown system performance optimization test evaluations (SPOT), volume 1
NASA Technical Reports Server (NTRS)
Blaha, R. J.; Gilmore, J. P.
1973-01-01
A three axis inertial system was packaged in an Apollo gimbal fixture for fine grain evaluation of strapdown system performance in dynamic environments. These evaluations have provided information to assess the effectiveness of real-time compensation techniques and to study system performance tradeoffs to factors such as quantization and iteration rate. The strapdown performance and tradeoff studies conducted include: (1) Compensation models and techniques for the inertial instrument first-order error terms were developed and compensation effectivity was demonstrated in four basic environments; single and multi-axis slew, and single and multi-axis oscillatory. (2) The theoretical coning bandwidth for the first-order quaternion algorithm expansion was verified. (3) Gyro loop quantization was identified to affect proportionally the system attitude uncertainty. (4) Land navigation evaluations identified the requirement for accurate initialization alignment in order to pursue fine grain navigation evaluations.
NASA Astrophysics Data System (ADS)
Greene, S. E.; Ridgwell, A. J.; Schmidt, D. N.; Kirtland Turner, S.; Paelike, H.; Thomas, E.
2014-12-01
The carbonate compensation depth (CCD) is the depth below which negligible calcium carbonate is preserved in marine sediments. The long-term position of the CCD is often considered to be a powerful constraint on palaeoclimate and atmospheric CO2 concentration due to the requirement that carbonate burial balance riverine weathering over long timescales. The requirement that weathering and burial be in balance is clear, but it is less certain that burial compensates for changes in weathering via shoaling or deepening of the CCD. Because most carbonate burial occurs well above the CCD, changes in weathering fluxes may be primarily accommodated by increasing or decreasing carbonate burial at shallower depths, i.e., at or near the lysocline, the depth range over which carbonate dissolution markedly increases. Indeed, recent Earth system modelling studies have suggested that the position of the CCD is relatively insensitive to changes in atmospheric pCO2. Additionally, studies have questioned the nature and strength of the relationship between the CCD, carbonate saturation state in the water column, and the lysocline. To test the relationship between palaeoclimate and the location of the CCD, we reconstructed global, long-term CCD behaviour across major Cenozoic climate transitions: the late Paleocene - early Eocene long-term warming trend (study interval ~58 to 49 Ma) and the late Eocene - early Oligocene cooling and glaciation (study interval ~38 to 27 Ma). We use Earth system modelling (GENIE) to explore the links between atmospheric pCO2 and the CCD, isolating the roles of total dissolved inorganic carbon, temperature, circulation, and productivity in determining the CCD.
Nimbus 4 IRIS spectra in the 750-1250 cm(exp -1) atmospheric window region
NASA Technical Reports Server (NTRS)
Kunde, V. G.; Conrath, B. J.; Hanel, R. A.; Prabhakara, C.
1974-01-01
Present operational schemes for infrared remote sounding measurements of surface temperature use the 899 cm(exp -1) atmospheric window region. Spectra from the Nimbus 4 IRIS in the 750 to 1250 cm(exp -1) region are analyzed. Comparison of the actual surface temperature and the observed brightness temperature at 10 cm(exp -1) resolution shows that the clearest windows were at 936 and 960 cm(exp -1). Although there is a small amount of CO2 absorption in these regions, this is compensated for by a decrease in water vapor continuum absorption. Atmospheric absorption was 0.5 K less than experienced in the 899 cm(exp -1) window.
A Comparison of Two Skip Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Rea, Jeremy R.; Putnam, Zachary R.
2007-01-01
The Orion capsule vehicle will have a Lift-to-Drag ratio (L/D) of 0.3-0.35. For an Apollo-like direct entry into the Earth's atmosphere from a lunar return trajectory, this L/D will give the vehicle a maximum range of about 2500 nm and a maximum crossrange of 216 nm. In order to fly longer ranges, the vehicle lift must be used to loft the trajectory such that the aerodynamic forces are decreased. A Skip-Trajectory results if the vehicle leaves the sensible atmosphere and a second entry occurs downrange of the atmospheric exit point. The Orion capsule is required to have landing site access (either on land or in water) inside the Continental United States (CONUS) for lunar returns anytime during the lunar month. This requirement means the vehicle must be capable of flying ranges of at least 5500 nm. For the L/D of the vehicle, this is only possible with the use of a guided Skip-Trajectory. A skip entry guidance algorithm is necessary to achieve this requirement. Two skip entry guidance algorithms have been developed: the Numerical Skip Entry Guidance (NSEG) algorithm was developed at NASA/JSC, and PredGuid was developed at Draper Laboratory. A comparison of these two algorithms is presented in this paper. Each algorithm has been implemented in a high-fidelity, six-degree-of-freedom simulation called the Advanced NASA Technology Architecture for Exploration Studies (ANTARES). NASA and Draper engineers have completed several Monte Carlo analyses in order to compare the performance of each algorithm in various stress states. Each algorithm has been tested for entry-to-target ranges to include direct entries and skip entries of varying length. Dispersions have been included on the initial entry interface state, vehicle mass properties, vehicle aerodynamics, atmosphere, and Reaction Control System (RCS).
Performance criteria include miss distance to the target, RCS fuel usage, maximum g-loads and heat rates for the first and second entry, total heat load, and control system saturation. The comparison of the performance criteria has led to a down select and guidance merger that will take the best ideas from each algorithm to create one skip entry guidance algorithm for the Orion vehicle.
Description of algorithms for processing Coastal Zone Color Scanner (CZCS) data
NASA Technical Reports Server (NTRS)
Zion, P. M.
1983-01-01
The algorithms for processing coastal zone color scanner (CZCS) data to geophysical units (pigment concentration) are described. Current public domain information for processing these data is summarized. Calibration, atmospheric correction, and bio-optical algorithms are presented. Three CZCS data processing implementations are compared.
Han, Lianghao; Dong, Hua; McClelland, Jamie R; Han, Liangxiu; Hawkes, David J; Barratt, Dean C
2017-07-01
This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures major physically realistic deformations with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated with 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance of landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling (TRE = 3.81 mm) and intensity-based image registration without specific considerations for sliding motion (TRE = 4.57 mm). The proposed method achieved accuracy comparable to that of several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison of the distributions of TREs with three non-rigid intensity-based algorithms showed that the proposed method performed especially well in estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson's ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm for optimising biomechanical models of lungs through analysing the pattern of displacement compensation from the image registration process has also been demonstrated.
Control algorithms for aerobraking in the Martian atmosphere
NASA Technical Reports Server (NTRS)
Ward, Donald T.; Shipley, Buford W., Jr.
1991-01-01
The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.
NASA Technical Reports Server (NTRS)
Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri
1992-01-01
The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results is estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.
SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring
NASA Astrophysics Data System (ADS)
Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.
2015-12-01
We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even in the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion-based regularization scheme implemented between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
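The Richardson-Lucy inversion named above has a compact core for any non-negative linear forward model with Poisson-distributed counts. The sketch below is illustrative only: the projection matrix `A`, the initialization, and the stopping rule are generic stand-ins, and the pseudo-diffusion regularization applied between iterations in the SSULI/SSUSI code is omitted.

```python
import numpy as np

def richardson_lucy(A, y, n_iter=50, eps=1e-12):
    """Richardson-Lucy iteration for y ~ Poisson(A @ x), with A >= 0.

    A : (m, n) forward (projection) matrix mapping emission to counts
    y : (m,) measured counts
    Returns a non-negative estimate of the emission vector x.
    """
    norm = A.sum(axis=0)  # column sums, used to normalize the update
    x = np.full(A.shape[1], y.mean() / max(norm.mean(), eps))  # flat start
    for _ in range(n_iter):
        pred = A @ x
        ratio = y / np.maximum(pred, eps)        # data/model Poisson correction
        x = x * (A.T @ ratio) / np.maximum(norm, eps)  # multiplicative update
    return x
```

The multiplicative form preserves non-negativity at every iteration, which is one reason RL suits photon-counting data.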
An Automated Algorithm for Identifying and Tracking Transverse Waves in Solar Images
NASA Astrophysics Data System (ADS)
Weberg, Micah J.; Morton, Richard J.; McLaughlin, James A.
2018-01-01
Recent instrumentation has demonstrated that the solar atmosphere supports omnipresent transverse waves, which could play a key role in energizing the solar corona. Large-scale studies are required in order to build up an understanding of the general properties of these transverse waves. To help facilitate this, we present an automated algorithm for identifying and tracking features in solar images and extracting the wave properties of any observed transverse oscillations. We test and calibrate our algorithm using a set of synthetic data, which includes noise and rotational effects. The results indicate an accuracy of 1%–2% for displacement amplitudes and 4%–10% for wave periods and velocity amplitudes. We also apply the algorithm to data from the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory and find good agreement with previous studies. Of note, we find that 35%–41% of the observed plumes exhibit multiple wave signatures, which indicates either the superposition of waves or multiple independent wave packets observed at different times within a single structure. The automated methods described in this paper represent a significant improvement on the speed and quality of direct measurements of transverse waves within the solar atmosphere. This algorithm unlocks a wide range of statistical studies that were previously impractical.
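For context on the quantities the above algorithm reports, displacement amplitude, period, and velocity amplitude are linked through a sinusoidal model: for x(t) = A sin(2*pi*t/P), the velocity amplitude is 2*pi*A/P. The sketch below is a hedged illustration using a single-frequency FFT estimate, not the paper's fitting procedure, and assumes an evenly sampled, trend-free displacement series.

```python
import numpy as np

def wave_properties(displacement, dt):
    """Estimate dominant period, displacement amplitude, and velocity
    amplitude of a transverse oscillation from an evenly sampled series.
    Only the mean is removed here; a real pipeline would also detrend."""
    d = displacement - displacement.mean()
    spec = np.fft.rfft(d)
    freqs = np.fft.rfftfreq(d.size, dt)
    k = np.argmax(np.abs(spec[1:])) + 1          # dominant bin, skipping DC
    period = 1.0 / freqs[k]
    amplitude = 2.0 * np.abs(spec[k]) / d.size   # FFT bin -> sinusoid amplitude
    velocity_amplitude = 2.0 * np.pi * amplitude / period  # v = 2*pi*A/P
    return period, amplitude, velocity_amplitude
```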
NASA Astrophysics Data System (ADS)
Wen, Qin
2017-04-01
Using a coupled Earth climate model, freshwater experiments are performed to study the Bjerknes compensation (BJC) between meridional atmosphere heat transport (AHT) and meridional ocean heat transport (OHT). Freshwater hosing in the North Atlantic weakens the Atlantic meridional overturning circulation (AMOC) and thus reduces the northward OHT in the Atlantic significantly, leading to a cooling (warming) in surface layer in the Northern (Southern) Hemisphere. This results in an enhanced Hadley Cell and northward AHT. Meanwhile, the OHT in the Indo-Pacific is increased in response to the Hadley Cell change, partially offsetting the reduced OHT in the Atlantic. Two compensations occur here: compensation between the AHT and the Atlantic OHT, and that between the Indo-Pacific OHT and the Atlantic OHT. The AHT change compensates the OHT change very well in the extratropics, while the former overcompensates the latter in the tropics due to the Indo-Pacific change. The BJC can be understood from the viewpoint of large-scale circulation change. However, the intrinsic mechanism of BJC is related to the climate feedback of Earth system. Our coupled model experiments confirm that the occurrence of BJC is an intrinsic requirement of local energy balance, and local climate feedback determines the extent of BJC, consistent with previous theoretical results. Even during the transient period of climate change in the model, the BJC is well established when the ocean heat storage is slowly varying and its change is weaker than the net heat flux changes at the ocean surface and the top of the atmosphere. The BJC can be deduced from the local climate feedback. Under the freshwater forcing, the overcompensation in the tropics (undercompensation in the extratropics) is mainly caused by the positive longwave feedback related to cloud (negative longwave feedback related to surface temperature change). 
Different dominant feedbacks determine different BJC scenarios in different regions, which are in essence constrained by local energy balance.
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Liu, Xu; Larar, Allen M.; Smith, William L.; Yang, Ping; Schluessel, Peter; Strow, Larrabee
2007-01-01
An advanced retrieval algorithm with a fast radiative transfer model, including cloud effects, is used for atmospheric profile and cloud parameter retrieval. This physical inversion scheme has been developed to deal with cloudy as well as cloud-free radiances observed with ultraspectral infrared sounders and to simultaneously retrieve surface, atmospheric thermodynamic, and cloud microphysical parameters. A one-dimensional (1-d) variational multivariable inversion solution is used to improve an iterative background state defined by an eigenvector-regression retrieval, and the solution is iterated in order to account for non-linearity in the 1-d variational solution. This retrieval algorithm is applied to the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, launched on October 19, 2006. IASI possesses an ultraspectral resolution of 0.25 cm(exp -1) and a spectral coverage from 645 to 2760 cm(exp -1). Preliminary retrievals of atmospheric soundings, surface properties, and cloud optical/microphysical properties from the IASI measurements are obtained and presented.
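The iterated 1-d variational solution described above is, in essence, a Gauss-Newton minimization of the standard optimal-estimation cost. A minimal sketch follows; the names `F` (forward model), `K` (its Jacobian), `Sa_inv`, and `Se_inv` (inverse background and observation error covariances) are generic stand-ins, not the operational IASI code.

```python
import numpy as np

def onedvar_step(x, xa, y, F, K, Sa_inv, Se_inv):
    """One Gauss-Newton update of the 1D-Var cost
    J(x) = (x-xa)^T Sa^{-1} (x-xa) + (y-F(x))^T Se^{-1} (y-F(x)).
    Iterating this step accounts for non-linearity in F; for a linear
    forward model it converges in a single step."""
    Kx = K(x)                           # Jacobian dF/dx at the current state
    A = Kx.T @ Se_inv @ Kx + Sa_inv     # Gauss-Newton Hessian approximation
    b = Kx.T @ Se_inv @ (y - F(x) + Kx @ (x - xa))
    return xa + np.linalg.solve(A, b)
```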
Interferometric atmospheric refractive-index environmental monitor
NASA Astrophysics Data System (ADS)
Ludman, Jacques E.; Ludman, Jacques J.; Callahan, Heidi; Robinson, John; Davis, Seth; Caulfield, H. John; Watt, David; Sampson, John L.; Hunt, Arlon
1995-06-01
Long, open-path, outdoor interferometric measurement of the index of refraction as a function of wavelength (spectral refractivity) requires a number of innovations. These include active compensation for vibration and turbulence. The use of electronic compensation produces an electronic signal that is ideal for extracting data. This allows the appropriate interpretation of those data and the systematic and fast scanning of the spectrum by the use of bandwidths that are intermediate between lasers (narrow bandwidth) and white light (broad bandwidth). An Environmental Interferometer that incorporates these features should be extremely valuable in both pollutant detection and pollutant identification. Spectral refractivity measurements complement the information available
NASA Astrophysics Data System (ADS)
Kurien, Binoy G.; Ashcom, Jonathan B.; Shah, Vinay N.; Rachlin, Yaron; Tarokh, Vahid
2017-01-01
Atmospheric turbulence presents a fundamental challenge to Fourier phase recovery in optical interferometry. Typical reconstruction algorithms employ Bayesian inference techniques which rely on prior knowledge of the scene under observation. In contrast, redundant spacing calibration (RSC) algorithms employ redundancy in the baselines of the interferometric array to directly expose the contribution of turbulence, thereby enabling phase recovery for targets of arbitrary and unknown complexity. Traditionally RSC algorithms have been applied directly to single-exposure measurements, which are reliable only at high photon flux in general. In scenarios of low photon flux, such as those arising in the observation of dim objects in space, one must instead rely on time-averaged, atmosphere-invariant quantities such as the bispectrum. In this paper, we develop a novel RSC-based algorithm for prior-less phase recovery in which we generalize the bispectrum to higher order atmosphere-invariants (n-spectra) for improved sensitivity. We provide a strategy for selection of a high-signal-to-noise ratio set of n-spectra using the graph-theoretic notion of the minimum cycle basis. We also discuss a key property of this set (wrap-invariance), which then enables reliable application of standard linear estimation techniques to recover the Fourier phases from the 2π-wrapped n-spectra phases. For validation, we analyse the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures, and corroborate this analysis with simulation results showing performance near an atmosphere-oracle Cramer-Rao bound. Lastly, we apply techniques from the field of compressed sensing to perform image reconstruction from the estimated complex visibilities.
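The atmosphere-invariance that motivates the bispectrum (and its higher-order n-spectra generalizations) is easy to verify numerically: per-aperture piston phase errors cancel identically around any closed baseline triangle. A hedged toy sketch, assuming a dense phase-matrix layout that is illustrative rather than the authors' data structures:

```python
import numpy as np

def closure_phase(phi, i, j, k):
    """Bispectrum (closure) phase around the baseline triangle (i, j, k),
    wrapped to (-pi, pi]. phi[a, b] holds the Fourier phase measured on
    the baseline between apertures a and b."""
    return np.angle(np.exp(1j * (phi[i, j] + phi[j, k] - phi[i, k])))

def add_piston(phi, eps):
    """Corrupt baseline phases with per-aperture atmospheric pistons eps:
    the phase on baseline (a, b) becomes phi[a, b] + eps[a] - eps[b]."""
    return phi + eps[:, None] - eps[None, :]
```

Because eps[i], eps[j], eps[k] each appear once with a plus and once with a minus around the triangle, the closure phase of the corrupted data equals that of the uncorrupted data, which is what makes time-averaged n-spectra usable at low photon flux.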
LANL MTI science team experience
NASA Astrophysics Data System (ADS)
Balick, Lee K.; Borel, Christopher C.; Chylek, Petr; Clodius, William B.; Davis, Anthony B.; Henderson, Bradley G.; Galbraith, Amy E.; Lawson, Stefanie L.; Pope, Paul A.; Rodger, Andrew P.; Theiler, James P.
2003-12-01
The Multispectral Thermal Imager (MTI) is a technology test and demonstration satellite whose primary mission involved a finite number of technical objectives. MTI was not designed, or supported, to become a general purpose operational satellite. The role of the MTI science team is to provide a core group of system-expert scientists who perform the scientific development and technical evaluations needed to meet programmatic objectives. Another mission for the team is to develop algorithms to provide atmospheric compensation and quantitative retrieval of surface parameters to a relatively small community of MTI users. Finally, the science team responds and adjusts to unanticipated events in the life of the satellite. Broad or general lessons learned include the value of working closely with the people who perform the calibration of the data as well as those providing archived image and retrieval products. Close interaction between the Los Alamos National Laboratory (LANL) teams was very beneficial to the overall effort as well as the science effort. Secondly, as time goes on we make increasing use of gridded global atmospheric data sets which are products of global weather model data assimilation schemes. The Global Data Assimilation System information is available globally every six hours and the Rapid Update Cycle products are available over much of North America and its coastal regions every hour. Additionally, we did not anticipate the quantity of validation data or time needed for thorough algorithm validation. Original validation plans called for a small number of intensive validation campaigns soon after launch. One or two intense validation campaigns are needed but are not sufficient to define performance over a range of conditions or for diagnosis of deviations between ground and satellite products. It took more than a year to accumulate a good set of validation data. 
With regard to the specific programmatic objectives, we feel that we can do a reasonable job on retrieving surface water temperatures well within the 1°C objective under good observing conditions. Before the loss of the onboard calibration system, sea surface retrievals were usually within 0.5°C. After that, the retrievals are usually within 0.8°C during the day and 0.5°C at night. Daytime atmospheric water vapor retrievals have a scatter that was anticipated: within 20%. However, there is error in using the Aerosol Robotic Network retrievals as validation data which may be due to some combination of calibration uncertainties, errors in the ground retrievals, the method of comparison, and incomplete physics. Calibration of top-of-atmosphere radiance measurements to surface reflectance has proven daunting. We are not alone here: it is a difficult problem to solve generally and the main issue is proper compensation for aerosol effects. Getting good reflectance validation data over a number of sites has proven difficult but, when assumptions are met, the algorithm usually performs quite well. Aerosol retrievals for off-nadir views seem to perform better than near-nadir views and the reason for this is under investigation. Land surface temperature retrieval and temperature-emissivity separations are difficult to perform accurately with multispectral sensors. An interactive cloud masking system was implemented for production use. Clouds are so spectrally and spatially variable that users are encouraged to carefully evaluate the delivered mask for their own needs. The same is true for the water mask. This mask is generated from a spectral index that works well for deep, clear water, but there is much variability in water spectral reflectance inland and along coasts. The value of the second-look maneuvers has not yet been fully or systematically evaluated. 
Early experiences indicated that the original intentions have marginal value for MTI objectives, but potentially important new ideas have been developed. Image registration (the alignment of data from different focal planes) and band-to-band registration have been difficult problems to solve, at least for mass production of the images in a processing pipeline. The problems, and their solutions, are described in another paper.
Ripple distribution for nonlinear fiber-optic channels.
Sorokina, Mariia; Sygletos, Stylianos; Turitsyn, Sergei
2017-02-06
We demonstrate data rates above the threshold imposed by nonlinearity on conventional optical signals by applying a novel probability distribution, which we call the ripple distribution, adapted to the properties of the fiber channel. Our results offer a new direction for signal coding, modulation, and practical nonlinear distortion compensation algorithms.
Multimedia transmission in MC-CDMA using adaptive subcarrier power allocation and CFO compensation
NASA Astrophysics Data System (ADS)
Chitra, S.; Kumaratharan, N.
2018-02-01
Multicarrier code division multiple access (MC-CDMA) is one of the most effective techniques in fourth-generation (4G) wireless technology, due to its high data rate, high spectral efficiency and resistance to multipath fading. However, MC-CDMA systems are greatly deteriorated by carrier frequency offset (CFO), which arises from Doppler shift and oscillator instabilities. CFO destroys orthogonality among the subcarriers and causes intercarrier interference (ICI). The water filling algorithm (WFA) is an efficient resource allocation algorithm for distributing power among the subcarriers in time-dispersive channels, but the conventional WFA fails to consider the effect of CFO. To perform subcarrier power allocation with reduced CFO and to improve the capacity of the MC-CDMA system, a residual-CFO-compensated adaptive subcarrier power allocation algorithm is proposed in this paper. The proposed technique allocates power only to subcarriers with a high channel-to-noise power ratio. The performance of the proposed method is evaluated using random binary data and images as source inputs. Simulation results show that the bit error rate performance and ICI reduction capability of the proposed modified WFA offer superior performance in both power allocation and image compression for high-quality multimedia transmission in the presence of CFO and imperfect channel state information.
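The conventional water-filling baseline that the proposed algorithm modifies can be sketched as follows. This is the textbook allocation only; the paper's residual-CFO compensation and its restriction to high-CNR subcarriers are not reproduced. Here `g` holds hypothetical per-subcarrier channel-to-noise power ratios.

```python
import numpy as np

def water_filling(g, P_total):
    """Classical water-filling: split P_total across subcarriers with
    channel-to-noise power ratios g to maximize sum log2(1 + g_i * p_i).
    Each p_i = max(mu - 1/g_i, 0), with the water level mu chosen so
    that the powers sum to P_total."""
    g = np.asarray(g, dtype=float)
    inv = np.sort(1.0 / g)                    # inverse CNRs, strongest first
    for k in range(g.size, 0, -1):
        mu = (P_total + inv[:k].sum()) / k    # water level with k active carriers
        if mu > inv[k - 1]:                   # all k carriers get positive power
            break
    return np.maximum(mu - 1.0 / g, 0.0)
```

Weak subcarriers whose inverse CNR lies above the water level receive zero power, which is the behavior the proposed thresholded variant makes explicit.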
Scattering calculation and image reconstruction using elevation-focused beams
Duncan, David P.; Astheimer, Jeffrey P.; Waag, Robert C.
2009-01-01
Pressure scattered by cylindrical and spherical objects with elevation-focused illumination and reception has been analytically calculated, and corresponding cross sections have been reconstructed with a two-dimensional algorithm. Elevation focusing was used to elucidate constraints on quantitative imaging of three-dimensional objects with two-dimensional algorithms. Focused illumination and reception are represented by angular spectra of plane waves that were efficiently computed using a Fourier interpolation method to maintain the same angles for all temporal frequencies. Reconstructions were formed using an eigenfunction method with multiple frequencies, phase compensation, and iteration. The results show that the scattered pressure reduces to a two-dimensional expression, and two-dimensional algorithms are applicable when the region of a three-dimensional object within an elevation-focused beam is approximately constant in elevation. The results also show that energy scattered out of the reception aperture by objects contained within the focused beam can result in the reconstructed values of attenuation slope being greater than true values at the boundary of the object. Reconstructed sound speed images, however, appear to be relatively unaffected by the loss in scattered energy. The broad conclusion that can be drawn from these results is that two-dimensional reconstructions require compensation to account for uncaptured three-dimensional scattering. PMID:19425653
Executive Reentry: The Problems of Repatriation
ERIC Educational Resources Information Center
Cagney, William F.
1975-01-01
There are many ways in which a company can ease a foreign executive's transition from a foreign assignment to a United States headquarters job. Assistance in housing, accommodations, compensation, taxes, and home office atmosphere are just a few areas in which the company can help. (Author)
Compensation of non-ideal beam splitter polarization distortion effect in Michelson interferometer
NASA Astrophysics Data System (ADS)
Liu, Yeng-Cheng; Lo, Yu-Lung; Liao, Chia-Chi
2016-02-01
A composite optical structure consisting of two quarter-wave plates and a single half-wave plate is proposed for compensating for the polarization distortion induced by a non-ideal beam splitter in a Michelson interferometer. In the proposed approach, the optimal orientations of the optical components within the polarization compensator are determined using a genetic algorithm (GA) such that the beam splitter can be treated as a free-space medium and modeled accordingly by a unit Mueller matrix. Two implementations of the proposed polarization controller are presented. In the first case, the compensator is placed in the output arm of the Michelson interferometer such that the state of polarization of the interfered output light is equal to that of the input light. However, in this configuration, the polarization effects induced by the beam splitter in the two arms of the interferometer cannot be separately addressed. Consequently, in the second case, compensator structures are placed in both the scanning and reference arms of the Michelson interferometer. The practical feasibility of the proposed approach is demonstrated by considering a Mueller polarization-sensitive (PS) optical coherence tomography (OCT) structure with three polarization controllers in the input, reference and sample arms, respectively. In general, the results presented in this study show that the proposed polarization controller provides an effective and experimentally straightforward means of compensating for the polarization distortion effects induced by non-ideal beam splitters in Michelson interferometers and Mueller PS-OCT structures.
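The idea of searching wave-plate orientations so that a QWP-HWP-QWP stack undoes a distortion Mueller matrix can be sketched numerically. A plain random search stands in here for the paper's genetic algorithm, and the toy distortion matrix is an assumption, not the authors' beam-splitter model:

```python
import numpy as np

def retarder(theta, delta):
    """Mueller matrix of a linear retarder with fast axis at angle theta
    and retardance delta (QWP: delta=pi/2, HWP: delta=pi)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([
        [1, 0, 0, 0],
        [0, c * c + s * s * cd, c * s * (1 - cd), -s * sd],
        [0, c * s * (1 - cd), s * s + c * c * cd,  c * sd],
        [0, s * sd, -c * sd, cd]])

def find_compensator(m_distort, iters=4000, seed=1):
    """Search QWP-HWP-QWP orientations minimizing the Frobenius distance
    between (compensator @ distortion) and the identity Mueller matrix.
    Random search is used as a simple stand-in for the paper's GA."""
    rng = np.random.default_rng(seed)
    best_cost, best_angles = np.inf, None
    for _ in range(iters):
        t1, t2, t3 = rng.uniform(0, np.pi, 3)
        comp = (retarder(t1, np.pi / 2) @ retarder(t2, np.pi)
                @ retarder(t3, np.pi / 2))
        cost = np.linalg.norm(comp @ m_distort - np.eye(4))
        if cost < best_cost:
            best_cost, best_angles = cost, (t1, t2, t3)
    return best_angles, best_cost

m_bs = retarder(0.4, 0.3)   # toy retardance distortion from a non-ideal BS
angles, cost = find_compensator(m_bs)
```

A QWP-HWP-QWP stack with free orientations can realize an arbitrary rotation of the Poincaré sphere, which is why such a compensator can in principle cancel any pure-retardance distortion exactly.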
Guo, Xiaoting; Sun, Changku; Wang, Peng
2017-08-01
This paper investigates the multi-rate inertial and vision data fusion problem in nonlinear attitude measurement systems, where the sampling rate of the inertial sensor is much faster than that of the vision sensor. To fully exploit the high-frequency inertial data and obtain favorable fusion results, a multi-rate CKF (Cubature Kalman Filter) algorithm with estimated residual compensation is proposed to handle the sampling rate discrepancy. Between samples of the slow observation data, the observation noise can be regarded as infinite, the Kalman gain approaches zero, and the residual is unavailable, so the filter's estimated state cannot be compensated. To obtain compensation at these moments, the state error and residual formulas are modified relative to those used when observation data are available. A self-propagation equation of the state error is established to propagate this quantity from the moments with observations to the moments without. In addition, a multiplicative adjustment factor is introduced in place of the Kalman gain and applied to the residual, so that the filter's estimated state can be compensated even when no visual observation data are available. The proposed method is tested and verified in a practical setup. Compared with a multi-rate CKF without residual compensation and a single-rate CKF, a significant improvement in attitude measurement is obtained by using the proposed multi-rate CKF with inter-sampling residual compensation. The experimental results, with superior precision and reliability, show the effectiveness of the proposed method.
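The inter-sample compensation idea can be sketched with a linear Kalman filter standing in for the paper's cubature filter. The model, gains, and the scalar adjustment factor `alpha` below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def multirate_kf(zs, F, H, Q, R, x0, P0, alpha=0.3):
    """Multi-rate filtering sketch: one entry of zs per fast (inertial-rate)
    step; None marks steps where the slow (vision) observation is absent.
    At those steps the last residual is reused, scaled by a multiplicative
    adjustment factor alpha that stands in for the unavailable Kalman gain."""
    x, P = np.array(x0, float), np.array(P0, float)
    n = len(x)
    K = np.zeros((n, H.shape[0]))
    r = np.zeros(H.shape[0])
    estimates = []
    for z in zs:
        x = F @ x                               # time update at every fast step
        P = F @ P @ F.T + Q
        if z is not None:                       # slow measurement available
            r = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ r
            P = (np.eye(n) - K @ H) @ P
        else:                                   # inter-sample compensation
            x = x + alpha * (K @ r)             # reuse scaled last residual
        estimates.append(x.copy())
    return np.array(estimates)

# Toy 1-D constant-velocity model; a position fix arrives every 5th step.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[1.0]])
zs = [np.array([float(k)]) if k % 5 == 0 else None for k in range(20)]
est = multirate_kf(zs, F, H, Q, R, x0=[0.0, 1.0], P0=np.eye(2))
```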
NASA Astrophysics Data System (ADS)
Shields, C. A.; Ullrich, P. A.; Rutz, J. J.; Wehner, M. F.; Ralph, M.; Ruby, L.
2017-12-01
Atmospheric rivers (ARs) are long, narrow filamentary structures that transport large amounts of moisture in the lower layers of the atmosphere, typically from subtropical regions to mid-latitudes. ARs play an important role in regional hydroclimate by supplying significant amounts of precipitation that can alleviate drought or, in extreme cases, produce dangerous floods. Accurately detecting, or tracking, ARs is important not only for weather forecasting but also for understanding how these events may change under global warming. Detection algorithms are applied on both regional and global scales, and work most accurately with high-resolution datasets or model output. Different detection algorithms can produce different answers. Detection algorithms found in the current literature fall broadly into two categories: "time-stitching", where the AR is tracked with a Lagrangian approach through time and space; and "counting", where ARs are identified at a single point in time for a single location. Counting routines can be further subdivided into algorithms that use absolute thresholds with specific geometry, algorithms that use relative thresholds, algorithms based on statistics, and pattern recognition and machine learning techniques. Given this diversity of detection code, AR tracks and "counts" can vary widely from technique to technique. Uncertainty increases for future climate scenarios, where the difference between relative and absolute thresholding produces vastly different counts, simply due to the moister background state in a warmer world. In an effort to quantify the uncertainty associated with tracking algorithms, the AR detection community has come together to participate in ARTMIP, the Atmospheric River Tracking Method Intercomparison Project. Each participant will provide AR metrics to the greater group by applying their code to a common reanalysis dataset. The MERRA-2 reanalysis was chosen for its temporal and spatial resolution. After completion of this first phase, Tier 1, ARTMIP participants may choose to contribute to Tier 2, which will range from reanalysis uncertainty to analysis of future climate scenarios from high-resolution model output. ARTMIP's experimental design, techniques, and preliminary metrics will be presented.
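The absolute-versus-relative thresholding issue described above can be demonstrated on synthetic data. The fields, the 250 kg m⁻¹ s⁻¹ threshold, and the 85th-percentile choice below are illustrative assumptions, not any ARTMIP participant's actual detection code:

```python
import numpy as np

def ar_mask_absolute(ivt, threshold=250.0):
    """Absolute-threshold detection: IVT above a fixed value (kg m-1 s-1)."""
    return ivt > threshold

def ar_mask_relative(ivt, percentile=85):
    """Relative-threshold detection: IVT above its own climatological
    percentile, so counts adapt to a moister background state."""
    return ivt > np.percentile(ivt, percentile)

rng = np.random.default_rng(0)
ivt_now = rng.gamma(2.0, 100.0, size=10000)   # synthetic present-day IVT field
ivt_warm = ivt_now * 1.2                       # uniformly moister warm climate
# Absolute counts rise in the warmer field; relative counts are fixed by
# construction, since the percentile threshold scales with the field.
```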
Tang, Bohui; Bi, Yuyun; Li, Zhao-Liang; Xia, Jun
2008-01-01
On the basis of radiative transfer theory, this paper addresses the estimation of Land Surface Temperature (LST) from data of the first Chinese operational geostationary meteorological satellite, FengYun-2C (FY-2C), in two thermal infrared channels (IR1, 10.3-11.3 μm and IR2, 11.5-12.5 μm), using the Generalized Split-Window (GSW) algorithm proposed by Wan and Dozier (1996). The coefficients in the GSW algorithm corresponding to a series of overlapping ranges of the mean emissivity, the atmospheric Water Vapor Content (WVC), and the LST were derived using a statistical regression method from numerical values simulated with the accurate atmospheric radiative transfer model MODTRAN 4 over a wide range of atmospheric and surface conditions. The simulation analysis showed that the LST could be estimated by the GSW algorithm with a Root Mean Square Error (RMSE) of less than 1 K for the sub-ranges with Viewing Zenith Angle (VZA) less than 30°, or for the sub-ranges with VZA less than 60° and atmospheric WVC less than 3.5 g/cm2, provided that the Land Surface Emissivities (LSEs) are known. In order to determine the range for the optimum coefficients of the GSW algorithm, the LSEs could be derived from the data in MODIS channels 31 and 32 provided by the MODIS/Terra LST product MOD11B1, or be estimated either according to the land surface classification or using the method proposed by Jiang et al. (2006); and the WVC could be obtained from the MODIS total precipitable water product MOD05, or be retrieved using Li et al.'s method (2003). Sensitivity and error analyses in terms of the uncertainty of the LSE and WVC as well as the instrumental noise were performed. In addition, in order to compare the different formulations of split-window algorithms, several recently proposed split-window algorithms were used to estimate the LST with the same simulated FY-2C data. The result of the intercomparison showed that most of the algorithms give comparable results. PMID:27879744
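The Wan and Dozier (1996) GSW functional form referenced above is compact enough to sketch. The coefficient values in the example call are placeholders; in practice the coefficients b0-b6 are the regression results for one specific sub-range of emissivity, WVC, and viewing angle:

```python
def gsw_lst(t1, t2, eps1, eps2, coeffs):
    """Generalized split-window LST (Wan & Dozier 1996 functional form).
    t1, t2: brightness temperatures (K) in the two split-window channels.
    eps1, eps2: channel emissivities.
    coeffs: (b0..b6) regression coefficients for one sub-range."""
    eps = 0.5 * (eps1 + eps2)          # mean emissivity
    deps = eps1 - eps2                 # emissivity difference
    mean_t = 0.5 * (t1 + t2)
    diff_t = 0.5 * (t1 - t2)
    b0, b1, b2, b3, b4, b5, b6 = coeffs
    return (b0
            + (b1 + b2 * (1 - eps) / eps + b3 * deps / eps ** 2) * mean_t
            + (b4 + b5 * (1 - eps) / eps + b6 * deps / eps ** 2) * diff_t)

# Placeholder coefficients (b1 = 1, all others 0) reduce the formula to the
# mean brightness temperature, a quick sanity check of the implementation.
lst = gsw_lst(300.0, 298.0, 0.97, 0.96, (0, 1, 0, 0, 0, 0, 0))
```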
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gräfe, James; Khan, Rao; Meyer, Tyler
2014-08-15
In this study we investigate the deliverability of dosimetric plans generated by the irregular surface compensator (ISCOMP) algorithm for 6 MV photon beams in Eclipse (Varian Medical System, CA). In contrast to physical tissue compensation, the electronic ISCOMP uses MLCs to dynamically modulate the fluence of a photon beam in order to deliver a uniform dose at a user-defined plane in tissue. This method can be used to shield critical organs located within the treatment portal or to improve dose uniformity by tissue compensation in inhomogeneous regions. Three site-specific plans and a set of test fields were evaluated using the γ-metric of 3%/3 mm on Varian EPID, MapCHECK, and Gafchromic EBT3 film with a clinical tolerance of >95% passing rates. Point dose measurements with an NRCC-calibrated ionization chamber were also performed to verify the absolute dose delivered. In all cases the MapCHECK-measured plans met the gamma criteria. The mean passing rate for the six EBT3 film field measurements was 96.2%, with only two fields at 93.4 and 94.0% passing rates. The EPID plans passed for fields encompassing the central ∼10 × 10 cm² region of the detector; however, for larger fields and greater off-axis distances discrepancies were observed and attributed to the profile corrections and modeling of backscatter in the portal dose calculation. The magnitude of the average percentage difference for 21 ion chamber point dose measurements over 17 different fields was 1.4 ± 0.9%, and the maximum percentage difference was −3.3%. These measurements qualify the algorithm for routine clinical use, subject to the same pre-treatment patient-specific QA as IMRT.
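The 3%/3 mm γ-metric used for plan verification above has a standard definition that can be sketched in one dimension. This is a simplified global-gamma illustration with assumed sampling, not the analysis software used in the study:

```python
import numpy as np

def gamma_pass_rate(ref, meas, dx=1.0, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma index (default 3%/3 mm) between a reference and a
    measured dose profile sampled every dx mm. Returns the percentage of
    measured points with gamma <= 1."""
    x = np.arange(len(ref)) * dx
    dmax = ref.max()                       # global normalization dose
    gammas = []
    for xi, di in zip(x, meas):
        # Gamma at this point: minimum over all reference points of the
        # combined dose-difference / distance-to-agreement metric.
        g2 = (((x - xi) / dist_tol) ** 2
              + ((ref - di) / (dose_tol * dmax)) ** 2)
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# Identical profiles pass trivially; a uniform 2% dose error (within the 3%
# global tolerance) also passes everywhere.
ref = np.exp(-((np.arange(100) - 50) ** 2) / 200.0)
rate = gamma_pass_rate(ref, ref)
```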
NASA Technical Reports Server (NTRS)
Abbott, Terence S.
2015-01-01
This paper presents an overview of the fifth revision to an algorithm specifically designed to support NASA's Airborne Precision Spacing concept, referred to as the Airborne Spacing for Terminal Arrival Routes version 12 (ASTAR12). This airborne self-spacing concept is trajectory-based, allowing for spacing operations prior to the aircraft being on a common path. Because the algorithm is trajectory-based, it also has the inherent ability to support required-time-of-arrival (RTA) operations. The algorithm was also designed specifically to support a standalone, non-integrated implementation in the spacing aircraft. The current revision includes a ground speed feedback term to compensate for slower-than-expected traffic aircraft speeds, based on the accepted air traffic control tendency to slow aircraft below the nominal arrival speeds when they are farther from the airport.
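The role of a ground-speed feedback term in a spacing law can be illustrated with a toy control sketch. This is emphatically NOT the ASTAR12 specification; the gains, units, and sign conventions are invented for illustration:

```python
def speed_command(spacing_error, traffic_gs, nominal_gs,
                  k_err=0.005, k_gs=0.5):
    """Toy spacing control law (illustrative only).
    spacing_error: actual minus desired spacing in seconds; positive means
                   the ownship is too far behind and should speed up.
    traffic_gs:    observed ground speed of the traffic aircraft (knots).
    nominal_gs:    ownship nominal arrival speed (knots).
    The k_gs term feeds back the traffic's speed deviation so the ownship
    slows when the traffic flies below the nominal arrival speed."""
    err_term = k_err * spacing_error * nominal_gs
    gs_feedback = k_gs * (traffic_gs - nominal_gs)
    return nominal_gs + err_term + gs_feedback

# Traffic 20 kt slow with zero spacing error: command drops below nominal.
cmd_slow_traffic = speed_command(0.0, 220.0, 240.0)
# 10 s behind desired spacing with on-speed traffic: command rises.
cmd_behind = speed_command(10.0, 240.0, 240.0)
```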