NASA Astrophysics Data System (ADS)
Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui
2015-03-01
A conventional adaptive optics system is commonly used to compensate atmospheric turbulence in free space optical (FSO) communication systems, but under strong scintillation circumstances the wavefront measurements provided by a Shack-Hartmann (SH) sensor become unreliable. Since wavefront sensor-less adaptive optics is a feasible alternative, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric turbulence in FSO links, and discuss the algorithm principles, basic flow, and simulation results. Numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration and improve both the convergence rate and the coupling efficiency at the receiver to a large extent.
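The SPGD baseline mentioned in this abstract is a simple, widely documented iteration. The sketch below is a minimal illustration, not the paper's implementation: the "coupling efficiency" metric and the optimal corrector setting `u_opt` are toy stand-ins chosen here for demonstration.

```python
import numpy as np

def spgd(metric, u0, gain=2.0, sigma=0.1, iters=1000, seed=0):
    """Stochastic parallel gradient descent: maximize metric(u) by applying
    random +/- perturbations to all control channels in parallel."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(iters):
        du = sigma * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli dither
        dJ = metric(u + du) - metric(u - du)                # two-sided metric change
        u += gain * dJ * du                                 # stochastic gradient step
    return u

# Toy stand-in for coupling efficiency, peaked at a hypothetical optimal
# corrector setting (not from the paper).
u_opt = np.array([0.3, -0.7, 0.2, 0.5])
metric = lambda u: np.exp(-np.sum((u - u_opt) ** 2))
u_final = spgd(metric, np.zeros(4))
```

With a smooth unimodal metric the dithered updates behave like a noisy gradient ascent, which is why the paper can compare convergence rates against it.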
NASA Astrophysics Data System (ADS)
Li, Zhaokun; Cao, Jingtai; Zhao, Xiaohui; Liu, Wei
2015-03-01
A conventional adaptive optics (AO) system is widely used to compensate atmospheric turbulence in free space optical (FSO) communication systems, but wavefront measurements based on the phase-conjugation principle are unreliable under strong scintillation circumstances. In this study we propose a novel swarm intelligence optimization algorithm, the modified shuffled frog leaping algorithm (MSFL), to compensate the wavefront aberration. Simulation and experimental results show that the MSFL algorithm performs well in atmospheric compensation: it increases the coupling efficiency at the receiver terminal and significantly improves the performance of FSO communication systems.
Laboratory atmospheric compensation experiment
NASA Astrophysics Data System (ADS)
Drutman, C.; Moran, James P.; Faria-e-Maia, Francisco; Hyman, Howard; Russell, Jeffrey A.
1993-06-01
This paper describes an in-house experiment that was performed at the Avco Research Labs/Textron to test a proprietary atmospheric phase compensation algorithm. Since the laser energies of interest were small enough that thermal blooming was not an issue, it was only necessary to simulate the effect of atmospheric turbulence. This was achieved by fabricating phase screens that mimicked Kolmogorov phase statistics. A simulated atmosphere was constructed from these phase screens and the phase at the simulated ground was measured with a digital heterodyne interferometer. The result of this effort was an initial verification of our proprietary algorithm two years before the field experiment.
Atmospheric Compensation for Uplink Arrays via Radiometry
NASA Technical Reports Server (NTRS)
Nessel, James A.; Acosta, Roberto J.
2010-01-01
Uplink arrays for communications applications are gaining increased visibility within the NASA and military community due to the enhanced flexibility and reliability they provide. When compared with the conventional large, single aperture antennas currently comprising the Deep Space Network (DSN), for example, smaller aperture antenna arrays have the benefits of providing fault tolerance (reduced single-point failure), reduced maintenance cost, and enhanced capabilities such as electronic beam-steering and multi-beam operation. However, signal combining of antenna array elements spaced many wavelengths apart becomes problematic due to the inherent instability of Earth's turbulent atmosphere, particularly at the frequencies of interest to the DSN (i.e., Ka-band). Degradation in the power combining of the individual elements comprising the array arises due to uncorrelated phase errors introduced as the signals propagate through the troposphere. It is well known that the fundamental source of this error is the inhomogeneous distribution of water vapor in the atmosphere [1]. Several techniques have been proposed to circumvent this issue, including the use of phase calibration towers and a moon bounce to generate a feedback loop which would provide a means of intermittent calibration of the system phase errors (thermal drifts, atmosphere) [2,3]. However, these techniques require repositioning of the antenna elements to perform this operation, which ultimately results in reduced system availability. And, though they are sufficient for compensating for slowly varying phase drifts, they are insufficient to compensate for faster varying phase errors, such as those introduced by the atmosphere. In this paper, preliminary radiometry and interferometry measurements collected by the NASA Glenn Research Center are analyzed and indicate that the use of optimized water vapor radiometers as a feedback system in a communications platform could provide the necessary atmospheric compensation.
Rain compensation algorithm for ACTS mobile terminal
NASA Technical Reports Server (NTRS)
Levitt, Barry K.
1992-01-01
The initial advanced communication technology satellite (ACTS) mobile terminal (AMT) demonstrations will involve two-way communications between the high-bit-rate link evaluation terminal (HBR-LET), which is a fixed terminal (FT), and a van-housed mobile terminal (MT). The HBR-LET has the capability of adjusting its transmitted uplink power over an approximately 10-dB range to compensate for forward uplink rain attenuation. However, because of size and weight limitations, the MT cannot use power control as a rain compensation technique. Consequently, the AMT rain compensation algorithm (RCA) is based on a formula for varying the transmitted data rate in either direction to maintain link performance within acceptable limits. The objective of the AMT RCA is to ensure reliable operation in both the forward and return directions despite the possibility of uplink or downlink fading due to rain events in the vicinity of the FT or MT. In particular, the RCA must maintain at least a 3-dB link margin at the highest possible transmission rate (AMT can operate at 9.6, 4.8, or 2.4 kb/s) permitted by the prevailing channel conditions. The 3-dB minimum link margin is a system design safety factor to accommodate conceivable implementation losses.
Aeroballistic analyses for the Atmospheric Compensation Experiment
Millard, W.A.
1986-01-01
The Atmospheric Compensation Experiment (ACE) involved illuminating a sounding rocket payload with a low power laser from the Air Force Maui Optical Site (AMOS), Mt. Haleakala, HI. This experiment, sponsored by DARPA and SDIO, included four launches of Terrier Malemute II rocket vehicles from the Kauai Test Facility from July through December 1985. The purpose of ACE was to demonstrate an adaptive optics technology that allowed the efficient transfer of power from the laser to the space target. This paper discusses the rationale used in selecting the launch site and the requirements for the carrier rocket system. Each payload carried a light detector array along its longitudinal axis, and it was necessary that this array be oriented perpendicular to the line of sight from AMOS to the payload. The design requirements for the payload attitude control system to satisfy this requirement are presented. Flight test results from the four tests showing flight performance and payload pointing data are included.
Compensating image degradation due to atmospheric turbulence in anisoplanatic conditions
NASA Astrophysics Data System (ADS)
Huebner, Claudia S.
2009-05-01
In imaging applications the prevalent effects of atmospheric turbulence comprise image dancing and image blurring. Suggestions from the field of image processing to compensate for these turbulence effects and restore degraded imagery include Motion-Compensated Averaging (MCA) for image sequences. In isoplanatic conditions, such an averaged image can be considered as a non-distorted image that has been blurred by an unknown Point Spread Function (PSF) of the same size as the pixel motions due to the turbulence, and a blind deconvolution algorithm can be employed for the final image restoration. However, when imaging over a long horizontal path close to the ground, conditions are likely to be anisoplanatic, and image dancing produces local image displacements between consecutive frames rather than global shifts only. Therefore, in this paper, a locally operating variant of the MCA procedure is proposed, utilizing Block Matching (BM) in order to identify and re-arrange uniformly displaced image parts. For the final restoration a multistage blind deconvolution algorithm is used, and the corresponding deconvolution results are presented and evaluated.
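The block-matching step described above can be sketched with an exhaustive SAD (sum of absolute differences) search. This is a generic illustration under assumed block and search sizes, not the paper's specific BM variant.

```python
import numpy as np

def block_match(ref, cur, block=8, search=3):
    """For each block of cur, find the (dy, dx) offset into ref that
    minimizes the sum of absolute differences (SAD)."""
    H, W = ref.shape
    shifts = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            tgt = cur[y:y + block, x:x + block]
            best, best_dxy = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and yy + block <= H and 0 <= xx and xx + block <= W:
                        sad = np.abs(ref[yy:yy + block, xx:xx + block] - tgt).sum()
                        if sad < best:
                            best, best_dxy = sad, (dy, dx)
            shifts[by, bx] = best_dxy
    return shifts

# Synthetic check: cur is ref shifted by (2, -1); interior blocks should
# locate their source region at offset (-2, +1) in ref.
rng = np.random.default_rng(1)
ref = rng.random((32, 32))
cur = np.roll(ref, (2, -1), axis=(0, 1))
shifts = block_match(ref, cur)
```

In an anisoplanatic MCA pipeline, each block would then be warped back by its own displacement before temporal averaging, instead of applying one global shift per frame.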
Template based illumination compensation algorithm for multiview video coding
NASA Astrophysics Data System (ADS)
Li, Xiaoming; Jiang, Lianlian; Ma, Siwei; Zhao, Debin; Gao, Wen
2010-07-01
Recently, the multiview video coding (MVC) standard has been finalized as an extension of H.264/AVC by the Joint Video Team (JVT). In the Joint Multiview Video Model (JMVM) project for the standardization, illumination compensation (IC) is adopted as a useful tool. In this paper, a novel template-based illumination compensation algorithm is proposed. The basic idea of the algorithm is that the illumination of the current block is strongly correlated with that of its adjacent template. Based on this idea, a template-based illumination compensation method is first presented, and then a template model selection strategy is devised to improve the illumination compensation performance. The experimental results show that the proposed algorithm improves coding efficiency significantly.
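The core idea, estimating a block's illumination change from already-decoded neighboring pixels, can be illustrated as below. This is a simplified DC-offset model under an assumed L-shaped template; the JMVM tool's exact template shape and prediction rules are not reproduced here.

```python
import numpy as np

def template_ic_offset(cur_frame, ref_frame, y, x, bs=8, t=2):
    """Estimate the illumination (DC) offset for the block at (y, x) from an
    L-shaped template of pixels above and to the left of the block.
    Assumes the block is not on the image border."""
    cur_top  = cur_frame[y - t:y, x:x + bs]
    cur_left = cur_frame[y:y + bs, x - t:x]
    ref_top  = ref_frame[y - t:y, x:x + bs]
    ref_left = ref_frame[y:y + bs, x - t:x]
    cur_mean = np.concatenate([cur_top.ravel(), cur_left.ravel()]).mean()
    ref_mean = np.concatenate([ref_top.ravel(), ref_left.ravel()]).mean()
    return cur_mean - ref_mean  # offset to add to the reference block

# Synthetic check: a view with a uniform +12 illumination change.
rng = np.random.default_rng(0)
ref = rng.random((32, 32)) * 100
cur = ref + 12.0
off = template_ic_offset(cur, ref, 8, 8)
```

Because the template uses only reconstructed pixels, the decoder can derive the same offset without any side information being transmitted.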
Broadband beamforming compensation algorithm in CI front-end acquisition
2013-01-01
Background To increase the signal-to-noise ratio (SNR) and to suppress directional noise in front-end signal acquisition, microphone array technologies are being applied in the cochlear implant (CI). Due to size constraints, the dual-microphone-based system is most suitable for actual application. However, direct application of the array technology results in a low-frequency roll-off problem, which can noticeably distort the desired signal. Methods In this paper, we theoretically analyze the roll-off characteristic on the basis of CI parameters and present a new low-complexity compensation algorithm. We obtain the linearized frequency response of the two-microphone array from modeling and analysis for further algorithm realization. Realization and results A linear method was used to approximate the theoretical response with adjustable delay and weight parameters. A CI dual-channel hardware platform was constructed for experimental research. Experimental results show that our algorithm performs well in compensation and realization. Discussions We discuss the effect of environmental noise. Actual daily noise with more low-frequency energy weakens the algorithm's performance, so a balance between low-frequency distortion and the corresponding low-frequency noise needs to be considered. Conclusions Our compensation algorithm uses a linear function to obtain the desired system response, a low-computational-complexity method suitable for CI real-time processing. Algorithm performance was tested with CI CIS modulation, and the influence of experimental distance and environmental noise was further analyzed to evaluate the algorithm's constraints. PMID:23442782
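The low-frequency roll-off of a first-order differential microphone pair, and the capped inverse filter that compensates it, can be sketched as follows. The spacing, gain cap, and endfire geometry here are illustrative assumptions, not the paper's CI parameters.

```python
import numpy as np

def diff_array_response(freqs, d=0.01, c=343.0):
    """Magnitude response of an endfire first-order differential mic pair
    (spacing d, on-axis source): |1 - exp(-j*2*pi*f*T)| with T = 2d/c,
    i.e. ~6 dB/octave roll-off at low frequencies."""
    T = 2 * d / c
    return np.abs(1 - np.exp(-2j * np.pi * freqs * T))

def compensation_gain(freqs, d=0.01, c=343.0, max_gain=20.0):
    """Inverse filter flattening the roll-off, capped so that low-frequency
    noise is not amplified without bound (the balance the paper discusses)."""
    mag = diff_array_response(freqs, d, c)
    return np.minimum(1.0 / np.maximum(mag, 1e-12), max_gain)

freqs = np.array([250.0, 500.0, 1000.0, 2000.0])
mag = diff_array_response(freqs)
gain = compensation_gain(freqs)
```

The cap `max_gain` is exactly the trade-off noted in the Discussions: a larger cap flattens the response further down in frequency but amplifies low-frequency ambient noise more.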
An adaptive algorithm for motion compensated color image coding
NASA Technical Reports Server (NTRS)
Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming
1987-01-01
This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.
Fu, Shiyao; Zhang, Shikun; Wang, Tonglu; Gao, Chunqing
2016-07-15
We propose a scheme that uses a probe Gaussian beam and the Gerchberg-Saxton (GS) algorithm to realize pre-compensation of atmospheric turbulence for beams carrying orbital angular momentum (OAM). In the experiment, spatial light modulators are utilized to simulate the turbulent atmosphere and upload the retrieval holograms. A probe Gaussian beam is used to detect the turbulence. Then the retrieval holograms, which can correct the phase distortion of the OAM beams, are obtained by the GS algorithm. The experimental results show that single or multiplexed OAM beams can be compensated well. The compensation performance under different iteration counts is also analyzed. PMID:27420491
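The GS iteration itself alternates between the known near-field amplitude and the measured far-field amplitude. The sketch below is a generic two-plane GS loop with a synthetic quadratic "turbulence" phase; the paper's probe-beam geometry and hologram encoding are not reproduced.

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iters=50, seed=0):
    """GS loop: alternately impose the known near-field amplitude and the
    measured far-field amplitude, keeping the evolving phase estimate."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, source_amp.shape)
    for _ in range(iters):
        far = np.fft.fft2(source_amp * np.exp(1j * phi))
        far = target_amp * np.exp(1j * np.angle(far))   # enforce far-field amplitude
        near = np.fft.ifft2(far)
        phi = np.angle(near)                            # keep phase, re-impose amplitude
    return phi

n = 32
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
source = np.exp(-(x ** 2 + y ** 2) / 50.0)              # probe Gaussian amplitude
phi_true = 0.3 * (x ** 2 + y ** 2) / n                  # synthetic aberration phase
target = np.abs(np.fft.fft2(source * np.exp(1j * phi_true)))

def farfield_err(phi):
    return np.linalg.norm(np.abs(np.fft.fft2(source * np.exp(1j * phi))) - target)

phi_init = np.random.default_rng(0).uniform(0, 2 * np.pi, source.shape)
phi_hat = gerchberg_saxton(source, target, iters=50)
```

The retrieved phase (or its conjugate) is what would be written to the SLM as the retrieval hologram to pre-distort the OAM beams.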
Haze compensation and atmospheric correction for Sentinel-2 data
NASA Astrophysics Data System (ADS)
Makarau, Aliaksei; Richter, Rudolf; Zekoll, Viktoria; Reinartz, Peter
2016-04-01
Sentinel-2 data offer the opportunity to analyse landcover at a high spatial accuracy together with a wide swath. Nevertheless, the high data volume requires a per-granule analysis. This may lead to border effects (differences in the radiance/reflectance values) between neighbouring granules during atmospheric correction. When the aerosol optical thickness (AOT) varies strongly across granules, especially in the presence of haze, the atmospherically corrected mosaicked products often show granule border effects. To overcome these artefacts, a dehazing step is performed prior to the atmospheric correction. The dehazing compensates only for the haze thickness, keeping the AOT fraction for further estimation and compensation in the atmospheric correction chain. This approach results in a smoother AOT map estimate and a corresponding bottom-of-atmosphere (BOA) reflectance with low or no border artefacts. Using digital elevation models (DEMs) allows a better labelling of haze and a higher accuracy of the dehazing. The DEM analysis rejects high elevation areas where bright surfaces might erroneously be classified as haze, thus reducing the probability of misclassification. The dehazing and atmospheric correction are implemented in DLR's ATCOR software. An example of a numeric evaluation of atmospheric correction products (AOT and BOA reflectance) is given. It demonstrates a smooth transition between the granules in the AOT map, leading to a proper estimate of the BOA reflectance data.
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard , Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. The LMS algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
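The Filtered-X LMS loop described above can be sketched for a scalar case: a tonal reference drives an adaptive FIR filter whose output reaches the error sensor through a known secondary path. The secondary-path coefficients, disturbance model, and step size below are illustrative assumptions, not the UARS model.

```python
import numpy as np

def fxlms(ref, dist, sec_path, L=8, mu=0.02):
    """Filtered-X LMS: adapt FIR weights w so that the actuator output,
    filtered by the (assumed known) secondary path, cancels dist at the
    error sensor. The reference is filtered by the secondary-path model
    before entering the weight update (the 'filtered X')."""
    w = np.zeros(L)
    xbuf = np.zeros(L + len(sec_path) - 1)    # reference history
    ybuf = np.zeros(len(sec_path))            # actuator history
    errs = []
    for n in range(len(ref)):
        xbuf = np.roll(xbuf, 1); xbuf[0] = ref[n]
        y = w @ xbuf[:L]                                   # actuator command
        ybuf = np.roll(ybuf, 1); ybuf[0] = y
        e = dist[n] - sec_path @ ybuf                      # residual at sensor
        fx = np.array([sec_path @ xbuf[k:k + len(sec_path)] for k in range(L)])
        w += mu * e * fx                                   # LMS update on filtered ref
        errs.append(e)
    return np.array(errs)

# Tonal disturbance correlated with the reference through the secondary path.
n = np.arange(4000)
ref = np.sin(2 * np.pi * 0.05 * n)
sec_path = np.array([0.0, 0.8, 0.3])          # hypothetical secondary path (1-tap delay)
dist = 1.5 * np.convolve(ref, sec_path)[:len(n)]
errs = fxlms(ref, dist, sec_path)
```

Filtering the reference through the secondary-path model is what keeps the gradient estimate phase-aligned despite the actuator-to-sensor delay, which plain LMS cannot handle.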
New inverse synthetic aperture radar algorithm for translational motion compensation
NASA Astrophysics Data System (ADS)
Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.
1991-10-01
Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must first be accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion-compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel-processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.
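Focus measures of the kind compared in this abstract can be illustrated as follows. The entropy measure is the traditional one; the gradient-energy measure below is only a generic derivative-based stand-in (it avoids the per-pixel logarithm), since the exact burst derivative formulation is not given in the abstract.

```python
import numpy as np

def image_entropy(img):
    """Entropy focus measure: lower entropy ~ better-focused ISAR image
    (energy concentrated in few resolution cells)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def derivative_sharpness(img):
    """Gradient-energy sharpness: higher ~ sharper. Cheaper per pixel than
    entropy (differences and squares only, no log); a generic stand-in for
    derivative-based measures, not the paper's burst derivative."""
    a = np.abs(img)
    return (np.diff(a, axis=0) ** 2).sum() + (np.diff(a, axis=1) ** 2).sum()

# A focused image (single point scatterer) vs. a fully defocused one.
point = np.zeros((16, 16)); point[8, 8] = 1.0
flat = np.ones((16, 16)) / 16.0
```

In translational motion compensation, such a measure is evaluated over candidate range-alignment/phase corrections, and the correction optimizing the measure is retained.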
NASA Astrophysics Data System (ADS)
Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald
2005-10-01
Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.
Genetic algorithm optimized triply compensated pulses in NMR spectroscopy
NASA Astrophysics Data System (ADS)
Manu, V. S.; Veglia, Gianluigi
2015-11-01
Sensitivity and resolution in NMR experiments are affected by magnetic field inhomogeneities (of both the external and RF fields), errors in pulse calibration, and offset effects due to the finite length of RF pulses. To remedy these problems, built-in compensation mechanisms for these experimental imperfections are often necessary. Here, we propose a new family of phase-modulated constant-amplitude broadband pulses with high compensation for RF inhomogeneity and heteronuclear coupling evolution. These pulses were optimized using a genetic algorithm (GA), a global optimization method inspired by Nature's evolutionary processes. The newly designed π and π/2 pulses belong to the 'type A' (or general rotor) symmetric composite pulses. These GA-optimized pulses are relatively short compared to other general rotors and can be used for excitation and inversion, as well as for refocusing pulses in spin-echo experiments. The performance of the GA-optimized pulses was assessed in Magic Angle Spinning (MAS) solid-state NMR experiments using a crystalline U-13C, 15N NAVL peptide as well as U-13C, 15N microcrystalline ubiquitin. GA optimization of NMR pulse sequences opens a window for improving current experiments and designing new robust pulse sequences.
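The GA optimization loop for composite-pulse phases can be sketched on a minimal example: choosing the phases of a fixed 90-180-90 composite so that spin inversion is robust to RF amplitude miscalibration. The fitness function, pulse lengths, and GA parameters below are illustrative assumptions, not the paper's triply compensated pulse design.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def pulse(theta, phi):
    """SU(2) rotation by flip angle theta about an axis at phase phi in xy."""
    ax = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * ax

def inversion_fidelity(phis, thetas, scales):
    """Average inversion probability |<1|U|0>|^2 over RF-scale errors."""
    f = 0.0
    for s in scales:
        U = I2
        for th, ph in zip(thetas, phis):
            U = pulse(s * th, ph) @ U
        f += abs(U[1, 0]) ** 2
    return f / len(scales)

def ga_optimize(thetas, scales, pop=40, gens=60, seed=2):
    """Elitist GA over pulse phases: selection, uniform crossover, mutation."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(0, 2 * np.pi, (pop, len(thetas)))
    for _ in range(gens):
        fit = np.array([inversion_fidelity(p, thetas, scales) for p in P])
        elite = P[np.argsort(fit)[::-1][:pop // 4]]       # keep the best quarter
        children = []
        while len(children) < pop - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            child = np.where(rng.random(len(thetas)) < 0.5, a, b)  # crossover
            child += rng.normal(0, 0.1, len(thetas))               # mutation
            children.append(child)
        P = np.vstack([elite, np.array(children)])
    fit = np.array([inversion_fidelity(p, thetas, scales) for p in P])
    return P[fit.argmax()], fit.max()

scales = np.linspace(0.8, 1.2, 5)                 # +/-20% RF miscalibration
thetas = np.array([np.pi / 2, np.pi, np.pi / 2])  # fixed 90-180-90 composite
best_phis, best_fit = ga_optimize(thetas, scales)
plain = inversion_fidelity([0.0], [np.pi], scales)  # single pi-pulse baseline
```

Even this tiny search space reproduces the qualitative result: a phase-optimized composite inverts more uniformly across RF errors than a single hard π pulse.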
Error compensation algorithm for patient positioning robotics system
NASA Astrophysics Data System (ADS)
Murty, Pilaka V.; Talpasanu, Ilie; Roz, Mugur A.
2009-03-01
Surgeons in various medical areas (orthopedic surgery, neurosurgery, dentistry, etc.) use motor-driven drilling tools to make perforations in hard tissues (bone, enamel, dentine, cementum, etc.). When the penetration requires very precise angles and accurate alignment with respect to different targets, precision cannot be obtained by using visual estimation and hand-held tools. Robots have been designed to allow for very accurate relative positioning of the patient and the surgical tools, and in certain classes of applications the location of the bone target and the inclination of the surgical tool can be accurately specified with respect to an inertial frame of reference. However, patient positioning errors as well as position changes during surgery can jeopardize the precision of the operation, and drilling parameters have to be dynamically adjusted. In this paper the authors present a quantitative method to evaluate the corrected position and inclination of the drilling tool, to account for translational and rotational errors in the displaced target position. The compensation algorithm applies principles of inverse kinematics, wherein a faulty axis in space caused by the translational and rotational errors of the target position is identified with an imaginary true axis in space by enforcing identity through a modified trajectory. In the absence of a specific application, the algorithm was verified in SolidWorks, a commercial CAD tool, and found to be correct. An example problem given at the end supports this conclusion.
Sensor Saturation Compensated Smoothing Algorithm for Inertial Sensor Based Motion Tracking
Dang, Quoc Khanh; Suh, Young Soo
2014-01-01
In this paper, a smoothing algorithm for compensating inertial sensor saturation is proposed. Sensor saturation happens when a sensor measures a value that is larger than its dynamic range, which can lead to considerable accumulated error. To compensate for the information lost in saturated sensor data, we propose a smoothing algorithm in which the saturation compensation is formulated as an optimization problem. Based on a standard smoothing algorithm with zero-velocity intervals, two saturation estimation methods are proposed. Simulations and experiments show that the proposed methods are effective in compensating for sensor saturation. PMID:24806740
Atmospheric Correction Algorithm for Hyperspectral Imagery
R. J. Pollina
1999-09-01
In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.
NASA Astrophysics Data System (ADS)
Wu, Zhi-Xu; Bai, Hua; Cui, Xiang-Qun
2015-05-01
The wavefront measuring range and recovery precision of a curvature sensor can be improved by an intensity compensation algorithm. However, in a focal system with a fast f-number, especially a telescope with a large field of view, the accuracy of this algorithm cannot meet the requirements. A theoretical analysis of the corrected intensity compensation algorithm in a focal system with a fast f-number is first introduced, and then the mathematical equations used in this algorithm are derived. The corrected result is then verified through simulation, as follows. First, the curvature signal from a focal system with a fast f-number is simulated by Monte Carlo ray tracing; then the wavefront result is calculated by the inner loop of the FFT wavefront recovery algorithm and the outer loop of the intensity compensation algorithm. Comparison of the intensity compensation algorithm for an ideal system with the corrected intensity compensation algorithm shows that the recovery precision of the curvature sensor can be greatly improved by the corrected algorithm. Supported by the National Natural Science Foundation of China.
NASA Astrophysics Data System (ADS)
Zhao, Zixin; Zhao, Hong; Gu, Feifei; Zhang, Lu
2013-04-01
The sub-aperture stitching (SAS) testing method is an effective way to extend the lateral and vertical dynamic range of a conventional interferometer. However, the center of each sub-aperture can be in error because of the complex motion of the mechanical platform. To eliminate the effect of lateral location error on the final stitching result, a lateral location error compensation algorithm is introduced, and the ability of the algorithm to compensate for this error is analyzed. Finally, a 152.4 mm concave parabolic mirror is tested using the SAS method with the compensation algorithm. The results show that the algorithm can effectively compensate for the lateral location error caused by the mechanical motion. The algorithm relaxes the precision requirements on the mechanical platform, providing a feasible route to practical engineering application.
Computational algorithms for simulations in atmospheric optics.
Konyaev, P A; Lukin, V P
2016-04-20
A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors. PMID:27140113
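The two building blocks of such simulations, spectrally generated random phase screens and FFT-based split-step propagation, can be sketched as below. The spectral constants and normalization are illustrative only (real Kolmogorov screens need careful scaling and subharmonic low-frequency correction), and this NumPy version stands in for the MKL/CUDA implementations.

```python
import numpy as np

def phase_screen(n, dx, r0, seed=0):
    """Spectral-phase method sketch: filter white complex noise with a
    Kolmogorov-like -11/3 power-law spectrum. Illustrative constants;
    not radiometrically normalized, no subharmonic correction."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(f, f)
    fr = np.hypot(FX, FY)
    fr[0, 0] = np.inf                        # drop the untreated piston (DC) term
    amp = np.sqrt(0.023 * r0 ** (-5 / 3)) * fr ** (-11 / 6)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.real(np.fft.ifft2(noise * amp))

def fresnel_step(u, wavelength, dx, dz):
    """Angular-spectrum (split-step) vacuum propagation over distance dz."""
    n = u.shape[0]
    f = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(f, f)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

# One split-step: apply a screen to a plane wave, then propagate.
n, dx = 64, 0.01
scr = phase_screen(n, dx, r0=0.1)
u0 = np.ones((n, n), dtype=complex) * np.exp(1j * scr)
u1 = fresnel_step(u0, 1.55e-6, dx, 100.0)
```

A full simulation alternates screen and propagation steps along the path; since each step is a pair of 2D FFTs, it maps directly onto the MKL/CUDA FFT kernels the paper benchmarks.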
NASA Astrophysics Data System (ADS)
Son, Kyungchan; Lim, Sung-Yong; Lee, Jae-seong; Jeong, Wooyoung; Yang, Hyunseok
2016-09-01
In holographic data storage, tilt is one of the critical disturbances. There are two types of tilt, tangential and radial, and in real systems they occur simultaneously, making tilt difficult to measure and compensate for. In this study, a tilt error signal was generated using a quadratic window, which compares the intensity of a certain area, and the tilt was compensated for with the proposed algorithm. The compensated image satisfied a 0.3 dB tolerance.
Springback compensation algorithm for tool design in creep age forming of large aluminum alloy plate
NASA Astrophysics Data System (ADS)
Xu, Xiaolong; Zhan, Lihua; Huang, Minghui
2013-12-01
The unified creep constitutive equations, which were built based on the age forming mechanism of aluminum alloy, were integrated with the commercial finite element analysis software MSC.MARC via the user-defined subroutine CREEP, and creep age forming process simulations for 7055 aluminum alloy plate parts were conducted. The springback of the workpiece after forming was then calculated with ATOS Professional software. Based on the combination of the simulation results and the ATOS springback calculation for the formed plate, a new weighted springback compensation algorithm for tool surface modification was developed. The compensation effects of the new algorithm and of conventional overall compensation algorithms on the tool surface are compared. The results show that the maximal forming error of the workpiece was reduced to below 0.2 mm after five compensation iterations with the new weighted algorithm, while with a fixed or variable compensation coefficient based on the overall compensation algorithm, error rebound occurred and the maximal forming error could not be reduced below 0.3 mm even after six iterations.
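The iterative compensation loop underlying such tool-surface modification can be sketched with a toy linear springback model in place of the FE simulation. The 30% springback fraction and the profile below are illustrative assumptions, not the 7055 plate model.

```python
import numpy as np

def form(tool):
    """Toy stand-in for the FE forming + springback simulation: the formed
    shape retains only a fraction of the imposed tool shape."""
    return 0.7 * tool                       # i.e. 30% springback

def compensate(target, weight=1.0, iters=10, tol=1e-3):
    """Iterative tool-surface compensation: shift the tool surface by a
    weighted share of the remaining forming error each iteration."""
    tool = target.copy()
    for _ in range(iters):
        err = target - form(tool)           # forming error of current tool
        if np.abs(err).max() < tol:
            break
        tool = tool + weight * err          # over-bend where the part springs back
    return tool, np.abs(target - form(tool)).max()

target = np.linspace(0.0, 5.0, 50)          # desired profile (arbitrary units)
tool, final_err = compensate(target)
```

With a contraction factor below one, the error shrinks geometrically each pass; the paper's weighted variant additionally varies `weight` over the surface to suppress the error rebound seen with a single global coefficient.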
A Comprehensive Study of Three Delay Compensation Algorithms for Flight Simulators
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Houck, Jacob A.; Kelly, Lon C.; Wolters, Thomas E.
2005-01-01
This paper summarizes a comprehensive study of three predictors used for compensating the transport delay in a flight simulator: the McFarland, Adaptive, and State Space predictors. The paper presents proof that the stochastic approximation algorithm achieves the best compensation among all four adaptive predictors, and it investigates in depth the relationship between the state space predictor's compensation quality and its reference model. Piloted simulation tests show that the adaptive predictor and the state space predictor achieve better compensation of transport delay than the McFarland predictor.
Enhancements to an Atmospheric Ascent Guidance Algorithm
NASA Technical Reports Server (NTRS)
Dukeman, Greg A.
2003-01-01
Enhancements to an advanced ascent guidance algorithm for rocket-powered launch vehicles are described. A general method has been developed for conveniently and efficiently handling the common case of (asymmetric) launch vehicles with unbalanced thrust and aerodynamic moments. The new part of this development concerns the treatment of endo-atmospheric flight. An alternative method for handling the transversality conditions has been developed that eliminates the need for a priori elimination of the constant multipliers that adjoin the terminal state constraints to the performance index. As a result, new constraints can be formulated and implemented with relative ease. The problem of burn-coast-burn trajectory optimization is treated using a modified multiple shooting technique.
A Novel Control algorithm based DSTATCOM for Load Compensation
NASA Astrophysics Data System (ADS)
R, Sreejith; Pindoriya, Naran M.; Srinivasan, Babji
2015-11-01
Distribution Static Compensator (DSTATCOM) has been used as a custom power device for voltage regulation and load compensation in the distribution system. Controlling the switching angle has been the biggest challenge in DSTATCOM. To date, the Proportional Integral (PI) controller is widely used in practice for load compensation due to its simplicity. However, the PI controller fails to perform satisfactorily under parameter variations, nonlinearities, etc., making it very challenging to arrive at the best/optimal tuning values for different operating conditions. Fuzzy logic and neural network based controllers require extensive training and perform well only under limited perturbations. Model predictive control (MPC) is a powerful control strategy used in the petrochemical industry whose application has spread to other fields. MPC can handle various constraints, incorporate system nonlinearities, and utilize multivariate/univariate model information to provide an optimal control strategy. Though it is used extensively in chemical engineering, its utility in power systems has been limited by the high computational effort, which is incompatible with the high sampling frequency of these systems. In this paper, we propose a DSTATCOM controller based on Finite Control Set Model Predictive Control (FCS-MPC) with Instantaneous Symmetrical Component Theory (ISCT) based reference current extraction, for load compensation and Unity Power Factor (UPF) operation in current control mode. The proposed controller's performance is evaluated for a 3-phase, 3-wire, 415 V, 50 Hz distribution system in MATLAB/Simulink, demonstrating its applicability in real-life situations.
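The FCS-MPC principle, enumerating the finite set of converter switching states, predicting one sample ahead with the plant model, and applying the state with minimum tracking cost, can be sketched for a single-phase L-filter model. All plant values and instantaneous quantities below are illustrative assumptions, not the paper's 3-phase system or its ISCT reference generator.

```python
# One control step of finite-control-set MPC for an inverter feeding an
# L filter with dynamics: L di/dt = v_inv - v_grid - R*i
L, R, Ts, Vdc = 10e-3, 0.1, 50e-6, 700.0   # illustrative plant parameters
v_grid, i_meas, i_ref = 230.0, 4.0, 5.0    # instantaneous measurements/ref

best_cost, best_state = float("inf"), None
for state in (-1, 0, 1):                   # the finite set of output levels
    v_inv = state * Vdc / 2.0
    # forward-Euler one-step prediction of the filter current
    i_pred = i_meas + (Ts / L) * (v_inv - v_grid - R * i_meas)
    cost = abs(i_ref - i_pred)             # track the reference current
    if cost < best_cost:
        best_cost, best_state = cost, state  # apply best_state this period
```

The cost enumeration replaces a modulator entirely, which is why FCS-MPC can honor the high sampling rates the abstract mentions only if this loop is cheap.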
NASA Astrophysics Data System (ADS)
Yuan, Xiuhua; Wang, Jin; Huang, Dexiu; Liu, Deming
2004-12-01
Free space optical (FSO) communication, or wireless optical communication, uses the atmosphere as its transmission channel, where random factors such as fog, aerosols, scintillation, and atmospheric turbulence strongly affect the propagation of light, so the received signal fluctuates and drifts with changing weather. In this paper we discuss the attenuation of the atmospheric channel and analyze the signal characteristics under atmospheric fading. For OOK modulation, we discuss the received signal distribution in the atmospheric channel, taking into account the noise gain of the optical detector, and, based on the principle of the Hartmann-Shack sensor, we design a wavefront-distortion compensation system with a fiber coupler. The signal fading caused by wavefront phase distortion was effectively compensated by this system.
Atmospheric channel for bistatic optical communication: simulation algorithms
NASA Astrophysics Data System (ADS)
Belov, V. V.; Tarasenkov, M. V.
2015-11-01
Three algorithms for statistical simulation of the impulse response (IR) of the atmospheric optical communication channel are considered: the local estimate algorithm, the double local estimate algorithm, and an algorithm proposed by the authors. Using the example of a homogeneous molecular atmosphere, it is demonstrated that the double local estimate algorithm and the proposed algorithm are more efficient than the local estimate algorithm: for small optical path lengths the proposed algorithm is more efficient, while for large optical path lengths the double local estimate algorithm is more efficient. Using the proposed algorithm, the communication quality is estimated for a particular case of the atmospheric channel under conditions of intermediate turbidity. The communication quality is characterized by the maximum IR, the time of maximum IR, the integral IR, and the bandwidth of the communication channel. Calculations of these criteria demonstrate that communication is most efficient when the point of intersection of the directions toward the source and the receiver is closest to the source point.
Adaptive filter design based on the LMS algorithm for delay elimination in TCR/FC compensators.
Hooshmand, Rahmat Allah; Torabian Esfahani, Mahdi
2011-04-01
Thyristor controlled reactor with fixed capacitor (TCR/FC) compensators are capable of compensating reactive power and improving power quality phenomena. Delay in the response of such compensators degrades their performance. In this paper, a new method based on adaptive filters (AF) is proposed to eliminate the delay and speed up the response of the TCR compensator. The adaptive filters are designed using the least mean square (LMS) algorithm, and band-pass LC filters are used instead of fixed capacitors. To evaluate the filter, a TCR/FC compensator was used for the nonlinear and time-varying loads of electric arc furnaces (EAFs). These loads cause power quality phenomena in the supplying system, such as voltage fluctuation and flicker, odd and even harmonics, and unbalance in voltage and current. The design was implemented in a realistic system model of a steel complex. The simulation results show that applying the proposed control in the TCR/FC compensator efficiently eliminates the delay in the response and improves the performance of the compensator in the power system. PMID:21193194
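The LMS update at the core of such adaptive filters is standard: filter the input with the current weights, compute the instantaneous error against the desired signal, and step the weights along the stochastic gradient. A minimal sketch, here used to identify an unknown FIR system from white-noise input (tap count, step size, and the toy system are illustrative):

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.02):
    """Standard LMS: adapt FIR weights w so that y[n] = w . u[n]
    tracks the desired signal d[n]."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ u                    # instantaneous error
        w += 2.0 * mu * e * u               # stochastic-gradient step
    return w

# Identify an unknown 3-tap system from white noise (no measurement
# noise, so the weights should converge to the true impulse response).
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h_true = np.array([0.5, -0.3, 0.2])
d = np.convolve(x, h_true)[:len(x)]
w = lms_filter(x, d)
```

The step size mu trades convergence speed against stability and misadjustment, which is exactly the response-delay trade-off the compensator design cares about.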
Vyas, Bhargav Y; Das, Biswarup; Maheshwari, Rudra Prakash
2016-08-01
This paper presents the Chebyshev neural network (ChNN) as an improved artificial intelligence technique for power system protection studies and examines the performances of two ChNN learning algorithms for fault classification of series compensated transmission lines. The training algorithms are least-square Levenberg-Marquardt (LSLM) and the recursive least-square algorithm with forgetting factor (RLSFF). The performances of these algorithms are assessed based on their generalization capability in relating the fault current parameters with an event of fault in the transmission line. The proposed algorithm is fast in response as it utilizes postfault samples of three-phase currents measured at the relaying end corresponding to a half-cycle duration only. After being trained with only a small part of the generated fault data, the algorithms have been tested over a large number of fault cases with wide variation of system and fault parameters. Based on the studies carried out in this paper, it has been found that although the RLSFF algorithm is faster for training the ChNN in the fault classification application for series compensated transmission lines, the LSLM algorithm has the best accuracy in testing. The results prove that the proposed ChNN-based method is accurate, fast, easy to design, and insensitive to the level of compensation. Thus, it is suitable for digital relaying applications. PMID:25314714
Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation
Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu
2015-01-01
To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability of the RLG bias compensation model. PMID:26633401
TIGER: Development of Thermal Gradient Compensation Algorithms and Techniques
NASA Technical Reports Server (NTRS)
Hereford, James; Parker, Peter A.; Rhew, Ray D.
2004-01-01
In a wind tunnel facility, the direct measurement of forces and moments induced on the model is performed by a force measurement balance. The measurement balance is a precision-machined device that has strain gages at strategic locations to measure the strain (i.e., deformations) due to applied forces and moments. The strain gages convert the strain (and hence the applied force) to an electrical voltage that is measured by external instruments. To address the problem of thermal gradients on the force measurement balance, NASA-LaRC has initiated a research program called TIGER - Thermally-Induced Gradients Effects Research. The ultimate goals of the TIGER program are to: (a) understand the physics of the thermally-induced strain and its subsequent impact on load measurements and (b) develop a robust thermal gradient compensation technique. This paper will discuss the impact of thermal gradients on force measurement balances, specific aspects of the TIGER program (the design of a special-purpose balance, data acquisition and data analysis challenges), and give an overall summary.
NASA Astrophysics Data System (ADS)
Wright, L.; Karpowicz, B. M.; Kindel, B. C.; Schmidt, S.; Leisso, N.; Kampe, T. U.; Pilewskie, P.
2014-12-01
A wide variety of critical information regarding bioclimate, biodiversity, and biogeochemistry is embedded in airborne hyperspectral imagery. Most, if not all, of the primary signal relies upon first deriving the surface reflectance of land cover and vegetation from measured hyperspectral radiance. This places stringent requirements on terrain and atmospheric compensation algorithms to accurately derive surface reflectance properties. An observatory designed to measure bioclimate, biodiversity, and biogeochemistry variables from surface reflectance must take great care in developing an approach that chooses the most accurate algorithms and provides those algorithms with the data necessary to describe the physical mechanisms affecting the measured at-sensor radiance. The Airborne Observation Platform (AOP), part of the National Ecological Observatory Network (NEON), is developing such an approach. NEON is a continental-scale ecological observation platform designed to collect and disseminate data to enable the understanding and forecasting of the impacts of climate change, land use change, and invasive species on ecology. The instrumentation package used by the AOP includes a visible and shortwave infrared hyperspectral imager, waveform LiDAR, and a high-resolution (RGB) digital camera. In addition to airborne measurements, ground-based CIMEL sun photometers will be used to help characterize atmospheric aerosol loading, and ground validation measurements with field spectrometers will be made at select NEON sites. While the core instrumentation package provides critical information to derive surface reflectance of land surfaces and vegetation, the addition of a Solar Spectral Irradiance Radiometer (SSIR) is being investigated as an additional source of data to help identify and characterize atmospheric aerosol and cloud contributions to the radiance measured by the hyperspectral imager. The addition of the SSIR provides the opportunity to
Optimization algorithm in adaptive PMD compensation in 10Gb/s optical communication system
NASA Astrophysics Data System (ADS)
Diao, Cao; Li, Tangjun; Wang, Muguang; Gong, Xiangfeng
2005-02-01
In this paper, optimization algorithms are introduced for adaptive PMD compensation in a 10 Gb/s optical communication system. The PMD monitoring technique based on degree of polarization (DOP) is adopted; DOP is a good indicator of PMD, decreasing monotonically as differential group delay (DGD) increases. In order to use DOP as the PMD monitoring feedback signal, the state of DGD in the transmission circuitry must be emulated, so a PMD emulator is designed. A polarization controller (PC) in the fiber multiplexer adjusts the polarization state of the optical signal, and a polarizer is placed at the output of the fiber multiplexer. After the feedback signal reaches the control computer, the optimization program runs to search for the global optimum and controls the PMD through the PC. Several popular modern nonlinear optimization algorithms (Tabu Search, Simulated Annealing, Genetic Algorithm, Artificial Neural Networks, Ant Colony Optimization, etc.) are discussed and compared to choose the best one. Every algorithm has its advantages and disadvantages, but in this case the Genetic Algorithm (GA) may be the best: it continually discards inferior candidates so that they never re-enter the population, giving faster convergence in less time. The PMD can be compensated in very few steps by using this algorithm. As a result, the maximum compensation ability of one-stage and two-stage PMD can be reached in a very short time, and the dynamic compensation time is no more than 10 ms.
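The GA feedback loop, rank the population by the measured feedback signal, keep the fittest, and breed mutated children while the worst are discarded, can be sketched as follows. The DOP function below is a hypothetical stand-in for the real optical measurement, and the population size, mutation scale, and peak location are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def dop(angles):
    """Hypothetical stand-in for the measured DOP: peaks at the (unknown)
    compensating PC setting and falls off with residual DGD."""
    target = np.array([0.8, -0.4, 1.2])     # assumed optimal PC angles
    return float(np.exp(-np.sum((angles - target) ** 2)))

pop = rng.uniform(-np.pi, np.pi, size=(30, 3))   # random PC settings
for _ in range(60):
    fitness = np.array([dop(p) for p in pop])
    order = np.argsort(fitness)[::-1]
    parents = pop[order[:10]]                    # keep only the fittest
    children = (parents[rng.integers(0, 10, 20)]
                + rng.normal(0.0, 0.15, (20, 3)))  # mutated offspring
    pop = np.vstack([parents, children])         # the worst never return

best = pop[np.argmax([dop(p) for p in pop])]
```

Because the elite parents are carried over unchanged, the best DOP found is monotonically non-decreasing across generations, which matches the abstract's point that rejected settings never re-enter the search.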
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the MVM. In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible, and yields superior quality. Another novel iterative approach is the three-step method, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography), and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and a gradient-based method have been developed. We present a detailed comparison of our reconstructors in terms of both quality and speed in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
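The Kaczmarz iteration used for the tomography step is the classic row-projection method: cycle through the rows of the linear system and project the current iterate onto each row's hyperplane. A minimal sketch on a random consistent system standing in for the tomography operator (the matrix and sizes are illustrative only):

```python
import numpy as np

def kaczmarz(A, b, sweeps=50):
    """Classic Kaczmarz: cyclically project the iterate onto each
    hyperplane a_i . x = b_i; converges for consistent systems."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 10))   # stand-in for the tomography operator
x_true = rng.standard_normal(10)    # stand-in for layered turbulence values
b = A @ x_true                      # noiseless wavefront measurements
x_hat = kaczmarz(A, b)
```

Each projection costs only one row of the operator, which is why matrix-free Kaczmarz-type solvers scale better to E-ELT problem sizes than an explicit MVM.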
NASA Astrophysics Data System (ADS)
Wu, Kaizhi; Zhang, Xuming; Chen, Guangxie; Weng, Fei; Ding, Mingyue
2013-10-01
Images acquired in free breathing using contrast-enhanced ultrasound exhibit a periodic motion that needs to be compensated for if accurate quantification of hepatic perfusion is to be performed. In this work, we present an algorithm to compensate the respiratory motion by effectively combining principal component analysis (PCA) and block matching. The respiratory kinetics of the ultrasound hepatic perfusion image sequences were first extracted using the PCA method. Then, the optimal phase of the obtained respiratory kinetics was detected after normalizing the motion amplitude, and image subsequences of the original sequences were determined. The image subsequences were registered by block matching, using cross-correlation as the similarity measure. Finally, motion-compensated contrast images were acquired by position mapping, and the algorithm was evaluated by comparing the time-intensity curves (TICs) extracted from the original image sequences and the compensated image subsequences. Quantitative comparisons demonstrated that the average fitting error estimated over regions of interest (ROIs) was reduced from 10.9278 +/- 6.2756 to 5.1644 +/- 3.3431 after compensation.
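The block matching stage can be sketched as an exhaustive search over integer shifts scored by cross-correlation. The toy below uses the zero-mean variant of the score (a common robustness choice, not necessarily the paper's exact measure) and random data in place of ultrasound frames:

```python
import numpy as np

def match_block(block, region, search=5):
    """Exhaustive block matching: slide `block` over a +/-`search`-pixel
    window in `region` and return the shift with the highest zero-mean
    cross-correlation score."""
    h, w = block.shape
    b = block - block.mean()
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = region[search + dy:search + dy + h,
                          search + dx:search + dx + w]
            score = float(np.sum(b * (cand - cand.mean())))
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

rng = np.random.default_rng(3)
frame = rng.random((40, 40))     # stand-in ultrasound frame
block = frame[14:30, 9:25]       # patch displaced by (2, -3) from (12, 12)
region = frame[7:33, 7:33]       # +/-5 pixel search window around (12, 12)
shift = match_block(block, region)
```

In the full algorithm this per-block shift estimate feeds the position mapping that produces the motion-compensated contrast images.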
Comparison of algorithms for incoming atmospheric long-wave radiation
Technology Transfer Automated Retrieval System (TEKTRAN)
While numerous algorithms exist for predicting incident atmospheric long-wave radiation under clear (Lclr) and cloudy skies, only a handful of comparisons have been published to assess the accuracy of the different algorithms. Virtually no comparisons have been made for both clear and cloudy skies ...
MTRC compensation in high-resolution ISAR imaging via improved polar format algorithm
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Hao; Li, Na; Xu, Shiyou; Chen, Zengping
2014-10-01
Migration through resolution cells (MTRC) is generated in high-resolution inverse synthetic aperture radar (ISAR) imaging. A MTRC compensation algorithm for high-resolution ISAR imaging based on an improved polar format algorithm (PFA) is proposed in this paper. First, under the assumption that a rigid-body target flies stably, initial values of the rotation angle and center of the target are obtained from the rotation of the radar line of sight (RLOS) and the high range resolution profile (HRRP). Then, the PFA is iteratively applied to the echo data to search for the optimal solution under the minimum entropy criterion. The procedure starts with the estimated initial rotation angle and center, and terminates when the entropy of the compensated ISAR image is minimized. To reduce the computational load, the 2-D iterative search is divided into two 1-D searches, one along the rotation angle and the other along the rotation center; each 1-D search is realized with the golden-section search method. The accurate rotation angle and center are obtained when the iterative search terminates. Finally, the PFA is applied with the optimized rotation angle and center to compensate the MTRC. After MTRC compensation, the ISAR image can be best focused. Simulated and real data demonstrate the effectiveness and robustness of the proposed algorithm.
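The golden-section method used for each 1-D stage is standard: keep a bracket around the minimum of a unimodal function and shrink it by the golden ratio each step. A sketch with a quadratic stand-in for the cost (in the paper the cost is the entropy of the PFA-compensated image, and the minimizer is the true rotation angle):

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Golden-section search: shrink a bracket [a, b] around the minimum
    of a unimodal f by the factor 1/phi ~ 0.618 per iteration."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):            # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                      # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Unimodal stand-in for image entropy vs. assumed rotation angle,
# with its minimum at the (hypothetical) true angle 0.3 rad.
entropy = lambda theta: (theta - 0.3) ** 2 + 1.0
theta_hat = golden_section_min(entropy, 0.0, 1.0)
```

Alternating this 1-D search over the rotation angle and the rotation center reproduces the decoupled 2-D search described in the abstract.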
Heat Transport Compensation in Atmosphere and Ocean over the Past 22,000 Years
Yang, Haijun; Zhao, Yingying; Liu, Zhengyu; Li, Qing; He, Feng; Zhang, Qiong
2015-01-01
The Earth’s climate has experienced dramatic changes over the past 22,000 years; however, the total meridional heat transport (MHT) of the climate system remains stable. A 22,000-year-long simulation using an ocean-atmosphere coupled model shows that the changes in atmosphere and ocean MHT are significant but tend to be out of phase in most regions, mitigating the total MHT change, which helps to maintain the stability of the Earth’s overall climate. A simple conceptual model is used to understand the compensation mechanism. The simple model can reproduce qualitatively the evolution and compensation features of the MHT over the past 22,000 years. We find that the global energy conservation requires the compensation changes in the atmosphere and ocean heat transports. The degree of compensation is mainly determined by the local climate feedback between surface temperature and net radiation flux at the top of the atmosphere. This study suggests that an internal mechanism may exist in the climate system, which might have played a role in constraining the global climate change over the past 22,000 years. PMID:26567710
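The degree of compensation discussed above is commonly quantified as the ratio of the opposing transport changes; the notation below is assumed for illustration, not taken from the paper:

```latex
C \;=\; -\,\frac{\Delta H_{\mathrm{atm}}}{\Delta H_{\mathrm{ocn}}},
\qquad
\Delta H_{\mathrm{total}} \;=\; \Delta H_{\mathrm{atm}} + \Delta H_{\mathrm{ocn}}
```

Here C = 1 corresponds to perfect compensation (no change in total meridional heat transport), while C < 1 describes the partial, feedback-limited compensation the simulation exhibits.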
NASA Astrophysics Data System (ADS)
Cook, M. J.; Schott, J. R.
2013-12-01
An automated process for the atmospheric compensation for a Landsat land surface temperature product has been developed. Landsat data are very attractive for a global land surface temperature product because the spatial and temporal resolution and range of the imagery make them well matched to applications for the study of agriculture, the environment, weather, and climate among others. However, Landsat's single thermal band requires per-pixel atmospheric compensation and emissivity; this work focuses on the atmospheric compensation aspect of the process and will be integrated with ASTER derived emissivity data to output a land surface temperature product. For the same reasons Landsat is attractive, an automated atmospheric compensation technique is challenging; it requires atmospheric characterization over a large area and long time scale at an acceptable resolution. Using North American Regional Reanalysis (NARR) data, MODTRAN radiative transfer code, and a number of interpolation techniques, a tool has been developed to generate the necessary radiative transfer parameters at each pixel for any radiometrically calibrated North American Landsat scene in the archive. Initial validation of predicted temperatures using ground truth water temperatures from platforms and buoys verifies the fidelity of the process with good performance when the atmosphere is accurately characterized. However, performance is poorer when the composition of the atmosphere is not as well understood. Because of the desired automation and extent of the tool, we are limited in the availability of acceptable atmospheric profile data. The goal is to understand sources of error in order to predict and characterize the uncertainty in the retrieved temperatures. While the performance has been extensively tested using a number of NOAA buoys with bulk temperature measurements corrected to skin temperature, traditional error analysis is complicated by the atmospheric reanalysis, radiative transfer
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1994-01-01
An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.
Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery
NASA Technical Reports Server (NTRS)
Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana
1989-01-01
A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.
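The radiance-to-reflectance relation that such table-based correction schemes invert can be sketched for a uniform Lambertian surface. The single-layer formula and all numeric values below are illustrative assumptions; the actual algorithm interpolates precomputed radiative-transfer tables rather than using a closed form.

```python
import math

def surface_reflectance(L_meas, L_path, E_down, T_up, s_alb):
    """Invert a standard single-layer atmospheric-correction relation,
    L = L_path + T_up * rho * E_down / (pi * (1 - s_alb * rho)),
    for the surface reflectance rho (uniform Lambertian surface;
    s_alb is the spherical albedo of the atmosphere)."""
    y = math.pi * (L_meas - L_path) / (T_up * E_down)
    return y / (1.0 + s_alb * y)

# Round trip: forward-model a radiance, then recover the reflectance.
rho = 0.25
L_path, E_down, T_up, s_alb = 20.0, 1200.0, 0.85, 0.1
L = L_path + T_up * rho * E_down / (math.pi * (1.0 - s_alb * rho))
rho_hat = surface_reflectance(L, L_path, E_down, T_up, s_alb)
```

The denominator term 1 - s_alb * rho captures the multiple reflections between surface and atmosphere; dropping it recovers the simpler linear correction often used for dark targets.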
Characterization of atmospheric contaminant sources using adaptive evolutionary algorithms
NASA Astrophysics Data System (ADS)
Cervone, Guido; Franzese, Pasquale; Grajdeanu, Adrian
2010-10-01
The characteristics of an unknown source of emissions in the atmosphere are identified using an Adaptive Evolutionary Strategy (AES) methodology based on ground concentration measurements and a Gaussian plume model. The AES methodology selects an initial set of source characteristics including position, size, mass emission rate, and wind direction, from which a forward dispersion simulation is performed. The error between the simulated concentrations from the tentative source and the observed ground measurements is calculated. Then the AES algorithm prescribes the next tentative set of source characteristics. The iteration proceeds towards minimum error, corresponding to convergence towards the real source. The proposed methodology was used to identify the source characteristics of 12 releases from the Prairie Grass field experiment of dispersion, two for each atmospheric stability class, ranging from very unstable to stable atmosphere. The AES algorithm was found to have advantages over a simple canonical ES and a Monte Carlo (MC) method which were used as benchmarks.
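The AES loop, forward-simulate a tentative source, score it against the ground observations, and evolve toward lower error, can be sketched with a heavily simplified plume model. The plume formula, sensor layout, and strategy parameters below are all illustrative, not the paper's Gaussian plume model or its adaptive scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

def plume(src_x, src_y, q, xs, ys):
    """Heavily simplified ground-level Gaussian plume, wind along +x,
    with a crude linear growth of the dispersion parameter."""
    dx = xs - src_x
    conc = np.zeros_like(xs)
    down = dx > 0.0                      # only downwind sensors see the plume
    sigma = 0.1 * dx[down]
    conc[down] = q / (np.pi * sigma ** 2) * np.exp(
        -((ys[down] - src_y) ** 2) / (2.0 * sigma ** 2))
    return conc

# Synthetic ground sensors and "observations" from a hidden true source.
xs = rng.uniform(50.0, 500.0, 40)
ys = rng.uniform(-100.0, 100.0, 40)
obs = plume(120.0, 10.0, 2.0, xs, ys)    # truth: x=120 m, y=10 m, q=2

def err(p):                              # misfit of a tentative source
    return float(np.sum((plume(p[0], p[1], p[2], xs, ys) - obs) ** 2))

p = np.array([300.0, 0.0, 1.0])          # initial tentative source
step = np.array([30.0, 10.0, 0.3])       # per-parameter mutation scales
err0 = err(p)
for _ in range(200):                     # simple (1+20) evolution strategy
    cands = p + rng.normal(0.0, 1.0, (20, 3)) * step
    best = min(cands, key=err)
    if err(best) < err(p):
        p = best                         # accept the improving candidate
    else:
        step *= 0.8                      # adapt: shrink steps on failure
```

The step-size shrinkage on failed generations is a minimal form of the adaptation that distinguishes the AES from a canonical ES in the abstract's comparison.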
Infrared micro-scanning error compensation algorithm based on edge location
NASA Astrophysics Data System (ADS)
Gao, Hang; Chen, Qian; Sui, Xiubao
2015-03-01
For area-array thermal imaging devices, an essential factor limiting imaging quality is the sub-sampling caused by an oversized discrete sampling pitch. In order to obtain higher spatial resolution, a staring infrared focal plane array (IRFPA) acquires multi-frame sub-sampled images through micro-scanning movement to achieve an adequate spatial sampling frequency. However, influenced by the external environment and the accuracy of the scanning system itself, the relative displacement between the detector and the scene cannot be controlled with absolute precision; the resulting error degrades the final reconstructed high-resolution image. We analyzed the distribution of this error and propose an infrared micro-scanning error compensation algorithm based on edge location, inspired by the fixational eye movement pattern of the human retina. The algorithm first locates the edge points in the reconstruction unit and finds the corresponding characteristic values. It then matches the characteristic values against fixed templates and reorders the pixel responses in the reconstruction unit using gray-level correlation. Finally, it compensates the error in real time through repeated updating and iteration. We applied the algorithm to video sequences acquired by a 4-step infrared micro-scanning system. The experimental results show that, when imaging a static scene or a stationary region in a dynamic scene, the algorithm provides good resolution enhancement and, in particular, improves the clarity and accuracy of static edge details.
Doppler-based motion compensation algorithm for focusing the signature of a rotorcraft.
Goldman, Geoffrey H
2013-02-01
A computationally efficient algorithm was developed and tested to compensate for the effects of motion on the acoustic signature of a rotorcraft. For target signatures with large spectral peaks that vary slowly in amplitude and have near constant frequency, the time-varying Doppler shift can be tracked and then removed from the data. The algorithm can be used to preprocess data for classification, tracking, and nulling algorithms. The algorithm was tested on rotorcraft data. The average instantaneous frequency of the first harmonic of a rotorcraft was tracked with a fixed-lag smoother. Then, state space estimates of the frequency were used to calculate a time warping that removed the effect of a time-varying Doppler shift from the data. The algorithm was evaluated by analyzing the increase in the amplitude of the harmonics in the spectrum of a rotorcraft. The results depended upon the frequency of the harmonics and the processing interval duration. Under good conditions, the results for the fundamental frequency of the target (~11 Hz) almost achieved an estimated upper bound. The results for higher frequency harmonics had larger increases in the amplitude of the peaks, but significantly lower than the estimated upper bounds. PMID:23363088
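The time-warping idea, resampling the signal so that the tracked first harmonic becomes a constant tone, can be sketched as follows. For clarity the instantaneous frequency is known exactly here rather than estimated with a fixed-lag smoother as in the paper, and all signal parameters are illustrative.

```python
import numpy as np

fs = 1000.0                                  # sample rate, Hz
t = np.arange(0.0, 4.0, 1.0 / fs)

# First harmonic (~11 Hz) with a slow sinusoidal Doppler drift.
f_inst = 11.0 * (1.0 + 0.02 * np.sin(2.0 * np.pi * 0.25 * t))
phase = 2.0 * np.pi * np.cumsum(f_inst) / fs
sig = np.cos(phase)

# Time warping: re-index the signal by accumulated cycles so the tracked
# harmonic becomes a constant 11 Hz tone, then resample uniformly.
warped_t = np.cumsum(f_inst) / fs / 11.0
sig_warped = np.interp(t, warped_t, sig)

win = np.hanning(len(t))
spec_raw = np.abs(np.fft.rfft(sig * win))
spec_warped = np.abs(np.fft.rfft(sig_warped * win))
peak_bin = int(np.argmax(spec_warped))       # 0.25 Hz per bin
```

Removing the Doppler modulation concentrates the smeared harmonic energy back into a single spectral peak, which is the amplitude-increase metric the abstract uses for evaluation.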
NASA Astrophysics Data System (ADS)
Rencurrel, M. C.; Rose, B. E. J.
2015-12-01
The poleward transport of energy is a key aspect of the climate system, with surface ocean currents presently dominating the transport out of the deep tropics. A classic study by Stone (1978) proposed that the total heat transport is determined by astronomical parameters and is highly insensitive to the detailed atmosphere-ocean dynamics. On the other hand, previous modeling work has shown that past continental configurations could have produced substantially different tropical ocean heat transport (OHT). How thoroughly does the atmosphere compensate for changes in ocean transport in terms of the top-of-atmosphere (TOA) radiative budget, what are the relevant mechanisms, and what are the consequences for surface temperature and climate on tectonic timescales? We examine these issues in a suite of aquaplanet GCM simulations subject to large prescribed variations in OHT. We find substantial but incomplete compensation, in which adjustment of the atmospheric Hadley circulation plays a key role. We then separate out the dynamical and thermodynamical components of the adjustment mechanism. Increased OHT tends to warm the mid- to high latitudes without cooling the tropics, due to asymmetries in radiative feedback processes. The warming is accompanied by hydrological cycle changes that are completely different from those driven by greenhouse gases, suggesting that drivers of past global change might be detectable from combinations of hydroclimate and temperature proxies.
Atmospheric turbulence and sensor system effects on biometric algorithm performance
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy
2015-05-01
Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However, the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade the image quality of electro-optic and thermal imaging systems and, in the case of biometric technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence-degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation of biometric sensor systems.
Kececioglu, O Fatih; Gani, Ahmet; Sekkeli, Mustafa
2016-01-01
The main objective of the present paper is to introduce a new approach for measuring and calculating fundamental power components in the case of various distorted waveforms, including those containing harmonics. The parameters of active, reactive, and apparent power and the power factor are measured and calculated using the Goertzel algorithm instead of the commonly used fast Fourier transform. The main advantage of the Goertzel algorithm is that it minimizes the computational load and the number of trigonometric operations. The parameters measured with the new technique are applied, for the first time, to a fixed capacitor-thyristor controlled reactor based static VAr compensation system to achieve accurate power factor correction. The study is carried out both in simulation and experimentally. PMID:27047717
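The single-bin computation that replaces the full FFT can be sketched as follows; the sampling rate, tone frequencies, and amplitudes are illustrative, not taken from the paper.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Goertzel algorithm: squared magnitude of one DFT bin without a
    full FFT. Cheaper than an FFT when only a few frequencies (e.g. the
    50/60 Hz fundamental) are needed."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:                          # one multiply-add per sample
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A 50 Hz fundamental plus a smaller 3rd harmonic, one second at 1 kHz.
fs, n = 1000, 1000
sig = [math.sin(2 * math.pi * 50 * i / fs)
       + 0.2 * math.sin(2 * math.pi * 150 * i / fs) for i in range(n)]

p50 = goertzel_power(sig, fs, 50.0)
p150 = goertzel_power(sig, fs, 150.0)
print(p50 > p150)  # the fundamental dominates
```

Because each bin costs one multiply-add per sample and only a couple of trigonometric evaluations in total, evaluating a handful of harmonics this way is much cheaper than a full FFT, which is the computational advantage exploited above.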
Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm
NASA Technical Reports Server (NTRS)
Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.
2011-01-01
An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on a time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; in the MODIS operational Dark Target algorithm this quantity is prescribed by a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and fine-mode fraction at a resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC also performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.
Genetic algorithms for optimal reactive power compensation planning on the national grid system
NASA Astrophysics Data System (ADS)
Pilgrim, J. D.
This work investigates the use of Genetic Algorithms (GAs) for optimal Reactive power Compensation Planning (RCP) of practical power systems. In particular, RCP of the transmission system of England and Wales as owned and operated by National Grid is considered. The GA is used to simultaneously solve both the siting problem (optimisation of the installation of new devices) and the operational problem (optimisation of preventive transformer taps and the controller characteristics of dynamic compensation devices). A computer package called Genetic Compensation Placement (GCP) has been developed which uses an Integer coded GA (IGA) to solve the RCP problem. The RCP problem is implemented as a multi-objective optimisation: in the interests of security, the number of system and operational constraint violations and the deviation of the busbar voltages from the ideal are all minimised for the base (intact) case and the contingent cases. In the interests of cost reduction, the reactive power cost is minimised for the base case. The reactive power cost encompasses the costs incurred from the installation of reactive power sources and the utilisation of new and existing dynamic reactive power compensation devices. GCP is compared to SCORPION (a planning program currently being used by National Grid) which uses a combination of linear programming and heuristic back-tracking. Results are presented for a practical test system developed with the cooperation of National Grid, and it is found that GCP produces solutions that are cheaper than those found by SCORPION and that perform extremely well: an improvement in voltage profiles, a decrease in complex power mismatches, and a reduction in MVolt Amps-reactive (VAr) utilisation were all observed.
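The integer-coded GA machinery (tournament selection, one-point crossover, integer mutation) can be sketched on a toy placement problem. The cost function below is an invented stand-in for GCP's multi-objective security and cost evaluation, not the actual National Grid formulation: each gene is the discrete size of a compensation device at a candidate bus, with 0 meaning no installation.

```python
import random

random.seed(4)
N_BUS, SIZES, POP, GENS = 8, 4, 30, 60
# Hypothetical VAr shortfall at each candidate bus (toy numbers).
demand = [3, 0, 2, 1, 0, 3, 1, 2]

def cost(genome):
    # Unmet demand stands in for constraint violations; installed
    # capacity stands in for the reactive power cost.
    unmet = sum(max(d - g, 0) for d, g in zip(demand, genome))
    return 10 * unmet + sum(genome)

def tournament(pop):
    return min(random.sample(pop, 3), key=cost)

pop = [[random.randrange(SIZES) for _ in range(N_BUS)] for _ in range(POP)]
best = min(pop, key=cost)
for _ in range(GENS):
    new = []
    while len(new) < POP:
        a, b = tournament(pop), tournament(pop)
        cut = random.randrange(1, N_BUS)       # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:              # integer-coded mutation
            child[random.randrange(N_BUS)] = random.randrange(SIZES)
        new.append(child)
    pop = new
    best = min(pop + [best], key=cost)         # keep the best ever seen

print(cost(best))
```

The integer encoding lets a gene take only physically available device sizes, which is the point of an IGA over a binary-coded GA for this kind of siting problem.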
Finite element-wavelet hybrid algorithm for atmospheric tomography.
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2014-03-01
Reconstruction of the refractive index fluctuations in the atmosphere, or atmospheric tomography, is an underlying problem of many next-generation adaptive optics (AO) systems, such as multiconjugate adaptive optics (MCAO) or multiobject adaptive optics (MOAO). The dimension of the problem for extremely large telescopes, such as the European Extremely Large Telescope (E-ELT), suggests the use of iterative schemes as an alternative to matrix-vector multiply (MVM) methods. Recently, an algorithm based on the wavelet representation of the turbulence was introduced by the authors [Inverse Probl. 29, 085003 (2013)] to solve the atmospheric tomography problem using the conjugate gradient iteration. The authors also developed an efficient frequency-dependent preconditioner for the wavelet method in a later work. In this paper we study the computational aspects of the wavelet algorithm. We introduce three new techniques, a dual domain discretization strategy, a scale-dependent preconditioner, and a ground layer multiscale method, to derive a method that is globally O(n), parallelizable, and compact with respect to memory. We present computational cost estimates and compare the theoretical numerical performance of the resulting finite element-wavelet hybrid algorithm with the MVM. The quality of the method is evaluated in terms of an MOAO simulation for the E-ELT on the European Southern Observatory (ESO) end-to-end simulation system OCTOPUS. The method is compared to the ESO version of the Fractal Iterative Method [Proc. SPIE 7736, 77360X (2010)] in terms of quality. PMID:24690653
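The preconditioned conjugate gradient iteration at the heart of such schemes can be sketched generically; the diagonal (Jacobi) preconditioner and the toy symmetric positive definite system below are simple stand-ins for the paper's wavelet-domain operators and tomography normal equations.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive
    definite A. M_inv applies the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # conjugate search direction
        rz = rz_new
    return x

# Toy SPD system standing in for the tomography normal equations.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.standard_normal(50)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)        # Jacobi preconditioner
print(np.allclose(A @ x, b))
```

In the paper the matrix-free operator application and the scale-dependent preconditioner replace the dense products here, which is what makes the iteration O(n) overall.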
NASA Astrophysics Data System (ADS)
McCrae, Jack E.; Van Zandt, Noah; Cusumano, Salvatore J.; Fiorino, Steven T.
2013-05-01
Beam propagation from a laser phased array through the turbulent atmosphere is simulated, and the ability of such a system to compensate for the atmosphere via piston-only phase control of the sub-apertures is evaluated. Directed energy (DE) applications demand more power than most lasers can produce; consequently, many schemes for high power involve combining the beams from many smaller lasers into one. When many smaller lasers are combined into a phased array, phase control of the individual sub-apertures is necessary to create a high-quality beam. Phase control of these sub-apertures can then be used to do more, such as focus, steer, and compensate for atmospheric turbulence. Atmospheric turbulence is well known to degrade the performance of both imaging systems and laser systems, and adaptive optics can be used to mitigate this degradation. Adaptive optics ordinarily involves a deformable mirror, but with phase control on each sub-aperture the need for a deformable mirror is eliminated. The simulation conducted here evaluates the performance gain of a 127-element phased array in a hexagonal pattern with piston-only phase control on each element over an uncompensated array for varying levels of atmospheric turbulence. While most simulations were carried out against a 10 km tactical scenario, the turbulence profile was adjusted so that performance could be evaluated as a function of the Fried parameter (r0) and the log-amplitude variance somewhat independently. The approach is shown to be generally effective, with the largest percentage improvement occurring when r0 is close to the sub-aperture diameter.
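Piston-only correction amounts to subtracting the mean phase over each sub-aperture. A minimal numerical sketch follows, using a square grid of sub-apertures and a generic power-law random phase screen rather than the paper's 127-element hexagonal array and calibrated turbulence profile; all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 4x4 grid of square sub-apertures, each sampled 16x16, with a
# Kolmogorov-sloped random phase screen standing in for turbulence.
sub, npx = 4, 16
n = sub * npx
fx = np.fft.fftfreq(n)
f2 = fx.reshape(-1, 1) ** 2 + fx.reshape(1, -1) ** 2
f2[0, 0] = 1.0                               # avoid division by zero
spectrum = rng.standard_normal((n, n)) * f2 ** (-11.0 / 12.0)
phase = np.real(np.fft.ifft2(spectrum))
phase *= 3.0 / phase.std()                   # ~3 rad rms: strong aberration

def strehl(phi):
    # On-axis far-field intensity relative to an unaberrated beam.
    return np.abs(np.mean(np.exp(1j * phi))) ** 2

# Piston-only correction: remove the mean phase over each sub-aperture.
corrected = phase.copy()
for i in range(sub):
    for j in range(sub):
        tile = corrected[i * npx:(i + 1) * npx, j * npx:(j + 1) * npx]
        tile -= tile.mean()

print(strehl(phase) < strehl(corrected))
```

Because the screen's power is concentrated at low spatial frequencies, per-sub-aperture pistons capture most of the aberration, so the on-axis Strehl ratio improves markedly even though within-tile tilt and higher orders are left uncorrected.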
Control algorithms for aerobraking in the Martian atmosphere
NASA Technical Reports Server (NTRS)
Ward, Donald T.; Shipley, Buford W., Jr.
1991-01-01
The Analytic Predictor Corrector (APC) and Energy Controller (EC) atmospheric guidance concepts were adapted to control an interplanetary vehicle aerobraking in the Martian atmosphere. Changes are made to the APC to improve its robustness to density variations. These changes include adaptation of a new exit phase algorithm, an adaptive transition velocity to initiate the exit phase, refinement of the reference dynamic pressure calculation and two improved density estimation techniques. The modified controller with the hybrid density estimation technique is called the Mars Hybrid Predictor Corrector (MHPC), while the modified controller with a polynomial density estimator is called the Mars Predictor Corrector (MPC). A Lyapunov Steepest Descent Controller (LSDC) is adapted to control the vehicle. The LSDC lacked robustness, so a Lyapunov tracking exit phase algorithm is developed to guide the vehicle along a reference trajectory. This algorithm, when using the hybrid density estimation technique to define the reference path, is called the Lyapunov Hybrid Tracking Controller (LHTC). With the polynomial density estimator used to define the reference trajectory, the algorithm is called the Lyapunov Tracking Controller (LTC). These four new controllers are tested using a six degree of freedom computer simulation to evaluate their robustness. The MHPC, MPC, LHTC, and LTC show dramatic improvements in robustness over the APC and EC.
NASA Technical Reports Server (NTRS)
Nerheim, N.
1989-01-01
Blind pointing of the Deep Space Network (DSN) 70-meter antennas can be improved if distortions of the antenna structure caused by unpredictable environmental loads can be measured in real-time, and the resulting boresight shifts evaluated and incorporated into the pointing control loops. The measurement configuration of a proposed pointing compensation system includes an optical range sensor that measures distances to selected points on the antenna surface. The effect of atmospheric turbulence on the accuracy of optical distance measurements and a method to make in-situ determinations of turbulence-induced measurement errors are discussed.
Mars Entry Atmospheric Data System Modelling and Algorithm Development
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Beck, Roger E.; OKeefe, Stephen A.; Siemers, Paul; White, Brady; Engelund, Walter C.; Munk, Michelle M.
2009-01-01
The Mars Entry Atmospheric Data System (MEADS) is being developed as part of the Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI) project. The MEADS project involves installing an array of seven pressure transducers linked to ports on the MSL forebody to record the surface pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the total pressure, dynamic pressure, Mach number, angle of attack, and angle of sideslip. A secondary objective is to estimate atmospheric winds by coupling the pressure measurements with the on-board Inertial Measurement Unit (IMU) data. This paper provides details of the algorithm development, MEADS system performance based on calibration, and uncertainty analysis for the aerodynamic and atmospheric quantities of interest. The work presented here is part of the MEDLI performance pre-flight validation and will culminate with the processing of flight data after Mars entry in 2012.
Coastal Zone Color Scanner atmospheric correction algorithm: multiple scattering effects.
Gordon, H R; Castaño, D J
1987-06-01
An analysis of the errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm is presented in detail. This was prompted by the observations of others that significant errors would be encountered if the present algorithm were applied to a hypothetical instrument possessing higher radiometric sensitivity than the present CZCS. This study provides CZCS users sufficient information with which to judge the efficacy of the current algorithm with the current sensor and enables them to estimate the impact of the algorithm-induced errors on their applications in a variety of situations. The greatest source of error is the assumption that the molecular and aerosol contributions to the total radiance observed at the sensor can be computed separately. This leads to the requirement that a value epsilon'(lambda,lambda(0)) for the atmospheric correction parameter, which bears little resemblance to its theoretically meaningful counterpart, must usually be employed in the algorithm to obtain an accurate atmospheric correction. The behavior of epsilon'(lambda,lambda(0)) with the aerosol optical thickness and aerosol phase function is thoroughly investigated through realistic modeling of radiative transfer in a stratified atmosphere over a Fresnel-reflecting ocean. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, allowing elucidation of the errors along typical CZCS scan lines; this is important since, in the normal application of the algorithm, it is assumed that the same value of epsilon' can be used for an entire CZCS scene or at least for a reasonably large subscene. Two types of variation of epsilon' are found in models for which it would be constant in the single scattering approximation: (1) variation with scan angle in scenes in which a relatively large portion of the aerosol scattering phase function would be examined
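In the single-scattering picture underlying the algorithm, the correction amounts to extrapolating the aerosol radiance from a band where the ocean is effectively black, scaled by the correction parameter epsilon. A schematic sketch with invented radiances, not the operational CZCS code:

```python
def correct_band(Lt, Lr, La_ref, epsilon):
    """Water-leaving signal: total radiance minus Rayleigh minus the
    extrapolated aerosol contribution.
    Lt: total radiance at the sensor in the band of interest
    Lr: pre-computed Rayleigh (molecular) radiance in that band
    La_ref: aerosol radiance in the reference band (water assumed black)
    epsilon: correction parameter epsilon(lambda, lambda0)"""
    return Lt - Lr - epsilon * La_ref

# Toy numbers (hypothetical radiances, arbitrary units):
Lt_443, Lr_443 = 9.0, 5.0
La_670 = 2.0           # aerosol radiance isolated in the 670 nm band
epsilon = 1.1          # slightly blue-enhanced aerosol scattering
Lw = correct_band(Lt_443, Lr_443, La_670, epsilon)
print(Lw)
```

The abstract's central point is that multiple scattering breaks the separability assumed in this decomposition, so the epsilon' actually needed in practice differs from the single-scattering value and can vary across a scan line.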
Aerosol Retrieval and Atmospheric Correction Algorithms for EPIC
NASA Astrophysics Data System (ADS)
Wang, Y.; Lyapustin, A.; Marshak, A.; Korkin, S.; Herman, J. R.
2011-12-01
EPIC is a multi-spectral imager onboard the planned Deep Space Climate ObserVatoRy (DSCOVR), designed for observations of the full illuminated disk of the Earth with high temporal and coarse spatial resolution (10 km) from the Lagrangian L1 point. During the course of the day, EPIC will view the same Earth surface area over the full range of solar and view zenith angles at the equator, with a fixed scattering angle near the backscattering direction. This talk will describe a new aerosol retrieval/atmospheric correction algorithm developed for EPIC and tested with EPIC Simulator data. The algorithm uses a time series approach and consists of two stages: the first stage periodically re-initializes the surface spectral bidirectional reflectance (BRF) on stable low-AOD days. Such days can be selected based on equality of the measured reflectance between the morning and afternoon reciprocal view geometries of EPIC. In the second stage, the algorithm monitors the diurnal cycle of aerosol optical depth and fine mode fraction based on the known spectral surface BRF. Testing of the developed algorithm with simulated EPIC data over the continental USA showed good accuracy of the AOD retrievals (10-20%) except over very bright surfaces.
A Novel Modified Omega-K Algorithm for Synthetic Aperture Imaging Lidar through the Atmosphere
Guo, Liang; Xing, Mendao; Tang, Yu; Dan, Jing
2008-01-01
The spatial resolution of a conventional imaging lidar system is constrained by the diffraction limit of the telescope's aperture. Combining lidar with synthetic aperture (SA) processing techniques may overcome the diffraction limit and pave the way for a higher-resolution airborne or spaceborne remote sensor. For a lidar transmitting a frequency-modulated continuous-wave (FMCW) signal, the motion during the transmission of a sweep and the reception of the corresponding echo is expected to be one of the major problems. The modified Omega-K algorithm given here takes this continuous motion into account: it efficiently compensates the Doppler shift induced by the continuous motion, as well as the azimuth ambiguity caused by the low pulse recurrence frequency imposed by the tunable laser. Atmospheric turbulence is then simulated by generating phase screens (PS) that follow the von Karman spectrum using the Fourier transform method. Finally, computer simulation shows the validity of the modified algorithm and indicates that, if the synthetic aperture length does not exceed the effective coherence length of the atmosphere for SAIL, the effect of the turbulence can be ignored.
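FFT-based generation of a von Karman phase screen, of the kind used for the turbulence simulation above, can be sketched as follows. The grid size, Fried parameter, and outer scale are illustrative, and the low-frequency subharmonic compensation often added in practice is omitted for brevity.

```python
import numpy as np

def von_karman_screen(n, delta, r0, L0, seed=0):
    """Random phase screen with a von Karman spectrum via the Fourier
    transform method. n: grid size, delta: pixel pitch [m],
    r0: Fried parameter [m], L0: outer scale [m]."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, d=delta)               # spatial frequency [1/m]
    f2 = f.reshape(-1, 1) ** 2 + f.reshape(1, -1) ** 2
    # von Karman phase power spectral density [rad^2 m^2]
    psd = 0.023 * r0 ** (-5.0 / 3.0) * (f2 + 1.0 / L0 ** 2) ** (-11.0 / 6.0)
    psd[0, 0] = 0.0                              # no piston term
    df = 1.0 / (n * delta)
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    cn *= np.sqrt(psd) * df                      # shape white noise by PSD
    return np.real(np.fft.ifft2(cn)) * n ** 2

screen = von_karman_screen(256, 0.02, r0=0.1, L0=50.0)
print(screen.shape)
```

Filtering complex white noise by the square root of the target PSD and inverse transforming yields a screen whose statistics follow the chosen spectrum; the outer scale L0 rolls off the otherwise divergent low-frequency power of the pure Kolmogorov law.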
NASA Astrophysics Data System (ADS)
Zhao, Minghui; Zhao, Xuesen; Li, Zengqiang; Sun, Tao
2014-08-01
In the generation of non-rotationally symmetric microstructure surfaces by turning with a Fast Tool Servo (FTS), non-uniform distribution of the interpolation data points leads to long processing cycles and poor surface quality. To improve this situation, a nearly arc-length tool path generation algorithm is proposed, which generates tool tip trajectory points at nearly equal arc lengths instead of the traditional equal-angle interpolation rule, and adds tool radius compensation. All the interpolation points are equidistant in the radial direction because of the constant feed speed of the X slider; the high-frequency tool radius compensation components appear in both the X and Z directions, which makes it difficult for the X slider, due to its large mass, to follow the input orders. Newton's iterative method is used to calculate the coordinate of the neighbouring contour tangent point, with the X position of the interpolation point as the initial value; in this way the new Z coordinate is obtained and the high-frequency motion component in the X direction is transferred into the Z direction. A typical microstructure with a 4 μm PV value, mixed from two sine waves of 70 μm wavelength, was taken as a test case: the maximum profile error at an angle of fifteen degrees is less than 0.01 μm when turning with a diamond tool of large radius (80 μm). The sinusoidal grid was machined successfully on an ultra-precision lathe; the wavelength is 70.2278 μm and the Ra value is 22.81 nm, evaluated from data points generated by filtering out the first five harmonics.
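The Newton iteration for the tool radius compensation can be sketched on a single sine wave. The amplitude, wavelength, and tool radius below are illustrative (chosen so the tool radius stays below the surface's minimum radius of curvature, avoiding gouging), not the paper's exact part.

```python
import math

# Sinusoidal micro-structure z = f(x); units are micrometres.
A, LAM, R = 1.0, 70.0, 80.0        # amplitude, wavelength, tool radius

f = lambda x: A * math.sin(2 * math.pi * x / LAM)
fp = lambda x: A * (2 * math.pi / LAM) * math.cos(2 * math.pi * x / LAM)
fpp = lambda x: -A * (2 * math.pi / LAM) ** 2 * math.sin(2 * math.pi * x / LAM)

def contact_point(X0, tol=1e-12, max_iter=50):
    """Newton iteration for the contact (tangent) point x such that
    offsetting it along the surface normal by the tool radius R puts
    the tool centre at abscissa X0. The interpolation point X0 serves
    as the initial value, as in the abstract."""
    x = X0
    for _ in range(max_iter):
        s = math.sqrt(1.0 + fp(x) ** 2)
        g = x - R * fp(x) / s - X0          # centre abscissa minus target
        dg = 1.0 - R * fpp(x) / s ** 3
        step = g / dg
        x -= step
        if abs(step) < tol:
            break
    return x

X0 = 10.0
xc = contact_point(X0)
zc = f(xc) + R / math.sqrt(1.0 + fp(xc) ** 2)   # tool-centre height
print(xc, zc)
```

Solving for the contact point at a commanded X position puts the high-frequency compensation motion into the computed Z coordinate, which is the decomposition the abstract describes.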
Algorithmic vs. finite difference Jacobians for infrared atmospheric radiative transfer
NASA Astrophysics Data System (ADS)
Schreier, Franz; Gimeno García, Sebastián; Vasquez, Mayte; Xu, Jian
2015-10-01
Jacobians, i.e. partial derivatives of the radiance and transmission spectrum with respect to the atmospheric state parameters to be retrieved from remote sensing observations, are important for the iterative solution of the nonlinear inverse problem. Finite difference Jacobians are easy to implement, but computationally expensive and possibly of dubious quality; on the other hand, analytical Jacobians are accurate and efficient, but the implementation can be quite demanding. GARLIC, our "Generic Atmospheric Radiation Line-by-line Infrared Code", utilizes algorithmic differentiation (AD) techniques to implement derivatives w.r.t. atmospheric temperature and molecular concentrations. In this paper, we describe our approach for differentiation of the high resolution infrared and microwave spectra and provide an in-depth assessment of finite difference approximations using "exact" AD Jacobians as a reference. The results indicate that the "standard" two-point finite differences with 1 K and 1% perturbation for temperature and volume mixing ratio, respectively, can exhibit substantial errors, and central differences are significantly better. However, these deviations do not transfer into the truncated singular value decomposition solution of a least squares problem. Nevertheless, AD Jacobians are clearly recommended because of the superior speed and accuracy.
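The comparison can be reproduced in miniature on a Beer-Lambert transmittance, whose derivative is known in closed form; the cross-section and mixing ratio below are invented, and the exact derivative stands in for the AD reference.

```python
import math

# Transmittance T(q) = exp(-sigma_m * q): its Jacobian dT/dq is known
# in closed form, standing in for an AD-computed reference.
sigma_m = 50.0                      # cross-section times air mass (toy)
T = lambda q: math.exp(-sigma_m * q)

q0 = 0.02                           # volume mixing ratio (toy)
exact = -sigma_m * T(q0)            # closed-form ("AD") Jacobian

h = 0.01 * q0                       # the 1 percent perturbation of the text
forward = (T(q0 + h) - T(q0)) / h           # two-point forward difference
central = (T(q0 + h) - T(q0 - h)) / (2 * h) # central difference

err_fwd = abs(forward - exact)
err_ctr = abs(central - exact)
print(err_ctr < err_fwd)            # central differences are clearly better
```

The forward difference carries an O(h) truncation error while the central difference is O(h^2), which mirrors the paper's finding that two-point differences with standard perturbations can exhibit substantial errors and central differences are significantly better.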
Models and algorithms for vision through the atmosphere
NASA Astrophysics Data System (ADS)
Narasimhan, Srinivasa G.
2004-11-01
Current vision systems are designed to perform in clear weather. Needless to say, in any outdoor application, there is no escape from bad weather. Ultimately, computer vision systems must include mechanisms that enable them to function (even if somewhat less reliably) in the presence of haze, fog, rain, hail and snow. We begin by studying the visual manifestations of different weather conditions. For this, we draw on what is already known about atmospheric optics, and identify effects caused by bad weather that can be turned to our advantage; we are not only interested in what bad weather does to vision but also what it can do for vision. This thesis presents a novel and comprehensive set of models, algorithms and image datasets for better image understanding in bad weather. The models presented here can be broadly classified into single scattering and multiple scattering models. Existing single scattering models like attenuation and airlight form the basis of three new models viz., the contrast model, the dichromatic model and the polarization model. Each of these models is suited to different types of atmospheric and illumination conditions as well as different sensor types. Based on these models, we develop algorithms to recover pertinent scene properties, such as 3D structure, and clear day scene contrasts and colors, from one or more images taken under poor weather conditions. Next, we present an analytic model for multiple scattering of light in a scattering medium. From a single image of a light source immersed in a medium, interesting properties of the medium can be estimated. If the medium is the atmosphere, the weather condition and the visibility of the atmosphere can be estimated. These quantities can in turn be used to remove the glows around sources obtaining a clear picture of the scene. Based on these results, the camera serves as a "visual weather meter". Our analytic model can be used to analyze scattering in virtually any scattering medium
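The attenuation-plus-airlight image formation model that the contrast, dichromatic, and polarization models build on can be written down and inverted directly. A toy one-dimensional "scene" sketch with invented values; the transmission would in practice come from two weather conditions, polarization, or known depths.

```python
import numpy as np

# Observed intensity I = J*t + A*(1 - t), with scene radiance J,
# airlight A, and transmission t = exp(-beta * d).
A_air = 1.0                          # horizon / airlight radiance
beta = 0.5                           # scattering coefficient
d = np.linspace(1, 5, 5)             # depths of 5 scene points
J = np.array([0.2, 0.9, 0.4, 0.7, 0.1])   # true clear-day radiances

t = np.exp(-beta * d)
I = J * t + A_air * (1 - t)          # hazy observation: contrast decays

J_rec = (I - A_air * (1 - t)) / t    # inversion restores clear-day values
print(np.allclose(J_rec, J))
```

With noise-free inputs the inversion is exact; in practice the division by small t at large depths amplifies noise, which is why the thesis's estimates of structure and transmission matter.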
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
The desire to create more complex visual scenes in modern flight simulators outpaces recent increases in processor speed. As a result, simulation transport delay remains a problem. New approaches for compensating the transport delay in a flight simulator have been developed and are presented in this report. The lead/lag filter, the McFarland compensator, and the Sobiski/Cardullo state space filter are three prominent compensators. The lead/lag filter provides some phase lead while introducing significant gain distortion in the same frequency interval. The McFarland predictor can compensate for much longer delay and causes smaller gain error at low frequencies than the lead/lag filter, but the gain distortion beyond the design frequency interval is still significant, and it also causes large spikes in the prediction. Although, theoretically, the Sobiski/Cardullo predictor, a state space filter, can compensate the longest delay with the least gain distortion of the three, it has remained in laboratory use due to several limitations. The first novel compensator is an adaptive predictor that makes use of the Kalman filter algorithm in a unique manner, so that the predictor can accurately provide the desired amount of prediction while significantly reducing the large spikes caused by the McFarland predictor. Among several simplified online adaptive predictors, this report illustrates mathematically why the stochastic approximation algorithm achieves the best compensation results. A second novel approach employed a reference aircraft dynamics model to implement a state space predictor on a flight simulator. The practical implementation formed the filter state vector from the operator's control input and the aircraft states. The relationship between the reference model and the compensator performance was investigated in great detail, and the best performing reference model was selected for implementation in the final tests. Theoretical analyses of data from offline
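The idea of compensating transport delay by prediction can be sketched with an ordinary constant-velocity Kalman filter extrapolated ahead by the delay. The report's adaptive predictor is more elaborate; the model, noise levels, and test signal here are illustrative only.

```python
import numpy as np

dt, delay_steps = 0.01, 10          # 10 ms frame, 100 ms transport delay
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity model
H = np.array([[1.0, 0.0]])                # position-only measurement
Q = np.diag([1e-6, 1e-2])                 # process noise (absorbs accel.)
Rn = np.array([[1e-4]])                   # measurement noise
Fd = np.linalg.matrix_power(F, delay_steps)   # delay-ahead transition

x = np.zeros(2)
P = np.eye(2)

t = np.arange(0.0, 2.0, dt)
signal = np.sin(2 * np.pi * 0.5 * t)      # slow operator-like input

pred, naive = [], []
for z in signal:
    # Standard Kalman predict/update on the current measurement.
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + Rn
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    # Extrapolate the state ahead by the transport delay.
    pred.append((Fd @ x)[0])
    naive.append(z)                       # no compensation

target = signal[delay_steps:]             # what should be displayed
err_pred = np.mean((np.array(pred)[:-delay_steps] - target) ** 2)
err_naive = np.mean((np.array(naive)[:-delay_steps] - target) ** 2)
print(err_pred < err_naive)
```

For a slowly varying input the extrapolation error grows only with the signal's curvature over the delay interval, so the predicted output tracks the delay-advanced signal far better than the uncompensated one.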
Validation of aerosol estimation in atmospheric correction algorithm ATCOR
NASA Astrophysics Data System (ADS)
Pflug, B.; Main-Knorn, M.; Makarau, A.; Richter, R.
2015-04-01
Atmospheric correction of satellite images is necessary for many applications of remote sensing, e.g. computation of vegetation indices and biomass estimation. The first step in atmospheric correction is estimation of the actual aerosol properties. Due to the spatial and temporal variability of aerosol amount and type, this step is crucial for an accurate correction of satellite data. Consequently, the validation of aerosol estimation contributes to the validation of atmospheric correction algorithms. In this study we present the validation of aerosol estimation using our own sun photometer measurements in Central Europe and measurements of AERONET stations at different locations in the world. Our ground-based sun photometer measurements of vertical-column aerosol optical thickness (AOT) spectra are performed synchronously to overpasses of the satellites RapidEye, Landsat 5, Landsat 7 and Landsat 8. Selected AERONET data are collocated to Landsat 8 overflights. The validation of the aerosol retrieval is conducted by a direct comparison of ground-measured AOT with satellite-derived AOT using the ATCOR tool for the selected satellite images. The mean uncertainty found in our experiments is AOT550nm ~ 0.03±0.02 for cloudless conditions with a cloud+haze fraction below 1%. This AOT uncertainty corresponds approximately to an uncertainty in surface albedo of Δρ ~ 0.003. Inclusion of cloudy and hazy satellite images in the analysis results in a mean AOT550nm uncertainty of ~0.04±0.03 for both RapidEye and Landsat imagery. About 1/3 of the samples perform with an AOT uncertainty better than 0.02 and about 2/3 with an AOT uncertainty better than 0.05.
NASA Technical Reports Server (NTRS)
Pagnutti, Mary
2006-01-01
This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate the accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. The presentation focuses on preliminary results of the satellite-based atmospheric correction algorithm only.
NASA Technical Reports Server (NTRS)
Guo, Liwen; Cardullo, Frank M.; Kelly, Lon C.
2007-01-01
This report summarizes the results of delay measurement and piloted performance tests that were conducted to assess the effectiveness of the adaptive compensator and the state space compensator for alleviating the phase distortion of transport delay in the visual system of the VMS at the NASA Langley Research Center. Piloted simulation tests were conducted to assess the effectiveness of the two novel compensators in comparison to the McFarland predictor and the baseline system with no compensation. Thirteen pilots with heterogeneous flight experience executed straight-in and offset approaches, at various delay configurations, on a flight simulator where different predictors were applied to compensate for transport delay. The glideslope and touchdown errors, power spectral density of the pilot control inputs, NASA Task Load Index, and Cooper-Harper ratings of the handling qualities were employed for the analyses. The overall analyses show that the adaptive predictor results in slightly poorer compensation for short added delay (up to 48 ms) and better compensation for long added delay (up to 192 ms) than the McFarland compensator. The analyses also show that the state space predictor is somewhat superior to the McFarland compensator for short delay and significantly superior for long delay.
An Algorithm to Atmospherically Correct Visible and Thermal Airborne Imagery
NASA Technical Reports Server (NTRS)
Rickman, Doug L.; Luvall, Jeffrey C.; Schiller, Stephen; Arnold, James E. (Technical Monitor)
2000-01-01
The program Watts implements a system of physically based models developed by the authors, described elsewhere, for the removal of atmospheric effects in multispectral imagery. The band range treated covers the visible, near IR, and thermal IR. Input to the program begins with atmospheric models specifying transmittance and path radiance. The system also requires the sensor's spectral response curves and knowledge of the scanner's geometric definition. Radiometric characterization of the sensor during data acquisition is also necessary. While the authors contend that active calibration is critical for serious analytical efforts, we recognize that most remote sensing systems, either airborne or space borne, do not as yet attain that minimal level of sophistication. Therefore, Watts will also use semi-active calibration where necessary and available. All of the input is then reduced to common physical units, from which it is practical to convert raw sensor readings into geophysically meaningful units. There are a large number of intricate details necessary to bring an algorithm of this type to fruition and even to use the program. Further, at this stage of development the authors are uncertain as to the optimal presentation or the minimal analytical techniques that users of this type of software must have. Therefore, Watts permits users to break out and analyze the input in various ways. Implemented in REXX under OS/2, the program is designed with attention to the probability that it will be ported to other systems and other languages. Further, as it is in REXX, it is relatively simple for anyone who is literate in any computer language to open the code and modify it to meet their needs. The authors have employed Watts in their research addressing precision agriculture and urban heat islands.
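The unit-reduction step described above (raw sensor readings to geophysically meaningful units) can be sketched in a few lines; the Python fragment below is an illustration only, with made-up gain, offset, spectral response, and atmospheric values, not the actual Watts code:

```python
import numpy as np

def dn_to_radiance(dn, gain, offset):
    """Hypothetical linear radiometric calibration: L = (DN - offset) / gain."""
    return (float(dn) - offset) / gain

def band_effective(quantity, response):
    """Weight a spectral quantity by the sensor's relative spectral response."""
    quantity, response = np.asarray(quantity, float), np.asarray(response, float)
    return float(np.sum(quantity * response) / np.sum(response))

# Illustrative numbers for a single visible band (not real sensor values)
rsr = np.array([0.1, 0.8, 1.0, 0.7, 0.1])             # relative spectral response
path_radiance = np.array([1.2, 1.1, 1.0, 0.95, 0.9])  # from an atmospheric model
transmittance = np.array([0.80, 0.82, 0.84, 0.85, 0.86])

L_sensor = dn_to_radiance(dn=812, gain=5.0, offset=12.0)  # at-sensor radiance
Lp = band_effective(path_radiance, rsr)                   # band path radiance
tau = band_effective(transmittance, rsr)                  # band transmittance

# Ground-leaving radiance after removing path radiance and transmittance loss
L_ground = (L_sensor - Lp) / tau
```

The real program additionally folds in scanner geometry and per-acquisition radiometric characterization, which are omitted here.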
NASA Technical Reports Server (NTRS)
Freedman, Ellis; Ryan, Robert; Pagnutti, Mary; Holekamp, Kara; Gasser, Gerald; Carver, David; Greer, Randy
2007-01-01
Spectral Dark Subtraction (SDS) provides good ground reflectance estimates across a variety of atmospheric conditions with no knowledge of those conditions. The algorithm may be sensitive to errors from stray light, calibration, and excessive haze/water vapor. SDS seems to provide better estimates than traditional algorithms using on-site atmospheric measurements much of the time.
Wang, Wei; Chen, Xiyuan
2016-08-10
Modeling and compensation of temperature drift is an important method for improving the precision of fiber-optic gyroscopes (FOGs). In this paper, a new method of modeling and compensation for FOGs based on improved particle swarm optimization (PSO) and support vector machine (SVM) algorithms is proposed. The convergence speed and reliability of PSO are improved by introducing a dynamic inertia factor. The regression accuracy of SVM is improved by introducing a combined kernel function with four parameters and piecewise regression with fixed steps. The steps are as follows. First, the parameters of the combined kernel function are optimized by the improved PSO algorithm. Second, the proposed SVM kernel function is used to carry out piecewise regression, yielding the regression model. Third, the temperature drift is compensated for using the regression data. In terms of mean square percentage error, the regression accuracy of the proposed method improved by 83.81% compared to the traditional SVM. PMID:27534465
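The dynamic-inertia PSO idea can be sketched as follows. This is an illustrative stand-in (a linearly decaying inertia factor minimizing a toy objective), not the paper's exact algorithm; in the paper the decision variables would be the four combined-kernel parameters of the SVM:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_min(f, dim, n=20, iters=100, lo=-5.0, hi=5.0):
    """PSO with a dynamic inertia factor that decays linearly over iterations,
    a common way to trade exploration for exploitation."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters          # dynamic inertia: 0.9 -> 0.4
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

# Stand-in objective: in the paper this would be the SVM regression error
# as a function of the combined-kernel parameters.
sphere = lambda p: float(np.sum(p ** 2))
best, best_val = pso_min(sphere, dim=4)
```

The decaying inertia keeps early iterations exploratory and late iterations locally refining, which is the mechanism behind the claimed convergence-speed improvement.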
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi
2015-02-01
In order to meet aerodynamic requirements, conformal, thin-walled infrared domes and windows are the development trend for future high-speed aircraft. However, these parts usually have low stiffness, the cutting force changes with axial position, and it is very difficult to meet the shape-accuracy requirement in a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by the fluctuation of cutting force and the change of stiffness. In this paper, on the basis of an ultra-precision diamond lathe, a contact measuring system with five degrees of freedom is developed to achieve high-accuracy on-machine measurement of conformal thin-walled parts. For high-gradient surfaces, an optimization algorithm for the distribution of measuring points is designed using a data-screening method. The influence of sampling frequency on measuring errors is analyzed and the best sampling frequency is found with a planning algorithm; the effects of environmental factors and fitting errors are kept within a low range, and the measuring accuracy for the conformal dome is greatly improved during on-machine measurement. For an MgF2 conformal dome with a high-gradient surface, compensating turning is implemented using the designed on-machine measuring algorithm. The shape error is less than PV 0.8 μm, greatly improved from PV 3 μm before compensating turning, which verifies the correctness of the measuring algorithm.
Yang, Jun; Liang, Bin; Zhang, Tao; Song, Jingyan
2011-01-01
Star centroid estimation is the most important operation, directly affecting the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and the truncation error, which result from the discretization approximation and sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid position under different Gaussian widths of the star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on least squares support vector regression (LSSVR) with a Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10^-5 pixels. PMID:22164021
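A minimal sketch of where the systematic error comes from, using a plain center-of-mass centroid on an assumed noise-free 1-D Gaussian star in a 5-pixel window (window size and Gaussian width are illustrative):

```python
import numpy as np

def star_window(true_x, sigma=0.7, size=5):
    """Sample a Gaussian star PSF on a 1-D window of `size` pixels
    (pixel centers at 0..size-1); illustrative, noise-free."""
    px = np.arange(size)
    return np.exp(-(px - true_x) ** 2 / (2 * sigma ** 2))

def com_centroid(intensity):
    """Plain center-of-mass centroid; its deviation from the true position
    is the systematic (discretization + truncation) error analyzed above."""
    px = np.arange(intensity.size)
    return float(np.sum(px * intensity) / np.sum(intensity))

true_pos = 2.3                       # true sub-pixel centroid in a 5-pixel window
est = com_centroid(star_window(true_pos))
systematic_error = est - true_pos    # discretization + window-truncation effects
```

Sweeping `true_pos` across the pixel reproduces the familiar S-curve of centroiding error that the LSSVR compensator is trained to remove.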
Feng, Yibo; Li, Xisheng; Zhang, Xiaojuan
2015-01-01
We present an adaptive algorithm for a system integrating micro-electro-mechanical systems (MEMS) gyroscopes and a compass to eliminate the influence of the environment, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model with changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment with the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope under changing temperature. The heading error is less than 0.6° in the static temperature experiment and remains within the range from −2° to 5° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm exhibits strong adaptability to changing temperature and performs significantly better than KF and MLR in compensating the temperature drift of a gyroscope and eliminating the influence of temperature variation. PMID:25985165
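The strong-tracking idea (a fading factor inflating the predicted covariance so the filter keeps adapting when the drift changes) can be sketched in scalar form; the fading-factor heuristic below is a simplification assumed for illustration, not the paper's exact STKF derivation, and the noise levels are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar strong-tracking KF sketch: state = gyro temperature-drift bias.
q, r = 1e-4, 0.05          # process / measurement noise variances (assumed)
x_hat, p = 0.0, 1.0
true_bias = 0.0
errs = []
for k in range(400):
    true_bias += 0.01       # slowly drifting truth
    z = true_bias + rng.normal(0.0, r ** 0.5)   # compass-aided observation
    innov = z - x_hat
    # crude fading factor: inflate covariance when the innovation is too large
    lam = max(1.0, innov ** 2 / (p + q + r))
    p = lam * (p + q)                            # strong-tracking prediction
    k_gain = p / (p + r)
    x_hat += k_gain * innov
    p *= (1.0 - k_gain)
    errs.append(abs(x_hat - true_bias))
mean_tail_err = float(np.mean(errs[200:]))
```

Without the fading factor the gain decays and the estimate lags a drifting bias; inflating `p` keeps the gain large enough to track it.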
Design of Jitter Compensation Algorithm for Robot Vision Based on Optical Flow and Kalman Filter
Wang, B. R.; Jin, Y. L.; Shao, D. L.; Xu, Y.
2014-01-01
Image jitters occur in video from an autonomous robot moving on a brick road, which reduces the precision of vision-based robot operation. In order to compensate for the image jitters, affine transformation kinematics were established to obtain the six image motion parameters. A feature point pair detection method was designed based on the eigenvalues of the feature windows' gradient matrices, and the motion parameter equations were solved using the least-squares method with matching point pairs obtained from the optical flow. The condition number of the coefficient matrix was proposed to quantitatively analyze the effect of matching errors on parameter-solving errors. A Kalman filter was adopted to smooth the image motion parameters. Computational cases show that more point pairs are beneficial for obtaining more precise motion parameters. Integrated jitter compensation software was developed with feature point detection in subwindows, and practical experiments were conducted on two mobile robots. Results show that the compensation time cost is less than the frame sample time and that the Kalman filter is valid for robot vision jitter compensation. PMID:24600320
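The least-squares solution of the six affine motion parameters from matched point pairs, together with the condition number used as the error-sensitivity indicator, can be sketched as follows (the parameter ordering and synthetic data are illustrative assumptions):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of six affine motion parameters (a,b,c,d,tx,ty)
    mapping src -> dst point pairs:
        x' = a*x + b*y + tx,   y' = c*x + d*y + ty.
    More (well-spread) point pairs give a better-conditioned system."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = src.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0], A[0::2, 1], A[0::2, 4] = src[:, 0], src[:, 1], 1.0
    A[1::2, 2], A[1::2, 3], A[1::2, 5] = src[:, 0], src[:, 1], 1.0
    b = dst.reshape(-1)                     # interleaved x', y'
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    cond = float(np.linalg.cond(A))         # the error-sensitivity indicator
    return params, cond

# Synthetic check: translation plus a small rotation-like term
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]])
true = np.array([1.0, 0.02, -0.02, 1.0, 3.0, -2.0])  # a,b,c,d,tx,ty
dst = np.column_stack([
    true[0] * src[:, 0] + true[1] * src[:, 1] + true[4],
    true[2] * src[:, 0] + true[3] * src[:, 1] + true[5],
])
params, cond = estimate_affine(src, dst)
```

A large condition number warns that small matching errors in the point pairs will be amplified in the recovered motion parameters.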
Chun, Se Young
2016-03-01
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for further improving the image quality of PET or SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review works using anatomical information in molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855
Coupled Inertial Navigation and Flush Air Data Sensing Algorithm for Atmosphere Estimation
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark
2015-01-01
This paper describes an algorithm for atmospheric state estimation that is based on a coupling between inertial navigation and flush air data sensing pressure measurements. In this approach, the full navigation state is used in the atmospheric estimation algorithm along with the pressure measurements and a model of the surface pressure distribution to directly estimate atmospheric winds and density using a nonlinear weighted least-squares algorithm. The approach uses a high-fidelity model of the atmosphere stored in table-look-up form, along with simplified models that are propagated along the trajectory within the algorithm to provide prior estimates and covariances to aid the air data state solution. Thus, the method is essentially a reduced-order Kalman filter in which the inertial states are taken from the navigation solution and the atmospheric states are estimated in the filter. The algorithm is applied to data from the Mars Science Laboratory entry, descent, and landing of August 2012. Reasonable estimates of the atmosphere and winds are produced by the algorithm. The observability of winds along the trajectory is examined using an index based on the discrete-time observability Gramian and the pressure measurement sensitivity matrix. The results indicate that bank reversals are responsible for adding information content to the system. The algorithm is then applied to the design of the pressure measurement system for the Mars 2020 mission. The pressure port layout is optimized to maximize the observability of atmospheric states along the trajectory. Linear covariance analysis is performed to assess estimator performance for a given pressure measurement uncertainty. The results indicate that the new tightly-coupled estimator can produce enhanced estimates of atmospheric states when compared with existing algorithms.
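The nonlinear weighted least-squares core can be sketched as a Gauss-Newton iteration; the two-port "pressure model" below is a toy stand-in assumed for illustration, not the MSL surface pressure distribution model:

```python
import numpy as np

def gauss_newton_wls(h, jac, z, W, x0, iters=10):
    """Nonlinear weighted least squares: minimize (z - h(x))^T W (z - h(x)).
    In the paper, h would map atmospheric state (density, winds) to the
    flush-air-data port pressures via the surface pressure model."""
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        r = z - h(x)
        J = jac(x)
        x += np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return x

# Toy model: two "ports" measuring density-times-velocity-squared style terms
h = lambda x: np.array([x[0] * x[1] ** 2, x[0] * (x[1] - 1.0) ** 2])
jac = lambda x: np.array([[x[1] ** 2, 2 * x[0] * x[1]],
                          [(x[1] - 1.0) ** 2, 2 * x[0] * (x[1] - 1.0)]])
x_true = np.array([1.2, 3.0])       # e.g. (density, wind-relative speed)
z = h(x_true)                       # noise-free synthetic measurements
W = np.diag([1.0, 2.0])             # weights ~ inverse measurement variances
x_est = gauss_newton_wls(h, jac, z, W, x0=np.array([1.0, 2.5]))
```

Prior estimates and covariances from the propagated atmosphere model would enter as additional weighted pseudo-measurements, making the scheme the reduced-order filter described above.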
NASA Astrophysics Data System (ADS)
Liu, Jony Jiang; Carhart, Gary W.; Beresnev, Leonid A.; Aubailly, Mathieu; Jackson, Christopher R.; Ejzak, Garrett; Kiamilev, Fouad E.
2014-09-01
Atmospheric turbulence can significantly deteriorate the performance of long-range conventional imaging systems and create difficulties for target identification and recognition. Our in-house developed adaptive optics (AO) system, which contains high-performance deformable mirrors (DMs) and a fast stochastic parallel gradient descent (SPGD) control mechanism, allows effective compensation of such turbulence-induced wavefront aberrations and results in significant improvement in image quality. In addition, we developed an advanced digital synthetic imaging and processing technique, "lucky-region" fusion (LRF), to mitigate image degradation over a large field of view (FOV). The LRF algorithm extracts sharp regions from each image in a series of short-exposure frames and fuses them into a final improved image. We further implemented this algorithm on a VIRTEX-7 field programmable gate array (FPGA) and achieved real-time video processing. Experiments were performed by combining both the AO system and the hardware-implemented LRF processing technique over a near-horizontal 2.3 km atmospheric propagation path. Our approach can also serve as a universal real-time imaging and processing system with a general camera link input, a user controller interface, and a DVI video output.
Rain detection and removal algorithm using motion-compensated non-local mean filter
NASA Astrophysics Data System (ADS)
Song, B. C.; Seo, S. J.
2015-03-01
This paper proposes a novel rain detection and removal algorithm robust against camera motion. It is very difficult to detect and remove rain in video with camera motion, so most previous works assume that the camera is fixed; however, this limits their practical use. The proposed algorithm initially detects possible rain streaks by using spatial properties such as the luminance and structure of rain streaks. Then, rain streak candidates are selected based on a Gaussian distribution model. Next, a non-rain block matching algorithm is performed between adjacent frames to find blocks similar to each block containing rain pixels. If similar blocks are found, the rain region of the block is reconstructed by non-local mean (NLM) filtering using the similar neighbors. Experimental results show that the proposed method outperforms previous works in terms of objective and subjective visual quality.
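The NLM reconstruction step can be sketched as follows, with the block-matching result assumed given and illustrative parameter values (the standard NLM exponential weighting, not the paper's exact settings):

```python
import numpy as np

def nlm_restore(target, candidates, h=10.0):
    """Non-local mean restoration of a rain-affected block: each similar
    (non-rain) candidate block is weighted by exp(-||diff||^2 / h^2)."""
    target = np.asarray(target, float)
    weights, acc = 0.0, np.zeros_like(target)
    for c in candidates:
        c = np.asarray(c, float)
        w = np.exp(-np.sum((target - c) ** 2) / h ** 2)
        acc += w * c
        weights += w
    return acc / weights

# Toy 3x3 block with a bright "rain streak" down the middle column
rainy = np.array([[10, 80, 10], [10, 90, 10], [10, 85, 10]], float)
# Similar blocks found by block matching in adjacent (motion-compensated) frames
neighbors = [rainy - np.array([0.0, 60.0, 0.0]),
             rainy - np.array([0.0, 70.0, 0.0])]
restored = nlm_restore(rainy, neighbors, h=50.0)
```

The rain-free columns pass through essentially unchanged while the streak column is pulled toward the temporally matched, non-rain values.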
An Atmospheric Guidance Algorithm Testbed for the Mars Surveyor Program 2001 Orbiter and Lander
NASA Technical Reports Server (NTRS)
Striepe, Scott A.; Queen, Eric M.; Powell, Richard W.; Braun, Robert D.; Cheatwood, F. McNeil; Aguirre, John T.; Sachi, Laura A.; Lyons, Daniel T.
1998-01-01
An Atmospheric Flight Team was formed by the Mars Surveyor Program '01 mission office to develop aerocapture and precision landing testbed simulations and candidate guidance algorithms. Three- and six-degree-of-freedom Mars atmospheric flight simulations have been developed for testing, evaluation, and analysis of candidate guidance algorithms for the Mars Surveyor Program 2001 Orbiter and Lander. These simulations are built around the Program to Optimize Simulated Trajectories. Subroutines were supplied by Atmospheric Flight Team members for modeling the Mars atmosphere, spacecraft control system, aeroshell aerodynamic characteristics, and other Mars 2001 mission specific models. This paper describes these models and their perturbations applied during Monte Carlo analyses to develop, test, and characterize candidate guidance algorithms.
Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael
2014-01-14
In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS devices that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions, including non-locality and sparsity. The Polish grid is used to demonstrate the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.
Ahmed, Ashik; Al-Amin, Rasheduzzaman; Amin, Ruhul
2014-01-01
This paper proposes the design of a Static Synchronous Series Compensator (SSSC)-based damping controller to enhance the stability of a Single Machine Infinite Bus (SMIB) system by means of the Invasive Weed Optimization (IWO) technique. A conventional PI controller is used as the SSSC damping controller, taking rotor speed deviation as the input. The damping controller parameters are tuned with IWO using a cost function based on the time integral of absolute error. The performance of the IWO-based controller is compared to that of a Particle Swarm Optimization (PSO)-based controller. Time-domain simulation results are presented, and the performance of the controllers under different loading conditions and fault scenarios is studied in order to illustrate the effectiveness of the IWO-based design approach. PMID:25140288
NASA Astrophysics Data System (ADS)
Li, Xiao-Xing; Hu, Jing; Chung, Kwansoo; Zhou, Guo-Feng; Yao, Rao
2011-08-01
We present a finite element method for optimizing the die contour to compensate for springback in the sheet metal forming process, based on a genetic algorithm and isotropic-kinematic hardening laws. A Chaboche-type combined isotropic-kinematic hardening law was formulated and used to account for the Bauschinger effect and transient behavior in the finite element analysis. Using an S-shaped stretch bending process as an example, it was demonstrated that the new method optimizes the die profile effectively. The good performance of the die profile optimized with the new method was also experimentally verified, confirming that the new method may be more effective in cost reduction than common design practices in practical applications.
Veligdan, James T.
1993-01-01
Atmospheric effects on sighting measurements are compensated for by adjusting any sighting measurements using a correction factor that does not depend on atmospheric state conditions such as temperature, pressure, density, or turbulence. The correction factor is accurately determined using a precisely measured physical separation between two color components of a light beam (or beams) that has been generated using either a two-color laser or two lasers that project different colored beams. The physical separation is precisely measured by fixing the position of a short beam pulse and measuring the physical separation between the two fixed-in-position components of the beam. This precisely measured physical separation is then used in a relationship that includes the indexes of refraction for each of the two colors of the laser beam in the atmosphere through which the beam is projected, thereby determining the absolute displacement of one wavelength component of the laser beam from a straight line of sight for that projected component of the beam. This absolute displacement is useful for correcting optical measurements, such as those developed in surveying measurements made in a test area that includes the same dispersion effects of the atmosphere on the optical measurements. The means and method of the invention are suitable for use with either single-ended or double-ended systems.
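Under the standard two-color dispersion argument (atmospheric bending scales with each color's refractivity n − 1, while the inter-color separation scales with n1 − n2), the absolute displacement follows directly from the measured separation. The refractivity and separation values below are rough, illustrative numbers, not the patent's:

```python
# Two-color refraction correction sketch (illustrative values only).
# Because both colors traverse the same air, the common geometry cancels and
#     d1 / s = (n1 - 1) / (n1 - n2)
# gives the absolute displacement d1 of color 1 from the straight line of
# sight, with no need to know temperature, pressure, or density.
n1 = 1.0 + 2.79e-4   # refractive index, shorter-wavelength component (assumed)
n2 = 1.0 + 2.74e-4   # refractive index, longer-wavelength component (assumed)
s = 0.5e-3           # measured separation between the two colors, meters (assumed)

d1 = s * (n1 - 1.0) / (n1 - n2)   # absolute displacement of component 1, meters
```

Note the large amplification factor (n1 − 1)/(n1 − n2), which is why the separation must be measured very precisely.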
CEMERLL: The Propagation of an Atmosphere-Compensated Laser Beam to the Apollo 15 Lunar Array
NASA Technical Reports Server (NTRS)
Fugate, R. Q.; Leatherman, P. R.; Wilson, K. E.
1997-01-01
Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes.
Ma, Shaodong; Wilkinson, Antony J; Paulson, Kevin S
2014-02-01
A non-linear control method, known as Variable Structure Control (VSC), is employed to reduce the duration of ultrasonic (US) transducer transients. A physically realizable system using a simplified form of the VSC algorithm is proposed for standard piezoelectric transducers and simulated. Results indicate a VSC-controlled transmitter reduces the transient duration to less than a carrier wave cycle. Applications include high capacity ultrasound communication and localization systems. PMID:23993746
Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael
2014-01-14
We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.
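The master problem described above can be written compactly; the notation below is a sketch under assumed symbols (line flows f, thermal limits f^max, inductance modifications Δx), not the authors' exact formulation:

```latex
% Sparsity-promoting SC placement: the l1 objective drives most entries of
% \Delta x to zero, so devices land on only a few lines.
\begin{aligned}
\min_{\Delta x} \quad & \|\Delta x\|_{1} \\
\text{s.t.} \quad & \bigl|f_{\ell}(x + \Delta x)\bigr| \le f_{\ell}^{\max}
    \quad \forall\ \text{lines } \ell, \\
& \underline{\Delta x} \le \Delta x \le \overline{\Delta x} .
\end{aligned}
```

Each LP in the succession linearizes the flow constraints f_ℓ around the current operating point, and cutting planes add only the constraints that bind, which is what makes the Polish-grid instances tractable.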
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-24
We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l_{1} norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-24
We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polishmore » Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.« less
NASA Astrophysics Data System (ADS)
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-01
We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.
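The cutting-plane acceleration mentioned above can be illustrated on a toy problem. The sketch below is not the authors' formulation: the one-dimensional piecewise-linear objective, the cut list, and the grid search standing in for the LP solve are all illustrative stand-ins for the idea of solving a relaxed model over a growing set of active constraints.

```python
# Toy cutting-plane loop: minimize f(x) = max_i (a_i*x + b_i) on [lo, hi]
# by growing a set of active cuts. A grid search stands in for the LP solve
# used in the real algorithm.

cuts = [(-1.0, 0.0), (1.0, -2.0)]     # f(x) = max(-x, x - 2), minimum at x = 1
lo, hi = -5.0, 5.0

def f(x):
    return max(a * x + b for a, b in cuts)

def model_min(active, n=2001):
    # Minimize the relaxed model (max over the active cuts only) by grid search.
    best_x, best_v = lo, float("inf")
    for i in range(n):
        x = lo + (hi - lo) * i / (n - 1)
        v = max(a * x + b for a, b in active)
        if v < best_v:
            best_x, best_v = x, v
    return best_x, best_v

active = [cuts[0]]                     # start from a subset of constraints
for _ in range(10):
    x_star, v_model = model_min(active)
    if f(x_star) - v_model < 1e-9:     # model matches the true objective: done
        break
    # Add the most violated cut at the relaxed optimum.
    active.append(max(cuts, key=lambda c: c[0] * x_star + c[1]))

print(round(x_star, 2), round(f(x_star), 3))
```

Each pass solves the relaxed model and adds the most violated constraint at its minimizer, so only a few of the constraints are ever instantiated, which is the source of the speedup on large grids.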
NASA Technical Reports Server (NTRS)
Wang, Menghua
2003-01-01
The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts appearing in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) with the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.
Brightness-compensated 3-D optical flow algorithm for monitoring cochlear motion patterns
NASA Astrophysics Data System (ADS)
von Tiedemann, Miriam; Fridberger, Anders; Ulfendahl, Mats; de Monvel, Jacques Boutet
2010-09-01
A method for three-dimensional motion analysis designed for live cell imaging by fluorescence confocal microscopy is described. The approach is based on optical flow computation and takes into account brightness variations in the image scene that are not due to motion, such as photobleaching or fluorescence variations that may reflect changes in cellular physiology. The 3-D optical flow algorithm allowed almost perfect motion estimation on noise-free artificial sequences, and performed with a relative error of <10% on noisy images typical of real experiments. The method was applied to a series of 3-D confocal image stacks from an in vitro preparation of the guinea pig cochlea. The complex motions caused by slow pressure changes in the cochlear compartments were quantified. At the surface of the hearing organ, the largest motion component was the transverse one (normal to the surface), but significant radial and longitudinal displacements were also present. The outer hair cell displayed larger radial motion at their basolateral membrane than at their apical surface. These movements reflect mechanical interactions between different cellular structures, which may be important for communicating sound-evoked vibrations to the sensory cells. A better understanding of these interactions is important for testing realistic models of cochlear mechanics.
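The key idea above, estimating motion while allowing brightness variations that are not due to motion, can be sketched in one dimension: extend the usual linearized optical-flow data term with an unknown multiplicative brightness coefficient and solve a small least-squares problem. This is a synthetic illustration of the extra brightness unknown, not the paper's regularized 3-D algorithm.

```python
import math

# 1-D sketch of brightness-compensated optical flow: solve the linearized
# model dI = u*Ix + c*I1 for displacement u and a multiplicative brightness
# term c (e.g. photobleaching) by least squares on synthetic data.

n = 101
I1 = [math.exp(-((i - 50) / 10.0) ** 2) for i in range(n)]
Ix = [(I1[min(i + 1, n - 1)] - I1[max(i - 1, 0)]) / 2.0 for i in range(n)]

u_true, c_true = 0.7, 0.05            # synthetic motion and bleaching
dI = [u_true * Ix[i] + c_true * I1[i] for i in range(n)]

# 2x2 normal equations for the unknowns [u, c]:
a11 = sum(g * g for g in Ix)
a12 = sum(Ix[i] * I1[i] for i in range(n))
a22 = sum(b * b for b in I1)
b1 = sum(Ix[i] * dI[i] for i in range(n))
b2 = sum(I1[i] * dI[i] for i in range(n))
det = a11 * a22 - a12 * a12
u_est = (a22 * b1 - a12 * b2) / det
c_est = (a11 * b2 - a12 * b1) / det
print(round(u_est, 6), round(c_est, 6))
```

Without the c term, the bleaching signal would be misattributed to motion; with it, the two effects separate cleanly as long as the image gradient and the intensity pattern are linearly independent.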
NASA Astrophysics Data System (ADS)
Tonooka, Hideyuki; Palluconi, Frank D.
2002-02-01
The standard atmospheric correction algorithm for the five thermal infrared (TIR) bands of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is currently based on radiative transfer computations with global assimilation data on a pixel-by-pixel basis. In the present paper, we verify this algorithm using 100 ASTER scenes acquired globally during the early mission period. In this verification, the max-min difference (MMD) of the water-surface emissivity retrieved from each scene is used as an atmospheric correction error index: since the water-surface emissivity is well known, a large retrieved MMD indicates that the atmospheric correction error is also likely to be large. The results show that the MMD error retrieved by the standard atmospheric correction algorithm and a typical temperature/emissivity separation algorithm correlates strongly with precipitable water vapor, latitude, elevation, and surface temperature. The expected error in the retrieved MMD is 0.05 for a precipitable water vapor of 3 cm.
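The MMD error index described above is simple to state; a minimal sketch follows, with made-up band values rather than actual ASTER retrievals.

```python
# Max-min difference (MMD) of a retrieved water-surface emissivity spectrum,
# used above as an atmospheric-correction error index: water emissivity is
# nearly flat and well known, so a large MMD flags residual atmospheric error.
# The five-band values below are illustrative, not real ASTER retrievals.

def mmd(emissivities):
    return max(emissivities) - min(emissivities)

good_retrieval = [0.990, 0.987, 0.989, 0.990, 0.986]   # five TIR bands
bad_retrieval = [0.99, 0.93, 0.97, 0.99, 0.95]

print(round(mmd(good_retrieval), 3), round(mmd(bad_retrieval), 3))
```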
An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP
NASA Astrophysics Data System (ADS)
Moncet, J. L.
2015-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with the capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on a sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and sharp vertical structure near the surface. In addition, we will show the impacts of alternative treatments of regularization of the inversion. While OE algorithms typically implement regularization by using background estimates from
Mars Entry Atmospheric Data System Trajectory Reconstruction Algorithms and Flight Results
NASA Technical Reports Server (NTRS)
Karlgaard, Christopher D.; Kutty, Prasad; Schoenenberger, Mark; Shidner, Jeremy; Munk, Michelle
2013-01-01
The Mars Entry Atmospheric Data System is a part of the Mars Science Laboratory Entry, Descent, and Landing Instrumentation project. The system comprises seven pressure transducers linked to ports on the entry vehicle forebody to record the pressure distribution during atmospheric entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. Specifically, angle of attack, angle of sideslip, dynamic pressure, Mach number, and freestream atmospheric properties are reconstructed from the measured pressures. These data allow the aerodynamics to be decoupled from the assumed atmospheric properties, enabling enhanced trajectory reconstruction and performance analysis as well as an aerodynamic reconstruction, which has not been possible in past Mars entry reconstructions. This paper provides details of the data processing algorithms utilized for this purpose. These include two approaches that have commonly been utilized in past planetary entry trajectory reconstruction, and a new approach for this application that makes use of the pressure measurements. The paper describes assessments of data quality and preprocessing, and results of the flight data reduction from atmospheric entry, which occurred on August 5, 2012.
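The idea of reconstructing attitude from forebody port pressures can be sketched as a model fit. The sketch below assumes a modified-Newtonian pressure model, a known dynamic pressure, and an arbitrary five-port layout; none of these correspond to the actual MEADS configuration or flight algorithm.

```python
import math

# Sketch of model-based attitude estimation from forebody pressures: assume
# p(theta) = q * cos^2(theta - alpha) at ports located at angles theta, then
# grid-search the angle of attack alpha that best fits the measured pressures.
# Port layout, model, and known q are illustrative assumptions.

ports = [math.radians(t) for t in (-40, -20, 0, 20, 40)]
q, alpha_true = 500.0, math.radians(4.0)
measured = [q * math.cos(t - alpha_true) ** 2 for t in ports]

def sse(alpha):
    # Sum of squared residuals between modeled and measured pressures.
    return sum((q * math.cos(t - alpha) ** 2 - m) ** 2
               for t, m in zip(ports, measured))

candidates = [math.radians(a / 10.0) for a in range(-100, 101)]  # -10..10 deg
alpha_est = min(candidates, key=sse)
print(round(math.degrees(alpha_est), 1))
```

A real implementation would estimate several quantities jointly (angles, dynamic pressure, Mach) and propagate measurement uncertainty, but the residual-minimization structure is the same.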
NASA Technical Reports Server (NTRS)
Guo, Li-Wen; Cardullo, Frank M.; Telban, Robert J.; Houck, Jacob A.; Kelly, Lon C.
2003-01-01
A study was conducted employing the Visual Motion Simulator (VMS) at the NASA Langley Research Center, Hampton, Virginia. This study compared two motion cueing algorithms, the NASA adaptive algorithm and a new optimal-control-based algorithm. The study also included the effects of transport delays and their compensation. The delay compensation algorithm employed is one developed by Richard McFarland at NASA Ames Research Center. This paper reports analyses of experimental data collected from preliminary simulation tests. This series of tests was conducted to evaluate the protocols and the methodology of data analysis in preparation for more comprehensive tests to be conducted during the spring of 2003; therefore, only three pilots were used. Nevertheless, some useful results were obtained. The experimental conditions involved three maneuvers: a straight-in approach with a rotating wind vector, an offset approach with turbulence and gusts, and a takeoff with and without an engine failure shortly after liftoff. For each of the maneuvers, the two motion conditions were combined with four delay conditions (0, 50, 100, and 200 ms), with and without compensation.
NASA Astrophysics Data System (ADS)
Sofiev, M.; Vira, J.; Kouznetsov, R.; Prank, M.; Soares, J.; Genikhovich, E.
2015-11-01
The paper presents the transport module of the System for Integrated modeLling of Atmospheric coMposition SILAM v.5, based on the advection algorithm of Michael Galperin. This advection routine, thus far only sparsely described in the international literature, is positive definite, stable at any Courant number, and computationally efficient. We present a rigorous description of its original version, along with several updates that improve its monotonicity and shape preservation, allowing for applications to long-lived species in conditions of complex atmospheric flows. The scheme is connected with other parts of the model in a way that preserves the sub-grid mass distribution information that is a cornerstone of the advection algorithm. The other parts include the previously developed vertical diffusion algorithm combined with dry deposition, a meteorological pre-processor, and chemical transformation modules. The quality of the advection routine is evaluated using a large set of tests. The original approach has previously been compared with several classic algorithms widely used in operational dispersion models. The basic tests were repeated for the updated scheme and extended with real-wind simulations and demanding global 2-D tests recently suggested in the literature, which allowed us to position the scheme with regard to sophisticated state-of-the-art approaches. The advection scheme's performance was fully comparable with that of the other algorithms, at a modest computational cost. This work was the last project of Dr. Sci. Michael Galperin, who passed away on 18 March 2008.
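Two of the properties emphasized above, positive definiteness and exact mass conservation, can be demonstrated with a textbook scheme. The sketch below is a plain 1-D donor-cell (upwind) step on a periodic grid; it is a stand-in, not Galperin's scheme, which additionally carries sub-grid centre-of-mass information.

```python
# Minimal 1-D donor-cell (upwind) advection on a periodic grid: positive
# definite and exactly mass conserving for Courant number <= 1, two
# properties the SILAM transport scheme is built around. Illustrative only.

n, courant = 20, 0.5
mass = [0.0] * n
mass[5] = 1.0                          # a unit puff in one cell

def step(m, c):
    # Flux out of cell i toward cell i+1; index -1 wraps around (periodic).
    flux = [c * m[i] for i in range(n)]
    return [m[i] - flux[i] + flux[i - 1] for i in range(n)]

for _ in range(40):
    mass = step(mass, courant)

print(round(sum(mass), 12), min(mass) >= 0.0)
```

Each cell update is a convex combination (1 - c)·m[i] + c·m[i-1], which is why negativity can never appear, and the fluxes cancel pairwise in the total, which is why mass is conserved to rounding.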
Tolbert, N E; Benker, C; Beck, E
1995-11-21
The O2 and CO2 compensation points (Γ(O2) and Γ(CO2)) of plants in a closed system depend on the ratio of CO2 and O2 concentrations in air and in the chloroplast and the specificities of ribulose bisphosphate carboxylase/oxygenase (Rubisco). The photosynthetic Γ(O2) is defined as the atmospheric O2 level, at a given CO2 level and temperature, at which net O2 exchange is zero. In experiments with C3 plants, Γ(O2) with 220 ppm CO2 is 23% O2; Γ(O2) increases to 27% with 350 ppm CO2 and to 35% O2 with 700 ppm CO2. At O2 levels below Γ(O2), CO2 uptake and reduction are accompanied by net O2 evolution. At O2 levels above Γ(O2), net O2 uptake occurs with a reduced rate of CO2 fixation, more carbohydrates are oxidized by photorespiration to products of the C2 oxidative photosynthetic carbon cycle, and plants senesce prematurely. Γ(CO2) increases from 50 ppm CO2 with 21% O2 to 220 ppm with 100% O2. At a low CO2/high O2 ratio that inhibits the carboxylase activity of Rubisco, much malate accumulates, which suggests that the oxygen-insensitive phosphoenolpyruvate carboxylase becomes a significant component of the lower CO2 fixation rate. Because of low global levels of CO2 and a Rubisco specificity that favors the carboxylase activity, relatively rapid changes in the atmospheric CO2 level should control the permissive Γ(O2) that could lead to slow changes in the immense O2 pool. PMID:11607591
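The magnitudes quoted above can be connected to the textbook relation Γ* ≈ 0.5·[O2]/τ, where τ is the Rubisco CO2/O2 specificity factor. The sketch below uses a typical C3 value τ ≈ 2600, which is an assumption, not a number from the paper; the results come out in the same range as the abstract's figures.

```python
# Back-of-envelope CO2 photocompensation point: Gamma* = 0.5 * [O2] / tau,
# where tau is the Rubisco CO2/O2 specificity factor and the factor 0.5
# reflects that one oxygenation releases half a CO2. tau ~ 2600 is a typical
# C3 value assumed for illustration; concentrations are in ppm.

def gamma_star(o2_ppm, tau):
    return 0.5 * o2_ppm / tau

g21 = gamma_star(210000.0, 2600.0)     # present atmosphere, 21% O2
g100 = gamma_star(1000000.0, 2600.0)   # pure O2, cf. the ~220 ppm in the text
print(round(g21, 1), round(g100, 1))
```

At 21% O2 this gives roughly 40 ppm, consistent with the ~50 ppm quoted for whole plants, and at 100% O2 roughly 190 ppm, the same order as the abstract's 220 ppm; whole-plant values also include mitochondrial respiration, which the Γ* relation omits.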
Performance evaluation of operational atmospheric correction algorithms over the East China Seas
NASA Astrophysics Data System (ADS)
He, Shuangyan; He, Mingxia; Fischer, Jürgen
2016-04-01
To acquire high-quality operational data products for Chinese in-orbit and scheduled ocean color sensors, the performances of two operational atmospheric correction (AC) algorithms (ESA MEGS 7.4.1 and NASA SeaDAS 6.1) were evaluated over the East China Seas (ECS) using MERIS data. The spectral remote sensing reflectance Rrs(λ), aerosol optical thickness (AOT), and Ångström exponent (α) retrieved using the two algorithms were validated using in situ measurements obtained between May 2002 and October 2009. Match-ups of Rrs, AOT, and α between the in situ and MERIS data were obtained through strict exclusion criteria. Statistical analysis of Rrs(λ) showed a mean percentage difference (MPD) of 9%-13% in the 490-560 nm spectral range, and significant overestimation was observed at 413 nm (MPD>72%). The AOTs were overestimated (MPD>32%), and although the ESA algorithm outperformed the NASA algorithm in the blue-green bands, the situation was reversed in the red-near-infrared bands. The value of α was obviously underestimated by the ESA algorithm (MPD=41%) and less severely by the NASA algorithm (MPD=35%). To clarify why the NASA algorithm performed better in the retrieval of α, density scatter plots of α versus single-scattering albedo (SSA) were prepared. These α-SSA density scatter plots showed that the aerosol models used by the NASA algorithm are more applicable over the ECS than those used by the ESA algorithm, although neither aerosol model is suitable for the ECS region. The results of this study provide a reference to both data users and data agencies regarding the use of operational data products and the investigation into the improvement of current AC schemes over the ECS.
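The match-up statistic used throughout the validation above is the mean percentage difference (MPD). Definitions vary between papers; the sketch below uses mean(|satellite - in situ| / in situ) x 100 with made-up values, so it should be checked against the paper's definition before reuse.

```python
# Mean percentage difference (MPD) between satellite-retrieved and in situ
# values, a common match-up statistic in ocean-colour validation.
# Assumed definition: mean of |sat - insitu| / insitu, in percent.

def mpd(sat, insitu):
    return 100.0 * sum(abs(s, ) and abs(s - i) / i for s, i in zip(sat, insitu)) / len(sat)

def mpd(sat, insitu):
    return 100.0 * sum(abs(s - i) / i for s, i in zip(sat, insitu)) / len(sat)

insitu = [0.010, 0.020, 0.015]         # e.g. Rrs in the 490-560 nm range, sr^-1
sat = [0.011, 0.022, 0.016]
print(round(mpd(sat, insitu), 1))
```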
Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm
NASA Technical Reports Server (NTRS)
Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)
2004-01-01
In support of the NASA Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles of key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOSS) Prediction Algorithm (APA) for benefit and trade analyses. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria, when safe and appropriate, to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.
NASA Technical Reports Server (NTRS)
Liu, X.; Kizer, S.; Barnet, C.; Dvakarla, M.; Zhou, D. K.; Larar, A. M.
2012-01-01
The Joint Polar Satellite System (JPSS) is a U.S. National Oceanic and Atmospheric Administration (NOAA) mission in collaboration with the U.S. National Aeronautics and Space Administration (NASA) and international partners. The NPP Cross-track Infrared Microwave Sounding Suite (CrIMSS) consists of the infrared (IR) Cross-track Infrared Sounder (CrIS) and the microwave (MW) Advanced Technology Microwave Sounder (ATMS). The CrIS instrument is a hyperspectral interferometer that measures upwelling infrared radiances at high spectral and spatial resolution. The ATMS is a 22-channel radiometer similar to the Advanced Microwave Sounding Units (AMSU) A and B. It measures top-of-atmosphere MW upwelling radiation and provides the capability to sound below clouds. The CrIMSS Environmental Data Record (EDR) algorithm provides three EDRs, namely the atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP, and AVPP, respectively), with the lower-tropospheric AVTP and the AVMP being JPSS Key Performance Parameters (KPPs). The operational CrIMSS EDR algorithm was originally designed to run on large IBM computers with a dedicated data management subsystem (DMS). We have ported the operational code to simple Linux systems by replacing the DMS with appropriate interfaces. We also changed the interface of the operational code so that we can read data from both the CrIMSS science code and the operational code and compare lookup tables, parameter files, and output results. The details of the CrIMSS EDR algorithm are described in reference [1]. We will present results of testing the CrIMSS EDR operational algorithm using proxy data generated from Infrared Atmospheric Sounding Interferometer (IASI) satellite data and from NPP CrIS/ATMS data.
Validation and robustness of an atmospheric correction algorithm for hyperspectral images
NASA Astrophysics Data System (ADS)
Boucher, Yannick; Poutier, Laurent; Achard, Veronique; Lenot, Xavier; Miesch, Christophe
2002-08-01
The Optics Department of ONERA has developed and implemented an inverse algorithm, COCHISE, to correct hyperspectral images for atmospheric effects in the visible-NIR-SWIR domain (0.4-2.5 micrometers). This algorithm automatically determines the integrated water-vapor content for each pixel from the radiance at sensor level, using a LIRR-type (Linear Regression Ratio) technique. It then retrieves the spectral reflectance at ground level using atmospheric parameters computed with Modtran4, including the water-vapor spatial dependence obtained in the first step. The adjacency effects are taken into account using spectral kernels obtained by two Monte-Carlo codes. Results obtained with the COCHISE code on real hyperspectral data are first compared to ground-based reflectance measurements. AVIRIS images of Railroad Valley Playa, CA, and HyMap images of Hartheim, France, are used. The inverted reflectance agrees perfectly with the measurements at ground level for the AVIRIS data set, which validates the COCHISE algorithm; for the HyMap data set, the results are still good but cannot be considered as validating the code. The robustness of the COCHISE code is then evaluated. For this, spectral radiance images are modeled at the sensor level with the direct algorithm COMANCHE, which is the reciprocal code of COCHISE. The COCHISE algorithm is then used to compute the reflectance at ground level from the simulated at-sensor radiance. A sensitivity analysis has been performed, as a function of errors in several atmospheric parameters and instrument defects, by comparing the retrieved reflectance with the original one. The COCHISE code shows good robustness to errors in input parameters, except for aerosol type.
Algorithm for Atmospheric and Glint Corrections of Satellite Measurements of Ocean Pigment
NASA Technical Reports Server (NTRS)
Fraser, Robert S.; Mattoo, Shana; Yeh, Eueng-Nan; McClain, C. R.
1997-01-01
An algorithm is developed to correct satellite measurements of ocean color for atmospheric and surface reflection effects. The algorithm depends on taking the difference between measured and tabulated radiances to derive water-leaving radiances. The tabulated radiances are related to the measured radiance where the water-leaving radiance is negligible (670 nm). The tabulated radiances are calculated for rough-surface reflection, polarization of the scattered light, and multiple scattering. The accuracy of the tables is discussed. The method is validated by simulating the effect of wind speeds different from that for which the lookup table is calculated, and aerosol models different from the maritime model for which the table is computed. The derived water-leaving radiances are accurate enough to compute the pigment concentration with an error of less than ±15% for wind speeds of 6 and 10 m/s and an urban atmosphere with an aerosol optical thickness of 0.20 at λ = 443 nm, decreasing to 0.10 at λ = 670 nm. The pigment accuracy is less for wind speeds less than 6 m/s and is about 30% for a model with aeolian dust. On the other hand, in a preliminary comparison with coastal zone color scanner (CZCS) measurements, this algorithm and the CZCS operational algorithm produced values of pigment concentration in one image that agreed closely.
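The band-difference logic above, anchoring the aerosol term at 670 nm where the water-leaving radiance is negligible, can be sketched in its classic single-scattering form. The numbers and the epsilon extrapolation factor below are illustrative, and the sketch omits the rough-surface, polarization, and multiple-scattering refinements the paper tabulates.

```python
# Single-scattering sketch of the classic ocean-colour correction: at 670 nm
# the water-leaving radiance is ~0, so the aerosol path radiance there is the
# measured total minus the (tabulated) Rayleigh term; it is then extrapolated
# to 443 nm with an assumed epsilon factor. All numbers are illustrative.

def water_leaving(Lt_443, Lr_443, Lt_670, Lr_670, epsilon):
    La_670 = Lt_670 - Lr_670           # black-pixel assumption at 670 nm
    La_443 = epsilon * La_670          # aerosol spectral extrapolation
    return Lt_443 - Lr_443 - La_443

Lw = water_leaving(Lt_443=2.90, Lr_443=1.80, Lt_670=0.55, Lr_670=0.15,
                   epsilon=1.1)
print(round(Lw, 3))
```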
NASA Technical Reports Server (NTRS)
Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)
2000-01-01
Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance, it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.
2003-01-01
This SIMBIOS contract supports several activities over its three-year time-span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are making the model calculations to incorporate various absorbing aerosol models into tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed the algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.
Algorithm for Simulating Atmospheric Turbulence and Aeroelastic Effects on Simulator Motion Systems
NASA Technical Reports Server (NTRS)
Ercole, Anthony V.; Cardullo, Frank M.; Kelly, Lon C.; Houck, Jacob A.
2012-01-01
Atmospheric turbulence produces high frequency accelerations in aircraft, typically greater than the response to pilot input. Motion system equipped flight simulators must present cues representative of the aircraft response to turbulence in order to maintain the integrity of the simulation. Currently, turbulence motion cueing produced by flight simulator motion systems has been less than satisfactory because the turbulence profiles have been attenuated by the motion cueing algorithms. This report presents a new turbulence motion cueing algorithm, referred to as the augmented turbulence channel. Like the previous turbulence algorithms, the output of the channel only augments the vertical degree of freedom of motion. This algorithm employs a parallel aircraft model and an optional high bandwidth cueing filter. Simulation of aeroelastic effects is also an area where frequency content must be preserved by the cueing algorithm. The current aeroelastic implementation uses a similar secondary channel that supplements the primary motion cue. Two studies were conducted using the NASA Langley Visual Motion Simulator and Cockpit Motion Facility to evaluate the effect of the turbulence channel and aeroelastic model on pilot control input. Results indicate that the pilot is better correlated with the aircraft response, when the augmented channel is in place.
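The attenuation problem described above comes from the washout filtering that every motion-cueing algorithm applies. The sketch below shows a plain first-order high-pass (washout) filter, which passes high-frequency turbulence content while washing out sustained accelerations that would drive the motion base into its travel limits; it is a generic illustration, not the paper's augmented turbulence channel, and the gains are made up.

```python
# First-order high-pass ("washout") filter of the kind used in motion cueing:
# y[k] = a * (y[k-1] + x[k] - x[k-1]), with a = tau / (tau + dt).
# High-frequency content passes nearly unchanged; a sustained input decays.

dt, tau = 0.01, 1.0                    # 100 Hz update, 1 s washout constant
a = tau / (tau + dt)

def washout(x):
    y, y_prev, x_prev = [], 0.0, x[0]
    for xi in x:
        y_prev = a * (y_prev + xi - x_prev)
        x_prev = xi
        y.append(y_prev)
    return y

step_in = [0.0] * 10 + [1.0] * 490     # sustained 1 m/s^2 acceleration command
out = washout(step_in)
print(round(out[10], 3), round(out[-1], 3))
```

The onset of the step passes through almost fully, then the response decays geometrically toward zero, which is exactly why a turbulence profile fed through such a channel loses its sustained components but keeps its high-frequency ones.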
NASA Technical Reports Server (NTRS)
Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)
2001-01-01
The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best-estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient, in terms of wall clock time, and scalable parallel implementations of the algorithms.
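The Kalman filter whose covariance bookkeeping drives the teraword estimate above has a very small building block. The scalar sketch below shows one measurement update; in the atmospheric case the variance p becomes the full error covariance matrix over more than 10^6 state variables, which is where the storage and flop counts explode.

```python
# Scalar Kalman filter measurement update. x, p are the prior state estimate
# and its error variance; z, r are the observation and its error variance.
# In 4DDA the scalar p is replaced by an N x N covariance with N > 10^6.

def kalman_update(x, p, z, r):
    k = p / (p + r)                    # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

x, p = 0.0, 4.0                        # vague prior
x, p = kalman_update(x, p, z=1.2, r=1.0)
print(round(x, 3), round(p, 3))
```

The update always shrinks the variance, which is the "more consistent" error-covariance propagation the abstract contrasts with the fixed statistics of GEOS DAS.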
The Algorithm Theoretical Basis Document for the GLAS Atmospheric Data Products
NASA Technical Reports Server (NTRS)
Palm, Stephen P.; Hart, William D.; Hlavka, Dennis L.; Welton, Ellsworth J.; Spinhirne, James D.
2012-01-01
The purpose of this document is to present a detailed description of the algorithm theoretical basis for each of the GLAS data products. This will be the final version of this document. The algorithms were initially designed and written based on the authors' prior experience with high-altitude lidar data from systems such as the Cloud and Aerosol Lidar System (CALS) and the Cloud Physics Lidar (CPL), both of which fly on the NASA ER-2 high-altitude aircraft. These lidar systems have been employed in many field experiments around the world, and algorithms have been developed to analyze these data for a number of atmospheric parameters. CALS data have been analyzed for cloud-top height, thin-cloud optical depth, cirrus cloud emittance (Spinhirne and Hart, 1990), and boundary layer depth (Palm and Spinhirne, 1987, 1998). The successor to CALS, the CPL, has also been extensively deployed in field missions since 2000, including the validation of GLAS and CALIPSO. The CALS and early CPL data sets also served as the basis for the construction of simulated GLAS data sets, which were then used to develop and test the GLAS analysis algorithms.
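Cloud-top height, the first of the parameters listed above, is the simplest lidar retrieval to sketch: scan a backscatter profile downward from the top and report the first bin that rises significantly above the background. Real GLAS/CPL layer detection is considerably more elaborate (noise modeling, multi-layer logic, molecular normalization), and the profile below is synthetic.

```python
# Threshold-style cloud-top detection on a single lidar backscatter profile:
# report the first altitude bin (scanning top-down) whose signal exceeds the
# background by k times the noise level. Synthetic values, illustrative only.

def cloud_top(altitudes_km, signal, background, noise, k=5.0):
    for alt, s in zip(altitudes_km, signal):       # ordered top-down
        if s > background + k * noise:
            return alt
    return None                                     # no layer detected

alts = [12.0, 11.5, 11.0, 10.5, 10.0]               # km, descending
prof = [0.010, 0.011, 0.012, 0.300, 0.250]          # molecular + cloud return
print(cloud_top(alts, prof, background=0.010, noise=0.002))
```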
A procedure for testing the quality of LANDSAT atmospheric correction algorithms
NASA Technical Reports Server (NTRS)
Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.
1982-01-01
There are two basic methods for testing the quality of an algorithm that minimizes atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. To select the parameters, the image contrast is first examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably in time. A few examples using the proposed procedure are presented.
An improved atmospheric correction algorithm for applying MERIS data to very turbid inland waters
NASA Astrophysics Data System (ADS)
Jaelani, Lalu Muhamad; Matsushita, Bunkei; Yang, Wei; Fukushima, Takehiko
2015-07-01
Atmospheric correction (AC) is a necessary process when quantitatively monitoring water quality parameters from satellite data. However, it is still a major challenge to carry out AC for turbid coastal and inland waters. In this study, we propose an improved AC algorithm named N-GWI (new standard Gordon and Wang's algorithms with an iterative process and a bio-optical model) for applying MERIS data to very turbid inland waters (i.e., waters with a water-leaving reflectance at 864.8 nm between 0.001 and 0.01). The N-GWI algorithm incorporates three improvements to avoid certain invalid assumptions that limit the applicability of the existing algorithms in very turbid inland waters. First, the N-GWI uses a fixed aerosol type (coastal aerosol) but permits the aerosol concentration to vary at each pixel; this improvement omits a complicated requirement for aerosol model selection based only on satellite data. Second, it shifts the reference band from 670 nm to 754 nm so that the assumption that the total absorption coefficient at the reference band can be replaced by that of pure water remains valid, thereby avoiding inaccurate estimation of the total absorption coefficient at the reference band in very turbid waters. Third, the N-GWI generates a semi-analytical relationship instead of an empirical one for estimating the spectral slope of particle backscattering. Our analysis showed that the N-GWI improved the accuracy of atmospheric correction in two very turbid Asian lakes (Lake Kasumigaura, Japan and Lake Dianchi, China), with a normalized mean absolute error (NMAE) of less than 22% for wavelengths longer than 620 nm. However, the N-GWI exhibited poor performance in moderately turbid waters (the NMAE values were larger than 83.6% in the four American coastal waters). The applicability of the N-GWI, including both its advantages and limitations, is discussed.
[A rapid atmospheric correction method for HJ-1 CCD with the deep blue algorithm].
Wang, Zhong-Ting; Wang, Hong-Mei; Li, Qing; Zhao, Shao-Hua; Li, Shen-Shen; Chen, Liang-Fu
2014-03-01
Tailored to the characteristics of the HJ-1 CCD camera, a new atmospheric correction method applicable over vegetation, soil, and other surfaces was developed. Aerosol optical depth (AOD) is first retrieved with the deep blue algorithm developed by Hsu et al., assisted by a MODerate-resolution Imaging Spectroradiometer (MODIS) surface reflectance database, with bidirectional reflectance distribution function (BRDF) correction using a kernel-driven model and the viewing geometry calculated from auxiliary data. The CCD data are then corrected for atmospheric influence using a look-up table (LUT) with bilinear interpolation, and the correction is completed quickly through gridded calculation of atmospheric parameters and matrix operations in Interactive Data Language (IDL). An experiment over the North China Plain on July 3rd, 2012 shows that our method corrects the atmospheric influence well and quickly (one 1 GB CCD image can be corrected in eight minutes), and the corrected reflectance over vegetation and soil was close to the reference spectra of vegetation and soil. Comparison with the MODIS reflectance product shows that, owing to its higher resolution, the corrected HJ-1 reflectance image is finer than that of MODIS, and the correlation coefficient of the reflectance over typical surfaces is greater than 0.9. Error analysis shows that misidentification of the aerosol type leads to an absolute error of 0.05 in near-infrared surface reflectance, larger than that in the visible bands, and that a 0.02 error in the reflectance database leads to an absolute error of 0.01 in the corrected surface reflectance in the green and red bands. PMID:25208402
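The LUT-with-bilinear-interpolation step can be sketched as follows; the grids and path-reflectance values below are invented placeholders (a real correction interpolates radiative-transfer LUTs over more dimensions), but the interpolation logic is the standard one.

```python
import bisect

def bilinear(lut, xs, ys, x, y):
    """Bilinearly interpolate lut[i][j], defined on the grid xs (rows) x ys
    (cols), at the point (x, y); indices are clamped to the grid edges."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * lut[i][j] + tx * (1 - ty) * lut[i + 1][j]
            + (1 - tx) * ty * lut[i][j + 1] + tx * ty * lut[i + 1][j + 1])

# Hypothetical LUT of path reflectance indexed by AOD (rows) and solar zenith angle (cols)
aod_grid = [0.0, 0.5, 1.0]
sza_grid = [0.0, 30.0, 60.0]
path_refl = [[0.00, 0.01, 0.03],
             [0.02, 0.04, 0.07],
             [0.05, 0.08, 0.12]]
value = bilinear(path_refl, aod_grid, sza_grid, 0.25, 15.0)
```

Pre-computing such a table once and interpolating per pixel is what makes the per-scene correction fast.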
NASA Technical Reports Server (NTRS)
Shia, Run-Lie; Ha, Yuk Lung; Wen, Jun-Shan; Yung, Yuk L.
1990-01-01
Extensive testing of the advective scheme proposed by Prather (1986) has been carried out in support of the California Institute of Technology-Jet Propulsion Laboratory two-dimensional model of the middle atmosphere. The original scheme is generalized to include higher-order moments. In addition, it is shown how well the scheme works in the presence of chemistry as well as eddy diffusion. Six types of numerical experiments including simple clock motion and pure advection in two dimensions have been investigated in detail. By comparison with analytic solutions, it is shown that the new algorithm can faithfully preserve concentration profiles, has essentially no numerical diffusion, and is superior to a typical fourth-order finite difference scheme.
NASA Astrophysics Data System (ADS)
Rani Sharma, Anu; Kharol, Shailesh Kumar; Badarinath, K. V. S.; Roy, P. S.
In Earth observation, the atmosphere has a non-negligible influence on visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosols generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes a loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction that removes the effects of molecular and aerosol scattering. In the present study, we have implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false color composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover types, i.e., red soil, chickpea crop, groundnut crop, and pigeon pea crop, were conducted to validate the algorithm, and a very good match was found between the ground-measured reflectance and the atmospherically corrected reflectance for all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with
NASA Technical Reports Server (NTRS)
Korkin, S.; Lyapustin, A.
2012-01-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimizing a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves the optical parameters of a thin (single-scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide, monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines and the ratio of the coarse and fine fractions, the atmospheric optical depth, and the single-scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request.
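The iteration described above can be sketched in miniature: the example below fits two parameters of a toy exponential forward model with an analytic Jacobian, solving the damped normal equations by Cramer's rule as the abstract does for its 5x5 system. All data are synthetic; this is not the authors' Fortran code.

```python
import math

def levenberg_marquardt(residual, jacobian, p0, n_iter=100, lam=1e-2):
    """Minimal Levenberg-Marquardt for a two-parameter least-squares problem.
    The 2x2 damped normal equations are solved with Cramer's rule."""
    a, b = p0
    cost = sum(ri * ri for ri in residual(a, b))
    for _ in range(n_iter):
        r = residual(a, b)
        J = jacobian(a, b)
        g00 = sum(j[0] * j[0] for j in J)
        g01 = sum(j[0] * j[1] for j in J)
        g11 = sum(j[1] * j[1] for j in J)
        rhs0 = -sum(j[0] * ri for j, ri in zip(J, r))
        rhs1 = -sum(j[1] * ri for j, ri in zip(J, r))
        A00, A11 = g00 * (1.0 + lam), g11 * (1.0 + lam)  # Marquardt damping
        det = A00 * A11 - g01 * g01
        da = (rhs0 * A11 - g01 * rhs1) / det
        db = (A00 * rhs1 - rhs0 * g01) / det
        trial = sum(ri * ri for ri in residual(a + da, b + db))
        if trial < cost:          # accept the step, relax the damping
            a, b, cost = a + da, b + db, trial
            lam *= 0.5
        else:                     # reject the step, increase the damping
            lam *= 10.0
    return a, b

# Toy forward model y = a * exp(-b * x) with noise-free synthetic observations
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]

def residual(a, b):
    return [a * math.exp(-b * x) - y for x, y in zip(xs, ys)]

def jacobian(a, b):  # analytic Jacobian, as in the abstract
    return [[math.exp(-b * x), -a * x * math.exp(-b * x)] for x in xs]

a_fit, b_fit = levenberg_marquardt(residual, jacobian, (1.0, 1.0))
```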
Detection of Atmospheric Rivers: An Algorithm for Global Climatology and Model Evaluation Studies
NASA Astrophysics Data System (ADS)
Guan, B.; Waliser, D. E.
2015-12-01
Atmospheric rivers (ARs) are narrow, elongated, synoptic jets of water vapor that play important roles in the global water cycle and in regional weather and hydrology. Previous studies have developed techniques for the identification of ARs based on intensity and/or geometry thresholds indicative of AR conditions. Such techniques have facilitated the investigation of ARs on local to regional scales. Recent advances in the understanding of ARs' global signatures and impacts (including those in less explored areas such as Greenland and Antarctica), and the need to understand the representation of key AR characteristics in global weather/climate models, motivate the development and evaluation of AR detection techniques suitable for global climatological and model evaluation studies. In this work, an objective AR detection algorithm is developed based on thresholding global, 6-hourly fields of integrated water vapor transport (IVT) derived from the ERA-Interim reanalysis. Long, narrow filaments of enhanced IVT are detected by applying a set of intensity and geometry criteria, along with other considerations. Key outputs of the algorithm include the AR shape boundary, main axis, location of landfall, and a tabulated list of basic statistics such as the length, width, and mean IVT strength/direction of each detected AR. Sensitivity of the detection to selected parameters is examined, and the result is evaluated and compared with an independent database of landfalling ARs on the west coast of North America based on satellite images of integrated water vapor (Neiman et al. 2008). The global distribution of key AR characteristics, and examples of their modulation by climate variability, will be presented.
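The thresholding step above can be illustrated as connected-component labeling over a gridded field; the grid, threshold, and minimum-size criterion below are invented stand-ins for the actual IVT intensity and length/narrowness tests.

```python
def detect_features(field, threshold, min_cells):
    """Label 4-connected regions of field >= threshold and keep those with
    at least min_cells cells: a toy stand-in for IVT-based AR detection."""
    rows, cols = len(field), len(field[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if field[r][c] >= threshold and not seen[r][c]:
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:            # flood fill one candidate region
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and field[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(cells) >= min_cells:  # geometry/size criterion
                    regions.append(cells)
    return regions

# Toy IVT grid: one elongated filament and one isolated cell
ivt = [[0, 0, 9, 9, 9, 9, 0],
       [0, 0, 0, 9, 9, 9, 9],
       [9, 0, 0, 0, 0, 0, 0]]
regions = detect_features(ivt, threshold=5, min_cells=4)
```

The real algorithm additionally checks length, length/width ratio, and mean transport direction before accepting a candidate as an AR.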
NASA Technical Reports Server (NTRS)
Wehrbein, W. M.; Leovy, C. B.
1982-01-01
The circulation of the middle atmosphere of the earth (15-90 km) is driven by the unequal distribution of net radiative heating. Calculations have shown that local radiative heating is nearly balanced by radiative cooling throughout parts of the stratosphere and mesosphere. The 15 micrometer band of CO2 is the dominant component of the infrared cooling. The present investigation is concerned with an algorithm for this cooling process. The algorithm was designed for the semispectral primitive equation model of the stratosphere and mesosphere described by Holton and Wehrbein (1980). The model consists of 16 layers, each nominally 5 km thick, between the base of the stratosphere at 100 mb (approximately 16 km) and the base of the thermosphere (approximately 96 km). The algorithm provides a convenient means of incorporating cooling due to CO2 into dynamical models of the middle atmosphere.
NASA Astrophysics Data System (ADS)
Maruyama, A.; Pyles, D.; Paw U, K.
2009-12-01
The thermal environment in the plant canopy affects growth processes such as flowering and ripening. High temperatures often cause grain sterility and poor filling in cereal crops, reducing production in tropical and temperate regions. With global warming predicted, these effects have become a major concern worldwide. In this study, we observed plant body temperature profiles in a rice canopy and simulated them with a higher-order closure micrometeorological model to understand the relationship between plant temperatures and atmospheric conditions. Experiments were conducted in a rice paddy during the 2007 summer season under a warm temperate climate in Japan. Leaf temperatures at three heights (0.3, 0.5, 0.7 m) and panicle temperatures at 0.9 m were measured using fine thermocouples. The UC Davis Advanced Canopy-Atmosphere-Soil Algorithm (ACASA) was used to calculate plant body temperature profiles in the canopy. ACASA is based on radiation transfer, higher-order closure of the turbulence equations for mass and heat exchange, and detailed plant physiological parameterization of the canopy-atmosphere-soil system. Water temperature was nearly constant at 21-23 °C throughout the summer because of continuous irrigation, so a larger difference between the air temperature at 2 m and the water temperature was found during the daytime. Observed leaf/panicle temperatures were lower near the water surface and higher in the upper canopy. The temperature difference between 0.3 m and 0.9 m was around 3-4 °C during the daytime and around 1-2 °C at night. The ACASA calculations reproduced these trends in the plant temperature profile well. However, the simulated relationship between plant and air temperature in the canopy differed somewhat from observations: the observed leaf/panicle temperatures were almost the same as the air temperature, whereas the simulated air temperature was 0.5-1.5 °C higher than the plant temperatures during both daytime and nighttime
NASA Technical Reports Server (NTRS)
Spratlin, Kenneth Milton
1987-01-01
An adaptive numeric predictor-corrector guidance algorithm is developed for atmospheric entry vehicles that utilize lift to achieve maximum footprint capability. Applicability of the guidance design to vehicles with a wide range of performance capabilities is desired, so as to reduce the need for algorithm redesign with each new vehicle. Adaptability is desired to minimize mission-specific analysis and planning. The guidance algorithm motivation and design are presented. Performance is assessed for application of the algorithm to the NASA Entry Research Vehicle (ERV). The dispersions the guidance must be designed to handle are presented, along with the achievable operational footprint for expected worst-case dispersions. The algorithm performs excellently for the expected dispersions and captures most of the achievable footprint.
NASA Technical Reports Server (NTRS)
Coney, Thom A.
1996-01-01
Performance status of the Adaptive Rain Fade Compensation includes: (1) the rain fade protocol is functional, detecting fades, providing an additional 10 dB of margin, and making seamless transitions to and from coded operation; (2) stabilization of the link margins and optimization of the rain fade decision thresholds have resulted in improved BER performance; and (3) characterization of the fade compensation algorithm is ongoing.
NASA Technical Reports Server (NTRS)
Green, R. O.; Conel, J. E.
1993-01-01
The Airborne Visible/Infrared Imaging Spectrometer measures spatial images of the total upwelling spectral radiance from 400 to 2500 nm through 10 nm spectral channels. Quantitative research and application objectives for surface investigations require conversion of the measured radiance to surface reflectance or surface leaving radiance. To calculate apparent surface reflectance an estimation of atmospheric water vapor abundance, cirrus cloud effects, surface pressure elevation and aerosol optical depth is also required. Algorithms for the estimation of these parameters from the AVIRIS data themselves are described. Based upon these determined atmospheric parameters we show an example of the calculation of apparent surface reflectance from the AVIRIS-measured radiance using a radiative transfer code.
NASA Astrophysics Data System (ADS)
Schneider, Sébastien; Jacques, Diederik; Mallants, Dirk
2010-05-01
Numerical models are of precious help for predicting water fluxes in the vadose zone, and more specifically in Soil-Vegetation-Atmosphere (SVA) systems. For such simulations, robust models and representative soil hydraulic parameters are required. Calibration of unsaturated hydraulic properties is known to be a difficult optimization problem due to the high non-linearity of the water flow equations. Therefore, robust methods are needed to prevent the optimization process from converging to non-optimal parameters. Evolutionary algorithms, and specifically genetic algorithms (GAs), are very well suited for such complex parameter optimization problems. Additionally, GAs offer the opportunity to assess the confidence in the hydraulic parameter estimates, because of the large number of model realizations. The SVA system in this study concerns a pine stand on a heterogeneous sandy soil (podzol) in the Campine region in the north of Belgium. Throughfall, other meteorological data, and water contents at different soil depths were recorded during one year at a daily time step in two lysimeters. The water table level, which varies between 95 and 170 cm, was recorded at 0.5-hour intervals. The leaf area index was measured at selected times during the year in order to evaluate the energy that reaches the soil and to deduce the potential evaporation. Based on the profile description, five soil layers were distinguished in the podzol. Two models were used for simulating water fluxes: (i) a mechanistic model, the HYDRUS-1D model, which solves the Richards equation, and (ii) a compartmental model, which treats the soil profile as a bucket into which water flows until its maximum capacity is reached. A global sensitivity analysis (Morris' one-at-a-time sensitivity analysis) was run prior to the calibration, in order to check the sensitivity in the chosen parameter search space. For
NASA Technical Reports Server (NTRS)
Wilheit, T. T.; Chang, A. T. C.
1979-01-01
A formalism was developed which can be used to interpret the data in terms of sea surface temperature, sea surface wind speed, and the atmospheric overburden of water vapor and liquid water. It was shown that, with reasonable instrumental performance assumptions, these parameters could be derived to useful accuracies. Although the algorithms were not derived for use in rain, it is shown that at least token rain rates can be tolerated without invalidating the retrieved geophysical parameters.
NASA Technical Reports Server (NTRS)
Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri
1992-01-01
The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, the viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results is estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of these systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement to within about 10 percent.
Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner
NASA Technical Reports Server (NTRS)
Tanis, F. J.; Jain, S. C.
1984-01-01
Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance, which can dominate radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudo-optical depth method; and (4) the residual component algorithm.
NASA Astrophysics Data System (ADS)
Korkin, S.; Lyapustin, A.
2012-12-01
The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimizing a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves the optical parameters of a thin (single-scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide, monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]. In our case it yields analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the average scattering cosines and the ratio of the coarse and fine fractions, the atmospheric optical depth, and the single-scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal on Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres. Astronomy and Astrophysics, 1971, V.13, P.7-29. [4]. Mishchenko MI, Travis LD
NASA Astrophysics Data System (ADS)
Dash, P.; Walker, N. D.; Mishra, D. R.; Hu, C.; D'Sa, E. J.; Pinckney, J. L.
2011-12-01
Cyanobacteria represent a major harmful algal group in fresh to brackish water environments. Lac des Allemands, a freshwater lake located southwest of New Orleans, Louisiana on the upper end of the Barataria Estuary, provides a natural laboratory for remote characterization of cyanobacteria blooms because of their seasonal occurrence. The Ocean Colour Monitor (OCM) sensor provides radiance measurements similar to SeaWiFS but with higher spatial resolution. However, OCM does not have a standard atmospheric correction procedure, and it is difficult to find a detailed description of the entire atmospheric correction procedure for ocean (or lake) in one place. Atmospheric correction of satellite data over small lakes and estuaries (Case 2 waters) is also challenging due to difficulties in estimation of aerosol scattering accurately in these areas. Therefore, an atmospheric correction procedure was written for processing OCM data, based on the extensive work done for SeaWiFS. Since OCM-retrieved radiances were abnormally low in the blue wavelength region, a vicarious calibration procedure was also developed. Empirical inversion algorithms were developed to convert the OCM remote sensing reflectance (Rrs) at bands centered at 510.6 and 556.4 nm to concentrations of phycocyanin (PC), the primary cyanobacterial pigment. A holistic approach was followed to minimize the influence of other optically active constituents on the PC algorithm. Similarly, empirical algorithms to estimate chlorophyll a (Chl a) concentrations were developed using OCM bands centered at 556.4 and 669 nm. The best PC algorithm (R2=0.7450, p<0.0001, n=72) yielded a root mean square error (RMSE) of 36.92 μg/L with a relative RMSE of 10.27% (PC from 2.75-363.50 μg/L, n=48). The best algorithm for Chl a (R2=0.7510, p<0.0001, n=72) produced an RMSE of 31.19 μg/L with a relative RMSE of 16.56% (Chl a from 9.46-212.76 μg/L, n=48). While more field data are required to further validate the long
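An empirical inversion algorithm of the kind described reduces to regressing a band ratio against measured pigment concentration; the sketch below fits an ordinary least-squares line with invented data (the authors' actual fit, coefficients, and band formulation differ).

```python
def fit_ols(x, y):
    """Ordinary least-squares line y = m * x + c, the usual backbone of an
    empirical band-ratio pigment algorithm (illustrative only)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return m, my - m * mx

# Hypothetical Rrs(556.4)/Rrs(510.6) band ratios vs. measured PC (ug/L)
ratio = [1.0, 1.2, 1.4, 1.6]
pc = [10.0, 20.0, 30.0, 40.0]
slope, intercept = fit_ols(ratio, pc)
```

Once fitted, the line is applied in reverse: a satellite-derived band ratio is converted to a pigment estimate via `slope * ratio + intercept`.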
NASA Astrophysics Data System (ADS)
Arrasmith, William W.; Sullivan, Sean F.
2008-04-01
Phase diversity imaging methods work well in removing atmospheric turbulence and some system effects from predominantly near-field imaging systems. However, phase diversity approaches can be computationally intensive and slow. We present a recently adapted, high-speed phase diversity method using a conventional, software-based neural network paradigm. This phase-diversity method has the advantage of eliminating many time-consuming, computationally heavy calculations and directly estimates the optical transfer function from the entrance pupil phases or phase differences. Additionally, this method is more accurate than conventional Zernike-based phase diversity approaches and lends itself to implementation on parallel software or hardware architectures. We use computer simulation to demonstrate how this high-speed, phase diverse imaging method can be implemented on a parallel, high-speed, neural network-based architecture, specifically the Cellular Neural Network (CNN). The CNN architecture was chosen as a representative neural network-based processing environment because (1) the CNN can be implemented in 2-D or 3-D processing schemes, (2) it can be implemented in hardware or software, (3) recent 2-D implementations of CNN technology have shown a three-orders-of-magnitude advantage in speed, area, or power over equivalent digital representations, and (4) a complete development environment exists. We also provide a short discussion of processing speed.
NASA Astrophysics Data System (ADS)
Reuter, M.; Bösch, H.; Bovensmann, H.; Bril, A.; Buchwitz, M.; Butz, A.; Burrows, J. P.; O'Dell, C. W.; Guerlet, S.; Hasekamp, O.; Heymann, J.; Kikuchi, N.; Oshchepkov, S.; Parker, R.; Pfeifer, S.; Schneising, O.; Yokota, T.; Yoshida, Y.
2013-02-01
We analyze an ensemble of seven XCO2 retrieval algorithms for SCIAMACHY (scanning imaging absorption spectrometer for atmospheric chartography) and GOSAT (greenhouse gases observing satellite). The ensemble spread can be interpreted as regional uncertainty and can help to identify locations for new TCCON (total carbon column observing network) validation sites. Additionally, we introduce the ensemble median algorithm EMMA, which combines individual soundings of the seven algorithms into one new data set. The ensemble takes advantage of the algorithms' independent developments. We find that ensemble spreads are often <1 ppm but rise up to 2 ppm, especially in the tropics and East Asia. On the basis of gridded monthly averages, we compare EMMA and all individual algorithms with TCCON and CarbonTracker model results (potential outliers, north/south gradient, seasonal (peak-to-peak) amplitude, standard deviation of the difference). Our findings show that EMMA is a promising candidate for inverse modeling studies. Compared to CarbonTracker, the satellite retrievals find consistently larger north/south gradients (by 0.3-0.9 ppm) and seasonal amplitudes (by 1.5-2.0 ppm).
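The core of an ensemble-median combination is a per-sounding median across the co-located retrievals from each algorithm; a minimal sketch with invented XCO2 values (EMMA itself involves additional screening and bias handling).

```python
def ensemble_median(values):
    """Median of co-located retrievals from several algorithms: the central
    value for an odd-sized ensemble, the mean of the two central values
    otherwise."""
    v = sorted(values)
    n = len(v)
    return v[n // 2] if n % 2 else 0.5 * (v[n // 2 - 1] + v[n // 2])

# Hypothetical XCO2 retrievals (ppm) from seven algorithms for one sounding
xco2 = [394.8, 395.1, 395.2, 395.4, 395.6, 396.0, 397.1]
combined = ensemble_median(xco2)
```

The median is attractive here because a single outlying algorithm (such as the 397.1 ppm value above) leaves the combined estimate unchanged, unlike a mean.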
NASA Astrophysics Data System (ADS)
Boukabara, S. A.; Garrett, K.
2014-12-01
A one-dimensional variational retrieval system has been developed that is capable of producing temperature and water vapor profiles in clear, cloudy, and precipitating conditions. The algorithm, known as the Microwave Integrated Retrieval System (MiRS), is currently running operationally at the National Oceanic and Atmospheric Administration (NOAA) National Environmental Satellite Data and Information Service (NESDIS), and is applied to a variety of data from the AMSU-A/MHS sensors on board the NOAA-18, NOAA-19, and MetOp-A/B polar satellite platforms, as well as SSMI/S on board both DMSP F-16 and F-18, and the NPP ATMS sensor. MiRS inverts microwave brightness temperatures into atmospheric temperature and water vapor profiles, along with hydrometeors and surface parameters, simultaneously. This coupled atmosphere/surface inversion allows for more accurate retrievals in the lower tropospheric layers by accounting for the impact of surface emissivity on the measurements. It also allows the inversion of soundings in all-weather conditions, thanks to the incorporation of the hydrometeor parameters in the inverted state vector, as well as the inclusion of the emissivity in the same state vector, which is accounted for dynamically under the highly variable surface conditions found beneath precipitating atmospheres. The inversion is constrained in precipitating conditions by the inclusion of covariances for hydrometeors, to take advantage of the natural correlations that exist between temperature and water vapor and liquid cloud, ice cloud, and rain water. In this study, we present a full assessment of temperature and water vapor retrieval performance in all-weather conditions and over all surface types (ocean, sea ice, land, and snow) using matchups with radiosondes as well as Numerical Weather Prediction and other satellite retrieval algorithms as references. An emphasis is placed on retrievals in cloudy and precipitating atmospheres, including extreme weather events
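The variational inversion at the heart of such a retrieval can be illustrated in scalar form: minimize a background-departure term plus an observation-departure term. The numbers below are invented, and the real system inverts full profiles with covariance matrices and a nonlinear radiative-transfer forward operator.

```python
def scalar_1dvar(xb, y, sigma_b, sigma_o, h=1.0):
    """Scalar analogue of a variational retrieval: minimize
    J(x) = (x - xb)^2 / sigma_b^2 + (y - h * x)^2 / sigma_o^2.
    Setting dJ/dx = 0 gives the analysis x = xb + k * (y - h * xb)."""
    k = (h * sigma_b ** 2) / (h ** 2 * sigma_b ** 2 + sigma_o ** 2)  # gain
    return xb + k * (y - h * xb)

# Hypothetical background temperature (K), observation (K), and error std. devs.
analysis = scalar_1dvar(xb=280.0, y=282.0, sigma_b=2.0, sigma_o=1.0)
```

With a background four times less certain than the observation (variance 4 vs. 1), the gain is 0.8 and the analysis is pulled most of the way toward the observation.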
NASA Astrophysics Data System (ADS)
Zhai, Shen-qiang; Liu, Jun-qi; Kong, Ning; Liu, Feng-qi; Li, Lu; Wang, Li-jun; Wang, Zhan-guo
2011-08-01
Infrared detection within the atmospheric window between 3 and 5 μm has gained great interest because of its wide range of applications, such as eye-safe free-space optical communication links and high-precision time-of-flight measurements used in 3D imaging. In this letter, we report on the characteristics of two InP-based strain-compensated InGaAs/InAlAs quantum cascade detectors (QCDs) detecting around 4 μm and 4.5 μm, which are promising candidates for applications in this wavelength range. Maximal responsivity values of 11.43 mA/W at 180 K and 10.1 mA/W at 78 K, and Johnson-noise-limited detectivities of 2.43×10¹⁰ and 2×10¹⁰ Jones at 78 K, for the 4.5 μm and the 4 μm device, respectively, were obtained. In addition, both devices can work up to room temperature with responsivities of 0.81 mA/W (4.5 μm) and 1.64 mA/W (4 μm).
NASA Astrophysics Data System (ADS)
Susskind, Joel; Blaisdell, John M.; Iredell, Lena
2014-01-01
The atmospheric infrared sounder (AIRS) science team version-6 AIRS/advanced microwave sounding unit (AMSU) retrieval algorithm is now operational at the Goddard Data and Information Services Center (DISC). AIRS version-6 level-2 products are generated near real time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. Some of the significant improvements in retrieval methodology contained in the version-6 retrieval algorithm compared to that previously used in version-5 are described. In particular, the AIRS science team made major improvements with regard to the algorithms used to (1) derive surface skin temperature and surface spectral emissivity; (2) generate the initial state used to start the cloud clearing and retrieval procedures; and (3) derive error estimates and use them for quality control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, version-6 also operates in an AIRS only (AO) mode, which produces results almost as good as those of the full AIRS/AMSU mode. The improvements of some AIRS version-6 and version-6 AO products compared to those obtained using version-5 are also demonstrated.
NASA Astrophysics Data System (ADS)
Teillet, P. M.
1992-09-01
Radiometric and atmospheric corrections are formulated with a view to computing vegetation indices such as the Normalized Difference Vegetation Index (NDVI) from surface reflectances rather than the digital signal levels recorded at the sensor. In particular, look-up table (LUT) results from an atmospheric radiative transfer code are used to save time and avoid the complexities of running and maintaining such a code in a production environment. The data flow for radiometric image correction is very similar to commonly used geometric correction data flows. The role of terrain elevation in the atmospheric correction process is discussed and the effect of topography on NDVI is highlighted.
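The vegetation index computed from the corrected reflectances is the standard normalized difference; a one-line sketch with invented red/NIR surface reflectances typical of green vegetation.

```python
def ndvi(red_refl, nir_refl):
    """Normalized Difference Vegetation Index from surface reflectances:
    (NIR - red) / (NIR + red), bounded between -1 and 1."""
    return (nir_refl - red_refl) / (nir_refl + red_refl)

# Hypothetical surface reflectances for a vegetated pixel after correction
value = ndvi(red_refl=0.05, nir_refl=0.40)
```

Computing NDVI from surface rather than at-sensor reflectance is exactly why the LUT-based atmospheric correction described above matters: path radiance inflates the red band over dark vegetation and depresses the index.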
Turbulence compensation: an overview
NASA Astrophysics Data System (ADS)
van Eekeren, Adam W. M.; Schutte, Klamer; Dijk, Judith; Schwering, Piet B. W.; van Iersel, Miranda; Doelman, Niek J.
2012-06-01
In general, long range visual detection, recognition and identification are hampered by turbulence caused by atmospheric conditions. Much research has been devoted to the field of turbulence compensation. One of the main advantages of turbulence compensation is that it enables visual identification over larger distances. In many (military) scenarios this is of crucial importance. In this paper we give an overview of several software and hardware approaches to compensate for the visual artifacts caused by turbulence. These approaches are very diverse and range from the use of dedicated hardware, such as adaptive optics, to the use of software methods, such as deconvolution and lucky imaging. For each approach the pros and cons are given, and it is indicated for which scenario the approach is useful. In more detail we describe the turbulence compensation methods TNO has developed in recent years and place them in the context of the different turbulence compensation approaches and TNO's turbulence compensation roadmap. Furthermore we look forward and indicate the upcoming challenges in the field of turbulence compensation.
NASA Astrophysics Data System (ADS)
Anderson, G.; Berk, A.; Harder, G.; Fontenla, J.; Shettle, E.; Pilewski, P.; Kindel, B.; Chetwynd, J.; Gardner, J.; Hoke, M.; Jordan, A.; Lockwood, R.; Felde, G.; Archarya, P.
2006-12-01
The opportunity to insert state-of-the-art solar irradiance measurements and calculations, with subtle perturbations, into a narrow spectral resolution radiative transfer model has recently been facilitated through the release of MODTRAN-5 (MOD5). The new solar data are from: (1) SORCE satellite measurements of solar variability over the solar rotation cycle, and (2) an ultra-narrow calculation of a new solar source irradiance, extending over the full MOD5 spectral range, from 0.2 um to the far-IR. MODTRAN-5, the MODerate resolution radiance and TRANsmittance code, has been developed collaboratively by the Air Force Research Laboratory and Spectral Sciences, Inc., with a history dating back to LOWTRAN. It includes approximations for all local thermodynamic equilibrium terms associated with molecular, cloud, aerosol and surface components for emission, scattering, and reflectance, including multiple scattering, refraction and a statistical implementation of Correlated-k averaging. The band model is based on 0.1 cm-1 (also 1.0, 5.0 and 15.0 cm-1) statistical binning for line centers within the interval, captured through an exact formulation of the full Voigt line shape. Spectroscopic parameters are from HITRAN 2004 with user-defined options for additional gases. Recent validation studies show MOD5 replicates line-by-line brightness temperatures to within ~0.02 K average and <1.0 K RMS. MOD5 can then serve as a surrogate for a variety of perturbation studies, including the two modes for the solar source function, Io. (1) Data from the Solar Radiation and Climate Experiment (SORCE) satellite mission provide state-of-the-art measurements of UV, visible, near-IR, plus total solar radiation, on a near real-time basis. These internally consistent estimates of the Sun's output over solar rotation and longer time scales are valuable inputs for studying effects of the Sun's radiation on Earth's atmosphere and climate. When solar rotation encounters bright plage and dark sunspots, relative variations are
NASA Technical Reports Server (NTRS)
Wehrbein, W. M.; Leovy, C. B.
1981-01-01
A Curtis matrix is used to compute cooling by the 15 micron and 10 micron bands of carbon dioxide. Escape of radiation to space and exchange with the lower boundary are used for the 9.6 micron band of ozone. The Voigt line shape, vibrational relaxation, line overlap, and the temperature dependence of line strength distributions and transmission functions are incorporated into the Curtis matrices. The distributions of the atmospheric constituents included in the algorithm and the method used to compute the Curtis matrices are discussed, as well as cooling or heating by the 9.6 micron band of ozone. The FORTRAN programs and subroutines that were developed are described and listed.
Network compensation for missing sensors
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Mulligan, Jeffrey B.
1991-01-01
A network learning translation-invariance algorithm to compute interpolation functions is presented. This algorithm, with one fixed receptive field, can construct a linear transformation compensating for gain changes, sensor position jitter, and sensor loss when there are enough remaining sensors to adequately sample the input images. However, when the images are undersampled and complete compensation is not possible, the algorithm needs to be modified. For moderate sensor losses, the algorithm works if the transformation weight adjustment is restricted to the weights to output units affected by the loss.
Numerical advection algorithms and their role in atmospheric transport and chemistry models
NASA Technical Reports Server (NTRS)
Rood, Richard B.
1987-01-01
During the last 35 years, well over 100 algorithms for modeling advection processes have been described and tested. This review summarizes the development and improvements that have taken place. The nature of the errors caused by numerical approximation to the advection equation is highlighted. Then the particular devices that have been proposed to remedy these errors are discussed. The extensive literature comparing transport algorithms is reviewed. Although there is no clear-cut 'best' algorithm, several conclusions can be made. Spectral and pseudospectral techniques consistently provide the highest degree of accuracy, but expense and difficulties in assuring positive mixing ratios are serious drawbacks. Schemes which consider fluid slabs bounded by grid points (volume schemes), rather than the simple specification of constituent values at the grid points, provide accurate positive-definite results.
ERIC Educational Resources Information Center
Roady, Celia
2008-01-01
Congress, the news media, and the Internal Revenue Service (IRS) continue to cast a wary eye on the compensation of nonprofit leaders. Hence, any college or university board that falls short of IRS expectations in its procedures for setting the president's compensation is putting the president, other senior officials, and board members at…
NASA Astrophysics Data System (ADS)
Badhan, Mahmuda A.; Mandell, Avi M.; Hesman, Brigette; Nixon, Conor; Deming, Drake; Irwin, Patrick; Barstow, Joanna; Garland, Ryan
2015-11-01
Understanding the formation environments and evolution scenarios of planets in nearby planetary systems requires robust measures for constraining their atmospheric physical properties. Here we have utilized a combination of two different parameter retrieval approaches, Optimal Estimation and Markov Chain Monte Carlo, as part of the well-validated NEMESIS atmospheric retrieval code, to infer a range of temperature profiles and molecular abundances of H2O, CO2, CH4 and CO from available dayside thermal emission observations of several hot-Jupiter candidates. In order to keep the number of parameters low and thereby retrieve more plausible profile shapes, we have used a parametrized form of the temperature profile based upon an analytic radiative equilibrium derivation in Guillot et al. 2010 (Line et al. 2012, 2014). We show retrieval results on published spectroscopic and photometric data from both the Hubble Space Telescope and Spitzer missions, and compare them with simulations from the upcoming JWST mission. In addition, since NEMESIS utilizes correlated distributions of absorption coefficients (k-distributions) amongst atmospheric layers to compute these models, updates to spectroscopic databases can impact retrievals quite significantly for such high-temperature atmospheres. As high-temperature line databases are continually being improved, we also compare retrievals between older and newer databases.
Aerosol Plume Detection Algorithm Based on Image Segmentation of Scanning Atmospheric Lidar Data
Weekley, R. Andrew; Goodrich, R. Kent; Cornman, Larry B.
2016-04-06
An image-processing algorithm has been developed to identify aerosol plumes in scanning lidar backscatter data. The images in this case consist of lidar data in a polar coordinate system. Each full lidar scan is taken as a fixed image in time, and sequences of such scans are considered functions of time. The data are analyzed in both the original backscatter polar coordinate system and a lagged coordinate system. The lagged coordinate system is a scatterplot of two datasets, such as subregions taken from the same lidar scan (spatial delay), or two sequential scans in time (time delay). The lagged coordinate system processing allows for finding and classifying clusters of data. The classification step is important in determining which clusters are valid aerosol plumes and which are from artifacts such as noise, hard targets, or background fields. These cluster classification techniques have skill since both local and global properties are used. Furthermore, more information is available since both the original data and the lag data are used. Performance statistics are presented for a limited set of data processed by the algorithm, where results from the algorithm were compared to subjective truth data identified by a human.
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high spectral resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using first-guess information is developed to retrieve simultaneously temperature, water vapor and ozone atmospheric profiles. The performance of the resulting fast and accurate inverse model is evaluated with a large diversified data set of radiosonde atmospheres including rare events.
NASA Astrophysics Data System (ADS)
Liuzzi, G.; Masiello, G.; Serio, C.; Mancarella, F.; Fonti, S.; Roush, T.
The problem of fully simultaneous retrieval of surface and atmosphere has been satisfactorily addressed for Earth in many works (masACP09, carENSO05), especially for high-resolution instruments. However, such retrieval know-how has not been completely implemented in other planetary contexts. In this perspective, we present a new methodology for the simultaneous retrieval of surface and atmospheric parameters of Mars. The methodology, fully explained in liuzzi2015, is based on a non-linear, iterative optimal estimation scheme, supported by a statistical retrieval procedure used to initialize the physical retrieval algorithm with a reliable first guess of the atmospheric parameters. The forward module (liuzzi2014) is fully integrated with the inverse one, and it is a monochromatic radiative transfer model with the capability to calculate genuine analytical Jacobians of any desired geophysical parameter. We describe both the mathematical framework of the methodology and, as a proof of concept, its application to a large sample of data acquired by the Thermal Emission Spectrometer (TES). Results are drawn for the case of surface temperature and emissivity, atmospheric temperature profile, water vapour, dust and ice mixing ratios. Some work has also been done on revisiting the claims of methane detection with TES data (fon10, fonti2015). Comparison with climate models and other TES data analyses shows very good agreement and consistency. Moreover, we show how the methodology can be applied to other instruments looking at Mars, simply by customizing parts of the forward and inverse modules.
Watson, K.; Hummer-Miller, S.
1981-01-01
A method based solely on remote sensing data has been developed to estimate those meteorological effects which are required for thermal-inertia mapping. It assumes that the atmospheric fluxes are spatially invariant and that the solar, sky, and sensible heat fluxes can be approximated by a simple mathematical form. Coefficients are determined by a least-squares fit of observational data to our thermal model. A comparison between field measurements and the model-derived flux shows the type of agreement which can be achieved. An analysis of the limitations of the method is also provided. © 1981.
NASA Technical Reports Server (NTRS)
Mcfarland, Richard E.
1986-01-01
Computer-generated graphics in real-time helicopter simulation produce objectionable scene-presentation time delays. In the flight simulation laboratory at Ames Research Center, it has been determined that these delays have an adverse influence on pilot performance during aggressive tasks such as nap-of-the-earth (NOE) maneuvers. Using contemporary equipment, computer-generated image (CGI) time delays are an unavoidable consequence of the operations required for scene generation. However, provided that magnitude distortions at higher frequencies are tolerable, delay compensation is possible over a restricted frequency range. This range, assumed to have an upper limit of perhaps 10 or 15 rad/sec, conforms approximately to the bandwidth associated with helicopter handling-qualities research. A compensation algorithm is introduced here and evaluated in terms of tradeoffs in frequency responses. The algorithm has a discrete basis and accommodates both a large, constant transport delay interval and a periodic delay interval, as associated with asynchronous operations.
NASA Astrophysics Data System (ADS)
Belov, V. V.; Tarasenkov, M. V.
2015-11-01
An algorithm for atmospheric correction of satellite images that combines consideration of the main factors influencing imaging with a number of techniques allowing the computational time to be decreased considerably is analyzed. Using a series of images of the south of the Tomsk Region recorded from 7/13/2013 to 7/17/2013 under low atmospheric turbidity, the results of atmospheric correction using the suggested algorithm are compared with the results obtained using the NASA MOD09 algorithm. The correction error is estimated under the assumption of a linear change of the reflection coefficient from image to image. Our comparison demonstrates that the results of correction differ within the correction error.
Targeting Atmospheric Simulation Algorithms for Large Distributed Memory GPU Accelerated Computers
Norman, Matthew R
2013-01-01
Computing platforms are increasingly moving to accelerated architectures, and here we deal particularly with GPUs. In [15], a method was developed for atmospheric simulation to improve efficiency on large distributed memory machines by reducing communication demand and increasing the time step. Here, we improve upon this method to further target GPU accelerated platforms by reducing GPU memory accesses, removing a synchronization point, and better clustering computations. The modification ran over two times faster in some cases even though more computations were required, demonstrating the merit of improving memory handling on the GPU. Furthermore, we discover that the modification also has a near 100% hit rate in fast on-chip L1 cache and discuss the reasons for this. In concluding, we remark on further potential improvements to GPU efficiency.
NASA Astrophysics Data System (ADS)
Houweling, S.; Ringeval, B.; Basu, A.; Van Beek, L. P.; Van Bodegom, P.; Spahni, R.; Gatti, L.; Gloor, M.; Roeckmann, T.
2013-12-01
Tropical wetlands are an important and highly uncertain term in the global budget of methane. Unlike wetlands in higher latitudes, which are dominated by waterlogged peatlands, tropical wetlands consist primarily of inundated river floodplains responding seasonally to variations in river discharge. Despite the fact that the hydrology of these systems is obviously very different, process models used for estimating methane emissions from wetlands commonly lack a dedicated parameterization for the tropics. This study is a first attempt to develop such a parameterization for use in the global dynamical vegetation model LPX. The required floodplain extents and water depths are calculated offline using the global hydrological model PCR-GLOBWB, which includes a sophisticated river routing scheme. LPX itself has been extended with a dedicated floodplain land unit and flood-tolerant PFTs. The simulated species competition and productivity have been verified using GLC2000 and MODIS, pointing to directions for further model improvement regarding vegetation dynamics and hydrology. LPX-simulated methane fluxes have been compared with available in situ measurements from tropical America. Finally, estimates for the Amazon basin have been implemented in the TM5 atmospheric transport model and compared with aircraft-measured vertical profiles. The first results that will be presented demonstrate that, despite the limited availability of measurements, useful constraints on the magnitude and seasonality of Amazonian methane emissions can be derived.
Detection of atmospheric rivers: Evaluation and application of an algorithm for global studies
NASA Astrophysics Data System (ADS)
Guan, Bin; Waliser, Duane E.
2015-12-01
Atmospheric rivers (ARs) are narrow, elongated, synoptic jets of water vapor that play important roles in the global water cycle and regional weather/hydrology. A technique is developed for objective detection of ARs on the global domain based on characteristics of the integrated water vapor transport (IVT). AR detection involves thresholding 6-hourly fields of ERA-Interim IVT based on the 85th percentile specific to each season and grid cell and a fixed lower limit of 100 kg m-1 s-1 and checking for the geometry requirements of length >2000 km, length/width ratio >2, and other considerations indicative of AR conditions. Output of the detection includes the AR shape, axis, landfall location, and basic statistics of each detected AR. The performance of the technique is evaluated by comparison to AR detection in the western North America, Britain, and East Antarctica with three independently conducted studies using different techniques, with over ~90% agreement in AR dates. Among the parameters tested, AR detection shows the largest sensitivity to the length criterion in terms of changes in the resulting statistical distribution of AR intensity and geometry. Global distributions of key AR characteristics are examined, and the results highlight the global footprints of ARs and their potential importance on global and regional scales. Also examined are seasonal dependence of AR frequency and precipitation and their modulation by four prominent modes of large-scale climate variability. The results are in broad consistency with previous studies that focused on landfalling ARs in the west coasts of North America and Europe.
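The thresholding stage of the detection described above (seasonal, per-grid-cell 85th-percentile IVT combined with a fixed 100 kg m-1 s-1 floor) can be sketched as follows; the subsequent geometry checks (length >2000 km, length/width >2) on each connected region are omitted, and all names are illustrative rather than from the paper:

```python
import numpy as np

def ar_threshold_mask(ivt, ivt_p85, floor=100.0):
    """First stage of AR detection: threshold an IVT magnitude field.

    ivt     : 2-D field of integrated water vapor transport (kg m-1 s-1)
    ivt_p85 : 85th-percentile climatology for the same season and cells
    floor   : fixed lower limit excluding weak-moisture regions
    Returns a boolean mask of candidate AR cells; geometry requirements
    would then be checked on each connected region of the mask.
    """
    ivt = np.asarray(ivt, dtype=float)
    thresh = np.maximum(np.asarray(ivt_p85, dtype=float), floor)
    return ivt > thresh

# Toy 2x3 field: cells pass only where IVT exceeds both the percentile
# climatology and the fixed floor.
ivt = np.array([[350., 90., 120.],
                [400., 95., 80.]])
p85 = np.array([[250., 250., 110.],
                [250., 250., 110.]])
mask = ar_threshold_mask(ivt, p85)
```

Connected-component labeling (e.g., `scipy.ndimage.label`) would typically follow to extract each candidate region for the length and aspect-ratio tests.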
Non-reciprocity compensation correction and antenna selection for optical large MIMO system
NASA Astrophysics Data System (ADS)
Chen, Jie; Chi, Xue-fen; Zhao, Lin-lin
2015-11-01
This paper exploits an optical large multiple-input multiple-output (MIMO) system. We first establish the non-reciprocity compensation correction factor to solve the channel non-reciprocity problem. Then we propose an antenna selection algorithm with the goal of realizing maximum energy efficiency (EE) while satisfying the outage EE. The simulation results prove that this non-reciprocity compensation correction factor can effectively compensate the beam energy attenuation gap and spatial correlation gap between uplink and downlink, and that this antenna selection algorithm can economize the number of transmit antennas and achieve high EE performance. Finally, we apply direct-current-biased optical orthogonal frequency division multiplexing (DCO-OFDM) modulation in our system and show that it improves the bit error rate (BER) compared with on-off keying (OOK) modulation, so DCO-OFDM modulation can resist atmospheric turbulence effectively.
NASA Astrophysics Data System (ADS)
Biavati, G.; Kretschmer, R.; Gerbig, C.; Feist, D. G.
2012-04-01
The retrieval of mixing height (MH) is a common target of several scientific communities all over the world. A strong effort is needed because modeling of MH generally fails, introducing strong errors in the estimates of the concentrations of pollutants and greenhouse gases within the boundary layer. In Europe, local meteorological services and international projects are implementing networks of instruments that can provide atmospheric profiles of different quantities. These networks will continuously provide data which could be used to constrain MH values. The current availability of atmospheric profiles of different nature, such as radiosondes, ground-based lidars and ceilometers as well as satellites over Europe, grants a spatial coverage that allows estimation of the impact of the knowledge of MH on transport models at synoptic scale for quantities such as CO2 and CH4 mixing ratios. In this study we apply several algorithms to retrieve MH from different data sources: the ceilometer network installed by the German Weather Service, data from the CALIPSO satellite, and all the WMO radio-soundings available over Europe during the IMECC (Infrastructure for Measurements of the European Carbon Cycle) campaign in 2009. The values obtained from the optical instruments are validated using as reference the estimates retrieved from the virtual potential temperature profiles obtained by the radiosondes where co-location occurs, and using statistical interpolation to evaluate the estimates from the satellite and non-co-located stations. The impact of these MH estimates on CO2 mixing ratios will be evaluated with the Stochastic Time-Inverted Lagrangian Transport model (STILT) driven by WRF meteorology, in comparison with in situ measurements.
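The abstract does not state how MH is derived from the radiosonde virtual potential temperature profiles; one common choice is a parcel method, sketched here under that assumption (the function name and the 0.5 K excess are illustrative, not from the study):

```python
import numpy as np

def mixing_height_parcel(z, theta_v, excess=0.5):
    """Parcel-method estimate of mixing height from a sounding profile.

    z       : heights above ground (m), monotonically increasing
    theta_v : virtual potential temperature (K) at those heights
    excess  : K added to the surface value so instrument noise does not
              trigger a spurious crossing
    Returns the lowest height where theta_v first exceeds the surface
    value plus `excess`, or None if no crossing exists in the profile.
    """
    z = np.asarray(z, dtype=float)
    theta_v = np.asarray(theta_v, dtype=float)
    above = np.nonzero(theta_v > theta_v[0] + excess)[0]
    return float(z[above[0]]) if above.size else None
```

A convective boundary layer is roughly well mixed in theta_v, so the first level where the profile becomes warmer than the near-surface parcel marks the capping inversion.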
NASA Astrophysics Data System (ADS)
Marras, S.; Spano, D.; Sirca, C.; Duce, P.; Snyder, R.; Pyles, R. D.; Paw U, K. T.
2008-12-01
Land surface models are usually used to quantify energy and mass fluxes between terrestrial ecosystems and the atmosphere on micro- and regional scales. One of the most elaborate land surface models for flux modelling is the Advanced Canopy-Atmosphere-Soil Algorithm (ACASA) model, which provides micro-scale as well as regional-scale fluxes when embedded in a meso-scale meteorological model (e.g., MM5 or WRF). The model predicts vegetation conditions and changes with time due to plant responses to environmental variables. In particular, fluxes and profiles of heat, water vapor, carbon and momentum within and above the canopy are estimated using third-order equations. It also estimates turbulent profiles of velocity, temperature, and humidity within and above the canopy, and CO2 fluxes are estimated using a combination of Ball-Berry and Farquhar equations. The ACASA model is also able to include the effects of water stress on stomata, transpiration and CO2 assimilation. The ACASA model is unique because it separates the canopy domain into twenty atmospheric layers (ten layers within the canopy and ten layers above the canopy), and the soil is partitioned into fifteen layers of variable thickness. The model was mainly used over dense canopies in the past, so the aim of this work was to test the ACASA model over a sparse canopy, the Mediterranean maquis. The vegetation is composed of sclerophyllous shrub species that are evergreen, with leathery leaves and small height, form a moderately sparse canopy, and are tolerant of water stress conditions. The Eddy Covariance (EC) technique was used to collect continuous data for a period of more than 3 years. Field measurements were taken in a natural maquis site located near Alghero, Sardinia, Italy, and were used to parameterize and validate the model. The input values were selected by running the model several times, varying one parameter at a time. A second step in the parameterization process was the simultaneous variation of some parameters
NASA Technical Reports Server (NTRS)
Pawson, S.; Nielsen, Jon E.; Oman, L.; Douglass, A. R.; Duncan, B. N.; Zhu, Z.
2012-01-01
Convective transport is one of the dominant factors in determining the composition of the troposphere. It is the main mechanism for lofting constituents from near-surface source regions to the middle and upper troposphere, where they can subsequently be advected over large distances. Gases reaching the upper troposphere can also be injected through the tropopause and play a subsequent role in the lower stratospheric ozone balance. Convection codes in climate models remain a great source of uncertainty for both the energy balance of the general circulation and the transport of constituents. This study uses the Goddard Earth Observing System Chemistry-Climate Model (GEOS CCM) to perform a controlled experiment that isolates the impact of convective transport of constituents from the direct changes on the atmospheric energy balance. Two multi-year simulations are conducted. In the first, the thermodynamic variable, moisture, and all trace gases are transported using the multi-plume Relaxed-Arakawa-Schubert (RAS) convective parameterization. In the second simulation, RAS impacts the thermodynamic energy and moisture in this standard manner, but all other constituents are transported differently. The accumulated convective mass fluxes (including entrainment and detrainment) computed at each time step of the GCM are used with a diffusive (bulk) algorithm for the vertical transport, which above all is less efficient at transporting constituents from the lower to the upper troposphere. Initial results show the expected differences in vertical structure of trace gases such as carbon monoxide, but also show differences in lower stratospheric ozone, in a region where it can potentially impact the climate state of the model. This work will investigate in more detail the impact of convective transport changes by comparing the two simulations over many years (1996-2010), focusing on comparisons with observed constituent distributions and similarities and differences of patterns
A simplified guidance algorithm for lifting aeroassist orbital transfer vehicles
NASA Technical Reports Server (NTRS)
Cerimele, C. J.; Gamble, J. D.
1985-01-01
The derivation, logic, and performance of a simplified atmospheric guidance algorithm for aeroassist orbital-transfer vehicles (AOTVs) are presented. The algorithm was developed to meet the demand for an aerobraking trajectory guidance technique that was uncomplicated, easily integrated into existing trajectory programs, adaptable to a range of vehicle aerodynamic configurations, and capable of performance equivalent to currently available guidance programs in compensating for dispersions in entry conditions, atmospheric conditions, and aerodynamic characteristics. The result was a hybrid lifting guidance algorithm combining the method of reference-profile generation with the method of predictor/corrector schemes. The resulting performance is good (less than 3 n.m. error from the desired apogee despite uncertainties of + or - 50 percent atmospheric density, + or - 0.2 deg entry flight-path angle, or + or - 50 percent L/D). Combinations of these same dispersions with lesser magnitudes have also been handled successfully, although performance with density 'pockets' within the atmosphere requires more analysis.
An overview of turbulence compensation
NASA Astrophysics Data System (ADS)
Schutte, Klamer; van Eekeren, Adam W. M.; Dijk, Judith; Schwering, Piet B. W.; van Iersel, Miranda; Doelman, Niek J.
2012-09-01
In general, long range visual detection, recognition and identification are hampered by turbulence caused by atmospheric conditions. Much research has been devoted to the field of turbulence compensation. One of the main advantages of turbulence compensation is that it enables visual identification over larger distances. In many (military) scenarios this is of crucial importance. In this paper we give an overview of several software and hardware approaches to compensate for the visual artifacts caused by turbulence. These approaches are very diverse and range from the use of dedicated hardware, such as adaptive optics, to the use of software methods, such as deconvolution and lucky imaging. For each approach the pros and cons are given and it is indicated for which type of scenario this approach is useful. In more detail we describe the turbulence compensation methods TNO has developed in the last years and place them in the context of the different turbulence compensation approaches and TNO's turbulence compensation roadmap. Furthermore we look forward and indicate the upcoming challenges in the field of turbulence compensation.
NASA Astrophysics Data System (ADS)
Mei, Alessandro; Manzo, Ciro; Petracchini, Francesco; Bassani, Cristiana
2016-04-01
Remote sensing techniques allow estimation of vegetation parameters over large areas for forest health evaluation and biomass estimation. Moreover, the parametrization of specific indices such as the Normalized Difference Vegetation Index (NDVI) allows study of biogeochemical cycles and radiative energy transfer processes between soil/vegetation and the atmosphere. This paper focuses on the evaluation of vegetation cover analysis in the Leonessa Municipality, Latium Region (Italy), using 2015 Landsat 8 data and applying the OLI@CRI (OLI ATmospherically Corrected Reflectance Imagery) algorithm developed following the procedure described in Bassani et al. 2015. OLI@CRI is based on the 6SV radiative transfer model (Kotchenova et al., 2006), which is able to simulate the radiative field in the atmosphere-earth coupled system. NDVI was derived from the OLI corrected image. This index, widely used for biomass estimation and vegetation cover analysis, considers the sensor channels falling in the near-infrared and red spectral regions, which are sensitive to chlorophyll absorption and cell structure. The retrieved product was then spatially resampled at the MODIS image resolution and validated against the NDVI of MODIS, considered as reference. The physically-based OLI@CRI algorithm also provides the incident solar radiation at ground at the acquisition time by 6SV simulation. Thus, the OLI@CRI algorithm completes the remote sensing dataset required for a comprehensive analysis of the sub-regional biomass production by using data of the new-generation remote sensing sensor and an atmospheric radiative transfer model. If the OLI@CRI algorithm is applied to a temporal series of OLI data, the influence of the solar radiation on the above-ground vegetation can be analysed as well as vegetation index variation.
Borel, C.C.; Villeneuve, P.V.; Clodium, W.B.; Szymenski, J.J.; Davis, A.B.
1999-04-04
Deriving information about the Earth's surface requires atmospheric corrections of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by scattering and attenuation. In practice the atmosphere is usually not well known and thus it is necessary to use more practical methods. The authors describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI, and columnar water vapor. The authors also describe a simple method to correct a visible channel contaminated by thin cirrus clouds.
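A dark-surface search of the kind described can be sketched as a joint threshold on red reflectance and NDVI; the threshold values and names below are illustrative, not the authors':

```python
import numpy as np

def dark_surface_mask(red, nir, red_max=0.04, ndvi_min=0.1):
    """Flag candidate dark surfaces for atmospheric-correction use.

    Dark, vegetated pixels (low red reflectance, moderate NDVI) are the
    classic dark targets: over them the measured top-of-atmosphere
    signal is dominated by path radiance, so aerosol optical depth can
    be estimated there. Thresholds here are illustrative only.
    """
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    ndvi = (nir - red) / (nir + red + 1e-9)
    return (red < red_max) & (ndvi > ndvi_min)
```

The same NDVI field, combined with reflectance and columnar water vapor thresholds, can also serve the thick-cloud screening the abstract mentions.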
NASA Astrophysics Data System (ADS)
Jackson, Christopher Robert
"Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other developed using the graphics processing unit (GPU) of a NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.
NASA Astrophysics Data System (ADS)
Xu, Zhantang; Hu, Shuibo; Wang, Guifen; Zhao, Jun; Yang, Yuezhong; Cao, Wenxi; Lu, Peng
2016-05-01
Quantitative estimates of particulate matter (PM) concentration in sea ice using remote sensing data are helpful for studies of sediment transport and atmospheric dust deposition flux. In this study, the difference between the measured dirty and estimated clean albedo of sea ice was calculated, and a relationship between the albedo difference and PM concentration was found using field and laboratory measurements. A semianalytical algorithm for estimating PM concentration in sea ice was established. The algorithm was then applied to MODIS data over the Bohai Sea, China. Comparisons between MODIS-derived and in situ measured PM concentrations showed good agreement, with a mean absolute percentage difference of 31.2%. From 2005 to 2010, the MODIS-derived annual average PM concentration was approximately 0.025 g/L at the beginning of January. After a month of atmospheric dust deposition, it increased to 0.038 g/L. Atmospheric dust deposition flux was estimated to be 2.50 t/km2/month, similar to the 2.20 t/km2/month reported in a previous study. The result was compared with on-site measurements at a nearby ground station, which was close to industrial and residential areas where larger dust depositions occurred than over the sea; although there were discrepancies between the absolute magnitudes of the two data sets, they demonstrated similar trends.
NASA Astrophysics Data System (ADS)
Malevich, S. B.; Woodhouse, C. A.
2015-12-01
This work explores a new approach to quantifying cool-season mid-latitude circulation dynamics as they relate to western US streamflow variability and drought. This information is used to probabilistically associate patterns of synoptic atmospheric circulation with spatial patterns of drought in western US streamflow. Cool-season storms transport moisture from the Pacific Ocean and are a primary source of western US streamflow. Studies over the past several decades have emphasized that western US hydroclimate is influenced by the intensity and phasing of ocean and atmosphere dynamics and teleconnections, such as ENSO and North Pacific variability. These complex interactions are realized in atmospheric circulation along the west coast of North America. The region's atmospheric circulation can encourage a preferential flow in winter storm tracks from the Pacific, and thus influence the moisture conditions of a given river basin over the course of the cool season. These dynamics have traditionally been measured with atmospheric indices based on values from fixed points in space or principal component loadings. This study uses collective search agents to quantify the position and intensity of potentially non-stationary atmospheric features in climate reanalysis datasets, relative to regional hydrology. Results underline the spatio-temporal relationship between semi-permanent atmospheric characteristics and naturalized streamflow from major river basins of the western US. A probabilistic graphical model quantifies this relationship while accounting for uncertainty from noisy climate processes and limitations of dataset length. This yields probabilities for semi-permanent atmospheric features that we hope to associate with extreme droughts of the paleo record, based on our understanding of atmosphere-streamflow relations observed in the instrumental record.
Nikolaev, V.I.; Yatsko, S.N.
1995-12-01
A mathematical model and a package of programs are presented for simulating the atmospheric turbulent diffusion of contaminating impurities from land based and other sources. Test calculations and investigations of the effect of various factors are carried out.
Adaptive tracking and compensation of laser spot based on ant colony optimization
NASA Astrophysics Data System (ADS)
Yang, Lihong; Ke, Xizheng; Bai, Runbing; Hu, Qidi
2009-05-01
Atmospheric absorption, scattering, and turbulence affect a laser signal transmitted through the atmospheric channel, causing laser spot twinkling, beam drift, and spot split-up. These phenomena seriously affect the stability and reliability of a laser spot receiving system. To reduce the influence of atmospheric turbulence, we adopt optimal-control ideas from the field of artificial intelligence and propose a novel adaptive optical control technology, model-free optimized adaptive control. We analyze low-order-mode wavefront error theory, in which an adaptive optical system is employed to adjust errors, and design its adaptive system structure. The ant colony algorithm, characterized by positive feedback, distributed computing, and greedy heuristic search, is the core control algorithm. Ant colony optimization of adaptive optical phase compensation is simulated. Simulation results show that the algorithm can effectively control the laser energy distribution, improve laser beam quality, and enhance the signal-to-noise ratio of the received signal.
C. BOREL; W. CLODIUS
2001-04-01
This paper discusses the algorithms created for the Multi-spectral Thermal Imager (MTI) to retrieve temperatures and emissivities. Recipes for the physics-based retrieval of water temperature and of water-surface emissivity are described. A simple radiative transfer model for multi-spectral sensors is developed. A method to create look-up tables and the criterion for finding the optimum water temperature are covered. Practical aspects such as conversion from band-averaged radiances to brightness temperatures and effects of variations in the spectral response on the atmospheric transmission are discussed. A recipe for a temperature/emissivity separation algorithm when water surfaces are present is given. Results of skin water temperature retrievals are compared with in situ measurements of the bulk water temperature at two locations.
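The conversion from band-averaged radiance to brightness temperature mentioned above amounts to inverting the Planck function at an effective band wavelength. A minimal sketch, treating the band as monochromatic (an approximation; the paper's recipe accounts for the full spectral response of each band):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K = 1.380649e-23     # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * K * temp_k)
    return a / (math.exp(b) - 1.0)

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law for temperature."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * K * math.log(1.0 + a / radiance))

# Round trip at 10 um (a typical thermal-infrared wavelength) and 300 K:
L = planck_radiance(10e-6, 300.0)
print(brightness_temperature(10e-6, L))  # ~300 K
```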
NASA Astrophysics Data System (ADS)
Reuter, M.; Bösch, H.; Bovensmann, H.; Bril, A.; Buchwitz, M.; Butz, A.; Burrows, J. P.; O'Dell, C. W.; Guerlet, S.; Hasekamp, O.; Heymann, J.; Kikuchi, N.; Oshchepkov, S.; Parker, R.; Pfeifer, S.; Schneising, O.; Yokota, T.; Yoshida, Y.
2012-09-01
We analyze an ensemble of seven XCO2 retrieval algorithms for SCIAMACHY and GOSAT. The ensemble spread can be interpreted as regional uncertainty and can help to identify locations for new TCCON validation sites. Additionally, we introduce the ensemble median algorithm EMMA, which combines individual soundings of the seven algorithms into one new dataset. The ensemble takes advantage of the algorithms' independent developments. We find that the ensemble spread is often <1 ppm but rises up to 2 ppm, especially in the tropics and East Asia. On the basis of gridded monthly averages, we compare EMMA and all individual algorithms with TCCON and CarbonTracker model results (potential outliers, north/south gradient, seasonal peak-to-peak amplitude, standard deviation of the difference). Our findings show that EMMA is a promising candidate for inverse modeling studies. Compared to CarbonTracker, the satellite retrievals find consistently larger north/south gradients (by 0.3-0.9 ppm) and seasonal amplitudes (by 1.5-2.0 ppm).
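The ensemble-median idea behind EMMA can be illustrated on gridded averages: take the per-cell median across algorithms, and use the spread as an uncertainty estimate. The numbers below are synthetic, and EMMA's actual sounding-level selection rules are more involved than this:

```python
import numpy as np

# Rows: 7 hypothetical retrieval algorithms; columns: grid cells (XCO2, ppm).
xco2 = np.array([
    [395.1, 396.0],
    [395.4, 396.8],
    [395.0, 395.9],
    [395.6, 397.1],
    [395.2, 396.2],
    [395.3, 396.5],
    [395.5, 396.4],
])

emma = np.median(xco2, axis=0)                 # ensemble median per cell
spread = xco2.max(axis=0) - xco2.min(axis=0)   # simple ensemble spread per cell
print(emma)
print(spread)
```

The median is robust to a single outlying algorithm in a way the mean is not, which is one motivation for a median-based combination.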
The theory of compensated laser propagation through strong thermal blooming
NASA Astrophysics Data System (ADS)
Schonfeld, Jonathan F.
An account is given of the theory of adaptive compensation for a laser beam's thermal blooming in atmospheric transmission, giving attention to MOLLY, a highly realistic computer simulation of adaptively compensated laser propagation which illustrates the effects of atmospheric turbulence and thermal blooming. Robust experimental signatures have been developed for such important fundamental processes as phase-compensation instability (PCI), which is caused by positive feedback between an adaptive optics system and laser-induced atmospheric heating. The physics of uncompensated and compensated thermal blooming is discussed, in conjunction with the architecture of MOLLY and an analysis of PCI that takes detailed adaptive-optics hardware structures into account.
NASA Technical Reports Server (NTRS)
Dieriam, Todd A.
1990-01-01
Future missions to Mars may require pin-point landing precision, possibly on the order of tens of meters. The ability to reach a target while meeting a dynamic pressure constraint to ensure safe parachute deployment is complicated at Mars by low atmospheric density, high atmospheric uncertainty, and the desire to employ only bank angle control. The vehicle aerodynamic performance requirements and guidance necessary for a vehicle with a lift-to-drag ratio of 0.5 to 1.5 to maximize the achievable footprint while meeting the constraints are examined. A parametric study of the various factors related to entry vehicle performance in the Mars environment is undertaken to develop general vehicle aerodynamic design requirements. The combination of low lift-to-drag ratio and low atmospheric density at Mars results in a large phugoid motion involving the dynamic pressure, which complicates trajectory control. Vehicle ballistic coefficient is demonstrated to be the predominant characteristic affecting final dynamic pressure. Additionally, a speed brake is shown to be ineffective at reducing the final dynamic pressure. An adaptive precision entry atmospheric guidance scheme is presented. The guidance uses a numeric predictor-corrector algorithm to control downrange, an azimuth controller to govern crossrange, and an analytic control law to reduce the final dynamic pressure. Guidance performance is tested against a variety of dispersions, and results from selected tests are presented. Precision entry using bank angle control only is demonstrated to be feasible at Mars.
Reactive power compensating system
Williams, Timothy J.; El-Sharkawi, Mohamed A.; Venkata, Subrahmanyam S.
1987-01-01
The reactive power of an induction machine is compensated by providing fixed capacitors on each phase line for the minimum compensation required, sensing the current on one line at the time its voltage crosses zero to determine the actual compensation required for each phase, and selecting switched capacitors on each line to provide the balance of the compensation required.
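The selection step described above can be sketched as follows: fixed capacitors supply the minimum compensation, and switched capacitor steps are chosen per phase to cover the remaining deficit. The greedy selection strategy and the kvar values are illustrative assumptions, not the patented implementation:

```python
def select_switched(q_required_kvar, q_fixed_kvar, steps_kvar):
    """Greedily pick switched capacitor steps to cover the remaining kvar.

    Returns the chosen steps and any shortfall that the available step
    sizes cannot cover without overcompensating.
    """
    remaining = q_required_kvar - q_fixed_kvar
    chosen = []
    for step in sorted(steps_kvar, reverse=True):
        if step <= remaining:
            chosen.append(step)
            remaining -= step
    return chosen, remaining

# One phase needs 37 kvar; 10 kvar is fixed; steps of 20, 10, 5, 2.5 kvar exist.
chosen, shortfall = select_switched(37.0, 10.0, [20.0, 10.0, 5.0, 2.5])
print(chosen, shortfall)
```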
NASA Astrophysics Data System (ADS)
Lalande, Jean-Marie; Waxler, Roger; Velea, Doru
2016-04-01
As infrasonic waves propagate over long ranges through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update atmospheric properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach has the advantage of searching for the full probability density function in the parameter space at a lower computational cost than the extensive parameter search performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization, and yield estimation.
Optical tracking telescope compensation
NASA Technical Reports Server (NTRS)
Gilbart, J. W.
1973-01-01
In order to minimize the effects of parameter variations in the dynamics of an optical tracking telescope, a model-referenced parameter-adaptive control system is described that, in conjunction with more traditional forms of compensation, achieves a reduction of rms pointing error by more than a factor of six. The adaptive compensation system utilizes open loop compensation, closed loop compensation, and model reference compensation to provide the precise input needed to force the telescope axis velocity to follow the ideal velocity.
Compensator improvement for multivariable control systems
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.; Gresham, L. L.
1977-01-01
A theory and the associated numerical technique are developed for an iterative design improvement of the compensation for linear, time-invariant control systems with multiple inputs and multiple outputs. A strict constraint algorithm is used in obtaining a solution of the specified constraints of the control design. The result of the research effort is the multiple input, multiple output Compensator Improvement Program (CIP). The objective of the Compensator Improvement Program is to modify in an iterative manner the free parameters of the dynamic compensation matrix so that the system satisfies frequency domain specifications. In this exposition, the underlying principles of the multivariable CIP algorithm are presented and the practical utility of the program is illustrated with space vehicle related examples.
NASA Technical Reports Server (NTRS)
Herring, Thomas A.; Quinn, Katherine J.
2012-01-01
NASA's Ice, Cloud, and Land Elevation Satellite (ICESat) mission will be launched in late 2001. Its primary instrument is the Geoscience Laser Altimeter System (GLAS). The main purpose of this instrument is to measure elevation changes of the Greenland and Antarctic ice sheets. To measure the ranges accurately it is necessary to correct for the atmospheric delay of the laser pulses. The atmospheric delay depends on the integral of the refractive index along the path that the laser pulse travels through the atmosphere. The refractive index of air at optical wavelengths is a function of density and molecular composition. For ray paths near zenith and closed-form equations for the refractivity, the atmospheric delay can be shown to be directly related to surface pressure and total column precipitable water vapor. For ray paths off zenith, a mapping function relates the delay to the zenith delay. The closed-form equations for refractivity recommended by the International Union of Geodesy and Geophysics (IUGG) are optimized for ground-based geodesy techniques, and in the next section we consider whether these equations are suitable for satellite laser altimetry.
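The simplest (plane-parallel) form of the mapping function mentioned above is 1/cos(z): off-zenith rays traverse proportionally more atmosphere. A numerical sketch, with an assumed optical zenith delay of about 2.4 m as a typical sea-level order of magnitude (not a value taken from this paper):

```python
import math

def slant_delay(zenith_delay_m, zenith_angle_deg):
    """Plane-parallel mapping function: delay grows as 1/cos(z) off zenith."""
    return zenith_delay_m / math.cos(math.radians(zenith_angle_deg))

zd = 2.4  # assumed zenith delay in metres, for illustration only
for z in (0, 30, 60):
    print(z, round(slant_delay(zd, z), 3))
```

Real mapping functions refine 1/cos(z) to account for Earth curvature and the vertical refractivity profile, which matters at large zenith angles.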
NASA Astrophysics Data System (ADS)
Brands, S.; Gutiérrez, J. M.; San-Martín, D.
2016-04-01
A new atmospheric-river detection and tracking scheme based on the magnitude and direction of integrated water vapour transport is presented and applied separately over 13 regions located along the west coasts of Europe (including North Africa) and North America. Four distinct reanalyses are considered, two of which cover the entire twentieth century: NOAA-CIRES Twentieth Century Reanalysis v2 (NOAA-20C) and ECMWF ERA-20C. Calculations are done separately for the OND and JFM seasons and, for comparison with previous studies, for the ONDJFM season as a whole. Comparing the AR-counts from NOAA-20C and ERA-20C with a running 31-year window looping through 1900-2010 reveals differences in the climatological mean and inter-annual variability which, at the start of the twentieth century, are much more pronounced in western North America than in Europe. Correlating European AR-counts with the North Atlantic Oscillation (NAO) reveals a pattern reminiscent of the well-known precipitation dipole, which is stable throughout the entire century. A similar analysis linking western North American AR-counts to the North Pacific index (NPI) is hampered by the aforementioned poor reanalysis agreement at the start of the century. During the second half of the twentieth century, the strength of the NPI-link varies considerably with time in British Columbia and the Gulf of Alaska. Considering the period 1950-2010, AR-counts are then associated with other relevant large-scale circulation indices such as the East Atlantic, Scandinavian, Pacific-North American and West Pacific patterns (EA, SCAND, PNA and WP). Along the Atlantic coastline of the Iberian Peninsula and France, the EA-link is stronger than the NAO-link if the OND season is considered, and the SCAND-link found in northern Europe is significant during both seasons. Along the west coast of North America, teleconnections are generally stronger during JFM, in which case the NPI-link is significant in any of the five considered
Optimal design of robot accuracy compensators
Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)
1993-12-01
The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
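The DLS update described above can be sketched as follows: given the kinematic Jacobian J and a desired end-effector correction dx, the additive joint correction is dq = J^T (J J^T + lambda^2 I)^(-1) dx, where the damping keeps dq bounded near singular configurations. The two-link planar Jacobian below is a toy assumption for illustration; the paper applies the same update to full robot kinematic models:

```python
import numpy as np

def dls_correction(J, dx, damping=0.1):
    """Damped least-squares joint correction for an end-effector error dx."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), dx)

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Position Jacobian of a planar two-link arm (illustrative model)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

J = jacobian_2link(0.3, 0.6)
dq = dls_correction(J, np.array([0.01, -0.02]))
print(dq)  # small, bounded joint-command correction
```

Away from singularities the damping introduces only a small residual in J·dq versus dx; at singularities it trades exactness for boundedness, which is the point of the method.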
Energy Science and Technology Software Center (ESTSC)
2003-06-03
COMPERA is a decision support system designed to facilitate the compensation review process. With parameters provided by the user(s), the system generates recommendations for base increases and nonbase compensation that strive to align total compensation with performance compensation targets. The user(s) prescribe(s) compensation targets according to performance (or value of contribution) designators. These targets are presented in look-up tables, which are then used by embedded formulas in the worksheet to determine the recommended compensation for each individual.
Inflight parity vector compensation for FDI
NASA Astrophysics Data System (ADS)
Hall, S. R.; Motyka, P.; Gai, E.; Deyst, J. J., Jr.
The performance of a failure detection and isolation (FDI) algorithm applied to a redundant strapdown inertial measurement unit (IMU) is limited by sensor errors such as input axis misalignment, scale factor errors, and biases. This paper presents a technique for improving the performance of FDI algorithms applied to redundant strapdown IMUs. A Kalman filter provides estimates of those linear combinations of sensor errors that affect the parity vector. These estimates are used to form a compensated parity vector which does not include the effects of sensor errors. The compensated parity vector is then used in place of the uncompensated parity vector to make FDI decisions. Simulation results are presented in which the algorithm is tested in a realistic flight environment that includes vehicle maneuvers, the effects of turbulence, and sensor failures. The results show that the algorithm can significantly improve FDI performance, especially during vehicle maneuvers.
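The parity-vector construction underlying this FDI scheme can be sketched for a simple redundant-sensor geometry: with n sensors whose measurement geometry is H (n x 3), the parity vector p = Vz uses rows V spanning the left null space of H (VH = 0), so p is insensitive to the true state and responds only to sensor errors and failures. The four-sensor arrangement below is an assumption for illustration:

```python
import numpy as np

# Measurement geometry: three orthogonal axes plus one skewed redundant axis.
H = np.array([[1.0,   0.0,   0.0],
              [0.0,   1.0,   0.0],
              [0.0,   0.0,   1.0],
              [0.577, 0.577, 0.577]])

# Rows of V span the left null space of H (V @ H = 0).
_, _, vt = np.linalg.svd(H.T)
V = vt[3:]

x_true = np.array([0.1, -0.2, 9.8])      # true specific force
z = H @ x_true                            # fault-free measurements
z_faulty = z + np.array([0.0, 0.0, 0.5, 0.0])  # bias failure on sensor 3

print(np.abs(V @ z).max())        # ~0: parity ignores the true state
print(np.abs(V @ z_faulty).max()) # clearly nonzero: the failure is visible
```

Sensor errors such as misalignments and scale factors also leak into p, which is exactly what the paper's Kalman-filter compensation removes before thresholding.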
NASA Technical Reports Server (NTRS)
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 3 details the advanced CERES methods for performing scene identification and inverting each CERES scanner radiance to a top-of-the-atmosphere (TOA) flux. CERES determines cloud fraction, height, phase, effective particle size, layering, and thickness from high-resolution, multispectral imager data. CERES derives cloud properties for each pixel of the Tropical Rainfall Measuring Mission (TRMM) visible and infrared scanner and the Earth Observing System (EOS) moderate-resolution imaging spectroradiometer. Cloud properties for each imager pixel are convolved with the CERES footprint point spread function to produce average cloud properties for each CERES scanner radiance. The mean cloud properties are used to determine an angular distribution model (ADM) to convert each CERES radiance to a TOA flux. The TOA fluxes are used in simple parameterization to derive surface radiative fluxes. This state-of-the-art cloud-radiation product will be used to substantially improve our understanding of the complex relationship between clouds and the radiation budget of the Earth-atmosphere system.
NASA Astrophysics Data System (ADS)
Van Benthem, Mark H.; Woodbury, Drew P.
2015-05-01
In this paper, we describe the use of various methods of one-dimensional spectral compression by variable selection as well as principal component analysis (PCA) for compressing multi-dimensional sets of spectral data. We have examined methods of variable selection such as wavelength spacing, spectral derivatives, and spectral integration error. After variable selection, reduced transmission spectra must be decompressed for use. Here we examine various methods of interpolation, e.g., linear, cubic spline and piecewise cubic Hermite interpolating polynomial (PCHIP) to recover the spectra prior to estimating at-sensor radiance. Finally, we compressed multi-dimensional sets of spectral transmittance data from moderate resolution atmospheric transmission (MODTRAN) data using PCA. PCA seeks to find a set of basis spectra (vectors) that model the variance of a data matrix in a linear additive sense. Although MODTRAN data are intricate and are used in nonlinear modeling, their base spectra can be reasonably modeled using PCA yielding excellent results in terms of spectral reconstruction and estimation of at-sensor radiance. The major finding of this work is that PCA can be implemented to compress MODTRAN data with great effect, reducing file size, access time and computational burden while producing high-quality transmission spectra for a given set of input conditions.
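The PCA compression-and-reconstruction cycle described above can be sketched with synthetic spectra standing in for MODTRAN output (which we do not reproduce here): find basis spectra from the data matrix, keep only the leading coefficients, and reconstruct.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(0, 1, 200)
# 50 synthetic "transmission spectra" built from 3 underlying shapes plus noise.
shapes = np.vstack([np.sin(3 * np.pi * wl), np.exp(-5 * wl), wl**2])
coeffs = rng.normal(size=(50, 3))
spectra = coeffs @ shapes + 0.001 * rng.normal(size=(50, 200))

mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)

k = 3                              # keep 3 principal components
scores = U[:, :k] * S[:k]          # compressed representation (50 x 3)
recon = mean + scores @ Vt[:k]     # decompressed spectra (50 x 200)

rel_err = np.linalg.norm(recon - spectra) / np.linalg.norm(spectra)
print(rel_err)  # small: 3 components capture nearly all the variance
```

The compression ratio here is 200/3 per spectrum (plus the shared mean and basis), which is the file-size and access-time benefit the paper reports for MODTRAN data.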
Adaptive optics image deconvolution based on a modified Richardson-Lucy algorithm
NASA Astrophysics Data System (ADS)
Chen, Bo; Geng, Ze-xun; Yan, Xiao-dong; Yang, Yang; Sui, Xue-lian; Zhao, Zhen-lei
2007-12-01
Adaptive optical (AO) systems provide real-time compensation for atmospheric turbulence. However, the correction is often only partial, and deconvolution is required to reach the diffraction limit. The Richardson-Lucy (R-L) algorithm is the technique most widely used for AO image deconvolution, but the standard R-L algorithm (SRLA) often suffers from speckling, wraparound artifacts, and noise. A modified R-L algorithm (MRLA) for AO image deconvolution is presented. This novel algorithm applies Magain's correct-sampling approach and incorporates noise statistics into the standard R-L algorithm. An alternating iterative method is used to estimate the PSF and the object. Comparison experiments on indoor data and AO images were carried out with the SRLA and the MRLA. Experimental results show that the novel MRLA outperforms the SRLA.
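For concreteness, the standard R-L iteration (SRLA) that the paper modifies can be sketched as follows, using circular FFT convolution, which is precisely the source of the wraparound artifact mentioned above. This sketch does not attempt the MRLA's sampling or noise modifications:

```python
import numpy as np

def convolve(img, kernel):
    """Circular convolution via FFT (hence the wraparound behaviour)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def correlate(img, kernel):
    """Circular correlation via FFT (conjugate kernel spectrum)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(kernel))))

def richardson_lucy(observed, psf, n_iter=50):
    """Standard R-L: f <- f * corr(obs / conv(f, psf), psf)."""
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = convolve(estimate, psf)
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * correlate(ratio, psf)
    return estimate

# Demo: blur a point source with a Gaussian PSF, then deconvolve.
n = 16
y, x = np.indices((n, n))
y, x = np.minimum(y, n - y), np.minimum(x, n - x)  # wrapped coords: PSF at (0, 0)
psf = np.exp(-(x**2 + y**2) / (2 * 1.5**2))
psf /= psf.sum()

truth = np.zeros((n, n)); truth[8, 8] = 1.0
observed = np.clip(convolve(truth, psf), 0.0, None)
restored = richardson_lucy(observed, psf)
# The restored image re-concentrates flux at the source position.
```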
Robust springback compensation
NASA Astrophysics Data System (ADS)
Carleer, Bart; Grimm, Peter
2013-12-01
Springback simulation and springback compensation are more and more applied in the productive use of die engineering. In order to successfully compensate a tool, accurate springback results are needed as well as an effective compensation approach. In this paper a methodology is introduced for compensating tools effectively. The first step is full process simulation: not only the drawing operation is simulated but also all secondary operations such as trimming and flanging. The second is verification that the process is robust, i.e., that it yields repeatable results. To compensate effectively, a minimum clamping concept is then defined. Once these preconditions are fulfilled the tools can be compensated effectively.
Anderson, R C
1985-01-01
Congress has demonstrated interest in toxic compensation legislation, but not enough agreement to make significant progress. Advocates of reform claim that the legal system is heavily weighted against victims who seek compensation through the courts. Proposed reforms include a compensation fund and a cause of action in federal court. Critics have questioned whether these changes in the law would represent an improvement. Existing income replacement, medical cost reimbursement, and survivor insurance programs largely cover the losses of individuals with chronic disease. Thus, the need for an additional compensation program is not clear. Furthermore, experience with compensation funds such as the Black Lung Fund suggests that political rather than scientific criteria may be used to determine eligibility. Finally, under the proposed financing mechanisms the compensation funds that are being debated would not increase incentives for care in the handling of hazardous wastes or toxic substances. PMID:4085440
Compensator configurations for load currents' symmetrization
NASA Astrophysics Data System (ADS)
Rusinaru, D.; Manescu, L. G.; Dinu, R. C.
2016-02-01
This paper addresses the mitigation of asymmetry effects in 3-phase 3-wire networks, specifically the measure of connecting load-current symmetrization devices at the load coupling point. The time variation of the compensator parameters is determined as a function of time-recorded electrical values. The general sizing principle of the reactive components for load-current symmetrization is based on a simple equivalent model of the unbalanced 3-phase loads. By using these compensators a certain control of the power-component transits in the network is ensured. The control is based on the variation laws of the compensator parameters as functions of the recorded electrical values: [B] = [T]·[M]. The link between compensator parameters and measured values is given by a transformation matrix [T] for each operating condition of the supply network. Additional conditions for improving the energy efficiency of the compensator are considered, i.e., reactive power compensation. The compensator sizing algorithm was implemented in MATLAB software that generates the time evolution of the parameters of the load-current symmetrization device; its input data are time recordings of the electrical values. Using this software, results were obtained for a consumer connected at the 20 kV busbar of a distribution substation during a 24-hour measurement session. Although the sizing of the compensators addressed additional network operation aspects (power factor correction) in correlation with total or major load symmetrization, harmonic aspects of the network values were neglected.
Results of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) Experiment
NASA Technical Reports Server (NTRS)
Wilson, K. E.; Leatherman, P. R.; Cleis, R.; Spinhirne, J.; Fugate, R. Q.
1997-01-01
Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes. Phase I of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) experiment demonstrated the first propagation of an atmosphere-compensated laser beam to the lunar retroreflectors. A 1.06-micron Nd:YAG laser beam was propagated through the full aperture of the 1.5-m telescope at the Starfire Optical Range (SOR), Kirtland Air Force Base, New Mexico, to the Apollo 15 retroreflector array at Hadley Rille. Laser guide-star adaptive optics were used to compensate turbulence-induced aberrations across the transmitter's 1.5-m aperture. A 3.5-m telescope, also located at the SOR, was used as a receiver for detecting the return signals. JPL-supplied Chebyshev polynomials of the retroreflector locations were used to develop tracking algorithms for the telescopes. At times we observed in excess of 100 photons returned from a single pulse when the outgoing beam from the 1.5-m telescope was corrected by the adaptive optics system. No returns were detected when the outgoing beam was uncompensated. The experiment was conducted from March through September 1994, during the first or last quarter of the Moon.
NASA Astrophysics Data System (ADS)
Annewandter, R.; Kalinowksi, M. B.
2009-04-01
An underground nuclear explosion injects radionuclides into the surrounding host rock, creating an initial radionuclide distribution. In the case of fractured permeable media, cyclical changes in atmospheric pressure can draw gaseous species upwards to the surface, establishing a ratcheting pump effect. The resulting advective transport is orders of magnitude more significant than transport by molecular diffusion. In the 1990s the US Department of Energy funded the so-called Non-Proliferation Experiment, conducted by the Lawrence Livermore National Laboratory, to investigate this barometric pumping effect for verifying compliance with the Comprehensive Nuclear Test Ban Treaty. A chemical explosive of approximately 1 kt TNT-equivalent was detonated in a cavity located 390 m deep in the Rainier Mesa (Nevada Test Site), in which two tracer gases were emplaced. Within this experiment SF6 was first detected in soil gas samples taken near fault zones after 50 days, and 3He after 325 days. For this paper, a locally one-dimensional dual-porosity model for flow along the fracture and within the permeable matrix was used, after Nilson and Lie (1990). Seepage of gases and diffusion of tracers between fracture and matrix are accounted for. The advective flow along the fracture and within the matrix block is based on the FRAM filtering remedy and methodology of Chapman. The resulting system of equations is solved by an implicit non-iterative algorithm. Results on time of arrival and subsurface concentration levels for the CTBT-relevant xenon isotopes will be presented.
NASA Astrophysics Data System (ADS)
Andersson, A.; Sheesley, R. J.; Kirillova, E.; Gustafsson, O.
2010-12-01
High wintertime concentrations of black carbon aerosols (BCA) over South Asia and the Northern Indian Ocean are thought to have a large impact on the regional climate. Direct absorption of sunlight by BCAs causes heating of the atmosphere and cooling at the surface. To quantify such effects it is important to characterize a number of different properties of the aerosols. Here we present a novel application of the thermal-optical (OCEC) instrument in which the laser beam is used to obtain optical information about the aerosols. In particular, the novel algorithm accounts for non-carbon contributions to the light extinction. Combining these light extinction coefficients with the simultaneously constrained elemental carbon (EC) concentrations, the mass absorption cross section (MAC) is computed. Samples were collected during a continuous 14-month campaign (Dec 2008 - Mar 2009) at Sinaghad in Western India and on Hanimaadhoo, the northernmost island in the Maldives. This data set suggests that the MAC of the BCAs is variable, sometimes by a factor of 3 compared to the mean. This observation adds to the complexity of calculating the radiative forcing for BCAs, reinforcing previous observations that parameters such as aerosol mixing state and sources need to be taken into account.
Compensation of distributed delays in integrated communication and control systems
NASA Technical Reports Server (NTRS)
Ray, Asok; Luck, Rogelio
1991-01-01
The concept, analysis, implementation, and verification of a method for compensating delays that are distributed between the sensors, controller, and actuators within a control loop are discussed. With the objective of mitigating the detrimental effects of these network induced delays, a predictor-controller algorithm was formulated and analyzed. Robustness of the delay compensation algorithm was investigated relative to parametric uncertainties in plant modeling. The delay compensator was experimentally verified on an IEEE 802.4 network testbed for velocity control of a DC servomotor.
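The predictor step at the heart of such delay compensation can be sketched for a linear plant model: propagate the last measured state forward through the inputs already in flight, then compute the control from the predicted state. The plant matrices below are assumptions for illustration, not the paper's servomotor model:

```python
import numpy as np

def predict_state(A, B, x, u_history):
    """Propagate x forward through len(u_history) steps of already-issued inputs,
    using the discrete model x(k+1) = A x(k) + B u(k)."""
    for u in u_history:
        x = A @ x + B @ u
    return x

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])          # toy discrete-time plant
B = np.array([[0.0],
              [0.1]])
x = np.array([1.0, 0.0])            # last state received over the network
u_hist = [np.array([0.5]), np.array([-0.2])]  # inputs still in transit

print(predict_state(A, B, x, u_hist))  # state estimate at the actuation instant
```

The controller then acts on this prediction rather than on the stale measurement, which is how the network-induced delay is removed from the loop (model mismatch permitting, as the paper's robustness analysis examines).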
Doerry, Armin W.
2004-07-20
Movement of a GMTI radar during a coherent processing interval over which a set of radar pulses are processed may cause defocusing of a range-Doppler map in the video signal. This problem may be compensated by varying waveform or sampling parameters of each pulse to compensate for distortions caused by variations in viewing angles from the radar to the target.
NASA Astrophysics Data System (ADS)
Drinovec, L.; Močnik, G.; Zotter, P.; Prévôt, A. S. H.; Ruckstuhl, C.; Coz, E.; Rupakheti, M.; Sciare, J.; Müller, T.; Wiedensohler, A.; Hansen, A. D. A.
2015-05-01
Aerosol black carbon is a unique primary tracer for combustion emissions. It affects the optical properties of the atmosphere and is recognized as the second most important anthropogenic forcing agent for climate change. It is the primary tracer for adverse health effects caused by air pollution. For the accurate determination of mass equivalent black carbon concentrations in the air and for source apportionment of the concentrations, optical measurements by filter-based absorption photometers must take into account the "filter loading effect". We present a new real-time loading effect compensation algorithm based on measuring optical absorption on two parallel filter spots. This algorithm has been incorporated into the new Aethalometer model AE33. Intercomparison studies show excellent reproducibility of the AE33 measurements and very good agreement with post-processed data obtained using earlier Aethalometer models and other filter-based absorption photometers. The real-time loading effect compensation algorithm provides the high-quality data necessary for real-time source apportionment and for determination of the temporal variation of the compensation parameter k.
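The dual-spot idea can be illustrated with a simple model: if loading attenuates the apparent concentration as BC_measured = BC_true * (1 - k * ATN), two spots sampling the same air at different accumulation rates yield two equations in the two unknowns BC_true and k. This is a hedged sketch of that principle with synthetic numbers, not the AE33's production algorithm.

```python
# Illustrative dual-spot loading compensation: two filter spots with
# different attenuation (ATN) histories give two loaded readings of the
# same true concentration, letting us solve for BC_true and k.

def dual_spot_compensate(bc1, atn1, bc2, atn2):
    """Solve BC_measured = BC_true * (1 - k*ATN) from two spots."""
    k = (bc1 - bc2) / (bc1 * atn2 - bc2 * atn1)
    bc_true = bc1 / (1.0 - k * atn1)
    return bc_true, k

# Synthetic check: generate loaded readings from known BC_true and k,
# then recover both from the pair of measurements.
bc_true, k = 10.0, 0.005
atn1, atn2 = 50.0, 20.0
bc1 = bc_true * (1 - k * atn1)   # heavily loaded spot reads low
bc2 = bc_true * (1 - k * atn2)   # lightly loaded spot reads closer to truth
bc_est, k_est = dual_spot_compensate(bc1, atn1, bc2, atn2)
```

Because k is recovered continuously, its temporal variation (mentioned at the end of the abstract) falls out of the same computation.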
Horizontal density compensation in ocean general circulation models
NASA Astrophysics Data System (ADS)
Koch, Andrey O.; Helber, Robert W.; Richman, James G.; Barron, Charlie N.
2013-04-01
Density compensation is the condition where temperature (T) and salinity (S) gradients counteract in their effect on density. Open-ocean observations with SeaSoar tows and recent glider observations in the Gulf of Mexico reported in the scientific literature suggest that horizontal gradients in the surface mixed layer tend to be strongly density compensated over a range of spatial scales, while in the seasonal thermocline and deeper layers T,S-fronts are only partially compensated or uncompensated. We assess the capability of ocean general circulation models (OGCM) to develop horizontal density compensation as observed in the upper ocean. The physics required to evolve the initially density-compensated mixed layer toward the partially compensated conditions of the thermocline is tested. Idealized scenarios with horizontal, partially compensated density fronts in the mixed layer are examined in submesoscale-resolving run-down simulations with the Hybrid Coordinate Ocean Model (HYCOM). Simulations with no atmospheric forcing show that the initial density compensation does not change substantially, experiencing only a minor decrease with time as submesoscale eddies restratify the mixed layer. Submesoscale fronts tend to be more compensated than mesoscale fronts. A sensitivity analysis shows that the density compensation of submesoscale fronts is particularly sensitive to the horizontal diffusion rate. Simulations with wind forcing exhibit destruction of the initial density compensation due to ageostrophic frontogenesis, which is confirmed by recent glider observations in the Gulf of Mexico. The lack of model skill in developing and maintaining compensated thermohaline variability is attributed to the T,S horizontal diffusion parameterization used in HYCOM and generally in modern OGCMs: it is decoupled from vertical diffusion, and T and S are diffused horizontally in an identical manner. Our findings suggest that OGCM's skill to develop compensated thermohaline variability
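With a linearized equation of state, the degree of compensation is usually quantified by the density ratio R = (α ∂T/∂x) / (β ∂S/∂x), which equals 1 when the T and S contributions to the horizontal density gradient cancel exactly. A minimal sketch with typical (not paper-specific) coefficient values:

```python
# Quantifying density compensation with a linear equation of state:
# d(rho)/dx = rho0 * (beta*dS/dx - alpha*dT/dx), fully compensated when
# the density ratio R = (alpha*dT/dx)/(beta*dS/dx) equals 1.
# Coefficient values are typical of the upper ocean, assumed here.

RHO0 = 1025.0   # reference density, kg/m^3
ALPHA = 2.0e-4  # thermal expansion coefficient, 1/K
BETA = 7.6e-4   # haline contraction coefficient, 1/(g/kg)

def density_gradient(dTdx, dSdx):
    return RHO0 * (BETA * dSdx - ALPHA * dTdx)

def density_ratio(dTdx, dSdx):
    return (ALPHA * dTdx) / (BETA * dSdx)

# A compensated front: warm/salty water meets cool/fresh water with
# gradients tuned so their density effects cancel.
dTdx = 0.38e-3  # K per meter
dSdx = 0.10e-3  # (g/kg) per meter
r = density_ratio(dTdx, dSdx)      # 1 -> fully compensated
g = density_gradient(dTdx, dSdx)   # ~0 kg/m^3 per meter
```

Partial compensation (R between 0 and 1, or above 1) leaves a residual density front, which is the regime the thermocline observations describe.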
Algorithms and Algorithmic Languages.
ERIC Educational Resources Information Center
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Unbalance vibratory displacement compensation for active magnetic bearings
NASA Astrophysics Data System (ADS)
Gao, Hui; Xu, Longxiang; Zhu, Yili
2013-01-01
Because the dynamic stiffness of radial magnetic bearings is limited, unbalance displacement vibration occurs when the rotor spins at high speed. The most effective way to reduce this displacement vibration is to enhance the radial magnetic bearing stiffness by increasing the control currents, but suitable control currents are not easy to provide, especially in real time. To implement real-time unbalance displacement vibration compensation, the existence of radial displacement runout is demonstrated by analyzing the active magnetic bearing (AMB) mathematical model. To restrain the runout, a new control scheme, adaptive iterative learning control (AILC), is proposed in view of the periodic uncertainties of the rotor frequency during the startup process. The previous error signal is added into the AILC learning law to enhance the convergence speed, and a factor β influenced by the rotor rotation frequency is introduced as the learning output coefficient to improve the rotor control effects. As a feed-forward compensation controller, AILC can provide an unknown, ideal compensatory signal that makes the rotor rotate around its geometric axis through the power amplifier and radial magnetic bearings. To improve the robust stability of the AMB closed-loop control system, an incomplete-differential PID feedback controller is adopted. The correctness of the AILC algorithm is validated by simulating the AMB mathematical model with the AILC compensation algorithm in MATLAB, and compensation at a fixed rotational frequency is implemented in the actual AMB system. The simulation and experiment results show that the compensation scheme based on the AILC algorithm as feed-forward compensation and the PID algorithm as closed-loop control can realize minimum displacement compensation of the AMB system at a fixed frequency and improve the stability of the control system. The proposed research provides a new adaptive iterative learning
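The core of any iterative learning controller is a trial-to-trial update of the input from the previous trial's tracking error. The sketch below shows a generic P-type ILC law, u_{k+1}(t) = u_k(t) + L * e_k(t+1), on a toy scalar plant; it is not the paper's exact AILC law (which adds a previous-error term and the frequency-dependent factor β), and the plant, gains, and horizon are invented for illustration.

```python
# Generic P-type iterative learning control on a repeated trajectory:
# each trial reruns the same task, and the next trial's input is
# corrected with the shifted error of the last trial. Convergence holds
# here because |1 - L*B| < 1 for this plant.

A, B = 0.3, 0.5   # toy plant: y[t+1] = A*y[t] + B*u[t]
L_GAIN = 1.0      # learning gain
N = 10            # trajectory length

def run_trial(u):
    y = [0.0] * (N + 1)
    for t in range(N):
        y[t + 1] = A * y[t] + B * u[t]
    return y

ref = [1.0] * (N + 1)     # step reference (value at t=0 is unused)
u = [0.0] * N
for _ in range(60):       # repeated startup trials
    y = run_trial(u)
    err = [ref[t] - y[t] for t in range(N + 1)]
    u = [u[t] + L_GAIN * err[t + 1] for t in range(N)]

final_error = max(abs(ref[t] - run_trial(u)[t]) for t in range(1, N + 1))
```

After enough trials the learned feed-forward input drives the tracking error toward zero, which is the sense in which the AILC signal lets the rotor spin about its geometric axis.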
Ma, Xia L; Wan, Zhengming; Moeller, Christopher C; Menzel, W Paul; Gumley, Liam E
2002-02-10
An extension to the two-step physical retrieval algorithm was developed. Combined clear-sky multitemporal and multispectral observations were used to retrieve the atmospheric temperature-humidity profile, land-surface temperature, and surface emissivities in the midwave (3-5 microns) and long-wave (8-14.5 microns) regions. The extended algorithm was tested with both simulated and real data from the Moderate-Resolution Imaging Spectroradiometer (MODIS) Airborne Simulator. A sensitivity study and error analysis demonstrate that retrieval performance is improved by the extended algorithm. The extended algorithm is relatively insensitive to the uncertainties simulated for the real observations. The extended algorithm was also applied to real MODIS daytime and nighttime observations and shown to be capable of retrieving medium-scale atmospheric temperature and water vapor, and of retrieving surface temperature and emissivity, with retrieval accuracy similar to that achieved by the Geostationary Operational Environmental Satellite (GOES) but at a spatial resolution higher than that of GOES. PMID:11908219
El-Sharkawi, M.A.; Venkata, S.S.; Chen, M.; Andexler, G.; Huang, T.
1992-07-28
A system and method for determining and providing reactive power compensation for an inductive load. A reactive power compensator (50,50') monitors the voltage and current flowing through each of three distribution lines (52a, 52b, 52c), which are supplying three-phase power to one or more inductive loads. Using signals indicative of the current on each of these lines when the voltage waveform on the line crosses zero, the reactive power compensator determines a reactive power compensator capacitance that must be connected to the lines to maintain a desired VAR level, power factor, or line voltage. Alternatively, an operator can manually select a specific capacitance for connection to each line, or the capacitance can be selected based on a time schedule. The reactive power compensator produces control signals, which are coupled through optical fibers (102/106) to a switch driver (110, 110') to select specific compensation capacitors (112) for connections to each line. The switch driver develops triggering signals that are supplied to a plurality of series-connected solid state switches (350), which control charge current in one direction with respect to ground for each compensation capacitor. During each cycle, current flows from ground to charge the capacitors as the voltage on the line begins to go negative from its positive peak value. The triggering signals are applied to gate the solid state switches into a conducting state when the potential on the lines and on the capacitors reaches a negative peak value, thereby minimizing both the potential difference across the switches and the charge current through them when they begin to conduct. Any harmonic distortion on the potential and current carried by the lines is filtered out from the current and potential signals used by the reactive power compensator so that it does not affect the determination of the required reactive compensation. 26 figs.
Temperature Effects and Compensation-Control Methods
Xia, Dunzhu; Chen, Shuling; Wang, Shourong; Li, Hongsheng
2009-01-01
In the analysis of the effects of temperature on the performance of microgyroscopes, it is found that the resonant frequency of the microgyroscope decreases linearly as the temperature increases, and the quality factor changes drastically at low temperatures. Moreover, the zero bias changes greatly with temperature variations. To reduce the temperature effects on the microgyroscope, temperature compensation-control methods are proposed. First, a BP (Back Propagation) neural network and polynomial fitting are utilized for building the temperature model of the microgyroscope. Considering the simplicity and real-time requirements, piecewise polynomial fitting is applied in the temperature compensation system. Then, an integral-separated PID (Proportion Integration Differentiation) control algorithm is adopted in the temperature control system, which can stabilize the temperature inside the microgyroscope in pursuit of its optimal performance. Experimental results reveal that the combination of microgyroscope temperature compensation and control methods is both realizable and effective in a miniaturized microgyroscope prototype. PMID:22408509
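The integral-separated PID idea named above is simple to state: the integral term is frozen while the error is large (preventing windup during warm-up transients) and accumulates only once the error enters a band around zero. A minimal sketch of that generic form, with illustrative gains and threshold:

```python
# Minimal integral-separated PID sketch: the integrator runs only when
# |error| is inside the separation band, so large transients do not
# wind it up. Gains and threshold are illustrative, not from the paper.

class IntegralSeparatedPID:
    def __init__(self, kp, ki, kd, sep_threshold):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.sep_threshold = sep_threshold
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        if abs(error) < self.sep_threshold:   # integral separation
            self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = IntegralSeparatedPID(kp=2.0, ki=0.5, kd=0.1, sep_threshold=1.0)
out_large = pid.update(5.0, dt=0.1)   # large error: integral stays frozen
out_small = pid.update(0.5, dt=0.1)   # small error: integral accumulates
```

For a temperature loop this keeps the heater from overshooting during the initial warm-up while still removing steady-state error near the setpoint.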
The American compensation phenomenon.
Bale, A
1990-01-01
In this article, the author defines the occupational safety and health domain, characterizes the distinct compensation phenomenon in the United States, and briefly reviews important developments in the last decade involving Karen Silkwood, intentional torts, and asbestos litigation. He examines the class conflict over the value and meaning of work-related injuries and illnesses involved in the practical activity of making claims and turning them into money through compensation inquiries. Juries, attributions of fault, and medicolegal discourse play key roles in the compensation phenomenon. This article demonstrates the extensive, probing inquiry through workers' bodies constituted by the American compensation phenomenon into the moral basis of elements of the system of production. PMID:2139638
NASA Technical Reports Server (NTRS)
Meinel, Aden B.; Meinel, Marjorie P.; Stacy, John E.
1989-01-01
Proposed reflecting telescope includes large, low-precision primary mirror stage and small, precise correcting mirror. Correcting mirror machined under computer control to compensate for error in primary mirror. Correcting mirror machined by diamond cutting tool. Computer analyzes interferometric measurements of primary mirror to determine shape of surface of correcting mirror needed to compensate for errors in wave front reflected from primary mirror and commands position and movement of cutting tool accordingly.
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Mcdaniel, W. L., Jr.
1973-01-01
The computer-aided design of a compensator for a control system is considered from a frequency-domain point of view. The design technique developed is based on describing the open-loop frequency response by n discrete frequency points which result in n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; then mathematical programming is used to improve all of those functions whose values fall below minimum standards. To do this, several definitions for measuring the performance of a system in the frequency domain are given, e.g., relative stability, relative attenuation, proper phasing, etc. Next, theorems which govern the number of compensator coefficients necessary to make improvements in a certain number of functions are proved. After this, a mathematical programming tool for aiding in the solution of the problem is developed. This tool is called the constraint improvement algorithm. Then, for applying the constraint improvement algorithm, generalized gradients for the constraints are derived. Finally, the necessary theory is incorporated in a computer program called CIP (Compensator Improvement Program). The practical usefulness of CIP is demonstrated by two large system examples.
CGI delay compensation. [Computer Generated Image
NASA Technical Reports Server (NTRS)
Mcfarland, R. E.
1986-01-01
Computer-generated graphics in real-time helicopter simulation produces objectionable scene-presentation time delays. In the flight simulation laboratory at Ames Research Center, it has been determined that these delays have an adverse influence on pilot performance during aggressive tasks such as nap-of-the-earth (NOE) maneuvers. Using contemporary equipment, computer-generated-image (CGI) time delays are an unavoidable consequence of the operations required for scene generation. However, provided that magnitude distortions at higher frequencies are tolerable, delay compensation is possible over a restricted frequency range. This range, assumed to have an upper limit of perhaps 10 or 15 rad/sec, conforms approximately to the bandwidth associated with helicopter handling-qualities research. A compensation algorithm is introduced here and evaluated in terms of tradeoffs in frequency responses. The algorithm has a discrete basis and accommodates both a large, constant transport delay interval and a periodic delay interval, as associated with asynchronous operations.
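One simple discrete form of such lead compensation, shown here only as an illustration of the tradeoff the abstract describes (not the paper's exact algorithm), extrapolates the delayed signal forward using its most recent first difference; the price is amplified magnitude at high frequencies, exactly the distortion the abstract says must be tolerable.

```python
# Sketch of transport-delay compensation by linear extrapolation: the
# displayed scene lags the vehicle state by d frames, so each delayed
# sample is pushed d frames ahead with its latest first difference.

def compensate(delayed, d):
    """Extrapolate a delayed sample stream d frames ahead."""
    out = [delayed[0]]
    for t in range(1, len(delayed)):
        out.append(delayed[t] + d * (delayed[t] - delayed[t - 1]))
    return out

# For a ramp input (constant rate), extrapolation recovers the
# undelayed signal exactly once the delay line has filled.
d = 3
true_signal = [0.1 * t for t in range(12)]
delayed = [true_signal[max(t - d, 0)] for t in range(12)]
recovered = compensate(delayed, d)
```

For signals with curvature the recovery is only approximate, and noise is amplified by the factor d on the difference term, which bounds the usable bandwidth.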
Workers' Compensation and Teacher Stress.
ERIC Educational Resources Information Center
Nisbet, Michael K.
1999-01-01
Examines the Workers' Compensation system and teacher stress to determine if a burned-out teacher should be eligible for Workers' Compensation benefits. Concludes that although most states do not allow Workers' Compensation benefits to burned-out teachers, compensation should be granted because the injuries are real and work-related. (Contains 48…
Kassianov, Evgueni I.; Flynn, Connor J.; Koontz, Annette S.; Sivaraman, Chitra; Barnard, James C.
2013-09-11
Well-known cloud-screening algorithms, which are designed to remove cloud-contaminated aerosol optical depths (AOD) from AOD measurements, have shown great performance at many middle-to-low latitude sites around the world. However, they may occasionally fail under challenging observational conditions, such as when the sun is low (near the horizon) or when optically thin clouds with small spatial inhomogeneity occur. Such conditions have been observed quite frequently at the high-latitude Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) sites. A slightly modified version of the standard cloud-screening algorithm is proposed here with a focus on the ARM-supported Multifilter Rotating Shadowband Radiometer (MFRSR) and Normal Incidence Multifilter Radiometer (NIMFR) data. The modified version uses approximately the same techniques as the standard algorithm, but it additionally examines the magnitude of the slant-path line of sight transmittance and eliminates points when the observed magnitude is below a specified threshold. Substantial improvement of the multi-year (1999-2012) aerosol product (AOD and its Angstrom exponent) is shown for the NSA sites when the modified version is applied. Moreover, this version reproduces the AOD product at the ARM Southern Great Plains (SGP) site, which was originally generated by the standard cloud-screening algorithms. The proposed minor modification is easy to implement and its application to existing and future cloud-screening algorithms can be particularly beneficial for challenging observational conditions.
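The proposed modification is a single extra filter: discard any record whose slant-path transmittance falls below a threshold before forming the AOD product. A hedged sketch (the threshold value and records below are invented for illustration):

```python
# Sketch of the added screening step: records with too little
# slant-path transmittance (low sun, optically thick path) are dropped
# before the AOD product is formed. Threshold and data are hypothetical.

T_MIN = 0.05   # minimum acceptable slant-path transmittance (assumed)

def screen_low_transmittance(samples, t_min=T_MIN):
    """Keep (time, aod, transmittance) records with adequate signal."""
    return [s for s in samples if s[2] >= t_min]

samples = [
    (0, 0.12, 0.60),   # good record
    (1, 0.95, 0.01),   # low sun / thin cloud: transmittance too small
    (2, 0.15, 0.40),   # good record
]
kept = screen_low_transmittance(samples)
```

The simplicity of the check is the point of the abstract's closing remark: it bolts onto existing screening pipelines without changing their other tests.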
NASA Astrophysics Data System (ADS)
Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.
2014-06-01
The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.
Compensating springback in the automotive practice using MASHAL
NASA Astrophysics Data System (ADS)
Ohnimus, S.; Petzoldt, M.; Rietman, B.; Weiher, J.
2005-08-01
New materials are used in the automotive industry to reduce weight and to improve crash performance. These materials feature a higher ratio of yield stress to elastic modulus, leading to increased springback after tool release. The resulting shape deviations and their efficient reduction are of major interest for the automotive industry nowadays. The usual strategies for springback reduction can diminish springback only to a certain extent. In order to reduce the remaining shape deviation a mathematical compensation algorithm is presented. The objective is to obtain the tool geometry such that the part springs back into the right shape after releasing the tools. In practice the process of compensation involves different tasks beginning with CAD construction of the part, planning the drawing method and tool construction, FE simulation, deep drawing at try-out stage and measurement of the manufactured part. Thus the compensation cannot be treated as an isolated task but as a process with various restrictions and requirements of today's automotive practice. For this reason a software prototype for compensation methods MASHAL — meaning program to maintain accuracy (MASsHALtigkeit) — was developed. The basic idea of compensation with MASHAL is the transfer and application of shape deviations between two different geometries on a third one. The developed algorithm allows for an effective processing of these data, an approximation of springback and shape deviations, and a smooth extrapolation onto the tool geometry. The following topics are addressed: positioning of parts, global compensation and restriction of compensation to local areas, damping of the compensation function in the blank-holder domain, simulation and validation of springback, and compensation of CAD data. The complete compensation procedure is illustrated on an industrial part.
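The underlying fixed-point idea can be sketched independently of MASHAL's specifics: over-bend the tool geometry against the measured deviation and iterate until the sprung-back part matches the target. In this hedged toy sketch the forming-plus-springback step is a simple linear model standing in for an FE simulation; the real algorithm additionally handles positioning, local restriction, and damping.

```python
# Toy iterative springback compensation: the tool shape is corrected
# against the deviation of the sprung-back part from the target until
# the formed part matches the target. Springback here is a linear toy
# model (formed = (1 + SPRING) * tool), a stand-in for FE simulation.

SPRING = 0.1   # toy elastic-recovery factor (assumed)
ALPHA = 0.8    # compensation step size (assumed)

def form_part(tool):
    return [(1.0 + SPRING) * z for z in tool]

def compensate_tool(target, iterations=20):
    tool = list(target)               # start from the target CAD shape
    for _ in range(iterations):
        formed = form_part(tool)      # simulate forming + springback
        tool = [t - ALPHA * (f - g) for t, f, g in zip(tool, formed, target)]
    return tool

target = [0.0, 2.0, 5.0, 3.0, 0.0]    # node heights of the desired part
tool = compensate_tool(target)
deviation = max(abs(f - g) for f, g in zip(form_part(tool), target))
```

With a damped step (ALPHA < 1) the iteration converges even when each FE evaluation is expensive and only a few cycles are affordable in practice.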
Backlash compensator mechanism
Chrislock, Jerry L.
1979-01-01
Mechanism which compensates for backlash error in a lead screw position indicator by decoupling the indicator shaft from the lead screw when reversing rotation. The position indicator then displays correct information regardless of the direction of rotation of the lead screw.
Teacher Compensation and Organization.
ERIC Educational Resources Information Center
Kelley, Carolyn
1997-01-01
Examines changes in the conceptualization of schooling over time from an organizational perspective. Explores how compensation systems might be better designed to match alternative organizational designs, considering scientific management, effective schools, content-driven schooling, and high standards/high involvement schools as organizational…
ERIC Educational Resources Information Center
Richwine, Jason; Biggs, Andrew; Mishel, Lawrence; Roy, Joydeep
2012-01-01
Over the past few years, as cash-strapped states and school districts have faced tough budget decisions, spending on teacher compensation has come under the microscope. The underlying question is whether, when you take everything into account, today's teachers are fairly paid, underpaid, or overpaid. In this forum, two pairs of respected…
Reactive Power Compensating System.
Williams, Timothy J.; El-Sharkawi, Mohamed A.; Venkata, Subrahmanyam S.
1985-01-04
The circuit was designed for the specific application of wind-driven induction generators. It has great potential for application in any situation where a varying reactive power load is present, such as with induction motors or generators, or for transmission network compensation.
An innovative approach to compensator design
NASA Technical Reports Server (NTRS)
Mitchell, J. R.
1972-01-01
The primary goal is to present a computer-aided compensator design technique for control systems from a frequency-domain point of view. The thesis for developing this technique is to describe the open-loop frequency response by n discrete frequency points which result in n functions of the compensator coefficients. Several of these functions are chosen so that the system specifications are properly portrayed; then mathematical programming is used to improve all of those functions whose values fall below minimum standards. To do this, several definitions for measuring the performance of a system in the frequency domain are given. Next, theorems which govern the number of compensator coefficients necessary to make improvements in a certain number of functions are proved. After this, a mathematical programming tool for aiding in the solution of the problem is developed. Then, for applying the constraint improvement algorithm, generalized gradients for the constraints are derived. Finally, the necessary theory is incorporated in a computer program called CIP (Compensator Improvement Program).
ICA-based compensation for IQ imbalance in OFDM optical fiber communication
NASA Astrophysics Data System (ADS)
Jiang, Shan; Hu, Guijun; Li, Zhaoxi; Mu, Liping; Zhang, Jingdong
2014-01-01
A method based on independent component analysis (ICA) is proposed to compensate the in-phase and quadrature-phase (IQ) imbalance in orthogonal frequency division multiplexing (OFDM) optical fiber communication systems. The mathematical model of the IQ-imbalanced system is analyzed. Then, the ICA algorithm is applied in the system to combat the mirror interference introduced by IQ imbalance. This algorithm can realize joint compensation of both transmitter and receiver IQ imbalance over an optical channel that contains noise, attenuation, and chromatic dispersion. Simulations show that the performance degradation caused by IQ imbalance can be effectively compensated by the ICA algorithm.
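The mirror interference can be illustrated with the standard widely-linear IQ-imbalance model, r = mu*s + nu*conj(s), where a nonzero image gain nu mixes in the conjugate (mirror) symbol. The paper estimates the separation blindly via ICA; in this hedged sketch the coefficients are assumed known and the model is inverted directly, just to make the structure of the impairment concrete.

```python
# Widely-linear IQ-imbalance model and its exact inverse when the
# direct gain mu and image gain nu are known. (The paper recovers the
# separation blindly with ICA; known coefficients are an assumption
# made here for illustration only.)

def apply_iq_imbalance(s, mu, nu):
    return mu * s + nu * s.conjugate()

def compensate_iq(r, mu, nu):
    """Invert r = mu*s + nu*conj(s); valid while |mu| != |nu|."""
    return (mu.conjugate() * r - nu * r.conjugate()) / (abs(mu) ** 2 - abs(nu) ** 2)

mu = 0.9 + 0.1j    # direct gain (illustrative)
nu = 0.05 - 0.02j  # image gain (illustrative)
s = 1.0 + 2.0j     # transmitted symbol
r = apply_iq_imbalance(s, mu, nu)
s_hat = compensate_iq(r, mu, nu)
```

Because s and conj(s) are statistically independent components of r, blind separation methods such as ICA can recover this same inverse without knowing mu and nu, which is the paper's point.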
Deferred Compensation Becomes More Common
ERIC Educational Resources Information Center
June, Audrey Williams
2006-01-01
A key part of the compensation package for some college and university presidents is money that they do not receive in their paychecks. Formally known as deferred compensation, such payments can take many forms, including supplemental retirement pay, severance pay, or even bonuses. With large institutions leading the way, deferred compensation has…
The Federal Employees' Compensation Act.
ERIC Educational Resources Information Center
Nordlund, Willis J.
1991-01-01
The 1916 Federal Employees' Compensation Act is still the focal point around which the federal workers compensation program works today. The program has gone through many changes on its way to becoming a modern means of compensating workers for job-related injury, disease, and death. (Author)
Ground difference compensating system
Johnson, Kris W.; Akasam, Sivaprasad
2005-10-25
A method of ground level compensation includes measuring a voltage of at least one signal with respect to a primary ground potential and measuring, with respect to the primary ground potential, a voltage level associated with a secondary ground potential. A difference between the voltage level associated with the secondary ground potential and an expected value is calculated. The measured voltage of the at least one signal is adjusted by an amount corresponding to the calculated difference.
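The compensation rule described in the abstract reduces to subtracting the measured drift of the secondary ground from the signal. A minimal sketch with illustrative voltages:

```python
# Sketch of ground-difference compensation: the signal voltage,
# measured against the primary ground, is corrected by however far the
# secondary ground has drifted from its expected level. Values are
# illustrative.

def compensate_signal(signal_v, secondary_gnd_v, expected_gnd_v=0.0):
    drift = secondary_gnd_v - expected_gnd_v
    return signal_v - drift

# A sensor referenced to a secondary ground sitting 0.2 V above the
# primary ground reads 3.5 V; the compensated reading is 3.3 V.
adjusted = compensate_signal(3.5, secondary_gnd_v=0.2)
```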
Compensation of significant parametric uncertainties using sliding mode online learning
NASA Astrophysics Data System (ADS)
Schnetter, Philipp; Kruger, Thomas
An augmented nonlinear inverse dynamics (NID) flight control strategy using sliding mode online learning for a small unmanned aircraft system (UAS) is presented. Because parameter identification for this class of aircraft is often not valid throughout the complete flight envelope, aerodynamic parameters used for model-based control strategies may show significant deviations. For the concept of feedback linearization this leads to inversion errors that, in combination with the distinctive susceptibility of small UAS to atmospheric turbulence, pose a demanding control task for these systems. In this work an adaptive flight control strategy using feedforward neural networks for counteracting such nonlinear effects is augmented with the concept of sliding mode control (SMC). SMC learning is derived from variable structure theory and treats a neural network and its training as a control problem. It is shown that by calculating the learning rates dynamically, stability can be guaranteed, which increases robustness against external disturbances and system failures. With the resulting higher speed of convergence, a wide range of simultaneously occurring disturbances can be compensated. The SMC-based flight controller is tested and compared to the standard gradient descent (GD) backpropagation algorithm under the influence of significant model uncertainties and system failures.
Transducer modeling and compensation in high-pressure dynamic calibration
NASA Astrophysics Data System (ADS)
Gong, Chikun; Li, Yongxin
2005-12-01
When an RBF neural network is used to model and compensate the transducer, the number of clusters must be specified in advance when the Kohonen algorithm is used, and the RLS algorithm used to adjust the output weights is complicated and computationally heavy. To overcome these weaknesses, a new approach is proposed. The cluster centers are determined by subtractive clustering, and the LMS algorithm is used to adjust the output weights. Noise elimination with a correlative threshold plus wavelet packet transformation is used to improve the SNR. The study results show that the network structure is simple and convergence is fast, that modeling and compensation with the new algorithm effectively correct the nonlinear dynamic characteristics of the transducer, and that noise elimination with a correlative threshold plus wavelet packet transformation is superior to conventional noise elimination methods.
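The computational argument for LMS over RLS is that each weight update is a single scaled correction along the current feature vector instead of a full covariance recursion. A hedged sketch of the proposed structure: Gaussian RBF features with fixed centers (stand-ins for the subtractive-clustering result) and output weights adapted by normalized LMS. Centers, widths, the synthetic target, and the step size are all invented for the demo.

```python
# Sketch: fixed-center Gaussian RBF network whose output weights are
# adapted by normalized LMS. Centers stand in for a subtractive-
# clustering result; all numbers are illustrative.
import math

CENTERS = [-1.0, 0.0, 1.0]
WIDTH = 1.0

def features(x):
    return [math.exp(-((x - c) / WIDTH) ** 2) for c in CENTERS]

def lms_train(xs, ys, eta=0.5, epochs=300):
    w = [0.0] * len(CENTERS)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = features(x)
            err = y - sum(wi * p for wi, p in zip(w, phi))
            norm = sum(p * p for p in phi) + 1e-9
            # one cheap rank-1 correction per sample (no RLS recursion)
            w = [wi + eta * err * p / norm for wi, p in zip(w, phi)]
    return w

# Synthetic, exactly representable target built from known weights.
true_w = [1.0, -0.5, 0.3]
xs = [-2.0 + 4.0 * i / 49 for i in range(50)]
ys = [sum(wi * p for wi, p in zip(true_w, features(x))) for x in xs]

w = lms_train(xs, ys)
mse = sum((y - sum(wi * p for wi, p in zip(w, features(x)))) ** 2
          for x, y in zip(xs, ys)) / len(xs)
```

Each update costs O(number of centers), versus O(centers squared) for an RLS covariance update, which is the complexity saving the abstract claims.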
Control optimization, stabilization and computer algorithms for aircraft applications
NASA Technical Reports Server (NTRS)
1975-01-01
Research related to reliable aircraft design is summarized. Topics discussed include systems reliability optimization, failure detection algorithms, analysis of nonlinear filters, design of compensators incorporating time delays, digital compensator design, estimation for systems with echoes, low-order compensator design, descent-phase controller for 4-D navigation, infinite dimensional mathematical programming problems and optimal control problems with constraints, robust compensator design, numerical methods for the Lyapunov equations, and perturbation methods in linear filtering and control.
Computer aided modelling/compensator design for a flexible space antenna
NASA Technical Reports Server (NTRS)
Gibson, J. S.; Mingori, D. L.
1985-01-01
Controller design algorithms are developed to produce simultaneously both a model of the plant and a compensator. The size of the model and properties of the compensator are driven by the performance requirements, the disturbance environment, and the location, number and type of sensors and actuators. The procedure is based on linear optimal control theory for distributed systems, and balanced realization theory is used to guide the development of the model and reduce the order of the compensator.
Software compensated multichannel pressure sensing system
NASA Technical Reports Server (NTRS)
Chapman, John J.
1990-01-01
A PC-based software system is described which can be used for data acquisition and thermal-error correction of a multichannel pressure-sensor system developed for use in a cryogenic environment. The software incorporates pressure-sensitivity and sensor-offset compensation files into thermal error-correction algorithms, and the sensors are calibrated by simulating the operating conditions. The system is found to be effective in the collecting, storing, and processing of multichannel pressure-sensor data to correct thermally induced offset and sensitivity errors.
Energy compensated solid state gamma dosimeter
Sinclair, F.; Clapp, A.; Entine, G.; Kronenberg, S.
1988-02-01
Solid state semiconductor detectors using pulse mode detection are attractive candidates for real time dosimetry systems. Their high atomic number relative to that of tissue gives a nonlinear response as a function of photon energy over the range from 30 keV to 10 MeV. An analytical model of a silicon PIN diode has been developed, including the photoelectric and Compton interactions as well as the ejection of secondary electrons from the sensitive volume. The authors tested a nonlinear pulse-height compensation algorithm using calibrated gamma and x-ray fluxes and found that this approach improves the dose accuracy.
Path Following with Slip Compensation for a Mars Rover
NASA Technical Reports Server (NTRS)
Helmick, Daniel; Cheng, Yang; Clouse, Daniel; Matthies, Larry; Roumeliotis, Stergios
2005-01-01
A software system for autonomous operation of a Mars rover is composed of several key algorithms that enable the rover to accurately follow a designated path, compensate for slippage of its wheels on terrain, and reach intended goals. The techniques implemented by the algorithms are visual odometry, full vehicle kinematics, a Kalman filter, and path following with slip compensation. The visual-odometry algorithm tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs, by use of a maximum-likelihood motion-estimation algorithm. The full-vehicle kinematics algorithm estimates motion, with a no-slip assumption, from measured wheel rates, steering angles, and angles of rockers and bogies in the rover suspension system. The Kalman filter merges data from an inertial measurement unit (IMU) and the visual-odometry algorithm. The merged estimate is then compared to the kinematic estimate to determine whether and how much slippage has occurred. The kinematic estimate is used to complement the Kalman-filter estimate if no statistically significant slippage has occurred. If slippage has occurred, then a slip vector is calculated by subtracting the current Kalman filter estimate from the kinematic estimate. This slip vector is then used, in conjunction with the inverse kinematics, to determine the wheel velocities and steering angles needed to compensate for slip and follow the desired path.
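The slip-detection step described above reduces to a significance test on the difference between the two pose estimates. A minimal Python sketch, assuming 3-DOF pose vectors (x, y, heading), a known Kalman covariance, and an illustrative chi-square threshold (none of these values are taken from the paper):

```python
import numpy as np

def slip_vector(kalman_est, kinematic_est, kalman_cov, chi2_threshold=7.81):
    """Return the slip vector, or None if slippage is not statistically
    significant.  Estimates are 3-vectors (x, y, heading); kalman_cov is
    the 3x3 covariance of the fused (IMU + visual odometry) estimate.
    chi2_threshold ~ chi-square 95% cutoff for 3 degrees of freedom."""
    diff = kinematic_est - kalman_est
    # Mahalanobis distance tests whether the kinematic (no-slip) estimate
    # is consistent with the fused Kalman estimate.
    d2 = diff @ np.linalg.solve(kalman_cov, diff)
    if d2 < chi2_threshold:
        return None                        # no significant slip detected
    return kinematic_est - kalman_est      # slip = kinematics minus fused estimate

# Example: 10 cm of unmodeled sideways slip with a tight covariance.
kal = np.array([1.00, 0.10, 0.05])
kin = np.array([1.00, 0.00, 0.05])
cov = np.diag([1e-4, 1e-4, 1e-4])
v = slip_vector(kal, kin, cov)
```

The returned slip vector would then feed the inverse kinematics to adjust wheel velocities and steering angles, as the abstract describes.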
Analysis and modeling of thermal-blooming compensation
NASA Astrophysics Data System (ADS)
Schonfeld, Jonathan F.
1990-05-01
The present evaluation of recent progress in the analysis and computer modeling of adaptive optics hardware applicable to compensation for thermal blooming gives attention to an analytical theory of phase-compensation instability (PCI) that incorporates the actuator geometry of real deformable mirrors, as well as to novel algorithms for computer simulation of adaptive optics hardware. An analytical formalism is presented which facilitates the quantitative analysis of the effects of the adaptive-optics control system on PCI, and leads to both a universality theorem for PCI growth rates and the realization that wind exerts a greater influence on PCI growth rates than previously suspected. The analysis and algorithms are illustrated by the results of the time-dependent adaptively-compensated laser propagation code for thermal blooming, MOLLY, which has been optimized for the Cray-2 supercomputer.
Block-classified motion compensation scheme for digital video
Zafar, S.; Zhang, Ya-Qin; Jabbari, B.
1996-03-01
A novel scheme for block-based motion compensation is introduced in which a block is classified according to an energy measure that is directly related to the motion activity it represents. This classification allows more flexibility in controlling the bit rate and the signal-to-noise ratio, and results in a reduction in motion-search complexity. The method does not depend on the particular motion-search algorithm implemented and can thus be used with any method, assuming the underlying matching criterion is the minimum absolute difference. The method is shown to be superior to a simple motion-compensation algorithm in which all blocks are motion compensated regardless of the energy of the displaced frame difference.
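The classification idea can be illustrated with a toy sketch: compute a per-block energy (here, the sum of absolute differences against the co-located reference block at zero motion) and run the motion search only where the energy is high. The block size, threshold, and SAD-based energy measure are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def classify_blocks(cur, ref, block=8, energy_thresh=100.0):
    """Classify each block by the energy of its zero-motion displaced
    frame difference; only high-energy blocks get a motion search.
    Returns a boolean mask (True = run motion search for that block)."""
    h, w = cur.shape
    rows, cols = h // block, w // block
    mask = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            y, x = r * block, c * block
            # Sum of absolute differences against the co-located block.
            sad = np.abs(cur[y:y+block, x:x+block].astype(int)
                         - ref[y:y+block, x:x+block].astype(int)).sum()
            mask[r, c] = sad > energy_thresh
    return mask

ref = np.zeros((16, 16), dtype=np.uint8)
cur = ref.copy()
cur[0:8, 0:8] = 50          # only the top-left block changed
mask = classify_blocks(cur, ref)
```

Blocks that fail the threshold are coded without a motion search, which is where the complexity reduction comes from.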
Using Weighting Adjustments to Compensate for Survey Nonresponse
ERIC Educational Resources Information Center
Pike, Gary R.
2008-01-01
Weighting adjustments are used in some studies to compensate for biased estimators produced by survey nonresponse. Using data from the 2004 National Survey of Student Engagement (NSSE) and the NSSE poststratification weighting algorithm, this study found that weighting adjustments were needed for some, but not all institutions. Unfortunately, no…
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2014-01-01
The AIRS Science Team Version-6 AIRS/AMSU retrieval algorithm is now operational at the Goddard DISC. AIRS Version-6 level-2 products are generated near real-time at the Goddard DISC and all level-2 and level-3 products are available starting from September 2002. This paper describes some of the significant improvements in retrieval methodology contained in the Version-6 retrieval algorithm compared to that previously used in Version-5. In particular, the AIRS Science Team made major improvements with regard to the algorithms used to 1) derive surface skin temperature and surface spectral emissivity; 2) generate the initial state used to start the cloud clearing and retrieval procedures; and 3) derive error estimates and use them for Quality Control. Significant improvements have also been made in the generation of cloud parameters. In addition to the basic AIRS/AMSU mode, Version-6 also operates in an AIRS Only (AO) mode which produces results almost as good as those of the full AIRS/AMSU mode. This paper also demonstrates the improvements of some AIRS Version-6 and Version-6 AO products compared to those obtained using Version-5.
A Novel Speed Compensation Method for ISAR Imaging with Low SNR.
Liu, Yongxiang; Zhang, Shuanghui; Zhu, Dekang; Li, Xiang
2015-01-01
In this paper, two novel speed compensation algorithms for ISAR imaging under low signal-to-noise ratio (SNR) conditions are proposed, based on the cubic phase function (CPF) and the integrated cubic phase function (ICPF), respectively. These two algorithms estimate the speed of the target directly from the wideband radar echo, which removes the need for dedicated speed measurement in the radar system. With the use of non-coherent accumulation, the ICPF-based speed compensation algorithm is robust to noise and can meet the requirements of speed compensation for ISAR imaging under low SNR conditions. Moreover, a fast search strategy, consisting of a coarse search followed by a precise search, is introduced to reduce the computational burden of speed compensation based on CPF and ICPF. Experimental results based on radar data validate the effectiveness of the proposed algorithms. PMID:26225980
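The coarse-then-precise search strategy can be sketched generically. The cost function below is a placeholder standing in for the (I)CPF-based speed metric, which peaks at the true target speed; the search bounds and grid sizes are illustrative:

```python
import numpy as np

def two_stage_search(cost, lo, hi, coarse_steps=50, refine_steps=50):
    """Coarse grid search over [lo, hi], then a precise search in the
    neighbourhood of the coarse maximum.  Evaluates the cost function
    coarse_steps + refine_steps times instead of densely over [lo, hi]."""
    coarse = np.linspace(lo, hi, coarse_steps)
    best = coarse[np.argmax([cost(v) for v in coarse])]
    step = (hi - lo) / (coarse_steps - 1)
    # Refine within one coarse step on either side of the coarse peak.
    fine = np.linspace(best - step, best + step, refine_steps)
    return fine[np.argmax([cost(v) for v in fine])]

true_speed = 123.4  # m/s, a made-up target speed
v_hat = two_stage_search(lambda v: -(v - true_speed) ** 2, 0.0, 300.0)
```

For a unimodal metric this recovers the peak to a fraction of the coarse step at a small fraction of the cost of a single dense search.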
NASA Astrophysics Data System (ADS)
Arockia Bazil Raj, A.; Padmavathi, S.
2016-07-01
Atmospheric parameters strongly affect the performance of a Free Space Optical Communication (FSOC) system when the optical wave propagates through an inhomogeneous turbulent medium. Developing a model that accurately predicts optical attenuation from meteorological parameters is therefore important for understanding the behaviour of the FSOC channel during different seasons. A dedicated free-space optical link experimental set-up was developed for a range of 0.5 km at an altitude of 15.25 m. The diurnal profile of received power and the corresponding meteorological parameters are continuously measured using the developed optoelectronic assembly and a weather station, respectively, and stored on a data-logging computer. Measured meteorological parameters (as input factors) and optical attenuation (as the response factor), of size [177147 × 4], are used for linear regression analysis to design a mathematical model suited to predicting atmospheric optical attenuation at our test field. A model that exhibits an R2 value of 98.76% and an average percentage deviation of 1.59% is adopted for practical implementation. The prediction accuracy of the proposed model is investigated, along with comparative results from several existing models, in terms of Root Mean Square Error (RMSE) during different local seasons over a one-year period. An average RMSE of 0.043 dB/km is obtained over the full dynamic range of meteorological parameter variations.
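A minimal version of this kind of regression analysis, on synthetic stand-in data rather than the paper's measurements (the factor names, coefficients, and noise level below are all made up for illustration), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for logged factors: temperature (deg C),
# relative humidity (%), wind speed (m/s); response is attenuation (dB/km).
X = rng.uniform([10, 30, 0], [35, 95, 10], size=(500, 3))
true_coef = np.array([0.02, 0.015, 0.05])
y = 0.5 + X @ true_coef + rng.normal(0, 0.02, 500)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ beta

# The two figures of merit the abstract quotes: RMSE and R-squared.
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

With real data one would additionally validate on held-out seasons, as the paper does with its one-year RMSE comparison.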
Self-compensating tensiometer and method
Hubbell, Joel M.; Sisson, James B.
2003-01-01
A pressure self-compensating tensiometer and method to in situ determine below grade soil moisture potential of earthen soil independent of changes in the volume of water contained within the tensiometer chamber, comprising a body having first and second ends, a porous material defining the first body end, a liquid within the body, a transducer housing submerged in the liquid such that a transducer sensor within the housing is kept below the working fluid level in the tensiometer and in fluid contact with the liquid and the ambient atmosphere.
Laser Gyro Temperature Compensation Using Modified RBFNN
Ding, Jicheng; Zhang, Jian; Huang, Weiquan; Chen, Shuai
2014-01-01
To overcome the effect of temperature on laser gyro zero bias and to stabilize the laser gyro output, this study proposes a modified radial basis function neural network (RBFNN) based on a Kohonen network and an orthogonal least squares (OLS) algorithm. The modified method, which combines the pattern-classification capability of the Kohonen network with the optimal subset-selection capability of OLS, avoids random selection of the RBFNN centers and improves the compensation accuracy of the RBFNN. It can quickly and accurately identify the effect of temperature on laser gyro zero bias. A number of comparative identification and compensation tests over a variety of temperature-changing situations are completed using the multiple linear regression (MLR), RBFNN and modified RBFNN methods. The test results, based on several sets of gyro output under constant and changing temperature conditions, demonstrate that the proposed method overcomes the effect of randomly selected RBFNN centers. The running time of the method is about 60 s shorter than that of the traditional RBFNN under the same test conditions, which suggests that the computational load is reduced. Meanwhile, the compensated gyro output accuracy using the modified method is about 7.0 × 10−4 °/h; by comparison, the traditional RBFNN achieves about 9.0 × 10−4 °/h and MLR about 1.4 × 10−3 °/h. PMID:25302814
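The core of an RBFNN bias compensator can be sketched with Gaussian basis functions and least-squares weights. Here a fixed center grid stands in for the paper's Kohonen/OLS center selection, and the gyro data are synthetic:

```python
import numpy as np

def rbf_design(t, centers, width):
    """Gaussian RBF design matrix for temperature samples t."""
    return np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Synthetic gyro zero-bias drift vs temperature (deg/h vs deg C).
rng = np.random.default_rng(1)
temp = np.linspace(-20, 60, 400)
bias = 0.02 * np.sin(temp / 15.0) + 0.001 * temp + rng.normal(0, 5e-4, temp.size)

# Fixed center grid is an illustrative stand-in for Kohonen/OLS selection.
centers = np.linspace(-20, 60, 12)
Phi = rbf_design(temp, centers, width=8.0)
w, *_ = np.linalg.lstsq(Phi, bias, rcond=None)

# Compensated output: subtract the network's predicted bias.
residual = bias - Phi @ w
```

The point of the paper's center-selection machinery is precisely that a naive (random or fixed) center choice like this one is what limits accuracy in practice.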
50 CFR 296.4 - Claims eligible for compensation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Claims eligible for compensation. 296.4 Section 296.4 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE CONTINENTAL SHELF FISHERMEN'S CONTINGENCY FUND § 296.4 Claims eligible...
50 CFR 296.4 - Claims eligible for compensation.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 50 Wildlife and Fisheries 11 2013-10-01 2013-10-01 false Claims eligible for compensation. 296.4 Section 296.4 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE CONTINENTAL SHELF FISHERMEN'S CONTINGENCY FUND § 296.4 Claims eligible...
38 CFR 3.5 - Dependency and indemnity compensation.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., child or parent based on the death of a commissioned officer of the Public Health Service, the Coast and Geodetic Survey, the Environmental Science Services Administration, or the National Oceanic and Atmospheric... compensation is payable upon election. (38 U.S.C. 1310, 1316, 1317, Public Law 92-197, 85 Stat. 660)...
50 CFR 296.4 - Claims eligible for compensation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 50 Wildlife and Fisheries 7 2010-10-01 2010-10-01 false Claims eligible for compensation. 296.4 Section 296.4 Wildlife and Fisheries NATIONAL MARINE FISHERIES SERVICE, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE CONTINENTAL SHELF FISHERMEN'S CONTINGENCY FUND § 296.4 Claims eligible...
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A. (Principal Investigator); Barkstrom, Bruce R. (Principal Investigator); Baum, Bryan A.; Charlock, Thomas P.; Green, Richard N.; Lee, Robert B., III; Minnis, Patrick; Smith, G. Louis; Coakley, J. A.; Randall, David R.
1995-01-01
The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and the Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 4 details the advanced CERES techniques for computing surface and atmospheric radiative fluxes (using the coincident CERES cloud property and top-of-the-atmosphere (TOA) flux products) and for averaging the cloud properties and TOA, atmospheric, and surface radiative fluxes over various temporal and spatial scales. CERES attempts to match the observed TOA fluxes with radiative transfer calculations that use as input the CERES cloud products and NOAA National Meteorological Center analyses of temperature and humidity. Slight adjustments in the cloud products are made to obtain agreement of the calculated and observed TOA fluxes. The computed products include shortwave and longwave fluxes from the surface to the TOA. The CERES instantaneous products are averaged on a 1.25-deg latitude-longitude grid, then interpolated to produce global, synoptic maps to TOA fluxes and cloud properties by using 3-hourly, normalized radiances from geostationary meteorological satellites. Surface and atmospheric fluxes are computed by using these interpolated quantities. Clear-sky and total fluxes and cloud properties are then averaged over various scales.
Extrathermodynamics: Varieties of Compensation Effect.
Khakhel', Oleg A; Romashko, Tamila P
2016-03-31
There are several types of ΔH compensation. Along with the well-known phenomenon of ΔH - ΔS compensation, two more types of ΔH - (ΔS + RΔ ln Ω) compensation are observed in some series of systems. The nature of these phenomena is connected with the behavior of the phase volume of the systems, Ω. The role of other thermodynamic parameters that characterize a series in the manifestation of one or another type of ΔH compensation is shown in the light of molecular statistical mechanics. PMID:26949977
Compensations during Unsteady Locomotion.
Qiao, Mu; Jindrich, Devin L
2014-12-01
Locomotion in a complex environment is often not steady, but the mechanisms used by animals to power and control unsteady locomotion (stability and maneuverability) are not well understood. We use behavioral, morphological, and impulsive perturbations to determine the compensations used during unsteady locomotion. At the level both of the whole-body and of joints, quasi-stiffness models are useful for describing adjustments to the functioning of legs and joints during maneuvers. However, alterations to the mechanics of legs and joints often are distinct for different phases of the step cycle or for specific joints. For example, negotiating steps involves independent changes of leg stiffness during compression and thrust phases of stance. Unsteady locomotion also involves parameters that are not part of the simplest reduced-parameter models of locomotion (e.g., the spring-loaded inverted pendulum) such as moments of the hip joint. Extensive coupling among translational and rotational parameters must be taken into account to stabilize locomotion or maneuver. For example, maneuvers with morphological perturbations (increased rotational inertial turns) involve changes to several aspects of movement, including the initial conditions of rotation and ground-reaction forces. Coupled changes to several parameters may be employed to control maneuvers on a trial-by-trial basis. Compensating for increased rotational inertia of the body during turns is facilitated by the opposing effects of several mechanical and behavioral parameters. However, the specific rules used by animals to control translation and rotation of the body to maintain stability or maneuver have not been fully characterized. We initiated direct-perturbation experiments to investigate the strategies used by humans to maintain stability following center-of-mass (COM) perturbations. When walking, humans showed more resistance to medio-lateral perturbations (lower COM displacement). However, when running, humans
Fixman compensating potential for general branched molecules
Jain, Abhinandan; Kandel, Saugat; Wagner, Jeffrey; Larsen, Adrien; Vaidehi, Nagarajan
2013-01-01
The technique of constraining high frequency modes of molecular motion is an effective way to increase simulation time scale and improve conformational sampling in molecular dynamics simulations. However, it has been shown that constraints on higher frequency modes such as bond lengths and bond angles stiffen the molecular model, thereby introducing systematic biases in the statistical behavior of the simulations. Fixman proposed a compensating potential to remove such biases in the thermodynamic and kinetic properties calculated from dynamics simulations. Previous implementations of the Fixman potential have been limited to only short serial chain systems. In this paper, we present a spatial operator algebra based algorithm to calculate the Fixman potential and its gradient within constrained dynamics simulations for branched topology molecules of any size. Our numerical studies on molecules of increasing complexity validate our algorithm by demonstrating recovery of the dihedral angle probability distribution function for systems that range in complexity from serial chains to protein molecules. We observe that the Fixman compensating potential recovers the free energy surface of a serial chain polymer, thus annulling the biases caused by constraining the bond lengths and bond angles. The inclusion of Fixman potential entails only a modest increase in the computational cost in these simulations. We believe that this work represents the first instance where the Fixman potential has been used for general branched systems, and establishes the viability for its use in constrained dynamics simulations of proteins and other macromolecules. PMID:24387353
Topography-Dependent Motion Compensation: Application to UAVSAR Data
NASA Technical Reports Server (NTRS)
Jones, Cathleen E.; Hensley, Scott; Michel, Thierry
2009-01-01
The UAVSAR L-band synthetic aperture radar system has been designed for repeat track interferometry in support of Earth science applications that require high-precision measurements of small surface deformations over timescales from hours to years. Conventional motion compensation algorithms, which are based upon assumptions of a narrow beam and flat terrain, yield unacceptably large errors in areas with even moderate topographic relief, i.e., in most areas of interest. This often limits the ability to achieve sub-centimeter surface change detection over significant portions of an acquired scene. To reduce this source of error in the interferometric phase, we have implemented an advanced motion compensation algorithm that corrects for the scene topography and radar beam width. Here we discuss the algorithm used, its implementation in the UAVSAR data processor, and the improvement in interferometric phase and correlation achieved in areas with significant topographic relief.
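The essence of topography-dependent motion compensation is that the range correction is computed to the true 3-D terrain point (e.g., from a DEM) rather than to an assumed flat-terrain point. A hedged sketch with illustrative geometry; the wavelength is a nominal L-band value, and the coordinates are made up:

```python
import numpy as np

WAVELENGTH = 0.2384  # nominal L-band wavelength in metres

def mocomp_phase(antenna_pos, ref_pos, target):
    """Motion-compensation phase: two-way phase of the range difference
    between the actual and reference antenna positions to a given target
    point.  Supplying the true 3-D terrain point instead of a flat-terrain
    point is the topography-dependent part of the correction."""
    dr = np.linalg.norm(antenna_pos - target) - np.linalg.norm(ref_pos - target)
    return 4.0 * np.pi * dr / WAVELENGTH

ref = np.array([0.0, 0.0, 12500.0])        # reference track position (m)
act = ref + np.array([0.0, 1.5, -0.8])     # actual (deviated) antenna position
flat_target = np.array([0.0, 10000.0, 0.0])
hill_target = np.array([0.0, 10000.0, 800.0])  # same ground range, 800 m relief

phi_flat = mocomp_phase(act, ref, flat_target)
phi_topo = mocomp_phase(act, ref, hill_target)
```

Even this modest 800 m of relief changes the correction by an amount far above the sub-centimeter phase budget, which is why the flat-terrain assumption fails in areas of interest.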
Summing pressure compensation control
Myers, H.A.
1988-04-26
This patent describes a summing pressure compensator control for hydraulic loads with at least one of the hydraulic loads being a variable displacement motor having servo means for controlling the displacement thereof, first hydraulic means responsive to the supply of fluid to the variable displacement motor to provide a first pressure signal, second hydraulic means responsive to the supply of fluid to a second hydraulic load to provide a second pressure signal, summing means for receiving the first and second pressure signals and providing a control signal proportional to the sum of the first and second pressure signals, the control signal being applied to the servo means to increase the displacement of the variable displacement motor.
Temperature compensated photovoltaic array
Mosher, Dan Michael
1997-11-18
A temperature compensated photovoltaic module (20) comprised of a series of solar cells (22) having a thermally activated switch (24) connected in parallel with several of the cells (22). The photovoltaic module (20) is adapted to charge conventional batteries having a temperature coefficient (TC) differing from the temperature coefficient (TC) of the module (20). The calibration temperatures of the switches (24) are chosen whereby the colder the ambient temperature for the module (20), the more switches that are on and form a closed circuit to short the associated solar cells (22). By shorting some of the solar cells (22) as the ambient temperature decreases, the battery being charged by the module (20) is not excessively overcharged at lower temperatures. PV module (20) is an integrated solution that is reliable and inexpensive.
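The switch logic can be sketched in a few lines; the cell count, per-cell voltage, and switch calibration temperatures below are illustrative, not taken from the patent:

```python
def active_module_voltage(temp_c, n_cells=36, v_cell=0.55,
                          switch_temps=(10.0, 0.0, -10.0)):
    """Module voltage after thermally activated switches short out cells.
    Each switch closes (shorting its cell) at or below its calibration
    temperature, so a colder ambient means fewer active cells, offsetting
    the rise in per-cell voltage at low temperature."""
    shorted = sum(1 for t_cal in switch_temps if temp_c <= t_cal)
    return (n_cells - shorted) * v_cell

v_warm = active_module_voltage(25.0)   # no switches closed
v_cold = active_module_voltage(-15.0)  # all three switches closed
```

Staggering the calibration temperatures gives a stepwise approximation to the battery's temperature coefficient without any active electronics.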
NASA Astrophysics Data System (ADS)
Wittig, K. R.
1982-06-01
A signal processing system has been designed and constructed for a pyroelectric infrared area detector which uses a matrix-addressable JFET array for readout and for on-focal plane preamplification. The system compensates for all offset and gain nonuniformities in and after the array. Both compensations are performed in real time at standard television rates, so that changes in the response characteristics of the array are automatically corrected for. Two-point compensation is achieved without the need for two separate temperature references. The focal plane circuitry used to read out the array, the offset and gain compensation algorithms, the architecture of the signal processor, and the system hardware are described.
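Two-point (offset and gain) compensation reduces, per pixel, to one multiply-add derived from two reference frames. A minimal sketch with made-up reference levels:

```python
import numpy as np

def two_point_compensation(raw, ref_lo, ref_hi, lo_level=0.0, hi_level=1.0):
    """Per-pixel two-point nonuniformity correction.  ref_lo and ref_hi
    are frames captured at two known reference levels; a per-pixel gain
    and offset are derived so every pixel maps those references to
    lo_level and hi_level.  Real-time systems apply the same per-pixel
    multiply-add to every incoming frame."""
    gain = (hi_level - lo_level) / (ref_hi - ref_lo)
    offset = ref_lo
    return (raw - offset) * gain + lo_level

# Pixels with different offsets and gains all normalize identically.
ref_lo = np.array([[10.0, 20.0], [5.0, 0.0]])
ref_hi = np.array([[110.0, 220.0], [55.0, 200.0]])
mid = (ref_lo + ref_hi) / 2            # a scene halfway between references
corrected = two_point_compensation(mid, ref_lo, ref_hi)
```

The system described above updates these coefficients continuously, which is what lets it track drifting array response without two separate temperature references.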
NASA Astrophysics Data System (ADS)
Guerra, J. E.; Ullrich, P. A.
2015-12-01
Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods at very high spatial resolutions. The atmospheric fluid equations are discretized by continuous / discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At global horizontal resolutions below 10km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of meso-scale test cases to validate the performance of the SNFEM applied in the vertical. Internal gravity wave, mountain wave, convective, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.
NASA Astrophysics Data System (ADS)
Guerra, Jorge; Ullrich, Paul
2016-04-01
Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods for a wide range of spatial resolutions. The atmospheric fluid equations are discretized by continuous / discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At horizontal resolutions below 10km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of idealized test cases to validate the performance of the SNFEM applied in the vertical with an emphasis on flow features and dynamic behavior. Internal gravity wave, mountain wave, convective bubble, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.
More rain compensation results
NASA Technical Reports Server (NTRS)
Sworder, D. D.; Vojak, R.
1992-01-01
To reduce the impact of rain-induced attenuation in the 20/30 GHz band, the attenuation at a specified signal frequency must be estimated and extrapolated forward in time on the basis of a noisy beacon measurement. Several studies have used model-based procedures to solve this statistical-inference problem. Perhaps the most widely used model-based paradigm leads to the Kalman filter and its linear variants. In this formulation, the dynamic features of the attenuation are represented by a state process (x(sub t)). The observation process (y(sub t)) is derived from beacon measurements. Some ideas on the signal-processing problems related to uplink power control are presented. It is shown that some easily implemented algorithms hold promise for estimating rain-induced fades. The algorithms were applied to actual data generated at the Virginia Polytechnic Institute and State University (VPI) test facility. Because only one such event was studied, it is not clear that the algorithms will be equally effective across a wide range of events.
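A scalar Kalman filter of the kind this formulation leads to can be sketched under a random-walk fade model; the noise variances and synthetic fade trace below are illustrative, not VPI data:

```python
import numpy as np

def kalman_fade_tracker(measurements, q=0.01, r=0.25):
    """Scalar Kalman filter for a random-walk fade model
    x_t = x_{t-1} + w_t observed through noisy beacon samples
    y_t = x_t + v_t, with process variance q and measurement variance r.
    Returns the filtered attenuation estimates."""
    x, p = measurements[0], 1.0
    estimates = []
    for y in measurements:
        p = p + q                      # predict: random-walk state
        k = p / (p + r)                # Kalman gain
        x = x + k * (y - x)            # update with beacon innovation
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(2)
true_fade = np.cumsum(rng.normal(0, 0.1, 300)) + 3.0   # slowly varying fade (dB)
beacon = true_fade + rng.normal(0, 0.5, 300)            # noisy beacon samples
est = kalman_fade_tracker(beacon)
```

For uplink power control, the same predict step extrapolates the estimate forward over the control-loop delay before a power command is issued.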