List-Based Simulated Annealing Algorithm for Traveling Salesman Problem
Zhan, Shi-hua; Lin, Juan; Zhang, Ze-jun; Zhong, Yi-wen
2016-01-01
The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm that has been successfully applied in many fields. Parameter setting is a key factor in its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated on benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
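The list-based cooling idea can be sketched in a few dozen lines. This is a simplified reading of the abstract, not the authors' code: the temperature list is seeded from random 2-opt moves, the list maximum drives the Metropolis test, and each accepted uphill move feeds a replacement temperature back into the list. The parameter names (`list_len`, `p0`) are illustrative.

```python
import math
import random

def lbsa_tsp(dist, n_iter=20000, list_len=120, p0=0.1, seed=1):
    """Simplified list-based simulated annealing for the TSP.

    dist: symmetric distance matrix (list of lists).
    Returns (best_tour, best_length).
    """
    rng = random.Random(seed)
    n = len(dist)

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def neighbor(t):
        # 2-opt move: reverse a randomly chosen segment of the tour
        i, j = sorted(rng.sample(range(n), 2))
        return t[:i] + t[i:j + 1][::-1] + t[j + 1:]

    cur = list(range(n))
    rng.shuffle(cur)
    cur_len = tour_len(cur)
    best, best_len = cur[:], cur_len

    # Seed the temperature list so a typical uphill move would initially
    # be accepted with probability about p0.
    temps = []
    for _ in range(list_len):
        delta = abs(tour_len(neighbor(cur)) - cur_len)
        temps.append(max(delta, 1e-9) / -math.log(p0))

    for _ in range(n_iter):
        t_max = max(temps)              # list maximum drives the Metropolis test
        cand = neighbor(cur)
        cand_len = tour_len(cand)
        delta = cand_len - cur_len
        if delta <= 0:
            cur, cur_len = cand, cand_len
        else:
            r = rng.random() or 1e-12   # avoid log(0) below
            if r < math.exp(-delta / t_max):
                cur, cur_len = cand, cand_len
                # Adapt the list: replace the maximum with the temperature
                # implied by this accepted uphill move.
                temps.remove(t_max)
                temps.append(-delta / math.log(r))
        if cur_len < best_len:
            best, best_len = cur[:], cur_len
    return best, best_len
```

In the paper the list update averages over an inner loop; the per-move replacement above keeps the sketch short while preserving the self-adapting behavior.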
Monthly prediction of air temperature in Australia and New Zealand with machine learning algorithms
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Deo, R. C.; Carro-Calvo, L.; Saavedra-Moreno, B.
2016-07-01
Long-term air temperature prediction is of major importance in a large number of applications, including climate-related studies, energy, agriculture, and medicine. This paper examines the performance of two machine learning algorithms, Support Vector Regression (SVR) and the Multi-layer Perceptron (MLP), on the problem of monthly mean air temperature prediction from previously measured values at observational stations in Australia and New Zealand, together with climate indices of importance in the region. The performance of the two algorithms is discussed and compared to alternative approaches. The results indicate that the SVR algorithm obtains the best prediction performance among all the algorithms compared in the paper. Moreover, the results show that the mean absolute error made by the two algorithms is significantly larger for the last 20 years than in the previous decades, which can be interpreted as a change in the relationship among the prediction variables involved in the training of the algorithms.
NASA Astrophysics Data System (ADS)
Bunai, Tasya; Rokhmatuloh; Wibowo, Adi
2018-05-01
In this paper, two methods for retrieving Land Surface Temperature (LST) from the thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono-window algorithm developed by Qin et al.; the second is the split-window algorithm of Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature and to determine the more accurate algorithm by calculating the root mean square error (RMSE). Finally, we compare the spatial distributions of land surface temperature produced by the two algorithms; the more accurate one is the split-window algorithm, with an RMSE of 7.69 °C.
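A generic split-window retrieval has the quadratic form LST = T10 + c1·(T10 − T11) + c2·(T10 − T11)² + c0. The coefficients below are purely illustrative placeholders, not the values of Rozenstein et al., whose operational coefficients depend on band emissivities and column water vapor.

```python
def split_window_lst(t10, t11, c0=-0.268, c1=1.378, c2=0.183):
    """Generic split-window land surface temperature (K) from TIRS
    band 10 and band 11 brightness temperatures (K).

    The band difference (t10 - t11) grows with atmospheric water
    vapor, so the correction terms remove most of its effect.
    Coefficients here are illustrative only.
    """
    d = t10 - t11
    return t10 + c1 * d + c2 * d ** 2 + c0
```

With equal brightness temperatures the correction reduces to the constant offset c0; a larger band difference yields a larger upward correction.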
D.J. Nicolsky; V.E. Romanovsky; G.G. Panteleev
2008-01-01
A variational data assimilation algorithm is developed to reconstruct thermal properties, porosity, and the parametrization of the unfrozen water content for fully saturated soils. The algorithm is tested with simulated synthetic temperatures. The simulations are performed to determine the robustness and sensitivity of the algorithm in estimating soil properties from in-situ...
Liauh, Chihng-Tsung; Shih, Tzu-Ching; Huang, Huang-Wen; Lin, Win-Li
2004-02-01
An inverse algorithm with Tikhonov regularization of order zero has been used to estimate the intensity ratio of the reflected longitudinal wave to the incident longitudinal wave, and that of the refracted shear wave to the total wave transmitted into bone, in order to calculate the absorbed power field and then reconstruct the temperature distribution in muscle and bone regions from a limited number of temperature measurements during simulated ultrasound hyperthermia. The effects of the number of temperature sensors, of the amount of noise superimposed on the temperature measurements, and of the optimal sensor location on the performance of the inverse algorithm are investigated. Results show that noisy input data degrade the performance of this inverse algorithm, especially when the number of temperature sensors is small. Results are also presented demonstrating an improvement in the accuracy of the temperature estimates when an optimal value of the regularization parameter is employed. Based on singular-value decomposition analysis, the optimal sensor position in a case utilizing only one temperature sensor can be determined so that the inverse algorithm converges to the true solution.
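Order-zero Tikhonov regularization penalizes the norm of the solution itself, which is what stabilizes the inversion against measurement noise. A minimal sketch of the regularized normal equations (the function name and demo values are ours, not the paper's):

```python
import numpy as np

def tikhonov0(A, b, lam):
    """Order-zero Tikhonov solution: minimize ||A x - b||^2 + lam ||x||^2.

    Solves the regularized normal equations (A^T A + lam I) x = A^T b.
    lam > 0 trades fidelity to the data for a smaller-norm solution,
    which damps noise amplification in ill-conditioned problems.
    """
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

For lam = 0 this reduces to ordinary least squares; increasing lam shrinks the estimate toward zero, which mirrors the paper's observation that an optimal regularization parameter improves the temperature estimates under noise.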
NASA Astrophysics Data System (ADS)
Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew
2017-08-01
Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.
Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
NASA Astrophysics Data System (ADS)
Junghans, Christoph; Mniszewski, Susan; Voter, Arthur; Perez, Danny; Eidenbenz, Stephan
2014-03-01
We present an example of a new class of tools that we call application simulators, parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation (PDES). We demonstrate our approach with a TADSim application simulator that models the Temperature Accelerated Dynamics (TAD) method, which is an algorithmically complex member of the Accelerated Molecular Dynamics (AMD) family. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We use TADSim to quickly characterize the runtime performance and algorithmic behavior for the otherwise long-running simulation code. We further extend TADSim to model algorithm extensions to standard TAD, such as speculative spawning of the compute-bound stages of the algorithm, and predict performance improvements without having to implement such a method. Focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights into the TAD algorithm behavior and suggested extensions to the TAD method.
A simple algorithm for beam profile diagnostics using a thermographic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katagiri, Ken; Hojo, Satoru; Honma, Toshihiro
2014-03-15
A new algorithm for digital image processing apparatuses is developed to evaluate profiles of high-intensity DC beams from temperature images of irradiated thin foils. Numerical analyses are performed to examine the reliability of the algorithm. To simulate the temperature images acquired by a thermographic camera, temperature distributions are numerically calculated for 20 MeV proton beams with different parameters. Noise added to the temperature images by the camera sensor is also simulated to account for its effect. Using the algorithm, beam profiles are evaluated from the simulated temperature images and compared with exact solutions. We find that niobium is an appropriate material for the thin foil used in the diagnostic system. We also confirm that the algorithm is adaptable over a wide beam current range of 0.11–214 μA, even when employing a general-purpose thermographic camera with rather high noise (ΔT_NETD ≃ 0.3 K; NETD: noise equivalent temperature difference).
Temperature-Emissivity Separation Assessment in a Suburban Scenario
NASA Astrophysics Data System (ADS)
Moscadelli, M.; Diani, M.; Corsini, G.
2017-10-01
In this paper, a methodology for evaluating the effectiveness of different TES strategies is presented. The methodology takes into account the specific materials of interest in the monitored scenario, sensor characteristics, and errors in the atmospheric compensation step. It is proposed as a way to predict and analyse algorithm performance during the planning of a remote sensing mission aimed at discovering specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To address the TES problem, three state-of-the-art algorithms and a recently proposed one are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear Piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by the proposed methodology are compared and discussed.
Subsonic flight test evaluation of a performance seeking control algorithm on an F-15 airplane
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Orme, John S.
1992-01-01
The subsonic flight test evaluation phase of the NASA F-15 (powered by F100 engines) performance seeking control program was completed for single-engine operation at part- and military-power settings. The subsonic performance seeking control algorithm optimizes the quasi-steady-state performance of the propulsion system for three modes of operation. The minimum fuel flow mode minimizes fuel consumption, the minimum temperature mode minimizes fan turbine inlet temperature, and the maximum thrust mode maximizes thrust at military power. Decreases in thrust-specific fuel consumption of 1 to 2 percent were measured in the minimum fuel flow mode; these fuel savings are significant, especially for supersonic cruise aircraft. Decreases of up to approximately 100 degree R in fan turbine inlet temperature were measured in the minimum temperature mode. Temperature reductions of this magnitude would more than double turbine life if inlet temperature were the only life factor. Measured thrust increases of up to approximately 15 percent in the maximum thrust mode cause substantial increases in aircraft acceleration. The system dynamics of the closed-loop algorithm operation were good. The subsonic flight phase has validated the performance seeking control technology, which can significantly benefit the next generation of fighter and transport aircraft.
Analysis of the Dryden Wet Bulb Globe Temperature Algorithm for White Sands Missile Range
NASA Technical Reports Server (NTRS)
LaQuay, Ryan Matthew
2011-01-01
In locations where the workforce is exposed to high relative humidity and light winds, heat stress is a significant concern. Such is the case at the White Sands Missile Range in New Mexico. Heat stress is characterized by the wet bulb globe temperature, which is the official measurement used by the American Conference of Governmental Industrial Hygienists. The wet bulb globe temperature is measured by an instrument that was designed to be portable and needs routine maintenance. As an alternative, algorithms have been created to calculate the wet bulb globe temperature from basic meteorological observations. The algorithms are location dependent; a specific algorithm is therefore usually not suitable for multiple locations. Because of climatological similarities, the algorithm developed for use at the Dryden Flight Research Center was applied to data from the White Sands Missile Range. A study was performed that compared a wet bulb globe instrument with data from two Surface Atmospheric Measurement Systems fed into the Dryden wet bulb globe temperature algorithm. The period of study was June to September of 2009, with focus on 0900 to 1800 local time. Analysis showed that the algorithm worked well, with a few exceptions. The algorithm becomes less accurate when the dew point temperature is over 10 °C. Cloud cover also has a significant effect on the measured wet bulb globe temperature. The algorithm does not capture red and black heat stress flags well, owing to the short time scales of such events. The results of this study show that the Dryden Flight Research Center wet bulb globe temperature algorithm is compatible with the White Sands Missile Range, except under increased dew point temperatures and cloud cover or precipitation. During such occasions, the wet bulb globe temperature instrument would be the preferred method of measurement. Out of the 30 dates examined, 23 fell under the category of having good accuracy.
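The composite outdoor WBGT formula itself is standard (ACGIH); what site-specific algorithms such as Dryden's add is the estimation of the natural wet bulb and globe temperatures from routine meteorological observations, which is not reproduced here. A sketch of the composite and of illustrative flag thresholds (the threshold values are common heat-flag conventions, not White Sands policy):

```python
def wbgt_outdoor(t_nwb, t_g, t_db):
    """ACGIH outdoor wet bulb globe temperature, deg C.

    t_nwb: natural wet bulb temperature
    t_g:   black globe temperature
    t_db:  dry bulb (air) temperature
    """
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def heat_stress_flag(wbgt_c):
    """Illustrative flag categories (deg C); actual criteria vary by site."""
    for flag, limit in [("black", 32.2), ("red", 31.1),
                        ("yellow", 29.4), ("green", 27.8)]:
        if wbgt_c >= limit:
            return flag
    return "white"
```

The 0.7 weight on the natural wet bulb term is why the abstract's dew-point sensitivity matters: humidity errors propagate almost directly into the WBGT estimate.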
NASA Astrophysics Data System (ADS)
Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.
2016-10-01
We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. This new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold compared to a single iteration of the FEM-based algorithm.
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir
2016-10-14
A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid-kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (the polynomial kernel), is incorporated into the LSSVM. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the LSSVM. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures: maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
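A hybrid kernel of this kind is, in spirit, a convex combination of a local RBF kernel and a global polynomial kernel. The mixing weight and hyper-parameter names below are our assumptions for illustration; the paper tunes them with the chaotic ions motion algorithm.

```python
import math

def hybrid_kernel(x, y, sigma=1.0, degree=2, coef0=1.0, w=0.7):
    """Convex mix of a local RBF kernel and a global polynomial kernel.

    w in [0, 1] balances local fitting ability (RBF term) against
    global generalization (polynomial term).
    """
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    rbf = math.exp(-sq / (2.0 * sigma ** 2))       # local kernel
    dot = sum(a * b for a, b in zip(x, y))
    poly = (dot + coef0) ** degree                  # global kernel
    return w * rbf + (1.0 - w) * poly
```

Because both component kernels are positive semi-definite and w is between 0 and 1, the mixture is itself a valid kernel and can be dropped into any LSSVM solver.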
Implementation and performance of shutterless uncooled micro-bolometer cameras
NASA Astrophysics Data System (ADS)
Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.
2015-06-01
A shutterless algorithm is implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated onboard the camera. The limited resources in the camera require a compact algorithm, so coding efficiency is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) of the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very attractive for thermal infrared applications where small weight and size and continuous operation are important.
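A shutterless correction of this general kind can be sketched as a per-pixel gain/offset map whose offset term is adjusted by a temperature coefficient instead of being re-measured against a shutter. The exact Xenics model is not public, so the form below is an assumption, not their algorithm:

```python
import numpy as np

def shutterless_nuc(raw, gain, offset0, k, t_fpa, t_cal):
    """Assumed shutterless non-uniformity correction.

    raw:     raw frame (2-D array of counts)
    gain:    per-pixel gain map from factory calibration
    offset0: per-pixel offset map at calibration temperature t_cal
    k:       per-pixel offset-drift coefficient (counts per kelvin)
    t_fpa:   current focal-plane-array temperature

    The offset is extrapolated from the calibration set using the
    temperature excursion, replacing the shutter's flat-field reference.
    """
    offset = offset0 + k * (t_fpa - t_cal)
    return gain * (raw - offset)
```

A shuttered camera would instead measure `offset` directly against the closed shutter, which is why the shutterless variant shows slightly higher residual non-uniformity.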
Feng, Yibo; Li, Xisheng; Zhang, Xiaojuan
2015-05-13
We present an adaptive algorithm for a system integrating micro-electro-mechanical systems (MEMS) gyroscopes with a compass to eliminate the influence of the environment, compensate the temperature drift precisely, and improve the accuracy of the MEMS gyroscope. We use a simplified drift model with changing but appropriate model parameters to implement this algorithm. The model of MEMS gyroscope temperature drift is constructed mostly on the basis of the temperature sensitivity of the gyroscope. As the state variables of a strong tracking Kalman filter (STKF), the parameters of the temperature drift model can be calculated to adapt to the environment with the support of the compass. These parameters change intelligently with the environment to maintain the precision of the MEMS gyroscope under changing temperature. The heading error is less than 0.6° in the static temperature experiment, and stays in the range from 5° to -2° in the dynamic outdoor experiment. This demonstrates that the proposed algorithm adapts strongly to changing temperature, and performs significantly better than a conventional Kalman filter (KF) and multiple linear regression (MLR) in compensating the temperature drift of a gyroscope and eliminating the influence of temperature variation.
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.
2013-09-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root mean square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates, and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can now perform as well as manual ones.
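The first two HOME metrics are easy to state precisely. A sketch of the centered RMSE (bias-insensitive) and the linear-trend error, with function names of our choosing:

```python
import numpy as np

def centered_rmse(est, truth):
    """Centered RMSE: RMSE after removing each series' own mean,
    so a constant bias is not penalized (metric i)."""
    e = np.asarray(est, float)
    t = np.asarray(truth, float)
    return float(np.sqrt(np.mean(((e - e.mean()) - (t - t.mean())) ** 2)))

def trend_error(est, truth):
    """Difference in fitted linear-trend slope, per time step (metric ii)."""
    x = np.arange(len(est))
    return float(np.polyfit(x, est, 1)[0] - np.polyfit(x, truth, 1)[0])
```

A homogenized series that is uniformly 5 units too warm scores a centered RMSE of zero but still shows any spurious trend through the second metric, which is why the study reports both.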
NASA Technical Reports Server (NTRS)
Orme, John S.
1995-01-01
The performance seeking control algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance, but, because of an observability problem, component-level degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves varying levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines; thus, when the PSC maximum thrust mode is applied, there is less temperature margin available to be traded for increased thrust.
Two-dimensional imaging of gas temperature and concentration based on hyperspectral tomography
NASA Astrophysics Data System (ADS)
Xin, Ming-yuan; Jin, Xing; Wang, Guang-yu; Song, Junling
2016-10-01
Two-dimensional imaging of gas temperature and concentration is realized by hyperspectral tomography, which exploits multi-wavelength absorption spectral information so that imaging can be accomplished with a small number of projections and viewing angles. A temperature and concentration model is established to simulate combustion conditions, and a total of 10 near-infrared absorption lines of H2O are used. An improved simulated annealing algorithm, which adjusts the search step, is used as the main search algorithm for the tomography. By adding random errors to the absorbance information, the stability of the algorithm is tested, and the results are compared with reconstructions provided by the algebraic reconstruction technique, which uses only 2 spectral lines in imaging. The results show that the two methods perform equivalently at low noise levels, but at high noise levels hyperspectral tomography turns out to be more stable.
Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng
2009-11-01
The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model is presented with multi-echo gradient echo (GRE) sequence using a fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat which contain temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map and thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of sample water:fat signal ratio on the accuracy of the temperature estimate is evaluated in a water-fat mixed phantom experiment with an optimal ratio of approximately 0.66:1. (c) 2009 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Choi, Doo-Won; Jeon, Min-Gyu; Cho, Gyeong-Rae; Kamimoto, Takahiro; Deguchi, Yoshihiro; Doh, Deog-Hee
2016-02-01
Performance improvement was attained in the data reconstructions of 2-dimensional tunable diode laser absorption spectroscopy (TDLAS). The Multiplicative Algebraic Reconstruction Technique (MART) algorithm was adopted for data reconstruction. The data were obtained in an experiment measuring the temperature and concentration fields of gas flows. The measurement theory is based upon the Beer-Lambert law, and the measurement system consists of a tunable laser, collimators, detectors, and an analyzer. Methane was used as the fuel for combustion with air in a Bunsen-type burner. The data used for the reconstruction are the optical signals of 8 laser beams passing through a cross-section of the methane flame. The performance of the MART algorithm in data reconstruction was validated and compared with that of the Algebraic Reconstruction Technique (ART) algorithm.
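Where ART corrects the estimate additively row by row, MART multiplies it by a power of the ratio between the measured and re-projected data, which keeps the reconstruction non-negative — a property that suits absorbance fields. A minimal sketch (relaxation handling simplified; variable names ours):

```python
import numpy as np

def mart(A, b, n_sweeps=50, relax=1.0, x0=None):
    """Multiplicative ART for non-negative systems.

    A: (m, n) projection matrix with non-negative entries
    b: (m,) measured line integrals, assumed positive
    Update per row i: x_j <- x_j * (b_i / <a_i, x>)^(relax * a_ij),
    so pixels a ray does not cross (a_ij = 0) are left untouched.
    """
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    x = np.ones(A.shape[1]) if x0 is None else np.asarray(x0, float)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0:
                x *= (b[i] / proj) ** (relax * A[i])
    return x
```

Starting from a strictly positive initial guess is essential: a pixel once driven to zero can never recover under multiplicative updates.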
NASA Technical Reports Server (NTRS)
Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.
1990-01-01
For the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous and co-located measurements made by off-shore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
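At its core the D-matrix approach is a linear regression from channel brightness temperatures to the environmental parameter; the operational algorithm adds climate-zone stratification and quality flags not shown here. A sketch of fitting and applying such a matrix to buoy matchups (names and shapes are our assumptions):

```python
import numpy as np

def fit_d_matrix(tb, wind):
    """Fit linear coefficients so that wind ~ d0 + tb @ d.

    tb:   (n_samples, n_channels) brightness temperatures (K)
    wind: (n_samples,) matched buoy wind speeds (m/s)
    Returns the coefficient vector [d0, d1, ..., d_nchan].
    """
    X = np.column_stack([np.ones(len(tb)), tb])
    coef, *_ = np.linalg.lstsq(X, wind, rcond=None)
    return coef

def apply_d_matrix(coef, tb):
    """Retrieve wind speed(s) from brightness temperatures."""
    return coef[0] + np.atleast_2d(tb) @ coef[1:]
```

Channel failures (the inoperative-channel case studied in the paper) amount to refitting the matrix with the corresponding column of `tb` removed.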
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior of the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high spectral resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using the first-guess information is developed to retrieve simultaneously the temperature, water vapor, and ozone atmospheric profiles. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres including rare events.
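The PCA compression/de-noising step can be sketched with an SVD of the centered observation matrix: projecting spectra onto the leading principal components compresses them, and reconstructing from those components discards the noise lying outside the retained subspace. Function and variable names below are ours.

```python
import numpy as np

def pca_compress(spectra, n_comp):
    """Compress and de-noise spectra via the leading principal components.

    spectra: (n_obs, n_channels) observation matrix
    Returns (scores, recon): the compressed representation and the
    de-noised reconstruction in the original channel space.
    """
    spectra = np.asarray(spectra, float)
    mu = spectra.mean(axis=0)
    xc = spectra - mu
    # Rows of vt are the principal directions, ordered by variance.
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    pcs = vt[:n_comp]
    scores = xc @ pcs.T          # compressed representation
    recon = scores @ pcs + mu    # de-noised reconstruction
    return scores, recon
```

The scores (a few hundred numbers per IASI spectrum instead of thousands of channels) are what a first-guess pattern search or a neural network would consume.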
Pre-launch Performance Assessment of the VIIRS Ice Surface Temperature Algorithm
NASA Astrophysics Data System (ADS)
Ip, J.; Hauss, B.
2008-12-01
The VIIRS Ice Surface Temperature (IST) environmental data product provides the surface temperature of sea ice at VIIRS moderate resolution (750 m) during both day and night. To predict the IST, the retrieval algorithm utilizes a split-window approach with Long-wave Infrared (LWIR) channels at 10.76 μm (M15) and 12.01 μm (M16) to correct for atmospheric water vapor. The split-window approach using these LWIR channels is AVHRR and MODIS heritage, where the MODIS formulation has a slightly modified functional form. The algorithm relies on the VIIRS Cloud Mask IP for identifying cloudy and ocean pixels, the VIIRS Ice Concentration IP for identifying ice pixels, and the VIIRS Aerosol Optical Thickness (AOT) IP for excluding pixels with AOT greater than 1.0. In this paper, we report the pre-launch performance assessment of the IST retrieval. We have taken two separate approaches to this assessment, one based on global synthetic data and the other based on proxy data from Terra MODIS. Results of the split-window algorithm have been assessed by comparison either to synthetic "truth" or to results of the MODIS retrieval. We also show that the results of the assessment with proxy data are consistent with those obtained using the global synthetic data.
Passive Microwave Algorithms for Sea Ice Concentration: A Comparison of Two Techniques
NASA Technical Reports Server (NTRS)
Comiso, Josefino C.; Cavalieri, Donald J.; Parkinson, Claire L.; Gloersen, Per
1997-01-01
The most comprehensive large-scale characterization of the global sea ice cover so far has been provided by satellite passive microwave data. Accurate retrieval of ice concentrations from these data is important because of the sensitivity of surface flux (e.g., heat, salt, and water) calculations to small changes in the amount of open water (leads and polynyas) within the polar ice packs. Two algorithms that have been used for deriving ice concentrations from multichannel data are compared. One is the NASA Team algorithm and the other is the Bootstrap algorithm, both of which were developed at NASA's Goddard Space Flight Center. The two algorithms use different channel combinations, reference brightness temperatures, weather filters, and techniques. Analyses are made to evaluate the sensitivity of algorithm results to variations of emissivity and temperature with space and time. To assess the difference in the performance of the two algorithms, analyses were performed with data from both hemispheres and for all seasons. The results show only small differences in the central Arctic but larger disagreements in the seasonal regions and in summer. In some areas of the Antarctic, the Bootstrap technique shows ice concentrations higher than those of the Team algorithm by as much as 25%, whereas in other areas it shows ice concentrations lower by as much as 30%. The differences in the results are caused by temperature effects, emissivity effects, and tie point differences. The Team and Bootstrap results were compared with available Landsat, advanced very high resolution radiometer (AVHRR), and synthetic aperture radar (SAR) data. AVHRR, Landsat, and SAR data sets all yield higher concentrations than the passive microwave algorithms. Inconsistencies among the results suggest the need for further validation studies.
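The NASA Team algorithm works from two brightness-temperature ratios, the polarization ratio (PR) and the spectral gradient ratio (GR), which are then mapped to ice concentration through hemisphere-specific tie points (not reproduced here). The ratio definitions themselves are standard:

```python
def polarization_ratio(tb19v, tb19h):
    """NASA Team PR at 19 GHz: near zero over consolidated ice,
    large over open water, because water polarizes emission strongly."""
    return (tb19v - tb19h) / (tb19v + tb19h)

def gradient_ratio(tb37v, tb19v):
    """NASA Team GR(37V/19V): spectral slope used to separate
    first-year from multiyear ice."""
    return (tb37v - tb19v) / (tb37v + tb19v)
```

Both quantities are ratios of a channel difference to a channel sum, which cancels the first-order dependence on physical surface temperature — the design choice behind the NASA Team algorithm's reduced temperature sensitivity discussed in the abstract.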
Preliminary flight evaluation of an engine performance optimization algorithm
NASA Technical Reports Server (NTRS)
Lambert, H. H.; Gilyard, G. B.; Chisholm, J. D.; Kerr, L. J.
1991-01-01
A performance seeking control (PSC) algorithm has undergone initial flight test evaluation in subsonic operation of a PW1128-engined F-15. This algorithm is designed to optimize the quasi-steady performance of an engine for three primary modes: (1) minimum fuel consumption; (2) minimum fan turbine inlet temperature (FTIT); and (3) maximum thrust. The flight test results have verified a thrust specific fuel consumption reduction of 1 pct., FTIT decreases of up to 100 R, and increases of as much as 12 pct. in maximum thrust. PSC technology promises to be of value in next-generation tactical and transport aircraft.
Land Surface Temperature Measurements from EOS MODIS Data
NASA Technical Reports Server (NTRS)
Wan, Zhengming
1996-01-01
We have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials over wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NE(Delta)T) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4-0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 micrometer IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2-3 K. Several issues related to the day/night LST algorithm (uncertainties in the day/night registration, in surface emissivity changes caused by dew occurrence, and in cloud cover) have been investigated. The LST algorithms have been validated with MODIS Airborne Simulator (MAS) data and ground-based measurement data in two field campaigns conducted in Railroad Valley playa, NV, in 1995 and 1996. The MODIS LST version 1 software has been delivered.
SDR Input Power Estimation Algorithms
NASA Technical Reports Server (NTRS)
Nappier, Jennifer M.; Briones, Janette C.
2013-01-01
The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC) and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed. The algorithms include a linear straight line estimator, which used the digital AGC and the temperature to estimate the SDR input power over a narrower section of the SDR input power range. There is a linear adaptive filter algorithm that uses both AGCs and the temperature to estimate the SDR input power over a wide input power range. Finally, an algorithm that uses neural networks was designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
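The linear straight-line estimator described above can be sketched as an ordinary least-squares line fit of input power against the digital AGC reading at a fixed temperature. The calibration numbers below are synthetic placeholders, not SCAN Testbed data:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# synthetic calibration points: digital AGC reading vs. input power (dBm)
agc_readings = [10.0, 20.0, 30.0, 40.0]
input_power_dbm = [-90.0, -80.0, -70.0, -60.0]
slope, intercept = fit_line(agc_readings, input_power_dbm)

def estimate_power(agc):
    """Straight-line SDR input power estimate from the digital AGC."""
    return slope * agc + intercept
```

The paper's adaptive-filter and neural-network estimators extend this idea to both AGCs, the temperature reading, and a wider input power range.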
Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi
2016-01-01
Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579
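The RMSE and bias figures quoted above are the standard comparison metrics between interpolated and observed values; for reference, a minimal sketch of how they are computed:

```python
import math

def rmse(predicted, observed):
    """Root mean squared error between paired predictions and observations."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

def bias(predicted, observed):
    """Mean signed error; near zero means no systematic over/underestimation."""
    n = len(observed)
    return sum(p - o for p, o in zip(predicted, observed)) / n
```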
NASA Astrophysics Data System (ADS)
Begum, A. Yasmine; Gireesh, N.
2018-04-01
In a superheater, steam temperature is controlled by a cascade control loop consisting of PI and PID controllers. To improve superheater steam temperature control, the controller gains in the cascade loop have to be tuned efficiently. The mathematical model of the superheater is given by sets of nonlinear partial differential equations. The tuning methods studied here are designed for a first-order-plus-time-delay transfer function model; hence, from the dynamical model of the superheater, a FOPTD model is derived using the frequency response method. Then, using the Chien-Hrones-Reswick tuning algorithm and the gain-phase assignment algorithm, optimum controller gains are found based on the least value of the integral time-weighted absolute error.
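The Chien-Hrones-Reswick step is a table-lookup tuning rule for FOPTD models G(s) = K·e^(-Ls)/(Ts+1). A sketch of one commonly tabulated variant (setpoint tracking, 0% overshoot; the exact coefficients vary slightly between references, so treat these as representative rather than definitive):

```python
def chr_pid(K, T, L):
    """Chien-Hrones-Reswick setpoint rule (0% overshoot) for a PID
    controller on G(s) = K*exp(-L*s)/(T*s + 1).
    K: process gain, T: time constant, L: dead time.
    Returns (Kp, Ti, Td) in the standard series PID parameterization."""
    Kp = 0.6 * T / (K * L)   # proportional gain
    Ti = T                   # integral time
    Td = 0.5 * L             # derivative time
    return Kp, Ti, Td
```

For a hypothetical superheater FOPTD fit with K = 1, T = 10 s, L = 2 s, this yields Kp = 3, Ti = 10 s, Td = 1 s as a starting point before fine tuning.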
Performance Improvement of Raman Distributed Temperature System by Using Noise Suppression
NASA Astrophysics Data System (ADS)
Li, Jian; Li, Yunting; Zhang, Mingjiang; Liu, Yi; Zhang, Jianzhong; Yan, Baoqiang; Wang, Dong; Jin, Baoquan
2018-06-01
In a Raman distributed temperature system, the key factor for performance improvement is noise suppression, which strongly affects the sensing distance and temperature accuracy. Therefore, we propose and experimentally demonstrate a dynamic noise difference algorithm and wavelet transform modulus maximum (WTMM) processing to de-noise the Raman anti-Stokes signal. Experimental results show that the sensing distance increases from 3 km to 11.5 km and the temperature accuracy improves to 1.58 °C at a sensing distance of 10.4 km.
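WTMM de-noising itself is involved, but the underlying idea of suppressing noise in a wavelet domain can be sketched with a single-level Haar transform and soft thresholding. This is an illustration of the general wavelet-thresholding approach, not the authors' algorithm:

```python
import math

def haar_fwd(x):
    """One level of the orthonormal Haar transform (len(x) must be even)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_inv(approx, detail):
    """Inverse of haar_fwd."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) * s)
        out.append((a - d) * s)
    return out

def soft(v, t):
    """Soft thresholding: shrink coefficient magnitude by t, clip to zero."""
    return math.copysign(max(abs(v) - t, 0.0), v)

def denoise(signal, threshold):
    """Suppress small detail coefficients, which mostly carry noise."""
    approx, detail = haar_fwd(signal)
    detail = [soft(v, threshold) for v in detail]
    return haar_inv(approx, detail)
```

Small, noise-like detail coefficients are shrunk toward zero while large, signal-bearing ones survive, which is the same principle WTMM exploits by tracking modulus maxima across scales.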
Event-chain algorithm for the Heisenberg model: Evidence for z≃1 dynamic scaling.
Nishikawa, Yoshihiko; Michel, Manon; Krauth, Werner; Hukushima, Koji
2015-12-01
We apply the event-chain Monte Carlo algorithm to the three-dimensional ferromagnetic Heisenberg model. The algorithm is rejection-free and also realizes an irreversible Markov chain that satisfies global balance. The autocorrelation functions of the magnetic susceptibility and the energy indicate a dynamical critical exponent z≈1 at the critical temperature, while that of the magnetization does not measure the performance of the algorithm. We show that the event-chain Monte Carlo algorithm substantially reduces the dynamical critical exponent from the conventional value of z≃2.
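The normalized autocorrelation functions from which such dynamical exponents are estimated can be sketched as follows (a generic estimator, not the authors' analysis code):

```python
def autocorrelation(series, lag):
    """Normalized autocorrelation of a Monte Carlo time series at a given lag.
    Returns 1.0 at lag 0; slow decay with lag indicates long correlation times."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series) / n
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var
```

Fitting the decay of such a function for the susceptibility or energy at the critical point, as a function of system size, gives the dynamical critical exponent z.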
NASA Astrophysics Data System (ADS)
Katz, S. D.; Niedermayer, F.; Nógrádi, D.; Török, Cs.
2017-03-01
We study three possible ways to circumvent the sign problem in the O(3) nonlinear sigma model in 1 +1 dimensions. We compare the results of the worm algorithm to complex Langevin and multiparameter reweighting. Using the worm algorithm, the thermodynamics of the model is investigated, and continuum results are shown for the pressure at different μ /T values in the range 0-4. By performing T =0 simulations using the worm algorithm, the Silver Blaze phenomenon is reproduced. Regarding the complex Langevin, we test various implementations of discretizing the complex Langevin equation. We found that the exponentialized Euler discretization of the Langevin equation gives wrong results for the action and the density at low T /m . By performing a continuum extrapolation, we found that this discrepancy does not disappear and depends slightly on temperature. The discretization with spherical coordinates performs similarly at low μ /T but breaks down also at some higher temperatures at high μ /T . However, a third discretization that uses a constraining force to achieve the ϕ2=1 condition gives correct results for the action but wrong results for the density at low μ /T .
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to modify the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from special sensor microwave/imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
Pre-Launch Performance Testing of the ICESat-2/ATLAS Flight Science Receiver Algorithms
NASA Astrophysics Data System (ADS)
Mcgarry, J.; Carabajal, C. C.; Saba, J. L.; Rackley, A.; Holland, S.
2016-12-01
NASA's Advanced Topographic Laser Altimeter System (ATLAS) will be the single instrument on the ICESat-2 spacecraft which is expected to launch in late 2017 with a 3 year mission lifetime. The ICESat-2 planned orbital altitude is 500 km with a 92 degree inclination and 91-day repeat tracks. ATLAS is a single-photon detection system transmitting at 532nm with a laser repetition rate of 10 kHz and a 6 spot pattern on the Earth's surface. Without some method of reducing the received data, the volume of ATLAS telemetry would far exceed the normal X-band downlink capability. To reduce the data volume to an acceptable level a set of onboard Receiver Algorithms has been developed. These Algorithms limit the daily data volume by distinguishing surface echoes from the background noise and allowing the instrument to telemeter data from only a small vertical region about the signal. This is accomplished through the use of an onboard Digital Elevation Model (DEM), signal processing techniques, and onboard relief and surface reference maps. The ATLAS Receiver Algorithms have been completed and have been verified during Instrument testing in the spacecraft assembly area at the Goddard Space Flight Center in late 2015 and early 2016. Testing has been performed at ambient temperature with a pressure of one atmosphere as well as at the expected hot and cold temperatures in a vacuum. Results from testing to date show the Receiver Algorithms have the ability to handle a wide range of signal and noise levels with a very good sensitivity at relatively low signal to noise ratios. Testing with the ATLAS instrument and flight software shows very good agreement with previous Simulator testing and all of the requirements for ATLAS Receiver Algorithms were successfully verified during Run for the Record Testing in December 2015. 
This poster will describe the performance of the ATLAS Flight Science Receiver Algorithms during the Run for Record and Comprehensive Performance Testing performed at Goddard, which will give insight into the future on-orbit performance of the Algorithms. See the companion poster (Carabajal, et al) in this session.
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which includes a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.
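The consistency-weighted inversion described above can be sketched as a Bayesian average over candidate hydrometeor profiles, each weighted by how closely its simulated brightness temperatures match the observation. The profile database and the sigma value below are illustrative stand-ins:

```python
import math

def bayesian_rain_estimate(tb_obs, database, sigma=2.0):
    """Estimate rain rate as a weighted average over candidate profiles.
    database: list of (rain_rate, simulated_brightness_temps) pairs.
    sigma: assumed observation/model error (K); controls weight sharpness."""
    weighted, total = [], 0.0
    for rain_rate, tb_sim in database:
        misfit = sum((o - s) ** 2 for o, s in zip(tb_obs, tb_sim))
        w = math.exp(-0.5 * misfit / sigma ** 2)
        weighted.append((rain_rate, w))
        total += w
    return sum(r * w for r, w in weighted) / total
```

The residual misfit of the best-matching profiles plays the role the paper assigns to the residual error: when no candidate matches well, the solution is poorly constrained.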
NASA Astrophysics Data System (ADS)
Man, E. A.; Sera, D.; Mathe, L.; Schaltz, E.; Rosendahl, L.
2016-03-01
Characterization of thermoelectric generators (TEG) is widely discussed and equipment has been built that can perform such analysis. One method is often used to perform such characterization: constant temperature with variable thermal power input. Maximum power point tracking (MPPT) methods for TEG systems are mostly tested under steady-state conditions for different constant input temperatures. However, for most TEG applications, the input temperature gradient changes, exposing the MPPT to variable tracking conditions. An example is the exhaust pipe on hybrid vehicles, for which, because of the intermittent operation of the internal combustion engine, the TEG and its MPPT controller are exposed to a cyclic temperature profile. Furthermore, there are no guidelines on how fast the MPPT must be under such dynamic conditions. In the work discussed in this paper, temperature gradients for TEG integrated in several applications were evaluated; the results showed temperature variation up to 5°C/s for TEG systems. Electrical characterization of a calcium-manganese oxide TEG was performed at steady-state for different input temperatures and a maximum temperature of 401°C. By using electrical data from characterization of the oxide module, a solar array simulator was emulated to perform as a TEG. A trapezoidal temperature profile with different gradients was used on the TEG simulator to evaluate the dynamic MPPT efficiency. It is known that the perturb and observe (P&O) algorithm may have difficulty accurately tracking under rapidly changing conditions. To solve this problem, a compromise must be found between the magnitude of the increment and the sampling frequency of the control algorithm. The standard P&O performance was evaluated experimentally by using different temperature gradients for different MPPT sampling frequencies, and efficiency values are provided for all cases. 
The results showed that a tracking speed of 2.5 Hz can be successfully implemented on a TEG system to provide ~95% MPPT efficiency when the input temperature is changing at 5°C/s.
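The standard P&O loop evaluated in the paper can be sketched as a simple hill climber over the operating voltage: perturb, observe the power change, and reverse direction when power drops. The quadratic power curve below is a toy stand-in for a real TEG characteristic:

```python
def perturb_and_observe(measure_power, step=0.1, iterations=50, v0=1.0):
    """Classic P&O MPPT: keep perturbing the operating voltage in the
    direction that last increased extracted power; reverse on a decrease.
    The step size / sampling rate trade-off is the compromise the paper
    discusses for rapidly changing temperature gradients."""
    v = v0
    p_prev = measure_power(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = measure_power(v)
        if p < p_prev:
            direction = -direction  # overshot the peak; turn around
        p_prev = p
    return v

# toy power curve with its maximum power point at v = 3.0
power = lambda v: -(v - 3.0) ** 2 + 9.0
```

At steady state the tracker oscillates within one step of the maximum, which is why a larger step tracks fast transients better but costs steady-state efficiency.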
The Rational Hybrid Monte Carlo algorithm
NASA Astrophysics Data System (ADS)
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.
Climatologies at high resolution for the earth’s land surface areas
Karger, Dirk Nikolaus; Conrad, Olaf; Böhner, Jürgen; Kawohl, Tobias; Kreft, Holger; Soria-Auza, Rodrigo Wilber; Zimmermann, Niklaus E.; Linder, H. Peter; Kessler, Michael
2017-01-01
High-resolution information on climatic conditions is essential to many applications in environmental and ecological sciences. Here we present the CHELSA (Climatologies at high resolution for the earth’s land surface areas) data of downscaled model output temperature and precipitation estimates of the ERA-Interim climatic reanalysis to a high resolution of 30 arc sec. The temperature algorithm is based on statistical downscaling of atmospheric temperatures. The precipitation algorithm incorporates orographic predictors including wind fields, valley exposition, and boundary layer height, with a subsequent bias correction. The resulting data consist of a monthly temperature and precipitation climatology for the years 1979–2013. We compare the data derived from the CHELSA algorithm with other standard gridded products and station data from the Global Historical Climate Network. We compare the performance of the new climatologies in species distribution modelling and show that we can increase the accuracy of species range predictions. We further show that CHELSA climatological data has a similar accuracy as other products for temperature, but that its predictions of precipitation patterns are better. PMID:28872642
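CHELSA's temperature downscaling is statistical, but its effect can be illustrated with the simplest elevation-based adjustment: a constant environmental lapse rate. This is an illustration of the downscaling idea only, not the CHELSA algorithm:

```python
def downscale_temperature(t_coarse, elev_coarse, elev_fine,
                          lapse_rate=-0.0065):
    """Adjust a coarse-grid temperature (°C) to a fine-grid cell's elevation
    (m) using a constant lapse rate (°C per m, negative: cooler aloft).
    The -6.5 °C/km default is the standard atmosphere value."""
    return t_coarse + lapse_rate * (elev_fine - elev_coarse)
```

A 10 °C coarse-cell value at 500 m maps to 3.5 °C on a 1500 m fine-grid cell; CHELSA replaces the constant lapse rate with statistically estimated atmospheric temperature profiles.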
NASA Technical Reports Server (NTRS)
Srivastava, Prashant K.; Han, Dawei; Rico-Ramirez, Miguel A.; O'Neill, Peggy; Islam, Tanvir; Gupta, Manika
2014-01-01
Soil Moisture and Ocean Salinity (SMOS) is the latest mission which provides a flow of coarse-resolution soil moisture data for land applications. However, the efficient retrieval of soil moisture for hydrological applications depends on optimally choosing the soil and vegetation parameters. The first stage of this work involves the evaluation of SMOS Level 2 products, and then several approaches for soil moisture retrieval from SMOS brightness temperature are performed to estimate Soil Moisture Deficit (SMD). The most widely applied algorithm, i.e., the single channel algorithm (SCA), based on tau-omega, is used in this study for the soil moisture retrieval. In tau-omega, the soil moisture is retrieved using the horizontal (H) polarisation following the Hallikainen dielectric model, roughness parameters, Fresnel's equations, and estimated Vegetation Optical Depth (tau). The roughness parameters are empirically calibrated using numerical optimization techniques. Further, to explore improvements in the retrieval models, modifications have been incorporated in the algorithms with respect to the sources of the parameters, which include effective temperatures derived from European Center for Medium-Range Weather Forecasts (ECMWF) data downscaled using the Weather Research and Forecasting (WRF)-NOAH Land Surface Model and Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST), while tau is derived from the MODIS Leaf Area Index (LAI). All the evaluations are performed against SMD, which is estimated using the Probability Distributed Model following a careful calibration and validation integrated with sensitivity and uncertainty analysis. The performance obtained after all those changes indicates that SCA-H using WRF-NOAH LSM downscaled ECMWF LST produces an improved performance for SMD estimation at a catchment scale.
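The tau-omega model underlying the single channel algorithm can be sketched as a zeroth-order radiative transfer forward model at horizontal polarization. This is the textbook form; the parameter values and function names are illustrative, not the paper's calibration:

```python
import cmath
import math

def fresnel_reflectivity_h(eps_r, theta_deg):
    """Smooth-surface Fresnel power reflectivity, horizontal polarization,
    for relative dielectric constant eps_r at incidence angle theta."""
    th = math.radians(theta_deg)
    cos_t = math.cos(th)
    root = cmath.sqrt(eps_r - math.sin(th) ** 2)
    return abs((cos_t - root) / (cos_t + root)) ** 2

def tau_omega_tb(t_soil, t_veg, eps_r, tau, omega, theta_deg):
    """Zeroth-order tau-omega brightness temperature (H pol):
    attenuated soil emission + canopy emission + soil-reflected canopy
    emission. tau: vegetation optical depth, omega: single-scattering albedo."""
    r = fresnel_reflectivity_h(eps_r, theta_deg)
    gamma = math.exp(-tau / math.cos(math.radians(theta_deg)))  # canopy transmissivity
    e_soil = 1.0 - r
    return (e_soil * t_soil * gamma
            + (1.0 - omega) * (1.0 - gamma) * t_veg
            + (1.0 - omega) * (1.0 - gamma) * r * gamma * t_veg)
```

Retrieval inverts this relation: the soil dielectric constant (and hence moisture, via a dielectric model such as Hallikainen's) is adjusted until the modeled TB matches the SMOS observation.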
A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery.
Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng
2009-01-01
The HJ-1B satellite, which was launched on September 6, 2008, is one of the small satellites placed in the constellation for disaster prediction and monitoring. HJ-1B imagery, containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates, was simulated in this paper, including RED, NIR, MIR and TIR channels. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m² and the simulated fire temperature is larger than 800 K, the algorithm has a higher probability of detection. But if the simulated fire area is smaller than 10 m², the fire may be detected only when the simulated fire temperature is larger than 900 K. For fire areas of about 100 m², the proposed algorithm has a higher detection probability than that of the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of this algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are much more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection.
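A contextual test of the MODIS type flags a pixel by comparing its mid-infrared brightness temperature against the surrounding background statistics. A minimal sketch (the thresholds k and t_abs are illustrative, not the paper's values):

```python
import statistics

def contextual_fire_test(t_mir_pixel, t_mir_background, k=3.0, t_abs=360.0):
    """Flag a pixel as fire if its MIR brightness temperature (K) exceeds
    an absolute threshold, or stands out from its valid background
    neighbors by more than k standard deviations."""
    mean = statistics.mean(t_mir_background)
    std = statistics.stdev(t_mir_background)
    return t_mir_pixel > t_abs or t_mir_pixel > mean + k * std
```

The contextual term is what makes small, cool fires detectable over a cold background while avoiding false alarms over hot bare soil; the full MODIS algorithm adds MIR-TIR difference tests and cloud/water masking.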
Windowed multipole for cross section Doppler broadening
NASA Astrophysics Data System (ADS)
Josey, C.; Ducru, P.; Forget, B.; Smith, K.
2016-02-01
This paper presents an in-depth analysis on the accuracy and performance of the windowed multipole Doppler broadening method. The basic theory behind cross section data is described, along with the basic multipole formalism followed by the approximations leading to windowed multipole method and the algorithm used to efficiently evaluate Doppler broadened cross sections. The method is tested by simulating the BEAVRS benchmark with a windowed multipole library composed of 70 nuclides. Accuracy of the method is demonstrated on a single assembly case where total neutron production rates and 238U capture rates compare within 0.1% to ACE format files at the same temperature. With regards to performance, clock cycle counts and cache misses were measured for single temperature ACE table lookup and for windowed multipole. The windowed multipole method was found to require 39.6% more clock cycles to evaluate, translating to a 7.9% performance loss overall. However, the algorithm has significantly better last-level cache performance, with 3 fewer misses per evaluation, or a 65% reduction in last-level misses. This is due to the small memory footprint of the windowed multipole method and better memory access pattern of the algorithm.
The SEASAT altimeter wet tropospheric range correction revisited
NASA Technical Reports Server (NTRS)
Tapley, D. B.; Lundberg, J. B.; Born, G. H.
1984-01-01
An expanded set of radiosonde observations was used to calculate the wet tropospheric range correction for the brightness temperature measurements of the SEASAT scanning multichannel microwave radiometer (SMMR). The accuracy of the conventional algorithm for the wet tropospheric range correction was evaluated. On the basis of the expanded observational data set, the algorithm was found to have a bias of about 1.0 cm and a standard deviation of 2.8 cm. In order to improve the algorithm, the exact linear, quadratic and logarithmic relationships between brightness temperatures and range corrections were determined. Various combinations of measurement parameters were used to reduce the standard deviation between SEASAT SMMR and radiosonde observations to about 2.1 cm. The performance of various range correction formulas is compared in a table.
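The regression step described in this abstract, relating brightness temperatures to range corrections, can be sketched with a simple least-squares fit; the data pairs and the resulting coefficients below are invented for illustration and are not the SEASAT SMMR values.

```python
# Hypothetical sketch: fitting a linear brightness-temperature-to-range-correction
# relation by ordinary least squares. Data are synthetic, not SMMR measurements.

def fit_linear(xs, ys):
    """Return (a, b) minimizing sum((a + b*x - y)^2) via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Synthetic "radiosonde" pairs: brightness temperature (K) vs wet range correction (cm)
tb = [150.0, 160.0, 170.0, 180.0, 190.0]
dr = [5.0, 8.0, 11.0, 14.0, 17.0]   # exactly linear here for clarity
a, b = fit_linear(tb, dr)
print(round(a, 3), round(b, 3))  # fitted intercept and slope
```

In the study itself, quadratic and logarithmic forms were fitted the same way, with the form chosen by the residual standard deviation against the radiosonde truth.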
An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.
Vimalarani, C; Subramanian, R; Sivanandam, S N
2016-01-01
A Wireless Sensor Network (WSN) is a network formed by a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example, temperature, water level, pressure, health care, and various military applications. Sensor nodes are mostly equipped with self-supported battery power through which they can perform adequate operations and communicate with neighboring nodes. To maximize the lifetime of a WSN, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks in which clustering and cluster-head selection are done using the Particle Swarm Optimization (PSO) algorithm so as to minimize the power consumption in the WSN. The performance metrics are evaluated and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.
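The particle swarm update at the core of such clustering schemes can be sketched as follows; the objective, swarm parameters and bounds are toy choices standing in for the paper's cluster-energy formulation, not the actual EPSO-CEO model.

```python
import random

# Minimal particle swarm optimization sketch (toy objective, invented parameters):
# illustrates the velocity/position update that PSO-based cluster-head selection
# builds on. This is not the paper's formulation.

def pso(objective, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                     # personal bests
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# Toy stand-in for "cluster energy cost": squared distance from an ideal point.
best, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_f)
```

In a clustering context the objective would instead score a candidate set of cluster heads by the total transmission energy it implies.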
NASA Astrophysics Data System (ADS)
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, the restarted simulated annealing algorithm and the co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. The proposed algorithms outperform their original versions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
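The restart mechanism described above, replacing the current temperature with a fresh one so the Metropolis search can keep escaping local optima, might be sketched roughly as follows; the continuous toy objective, the neighborhood move and all cooling parameters are illustrative assumptions, not the paper's RMALB/S formulation.

```python
import math
import random

# Sketch of restarted simulated annealing: geometric cooling, and when the
# temperature bottoms out the schedule is restarted from t_init. Objective
# and parameters are toy choices for illustration only.

def restarted_sa(objective, x0, t_init=10.0, t_min=1e-3, alpha=0.95,
                 restarts=3, steps=50, seed=0):
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for _ in range(restarts):
        t = t_init                       # restart: replace current temperature
        while t > t_min:
            for _ in range(steps):
                cand = x + rng.uniform(-1, 1)          # neighborhood move
                fc = objective(cand)
                # Metropolis acceptance criterion
                if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                    x, fx = cand, fc
                    if fx < fbest:
                        best, fbest = x, fx
            t *= alpha                   # geometric cooling
    return best, fbest

best, fbest = restarted_sa(lambda x: (x - 2.0) ** 2, x0=10.0)
print(best, fbest)
```

For the actual RMALB/S problem the state would be a task assignment plus model sequence and robot allocation, and the move operator a discrete perturbation of those, but the acceptance and restart logic is the same shape.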
An assessment of 'shuffle algorithm' collision mechanics for particle simulations
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Boyd, Iain D.
1991-01-01
Among the algorithms for collision mechanics used at present, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization, but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, a simulation was performed of flows in monatomic gases and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics when the goal of the calculations is mean profiles of density and temperature.
Electrochemical model based charge optimization for lithium-ion batteries
NASA Astrophysics Data System (ADS)
Pramanik, Sourav; Anwar, Sohel
2016-05-01
In this paper, we propose the design of a novel optimal strategy for charging the lithium-ion battery, based on an electrochemical battery model, that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm, which would allow higher charging rates without compromising the internal electrochemical kinetics of the battery; this would prevent abusive conditions, thereby improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm, as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that can avoid abusive operating conditions.
Ground temperature measurement by PRT-5 for maps experiment
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1978-01-01
A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
An Automatic Cloud Mask Algorithm Based on Time Series of MODIS Measurements
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Wang, Yujie; Frey, R.
2008-01-01
Quality of aerosol retrievals and atmospheric correction depends strongly on the accuracy of the cloud mask (CM) algorithm. The heritage CM algorithms developed for AVHRR and MODIS use the latest sensor measurements of spectral reflectance and brightness temperature and perform processing at the pixel level. The algorithms are threshold-based and empirically tuned. They do not explicitly address the classical problem of cloud search, wherein a baseline clear-skies scene is defined for comparison. Here, we report on a new CM algorithm which explicitly builds and maintains a reference clear-skies image of the surface (refcm) using a time series of MODIS measurements. The new algorithm, developed as part of the Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm for MODIS, relies on the fact that clear-skies images of the same surface area share a common textural pattern, defined by the surface topography, boundaries of rivers and lakes, the distribution of soils and vegetation, etc. This pattern changes slowly given the daily rate of global Earth observations, whereas clouds introduce high-frequency random disturbances. Under clear skies, consecutive gridded images of the same surface area have a high covariance, whereas in the presence of clouds the covariance is usually low. This idea is central to the initialization of refcm, which is used to derive the cloud mask in combination with spectral and brightness temperature tests. The refcm is continuously updated with the latest clear-skies MODIS measurements, thus adapting to seasonal and rapid surface changes. The algorithm is enhanced by an internal dynamic land-water-snow classification coupled with a surface change mask. An initial comparison shows that the new algorithm offers the potential to perform better than the MODIS MOD35 cloud mask in situations where the land surface is changing rapidly, and over Earth regions covered by snow and ice.
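The covariance idea at the heart of this cloud mask, that a clear scene correlates strongly with the reference clear-skies image of the same area while a cloudy scene does not, can be illustrated with a toy correlation test; the reflectance values and the 0.7 decision threshold below are invented for the sketch.

```python
# Toy illustration of the MAIAC-style covariance test: compare a new gridded
# scene against the reference clear-sky image (refcm). Synthetic data only.

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

refcm  = [0.10, 0.32, 0.25, 0.40, 0.12, 0.28, 0.35, 0.20]  # reference clear-sky reflectances
clear  = [0.11, 0.30, 0.27, 0.41, 0.10, 0.29, 0.33, 0.22]  # same surface texture, slightly perturbed
cloudy = [0.61, 0.58, 0.62, 0.57, 0.60, 0.62, 0.56, 0.59]  # bright, surface texture destroyed

THRESH = 0.7  # hypothetical decision threshold
print(pearson(refcm, clear) > THRESH)   # high covariance: scene treated as clear
print(pearson(refcm, cloudy) > THRESH)  # low covariance: scene flagged for cloud tests
```

In the real algorithm this test is combined with spectral and brightness-temperature thresholds rather than used alone.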
NASA Astrophysics Data System (ADS)
Choy, Vanessa; Tang, Kee; Wachsmuth, Jeff; Chopra, Rajiv; Bronskill, Michael
2006-05-01
Transurethral thermal therapy offers a minimally invasive alternative for the treatment of prostate diseases, including benign prostatic hyperplasia (BPH) and prostate cancer. Accurate heating of a targeted region of the gland can be achieved through the use of a rotating directional heating source incorporating planar ultrasound transducers, with active temperature feedback along the beam direction during heating provided by magnetic resonance (MR) thermometry. The performance of this control method with practical spatial, temporal, and temperature resolution (angular alignment, spatial resolution, update rate for temperature feedback (imaging time), and the presence of noise) for thermal feedback using a clinical 1.5 T MR scanner was investigated in simulations. As expected, the control algorithm was most sensitive to the presence of noise, with noticeable degradation in its performance above ±2°C of temperature uncertainty. With respect to temporal resolution, acceptable performance was achieved at update rates of 5 s or faster. The control algorithm was relatively insensitive to reduced spatial resolution because of the broad nature of the heating pattern produced by the applicator; this provides an opportunity to improve the signal-to-noise ratio (SNR). The overall simulation results confirm that existing clinical 1.5 T MR imagers are capable of providing adequate temperature feedback for transurethral thermal therapy without special pulse sequences or enhanced imaging hardware.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John; Iredell, Lena
2011-01-01
The Goddard DISC has generated products derived from AIRS/AMSU-A observations, starting from September 2002 when the AIRS instrument became stable, using the AIRS Science Team Version-5 retrieval algorithm. The AIRS Science Team Version-6 retrieval algorithm will be finalized in September 2011. This paper describes some of the significant improvements contained in the Version-6 retrieval algorithm, compared to that used in Version-5, with an emphasis on the improvement of atmospheric temperature profiles, ocean and land surface skin temperatures, and ocean and land surface spectral emissivities. AIRS contains 2378 spectral channels covering portions of the spectral region 650 cm^-1 (15.38 micrometers) to 2665 cm^-1 (3.752 micrometers). These spectral regions contain significant absorption features from two CO2 absorption bands, the 15 micrometer (longwave) CO2 band and the 4.3 micrometer (shortwave) CO2 absorption band. There are also two atmospheric window regions, the 12 micrometer - 8 micrometer (longwave) window and the 4.17 micrometer - 3.75 micrometer (shortwave) window. Historically, determination of surface and atmospheric temperatures from satellite observations was performed using primarily observations in the longwave window and CO2 absorption regions. According to cloud clearing theory, more accurate soundings of both surface skin and atmospheric temperatures can be obtained under partial cloud cover conditions if one uses observations in longwave channels to determine coefficients which generate cloud-cleared radiances R̂_i for all channels, and uses R̂_i only from shortwave channels in the determination of surface and atmospheric temperatures. This procedure is now being used in the AIRS Version-6 retrieval algorithm. Results are presented for both daytime and nighttime conditions showing improved Version-6 surface and atmospheric soundings under partial cloud cover.
NASA Astrophysics Data System (ADS)
Kirchengast, Gottfried; Li, Ying; Scherllin-Pirscher, Barbara; Schwärz, Marc; Schwarz, Jakob; Nielsen, Johannes K.
2017-04-01
The GNSS radio occultation (RO) technique is an important remote sensing technique for obtaining thermodynamic profiles of temperature, humidity, and pressure in the Earth's troposphere. However, due to the refraction effects of both dry ambient air and water vapor in the troposphere, retrieval of accurate thermodynamic profiles at these lower altitudes is challenging and requires suitable background information in addition to the RO refractivity information. Here we introduce a new moist air retrieval algorithm aiming to improve the quality and robustness of retrieving temperature, humidity and pressure profiles in moist air tropospheric conditions. The new algorithm consists of four steps: (1) use of prescribed specific humidity and its uncertainty to retrieve temperature and its associated uncertainty; (2) use of prescribed temperature and its uncertainty to retrieve specific humidity and its associated uncertainty; (3) use of the previous results to estimate final temperature and specific humidity profiles through optimal estimation; (4) determination of air pressure and density profiles from the results obtained before. The new algorithm does not require elaborate matrix inversions, which are otherwise widely used in 1D-Var retrieval algorithms, and it allows a transparent uncertainty propagation, whereby the uncertainties of prescribed variables are dynamically estimated accounting for their spatial and temporal variations. Estimated random uncertainties are calculated by constructing error covariance matrices from co-located ECMWF short-range forecast and corresponding analysis profiles. Systematic uncertainties are estimated by empirical modeling. The influence of regarding or disregarding vertical error correlations is quantified. The new scheme is implemented with static input uncertainty profiles in WEGC's current OPSv5.6 processing system and with full scope in WEGC's next-generation system, the Reference Occultation Processing System (rOPS).
Results from both WEGC systems, the current OPSv5.6 and the next-generation rOPS, are shown and discussed, based on insights from both individual profiles and statistical ensembles, and compared to moist air retrieval results from the UCAR Boulder and ROM-SAF Copenhagen centers. The results show that the new algorithmic scheme improves the temperature, humidity and pressure retrieval performance over the previous algorithms, in particular its robustness, including integrated uncertainty estimation for large-scale applications. The new rOPS-implemented algorithm will therefore be used in the first large-scale reprocessing by the rOPS towards a tropospheric climate data record 2001-2016, including its integrated uncertainty propagation.
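Step (3) of the algorithm, combining the two previously retrieved estimates through optimal estimation, reduces in the scalar, uncorrelated case to an inverse-variance weighted mean; the temperature values and uncertainties below are illustrative only, not retrieval output.

```python
# Scalar sketch of the optimal-estimation combination step: two independent
# estimates of the same quantity, each with its own uncertainty, are merged
# with inverse-variance weights. Numbers are invented for illustration.

def combine(x1, sigma1, x2, sigma2):
    """Inverse-variance weighted mean and its propagated uncertainty."""
    w1, w2 = 1.0 / sigma1 ** 2, 1.0 / sigma2 ** 2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    sigma = (1.0 / (w1 + w2)) ** 0.5    # combined uncertainty < either input
    return x, sigma

# e.g. temperature at one level: step-(1) result vs step-(2) result (toy values)
t, sigma_t = combine(283.0, 1.0, 285.0, 2.0)
print(round(t, 2), round(sigma_t, 3))
```

The full algorithm applies this profile-wise with vertically varying, dynamically estimated uncertainties, which is what removes the need for the large matrix inversions of 1D-Var.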
MOLA II Laser Transmitter Calibration and Performance. 1.2
NASA Technical Reports Server (NTRS)
Afzal, Robert S.; Smith, David E. (Technical Monitor)
1997-01-01
The goal of this document is to explain the algorithm for determining the laser output energy from the telemetry data within the return packets from MOLA II. A simple algorithm is developed to convert the raw start detector data into laser energy, measured in millijoules. This conversion depends on three variables: start detector counts, array heat sink temperature, and start detector temperature. All these values are contained within the return packets. The conversion is applied to the GSFC Thermal Vacuum data as well as the in-space data to date and shows good correlation.
NASA Technical Reports Server (NTRS)
Aires, F.; Chedin, A.; Scott, N. A.; Rossow, W. B.; Hansen, James E. (Technical Monitor)
2001-01-01
In this paper, a fast atmospheric and surface temperature retrieval algorithm is developed for the high-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. This algorithm is constructed on the basis of a neural network technique that has been regularized by the introduction of a priori information. The performance of the resulting fast and accurate inverse radiative transfer model is presented for a large diversified dataset of radiosonde atmospheres including rare events. Two configurations are considered: a tropical-airmass specialized scheme and an all-air-masses scheme.
Radiofrequency pulse design in parallel transmission under strict temperature constraints.
Boulant, Nicolas; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre
2014-09-01
To gain radiofrequency (RF) pulse performance by directly addressing the temperature constraints, as opposed to the specific absorption rate (SAR) constraints, in parallel transmission at ultra-high field. The magnitude least-squares RF pulse design problem under hard SAR constraints was solved repeatedly by using the virtual observation points and an active-set algorithm. The SAR constraints were updated at each iteration based on the result of a thermal simulation. The numerical study was performed for an SAR-demanding and simplified time of flight sequence using B1 and ΔB0 maps obtained in vivo on a human brain at 7T. The proposed adjustment of the SAR constraints combined with an active-set algorithm provided higher flexibility in RF pulse design within a reasonable time. The modifications of those constraints acted directly upon the thermal response as desired. Although further confidence in the thermal models is needed, this study shows that RF pulse design under strict temperature constraints is within reach, allowing better RF pulse performance and faster acquisitions at ultra-high fields at the cost of higher sequence complexity. Copyright © 2013 Wiley Periodicals, Inc.
A Contextual Fire Detection Algorithm for Simulated HJ-1B Imagery
Qian, Yonggang; Yan, Guangjian; Duan, Sibo; Kong, Xiangsheng
2009-01-01
The HJ-1B satellite, which was launched on September 6, 2008, is one of the small satellites placed in the constellation for disaster prediction and monitoring. HJ-1B imagery, containing fires of various sizes and temperatures in a wide range of terrestrial biomes and climates in the RED, NIR, MIR and TIR channels, was simulated in this paper. Based on the MODIS version 4 contextual algorithm and the characteristics of the HJ-1B sensor, a contextual fire detection algorithm was proposed and tested using simulated HJ-1B data. It was evaluated by the probability of fire detection and false alarm as functions of fire temperature and fire area. Results indicate that when the simulated fire area is larger than 45 m² and the simulated fire temperature is larger than 800 K, the algorithm has a high probability of detection. If the simulated fire area is smaller than 10 m², the fire can be detected only when the simulated fire temperature exceeds 900 K. For fire areas of about 100 m², the proposed algorithm has a higher detection probability than that of the MODIS product. Finally, the omission and commission errors, which are important factors affecting the performance of this algorithm, were evaluated. It has been demonstrated that HJ-1B satellite data are much more sensitive to smaller and cooler fires than MODIS or AVHRR data, and the improved capabilities of HJ-1B data will offer a fine opportunity for fire detection. PMID:22399950
Theoretical algorithms for satellite-derived sea surface temperatures
NASA Astrophysics Data System (ADS)
Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.
1989-03-01
Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
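The linear SST algorithms derived in this study have the classic split-window form, a weighted combination of two infrared brightness temperatures whose difference tracks water-vapor absorption; the coefficients in the sketch below are invented for illustration, since the real ones come from the radiative-transfer modeling the abstract describes.

```python
# Illustrative split-window SST form: SST = a0 + a1*T11 + a2*T12, where T11 and
# T12 are brightness temperatures (K) in the 11 and 12 micrometer channels.
# Coefficients here are hypothetical, not AVHRR or ATSR values.

def split_window_sst(t11, t12, a0=1.0, a1=2.5, a2=-1.5):
    # The T11 - T12 difference grows with water-vapor absorption, so weighting
    # the two channels with opposite signs corrects the atmospheric attenuation.
    return a0 + a1 * t11 + a2 * t12

sst = split_window_sst(t11=290.0, t12=288.5)
print(round(sst, 2))
```

Adding a low-noise 3.7 μm channel, as the abstract recommends, extends the same linear form with a third term.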
NASA Technical Reports Server (NTRS)
Vicroy, D. D.; Knox, C. E.
1983-01-01
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane are described.
NASA Astrophysics Data System (ADS)
Chen, Jun; Zhang, Xiangguang; Xing, Xiaogang; Ishizaka, Joji; Yu, Zhifeng
2017-12-01
Quantifying the diffuse attenuation coefficient of photosynthetically available radiation (Kpar) can improve our knowledge of euphotic depth (Zeu) and biomass heating effects in the upper layers of oceans. An algorithm to semianalytically derive Kpar from remote sensing reflectance (Rrs) is developed for the global open oceans. This algorithm includes the following two portions: (1) a neural network model for deriving the diffuse attenuation coefficients (Kd) that considers the residual error in satellite Rrs, and (2) a three-band depth-dependent Kpar algorithm (TDKA) for describing the spectrally selective attenuation mechanism of underwater solar radiation in the open oceans. This algorithm is evaluated with both in situ PAR profile data and satellite images, and the results show that it can produce acceptable PAR profile estimations while clearly removing the impacts of satellite residual errors on Kpar estimations. Furthermore, the performance of the TDKA algorithm is evaluated by its applicability in Zeu derivation and in simulating the mean temperature within the mixed layer depth (TML), and the results show that it can significantly decrease the uncertainty in both compared with the classical chlorophyll-a concentration-based Kpar algorithm. Finally, the TDKA algorithm is applied in simulating biomass heating effects in the Sargasso Sea near Bermuda; with the new Kpar data it is found that the biomass heating effects can lead to a 3.4°C maximum positive difference in temperature in the upper layers but could result in a 0.67°C maximum negative difference in temperature in the deep layers.
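The role a depth-dependent Kpar plays in such simulations can be illustrated with a layered Beer-Lambert sketch; the Kpar profile below is an arbitrary toy stand-in, not the TDKA three-band model.

```python
import math

# Minimal sketch of propagating PAR downward with a depth-dependent attenuation
# coefficient: PAR(z) = PAR(0) * exp(-sum(Kpar(z_i) * dz)). The Kpar values are
# invented for illustration.

def par_profile(par0, kpar, dz):
    """Cumulative Beer-Lambert attenuation through layers of thickness dz."""
    out, par = [], par0
    for k in kpar:
        par *= math.exp(-k * dz)   # attenuate through one layer
        out.append(par)
    return out

kpar = [0.12, 0.10, 0.08, 0.07, 0.06]   # m^-1, decreasing with depth (toy values)
profile = par_profile(1000.0, kpar, dz=5.0)
print(all(a > b for a, b in zip([1000.0] + profile, profile)))  # monotone decay
```

Quantities like Zeu (the depth where PAR falls to 1% of its surface value) and the radiative heating of each layer follow directly from such a profile, which is why errors in Kpar feed through to Zeu and TML.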
Measurement Marker Recognition In A Time Sequence Of Infrared Images For Biomedical Applications
NASA Astrophysics Data System (ADS)
Fiorini, A. R.; Fumero, R.; Marchesi, R.
1986-03-01
In thermographic measurements, quantitative surface temperature evaluation is often uncertain. The main reason is the lack of available reference points in transient conditions. Reflective markers were used for automatic marker recognition and pixel coordinate computation. An algorithm selects marker icons to match marker references where particular luminance conditions are satisfied. Automatic marker recognition allows luminance compensation and temperature calibration of recorded infrared images. A biomedical application is presented: the dynamic behaviour of the surface temperature distributions is investigated in order to study the performance of two different pumping systems for extracorporeal circulation. Sequences of images are compared and results are discussed. Finally, the algorithm makes it possible to monitor the experimental environment and to alert for the presence of unusual experimental conditions.
Retinex enhancement of infrared images.
Li, Ying; He, Renjie; Xu, Guizhi; Hou, Changzhi; Sun, Yunyan; Guo, Lei; Rao, Liyun; Yan, Weili
2008-01-01
With the ability to image the temperature distribution of the body, infrared imaging is promising for the diagnosis and prognosis of diseases. However, the poor quality of raw infrared images has prevented wider application; one of the essential problems is the low-contrast appearance of the imaged object. In this paper, image enhancement based on the Retinex theory is studied, a process that automatically restores visual realism to images. The algorithms, including the Frankle-McCann algorithm, the McCann99 algorithm, the single-scale Retinex algorithm, the multi-scale Retinex algorithm and the multi-scale Retinex algorithm with color restoration (MSRCR), are applied to the enhancement of infrared images. Entropy measurements along with visual inspection were compared, and the results show that the algorithms based on Retinex theory have the ability to enhance infrared images. Of the algorithms compared, MSRCR demonstrated the best performance.
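The single-scale Retinex algorithm mentioned above computes the log-ratio of each pixel to a Gaussian-smoothed surround, R(x, y) = log I(x, y) - log[(G * I)(x, y)]; here is a minimal pure-Python sketch on a tiny synthetic "thermal" image, with kernel radius and sigma chosen arbitrarily.

```python
import math

# Single-scale Retinex sketch on a nested-list grayscale image. Kernel size and
# sigma are arbitrary illustrative choices, not tuned for infrared imagery.

def gaussian_kernel(radius, sigma):
    k = [[math.exp(-(i * i + j * j) / (2 * sigma * sigma))
          for j in range(-radius, radius + 1)]
         for i in range(-radius, radius + 1)]
    s = sum(sum(row) for row in k)
    return [[v / s for v in row] for row in k]    # normalize to sum 1

def blur(img, kernel):
    r = len(kernel) // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for i in range(-r, r + 1):
                for j in range(-r, r + 1):
                    yy = min(max(y + i, 0), h - 1)   # clamp at borders
                    xx = min(max(x + j, 0), w - 1)
                    acc += kernel[i + r][j + r] * img[yy][xx]
            out[y][x] = acc
    return out

def single_scale_retinex(img, radius=1, sigma=1.0):
    smoothed = blur(img, gaussian_kernel(radius, sigma))
    return [[math.log(p + 1e-6) - math.log(s + 1e-6)
             for p, s in zip(prow, srow)]
            for prow, srow in zip(img, smoothed)]

# Low-contrast thermal-style image: one warm pixel in a flat background.
img = [[10.0, 10.0, 10.0],
       [10.0, 12.0, 10.0],
       [10.0, 10.0, 10.0]]
out = single_scale_retinex(img)
print(out[1][1] > 0)  # local bright spot amplified relative to its surround
```

The multi-scale variant averages this result over several sigmas, and MSRCR adds a per-channel color restoration factor on top.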
Pre-Launch Performance Assessment of the VIIRS Land Surface Temperature Environmental Data Record
NASA Astrophysics Data System (ADS)
Hauss, B.; Ip, J.; Agravante, H.
2009-12-01
The Visible/Infrared Imager Radiometer Suite (VIIRS) Land Surface Temperature (LST) Environmental Data Record (EDR) provides the surface temperature of the land surface, including coastal and inland-water pixels, at VIIRS moderate resolution (750 m) during both day and night. To predict the LST under optimal conditions, the retrieval algorithm utilizes a dual split-window approach with both Short-wave Infrared (SWIR) channels at 3.70 µm (M12) and 4.05 µm (M13), and Long-wave Infrared (LWIR) channels at 10.76 µm (M15) and 12.01 µm (M16), to correct for atmospheric water vapor. Under less optimal conditions, the algorithm uses a fallback split-window approach with the M15 and M16 channels. By comparison, the MODIS generalized split-window algorithm only uses the LWIR bands in the retrieval of surface temperature because of the concern for both solar contamination and large emissivity variations in the SWIR bands. In this paper, we assess whether these concerns are real and whether there is an impact on the precision and accuracy of the LST retrieval. The algorithm relies on the VIIRS Cloud Mask IP for identifying cloudy and ocean pixels, the VIIRS Surface Type EDR for identifying the IGBP land cover type for the pixels, and the VIIRS Aerosol Optical Thickness (AOT) IP for excluding pixels with AOT greater than 1.0. In this paper, we will report the pre-launch performance assessment of the LST EDR based on global synthetic data and proxy data from Terra MODIS. Results of both the split-window and dual split-window algorithms will be assessed by comparison either to synthetic "truth" or to results of the MODIS retrieval. We will also show that the results of the assessment with proxy data are consistent with those obtained using the global synthetic data.
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2012-01-01
The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. 
Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
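Metric (i) above, the centered root mean square error, removes the mean difference before computing the RMSE, so a homogenized series that deviates from the truth only by a constant offset scores zero; a small sketch with synthetic series:

```python
# Sketch of the "centered RMSE" validation metric: mean differences are removed
# first, so the score measures shape agreement rather than constant offsets.
# Series are synthetic stand-ins for homogenized vs true monthly temperatures.

def centered_rmse(est, truth):
    n = len(est)
    me, mt = sum(est) / n, sum(truth) / n
    return (sum(((a - me) - (b - mt)) ** 2
                for a, b in zip(est, truth)) / n) ** 0.5

truth = [10.0, 10.5, 11.0, 10.2, 9.8]
homog = [t + 0.3 for t in truth]              # constant bias only
print(round(centered_rmse(homog, truth), 6))  # offset removed -> 0.0
```

In the benchmark this metric is evaluated at several averaging scales and alongside the linear-trend error and contingency scores, since each captures a different aspect of homogenization quality.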
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. 
Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can now perform as well as manual ones.
Temperature Effects and Compensation-Control Methods
Xia, Dunzhu; Chen, Shuling; Wang, Shourong; Li, Hongsheng
2009-01-01
In the analysis of the effects of temperature on the performance of microgyroscopes, it is found that the resonant frequency of the microgyroscope decreases linearly as the temperature increases, and the quality factor changes drastically at low temperatures. Moreover, the zero bias changes greatly with temperature variations. To reduce the temperature effects on the microgyroscope, temperature compensation-control methods are proposed. In the first place, a BP (Back Propagation) neural network and polynomial fitting are utilized for building the temperature model of the microgyroscope. Considering the simplicity and real-time requirements, piecewise polynomial fitting is applied in the temperature compensation system. Then, an integral-separated PID (Proportion Integration Differentiation) control algorithm is adopted in the temperature control system, which can stabilize the temperature inside the microgyroscope in pursuing its optimal performance. Experimental results reveal that the combination of microgyroscope temperature compensation and control methods is both realizable and effective in a miniaturized microgyroscope prototype. PMID:22408509
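The integral-separated PID idea above can be sketched as follows: the integral term accumulates only while the error is inside a band, which limits windup during large temperature excursions. The gains, band width, and test values are illustrative assumptions, not the paper's tuning.

```python
# Sketch of one integral-separated PID step (illustrative gains).
class IntegralSeparatedPID:
    def __init__(self, kp, ki, kd, band):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.band = band          # error band inside which integration is active
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        if abs(error) < self.band:        # integral separation: freeze when far off
            self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = IntegralSeparatedPID(kp=2.0, ki=0.5, kd=0.1, band=5.0)
u1 = pid.step(60.0, 50.0, dt=1.0)   # |error| = 10 >= band: integral frozen
u2 = pid.step(60.0, 58.0, dt=1.0)   # |error| = 2 < band: integral active
```

During a large warm-up transient only the proportional and derivative terms act; integration resumes near the set point.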
Linear and nonlinear trending and prediction for AVHRR time series data
NASA Technical Reports Server (NTRS)
Smid, J.; Volf, P.; Slama, M.; Palus, M.
1995-01-01
The variability of the AVHRR calibration coefficient in time was analyzed using algorithms of linear and non-linear time series analysis. Specifically, we used spline trend modeling, autoregressive process analysis, an incremental neural network learning algorithm, and redundancy functional testing. The analysis performed on available AVHRR data sets revealed that (1) the calibration data have nonlinear dependencies, (2) the calibration data depend strongly on the target temperature, (3) both the calibration coefficients and the temperature time series can be modeled, to a first approximation, as autonomous dynamical systems, and (4) the high frequency residuals of the analyzed data sets are best modeled as an autoregressive process of 10th order. We have dealt with a nonlinear identification problem and the problem of noise filtering (data smoothing). System identification and filtering are significant problems for AVHRR data sets. The algorithms outlined in this study can be used for future EOS missions. Prediction and smoothing algorithms for time series of calibration data provide a functional characterization of the data. Those algorithms can be particularly useful when calibration data are incomplete or sparse.
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas
2016-03-01
Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
NASA Astrophysics Data System (ADS)
Braiek, A.; Adili, A.; Albouchi, F.; Karkri, M.; Ben Nasrallah, S.
2016-06-01
The aim of this work is to simultaneously identify the conductive and radiative parameters of a semitransparent sample using a photothermal method associated with an inverse problem. The identification of the conductive and radiative properties is performed by minimizing an objective function that represents the errors between the calculated temperature and the measured signal. The calculated temperature is obtained from a theoretical model built with the thermal quadrupole formalism. The measurement is obtained at the rear face of the sample, whose front face is excited by a crenel of heat flux. For the identification procedure, a genetic algorithm is developed and used. The genetic algorithm is a useful tool in the simultaneous estimation of correlated or nearly correlated parameters, which can be a limiting factor for gradient-based methods. The results of the identification procedure show the efficiency and the stability of the genetic algorithm in simultaneously estimating the conductive and radiative properties of clear glass.
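A minimal sketch of this kind of genetic-algorithm least-squares estimation, assuming a toy linear model in place of the thermal quadrupole model; the population size, selection scheme, mutation schedule, and `genetic_minimize` itself are illustrative, not the paper's settings.

```python
import random

def model(params, t):
    a, b = params
    return a * t + b          # toy stand-in for the thermal quadrupole model

def objective(params, data):
    # sum of squared errors between model output and "measured" signal
    return sum((model(params, t) - y) ** 2 for t, y in data)

def genetic_minimize(data, pop_size=40, gens=100, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for gen in range(gens):
        pop.sort(key=lambda p: objective(p, data))
        survivors = pop[: pop_size // 2]        # elitist truncation selection
        sigma = 0.3 * 0.97 ** gen               # decaying mutation width
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            w = rng.random()                    # blend crossover + Gaussian mutation
            children.append([w * x + (1 - w) * y + rng.gauss(0, sigma)
                             for x, y in zip(p1, p2)])
        pop = survivors + children
    return min(pop, key=lambda p: objective(p, data))

data = [(t, 2.0 * t + 1.0) for t in range(10)]   # synthetic "measurement"
best = genetic_minimize(data)
```

Because selection acts only on the objective value, the method needs no gradients, which is why it copes with nearly correlated parameters that stall gradient-based solvers.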
Overhead longwave infrared hyperspectral material identification using radiometric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelinski, M. E.
Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and Bayesian Information Criteria for model selection. The resulting product includes an optimal atmospheric profile and full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.
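The Bayesian Information Criterion used above for model selection can be sketched as follows; the Gaussian-error form and the example residuals are illustrative assumptions, not values from the paper.

```python
import math

def bic(residual_sum_squares, n_obs, n_params):
    # Gaussian-error BIC up to an additive constant:
    # lower is better; the log(n) term penalizes extra parameters.
    return n_obs * math.log(residual_sum_squares / n_obs) + n_params * math.log(n_obs)

# A 2-parameter mixture fits nearly as well as a 5-parameter one,
# so BIC prefers the simpler model (toy numbers).
simple = bic(10.0, n_obs=100, n_params=2)
complex_ = bic(9.5, n_obs=100, n_params=5)
```

In an unmixing chain like the one described, each candidate endmember set yields a constrained fit whose BIC decides whether adding a material is justified by the improved residual.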
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vicroy, D.D.; Knox, C.E.
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane are described.
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as the least squares method, the algebraic reconstruction technique, and standard Tikhonov regularization. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930
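The flavor of such a fused objective can be sketched as a data-fit term plus a term tying the estimate to the prediction from the previous frame, solved here by plain gradient descent on a toy two-pixel "field". The operator `A`, the weight `lam`, and the solver are illustrative assumptions, not the paper's tunneling method.

```python
# Minimize ||A x - y||^2 + lam * ||x - x_pred||^2 by gradient descent.
def reconstruct(A, y, x_pred, lam=1.0, steps=500, lr=0.05):
    n = len(x_pred)
    x = list(x_pred)                       # warm start from the dynamic model
    for _ in range(steps):
        # residual of the acoustic measurements: r = A x - y
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(y))]
        for j in range(n):
            grad = 2 * sum(A[i][j] * r[i] for i in range(len(y)))
            grad += 2 * lam * (x[j] - x_pred[j])   # dynamic-evolution prior
            x[j] -= lr * grad
    return x

A = [[1, 0], [0, 1], [1, 1]]          # toy path-measurement operator
y = [300.0, 310.0, 610.0]             # toy acoustic measurements
x_pred = [298.0, 312.0]               # prediction from the previous frame
x = reconstruct(A, y, x_pred)
```

The closed-form minimizer solves (AᵀA + λI)x = Aᵀy + λx_pred, so the result is pulled between the measurements and the predicted field according to λ.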
SST algorithm based on radiative transfer model
NASA Astrophysics Data System (ADS)
Mat Jafri, Mohd Z.; Abdullah, Khiruddin; Bahari, Alui
2001-03-01
An algorithm for measuring sea surface temperature (SST) without recourse to in-situ data for calibration has been proposed. The algorithm, which is based on the infrared signal recorded by the satellite sensor, is composed of three terms, namely, the surface emission, the up-welling radiance emitted by the atmosphere, and the down-welling atmospheric radiance reflected at the sea surface. This algorithm requires the transmittance values of thermal bands. The angular dependence of the transmittance function was modeled using the MODTRAN code. Radiosonde data were used with the MODTRAN code. The expression of transmittance as a function of zenith view angle was obtained for each channel through regression of the MODTRAN output. The Ocean Color Temperature Scanner (OCTS) data from the Advanced Earth Observation Satellite (ADEOS) were used in this study. The study area covers the seas of the North West of Peninsular Malaysia region. The in-situ data (ship collected SST values) were used for verification of the results. Cloud contaminated pixels were masked out using the standard procedures which have been applied to the Advanced Very High Resolution Radiometer (AVHRR) data. The cloud free pixels at the in-situ sites were extracted for analysis. The OCTS data were then substituted in the proposed algorithm. The appropriate transmittance value for each channel was then assigned in the calculation. Assessment of the accuracy was made by observing the correlation and the rms deviations between the computed and the ship collected values. The results were also compared with the results from the OCTS multi-channel sea surface temperature algorithm. The comparison produced high correlation values. The performance of this algorithm is comparable with the established OCTS algorithm. The effect of emissivity on the retrieved SST values was also investigated. An SST map was generated and contoured manually.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, Howard; Braun, James E.
2015-12-31
This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than normal operation, failing to maintain building temperatures according to the thermostat set points, etc. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
Symmetry-conserving purification of quantum states within the density matrix renormalization group
Nocera, Alberto; Alvarez, Gonzalo
2016-01-28
The density matrix renormalization group (DMRG) algorithm was originally designed to efficiently compute the zero-temperature or ground-state properties of one-dimensional strongly correlated quantum systems. The development of the algorithm at finite temperature has been a topic of much interest, because of the usefulness of thermodynamic quantities in understanding the physics of condensed matter systems, and because of the increased complexity associated with efficiently computing temperature-dependent properties. The ancilla method is a DMRG technique that enables the computation of these thermodynamic quantities. In this paper, we review the ancilla method, and improve its performance by working on reduced Hilbert spaces and using canonical approaches. Furthermore we explore its applicability beyond spin systems to t-J and Hubbard models.
Wen, Tingxi; Zhang, Zhongnan; Wong, Kelvin K. L.
2016-01-01
Unmanned aerial vehicles (UAVs) have been widely used in many industries. In the medical environment, especially in emergency situations, UAVs play an important role, such as supplying medicines and blood with speed and efficiency. In this paper, we study the problem of multi-objective blood supply by UAVs in such emergency situations. This is a complex problem that includes maintaining a model of the supplied blood's temperature during transportation, scheduling the UAVs and planning their routes when multiple sites request blood, and respecting limited carrying capacity. Most importantly, we need to study the blood's temperature change due to the external environment, the heating agent (or refrigerant), and the time factor during transportation, and propose an optimal method for calculating the mixing proportion of blood and appendage in different circumstances and delivery conditions. Then, by introducing the idea of transportation appendage into the traditional Capacitated Vehicle Routing Problem (CVRP), a new problem is formulated according to the factors of distance and weight. Algorithmically, we use a combination of a decomposition-based multi-objective evolutionary algorithm and a local search method to perform a series of experiments on the CVRP public dataset. Compared with traditional techniques, our algorithm obtains better optimization results and time performance. PMID:27163361
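The capacity constraint at the heart of CVRP can be illustrated with a greedy baseline: build routes from the depot by a nearest-neighbour rule, starting a new vehicle when remaining capacity cannot cover any pending demand. This is a simple baseline heuristic for intuition, not the paper's evolutionary algorithm, and the depot, sites, and demands are toy values (demands are assumed not to exceed capacity).

```python
import math

def cvrp_greedy(depot, sites, demands, capacity):
    # Assumes every individual demand fits within one vehicle's capacity.
    unserved = set(range(len(sites)))
    routes = []
    while unserved:
        load, pos, route = 0, depot, []
        while True:
            feasible = [i for i in unserved if load + demands[i] <= capacity]
            if not feasible:
                break                      # vehicle full: start a new route
            nxt = min(feasible, key=lambda i: math.dist(pos, sites[i]))
            route.append(nxt)
            load += demands[nxt]
            pos = sites[nxt]
            unserved.discard(nxt)
        routes.append(route)
    return routes

depot = (0.0, 0.0)
sites = [(1.0, 0.0), (2.0, 0.0), (0.0, 3.0)]
demands = [4, 4, 4]
routes = cvrp_greedy(depot, sites, demands, capacity=8)
```

Metaheuristics like the one in the paper typically start from or compare against such greedy constructions before local search refines the routes.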
NASA Technical Reports Server (NTRS)
Kitzis, J. L.; Kitzis, S. N.
1979-01-01
The brightness temperature data produced by the SMMR final Antenna Pattern Correction (APC) algorithm are discussed. The evaluation consisted of: (1) a direct comparison of the outputs of the final and interim APC algorithms; and (2) an analysis of a possible relationship between observed cross-track gradients in the interim brightness temperatures and the asymmetry in the antenna temperature data. Results indicate a bias between the brightness temperatures produced by the final and interim APC algorithms.
Unraveling Quantum Annealers using Classical Hardness
Martin-Mayor, Victor; Hen, Itay
2015-01-01
Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257
Degradation forecast for PEMFC cathode-catalysts under cyclic loads
NASA Astrophysics Data System (ADS)
Moein-Jahromi, M.; Kermani, M. J.; Movahed, S.
2017-08-01
Degradation of Fuel Cell (FC) components under cyclic loads is one of the biggest bottlenecks in FC commercialization. In this paper, a novel experiment-based algorithm is presented to predict the Catalyst Layer (CL) performance loss during cyclic load. The algorithm consists of two models, namely Models 1 and 2. Model 1 calculates the Electro-Chemical Surface Area (ECSA) and agglomerate size (e.g. agglomerate radius, rt,agg) for the catalyst layer under cyclic load. Model 2 is the already-existing model from our earlier studies that computes catalyst performance with fixed structural parameters. In combination, the two models predict the CL performance under an arbitrary cyclic load. A set of parametric/sensitivity studies is performed to investigate the effects of operating parameters on the percentage Voltage Degradation Rate (VDR%), ranking the parameters by influence. Amongst the considered parameters (temperature, relative humidity, pressure, and the minimum and maximum voltage of the cyclic load), the results show that temperature and pressure have the most and the least influence on the VDR%, respectively: increasing the temperature from 60 °C to 80 °C intensifies the VDR by over 20%, while increasing the pressure from 2 atm to 4 atm reduces the VDR by only 1.41%.
Efficiency of exchange schemes in replica exchange
NASA Astrophysics Data System (ADS)
Lingenheil, Martin; Denschlag, Robert; Mathias, Gerald; Tavan, Paul
2009-08-01
In replica exchange simulations a fast diffusion of the replicas through the temperature space maximizes the efficiency of the statistical sampling. Here, we compare the diffusion speed as measured by the round trip rates for four exchange algorithms. We find different efficiency profiles with optimal average acceptance probabilities ranging from 8% to 41%. The best performance is determined by benchmark simulations for the most widely used algorithm, which alternately tries to exchange all even and all odd replica pairs. We show analytically that the excellent performance of this exchange scheme is due to the high diffusivity of the underlying random walk.
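The alternating even/odd scheme the study singles out can be sketched as follows: on even sweeps, swaps of pairs (0,1), (2,3), ... are attempted; on odd sweeps, pairs (1,2), (3,4), .... Each swap uses the standard Metropolis criterion with Δ = (βᵢ − βⱼ)(Eᵢ − Eⱼ). The energies and inverse temperatures are toy values, and swapping the stored energies stands in for exchanging the replicas' configurations.

```python
import math
import random

def attempt_exchanges(energies, betas, sweep, rng):
    start = 0 if sweep % 2 == 0 else 1      # alternate even/odd neighbour pairs
    for i in range(start, len(betas) - 1, 2):
        # Metropolis acceptance for swapping neighbouring replicas
        delta = (betas[i] - betas[i + 1]) * (energies[i] - energies[i + 1])
        if delta >= 0 or rng.random() < math.exp(delta):
            # swap stands in for exchanging the two configurations
            energies[i], energies[i + 1] = energies[i + 1], energies[i]

rng = random.Random(0)
energies = [2.0, 1.0]       # colder replica (beta = 1.0) holds the higher energy
betas = [1.0, 0.5]
attempt_exchanges(energies, betas, sweep=0, rng=rng)
```

Because non-adjacent pairs in one sweep are disjoint, all attempts in a sweep can be made independently, which is part of why this scheme diffuses replicas so quickly.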
Pardo, Juan; Zamora-Martínez, Francisco; Botella-Rocamora, Paloma
2015-04-21
Time series forecasting is an important predictive methodology which can be applied to a wide range of problems. In particular, forecasting the indoor temperature permits improved utilization of the HVAC (Heating, Ventilating and Air Conditioning) systems in a home and thus better energy efficiency. To this end, the paper describes how to implement an Artificial Neural Network (ANN) algorithm in a low cost system-on-chip to develop an autonomous intelligent wireless sensor network. The present paper uses a Wireless Sensor Network (WSN) to monitor and forecast the indoor temperature in a smart home, based on low-resource, low-cost microcontroller technology such as the 8051 MCU. An on-line learning approach, based on the Back-Propagation (BP) algorithm for ANNs, has been developed for real-time time series learning. It trains the model with every new sample that arrives in the system, without saving enormous quantities of data to create a historical database as is usual, i.e., without previous knowledge. To validate the approach, a simulation study using a Bayesian baseline model was compared against a database from a real application in order to assess performance and accuracy. The core of the paper is a new algorithm, based on the BP one, which is described in detail; the challenge was how to implement a computationally demanding algorithm in a simple architecture with very few hardware resources. PMID:25905698
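On-line back-propagation of this kind can be sketched with a tiny one-hidden-layer network updated once per arriving sample, so no historical database is kept. The network size, learning rate, and the synthetic temperature stream are illustrative assumptions, not the paper's design.

```python
import math
import random

class TinyOnlineNet:
    """One-hidden-layer net trained by on-line (per-sample) backprop."""
    def __init__(self, hidden=4, lr=0.1, seed=0):
        rng = random.Random(seed)
        self.lr = lr
        self.w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        self.b1 = [0.0] * hidden
        self.w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
        self.b2 = 0.0

    def forward(self, x):
        self.h = [math.tanh(w * x + b) for w, b in zip(self.w1, self.b1)]
        return sum(w * h for w, h in zip(self.w2, self.h)) + self.b2

    def update(self, x, target):
        # One SGD step on the squared error of this single sample.
        err = self.forward(x) - target
        for i, h in enumerate(self.h):
            grad_h = err * self.w2[i] * (1 - h * h)   # tanh derivative
            self.w2[i] -= self.lr * err * h
            self.w1[i] -= self.lr * grad_h * x
            self.b1[i] -= self.lr * grad_h
        self.b2 -= self.lr * err

net = TinyOnlineNet()
rng = random.Random(1)
for _ in range(2000):
    x = rng.random()
    net.update(x, 0.5 * x)        # learn y = 0.5 x from the stream
```

Each sample is used once and discarded, which is what makes such training feasible on a microcontroller with very little memory.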
Thermal buckling optimisation of composite plates using firefly algorithm
NASA Astrophysics Data System (ADS)
Kamarian, S.; Shakeri, M.; Yas, M. H.
2017-07-01
Composite plates play a very important role in engineering applications, especially in the aerospace industry. Thermal buckling of such components is of great importance and must be known to achieve an appropriate design. This paper deals with stacking sequence optimisation of laminated composite plates for maximising the critical buckling temperature using a powerful meta-heuristic algorithm called the firefly algorithm (FA), which is based on the flashing behaviour of fireflies. The main objective of the present work was to show the ability of FA in the optimisation of composite structures. The performance of FA is compared with the results reported in previously published works using other algorithms, demonstrating the efficiency of FA in stacking sequence optimisation of laminated composite structures.
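The firefly mechanics referenced above can be sketched on a continuous toy objective: each firefly moves toward brighter (better) ones with an attractiveness that decays with distance, plus a shrinking random step. The settings (beta0, gamma, alpha and its decay) are typical textbook values standing in for the paper's discrete stacking-sequence formulation.

```python
import math
import random

def firefly_minimize(f, dim, n=20, iters=200, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=3):
    rng = random.Random(seed)
    xs = [[rng.uniform(-2, 2) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:          # firefly j is brighter
                    r2 = sum((a - b) ** 2 for a, b in zip(xs[i], xs[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                    xs[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                             for a, b in zip(xs[i], xs[j])]
                    vals[i] = f(xs[i])
        alpha *= 0.97                          # anneal the random step
    return min(xs, key=f)

best = firefly_minimize(lambda x: sum(v * v for v in x), dim=2)
```

The distance-decaying attraction lets distant groups explore independently while nearby fireflies converge, which is the property exploited for multimodal design problems like stacking sequences.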
Worm Algorithm simulations of the hole dynamics in the t-J model
NASA Astrophysics Data System (ADS)
Prokof'ev, Nikolai; Ruebenacker, Oliver
2001-03-01
In the limit of small J << t, relevant for HTSC materials and Mott-Hubbard systems, computer simulations have to be performed for large systems and at low temperatures. Despite convincing evidence against spin-charge separation obtained by various methods for J > 0.4t, there is an ongoing argument that at smaller J spin-charge separation is still possible. Worm algorithm Monte Carlo simulations of the hole Green function for 0.1 < J/t < 0.4 were performed on lattices with up to 32x32 sites, and at temperature J/T = 40 (for the largest size). Spectral analysis reveals a single, delta-function sharp quasiparticle peak at the lowest edge of the spectrum and two distinct peaks above it at all studied J. We rule out the possibility of spin-charge separation in this parameter range, and present what is, apparently, the hole spectral function in the thermodynamic limit.
NASA Technical Reports Server (NTRS)
Grossman, B.; Garrett, J.; Cinnella, P.
1989-01-01
Several versions of flux-vector split and flux-difference split algorithms were compared with regard to general applicability and complexity. Test computations were performed using curve-fit equilibrium air chemistry for an M = 5 high-temperature inviscid flow over a wedge, and an M = 24.5 inviscid flow over a blunt cylinder for test computations; for these cases, little difference in accuracy was found among the versions of the same flux-split algorithm. For flows with nonequilibrium chemistry, the effects of the thermodynamic model on the development of flux-vector split and flux-difference split algorithms were investigated using an equilibrium model, a general nonequilibrium model, and a simplified model based on vibrational relaxation. Several numerical examples are presented, including nonequilibrium air chemistry in a high-temperature shock tube and nonequilibrium hydrogen-air chemistry in a supersonic diffuser.
Mori, Yoshiharu; Okumura, Hisashi
2015-12-05
Simulated tempering (ST) is a useful method to enhance sampling in molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition was proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time. These results suggest that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm. © 2015 Wiley Periodicals, Inc.
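The baseline Metropolis temperature move in simulated tempering can be sketched as follows: a single replica proposes hopping to a neighbouring temperature, with precomputed weights g_k that flatten the distribution over temperatures. The two-temperature ladder, weights, and fixed energy are toy values; the Suwa-Todo variant replaces only the acceptance rule.

```python
import math
import random

def tempering_move(k, energy, betas, g, rng):
    """One Metropolis temperature-index move for a simulated tempering run."""
    trial = k + rng.choice((-1, 1))
    if not 0 <= trial < len(betas):
        return k                               # reject moves off the ladder
    # Acceptance ratio for weight exp(-beta_k * E + g_k):
    delta = (betas[k] - betas[trial]) * energy + g[trial] - g[k]
    if delta >= 0 or rng.random() < math.exp(delta):
        return trial
    return k

rng = random.Random(0)
betas, g = [1.0, 0.5], [0.0, 0.0]
visited, k = set(), 0
for _ in range(100):
    k = tempering_move(k, 2.0, betas, g, rng)   # fixed toy energy
    visited.add(k)
```

In a full simulation the configuration is evolved at the current temperature between such moves, and the weights g_k are tuned so all temperatures are visited roughly equally.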
Dyck, P J; Zimmerman, I; Gillen, D A; Johnson, D; Karnes, J L; O'Brien, P C
1993-08-01
We recently found that vibratory detection threshold is greatly influenced by the algorithm of testing. Here, we study the influence of stimulus characteristics and algorithm of testing and estimating threshold on cool (CDT), warm (WDT), and heat-pain (HPDT) detection thresholds. We show that continuously decreasing (for CDT) or increasing (for WDT) thermode temperature to the point at which cooling or warming is perceived and signaled by depressing a response key ("appearance" threshold) overestimates threshold with rapid rates of thermal change. The mean of the appearance and disappearance thresholds also does not perform well for insensitive sites and patients. Pyramidal (or flat-topped pyramidal) stimuli ranging in magnitude, in 25 steps, from near skin temperature to 9 degrees C for 10 seconds (for CDT), from near skin temperature to 45 degrees C for 10 seconds (for WDT), and from near skin temperature to 49 degrees C for 10 seconds (for HPDT) provide ideal stimuli for use in several algorithms of testing and estimating threshold. Near threshold, only the initial direction of thermal change from skin temperature is perceived, and not its return to baseline. Use of steps of stimulus intensity allows the subject or patient to take the needed time to decide whether the stimulus was felt or not (in 4, 2, and 1 stepping algorithms), or whether it occurred in stimulus interval 1 or 2 (in two-alternative forced-choice testing). Thermal thresholds were generally significantly lower with a large (10 cm2) than with a small (2.7 cm2) thermode.(ABSTRACT TRUNCATED AT 250 WORDS)
Fail Safe, High Temperature Magnetic Bearings
NASA Technical Reports Server (NTRS)
Minihan, Thomas; Palazzolo, Alan; Kim, Yeonkyu; Lei, Shu-Liang; Kenny, Andrew; Na, Uhn Joo; Tucker, Randy; Preuss, Jason; Hunt, Andrew; Carter, Bart;
2002-01-01
This paper contributes to the magnetic bearing literature in two distinct areas: high temperature and redundant actuation. Design considerations and test results are given for the first published combined 538 C (1000 F) high speed rotating test performance of a magnetic bearing. Secondly, a significant extension of the flux isolation based, redundant actuator control algorithm is proposed to eliminate the prior deficiency of changing position stiffness after failure. The benefit of the novel extension was not experimentally demonstrated due to a high active stiffness requirement. In addition, test results are given for actuator failure tests at 399 C (750 F), 12,500 rpm. Finally, simulation results are presented confirming the experimental data and validating the redundant control algorithm.
Time series analysis of infrared satellite data for detecting thermal anomalies: a hybrid approach
NASA Astrophysics Data System (ADS)
Koeppen, W. C.; Pilger, E.; Wright, R.
2011-07-01
We developed and tested an automated algorithm that analyzes thermal infrared satellite time series data to detect and quantify the excess energy radiated from thermal anomalies such as active volcanoes. Our algorithm enhances the previously developed MODVOLC approach, a simple point operation, by adding a more complex time series component based on the methods of the Robust Satellite Techniques (RST) algorithm. Using test sites at Anatahan and Kīlauea volcanoes, the hybrid time series approach detected ~15% more thermal anomalies than MODVOLC with very few, if any, known false detections. We also tested gas flares in the Cantarell oil field in the Gulf of Mexico as an end-member scenario representing very persistent thermal anomalies. At Cantarell, the hybrid algorithm showed only a slight improvement, but it did identify flares that were undetected by MODVOLC. We estimate that at least 80 MODIS images for each calendar month are required to create good reference images necessary for the time series analysis of the hybrid algorithm. The improved performance of the new algorithm over MODVOLC will result in the detection of low temperature thermal anomalies that will be useful in improving our ability to document Earth's volcanic eruptions, as well as detecting low temperature thermal precursors to larger eruptions.
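A minimal sketch of the two detection ingredients combined above: a MODVOLC-style normalized thermal index point test (the customary -0.8 threshold is assumed) plus an RST-style z-score against the pixel's own reference statistics; names and thresholds are illustrative:

```python
import statistics

def nti(radiance_4um, radiance_12um):
    # MODVOLC-style normalized thermal index (simple point operation)
    return (radiance_4um - radiance_12um) / (radiance_4um + radiance_12um)

def rst_zscore(current, history):
    # RST-style anomaly index: deviation of the current value from the
    # pixel's reference statistics built from a time series of images
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (current - mu) / sigma

def is_anomaly(r4, r12, history_nti, nti_thresh=-0.8, z_thresh=2.0):
    # Hybrid rule: flag if either the point test or the time-series
    # test fires (illustrative combination, not the paper's exact rule)
    v = nti(r4, r12)
    return v > nti_thresh or rst_zscore(v, history_nti) > z_thresh
```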
Long-Term Evaluation of the AMSR-E Soil Moisture Product Over the Walnut Gulch Watershed, AZ
NASA Astrophysics Data System (ADS)
Bolten, J. D.; Jackson, T. J.; Lakshmi, V.; Cosh, M. H.; Drusch, M.
2005-12-01
The Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) was launched aboard NASA's Aqua satellite on May 4, 2002. Quantitative estimates of soil moisture using the AMSR-E data have required routine radiometric data calibration and validation using comparisons of satellite observations, extended targets, and field campaigns. The currently applied NASA EOS Aqua AMSR-E soil moisture algorithm is based on a change detection approach using polarization ratios (PR) of the calibrated AMSR-E channel brightness temperatures. To date, the accuracy of the soil moisture algorithm has been investigated on short time scales during field campaigns such as the Soil Moisture Experiments in 2004 (SMEX04). Results have indicated self-consistency and calibration stability of the observed brightness temperatures; however, the performance of the moisture retrieval algorithm has been poor. The primary objective of this study is to evaluate the quality of the current version of the AMSR-E soil moisture product for a three-year period over the Walnut Gulch Experimental Watershed (150 km2) near Tombstone, AZ, the northern study area of SMEX04. This watershed is equipped with hourly and daily recording of precipitation, soil moisture, and temperature via a network of raingages and a USDA-NRCS Soil Climate Analysis Network (SCAN) site. Surface wetting and drying are easily distinguished in this area due to the moderately-vegetated terrain and seasonally intense precipitation events. Validation of AMSR-E derived soil moisture is performed from June 2002 to June 2005 using watershed averages of precipitation, and soil moisture and temperature data from the SCAN site supported by a surface soil moisture network. Long-term assessment of soil moisture algorithm performance is investigated by comparing temporal variations of moisture estimates with seasonal changes and precipitation events.
Further comparisons are made with a standard soil dataset from the European Centre for Medium-Range Weather Forecasts. The results of this research will contribute to a better characterization of the low biases and discrepancies currently observed in the AMSR-E soil moisture product.
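The polarization ratio at the heart of the change detection approach above is a one-line quantity; a minimal sketch (argument names assumed):

```python
def polarization_ratio(tb_v, tb_h):
    """Polarization ratio of V- and H-pol brightness temperatures (K).
    Wet soil depresses the H-pol brightness temperature more than the
    V-pol one, so PR rises with surface soil moisture."""
    return (tb_v - tb_h) / (tb_v + tb_h)
```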
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vicroy, D.D.
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard temperature effects. An explanation and examples of how the algorithm is used, as well as a detailed flow chart and listing of the algorithm, are included.
An improved molecular dynamics algorithm to study thermodiffusion in binary hydrocarbon mixtures
NASA Astrophysics Data System (ADS)
Antoun, Sylvie; Saghir, M. Ziad; Srinivasan, Seshasai
2018-03-01
In multicomponent liquid mixtures, the diffusion flow of chemical species can be induced by temperature gradients, which leads to a separation of the constituent components. This cross effect between temperature and concentration is known as thermodiffusion or the Ludwig-Soret effect. The performance of boundary driven non-equilibrium molecular dynamics along with the enhanced heat exchange (eHEX) algorithm was studied by assessing the thermodiffusion process in n-pentane/n-decane (nC5-nC10) binary mixtures. The eHEX algorithm consists of an extended version of the HEX algorithm with an improved energy conservation property. In addition to this, the transferable potentials for phase equilibria-united atom force field were employed in all molecular dynamics (MD) simulations to precisely model the molecular interactions in the fluid. The Soret coefficients of the n-pentane/n-decane (nC5-nC10) mixture for three different compositions (at 300.15 K and 0.1 MPa) were calculated and compared with the experimental data and other MD results available in the literature. Results of our newly employed MD algorithm showed great agreement with experimental data and a better accuracy compared to other MD procedures.
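The Soret coefficient reported in such simulations is typically recovered from the steady-state gradients; a minimal sketch of the standard relation S_T = -(1/(w0(1-w0)))(dw/dz)/(dT/dz), with hypothetical argument names:

```python
def soret_coefficient(dw_dz, dT_dz, w0):
    """Steady-state Soret coefficient (1/K) from the fitted mass-fraction
    gradient dw/dz and temperature gradient dT/dz, for a binary mixture
    with reference mass fraction w0 of the tracked component."""
    return -(dw_dz / dT_dz) / (w0 * (1.0 - w0))
```

A positive S_T means the tracked component migrates toward the cold side, producing the separation described above.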
NASA Astrophysics Data System (ADS)
Chang, Yaping; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Zhang, Shiqiang
2018-06-01
The long-term change of evapotranspiration (ET) is crucial for managing water resources in areas with extreme climates, such as the Tibetan Plateau (TP). This study proposed a modified algorithm for estimating ET based on the MOD16 algorithm on a global scale over alpine meadow on the TP in China. Wind speed and vegetation height were integrated to estimate aerodynamic resistance, while the temperature and moisture constraints for stomatal conductance were revised based on the technique proposed by Fisher et al. (2008). Moreover, Fisher's method for soil evaporation was adopted to reduce the uncertainty in soil evaporation estimation. Five representative alpine meadow sites on the TP were selected to investigate the performance of the modified algorithm. Comparisons were made between the ET observed using the Eddy Covariance (EC) and estimated using both the original and modified algorithms. The results revealed that the modified algorithm performed better than the original MOD16 algorithm, with the coefficient of determination (R2) increasing from 0.26 to 0.68 and root mean square error (RMSE) decreasing from 1.56 to 0.78 mm d-1. The modified algorithm performed slightly better, with a higher R2 (0.70) and lower RMSE (0.61 mm d-1), for after-precipitation days than for non-precipitation days at the Suli site. Conversely, better results were obtained for non-precipitation days than for after-precipitation days at the Arou, Tanggula, and Hulugou sites, indicating that the modified algorithm may be more suitable for estimating ET for non-precipitation days, with higher accuracy than for after-precipitation days, which had large observation errors. The comparisons between the modified algorithm and two mainstream methods suggested that the modified algorithm could produce high-accuracy ET estimates over the alpine meadow sites on the TP.
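The aerodynamic-resistance step above (wind speed plus vegetation height) can be sketched with the common log-profile form; the displacement-height and roughness-length rules of thumb (d = 0.67h, z0m = 0.123h, z0h = 0.1·z0m) are assumptions borrowed from standard micrometeorology, not the paper's fitted values:

```python
import math

def aerodynamic_resistance(u, z_m=2.0, h_veg=0.1):
    """Aerodynamic resistance r_a (s/m) from log-profile theory for
    neutral conditions: wind speed u (m/s) at measurement height z_m (m)
    over vegetation of height h_veg (m). d, z0m, z0h use the common
    rules of thumb (assumed here, not the paper's parameterization)."""
    k = 0.41                      # von Karman constant
    d = 0.67 * h_veg              # zero-plane displacement height
    z0m = 0.123 * h_veg           # roughness length for momentum
    z0h = 0.1 * z0m               # roughness length for heat
    return (math.log((z_m - d) / z0m)
            * math.log((z_m - d) / z0h)) / (k * k * u)
```

Resistance falls as wind speed rises or as the canopy gets taller and rougher, which is why both inputs matter for ET.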
Optimization design of LED heat dissipation structure based on strip fins
NASA Astrophysics Data System (ADS)
Xue, Lingyun; Wan, Wenbin; Chen, Qingguang; Rao, Huanle; Xu, Ping
2018-03-01
To solve the heat dissipation problem of LEDs, a radiator structure based on strip fins is designed, and a method to optimize the structure parameters of the strip fins is proposed in this paper. The combination of RBF neural networks and the particle swarm optimization (PSO) algorithm is used for modeling and optimization, respectively. During the experiment, 150 datasets of LED junction temperature for different values of the structure parameters (number, length, width, and height of the strip fins) were obtained with ANSYS software. An RBF neural network was then applied to build the non-linear regression model, and parameter optimization based on the PSO algorithm was performed with this model. The experimental results show that the lowest LED junction temperature of 43.88°C is reached when the RBF network has 10 hidden-layer nodes, the two PSO learning factors are both 0.5, the inertia factor is 1, and the maximum number of iterations is 100; the corresponding design has 64 fins in an 8×8 layout, with fin length, width, and height of 4.3 mm, 4.48 mm, and 55.3 mm, respectively. To check the modeling and optimization results, the LED junction temperature at the optimized structure parameters was simulated; the result, 43.592°C, approximately equals the optimum found. Compared with an ordinary plate-fin radiator structure, whose temperature is 56.38°C, the proposed structure greatly enhances heat dissipation performance.
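A minimal PSO sketch of the optimization stage; here an analytic toy function stands in for the RBF surrogate of junction temperature, and all names and defaults are illustrative:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Plain particle swarm optimization over box bounds.
    f: objective (e.g. a surrogate model of junction temperature);
    bounds: list of (lo, hi) per dimension. Returns (best_x, best_f)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

In the paper's setting, f would be the trained RBF model mapping fin count, length, width, and height to predicted junction temperature.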
Zhang, F; de Dear, R
2017-01-01
As one of the most common strategies for managing peak electricity demand, direct load control (DLC) of air-conditioners involves cycling the compressors on and off at predetermined intervals. In university lecture theaters, the implementation of DLC induces temperature cycles which might compromise university students' learning performance. In these experiments, university students' learning performance, represented by four cognitive skills of memory, concentration, reasoning, and planning, was closely monitored under DLC-induced temperature cycles and control conditions simulated in a climate chamber. In Experiment 1, with a cooling set point temperature of 22°C, subjects' cognitive performance was relatively stable or even slightly promoted by the mild heat intensity and short heat exposure resulting from temperature cycles; in Experiment 2, with a cooling set point of 24°C, subjects' reasoning and planning performance showed a declining trend at the higher heat intensity and longer heat exposure. Results confirm that simpler cognitive tasks are less susceptible to temperature effects than more complex tasks; the effect of thermal variations on cognitive performance follows an extended-U relationship, with performance being relatively stable across a range of temperatures. DLC appears to be feasible in university lecture theaters if DLC algorithms are implemented judiciously. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A Fiber Bragg Grating Interrogation System with Self-Adaption Threshold Peak Detection Algorithm.
Zhang, Weifang; Li, Yingwu; Jin, Bo; Ren, Feifei; Wang, Hongxun; Dai, Wei
2018-04-08
A Fiber Bragg Grating (FBG) interrogation system with a self-adaption threshold peak detection algorithm is proposed and experimentally demonstrated in this study. The system is composed of a field programmable gate array (FPGA) and advanced RISC machine (ARM) platform, a tunable Fabry-Perot (F-P) filter, and an optical switch. To improve system resolution, the F-P filter was employed; because the filter's tuning response is non-linear, the measured central wavelengths shift, and this deviation is compensated by dedicated circuitry. Time-division multiplexing (TDM) of FBG sensors is achieved by the optical switch, allowing the system to combine up to 256 FBG sensors. A wavelength scanning speed of 800 Hz is achieved by the FPGA+ARM platform. In addition, a peak detection algorithm based on a self-adaption threshold is designed, with a peak recognition rate of 100%. Experiments at different temperatures were conducted to demonstrate the effectiveness of the system. Four FBG sensors were examined in a thermal chamber without stress. When the temperature changed from 0 °C to 100 °C, the degree of linearity between central wavelengths and temperature was about 0.999, with a temperature sensitivity of 10 pm/°C. The static interrogation precision reached 0.5 pm. Through comparison of different peak detection algorithms and interrogation approaches, the system was verified to have the best comprehensive performance in terms of precision, capacity, and speed.
Yi, S.; Li, N.; Xiang, B.; Wang, X.; Ye, B.; McGuire, A.D.
2013-01-01
Soil surface temperature is a critical boundary condition for the simulation of soil temperature by environmental models. It is influenced by atmospheric and soil conditions and by vegetation cover. In sophisticated land surface models, it is simulated iteratively by solving surface energy budget equations. In ecosystem, permafrost, and hydrology models, the consideration of soil surface temperature is generally simple. In this study, we developed a methodology for representing the effects of vegetation cover and atmospheric factors on the estimation of soil surface temperature for alpine grassland ecosystems on the Qinghai-Tibetan Plateau. Our approach integrated measurements from meteorological stations with simulations from a sophisticated land surface model to develop an equation set for estimating soil surface temperature. After implementing this equation set into an ecosystem model and evaluating the performance of the ecosystem model in simulating soil temperature at different depths in the soil profile, we applied the model to simulate interactions among vegetation cover, freeze-thaw cycles, and soil erosion to demonstrate potential applications made possible through the implementation of the methodology developed in this study. Results showed that (1) to properly estimate daily soil surface temperature, algorithms should use air temperature, downward solar radiation, and vegetation cover as independent variables; (2) the equation set developed in this study performed better than soil surface temperature algorithms used in other models; and (3) the ecosystem model performed well in simulating soil temperature throughout the soil profile using the equation set developed in this study. 
Our application of the model indicates that the representation in ecosystem models of the effects of vegetation cover on the simulation of soil thermal dynamics has the potential to substantially improve our understanding of the vulnerability of alpine grassland ecosystems to changes in climate and grazing regimes.
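The fitted equation set itself is not reproduced in the abstract; the sketch below only illustrates its stated functional form, with air temperature, downward solar radiation, and vegetation cover as the independent variables and purely illustrative coefficients:

```python
def soil_surface_temperature(t_air, solar_down, f_veg,
                             a=0.9, b=0.01, c=1.5):
    """Hypothetical equation of the form described above: soil surface
    temperature (deg C) rises with air temperature t_air (deg C) and with
    the downward solar radiation solar_down (W/m2) reaching bare ground,
    attenuated by fractional vegetation cover f_veg in [0, 1].
    Coefficients a, b, c are illustrative, NOT the paper's fitted values."""
    return a * t_air + b * solar_down * (1.0 - f_veg) + c
```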
Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna Roberts
Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration-based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally, the results will provide better NUC performance, resulting in less residual non-uniformity, as well as reduce the need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher-order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short wave infrared cameras.
The initial results show, using a third-order polynomial with 16-bit precision, significant improvement over the one- and two-point correction algorithms. All algorithms have been implemented in software with satisfactory results, and the third-order gain equalization non-uniformity correction algorithm has been implemented in hardware.
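A per-pixel third-order correction of the kind discussed can be sketched by passing a cubic through four calibration points; Lagrange interpolation is used here for brevity and is an assumption, not the dissertation's fitting method:

```python
def nuc_correction(cal_raw, cal_true):
    """Build a per-pixel third-order non-uniformity correction from
    4 calibration points (raw counts -> true flux level) by Lagrange
    interpolation; the unique cubic through 4 points. Returns a
    function mapping a raw reading to its corrected value."""
    assert len(cal_raw) == 4 and len(cal_true) == 4
    def corrected(x):
        total = 0.0
        for i in range(4):
            term = cal_true[i]
            for j in range(4):
                if j != i:
                    term *= (x - cal_raw[j]) / (cal_raw[i] - cal_raw[j])
            total += term
        return total
    return corrected
```

Applied pixel-by-pixel, this linearizes each detector's individual response curve, which is exactly where a one- or two-point (linear) correction leaves residual non-uniformity.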
A random forest algorithm for nowcasting of intense precipitation events
NASA Astrophysics Data System (ADS)
Das, Saurabh; Chakraborty, Rohit; Maitra, Animesh
2017-09-01
Automatic nowcasting of convective initiation and thunderstorms has potential applications in several sectors including aviation planning and disaster management. In this paper, a random forest-based machine learning algorithm is tested for nowcasting of convective rain with a ground-based radiometer. Brightness temperatures measured at 14 frequencies (7 frequencies in the 22-31 GHz band and 7 frequencies in the 51-58 GHz band) are utilized as the inputs of the model. The lower frequency band is associated with water vapor absorption, whereas the upper frequency band relates to oxygen absorption; together they provide information on the temperature and humidity of the atmosphere. The synthetic minority over-sampling technique is used to balance the data set, and 10-fold cross validation is used to assess the performance of the model. Results indicate that the random forest algorithm with a fixed alarm generation time of 30 min and 60 min performs quite well (probability of detection ∼90% across all weather conditions) with low false alarms. It is, however, also observed that reducing the alarm generation time improves the threat score significantly and also decreases false alarms. The proposed model is found to be very sensitive to boundary layer instability, as indicated by the variable importance measure. The study shows the suitability of a random forest algorithm for nowcasting applications utilizing a large number of input parameters from diverse sources, and it can be utilized in other forecasting problems.
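The verification scores quoted above (probability of detection, false alarms, threat score) follow from the standard event contingency table; a minimal sketch:

```python
def nowcast_scores(hits, misses, false_alarms):
    """Standard categorical verification scores for event nowcasts,
    computed from a 2x2 contingency table of forecast vs. observed."""
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    csi = hits / (hits + misses + false_alarms) # threat score (CSI)
    return pod, far, csi
```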
Discrete event performance prediction of speculatively parallel temperature-accelerated dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zamora, Richard James; Voter, Arthur F.; Perez, Danny
2016-12-01
Due to its unrivaled ability to predict the dynamical evolution of interacting atoms, molecular dynamics (MD) is a widely used computational method in theoretical chemistry, physics, biology, and engineering. Despite its success, MD is only capable of modeling time scales within several orders of magnitude of thermal vibrations, leaving out many important phenomena that occur at slower rates. The Temperature Accelerated Dynamics (TAD) method overcomes this limitation by thermally accelerating the state-to-state evolution captured by MD. Due to the algorithmically complex nature of the serial TAD procedure, implementations have yet to improve performance by parallelizing the concurrent exploration of multiple states. Here we utilize a discrete event-based application simulator to introduce and explore a new Speculatively Parallel TAD (SpecTAD) method. We investigate the SpecTAD algorithm, without a full-scale implementation, by constructing an application simulator proxy (SpecTADSim). Following this method, we discover that a nontrivial relationship exists between the optimal SpecTAD parameter set and the number of CPU cores available at run-time. Furthermore, we find that a majority of the available SpecTAD boost can be achieved within an existing TAD application using relatively simple algorithm modifications.
Hudson, Thomas J; Looi, Thomas; Pichardo, Samuel; Amaral, Joao; Temple, Michael; Drake, James M; Waspe, Adam C
2018-02-01
Magnetic resonance-guided focused ultrasound (MRgFUS) is emerging as a treatment alternative for osteoid osteoma and painful bone metastases. This study describes a new simulation platform that predicts the distribution of heat generated by MRgFUS when applied to bone tissue. Calculation of the temperature distribution was performed using two mathematical models. The first determined the propagation and absorption of acoustic energy through each medium, and this was performed using a multilayered approximation of the Rayleigh integral method. The ultrasound energy distribution derived from these equations could then be converted to heat energy, and the second mathematical model would then use the heat generated to determine the final temperature distribution using a finite-difference time-domain application of Pennes' bio-heat transfer equation. Anatomical surface geometry was generated using a modified version of a mesh-based semiautomatic segmentation algorithm, and both the acoustic and thermodynamic models were calculated using a parallelized algorithm running on a graphics processing unit (GPU) to greatly accelerate computation time. A series of seven porcine experiments were performed to validate the model, comparing simulated temperatures to MR thermometry and assessing spatial, temporal, and maximum temperature accuracy in the soft tissue. The parallelized algorithm performed acoustic and thermodynamic calculations on grids of over 10^8 voxels in under 30 s for a simulated 20 s of heating and 40 s of cooling, with a maximum time per calculated voxel of less than 0.3 μs. Accuracy was assessed by comparing the soft tissue thermometry to the simulation in the soft tissue adjacent to bone using four metrics. The maximum temperature difference between the simulation and thermometry in a region of interest around the bone was measured to be 5.43 ± 3.51°C average absolute difference and a percentage difference of 16.7%. 
The difference in heating location resulted in a total root-mean-square error of 4.21 ± 1.43 mm. The total size of the ablated tissue calculated from the thermal dose approximation in the simulation was, on average, 67.6% smaller than measured from the thermometry. The cooldown was much faster in the simulation, where it decreased by 14.22 ± 4.10°C more than the thermometry in 40 s after sonication ended. The use of a Rayleigh-based acoustic model combined with a discretized bio-heat transfer model provided a rapid three-dimensional calculation of the temperature distribution through bone and soft tissue during MRgFUS application, and the parallelized GPU algorithm provided the computational speed that would be necessary for an intraoperative treatment planning software platform. © 2017 American Association of Physicists in Medicine.
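The second model above, an explicit finite-difference form of Pennes' bio-heat equation, can be sketched in one dimension; the coefficients are typical soft-tissue values and are illustrative only:

```python
def pennes_step(T, dt, dx, k=0.5, rho=1050.0, c=3600.0, w_b=0.004,
                rho_b=1050.0, c_b=3800.0, T_a=37.0, q=None):
    """One explicit finite-difference step of the 1-D Pennes equation:
        rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q
    T: temperatures (deg C) on a uniform grid of spacing dx (m);
    q: volumetric heat source (W/m^3), e.g. absorbed ultrasound power.
    Dirichlet boundaries are held at their current values. Stability
    requires dt < rho*c*dx^2 / (2*k)."""
    n = len(T)
    q = q or [0.0] * n
    Tn = T[:]
    for i in range(1, n - 1):
        lap = (T[i - 1] - 2.0 * T[i] + T[i + 1]) / (dx * dx)
        perf = w_b * rho_b * c_b * (T_a - T[i])  # blood perfusion sink
        Tn[i] = T[i] + dt * (k * lap + perf + q[i]) / (rho * c)
    return Tn
```

Iterating this step during "sonication" (q on) and "cooldown" (q off) reproduces, in miniature, the heating/cooling cycle the simulation platform computes in 3-D on the GPU.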
NASA Technical Reports Server (NTRS)
Foster, J. L.; Chang, A. T. C.; Hall, D. K.
1997-01-01
While it is recognized that no single snow algorithm is capable of producing accurate global estimates of snow depth, for research purposes it is useful to test an algorithm's performance in different climatic areas in order to see how it responds to a variety of snow conditions. This study is one of the first to develop separate passive microwave snow algorithms for North America and Eurasia by including parameters that consider the effects of variations in forest cover and crystal size on microwave brightness temperature. A new algorithm (GSFC 1996) is compared to a prototype algorithm (Chang et al., 1987) and to a snow depth climatology (SDC), which for this study is considered to be a standard reference or baseline. It is shown that the GSFC 1996 algorithm compares much more favorably to the SDC than does the Chang et al. (1987) algorithm. For example, in North America in February there is a 15% difference between the GSFC 1996 algorithm and the SDC, but with the Chang et al. (1987) algorithm the difference is greater than 50%. In Eurasia, also in February, there is only a 1.3% difference between the GSFC 1996 algorithm and the SDC, whereas with the Chang et al. (1987) algorithm the difference is about 20%. As expected, differences tend to be less when the snow cover extent is greater, particularly for Eurasia. The GSFC 1996 algorithm performs better in North America in each month than does the Chang et al. (1987) algorithm. This is also the case in Eurasia, except in April and May, when the Chang et al. (1987) algorithm is in closer accord with the SDC than is the GSFC 1996 algorithm.
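The prototype Chang et al. (1987) algorithm referenced above is a spectral-difference retrieval; the sketch below uses the widely quoted SD = 1.59·(Tb18H - Tb37H) cm form, and the forest-fraction scaling is a placeholder assumption standing in for the GSFC 1996-style adjustment, not the published formula:

```python
def snow_depth_cm(tb_18h, tb_37h, coeff=1.59, forest_fraction=0.0):
    """Spectral-difference snow depth in the spirit of Chang et al.
    (1987): deeper snow scatters more strongly at 37 GHz than at
    18 GHz, widening the H-pol brightness-temperature difference.
    The forest_fraction scaling is a hypothetical placeholder for a
    forest-cover adjustment; it is NOT the published GSFC 1996 form."""
    diff = tb_18h - tb_37h
    if diff <= 0:
        return 0.0  # no scattering signal -> no retrievable snow
    return coeff * diff / max(1.0 - forest_fraction, 0.1)
```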
NASA Astrophysics Data System (ADS)
Kim, D.; Youn, J.; Kim, C.
2017-08-01
As a malfunctioning PV (photovoltaic) cell has a higher temperature than adjacent normal cells, we can detect it easily with a thermal infrared sensor. However, inspecting large-scale PV power plants with a hand-held thermal infrared sensor is time-consuming. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, taking the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One characteristic of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules from individual arrays automatically. The performance of the proposed algorithm was tested on three sample images, which verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.
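The local detection rule lends itself to a compact sketch. In the illustration below, the function name, the threshold multiplier `k`, and the assumption that per-panel mean intensities have already been extracted from the thermal image are all ours, not the paper's exact formulation: a panel is flagged when its mean intensity deviates from the array-level mean by more than `k` standard deviations.

```python
import statistics

def detect_faulty_panels(panel_means, k=2.0):
    """Local detection rule (sketch): flag panels within ONE PV array whose
    mean thermal intensity deviates from the array-level mean by more than
    k times the standard deviation of the panel means.

    panel_means : per-panel mean intensities for a single array
    Returns the indices of suspected defective panels.
    """
    mu = statistics.mean(panel_means)
    sigma = statistics.pstdev(panel_means)
    if sigma == 0:
        return []  # perfectly uniform array: nothing to flag
    return [i for i, m in enumerate(panel_means) if abs(m - mu) > k * sigma]
```

Because the rule is evaluated per array, the distance-dependent drop in measured temperature that the authors describe largely cancels out within each array, which is the point of preferring a local over a global rule.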
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, B.; Misra, A.; Fricke, B.A.
1997-12-31
A computer algorithm was developed that estimates the latent and sensible heat loads due to the bulk refrigeration of fruits and vegetables. The algorithm also predicts the commodity moisture loss and temperature distribution which occur during refrigeration. Part 1 focused upon the thermophysical properties of commodities and the flowfield parameters which govern the heat and mass transfer from fresh fruits and vegetables. This paper, Part 2, discusses the modeling methodology utilized in the current computer algorithm and describes the development of the heat and mass transfer models. Part 2 also compares the results of the computer algorithm to experimental data taken from the literature and describes a parametric study which was performed with the algorithm. In addition, this paper also reviews existing numerical models for determining the heat and mass transfer in bulk loads of fruits and vegetables.
SPARC GENERATED CHEMICAL PROPERTIES DATABASE FOR USE IN NATIONAL RISK ASSESSMENTS
The SPARC (SPARC Performs Automated Reasoning in Chemistry) model was used to provide temperature-dependent algorithms used to estimate chemical properties for approximately 200 chemicals of interest to the promulgation of the Hazardous Waste Identification Rule (HWIR). Proper...
NASA Technical Reports Server (NTRS)
Knox, C. E.
1983-01-01
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
Toward an Objective Enhanced-V Detection Algorithm
NASA Technical Reports Server (NTRS)
Moses, John F.; Brunner, Jason C.; Feltz, Wayne F.; Ackerman, Steven A.; Rabin, Robert M.
2007-01-01
The area of coldest cloud tops above thunderstorms sometimes has a distinct V or U shape. This pattern, often referred to as an "enhanced-V" signature, has been observed to occur during and preceding severe weather. This study describes an algorithmic approach to objectively detect overshooting tops, temperature couplets, and enhanced-V features with observations from the Geostationary Operational Environmental Satellite and Low Earth Orbit data. The methodology consists of temperature, temperature difference, and distance thresholds for the overshooting top and temperature couplet detection parts of the algorithm, and of cross-correlation statistics of pixels for the enhanced-V detection part of the algorithm. The effectiveness of the overshooting top and temperature couplet detection components of the algorithm is examined using GOES and MODIS image data for case studies in the 2003-2006 seasons. The main goal is for the algorithm to be useful for operations with future sensors, such as GOES-R.
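The overshooting-top stage of such an algorithm reduces to threshold tests on a brightness-temperature grid. In the sketch below the specific threshold values and the neighborhood radius are illustrative placeholders, not the operational settings: a pixel is flagged when it is both absolutely cold and markedly colder than the surrounding anvil.

```python
def find_overshooting_tops(bt, cold_thresh=215.0, contrast=6.5, radius=2):
    """Threshold sketch of overshooting-top detection on a 2-D grid of IR
    brightness temperatures (K). A pixel is flagged when it is colder than
    `cold_thresh` AND colder than the mean of its neighbourhood by at least
    `contrast` K. Threshold values are illustrative only."""
    rows, cols = len(bt), len(bt[0])
    tops = []
    for r in range(rows):
        for c in range(cols):
            if bt[r][c] >= cold_thresh:
                continue  # not cold enough in absolute terms
            neigh = [bt[i][j]
                     for i in range(max(0, r - radius), min(rows, r + radius + 1))
                     for j in range(max(0, c - radius), min(cols, c + radius + 1))
                     if (i, j) != (r, c)]
            if sum(neigh) / len(neigh) - bt[r][c] >= contrast:
                tops.append((r, c))
    return tops
```

A temperature-couplet test would then search for a warm pixel within a fixed distance downstream of each flagged top; the cross-correlation step for the V shape itself operates on larger pixel templates.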
Tracking control of a spool displacement in a direct piezoactuator-driven servo valve system
NASA Astrophysics Data System (ADS)
Han, Chulhee; Hwang, Yong-Hoon; Choi, Seung-Bok
2017-03-01
This paper presents the tracking control performance of a piezostack direct drive valve (PDDV) operated at various temperatures. As a first step, a spool valve and valve system operated by the piezoactuator are designed. After briefly describing the operating principle, an experimental apparatus to investigate the effect of temperature on performance is set up. Subsequently, the PDDV is installed in a large-size heat chamber equipped with electric circuits and sensors. A classical proportional-integral-derivative (PID) controller is designed and applied to control the spool displacement. In addition, a fuzzy algorithm is integrated with the PID controller to enhance the performance of the proposed valve system. The tracking performance of the spool displacement is tested by increasing the temperature and exciting frequency up to 150°C and 200 Hz, respectively. It is shown that the tracking performance heavily depends on both the operating temperature and the excitation frequency.
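The classical PID loop described above can be sketched as follows. The gains, the time step, and the first-order plant standing in for the spool dynamics are all illustrative assumptions, not the tuned values from the paper:

```python
class PID:
    """Classical discrete PID controller (sketch)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def simulate(pid, setpoint=1.0, steps=5000, tau=0.05, dt=0.001):
    """Drive a toy first-order plant, dx/dt = (u - x)/tau, toward the
    setpoint; returns the final displacement."""
    x = 0.0
    for _ in range(steps):
        u = pid.update(setpoint, x)
        x += dt * (u - x) / tau
    return x
```

The integral term is what removes the steady-state offset here; a fuzzy supervisor like the one in the paper would typically reschedule the three gains as temperature and frequency vary.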
Shilov, Ignat V; Seymour, Sean L; Patel, Alpesh A; Loboda, Alex; Tang, Wilfred H; Keating, Sean P; Hunter, Christie L; Nuwaysir, Lydia M; Schaeffer, Daniel A
2007-09-01
The Paragon Algorithm, a novel database search engine for the identification of peptides from tandem mass spectrometry data, is presented. Sequence Temperature Values are computed using a sequence tag algorithm, allowing the degree of implication by an MS/MS spectrum of each region of a database to be determined on a continuum. Counter to conventional approaches, features such as modifications, substitutions, and cleavage events are modeled with probabilities rather than by discrete user-controlled settings to consider or not consider a feature. The use of feature probabilities in conjunction with Sequence Temperature Values allows for a very large increase in the effective search space with only a very small increase in the actual number of hypotheses that must be scored. The algorithm has a new kind of user interface that removes the user expertise requirement, presenting control settings in the language of the laboratory that are translated to optimal algorithmic settings. To validate this new algorithm, a comparison with Mascot is presented for a series of analogous searches to explore the relative impact of increasing search space probed with Mascot by relaxing the tryptic digestion conformance requirements from trypsin to semitrypsin to no enzyme and with the Paragon Algorithm using its Rapid mode and Thorough mode with and without tryptic specificity. Although they performed similarly for small search space, dramatic differences were observed in large search space. With the Paragon Algorithm, hundreds of biological and artifact modifications, all possible substitutions, and all levels of conformance to the expected digestion pattern can be searched in a single search step, yet the typical cost in search time is only 2-5 times that of conventional small search space. Despite this large increase in effective search space, there is no drastic loss of discrimination that typically accompanies the exploration of large search space.
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1979-01-01
A flight management algorithm designed to improve the accuracy of delivering the airplane fuel efficiently to a metering fix at a time designated by air traffic control is discussed. The algorithm provides a 3-D path with time control (4-D) for a test B 737 airplane to make an idle thrust, clean configured descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms and the results of the flight tests are discussed.
A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System
Zhou, Guanwu; Zhao, Yulong; Guo, Fangfang; Xu, Wenju
2014-01-01
Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed on LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU) after calibration in the computer. Practical pressure measurement experiments are carried out to verify the system's performance. The temperature compensation is performed over the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM achieves higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirement of the sensor. PMID:25006998
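An ELM of the kind used for calibration here reduces to a random nonlinear hidden layer followed by a linear least-squares solve for the output weights, which is why its training is so fast. The sketch below is ours, not the paper's LabVIEW/MCU implementation: the network size, ridge value, weight ranges, and the pure-Python normal-equation solver are all illustrative, and it fits a one-input function for brevity.

```python
import math, random

def _solve(A, b):
    """Gaussian elimination with partial pivoting for the small normal system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def train_elm(xs, ys, n_hidden=20, ridge=1e-6, seed=0):
    """Minimal single-input ELM (sketch): random tanh hidden layer, output
    weights from ridge-regularized least squares. Returns a predictor."""
    rng = random.Random(seed)
    w = [(rng.uniform(-4, 4), rng.uniform(-4, 4)) for _ in range(n_hidden)]

    def hidden(x):
        return [math.tanh(a * x + b) for a, b in w] + [1.0]  # bias column

    H = [hidden(x) for x in xs]
    n = n_hidden + 1
    # normal equations: (H^T H + ridge*I) beta = H^T y
    A = [[sum(Hk[i] * Hk[j] for Hk in H) + (ridge if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(Hk[i] * y for Hk, y in zip(H, ys)) for i in range(n)]
    beta = _solve(A, b)
    return lambda x: sum(c * h for c, h in zip(beta, hidden(x)))
```

Only the output weights `beta` are learned; the random hidden weights are frozen, which is the design choice that makes ELM attractive for batch sensor calibration on a small MCU.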
Kalman filtered MR temperature imaging for laser induced thermal therapies.
Fuentes, D; Yung, J; Hazle, J D; Weinberg, J S; Stafford, R J
2012-04-01
The feasibility of using a stochastic form of Pennes bioheat model within a 3-D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L(2) (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, ∆t < 10 s, until the data corruption subsides. In its present form, the KF-MRTI method currently fails to compensate for consecutive time periods of data loss ∆t > 10 s.
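The filter's ability to bridge dropouts comes from the predict/update split: when a measurement is missing, only the model prediction runs. The scalar sketch below uses random-walk dynamics as a stand-in for the Pennes bioheat model, and the noise values and starting state are illustrative:

```python
def kalman_track(measurements, q=0.25, r=1.0, t0=37.0, p0=4.0):
    """Scalar Kalman filter (sketch). `measurements` may contain None for
    missing MRTI samples; the filter then propagates the prediction only,
    mimicking how KF-MRTI bridges short data dropouts.
    q = process-noise variance, r = measurement-noise variance."""
    t, p = t0, p0
    estimates = []
    for z in measurements:
        p += q                      # predict (identity dynamics + noise)
        if z is not None:           # update only when data exist
            k = p / (p + r)         # Kalman gain
            t += k * (z - t)
            p *= (1 - k)
        estimates.append(t)
    return estimates
```

With identity dynamics the estimate simply holds its last value through a gap while its variance `p` grows, which matches the abstract's finding that prediction quality degrades once the gap grows long.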
Kim, Suhwan; Kwon, Hyungwoo; Yang, Injae; Lee, Seungho; Kim, Jeehyun; Kang, Shinwon
2013-11-12
A simultaneous strain and temperature measurement method using a Fabry-Perot laser diode (FP-LD) and a dual-stage fiber Bragg grating (FBG) optical demultiplexer was applied to a distributed sensor system based on Brillouin optical time domain reflectometry (BOTDR). By using a Kalman filter, we improved the performance of the FP-LD based OTDR, and decreased the noise using the dual-stage FBG optical demultiplexer. Applying the two developed components to the BOTDR system and using a temperature compensating algorithm, we successfully demonstrated the simultaneous measurement of strain and temperature distributions under various experimental conditions. The observed errors in the temperature and strain measured using the developed sensing system were 0.6 °C and 50 με, respectively, and the spatial resolution was 1 m.
Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
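The merit parameters used in the comparison are straightforward to compute; the sketch below works on flat pixel lists (the layout is our assumption) and exploits the identity MSE = bias² + variance of the error:

```python
def image_merits(truth, recon):
    """Bias, error variance, and MSE between a true phantom and a
    reconstruction, computed over flat pixel lists of equal length."""
    n = len(truth)
    err = [r - t for t, r in zip(truth, recon)]
    bias = sum(err) / n
    mse = sum(e * e for e in err) / n
    var = mse - bias * bias          # MSE = bias^2 + variance
    return bias, var, mse
```

A uniform offset shows up entirely in the bias term, while zero-mean noise shows up entirely in the variance term, which is why the pair discriminates between systematic and stochastic reconstruction errors.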
Liang, Kun; Yang, Cailan; Peng, Li; Zhou, Bo
2017-02-01
In uncooled long-wave IR camera systems, the temperature of a focal plane array (FPA) is variable along with the environmental temperature as well as the operating time. The spatial nonuniformity of the FPA, which is partly affected by the FPA temperature, obviously changes as well, resulting in reduced image quality. This study presents a real-time nonuniformity correction algorithm based on FPA temperature to compensate for nonuniformity caused by FPA temperature fluctuation. First, gain coefficients are calculated using a two-point correction technique. Then offset parameters at different FPA temperatures are obtained and stored in tables. When the camera operates, the offset tables are called to update the current offset parameters via a temperature-dependent interpolation. Finally, the gain coefficients and offset parameters are used to correct the output of the IR camera in real time. The proposed algorithm is evaluated and compared with two representative shutterless algorithms [minimizing the sum of the squares of errors algorithm (MSSE), template-based solution algorithm (TBS)] using IR images captured by a 384×288 pixel uncooled IR camera with a 17 μm pitch. Experimental results show that this method can quickly trace the response drift of the detector units when the FPA temperature changes. The quality of the proposed algorithm is as good as MSSE, while the processing time is as short as TBS, which means the proposed algorithm is good for real-time control and at the same time has a high correction effect.
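The correction step itself can be sketched compactly: two-point gains are fixed, and the per-pixel offsets are linearly interpolated between the two stored offset tables that bracket the current FPA temperature. The data layout, names, and clamping behavior below are our assumptions:

```python
def correct_frame(raw, gain, offset_tables, fpa_temp):
    """FPA-temperature-based nonuniformity correction (sketch).

    raw           : flat list of raw pixel values for one frame
    gain          : per-pixel gains from a two-point calibration
    offset_tables : sorted list of (fpa_temperature, per-pixel offsets)
    fpa_temp      : current FPA temperature
    Applies corrected = gain * raw + offset, with the offsets
    interpolated between the bracketing tables."""
    temps = [t for t, _ in offset_tables]
    if fpa_temp <= temps[0]:            # clamp below the stored range
        off = offset_tables[0][1]
    elif fpa_temp >= temps[-1]:         # clamp above the stored range
        off = offset_tables[-1][1]
    else:
        for (t0, o0), (t1, o1) in zip(offset_tables, offset_tables[1:]):
            if t0 <= fpa_temp <= t1:
                w = (fpa_temp - t0) / (t1 - t0)
                off = [a + w * (b - a) for a, b in zip(o0, o1)]
                break
    return [g * x + o for g, x, o in zip(gain, raw, off)]
```

Because the tables are precomputed, the per-frame cost is one interpolation and one multiply-add per pixel, consistent with the abstract's claim that the method runs as fast as a template-based solution.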
NASA Astrophysics Data System (ADS)
Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo
2012-08-01
We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch and cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.
Application of an improved wavelet algorithm in a fiber temperature sensor
NASA Astrophysics Data System (ADS)
Qi, Hui; Tang, Wenjuan
2018-03-01
Accurate temperature measurement is the crucial point in a distributed optical fiber temperature sensor. To address the temperature measurement error caused by the weak Raman scattering signal and strong noise in the system, a new algorithm based on an improved wavelet transform is presented. On the basis of the traditional modulus-maxima wavelet algorithm, signal correlation is considered to improve the ability to separate signal from noise; in addition, an adaptive wavelet-decomposition-scale method is incorporated to eliminate the signal loss or unfiltered noise caused by scale mismatch. The filtering performance of the algorithm is compared with that of other methods in Matlab. Finally, a 3 km distributed optical fiber temperature sensing system is used for verification. Experimental results show that the temperature accuracy is generally improved by 0.5233.
On-Orbit Operation and Performance of MODIS Blackbody
NASA Technical Reports Server (NTRS)
Xiong, X.; Chang, T.; Barnes, W.
2009-01-01
MODIS collects data in 36 spectral bands, including 20 reflective solar bands (RSB) and 16 thermal emissive bands (TEB). The TEB on-orbit calibration is performed on a scan-by-scan basis using a quadratic algorithm that relates the detector response to the calibration radiance from the sensor's on-board blackbody (BB). The calibration radiance is accurately determined each scan from the BB temperature measured using a set of 12 thermistors. The BB thermistors were calibrated pre-launch with traceability to the NIST temperature standard. Unlike many heritage sensors, the MODIS BB can be operated at a constant temperature or with the temperature continuously varying between instrument ambient (about 270 K) and 315 K. In this paper, we provide an overview of both Terra and Aqua MODIS on-board BB operations, functions, and on-orbit performance. We also examine the impact of key calibration parameters, such as BB emissivity and temperature (stability and gradient) determined from its thermistors, on the TEB calibration and Level 1B (L1B) data product uncertainty.
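A scan-by-scan quadratic calibration of this kind can be sketched as follows. This is a deliberate simplification, not the operational L1B code: `a0` and `a2` stand in for the prelaunch offset and nonlinear terms, the linear gain is re-derived each scan so the quadratic reproduces the BB radiance, and effects such as scan-mirror response and space-view subtraction are omitted.

```python
import math

def planck_radiance(wavelength_um, temp_k):
    """Planck spectral radiance (W m^-2 sr^-1 um^-1)."""
    c1 = 1.191042e8   # 2hc^2, in W um^4 m^-2 sr^-1
    c2 = 1.4387752e4  # hc/k, in um K
    lam = wavelength_um
    return c1 / (lam ** 5 * (math.exp(c2 / (lam * temp_k)) - 1.0))

def scan_calibrate(dn_scene, dn_bb, bb_temp_k, wavelength_um, a0, a2, emis=1.0):
    """Quadratic TEB calibration sketch: solve the per-scan linear gain b1
    from the blackbody view so that a0 + b1*dn_bb + a2*dn_bb**2 equals the
    BB radiance, then evaluate the same quadratic at the scene response."""
    l_bb = emis * planck_radiance(wavelength_um, bb_temp_k)
    b1 = (l_bb - a0 - a2 * dn_bb ** 2) / dn_bb
    return a0 + b1 * dn_scene + a2 * dn_scene ** 2
```

Re-deriving `b1` every scan is what lets the calibration track short-term detector gain drift, with the BB thermistor temperature anchoring the radiance scale.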
Ni-MH battery charger with a compensator for electric vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, H.W.; Han, C.S.; Kim, C.S.
1996-09-01
The development of a high-performance battery and safe and reliable charging methods are two important factors for commercialization of Electric Vehicles (EV). Hyundai and Ovonic together spent many years in research on the optimum charging method for the Ni-MH battery. This paper presents in detail the results of intensive experimental analysis performed by Hyundai in collaboration with Ovonic. An on-board Ni-MH battery charger and its controller, which are designed to use a standard home electricity supply, are described. In addition, a 3-step constant-current recharger with a temperature and battery-aging compensator is proposed. This has a multi-loop algorithm function to detect its 80% and fully charged states and carry out equalization charging control. The algorithm is focused on safety, reliability, efficiency, charging speed, and thermal management (maintaining uniform temperatures within a battery pack). It is also designed to minimize the necessity for user input.
Information filtering via biased heat conduction.
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which is of high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which can simultaneously enhance accuracy and diversity. Extensive experimental analyses demonstrate that the accuracy on the MovieLens, Netflix, and Delicious datasets can be improved by 43.5%, 55.4% and 19.2%, respectively, compared with the standard heat conduction algorithm, while the diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm can simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible way for highly efficient information filtering.
Some aspects of algorithm performance and modeling in transient analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1981-01-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit algorithms with variable time steps, known as the GEAR package, are described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems, such as insulated metal structures).
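The stiffness argument can be made concrete on the scalar cooling law dT/dt = -k(T - T_env): forward (explicit) Euler diverges once the step exceeds 2/k, while backward (implicit) Euler is unconditionally stable. A minimal sketch, with illustrative parameter values:

```python
def explicit_euler(t0, t_env, k, dt, steps):
    """Forward Euler on dT/dt = -k (T - T_env).
    The update multiplies the error by (1 - k*dt) each step, so the scheme
    is unstable whenever dt > 2/k."""
    t = t0
    for _ in range(steps):
        t += dt * (-k * (t - t_env))
    return t

def implicit_euler(t0, t_env, k, dt, steps):
    """Backward Euler on the same equation: solve
    t_new = t + dt * (-k)(t_new - t_env) for t_new. The error is divided
    by (1 + k*dt) each step, so the scheme is stable for any dt."""
    t = t0
    for _ in range(steps):
        t = (t + dt * k * t_env) / (1.0 + dt * k)
    return t
```

For a stiff insulated-structure problem, k is large, so the explicit step limit 2/k forces tiny steps; the implicit scheme can take steps sized by accuracy alone, which is the preference the study reports.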
NASA Technical Reports Server (NTRS)
Tedesco, Marco; Kim, Edward J.
2005-01-01
In this paper, GA-based techniques are used to invert the equations of an electromagnetic model based on Dense Medium Radiative Transfer Theory (DMRT) under the Quasi-Crystalline Approximation with Coherent Potential to retrieve snow depth, mean grain size, and fractional volume from microwave brightness temperatures. The technique is initially tested on both noisy and noise-free simulated data. During this phase, different configurations of genetic algorithm parameters are considered to quantify how changing them affects the algorithm's performance. A configuration of GA parameters is then selected and the algorithm is applied to experimental data acquired during the NASA Cold Land Processes Experiment. Snow parameters retrieved with the GA-DMRT technique are then compared with snow parameters measured in the field.
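The inversion loop of a GA-based retrieval can be sketched as below. A trivial linear function stands in for the DMRT forward model, and the population size, elitism, blend crossover, and Gaussian mutation width are illustrative choices, not the configuration selected in the paper:

```python
import random

def ga_invert(forward, observed, bounds, pop_size=60, gens=150,
              elite=10, sigma=0.2, seed=1):
    """Toy GA inversion (sketch): candidate parameter vectors are scored by
    the squared misfit between modeled and observed brightness
    temperatures; the best survive unchanged, and offspring come from
    blend crossover plus Gaussian mutation, clipped to the bounds."""
    rng = random.Random(seed)

    def clip(v, lo, hi):
        return min(max(v, lo), hi)

    def misfit(ind):
        return sum((s - o) ** 2 for s, o in zip(forward(ind), observed))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=misfit)
        parents = pop[:elite]          # elitism: best individuals survive
        children = parents[:]
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            w = rng.random()
            children.append([clip(w * x + (1 - w) * y + rng.gauss(0, sigma), lo, hi)
                             for x, y, (lo, hi) in zip(a, b, bounds)])
        pop = children
    return min(pop, key=misfit)
```

In the paper's setting, `forward` would be the DMRT model mapping (depth, grain size, fractional volume) to brightness temperatures, which makes each fitness evaluation the expensive part of the loop.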
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knox, C.E.
A simplified flight-management descent algorithm, programmed on a small programmable calculator, was developed and flight tested. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight-management descent algorithm is described. The results of flight tests flown with a T-39A (Sabreliner) airplane are presented.
Concurrent design of an RTP chamber and advanced control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spence, P.; Schaper, C.; Kermani, A.
1995-12-31
A concurrent-engineering approach is applied to the development of an axisymmetric rapid-thermal-processing (RTP) reactor and its associated temperature controller. Using a detailed finite-element thermal model as a surrogate for actual hardware, the authors have developed and tested a multi-input multi-output (MIMO) controller. Closed-loop simulations are performed by linking the control algorithm with the finite-element code. Simulations show that good temperature uniformity is maintained on the wafer during both steady and transient conditions. A numerical study shows the effect of ramp rate, feedback gain, sensor placement, and wafer-emissivity patterns on system performance.
NASA Astrophysics Data System (ADS)
Zhang, Xianxia; Wang, Jian; Qin, Tinggao
2003-09-01
Intelligent control algorithms are introduced into the control system of temperature and humidity. A multi-mode PI-single-neuron control algorithm is proposed for single-loop control of temperature and humidity. In order to remove the coupling between temperature and humidity, a new decoupling method, called fuzzy decoupling, is presented. The decoupling is achieved by using a fuzzy controller that dynamically modifies the static decoupling coefficient. Taking the PI-single-neuron control algorithm as the single-loop control of temperature and humidity, the paper provides the simulated output response curves with no decoupling control, static decoupling control, and fuzzy decoupling control. These control algorithms are easily implemented in single-chip-based hardware systems.
Hail detection algorithm for the Global Precipitation Measuring mission core satellite sensors
NASA Astrophysics Data System (ADS)
Mroz, Kamil; Battaglia, Alessandro; Lang, Timothy J.; Tanelli, Simone; Cecil, Daniel J.; Tridon, Frederic
2017-04-01
By exploiting an abundant number of extreme storms observed simultaneously by the Global Precipitation Measurement (GPM) mission core satellite's suite of sensors and by the ground-based S-band Next-Generation Radar (NEXRAD) network over the continental US, proxies for the identification of hail are developed based on the GPM core satellite observables. The full capabilities of the GPM observatory are tested by analyzing more than twenty observables and adopting the hydrometeor classification based on ground-based polarimetric measurements as truth. The proxies have been tested using the Critical Success Index (CSI) as a verification measure. The hail detection algorithm based on the mean Ku reflectivity in the mixed-phase layer performs the best out of all considered proxies (CSI of 45%). Outside the Dual-frequency Precipitation Radar (DPR) swath, the Polarization Corrected Temperature at 18.7 GHz shows the greatest potential for hail detection among all GMI channels (CSI of 26% at a threshold value of 261 K). When dual-variable proxies are considered, the combination involving the mixed-phase reflectivity values at both Ku- and Ka-bands outperforms all the other proxies, with a CSI of 49%. The best-performing radar-radiometer algorithm is based on the mixed-phase reflectivity at Ku-band and on the brightness temperature (TB) at 10.7 GHz (CSI of 46%). When only radiometric data are available, the algorithm based on the TBs at 36.6 and 166 GHz is the most efficient, with a CSI of 27.5%.
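The CSI verification measure and a single-channel proxy are both a few lines of code. The threshold below is the 261 K value quoted in the abstract; the function names and the boolean-list interface are illustrative:

```python
def critical_success_index(predicted, observed):
    """CSI = hits / (hits + misses + false alarms) over paired boolean
    detections; correct rejections do not enter the score."""
    hits = sum(1 for p, o in zip(predicted, observed) if p and o)
    misses = sum(1 for p, o in zip(predicted, observed) if not p and o)
    false_alarms = sum(1 for p, o in zip(predicted, observed) if p and not o)
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

def flag_hail_by_pct(pct_18_7_ghz, threshold_k=261.0):
    """Single-channel proxy (sketch): flag hail when the 18.7 GHz
    polarization-corrected temperature falls below the threshold."""
    return [t < threshold_k for t in pct_18_7_ghz]
```

Because correct rejections are excluded from the denominator, CSI is well suited to rare events like hail, where a trivial "never hail" predictor would otherwise score highly.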
Directly data processing algorithm for multi-wavelength pyrometer (MWP).
Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong
2017-11-27
Data processing for a multi-wavelength pyrometer (MWP) is a difficult problem because the emissivity is unknown. The solutions developed so far generally assume a particular mathematical relation for emissivity versus wavelength or emissivity versus temperature; deviation between such a hypothesis and the actual situation can seriously affect the inversion results. A data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, a Gradient Projection (GP) algorithm and an Internal Penalty Function (IPF) algorithm, neither of which requires fixing an emissivity model in advance, are proposed. The core novel idea is that the data processing problem of MWP is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. Comparison of simulation results for some typical spectral emissivity models shows that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiments show that the true temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is thus expected to be a direct data processing algorithm that clears the unknown-emissivity obstacle for MWP.
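The constrained-optimization idea can be illustrated without either of the paper's solvers: for each candidate temperature, the implied per-wavelength emissivities must lie in (0, 1], and the best temperature is the one that makes them most self-consistent. The brute-force grid search and minimum-variance criterion below are our stand-ins for the GP/IPF machinery, and the radiances are assumed to be in the same (arbitrary) units as the normalized Planck function used here:

```python
import math

C2 = 1.4387752e4  # second radiation constant, um*K

def planck(lam_um, t_k):
    """Planck function up to a constant factor (the factor cancels here)."""
    return 1.0 / (lam_um ** 5 * (math.exp(C2 / (lam_um * t_k)) - 1.0))

def invert_temperature(lams_um, radiances, t_grid):
    """MWP inversion sketch: scan candidate temperatures, compute the
    emissivities each one implies, discard infeasible candidates
    (emissivity outside (0, 1]), and return the temperature whose implied
    emissivities have the smallest spread."""
    best_t, best_spread = None, float("inf")
    for t in t_grid:
        eps = [r / planck(l, t) for l, r in zip(lams_um, radiances)]
        if any(e <= 0.0 or e > 1.0 for e in eps):   # violates constraints
            continue
        mu = sum(eps) / len(eps)
        spread = sum((e - mu) ** 2 for e in eps)
        if spread < best_spread:
            best_t, best_spread = t, spread
    return best_t
```

The GP and IPF algorithms of the paper handle the same feasible region far more efficiently (projection onto the constraints, and interior penalty terms, respectively), but the objective structure is the same: no emissivity model is assumed in advance.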
NASA Technical Reports Server (NTRS)
Nyangweso, Emmanuel; Bole, Brian
2014-01-01
Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.
Lifetime Prediction of IGBT in a STATCOM Using Modified-Graphical Rainflow Counting Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak
Rainflow algorithms are among the best counting methods used in fatigue and failure analysis [17]. There have been many approaches to the rainflow algorithm, some proposing modifications. The Graphical Rainflow Method (GRM) was proposed recently with a claim of faster execution times [10]. However, the steps of the graphical rainflow algorithm, when implemented, do not generate the same output as the four-point or ASTM standard algorithm. A modified graphical method is presented and discussed in this paper to overcome the shortcomings of the graphical rainflow algorithm. A fast rainflow algorithm based on the four-point algorithm, but using point comparison rather than range comparison, is also presented. A comparison between the performances of the common rainflow algorithms [6-10], including the proposed methods, in terms of execution time, memory use, efficiency, complexity, and load sequences is presented. Finally, the rainflow algorithm is applied to temperature data of an IGBT in assessing the lifetime of a STATCOM operating for power factor correction of the load. From the 5-minute data load profiles available, the lifetime is estimated at 3.4 years.
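The four-point rule the paper takes as its reference can be sketched directly: whenever the inner range of the last four turning points is enclosed by both outer ranges, one full cycle is extracted and its two inner points removed; what remains at the end is the residue, conventionally counted as half cycles. Function names and the (min, max) cycle representation are our choices:

```python
def turning_points(series):
    """Reduce a load (or temperature) history to its local extrema."""
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # same direction: extend the excursion
        else:
            tp.append(x)
    return tp

def rainflow_four_point(series):
    """Four-point rainflow counting (sketch). Returns (full_cycles,
    residue), where each full cycle is a (valley, peak) pair."""
    stack, cycles = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 4:
            p1, p2, p3, p4 = stack[-4:]
            r_inner = abs(p3 - p2)
            if r_inner <= abs(p2 - p1) and r_inner <= abs(p4 - p3):
                cycles.append((min(p2, p3), max(p2, p3)))  # one full cycle
                del stack[-3:-1]            # drop the two inner points
            else:
                break
    return cycles, stack
```

For lifetime estimation, each counted cycle's temperature swing would then feed a damage model (e.g., a Coffin-Manson-type curve) and the damage fractions would be summed by Miner's rule.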
Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, A. E.
1994-01-01
Self-consistent circuit-analog thermal models, which can be run in commercial spreadsheet programs on personal computers, have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the Cryogenic Telescope Test Facility (CTTF). The facility will be on line in early 1995 for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm, as well as a comparison of the model predictions and actual performance of this facility, will be presented.
Simple Spreadsheet Thermal Models for Cryogenic Applications
NASA Technical Reports Server (NTRS)
Nash, Alfred
1995-01-01
Self-consistent circuit-analog thermal models that can be run in commercial spreadsheet programs on personal computers have been created to calculate the cooldown and steady-state performance of cryogen-cooled Dewars. The models include temperature-dependent conduction and radiation effects. The outputs of the models provide temperature distribution and Dewar performance information. These models have been used to analyze the SIRTF Telescope Test Facility (STTF). The facility has been brought on line for its first user, the Infrared Telescope Technology Testbed (ITTT), for the Space Infrared Telescope Facility (SIRTF) at JPL. The model algorithm, as well as a comparison between the models' predictions and the actual performance of this facility, will be presented.
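The row-by-row update that such a spreadsheet performs can be sketched as a one-node circuit-analog model marched with an explicit Euler step. This is only an illustrative sketch, not the CTTF/STTF model itself: node values, the single thermal resistance, and the optional radiation term are assumptions.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def cooldown(T0, T_sink, R_cond, C, eps_A=0.0, dt=1.0, t_end=3600.0):
    """One-node circuit-analog cooldown: conduction through thermal
    resistance R_cond (K/W) to a cryogen sink, optional radiation with
    emissivity-area product eps_A (m^2), lumped heat capacity C (J/K).
    Returns a list of (time, temperature) pairs, like spreadsheet rows."""
    T, t, hist = float(T0), 0.0, [(0.0, float(T0))]
    while t < t_end - 1e-9:
        q = (T - T_sink) / R_cond + eps_A * SIGMA * (T**4 - T_sink**4)
        T -= q * dt / C        # lumped-capacity temperature update
        t += dt
        hist.append((t, T))
    return hist
```

With eps_A = 0 this reduces to a pure RC cooldown with time constant R_cond * C, which gives a simple analytic check.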
Version 2 of the IASI NH3 neural network retrieval algorithm: near-real-time and reanalysed datasets
NASA Astrophysics Data System (ADS)
Van Damme, Martin; Whitburn, Simon; Clarisse, Lieven; Clerbaux, Cathy; Hurtmans, Daniel; Coheur, Pierre-François
2017-12-01
Recently, Whitburn et al. (2016) presented a neural-network-based algorithm for retrieving atmospheric ammonia (NH3) columns from Infrared Atmospheric Sounding Interferometer (IASI) satellite observations. In the past year, several improvements have been introduced, and the resulting new baseline version, Artificial Neural Network for IASI (ANNI)-NH3-v2.1, is documented here. One of the main changes to the algorithm is that separate neural networks were trained for land and sea observations, resulting in a better training performance for both groups. By reducing and transforming the input parameter space, performance is now also better for observations associated with favourable sounding conditions (i.e., enhanced thermal contrasts). Other changes relate to the introduction of a bias correction over land and sea and the treatment of the satellite zenith angle. In addition to these algorithmic changes, new recommendations for post-filtering the data and for averaging data in time or space are formulated. We also introduce a second dataset (ANNI-NH3-v2.1R-I) which relies on ERA-Interim ECMWF meteorological input data, along with surface temperature retrieved from a dedicated network, rather than the operationally provided EUMETSAT IASI Level 2 (L2) data used for the standard near-real-time version. The need for such a dataset emerged after a series of sharp discontinuities were identified in the NH3 time series, which could be traced back to incremental changes in the IASI L2 algorithms for temperature and clouds. The reanalysed dataset is coherent in time and can therefore be used to study trends. Furthermore, both datasets agree reasonably well in the mean on recent data, after the date when IASI meteorological L2 version 6 became operational (30 September 2014).
NASA Astrophysics Data System (ADS)
Wu, Jiasheng; Cao, Lin; Zhang, Guoqiang
2018-02-01
Cooling towers have been widely used as cooling equipment in air conditioning, and there is a broad application prospect if they can be reversibly used as a heat source under heat-pump heating conditions. In view of the complex non-linear relationships among the parameters of the heat and mass transfer process inside the tower, a BP neural network model optimized by a genetic algorithm (the GABP neural network model) is established in this paper for the reverse use of a cross-flow cooling tower. The model adopts a structure of 6 inputs, 13 hidden nodes and 8 outputs. With this model, a total of eight main performance parameters were predicted, including outlet air dry-bulb temperature, wet-bulb temperature, water temperature, heat, sensible heat ratio, heat-absorbing efficiency, and Lewis number. Furthermore, the established network model is used to predict the water temperature and heat absorption of the tower at different inlet temperatures. The mean relative errors (MRE) between the BP predicted values and the experimental values are 4.47%, 3.63%, 2.38%, 3.71%, 6.35%, 3.14%, 13.95% and 6.80%, respectively; the mean relative errors between the GABP predicted values and the experimental values are 2.66%, 3.04%, 2.27%, 3.02%, 6.89%, 3.17%, 11.50% and 6.57%, respectively. The results show that the predictions of the GABP network model are better than those of the BP network model, and the simulation results are basically consistent with the actual situation. The GABP network model can well predict the heat and mass transfer performance of the cross-flow cooling tower.
Liger, Vladimir V; Mironenko, Vladimir R; Kuritsyn, Yurii A; Bolshov, Mikhail A
2018-05-17
A new algorithm for the estimation of the maximum temperature in a non-uniform hot zone by a sensor based on absorption spectrometry with a diode laser is developed. The algorithm is based on fitting the absorption spectrum of a test molecule in a non-uniform zone by a linear combination of two single-temperature spectra simulated using spectroscopic databases. The proposed algorithm allows one to better estimate the maximum temperature of a non-uniform zone and can be useful if only the maximum temperature, rather than a precise temperature profile, is of primary interest. The efficiency and specificity of the algorithm are demonstrated in numerical experiments and experimentally proven using an optical cell with two sections. Temperatures and water vapor concentrations could be independently regulated in both sections. The best fit was found using a correlation technique. A distributed feedback (DFB) diode laser in the spectral range around 1.343 µm was used in the experiments. Because of the significant differences between the temperature dependences of the experimental and theoretical absorption spectra in the temperature range 300-1200 K, a database was constructed using experimentally detected single-temperature spectra. Using the developed algorithm, the maximum temperature in the two-section cell was estimated with an accuracy better than 30 K.
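The core of such a two-spectrum fit can be sketched as a least-squares search over pairs of library temperatures, scored by correlation with the measurement. The Gaussian line shapes, Boltzmann-like line strengths, and temperature grid in the usage below are illustrative stand-ins for the authors' experimental database.

```python
import numpy as np

def fit_max_temperature(measured, library):
    """Fit a measured absorption spectrum as a linear combination
    a*S(T1) + b*S(T2) of two single-temperature library spectra for every
    (T1, T2) pair; return the maximum temperature of the best-correlating
    pair.  `library` maps temperature -> spectrum array."""
    temps = sorted(library)
    best = (-np.inf, None)
    for i, t1 in enumerate(temps):
        for t2 in temps[i:]:
            A = np.column_stack([library[t1], library[t2]])
            coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
            model = A @ np.clip(coef, 0.0, None)  # keep weights physical
            r = np.corrcoef(model, measured)[0, 1]
            if r > best[0]:
                best = (r, (t1, t2))
    r, (t1, t2) = best
    return max(t1, t2), r
```

A mixture built from the 300 K and 900 K library entries is recovered exactly, so the estimated maximum temperature is 900 K.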
NASA Technical Reports Server (NTRS)
Goldberg, Mitchell D.; Fleming, Henry E.
1994-01-01
An algorithm for generating deep-layer mean temperatures from satellite-observed microwave observations is presented. Unlike traditional temperature retrieval methods, this algorithm does not require a first-guess temperature of the ambient atmosphere. By eliminating the first guess, a potentially systematic source of error has been removed. The algorithm is expected to yield long-term records that are suitable for detecting small changes in climate. The atmospheric contribution to the deep-layer mean temperature is given by the averaging kernel. The algorithm computes the coefficients that best approximate a desired averaging kernel from a linear combination of the satellite radiometer's weighting functions. The coefficients are then applied to the measurements to yield the deep-layer mean temperature. Three constraints were used in deriving the algorithm: (1) the sum of the coefficients must be one, (2) the noise of the product is minimized, and (3) the shape of the approximated averaging kernel is well-behaved. Note that a trade-off between constraints 2 and 3 is unavoidable. The algorithm can also be used to combine measurements from a future sensor (i.e., the 20-channel Advanced Microwave Sounding Unit (AMSU)) to yield the same averaging kernel as that based on an earlier sensor (i.e., the 4-channel Microwave Sounding Unit (MSU)). This will allow a time series of deep-layer mean temperatures based on MSU measurements to be continued with AMSU measurements. The AMSU is expected to replace the MSU in 1996.
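The constrained coefficient computation can be sketched as an equality-constrained least-squares problem. In this minimal sketch a ridge penalty lam stands in for the noise-minimization constraint, and the matrix shapes and weighting-function values are illustrative, not the MSU/AMSU ones.

```python
import numpy as np

def kernel_coefficients(W, k_target, lam=1e-3):
    """Channel coefficients c with sum(c) = 1 such that W.T @ c
    approximates the desired averaging kernel k_target.
    W: (n_channels, n_levels) weighting functions.
    lam: ridge weight standing in for noise minimization."""
    n = W.shape[0]
    H = 2.0 * (W @ W.T + lam * np.eye(n))
    # KKT system enforcing the equality constraint sum(c) = 1.
    KKT = np.block([[H, np.ones((n, 1))],
                    [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([2.0 * W @ k_target, [1.0]])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]  # drop the Lagrange multiplier
```

When the target kernel is exactly achievable by a unit-sum combination, the solver recovers it up to the small ridge perturbation.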
An adaptive evolutionary multi-objective approach based on simulated annealing.
Li, H; Landa-Silva, D
2011-01-01
A multi-objective optimization problem can be solved by decomposing it into one or more single-objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D depends highly on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well-established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.
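The simulated-annealing engine inside such an algorithm reduces to the Metropolis rule under a cooling schedule. Here is a generic single-objective sketch of that engine; the geometric schedule, parameter values, and the toy quadratic in the test are illustrative, not EMOSA's subproblem machinery.

```python
import math, random

def anneal(objective, neighbor, x0, t0=1.0, t_min=1e-3, alpha=0.95, iters_per_t=50):
    """Minimize `objective` by simulated annealing with geometric cooling.
    `neighbor` proposes a perturbed candidate from the current solution."""
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            y = neighbor(x)
            fy = objective(y)
            # Metropolis criterion: always accept improvements, accept
            # uphill moves with probability exp(-(fy - fx)/t).
            if fy <= fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha  # geometric cooling schedule
    return best, fbest
```

In EMOSA this loop is applied per weighted subproblem, with the weight vector adapted once the temperature bottoms out.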
Experimental and analytical study of secondary path variations in active engine mounts
NASA Astrophysics Data System (ADS)
Hausberg, Fabian; Scheiblegger, Christian; Pfeffer, Peter; Plöchl, Manfred; Hecker, Simon; Rupp, Markus
2015-03-01
Active engine mounts (AEMs) provide an effective solution to further improve the acoustic and vibrational comfort of passenger cars. Typically, adaptive feedforward control algorithms, e.g., the filtered-x least-mean-squares (FxLMS) algorithm, are applied to cancel disturbing engine vibrations. These algorithms require an accurate estimate of the AEM active dynamic characteristics, also known as the secondary path, in order to guarantee control performance and stability. This paper focuses on the experimental and theoretical study of secondary path variations in AEMs. The impact of three major influences, namely nonlinearity, change of preload and component temperature, on the AEM active dynamic characteristics is experimentally analyzed. The obtained test results are theoretically investigated with a linear AEM model which incorporates an appropriate description of the elastomeric components. A special experimental set-up extends the model validation of the active dynamic characteristics to higher frequencies, up to 400 Hz. The theoretical and experimental results show that significant secondary path variations are observed only in the frequency range of the AEM actuator's resonance frequency. These variations mainly result from the change of the component temperature. As the stability of the algorithm is primarily affected by the actuator's resonance frequency, the findings of this paper facilitate the design of AEMs with simpler adaptive feedforward algorithms. From a practical point of view, it may further be concluded that algorithmic countermeasures against instability are only necessary in the frequency range of the AEM actuator's resonance frequency.
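The filtered-x structure that makes the secondary-path estimate matter can be sketched in a few lines for a single channel with FIR paths. The path coefficients, step size, and tonal reference in the usage are illustrative assumptions, not AEM data.

```python
import numpy as np

def fxlms(reference, disturbance, secondary_path, n_taps=8, mu=0.01):
    """Single-channel FxLMS sketch.  `secondary_path` is the (estimated)
    impulse response from actuator input to error sensor; the reference
    is filtered through it before the LMS weight update."""
    s_hat = np.asarray(secondary_path, float)
    w = np.zeros(n_taps)                 # adaptive FIR controller weights
    x_buf = np.zeros(n_taps)             # reference history
    fx_buf = np.zeros(n_taps)            # filtered-reference history
    y_buf = np.zeros(len(s_hat))         # actuator-output history
    xs_buf = np.zeros(len(s_hat))        # reference history for filtering
    errors = []
    for x, d in zip(reference, disturbance):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x
        y = w @ x_buf                    # anti-vibration actuator command
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e = d + s_hat @ y_buf            # residual at the error sensor
        xs_buf = np.roll(xs_buf, 1); xs_buf[0] = x
        fx = s_hat @ xs_buf              # reference filtered by path model
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
        w -= mu * e * fx_buf             # LMS update on the filtered reference
        errors.append(e)
    return np.array(errors)
```

With an exact secondary-path model and a tonal disturbance, the residual error decays rapidly toward zero.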
A New Operational Snow Retrieval Algorithm Applied to Historical AMSR-E Brightness Temperatures
NASA Technical Reports Server (NTRS)
Tedesco, Marco; Jeyaratnam, Jeyavinoth
2016-01-01
Snow is a key element of the water and energy cycles, and knowledge of the spatio-temporal distribution of snow depth and snow water equivalent (SWE) is fundamental for hydrological and climatological applications. SWE and snow depth estimates can be obtained from spaceborne microwave brightness temperatures at global scale and high temporal resolution (daily). In this regard, the data recorded by the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) onboard the National Aeronautics and Space Administration's (NASA) AQUA spacecraft have been used to generate operational estimates of SWE and snow depth, complementing estimates generated with other microwave sensors flying on other platforms. In this study, we report the results concerning the development and assessment of a new operational algorithm applied to historical AMSR-E data. The new algorithm proposed here makes use of climatological data, electromagnetic modeling and artificial neural networks for estimating snow depth, as well as a spatio-temporal dynamic density scheme to convert snow depth to SWE. The outputs of the new algorithm are compared with those of the current AMSR-E operational algorithm as well as in-situ measurements and other operational snow products, specifically the Canadian Meteorological Center (CMC) and GlobSnow datasets. Our results show that the AMSR-E algorithm proposed here generally performs better than the operational one and addresses some major issues identified in the spatial distribution of snow depth fields associated with the evolution of effective grain size.
Kalman Filtered MR Temperature Imaging for Laser Induced Thermal Therapies
Fuentes, D.; Yung, J.; Hazle, J. D.; Weinberg, J. S.; Stafford, R. J.
2013-01-01
The feasibility of using a stochastic form of the Pennes bioheat model within a 3D finite element based Kalman filter (KF) algorithm is critically evaluated for the ability to provide temperature field estimates in the event of magnetic resonance temperature imaging (MRTI) data loss during laser induced thermal therapy (LITT). The ability to recover missing MRTI data was analyzed by systematically removing spatiotemporal information from a clinical MR-guided LITT procedure in human brain and comparing predictions in these regions to the original measurements. Performance was quantitatively evaluated in terms of a dimensionless L2 (RMS) norm of the temperature error weighted by acquisition uncertainty. During periods of no data corruption, observed error histories demonstrate that the Kalman algorithm does not alter the high quality temperature measurement provided by MR thermal imaging. The KF-MRTI implementation considered is seen to predict the bioheat transfer with RMS error < 4 for a short period of time, Δt < 10 s, until the data corruption subsides. In its present form, the KF-MRTI method fails to compensate for consecutive time periods of data loss Δt > 10 s. PMID:22203706
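The gap-filling behaviour can be illustrated with a scalar Kalman filter in which None entries play the role of corrupted MRTI frames: the filter then skips the measurement update and coasts on the model prediction. The random-walk state model and the noise values below are illustrative stand-ins for the stochastic Pennes model.

```python
def kalman_track(measurements, F=1.0, Q=0.05, R=0.1, x0=37.0, P0=1.0):
    """Scalar Kalman filter over a temperature time series.
    `measurements` may contain None (lost frames): prediction only."""
    x, P = x0, P0
    estimates = []
    for z in measurements:
        # Predict step with the (stand-in) scalar process model.
        x = F * x
        P = F * P * F + Q
        if z is not None:
            # Update step, run only when a measurement frame is available.
            K = P / (P + R)
            x = x + K * (z - x)
            P = (1.0 - K) * P
        estimates.append(x)
    return estimates
```

With a steady 40.0 signal and a 10-frame dropout, the estimate holds its last converged value through the gap, mirroring the short-gap behaviour described above.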
Gika, Helen G; Theodoridis, Georgios; Extance, Jon; Edge, Anthony M; Wilson, Ian D
2008-08-15
The applicability and potential of using elevated temperatures and sub-2-µm porous particles in chromatography for metabonomics/metabolomics was investigated using, for the first time, solvent temperatures higher than the boiling point of water (up to 180 °C) and thermal gradients to reduce the use of organic solvents. Ultra performance liquid chromatography, combined with mass spectrometry, was investigated for the global metabolite profiling of the plasma and urine of normal and Zucker (fa/fa) obese rats (a well-established disease animal model). "Isobaric" high-temperature chromatography, where the temperature and flow rate follow a gradient program, was developed and evaluated against a conventional organic solvent gradient. LC-MS data were first examined by established chromatographic criteria in order to evaluate the chromatographic performance and then were treated by special peak-picking algorithms to allow the application of multivariate statistics. These studies showed that, for urine (but not plasma), chromatography at elevated temperatures provided better results than conventional reversed-phase LC, with higher peak capacity and better peak asymmetry. From a systems biology point of view, better group clustering and separation was obtained, with a larger number of variables of high importance, when using high-temperature ultra performance liquid chromatography (HT-UPLC) compared to conventional solvent gradients.
Information filtering via biased heat conduction
NASA Astrophysics Data System (ADS)
Liu, Jian-Guo; Zhou, Tao; Guo, Qiang
2011-09-01
The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], where it yields high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which can simultaneously enhance accuracy and diversity. Extensive experimental analyses demonstrate that the accuracy on the MovieLens, Netflix, and Delicious datasets can be improved by 43.5%, 55.4% and 19.2%, respectively, compared with the standard heat conduction algorithm, while the diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm can simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible way of performing highly efficient information filtering.
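A one-step heat-conduction scoring pass on the user-object bipartite graph can be sketched as follows. The degree bias is expressed here as an exponent gamma on the object degree (gamma = 1 recovers plain heat conduction, gamma < 1 relatively lowers small-degree objects' temperatures); the exponent form and its value are illustrative, not the paper's exact parameterization.

```python
from collections import defaultdict

def biased_heat_conduction(edges, target_user, gamma=0.8):
    """Score objects for `target_user` on a user-object bipartite graph.
    Heat starts at 1 on the user's collected objects, flows
    objects -> users -> objects by degree-normalized averaging, with the
    final normalization biased by the object degree raised to gamma."""
    user_items = defaultdict(set)
    item_users = defaultdict(set)
    for u, o in edges:
        user_items[u].add(o)
        item_users[o].add(u)
    # Initial temperatures: 1 on the target user's collected objects.
    t_obj = {o: 1.0 if o in user_items[target_user] else 0.0 for o in item_users}
    # Objects -> users: each user takes the mean temperature of its objects.
    t_user = {u: sum(t_obj[o] for o in items) / len(items)
              for u, items in user_items.items()}
    # Users -> objects, with degree-biased normalization.
    return {o: sum(t_user[u] for u in users) / len(users) ** gamma
            for o, users in item_users.items()}
```

On a toy graph, lowering gamma below 1 warms larger-degree objects relative to plain heat conduction, which is the accuracy-enhancing bias described above.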
A robust embedded vision system feasible white balance algorithm
NASA Astrophysics Data System (ADS)
Wang, Yuan; Yu, Feihong
2018-01-01
White balance is a very important part of the color image processing pipeline. To meet the needs of efficiency and accuracy in an embedded machine vision processing system, an efficient and robust white balance algorithm combining several classical ones is proposed. The proposed algorithm has three main parts. First, to guarantee higher efficiency, an initial parameter calculated from the statistics of the R, G and B components of the raw data is used to initialize the subsequent iterative method. After that, a bilinear interpolation algorithm is utilized to implement the demosaicing procedure. Finally, an adaptive step-adjustment scheme is introduced to ensure the controllability and robustness of the algorithm. To verify the proposed algorithm's performance on an embedded vision system, a smart camera based on the IMX6 DualLite, IMX291 and XC6130 was designed. Extensive experiments on a large number of images under different color temperatures and exposure conditions illustrate that the proposed white balance algorithm avoids the color deviation problem effectively, achieves a good balance between efficiency and quality, and is suitable for embedded machine vision processing systems.
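A minimal version of such an iterative channel-gain correction can be sketched with the classical gray-world assumption (equal channel means). The residual-proportional step is a simplified stand-in for the adaptive step-adjustment scheme; the step size and threshold are illustrative.

```python
def gray_world_gains(pixels, step=0.2, iters=100):
    """Iteratively nudge R and B channel gains until their means match
    the G mean (gray-world assumption).  `pixels` is a list of
    (r, g, b) tuples; returns the (gain_r, gain_b) pair."""
    gr = gb = 1.0
    n = len(pixels)
    for _ in range(iters):
        mr = sum(p[0] for p in pixels) * gr / n
        mg = sum(p[1] for p in pixels) / n
        mb = sum(p[2] for p in pixels) * gb / n
        er, eb = mg - mr, mg - mb   # gray-world residuals
        if abs(er) < 1e-9 and abs(eb) < 1e-9:
            break
        # Step proportional to the residual: large casts correct quickly,
        # small ones converge smoothly.
        gr += step * er / mg
        gb += step * eb / mg
    return gr, gb
```

For a scene with a uniform reddish cast (R scaled by 1.25, B by 0.8), the gains converge to the exact inverse factors.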
Multiscale approach to contour fitting for MR images
NASA Astrophysics Data System (ADS)
Rueckert, Daniel; Burger, Peter
1996-04-01
We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy-minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved while small-scale features, such as object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing (SA) optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling it to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces a more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization, as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets of different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process and is at the same time very robust to noise and initialization.
Methods Development for Spectral Simplification of Room-Temperature Rotational Spectra
NASA Astrophysics Data System (ADS)
Kent, Erin B.; Shipman, Steven
2014-06-01
Room-temperature rotational spectra are dense and difficult to assign, and so we have been working to develop methods to accelerate this process. We have tested two different methods with our waveguide-based spectrometer, which operates from 8.7 to 26.5 GHz. The first method, based on previous work by Medvedev and De Lucia, was used to estimate lower state energies of transitions by performing relative intensity measurements at a range of temperatures between -20 and +50 °C. The second method employed hundreds of microwave-microwave double resonance measurements to determine level connectivity between rotational transitions. The relative intensity measurements were not particularly successful in this frequency range (the reasons for this will be discussed), but the information gleaned from the double-resonance measurements can be incorporated into other spectral search algorithms (such as autofit or genetic algorithm approaches) via scoring or penalty functions to help with the spectral assignment process. I.R. Medvedev, F.C. De Lucia, Astrophys. J. 656, 621-628 (2007).
Barua, Shaibal; Begum, Shahina; Ahmed, Mobyen Uddin
2015-01-01
Machine learning algorithms play an important role in computer science research. Recent advances in sensor data collection in the clinical sciences lead to complex, heterogeneous data processing and analysis for patient diagnosis and prognosis. Diagnosis and treatment of patients based on manual analysis of these sensor data are difficult and time consuming. Therefore, the development of knowledge-based systems to support clinicians in decision-making is important. However, it is necessary to perform experimental work to compare the performance of different machine learning methods, to help select an appropriate method for the specific characteristics of a data set. This paper compares the classification performance of three popular machine learning methods, i.e., case-based reasoning, neural networks and support vector machines, to diagnose the stress of vehicle drivers using finger temperature and heart rate variability. The experimental results show that case-based reasoning outperforms the other two methods in terms of classification accuracy. Case-based reasoning achieved 80% and 86% accuracy in classifying stress using finger temperature and heart rate variability, respectively. By contrast, both the neural network and the support vector machine achieved less than 80% accuracy using both physiological signals.
An efficient Cellular Potts Model algorithm that forbids cell fragmentation
NASA Astrophysics Data System (ADS)
Durand, Marc; Guesnet, Etienne
2016-11-01
The Cellular Potts Model (CPM) is a lattice-based modeling technique which is widely used for simulating cellular patterns such as foams or biological tissues. Despite its realism and generality, the standard Monte Carlo algorithm used in the scientific literature to evolve this model preserves the connectivity of cells only over a limited range of simulation temperatures. We present a new algorithm in which cell fragmentation is forbidden at all simulation temperatures. This significantly enhances the realism of the simulated patterns. It also increases the computational efficiency compared with the standard CPM algorithm, even at the same simulation temperature, thanks to the time spared by not attempting unrealistic moves. Moreover, our algorithm restores the detailed balance equation, ensuring that the long-term stage is independent of the chosen acceptance rate and chosen path in temperature space.
A retrieval algorithm of hydrometeor profiles for a submillimeter-wave radiometer
NASA Astrophysics Data System (ADS)
Liu, Yuli; Buehler, Stefan; Liu, Heguang
2017-04-01
Vertical profiles of particle microphysics play a vital role in the estimation of climate feedbacks. This paper proposes a new algorithm to retrieve profiles of hydrometeor parameters (i.e., ice, snow, rain, liquid cloud, graupel) based on passive submillimeter-wave measurements. These parameters include water content and particle size. The first part of the algorithm builds the database and retrieves the integrated quantities. The database is built with the Atmospheric Radiative Transfer Simulator (ARTS), which uses atmospheric data to simulate the corresponding brightness temperatures. A neural network, trained on the precalculated database, is developed to retrieve the water path for each type of particle. The second part of the algorithm analyses the statistical relationship between the water path and the vertical parameter profiles. Based on the strong dependence existing between vertical layers in the profiles, a Principal Component Analysis (PCA) technique is applied. The third part of the algorithm uses the forward model explicitly to retrieve the hydrometeor profiles. A cost function is calculated in each iteration, and a Differential Evolution (DE) algorithm is used to adjust the parameter values during the evolutionary process. The performance of this algorithm is planned to be verified for both the simulation database and measurement data, by retrieving profiles and comparing them with the initial ones. Results show that this algorithm has the ability to retrieve the hydrometeor profiles efficiently. The combination of ARTS and an optimization algorithm can achieve much better results than the commonly used database approach. Meanwhile, the fact that ARTS can be used explicitly in the retrieval process shows great potential for providing solutions to other retrieval problems.
Electronic Thermometer Readings
NASA Technical Reports Server (NTRS)
2001-01-01
NASA Stennis' adaptive predictive algorithm for electronic thermometers uses sample readings taken during the initial rise in temperature to accurately and rapidly predict the steady-state temperature. The final steady-state temperature of an object can be calculated based on the second-order logarithm of the temperature signals acquired by the sensor and predetermined variables from the sensor characteristics. These variables are calculated during tests of the sensor. Once the variables are determined, relatively little data acquisition and data processing time is required by the algorithm to provide a near-accurate approximation of the final temperature. This reduces the delay in the steady-state response time of a temperature sensor. This advanced algorithm can be implemented in existing software or hardware with an erasable programmable read-only memory (EPROM). The capability for easy integration eliminates the expense of developing a whole new system to obtain the benefits provided by NASA Stennis' technology.
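The idea of extrapolating the steady state from the initial rise can be illustrated with a first-order (single-exponential) sensor response, a simplified stand-in for the second-order method described above: three equally spaced readings determine the decay ratio and hence the asymptote in closed form.

```python
def predict_steady_state(y0, y1, y2):
    """Predict the final steady-state temperature from three equally
    spaced readings on the initial rise, assuming a first-order response
    y_k = T_inf - A * r**k with 0 < r < 1."""
    r = (y2 - y1) / (y1 - y0)      # per-sample decay ratio, exp(-dt/tau)
    if not 0.0 < r < 1.0:
        raise ValueError("readings are not on a first-order rise")
    # Sum the remaining geometric tail: T_inf = y2 + (y2 - y1) * r / (1 - r).
    return y2 + (y2 - y1) * r / (1.0 - r)
```

Feeding it three early samples of a rise toward 100 degrees (time constant 30 s, samples 5 s apart) recovers the asymptote long before the sensor settles.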
Analysing the Effects of Different Land Cover Types on Land Surface Temperature Using Satellite Data
NASA Astrophysics Data System (ADS)
Şekertekin, A.; Kutoglu, Ş. H.; Kaya, S.; Marangoz, A. M.
2015-12-01
Monitoring Land Surface Temperature (LST) via remote sensing images is one of the most important contributions to climatology. LST is an important parameter governing the energy balance on the Earth, and it also helps us to understand the behavior of urban heat islands. There are many algorithms for obtaining LST by remote sensing techniques. The most commonly used are the split-window algorithm, the temperature/emissivity separation method, the mono-window algorithm and the single-channel method. In this research, the mono-window algorithm was applied to a Landsat 5 TM image acquired on 28.08.2011. In addition, meteorological data such as humidity and air temperature are used in the algorithm. Moreover, high-resolution Geoeye-1 and Worldview-2 images, acquired on 29.08.2011 and 12.07.2013 respectively, were used to investigate the relationships between LST and land cover type. As a result of the analyses, areas with vegetation cover have approximately 5 ºC lower temperatures than the city center and arid land. LST values vary by about 10 ºC in the city center because of different surface properties such as reinforced-concrete construction, green zones and sandbank. The temperature around some places in the thermal power plant region (ÇATES and ZETES) in Çatalağzı is about 5 ºC higher than the city center. Sandbank and agricultural areas have the highest temperatures due to the land cover structure.
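The mono-window algorithm has a compact closed form (Qin et al., 2001, for Landsat TM band 6) that can be sketched directly. The coefficients a and b below are, to my understanding, the published linear Planck-fit constants for the 0-70 ºC range, and the sample inputs in the usage are illustrative, not the study's data.

```python
def mono_window_lst(t_sensor, emissivity, transmittance, t_mean_atm):
    """Mono-window land surface temperature (all temperatures in kelvin).
    t_sensor: at-sensor brightness temperature
    t_mean_atm: effective mean atmospheric temperature."""
    a, b = -67.355351, 0.458606   # Planck-fit coefficients, 0-70 degC range
    C = emissivity * transmittance
    D = (1.0 - transmittance) * (1.0 + (1.0 - emissivity) * transmittance)
    return (a * (1 - C - D) + (b * (1 - C - D) + C + D) * t_sensor
            - D * t_mean_atm) / C
```

As a sanity check, with a perfectly emitting surface and a fully transparent atmosphere (emissivity = transmittance = 1) the formula returns the sensor brightness temperature unchanged.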
Entropic stabilization of isolated beta-sheets.
Dugourd, Philippe; Antoine, Rodolphe; Breaux, Gary; Broyer, Michel; Jarrold, Martin F
2005-04-06
Temperature-dependent electric deflection measurements have been performed for a series of unsolvated alanine-based peptides (Ac-WA(n)-NH(2), where Ac = acetyl, W = tryptophan, A = alanine, and n = 3, 5, 10, 13, and 15). The measurements are interpreted using Monte Carlo simulations performed with a parallel tempering algorithm. Despite alanine's high helix propensity in solution, the results suggest that unsolvated Ac-WA(n)-NH(2) peptides with n > 10 adopt beta-sheet conformations at room temperature. Previous studies have shown that protonated alanine-based peptides adopt helical or globular conformations in the gas phase, depending on the location of the charge. Thus, the charge more than anything else controls the structure.
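The parallel tempering scheme behind such simulations can be sketched generically: one replica per temperature, Metropolis moves within each replica, and detailed-balance swap attempts between adjacent temperatures. The double-well toy energy, temperatures, and move size in the usage are illustrative, not the peptide force field.

```python
import math, random

def parallel_tempering(energy, propose, x0, temps, sweeps=2000):
    """Minimal replica-exchange Monte Carlo.  `temps` is sorted cold to
    hot; returns the best configuration and energy seen anywhere."""
    xs = [x0] * len(temps)
    es = [energy(x0)] * len(temps)
    best_x, best_e = x0, es[0]
    for _ in range(sweeps):
        for i, t in enumerate(temps):
            y = propose(xs[i])
            ey = energy(y)
            # Metropolis move within replica i at temperature t.
            if ey <= es[i] or random.random() < math.exp((es[i] - ey) / t):
                xs[i], es[i] = y, ey
                if ey < best_e:
                    best_x, best_e = y, ey
        for i in range(len(temps) - 1):
            # Swap acceptance from detailed balance:
            # min(1, exp((beta_i - beta_j) * (E_i - E_j))).
            delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (es[i] - es[i + 1])
            if delta >= 0 or random.random() < math.exp(delta):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
                es[i], es[i + 1] = es[i + 1], es[i]
    return best_x, best_e
```

The hot replica crosses energy barriers that would trap a single cold chain, and swaps hand good configurations down the temperature ladder; on an asymmetric double well started in the shallower minimum, the search finds the deeper one.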
Application of ANN and fuzzy logic algorithms for streamflow modelling of Savitri catchment
NASA Astrophysics Data System (ADS)
Kothari, Mahesh; Gharde, K. D.
2015-07-01
Streamflow prediction is an essential aspect of any watershed modelling. Black-box models (soft computing techniques) have proven to be an efficient alternative to physical (traditional) methods for simulating the streamflow and sediment yield of catchments. The present study focuses on the development of models using ANN and fuzzy logic (FL) algorithms for predicting the streamflow of the catchment of the Savitri River Basin. The input vectors to these models were daily rainfall, mean daily evaporation, mean daily temperature and lagged streamflow. In the present study, 20 years (1992-2011) of rainfall and other hydrological data were considered, of which 13 years (1992-2004) were used for training and the remaining 7 years (2005-2011) for validation of the models. The model performance was evaluated by the R, RMSE, EV, CE, and MAD statistical parameters. It was found that ANN model performance improved with increasing input vectors. The fuzzy logic models predicted the streamflow better with rainfall as a single input than with multiple input vectors. When comparing the ANN and FL algorithms for the prediction of streamflow, the ANN model performance is clearly superior.
In-flight calibration/validation of the ENVISAT/MWR
NASA Astrophysics Data System (ADS)
Tran, N.; Obligis, E.; Eymard, L.
2003-04-01
Retrieval algorithms for the wet tropospheric correction, integrated vapor and liquid water contents, and atmospheric attenuations of the backscattering coefficients in Ku and S band have been developed using a database of geophysical parameters from global analyses of a meteorological model and the corresponding brightness temperatures and backscattering cross-sections simulated by a radiative transfer model. The meteorological data correspond to 12-hour predictions from the European Centre for Medium-Range Weather Forecasts (ECMWF) model. Relationships between satellite measurements and geophysical parameters are determined using a statistical method. The quality of the retrieval algorithms therefore depends on the representativity of the database, on the accuracy of the radiative transfer model used for the simulations and, finally, on the quality of the inversion model. The database has been built using the latest version of the ECMWF forecast model, which has been operationally run since November 2000. The 60 levels in the model allow a complete description of the troposphere/stratosphere profiles, and the horizontal resolution is now half a degree. The radiative transfer model is the emissivity model developed at the Université Catholique de Louvain [Lemaire, 1998], coupled to an atmospheric model [Liebe et al., 1993] for gaseous absorption. For the inversion, we have replaced the classical log-linear regression with a neural network inversion. For Envisat, the backscattering coefficient in Ku band is used in the different algorithms to take into account the surface roughness, as is done with the 18 GHz channel for the TOPEX algorithms or with an additional wind speed term for the ERS2 algorithms. The in-flight calibration/validation of the Envisat radiometer has been performed by tuning 3 internal parameters (the transmission coefficient of the reflector, the sky horn feed transmission coefficient and the main antenna transmission coefficient).
First, the ERS2 brightness temperatures were adjusted to the simulations for the 2000/2001 version of the ECMWF model. Then, the Envisat brightness temperatures were calibrated against these adjusted ERS2 values. The advantages of this calibration approach are that: (i) the method provides the relative discrepancy with respect to the simulation chain; the results, obtained simultaneously for several radiometers (we repeated the same analysis with the TOPEX and JASON radiometers), can be used to detect significant calibration problems (more than 2-3 K); (ii) the retrieval algorithms were developed using the same meteorological model (the 2000/2001 version of the ECMWF model) and the same radiative transfer model as the calibration process, ensuring consistency between calibration and retrieval processing; the retrieval parameters are then optimized; (iii) calibrating the Envisat brightness temperatures against the 2000/2001 version of the ECMWF model, together with the recommendation to use the same model as a reference to correct the ERS2 brightness temperatures, allows the same retrieval algorithms to be used for both missions, providing continuity between the two; (iv) compared with other calibration methods (such as systematically calibrating an instrument or its products against those of a previous mission), this method is more satisfactory, since improvements in technology, modeling, and retrieval processing are taken into account. For the validation of the brightness temperatures, we use either direct comparison with measurements provided by other instruments in similar channels, or monitoring over stable areas (coldest ocean points, stable continental areas). 
Validation of the wet tropospheric correction can also be obtained by comparison with other radiometer products, but the only true validation relies on comparing coincident in situ measurements (from radiosondes) with the retrieved products.
Validation of VIIRS Cloud Base Heights at Night Using Ground and Satellite Measurements over Alaska
NASA Astrophysics Data System (ADS)
NOH, Y. J.; Miller, S. D.; Seaman, C.; Forsythe, J. M.; Brummer, R.; Lindsey, D. T.; Walther, A.; Heidinger, A. K.; Li, Y.
2016-12-01
Knowledge of Cloud Base Height (CBH) is critical to describing cloud radiative feedbacks in numerical models and is of practical significance to aviation communities. We have developed a new CBH algorithm constrained by Cloud Top Height (CTH) and Cloud Water Path (CWP) by performing a statistical analysis of A-Train satellite data. It includes an extinction-based method for thin cirrus. In the algorithm, cloud geometric thickness is derived from upstream CTH and CWP input and subtracted from CTH to generate the topmost-layer CBH. The CBH information is a key parameter for an improved Cloud Cover/Layers product. The algorithm has been applied to the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi NPP spacecraft. Nighttime cloud optical properties for CWP are retrieved with the nighttime lunar cloud optical and microphysical properties (NLCOMP) algorithm, based on a lunar reflectance model for the VIIRS Day/Night Band (DNB), which measures nighttime visible light such as moonlight. The DNB's innovative capabilities fill the polar-winter and nighttime gap in cloud observations, which has been an important shortfall of conventional radiometers. The CBH products have been intensively evaluated against CloudSat data. The results showed that the new algorithm yields significantly improved performance over the original VIIRS CBH algorithm. However, since CloudSat is now operational during daytime only due to a battery anomaly, the nighttime performance has not been fully assessed. This presentation will show our approach to assessing the performance of the CBH algorithm at night. VIIRS CBHs are retrieved over the Alaska region from October 2015 to April 2016 using the Clouds from AVHRR Extended (CLAVR-x) processing system. Ground-based measurements from a ceilometer and a micropulse lidar at the Atmospheric Radiation Measurement (ARM) site on the North Slope of Alaska are used for the analysis. 
Local weather conditions are checked using temperature and precipitation observations at the site. CALIPSO data with near-simultaneous colocation are added for multi-layered cloud cases, which may have high clouds aloft beyond the reach of the ground measurements. Multi-month performance statistics and case studies will be shown, and additional efforts toward algorithm refinement will also be discussed.
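The topmost-layer CBH step described above reduces to subtracting a statistically derived geometric thickness from CTH. A rough sketch follows; the thickness-vs-CWP relation and its coefficients here are hypothetical placeholders, not the A-Train-derived regression used by the operational algorithm.

```python
import numpy as np

def cloud_geometric_thickness(cwp, cth, a=0.5, b=1.2):
    """Hypothetical statistical fit: thickness (km) from cloud water path
    (g/m^2). Coefficients a and b are illustrative placeholders only.
    Thickness is capped at CTH so the base never falls below the surface."""
    return np.minimum(a * b * np.log1p(cwp), cth)

def cloud_base_height(cth, cwp):
    """Topmost-layer CBH = CTH minus the derived geometric thickness."""
    return cth - cloud_geometric_thickness(cwp, cth)

cth = np.array([8.0, 10.0, 2.5])     # cloud top heights, km
cwp = np.array([50.0, 300.0, 20.0])  # cloud water paths, g/m^2
print(cloud_base_height(cth, cwp))
```

Any real implementation would replace `cloud_geometric_thickness` with the regression fitted to CloudSat/CALIPSO statistics, stratified by cloud type.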
Novel automated inversion algorithm for temperature reconstruction using gas isotopes from ice cores
NASA Astrophysics Data System (ADS)
Döring, Michael; Leuenberger, Markus C.
2018-06-01
Greenland's past temperature history can be reconstructed by forcing the output of a firn-densification and heat-diffusion model to fit multiple gas-isotope datasets (δ15N, δ40Ar, or δ15Nexcess) extracted from ancient air in Greenland ice cores, using published accumulation-rate (Acc) datasets. We present here a novel methodology to solve this inverse problem by designing a fully automated algorithm. To demonstrate the performance of this novel approach, we begin by intentionally constructing synthetic temperature histories and associated δ15N datasets, mimicking real Holocene data, which we use as true values
(targets) to be compared to the output of the algorithm. This allows us to quantify uncertainties originating from the algorithm itself. The presented approach is completely automated and therefore minimizes the subjective
impact of manual parameter tuning, leading to reproducible temperature estimates. In contrast to many other ice-core-based temperature reconstruction methods, the presented approach is completely independent of ice-core stable-water isotopes, providing the opportunity to validate water-isotope-based reconstructions or reconstructions where water isotopes are used together with δ15N or δ40Ar. We solve the inverse problem T(δ15N, Acc) using a combination of a Monte Carlo based iterative approach and the analysis of remaining mismatches between modelled and target data, based on cubic-spline filtering of random numbers and the laboratory-determined temperature sensitivity for nitrogen isotopes. Additionally, the presented reconstruction approach was tested by fitting measured δ40Ar and δ15Nexcess data, which also led to robust agreement between modelled and measured data. The obtained final mismatches follow a symmetric standard distribution. For the study on synthetic data, 95 % of the mismatches compared to the synthetic target data lie within an envelope of 3.0 to 6.3 permeg for δ15N and 0.23 to 0.51 K for temperature (2σ, respectively). In addition to Holocene temperature reconstructions, the fitting approach can also be used for glacial temperature reconstructions. This is shown by fitting the North Greenland Ice Core Project (NGRIP) δ15N data for two Dansgaard-Oeschger events using the presented approach, leading to results comparable to other studies.
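The core idea, iteratively perturbing a temperature history with spline-smoothed random noise and keeping perturbations that reduce the data mismatch, can be sketched with a toy linear δ15N sensitivity standing in for the firn-densification model; the sensitivity value, noise scale, and knot count are all hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)

# Toy "forward model": a linear delta15N sensitivity to temperature,
# standing in for the firn-densification/heat-diffusion model.
SENS = 0.01  # permil per K, hypothetical

def forward(temp):
    return SENS * temp

# Synthetic target built from a hidden "true" temperature history
true_temp = 2.0 * np.sin(2 * np.pi * t)
target = forward(true_temp)

def smooth_noise(scale=0.5, knots=12):
    """Cubic-spline filtering of random numbers -> smooth perturbation."""
    xk = np.linspace(0, 1, knots)
    return CubicSpline(xk, rng.normal(0, scale, knots))(t)

temp = np.zeros(n)                      # first-guess history
best = np.mean((forward(temp) - target) ** 2)
for _ in range(3000):                   # Monte Carlo iterations
    cand = temp + smooth_noise()
    mis = np.mean((forward(cand) - target) ** 2)
    if mis < best:                      # keep only improving perturbations
        temp, best = cand, mis

print(f"RMS temperature mismatch: {np.sqrt(np.mean((temp - true_temp)**2)):.3f} K")
```

The real algorithm analyzes the remaining mismatch structure to shape further perturbations rather than relying on purely random ones, which is what makes it efficient with an expensive forward model.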
NASA Astrophysics Data System (ADS)
Nalli, N. R.; Gambacorta, A.; Tan, C.; Iturbide, F.; Barnet, C. D.; Reale, A.; Sun, B.; Liu, Q.
2017-12-01
This presentation overviews the performance of the operational SNPP NOAA Unique Combined Atmospheric Processing System (NUCAPS) environmental data record (EDR) products. The SNPP Cross-track Infrared Sounder and Advanced Technology Microwave Sounder (CrIS/ATMS) suite, the first of the Joint Polar Satellite System (JPSS) Program, is one of NOAA's major investments in our nation's future operational environmental observation capability. The NUCAPS algorithm is a world-class NOAA-operational IR/MW retrieval algorithm, based upon the well-established AIRS science team algorithm, for deriving temperature, moisture, ozone, and carbon trace gases, providing users with state-of-the-art EDR products. Operational use of the products includes the NOAA National Weather Service (NWS) Advanced Weather Interactive Processing System (AWIPS), along with numerous science-user applications. NUCAPS EDR product assessments are made with reference to JPSS Level 1 global requirements, which provide the definitive metrics for assessing whether the products have minimally met predefined global performance specifications. The NESDIS/STAR NUCAPS development and validation team recently delivered the Phase 4 algorithm, which incorporated critical updates necessary for compatibility with full-spectral-resolution (FSR) CrIS sensor data records (SDRs). Based on comprehensive analyses, the NUCAPS Phase 4 CrIS-FSR temperature, moisture, and ozone profile EDRs, as well as the carbon trace gas EDRs (CO, CH4, and CO2), are shown to be meeting or close to meeting the JPSS program global requirements. Regional and temporal assessments of interest to EDR users (e.g., AWIPS) will also be presented.
NASA Astrophysics Data System (ADS)
Vajda, Istvan; Kohari, Zalan; Porjesz, Tamas; Benko, Laszlo; Meerovich, V.; Sokolovsky; Gawalek, W.
2002-08-01
The technical and economic feasibility of short-term energy-storage flywheels with high-temperature superconducting (HTS) bearings is widely investigated. It is essential to reduce the ac losses caused by magnetic field variations in the HTS bulk disks/rings (levitators) used in the magnetic bearings of flywheels. For the HTS bearings, the magnetic field distribution was both calculated and measured. Effects such as eccentricity and tilting were measured, as was the time dependence of the levitation force following a jumpwise movement of the permanent magnet. The results were used to set up an engineering design algorithm for energy-storage HTS flywheels. This algorithm was applied to an experimental HTS flywheel model with a disk-type permanent-magnet motor/generator unit designed and constructed by the authors. A conceptual design of the disk-type motor/generator with radial flux is shown.
Evaluation of Skin Temperatures Retrieved from GOES-8
NASA Technical Reports Server (NTRS)
Suggs, Ronnie J.; Jedlovec, G. J.; Lapenta, W. M.; Haines, S. L.
2000-01-01
Skin temperatures derived from geostationary satellites have the potential to provide the temporal and spatial resolution needed for model assimilation. To adequately assess the potential improvements in numerical model forecasts that can be made by assimilating satellite data, an estimate of the accuracy of the skin temperature product is necessary. A particular skin temperature algorithm, the Physical Split Window Technique, which uses the longwave infrared channels of the GOES Imager, has shown promise in recent model assimilation studies for providing land surface temperatures with reasonable accuracy. A comparison of GOES-8 skin temperatures retrieved with this algorithm against in situ measurements is presented. Various retrieval algorithm issues are addressed, including surface emissivity
Transfer and distortion of atmospheric information in the satellite temperature retrieval problem
NASA Technical Reports Server (NTRS)
Thompson, O. E.
1981-01-01
A systematic approach to investigating the transfer of basic ambient temperature information and its distortion by satellite systems and subsequent analysis algorithms is discussed. The retrieval analysis cycle is derived, the variance spectrum of information is examined as it takes different forms in that process, and the quality and quantity of information existing at each step are compared with the initial ambient temperature information. Temperature retrieval algorithms can smooth, add, or further distort information, depending on how stable the algorithm is and how heavily it is influenced by a priori data.
Grygierek, Krzysztof; Ferdyn-Grygierek, Joanna
2018-01-01
An inappropriate indoor climate, particularly indoor temperature, may cause occupant discomfort. A great number of air-conditioning systems make it possible to maintain the required thermal comfort; their installation, however, involves high investment costs and high energy demand. This study analyses the possibilities of limiting excessive temperatures in residential buildings using passive cooling by means of ventilation with cool ambient air. A fuzzy logic controller whose aim is to control mechanical ventilation has been proposed and optimized. To optimize the controller, a modified Multiobjective Evolutionary Algorithm, based on the Strength Pareto Evolutionary Algorithm, has been adopted. The optimization algorithm has been implemented in MATLAB®, which is coupled through MLE+ with EnergyPlus to perform dynamic co-simulation between the programs. The example of a single detached building shows that occupants' thermal comfort in a transitional climate may improve significantly owing to mechanical ventilation controlled by the suggested fuzzy logic controller. When the system is combined with a traditional cooling system, it may further decrease cooling demand. PMID:29642525
Methods to Calculate the Heat Index as an Exposure Metric in Environmental Health Research
Bell, Michelle L.; Peng, Roger D.
2013-01-01
Background: Environmental health research employs a variety of metrics to measure heat exposure, both to directly study the health effects of outdoor temperature and to control for temperature in studies of other environmental exposures, including air pollution. To measure heat exposure, environmental health studies often use heat index, which incorporates both air temperature and moisture. However, the method of calculating heat index varies across environmental studies, which could mean that studies using different algorithms to calculate heat index may not be comparable. Objective and Methods: We investigated 21 separate heat index algorithms found in the literature to determine a) whether different algorithms generate heat index values that are consistent with the theoretical concepts of apparent temperature and b) whether different algorithms generate similar heat index values. Results: Although environmental studies differ in how they calculate heat index values, most studies’ heat index algorithms generate values consistent with apparent temperature. Additionally, most different algorithms generate closely correlated heat index values. However, a few algorithms are potentially problematic, especially in certain weather conditions (e.g., very low relative humidity, cold weather). To aid environmental health researchers, we have created open-source software in R to calculate the heat index using the U.S. National Weather Service’s algorithm. Conclusion: We identified 21 separate heat index algorithms used in environmental research. Our analysis demonstrated that methods to calculate heat index are inconsistent across studies. Careful choice of a heat index algorithm can help ensure reproducible and consistent environmental health research. Citation: Anderson GB, Bell ML, Peng RD. 2013. Methods to calculate the heat index as an exposure metric in environmental health research. 
Environ Health Perspect 121:1111–1119; http://dx.doi.org/10.1289/ehp.1206273 PMID:23934704
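The U.S. National Weather Service algorithm referenced above (which the authors implement in R) can be sketched in Python. This follows the published NWS procedure, a simple formula first, then the Rothfusz regression with humidity adjustments when the result reaches 80 °F, but readers should consult the NWS documentation for the authoritative version.

```python
def heat_index(temp_f, rh):
    """U.S. NWS heat index: simple formula, then the Rothfusz regression
    (plus humidity adjustments) when the simple result reaches 80 F.
    temp_f in degrees Fahrenheit, rh in percent."""
    hi = 0.5 * (temp_f + 61.0 + (temp_f - 68.0) * 1.2 + rh * 0.094)
    if hi < 80.0:
        return hi
    t, r = temp_f, rh
    hi = (-42.379 + 2.04901523 * t + 10.14333127 * r
          - 0.22475541 * t * r - 6.83783e-3 * t * t
          - 5.481717e-2 * r * r + 1.22874e-3 * t * t * r
          + 8.5282e-4 * t * r * r - 1.99e-6 * t * t * r * r)
    if r < 13 and 80 <= t <= 112:        # dry-air adjustment
        hi -= ((13 - r) / 4) * ((17 - abs(t - 95)) / 17) ** 0.5
    elif r > 85 and 80 <= t <= 87:       # humid-air adjustment
        hi += ((r - 85) / 10) * ((87 - t) / 5)
    return hi

print(round(heat_index(95, 50), 1))  # roughly 105 F, consistent with NWS tables
```

The low-humidity and cold-weather cases flagged as problematic in the abstract are exactly where the simple-formula branch and the dry-air adjustment above matter.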
NASA Astrophysics Data System (ADS)
Zhou, Chaojie; Ding, Xiaohua; Zhang, Jie; Yang, Jungang; Ma, Qiang
2017-12-01
While global oceanic surface information with large-scale, real-time, high-resolution data is collected by satellite remote sensing instrumentation, three-dimensional (3D) observations are usually obtained from in situ measurements, but with minimal coverage and spatial resolution. To meet the needs of 3D ocean investigations, we have developed a new algorithm to reconstruct the 3D ocean temperature field based on Array for Real-time Geostrophic Oceanography (Argo) profiles and sea surface temperature (SST) data. The Argo temperature profiles are first optimally fitted to generate a series of temperature functions of depth, so that the vertical temperature structure is represented continuously. By calculating the derivatives of the fitted functions, the vertical temperature gradient of the Argo profiles can be evaluated at arbitrary depth. A gridded 3D temperature gradient field is then found by applying inverse-distance-weighted interpolation in the horizontal direction. Combined with the processed SST, the 3D temperature field is reconstructed below the surface using the gridded temperature gradient. Finally, to confirm the effectiveness of the algorithm, an experiment in the Pacific Ocean south of Japan is conducted, for which a 3D temperature field is generated. Compared with other similar gridded products, the reconstructed 3D temperature field derived by the proposed algorithm achieves satisfactory accuracy, with correlation coefficients of 0.99, and a higher spatial resolution (0.25° × 0.25°) that captures smaller-scale characteristics. Both the accuracy and the advantages of the algorithm are thus validated.
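The reconstruction steps above (fit profiles in depth, differentiate, interpolate the gradient horizontally by inverse distance weighting, then integrate down from the SST) can be sketched with synthetic profiles; the data, polynomial order, and IDW power here are illustrative, not the paper's optimal fitting choices.

```python
import numpy as np

# Hypothetical Argo-like profiles: temperature vs depth at scattered positions
rng = np.random.default_rng(1)
depths = np.linspace(0, 500, 26)                       # m
lons, lats = rng.uniform(0, 5, 20), rng.uniform(0, 5, 20)
profiles = 25 - 0.03 * depths[None, :] + rng.normal(0, 0.1, (20, 26))

# 1) Fit each profile with a polynomial in depth, then differentiate the fit
coeffs = [np.polyfit(depths, p, 4) for p in profiles]
grads = np.array([np.polyval(np.polyder(c), depths) for c in coeffs])  # dT/dz

# 2) Inverse-distance-weighted interpolation of the gradient to a grid point
def idw(x, y, values, xs, ys, power=2, eps=1e-9):
    d = np.hypot(xs - x, ys - y) + eps
    w = d ** -power
    return (w[:, None] * values).sum(0) / w.sum()

grad_grid = idw(2.5, 2.5, grads, lons, lats)           # gradient profile at (2.5, 2.5)

# 3) Integrate the gradient downward, anchored at the (satellite) SST
sst = 25.0
temp_profile = sst + np.concatenate(([0.0], np.cumsum(
    0.5 * (grad_grid[1:] + grad_grid[:-1]) * np.diff(depths))))
print(temp_profile[:3])
```

Working with the gradient rather than the temperature itself is what lets the surface boundary condition (SST) anchor the whole column.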
Modeling the viscoplastic behavior of Inconel 718 at 1200 F
NASA Technical Reports Server (NTRS)
Abdel-Kader, M. S.; Eftis, J.; Jones, D. L.
1988-01-01
A large number of tests, including tensile, creep, fatigue, and creep-fatigue, were performed to characterize the mechanical properties of Inconel 718 (a nickel-based superalloy) at 1200 F, the operating temperature for turbine blades. In addition, a few attempts were made to model the behavior of Inconel 718 at 1200 F using viscoplastic theories. The Chaboche theory of viscoplasticity can model a wide variety of mechanical behavior, including monotonic, sustained, and cyclic responses of homogeneous, initially isotropic, strain-hardening (or softening) materials. It is shown how the Chaboche theory can be used to model the viscoplastic behavior of Inconel 718 at 1200 F. First, an algorithm was developed to systematically determine the material parameters of the Chaboche theory from uniaxial tensile, creep, and cyclic data. The algorithm is general and can be used with similar high-temperature materials. A sensitivity study was then performed, and an optimal set of Chaboche's parameters was obtained. This study also indicated the role of each parameter in modeling the response to different loading conditions.
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model whose parameters are updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Its numerical stability, modeling error, and parametric sensitivity are then analyzed at different sampling rates (0.02, 0.1, 0.5, and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model offers high accuracy and suitability for parameter identification without using open-circuit voltage.
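A plain recursive-least-squares identifier of the kind underlying CRLS can be sketched as follows; this omits the paper's bias-correction term, and the first-order ARX "battery" system and its coefficients are hypothetical.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.999):
    """One standard recursive-least-squares step with forgetting factor lam.
    (The paper's bias-correction term is omitted in this plain sketch.)"""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + k.flatten() * (y - phi.T @ theta)
    P = (P - k @ phi.T @ P) / lam                  # covariance update
    return theta, P

# Identify a first-order ARX model y[t] = a*y[t-1] + b*u[t] from synthetic data
rng = np.random.default_rng(2)
a_true, b_true = 0.95, 0.4
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = a_true * y[t-1] + b_true * u[t] + 0.001 * rng.normal()

theta, P = np.zeros(2), 1e3 * np.eye(2)
for t in range(1, 500):
    theta, P = rls_update(theta, P, np.array([y[t-1], u[t]]), y[t])
print(theta)  # estimates of (a_true, b_true)
```

A PRBS input, as used in the paper, plays the role of `u` here: it keeps the regressor persistently exciting so the estimates converge.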
Exploring the Energy Landscapes of Protein Folding Simulations with Bayesian Computation
Burkoff, Nikolas S.; Várnai, Csilla; Wells, Stephen A.; Wild, David L.
2012-01-01
Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. PMID:22385859
Exploring the energy landscapes of protein folding simulations with Bayesian computation.
Burkoff, Nikolas S; Várnai, Csilla; Wells, Stephen A; Wild, David L
2012-02-22
Nested sampling is a Bayesian sampling technique developed to explore probability distributions localized in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering. In this article, we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Gō-like force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins that are commonly used for testing protein-folding procedures. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high-level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Mohamed, Ahmed F; Elarini, Mahdi M; Othman, Ahmed M
2014-05-01
One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony (ABC) algorithm. The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the PV module output power, which is to be maximized, and the life cycle cost (LCC), which is to be minimized. The analysis is based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and a Genetic Algorithm (GA) is made, and another location, Zagazig city, is selected to check the validity of the ABC algorithm elsewhere. The ABC results are closer to optimal than those of the GA. These results encourage the use of PV systems to electrify rural sites in Egypt.
Mohamed, Ahmed F.; Elarini, Mahdi M.; Othman, Ahmed M.
2013-01-01
One of the most recent optimization techniques applied to the optimal design of a photovoltaic system to supply an isolated load demand is the Artificial Bee Colony (ABC) algorithm. The proposed methodology is applied to optimize the cost of the PV system, including the photovoltaic modules, a battery bank, a battery charge controller, and an inverter. Two objective functions are proposed: the PV module output power, which is to be maximized, and the life cycle cost (LCC), which is to be minimized. The analysis is based on solar radiation and ambient temperature measured at Helwan city, Egypt. A comparison between the optimal results of the ABC algorithm and a Genetic Algorithm (GA) is made, and another location, Zagazig city, is selected to check the validity of the ABC algorithm elsewhere. The ABC results are closer to optimal than those of the GA. These results encourage the use of PV systems to electrify rural sites in Egypt. PMID:25685507
Lin, Jonathan S; Hwang, Ken-Pin; Jackson, Edward F; Hazle, John D; Stafford, R Jason; Taylor, Brian A
2013-10-01
A k-means-based classification algorithm is investigated to assess suitability for rapidly separating and classifying fat/water spectral peaks from a fast chemical shift imaging technique for magnetic resonance temperature imaging. Algorithm testing is performed in simulated mathematical phantoms and agar gel phantoms containing mixed fat/water regions. Proton resonance frequencies (PRFs), apparent spin-spin relaxation (T2*) times, and T1-weighted (T1-W) amplitude values were calculated for each voxel using a single-peak autoregressive moving average (ARMA) signal model. These parameters were then used as criteria for k-means sorting, with the results used to determine PRF ranges of each chemical species cluster for further classification. To detect the presence of secondary chemical species, spectral parameters were recalculated when needed using a two-peak ARMA signal model during the subsequent classification steps. Mathematical phantom simulations involved the modulation of signal-to-noise ratios (SNR), maximum PRF shift (MPS) values, analysis window sizes, and frequency expansion factor sizes in order to characterize the algorithm performance across a variety of conditions. In agar, images were collected on a 1.5T clinical MR scanner using acquisition parameters close to simulation, and algorithm performance was assessed by comparing classification results to manually segmented maps of the fat/water regions. Performance was characterized quantitatively using the Dice Similarity Coefficient (DSC), sensitivity, and specificity. The simulated mathematical phantom experiments demonstrated good fat/water separation depending on conditions, specifically high SNR, moderate MPS value, small analysis window size, and low but nonzero frequency expansion factor size. 
Physical phantom results demonstrated good identification for both water (0.997 ± 0.001, 0.999 ± 0.001, and 0.986 ± 0.001 for DSC, sensitivity, and specificity, respectively) and fat (0.763 ± 0.006, 0.980 ± 0.004, and 0.941 ± 0.002 for DSC, sensitivity, and specificity, respectively). Temperature uncertainties, based on PRF uncertainties from a 5 × 5-voxel ROI, were 0.342 and 0.351°C for pure and mixed fat/water regions, respectively. Algorithm speed was tested using 25 × 25-voxel and whole image ROIs containing both fat and water, resulting in average processing times per acquisition of 2.00 ± 0.07 s and 146 ± 1 s, respectively, using uncompiled MATLAB scripts running on a shared CPU server with eight Intel Xeon(TM) E5640 quad-core processors (2.66 GHz, 12 MB cache) and 12 GB RAM. Results from both the mathematical and physical phantom suggest the k-means-based classification algorithm could be useful for rapid, dynamic imaging in an ROI for thermal interventions. Successful separation of fat/water information would aid in reducing errors from the nontemperature sensitive fat PRF, as well as potentially facilitate using fat as an internal reference for PRF shift thermometry when appropriate. Additionally, the T1-W or R2* signals may be used for monitoring temperature in surrounding adipose tissue.
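A minimal 1D illustration of the k-means sorting step, clustering synthetic voxel PRF values into fat and water classes, is given below. The paper additionally uses T2* and T1-weighted amplitude as features and ARMA-derived spectral parameters; those are omitted, and the PRF distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic per-voxel PRF shifts (ppm): water near 4.7, fat near 1.3
prf = np.concatenate([rng.normal(4.7, 0.1, 300), rng.normal(1.3, 0.1, 200)])

def kmeans_1d(x, k=2, iters=50):
    """Plain 1D k-means: assign each sample to its nearest center,
    then recompute centers as cluster means."""
    centers = np.percentile(x, np.linspace(10, 90, k))  # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

labels, centers = kmeans_1d(prf)
print(np.sort(centers))
```

In the paper, the resulting cluster PRF ranges then drive the per-voxel decision of whether to refit with the two-peak ARMA model.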
NASA Technical Reports Server (NTRS)
Knox, C. E.; Cannon, D. G.
1980-01-01
A simple flight-management descent algorithm designed to improve the accuracy of delivering an airplane, in a fuel-conservative manner, to a metering fix at a time designated by air traffic control was developed and flight tested. This algorithm provides a three-dimensional path with terminal-area time constraints (four-dimensional) for an airplane to make an idle-thrust, clean-configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard pressure and temperature effects. The flight-management descent algorithm is described, and the results of the flight tests flown with the Terminal Configured Vehicle airplane are presented.
Performance of the Lester battery charger in electric vehicles
NASA Technical Reports Server (NTRS)
Vivian, H. C.; Bryant, J. A.
1984-01-01
Tests are performed on an improved battery charger. The primary purpose of the testing is to develop test methodologies for battery charger evaluation. Tests are developed to characterize the charger in terms of its charge algorithm and to assess the effects of battery initial state of charge and temperature on charger and battery efficiency. Tests show this charger to be a considerable improvement in the state of the art for electric vehicle chargers.
Hasegawa, M
2011-03-01
The aim of the present study is to elucidate how simulated annealing (SA) works in its finite-time implementation, starting from a verification of its conventional optimization scenario based on equilibrium statistical mechanics. Two main experiments and one supplementary experiment, whose designs are inspired by concepts and methods developed for studies of liquids and glasses, are performed on two types of random traveling salesman problems. In the first experiment, a newly parameterized temperature schedule is introduced to simulate a quasistatic process along the scenario, and a parametric study is conducted to investigate the optimization characteristics of this adaptive cooling. In the second experiment, the search trajectory of the Metropolis algorithm (constant-temperature SA) is analyzed in the landscape paradigm, in the hope of drawing a precise physical analogy by comparison with the corresponding dynamics of glass-forming molecular systems. These two experiments indicate that the effectiveness of finite-time SA comes not from equilibrium sampling at low temperature but from downward interbasin dynamics occurring before equilibrium. These dynamics work most effectively at an intermediate temperature that varies with the total search time, and this effective temperature is identified using the Deborah number. To test directly the role of these relaxation dynamics in the process of cooling, a supplementary experiment is performed using another parameterized temperature schedule with a piecewise-variable cooling rate, and the effect of this biased cooling is examined systematically. The results show that the optimization performance is not only dependent on but also sensitive to cooling in the vicinity of the above effective temperature, and that this feature can be interpreted as a consequence of the presence or absence of the workable interbasin dynamics. 
It is confirmed for the present instances that the effectiveness of finite-time SA derives from the glassy relaxation dynamics occurring in the "landscape-influenced" temperature regime and that its naive optimization scenario should be rectified by considering the analogy with vitrification phenomena. A comprehensive guideline for the design of finite-time SA and SA-related algorithms is discussed on the basis of this rectified analogy.
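Constant-temperature SA, the Metropolis algorithm analyzed in the second experiment, can be sketched on a random Euclidean TSP instance with 2-opt moves; the instance size, move set, and step count below are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
cities = rng.uniform(0, 1, (n, 2))        # a random Euclidean TSP instance

def tour_length(tour):
    """Total length of the closed tour through the cities."""
    return np.linalg.norm(cities[tour] - cities[np.roll(tour, -1)], axis=1).sum()

def metropolis_sa(temp, steps=20000):
    """Constant-temperature SA (the Metropolis algorithm) with 2-opt moves."""
    tour = rng.permutation(n)
    cost = tour_length(tour)
    for _ in range(steps):
        i, j = sorted(rng.integers(0, n, 2))
        if i == j:
            continue
        cand = tour.copy()
        cand[i:j+1] = tour[i:j+1][::-1]   # 2-opt: reverse a segment
        dc = tour_length(cand) - cost
        # Metropolis acceptance criterion at fixed temperature
        if dc < 0 or rng.random() < np.exp(-dc / temp):
            tour, cost = cand, cost + dc
    return cost

# Lower constant temperature -> closer to greedy descent; higher -> more diffusive
for T in (0.01, 0.1, 1.0):
    print(T, round(metropolis_sa(T), 3))
```

Running this at a range of fixed temperatures is exactly how one would probe for the intermediate "effective temperature" discussed above, where downhill interbasin moves are most productive.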
Modeling the effects of contrast enhancement on target acquisition performance
NASA Astrophysics Data System (ADS)
Du Bosq, Todd W.; Fanning, Jonathan D.
2008-04-01
Contrast enhancement and dynamic range compression are currently used to improve the performance of infrared imagers by increasing the contrast between the target and the scene content, that is, by better utilizing the available gray levels either globally or locally. This paper assesses the range-performance effects of various contrast enhancement algorithms for target identification with well-contrasted vehicles. Human perception experiments were performed to determine field performance using contrast enhancement on the U.S. Army RDECOM CERDEC NVESD standard military eight-target set using an uncooled LWIR camera. The experiments compare the identification performance of observers viewing linearly scaled images and images processed with various contrast enhancement algorithms. Contrast enhancement is modeled in the U.S. Army thermal target acquisition model (NVThermIP) by changing the scene contrast temperature. The model predicts improved performance from any improvement in target contrast, regardless of feature saturation or enhancement. To account for the equivalent blur associated with each contrast enhancement algorithm, an additional effective MTF was calculated and added to the model. The measured results are compared with the predicted performance based on the target task difficulty metric used in NVThermIP.
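The two baseline treatments compared above, linear scaling versus a gray-level redistribution, can be sketched with a minimal global histogram equalization. The synthetic low-contrast frame is invented for illustration; this is not one of NVESD's evaluated algorithms.

```python
import numpy as np

def linear_scale(img, out_levels=256):
    """Map the full input range linearly onto the available gray levels."""
    lo, hi = img.min(), img.max()
    return ((img - lo) / (hi - lo) * (out_levels - 1)).astype(np.uint8)

def histogram_equalize(img, out_levels=256):
    """Global histogram equalization: remap gray levels so the cumulative
    distribution of the output is approximately uniform."""
    hist, bins = np.histogram(img.ravel(), bins=out_levels,
                              range=(img.min(), img.max() + 1e-9))
    cdf = hist.cumsum() / hist.sum()
    idx = np.clip(np.digitize(img.ravel(), bins[1:-1]), 0, out_levels - 1)
    return (cdf[idx] * (out_levels - 1)).reshape(img.shape).astype(np.uint8)

# Synthetic low-contrast "thermal" frame: dim background, slightly warmer patch.
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 2.0, (64, 64))
frame[24:40, 24:40] += 6.0      # target patch
lin = linear_scale(frame)
eq = histogram_equalize(frame)
```

Equalization spends most of the output levels where the scene histogram has mass, which is why it can raise apparent target contrast while saturating sparse features.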
An FPGA Noise Resistant Digital Temperature Sensor with Auto Calibration
2012-03-01
[Fragmentary front-matter and figure-list text] Two different digital temperature sensor placement algorithms are illustrated: (a) grid placement and (b) optimal placement [7]. Grid placement creates a grid of sensors over the FPGA; while this method works reasonably well, it requires many sensors, some of which are unnecessary. Integrated circuits' sensitivity to temperature has ...
On the performance of explicit and implicit algorithms for transient thermal analysis
NASA Astrophysics Data System (ADS)
Adelman, H. M.; Haftka, R. T.
1980-09-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected, and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped-parameter program, and a special-purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail, such as avoiding thin or short high-conducting elements, can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
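The stiffness argument above can be made concrete with a two-node lumped model: one eigenvalue of the conduction matrix is large and negative, so explicit Euler needs a tiny step while backward (implicit) Euler stays stable at any step. The node values and coupling strength are invented for illustration.

```python
import numpy as np

# Two-node lumped thermal model dT/dt = A @ T with one very fast coupling,
# loosely analogous to a thin, short high-conducting element.
A = np.array([[-1000.0, 1000.0],
              [    1.0,   -1.0]])     # eigenvalues 0 and -1001 -> stiff
T0 = np.array([300.0, 400.0])

def explicit_euler(A, T0, dt, steps):
    T = T0.copy()
    for _ in range(steps):
        T = T + dt * (A @ T)          # conditionally stable: dt < 2/1001 here
    return T

def implicit_euler(A, T0, dt, steps):
    M = np.eye(len(T0)) - dt * A      # solve (I - dt*A) @ T_new = T_old
    T = T0.copy()
    for _ in range(steps):
        T = np.linalg.solve(M, T)     # stable for any dt on this problem
    return T

dt = 0.01                              # about 5x the explicit stability limit
imp = implicit_euler(A, T0, dt, 1000)
exp_ = explicit_euler(A, T0, dt, 1000)
```

The implicit solution relaxes to the correct equilibrium (both nodes at the conserved weighted mean), while the explicit iteration diverges at the same step size.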
NASA Technical Reports Server (NTRS)
Key, Jeff; Maslanik, James; Steffen, Konrad
1995-01-01
During the second phase project year we have made progress in the development and refinement of surface temperature retrieval algorithms and in product generation. More specifically, we have accomplished the following: (1) acquired a new advanced very high resolution radiometer (AVHRR) data set for the Beaufort Sea area spanning an entire year; (2) acquired additional along-track scanning radiometer(ATSR) data for the Arctic and Antarctic now totalling over eight months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) developed cloud masking procedures for both AVHRR and ATSR; (6) generated a two-week bi-polar global area coverage (GAC) set of composite images from which IST is being estimated; (7) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; and (8) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and special sensor microwave imager (SSM/I).
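The AVHRR/ATSR IST retrievals mentioned in item (3) are typically of the split-window form IST = a + b·T11 + c·(T11 − T12) + d·(T11 − T12)(sec θ − 1), where the 11 μm/12 μm brightness-temperature difference corrects for water-vapour absorption. The coefficients below are placeholders for illustration only; operational coefficients are regressed against radiative-transfer simulations per season and region.

```python
import math

def split_window_ist(t11, t12, theta_deg, coeffs):
    """Split-window surface temperature from 11 and 12 um brightness
    temperatures; the secant term adds a view-angle dependence."""
    a, b, c, d = coeffs
    sec = 1.0 / math.cos(math.radians(theta_deg))
    return a + b * t11 + c * (t11 - t12) + d * (t11 - t12) * (sec - 1.0)

# Placeholder coefficients, illustrative only (not Key et al.'s values).
demo = split_window_ist(t11=250.0, t12=249.0, theta_deg=30.0,
                        coeffs=(-0.5, 1.0, 1.8, 1.0))
```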
Du, Jiaying; Gerdtman, Christer; Lindén, Maria
2018-04-06
Motion sensors such as MEMS gyroscopes and accelerometers are characterized by a small size, light weight, high sensitivity, and low cost. They are used in an increasing number of applications. However, they are easily influenced by environmental effects such as temperature change, shock, and vibration. Thus, signal processing is essential for minimizing errors and improving signal quality and system stability. The aim of this work is to investigate and present a systematic review of different signal error reduction algorithms that are used for MEMS gyroscope-based motion analysis systems for human motion analysis or have the potential to be used in this area. A systematic search was performed with the search engines/databases of the ACM Digital Library, IEEE Xplore, PubMed, and Scopus. Sixteen papers that focus on MEMS gyroscope-related signal processing and were published in journals or conference proceedings in the past 10 years were found and fully reviewed. Seventeen algorithms were categorized into four main groups: Kalman-filter-based algorithms, adaptive-based algorithms, simple filter algorithms, and compensation-based algorithms. The algorithms were analyzed and presented along with their characteristics such as advantages, disadvantages, and time limitations. A user guide to the most suitable signal processing algorithms within this area is presented.
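As an example of the Kalman-filter-based group identified above, a minimal 1-D filter can fuse a drifting gyro rate with a noisy absolute angle (e.g., accelerometer-derived), estimating the gyro bias as a second state. All noise parameters and the synthetic signals are illustrative assumptions.

```python
import numpy as np

def kalman_angle(gyro_rate, accel_angle, dt, q_angle=1e-3, q_bias=3e-3, r=0.03):
    """1-D Kalman filter: state = [angle, gyro bias]. Predict by integrating
    the bias-corrected rate; update with the accelerometer angle."""
    x = np.zeros(2)                     # [angle, bias]
    P = np.eye(2)
    out = []
    for w, z in zip(gyro_rate, accel_angle):
        # predict step
        x[0] += dt * (w - x[1])
        P[0, 0] += dt * (dt * P[1, 1] - P[0, 1] - P[1, 0] + q_angle)
        P[0, 1] -= dt * P[1, 1]
        P[1, 0] -= dt * P[1, 1]
        P[1, 1] += q_bias * dt
        # update step with measurement z (H = [1, 0])
        s = P[0, 0] + r
        k = np.array([P[0, 0] / s, P[1, 0] / s])
        x += k * (z - x[0])
        P -= np.outer(k, [P[0, 0], P[0, 1]])
        out.append(x[0])
    return np.array(out)

# Synthetic test: true angle is 0, gyro carries a constant 0.1 rad/s bias.
rng = np.random.default_rng(0)
n, dt = 2000, 0.01
gyro = 0.1 + rng.normal(0.0, 0.02, n)   # biased, noisy rate
accel = rng.normal(0.0, 0.05, n)        # noisy absolute angle
est = kalman_angle(gyro, accel, dt)
```

Naively integrating the gyro alone would drift to about 2 rad over this 20 s record; the filter absorbs the drift into the bias state.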
Assessment of the NPOESS/VIIRS Nighttime Infrared Cloud Optical Properties Algorithms
NASA Astrophysics Data System (ADS)
Wong, E.; Ou, S. C.
2008-12-01
In this paper we will describe two NPOESS VIIRS IR algorithms used to retrieve microphysical properties for water and ice clouds during nighttime conditions. Both algorithms employ four VIIRS IR channels: M12 (3.7 μm), M14 (8.55 μm), M15 (10.7 μm), and M16 (12 μm). The physical basis of the two algorithms is similar: the Cloud Top Temperature (CTT) is derived from M14 and M16 for ice clouds, while the Cloud Optical Thickness (COT) and Cloud Effective Particle Size (CEPS) are derived from M12 and M15. The two algorithms differ in the radiative transfer parameterization equations used for ice and water clouds. Both the VIIRS nighttime IR algorithms and the CERES split-window method employ the 3.7 μm and 10.7 μm bands for cloud optical property retrievals, apparently based on similar physical principles but with different implementations. It is reasonable to expect that the VIIRS and CERES IR algorithms have comparable performance and similar limitations. To demonstrate the VIIRS nighttime IR algorithm performance, we will select a number of test cases using NASA MODIS L1b radiance products as proxy input data for VIIRS. The VIIRS-retrieved COT and CEPS will then be compared to cloud products available from the MODIS, NASA CALIPSO, CloudSat, and CERES sensors. For the MODIS product, the nighttime cloud emissivity will serve as an indirect comparison to VIIRS COT. For the CALIPSO and CloudSat products, the layered COT will be used for direct comparison. Finally, the CERES products will provide direct comparison with COT as well as CEPS. This study can only provide a qualitative assessment of the VIIRS IR algorithms due to the large uncertainties in these cloud products.
NASA Astrophysics Data System (ADS)
Plimley, Brian; Coffer, Amy; Zhang, Yigong; Vetter, Kai
2016-08-01
Previously, scientific silicon charge-coupled devices (CCDs) with 10.5-μm pixel pitch and a thick (650 μm), fully depleted bulk have been used to measure gamma-ray-induced fast electrons and demonstrate electron track Compton imaging. A model of the response of this CCD was also developed and benchmarked to experiment using Monte Carlo electron tracks. We now examine the trade-off in pixel pitch and electronic noise. We extend our CCD response model to different pixel pitch and readout noise per pixel, including pixel pitch of 2.5 μm, 5 μm, 10.5 μm, 20 μm, and 40 μm, and readout noise from 0 eV/pixel to 2 keV/pixel for 10.5 μm pixel pitch. The CCD images generated by this model using simulated electron tracks are processed by our trajectory reconstruction algorithm. The performance of the reconstruction algorithm defines the expected angular sensitivity as a function of electron energy, CCD pixel pitch, and readout noise per pixel. Results show that our existing pixel pitch of 10.5 μm is near optimal for our approach, because smaller pixels add little new information but are subject to greater statistical noise. In addition, we measured the readout noise per pixel for two different device temperatures in order to estimate the effect of temperature on the reconstruction algorithm performance, although the readout is not optimized for higher temperatures. The noise in our device at 240 K increases the FWHM of angular measurement error by no more than a factor of 2, from 26° to 49° FWHM for electrons between 425 keV and 480 keV. Therefore, a CCD could be used for electron-track-based imaging in a Peltier-cooled device.
Soil Moisture Active/Passive (SMAP) Forward Brightness Temperature Simulator
NASA Technical Reports Server (NTRS)
Peng, Jinzheng; Piepmeier, Jeffrey; Kim, Edward
2012-01-01
The SMAP is one of four first-tier missions recommended by the US National Research Council's Committee on Earth Science and Applications from Space (Earth Science and Applications from Space: National Imperatives for the Next Decade and Beyond, Space Studies Board, National Academies Press, 2007) [1]. It is to measure global soil moisture and freeze/thaw state from space. One of the spaceborne instruments is an L-band radiometer with a shared single feedhorn and parabolic mesh reflector. While the radiometer measures the emission over a footprint of interest, unwanted emissions are also received by the antenna through the antenna sidelobes from the cosmic background and other error sources such as the Sun, the Moon, and the galaxy. Their effects need to be considered accurately, and the analysis of the overall performance of the radiometer requires end-to-end simulation from Earth emission to antenna brightness temperature, such as the global simulation of L-band brightness temperature over land and sea [2]. To assist with the SMAP radiometer level 1B algorithm development, the SMAP forward brightness temperature simulator is developed by adapting the Aquarius simulator [2] with the necessary modifications. This poster presents the current status of the SMAP forward brightness temperature simulator's development, including the incorporation of the land microwave emission model and its input datasets, and a simplified atmospheric radiative transfer model. The latest simulation results are also presented to demonstrate the ability to support the SMAP L1B algorithm development.
AATSR land surface temperature product algorithm verification over a WATERMED site
NASA Astrophysics Data System (ADS)
Noyes, E. J.; Sòria, G.; Sobrino, J. A.; Remedios, J. J.; Llewellyn-Jones, D. T.; Corlett, G. K.
A new operational Land Surface Temperature (LST) product generated from data acquired by the Advanced Along-Track Scanning Radiometer (AATSR) provides the opportunity to measure LST on a global scale with a spatial resolution of 1 km². The target accuracy of the product, which utilises nadir data from the AATSR thermal channels at 11 and 12 μm, is 2.5 K for daytime retrievals and 1.0 K at night. We present the results of an experiment in which the performance of the algorithm has been assessed for one daytime and one nighttime overpass occurring over the WATERMED field site near Marrakech, Morocco, on 05 March 2003. Top of atmosphere (TOA) brightness temperatures (BTs) are simulated for 12 pixels from each overpass using a radiative transfer model, with the LST product and independent emissivity values and atmospheric data as inputs. We have estimated the error in the LST product over this biome for this set of conditions by applying the operational AATSR LST retrieval algorithm to the modelled BTs and comparing the results with the original AATSR LSTs input into the model. An average bias of -1.00 K (standard deviation 0.07 K) for the daytime data, and -1.74 K (standard deviation 0.02 K) for the nighttime data is obtained, which indicates that the algorithm yields an LST that is too cold under these conditions. While the daytime results are within specification, this suggests that the target accuracy of 1.0 K at night is not being met within this biome.
Minimum airflow reset of single-duct VAV terminal boxes
NASA Astrophysics Data System (ADS)
Cho, Young-Hum
Single duct Variable Air Volume (VAV) systems are currently the most widely used type of HVAC system in the United States. When installing such a system, it is critical to determine the minimum airflow set point of the terminal box, as an optimally selected set point will improve the level of thermal comfort and indoor air quality (IAQ) while at the same time lower overall energy costs. In principle, this minimum rate should be calculated according to the minimum ventilation requirement based on ASHRAE standard 62.1 and maximum heating load of the zone. Several factors must be carefully considered when calculating this minimum rate. Terminal boxes with conventional control sequences may result in occupant discomfort and energy waste. If the minimum rate of airflow is set too high, the AHUs will consume excess fan power, and the terminal boxes may cause significant simultaneous room heating and cooling. At the same time, a rate that is too low will result in poor air circulation and indoor air quality in the air-conditioned space. Currently, many scholars are investigating how to change the algorithm of the advanced VAV terminal box controller without retrofitting. Some of these controllers have been found to effectively improve thermal comfort, indoor air quality, and energy efficiency. However, minimum airflow set points have not yet been identified, nor has controller performance been verified in confirmed studies. In this study, control algorithms were developed that automatically identify and reset terminal box minimum airflow set points, thereby improving indoor air quality and thermal comfort levels, and reducing the overall rate of energy consumption. A theoretical analysis of the optimal minimum airflow and discharge air temperature was performed to identify the potential energy benefits of resetting the terminal box minimum airflow set points. 
Applicable control algorithms for calculating the ideal values for the minimum airflow reset were developed and applied to actual systems for performance validation. The results of the theoretical analysis, numeric simulations, and experiments show that the optimal control algorithms can automatically identify the minimum rate of heating airflow under actual working conditions. Improved control helps to stabilize room air temperatures. The vertical difference in the room air temperature was lower than the comfort value. Measurements of room CO2 levels indicate that when the minimum airflow set point was reduced it did not adversely affect the indoor air quality. According to the measured energy results, optimal control algorithms give a lower rate of reheating energy consumption than conventional controls.
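A hedged sketch of the two quantities such a reset must reconcile: an ASHRAE 62.1-style breathing-zone outdoor airflow and the flow needed to meet a design heating load at a chosen discharge temperature. The occupancy, area, load, and temperatures below are illustrative, not values from the study, and real designs must use the standard's tables.

```python
def breathing_zone_ventilation(people, area_m2, rp=2.5, ra=0.3, ez=0.8):
    """Breathing-zone outdoor airflow (L/s): Vbz = Rp*Pz + Ra*Az, divided by
    the zone air-distribution effectiveness Ez. Rp/Ra here are typical
    office-space values."""
    return (rp * people + ra * area_m2) / ez

def min_airflow_setpoint(vent_ls, heating_load_w, supply_t_k=305.0,
                         room_t_k=295.0, rho=1.2, cp=1005.0):
    """Terminal-box minimum airflow: the larger of the ventilation
    requirement and the flow needed to meet the design heating load at the
    given discharge temperature (Q = rho * V * cp * dT)."""
    heating_ls = heating_load_w / (rho * cp * (supply_t_k - room_t_k)) * 1000.0
    return max(vent_ls, heating_ls)

vent = breathing_zone_ventilation(people=10, area_m2=100.0)    # 68.75 L/s
setpoint = min_airflow_setpoint(vent, heating_load_w=1500.0)
```

Setting the minimum below the larger of the two values starves either ventilation or heating; setting it far above wastes fan power and reheat energy, which is the trade-off the reset algorithms above automate.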
Graphene-based room-temperature implementation of a modified Deutsch-Jozsa quantum algorithm.
Dragoman, Daniela; Dragoman, Mircea
2015-12-04
We present an implementation of a one-qubit and two-qubit modified Deutsch-Jozsa quantum algorithm based on graphene ballistic devices working at room temperature. The modified Deutsch-Jozsa algorithm decides whether a function, equivalent to the effect of an energy potential distribution on the wave function of ballistic charge carriers, is constant or not, without measuring the output wave function. The function need not be Boolean. Simulations confirm that the algorithm works properly, opening the way toward quantum computing at room temperature based on the same clean-room technologies as those used for fabrication of very-large-scale integrated circuits.
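For orientation, the textbook one-qubit Deutsch case (Boolean, with measurement) can be simulated in a few lines; note the paper's modified algorithm generalizes to non-Boolean functions and avoids measuring the output wave function, which this sketch does not capture.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate

def deutsch_is_constant(f):
    """One-qubit Deutsch test with a phase oracle U|x> = (-1)^f(x) |x>:
    after the H-oracle-H sequence the qubit is found in |0> iff f is
    constant, with a single oracle call."""
    oracle = np.diag([(-1.0) ** f(0), (-1.0) ** f(1)])
    psi = H @ oracle @ H @ np.array([1.0, 0.0])           # start in |0>
    return abs(psi[0]) ** 2 > 0.5                         # P(|0>) is 1 or 0

constant_detected = deutsch_is_constant(lambda x: 1)      # constant f
balanced_detected = deutsch_is_constant(lambda x: x)      # balanced f
```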
Finding Blackbody Temperature and Emissivity on a Sub-Pixel Scale
NASA Astrophysics Data System (ADS)
Bernstein, D. J.; Bausell, J.; Grigsby, S.; Kudela, R. M.
2015-12-01
Surface temperature and emissivity provide important insight into the ecosystem being remotely sensed. Dozier (1981) proposed an algorithm to solve for the percent coverage and temperatures of two different surface types (e.g., sea surface, cloud cover, etc.) within a given pixel, with a constant value for emissivity assumed. Here we build on Dozier (1981) by proposing an algorithm that solves for both the temperature and emissivity of a water body within a satellite pixel by assuming known percent coverage of surface types within the pixel. Our algorithm generates thermal infrared (TIR) and emissivity end-member spectra for the two surface types. It then superposes these end-member spectra on emissivity and TIR spectra emitted from four pixels with varying percent coverage of different surface types. The algorithm was tested preliminarily (48 iterations) using simulated pixels containing more than one surface type, with temperature and emissivity percent errors ranging from 0 to 1.071% and 2.516 to 15.311%, respectively [1]. We then tested the algorithm using an image from the MASTER instrument collected as part of the NASA Student Airborne Research Program (NASA SARP). Here the temperature of water was calculated to within 0.22 K of in situ data. The algorithm calculated the emissivity of water with an accuracy of 0.13 to 1.53% error for Salton Sea pixels in the MASTER imagery, also collected as part of NASA SARP. This method could improve retrievals for the HyspIRI sensor. [1] Percent error for emissivity was generated by averaging percent error across all selected band widths.
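A minimal sketch of the mixed-pixel idea: with the surface-type fractions assumed known, the water temperature and emissivity can be recovered by matching a two-band Planck radiance mixture. The band choices, fractions, land properties, and the brute-force grid search are illustrative assumptions, not the authors' solver.

```python
import numpy as np

H_C, C, K_B = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W sr^-1 m^-3."""
    a = 2.0 * H_C * C**2 / wavelength_m**5
    return a / (np.exp(H_C * C / (wavelength_m * K_B * temp_k)) - 1.0)

# Two TIR bands; pixel radiance = f * water + (1 - f) * land, fractions known.
bands = np.array([11e-6, 12e-6])
f_water, land_t, land_eps = 0.6, 300.0, 0.96

def pixel_radiance(water_t, water_eps):
    return (f_water * water_eps * planck(bands, water_t)
            + (1.0 - f_water) * land_eps * planck(bands, land_t))

truth = pixel_radiance(288.0, 0.99)   # synthetic "observed" pixel

# Brute-force inversion: pick the (T, emissivity) pair whose mixed two-band
# radiance best matches the observation.
ts = np.arange(280.0, 295.01, 0.1)
eps_grid = np.arange(0.95, 1.0001, 0.001)
resid = [(abs(pixel_radiance(t, e) - truth).sum(), t, e)
         for t in ts for e in eps_grid]
_, t_hat, e_hat = min(resid)
```

Two bands give two equations for the two unknowns; the grid search stands in for whatever root-finder or least-squares step an operational retrieval would use.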
Genetic Algorithm (GA)-Based Inclinometer Layout Optimization.
Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo
2015-04-17
This paper presents numerical simulation results for an airflow inclinometer, with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in ambient temperature may cause dramatic voltage drifts of the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature at the sensing element and decreases as the ambient temperature increases. A GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of the optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts aimed at improving the sensitivity of gas sensors.
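The GA machinery itself is generic; a minimal version with tournament selection, one-point crossover, and bit-flip mutation is sketched below on a stand-in objective (one-max), since the paper's actual fitness, the simulated thermal sensitivity of a candidate PCB layout, requires the CFD model.

```python
import random

def genetic_algorithm(fitness, n_genes, pop=40, gens=60, p_mut=0.05, seed=2):
    """Minimal GA: tournament selection, one-point crossover, bit-flip
    mutation. Each individual is a bit string encoding a candidate design."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_genes)]
                  for _ in range(pop)]
    for _ in range(gens):
        def pick():                         # tournament of 3
            return max(rng.sample(population, 3), key=fitness)
        nxt = []
        while len(nxt) < pop:
            a, b = pick(), pick()
            cut = rng.randrange(1, n_genes)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutation
            nxt.append(child)
        population = nxt
    return max(population, key=fitness)

# Stand-in objective: maximize the number of 1-bits ("one-max").
best = genetic_algorithm(fitness=sum, n_genes=32)
```

In the layout problem, decoding a bit string into component positions and calling the thermal simulator would replace `sum` as the fitness function.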
Li, Xianfeng; Murthy, Sanjeeva; Latour, Robert A.
2011-01-01
A new empirical sampling method termed “temperature intervals with global exchange of replicas and reduced radii” (TIGER3) is presented and demonstrated to efficiently equilibrate entangled long-chain molecular systems such as amorphous polymers. The TIGER3 algorithm is a replica exchange method in which simulations are run in parallel over a range of temperature levels at and above a designated baseline temperature. The replicas sampled at temperature levels above the baseline are run through a series of cycles with each cycle containing four stages – heating, sampling, quenching, and temperature level reassignment. The method allows chain segments to pass through one another at elevated temperature levels during the sampling stage by reducing the van der Waals radii of the atoms, thus eliminating chain entanglement problems. Atomic radii are then returned to their regular values and re-equilibrated at elevated temperature prior to quenching to the baseline temperature. Following quenching, replicas are compared using a Metropolis Monte Carlo exchange process for the construction of an approximate Boltzmann-weighted ensemble of states and then reassigned to the elevated temperature levels for additional sampling. Further system equilibration is performed by periodic implementation of the previously developed TIGER2 algorithm between cycles of TIGER3, which applies thermal cycling without radii reduction. When coupled with a coarse-grained modeling approach, the combined TIGER2/TIGER3 algorithm yields fast equilibration of bulk-phase models of amorphous polymer, even for polymers with complex, highly branched structures. The developed method was tested by modeling the polyethylene melt. The calculated properties of chain conformation and chain segment packing agreed well with published data. 
The method was also applied to generate equilibrated structural models of three increasingly complex amorphous polymer systems: poly(methyl methacrylate), poly(butyl methacrylate), and DTB-succinate copolymer. Calculated glass transition temperature (Tg) and structural parameter profile (S(q)) for each resulting polymer model were found to be in close agreement with experimental Tg values and structural measurements obtained by x-ray diffraction, thus validating that the developed methods provide realistic models of amorphous polymer structure. PMID:21769156
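The replica-exchange (swap) step underlying TIGER2/TIGER3 uses the standard Metropolis criterion on the inverse-temperature and energy differences; a minimal sketch follows (the heating/sampling/quenching cycle and the van der Waals radius reduction of TIGER3 are not reproduced). Temperatures are in reduced units with k folded in.

```python
import math
import random

def swap_accepted(e_cold, e_hot, t_cold, t_hot, rng):
    """Metropolis replica-exchange criterion: accept the swap of the
    configurations held at two temperature levels with probability
    min(1, exp((1/t_cold - 1/t_hot) * (e_cold - e_hot)))."""
    delta = (1.0 / t_cold - 1.0 / t_hot) * (e_cold - e_hot)
    return delta >= 0 or rng.random() < math.exp(delta)

rng = random.Random(0)
# A lower-energy configuration at the hot level is always pulled down:
favourable = swap_accepted(e_cold=5.0, e_hot=1.0, t_cold=1.0, t_hot=2.0,
                           rng=rng)
# The reverse swap is accepted only occasionally (here exp(-2) ~ 0.135):
rate = sum(swap_accepted(1.0, 5.0, 1.0, 2.0, rng)
           for _ in range(10000)) / 10000
```

This exchange rule is what makes the ensemble at the baseline temperature approximately Boltzmann-weighted despite the sampling at elevated temperatures.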
Biased Metropolis Sampling for Rugged Free Energy Landscapes
NASA Astrophysics Data System (ADS)
Berg, Bernd A.
2003-11-01
Metropolis simulations of all-atom models of peptides (i.e., small proteins) are considered. Inspired by the funnel picture of Bryngelson and Wolynes, a transformation of the updating probabilities of the dihedral angles is defined, which uses probability densities from a higher temperature to improve the algorithmic performance at a lower temperature. The method is suitable for canonical as well as generalized-ensemble simulations. A simple approximation to the full transformation is tested at room temperature for Met-Enkephalin in vacuum. Integrated autocorrelation times are found to be reduced by factors close to two, and a similar improvement due to generalized-ensemble methods enters multiplicatively.
Using Geostationary Communications Satellites as a Sensor: Telemetry Search Algorithms
NASA Astrophysics Data System (ADS)
Cahoy, K.; Carlton, A.; Lohmeyer, W. Q.
2014-12-01
For decades, operators and manufacturers have collected large amounts of telemetry from geostationary (GEO) communications satellites to monitor system health and performance, yet this data is rarely mined for scientific purposes. The goal of this work is to mine data archives acquired from commercial operators using new algorithms that can detect when a space weather (or non-space weather) event of interest has occurred or is in progress. We have developed algorithms to statistically analyze power amplifier current and temperature telemetry and identify deviations from nominal operations or other trends of interest. We then examine space weather data to see what role, if any, it might have played. We also closely examine both long and short periods of time before an anomaly to determine whether or not the anomaly could have been predicted.
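One simple deviation-detection scheme in the spirit described above is a trailing-window z-score on a telemetry channel. The synthetic amplifier-current trace, window length, and threshold below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def rolling_zscore_anomalies(x, window=50, threshold=4.0):
    """Flag samples deviating more than `threshold` sigma from a trailing
    window's mean, a minimal stand-in for nominal-trend deviation detection."""
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        w = x[i - window:i]
        s = w.std()
        if s > 0 and abs(x[i] - w.mean()) > threshold * s:
            flags[i] = True
    return flags

# Synthetic amplifier-current trace with one step change at sample 300,
# e.g. following an environmental event.
rng = np.random.default_rng(1)
current = rng.normal(1.50, 0.01, 500)
current[300:] += 0.2
flags = rolling_zscore_anomalies(current)
```

Flagged epochs would then be cross-referenced against space-weather indices to test for a causal association, as described above.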
Global Soil Moisture from the Aquarius/SAC-D Satellite: Description and Initial Assessment
NASA Technical Reports Server (NTRS)
Bindlish, Rajat; Jackson, Thomas; Cosh, Michael; Zhao, Tianjie; O'Neill, Peggy
2015-01-01
Aquarius satellite observations over land offer a new resource for measuring soil moisture from space. Although Aquarius was designed for ocean salinity mapping, our objective in this investigation is to exploit the large amount of land observations that Aquarius acquires and extend the mission scope to include the retrieval of surface soil moisture. The soil moisture retrieval algorithm development focused on using only the radiometer data because of the extensive heritage of passive microwave retrieval of soil moisture. The single channel algorithm (SCA) was implemented using the Aquarius observations to estimate surface soil moisture. Aquarius radiometer observations from three beams (after bias/gain modification) along with the National Centers for Environmental Prediction model forecast surface temperatures were then used to retrieve soil moisture. Ancillary data inputs required for using the SCA are vegetation water content, land surface temperature, and several soil and vegetation parameters based on land cover classes. The resulting global spatial patterns of soil moisture were consistent with the precipitation climatology and with soil moisture from other satellite missions (Advanced Microwave Scanning Radiometer for the Earth Observing System and Soil Moisture Ocean Salinity). Initial assessments were performed using in situ observations from the U.S. Department of Agriculture Little Washita and Little River watershed soil moisture networks. Results showed good performance by the algorithm for these land surface conditions for the period of August 2011-June 2013 (RMSE = 0.031 m³/m³, bias = -0.007 m³/m³, and R = 0.855). This radiometer-only soil moisture product will serve as a baseline for continuing research on both active and combined passive-active soil moisture algorithms. The products are routinely available through the National Aeronautics and Space Administration data archive at the National Snow and Ice Data Center.
NASA Astrophysics Data System (ADS)
Zhao, Runchen; Ientilucci, Emmett J.
2017-05-01
Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands. Such systems can be used, for example, to identify targets without physical interaction. Often it is of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth-observing statistical performance model known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model, and a processing model. Instead of using only the mean surface reflectance as input, FASSP transfers user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The radiative transfer model MODTRAN is used to simulate the radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, or temperature/emissivity separation (TES), a LWIR atmospheric compensation method is necessary. The FASSP model has a method to transform statistics in the visible (i.e., the ELM) but currently does not have a LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and its associated transformation of statistics.
NASA Astrophysics Data System (ADS)
Wei, Jun; Jiang, Guo-Qing; Liu, Xin
2017-09-01
This study proposed three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Different from traditional data assimilation approaches, which provide prescribed initial/boundary conditions, our proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and oceans in the future time. Two of these algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative technique involving machine learning (ML-based). The algorithms are then implemented into a Weather Research and Forecasting model for typhoon simulation to assess their effectiveness, and the results show significant improvement in simulated storm intensities when ocean cooling feedback is included. The TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus obtains a synoptic and relatively smooth sea surface temperature cooling. The TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network, consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure, in terms of its amplitude and position. Sensitivity analysis indicated that the typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron sizes, the ML-based algorithm appears to be more efficient in forecasting the typhoon-induced ocean cooling and in predicting typhoon intensity than the algorithms based on linear regression methods.
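The core of the ML-based algorithm, a small feed-forward network mapping storm and ocean predictors to a cooling amplitude, can be sketched with a toy regression. The predictors, the target function, and the network size below are invented for illustration; the paper's network architecture and training data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic predictors (think wind speed and inverse mixed-layer depth)
# and an invented nonlinear "cooling amplitude" target.
X = rng.uniform(0.0, 1.0, (500, 2))
y = (0.8 * X[:, 0] ** 2 + 0.4 * X[:, 0] * X[:, 1])[:, None]

# One hidden tanh layer trained by plain batch gradient descent.
w1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
w2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ w1 + b1)
    pred = h @ w2 + b2
    err = pred - y                        # gradient of 0.5*MSE w.r.t. pred
    gh = (err @ w2.T) * (1.0 - h ** 2)    # back-propagate through tanh
    w2 -= lr * (h.T @ err) / len(X); b2 -= lr * err.mean(0)
    w1 -= lr * (X.T @ gh) / len(X); b1 -= lr * gh.mean(0)

pred = np.tanh(X @ w1 + b1) @ w2 + b2
mse = float(((pred - y) ** 2).mean())
```

The nonlinearity is the point: as the sensitivity analysis above notes, the cooling depends on interactions among variables that a linear regression cannot capture.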
The correction of time and temperature effects in MR-based 3D Fricke xylenol orange dosimetry.
Welch, Mattea L; Jaffray, David A
2017-04-21
Previously developed MR-based three-dimensional (3D) Fricke-xylenol orange (FXG) dosimeters can provide end-to-end quality assurance and validation protocols for pre-clinical radiation platforms. FXG dosimeters quantify ionizing-irradiation-induced oxidation of Fe2+ ions using pre- and post-irradiation MR imaging methods that detect changes in spin-lattice relaxation rates (R1 = 1/T1) caused by irradiation-induced oxidation of Fe2+. Chemical changes in MR-based FXG dosimeters that occur over time and with changes in temperature can decrease dosimetric accuracy if they are not properly characterized and corrected. This paper describes the characterization, development and utilization of an empirical model-based correction algorithm for time and temperature effects in the context of a pre-clinical irradiator and a 7 T pre-clinical MR imaging system. Time- and temperature-dependent changes of R1 values were characterized using variable-TR spin-echo imaging. R1-time and R1-temperature dependencies were fit using non-linear least squares fitting methods. Models were validated using leave-one-out cross-validation and resampling. Subsequently, a correction algorithm was developed that employed the previously fit empirical models to predict and reduce baseline R1 shifts that occurred in the presence of time and temperature changes. The correction algorithm was tested on R1-dose response curves and 3D dose distributions delivered using a small animal irradiator at 225 kVp. The correction algorithm reduced baseline R1 shifts from -2.8 × 10^-2 s^-1 to 1.5 × 10^-3 s^-1. In terms of absolute dosimetric performance as assessed with traceable standards, the correction algorithm reduced dose discrepancies from approximately 3% to approximately 0.5% (2.90 ± 2.08% to 0.20 ± 0.07%, and 2.68 ± 1.84% to 0.46 ± 0.37% for the 10 × 10 and 8 × 12 mm^2 fields, respectively).
Chemical changes in MR-based FXG dosimeters produce time- and temperature-dependent R1 values for the time intervals and temperature changes found in a typical small animal imaging and irradiation laboratory setting. These changes cause baseline R1 shifts that negatively affect dosimeter accuracy. Characterization, modeling and correction of these effects improved in-field reported dose accuracy to less than 1% when compared to standardized ion chamber measurements.
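The baseline-correction idea can be sketched as follows: fit an empirical model of the un-irradiated R1 drift, then subtract the predicted shift from post-irradiation readings. A purely linear time drift and all numbers below are illustrative assumptions, not the paper's fitted models (which also include temperature dependence).

```python
import numpy as np

# Sketch: fit a linear baseline drift R1(t) = a + b*t to control readings,
# then remove the predicted shift from a later measurement. Values invented.
t_control  = np.array([0.0, 1.0, 2.0, 4.0, 6.0])        # hours after preparation
r1_control = np.array([1.50, 1.52, 1.54, 1.58, 1.62])   # s^-1, drifting baseline

# Least-squares fit of R1(t) = a + b*t (a non-linear model would use curve_fit).
A = np.vstack([np.ones_like(t_control), t_control]).T
(a, b), *_ = np.linalg.lstsq(A, r1_control, rcond=None)

def corrected_delta_r1(r1_measured, t):
    """Dose-related R1 change after removing the predicted baseline drift."""
    return r1_measured - (a + b * t)

# A reading taken 5 h after preparation: raw shift vs. drift-corrected shift.
raw_shift = 1.80 - r1_control[0]
corr_shift = corrected_delta_r1(1.80, 5.0)
print(round(raw_shift, 3), round(corr_shift, 3))   # 0.3 vs. 0.2 s^-1
```

Here the uncorrected shift overestimates the dose-related change by the accumulated drift, which is exactly the error the paper's algorithm removes.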
King, David A.; Bachelet, Dominique M.; Symstad, Amy J.; Ferschweiler, Ken; Hobbins, Michael
2014-01-01
The potential evapotranspiration (PET) that would occur with unlimited plant access to water is a central driver of simulated plant growth in many ecological models. PET is influenced by solar and longwave radiation, temperature, wind speed, and humidity, but it is often modeled as a function of temperature alone. This approach can cause biases in projections of future climate impacts in part because it confounds the effects of warming due to increased greenhouse gases with that which would be caused by increased radiation from the sun. We developed an algorithm for linking PET to extraterrestrial solar radiation (incoming top-of-atmosphere solar radiation), as well as temperature and atmospheric water vapor pressure, and incorporated this algorithm into the dynamic global vegetation model MC1. We tested the new algorithm for the Northern Great Plains, USA, whose remaining grasslands are threatened by continuing woody encroachment. Both the new and the standard temperature-dependent MC1 algorithm adequately simulated current PET, as compared to the more rigorous PenPan model of Rotstayn et al. (2006). However, compared to the standard algorithm, the new algorithm projected a much more gradual increase in PET over the 21st century for three contrasting future climates. This difference led to lower simulated drought effects and hence greater woody encroachment with the new algorithm, illustrating the importance of more rigorous calculations of PET in ecological models dealing with climate change.
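The key new input, extraterrestrial solar radiation as a function of latitude and day of year, can be computed with the standard FAO-56 formulation. This illustrates the driver the new MC1 algorithm adds; it is not the MC1 code itself.

```python
import math

# Daily extraterrestrial (top-of-atmosphere) radiation Ra, FAO-56 style.
def extraterrestrial_radiation(lat_deg, day_of_year):
    """Ra in MJ m^-2 day^-1 for a given latitude and day of year."""
    gsc = 0.0820                                  # solar constant, MJ m^-2 min^-1
    phi = math.radians(lat_deg)
    dr = 1 + 0.033 * math.cos(2 * math.pi * day_of_year / 365)        # rel. distance
    delta = 0.409 * math.sin(2 * math.pi * day_of_year / 365 - 1.39)  # declination
    ws = math.acos(-math.tan(phi) * math.tan(delta))                  # sunset angle
    return (24 * 60 / math.pi) * gsc * dr * (
        ws * math.sin(phi) * math.sin(delta)
        + math.cos(phi) * math.cos(delta) * math.sin(ws)
    )

# A Northern Great Plains latitude, near the summer and winter solstices:
print(round(extraterrestrial_radiation(45.0, 172), 1))  # ~42 MJ m^-2 day^-1
print(round(extraterrestrial_radiation(45.0, 355), 1))  # ~10 MJ m^-2 day^-1
```

Because Ra depends only on geometry, it separates the radiative driver of PET from greenhouse warming, which is the confounding the new algorithm avoids.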
Daytime Land Surface Temperature Extraction from MODIS Thermal Infrared Data under Cirrus Clouds
Fan, Xiwei; Tang, Bo-Hui; Wu, Hua; Yan, Guangjian; Li, Zhao-Liang
2015-01-01
Simulated data showed that cirrus clouds could lead to a maximum land surface temperature (LST) retrieval error of 11.0 K when using the generalized split-window (GSW) algorithm, for a cirrus optical depth (COD) at 0.55 μm of 0.4 in nadir view. A correction term linear in COD was added to the GSW algorithm to extend it to cirrus cloudy conditions. The COD was acquired from a look-up table of the isolated cirrus bidirectional reflectance at 0.55 μm. Additionally, the slope k of the linear function was expressed as a multiple linear model of the top-of-atmosphere brightness temperatures of MODIS channels 31–34 and the difference between split-window channel emissivities. The simulated data showed that the LST error could be reduced from 11.0 to 2.2 K. The sensitivity analysis indicated that the total errors from all the uncertainties of input parameters, extension algorithm accuracy, and GSW algorithm accuracy were less than 2.5 K in nadir view. Finally, the Great Lakes surface water temperatures measured by buoys showed that the retrieval accuracy of the GSW algorithm was improved by at least 1.5 K using the proposed extension algorithm for cirrus skies. PMID:25928059
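The structure of the extension can be sketched as a base split-window estimate plus a COD-proportional correction whose slope is a linear model of the channel 31–34 brightness temperatures and the split-window emissivity difference. All coefficients below are placeholders for illustration, not the fitted values from the paper.

```python
# Toy generalized split-window (GSW) estimate with a cirrus correction term
# linear in cirrus optical depth (COD). Coefficients are placeholders only.
def gsw_lst(bt31, bt32, eps_mean, eps_diff):
    """Toy GSW estimate from split-window brightness temps and emissivities."""
    return bt31 + 2.0 * (bt31 - bt32) + 50.0 * (1 - eps_mean) - 100.0 * eps_diff

def slope_k(bt31, bt32, bt33, bt34, eps_diff):
    """Toy multiple linear model for the COD-correction slope."""
    return 0.5 * bt31 - 0.2 * bt32 - 0.15 * bt33 - 0.1 * bt34 + 20.0 * eps_diff

def lst_under_cirrus(bt31, bt32, bt33, bt34, eps_mean, eps_diff, cod):
    base = gsw_lst(bt31, bt32, eps_mean, eps_diff)
    return base + slope_k(bt31, bt32, bt33, bt34, eps_diff) * cod

clear  = lst_under_cirrus(295.0, 293.5, 292.0, 290.0, 0.97, 0.005, cod=0.0)
cirrus = lst_under_cirrus(295.0, 293.5, 292.0, 290.0, 0.97, 0.005, cod=0.4)
print(round(clear, 2), round(cirrus, 2))
```

With COD = 0 the correction vanishes and the estimate reduces to the standard GSW form, which is why the extension leaves clear-sky retrievals untouched.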
NASA Technical Reports Server (NTRS)
Masters, P. A.
1974-01-01
An analysis to predict the pressurant gas requirements for the discharge of cryogenic liquid propellants from storage tanks is presented, along with an algorithm and two computer programs. One program deals with the pressurization (ramp) phase of bringing the propellant tank up to its operating pressure. The method of analysis involves a numerical solution of the temperature and velocity functions for the tank ullage at a discrete set of points in time and space. The input requirements of the program are the initial ullage conditions, the initial temperature and pressure of the pressurant gas, and the time for the expulsion or the ramp. Computations are performed which determine the heat transfer between the ullage gas and the tank wall. Heat transfer to the liquid interface and to the hardware components may be included in the analysis. The program output includes predictions of mass of pressurant required, total energy transfer, and wall and ullage temperatures. The analysis, the algorithm, a complete description of input and output, and the FORTRAN 4 program listings are presented. Sample cases are included to illustrate use of the programs.
3D brain tumor localization and parameter estimation using thermographic approach on GPU.
Bousselham, Abdelmajid; Bouattane, Omar; Youssfi, Mohamed; Raihani, Abdelhadi
2018-01-01
The aim of this paper is to present a GPU parallel algorithm for brain tumor detection, estimating tumor size and location from the surface temperature distribution obtained by thermography. The normal brain tissue is modeled as a rectangular cube containing a spherical tumor. The temperature distribution is calculated using the forward three-dimensional Pennes bioheat transfer equation, which is solved using a massively parallel Finite Difference Method (FDM) implemented on a Graphics Processing Unit (GPU). A Genetic Algorithm (GA) was used to solve the inverse problem and estimate the tumor size and location by minimizing an objective function comparing measured surface temperatures to those obtained by numerical simulation. The parallel implementation of the Finite Difference Method significantly reduces the time of the bioheat transfer simulation and greatly accelerates the inverse identification of brain tumor thermophysical and geometrical properties. Experimental results show significant gains in computational speed on the GPU, achieving a speedup of around 41 compared to the CPU. The estimation performance as a function of tumor size inside the brain tissue is also presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
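The forward problem can be illustrated in one dimension: an explicit finite-difference solution of the Pennes bioheat equation with an extra metabolic heat source standing in for the tumor. Tissue properties are typical literature values; the geometry, boundary temperatures, and source strength are illustrative assumptions (the paper solves the full 3D problem on a GPU and inverts it with a genetic algorithm).

```python
import numpy as np

# 1D explicit FDM for the Pennes equation:
#   rho*c dT/dt = k d2T/dx2 + w_b*(T_a - T) + q_m
k_t   = 0.5                      # thermal conductivity, W/(m K)
rho_c = 1050.0 * 3700.0          # density * specific heat, J/(m^3 K)
w_b   = 2100.0                   # perfusion term w_b*rho_b*c_b, W/(m^3 K)
t_a   = 37.0                     # arterial blood temperature, deg C

nx, dx, dt = 51, 0.001, 0.5      # 5 cm slab; dt below the explicit stability limit

def solve(q_m, steps=4000):
    T = np.full(nx, 37.0)
    for _ in range(steps):
        lap = (np.roll(T, -1) - 2 * T + np.roll(T, 1)) / dx**2
        T_new = T + dt / rho_c * (k_t * lap + w_b * (t_a - T) + q_m)
        T_new[0], T_new[-1] = 37.0, 33.0     # core / skin boundary temperatures
        T = T_new
    return T

q_normal = np.full(nx, 420.0)                # baseline metabolic heat, W/m^3
q_tumor = q_normal.copy()
q_tumor[20:30] += 30000.0                    # extra heat in a 1 cm "tumor"

dT = solve(q_tumor) - solve(q_normal)        # tumor-induced warming profile
print(round(float(dT.max()), 2))
```

The inverse step then amounts to searching over tumor position and size until the simulated profile matches the measured one, which is where the GA comes in.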
Generalized source Finite Volume Method for radiative transfer equation in participating media
NASA Astrophysics Data System (ADS)
Zhang, Biao; Xu, Chuan-Long; Wang, Shi-Min
2017-03-01
Temperature monitoring is very important in a combustion system. In recent years, non-intrusive temperature reconstruction has been explored intensively on the basis of calculating arbitrary directional radiative intensities. In this paper, a new method named the Generalized Source Finite Volume Method (GSFVM) is proposed, based on the radiative transfer equation and the Finite Volume Method (FVM). This method can be used to calculate arbitrary directional radiative intensities and is shown to be accurate and efficient. To verify the performance of this method, six test cases of 1D, 2D, and 3D radiative transfer problems were investigated. The numerical results show that the efficiency of this method is close to that of the radial basis function interpolation method, but its accuracy and stability are higher. The accuracy of the GSFVM is similar to that of the Backward Monte Carlo (BMC) algorithm, while the time required by the GSFVM is much shorter than that of the BMC algorithm. Therefore, the GSFVM can be used for temperature reconstruction and for improving the accuracy of the FVM.
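The underlying directional-intensity calculation can be shown in its simplest form: finite-volume-style integration of the radiative transfer equation dI/ds = κ(I_b − I) along one ray through a uniform emitting-absorbing medium. This illustrates the kind of computation GSFVM accelerates, not the GSFVM scheme itself.

```python
import math

# Marching a single ray through a uniform medium, one control volume at a time,
# with an implicit upwind update per cell; compared against the exact solution.
kappa = 2.0      # absorption coefficient, 1/m
i_b   = 1.0      # blackbody intensity of the medium (arbitrary units)
i_0   = 0.0      # intensity entering the medium
length, n = 1.0, 1000
ds = length / n

i_num = i_0
for _ in range(n):
    i_num = (i_num + kappa * ds * i_b) / (1 + kappa * ds)   # implicit upwind step

i_exact = i_b + (i_0 - i_b) * math.exp(-kappa * length)
print(round(i_num, 4), round(i_exact, 4))
```

Repeating this march over many directions and cells is what makes full reconstructions expensive, hence the value of a faster, equally accurate source treatment.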
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-09-07
This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings from fiber optic distributed temperature sensing (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure.
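The reconstruction idea can be sketched in one dimension: a measured temperature profile is expressed as a sparse combination of dictionary "hotspot" atoms, recovered here with a simple matching-pursuit loop. The Gaussian atoms and 1D geometry are illustrative stand-ins for the paper's finite-element hotspot dictionary.

```python
import numpy as np

# Build a dictionary of candidate hotspot profiles along a 1D "stator" axis.
n = 200
x = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.05, 0.95, 46)          # candidate hotspot positions
D = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * 0.02 ** 2))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms

# Synthetic measurement: one hotspot at centers[10], plus small noise.
rng = np.random.default_rng(1)
y = 3.0 * D[:, 10] + 0.01 * rng.standard_normal(n)

# Matching pursuit: greedily pick the atom most correlated with the residual.
residual, coeffs = y.copy(), np.zeros(len(centers))
for _ in range(3):
    j = int(np.argmax(np.abs(D.T @ residual)))
    c = D[:, j] @ residual
    coeffs[j] += c
    residual -= c * D[:, j]

print(int(np.argmax(np.abs(coeffs))))          # index of the recovered hotspot
```

Because the atoms are narrower than the raw sensor resolution, the recovered coefficient vector localizes the hotspot more finely than the measurements alone, which is the source of the reported resolution gain.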
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than these methods according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
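The core of the approach can be sketched as: fit a Gaussian kernel density estimate to the daily temperatures, then invert its CDF to find the threshold crossed on the cold side once in N days on average. The sample below is synthetic, and the bandwidth rule is a standard default, not necessarily the one used in the thesis.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
temps = rng.normal(-5.0, 8.0, size=3000)          # synthetic daily temps, deg C

h = 1.06 * temps.std() * len(temps) ** (-1 / 5)   # Silverman's rule bandwidth

def kde_cdf(t):
    """P(X <= t) under the Gaussian kernel density estimate."""
    z = (t - temps) / h
    return np.mean([0.5 * (1 + math.erf(v / math.sqrt(2))) for v in z])

def one_in_n_threshold(n_days, lo=-60.0, hi=40.0):
    """Bisection solve of kde_cdf(t) = 1/n_days (the CDF is monotone)."""
    target = 1.0 / n_days
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if kde_cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t100 = one_in_n_threshold(100)     # the one-in-100-days cold threshold
print(round(t100, 1))
```

Unlike a fitted normal or GEV distribution, the KDE makes no shape assumption, which is the selling point of the non-parametric approach.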
Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation
Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu
2015-01-01
To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method, which enhances the precision and generalizability of the RLG bias compensation model. PMID:26633401
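The tuning loop can be sketched with particle swarm optimization searching the hyperparameters of a kernel ridge regressor, used here as a simple stand-in for the SVM, that maps multi-point temperature gradients to gyro bias. The data, the search ranges, and the PSO constants are all synthetic/illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "temperature gradients" (3 point pairs) and a nonlinear bias.
X = rng.uniform(-1, 1, size=(120, 3))
bias = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.02 * rng.standard_normal(120)
Xtr, ytr, Xte, yte = X[:80], bias[:80], X[80:], bias[80:]

def cv_mse(log_gamma, log_lam):
    """Held-out MSE of an RBF kernel ridge regressor (SVM stand-in)."""
    gamma, lam = 10.0 ** log_gamma, 10.0 ** log_lam
    K = np.exp(-gamma * ((Xtr[:, None] - Xtr[None, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    Kte = np.exp(-gamma * ((Xte[:, None] - Xtr[None, :]) ** 2).sum(-1))
    return np.mean((Kte @ alpha - yte) ** 2)

# Plain global-best PSO over (log10 gamma, log10 lambda).
n_particles, iters = 12, 30
pos = rng.uniform([-2, -6], [2, 0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([cv_mse(*p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, [-2, -6], [2, 0])     # keep particles in bounds
    f = np.array([cv_mse(*p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = pbest[np.argmin(pbest_f)].copy()

print(round(float(cv_mse(*g)), 4))   # held-out MSE with tuned hyperparameters
```

The same loop applies unchanged to true SVM regression hyperparameters; only the inner fit-and-score function would differ.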
Robust algorithm for aligning two-dimensional chromatograms.
Gros, Jonas; Nabi, Deedar; Dimitriou-Christidis, Petros; Rutler, Rebecca; Arey, J Samuel
2012-11-06
Comprehensive two-dimensional gas chromatography (GC × GC) chromatograms typically exhibit run-to-run retention time variability. Chromatogram alignment is often a desirable step prior to further analysis of the data, for example, in studies of environmental forensics or weathering of complex mixtures. We present a new algorithm for aligning whole GC × GC chromatograms. This technique is based on alignment points whose locations are indicated by the user both in a target chromatogram and in a reference chromatogram. We applied the algorithm to two sets of samples. First, we aligned the chromatograms of twelve compositionally distinct oil spill samples, all analyzed using the same instrument parameters. Second, we applied the algorithm to two compositionally distinct wastewater extracts analyzed using two different instrument temperature programs, thus involving larger retention time shifts than the first sample set. For both sample sets, the new algorithm performed favorably compared to two other available alignment algorithms: that of Pierce, K. M.; Wood, Lianna F.; Wright, B. W.; Synovec, R. E. Anal. Chem. 2005, 77, 7735-7743 and 2-D COW from Zhang, D.; Huang, X.; Regnier, F. E.; Zhang, M. Anal. Chem. 2008, 80, 2664-2671. The new algorithm achieves the best matches of retention times for test analytes, avoids some artifacts which result from the other alignment algorithms, and incurs the least modification of quantitative signal information.
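The alignment-point idea can be shown in miniature for one retention-time axis: user-selected anchors pair times in the target chromatogram with the same peaks in the reference, and the whole axis is warped by piecewise-linear interpolation between them. The times below are invented for illustration; the actual algorithm operates on both GC × GC dimensions.

```python
import numpy as np

# User-indicated alignment points: the same peaks in the two runs.
target_pts    = np.array([2.0, 10.0, 25.0, 40.0])   # minutes, target run
reference_pts = np.array([2.2, 10.5, 24.8, 40.1])   # minutes, reference run

def warp(t):
    """Map a target retention time onto the reference time axis."""
    return np.interp(t, target_pts, reference_pts)

# Alignment points map exactly; times in between shift proportionally.
print(float(warp(10.0)), round(float(warp(17.5)), 3))
```

Because the warp only remaps the time axis, peak areas between anchors are preserved up to interpolation, which is why this style of alignment modifies quantitative signal information less than resampling-heavy methods.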
Satellite Imagery Analysis for Nighttime Temperature Inversion Clouds
NASA Technical Reports Server (NTRS)
Kawamoto, K.; Minnis, P.; Arduini, R.; Smith, W., Jr.
2001-01-01
Clouds play important roles in the climate system. Their optical and microphysical properties, which largely determine their radiative properties, need to be investigated. Among several measurement means, satellite remote sensing seems to be the most promising. Since most of the cloud algorithms proposed so far are for daytime use, relying on reflected solar radiation, Minnis et al. (1998) developed a nighttime algorithm using the 3.7-, 11- and 12-micron channels. Their algorithm, however, has the drawback that it is not able to treat temperature inversion cases. We update their algorithm, incorporating a new parameterization by Arduini et al. (1999) which is valid for temperature inversion cases. This updated algorithm has been applied to GOES satellite data and reasonable retrieval results were obtained.
Remote sensing data supporting EULAKES project
NASA Astrophysics Data System (ADS)
Bresciani, Mariano; Matta, Erica; Giardino, Claudia
2013-04-01
EULAKES Project (European Lakes Under Environmental Stressors), funded by the Central Europe Programme 2010-2013, includes the study of four European lakes: Garda Lake (Italy), Charzykowskie Lake (Poland), Neusiedl Lake (Austria) and Balaton Lake (Hungary). The aim of the Project is to evaluate the lakes' exposure to different types of risk in order to provide useful tools to improve natural resources planning and management. The goal is to build an informatics system to support decision makers, which also provides a list of possible measures to be undertaken for water quality protection. Water quality characteristics have been assessed using remote sensing techniques. Our activity provided the spatial distribution of photosynthetic cyanobacteria-specific pigments in Charzykowskie Lake, macrophyte mapping in Garda Lake using MIVIS images, and common reed change detection in Neusiedl Lake through Landsat satellite image analysis. 4800 MODIS 11A products, from 2004 to 2010, have been acquired to evaluate surface water temperature trends, significant input data for future global change scenarios. Temperature analysis allowed the evaluation of the lakes' different characteristics, temperature temporal trends and temperature spatial variability inside each lake. Optically active parameters (Chlorophyll-a, Total Suspended Matter, Colored Dissolved Organic Matter), as well as water transparency, have been estimated from the processing of 250 MERIS images. Satellite images, acquired following Water Framework Directive monitoring rules, have been corrected for adjacency effects using the ESA BEAM-VISAT software (ICOL tool). Atmospheric correction has been performed with different software packages: the 6S radiative transfer code and the BEAM neural network. Different algorithms for the estimation of water quality parameters have been applied to the reflectance values, after validation against spectroradiometric field measurements.
Garda Lake has been analysed with the ESA Case 2 Regional algorithm, while for Balaton and Neusiedl lakes a new dedicated algorithm has been purposely created by integrating the Case 2 Regional and Eutrophic algorithms. The Eutrophic algorithm has been used for Charzykowskie Lake. The results, validated against limnological data, highlighted Garda Lake's oligotrophic characteristics and the other lakes' meso-eutrophic properties. Neusiedl Lake emerged as a highly turbid lake rich in colored dissolved organic matter, while Charzykowskie Lake is characterised by frequent cyanobacteria blooms.
Optimal Area Profiles for Ideal Single Nozzle Air-Breathing Pulse Detonation Engines
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2003-01-01
The effects of cross-sectional area variation on idealized Pulse Detonation Engine performance are examined numerically. A quasi-one-dimensional, reacting, numerical code is used as the kernel of an algorithm that iteratively determines the correct sequencing of inlet air, inlet fuel, detonation initiation, and cycle time to achieve a limit cycle with specified fuel fraction and volumetric purge fraction. The algorithm is exercised on a tube with a cross-sectional area profile containing two degrees of freedom: overall exit-to-inlet area ratio, and the distance along the tube at which continuous transition from inlet to exit area begins. These two parameters are varied over three flight conditions (defined by inlet total temperature, inlet total pressure and ambient static pressure) and the performance is compared to a straight tube. It is shown that compared to straight tubes, increases of 20 to 35 percent in specific impulse and specific thrust are obtained with tubes of relatively modest area change. The iterative algorithm is described, and its limitations are noted and discussed. Optimized results are presented showing performance measurements, wave diagrams, and area profiles. Suggestions for future investigation are also discussed.
NASA Astrophysics Data System (ADS)
Broderick, Ciaran; Fealy, Rowan
2013-04-01
Circulation type classifications (CTCs) compiled as part of the COST733 Action, entitled 'Harmonisation and Application of Weather Type Classifications for European Regions', are examined for their synoptic and climatological applicability to Ireland based on their ability to characterise surface temperature and precipitation. In all, 16 different objective classification schemes, representative of four different methodological approaches to circulation typing (optimization algorithms, threshold-based methods, eigenvector techniques and leader algorithms), are considered. Several statistical metrics which variously quantify the ability of CTCs to discretize daily data into well-defined homogeneous groups are used to evaluate and compare different approaches to synoptic typing. The records from 14 meteorological stations located across the island of Ireland are used in the study. The results indicate that while it was not possible to identify a single optimum classification or approach to circulation typing - conditional on the location and surface variables considered - a number of general assertions regarding the performance of different schemes can be made. The findings for surface temperature indicate that those classifications based on predefined thresholds (e.g. Litynski, GrossWetterTypes and original Lamb Weather Type) perform well, as do the Kruizinga and Lund classification schemes. Similarly, for precipitation, predefined type classifications return high skill scores, as do those classifications derived using some optimization procedure (e.g. SANDRA, Self Organizing Maps and K-Means clustering). For both temperature and precipitation the results generally indicate that the classifications perform best for the winter season - reflecting the closer coupling between large-scale circulation and surface conditions during this period.
In contrast to the findings for temperature, spatial patterns in the performance of the classifications were more evident for precipitation. In the case of this variable, the more westerly synoptic stations, open to zonal airflow and less influenced by regional-scale forcings, generally exhibited a stronger link with large-scale circulation.
NASA Technical Reports Server (NTRS)
Ichoku, Charles; Kaufman, Y. J.; Fraser, R. H.; Jin, J.-Z.; Park, W. M.; Lau, William K. M. (Technical Monitor)
2001-01-01
Two fixed-threshold algorithms, from the Canada Centre for Remote Sensing and the European Space Agency (CCRS and ESA), and three contextual algorithms (GIGLIO, International Geosphere-Biosphere Programme, and Moderate Resolution Imaging Spectroradiometer: GIGLIO, IGBP, and MODIS) were used for fire detection with Advanced Very High Resolution Radiometer (AVHRR) data acquired over Canada during the 1995 fire season. The CCRS algorithm was developed for the boreal ecosystem, while the other four are for global application. The MODIS algorithm, although developed specifically for use with MODIS sensor data, was applied to AVHRR in this study for comparative purposes. Fire detection accuracy assessment for the algorithms was based on comparisons with available 1995 burned area ground survey maps covering five Canadian provinces. Overall accuracy estimates in terms of omission (CCRS=46%, ESA=81%, GIGLIO=75%, IGBP=51%, MODIS=81%) and commission (CCRS=0.35%, ESA=0.08%, GIGLIO=0.56%, IGBP=0.75%, MODIS=0.08%) errors over forested areas revealed large differences in performance between the algorithms, with no apparent dependence on algorithm type (fixed-threshold or contextual). CCRS performed best in detecting real forest fires, with the least omission error, while ESA and MODIS produced the highest omission error, probably because of their relatively high threshold values designed for global application. The commission error values appear small because the area of pixels falsely identified by each algorithm was expressed as a ratio to the vast unburned forest area. More detailed study shows that most commission errors in all the algorithms were incurred in nonforest agricultural areas, especially on days with very high surface temperatures. The advantage of the high thresholds in ESA and MODIS was that they incurred the least commission errors.
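The two scores used above can be stated in miniature: omission error is the fraction of surveyed burned area an algorithm missed, while commission error expresses falsely flagged area relative to the unburned forest area (which is why the commission percentages look so small). The counts below are invented for illustration.

```python
# Omission and commission error, as defined in the comparison above.
def omission_error(burned_detected, burned_total):
    """Fraction of the surveyed burned area that the algorithm missed."""
    return 1.0 - burned_detected / burned_total

def commission_error(false_alarm_area, unburned_area):
    """Falsely flagged area as a ratio to the (vast) unburned area."""
    return false_alarm_area / unburned_area

# e.g. 54 of 100 surveyed burn units detected; 35 km^2 falsely flagged
# out of 10,000 km^2 of unburned forest:
print(omission_error(54, 100), commission_error(35.0, 10000.0))
```

Normalizing commission by the unburned area rather than by the detected area is the design choice that makes even a 0.35% value correspond to substantial false-alarm area.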
NASA Astrophysics Data System (ADS)
Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.
2018-06-01
A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative that is based on physical heat budget processes, but is so simplified that it may be applied like data-driven models. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (the states of Washington, Oregon and New York), Poland and Switzerland, located in high mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
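The calibration setup can be sketched with a generic differential evolution (DE) loop fitting a toy three-parameter stream-temperature model to observations. The real study calibrates the eight-parameter air2stream model and uses the CoBiDE variant of DE; the model, data, and DE constants below are illustrative assumptions showing only the pattern.

```python
import numpy as np

rng = np.random.default_rng(4)

days = np.arange(365)
air = 10.0 + 12.0 * np.sin(2 * np.pi * (days - 110) / 365)   # synthetic air temp

def water_model(params, air):
    a, b, c = params        # offset, sensitivity to air temp, relaxation rate
    w = np.empty_like(air)
    w[0] = a + b * air[0]
    for i in range(1, len(air)):
        w[i] = w[i - 1] + c * (a + b * air[i] - w[i - 1])     # lagged response
    return w

true_params = np.array([3.0, 0.8, 0.1])
obs = water_model(true_params, air) + 0.1 * rng.standard_normal(len(air))

def rmse(params):
    return np.sqrt(np.mean((water_model(params, air) - obs) ** 2))

# DE/rand/1/bin with fixed F and CR (CoBiDE adapts these; this sketch does not).
bounds = np.array([[0.0, 10.0], [0.0, 2.0], [0.01, 1.0]])
npop, F, CR = 20, 0.7, 0.9
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(npop, 3))
fit = np.array([rmse(p) for p in pop])
for _ in range(150):
    for i in range(npop):
        a_, b_, c_ = pop[rng.choice(npop, 3, replace=False)]
        trial = np.where(rng.random(3) < CR, a_ + F * (b_ - c_), pop[i])
        trial = np.clip(trial, bounds[:, 0], bounds[:, 1])
        f = rmse(trial)
        if f < fit[i]:
            pop[i], fit[i] = trial, f

print(round(float(fit.min()), 3))    # best RMSE; the noise floor here is ~0.1
```

Because the objective is just a forward run of the model, any of the twelve algorithms compared in the study can be swapped in at the trial-generation step.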
A satellite snow depth multi-year average derived from SSM/I for the high latitude regions
Biancamaria, S.; Mognard, N.M.; Boone, A.; Grippa, M.; Josberger, E.G.
2008-01-01
The hydrological cycle for high latitude regions is inherently linked with the seasonal snowpack. Thus, accurately monitoring the snow depth and the associated areal coverage are critical issues for monitoring the global climate system. Passive microwave satellite measurements provide an optimal means to monitor the snowpack over the arctic region. While the temporal evolution of snow extent can be observed globally from microwave radiometers, the determination of the corresponding snow depth is more difficult. A dynamic algorithm that accounts for the dependence of the microwave scattering on the snow grain size has been developed to estimate snow depth from Special Sensor Microwave/Imager (SSM/I) brightness temperatures and was validated over the U.S. Great Plains and Western Siberia. The purpose of this study is to assess the dynamic algorithm's performance over the entire high latitude (land) region by computing a snow depth multi-year field for the time period 1987-1995. This multi-year average is compared to the Global Soil Wetness Project-Phase 2 (GSWP2) snow depth computed from several state-of-the-art land surface schemes and averaged over the same time period. The multi-year average obtained by the dynamic algorithm is in good agreement with the GSWP2 snow depth field (the correlation coefficient for January is 0.55). The static algorithm, which assumes a constant snow grain size in space and time, does not correlate with the GSWP2 snow depth field (the correlation coefficient with GSWP2 data for January is -0.03), but exhibits a very high anti-correlation with the NCEP average January air temperature field (correlation coefficient -0.77), the deepest satellite snowpack being located in the coldest regions, where the snow grain size may be significantly larger than the average value used in the static algorithm.
The dynamic algorithm performs better over Eurasia (with a correlation coefficient with GSWP2 snow depth equal to 0.65) than over North America (where the correlation coefficient decreases to 0.29). © 2007 Elsevier Inc. All rights reserved.
Simulated GOLD Observations of Atmospheric Waves
NASA Astrophysics Data System (ADS)
Correira, J.; Evans, J. S.; Lumpe, J. D.; Rusch, D. W.; Chandran, A.; Eastes, R.; Codrescu, M.
2016-12-01
The Global-scale Observations of the Limb and Disk (GOLD) mission will measure structures in the Earth's airglow layer due to dynamical forcing by vertically and horizontally propagating waves. These measurements focus on global-scale structures, including compositional and temperature responses resulting from dynamical forcing. Daytime observations of far-UV emissions by GOLD will be used to generate two-dimensional maps of the ratio of atomic oxygen and molecular nitrogen column densities (ΣO/N2 ) as well as neutral temperature that provide signatures of large-scale spatial structure. In this presentation, we use simulations to demonstrate GOLD's capability to deduce periodicities and spatial dimensions of large-scale waves from the spatial and temporal evolution observed in composition and temperature maps. Our simulations include sophisticated forward modeling of the upper atmospheric airglow that properly accounts for anisotropy in neutral and ion composition, temperature, and solar illumination. Neutral densities and temperatures used in the simulations are obtained from global circulation and climatology models that have been perturbed by propagating waves with a range of amplitudes, periods, and sources of excitation. Modeling of airglow emission and predictions of ΣO/N2 and neutral temperatures are performed with the Atmospheric Ultraviolet Radiance Integrated Code (AURIC) and associated derived product algorithms. Predicted structure in ΣO/N2 and neutral temperature due to dynamical forcing by propagating waves is compared to existing observations. Realistic GOLD Level 2 data products are generated from simulated airglow emission using algorithm code that will be implemented operationally at the GOLD Science Data Center.
NASA Astrophysics Data System (ADS)
Millard, R. C.; Seaver, G.
1990-12-01
A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m^-3, which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m^-3 accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
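The salinometer inversion can be sketched as follows: given a forward algorithm n(S) for the index of refraction at fixed wavelength, temperature, and pressure, salinity is recovered from a measured n by Newton-Raphson iteration. The simple quadratic n(S) below is a toy stand-in for the 27-term algorithm, with invented coefficients of plausible magnitude.

```python
# Toy forward model and its Newton-Raphson inversion (the paper inverts the
# full 27-term algorithm the same way).
def n_of_s(salinity_psu):
    """Toy index of refraction vs. salinity at fixed T, P, and wavelength."""
    return 1.3330 + 1.85e-4 * salinity_psu - 2.0e-7 * salinity_psu ** 2

def salinity_from_n(n_meas, s0=20.0, tol=1e-12):
    """Newton-Raphson inversion of n_of_s using a numerical derivative."""
    s = s0
    for _ in range(50):
        f = n_of_s(s) - n_meas
        df = (n_of_s(s + 1e-4) - n_of_s(s - 1e-4)) / 2e-4   # central difference
        step = f / df
        s -= step
        if abs(step) < tol:
            break
    return s

s_true = 35.0
s_rec = salinity_from_n(n_of_s(s_true))
print(round(s_rec, 6))
```

Because n(S) is smooth and nearly linear over the oceanographic range, the iteration converges in a handful of steps, and the inversion inherits the forward algorithm's uncertainty, as the abstract notes.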
The Edge-Disjoint Path Problem on Random Graphs by Message-Passing.
Altarelli, Fabrizio; Braunstein, Alfredo; Dall'Asta, Luca; De Bacco, Caterina; Franz, Silvio
2015-01-01
We present a message-passing algorithm to solve a series of edge-disjoint path problems on graphs based on the zero-temperature cavity equations. Edge-disjoint path problems are important in the general context of routing, which can be defined by incorporating both traffic optimization and total path length minimization under a single framework. The computation of the cavity equations can be performed efficiently by exploiting a mapping of a generalized edge-disjoint path problem on a star graph onto a weighted maximum matching problem. We perform extensive numerical simulations on random graphs of various types to test the performance both in terms of path length minimization and maximization of the number of accommodated paths. In addition, we test the performance on benchmark instances on various graphs by comparison with state-of-the-art algorithms and results found in the literature. Our message-passing algorithm always outperforms the others in terms of the number of accommodated paths when considering nontrivial instances (otherwise it gives the same trivial results). Remarkably, the largest improvement in performance with respect to the other methods employed is found in the case of benchmarks with meshes, where the validity assumptions behind message-passing are expected to be weakest. In these cases, even though the exact message-passing equations do not converge, by introducing a reinforcement parameter to force convergence towards a suboptimal solution, we were able to always outperform the other algorithms with a peak of 27% performance improvement in terms of accommodated paths. On random graphs, we numerically observe two separate regimes: one in which all paths can be accommodated and one in which this is not possible. We also investigate the behavior of both the number of paths to be accommodated and their minimum total length.
PMID:26710102
Thallium Bromide as an Alternative Material for Room-Temperature Gamma-Ray Spectroscopy and Imaging
NASA Astrophysics Data System (ADS)
Koehler, William
Thallium bromide is an attractive material for room-temperature gamma-ray spectroscopy and imaging because of its high atomic number (Tl: 81, Br: 35), high density (7.56 g/cm3), and a wide bandgap (2.68 eV). In this work, 5 mm thick TlBr detectors achieved 0.94% FWHM at 662 keV for all single-pixel events and 0.72% FWHM at 662 keV from the best pixel and depth using three-dimensional position sensing technology. However, these results were limited to stable operation at -20°C. After days to months of room-temperature operation, ionic conduction caused these devices to fail. Depth-dependent signal analysis was used to isolate room-temperature degradation effects to within 0.5 mm of the anode surface. This was verified by refabricating the detectors after complete failure at room temperature; after refabrication, similar performance and functionality were recovered. As part of this work, the improvement in electron drift velocity and energy resolution during conditioning at -20°C was quantified. A new method was developed to measure the impurity concentration without changing the gamma ray measurement setup. The new method was used to show that detector conditioning was likely the result of charged impurities drifting out of the active volume. This space charge reduction then caused a more stable and uniform electric field. Additionally, new algorithms were developed to remove hole contributions in high-hole-mobility detectors to improve depth reconstruction. These algorithms improved the depth reconstruction (accuracy) without degrading the depth uncertainty (precision). Finally, spectroscopic and imaging performance of new 11 x 11 pixelated-anode TlBr detectors was characterized. The larger detectors were used to show that energy resolution can be improved by identifying photopeak events from their Tl characteristic x-rays.
SMOS and AMSR-2 soil moisture evaluation using representative monitoring sites in southern Australia
NASA Astrophysics Data System (ADS)
Walker, J. P.; Mei Sun, M. S.; Rudiger, C.; Parinussa, R.; Koike, T.; Kerr, Y. H.
2016-12-01
The performance of soil moisture products from AMSR-2 and SMOS was evaluated against representative surface soil moisture stations within the Yanco study area in the Murrumbidgee Catchment, in southeast Australia. AMSR-2 Level 3 (L3) soil moisture products retrieved from two sets of brightness temperatures using the Japan Aerospace Exploration Agency (JAXA) and the Land Parameter Retrieval Model (LPRM) algorithms were included. For the LPRM algorithm, two different parameterization methods were applied. In the case of SMOS, two versions of the SMOS L3 soil moisture product were assessed. Results based on using "random" and representative stations to evaluate the products were contrasted. The latest versions of the JAXA (JX2) and LPRM (LP3) products were found to perform better than the earlier versions (JX1, LP1 and LP2). Moreover, soil moisture retrieval based on the latter version of brightness temperature and parameterization scheme improved when C-band observations were used, as opposed to the X-band data. Yet, X-band retrievals were found to perform better than C-band. Inter-comparing AMSR-2 X-band products from different acquisition times showed a better performance for 1:30 pm overpasses, whereas SMOS 6:00 am retrievals were found to perform the best. The mean absolute error (MAE) goal accuracy of the AMSR-2 mission (MAE < 0.08 m3/m3) was met by both versions of the JAXA products, the LPRM X-band products retrieved from the reprocessed version of brightness temperatures, and both versions of SMOS products. Nevertheless, none of the products achieved the SMOS target accuracy of 0.04 m3/m3. Finally, the product performance depended on the statistics used in their evaluation; based on temporal and absolute accuracy JX2 is recommended, whereas LP3 X-band 1:30 pm and SMOS2 6:00 am are recommended based on temporal accuracy alone.
Fearnot, N E; Kitoh, O; Fujita, T; Okamura, H; Smith, H J; Calderini, M
1989-05-01
The effectiveness of using blood temperature change as an indicator to automatically vary heart rate physiologically was evaluated in 3 patients implanted with Model Sensor Kelvin 500 (Cook Pacemaker Corporation, Leechburg, PA, USA) pacemakers. Each patient performed two block-randomized treadmill exercise tests: one while programmed for temperature-based, rate-modulated pacing and the other while programmed without rate modulation. In 1 pacemaker patient and 4 volunteers, heart rates were recorded during exposure to a hot water bath. Blood temperature measured at 10 sec intervals and pacing rate measured at 1 min intervals were telemetered to a diagnostic programmer and data collector for storage and transfer to a computer. Observation comments and ECG-derived heart rates were manually recorded. The temperature-based pacemaker was shown to respond promptly not only to physical exertion but also to emotionally caused stress and submersion in a hot bath. These events cause increased heart rate in the normal heart. Using a suitable algorithm to process the measurement of blood temperature, it was possible to produce appropriate pacing rates in paced patients.
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.
2013-01-01
A two-step ensemble recentering Kalman filter (ERKF) analysis scheme is introduced. The algorithm consists of a recentering step followed by an ensemble Kalman filter (EnKF) analysis step. The recentering step is formulated such as to adjust the prior distribution of an ensemble of model states so that the deviations of individual samples from the sample mean are unchanged but the original sample mean is shifted to the prior position of the most likely particle, where the likelihood of each particle is measured in terms of closeness to a chosen subset of the observations. The computational cost of the ERKF is essentially the same as that of a same size EnKF. The ERKF is applied to the assimilation of Argo temperature profiles into the OGCM component of an ensemble of NASA GEOS-5 coupled models. Unassimilated Argo salt data are used for validation. A surprisingly small number (16) of model trajectories is sufficient to significantly improve model estimates of salinity over estimates from an ensemble run without assimilation. The two-step algorithm also performs better than the EnKF although its performance is degraded in poorly observed regions.
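The recentering step can be sketched directly from the description above: shift every ensemble member so that the sample mean lands on the most likely particle, leaving each member's deviation from the mean unchanged. A minimal illustration (plain lists stand in for model states; this is not the GEOS-5 implementation):

```python
def recenter(ensemble, likelihoods):
    """Shift an ensemble so its mean coincides with the most likely member,
    preserving each member's deviation from the sample mean."""
    n, dim = len(ensemble), len(ensemble[0])
    mean = [sum(member[j] for member in ensemble) / n for j in range(dim)]
    best = ensemble[max(range(n), key=lambda i: likelihoods[i])]
    shift = [best[j] - mean[j] for j in range(dim)]
    return [[member[j] + shift[j] for j in range(dim)] for member in ensemble]
```

After the shift, an ordinary EnKF analysis step would be applied to the recentered ensemble; the cost of the shift itself is negligible.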
Khajeh, Mostafa; Sarafraz-Yazdi, Ali; Natavan, Zahra Bameri
2016-03-01
The aim of this research was to develop a low-cost, environmentally friendly, and abundantly available adsorbent to remove methylene blue (MB) from water samples. Sawdust solid-phase extraction coupled with high-performance liquid chromatography was used for the extraction and determination of MB. In this study, an experimental data-based artificial neural network model is constructed to describe the performance of the sawdust solid-phase extraction method for various operating conditions. The pH, time, amount of sawdust, and temperature were the input variables, while the percentage of extraction of MB was the output. The optimum operating condition was then determined by the genetic algorithm method. The optimized conditions were obtained as follows: 11.5, 22.0 min, 0.3 g, and 26.0°C for pH of the solution, extraction time, amount of adsorbent, and temperature, respectively. Under these optimum conditions, the detection limit and relative standard deviation were 0.067 μg L(-1) and <2.4%, respectively. The Langmuir and Freundlich adsorption models were applied to describe the adsorption isotherms for the removal and determination of MB from water samples. © The Author(s) 2013.
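The Langmuir isotherm fit mentioned above can be illustrated with its standard linearised form, Ce/qe = Ce/qmax + 1/(KL*qmax), fitted by ordinary least squares. The data below are synthetic, not from the paper:

```python
def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

def fit_langmuir_linear(Ce, qe):
    """Recover (qmax, KL) from the linearised form Ce/qe = Ce/qmax + 1/(KL*qmax)."""
    ys = [c / q for c, q in zip(Ce, qe)]          # y = Ce/qe
    n = len(Ce)
    xbar, ybar = sum(Ce) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(Ce, ys)) \
        / sum((x - xbar) ** 2 for x in Ce)
    intercept = ybar - slope * xbar
    return 1.0 / slope, slope / intercept          # qmax = 1/slope, KL = slope/intercept
```

Fitting synthetic data generated with qmax = 10 and KL = 0.5 recovers both parameters; with real data the Freundlich model would be fitted analogously from log qe versus log Ce.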
Short term load forecasting using a self-supervised adaptive neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, H.; Pimmel, R.L.
The authors developed a self-supervised adaptive neural network to perform short term load forecasts (STLF) for a large power system covering a wide service area with several heavy load centers. They used the self-supervised network to extract correlational features from temperature and load data. In using data from the calendar year 1993 as a test case, they found a 0.90 percent error for hour-ahead forecasting and 1.92 percent error for day-ahead forecasting. These levels of error compare favorably with those obtained by other techniques. The algorithm ran in a couple of minutes on a PC containing an Intel Pentium 120 MHz CPU. Since the algorithm included searching the historical database, training the network, and actually performing the forecasts, this approach provides a real-time, portable, and adaptable STLF.
TPS In-Flight Health Monitoring Project Progress Report
NASA Technical Reports Server (NTRS)
Kostyk, Chris; Richards, Lance; Hudson, Larry; Prosser, William
2007-01-01
Progress in the development of new thermal protection systems (TPS) is reported. New approaches use embedded lightweight, sensitive, fiber optic strain and temperature sensors within the TPS. Goals of the program are to develop and demonstrate a prototype TPS health monitoring system, develop a thermal-based damage detection algorithm, characterize limits of sensor/system performance, and develop a methodology transferable to new designs of TPS health monitoring systems. Tasks completed during the project helped establish confidence in understanding of both test setup and the model and validated system/sensor performance in a simple TPS structure. Other progress included complete initial system testing, commencement of the algorithm development effort, generation of a damaged thermal response characteristics database, initial development of a test plan for integration testing of proven FBG sensors in simple TPS structure, and development of partnerships to apply the technology.
A Clinical Prediction Algorithm to Stratify Pediatric Musculoskeletal Infection by Severity
Benvenuti, Michael A; An, Thomas J; Mignemi, Megan E; Martus, Jeffrey E; Mencio, Gregory A; Lovejoy, Stephen A; Thomsen, Isaac P; Schoenecker, Jonathan G; Williams, Derek J
2016-01-01
Objective There are currently no algorithms for early stratification of pediatric musculoskeletal infection (MSKI) severity that are applicable to all types of tissue involvement. In this study, the authors sought to develop a clinical prediction algorithm that accurately stratifies infection severity based on clinical and laboratory data at presentation to the emergency department. Methods An IRB-approved retrospective review was conducted to identify patients aged 0–18 who presented to the pediatric emergency department at a tertiary care children’s hospital with concern for acute MSKI over a five-year period (2008–2013). Qualifying records were reviewed to obtain clinical and laboratory data and to classify in-hospital outcomes using a three-tiered severity stratification system. Ordinal regression was used to estimate risk for each outcome. Candidate predictors included age, temperature, respiratory rate, heart rate, C-reactive protein, and peripheral white blood cell count. We fit fully specified (all predictors) and reduced models (retaining predictors with a p-value ≤ 0.2). Discriminatory power of the models was assessed using the concordance (c)-index. Results Of the 273 identified children, 191 (70%) met inclusion criteria. Median age was 5.8 years. Outcomes included 47 (25%) children with inflammation only, 41 (21%) with local infection, and 103 (54%) with disseminated infection. Both the full and reduced models demonstrated excellent discriminatory performance (full model c-index 0.83, 95% CI [0.79–0.88]; reduced model 0.83, 95% CI [0.78–0.87]). Model fit was also similar, indicating preference for the reduced model. Variables in this model included C-reactive protein, pulse, temperature, and an interaction term for pulse and temperature. The odds of a more severe outcome increased by 30% for every 10-unit increase in C-reactive protein.
Conclusions Clinical and laboratory data obtained in the emergency department may be used to accurately differentiate pediatric MSKI severity. The predictive algorithm in this study stratifies pediatric MSKI severity at presentation irrespective of tissue involvement and anatomic diagnosis. Prospective studies are needed to validate model performance and clinical utility. PMID:27682512
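Under the proportional-odds (ordinal regression) model used above, the reported effect size, a 30% increase in the odds of a more severe outcome per 10-unit increase in C-reactive protein, corresponds to a per-unit log-odds coefficient of ln(1.3)/10. The sketch below derives the implied odds multiplier for any CRP change:

```python
import math

# Log-odds coefficient implied by the reported effect: odds x1.3 per 10 CRP units.
BETA_CRP = math.log(1.3) / 10.0

def odds_multiplier(delta_crp):
    """Multiplicative change in the odds of a more severe outcome
    for a given change in C-reactive protein (proportional-odds model)."""
    return math.exp(BETA_CRP * delta_crp)
```

For example, a 20-unit CRP increase multiplies the odds by 1.3 squared, i.e. 1.69, since log-odds are additive in the predictor.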
Adiabatic Quantum Search in Open Systems.
Wild, Dominik S; Gopalakrishnan, Sarang; Knap, Michael; Yao, Norman Y; Lukin, Mikhail D
2016-10-07
Adiabatic quantum algorithms represent a promising approach to universal quantum computation. In isolated systems, a key limitation to such algorithms is the presence of avoided level crossings, where gaps become extremely small. In open quantum systems, the fundamental robustness of adiabatic algorithms remains unresolved. Here, we study the dynamics near an avoided level crossing associated with the adiabatic quantum search algorithm, when the system is coupled to a generic environment. At zero temperature, we find that the algorithm remains scalable provided the noise spectral density of the environment decays sufficiently fast at low frequencies. By contrast, higher order scattering processes render the algorithm inefficient at any finite temperature regardless of the spectral density, implying that no quantum speedup can be achieved. Extensions and implications for other adiabatic quantum algorithms will be discussed.
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak
Rainflow algorithms are one of the popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rain-flow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on the time-dependent temperature calculation.
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak; ...
2015-07-15
Rainflow algorithms are one of the popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rain-flow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on the time-dependent temperature calculation.
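The stack-based rainflow counting used in such lifetime studies can be sketched as follows. This is a simplification of ASTM E1049: interior full cycles are extracted as (range, mean) pairs, while unclosed boundary reversals are left in the residue rather than counted as half cycles:

```python
def rainflow_cycles(series):
    """Count full load cycles with a stack-based rainflow pass.

    Simplified from ASTM E1049: interior cycles come out as (range, mean)
    pairs; unclosed boundary reversals remain in the returned residue.
    """
    # 1. Reduce the series to turning points (local extrema).
    tp = []
    for x in series:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) >= 0:
            tp[-1] = x           # same direction (or flat): extend the excursion
        else:
            tp.append(x)
    # 2. Stack pass: close a cycle whenever the new range X >= previous range Y.
    cycles, stack = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if x_rng < y_rng:
                break
            cycles.append((y_rng, (stack[-2] + stack[-3]) / 2.0))
            del stack[-3:-1]     # remove the two points forming the closed cycle
    return cycles, stack
```

Feeding a junction-temperature history through this counter yields the cycle ranges and mean levels that a lifetime model (e.g. a Coffin-Manson-type law) would then consume.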
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Je; Yoon, Hyun; Im, Piljae
This paper developed an algorithm that controls the supply air temperature in the variable refrigerant flow (VRF), outdoor air processing unit (OAP) system, according to indoor and outdoor temperature and humidity, and verified the effects after applying the algorithm to real buildings. The VRF-OAP system refers to a heating, ventilation, and air conditioning (HVAC) system to complement a ventilation function, which is not provided in the VRF system. It is a system that supplies air indoors by heat treatment of outdoor air through the OAP, as a number of indoor units and OAPs are connected to the outdoor unit in the VRF system simultaneously. This paper conducted experiments with regard to changes in efficiency and the cooling capabilities of each unit and system according to supply air temperature in the OAP using a multicalorimeter. Based on these results, an algorithm that controlled the temperature of the supply air in the OAP was developed considering indoor and outdoor temperatures and humidity. The algorithm was applied in the test building to verify the effects of energy reduction and the effects on indoor temperature and humidity. Loads were then changed by adjusting the number of conditioned rooms to verify the effect of the algorithm according to various load conditions. In the field test results, the energy reduction effect was approximately 15–17% at a 100% load, and 4–20% at a 75% load. However, no significant effects were shown at a 50% load. The indoor temperature and humidity reached a comfortable level.
Lee, Je; Yoon, Hyun; Im, Piljae; ...
2017-12-27
This paper developed an algorithm that controls the supply air temperature in the variable refrigerant flow (VRF), outdoor air processing unit (OAP) system, according to indoor and outdoor temperature and humidity, and verified the effects after applying the algorithm to real buildings. The VRF-OAP system refers to a heating, ventilation, and air conditioning (HVAC) system to complement a ventilation function, which is not provided in the VRF system. It is a system that supplies air indoors by heat treatment of outdoor air through the OAP, as a number of indoor units and OAPs are connected to the outdoor unit in the VRF system simultaneously. This paper conducted experiments with regard to changes in efficiency and the cooling capabilities of each unit and system according to supply air temperature in the OAP using a multicalorimeter. Based on these results, an algorithm that controlled the temperature of the supply air in the OAP was developed considering indoor and outdoor temperatures and humidity. The algorithm was applied in the test building to verify the effects of energy reduction and the effects on indoor temperature and humidity. Loads were then changed by adjusting the number of conditioned rooms to verify the effect of the algorithm according to various load conditions. In the field test results, the energy reduction effect was approximately 15–17% at a 100% load, and 4–20% at a 75% load. However, no significant effects were shown at a 50% load. The indoor temperature and humidity reached a comfortable level.
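Supply-air-temperature control of this kind is often built around a reset schedule: the warmer it is outdoors, the cooler the OAP supply-air setpoint. The sketch below uses a simple linear reset with hypothetical breakpoints and setpoints; it is not the control law from this study, which also accounts for humidity:

```python
def supply_air_setpoint(t_out, t_out_lo=10.0, t_out_hi=30.0,
                        sa_hi=22.0, sa_lo=14.0):
    """Linear supply-air-temperature reset (degrees C).

    Hypothetical schedule: below t_out_lo deliver sa_hi, above t_out_hi
    deliver sa_lo, and interpolate linearly in between.
    """
    if t_out <= t_out_lo:
        return sa_hi
    if t_out >= t_out_hi:
        return sa_lo
    frac = (t_out - t_out_lo) / (t_out_hi - t_out_lo)
    return sa_hi + frac * (sa_lo - sa_hi)
```

A fuller controller would add humidity terms and indoor-condition feedback on top of this outdoor-air reset.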
NASA Technical Reports Server (NTRS)
Knox, C. E.; Person, L. H., Jr.
1981-01-01
The NASA developed, implemented, and flight tested a flight management algorithm designed to improve the accuracy of delivering an airplane in a fuel-conservative manner to a metering fix at a time designated by air traffic control. This algorithm provides a 3D path with time control (4D) for the TCV B-737 airplane to make an idle-thrust, clean configured (landing gear up, flaps zero, and speed brakes retracted) descent to arrive at the metering fix at a predetermined time, altitude, and airspeed. The descent path is calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard pressure and temperature effects. The flight management descent algorithms are described and flight test results are presented.
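The geometry of an idle-thrust descent to a metering fix can be illustrated with a constant flight-path-angle approximation. This is a deliberate simplification: the flight management algorithm described above additionally accounts for gross weight, winds, and nonstandard pressure and temperature:

```python
import math

def top_of_descent_nm(cruise_alt_ft, fix_alt_ft, path_angle_deg=3.0):
    """Distance before the metering fix at which to begin the descent,
    assuming a constant flight-path angle (1 nm = 6076.12 ft)."""
    dh_ft = cruise_alt_ft - fix_alt_ft
    return dh_ft / (math.tan(math.radians(path_angle_deg)) * 6076.12)
```

For a 35,000 ft cruise and a 10,000 ft metering fix at a 3 degree path, this places the top of descent roughly 78 nm before the fix; the arrival time then follows from the groundspeed profile along that path.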
NASA Astrophysics Data System (ADS)
Shamsoddini, Rahim
2018-04-01
An incompressible smoothed particle hydrodynamics algorithm is proposed to model and investigate the thermal effect on the mixing rate of an active micromixer in which rotating stirrers enhance the mixing rate. In liquids, mass diffusion increases with increasing temperature while viscosity decreases, so the local Schmidt number decreases considerably with increasing temperature. The present study investigates the effect of wall temperature on mixing rate with an improved SPH method. The robust SPH method used in the present work is equipped with a shifting algorithm and renormalization tensors. With this improved algorithm, the mass, momentum, energy, and concentration equations are solved. The results, discussed for different temperature ratios, show that the mixing rate increases significantly with increased temperature ratio.
FFT analysis of sensible-heat solar-dynamic receivers
NASA Astrophysics Data System (ADS)
Lund, Kurt O.
The use of solar dynamic receivers with sensible energy storage in single-phase materials is considered. The feasibility of single-phase designs with weight and thermal performance comparable to existing two-phase designs is addressed. Linearized heat transfer equations are formulated for the receiver heat storage, representing the periodic input solar flux as the sum of steady and oscillating distributions. The steady component is solved analytically to produce the desired receiver steady outlet gas temperature, and the FFT algorithm is applied to the oscillating components to obtain the amplitudes and mode shapes of the oscillating solid and gas temperatures. The results indicate that sensible-heat receiver designs with performance comparable to state-of-the-art two-phase receivers are available.
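The decomposition above, splitting the periodic input solar flux into a steady component plus oscillating distributions, can be sketched with a direct DFT over one period (an FFT computes the same coefficients faster; this toy version is O(N^2)):

```python
import cmath
import math

def flux_modes(samples):
    """Split one period of a flux record into its steady (mean) component
    and the complex amplitudes of the oscillating harmonics via a direct DFT."""
    N = len(samples)
    coeffs = [
        sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
        for k in range(N)
    ]
    return coeffs[0].real, coeffs[1:]   # (steady part, oscillating harmonics)
```

The steady part drives the mean outlet gas temperature, while the magnitudes and phases of the harmonics give the amplitudes and mode shapes of the temperature oscillations; note a real cosine of amplitude 2 appears as two conjugate bins of magnitude 1.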
NASA Technical Reports Server (NTRS)
Evans, Keith D.; Demoz, Belay B.; Cadirola, Martin P.; Melfi, S. H.; Whiteman, David N.; Schwemmer, Geary K.; Starr, David O'C.; Schmidlin, F. J.; Feltz, Wayne
2000-01-01
The NASA/Goddard Space Flight Center Scanning Raman Lidar has made measurements of water vapor and aerosols for almost ten years. Calibration of the water vapor data has typically been performed by comparison with another water vapor sensor such as radiosondes. We present a new method for water vapor calibration that only requires low clouds, and surface pressure and temperature measurements. A sensitivity study was performed and the cloud base algorithm agrees with the radiosonde calibration to within 10-15%. Knowledge of the true atmospheric lapse rate is required to obtain more accurate cloud base temperatures. Analysis of water vapor and aerosol measurements made in the vicinity of Hurricane Bonnie are discussed.
Estimates of Single Sensor Error Statistics for the MODIS Matchup Database Using Machine Learning
NASA Astrophysics Data System (ADS)
Kumar, C.; Podesta, G. P.; Minnett, P. J.; Kilpatrick, K. A.
2017-12-01
Sea surface temperature (SST) is a fundamental quantity for understanding weather and climate dynamics. Although sensors aboard satellites provide global and repeated SST coverage, a characterization of SST precision and bias is necessary for determining the suitability of SST retrievals in various applications. Guidance on how to derive meaningful error estimates is still being developed. Previous methods estimated retrieval uncertainty based on geophysical factors, e.g. season or "wet" and "dry" atmospheres, but the discrete nature of these bins led to spatial discontinuities in SST maps. Recently, a new approach clustered retrievals based on the terms (excluding offset) in the statistical algorithm used to estimate SST. This approach resulted in over 600 clusters - too many to understand the geophysical conditions that influence retrieval error. Using MODIS and buoy SST matchups (2002 - 2016), we use machine learning algorithms (recursive and conditional trees, random forests) to gain insight into geophysical conditions leading to the different signs and magnitudes of MODIS SST residuals (satellite SSTs minus buoy SSTs). MODIS retrievals were first split into three categories: < -0.4 C, -0.4 C ≤ residual ≤ 0.4 C, and > 0.4 C. These categories are heavily unbalanced, with residuals > 0.4 C being much less frequent. Performance of classification algorithms is affected by imbalance, thus we tested various rebalancing algorithms (oversampling, undersampling, combinations of the two). We consider multiple features for the decision tree algorithms: regressors from the MODIS SST algorithm, proxies for temperature deficit, and spatial homogeneity of brightness temperatures (BTs), e.g., the range of 11 μm BTs inside a 25 km2 area centered on the buoy location. These features and a rebalancing of classes led to an 81.9% accuracy when classifying SST retrievals into the < -0.4 C and -0.4 C ≤ residual ≤ 0.4 C categories. 
Spatial homogeneity in BTs consistently appears as a very important variable for classification, suggesting that unidentified cloud contamination still is one of the causes leading to negative SST residuals. Precision and accuracy of error estimates from our decision tree classifier are enhanced using this knowledge.
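The class rebalancing tested above can be illustrated with its simplest variant, random oversampling of minority classes until every class matches the majority count. This is a generic sketch, not the authors' exact procedure:

```python
import random

def oversample(X, y, seed=0):
    """Random oversampling: replicate minority-class samples (with replacement)
    until every class has as many samples as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_max = max(len(rows) for rows in by_class.values())
    X_bal, y_bal = [], []
    for label, rows in by_class.items():
        picks = rows + [rng.choice(rows) for _ in range(n_max - len(rows))]
        X_bal += picks
        y_bal += [label] * n_max
    return X_bal, y_bal
```

Undersampling the majority class, or combining the two, follows the same bookkeeping; the balanced set is then fed to the tree or random-forest classifier.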
Estimation of body temperature rhythm based on heart activity parameters in daily life.
Sooyoung Sim; Heenam Yoon; Hosuk Ryou; Kwangsuk Park
2014-01-01
Body temperature contains valuable health-related information such as circadian rhythm and menstruation cycle. Previous studies have also found that the body temperature rhythm in daily life is related to sleep disorders and cognitive performance. However, monitoring body temperature with existing devices during daily life is not easy because they are invasive, intrusive, or expensive. Therefore, a technology that can accurately and nonintrusively monitor body temperature is required. In this study, we developed a body temperature estimation model based on heart rate and heart rate variability parameters. Although this work was inspired by previous research, we originally identified that the model can be applied to body temperature monitoring in daily life. We also found that normalized mean heart rate (nMHR) and frequency-domain parameters of heart rate variability showed better performance than other parameters. Although we should validate the model with a larger number of subjects and consider additional algorithms to decrease the accumulated estimation error, we could verify the usefulness of this approach. Through this study, we expect to be able to monitor core body temperature and circadian rhythm from a simple heart rate monitor, and thus obtain various health-related information derived from the daily body temperature rhythm.
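The nMHR predictor mentioned above, heart rate normalized by its mean over the recording, can be sketched together with a toy linear temperature mapping. Both the baseline (36.8 C) and gain coefficients below are hypothetical placeholders, not the paper's fitted model:

```python
def normalized_mean_hr(hr_samples):
    """Normalize heart-rate samples by their mean over the recording (nMHR)."""
    mean_hr = sum(hr_samples) / len(hr_samples)
    return [h / mean_hr for h in hr_samples]

def estimate_core_temp(nmhr, baseline=36.8, gain=2.0):
    """Toy linear mapping from nMHR to core temperature (degrees C).

    Coefficients are illustrative only; a real model would be fitted to
    simultaneous temperature and heart-rate recordings.
    """
    return [baseline + gain * (v - 1.0) for v in nmhr]
```

By construction the nMHR series averages to 1, so the toy estimate oscillates around the baseline, mimicking a circadian rhythm driven by heart-rate fluctuations.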
Performance optimization of a photovoltaic chain conversion by the PWM control
NASA Astrophysics Data System (ADS)
Rezoug, M. R.; Chenni, R.
2017-02-01
The interest of the maximum power point tracking technique presented in this article lies in the fact that it works instantaneously on the real characteristic of the photovoltaic module. This work is based on instantaneous measurements of the module's terminal current and voltage, together with exploitation of the power versus duty cycle characteristic to rapidly identify the duty cycle at which power reaches its maximum value. To ensure instantaneous tracking of the maximum power point, we use a DC/DC converter driven by a pulse width modulation (PWM) command controlled by an algorithm stored in a microcontroller's memory. This algorithm responds to rapid changes in climate (sunlight and temperature). Sensors are provided to acquire the control parameters VPV and IPV whenever the operating conditions change. Applied to the duty cycle of the static converter, the algorithm controls the power supplied by the photovoltaic generator through an oscillatory movement around the MPP. Our article highlights the importance of this technique, which lies in its simplicity and its performance under changing climatic conditions. This efficiency is confirmed by experimental tests, and this technique improves on its predecessors.
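A duty-cycle search of this kind can be sketched as a perturb-and-observe hill climb on the power versus duty cycle curve: keep stepping the duty cycle in one direction while power rises, and reverse when it falls, which produces the small oscillation around the MPP described above. The curve below is synthetic, and the exact update rule of the paper is not reproduced here:

```python
def mppt_po(power_of_duty, d0=0.5, step=0.01, iters=200):
    """Perturb-and-observe hill climb on the converter duty cycle.

    power_of_duty: callable returning the measured PV power at a duty cycle.
    Returns a duty cycle oscillating within about one step of the MPP.
    """
    d = d0
    p = power_of_duty(d)
    direction = 1.0
    for _ in range(iters):
        d_new = min(max(d + direction * step, 0.0), 1.0)
        p_new = power_of_duty(d_new)
        if p_new < p:
            direction = -direction   # overshot the peak: reverse the perturbation
        d, p = d_new, p_new
    return d
```

In hardware, `power_of_duty` is replaced by applying the duty cycle to the converter and multiplying the sensed VPV and IPV.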
Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants
NASA Astrophysics Data System (ADS)
Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo
2017-10-01
Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Daylight simulators in present use rely on fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of LEDs that compose the light source. The multiobjective formulation seeks the best spectral simulation: minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) matching that of the target, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for CCT and CRI calculation, is presented in this paper. A comparative result analysis of the M-GEO evolutionary algorithm with the conventional deterministic Levenberg-Marquardt algorithm is also presented.
Adaptive Optimization of Aircraft Engine Performance Using Neural Networks
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Long, Theresa W.
1995-01-01
Preliminary results are presented on the development of an adaptive neural network based control algorithm to enhance aircraft engine performance. This work builds upon a previous National Aeronautics and Space Administration (NASA) effort known as Performance Seeking Control (PSC). PSC is an adaptive control algorithm which contains a model of the aircraft's propulsion system that is updated on-line to match the operation of the aircraft's actual propulsion system. Information from the on-line model is used to adapt the control system during flight to allow optimal operation of the aircraft's propulsion system (inlet, engine, and nozzle), improving aircraft engine performance without compromising reliability or operability. Performance Seeking Control has been shown to yield reductions in fuel flow, increases in thrust, and reductions in engine fan turbine inlet temperature. The neural network based adaptive control, like PSC, will contain a model of the propulsion system which will be used to calculate optimal control commands on-line. It is hoped that it will provide additional benefits beyond those of PSC. The PSC algorithm is computationally intensive, is valid only at near steady-state flight conditions, and has no way to adapt or learn on-line. These issues are being addressed in the development of the optimal neural controller: specialized neural network processing hardware is being developed to run the software, the algorithm will be valid at both steady-state and transient conditions, and it will take advantage of the on-line learning capability of neural networks. Future plans include testing the neural network software and hardware prototype against an aircraft engine simulation. In this paper, the proposed neural network software and hardware is described and preliminary neural network training results are presented.
NASA Astrophysics Data System (ADS)
Bostock, J.; Weller, P.; Cooklin, M.
2010-07-01
Automated diagnostic algorithms are used in implantable cardioverter-defibrillators (ICDs) to detect abnormal heart rhythms. These algorithms can misdiagnose, and improved specificity is needed to prevent inappropriate therapy. Knowledge engineering (KE) and artificial intelligence (AI) could improve this. A pilot study of KE was performed with an artificial neural network (ANN) as the AI system. A case note review analysed arrhythmic events stored in patients' ICD memories. 13.2% of patients received inappropriate therapy. The best ICD algorithm had sensitivity 1.00 and specificity 0.69 (p<0.001 versus the gold standard). A subset of the data was used to train and test an ANN. A feed-forward, back-propagation network with 7 inputs, a 4-node hidden layer, and 1 output had sensitivity 1.00 and specificity 0.71 (p<0.001). A prospective study was performed using KE to list arrhythmias, factors, and indicators, for which measurable parameters were evaluated and the results reviewed by a domain expert. Waveforms from electrodes in the heart and thoracic bio-impedance, together with temperature and motion data, were collected from 65 patients during cardiac electrophysiological studies. 5 incomplete datasets were due to technical failures. We concluded that KE successfully guided the selection of parameters, that the ANN produced a usable system, and that complex data collection carries a greater risk of technical failure, leading to data loss.
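The reported 7-4-1 feed-forward, back-propagation topology is small enough to sketch directly. The code below trains such a network on a synthetic toy task; the study's actual features, training data, and hyperparameters are not reproduced, so this is only a structural illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, y, hidden=4, lr=1.0, epochs=3000, seed=1):
    """7-input, 4-hidden-node, 1-output network trained by batch backprop."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                       # hidden activations
        out = sigmoid(h @ W2)                     # network output
        d_out = (out - y) * out * (1.0 - out)     # output-layer delta
        d_h = (d_out @ W2.T) * h * (1.0 - h)      # hidden-layer delta
        W2 -= lr * h.T @ d_out / len(X)
        W1 -= lr * X.T @ d_h / len(X)
    return W1, W2

# Toy surrogate task with 7 inputs: label 1 when the first feature is positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)
W1, W2 = train_mlp(X, y)
accuracy = ((sigmoid(sigmoid(X @ W1) @ W2) > 0.5) == (y > 0.5)).mean()
```

In the study the 7 inputs would be the KE-selected measurable parameters and the output a rhythm classification; here the task is purely synthetic.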
Space Shuttle Main Engine performance analysis
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1993-01-01
For a number of years, NASA has relied primarily upon periodically updated versions of Rocketdyne's power balance model (PBM) to provide space shuttle main engine (SSME) steady-state performance prediction. A recent computational study indicated that PBM predictions do not satisfy fundamental energy conservation principles. More recently, SSME test results provided by the Technology Test Bed (TTB) program have indicated significant discrepancies between PBM flow and temperature predictions and TTB observations. Results of these investigations have diminished confidence in the predictions provided by PBM, and motivated the development of new computational tools for supporting SSME performance analysis. A multivariate least squares regression algorithm was developed and implemented during this effort in order to efficiently characterize TTB data. This procedure, called the 'gains model,' was used to approximate the variation of SSME performance parameters such as flow rate, pressure, temperature, speed, and assorted hardware characteristics in terms of six assumed independent influences. These six influences were engine power level, mixture ratio, fuel inlet pressure and temperature, and oxidizer inlet pressure and temperature. A BFGS optimization algorithm provided the base procedure for determining regression coefficients for both linear and full quadratic approximations of parameter variation. Statistical information relative to data deviation from regression derived relations was also computed. A new strategy for integrating test data with theoretical performance prediction was also investigated. The current integration procedure employed by PBM treats test data as pristine and adjusts hardware characteristics in a heuristic manner to achieve engine balance. Within PBM, this integration procedure is called 'data reduction.' 
By contrast, the new data integration procedure, termed 'reconciliation,' uses mathematical optimization techniques, and requires both measurement and balance uncertainty estimates. The reconciler attempts to select operational parameters that minimize the difference between theoretical prediction and observation. Selected values are further constrained to fall within measurement uncertainty limits and to satisfy fundamental physical relations (mass conservation, energy conservation, pressure drop relations, etc.) within uncertainty estimates for all SSME subsystems. The parameter selection problem described above is a traditional nonlinear programming problem. The reconciler employs a mixed penalty method to determine optimum values of SSME operating parameters associated with this problem formulation.
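The reconciliation step can be illustrated with a quadratic-penalty formulation (the text mentions a mixed penalty method; a pure quadratic penalty is used here for brevity). A toy balance, three flow measurements that should sum to zero, stands in for the SSME subsystem relations:

```python
import numpy as np

def reconcile(measured, sigma, mu=1e3, lr=1e-4, iters=50000):
    """Quadratic-penalty reconciliation for a toy balance: flows must sum to 0.

    Minimizes sum(((x - m) / sigma)**2) + mu * (sum(x))**2 by gradient descent,
    so adjustments stay small relative to measurement uncertainty while the
    balance residual is driven toward zero.
    """
    x = measured.astype(float).copy()
    for _ in range(iters):
        grad = 2.0 * (x - measured) / sigma**2 + 2.0 * mu * x.sum()
        x -= lr * grad
    return x

# Three flow measurements with a +0.2 imbalance.
measured = np.array([10.0, 5.2, -15.0])
sigma = np.array([0.1, 0.1, 0.1])
reconciled = reconcile(measured, sigma)
```

The key design point carried over from the abstract: observations are not treated as pristine, and the uncertainty weights decide how the imbalance is apportioned among measurements.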
NASA Technical Reports Server (NTRS)
Essias, Wayne E.; Abbott, Mark; Carder, Kendall; Campbell, Janet; Clark, Dennis; Evans, Robert; Brown, Otis; Kearns, Ed; Kilpatrick, Kay; Balch, W.
2003-01-01
Simplistic models relating global satellite ocean color, temperature, and light to ocean net primary production (ONPP) are sensitive to the accuracy and limitations of the satellite estimate of chlorophyll and other input fields, as well as to the primary productivity model. The standard MODIS ONPP product uses the new semi-analytic chlorophyll algorithm as its input for two ONPP indexes. The three primary chlorophyll estimates from MODIS, as well as the SeaWiFS chlorophyll product, were used to assess global and regional performance in estimating ONPP for the full mission, concentrating on 2001. The two standard ONPP algorithms were examined at 8-day and 39 kilometer resolution to quantify the chlorophyll algorithm dependency of ONPP. Ancillary data (MLD from FNMOC, MODIS SSTD1, and PAR from the GSFC DAO) were identical. The standard MODIS ONPP estimates for annual production in 2001 were 59 and 58 Gt C for the two ONPP algorithms. Differences in ONPP using alternate chlorophylls were on the order of 10% for global annual ONPP, but ranged to 100% regionally. On all scales the differences in ONPP were smaller between MODIS and SeaWiFS than between ONPP models, or among chlorophyll algorithms within MODIS. The largest regional ONPP differences were found in the Southern Ocean (SO). In the SO, application of the semi-analytic chlorophyll resulted not only in a magnitude difference in ONPP (2x), but also in a temporal shift in the time of maximum production compared to empirical algorithms when summed over standard oceanic areas. The resulting increase in global ONPP (6-7 Gt) is supported by the better performance of the semi-analytic chlorophyll in the SO and other high-chlorophyll regions. The differences are significant in terms of understanding regional differences and dynamics of ocean carbon transformations.
Daily air temperature interpolated at high spatial resolution over a large mountainous region
Dodson, R.; Marks, D.
1997-01-01
Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830 000 km2) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA, but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using 3 lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the 3 LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3°C), and produced very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
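The LLRA procedure can be sketched compactly: reduce station temperatures to sea level with a constant lapse rate, interpolate with inverse-squared-distance weights, and map back to grid elevations. The sketch below assumes planar coordinates and a single free-air lapse rate; the paper's DEM handling and bias diagnostics are omitted.

```python
import numpy as np

def llra_interpolate(st_xy, st_z, st_t, grid_xy, grid_z, lapse=-0.0065):
    """LLRA: lapse-adjust station temperatures to sea level, interpolate with
    inverse-squared-distance weights, then map to the grid-cell elevation.

    lapse is in degC per metre (-6.5 degC/km is a common free-air value; the
    paper tested several).
    """
    t_sea = st_t - lapse * st_z                # reduce stations to sea level
    out = np.empty(len(grid_xy))
    for k in range(len(grid_xy)):
        d2 = ((st_xy - grid_xy[k]) ** 2).sum(axis=1)
        if np.any(d2 == 0.0):                  # grid point coincides with a station
            t0 = t_sea[d2 == 0.0][0]
        else:
            w = 1.0 / d2
            t0 = (w * t_sea).sum() / w.sum()
        out[k] = t0 + lapse * grid_z[k]        # map to grid elevation
    return out

# Two stations consistent with the lapse rate; query halfway between, at 500 m.
st_xy = np.array([[0.0, 0.0], [1.0, 0.0]])
st_z = np.array([0.0, 1000.0])
st_t = np.array([15.0, 8.5])                   # 6.5 degC cooler over 1000 m
grid_t = llra_interpolate(st_xy, st_z, st_t,
                          np.array([[0.5, 0.0]]), np.array([500.0]))
```

The NSA variant differs only in the sea-level reduction step, which uses the potential temperature equation instead of the constant lapse rate.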
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Fiorino, Steven
2002-01-01
Coordinated ground, aircraft, and satellite observations are analyzed from the 1999 TRMM Kwajalein Atoll field experiment (KWAJEX) to better understand the relationships between cloud microphysical processes and microwave radiation intensities in the context of physical evaluation of the Level 2 TRMM radiometer rain profile algorithm and uncertainties with its assumed microphysics-radiation relationships. This talk focuses on the results of a multi-dataset analysis based on measurements from KWAJEX surface, air, and satellite platforms to test the hypothesis that uncertainties in the passive microwave radiometer algorithm (TMI 2a12 in the nomenclature of TRMM) are systematically coupled and correlated with the magnitudes of deviation of the assumed 3-dimensional microphysical properties from observed microphysical properties. Re-stated, this study focuses on identifying the weaknesses in the operational TRMM 2a12 radiometer algorithm based on observed microphysics and radiation data in terms of over-simplifications used in its theoretical microphysical underpinnings. The analysis makes use of a common transform coordinate system derived from the measuring capabilities of the aircraft radiometer used to survey the experimental study area, i.e., the 4-channel AMPR radiometer flown on the NASA DC-8 aircraft. 
Normalized emission and scattering indices derived from radiometer brightness temperatures at the four measuring frequencies enable a 2-dimensional coordinate system that facilitates compositing of Kwajalein S-band ground radar reflectivities, ARMAR Ku-band aircraft radar reflectivities, TMI spacecraft radiometer brightness temperatures, PR Ku-band spacecraft radar reflectivities, bulk microphysical parameters derived from the aircraft-mounted cloud microphysics laser probes (including liquid/ice water contents, effective liquid/ice hydrometeor radii, and effective liquid/ice hydrometeor variances), and rainrates derived from any of the individual ground, aircraft, or satellite algorithms applied to the radar or radiometer measurements, or their combination. The results support the study's underlying hypothesis, particularly in context of ice phase processes, in that the cloud regions where the 2a12 algorithm's microphysical database most misrepresents the microphysical conditions as determined by the laser probes, are where retrieved surface rainrates are most erroneous relative to other reference rainrates as determined by ground and aircraft radar. In reaching these conclusions, TMI and PR brightness temperatures and reflectivities have been synthesized from the aircraft AMPR and ARMAR measurements with the analysis conducted in a composite framework to eliminate measurement noise associated with the case study approach and single element volumes obfuscated by heterogeneous beam filling effects. In diagnosing the performance of the 2a12 algorithm, weaknesses have been found in the cloud-radiation database used to provide microphysical guidance to the algorithm for upper cloud ice microphysics. It is also necessary to adjust a fractional convective rainfall factor within the algorithm somewhat arbitrarily to achieve satisfactory algorithm accuracy.
Advances in thermal control and performance of the MMT M1 mirror
NASA Astrophysics Data System (ADS)
Gibson, J. D.; Williams, G. G.; Callahan, S.; Comisso, B.; Ortiz, R.; Williams, J. T.
2010-07-01
Strategies for thermal control of the 6.5-meter diameter borosilicate honeycomb primary (M1) mirror at the MMT Observatory have included: 1) direct control of ventilation system chiller setpoints by the telescope operator, 2) semiautomated control of chiller setpoints, using a fixed offset from the ambient temperature, and 3) most recently, an automated temperature controller for conditioned air. Details of this automated controller, including the integration of multiple chillers, heat exchangers, and temperature/dew point sensors, are presented here. Constraints and sanity checks for thermal control are also discussed, including: 1) mirror and hardware safety, 2) aluminum coating preservation, and 3) optimization of M1 thermal conditions for science acquisition by minimizing both air-to-glass temperature differences, which cause mirror seeing, and internal glass temperature gradients, which cause wavefront errors. Consideration is given to special operating conditions, such as high dew and frost points. Precise temperature control of conditioned ventilation air as delivered to the M1 mirror cell is also discussed. The performance of the new automated controller is assessed and compared to previous control strategies. Finally, suggestions are made for further refinement of the M1 mirror thermal control system and related algorithms.
Rauscher, Sarah; Neale, Chris; Pomès, Régis
2009-10-13
Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
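The temperature moves common to these generalized-ensemble methods all rest on a Metropolis-style acceptance test. A minimal simulated-tempering acceptance rule is sketched below; the weight factors w are the per-temperature quantities that ST and SREM must pre-estimate and that STDR is designed to avoid computing up front.

```python
import math

def st_accept(E, beta_old, beta_new, w_old, w_new, u):
    """Simulated-tempering temperature move: accept switching the inverse
    temperature from beta_old to beta_new with probability
    min(1, exp(-(beta_new - beta_old) * E + w_new - w_old)).

    E is the current potential energy, w the weight factors, and u a
    uniform(0, 1) random draw supplied by the caller.
    """
    log_p = -(beta_new - beta_old) * E + (w_new - w_old)
    return log_p >= 0.0 or u < math.exp(log_p)
```

Accepted moves change only the temperature label of the single replica, which is what makes ST cheap on heterogeneous hardware compared with synchronized replica exchange.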
Application of genetic algorithms in nonlinear heat conduction problems.
Kadri, Muhammad Bilal; Khan, Waqar A
2014-01-01
Genetic algorithms are employed to optimize dimensionless temperature in nonlinear heat conduction problems. Three common geometries are selected for the analysis and the concept of minimum entropy generation is used to determine the optimum temperatures under the same constraints. The thermal conductivity is assumed to vary linearly with temperature while internal heat generation is assumed to be uniform. The dimensionless governing equations are obtained for each selected geometry and the dimensionless temperature distributions are obtained using MATLAB. It is observed that the GA finds the minimum dimensionless temperature for each selected geometry.
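A bare-bones real-coded GA of the kind used for such single-variable temperature optimizations might look as follows. The paper's actual encoding, operators, and MATLAB implementation are not specified, so this is a generic sketch minimizing a toy objective:

```python
import numpy as np

def ga_minimize(f, bounds, pop=40, gens=100, mut=0.05, seed=0):
    """Bare-bones real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, clipping to the feasible interval."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fit = np.array([f(v) for v in x])
        children = []
        for _ in range(pop):
            i, j = rng.integers(0, pop, 2)
            a = x[i] if fit[i] < fit[j] else x[j]     # tournament winner 1
            k, m = rng.integers(0, pop, 2)
            b = x[k] if fit[k] < fit[m] else x[m]     # tournament winner 2
            child = 0.5 * (a + b) + rng.normal(0.0, mut)
            children.append(float(np.clip(child, lo, hi)))
        x = np.array(children)
    fit = np.array([f(v) for v in x])
    return x[fit.argmin()]

# Toy objective standing in for the dimensionless-temperature functional.
best = ga_minimize(lambda t: (t - 0.3) ** 2, (0.0, 1.0))
```

In the paper the objective would come from solving the dimensionless governing equation for each geometry; here a quadratic with a known minimum keeps the GA mechanics visible.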
NASA Technical Reports Server (NTRS)
Njoku, E. G.; Christensen, E. J.; Cofield, R. E.
1980-01-01
The antenna temperatures measured by the Seasat scanning multichannel microwave radiometer (SMMR) differ from the true brightness temperatures of the observed scene due to antenna pattern effects, principally from antenna sidelobe contributions and cross-polarization coupling. To provide accurate brightness temperatures convenient for geophysical parameter retrievals the antenna temperatures are processed through a series of stages, collectively known as the antenna pattern correction (APC) algorithm. A description of the development and implementation of the APC algorithm is given, along with an error analysis of the resulting brightness temperatures.
Dedicated tool to assess the impact of a rhetorical task on human body temperature.
Koprowski, Robert; Wilczyński, Sławomir; Martowska, Katarzyna; Gołuch, Dominik; Wrocławska-Warchala, Emilia
2017-10-01
Functional infrared thermal imaging is a method widely used in medicine, including in the analysis of the mechanisms by which emotions affect physiological processes. The article shows how body temperature may change during stress associated with performing a rhetorical task and proposes new parameters useful for dynamic thermal imaging measurements. MATERIALS AND METHODS: 29 healthy male subjects were examined. They were given a rhetorical task that induced stress. Analysis and processing of the collected body temperature data, at a spatial resolution of 256×512 pixels and a temperature resolution of 0.1°C, made it possible to show the dynamics of temperature changes. This analysis was preceded by dedicated image analysis and processing methods. RESULTS: The presented dedicated algorithm for image analysis and processing allows for fully automated, reproducible, and quantitative assessment of temperature changes and time constants in a sequence of thermal images of the patient. When performing the rhetorical task, the temperature rose by 0.47±0.19°C in 72.41% of the subjects, including 20.69% in whom the temperature decreased by 0.49±0.14°C after 237±141 s. For 20.69% of the subjects only a drop in temperature was registered. For the remaining 6.89% of the cases, no temperature changes were registered. CONCLUSIONS: The performance of the rhetorical task by the subjects causes body temperature changes. The ambiguous temperature response to the given stress factor indicates the complex mechanisms responsible for regulating stressful situations. Stress associated with the examination itself induces body temperature changes. These changes should always be taken into account in the analysis of infrared data. Copyright © 2017 Elsevier B.V. All rights reserved.
A Database for Comparative Electrochemical Performance of Commercial 18650-Format Lithium-Ion Cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barkholtz, Heather M.; Fresquez, Armando; Chalamala, Babu R.
Lithium-ion batteries are a central technology to our daily lives with widespread use in mobile devices and electric vehicles. These batteries are also beginning to be widely used in electric grid infrastructure support applications which have stringent safety and reliability requirements. Typically, electrochemical performance data is not available for modelers to validate their simulations, mechanisms, and algorithms for lithium-ion battery performance and lifetime. In this paper, we report on the electrochemical performance of commercial 18650 cells at a variety of temperatures and discharge currents. We found that LiFePO4 is temperature tolerant for discharge currents at or below 10 A, whereas LiCoO2, LiNixCoyAl1-x-yO2, and LiNi0.80Mn0.15Co0.05O2 exhibited optimal electrochemical performance when the temperature is maintained at 15°C. LiNixCoyAl1-x-yO2 showed signs of lithium plating at lower temperatures, evidenced by irreversible capacity loss and the emergence of a high-voltage differential capacity peak. Furthermore, all cells need to be monitored for self-heating, as environment temperature and high discharge currents may elicit an unintended abuse condition. Overall, this study shows that lithium-ion batteries are highly application-specific and electrochemical behavior must be well understood for safe and reliable operation. Additionally, data collected in this study is available for anyone to download for further analysis and model validation.
A Database for Comparative Electrochemical Performance of Commercial 18650-Format Lithium-Ion Cells
Barkholtz, Heather M.; Fresquez, Armando; Chalamala, Babu R.; ...
2017-09-08
Lithium-ion batteries are a central technology to our daily lives with widespread use in mobile devices and electric vehicles. These batteries are also beginning to be widely used in electric grid infrastructure support applications which have stringent safety and reliability requirements. Typically, electrochemical performance data is not available for modelers to validate their simulations, mechanisms, and algorithms for lithium-ion battery performance and lifetime. In this paper, we report on the electrochemical performance of commercial 18650 cells at a variety of temperatures and discharge currents. We found that LiFePO4 is temperature tolerant for discharge currents at or below 10 A, whereas LiCoO2, LiNixCoyAl1-x-yO2, and LiNi0.80Mn0.15Co0.05O2 exhibited optimal electrochemical performance when the temperature is maintained at 15°C. LiNixCoyAl1-x-yO2 showed signs of lithium plating at lower temperatures, evidenced by irreversible capacity loss and the emergence of a high-voltage differential capacity peak. Furthermore, all cells need to be monitored for self-heating, as environment temperature and high discharge currents may elicit an unintended abuse condition. Overall, this study shows that lithium-ion batteries are highly application-specific and electrochemical behavior must be well understood for safe and reliable operation. Additionally, data collected in this study is available for anyone to download for further analysis and model validation.
Numerical Simulation of a Solar Domestic Hot Water System
NASA Astrophysics Data System (ADS)
Mongibello, L.; Bianco, N.; Di Somma, M.; Graditi, G.; Naso, V.
2014-11-01
An innovative transient numerical model is presented for the simulation of a solar Domestic Hot Water (DHW) system. The solar collectors have been simulated by using a zero-dimensional analytical model. The temperature distributions in the heat transfer fluid and in the water inside the tank have been evaluated by one-dimensional models. The reversion elimination algorithm has been used to include the effects of natural convection among the water layers at different heights in the tank on the thermal stratification. A finite difference implicit scheme has been implemented to solve the energy conservation equation in the coil heat exchanger, and the energy conservation equation in the tank has been solved by using the finite difference Euler implicit scheme. The energy conservation equations of the solar DHW component models have been coupled by means of a home-made implicit algorithm. Results of the simulation, performed using as input data the experimental values of the ambient temperature and the solar irradiance on a summer day, are presented and discussed.
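The implicit (backward) Euler update used for the tank energy equation can be shown on a single well-mixed node; the paper's model resolves stratified layers, natural convection reordering, and a coil heat exchanger, all omitted here.

```python
def tank_cooldown(T0, T_amb, ua_over_c, dt, steps):
    """Backward (implicit) Euler for one well-mixed tank node obeying
    dT/dt = -(UA/C) * (T - T_amb). With k = UA/C the implicit update is
    T_new = (T + dt*k*T_amb) / (1 + dt*k), unconditionally stable in dt."""
    T = float(T0)
    history = [T]
    for _ in range(steps):
        T = (T + dt * ua_over_c * T_amb) / (1.0 + dt * ua_over_c)
        history.append(T)
    return history

# A 60 degC tank in a 20 degC room, k = 1e-3 1/s, one hour in 60 s steps.
temps = tank_cooldown(60.0, 20.0, 1e-3, 60.0, 60)
```

The unconditional stability of the implicit update is the practical reason such schemes are preferred for coupled tank and heat-exchanger equations, where explicit steps would be limited by the fastest component.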
Conformational rigidity in a lattice model of proteins.
Collet, Olivier
2003-06-01
It is shown in this paper that some simulations of protein folding in lattice models, which use an incorrect implementation of the Monte Carlo algorithm, do not converge towards thermal equilibrium. I developed a rigorous treatment for protein folding simulation on a lattice model relying on the introduction of a parameter standing for the rigidity of the conformations. Its properties are discussed and its role during the folding process is elucidated. The calculation of thermal properties of small chains living on a two-dimensional lattice is performed and a Bortz-Kalos-Lebowitz scheme is implemented in the presented method in order to study kinetics of chains at very low temperature. The coefficients of the Arrhenius law obtained with this algorithm are found to be in excellent agreement with the value of the main potential barrier of the system. Finally, a scenario of the mechanisms, including the rigidity parameters, that guide a protein towards its native structure, at medium temperature, is given.
A Coulomb collision algorithm for weighted particle simulations
NASA Technical Reports Server (NTRS)
Miller, Ronald H.; Combi, Michael R.
1994-01-01
A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pair-wise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing equal fractions of the total particle density). If applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect temperature, as compared to theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and produces the appropriate relaxation time scales as compared to theoretical predictions. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.
Antenna pattern correction for the Nimbus-7 SMMR
NASA Technical Reports Server (NTRS)
Milman, A. S.
1986-01-01
This paper describes the philosophy and method used to develop the antenna pattern correction (APC) algorithm that was used on the data from the Scanning Multichannel Microwave Radiometer (SMMR) on Nimbus-7. There are limitations on what can be accomplished with such a procedure; these limitations are explored with the aid of Fourier analysis, even though the algorithm used on the SMMR data does not perform any Fourier transforms. The resulting analysis showed that, for the SMMR instrument, no useful improvement could be made in the data in terms of reduction of side lobes, but the quality of the sea surface temperature retrievals could be improved considerably by matching the antenna beamwidths at the different frequencies.
Bazzo, João Paulo; Pipa, Daniel Rodrigues; da Silva, Erlon Vagner; Martelli, Cicero; Cardozo da Silva, Jean Carlos
2016-01-01
This paper presents an image reconstruction method to monitor the temperature distribution of electric generator stators. The main objective is to identify insulation failures that may arise as hotspots in the structure. The method is based on temperature readings from fiber optic distributed sensors (DTS) and a sparse reconstruction algorithm. Thermal images of the structure are formed by appropriately combining atoms of a dictionary of hotspots, which was constructed by finite element simulation with a multi-physical model. Due to the difficulty of reproducing insulation faults in a real stator structure, experimental tests were performed using a prototype similar to the real structure. The results demonstrate the ability of the proposed method to reconstruct images of hotspots with dimensions down to 15 cm, representing a resolution gain of up to six times when compared to the DTS spatial resolution. In addition, satisfactory results were also obtained in detecting hotspots of only 5 cm. The application of the proposed algorithm for thermal imaging of generator stators can contribute to the identification of insulation faults in early stages, thereby avoiding catastrophic damage to the structure. PMID:27618040
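The dictionary-based sparse reconstruction can be sketched with a greedy matching-pursuit loop: repeatedly pick the hotspot atom most correlated with the residual, then refit the coefficients on the selected support. The paper's actual sparse algorithm and its FEM-derived dictionary are not specified here, so this is a generic stand-in on a toy dictionary.

```python
import numpy as np

def matching_pursuit(D, y, n_atoms=2):
    """Greedy sparse reconstruction: select the dictionary atom (simulated
    hotspot signature) most correlated with the residual, refit by least
    squares on the selected support, and repeat."""
    residual = y.astype(float).copy()
    support, coef = [], None
    for _ in range(n_atoms):
        scores = np.abs(D.T @ residual)
        scores[support] = -1.0                 # never reselect an atom
        support.append(int(scores.argmax()))
        A = D[:, support]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return support, coef

# Toy dictionary of 5 unit "hotspots"; the DTS reading mixes atoms 1 and 4.
D = np.eye(5)
y = 3.0 * D[:, 1] + 2.0 * D[:, 4]
support, coef = matching_pursuit(D, y)
```

The resolution gain reported in the abstract comes from this modelling step: atoms narrower than the sensor's spatial response can still be identified if the dictionary captures how each hotspot smears across the readings.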
NASA Astrophysics Data System (ADS)
Lu, J.; Wakai, K.; Takahashi, S.; Shimizu, S.
2000-06-01
An algorithm that takes into account the refraction of sound wave paths in acoustic computed tomography (CT) is developed. Incorporating refraction into ordinary CT algorithms, which are based on the Fourier transform, is very difficult. In this paper, the least-squares method, which is capable of considering the refraction effect, is employed to reconstruct the two-dimensional temperature distribution. The refraction effect is solved by writing a set of differential equations derived from Fermat's theorem and the calculus of variations. Refraction analysis and the reconstruction of the temperature distribution cannot be carried out simultaneously, so the problem is solved by iteration. The measurement field is assumed to be circular, and 16 speakers, also serving as the receivers, are set around it at equal intervals. The algorithm is checked through computer simulation with various kinds of temperature distributions. It is shown that the present method, which accounts for the refraction effect, can reconstruct temperature distributions with much greater accuracy than methods which neglect it.
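For fixed (here straight) ray paths, the least-squares reconstruction step reduces to solving a linear system relating per-cell slowness to measured travel times; temperature then follows from the sound-speed relation for air, c ≈ 20.05√T with T in kelvin. A sketch under those assumptions (the paper iterates this step with refracted ray tracing):

```python
import numpy as np

def reconstruct_temperature(L, t):
    """Least-squares CT step: solve L s = t for per-cell slowness s, where
    each row of L holds the path lengths of one speaker-receiver ray through
    the grid cells, then convert slowness to temperature via c = 20.05*sqrt(T).
    """
    s, *_ = np.linalg.lstsq(L, t, rcond=None)
    c = 1.0 / s                    # sound speed per cell, m/s
    return (c / 20.05) ** 2        # temperature per cell, K

# Two cells, three straight rays; both cells at 300 K.
c_true = 20.05 * np.sqrt(300.0)
s_true = np.array([1.0 / c_true, 1.0 / c_true])
L = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # path lengths, metres
t = L @ s_true                                        # simulated travel times
T = reconstruct_temperature(L, t)
```

In the paper's iteration, the rays in L are re-traced through the newly reconstructed temperature field until paths and field are mutually consistent.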
A novel resource sharing algorithm based on distributed construction for radiant enclosure problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finzell, Peter; Bryden, Kenneth M.
2017-03-06
This study demonstrates a novel approach to solving inverse radiant enclosure problems based on distributed construction. Specifically, the problem of determining the temperature distribution needed on the heater surfaces to achieve a desired design surface temperature profile is recast as a distributed construction problem in which a shared resource, temperature, is distributed by computational agents moving blocks. The sharing of blocks between agents enables them to achieve their desired local state, which in turn achieves the desired global state. Each agent uses the current state of its local environment and a simple set of rules to determine when to exchange blocks, each block representing a discrete unit of temperature change. This algorithm is demonstrated using the established two-dimensional inverse radiation enclosure problem. The temperature profile on the heater surfaces is adjusted to achieve a desired temperature profile on the design surfaces. The resource sharing algorithm was able to determine the needed temperatures on the heater surfaces to obtain the desired temperature distribution on the design surfaces in all nine cases examined.
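The block-exchange idea can be illustrated with a toy one-dimensional chain of agents: each agent holds some discrete temperature blocks and passes one toward whichever neighbor is further below its local target. The targets, counts, and exchange rule below are illustrative, not the paper's radiant-enclosure formulation.

```python
targets = [4, 2, 6]                # desired local states (toy values)
blocks = [7, 3, 2]                 # initial blocks; same total as targets

for _ in range(100):
    moved = False
    for i in range(len(blocks) - 1):
        # pass one block toward the neighbor with the larger deficit
        if blocks[i] - targets[i] > blocks[i + 1] - targets[i + 1]:
            blocks[i] -= 1; blocks[i + 1] += 1; moved = True
        elif blocks[i + 1] - targets[i + 1] > blocks[i] - targets[i]:
            blocks[i + 1] -= 1; blocks[i] += 1; moved = True
    if not moved:                  # every agent at its target: global state met
        break
```

Because the totals match, purely local exchanges drive every agent to its target, mirroring how local rules achieve the desired global state in the paper.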
NASA Astrophysics Data System (ADS)
Yamauchi, Masataka; Okumura, Hisashi
2017-11-01
We developed a two-dimensional replica-permutation molecular dynamics method in the isothermal-isobaric ensemble. The replica-permutation method is a better alternative to the replica-exchange method and was originally developed in the canonical ensemble. It employs the Suwa-Todo algorithm, instead of the Metropolis algorithm, to perform permutations of temperatures and pressures among more than two replicas so that the rejection ratio can be minimized. We showed that the isothermal-isobaric replica-permutation method achieves better sampling efficiency than the isothermal-isobaric replica-exchange method and the infinite swapping method. We applied this method to a β-hairpin mini protein, chignolin. In this simulation, we observed not only the folded state but also the misfolded state. We calculated the temperature and pressure dependence of the fractions of the folded, misfolded, and unfolded states. Differences in partial molar enthalpy, internal energy, entropy, partial molar volume, and heat capacity were also determined and agreed well with experimental data. We observed a new phenomenon: misfolded chignolin becomes more stable under high-pressure conditions. We also revealed the mechanism of this stability as follows: the TYR2 and TRP9 side chains cover the hydrogen bonds that form the β-hairpin structure, protecting them from the water molecules that approach the protein as the pressure increases.
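For context, the pairwise replica-exchange baseline that replica permutation improves on can be sketched as a Metropolis swap between two replicas; the acceptance probability is min(1, exp[(β_i − β_j)(E_i − E_j)]). The energies and inverse temperatures below are toy values, and the Suwa-Todo permutation over more than two replicas is not shown.

```python
import numpy as np

def try_swap(E, beta, i, j, rng):
    """Metropolis acceptance for exchanging the replicas at beta[i], beta[j]."""
    delta = (beta[i] - beta[j]) * (E[i] - E[j])
    if delta >= 0 or rng.random() < np.exp(delta):
        E[i], E[j] = E[j], E[i]    # exchange configurations (energies here)
        return True
    return False

rng = np.random.default_rng(1)
E = np.array([-1.0, -3.0])         # cold replica stuck at the higher energy
beta = np.array([1.0, 0.5])        # inverse temperatures (cold, hot)
accepted = try_swap(E, beta, 0, 1, rng)
```

Here the swap lowers the cold replica's energy, so delta is positive and the exchange is always accepted.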
Karwat, Piotr; Kujawska, Tamara; Lewin, Peter A; Secomski, Wojciech; Gambin, Barbara; Litniewski, Jerzy
2016-02-01
In therapeutic applications of High Intensity Focused Ultrasound (HIFU), the guidance of the HIFU beam, and especially of its focal plane, is of crucial importance. This guidance is needed to appropriately target the focal plane, and hence the whole focal volume, inside the tumor tissue prior to thermo-ablative treatment and the onset of tissue necrosis. It is currently done using Magnetic Resonance Imaging, which is relatively expensive. In this study, an ultrasound method is presented that calculates the variations of the speed of sound in the locally heated tissue volume by analyzing the phase shifts of echo-signals received by an ultrasound scanner from that volume. To improve the spatial resolution of B-mode imaging and minimize the uncertainty of temperature estimation, the acoustic signals were transmitted and received by an 8 MHz linear phased array employing the Synthetic Transmit Aperture (STA) technique. Initially, the validity of the algorithm was verified experimentally in a tissue-mimicking phantom heated from 20.6 to 48.6 °C. Subsequently, the method was tested using a pork loin sample heated locally by a 2 MHz pulsed HIFU beam with a focal intensity ISATA of 129 W/cm(2). The temperature calibration of 2D maps of heating-induced changes in sound velocity was performed by comparing the algorithm-determined changes in sound velocity with the temperatures measured by thermocouples located in the heated tissue volume. The method enabled ultrasound temperature imaging of the heated tissue volume from the very inception of heating, with a contrast-to-noise ratio of 3.5-12 dB in the temperature range 21-56 °C. Concurrently performed conventional B-mode imaging showed a CNR close to zero dB until the temperature reached 50 °C, causing necrosis.
The data presented suggest that the proposed method could offer an alternative to MRI-guided temperature imaging for prediction of the location and extent of the thermal lesion prior to applying the final HIFU treatment. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lodhi, Ehtisham; Lodhi, Zeeshan; Noman Shafqat, Rana; Chen, Fieda
2017-07-01
Photovoltaic (PV) systems usually employ maximum power point tracking (MPPT) techniques to increase their efficiency. The performance of a PV system can be boosted by operating it at its peak power point, so that maximal power is delivered to the load. The efficiency of a PV system depends mainly on irradiance, temperature, and array architecture. A PV array exhibits a non-linear V-I curve, and the maximum power point on the V-P curve varies with changing environmental conditions. MPPT methods ensure that the PV module is regulated at its reference voltage and that full use is made of the maximal output power. This paper compares the two most widely employed MPPT techniques, Perturb and Observe (P&O) and Incremental Conductance (INC). Their performance is evaluated and compared through theoretical analysis and digital simulation in Matlab/Simulink, on the basis of response time and efficiency under varying irradiance and temperature conditions.
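The P&O technique compared above has a very small core: perturb the operating voltage, and keep the perturbation direction while power rises, reversing it when power falls. The parabolic P-V curve and all values below are toy stand-ins, not a real module model.

```python
def pv_power(v):
    """Toy P-V curve with its maximum power point at v = 17.0 V (assumed)."""
    return -0.5 * (v - 17.0) ** 2 + 60.0

def po_step(v, p_prev, dv):
    """One P&O iteration: reverse the perturbation direction if power fell."""
    p = pv_power(v)
    if p < p_prev:
        dv = -dv
    return v + dv, p, dv

v, p_prev, dv = 12.0, pv_power(12.0), 0.5      # start away from the MPP
for _ in range(200):
    v, p_prev, dv = po_step(v, p_prev, dv)
```

The tracker climbs to the maximum power point and then oscillates around it within one perturbation step, which is the characteristic steady-state behavior of P&O.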
Milewski, Marek C; Kamel, Karol; Kurzynska-Kokorniak, Anna; Chmielewski, Marcin K; Figlerowicz, Marek
2017-10-01
Experimental methods based on DNA and RNA hybridization, such as multiplex polymerase chain reaction, multiplex ligation-dependent probe amplification, or microarray analysis, require the use of mixtures of multiple oligonucleotides (primers or probes) in a single test tube. To provide an optimal reaction environment, minimal self- and cross-hybridization must be achieved among these oligonucleotides. To address this problem, we developed EvOligo, which is a software package that provides the means to design and group DNA and RNA molecules with defined lengths. EvOligo combines two modules. The first module performs oligonucleotide design, and the second module performs oligonucleotide grouping. The software applies a nearest-neighbor model of nucleic acid interactions coupled with a parallel evolutionary algorithm to construct individual oligonucleotides, and to group the molecules that are characterized by the weakest possible cross-interactions. To provide optimal solutions, the evolutionary algorithm sorts oligonucleotides into sets, preserves preselected parts of the oligonucleotides, and shapes their remaining parts. In addition, the oligonucleotide sets can be designed and grouped based on their melting temperatures. For the user's convenience, EvOligo is provided with a user-friendly graphical interface. EvOligo was used to design individual oligonucleotides, oligonucleotide pairs, and groups of oligonucleotide pairs that are characterized by the following parameters: (1) weaker cross-interactions between the non-complementary oligonucleotides and (2) more uniform ranges of the oligonucleotide pair melting temperatures than other available software products. In addition, in contrast to other grouping algorithms, EvOligo offers time-efficient sorting of paired and unpaired oligonucleotides based on various parameters defined by the user.
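As a rough illustration of melting-temperature-based grouping, a duplex Tm can be estimated with the simple Wallace 2+4 rule; this is a much cruder stand-in for the nearest-neighbor thermodynamic model that EvOligo actually uses, and the sequence is arbitrary.

```python
def wallace_tm(seq):
    """Tm ~ 2(A+T) + 4(G+C) degrees C; valid only for short oligos (~14-20 nt)."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

tm = wallace_tm("ATGCATGCATGCAT")   # 8 A/T + 6 G/C
```

Sorting oligos by such a Tm estimate is the simplest form of the temperature-uniform grouping the abstract describes.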
Automated general temperature correction method for dielectric soil moisture sensors
NASA Astrophysics Data System (ADS)
Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao
2017-08-01
An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks make extensive use of highly temperature-sensitive dielectric sensors because of their low cost, ease of use, and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and for those covering diverse climatic conditions and soil types. This study developed a general temperature correction method for dielectric sensors that can be used regardless of differences in sensor type, climatic conditions, and soil type, and without rainfall data. An automated general temperature correction method was developed by adapting previously developed temperature correction algorithms for time domain reflectometry (TDR) measurements to ThetaProbe ML2X, Stevens Hydra Probe II, and Decagon Devices EC-TM sensor measurements. The removal of rainy-day effects from SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The method was evaluated using 34 stations from the International Soil Moisture Monitoring Network and another nine stations from a local soil moisture monitoring network in Mongolia; the networks used cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, it was found that sensor temperature effects alter the actual daily average SWC by an amount comparable to the manufacturer's stated accuracy of ±1%.
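The general shape of such a correction is a mapping from a reading taken at soil temperature T back to a reference temperature; a minimal linear sketch is shown below. The sensitivity slope and reference temperature are assumed illustrative values, not the paper's fitted coefficients.

```python
def correct_swc(swc_raw, T, T_ref=25.0, slope=-0.002):
    """Adjust a raw SWC reading to T_ref.

    slope is the assumed sensor sensitivity in m3/m3 per degree C; a real
    implementation would estimate it from the diurnal temperature signal.
    """
    return swc_raw - slope * (T - T_ref)

swc = correct_swc(0.250, T=35.0)   # reading taken 10 C above reference
```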
Development of fog detection algorithm using Himawari-8/AHI data at daytime
NASA Astrophysics Data System (ADS)
Han, Ji-Hye; Kim, So-Hyeong; suh, Myoung-Seok
2017-04-01
Fog is defined as small cloud water drops or ice particles floating in the air, with visibility of less than 1 km. In general, fog affects ecological systems, the radiation budget, and human activities such as airplane, ship, and car traffic. In this study, we developed a daytime fog detection algorithm (FDA) consisting of four threshold tests on optical and textural properties of fog, using satellite and ground observation data. For the detection of fog, we used Himawari-8/AHI satellite data and ancillary data such as air temperature from NWP data (over land) and SST from OSTIA (over sea); ground-observed visibility data from KMA were used for validation. The optical and textural properties used are the normalized albedo (NAlb) and the normalized local standard deviation (NLSD), respectively. In addition, the difference between air temperature (or SST) and fog top temperature is used to discriminate fog from low clouds, and post-processing detects the fog edge based on the spatial continuity of fog. Threshold values for each test are determined by optimization based on ROC analysis for selected fog cases. Fog detection is performed according to the solar zenith angle (SZA) because the available satellite data differ; in this study, daytime is defined as SZA less than 85°. The result of the FDA is presented as a fog probability (0-100%) obtained through a weighted sum of the individual test results. Validation against ground-observed visibility data showed POD of 0.63-0.89 and FAR of 0.29-0.46, depending on fog intensity and type. In general, detection skill is better for intense fog cases without high clouds than for localized, weak fog. We plan to transfer this algorithm to the National Meteorological Satellite Center of KMA for operational fog detection using data from GK-2A/AMI, to be launched in 2018.
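The weighted-sum-of-threshold-tests structure described above can be sketched as follows. The thresholds, weights, and the reduction to three tests are placeholders, not the ROC-optimized values from the study.

```python
def fog_probability(nalb, nlsd, dT):
    """Each test votes 0/1; fog probability is the weighted vote share in %."""
    votes = [
        (0.30, nalb > 0.2),      # normalized albedo (NAlb) test (threshold assumed)
        (0.30, nlsd < 0.05),     # low local texture (NLSD) test
        (0.40, abs(dT) < 3.0),   # air-temperature/SST minus fog-top temperature test
    ]
    return 100.0 * sum(w for w, ok in votes if ok)

p = fog_probability(nalb=0.35, nlsd=0.02, dT=1.5)   # all three tests pass
```

A pixel passing every test gets probability 100%; partial agreement yields intermediate probabilities rather than a hard yes/no mask.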
Aquarius L-Band Microwave Radiometer: Three Years of Radiometric Performance and Systematic Effects
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey R.; Hong, Liang; Pellerano, Fernando A.
2015-01-01
The Aquarius L-band microwave radiometer is a three-beam pushbroom instrument designed to measure sea surface salinity. Results are analyzed for performance and systematic effects over three years of operation. The thermal control system maintains tight temperature stability, promoting good gain stability. The gain spectrum exhibits the expected orbital variations, with 1/f noise appearing at longer time periods. The on-board detection and integration scheme, coupled with the calibration algorithm, produces antenna temperatures with an NEDT of 0.16 K for 1.44-s samples. Nonlinearity is characterized before launch and the derived correction is verified with cold-sky calibration data. Finally, a long-term drift with 1-K amplitude and a 100-day time constant is discovered in all channels. Nonetheless, it is adeptly corrected using an exponential model.
Correlation of Wissler Human Thermal Model Blood Flow and Shiver Algorithms
NASA Technical Reports Server (NTRS)
Bue, Grant; Makinen, Janice; Cognata, Thomas
2010-01-01
The Wissler Human Thermal Model (WHTM) is a thermal math model of the human body that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. The model has been shown to predict core temperature and skin temperatures higher and lower, respectively, than in tests of subjects in a crew escape suit working in controlled hot environments. Conversely, the model predicts core temperature and skin temperatures lower and higher, respectively, than in tests of lightly clad subjects immersed in cold water. The blood flow algorithms of the model have been investigated to allow for more and less flow, respectively, in the cold and hot cases. These changes have yielded better correlation of skin and core temperatures in both cases. The algorithm for onset of shiver did not need to be modified to achieve good agreement in cold immersion simulations.
Study of phase clustering method for analyzing large volumes of meteorological observation data
NASA Astrophysics Data System (ADS)
Volkov, Yu. V.; Krutikov, V. A.; Botygin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.
2017-11-01
The article describes an iterative parallel phase grouping algorithm for temperature field classification. The algorithm is based on a modified method of structure forming using the analytic signal. The developed method makes it possible to solve climate classification tasks, as well as climatic zoning, for any temporal or spatial scale. When applied to surface temperature measurement series, the algorithm finds climatic structures with correlated changes of the temperature field, supports conclusions on climate uniformity in a given area, and gives an overview of climate change over time through the analysis of shifts in type groups. The information on climate type groups specific to selected geographical areas is expanded by a genetic scheme of class distribution depending on changes in the level of mutual correlation between monthly average ground temperatures.
NASA Astrophysics Data System (ADS)
Sorokin, V. A.; Volkov, Yu V.; Sherstneva, A. I.; Botygin, I. A.
2016-11-01
This paper overviews a method of generating climate regions based on analytic signal theory. When applied to atmospheric surface layer temperature data sets, the method forms climatic structures with corresponding changes in temperature, supports conclusions on the uniformity of climate in an area, and traces climate changes in time by analyzing type group shifts. The algorithm is based on the fact that the frequency spectrum of the thermal oscillation process is narrow-banded and has only one mode for most weather stations. This permits the use of analytic signal theory and causality conditions, and the introduction of an oscillation phase. The annual component of the phase, being a linear function, was removed by the least squares method. The remaining phase fluctuations allow a consistent study of their coordinated behavior and timing, using the Pearson correlation coefficient to evaluate dependence. This study includes program experiments to evaluate the calculation efficiency of the phase grouping task, and overviews single-threaded and multi-threaded computing models. It is shown that the phase grouping algorithm for meteorological data can be parallelized and that a multi-threaded implementation leads to a 25-30% increase in performance.
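The phase pipeline described in these two abstracts (analytic signal, linear annual ramp removal, Pearson correlation of the residual phases) can be sketched on synthetic data. The FFT-based Hilbert transform below is the standard construction; the two "stations" share an artificial slow phase drift, standing in for real temperature series.

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via the FFT-based Hilbert transform."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    z = np.fft.ifft(np.fft.fft(x) * h)     # analytic signal
    return np.unwrap(np.angle(z))

t = np.arange(365 * 4, dtype=float)        # four years, daily samples
wobble = 0.2 * np.sin(2 * np.pi * t / 900) # shared slow phase drift (toy)
x1 = np.sin(2 * np.pi * t / 365 + wobble)
x2 = np.sin(2 * np.pi * t / 365 + wobble + 0.05)

A = np.vstack([t, np.ones_like(t)]).T      # linear annual ramp design matrix

def phase_residual(x):
    p = analytic_phase(x)
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)  # remove annual component
    return p - A @ coef

r = np.corrcoef(phase_residual(x1), phase_residual(x2))[0, 1]
```

Stations with coordinated phase fluctuations yield a Pearson coefficient near 1, which is the signal the grouping step clusters on.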
A New 1DVAR Retrieval for AMSR2 and GMI: Validation and Sensitivities
NASA Astrophysics Data System (ADS)
Duncan, D.; Kummerow, C. D.
2015-12-01
A new non-raining retrieval has been developed for microwave imagers and applied to the GMI and AMSR2 sensors. With the Community Radiative Transfer Model (CRTM) as the forward model for the physical retrieval, a 1-dimensional variational method finds the atmospheric state which minimizes the difference between observed and simulated brightness temperatures. A key innovation of the algorithm development is a method to calculate the sensor error covariance matrix that is specific to the forward model employed and includes off-diagonal elements, allowing the algorithm to handle various forward models and sensors with little cross-talk. The water vapor profile is resolved by way of empirical orthogonal functions (EOFs) and then summed to get total precipitable water (TPW). Validation of retrieved 10m wind speed, TPW, and sea surface temperature (SST) is performed via comparison with buoys and radiosondes as well as global models and other remotely sensed products. In addition to the validation, sensitivity experiments investigate the impact of ancillary data on the under-constrained retrieval, a concern for climate data records that strive to be independent of model biases. The introduction of model analysis data is found to aid the algorithm most at high frequency channels and affect TPW retrievals, whereas wind and cloud water retrievals show little effect from ingesting further ancillary data.
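In the linear case, the 1DVAR state estimate has a closed form: with forward model y = Kx, background xa with covariance Sa, and observation error covariance Se, minimizing J(x) = (x−xa)ᵀSa⁻¹(x−xa) + (y−Kx)ᵀSe⁻¹(y−Kx) gives the update below. All matrices are toy values, not CRTM Jacobians.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 5
K = rng.normal(size=(m, n))           # linearized forward operator (toy)
x_true = np.array([1.0, -0.5, 2.0])
y = K @ x_true                        # noise-free observations for the sketch

xa = np.zeros(n)                      # background (prior) state
Sa_inv = np.eye(n) * 1e-4             # weak prior
Se_inv = np.eye(m) * 1e4              # accurate observations

# x_hat = xa + (Sa^-1 + K^T Se^-1 K)^-1 K^T Se^-1 (y - K xa)
H = Sa_inv + K.T @ Se_inv @ K
x_hat = xa + np.linalg.solve(H, K.T @ Se_inv @ (y - K @ xa))
```

With an informative set of observations and a weak prior, the estimate collapses onto the true state; the real retrieval iterates this update because the radiative transfer model is nonlinear.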
NASA Astrophysics Data System (ADS)
Niu, Chun-Yang; Qi, Hong; Huang, Xing; Ruan, Li-Ming; Wang, Wei; Tan, He-Ping
2015-11-01
A hybrid least-square QR decomposition (LSQR)-particle swarm optimization (LSQR-PSO) algorithm was developed to estimate the three-dimensional (3D) temperature distributions and absorption coefficients simultaneously. The outgoing radiative intensities at the boundary surface of the absorbing media were simulated by the line-of-sight (LOS) method, which served as the input for the inverse analysis. The retrieval results showed that the 3D temperature distributions of the participating media with known radiative properties could be retrieved accurately using the LSQR algorithm, even with noisy data. For the participating media with unknown radiative properties, the 3D temperature distributions and absorption coefficients could be retrieved accurately using the LSQR-PSO algorithm even with measurement errors. It was also found that the temperature field could be estimated more accurately than the absorption coefficients. In order to gain insight into the effects on the accuracy of temperature distribution reconstruction, the selection of the detection direction and the angle between two detection directions was also analyzed. Project supported by the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), the National Natural Science Foundation of China (Grant No. 51476043), and the Fund of Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation University of China.
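The particle-swarm half of such a hybrid can be sketched with a standard PSO loop; the quadratic objective below is a stand-in for the radiative line-of-sight misfit, and the inertia/acceleration coefficients are conventional textbook values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    """Toy misfit with its minimum at x = 0.7 in every dimension."""
    return np.sum((x - 0.7) ** 2, axis=1)

n_part, dim = 20, 2
x = rng.uniform(0, 1, (n_part, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), objective(x)

for _ in range(100):
    gbest = pbest[np.argmin(pbest_f)]                  # swarm's best point
    r1, r2 = rng.random((2, n_part, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)                           # keep in search box
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
```

In the paper's scheme each PSO candidate (an absorption-coefficient field) would be scored by first running the inner LSQR temperature solve.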
A stable and accurate partitioned algorithm for conjugate heat transfer
NASA Astrophysics Data System (ADS)
Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.
2017-09-01
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. 
For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
NASA Technical Reports Server (NTRS)
Liu, X.; Kizer, S.; Barnet, C.; Dvakarla, M.; Zhou, D. K.; Larar, A. M.
2012-01-01
The Joint Polar Satellite System (JPSS) is a U.S. National Oceanic and Atmospheric Administration (NOAA) mission in collaboration with the U.S. National Aeronautics and Space Administration (NASA) and international partners. The NPP Cross-track Infrared Microwave Sounding Suite (CrIMSS) consists of the infrared (IR) Cross-track Infrared Sounder (CrIS) and the microwave (MW) Advanced Technology Microwave Sounder (ATMS). The CrIS instrument is a hyperspectral interferometer that measures high spectral and spatial resolution upwelling infrared radiances. The ATMS is a 22-channel radiometer similar to the Advanced Microwave Sounding Units (AMSU) A and B; it measures top-of-atmosphere MW upwelling radiation and provides the capability to sound below clouds. The CrIMSS Environmental Data Record (EDR) algorithm provides three EDRs, namely the atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP, and AVPP, respectively), with the lower tropospheric AVTP and the AVMP being JPSS Key Performance Parameters (KPPs). The operational CrIMSS EDR algorithm was originally designed to run on large IBM computers with a dedicated data management subsystem (DMS). We have ported the operational code to simple Linux systems by replacing DMS with appropriate interfaces. We also changed the interface of the operational code so that we can read data from both the CrIMSS science code and the operational code and compare lookup tables, parameter files, and output results. The details of the CrIMSS EDR algorithm are described in reference [1]. We will present results of testing the CrIMSS EDR operational algorithm using proxy data generated from Infrared Atmospheric Sounding Interferometer (IASI) satellite data and from NPP CrIS/ATMS data.
Nonconvergence of the Wang-Landau algorithms with multiple random walkers.
Belardinelli, R E; Pereyra, V D
2016-05-01
This paper discusses some convergence properties of entropic sampling Monte Carlo methods with multiple random walkers, particularly the Wang-Landau (WL) and 1/t algorithms. The classical algorithms are modified by the use of m independent random walkers in the energy landscape to calculate the density of states (DOS). The Ising model is used to show the convergence properties in the calculation of the DOS, as well as the critical temperature, while the calculation of the number π by multidimensional integration is used in the continuum approximation. In each case, the error is obtained separately for each walker at a fixed time t; then, the average over the m walkers is performed. It is observed that the error goes as 1/√m. However, if the number of walkers increases above a certain critical value m > m_x, the error reaches a constant value, i.e., it saturates. This occurs for both algorithms; however, it is shown that for a given system the 1/t algorithm is more efficient and accurate than the corresponding version of the WL algorithm. It follows that it makes no sense to increase the number of walkers above the critical value m_x, since doing so does not reduce the error in the calculation; a larger number of walkers alone does not guarantee convergence.
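A single-walker Wang-Landau sketch makes the underlying DOS estimate concrete. The toy system is 8 independent spins with E equal to the number of up spins, whose exact DOS is C(8, E); the fixed geometric schedule for the modification factor is a simplification (no flatness check), and the paper's multi-walker averaging is not shown.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)
N = 8
spins = rng.integers(0, 2, N)
E = int(spins.sum())
lng = np.zeros(N + 1)                  # running estimate of ln g(E)

f = 1.0                                # ln-scale modification factor
for _ in range(12):                    # geometric schedule: f -> f/2
    for _ in range(5000):
        i = rng.integers(N)
        E_new = E + (1 - 2 * spins[i])          # flipping spin i shifts E by +-1
        if rng.random() < np.exp(lng[E] - lng[E_new]):   # accept w.p. g(E)/g(E')
            spins[i] ^= 1
            E = E_new
        lng[E] += f                    # penalize the visited energy level
    f /= 2

exact = np.log([comb(N, k) for k in range(N + 1)])
err = np.max(np.abs((lng - lng[0]) - (exact - exact[0])))
```

Running m such walkers and averaging their ln g estimates is what produces the 1/√m error decay the abstract discusses, up to the saturation point m_x.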
Lamberti, A; Vanlanduit, S; De Pauw, B; Berghmans, F
2014-03-24
Fiber Bragg Gratings (FBGs) can be used as sensors for strain, temperature, and pressure measurements. For this purpose, the ability to determine the Bragg peak wavelength with adequate wavelength resolution and accuracy is essential. However, conventional peak detection techniques, such as the maximum detection algorithm, can yield inaccurate and imprecise results, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. Other techniques, such as the cross-correlation demodulation algorithm, are more precise and accurate but require considerably higher computational effort. To overcome these problems, we developed a novel fast phase correlation (FPC) peak detection algorithm, which computes the wavelength shift in the reflected spectrum of an FBG sensor. This paper analyzes the performance of the FPC algorithm for different values of the SNR and wavelength resolution. Using simulations and experiments, we compared the FPC with the maximum detection and cross-correlation algorithms. The FPC method demonstrated a detection precision and accuracy comparable with those of cross-correlation demodulation and considerably higher than those obtained with the maximum detection technique. Additionally, the FPC proved to be about 50 times faster than the cross-correlation algorithm. It is therefore a promising tool for future implementation in real-time systems or in embedded hardware intended for FBG sensor interrogation.
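The cross-correlation baseline the FPC method accelerates can be sketched as follows: shift a synthetic Bragg reflection peak by a known amount and recover the shift from the location of the correlation maximum, computed via the FFT. The grid, peak width, and 0.20 nm shift are illustrative values.

```python
import numpy as np

wl = np.linspace(1549.0, 1551.0, 512)            # wavelength grid, nm
peak = lambda c: np.exp(-((wl - c) / 0.05) ** 2) # toy Gaussian reflection peak
ref = peak(1550.00)                              # reference spectrum
meas = peak(1550.20)                             # measured, shifted spectrum

# circular cross-correlation via the FFT
corr = np.fft.ifft(np.fft.fft(meas) * np.conj(np.fft.fft(ref))).real
lag = int(np.argmax(corr))
if lag > len(wl) // 2:                           # map to a signed lag
    lag -= len(wl)
shift_nm = lag * (wl[1] - wl[0])                 # lag in samples -> nm
```

Sub-sample refinement (e.g. interpolating around the correlation peak, or working directly with the cross-spectrum phase as FPC does) would recover the remaining fraction of a sample.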
Studies of implicit and explicit solution techniques in transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1982-01-01
Studies aimed at increasing the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed, and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems used for evaluating and comparing the various algorithms are discussed, and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems, such as insulated metal structures). The effects of different models of an insulated cylinder on algorithm performance are demonstrated. The stiffness of the problem is highly sensitive to modeling details, and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and of operator splitting techniques for speeding up the solution of the algebraic equations are also described.
Studies of implicit and explicit solution techniques in transient thermal analysis of structures
NASA Astrophysics Data System (ADS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1982-08-01
Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.
Modeling and analysis of solar distributed generation
NASA Astrophysics Data System (ADS)
Ortiz Rivera, Eduardo Ivan
Recent changes in the global economy are creating a big impact on our daily life. The price of oil is increasing and the number of reserves decreases every day. Also, dramatic demographic changes are impacting the viability of the electric infrastructure and ultimately the economic future of the industry. These are some of the reasons that many countries are looking for alternative energy sources to produce electric energy. The most common form of green energy in our daily life is solar energy. Converting solar energy into electrical energy requires solar panels, dc-dc converters, power control, sensors, and inverters. In this work, a photovoltaic module (PVM) model using the electrical characteristics provided by the manufacturer data sheet is presented for power system applications. Experimental results from testing are shown, verifying the proposed PVM model. Also in this work, three maximum power point tracker (MPPT) algorithms are presented to obtain the maximum power from a PVM. The first MPPT algorithm is a method based on Rolle's and Lagrange's theorems and can provide at least an approximate answer to a family of transcendental functions that cannot be solved using differential calculus. The second MPPT algorithm is based on the approximation of the proposed PVM model using fractional polynomials, where the shape, boundary conditions, and performance of the proposed PVM model are satisfied. The third MPPT algorithm is based on the determination of the optimal duty cycle for a dc-dc converter and previous knowledge of the load or load-matching conditions. Also, four algorithms to calculate the effective irradiance level and temperature over a photovoltaic module are presented in this work. The main reasons to develop these algorithms are monitoring climate conditions, eliminating temperature and solar irradiance sensors, reducing the cost of a photovoltaic inverter system, and developing new algorithms to be integrated with maximum power point tracking algorithms. Finally, several PV power applications are presented, such as circuit analysis for a load connected to two different PV arrays, speed control for a DC motor connected to a PVM, and a novel single-phase photovoltaic inverter system using the Z-source converter.
Tang, Bohui; Bi, Yuyun; Li, Zhao-Liang; Xia, Jun
2008-01-01
On the basis of the radiative transfer theory, this paper addressed the estimate of Land Surface Temperature (LST) from the Chinese first operational geostationary meteorological satellite FengYun-2C (FY-2C) data in two thermal infrared channels (IR1, 10.3-11.3 μm and IR2, 11.5-12.5 μm), using the Generalized Split-Window (GSW) algorithm proposed by Wan and Dozier (1996). The coefficients in the GSW algorithm corresponding to a series of overlapping ranges of the mean emissivity, the atmospheric Water Vapor Content (WVC), and the LST were derived using a statistical regression method from the numerical values simulated with an accurate atmospheric radiative transfer model, MODTRAN 4, over a wide range of atmospheric and surface conditions. The simulation analysis showed that the LST could be estimated by the GSW algorithm with the Root Mean Square Error (RMSE) less than 1 K for the sub-ranges with the Viewing Zenith Angle (VZA) less than 30° or for the sub-ranges with VZA less than 60° and the atmospheric WVC less than 3.5 g/cm2, provided that the Land Surface Emissivities (LSEs) are known. In order to determine the range for the optimum coefficients of the GSW algorithm, the LSEs could be derived from the data in MODIS channels 31 and 32 provided by MODIS/Terra LST product MOD11B1, or be estimated either according to the land surface classification or using the method proposed by Jiang et al. (2006); and the WVC could be obtained from MODIS total precipitable water product MOD05, or be retrieved using Li et al.'s method (2003). The sensitivity and error analyses in terms of the uncertainty of the LSE and WVC as well as the instrumental noise were performed. In addition, in order to compare the different formulations of the split-window algorithms, several recently proposed split-window algorithms were used to estimate the LST with the same simulated FY-2C data. The result of the intercomparison showed that most of the algorithms give comparable results. PMID:27879744
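The generalized split-window form referred to above can be sketched as follows. This is only the algebraic structure of the Wan and Dozier (1996) formulation; the coefficient values in the usage example are illustrative placeholders, not the regression coefficients derived in the paper.

```python
def gsw_lst(t_i, t_j, eps_i, eps_j, coeffs):
    # generalized split-window LST from two brightness temperatures t_i, t_j (K)
    # and the channel emissivities; coeffs = (A1, A2, A3, B1, B2, B3, C)
    eps = 0.5 * (eps_i + eps_j)       # mean emissivity
    deps = eps_i - eps_j              # emissivity difference
    A1, A2, A3, B1, B2, B3, C = coeffs
    mean = 0.5 * (t_i + t_j)
    diff = 0.5 * (t_i - t_j)
    return (C
            + (A1 + A2 * (1 - eps) / eps + A3 * deps / eps ** 2) * mean
            + (B1 + B2 * (1 - eps) / eps + B3 * deps / eps ** 2) * diff)
```

With equal channel emissivities the deps terms vanish and the formula reduces to a weighted mean/difference of the two brightness temperatures, which is what the sub-range regression tunes.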
Development of Algorithms for Control of Humidity in Plant Growth Chambers
NASA Technical Reports Server (NTRS)
Costello, Thomas A.
2003-01-01
Algorithms were developed to control humidity in plant growth chambers used for research on bioregenerative life support at Kennedy Space Center. The algorithms used the computed water vapor pressure (based on measured air temperature and relative humidity) as the process variable, with time-proportioned outputs to operate the humidifier and dehumidifier. Algorithms were based upon proportional-integral-differential (PID) and fuzzy logic schemes and were implemented using I/O Control software (OPTO-22) to define and download the control logic to an autonomous programmable logic controller (PLC, ultimate ethernet brain and assorted input-output modules, OPTO-22), which performed the monitoring and control logic processing, as well as the physical control of the devices that effected the targeted environment in the chamber. During limited testing, the PLCs successfully implemented the intended control schemes and attained a control resolution for humidity of less than 1%. The algorithms have the potential to be used not only with autonomous PLCs but could also be implemented within network-based supervisory control programs. This report documents unique control features that were implemented within the OPTO-22 framework and makes recommendations regarding future uses of the hardware and software for biological research by NASA.
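The control scheme described (vapor pressure as process variable, PID, time-proportioned outputs) can be sketched as below. This is not the OPTO-22 logic from the report: the Magnus-type saturation formula, the gains, and the proportioning window are assumptions for illustration.

```python
import math

def vapor_pressure_kpa(temp_c, rh_pct):
    # actual water vapor pressure from air temperature and relative humidity;
    # the Magnus-type saturation formula is an assumption (the report does not
    # state which formula the controller used)
    e_sat = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))
    return e_sat * rh_pct / 100.0

class PID:
    """Textbook PID acting on the vapor-pressure process variable."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def time_proportioned(output, window_s=10.0):
    # map controller output to (humidifier, dehumidifier) on-times in a window
    duty = max(-1.0, min(1.0, output))
    if duty >= 0:
        return duty * window_s, 0.0
    return 0.0, -duty * window_s
```

A positive controller output drives the humidifier for part of the window; a negative output drives the dehumidifier, mirroring the dual-actuator arrangement in the chambers.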
Lau, Sarah J.; Moore, David G.; Stair, Sarah L.; ...
2016-01-01
Ultrasonic analysis is being explored as a way to capture events during melting of highly dispersive wax. Typical events include temperature changes in the material, phase transition of the material, surface flows and reformations, and void filling as the material melts. Melt tests are performed with wax to evaluate the usefulness of different signal processing algorithms in capturing event data. Several algorithm paths are being pursued. The first looks at changes in the velocity of the signal through the material. This is only appropriate when the changes from one ultrasonic signal to the next can be represented by a linear relationship, which is not always the case. The second tracks changes in the frequency content of the signal. The third algorithm tracks changes in the temporal moments of a signal over a full test. This method does not require that the changes in the signal be represented by a linear relationship, but attaching changes in the temporal moments to physical events can be difficult. This study describes the algorithm paths applied to experimental data from ultrasonic signals as wax melts and explores different ways to display the results.
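The temporal moments mentioned in the third algorithm path can be sketched as energy-normalized moments of the waveform (a standard definition; which specific moments the authors track is not stated in the abstract, so treat this as an assumption):

```python
def temporal_moments(signal, dt):
    # energy-normalized temporal moments of a sampled waveform:
    # M0 (total energy), centroid time, and RMS duration about the centroid
    energy = [s * s for s in signal]
    m0 = sum(energy) * dt
    t = [i * dt for i in range(len(signal))]
    centroid = sum(ti * e for ti, e in zip(t, energy)) * dt / m0
    var = sum((ti - centroid) ** 2 * e for ti, e in zip(t, energy)) * dt / m0
    return m0, centroid, var ** 0.5
```

Tracking these three scalars over a melt test gives a compact signature of signal change that does not assume a linear relationship between successive waveforms, which is the advantage the abstract attributes to this path.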
NASA Technical Reports Server (NTRS)
Key, Jeff; Maslanik, James; Steffen, Konrad
1994-01-01
During the first half of our second project year we have accomplished the following: (1) acquired a new AVHRR data set for the Beaufort Sea area spanning an entire year; (2) acquired additional ATSR data for the Arctic and Antarctic now totaling over seven months; (3) refined our AVHRR Arctic and Antarctic ice surface temperature (IST) retrieval algorithm, including work specific to Greenland; (4) developed ATSR retrieval algorithms for the Arctic and Antarctic, including work specific to Greenland; (5) investigated the effects of clouds and the atmosphere on passive microwave 'surface' temperature retrieval algorithms; (6) generated surface temperatures for the Beaufort Sea data set, both from AVHRR and SSM/I; and (7) continued work on compositing GAC data for coverage of the entire Arctic and Antarctic. During the second half of the year we will continue along these same lines, and will undertake a detailed validation study of the AVHRR and ATSR retrievals using LEADEX and the Beaufort Sea year-long data. Cloud masking methods used for the AVHRR will be modified for use with the ATSR. Methods of blending in situ and satellite-derived surface temperature data sets will be investigated.
An adaptive scale factor based MPPT algorithm for changing solar irradiation levels in outer space
NASA Astrophysics Data System (ADS)
Kwan, Trevor Hocksun; Wu, Xiaofeng
2017-03-01
Maximum power point tracking (MPPT) techniques are popularly used for maximizing the output of solar panels by continuously tracking the maximum power point (MPP) of their P-V curves, which depend both on the panel temperature and the input insolation. Various MPPT algorithms have been studied in the literature, including perturb and observe (P&O), hill climbing, incremental conductance, fuzzy logic control and neural networks. This paper presents an algorithm which improves the MPP tracking performance by adaptively scaling the DC-DC converter duty cycle. The principle of the proposed algorithm is to detect the oscillation by checking the sign (i.e., direction) of the duty cycle perturbation between the current and previous time steps. If there is a difference in the signs then it is clear an oscillation is present, and the DC-DC converter duty cycle perturbation is subsequently scaled down by a constant factor. By repeating this process, the steady state oscillations become negligibly small, which subsequently allows for a smooth steady state MPP response. To verify the proposed MPPT algorithm, a simulation involving irradiance levels typically encountered in outer space is conducted. Simulation and experimental results prove that the proposed algorithm is fast and stable in comparison to not only the conventional fixed step counterparts, but also to previous variable step size algorithms.
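The sign-flip detection and step scaling described above can be sketched as a P&O-style hill climber (a minimal sketch, not the authors' implementation: the P-V curve model, the initial step, and the shrink factor of 0.5 are assumptions):

```python
def adaptive_po_mppt(pv_power, duty0=0.5, step0=0.05, shrink=0.5, iters=60):
    # perturb-and-observe hill climbing with an adaptively scaled step:
    # when the perturbation direction flips between consecutive steps
    # (oscillation around the MPP), the duty-cycle step is scaled down
    duty, step, direction = duty0, step0, 1.0
    last_dir = direction
    last_p = pv_power(duty)
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        p = pv_power(duty)
        if p < last_p:             # power dropped: reverse the perturbation
            direction = -direction
        if direction != last_dir:  # sign change detected: damp the step
            step *= shrink
        last_dir, last_p = direction, p
    return duty
```

Because the step shrinks geometrically at every detected oscillation, the steady-state duty-cycle ripple becomes negligibly small, which is the smooth steady-state response the abstract claims over fixed-step P&O.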
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, F.; Banks, J. W.; Henshaw, W. D.
We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
Land surface temperature measurements from EOS MODIS data
NASA Technical Reports Server (NTRS)
Wan, Zhengming
1995-01-01
Significant progress has been made in TIR instrumentation, which is required to establish the spectral BRDF/emissivity knowledge base of land-surface materials and to validate the land-surface temperature (LST) algorithms. The SIBRE (Spectral Infrared Bidirectional Reflectance and Emissivity) system and a TIR system for measuring spectral directional-hemispherical emissivity have been completed and tested successfully. Optical properties and performance features of key components (including spectrometer and TIR source) of these systems have been characterized by integrated use of local standards (blackbody and reference plates). The stabilization of the spectrometer performance was improved by a custom-built liquid cooling system. Methods and procedures for measuring spectral TIR BRDF and directional-hemispherical emissivity with these two systems have been verified in sample measurements. These TIR instruments have been used in the laboratory and the field, giving very promising results. The measured spectral emissivities of the water surface are very close to the calculated values based on well-established water refractive index values in published papers. Preliminary results show that the TIR instruments can be used for validation of the MODIS LST algorithm in homogeneous test sites. The beta-3 version of the MODIS LST software is being prepared for its delivery scheduled in the early second half of this year.
Fast and precise thermoregulation system in physiological brain slice experiment
NASA Astrophysics Data System (ADS)
Sheu, Y. H.; Young, M. S.
1995-12-01
We have developed a fast and precise thermoregulation system incorporated within a physiological experiment on a brain slice. The thermoregulation system is used to control the temperature of a recording chamber in which the brain slice is placed. It consists of a single-chip microcomputer, a set command module, a display module, and an FLC module. A fuzzy control algorithm was developed and a fuzzy logic controller then designed for achieving fast, smooth thermostatic performance and providing precise temperature control with accuracy to 0.1 °C, from room temperature through 42 °C (the experimental temperature range). The fuzzy logic controller is implemented by microcomputer software and related peripheral hardware circuits. Six operating modes of thermoregulation are offered with the system, and this can be further extended according to experimental needs. The test results of this study demonstrate that the fuzzy control method is easily implemented by a microcomputer and also verify that this method provides a simple way to achieve fast and precise high-performance control of a nonlinear thermoregulation system in a physiological brain slice experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Bingjing; Zhao, Jianlin, E-mail: jlzhao@nwpu.edu.cn; Wang, Jun
2013-11-21
We present a method for visually and quantitatively investigating the heat dissipation process of plate-fin heat sinks by using digital holographic interferometry. A series of phase change maps reflecting the temperature distribution and variation trend of the air field surrounding the heat sink during the heat dissipation process are numerically reconstructed based on double-exposure holographic interferometry. According to the phase unwrapping algorithm and the derived relationship between temperature and phase change of the detection beam, the full-field temperature distributions are quantitatively obtained with reasonably high measurement accuracy. The impact of the heat sink's channel width on the heat dissipation performance in the case of natural convection is then analyzed. In addition, a comparison between simulation and experiment results is given to verify the reliability of this method. The experimental results certify the feasibility and validity of the presented method for full-field, dynamic, and quantitative measurement of the air field temperature distribution, which provides a basis for analyzing the heat dissipation performance of plate-fin heat sinks.
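The "derived relationship between temperature and phase change" can be illustrated with a common model for air (a sketch under stated assumptions, not the relation derived in the paper): the Gladstone-Dale relation n - 1 = K·ρ combined with the ideal-gas law, and the double-exposure phase change Δφ = 2πL(n - n_ref)/λ along an optical path of length L.

```python
import math

def temperature_from_phase(dphi, t_ref, length, wavelength,
                           gladstone_dale=2.26e-4,   # m^3/kg, approx. for air
                           pressure=101325.0,        # Pa, assumed constant
                           r_specific=287.05):       # J/(kg K), dry air
    # Gladstone-Dale: n - 1 = K * rho ; ideal gas: rho = p / (R_s * T)
    # double-exposure phase change: dphi = 2*pi*L*(n - n_ref) / lambda
    n_minus_1_ref = gladstone_dale * pressure / (r_specific * t_ref)
    n_minus_1 = n_minus_1_ref + wavelength * dphi / (2 * math.pi * length)
    return gladstone_dale * pressure / (r_specific * n_minus_1)
```

Heated air is less dense, so its refractive index drops; a negative unwrapped phase change relative to the cold reference exposure therefore maps to a higher local temperature.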
OSI SAF Sea Surface Temperature reprocessing of MSG/SEVIRI archive.
NASA Astrophysics Data System (ADS)
Saux Picart, Stéphane; Legendre, Gerard; Marsouin, Anne; Péré, Sonia; Roquet, Hervé
2017-04-01
The Ocean and Sea-Ice Satellite Application Facility (OSI SAF) of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) is planning to deliver a reprocessing of Sea Surface Temperature (SST) from the Spinning Enhanced Visible and Infrared Imager/Meteosat Second Generation (SEVIRI/MSG) archive (2004-2012) by the end of 2016. This reprocessing draws on the OSI SAF team's experience in near-real-time processing of MSG/SEVIRI data. The retrieval method consists of a non-linear split-window algorithm including the algorithm correction scheme developed by Le Borgne et al. (2011). The bias correction relies on simulations of infrared brightness temperatures performed using Numerical Weather Prediction model atmospheric profiles of water vapour and temperature, and the RTTOV radiative transfer model. The cloud mask used is the Climate SAF reprocessing of the MSG/SEVIRI archive, which is consistent over the period under consideration. Atmospheric Saharan dust has a strong impact on the retrieved SST; it is taken into consideration through the computation of the Saharan Dust Index (Merchant et al., 2006), which is then used to determine an empirical correction applied to the SST. The MSG/SEVIRI SST reprocessing dataset consists of hourly level 3 composites of sub-skin temperature projected onto a regular 0.05° grid over the region delimited by 60N-60S and 60W-60E. This presentation gives an overview of the data and methods used for the reprocessing, the products, and validation results against drifting-buoy measurements extracted from the ERA-CLIM dataset.
A novel calibration method of focused light field camera for 3-D reconstruction of flame temperature
NASA Astrophysics Data System (ADS)
Sun, Jun; Hossain, Md. Moinul; Xu, Chuan-Long; Zhang, Biao; Wang, Shi-Min
2017-05-01
This paper presents a novel geometric calibration method for a focused light field camera to trace the rays of flame radiance and to reconstruct the three-dimensional (3-D) temperature distribution of a flame. A calibration model is developed to calculate the corner points and their projections for the focused light field camera. The matching of main lens and microlens f-numbers is used as an additional constraint for the calibration. Geometric parameters of the focused light field camera are then obtained using the Levenberg-Marquardt algorithm. Totally focused images, in which all points are in focus, are utilized to validate the proposed calibration method. Calibration results are presented and discussed in detail. The maximum mean relative error of the calibration is found to be less than 0.13%, indicating that the proposed method is capable of calibrating the focused light field camera successfully. The parameters obtained by the calibration are then utilized to trace the rays of flame radiance. A least-squares QR-factorization algorithm with Planck's radiation law is used to reconstruct the 3-D temperature distribution of a flame. Experiments were carried out on an ethylene-air fired combustion test rig to reconstruct the temperature distribution of flames. The flame temperature obtained by the proposed method was then compared with that obtained using a high-precision thermocouple. The difference between the two measurements was found to be no greater than 6.7%. Experimental results demonstrated that the proposed calibration method and the applied measurement technique perform well in the reconstruction of the flame temperature.
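The role Planck's radiation law plays in such reconstructions can be illustrated with its single-wavelength form and its closed-form inversion (a minimal sketch; the full method in the paper solves a large ray-traced system, not a single-pixel inversion):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength, temp):
    # blackbody spectral radiance, W / (m^2 sr m)
    a = 2.0 * H * C ** 2 / wavelength ** 5
    return a / (math.exp(H * C / (wavelength * KB * temp)) - 1.0)

def temperature_from_radiance(wavelength, radiance):
    # closed-form inversion of Planck's law at a single wavelength
    a = 2.0 * H * C ** 2 / wavelength ** 5
    return H * C / (wavelength * KB * math.log(a / radiance + 1.0))
```

Given calibrated ray geometry, each measured radiance sample constrains the temperature along its ray; the inversion above is the scalar version of that constraint.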
Retrieving cloudy atmosphere parameters from RPG-HATPRO radiometer data
NASA Astrophysics Data System (ADS)
Kostsov, V. S.
2015-03-01
An algorithm for simultaneously determining tropospheric temperature and humidity profiles and cloud liquid water content from ground-based measurements of microwave radiation is presented. A special feature of this algorithm is that it combines different types of measurements and different a priori information on the sought parameters. The features of its use in processing RPG-HATPRO radiometer data obtained in the course of atmospheric remote sensing experiments carried out by specialists from the Faculty of Physics of St. Petersburg State University are discussed. The results of a comparison of temperature and humidity profiles obtained using the ground-based microwave remote sensing method with those obtained from radiosonde data are analyzed. It is shown that this combined algorithm is comparable in accuracy to the classical method of statistical regularization in determining temperature profiles; however, it demonstrates better accuracy in determining humidity profiles.
Determination of cloud liquid water content using the SSM/I
NASA Technical Reports Server (NTRS)
Alishouse, John C.; Snider, Jack B.; Westwater, Ed R.; Swift, Calvin T.; Ruf, Christopher S.
1990-01-01
As part of a calibration/validation effort for the Special Sensor Microwave/Imager (SSM/I), coincident observations of SSM/I brightness temperatures and surface-based observations of cloud liquid water were obtained. These observations were used to validate initial algorithms and to derive an improved algorithm. The initial algorithms were divided into latitudinal, seasonal, and surface-type zones. It was found that these initial algorithms, which were of the D-matrix type, did not yield sufficiently accurate results. Correlations between the surface-based measurements and the SSM/I channels were investigated; however, the 85V channel was excluded because of excessive noise. It was found that there is no significant correlation between the SSM/I brightness temperatures and the surface-based cloud liquid water determination when the background surface is land or snow. A high correlation was found between brightness temperatures and ground-based measurements over the ocean.
NASA Astrophysics Data System (ADS)
Murrieta Mendoza, Alejandro
Aircraft reference trajectory optimization is an alternative method to reduce fuel consumption, and thus the pollution released to the atmosphere. Fuel consumption reduction is of special importance for two reasons: first, because the aeronautical industry is responsible for 2% of the CO2 released to the atmosphere, and second, because it reduces the flight cost. The aircraft fuel model was obtained from a numerical performance database which was created and validated by our industrial partner from flight experimental test data. A new methodology using the numerical database was proposed in this thesis to compute the fuel burn for a given trajectory. Weather parameters such as wind and temperature were taken into account, as they have an important effect on fuel burn. The open-source model used to obtain the weather forecast was provided by Weather Canada. A combination of linear and bi-linear interpolations allowed finding the required weather data. The search space was modeled using different graphs: one graph was used for mapping the different flight phases such as climb, cruise and descent, and another graph was used for mapping the physical space in which the aircraft would perform its flight. The trajectory was optimized in its vertical reference trajectory using the Beam Search algorithm, and a combination of the Beam Search algorithm with a search space reduction technique. The trajectory was optimized simultaneously for the vertical and lateral reference navigation plans while fulfilling a Required Time of Arrival constraint using different metaheuristic algorithms, including the artificial bee colony and ant colony optimization. Results were validated using the software FlightSIM, a commercial Flight Management System, an exhaustive search algorithm, and as-flown flights obtained from FlightAware. All algorithms were able to reduce the fuel burn and the flight costs.
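Beam search over a layered graph, as used above for the vertical profile, can be sketched as follows. The altitude layers and the cost function in the usage example are hypothetical, chosen only to mimic a climb/cruise trade-off; they are not the thesis's performance-database costs.

```python
def beam_search(layers, edge_cost, beam_width=3):
    # layered-graph beam search: expand every kept partial path into the next
    # layer, then retain only the `beam_width` cheapest partial paths
    beams = [(0.0, (node,)) for node in layers[0]]
    for layer in layers[1:]:
        candidates = [(cost + edge_cost(path[-1], node), path + (node,))
                      for cost, path in beams
                      for node in layer]
        candidates.sort(key=lambda cp: cp[0])
        beams = candidates[:beam_width]
    return beams[0]  # (total cost, best path found)
```

With a beam width at least as large as the number of partial paths per layer, the search degenerates to exhaustive enumeration; smaller widths trade optimality for speed, which is the point of combining it with search-space reduction.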
Banjak, Hussein; Grenier, Thomas; Epicier, Thierry; Koneti, Siddardha; Roiban, Lucian; Gay, Anne-Sophie; Magnin, Isabelle; Peyrin, Françoise; Maxim, Voichita
2018-06-01
Fast tomography in Environmental Transmission Electron Microscopy (ETEM) is of great interest for in situ experiments, where it allows observation of the real-time 3D evolution of nanomaterials under operating conditions. In this context, we are working on speeding up the acquisition step to a few seconds, mainly with applications on nanocatalysts. In order to accomplish such rapid acquisitions of the required tilt series of projections, a modern 4K high-speed camera is used that can capture up to 100 images per second in a 2K binning mode. However, due to the fast rotation of the sample during the tilt procedure, noise and blur effects may occur in many projections, which in turn would lead to poor-quality reconstructions. Blurred projections make classical reconstruction algorithms inappropriate and require the use of prior information. In this work, a regularized algebraic reconstruction algorithm named SIRT-FISTA-TV is proposed. The performance of this algorithm using blurred data is studied by means of a numerical blur introduced into simulated image series to mimic possible mechanical instabilities/drifts during fast acquisitions. We also present reconstruction results from noisy data to show the robustness of the algorithm to noise. Finally, we show reconstructions with experimental datasets and demonstrate the interest of fast tomography with an ultra-fast acquisition performed under environmental conditions, i.e. gas and temperature, in the ETEM. Compared to the classically used SIRT and SART approaches, our proposed SIRT-FISTA-TV reconstruction algorithm provides higher-quality tomograms, allowing easier segmentation of the reconstructed volume for better final processing and analysis. Copyright © 2018 Elsevier B.V. All rights reserved.
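For reference, the baseline SIRT iteration that SIRT-FISTA-TV builds on can be sketched as below (plain SIRT only; the FISTA acceleration and TV regularization of the paper are omitted, and the system matrix is assumed to have positive entries as ray weights do):

```python
def sirt(A, b, iterations=500, relax=1.0):
    # classical SIRT update: x <- x + relax * C * A^T * R * (b - A x),
    # where R and C hold the inverse row and column sums of A
    rows, cols = len(A), len(A[0])
    row_sum = [sum(A[i]) for i in range(rows)]
    col_sum = [sum(A[i][j] for i in range(rows)) for j in range(cols)]
    x = [0.0] * cols
    for _ in range(iterations):
        resid = [(b[i] - sum(A[i][j] * x[j] for j in range(cols))) / row_sum[i]
                 for i in range(rows)]
        for j in range(cols):
            x[j] += relax * sum(A[i][j] * resid[i] for i in range(rows)) / col_sum[j]
    return x
```

Because every projection row contributes simultaneously to each update, SIRT averages out independent noise across the tilt series, which is why it is the usual starting point for regularized variants.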
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
NASA Astrophysics Data System (ADS)
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses for Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques can prevent overfitting problems, even if best performances are achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm being filtered a posteriori.
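The core of the super-ensemble training step is a least-squares fit of member weights over the learning period. The sketch below shows only that plain multi-linear regression via the normal equations (the EOF filtering and spatial post-filter of the paper are omitted, and the toy data are illustrative):

```python
def superensemble_weights(members, obs):
    # least-squares weights w minimizing || sum_k w_k * member_k - obs ||^2
    # over the training period (members: list of K time series, obs: one series)
    k = len(members)
    n = len(obs)
    # normal equations  (M^T M) w = M^T y
    A = [[sum(members[i][t] * members[j][t] for t in range(n))
          for j in range(k)] for i in range(k)]
    b = [sum(members[i][t] * obs[t] for t in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, k))) / A[r][r]
    return w
```

With many near-collinear members and a short training window, the normal equations become ill-conditioned; that is the overfitting problem the EOF-filtered regression in the paper is designed to prevent.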
An underwater turbulence degraded image restoration algorithm
NASA Astrophysics Data System (ADS)
Furhad, Md. Hasan; Tahtali, Murat; Lambert, Andrew
2017-09-01
Underwater turbulence occurs due to random fluctuations of temperature and salinity in the water. These fluctuations are responsible for variations in water density, refractive index and attenuation. They impose random geometric distortions, spatio-temporally varying blur, limited range visibility and limited contrast on the acquired images. Several restoration techniques have been developed to address this problem, such as image-registration-based, lucky-region-based and centroid-based image restoration algorithms. Although these methods demonstrate better results in terms of removing turbulence, they require computationally intensive image registration, higher CPU load and larger memory allocations. Thus, in this paper, a simple patch-based dictionary learning algorithm is proposed to restore the image while avoiding the costly image registration step. Dictionary learning is a machine learning technique which builds a dictionary of non-zero atoms derived from the sparse representation of an image or signal. The image is divided into several patches and the sharp patches are detected from them. Next, dictionary learning is performed on these patches to estimate the restored image. Finally, an image deconvolution algorithm is employed on the estimated restored image to remove noise that still exists.
Gaura, Elena; Kemp, John; Brusey, James
2013-12-01
The paper demonstrates that wearable sensor systems, coupled with real-time on-body processing and actuation, can enhance safety for wearers of heavy protective equipment who are subjected to harsh thermal environments by reducing the risk of Uncompensable Heat Stress (UHS). The work focuses on Explosive Ordnance Disposal operatives and shows that predictions of UHS risk can be performed in real time with sufficient accuracy for real-world use. Furthermore, it is shown that the required sensory input for such algorithms can be obtained with wearable, non-intrusive sensors. Two algorithms, one based on Bayesian nets and another on decision trees, are presented for determining the heat stress risk, considering the mean skin temperature prediction as a proxy. The algorithms are trained on empirical data and have accuracies of 92.1±2.9% and 94.4±2.1%, respectively, when tested using leave-one-subject-out cross-validation. In applications such as Explosive Ordnance Disposal operative monitoring, such prediction algorithms can enable autonomous actuation of cooling systems and haptic alerts to minimize casualties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lau, Sarah J.; Moore, David G.; Stair, Sarah L.
Ultrasonic analysis is being explored as a way to capture events during melting of highly dispersive wax. Typical events include temperature changes in the material, phase transition of the material, surface flows and reformations, and void filling as the material melts. Melt tests are performed with wax to evaluate the usefulness of different signal processing algorithms in capturing event data. Several algorithm paths are being pursued. The first looks at changes in the velocity of the signal through the material. This is only appropriate when the changes from one ultrasonic signal to the next can be represented by a linear relationship, which is not always the case. The second tracks changes in the frequency content of the signal. The third algorithm tracks changes in the temporal moments of a signal over a full test. This method does not require that the changes in the signal be represented by a linear relationship, but attaching changes in the temporal moments to physical events can be difficult. This study describes the algorithm paths applied to experimental data from ultrasonic signals as wax melts and explores different ways to display the results.
Zhou, Ruhong
2004-05-01
A highly parallel replica exchange method (REM) that couples with a newly developed molecular dynamics algorithm particle-particle particle-mesh Ewald (P3ME)/RESPA has been proposed for efficient sampling of protein folding free energy landscape. The algorithm is then applied to two separate protein systems, beta-hairpin and a designed protein Trp-cage. The all-atom OPLSAA force field with an explicit solvent model is used for both protein folding simulations. Up to 64 replicas of solvated protein systems are simulated in parallel over a wide range of temperatures. The combined trajectories in temperature and configurational space allow a replica to overcome free energy barriers present at low temperatures. These large scale simulations reveal detailed results on folding mechanisms, intermediate state structures, thermodynamic properties and the temperature dependences for both protein systems.
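The temperature-swap step at the heart of replica exchange can be sketched as follows; the function name and the reduced units (k_B = 1) are illustrative assumptions, not the authors' implementation.

```python
import math
import random

def swap_accepted(E_i, E_j, T_i, T_j, rng=random.random):
    """Metropolis criterion for exchanging configurations between two
    replicas at temperatures T_i and T_j with energies E_i and E_j:
    accept with probability min(1, exp[(1/T_i - 1/T_j) * (E_i - E_j)])
    (k_B = 1 units)."""
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)
```

Attempting such swaps between neighboring temperatures is what lets a replica trapped at low temperature escape free energy barriers via a high-temperature excursion.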
Simulated Annealing in the Variable Landscape
NASA Astrophysics Data System (ADS)
Hasegawa, Manabu; Kim, Chang Ju
An experimental analysis is conducted to test whether the appropriate introduction of the smoothness-temperature schedule enhances the optimizing ability of the MASSS method, the combination of the Metropolis algorithm (MA) and the search-space smoothing (SSS) method. The test is performed on two types of random traveling salesman problems. The results show that the optimization performance of the MA is substantially improved by a single smoothing alone and slightly more by a single smoothing with cooling and by a de-smoothing process with heating. The performance is compared to that of the parallel tempering method and a clear advantage of the idea of smoothing is observed depending on the problem.
Speculation and replication in temperature accelerated dynamics
Zamora, Richard J.; Perez, Danny; Voter, Arthur F.
2018-02-12
Accelerated Molecular Dynamics (AMD) is a class of MD-based algorithms for the long-time scale simulation of atomistic systems that are characterized by rare-event transitions. Temperature-Accelerated Dynamics (TAD), a traditional AMD approach, hastens state-to-state transitions by performing MD at an elevated temperature. Recently, Speculatively-Parallel TAD (SpecTAD) was introduced, allowing the TAD procedure to exploit parallel computing systems by concurrently executing in a dynamically generated list of speculative future states. Although speculation can be very powerful, it is not always the most efficient use of parallel resources. In this paper, we compare the performance of speculative parallelism with a replica-based technique, similar to the Parallel Replica Dynamics method. A hybrid SpecTAD approach is also presented, in which each speculation process is further accelerated by a local set of replicas. Overall, this work motivates the use of hybrid parallelism whenever possible, as some combination of speculation and replication is typically most efficient.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebert, Jon Llyod
This Small Business Innovative Research (SBIR) Phase I project will demonstrate the feasibility of an innovative temperature control technology for the Metal-Organic Chemical Vapor Deposition (MOCVD) process used in the fabrication of Multi-Quantum Well (MQW) LEDs. The proposed control technology has the strong potential to improve both throughput and performance quality of the manufactured LED. The color of the light emitted by an LED is a strong function of the substrate temperature during the deposition process. Hence, accurate temperature control of the MOCVD process is essential for ensuring that the LED performance matches the design specification. The Gallium Nitride (GaN) epitaxy process involves depositing multiple layers at different temperatures. Much of the recipe time is spent ramping from one process temperature to another, adding significant overhead to the production time. To increase throughput, the process temperature must transition over a range of several hundred degrees Centigrade many times with as little overshoot and undershoot as possible, in the face of several sources of process disturbance such as changing emissivities. Any throughput increase achieved by faster ramping must also satisfy the constraint of strict temperature uniformity across the carrier so that yield is not affected. SC Solutions is a leading supplier of embedded real-time temperature control technology for MOCVD systems used in LED manufacturing. SC’s Multiple Input Multiple Output (MIMO) temperature controllers use physics-based models to achieve the performance demanded by our customers. However, to meet DOE’s ambitious goals of cost reduction of LED products, a new generation of temperature controllers has to be developed.
SC believes that the proposed control technology will be made feasible by the confluence of mathematical formulation as a convex optimization problem, new efficient and scalable algorithms, and the increase in computational power available for real-time control.
Bing, Chenchen; Nofiele, Joris; Staruch, Robert; Ladouceur-Wodzak, Michelle; Chatzinoff, Yonatan; Ranjan, Ashish; Chopra, Rajiv
2015-01-01
Purpose Localised hyperthermia in rodent studies is challenging due to the small target size. This study describes the development and characterisation of an MRI-compatible high-intensity focused ultrasound (HIFU) system to perform localised mild hyperthermia treatments in rodent models. Material and methods The hyperthermia platform consisted of an MRI-compatible small animal HIFU system, focused transducers with sector-vortex lenses, a custom-made receive coil, and means to maintain systemic temperatures of rodents. The system was integrated into a 3T MR imager. Control software was developed to acquire images, process temperature maps, and adjust output power using a proportional-integral-derivative feedback control algorithm. Hyperthermia exposures were performed in tissue-mimicking phantoms and in a rodent model (n = 9). During heating, an ROI was assigned in the heated region for temperature control and the target temperature was 42 °C; 30 min mild hyperthermia treatment followed by a 10-min cooling procedure was performed on each animal. Results 3D-printed sector-vortex lenses were successful at creating annular focal regions which enables customisation of the heating volume. Localised mild hyperthermia performed in rats produced a mean ROI temperature of 42.1 ± 0.3 °C. The T10 and T90 percentiles were 43.2 ± 0.4 °C and 41.0 ± 0.3 °C, respectively. For a 30-min treatment, the mean time duration between 41–45 °C was 31.1 min within the ROI. Conclusions The MRI-compatible HIFU system was successfully adapted to perform localised mild hyperthermia treatment in rodent models. A target temperature of 42 °C was well-maintained in a rat thigh model for 30 min. PMID:26540488
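A minimal sketch of the proportional-integral-derivative feedback loop described above, driving a simple first-order thermal plant toward the 42 °C target; the gains, output limits, and plant constants are placeholders, not the study's actual controller parameters.

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt,
    with the output clamped to a valid power range."""

    def __init__(self, kp, ki, kd, dt, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return min(self.u_max, max(self.u_min, u))  # clamp output power
```

In the system above, the measurement would be the mean MR-thermometry temperature in the control ROI and the output the HIFU drive power.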
Tang, Bo-Hui; Wu, Hua-; Li, Zhao-Liang; Nerry, Françoise
2012-07-30
This work addressed the validation of the MODIS-derived bidirectional reflectivity retrieval algorithm in mid-infrared (MIR) channel, proposed by Tang and Li [Int. J. Remote Sens. 29, 4907 (2008)], with ground-measured data, which were collected from a field campaign that took place in June 2004 at the ONERA (Office National d'Etudes et de Recherches Aérospatiales) center of Fauga-Mauzac, on the PIRRENE (Programme Interdisciplinaire de Recherche sur la Radiométrie en Environnement Extérieur) experiment site [Opt. Express 15, 12464 (2007)]. The leaving-surface spectral radiances measured by a BOMEM (MR250 Series) Fourier transform interferometer were used to calculate the ground brightness temperatures with the combination of the inversion of the Planck function and the spectral response functions of MODIS channels 22 and 23, and then to estimate the ground brightness temperature without the contribution of the solar direct beam and the bidirectional reflectivity by using Tang and Li's proposed algorithm. On the other hand, the simultaneously measured atmospheric profiles were used to obtain the atmospheric parameters and then to calculate the ground brightness temperature without the contribution of the solar direct beam, based on the atmospheric radiative transfer equation in the MIR region. Comparison of those two kinds of brightness temperature obtained by two different methods indicated that the Root Mean Square Error (RMSE) between the brightness temperatures estimated respectively using Tang and Li's algorithm and the atmospheric radiative transfer equation is 1.94 K. In addition, comparison of the hemispherical-directional reflectances derived by Tang and Li's algorithm with those obtained from the field measurements showed that the RMSE is 0.011, which indicates that Tang and Li's algorithm is feasible to retrieve the bidirectional reflectivity in MIR channel from MODIS data.
An Evaluation of the HVAC Load Potential for Providing Load Balancing Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Ning
This paper investigates the potential of providing aggregated intra-hour load balancing services using heating, ventilating, and air-conditioning (HVAC) systems. A direct-load control algorithm is presented. A temperature-priority-list method is used to dispatch the HVAC loads optimally to maintain consumer-desired indoor temperatures and load diversity. Realistic intra-hour load balancing signals were used to evaluate the operational characteristics of the HVAC load under different outdoor temperature profiles and different indoor temperature settings. The number of HVAC units needed is also investigated. Modeling results suggest that the number of HVACs needed to provide a ±1-MW load balancing service 24 hours a day varies significantly with baseline settings, high and low temperature settings, and the outdoor temperatures. The results demonstrate that the intra-hour load balancing service provided by HVAC loads meets the performance requirements and can become a major source of revenue for load-serving entities where the smart grid infrastructure enables direct load control over the HVAC loads.
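The temperature-priority-list dispatch idea can be sketched as follows for a cooling scenario: when the balancing signal asks for more load, the units whose rooms are warmest relative to their setpoints are switched on first. The data layout and ranking rule are assumptions for illustration, not the paper's exact algorithm.

```python
def dispatch(units, n_on):
    """Temperature-priority dispatch for cooling HVAC units.

    units: list of (unit_id, indoor_temp, setpoint) tuples.
    n_on:  number of units the balancing signal asks to run.
    Returns {unit_id: True if the unit should run}, turning on the
    units with the largest indoor-temperature excess over setpoint,
    which preserves comfort and load diversity.
    """
    ranked = sorted(units, key=lambda u: u[1] - u[2], reverse=True)
    on = {uid for uid, _, _ in ranked[:n_on]}
    return {uid: (uid in on) for uid, _, _ in units}
```

Re-running the ranking each control interval keeps every room near its desired temperature while the aggregate power tracks the balancing signal.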
Temperature Crosstalk Sensitivity of the Kummerow Rainfall Algorithm
NASA Technical Reports Server (NTRS)
Spencer, Roy W.; Petrenko, Boris
1999-01-01
Even though the signal source for passive microwave retrievals is thermal emission, retrievals of non-temperature geophysical parameters typically do not explicitly take into account the effects of temperature change on the retrievals. For global change research, changes in geophysical parameters (e.g. water vapor, rainfall, etc.) are referenced to the accompanying changes in temperature. If the retrieval of a certain parameter has a cross-talk response from temperature change alone, the retrievals might not be very useful for climate research. We investigated the sensitivity of the Kummerow rainfall retrieval algorithm to changes in air temperature. It was found that there was little net change in total rainfall with air temperature change. However, there were non-negligible changes within individual rain rate categories.
NASA Technical Reports Server (NTRS)
Kitzis, S. N.; Kitzis, J. L.
1979-01-01
The accuracy of the SEASAT-A SMMR antenna pattern correction (APC) algorithm was assessed. Interim APC brightness temperature measurements for the SMMR 6.6 GHz channels are compared with surface truth derived sea surface temperatures. Plots and associated statistics are presented for SEASAT-A SMMR data acquired for the Gulf of Alaska experiment. The cross-track gradients observed in the 6.6 GHz brightness temperature data are discussed.
[Investigation on Mobile Phone Based Thermal Imaging System and Its Preliminary Application].
Li, Fufeng; Chen, Feng; Liu, Jing
2015-03-01
The technical structure of a low-cost thermal imaging system (TIM) launched on a mobile phone was investigated; the system consists of a thermal infrared module, a mobile phone, and application software. The designing strategies and technical factors toward realizing various TIM array performances are interpreted, including sensor cost and Noise Equivalent Temperature Difference (NETD). In the software algorithm, a mechanism for scene-change detection was implemented to optimize the efficiency of non-uniformity correction (NUC). The performance experiments and analysis indicate that the NETD of the system can be smaller than 150 mK when the integration time is larger than 16 frames. Furthermore, a practical application for human temperature monitoring during physical exercise is proposed and interpreted. The measurement results support the feasibility and facility of the system in the medical application.
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
Post-processing interstitialcy diffusion from molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Bhardwaj, U.; Bukkuru, S.; Warrier, M.
2016-01-01
An algorithm to rigorously trace the interstitialcy diffusion trajectory in crystals is developed. The algorithm incorporates unsupervised learning and graph optimization which obviate the need to input extra domain specific information depending on crystal or temperature of the simulation. The algorithm is implemented in a flexible framework as a post-processor to molecular dynamics (MD) simulations. We describe in detail the reduction of interstitialcy diffusion into known computational problems of unsupervised clustering and graph optimization. We also discuss the steps, computational efficiency and key components of the algorithm. Using the algorithm, thermal interstitialcy diffusion from low to near-melting point temperatures is studied. We encapsulate the algorithms in a modular framework with functionality to calculate diffusion coefficients, migration energies and other trajectory properties. The study validates the algorithm by establishing the conformity of output parameters with experimental values and provides detailed insights for the interstitialcy diffusion mechanism. The algorithm along with the help of supporting visualizations and analysis gives convincing details and a new approach to quantifying diffusion jumps, jump-lengths, time between jumps and to identify interstitials from lattice atoms.
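One of the trajectory properties mentioned, the diffusion coefficient, is commonly obtained from the Einstein relation MSD(t) = 6Dt in three dimensions. The sketch below assumes an unwrapped trajectory array and a short-lag linear fit; these are standard choices, not necessarily the framework's actual implementation.

```python
import numpy as np

def msd(positions, max_lag):
    """Mean squared displacement vs. lag for one 3-D trajectory
    (positions has shape (n_steps, 3), unwrapped coordinates)."""
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = positions[lag:] - positions[:-lag]
        out[lag - 1] = (d ** 2).sum(axis=1).mean()
    return out

def diffusion_coefficient(positions, dt, max_lag=50):
    """Einstein relation in 3-D: MSD(t) = 6 D t, so D is the fitted
    slope of MSD versus lag time, divided by 6. Only short lags are
    fitted, where the MSD estimate is well averaged."""
    m = msd(positions, max_lag)
    lags = np.arange(1, max_lag + 1) * dt
    slope = np.polyfit(lags, m, 1)[0]
    return slope / 6.0
```

Migration energies would then follow from an Arrhenius fit of D against inverse temperature across the simulated temperature range.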
The MEM of spectral analysis applied to L.O.D.
NASA Astrophysics Data System (ADS)
Fernandez, L. I.; Arias, E. F.
The maximum entropy method (MEM) has been widely applied in polar motion studies, taking advantage of its performance in the management of complex time series. The authors used the MEM algorithm to estimate the cross-spectral function in order to compare interannual Length-of-Day (LOD) time series with Southern Oscillation Index (SOI) and Sea Surface Temperature (SST) series, which are closely related to El Niño-Southern Oscillation (ENSO) events.
NASA Technical Reports Server (NTRS)
2013-01-01
Topics covered include: Remote Data Access with IDL; Data Compression Algorithm Architecture for Large Depth-of-Field Particle Image Velocimeters; Vectorized Rebinning Algorithm for Fast Data Down-Sampling; Display Provides Pilots with Real-Time Sonic-Boom Information; Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery; Monitoring and Acquisition Real-time System (MARS); Analog Signal Correlating Using an Analog-Based Signal Conditioning Front End; Micro-Textured Black Silicon Wick for Silicon Heat Pipe Array; Robust Multivariable Optimization and Performance Simulation for ASIC Design; Castable Amorphous Metal Mirrors and Mirror Assemblies; Sandwich Core Heat-Pipe Radiator for Power and Propulsion Systems; Apparatus for Pumping a Fluid; Cobra Fiber-Optic Positioner Upgrade; Improved Wide Operating Temperature Range of Li-Ion Cells; Non-Toxic, Non-Flammable, -80 C Phase Change Materials; Soft-Bake Purification of SWCNTs Produced by Pulsed Laser Vaporization; Improved Cell Culture Method for Growing Contracting Skeletal Muscle Models; Hand-Based Biometric Analysis; The Next Generation of Cold Immersion Dry Suit Design Evolution for Hypothermia Prevention; Integrated Lunar Information Architecture for Decision Support Version 3.0 (ILIADS 3.0); Relay Forward-Link File Management Services (MaROS Phase 2); Two Mechanisms to Avoid Control Conflicts Resulting from Uncoordinated Intent; XTCE GOVSAT Tool Suite 1.0; Determining Temperature Differential to Prevent Hardware Cross-Contamination in a Vacuum Chamber; SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws; Remote Data Exploration with the Interactive Data Language (IDL); Mixture-Tuned, Clutter Matched Filter for Remote Detection of Subpixel Spectral Signals; Partitioned-Interval Quantum Optical Communications Receiver; and Practical UAV Optical Sensor Bench with Minimal Adjustability.
Surface emissivity and temperature retrieval for a hyperspectral sensor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borel, C.C.
1998-12-01
With the growing use of hyper-spectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. The author believes that this will enable him to get around using the present temperature-emissivity separation algorithms, using methods which take advantage of the many channels available in hyper-spectral imagers. A simple fact used in coming up with a novel algorithm is that a typical surface emissivity spectrum is rather smooth compared to the spectral features introduced by the atmosphere. Thus, an iterative solution technique can be devised which retrieves emissivity spectra based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. One such iterative algorithm solves the radiative transfer equation for the radiance at the sensor for the unknown emissivity and uses the blackbody temperature computed in an atmospheric window to get a guess for the unknown surface temperature. By varying the surface temperature over a small range, a series of emissivity spectra are calculated. The one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
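The "smoothest spectrum wins" selection rule can be sketched as follows; measuring smoothness by summed squared second differences is an assumption, since the abstract does not specify the smoothness metric.

```python
import numpy as np

def roughness(spectrum):
    """Spectral roughness: sum of squared second differences. A wrongly
    assumed surface temperature leaves residual atmospheric structure in
    the candidate emissivity, which this measure penalizes."""
    return float((np.diff(spectrum, n=2) ** 2).sum())

def smoothest(candidates):
    """Return the index of the candidate emissivity spectrum with the
    smallest roughness, i.e. the selection step of the smoothness-based
    temperature-emissivity separation described above."""
    return min(range(len(candidates)), key=lambda i: roughness(candidates[i]))
```

Each candidate would come from inverting the radiative transfer equation at one trial surface temperature in the scanned range.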
Calibration and Image Reconstruction for the Hurricane Imaging Radiometer (HIRAD)
NASA Technical Reports Server (NTRS)
Ruf, Christopher; Roberts, J. Brent; Biswas, Sayak; James, Mark W.; Miller, Timothy
2012-01-01
The Hurricane Imaging Radiometer (HIRAD) is a new airborne passive microwave synthetic aperture radiometer designed to provide wide swath images of ocean surface wind speed under heavy precipitation and, in particular, in tropical cyclones. It operates at 4, 5, 6 and 6.6 GHz and uses interferometric signal processing to synthesize a pushbroom imager in software from a low profile planar antenna with no mechanical scanning. HIRAD participated in NASA's Genesis and Rapid Intensification Processes (GRIP) mission during Fall 2010 as its first science field campaign. HIRAD produced images of upwelling brightness temperature over an approximately 70 km swath width with approximately 3 km spatial resolution. From this, ocean surface wind speed and column averaged atmospheric liquid water content can be retrieved across the swath. The calibration and image reconstruction algorithms that were used to verify HIRAD functional performance during and immediately after GRIP were only preliminary and used a number of simplifying assumptions and approximations about the instrument design and performance. The development and performance of a more detailed and complete set of algorithms are reported here.
Machine vision guided sensor positioning system for leaf temperature assessment
NASA Technical Reports Server (NTRS)
Kim, Y.; Ling, P. P.; Janes, H. W. (Principal Investigator)
2001-01-01
A sensor positioning system was developed for monitoring plants' well-being using a non-contact sensor. Image processing algorithms were developed to identify a target region on a plant leaf. A novel algorithm to recover view depth was developed by using a camera equipped with a computer-controlled zoom lens. The methodology has improved depth recovery resolution over a conventional monocular imaging technique. An algorithm was also developed to find a maximum enclosed circle on a leaf surface so the conical field-of-view of an infrared temperature sensor could be filled by the target without peripheral noise. The center of the enclosed circle and the estimated depth were used to define the sensor 3-D location for accurate plant temperature measurement.
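The maximum-enclosed-circle step can be sketched as a brute-force search over a binary leaf mask: the best center is the foreground pixel farthest from any background pixel. The mask representation and the O(n²) search are illustrative simplifications of the paper's algorithm.

```python
import numpy as np

def max_enclosed_circle(mask):
    """Largest circle fully inside a binary mask (True = leaf).

    For each foreground pixel, the admissible radius is its distance to
    the nearest background pixel; the chosen center maximizes that
    radius, so the IR sensor's conical field of view can be filled by
    leaf surface without peripheral noise. Brute force; fine for small
    masks."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    best_r, best_c = -1.0, None
    for p in fg:
        r = np.sqrt(((bg - p) ** 2).sum(axis=1)).min()
        if r > best_r:
            best_r, best_c = r, tuple(p)
    return best_c, best_r
```

A distance-transform implementation would give the same answer in linear time for production use.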
The modern temperature-accelerated dynamics approach
Zamora, Richard J.; Uberuaga, Blas P.; Perez, Danny; ...
2016-06-01
Accelerated molecular dynamics (AMD) is a class of MD-based methods used to simulate atomistic systems in which the metastable state-to-state evolution is slow compared with thermal vibrations. Temperature-accelerated dynamics (TAD) is a particularly efficient AMD procedure in which the predicted evolution is hastened by elevating the temperature of the system and then recovering the correct state-to-state dynamics at the temperature of interest. TAD has been used to study various materials applications, often revealing surprising behavior beyond the reach of direct MD. This success has inspired several algorithmic performance enhancements, as well as the analysis of its mathematical framework. Recently, these enhancements have leveraged parallel programming techniques to enhance both the spatial and temporal scaling of the traditional approach. Here, we review the ongoing evolution of the modern TAD method and introduce the latest development: speculatively parallel TAD.
Data Products From Particle Detectors On-Board NOAA's Newest Space Weather Monitor
NASA Astrophysics Data System (ADS)
Kress, B. T.; Rodriguez, J. V.; Onsager, T. G.
2017-12-01
NOAA's newest Geostationary Operational Environmental Satellite, GOES-16, was launched on 19 November 2016. Instrumentation on-board GOES-16 includes the new Space Environment In-Situ Suite (SEISS), which has been collecting data since 8 January 2017. SEISS is composed of five magnetospheric particle sensor units: an electrostatic analyzer for measuring 30 eV - 30 keV ions and electrons (MPS-LO), a high energy particle sensor (MPS-HI) that measures keV to MeV electrons and protons, east and west facing Solar and Galactic Proton Sensor (SGPS) units with 13 differential channels between 1-500 MeV, and an Energetic Heavy Ion Sensor (EHIS) that measures 30 species of heavy ions (He-Ni) in five energy bands in the 10-200 MeV/nuc range. Measurement of low energy magnetospheric particles by MPS-LO and heavy ions by EHIS are new capabilities not previously flown on the GOES system. Real-time data from GOES-16 will support space weather monitoring and first-principles space weather modeling by NOAA's Space Weather Prediction Center (SWPC). Space weather level 2+ data products under development at NOAA's National Centers for Environmental Information (NCEI) include the Solar Energetic Particle (SEP) Event Detection algorithm. Legacy components of the SEP event detection algorithm (currently produced by SWPC) include the Solar Radiation Storm Scales. New components will include, e.g., event fluences. New level 2+ data products also include the SEP event Linear Energy Transfer (LET) Algorithm, for transforming energy spectra from EHIS into LET spectra, and the Density and Temperature Moments and Spacecraft Charging algorithm. The moments and charging algorithm identifies electron and ion signatures of spacecraft surface (frame) charging in the MPS-LO fluxes. Densities and temperatures from MPS-LO will also be used to support a magnetopause crossing detection algorithm. 
The new data products will provide real-time indicators of potential radiation hazards for the satellite community and data for future studies of space weather effects. This presentation will include an overview of these algorithms and examples of their performance during recent co-rotation interaction region (CIR) associated radiation belt enhancements and a solar particle event on 14-15 July 2017.
NASA Astrophysics Data System (ADS)
Lipton, A.; Moncet, J. L.; Lynch, R.; Payne, V.; Alvarado, M. J.
2016-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. For analysis of satellite profiles over multi-decade periods, a concern is that the algorithm could respond inadequately to climate change if it uses a static background as a retrieval constraint, leading to retrievals that underestimate secular changes over extended periods of time and become biased toward an outdated climatology. We assessed the ability of our algorithm to respond appropriately to changes in temperature and water vapor profiles associated with climate change and, in particular, on the impact of using a climatological background in retrievals when the climatology is not static. We simulated a scenario wherein our algorithm processes 30 years of data from CrIS and ATMS (CrIMSS) with a static background based on data from the start of the 30-year period. We performed simulations using products from Coupled Model Intercomparison Project 5 (CMIP5), and in particular the "representative concentration pathways" midrange emissions (RCP4.5) scenario from the GISS-E2-R model. We will present results indicating that regularization using empirical orthogonal functions (EOFs) from a 30-year outdated covariance had a negligible effect on results. 
For temperature, the secular change is represented with high fidelity by the CrIMSS retrievals. For water vapor, an outdated background adds distortion to the secular moistening trend in the troposphere only above 300 mb, where the sensor information content is less than at lower levels. We will also present results illustrating the consistency between retrievals from near-simultaneous AIRS and CrIMSS measurements.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
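The conventional SA loop that RBSA builds on, with Metropolis acceptance and a proposal region that shrinks as the search continues, can be sketched in a few lines. This is an illustrative one-dimensional sketch with an assumed test objective and assumed cooling parameters, not the NASA implementation:

```python
import math
import random

def simulated_annealing(f, lo, hi, iters=5000, t0=1.0, seed=0):
    """Minimize f on [lo, hi]: geometric cooling plus a shrinking neighborhood."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    fx = f(x)
    best_x, best_f = x, fx
    for k in range(iters):
        t = t0 * (0.999 ** k)                      # annealing temperature
        step = (hi - lo) * 0.5 * (1.0 - k / iters)  # shrinking search region
        cand = min(hi, max(lo, x + rng.uniform(-step, step)))
        fc = f(cand)
        # Metropolis criterion: always accept improvements; accept worse
        # moves with probability exp(-delta / T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# Toy objective with a known minimum at x = 3
best_x, best_f = simulated_annealing(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

RBSA would launch many such trajectories recursively rather than the single chain shown here.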
Morrow, Thomas B. [San Antonio, TX]; Kelner, Eric [San Antonio, TX]; Owen, Thomas E. [Helotes, TX]
2008-07-08
A gas energy meter that acquires the data and performs the processing for an inferential determination of one or more gas properties, such as heating value, molecular weight, or density. The meter has a sensor module that acquires temperature, pressure, CO2, and speed of sound data. Data is acquired at two different states of the gas, which eliminates the need to determine the concentration of nitrogen in the gas. A processing module receives this data and uses it to perform a "two-state" inferential algorithm.
Note: Wide-operating-range control for thermoelectric coolers.
Peronio, P; Labanca, I; Ghioni, M; Rech, I
2017-11-01
A new algorithm for controlling the temperature of a thermoelectric cooler is proposed. Unlike a classic proportional-integral-derivative (PID) control, which computes the bias voltage from the temperature error, the proposed algorithm exploits the linear relation that exists between the cold side's temperature and the amount of heat that is removed per unit time. Since this control is based on an existing linear relation, it is insensitive to changes in the operating point that are instead crucial in classic PID control of a non-linear system.
Note: Wide-operating-range control for thermoelectric coolers
NASA Astrophysics Data System (ADS)
Peronio, P.; Labanca, I.; Ghioni, M.; Rech, I.
2017-11-01
A new algorithm for controlling the temperature of a thermoelectric cooler is proposed. Unlike a classic proportional-integral-derivative (PID) control, which computes the bias voltage from the temperature error, the proposed algorithm exploits the linear relation that exists between the cold side's temperature and the amount of heat that is removed per unit time. Since this control is based on an existing linear relation, it is insensitive to changes in the operating point that are instead crucial in classic PID control of a non-linear system.
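The control idea in this note, computing the drive directly from the linear relation between cold-side temperature and the heat removed per unit time rather than integrating a temperature error as a PID does, can be illustrated with a toy feedforward law. All coefficients below (`k`, `q_load`, `volts_per_watt`) are invented for illustration and are not from the paper:

```python
def heat_to_remove(t_cold, t_amb=25.0, k=0.8, q_load=2.0):
    # Assumed toy model: the heat that must be pumped per unit time is
    # linear in the cold-side temperature (the relation the note exploits).
    return q_load + k * (t_amb - t_cold)

def drive_from_setpoint(t_set, t_amb=25.0, k=0.8, q_load=2.0, volts_per_watt=1.5):
    # Feedforward control: invert the linear relation to get the bias
    # voltage for a desired cold-side temperature, with no error integration.
    return volts_per_watt * heat_to_remove(t_set, t_amb, k, q_load)
```

Because the law is an inversion of a (locally) linear plant model, the same gains serve across the whole operating range, which is the insensitivity the note claims over classic PID tuning.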
Ant colony system algorithm for the optimization of beer fermentation control.
Xiao, Jie; Zhou, Ze-Kui; Zhang, Guang-Xin
2004-12-01
Beer fermentation is a dynamic process that must be guided along a temperature profile to obtain the desired results. An ant colony system algorithm was applied to optimize the kinetic model of this process. During a fixed period of fermentation time, a series of different temperature profiles of the mixture were constructed, and an optimal one was chosen. The optimal temperature profile maximized the final ethanol production and minimized the byproduct concentrations and spoilage risk. The satisfactory results were obtained without much computational effort.
NASA Technical Reports Server (NTRS)
Duda, James L.; Barth, Suzanna C
2005-01-01
The VIIRS sensor provides measurements for 22 Environmental Data Records (EDRs) addressing the atmosphere, ocean surface temperature, ocean color, land parameters, aerosols, imaging for clouds and ice, and more. That is, the VIIRS collects visible and infrared radiometric data of the Earth's atmosphere, ocean, and land surfaces. Data types include atmospheric, clouds, Earth radiation budget, land/water and sea surface temperature, ocean color, and low light imagery. This wide scope of measurements calls for the preparation of a multiplicity of Algorithm Theoretical Basis Documents (ATBDs) and, additionally, for intermediate products such as the cloud mask. Furthermore, the VIIRS interacts with three or more other sensors. This paper addresses selected and crucial elements of the process being used to convert and test an immense volume of a maturing and changing science code to the initial operational source code in preparation for launch of NPP. The integrity of the original science code is maintained and enhanced via baseline comparisons when re-hosted, in addition to multiple planned code performance reviews.
NASA Technical Reports Server (NTRS)
Yost, Christopher R.; Minnis, Patrick; Trepte, Qing Z.; Palikonda, Rabindra; Ayers, Jeffrey K.; Spangenberg, Doulas A.
2012-01-01
With geostationary satellite data it is possible to have a continuous record of diurnal cycles of cloud properties for a large portion of the globe. Daytime cloud property retrieval algorithms are typically superior to nighttime algorithms because daytime methods utilize measurements of reflected solar radiation. However, reflected solar radiation is difficult to accurately model for high solar zenith angles where the amount of incident radiation is small. Clear and cloudy scenes can exhibit very small differences in reflected radiation, and threshold-based cloud detection methods have more difficulty setting the proper thresholds for accurate cloud detection. Because top-of-atmosphere radiances are typically more accurately modeled outside the terminator region, information from previous scans can help guide cloud detection near the terminator. This paper presents an algorithm that uses cloud fraction and clear and cloudy infrared brightness temperatures from previous satellite scan times to improve the performance of a threshold-based cloud mask near the terminator. Comparisons of daytime, nighttime, and terminator cloud fraction derived from Geostationary Operational Environmental Satellite (GOES) radiance measurements show that the algorithm greatly reduces the number of false cloud detections and smooths the transition from the daytime to the nighttime cloud detection algorithm. Comparisons with the Geoscience Laser Altimeter System (GLAS) data show that using this algorithm decreases the number of false detections by approximately 20 percentage points.
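A minimal sketch of the idea of reusing clear and cloudy brightness temperatures from previous scans in a threshold test might look like the following; the decision rule and margin are assumptions for illustration, not the GOES algorithm:

```python
def classify_pixel(bt, prev_clear_bt, prev_cloudy_bt, margin=2.0):
    """Classify one infrared pixel near the terminator using remembered
    clear-sky and cloudy brightness temperatures from earlier scans.
    Rule (assumed): a pixel colder than the remembered clear-sky value
    by more than `margin` K is cloudy; otherwise assign it to whichever
    remembered state it is closer to."""
    if bt < prev_clear_bt - margin:
        return "cloudy"
    if abs(bt - prev_cloudy_bt) < abs(bt - prev_clear_bt):
        return "cloudy"
    return "clear"
```

The point of carrying `prev_clear_bt` and `prev_cloudy_bt` forward is that infrared thresholds stay usable when reflectance-based daytime tests fail at high solar zenith angles.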
Siberia snow depth climatology derived from SSM/I data using a combined dynamic and static algorithm
Grippa, M.; Mognard, N.; Le, Toan T.; Josberger, E.G.
2004-01-01
One of the major challenges in determining snow depth (SD) from passive microwave measurements is to take into account the spatiotemporal variations of the snow grain size. Static algorithms based on a constant snow grain size cannot provide accurate estimates of snow pack thickness, particularly over large regions where the snow pack is subjected to large spatial temperature variations. A recent dynamic algorithm that accounts for the dependence of the microwave scattering on the snow grain size has been developed to estimate snow depth from the Special Sensor Microwave/Imager (SSM/I) over the Northern Great Plains (NGP) in the US. In this paper, we develop a combined dynamic and static algorithm to estimate snow depth from 13 years of SSM/I observations over Central Siberia. This region is characterised by extremely cold surface air temperatures and by the presence of permafrost that significantly affects the ground temperature. The dynamic algorithm is implemented to take into account these effects and it yields accurate snow depths early in the winter, when thin snowpacks combine with cold air temperatures to generate rapid crystal growth. However, it is not applicable later in the winter when the grain size growth slows. Combining the dynamic algorithm with a static algorithm, with a temporally constant but spatially varying coefficient, we obtain reasonable snow depth estimates throughout the entire snow season. Validation is carried out by comparing the satellite snow depth monthly averages to monthly climatological data. We show that the location of the snow depth maxima and minima is improved when applying the combined algorithm, since its dynamic portion explicitly incorporates the thermal gradient through the snowpack. The results obtained are presented and evaluated for five different vegetation zones of Central Siberia. Comparison with in situ measurements is also shown and discussed. © 2004 Elsevier Inc. All rights reserved.
Jin, Zhenong; Zhuang, Qianlai; Tan, Zeli; Dukes, Jeffrey S; Zheng, Bangyou; Melillo, Jerry M
2016-09-01
Stresses from heat and drought are expected to increasingly suppress crop yields, but the degree to which current models can represent these effects is uncertain. Here we evaluate the algorithms that determine impacts of heat and drought stress on maize in 16 major maize models by incorporating these algorithms into a standard model, the Agricultural Production Systems sIMulator (APSIM), and running an ensemble of simulations. Although both daily mean temperature and daylight temperature are common choices for forcing heat stress algorithms, current parameterizations in most models favor the use of daylight temperature even though the algorithm was designed for daily mean temperature. Different drought algorithms (i.e., a function of soil water content, of soil water supply to demand ratio, and of actual to potential transpiration ratio) simulated considerably different patterns of water shortage over the growing season, but nonetheless predicted similar decreases in annual yield. Using the selected combination of algorithms, our simulations show that maize yield reduction was more sensitive to drought stress than to heat stress for the US Midwest since the 1980s, and this pattern will continue under future scenarios; the influence of excessive heat will become increasingly prominent by the late 21st century. Our review of algorithms in 16 crop models suggests that the impacts of heat and drought stress on plant yield can be best described by crop models that: (i) incorporate event-based descriptions of heat and drought stress, (ii) consider the effects of nighttime warming, and (iii) coordinate the interactions among multiple stresses. Our study identifies the proficiency with which different model formulations capture the impacts of heat and drought stress on maize biomass and yield production. The framework presented here can be applied to other modeled processes and used to improve yield predictions of other crops with a wide variety of crop models.
© 2016 John Wiley & Sons Ltd.
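The two stress families compared across the 16 models can be caricatured with simple response functions; the piecewise-linear form and the thresholds below are assumptions for illustration only, not any particular model's parameterization:

```python
def heat_stress(t_daily_mean, t_opt=30.0, t_max=40.0):
    """Assumed piecewise-linear heat stress factor (1 = no stress,
    0 = total loss), forced by daily mean temperature as the original
    algorithms were designed to be."""
    if t_daily_mean <= t_opt:
        return 1.0
    if t_daily_mean >= t_max:
        return 0.0
    return (t_max - t_daily_mean) / (t_max - t_opt)

def drought_stress(supply, demand):
    """Soil water supply-to-demand ratio, one of the three drought
    formulations the study compares (capped at 1 = no stress)."""
    return min(1.0, supply / demand) if demand > 0 else 1.0
```

A daily yield multiplier would then be some combination of the two factors; how the interactions are coordinated is exactly point (iii) of the study's recommendations.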
A low cost surface plasmon resonance biosensor using a laser line generator
NASA Astrophysics Data System (ADS)
Chen, Ruipeng; Wang, Manping; Wang, Shun; Liang, Hao; Hu, Xinran; Sun, Xiaohui; Zhu, Juanhua; Ma, Liuzheng; Jiang, Min; Hu, Jiandong; Li, Jianwei
2015-08-01
Because instruments based on a conventional surface plasmon resonance biosensor are extremely expensive, we established a portable and cost-effective surface plasmon resonance biosensing system. It is mainly composed of a laser line generator, a P-polarizer, a customized prism, a microfluidic cell, and a line Charge Coupled Device (CCD) array. A microprocessor PIC24FJ128GA006 with an embedded A/D converter, a communication interface circuit and a photoelectric signal amplifier circuit are used to obtain the weak signals from the biosensing system. Moreover, the line CCD module is checked and optimized on the number of pixels, pixel dimensions, output amplifier and the timing diagram. The micro-flow cell is made of stainless steel with a high thermal conductivity, and a microprocessor-based Proportional-Integral-Derivative (PID) temperature-control algorithm was designed to keep the sample solutions at a constant temperature (25 °C). Correspondingly, the data algorithms designed especially for this biosensing system, including an amplitude-limiting filtering algorithm, data normalization and curve plotting, were programmed efficiently. To validate the performance of the biosensor, ethanol solution samples at concentrations of 5%, 7.5%, 10%, 12.5% and 15% in volumetric fractions were used, respectively. The fitting equation ΔRU = -752987.265 + 570237.348 × RI with an R-square of 0.97344 was established relating delta response units (ΔRUs) to refractive indices (RI). A maximum relative standard deviation (RSD) of 4.8% was obtained.
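A calibration line such as ΔRU = -752987.265 + 570237.348 × RI is an ordinary least-squares fit of response units against refractive index; a self-contained sketch of that fit (toy data, not the paper's measurements):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x via the closed-form
    slope/intercept formulas."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of x,y over variance of x
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx  # intercept passes through the mean point
    return a, b

# Toy calibration points lying exactly on y = 1 + 2x
a, b = linear_fit([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
```

With the instrument's (RI, ΔRU) pairs in place of the toy data, `a` and `b` would reproduce the paper's intercept and slope.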
Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk
2018-04-20
The threat of detection by infrared (IR) sensors is greater than that posed by other sensors such as radar or sonar, because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing the IR signal by adjusting the surface temperature of the object has been conducted. In this paper, we propose an active IR stealth algorithm to synchronize IR signals from the object and the background around the object. The proposed method includes the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature, which will result in a synchronization between the IR signals from the object and the surrounding background by setting the inverse distance weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse distance weighted active IR stealth technique proposed in this study is an effective method for reducing the contrast radiant intensity between the object and background, by up to 32% as compared to the previous method using the CRI determined as the simple signal difference between the object and the background.
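The inverse-distance-weighted contrast that the optimizer drives toward zero can be sketched as follows; the weighting exponent and the toy numbers are assumptions, and the real method weights radiant intensities over a sensed scene rather than two scalar samples:

```python
def idw_contrast(obj_intensity, bg_intensities, distances, p=2.0):
    """Contrast radiant intensity between an object and its background,
    with background samples combined by inverse-distance weighting so
    nearer background patches count for more."""
    weights = [1.0 / d ** p for d in distances]
    bg = sum(w * b for w, b in zip(weights, bg_intensities)) / sum(weights)
    return obj_intensity - bg
```

The stealth controller would then search for the surface temperature whose emitted intensity makes this quantity (approximately) zero; the abstract's CRI = 0 condition is exactly that root.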
Derivation of cloud-free-region atmospheric motion vectors from FY-2E thermal infrared imagery
NASA Astrophysics Data System (ADS)
Wang, Zhenhui; Sui, Xinxiu; Zhang, Qing; Yang, Lu; Zhao, Hang; Tang, Min; Zhan, Yizhe; Zhang, Zhiguo
2017-02-01
The operational cloud-motion tracking technique fails to retrieve atmospheric motion vectors (AMVs) in areas lacking cloud; and while water vapor shown in water vapor imagery can be used, the heights assigned to the retrieved AMVs are mostly in the upper troposphere. As the noise-equivalent temperature difference (NEdT) performance of FY-2E split window (10.3-11.5 μm, 11.6-12.8 μm) channels has been improved, the weak signals representing the spatial texture of water vapor and aerosols in cloud-free areas can be strengthened with algorithms based on the difference principle, and applied in calculating AMVs in the lower troposphere. This paper is a preliminary summary for this purpose, in which the principles and algorithm schemes for the temporal difference, split window difference and second-order difference (SD) methods are introduced. Results from simulation and case experiments are reported in order to verify and evaluate the methods, based on comparisons between the retrievals and the "truth". The results show that all three algorithms, though not perfect in some cases, generally work well. Moreover, the SD method appears to be the best in suppressing the surface temperature influence and clarifying the spatial texture of water vapor and aerosols. The accuracy with respect to NCEP 800 hPa reanalysis data was found to be acceptable, as compared with the accuracy of the cloud motion vectors.
A Microwave Technique for Mapping Ice Temperature in the Arctic Seasonal Sea Ice Zone
NASA Technical Reports Server (NTRS)
St.Germain, Karen M.; Cavalieri, Donald J.
1997-01-01
A technique for deriving ice temperature in the Arctic seasonal sea ice zone from passive microwave radiances has been developed. The algorithm operates on brightness temperatures derived from the Special Sensor Microwave/Imager (SSM/I) and uses ice concentration and type from a previously developed thin ice algorithm to estimate the surface emissivity. Comparisons of the microwave derived temperatures with estimates derived from infrared imagery of the Bering Strait yield a correlation coefficient of 0.93 and an RMS difference of 2.1 K when coastal and cloud contaminated pixels are removed. SSM/I temperatures were also compared with a time series of air temperature observations from Gambell on St. Lawrence Island and from Point Barrow, AK weather stations. These comparisons indicate that the relationship between the air temperature and the ice temperature depends on ice type.
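The emissivity-based inversion from microwave brightness temperature to physical ice temperature can be sketched under a simple linear mixing assumption; the emissivities, open-water temperature and the mixing form below are illustrative values, not the SSM/I algorithm's:

```python
def ice_temperature(tb, c_ice, e_ice=0.95, t_water=271.35, e_water=0.6):
    """Invert a concentration-weighted brightness temperature:
    tb = c_ice * e_ice * T_ice + (1 - c_ice) * e_water * t_water
    (assumed Rayleigh-Jeans mixing) for the ice temperature T_ice,
    given the ice concentration c_ice from a concentration algorithm."""
    tb_water = e_water * t_water          # open-water contribution per unit area
    return (tb - (1.0 - c_ice) * tb_water) / (c_ice * e_ice)
```

This is why the abstract's approach needs ice concentration and type first: they fix the effective emissivity that converts radiance into a physical temperature.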
NASA Astrophysics Data System (ADS)
You, Weilong; Pei, Binbin; Sun, Ke; Zhang, Lei; Yang, Heng; Li, Xinxin
2017-10-01
This paper presents an oven controlled N++ [1 0 0] length-extensional mode silicon resonator, with a lookup-table based control algorithm. The temperature coefficient of resonant frequency (TCF) of the N++ doped resonator is nonlinear, and there is a turnover temperature point at which the TCF is equal to zero. The resonator is maintained at the turnover point by Joule heating; this temperature is a little higher than the upper limit of the industrial temperature range. It is demonstrated that the control algorithm based on the thermoresistor on the substrate and the lookup table for heating voltage versus chip temperature is sufficiently accurate to achieve a frequency stability of ±0.5 ppm over the industrial temperature range. Because only two leads are required for electrical heating and piezoresistive sensing, the power required for heating of this resonator can potentially be lower than that of oscillators with a closed-loop oven control algorithm. It is also shown that the phase noise can be suppressed at the turnover temperature because of the very low value of the TCF, which justifies the usage of the heating voltage as the excitation voltage of the Wheatstone half-bridge.
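A lookup-table control of the kind described, mapping the measured chip temperature to a heating voltage by interpolation between calibration points, can be sketched as follows; the table entries are invented for illustration:

```python
from bisect import bisect_left

# (chip temperature in deg C, heating voltage in V) - assumed calibration points
TABLE = [(-40.0, 4.8), (0.0, 3.9), (25.0, 3.2), (85.0, 1.1)]

def heating_voltage(t_chip):
    """Open-loop lookup: linearly interpolate the calibrated voltage for
    the measured chip temperature, clamping outside the table range."""
    ts = [t for t, _ in TABLE]
    vs = [v for _, v in TABLE]
    if t_chip <= ts[0]:
        return vs[0]
    if t_chip >= ts[-1]:
        return vs[-1]
    i = bisect_left(ts, t_chip)
    frac = (t_chip - ts[i - 1]) / (ts[i] - ts[i - 1])
    return vs[i - 1] + frac * (vs[i] - vs[i - 1])
```

Because the table is indexed by a separate substrate thermoresistor, no feedback loop around the resonator itself is needed, which is what frees the two leads for heating and sensing.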
AMSR2 Soil Moisture Product Validation
NASA Technical Reports Server (NTRS)
Bindlish, R.; Jackson, T.; Cosh, M.; Koike, T.; Fuiji, X.; de Jeu, R.; Chan, S.; Asanuma, J.; Berg, A.; Bosch, D.;
2017-01-01
The Advanced Microwave Scanning Radiometer 2 (AMSR2) is part of the Global Change Observation Mission-Water (GCOM-W) mission. AMSR2 fills the void left by the loss of the Advanced Microwave Scanning Radiometer Earth Observing System (AMSR-E) after almost 10 years. Both missions provide brightness temperature observations that are used to retrieve soil moisture. Merging AMSR-E and AMSR2 will help build a consistent long-term dataset. Before tackling the integration of AMSR-E and AMSR2 it is necessary to conduct a thorough validation and assessment of the AMSR2 soil moisture products. This study focuses on validation of the AMSR2 soil moisture products by comparison with in situ reference data from a set of core validation sites. Three products that rely on different algorithms were evaluated: the JAXA Soil Moisture Algorithm (JAXA), the Land Parameter Retrieval Model (LPRM), and the Single Channel Algorithm (SCA). Results indicate that overall the SCA has the best performance based upon the metrics considered.
Wu, Tingzhu; Lin, Yue; Zheng, Lili; Guo, Ziquan; Xu, Jianxing; Liang, Shijie; Liu, Zhuguagn; Lu, Yijun; Shih, Tien-Mo; Chen, Zhong
2018-02-19
An optimal design of light-emitting diode (LED) lighting that benefits both the photosynthesis performance of plants and the visual health of human eyes has drawn considerable attention. In the present study, we have developed a multi-color driving algorithm that serves as a liaison between desired spectral power distributions and pulse-width-modulation duty cycles. With the aid of this algorithm, our multi-color plant-growth light sources can optimize correlated-color temperature (CCT) and color rendering index (CRI) such that photosynthetic luminous efficacy of radiation (PLER) is maximized regardless of the number of LEDs and the type of photosynthetic action spectrum (PAS). In order to illustrate the accuracies of the proposed algorithm and the practicalities of our plant-growth light sources, we choose six color LEDs and the German PAS for experiments. Finally, our study can help provide a useful guide to improve light qualities in plant factories, in which long-term co-inhabitance of plants and human beings is required.
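Under the usual assumption that each channel's radiant output scales linearly with its PWM duty cycle, the last step of such a driving algorithm, turning desired per-channel powers into duty cycles, reduces to a clipped ratio. This is an illustrative sketch of that linear-dimming step only, not the authors' full spectral optimization:

```python
def duty_cycles(target_powers, full_duty_powers):
    """Map desired per-channel radiant powers to PWM duty cycles,
    assuming output is proportional to duty (linear dimming) and
    clamping to the physically realizable range [0, 1]."""
    return [min(1.0, max(0.0, t / f))
            for t, f in zip(target_powers, full_duty_powers)]
```

The upstream optimizer would choose `target_powers` so that the summed channel spectra match the desired spectral power distribution for the target CCT/CRI/PLER.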
NASA Astrophysics Data System (ADS)
Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.
2016-07-01
In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desired quality has been met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent in thin film fabrication. The optimization model's engine has been developed using Java. The model is developed based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters: RF power, deposition time, oxygen flow rate and substrate temperature. The results have turned out to be promising, and it can be concluded that the performance of the model is satisfactory for this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them with various sets of data.
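A minimal one-dimensional Gravitational Search Algorithm, with fitness-derived masses, a decaying gravitational constant and stochastic attraction between agents, can be sketched as below. All parameter values and the toy objective are assumptions for illustration; the study's engine optimizes four deposition parameters rather than one variable:

```python
import random

def gsa_minimize(f, lo, hi, n_agents=20, iters=200, g0=100.0, seed=1):
    """Minimal GSA sketch: agents attract each other with forces weighted
    by masses derived from fitness (better fitness -> heavier agent)."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for _ in range(n_agents)]
    v = [0.0] * n_agents
    best_x, best_f = x[0], float("inf")
    for t in range(iters):
        fit = [f(xi) for xi in x]
        best, worst = min(fit), max(fit)
        if best < best_f:
            best_f, best_x = best, x[fit.index(best)]
        # Normalized masses: lower objective value -> larger mass
        m = [(worst - fi) / (worst - best + 1e-12) for fi in fit]
        s = sum(m) + 1e-12
        m = [mi / s for mi in m]
        g = g0 * (1.0 - t / iters)  # gravitational "constant" decays over time
        for i in range(n_agents):
            force = 0.0
            for j in range(n_agents):
                if i != j:
                    r = abs(x[i] - x[j]) + 1e-12
                    force += rng.random() * g * m[i] * m[j] * (x[j] - x[i]) / r
            a = force / (m[i] + 1e-12)          # Newton's second law
            v[i] = rng.random() * v[i] + a       # stochastic velocity update
            x[i] = min(hi, max(lo, x[i] + v[i]))
    return best_x, best_f
```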
Zhang, Pan; Moore, Cristopher
2014-01-01
Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions, with almost the same modularity, that are poorly correlated with each other. It can also produce illusory "communities" in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian at finite temperature and using an efficient belief propagation algorithm to obtain the consensus of many partitions with high modularity, rather than looking for a single partition that maximizes it. We show analytically and numerically that the proposed algorithm works all of the way down to the detectability transition in networks generated by the stochastic block model. It also performs well on real-world networks, revealing large communities in some networks where previous work has claimed no communities exist. Finally we show that by applying our algorithm recursively, subdividing communities until no statistically significant subcommunities can be found, we can detect hierarchical structure in real-world networks more efficiently than previous methods. PMID:25489096
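The modularity used as a Hamiltonian here is Newman's Q: the fraction of edges falling inside communities minus the fraction expected under the configuration model. A small self-contained computation:

```python
def modularity(edges, comm):
    """Newman modularity Q of a partition `comm` (node -> community id)
    for an undirected graph given as an edge list."""
    m = len(edges)
    # Observed fraction of edges with both endpoints in the same community
    e_in = sum(1 for u, v in edges if comm[u] == comm[v]) / m
    # Degree totals per community, for the configuration-model expectation
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    tot = {}
    for node, d in deg.items():
        tot[comm[node]] = tot.get(comm[node], 0) + d
    expected = sum((d / (2 * m)) ** 2 for d in tot.values())
    return e_in - expected

# Two triangles joined by a single bridge edge, split into two communities
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
comm = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

The paper's point is that rather than maximizing this Q directly, one should treat -Q as an energy and sample partitions at finite temperature, taking a consensus.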
A Comparison of Techniques for Scheduling Earth-Observing Satellites
NASA Technical Reports Server (NTRS)
Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna
2004-01-01
Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm with stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically-sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperformed a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of search.
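The permutation-plus-deterministic-decoder representation can be sketched with a toy single-resource model; the slot/capacity abstraction below is invented for illustration and is far simpler than the EOS constraint set:

```python
def decode(permutation, requests, capacity):
    """Greedy deterministic decoder: walk the permutation, grant each
    observation request its desired time slot if that slot still has
    capacity, and silently discard requests that no longer fit."""
    used = {}          # slot -> number of observations already placed
    scheduled = []
    for req_id in permutation:
        slot = requests[req_id]
        if used.get(slot, 0) < capacity:
            used[slot] = used.get(slot, 0) + 1
            scheduled.append(req_id)
    return scheduled

# Four requests wanting slots [0, 0, 0, 1]; each slot holds 2 observations
schedule = decode([3, 0, 1, 2], [0, 0, 0, 1], 2)
```

The search algorithms in the comparison never touch the constraints directly; they only mutate or resample the permutation, and the decoder guarantees feasibility.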
Muñoz, Mario A; Smith-Miles, Kate A
2017-01-01
This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.
Optimizing LX-17 Thermal Decomposition Model Parameters with Evolutionary Algorithms
NASA Astrophysics Data System (ADS)
Moore, Jason; McClelland, Matthew; Tarver, Craig; Hsu, Peter; Springer, H. Keo
2017-06-01
We investigate and model the cook-off behavior of LX-17 because this knowledge is critical to understanding system response in abnormal thermal environments. Thermal decomposition of LX-17 has been explored in conventional ODTX (One-Dimensional Time-to-eXplosion), PODTX (ODTX with pressure-measurement), TGA (thermogravimetric analysis), and DSC (differential scanning calorimetry) experiments using varied temperature profiles. These experimental data are the basis for developing multiple reaction schemes with coupled mechanics in LLNL's multi-physics hydrocode, ALE3D (Arbitrary Lagrangian-Eulerian code in 2D and 3D). We employ evolutionary algorithms to optimize reaction rate parameters on high performance computing clusters. Once experimentally validated, this model will be scalable to a number of applications involving LX-17 and can be used to develop more sophisticated experimental methods. Furthermore, the optimization methodology developed herein should be applicable to other high explosive materials. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC.
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Şahin, Mehmet
2015-07-01
The forecasting of drought based on cumulative influence of rainfall, temperature and evaporation is greatly beneficial for mitigating adverse consequences on water-sensitive sectors such as agriculture, ecosystems, wildlife, tourism, recreation, crop health and hydrologic engineering. Predictive models of drought indices help in assessing water scarcity situations, drought identification and severity characterization. In this paper, we tested the feasibility of the Artificial Neural Network (ANN) as a data-driven model for predicting the monthly Standardized Precipitation and Evapotranspiration Index (SPEI) for eight candidate stations in eastern Australia using predictive variable data from 1915 to 2005 (training) and simulated data for the period 2006-2012. The predictive variables were: monthly rainfall totals, mean temperature, minimum temperature, maximum temperature and evapotranspiration, which were supplemented by large-scale climate indices (Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and Indian Ocean Dipole) and the Sea Surface Temperatures (Nino 3.0, 3.4 and 4.0). A total of 30 ANN models were developed with 3-layer ANN networks. To determine the best combination of learning algorithms, hidden transfer and output functions of the optimum model, the Levenberg-Marquardt and Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton backpropagation algorithms were utilized to train the network, with tangent and logarithmic sigmoid equations used as the activation functions and the linear, logarithmic and tangent sigmoid equations used as the output function. The best ANN architecture had 18 input neurons, 43 hidden neurons and 1 output neuron, trained using the Levenberg-Marquardt learning algorithm with the tangent sigmoid equation as the activation and output functions.
An evaluation of the model performance based on statistical rules yielded time-averaged Coefficient of Determination, Root Mean Squared Error and Mean Absolute Error values ranging from 0.9945-0.9990, 0.0466-0.1117, and 0.0013-0.0130, respectively, for individual stations. Also, the Willmott's Index of Agreement and the Nash-Sutcliffe Coefficient of Efficiency were between 0.932-0.959 and 0.977-0.998, respectively. When checked for the severity (S), duration (D) and peak intensity (I) of drought events determined from the simulated and observed SPEI, differences in drought parameters ranged from −1.41% to 0.64%, −2.17% to 1.92% and −3.21% to 1.21%, respectively. Based on performance evaluation measures, we aver that the Artificial Neural Network model is a useful data-driven tool for forecasting monthly SPEI and its drought-related properties in the region of study.
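The Nash-Sutcliffe Coefficient of Efficiency and Willmott's Index of Agreement used in this evaluation are straightforward to compute from paired observed and simulated series:

```python
def nash_sutcliffe(obs, sim):
    """NSE: 1 minus squared-error sum over variance of the observations.
    1 = perfect; 0 = no better than predicting the observed mean."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - num / den

def willmott(obs, sim):
    """Willmott's index of agreement d, bounded in [0, 1], with the
    denominator built from deviations about the observed mean."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - mean_o) + abs(o - mean_o)) ** 2
              for o, s in zip(obs, sim))
    return 1.0 - num / den
```

Applied to the observed and ANN-simulated SPEI series, these yield the 0.977-0.998 and 0.932-0.959 ranges quoted above.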
Performance seeking control: Program overview and future directions
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B.; Orme, John S.
1993-01-01
A flight test evaluation of the performance-seeking control (PSC) algorithm on the NASA F-15 highly integrated digital electronic control research aircraft was conducted for single-engine operation at subsonic and supersonic speeds. The model-based PSC system was developed with three optimization modes: minimum fuel flow at constant thrust, minimum turbine temperature at constant thrust, and maximum thrust at maximum dry and full afterburner throttle settings. Subsonic and supersonic flight testing was conducted at the NASA Dryden Flight Research Facility, covering the three PSC optimization modes over the full throttle range. Flight results show substantial benefits. In the maximum thrust mode, thrust increased up to 15 percent at subsonic and 10 percent at supersonic flight conditions. The minimum fan turbine inlet temperature mode reduced temperatures by more than 100 F at high altitudes. The minimum fuel flow mode decreased fuel consumption up to 2 percent in the subsonic regime and almost 10 percent supersonically. These results demonstrate that PSC technology can benefit the next generation of fighter or transport aircraft. NASA Dryden is developing an adaptive aircraft performance technology system that is measurement based and uses feedback to ensure optimality. This program will address the technical weaknesses identified in the PSC program and will increase performance gains.
Digibaro pressure instrument onboard the Phoenix Lander
NASA Astrophysics Data System (ADS)
Harri, A.-M.; Polkko, J.; Kahanpää, H. H.; Schmidt, W.; Genzer, M. M.; Haukka, H.; Savijarvi, H.; Kauhanen, J.
2009-04-01
The Phoenix Lander landed successfully in the Martian northern polar region. The mission is part of the National Aeronautics and Space Administration's (NASA's) Scout program. Pressure observations onboard the Phoenix lander were performed by an FMI (Finnish Meteorological Institute) instrument, based on a silicon diaphragm sensor head manufactured by Vaisala Inc., combined with MDA data processing electronics. The pressure instrument performed successfully throughout the Phoenix mission. The pressure instrument had three pressure sensor heads. One of these was the primary sensor head and the other two were used for monitoring the condition of the primary sensor head during the mission. During the mission the primary sensor was read with a sampling interval of 2 s and the other two were read less frequently as a check of instrument health. The pressure sensor system had a real-time data-processing and calibration algorithm that allowed the removal of temperature-dependent calibration effects. In the same manner as for the temperature sensor, a total of 256 data records (8.53 min) were buffered and could either be stored at full resolution or processed to provide mean, standard deviation, maximum and minimum values for storage on the Phoenix Lander's Meteorological (MET) unit. The time constant was approximately 3 s due to locational constraints and dust filtering requirements. Using algorithms compensating for the time constant effect, the temporal resolution was good enough to detect pressure drops associated with the passage of nearby dust devils.
Sensitivity of blackbody effective emissivity to wavelength and temperature: By genetic algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ejigu, E. K.; Liedberg, H. G.
A variable-temperature blackbody (VTBB) is used to calibrate an infrared radiation thermometer (pyrometer). The effective emissivity (ε_eff) of a VTBB depends on temperature and wavelength in addition to the geometry of the VTBB. In the calibration process the effective emissivity is often assumed to be constant within the wavelength and temperature range. There are practical situations where the sensitivity of the effective emissivity needs to be known and a correction has to be applied. We present a method using a genetic algorithm to investigate the sensitivity of the effective emissivity to wavelength and temperature variation. Two MATLAB® programs are generated: the first to model the radiance temperature calculation and the second to connect the model to the genetic algorithm optimization toolbox. The effective emissivity parameter is taken as a chromosome and optimized at each wavelength and temperature point. The difference between the contact temperature (read from a platinum resistance thermometer or liquid-in-glass thermometer) and the radiance temperature (calculated from the ε_eff values) is used as an objective function, from which merit values are calculated and best-fit ε_eff values selected. The best-fit ε_eff values obtained as a solution show how sensitive they are to temperature and wavelength variation. Uncertainty components that arise from wavelength and temperature variation are determined based on the sensitivity analysis. Numerical examples are considered for illustration.
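As a rough illustration of the chromosome-and-merit idea (not the paper's radiance model: the grey-body relation, the emissivity bounds and the GA settings below are invented for the sketch), a scalar genetic search for a best-fit emissivity might look like:

```python
import random

# Toy stand-in for the radiance-temperature model; TRUE_EPS and the
# quarter-power relation are illustrative assumptions only.
TRUE_EPS = 0.97

def radiance_temp(eps, t_contact=500.0):
    return t_contact * (eps / TRUE_EPS) ** 0.25

def fitness(eps, t_contact=500.0):
    # Merit value: smaller |contact - radiance| difference is better.
    return -abs(t_contact - radiance_temp(eps, t_contact))

def genetic_search(pop_size=40, generations=120, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(0.5, 1.0) for _ in range(pop_size)]  # chromosomes
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                 # arithmetic crossover
            child += rng.gauss(0.0, 0.01)         # Gaussian mutation
            children.append(min(1.0, max(0.5, child)))
        pop = parents + children
    return max(pop, key=fitness)
```

With the toy model above, the merit function peaks where the radiance temperature matches the contact temperature, so the search recovers the assumed emissivity; the paper applies the same selection at each wavelength and temperature point with a full radiance model.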
Development of model reference adaptive control theory for electric power plant control applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mabius, L.E.
1982-09-15
The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., model-following control law) and Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady-state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.
Microcomputer-based Peltier thermostat for precision optical radiation measurements
NASA Astrophysics Data System (ADS)
Zhu, Xiaosong; Krochmann, Eike; Chen, Jiashu
1992-03-01
We have developed a microcomputer-based thermostat for a light measuring head in precision optical radiation measurements. This thermostat consists of a single-chip microcomputer, a digital-to-analog converter, a liquid crystal display, a power operational amplifier, and a Peltier element (thermoelectric cooler). The Peltier element keeps the temperature of the photometer head at 20±0.1 °C in the ambient temperature range from -20 to 60 °C. A control algorithm which combines the "Bang-Bang" mode and a proportional-plus-integral-plus-derivative mode is used to achieve fast and smooth thermostatic performance. This thermostat is effective, inexpensive, and easy to adjust. Several applications of the Peltier thermostat are mentioned.
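A hybrid of this kind can be sketched as follows: bang-bang drive far from the 20 °C set point, PID inside a narrow band around it. The gains, band width and first-order thermal plant are illustrative assumptions, not the instrument's tuned values:

```python
def hybrid_controller(error, integral, prev_error, dt,
                      band=2.0, kp=8.0, ki=0.5, kd=1.0, u_max=100.0):
    """Bang-bang outside a +/-band (deg C) around the set point,
    PID inside it. Returns (drive, updated integral)."""
    if error > band:        # much too cold: full heating
        return u_max, integral
    if error < -band:       # much too hot: full cooling
        return -u_max, integral
    integral += error * dt  # PID region: accumulate integral term
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return max(-u_max, min(u_max, u)), integral

def simulate(setpoint=20.0, ambient=40.0, steps=3000, dt=0.1):
    """Close the loop around a first-order thermal plant (illustrative
    dynamics: Peltier drive u moves the head, ambient leaks heat back)."""
    temp, integral = ambient, 0.0
    prev_e = setpoint - temp
    for _ in range(steps):
        e = setpoint - temp
        u, integral = hybrid_controller(e, integral, prev_e, dt)
        prev_e = e
        temp += dt * (0.02 * u + 0.05 * (ambient - temp))
    return temp
```

Driving at full power until the error enters the band brings the head near temperature quickly; the PID stage then removes the residual offset smoothly, which is the "fast and smooth" behavior described above.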
Calibration of Passive Microwave Polarimeters that Use Hybrid Coupler-Based Correlators
NASA Technical Reports Server (NTRS)
Piepmeier, J. R.
2003-01-01
Four calibration algorithms are studied for microwave polarimeters that use hybrid coupler-based correlators: 1) conventional two-look of hot and cold sources; 2) three looks of hot and cold source combinations; 3) two-look with a correlated source; and 4) four-look, combining methods 2 and 3. The systematic errors are found to depend on the polarimeter component parameters and the accuracy of the calibration noise temperatures. A case-study radiometer in four different remote sensing scenarios was considered in light of these results. Applications for ocean surface salinity, ocean surface winds, and soil moisture were found to be sensitive to different systematic errors. Finally, a standard uncertainty analysis was performed on the four-look calibration algorithm, which was found to be most sensitive to the correlated calibration source.
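Method 1, the conventional two-look, reduces to a straight-line fit through the two reference looks. A minimal sketch (the counts and noise temperatures in the example are hypothetical):

```python
def two_look_calibration(c_hot, c_cold, t_hot, t_cold, c_scene):
    """Map radiometer output counts to brightness temperature using a
    linear fit through the hot and cold reference looks."""
    gain = (t_hot - t_cold) / (c_hot - c_cold)   # kelvin per count
    return t_cold + gain * (c_scene - c_cold)
```

For example, with a 300 K hot load reading 1000 counts and a 77 K cold load reading 400 counts, a scene reading of 700 counts calibrates to 188.5 K; errors in the assumed reference temperatures propagate directly into the retrieved scene temperature, which is the sensitivity the study quantifies.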
NASA Technical Reports Server (NTRS)
Burns, B. A.; Cavalieri, D. J.; Keller, M. R.
1986-01-01
Active and passive microwave data collected during the 1984 summer Marginal Ice Zone Experiment in the Fram Strait (MIZEX 84) are used to compare ice concentration estimates derived from synthetic aperture radar (SAR) data to those obtained from passive microwave imagery at several frequencies. The comparison is carried out to evaluate SAR performance against the more established passive microwave technique, and to investigate discrepancies in terms of how ice surface conditions, imaging geometry, and choice of algorithm parameters affect each sensor. Active and passive estimates of ice concentration agree on average to within 12%. Estimates from the multichannel passive microwave data show best agreement with the SAR estimates because the multichannel algorithm effectively accounts for the range in ice floe brightness temperatures observed in the MIZ.
The application of immune genetic algorithm in main steam temperature of PID control of BP network
NASA Astrophysics Data System (ADS)
Li, Han; Zhen-yu, Zhang
In order to overcome the uncertainty, large delay, large inertia and nonlinearity of the main steam temperature controlled object in a power plant, a neural network intelligent PID control system based on an immune genetic algorithm and a BP neural network is designed. The immune genetic algorithm's global search ability and good convergence are used to optimize the weights of the neural network, while the PID parameters are adjusted using the BP network. The simulation results show that the system is superior to a conventional PID control system in control quality and robustness.
Analysis of breast thermograms for ROI extraction and description using mathematical morphology
NASA Astrophysics Data System (ADS)
Zermeño-Loreto, O. A.; Toxqui-Quitl, C.; Orozco Guillén, E. E.; Padilla-Vivanco, A.
2017-09-01
The detection of a temperature increase or hot spots in breast thermograms can be related to the high metabolic activity of diseased cells. Image processing algorithms are proposed that mainly seek temperature increases above 3°C, which have a high probability of being malignant. A derivative operator is also used to highlight breast regions of interest (ROI). In order to determine a medical alert, a feature descriptor of the ROI is constructed using its maximum temperature, maximum increase of temperature, sector/quadrant position in the breast, and area. The proposed algorithms are tested on an in-house database and a public database for mastology research.
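The hot-spot step above (threshold the temperature rise, then clean the result with mathematical morphology) can be sketched with NumPy. The 3 × 3 structuring element and the explicit baseline argument are assumptions for the illustration:

```python
import numpy as np

def hot_spot_mask(thermogram, baseline, delta=3.0):
    """Flag pixels exceeding the healthy-tissue baseline by more than
    `delta` deg C, then apply a 3x3 morphological opening (erosion
    followed by dilation) to drop isolated noise pixels."""
    mask = thermogram > (baseline + delta)
    # Erosion: a pixel survives only if its full 3x3 neighborhood is hot.
    eroded = np.zeros_like(mask)
    eroded[1:-1, 1:-1] = (
        mask[1:-1, 1:-1]
        & mask[:-2, 1:-1] & mask[2:, 1:-1]
        & mask[1:-1, :-2] & mask[1:-1, 2:]
        & mask[:-2, :-2] & mask[:-2, 2:]
        & mask[2:, :-2] & mask[2:, 2:]
    )
    # Dilation: grow surviving pixels back by the same 3x3 element.
    padded = np.pad(eroded, 1)
    dilated = np.zeros_like(mask)
    for di in range(3):
        for dj in range(3):
            dilated |= padded[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return dilated
```

The opening removes isolated above-threshold pixels while preserving connected hot regions at least as large as the structuring element, so only coherent hot spots survive into the ROI descriptor.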
Jiang, Feng; Bai, Jingfeng; Chen, Yazhu
2005-08-01
Small-scale intelligent medical instruments have attracted great attention in the field of biomedical engineering, and LabVIEW (Laboratory Virtual Instrument Engineering Workbench) provides a convenient environment for this application due to its inherent advantages. The principle and system structure of the hyperthermia instrument are presented. Type T thermocouples are employed as thermotransducers; their amplifier consists of two stages, providing built-in ice point compensation and thus improving working stability over temperature. Control signals produced by a specially designed circuit drive the programmable counter/timer 8254 chip to generate a PWM (pulse width modulation) wave, which is used as the ultrasound radiation energy control signal. Subroutine design topics such as the inner-tissue real-time feedback temperature control algorithm and water temperature control in the ultrasound applicator are also described. In the cancer tissue temperature control subroutine, the authors apply new improvements to the PID (Proportional Integral Differential) algorithm according to the specific demands of the system and achieve strict temperature control of the target tissue region. The system design and PID algorithm improvements have experimentally proved to be reliable and effective, meeting the requirements of the hyperthermia system.
Neural network cloud top pressure and height for MODIS
NASA Astrophysics Data System (ADS)
Håkansson, Nina; Adok, Claudia; Thoss, Anke; Scheirer, Ronald; Hörnquist, Sara
2018-06-01
Cloud top height retrieval from imager instruments is important for nowcasting and for satellite climate data records. A neural network approach for cloud top height retrieval from the imager instrument MODIS (Moderate Resolution Imaging Spectroradiometer) is presented. The neural networks are trained using cloud top layer pressure data from the CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) dataset. Results are compared with two operational reference algorithms for cloud top height: the MODIS Collection 6 Level 2 height product and the cloud top temperature and height algorithm in the 2014 version of the NWC SAF (EUMETSAT (European Organization for the Exploitation of Meteorological Satellites) Satellite Application Facility on Support to Nowcasting and Very Short Range Forecasting) PPS (Polar Platform System). All three techniques are evaluated using both CALIOP and CPR (Cloud Profiling Radar for CloudSat (CLOUD SATellite)) height. Instruments like AVHRR (Advanced Very High Resolution Radiometer) and VIIRS (Visible Infrared Imaging Radiometer Suite) contain fewer channels useful for cloud top height retrievals than MODIS; therefore, several different neural networks are investigated to test how infrared channel selection influences retrieval performance. A network with only the channels available on the AVHRR1 instrument is also trained and evaluated. To examine the contribution of different variables, networks with fewer variables are trained. It is shown that variables containing imager information for neighboring pixels are very important. The error distributions of the involved cloud top height algorithms are found to be non-Gaussian. Different descriptive statistical measures are presented, and it is exemplified that bias and SD (standard deviation) can be misleading for non-Gaussian distributions. 
The median and mode are found to better describe the central tendency of the error distributions, and the IQR (interquartile range) and MAE (mean absolute error) are found to give the most useful information on the spread of the errors. For all descriptive statistics presented (MAE, IQR, RMSE (root mean square error), SD, mode, median, bias and the percentage of absolute errors above 0.25, 0.5, 1 and 2 km), the neural network performs better than the reference algorithms, whether validated with CALIOP or with CPR (CloudSat). The neural networks using the brightness temperatures at 11 and 12 µm show at least 32% (or 623 m) lower MAE compared to the two operational reference algorithms when validated with CALIOP height. Validation with CPR (CloudSat) height gives at least a 25% (or 430 m) reduction of MAE.
Development of numerical techniques for simulation of magnetogasdynamics and hypersonic chemistry
NASA Astrophysics Data System (ADS)
Damevin, Henri-Marie
Magnetogasdynamics, the science concerned with the mutual interaction between electromagnetic field and flow of electrically conducting gas, offers promising advances in flow control and propulsion of future hypersonic vehicles. Numerical simulations are essential for understanding phenomena, and for research and development. The current dissertation is devoted to the development and validation of numerical algorithms for the solution of multidimensional magnetogasdynamic equations and the simulation of hypersonic high-temperature effects. Governing equations are derived, based on classical magnetogasdynamic assumptions. Two sets of equations are considered, namely the full equations and equations in the low magnetic Reynolds number approximation. Equations are expressed in a suitable formulation for discretization by finite differences in a computational space. For the full equations, Gauss law for magnetism is enforced using Powell's methodology. The time integration method is a four-stage modified Runge-Kutta scheme, amended with a Total Variation Diminishing model in a postprocessing stage. The eigensystem, required for the Total Variation Diminishing scheme, is derived in generalized three-dimensional coordinate system. For the simulation of hypersonic high-temperature effects, two chemical models are utilized, namely a nonequilibrium model and an equilibrium model. A loosely coupled approach is implemented to communicate between the magnetogasdynamic equations and the chemical models. The nonequilibrium model is a one-temperature, five-species, seventeen-reaction model solved by an implicit flux-vector splitting scheme. The chemical equilibrium model computes thermodynamics properties using curve fit procedures. Selected results are provided, which explore the different features of the numerical algorithms. The shock-capturing properties are validated for shock-tube simulations using numerical solutions reported in the literature. 
The computations of superfast flows over corners and in convergent channels demonstrate the performance of the algorithm in multiple dimensions. The implementation of diffusion terms is validated by solving the magnetic Rayleigh problem and the Hartmann problem, for which analytical solutions are available. Predictions of blunt-body-type flows are investigated and compared with numerical solutions reported in the literature. The effectiveness of the chemical models for hypersonic flow over a blunt body is examined in various flow conditions. It is shown that the proposed schemes perform well in a variety of test cases, though some limitations have been identified.
Magnetocaloric effect in Sr2CrIrO6 double perovskite: Monte Carlo simulation
NASA Astrophysics Data System (ADS)
El Rhazouani, O.; Slassi, A.; Ziat, Y.; Benyoussef, A.
2017-05-01
Monte Carlo simulation (MCS) combined with the Metropolis algorithm has been performed to study the magnetocaloric effect (MCE) in the promising double perovskite (DP) Sr2CrIrO6, which has not so far been synthesized. This paper presents the global magneto-thermodynamic behavior of the Sr2CrIrO6 compound in terms of the MCE and discusses the behavior in comparison to other DPs. The thermal dependence of the magnetization has been investigated for different values of the reduced external magnetic field. The thermal magnetic entropy and its change have been obtained. The adiabatic temperature change and the relative cooling power have been established. Through the obtained results, Sr2CrIrO6 DP could have potential applications for magnetic refrigeration over a wide temperature range above room temperature and at large magnetic fields.
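The Metropolis acceptance step at the core of such a simulation is compact. The sketch below uses a plain 2-D Ising model as a stand-in (the actual Sr2CrIrO6 Hamiltonian, with its Cr/Ir sublattices and couplings, is far richer; lattice size and step count are arbitrary):

```python
import math
import random

def metropolis_magnetization(temperature, n=16, steps=200_000, j=1.0, seed=2):
    """Single-spin-flip Metropolis sampling of a 2-D Ising lattice with
    periodic boundaries; returns the final magnetization per spin."""
    rng = random.Random(seed)
    spins = [[1] * n for _ in range(n)]        # start fully ordered
    for _ in range(steps):
        i, k = rng.randrange(n), rng.randrange(n)
        nb = (spins[(i + 1) % n][k] + spins[(i - 1) % n][k]
              + spins[i][(k + 1) % n] + spins[i][(k - 1) % n])
        d_e = 2.0 * j * spins[i][k] * nb       # energy cost of flipping
        # Metropolis rule: accept downhill moves always, uphill moves
        # with probability exp(-dE/T).
        if d_e <= 0 or rng.random() < math.exp(-d_e / temperature):
            spins[i][k] = -spins[i][k]
    return abs(sum(map(sum, spins))) / (n * n)
```

Below the ordering temperature the lattice retains a magnetization near 1, while well above it the magnetization collapses; sweeping temperature and field to record M(T, H) is the raw ingredient from which the entropy change and the MCE curves reported above are built.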
NASA Astrophysics Data System (ADS)
Hayati, M.; Rashidi, A. M.; Rezaei, A.
2012-10-01
In this paper, the applicability of ANFIS as an accurate model for the prediction of mass gain during high-temperature oxidation, using experimental data obtained for aluminized nanostructured (NS) nickel, is presented. For developing the model, exposure time and temperature are taken as input and the mass gain as output. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the network. We have compared the proposed ANFIS model with experimental data. The predicted data are found to be in good agreement with the experimental data, with a mean relative error of less than 1.1%. Therefore, the ANFIS model can be used to predict the performance of thermal systems in engineering applications, such as modeling the mass gain of NS materials.
Development of MODIS data-based algorithm for retrieving sea surface temperature in coastal waters.
Wang, Jiao; Deng, Zhiqiang
2017-06-01
A new algorithm was developed for retrieving sea surface temperature (SST) in coastal waters using satellite remote sensing data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua platform. The new SST algorithm was trained using the Artificial Neural Network (ANN) method and tested using 8 years of remote sensing data from the MODIS Aqua sensor and in situ sensing data from US coastal waters in Louisiana, Texas, Florida, California, and New Jersey. The ANN algorithm can be utilized to map SST in both deep offshore and, particularly, shallow nearshore waters at a high spatial resolution of 1 km, greatly expanding the coverage of remote sensing-based SST data from offshore waters to nearshore waters. Applications of the ANN algorithm require only the remotely sensed data from the two MODIS Aqua thermal bands 31 and 32 as input. Application results indicated that the ANN algorithm was able to explain 82-90% of the variation in observed SST in US coastal waters. While the algorithm is generally applicable to the retrieval of SST, it works best for nearshore waters, where important coastal resources are located and existing algorithms are either not applicable or do not work well, making the new ANN-based SST algorithm unique and particularly useful for coastal resource management.
NASA Astrophysics Data System (ADS)
Pan, J.; Durand, M. T.; Jiang, L.; Liu, D.
2017-12-01
The newly processed NASA MEaSUREs Calibrated Enhanced-Resolution Brightness Temperature (CETB) product, reconstructed using the antenna measurement response function (MRF), is considered to offer significantly improved fine-resolution measurements, with better georegistration for time-series observations and an equivalent field of view (FOV) for frequencies with the same nominal spatial resolution. We aim to exploit its potential for global snow observation, and therefore test its performance for characterizing snow properties, especially snow water equivalent (SWE) over large areas. In this research, two candidate SWE algorithms are tested in China for the years 2005 to 2010 using the reprocessed TB from the Advanced Microwave Scanning Radiometer for EOS (AMSR-E), with the results evaluated using daily snow depth measurements at over 700 national synoptic stations. One of the algorithms is the SWE retrieval algorithm used for the FengYun (FY)-3 Microwave Radiation Imager. This algorithm uses the multi-channel TB to calculate SWE for three major snow regions in China, with the coefficients adapted for different land cover types. The second algorithm is the newly established Bayesian Algorithm for SWE Estimation with Passive Microwave measurements (BASE-PM). This algorithm uses a physically based snow radiative transfer model to find the histogram of the most likely snow properties that match the multi-frequency TB from 10.65 to 90 GHz. It provides a rough estimate of snow depth and grain size at the same time and showed a 30 mm SWE RMS error using the ground radiometer measurements at Sodankylä. This study is the first attempt to test it spatially with satellite data. The use of this algorithm benefits from the high resolution and the spatial consistency between frequencies embedded in the new dataset. This research will answer three questions. First, to what extent can CETB increase the heterogeneity in the mapped SWE? 
Second, will the SWE estimation error statistics be improved using this high-resolution dataset? Third, how will the SWE retrieval accuracy be improved using CETB and the new SWE retrieval techniques?
Kast, Stefan M
2004-03-08
An argument brought forward by Sholl and Fichthorn against the stochastic collision-based constant temperature algorithm for molecular dynamics simulations developed by Kast et al. is refuted. It is demonstrated that the large temperature fluctuations noted by Sholl and Fichthorn are due to improperly chosen initial conditions within their formulation of the algorithm. With the original form or by suitable initialization of their variant no deficient behavior is observed.
Calibrated Noise Measurements with Induced Receiver Gain Fluctuations
NASA Technical Reports Server (NTRS)
Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly
2011-01-01
The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of non-stationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of non-stationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationarity assumption and thereby a measurement tool to characterize the signal's non-stationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. 
As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporally dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (100s of seconds); however, the receiver exhibits local non-stationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess the algorithms' performance with the gain fluctuation signals. This presentation will describe the RGMC, the experiment design and a comparative analysis of calibration algorithms.
Ping, Bo; Su, Fenzhen; Meng, Yunshan
2016-01-01
In this study, an improved Data INterpolating Empirical Orthogonal Functions (DINEOF) algorithm for the determination of missing values in a spatio-temporal dataset is presented. Compared with the ordinary DINEOF algorithm, the iterative reconstruction procedure until convergence based on every fixed EOF to determine the optimal EOF mode is not necessary, and the convergence criterion is only reached once in the improved DINEOF algorithm. Moreover, in the ordinary DINEOF algorithm, after the optimal EOF mode is determined, the initial matrix with missing data is iteratively reconstructed based on the optimal EOF mode until the reconstruction converges. However, the optimal EOF mode may not be the best EOF for some reconstructed matrices generated in the intermediate steps. Hence, instead of using a single EOF to fill in the missing data, in the improved algorithm the optimal EOFs for reconstruction are variable (because the optimal EOFs are variable, the improved algorithm is called the VE-DINEOF algorithm in this study). To validate the accuracy of the VE-DINEOF algorithm, a sea surface temperature (SST) data set is reconstructed using the DINEOF, I-DINEOF (proposed in 2015) and VE-DINEOF algorithms. Four parameters (Pearson correlation coefficient, signal-to-noise ratio, root-mean-square error, and mean absolute difference) are used as measures of reconstruction accuracy. Compared with the DINEOF and I-DINEOF algorithms, the VE-DINEOF algorithm can significantly enhance the accuracy of reconstruction and shorten the computational time.
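The core DINEOF iteration, stripped of the mode-selection and convergence machinery discussed above, is an alternation between an EOF (SVD) truncation and re-imposing the observed values. A minimal NumPy sketch with a fixed number of modes and iterations (both assumptions; the adaptive choices are the point of the variants above):

```python
import numpy as np

def eof_fill(field, missing, n_modes=2, iters=200):
    """DINEOF-style gap filling: initialize gaps with zero anomaly,
    then repeatedly truncate the SVD to the leading EOF modes and
    overwrite only the missing entries with the reconstruction."""
    x = field.copy()
    x[missing] = 0.0                      # first guess for the gaps
    for _ in range(iters):
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        recon = (u[:, :n_modes] * s[:n_modes]) @ vt[:n_modes]
        x[missing] = recon[missing]       # observed entries stay fixed
    return x
```

On a field that genuinely has low-rank structure the gaps are recovered almost exactly; real SST fields are noisier, which is why the choice of EOF mode during the intermediate reconstructions matters.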
NASA Astrophysics Data System (ADS)
Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.
2013-05-01
Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists; these include fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.
Dimitrov, I. K.; Zhang, X.; Solovyov, V. F.; ...
2015-07-07
Recent advances in second-generation (YBCO) high-temperature superconducting wire could potentially enable the design of super-high-performance energy storage devices that combine the high energy density of chemical storage with the high power of superconducting magnetic storage. However, the high aspect ratio and the considerable filament size of these wires require the concomitant development of dedicated optimization methods that account for the critical current density in type-II superconductors. In this study, we report on the novel application and results of a CPU-efficient semianalytical computer code based on the Radia 3-D magnetostatics software package. Our algorithm is used to simulate and optimize the energy density of a superconducting magnetic energy storage device model, based on design constraints such as overall size and number of coils. The rapid performance of the code rests on analytical calculations of the magnetic field based on an efficient implementation of the Biot-Savart law for a large variety of 3-D "base" geometries in the Radia package. The significantly reduced CPU time and simple data input, in conjunction with the consideration of realistic input variables such as material-specific, temperature- and magnetic-field-dependent critical current densities, have enabled the Radia-based algorithm to outperform finite-element approaches in CPU time at the same accuracy levels. Comparative simulations of MgB2 and YBCO-based devices are performed at 4.2 K, in order to ascertain the realistic efficiency of the design configurations.
Missing value imputation for microarray data: a comprehensive comparison study and a web tool.
Chiu, Chia-Chun; Chan, Shih-Yao; Wang, Chung-Ching; Wu, Wei-Sheng
2013-01-01
Microarray data are usually peppered with missing values due to various reasons. However, most of the downstream analyses for microarray data require complete datasets. Therefore, accurate algorithms for missing value estimation are needed for improving the performance of microarray data analyses. Although many algorithms have been developed, there is still much debate on the selection of the optimal algorithm. Existing studies comparing the performance of different algorithms are still not comprehensive, especially in the number of benchmark datasets used, the number of algorithms compared, the rounds of simulation conducted, and the performance measures used. In this paper, we performed a comprehensive comparison by using (I) thirteen datasets, (II) nine algorithms, (III) 110 independent runs of simulation, and (IV) three types of measures to evaluate the performance of each imputation algorithm fairly. First, the effects of different types of microarray datasets on the performance of each imputation algorithm were evaluated. Second, we discussed whether datasets from different species have different impact on the performance of different algorithms. To assess the performance of each algorithm fairly, all evaluations were performed using three types of measures. Our results indicate that the performance of an imputation algorithm mainly depends on the type of a dataset but not on the species from which the samples come. In addition to the statistical measure, two other measures with biological meanings are useful to reflect the impact of missing value imputation on the downstream data analyses. Our study suggests that local-least-squares-based methods are good choices to handle missing values for most of the microarray datasets. In this work, we carried out a comprehensive comparison of the algorithms for microarray missing value imputation. Based on such a comprehensive comparison, researchers could choose the optimal algorithm for their datasets easily.
Moreover, new imputation algorithms could be compared with the existing algorithms using this comparison strategy as a standard protocol. In addition, to assist researchers in dealing with missing values easily, we built a web-based and easy-to-use imputation tool, MissVIA (http://cosbi.ee.ncku.edu.tw/MissVIA), which supports many imputation algorithms. Once users upload a real microarray dataset and choose the imputation algorithms, MissVIA will determine the optimal algorithm for the users' data through a series of simulations, and then the imputed results can be downloaded for the downstream data analyses.
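Since the study's conclusion favors local-least-squares-based methods, a minimal sketch of that family may help make the idea concrete. The function below is an illustrative simplification, not the published LLSimpute algorithm: for each row with missing entries it selects the k most similar complete rows as a regression basis and predicts the missing entries by least squares. The name `lls_impute` and the parameter `k` are invented for this sketch.

```python
import numpy as np

def lls_impute(data, k=10):
    """Impute missing entries (NaN) row by row with local least squares.

    For each row with missing values, the k most similar complete rows
    (Euclidean distance over the observed columns) form a regression
    basis; the observed entries are fit and the resulting coefficients
    predict the missing ones. Simplified sketch of the LLS family.
    """
    data = data.astype(float).copy()
    # rows without any missing value serve as the candidate basis
    complete = data[~np.isnan(data).any(axis=1)]
    for i, row in enumerate(data):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        # rank complete rows by similarity on the observed columns
        d = np.linalg.norm(complete[:, obs] - row[obs], axis=1)
        basis = complete[np.argsort(d)[:k]]
        # fit the observed part, then predict the missing part
        coef, *_ = np.linalg.lstsq(basis[:, obs].T, row[obs], rcond=None)
        data[i, miss] = coef @ basis[:, miss]
    return data
```

On a rank-one matrix (rows proportional to a common pattern) the regression recovers a knocked-out entry exactly, which is the idealized case the method exploits in correlated expression profiles.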
Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data
NASA Technical Reports Server (NTRS)
Song, S.; Moore, R. K.
1996-01-01
The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, thereby giving erroneous results for the wind. This report describes algorithms that use the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured. This is the difference between the total measured brightness temperature and the contribution from surface emission. A major problem that the algorithm must address is determining the surface contribution. Two basic approaches were developed for this, one using the scattering coefficient measured along with the brightness temperature, and the other using the brightness temperature alone. For both methods, best results will occur if the wind from the preceding wind-vector cell can be used as an input to the algorithm. In the method based on the scattering coefficient, we need the wind direction from the preceding cell. In the method using brightness temperature alone, we need the wind speed from the preceding cell. If neither is available, the algorithm can work, but the corrections will be less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John M.; Iredell, Lena; Keita, Fricky
2009-01-01
This paper describes the AIRS Science Team Version 5 retrieval algorithm in terms of its three most significant improvements over the methodology used in the AIRS Science Team Version 4 retrieval algorithm. Improved physics in Version 5 allows for use of AIRS clear column radiances in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profiles T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of clear column radiances R(sub i) for all channels. This new approach allows for the generation of more accurate values of R(sub i) and T(p) under most cloud conditions. Secondly, Version 5 contains a new methodology to provide accurate case-by-case error estimates for retrieved geophysical parameters and for channel-by-channel clear column radiances. Thresholds of these error estimates are used in a new approach for Quality Control. Finally, Version 5 also contains for the first time an approach to provide AIRS soundings in partially cloudy conditions that does not require use of any microwave data. This new AIRS Only sounding methodology, referred to as AIRS Version 5 AO, was developed as a backup to AIRS Version 5 should the AMSU-A instrument fail. Results are shown comparing the relative performance of AIRS Version 4, Version 5, and Version 5 AO for a single day, January 25, 2003. The Goddard DISC is now generating and distributing products derived using the AIRS Science Team Version 5 retrieval algorithm. This paper also describes the Quality Control flags contained in the DISC AIRS/AMSU retrieval products and their intended use for scientific research purposes.
Zhang, Jia-Hua; Li, Xin; Yao, Feng-Mei; Li, Xian-Hua
2009-08-01
Land surface temperature (LST) is an important parameter in the study of the exchange of substance and energy between the land surface and the air in land surface physical processes at regional and global scales. Many applications of satellite remotely sensed data must provide exact and quantitative LST, such as monitoring of drought, high temperature, forest fire, earthquake, hydrology, and vegetation, and models of global circulation and regional climate also need LST as an input parameter. Therefore, the retrieval of LST using remote sensing technology has become one of the key tasks in quantitative remote sensing study. Within the spectrum, the thermal infrared (TIR, 3-15 microm) and microwave (1 mm-1 m) bands are important for retrieval of LST. In the present paper, firstly, several methods for estimating LST on the basis of thermal infrared (TIR) remote sensing are synthetically reviewed, i.e., LST measured with a ground-based infrared thermometer; LST retrieval from the mono-window algorithm (MWA), single-channel algorithm (SCA), split-window techniques (SWT), multi-channel algorithm (MCA), single-channel multi-angle algorithm, and multi-channel multi-angle algorithm; and the retrieval of land surface component temperature using thermal infrared remotely sensed satellite observation. Secondly, the status of research on land surface emissivity (epsilon) is presented. Thirdly, in order to retrieve LST for all weather conditions, microwave remotely sensed data have recently been developed as an alternative to thermal infrared data, and the LST retrieval method from passive microwave remotely sensed data is also introduced. Finally, the main merits and shortcomings of the different kinds of LST retrieval methods are discussed, respectively.
Data Processing Algorithm for Diagnostics of Combustion Using Diode Laser Absorption Spectrometry.
Mironenko, Vladimir R; Kuritsyn, Yuril A; Liger, Vladimir V; Bolshov, Mikhail A
2018-02-01
A new algorithm for the evaluation of the integral line intensity for inferring the correct value for the temperature of a hot zone in the diagnostics of combustion by absorption spectroscopy with diode lasers is proposed. The algorithm is based not on fitting the baseline (BL) but on the expansion of the experimental and simulated spectra in a series of orthogonal polynomials, subtraction of the first three components of the expansion from both the experimental and simulated spectra, and fitting of the spectra thus modified. The algorithm is tested in a numerical experiment by simulating the absorption spectra using a spectroscopic database, adding white noise, and adding a parabolic BL. The absorption spectra so constructed are treated as experimental in further calculations. The theoretical absorption spectra were simulated with parameters (temperature, total pressure, concentration of water vapor) close to the parameters used for simulation of the experimental data. Then, the spectra were expanded in the series of orthogonal polynomials and the first components were subtracted from both spectra. The correct integral line intensities, and hence the correct temperature evaluation, were obtained by fitting the thus modified experimental and simulated spectra. The dependence of the mean and standard deviation of the evaluated integral line intensity on the linewidth and on the number of subtracted components (first two or three) was examined. The proposed algorithm provides a correct estimation of temperature with standard deviation better than 60 K (for T = 1000 K) for line half-widths up to 0.6 cm(exp -1). The proposed algorithm allows for obtaining the parameters of a hot zone without fitting of the usually unknown BL.
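The core of the proposed algorithm, expanding a spectrum in orthogonal polynomials and subtracting the first three components, can be sketched with NumPy's Legendre utilities. This is a minimal illustration of the projection step only (the subsequent fitting of experimental against simulated spectra is omitted), and the function name is invented here.

```python
import numpy as np

def remove_low_order(x, spectrum, n_drop=3):
    """Subtract the first n_drop Legendre components of a spectrum.

    Offset-, slope-, and parabola-like structure (i.e., a low-order
    baseline) is removed without ever fitting the baseline explicitly;
    sharper absorption-line structure largely survives the subtraction.
    """
    coeffs = np.polynomial.legendre.legfit(x, spectrum, deg=n_drop - 1)
    return spectrum - np.polynomial.legendre.legval(x, coeffs)
```

Applied identically to the measured and the simulated spectrum, the subtraction cancels a parabolic baseline exactly (a degree-2 least-squares fit reproduces any parabola), so the two modified spectra can be compared baseline-free.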
Onboard Science and Applications Algorithm for Hyperspectral Data Reduction
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel
2012-01-01
An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently spectrally and spatially subsample the data as well as generate science products onboard to enable return of key rapid response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also applies to vegetation, coastal flooding, dust, and snow/ice monitoring. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified.
Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data), and ocean color applications to track biological activity such as harmful algal blooms. Measuring surface water extent to track flooding is another rapid response product leveraging VSWIR spectral information.
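The onboard trigger logic, detecting hot pixels from a few TIR bands and downlinking the full spectrum only for those pixels, might be sketched as follows. This is a schematic stand-in with invented names and thresholds, not the HyspIRI flight algorithm or its table-driven temperature inversion.

```python
import numpy as np

def hot_pixel_alerts(tir_cube, thresholds):
    """Flag pixels whose TIR radiances exceed per-band thresholds.

    tir_cube: array of shape (bands, rows, cols); thresholds: (bands,).
    Returns the (row, col) indices of pixels that are hot in every band,
    i.e., the pixels whose entire spectrum would be queued for downlink.
    """
    hot = np.all(tir_cube > thresholds[:, None, None], axis=0)
    return np.argwhere(hot)
```

Spatial subsampling falls out directly: only the rows/columns returned here (plus their spectra) need to fit into the limited Direct Broadcast bandwidth.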
Wu, Wei-Sheng; Jhou, Meng-Jhun
2017-01-13
Missing value imputation is important for microarray data analyses because microarray data with missing values would significantly degrade the performance of the downstream analyses. Although many microarray missing value imputation algorithms have been developed, an objective and comprehensive performance comparison framework is still lacking. To solve this problem, we previously proposed a framework which can perform a comprehensive performance comparison of different existing algorithms. Also the performance of a new algorithm can be evaluated by our performance comparison framework. However, constructing our framework is not an easy task for the interested researchers. To save researchers' time and efforts, here we present an easy-to-use web tool named MVIAeval (Missing Value Imputation Algorithm evaluator) which implements our performance comparison framework. MVIAeval provides a user-friendly interface allowing users to upload the R code of their new algorithm and select (i) the test datasets among 20 benchmark microarray (time series and non-time series) datasets, (ii) the compared algorithms among 12 existing algorithms, (iii) the performance indices from three existing ones, (iv) the comprehensive performance scores from two possible choices, and (v) the number of simulation runs. The comprehensive performance comparison results are then generated and shown as both figures and tables. MVIAeval is a useful tool for researchers to easily conduct a comprehensive and objective performance evaluation of their newly developed missing value imputation algorithm for microarray data or any data which can be represented as a matrix form (e.g. NGS data or proteomics data). Thus, MVIAeval will greatly expedite the progress in the research of missing value imputation algorithms.
NASA Astrophysics Data System (ADS)
Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas
2017-04-01
The development and optimization of image processing algorithms requires the availability of datasets depicting every step from earth surface to the sensor's detector. The lack of ground truth data obliges to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up consisting of a forward simulator, a backward simulator and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES) or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output dataset and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces have been simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized resulting in increasing accuracies in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
Convection equation modeling: A non-iterative direct matrix solution algorithm for use with SINDA
NASA Technical Reports Server (NTRS)
Schrage, Dean S.
1993-01-01
The determination of the boundary conditions for a component-level analysis, applying discrete finite element and finite difference modeling techniques, often requires an analysis of complex coupled phenomena that cannot be described algebraically. For example, an analysis of the temperature field of a coldplate surface with an integral fluid loop requires a solution to the parabolic heat equation and also requires the boundary conditions that describe the local fluid temperature. However, the local fluid temperature is described by a convection equation that can only be solved with knowledge of the locally coupled coldplate temperatures. Generally speaking, it is not computationally efficient, and sometimes not even possible, to perform a direct, coupled-phenomenon analysis of the component-level and boundary condition models within a single analysis code. An alternative is to perform a disjoint analysis but transmit the necessary information between models during the simulation to provide an indirect coupling. For this approach to be effective, the component-level model retains full detail while the boundary condition model is simplified to provide a fast, first-order prediction of the phenomenon in question. Specifically for the present study, the coldplate structure is analyzed with a discrete numerical model (SINDA) while the fluid loop convection equation is analyzed with a discrete analytical model (direct matrix solution). This indirect coupling allows a satisfactory prediction of the boundary condition while not compromising the overall computational efficiency of the component-level analysis. In the present study, a complete discussion of the derivation and direct matrix solution algorithm of the convection equation is presented. Discretization is analyzed and discussed with respect to solution accuracy, stability, and computation speed.
Case studies considering a pulsed and harmonic inlet disturbance to the fluid loop are analyzed to assist in the discussion of numerical dissipation and accuracy. In addition, the issues of code melding or integration with standard class solvers such as SINDA are discussed to advise the user of the potential problems to be encountered.
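The non-iterative character of the direct matrix solution can be illustrated with a first-order upwind discretization of the steady convection equation: the resulting linear system is lower bidiagonal, so the fluid temperatures follow from a single forward substitution rather than any iteration. A minimal sketch with an invented lumped coupling parameter a_dx, not the SINDA-coupled implementation:

```python
import numpy as np

def march_fluid_temperature(t_inlet, t_wall, a_dx):
    """Direct (non-iterative) solution of the discretized convection equation.

    Upwind differencing of u dT/dx = a (Tw - T) gives, per cell,
    (T_i - T_{i-1})/dx = a (Tw_i - T_i), a lower bidiagonal system solved
    by marching downstream. t_wall holds the locally coupled coldplate
    temperatures supplied by the component-level model; a_dx = a * dx.
    """
    t = np.empty(len(t_wall) + 1)
    t[0] = t_inlet
    for i, tw in enumerate(t_wall):
        # solve the cell equation for T_i given the upstream T_{i-1}
        t[i + 1] = (t[i] + a_dx * tw) / (1.0 + a_dx)
    return t
```

With a constant wall temperature the fluid relaxes monotonically toward it, which is the expected first-order behavior of the convection boundary condition model.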
Mathur, Neha; Glesk, Ivan; Buis, Arjan
2016-10-01
Monitoring of the interface temperature at skin level in lower-limb prosthesis is notoriously complicated. This is due to the flexible nature of the interface liners used impeding the required consistent positioning of the temperature sensors during donning and doffing. Predicting the in-socket residual limb temperature by monitoring the temperature between socket and liner rather than skin and liner could be an important step in alleviating complaints on increased temperature and perspiration in prosthetic sockets. In this work, we propose to implement an adaptive neuro fuzzy inference strategy (ANFIS) to predict the in-socket residual limb temperature. ANFIS belongs to the family of fused neuro fuzzy systems in which the fuzzy system is incorporated in a framework which is adaptive in nature. The proposed method is compared to our earlier work using Gaussian processes for machine learning. By comparing the predicted and actual data, results indicate that both modeling techniques have comparable performance metrics and can be efficiently used for non-invasive temperature monitoring. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Research on Environmental Adjustment of Cloud Ranch Based on BP Neural Network PID Control
NASA Astrophysics Data System (ADS)
Ren, Jinzhi; Xiang, Wei; Zhao, Lin; Wu, Jianbo; Huang, Lianzhen; Tu, Qinggang; Zhao, Heming
2018-01-01
In order to make the intelligent ranch management mode gradually replace the traditional artificial one, this paper proposes a pasture environment control system based on a cloud server and puts forward a PID control algorithm based on a BP neural network to better control temperature and humidity in the pasture environment. First, the temperature and humidity of the pasture (the controlled object) are modeled to obtain a transfer function. Then the traditional PID control algorithm and the PID algorithm based on a BP neural network are applied to the transfer function. The step tracking curves obtained show that the PID controller based on a BP neural network is clearly superior in settling time, error, and other metrics. This algorithm, which calculates reasonable control parameters for temperature and humidity to control the environment, can be well used in the cloud service platform.
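An incremental PID loop of the kind being tuned in this paper can be sketched on a toy first-order plant. In the proposed scheme a BP neural network would output the gains (kp, ki, kd) at each step; here they are fixed constants, and the plant model is an invented stand-in for the pasture's temperature/humidity transfer function.

```python
def incremental_pid(setpoint, steps=300, kp=0.5, ki=0.2, kd=0.0):
    """Incremental PID loop on a toy first-order plant.

    Plant (invented): y[k+1] = 0.9*y[k] + 0.1*u[k].
    Incremental form: u(k) = u(k-1) + Kp*(e(k)-e(k-1)) + Ki*e(k)
                             + Kd*(e(k)-2e(k-1)+e(k-2)).
    In the BP-NN scheme the three gains would be recomputed each step.
    """
    y = u = 0.0
    e_prev = e_prev2 = 0.0
    for _ in range(steps):
        e = setpoint - y
        u += kp * (e - e_prev) + ki * e + kd * (e - 2 * e_prev + e_prev2)
        e_prev2, e_prev = e_prev, e
        y = 0.9 * y + 0.1 * u  # apply control to the plant
    return y
```

The integral (Ki) term drives the steady-state error to zero on this plant; a neural network tuner would instead adapt the gains online to trade off settling time against overshoot, as the step-tracking comparison in the abstract describes.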
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haves, Phillip; Hencey, Brandon; Borrell, Francesco
2010-06-29
A Model Predictive Control algorithm was developed for the UC Merced campus chilled water plant. Model predictive control (MPC) is an advanced control technology that has proven successful in the chemical process industry and other industries. The main goal of the research was to demonstrate the practical and commercial viability of MPC for optimization of building energy systems. The control algorithms were developed and implemented in MATLAB, allowing for rapid development, performance, and robustness assessment. The UC Merced chilled water plant includes three water-cooled chillers and a two million gallon chilled water storage tank. The tank is charged during the night to minimize on-peak electricity consumption and take advantage of the lower ambient wet bulb temperature. The control algorithms determined the optimal chilled water plant operation including chilled water supply (CHWS) temperature set-point, condenser water supply (CWS) temperature set-point and the charging start and stop times to minimize a cost function that includes energy consumption and peak electrical demand over a 3-day prediction horizon. A detailed model of the chilled water plant and simplified models of the buildings served by the plant were developed using the equation-based modeling language Modelica. Steady state models of the chillers, cooling towers and pumps were developed, based on manufacturers' performance data, and calibrated using measured data collected and archived by the control system. A detailed dynamic model of the chilled water storage tank was also developed and calibrated. Simple, semi-empirical models were developed to predict the temperature and flow rate of the chilled water returning to the plant from the buildings.
These models were then combined and simplified for use in a model predictive control algorithm that determines the optimal chiller start and stop times and set-points for the condenser water temperature and the chilled water supply temperature. The report describes the development and testing of the algorithm and evaluates the resulting performance, concluding with a discussion of next steps in further research. The experimental results show a small improvement in COP over the baseline policy, but it is difficult to draw any strong conclusions about the energy savings potential for MPC with this system, as only four days of suitable experimental data were obtained once correct operation of the MPC system had been achieved. These data show an improvement in COP of 3.1% {+-} 2.2% relative to a baseline established immediately prior to the period when the MPC was run in its final form. This baseline includes control policy improvements that the plant operators learned by observing the earlier implementations of MPC, including increasing the temperature of the water supplied to the chiller condensers from the cooling towers. The process of data collection and model development, necessary for any MPC project, resulted in the team uncovering various problems with the chilled water system. Although it is difficult to quantify the energy savings resulting from these problems being remedied, they were likely on the same order as the energy savings from the MPC itself. Although the types of problems uncovered and the level of energy savings may differ significantly from other projects, some of the benefits of detecting and diagnosing problems are expected from the use of MPC for any chilled water plant. The degree of chiller loading was found to be a key factor for efficiency. It is more efficient to operate the chillers at or near full load.
In order to maximize the chiller load, one would maximize the temperature difference across the chillers and the chilled water flow rate through the chillers. Thus, the CHWS set-point and the chilled water flow-rate can be used to limit the chiller loading to prevent chiller surging. Since the flow rate has an upper bound and the CHWS set-point has a lower bound, the chiller loading is constrained and often determined by the chilled water return temperature (CHWR). The CHWR temperature is primarily comprised of warm water from the top of the TES tank. The CHWR temperature falls substantially as the thermocline approaches the top of the tank, which reduces the chiller loading. As a result, it has been determined that overcharging the TES tank can be detrimental to the chilled water plant efficiency. The resulting MPC policy differs from the current practice of fully charging the TES tank. A heuristic rule could possibly avoid this problem without using predictive control. Similarly, the COP improvements from the change in CWS set-point were largely captured by a static set-point change by the operators. Further research is required to determine how much of the MPC savings could be garnered through simplified rules (based on the MPC study), with and without prediction.
Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2015-01-01
An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is defined as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five component semi-span balance is used to illustrate the application of the improved approach.
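The regression model described above, gage output as a function of load plus linear and quadratic terms in the temperature difference, can be sketched with an ordinary least-squares fit. This is a single-load toy version with invented names; the real balance calibration involves multiple load components and the Iterative Method's load iteration scheme.

```python
import numpy as np

def fit_gage_output(loads, temps, outputs, t_ref):
    """Least-squares fit of a gage output vs. load, dT, and dT^2.

    dT is the difference between the (uniform) balance temperature and a
    global reference temperature, ideally the primary calibration
    temperature, as in the improved approach above.
    """
    dt = temps - t_ref
    # regression model: output = c0*load + c1*dT + c2*dT^2 + c3
    X = np.column_stack([loads, dt, dt**2, np.ones_like(loads)])
    coef, *_ = np.linalg.lstsq(X, outputs, rcond=None)
    return coef  # [load sensitivity, linear dT term, quadratic dT term, offset]
```

On synthetic data generated from known coefficients the fit recovers them exactly, which is the sanity check one would run before applying the scheme to simulated calibration data.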
Improved Surface Parameter Retrievals using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel; Blaisdell, John
2008-01-01
The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Two very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; and 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions. In this methodology, longwave CO2 channel observations in the spectral region 700 cm(exp -1) to 750 cm(exp -1) are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm(exp -1) to 2395 cm(exp -1) are used for temperature sounding purposes. This allows for accurate temperature soundings under more difficult cloud conditions. This paper further improves on the methodology used in Version 5 to derive surface skin temperature and surface spectral emissivity from AIRS/AMSU observations. Now, following the approach used to improve tropospheric temperature profiles, surface skin temperature is also derived using only shortwave window channels. This produces improved surface parameters, both day and night, compared to what was obtained in Version 5. These in turn result in improved boundary layer temperatures and retrieved total O3 burden.
Hopf, Barbara; Dutz, Franz J; Bosselmann, Thomas; Willsch, Michael; Koch, Alexander W; Roths, Johannes
2018-04-30
A new iterative matrix algorithm has been applied to improve the precision of temperature and force decoupling in multi-parameter FBG sensing. For the first time, this evaluation technique allows the integration of nonlinearities in the sensor's temperature characteristic and the temperature dependence of the sensor's force sensitivity. Applied to a sensor cable consisting of two FBGs in fibers with 80 µm and 125 µm cladding diameter installed in a 7 m-long coiled PEEK capillary, this technique significantly reduced the uncertainties in friction-compensated temperature measurements. In the presence of high friction-induced forces of up to 1.6 N, the uncertainties in temperature evaluation were reduced from several degrees Celsius with a standard linear matrix approach to less than 0.5°C with the iterative matrix approach, over an extended temperature range between -35°C and 125°C.
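The iterative matrix approach can be illustrated as a fixed-point loop in which the 2x2 sensitivity matrix is re-evaluated at the current temperature estimate, capturing both the nonlinear temperature characteristic and the temperature-dependent force sensitivity. All sensitivity coefficients below are invented for illustration (arbitrary units) and are not the paper's calibrated values.

```python
import numpy as np

def decouple_t_f(d1, d2, t0=25.0, n_iter=30):
    """Iteratively decouple temperature and force from two FBG shifts.

    d1, d2 are the wavelength shifts of the two gratings. A linear matrix
    solve with constant coefficients would bias the result; here the
    matrix is rebuilt at each temperature estimate until convergence.
    """
    def k_matrix(t):
        # temperature-dependent sensitivities (illustrative numbers only)
        return np.array([
            [10.0 + 0.01 * (t - t0), 1.2 + 0.002 * (t - t0)],   # 80 um FBG
            [9.0 + 0.008 * (t - t0), 0.5 + 0.001 * (t - t0)],   # 125 um FBG
        ])
    t = t0
    for _ in range(n_iter):
        dt, f = np.linalg.solve(k_matrix(t), [d1, d2])
        t = t0 + dt  # refine the temperature estimate and re-linearize
    return t, f
```

Because the nonlinear corrections are small relative to the leading sensitivities, the fixed-point iteration contracts quickly; a handful of passes already recovers the (T, F) pair used to generate the shifts.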
Separating vegetation and soil temperature using airborne multiangular remote sensing image data
NASA Astrophysics Data System (ADS)
Liu, Qiang; Yan, Chunyan; Xiao, Qing; Yan, Guangjian; Fang, Li
2012-07-01
Land surface temperature (LST) is a key parameter in land process research. Many research efforts have been devoted to increase the accuracy of LST retrieval from remote sensing. However, because natural land surface is non-isothermal, component temperature is also required in applications such as evapo-transpiration (ET) modeling. This paper proposes a new algorithm to separately retrieve vegetation temperature and soil background temperature from multiangular thermal infrared (TIR) remote sensing data. The algorithm is based on the localized correlation between the visible/near-infrared (VNIR) bands and the TIR band. This method was tested on the airborne image data acquired during the Watershed Allied Telemetry Experimental Research (WATER) campaign. Preliminary validation indicates that the remote sensing-retrieved results can reflect the spatial and temporal trend of component temperatures. The accuracy is within three degrees while the difference between vegetation and soil temperature can be as large as twenty degrees.
NASA Astrophysics Data System (ADS)
Kim, Bong-Guk; Cho, Yang-Ki; Kim, Bong-Gwan; Kim, Young-Gi; Jung, Ji-Hoon
2015-04-01
Subsurface temperature plays an important role in determining heat content in the upper ocean, which is crucial in long-term and short-term weather systems. Furthermore, subsurface temperature significantly affects ocean ecology. In this study, a simple and practical algorithm is proposed. If we assume that subsurface temperature changes are proportional to surface heating or cooling, the subsurface temperature at each depth (Sub_temp) can be estimated as Sub_temp(i) = Clm_temp(i) + dif0 × ratio(i), where i is the depth index, Clm_temp is the temperature from climatology, dif0 is the temperature difference between satellite and climatology at the surface, and ratio is the ratio of the temperature variability at each depth to the surface temperature variability. Subsurface temperatures were calculated with this algorithm from climatology (WOA2013) and satellite SST (OSTIA) in the sea around the Korean peninsula. Validation results against in-situ observation data show good agreement in the upper 50 m layer, with RMSE (root mean square error) less than 2 K. The RMSE is smallest (less than 1 K) in winter, when the surface mixed layer is thick, and largest (about 2-3 K) in summer, when the surface mixed layer is shallow. The strong thermocline and large variability of the mixed-layer depth might result in the large RMSE in summer. Applying mixed-layer depth information in the algorithm may improve subsurface temperature estimation in summer. Spatial-temporal details on the improvement and its causes will be discussed.
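The estimation step described above can be sketched in a few lines. This is a minimal illustration with hypothetical variable names; the actual WOA2013/OSTIA processing involves gridded fields and quality control:

```python
import numpy as np

def subsurface_temperature(clm_temp, sst_sat, sst_clm, ratio):
    """Estimate a subsurface temperature profile from climatology and satellite SST.

    clm_temp : climatological temperature at each depth (K), shape (ndepth,)
    sst_sat  : satellite-observed sea surface temperature (K)
    sst_clm  : climatological sea surface temperature (K)
    ratio    : ratio of temperature variability at each depth to that at
               the surface, shape (ndepth,)
    """
    dif0 = sst_sat - sst_clm          # surface anomaly relative to climatology
    return clm_temp + dif0 * ratio    # propagate the anomaly downward
```

With a ratio profile that decays with depth, a 2 K surface anomaly perturbs the upper levels strongly and the deeper levels only slightly, which is the behavior the abstract describes.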
Cloud Impacts on Pavement Temperature in Energy Balance Models
NASA Astrophysics Data System (ADS)
Walker, C. L.
2013-12-01
Forecast systems provide decision support for end-users ranging from the solar energy industry to municipalities concerned with road safety. Pavement temperature is an important variable when considering vehicle response to various weather conditions. A complex, yet direct relationship exists between tire and pavement temperatures. Literature has shown that as tire temperature increases, friction decreases which affects vehicle performance. Many forecast systems suffer from inaccurate radiation forecasts resulting in part from the inability to model different types of clouds and their influence on radiation. This research focused on forecast improvement by determining how cloud type impacts the amount of shortwave radiation reaching the surface and subsequent pavement temperatures. The study region was the Great Plains where surface solar radiation data were obtained from the High Plains Regional Climate Center's Automated Weather Data Network stations. Road pavement temperature data were obtained from the Meteorological Assimilation Data Ingest System. Cloud properties and radiative transfer quantities were obtained from the Clouds and Earth's Radiant Energy System mission via Aqua and Terra Moderate Resolution Imaging Spectroradiometer satellite products. An additional cloud data set was incorporated from the Naval Research Laboratory Cloud Classification algorithm. Statistical analyses using a modified nearest neighbor approach were first performed relating shortwave radiation variability with road pavement temperature fluctuations. Then statistical associations were determined between the shortwave radiation and cloud property data sets. Preliminary results suggest that substantial pavement forecasting improvement is possible with the inclusion of cloud-specific information. Future model sensitivity testing seeks to quantify the magnitude of forecast improvement.
Prediction of Baseflow Index of Catchments using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Yadav, B.; Hatfield, K.
2017-12-01
We present the results of eight machine learning techniques for predicting the baseflow index (BFI) of ungauged basins using a surrogate of catchment-scale climate and physiographic data. The tested algorithms include ordinary least squares, ridge regression, least absolute shrinkage and selection operator (lasso), elastic net, support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Our work seeks to identify the dominant controls of BFI that can be readily obtained from ancillary geospatial databases and remote sensing measurements, such that the developed techniques can be extended to ungauged catchments. More than 800 gauged catchments spanning the continental United States were selected to develop the general methodology. The BFI calculation was based on the baseflow separated from the daily streamflow hydrograph using the HYSEP filter. The surrogate catchment attributes were compiled from multiple sources, including digital elevation models, soil, land use, and climate data, and other publicly available ancillary and geospatial data. 80% of the catchments were used to train the ML algorithms, and the remaining 20% were used as an independent test set to measure the generalization performance of the fitted models. A k-fold cross-validation using an exhaustive grid search was used to fit the hyperparameters of each model. Initial model development was based on 19 independent variables, but after variable selection and feature ranking, we generated revised sparse models of BFI prediction that are based on only six catchment attributes. These key predictive variables, selected after careful evaluation of the bias-variance tradeoff, include average catchment elevation, slope, fraction of sand, permeability, temperature, and precipitation.
The most promising algorithms exceeding an accuracy score (r-square) of 0.7 on test data include support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Considering both the accuracy and the computational complexity of these algorithms, we identify the extremely randomized trees as the best performing algorithm for BFI prediction in ungauged basins.
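The regression setup (an 80/20 split and an r-squared score on held-out data) can be sketched with one of the simpler baselines from the list above, closed-form ridge regression, on synthetic stand-ins for the six catchment attributes. The data and penalty below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                  # synthetic catchment attributes
w_true = np.array([0.5, -1.2, 0.8, 0.3, -0.7, 1.0])
y = X @ w_true + 0.1 * rng.normal(size=500)    # synthetic BFI-like target

X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]  # 80/20 split

alpha = 1.0                                    # ridge penalty
# Closed-form ridge solution: (X'X + alpha*I) w = X'y
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(6), X_tr.T @ y_tr)

pred = X_te @ w
r2 = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(round(r2, 3))                            # close to 1 for this low-noise toy data
```

The tree ensembles that performed best in the study follow the same fit/score pattern, just with a different estimator.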
NASA Astrophysics Data System (ADS)
Pieper, Michael
Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus, the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance, and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, which have a linear relationship between their ground radiances and at-aperture radiances determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fit of the at-aperture radiance against the estimated ground radiance yields the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra of solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are found by searching for the temperature that provides the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions enabling their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affects the shape of the atmospheric estimates and how the clear-channel assumption affects their magnitude. Emissivity spectra of solids usually have some roughness.
The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise and wavelength calibration. The ways these errors interact determine the overall TES performance. Since the AC and TES processes are interwoven, any errors in AC are transferred to TES and to the final temperature and emissivity estimates. Combining the two models, shape errors caused by the blackbody assumption are transferred to the emissivity estimates, while magnitude errors from the clear-channel assumption are compensated by TES temperature-induced emissivity errors. The ability of the temperature-induced error to compensate for such atmospheric errors makes it difficult to determine the correct atmospheric parameters for a scene. With these models we are able to determine the expected quality of estimated emissivity spectra based on the quality of blackbody-like materials on the ground, the emissivity of the materials being searched for, and the properties of the sensor. The quality of material emissivity spectra is a key factor in determining detection performance for a material in a scene.
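The smoothness-driven TES search described above can be illustrated with a toy grid search. This is a sketch under simplifying assumptions (no atmosphere, noiseless synthetic ground radiance, roughness measured by squared second differences), not the thesis's model:

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

def planck(wl, T):
    """Blackbody spectral radiance at wavelength wl (m) and temperature T (K)."""
    return 2 * H * C**2 / wl**5 / np.expm1(H * C / (wl * KB * T))

def tes_smoothness(ground_rad, wl, t_grid):
    """Pick the temperature whose implied emissivity spectrum is smoothest."""
    best_T, best_rough = None, np.inf
    for T in t_grid:
        eps = ground_rad / planck(wl, T)        # candidate emissivity spectrum
        rough = np.sum(np.diff(eps, 2) ** 2)    # squared second differences
        if rough < best_rough:
            best_T, best_rough = T, rough
    return best_T

wl = np.linspace(8e-6, 12e-6, 60)                        # LWIR band, metres
true_eps = 0.94 + 0.02 * (wl - wl[0]) / (wl[-1] - wl[0])  # smooth emissivity
L_ground = true_eps * planck(wl, 300.0)
print(tes_smoothness(L_ground, wl, np.arange(280.0, 320.5, 0.5)))  # → 300.0
```

A wrong candidate temperature divides the radiance by the wrong Planck curve, imprinting curvature on the emissivity estimate, which is exactly the effect the smoothness criterion penalizes.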
Real-time photo-magnetic imaging.
Nouizi, Farouk; Erkol, Hakan; Luk, Alex; Unlu, Mehmet B; Gulsen, Gultekin
2016-10-01
We previously introduced a new high-resolution diffuse optical imaging modality termed photo-magnetic imaging (PMI). PMI irradiates the object under investigation with near-infrared light and monitors the variations of temperature using magnetic resonance thermometry (MRT). In this paper, we present a real-time PMI image reconstruction algorithm that uses analytic methods to solve the forward problem and assemble the Jacobian matrix much faster. The new algorithm is validated using real MRT-measured temperature maps. It accelerates the reconstruction process by more than 250 times compared to a single iteration of the FEM-based algorithm, which opens the possibility of real-time PMI.
“Skin-Core-Skin” Structure of Polymer Crystallization Investigated by Multiscale Simulation
Ruan, Chunlei
2018-01-01
“Skin-core-skin” structure is a typical crystal morphology in injection products. Previous numerical works have rarely focused on crystal evolution; rather, they have mostly been based on the prediction of temperature distribution or crystallization kinetics. The aim of this work was to achieve the “skin-core-skin” structure and investigate the role of external flow and temperature fields on crystal morphology. Therefore, the multiscale algorithm was extended to the simulation of polymer crystallization in a pipe flow. The multiscale algorithm contains two parts: a collocated finite volume method at the macroscopic level and a morphological Monte Carlo method at the microscopic level. The SIMPLE (semi-implicit method for pressure linked equations) algorithm was used to calculate the polymeric model at the macroscopic level, while the Monte Carlo method with a stochastic birth-growth process of spherulites and shish-kebabs was used at the microscopic level. Results show that our algorithm is able to predict the “skin-core-skin” structure, and that the initial melt temperature and the maximum velocity of the melt at the inlet mainly affect the morphology of the shish-kebabs. PMID:29659516
NASA Technical Reports Server (NTRS)
Hulley, G.; Malakar, N.; Hughes, T.; Islam, T.; Hook, S.
2016-01-01
This document outlines the theory and methodology for generating the Moderate Resolution Imaging Spectroradiometer (MODIS) Level-2 daily daytime and nighttime 1-km land surface temperature (LST) and emissivity product using the Temperature Emissivity Separation (TES) algorithm. The MODIS-TES (MOD21_L2) product will include the LST and emissivity for three MODIS thermal infrared (TIR) bands, 29, 31, and 32, and will be generated for data from the NASA-EOS AM and PM platforms. This is version 1.0 of the ATBD, and the goal is to maintain a 'living' version of this document with changes made when necessary. The current standard baseline MODIS LST products (MOD11*) are derived from the generalized split-window (SW) algorithm (Wan and Dozier 1996), which produces a 1-km LST product and two classification-based emissivities for bands 31 and 32, and from a physics-based day/night algorithm (Wan and Li 1997), which produces a 5-km (C4) and 6-km (C5) LST product and emissivity for seven MODIS bands: 20, 22, 23, 29, and 31-33.
Fast algorithm for spectral processing with application to on-line welding quality assurance
NASA Astrophysics Data System (ADS)
Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.
2006-10-01
A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
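The paper's LPO operator is not reproduced here, but the generic idea of sub-pixel peak localization it improves on can be sketched with three-point parabolic interpolation on the log of the counts, which is exact for a Gaussian line shape (real spectra add noise and asymmetry):

```python
import numpy as np

def subpixel_peak(y):
    """Sub-pixel peak position via parabolic interpolation of log-intensity."""
    i = int(np.argmax(y))
    L = np.log(y[i - 1:i + 2])                 # three points around the maximum
    delta = 0.5 * (L[0] - L[2]) / (L[0] - 2 * L[1] + L[2])
    return i + delta                           # fractional pixel index

x = np.arange(21.0)
line = np.exp(-(x - 10.3) ** 2 / (2 * 2.0 ** 2))  # synthetic Gaussian emission line
print(subpixel_peak(line))                    # → 10.3 (up to float rounding)
```

Locating line centers to a fraction of a pixel is what makes the subsequent electronic-temperature estimate from line intensities reliable on a low-cost CCD spectrometer.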
Global potential distribution of Drosophila suzukii (Diptera, Drosophilidae)
dos Santos, Luana A.; Mendes, Mayara F.; Krüger, Alexandra P.; Blauth, Monica L.; Gottschalk, Marco S.
2017-01-01
Drosophila suzukii (Matsumura) is a species native to Western Asia that is able to pierce intact fruit during egg laying, causing it to be considered a fruit crop pest in many countries. Drosophila suzukii has expanded rapidly worldwide: occurrences were recorded in North America and Europe in 2008, and in South America in 2013. Due to this rapid expansion, we modeled the potential distribution of this species using the Maximum Entropy Modeling (MaxEnt) algorithm and the Genetic Algorithm for Ruleset Production (GARP), using 407 sites with known occurrences worldwide and 11 predictor variables. The average area under the curve (AUC) of the model predictions over 1000 replicates was 0.97 for MaxEnt and 0.87 for GARP, indicating that both models had optimal performances. The environmental variables that most influenced the prediction of the MaxEnt model were the annual mean temperature, the maximum temperature of the warmest month, the mean temperature of the coldest quarter, and the annual precipitation. The models indicated high environmental suitability, mainly in temperate and subtropical areas of Asia, Europe, and North and South America, where the species has already been recorded. Further invasions of the African and Australian continents are predicted due to the environmental suitability of these areas for this species. PMID:28323903
Validation of Infrared Azimuthal Model as Applied to GOES Data Over the ARM SGP
NASA Technical Reports Server (NTRS)
Gambheer, Arvind V.; Doelling, David R.; Spangenberg, Douglas A.; Minnis, Patrick
2004-01-01
The goal of this research is to identify and reduce the GOES-8 IR temperature biases, induced by a fixed geostationary position, during the course of a day. In this study, the same CERES LW window channel model is applied to GOES-8 IR temperatures during clear days over the Atmospheric Radiation Measurement-Southern Great Plains Central Facility (SCF). The model-adjusted and observed IR temperatures are compared with top-of-the-atmosphere (TOA) estimated temperatures derived from a radiative transfer algorithm based on the atmospheric profile and surface radiometer measurements. This algorithm can then be incorporated to derive more accurate Ts from real-time satellite operational products.
A theoretical comparison of evolutionary algorithms and simulated annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
NASA Technical Reports Server (NTRS)
Hoffman, Matthew J.; Eluszkiewicz, Janusz; Weisenstein, Deborah; Uymin, Gennady; Moncet, Jean-Luc
2012-01-01
Motivated by the needs of Mars data assimilation, particularly quantification of measurement errors and generation of averaging kernels, we have evaluated atmospheric temperature retrievals from Mars Global Surveyor (MGS) Thermal Emission Spectrometer (TES) radiances. Multiple sets of retrievals have been considered in this study: (1) retrievals available from the Planetary Data System (PDS), (2) retrievals based on variants of the retrieval algorithm used to generate the PDS retrievals, and (3) retrievals produced using the Mars 1-Dimensional Retrieval (M1R) algorithm based on the Optimal Spectral Sampling (OSS) forward model. The retrieved temperature profiles are compared to the MGS Radio Science (RS) temperature profiles. For the samples tested, the M1R temperature profiles can be made to agree within 2 K with the RS temperature profiles, but only after tuning the prior and error statistics. Use of a global prior that does not take into account the seasonal dependence leads to errors of up to 6 K. In polar samples, errors relative to the RS temperature profiles are even larger. In these samples, the PDS temperature profiles also exhibit a poor fit with RS temperatures. This fit is worse than reported in previous studies, indicating that the lack of fit is due to a bias correction to TES radiances implemented after 2004. To explain the differences between the PDS and M1R temperatures, the algorithms are compared directly, with the OSS forward model inserted into the PDS algorithm. Factors such as the filtering parameter, the use of linear versus nonlinear constrained inversion, and the choice of the forward model are found to contribute heavily to the differences in the temperature profiles retrieved in the polar regions, resulting in uncertainties of up to 6 K. Even outside the poles, changes in the a priori statistics result in different profile shapes which all fit the radiances within the specified error.
The importance of the a priori statistics prevents reliable global retrievals based on a single a priori and strongly implies that a robust science analysis must instead rely on retrievals employing localized a priori information, for example from an ensemble-based data assimilation system such as the Local Ensemble Transform Kalman Filter (LETKF).
Temperature dataloggers as stove use monitors (SUMs): Field methods and signal analysis
Ruiz-Mercado, Ilse; Canuz, Eduardo; Smith, Kirk R.
2013-01-01
We report the field methodology of a 32-month monitoring study with temperature dataloggers as Stove Use Monitors (SUMs) to quantify usage of biomass cookstoves in 80 households of rural Guatemala. The SUMs were deployed in two stoves types: a well-operating chimney cookstove and the traditional open-cookfire. We recorded a total of 31,112 days from all chimney cookstoves, with a 10% data loss rate. To count meals and determine daily use of the stoves we implemented a peak selection algorithm based on the instantaneous derivatives and the statistical long-term behavior of the stove and ambient temperature signals. Positive peaks with onset and decay slopes exceeding predefined thresholds were identified as “fueling events”, the minimum unit of stove use. Adjacent fueling events detected within a fixed-time window were clustered in single “cooking events” or “meals”. The observed means of the population usage were: 89.4% days in use from all cookstoves and days monitored, 2.44 meals per day and 2.98 fueling events. We found that at this study site a single temperature threshold from the annual distribution of daily ambient temperatures was sufficient to differentiate days of use with 0.97 sensitivity and 0.95 specificity compared to the peak selection algorithm. With adequate placement, standardized data collection protocols and careful data management the SUMs can provide objective stove-use data with resolution, accuracy and level of detail not possible before. The SUMs enable unobtrusive monitoring of stove-use behavior and its systematic evaluation with stove performance parameters of air pollution, fuel consumption and climate-altering emissions. PMID:25225456
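The peak-selection logic described above (slope-thresholded fueling events clustered into meals within a fixed window) can be sketched as follows. Thresholds, the clustering window, and the data are illustrative, not the study's calibrated values:

```python
import numpy as np

def fueling_events(temps, rise_thresh=2.0):
    """Indices where the sample-to-sample temperature rise exceeds rise_thresh."""
    return np.flatnonzero(np.diff(temps) > rise_thresh) + 1

def cluster_meals(event_idx, window=6):
    """Group fueling events within `window` samples of each other into meals."""
    meals = []
    for i in event_idx:
        if meals and i - meals[-1][-1] <= window:
            meals[-1].append(int(i))    # same meal as the previous event
        else:
            meals.append([int(i)])      # start a new meal
    return meals

# Two heating bursts on an ambient baseline: four fueling events, two meals
temps = np.array([20, 20, 26, 32, 33, 31, 29, 27, 25, 23, 21, 20, 20, 20,
                  20, 20, 20, 20, 20, 27, 33, 34, 32, 30, 28, 26, 24, 22])
events = fueling_events(temps)
print(len(events), len(cluster_meals(events)))  # → 4 2
```

Counting days of use then reduces to checking whether any meal was detected per calendar day, which is why a single well-chosen temperature threshold performed comparably at this site.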
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As known by many researchers, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used. However, the logarithmic cooling schedule is so slow that no one can afford such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature can decrease much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while guaranteeing that the global optima are reached when the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
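The square-root schedule itself is easy to sketch as plain simulated annealing. Note this omits the stochastic-approximation component that gives the paper's algorithm its convergence guarantee under fast cooling, and the test function and parameters are illustrative:

```python
import math
import random

def sa_sqrt_cooling(f, x0, t0=1.0, steps=20000, step=1.0, seed=1):
    """Simulated annealing with a square-root cooling schedule T_k = t0/sqrt(k)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, steps + 1):
        T = t0 / math.sqrt(k)                 # square-root cooling
        y = x + rng.uniform(-step, step)      # random neighbor
        fy = f(y)
        # Metropolis acceptance criterion
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# 1-D Rastrigin-like function: many local minima, global minimum 0 at x = 0
f = lambda x: x * x + 10 * (1 - math.cos(2 * math.pi * x))
print(sa_sqrt_cooling(f, x0=4.0))
```

With a logarithmic schedule the temperature at step 20000 would still be t0/log(20001) ≈ 0.1·t0, whereas the square-root schedule has already cooled to about 0.007·t0, which is why fast schedules need extra machinery to retain convergence guarantees.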
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
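As background, the classical Metropolis method that the paper generalizes can be sketched for a toy system: a 1-D harmonic oscillator with E(x) = x²/2 sampled at temperature T (units with k_B = 1), whose exact thermal mean energy is T/2 by equipartition. The step size and chain length are illustrative:

```python
import math
import random

def metropolis_mean_energy(T, steps=200_000, step=1.0, seed=0):
    """Sample exp(-E(x)/T) with E(x) = x^2/2 and return the sample mean energy."""
    rng = random.Random(seed)
    x, e = 0.0, 0.0
    total = 0.0
    for _ in range(steps):
        y = x + rng.uniform(-step, step)      # symmetric proposal
        ey = y * y / 2.0
        # Metropolis rule: always accept downhill, uphill with prob exp(-dE/T)
        if ey <= e or rng.random() < math.exp(-(ey - e) / T):
            x, e = y, ey
        total += e
    return total / steps

print(metropolis_mean_energy(T=2.0))  # close to the exact value T/2 = 1.0
```

The quantum generalization in the paper replaces this random walk over classical configurations with operations on quantum states, where the energy eigenstates are not classically accessible.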
NASA Astrophysics Data System (ADS)
Wei, B. G.; Wu, X. Y.; Yao, Z. F.; Huang, H.
2017-11-01
Transformers are essential devices of the power system. Accurate computation of the highest temperature (HST) of a transformer's windings is very significant, as the HST is a fundamental parameter in controlling the load operation mode and influences the lifetime of the insulation. Based on an analysis of the heat transfer processes and the thermal characteristics inside transformers, the influence of factors such as sunshine and external wind speed on oil-immersed transformers is taken into consideration. Experimental data and a neural network are used for modeling and testing of the HST, and investigations are conducted on the optimization of the structure and algorithms of the neural network. A comparison between the measured values and the values calculated with the algorithm recommended by IEC 60076 and with the neural network algorithm proposed by the authors shows that the neural network algorithm approximates the measured values better than the IEC 60076 algorithm.
Multifractal detrending moving-average cross-correlation analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. Multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on detrended fluctuation analysis (MFXDFA). We develop in this work a class of MFDCCA algorithms based on detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions of their multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between the two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, both outperforming the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities.
For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract rational multifractal nature.
Missing value imputation for microarray data: a comprehensive comparison study and a web tool
2013-01-01
Background Microarray data are usually peppered with missing values due to various reasons. However, most of the downstream analyses for microarray data require complete datasets. Therefore, accurate algorithms for missing value estimation are needed to improve the performance of microarray data analyses. Although many algorithms have been developed, there are many debates on the selection of the optimal algorithm. Studies comparing the performance of different algorithms are still not comprehensive, especially in the number of benchmark datasets used, the number of algorithms compared, the rounds of simulation conducted, and the performance measures used. Results In this paper, we performed a comprehensive comparison by using (I) thirteen datasets, (II) nine algorithms, (III) 110 independent runs of simulation, and (IV) three types of measures to evaluate the performance of each imputation algorithm fairly. First, the effects of different types of microarray datasets on the performance of each imputation algorithm were evaluated. Second, we discussed whether datasets from different species have a different impact on the performance of different algorithms. To assess the performance of each algorithm fairly, all evaluations were performed using three types of measures. Our results indicate that the performance of an imputation algorithm mainly depends on the type of dataset rather than on the species the samples come from. In addition to the statistical measure, two other measures with biological meanings are useful to reflect the impact of missing value imputation on the downstream data analyses. Our study suggests that local-least-squares-based methods are good choices for handling missing values in most microarray datasets.
Based on such a comprehensive comparison, researchers could choose the optimal algorithm for their datasets easily. Moreover, new imputation algorithms could be compared with the existing algorithms using this comparison strategy as a standard protocol. In addition, to assist researchers in dealing with missing values easily, we built a web-based and easy-to-use imputation tool, MissVIA (http://cosbi.ee.ncku.edu.tw/MissVIA), which supports many imputation algorithms. Once users upload a real microarray dataset and choose the imputation algorithms, MissVIA will determine the optimal algorithm for the users' data through a series of simulations, and then the imputed results can be downloaded for the downstream data analyses. PMID:24565220
NASA Astrophysics Data System (ADS)
Xia, Huihui; Kan, Ruifeng; Xu, Zhenyu; Liu, Jianguo; He, Yabai; Yang, Chenguang; Chen, Bing; Wei, Min; Yao, Lu; Zhang, Guangle
2016-10-01
In this paper, the reconstruction of axisymmetric temperature and H2O concentration distributions in a flat flame burner is realized by tunable diode laser absorption spectroscopy (TDLAS) and a filtered back-projection (FBP) algorithm. Two H2O absorption transitions (7154.354/7154.353 cm-1 and 7467.769 cm-1) are selected as the line pair for temperature measurement, and time-division multiplexing is adopted to scan these two H2O absorption transitions simultaneously at a 1 kHz repetition rate. In the experiment, the FBP algorithm can reconstruct axisymmetric distributions of flow field parameters from only single-view parallel-beam TDLAS measurements, because the same data set from the given parallel beam can be reused for virtual projection angles and beams distributed between 0° and 180°. The real-time online measurements of the projection data, i.e., the integrated absorbances for both pre-selected transitions on a CH4/air flat flame burner, are realized by on-line Voigt fitting, and the fitting residuals are less than 0.2%. By analyzing the projection data from different views with the FBP algorithm, the distributions of temperature and concentration along the radial direction can be obtained instantly. The results demonstrate that the system and the proposed FBP algorithm are capable of accurate reconstruction of axisymmetric temperature and H2O concentration distributions in combustion systems and facilities.
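The line-pair principle behind the temperature measurement can be sketched with a simplified two-line ratio model. Partition-function and stimulated-emission corrections are dropped, and the lower-state-energy difference dE below is illustrative, not the value for the lines used in the paper:

```python
import math

HCK = 1.4388  # second radiation constant hc/k_B in cm*K

def temperature_from_ratio(R, R0, dE, T0=296.0):
    """Invert the two-line ratio model R(T) = R0 * exp(-HCK*dE*(1/T - 1/T0)) for T.

    R  : measured ratio of integrated absorbances A1/A2
    R0 : the same ratio at reference temperature T0 (K)
    dE : difference of lower-state energies E1'' - E2'' in cm^-1
    """
    return 1.0 / (1.0 / T0 - math.log(R / R0) / (HCK * dE))

# Round trip: forward-model a ratio at 1500 K, then invert it
dE, R0 = 1000.0, 2.0
R = R0 * math.exp(-HCK * dE * (1.0 / 1500.0 - 1.0 / 296.0))
print(round(temperature_from_ratio(R, R0, dE), 1))  # → 1500.0
```

Because the path-length and mole-fraction factors cancel in the ratio, the absorbance ratio of a well-chosen line pair is a function of temperature alone, which is what makes the single-view tomographic temperature field possible.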
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
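A classic performance bound for marked graphs of the ATAMM kind is the cycle-based lower bound on the steady-state iteration period: the total node latency around a directed cycle divided by the number of tokens (initial marking) in that cycle, maximized over all cycles. The sketch below assumes the cycle enumeration is given; it illustrates the bound, not the ATAMM simulator itself.

```python
def iteration_period_bound(cycles):
    """Lower bound on the iteration period of a marked (decision-free
    dataflow) graph: max over directed cycles of
    (sum of node execution times in the cycle) / (tokens in the cycle)."""
    return max(sum(times) / tokens for times, tokens in cycles)
```

For example, a cycle with latencies 3 and 2 and one token bounds the period at 5.0, even if another cycle with total latency 9 carries two tokens (bound 4.5).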
Jacomy, Mathieu; Venturini, Tommaso; Heymann, Sebastien; Bastian, Mathieu
2014-01-01
Gephi is a network visualization software package used in various disciplines (social network analysis, biology, genomics...). One of its key features is the ability to display the spatialization process, aiming at transforming the network into a map, and ForceAtlas2 is its default layout algorithm. The latter is developed by the Gephi team as an all-around solution for Gephi users' typical networks (scale-free, 10 to 10,000 nodes). We present here for the first time its functioning and settings. ForceAtlas2 is a force-directed layout close to other algorithms used for network spatialization. We do not claim a theoretical advance but an attempt to integrate different techniques such as the Barnes-Hut simulation, degree-dependent repulsive force, and local and global adaptive temperatures. It is designed for the Gephi user experience (it is a continuous algorithm), and we explain the constraints this implies. The algorithm has benefited from extensive user feedback and is developed to provide many possibilities through its settings. We lay out its complete functioning for users who need a precise understanding of its behaviour, from the formulas to a graphic illustration of the result. We propose a benchmark for our compromise between performance and quality. We also explain why we integrated its various features and discuss our design choices. PMID:24914678
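ForceAtlas2's core forces are an attraction proportional to the edge length and a degree-dependent repulsion proportional to (deg(u)+1)(deg(v)+1)/d. One synchronous iteration of those forces can be sketched as follows; this is a simplified illustration without Barnes-Hut approximation, adaptive speeds, or the other settings the paper describes, and the step size is an invented parameter.

```python
import math

def fa2_step(pos, deg, edges, kr=1.0, step=0.01):
    """One synchronous iteration of simplified ForceAtlas2 forces:
    linear attraction along edges, degree-weighted repulsion between all pairs."""
    n = len(pos)
    force = [[0.0, 0.0] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            # Repulsion: Fr = kr * (deg(u)+1) * (deg(v)+1) / d, pushing apart.
            fr = kr * (deg[u] + 1) * (deg[v] + 1) / d
            fx, fy = fr * dx / d, fr * dy / d
            force[u][0] -= fx; force[u][1] -= fy
            force[v][0] += fx; force[v][1] += fy
    for u, v in edges:
        dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
        # Attraction: Fa = d, i.e. a force equal to the distance itself.
        force[u][0] += dx; force[u][1] += dy
        force[v][0] -= dx; force[v][1] -= dy
    return [[p[0] + step * f[0], p[1] + step * f[1]]
            for p, f in zip(pos, force)]
```

With two connected nodes placed far apart, attraction dominates the degree-weighted repulsion and the nodes move closer on each iteration, which is the balance the layout converges around.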
Weighted community detection and data clustering using message passing
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Yanchen; Zhang, Pan
2018-03-01
Grouping objects into clusters based on the similarities or weights between them is one of the most important problems in science and engineering. In this work, by extending message-passing algorithms and spectral algorithms proposed for the unweighted community detection problem, we develop a non-parametric method based on statistical physics, mapping the problem to the Potts model at the critical temperature of the spin-glass transition and applying belief propagation to solve the marginals corresponding to the Boltzmann distribution. Our algorithm is robust to over-fitting and gives a principled way to determine whether there are significant clusters in the data and how many clusters there are. We apply our method to different clustering tasks. In the community detection problem in weighted and directed networks, we show that our algorithm significantly outperforms existing algorithms. In the clustering problem, where the data were generated by mixture models in the sparse regime, we show that our method works all the way down to the theoretical limit of detectability and gives accuracy very close to that of optimal Bayesian inference. In the semi-supervised clustering problem, our method needs only several labels to work perfectly on classic datasets. Finally, we further develop Thouless-Anderson-Palmer equations, which greatly reduce the computational complexity in dense networks but give almost the same performance as belief propagation.
Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng
2015-01-01
Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because sufficient performance indices and adequate overall performance scores are lacking. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers first have to put considerable effort into constructing it. To save researchers time and effort, here we develop a web tool that implements our performance comparison framework, featuring fast data processing, a comprehensive performance comparison, and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator) and is written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results for each compared algorithm and each selected performance index can be downloaded as text files for further analyses.
Allowing users to select eight existing performance indices and 15 existing algorithms for comparison, our web tool benefits researchers who are eager to comprehensively and objectively evaluate the performance of their newly developed algorithm. Thus, our tool greatly expedites progress in the research of computational identification of cooperative TF pairs. PMID:26677932
Smart building temperature control using occupant feedback
NASA Astrophysics Data System (ADS)
Gupta, Santosh K.
This work was motivated by the problem of computing optimal, commonly agreeable thermal settings in spaces with multiple occupants. We propose algorithms that take into account each occupant's preferences along with the thermal correlations between different zones of a building, to arrive at optimal thermal settings for all zones in a coordinated manner. In the first part of this work we incorporate active occupant feedback to minimize aggregate user discomfort and total energy cost. User feedback is used to estimate each user's comfort range, taking into account possible inaccuracies in the feedback. The control algorithm takes the energy cost into account, trading it off optimally against aggregate user discomfort. A lumped heat-transfer model based on thermal resistance and capacitance is used to model a multi-zone building. We provide a stability analysis and establish convergence of the proposed solution to a desired temperature that minimizes the sum of energy cost and aggregate user discomfort. However, convergence to the optimum requires sufficient separation between the user feedback frequency and the dynamics of the system; otherwise, the user feedback provided does not correctly reflect the effect of the current control input on user discomfort. The algorithm is further extended using singular perturbation theory to determine the minimum time between successive user feedback solicitations. Under sufficient time-scale separation, we establish convergence of the proposed solution. A simulation study and experimental runs on the Watervliet-based test facility demonstrate the performance of the algorithm. In the second part we develop a consensus algorithm for attaining a common temperature set-point that is agreeable to all occupants of a zone in a typical multi-occupant space. The information on the comfort range functions is held privately by each occupant.
Using occupant-differentiated, dynamically adjusted prices as feedback signals, we propose a distributed solution which ensures that a consensus is attained among all occupants upon convergence, irrespective of whether their temperature preferences are coherent or conflicting. Occupants are only assumed to be rational, in that they choose their own temperature set-points so as to minimize their individual energy cost plus discomfort. We use the Alternating Direction Method of Multipliers (ADMM) to solve our consensus problem. We further establish the convergence of the proposed algorithm to the optimal thermal set-point values that minimize the sum of the energy cost and the aggregate discomfort of all occupants in a multi-zone building. For simulating our consensus algorithm we use realistic building parameters based on the Watervliet test facility. The simulation study, based on real-world building parameters, establishes the validity of our theoretical model and provides insights into the dynamics of the system with a mobile user population. In the third part we present a game-theoretic (auction) mechanism that requires occupants to "purchase" their individualized comfort levels beyond what is provided by default by the building operator. The comfort pricing policy, derived as an extension of Vickrey-Clarke-Groves (VCG) pricing, ensures incentive-compatibility of the mechanism, i.e., an occupant acting in self-interest cannot benefit from declaring their comfort function untruthfully, irrespective of the choices made by other occupants. The declared (or estimated) occupant comfort ranges (functions) are then utilized by the building operator, along with the energy cost information, to set the environment controls to optimally balance the aggregate discomfort of the occupants and the energy cost of the building operator.
We use a realistic building model and parameters based on our test facility to demonstrate the convergence of the actual temperatures in different zones to the desired temperatures, and provide insight into the pricing structure necessary for truthful comfort feedback from the occupants. Finally, we present an end-to-end framework designed for collecting occupant feedback and incorporating the feedback data into energy-efficient operation of a building. We have designed a mobile application that occupants can use on their smartphones to provide their thermal preference feedback. When relaying the occupant feedback to the central server, the mobile application also uses indoor localization techniques to tie the occupant preference to their current thermal zone. Texas Instruments SensorTags are used for real-time zonal temperature readings. The mobile application relays the occupant preference, along with the location, to a central server that hosts our learning algorithm, which learns the environment and uses occupant feedback to calculate the optimal temperature set point. The entire process is triggered upon a change of occupancy, environmental conditions, and/or occupant preference. The learning algorithm is scheduled to run at regular intervals to respond dynamically to environmental and occupancy changes. We describe results from experimental studies in two different settings: a single-family residential home and a university-based laboratory space. (Abstract shortened by UMI.)
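The energy-versus-discomfort trade-off and the lumped RC zone model can be illustrated with a toy version. The quadratic cost terms, the closed-form set point, and all parameter values below are illustrative assumptions, not the dissertation's exact formulation: with discomfort (T - pref_i)^2 per occupant and an energy cost c*(T - T_env)^2, setting the derivative of the total cost to zero gives a weighted average.

```python
def optimal_setpoint(prefs, t_env, c):
    """Minimize c*(T - t_env)^2 + sum_i (T - pref_i)^2 over T.
    Setting the derivative to zero yields a weighted average."""
    return (c * t_env + sum(prefs)) / (c + len(prefs))

def simulate_zone(t0, setpoint, t_out, steps, R=2.0, C=10.0, k=0.5, dt=1.0):
    """Lumped RC zone model with proportional heating/cooling toward the
    set point: dT/dt = (t_out - T)/(R*C) + k*(setpoint - T)/C."""
    T = t0
    for _ in range(steps):
        T += dt * ((t_out - T) / (R * C) + k * (setpoint - T) / C)
    return T
```

With occupant preferences of 20 and 24 degrees, an outdoor-driven energy term weighted c = 2 toward 16 degrees pulls the optimum down to 19; the zone simulation then settles at the equilibrium between the envelope loss and the control action.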
Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data
Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria
2015-01-01
Rationale and Objectives: Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms from clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and Methods: Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of various sources of variability were estimated using a linear mixed effects model. Segmentation boundaries were compared to provide a basis on which to optimize algorithm performance for developers. Results: Intra-algorithm repeatability ranged from 13% (best performing) to 100% (worst performing), with most algorithms demonstrating improved repeatability as the tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the worst performer were included. The best performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefitted from human editing but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions: Nine of the twelve participating algorithms pass precision requirements similar to those indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm.
No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition composed of the best performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841
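A common way to express intra-algorithm repeatability from test-retest pairs is the within-subject coefficient of variation (wCV) and the derived percent repeatability coefficient, RC = 2.77 * wCV. The sketch below uses that convention, which may differ in detail from the study's actual estimator; the input pairs are hypothetical.

```python
import math

def repeatability_pct(pairs):
    """Percent repeatability coefficient from test-retest volume pairs.
    wCV^2 is estimated as the mean over cases of (d_i^2 / 2) / m_i^2,
    where d_i is the difference and m_i the mean of the two measurements;
    RC = 2.77 * wCV, expressed in percent."""
    s = sum(((a - b) ** 2 / 2.0) / (((a + b) / 2.0) ** 2) for a, b in pairs)
    wcv = math.sqrt(s / len(pairs))
    return 2.77 * wcv * 100.0
```

Perfectly repeated measurements give 0%; a single pair differing by 10% of its mean gives an RC near 19%, in the range the study reports for mid-performing algorithms.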
GPU based cloud system for high-performance arrhythmia detection with parallel k-NN algorithm.
Tae Joon Jun; Hyun Ji Park; Hyuk Yoo; Young-Hak Kim; Daeyoung Kim
2016-08-01
In this paper, we propose a GPU-based cloud system for high-performance arrhythmia detection. The Pan-Tompkins algorithm is used for QRS detection, and we optimized the beat classification algorithm with K-Nearest Neighbors (K-NN). To support high-performance beat classification on the system, we parallelized the beat classification algorithm with CUDA to execute it on virtualized GPU devices in the cloud system. The MIT-BIH Arrhythmia database is used for validation of the algorithm. The system achieved a detection rate of about 93.5%, which is comparable to previous studies, while our algorithm shows 2.5 times faster execution than the CPU-only detection algorithm.
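The K-NN beat classification step can be sketched in a few lines: every beat-to-beat distance is independent of the others, which is what makes the CUDA parallelization effective. This is a scalar illustration, not the GPU code, and the feature vectors and labels are hypothetical.

```python
def knn_classify(train, labels, beat, k=3):
    """Classify one beat feature vector by majority vote among its k nearest
    training beats (squared Euclidean distance). Each distance in the sorted
    comprehension is independent, so on a GPU one thread can own each pair."""
    dist = sorted(
        (sum((a - b) ** 2 for a, b in zip(t, beat)), lab)
        for t, lab in zip(train, labels))
    votes = {}
    for _, lab in dist[:k]:
        votes[lab] = votes.get(lab, 0) + 1
    return max(votes, key=votes.get)
```

With normal ("N") and ventricular ("V") training beats in separate regions of feature space, a query beat is assigned the label of its local neighborhood.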
Stochastic series expansion simulation of the t -V model
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t -V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from the time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t -V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
Constrained independent component analysis approach to nonobtrusive pulse rate measurements
NASA Astrophysics Data System (ADS)
Tsouri, Gill R.; Kyal, Survi; Dianat, Sohail; Mestha, Lalit K.
2012-07-01
Nonobtrusive pulse rate measurement using a webcam is considered. We demonstrate how state-of-the-art algorithms based on independent component analysis suffer from a sorting problem which hinders their performance, and propose a novel algorithm based on constrained independent component analysis to improve performance. We present how the proposed algorithm extracts a photoplethysmography signal and resolves the sorting problem. In addition, we perform a comparative study between the proposed algorithm and state-of-the-art algorithms over 45 video streams, using a finger probe oximeter for reference measurements. The proposed algorithm provides improved accuracy: the root mean square error is decreased from 20.6 and 9.5 beats per minute (bpm) for existing algorithms to 3.5 bpm for the proposed algorithm. An error of 3.5 bpm is within the inaccuracy expected from the reference measurements. This implies that the proposed algorithm provided accuracy equal to that of the finger probe oximeter.
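Once a photoplethysmography signal has been extracted, the pulse rate can be read off its dominant frequency. The sketch below is a brute-force frequency scan over the physiological range, not the authors' method; the frame rate and signal are hypothetical.

```python
import math

def pulse_rate_bpm(signal, fps, lo=40, hi=180):
    """Estimate pulse rate by scanning candidate rates (in bpm) and picking
    the one whose sin/cos correlation with the mean-removed signal is
    strongest, i.e. the peak of a discrete-time Fourier magnitude scan."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]
    best_bpm, best_power = lo, -1.0
    for bpm in range(lo, hi + 1):
        f = bpm / 60.0  # candidate frequency in Hz
        c = sum(v * math.cos(2 * math.pi * f * i / fps) for i, v in enumerate(x))
        s = sum(v * math.sin(2 * math.pi * f * i / fps) for i, v in enumerate(x))
        power = c * c + s * s
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm
```

A 15-second synthetic signal pulsing at 1.2 Hz (72 bpm) sampled at a typical 30 fps webcam rate is recovered exactly.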
NASA Astrophysics Data System (ADS)
Liu, Sha; Liu, Shi; Tong, Guowei
2017-11-01
In industrial areas, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emission, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. Different from traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and mitigate the ill-posed nature of the AT inverse problem. Taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to minimize the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, an Adaboost.RT-based BP neural network algorithm is developed to predict the temperature distribution on the refined grid from the temperature distribution data estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving the robustness and the RA.
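A bare-bones grey wolf optimizer of the kind used in the first phase can be sketched as follows. The cost function, bounds, and population settings are illustrative, not those of the paper; in the actual method the wolves would encode coarse-grid temperature values and the cost would involve the time-of-flight data.

```python
import random

def gwo(f, dim, bounds, n_wolves=20, iters=200, seed=1):
    """Minimal grey wolf optimizer: each wolf moves toward a randomized blend
    of the three best solutions (alpha, beta, delta), with the exploration
    parameter a decaying linearly from 2 to 0 over the iterations."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        leaders = [w[:] for w in wolves[:3]]  # alpha, beta, delta (copies)
        a = 2.0 * (1.0 - t / iters)
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    A = a * (2.0 * rng.random() - 1.0)
                    C = 2.0 * rng.random()
                    # Encircling step toward this leader.
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3.0))
    return min(wolves, key=f)
```

On a simple quadratic (sphere) cost the pack collapses onto the minimum as a shrinks, which is the behavior the reconstruction relies on for the coarse-grid estimate.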
NASA Astrophysics Data System (ADS)
Pelicano, Christian Mark; Rapadas, Nick; Cagatan, Gerard; Magdaluyo, Eduardo
2017-12-01
Herein, the crystallite size and band gap energy of zinc oxide (ZnO) quantum dots were predicted using an artificial neural network (ANN). Three input factors, reagent ratio, growth time, and growth temperature, were examined with respect to crystallite size and band gap energy as response factors. The results generated by the neural network model were then compared with the experimental results. Experimental crystallite size and band gap energy of the ZnO quantum dots were measured from TEM images and absorbance spectra, respectively. The Levenberg-Marquardt (LM) algorithm was used as the learning algorithm for the ANN model. The performance of the ANN model was then assessed through the mean square error (MSE) and regression values. The ANN predictions are in good agreement with the experimental data.
Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.
Kubitzki, Marcus B; de Groot, Bert L
2007-06-15
Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature, T(0). This selective excitation along with the replica framework permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.
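In REX-style methods, an exchange between replicas at temperatures T_i and T_j with energies E_i and E_j is accepted with the Metropolis criterion p = min(1, exp[(1/kT_i - 1/kT_j)(E_i - E_j)]). The sketch below shows that standard criterion; TEE-REX couples only an essential subspace to the higher temperature, so its actual acceptance rule differs in detail.

```python
import math
import random

def swap_probability(E_i, E_j, T_i, T_j, kB=1.0):
    """Metropolis acceptance probability for exchanging configurations
    between replicas at temperatures T_i and T_j (energies E_i, E_j)."""
    delta = (1.0 / (kB * T_i) - 1.0 / (kB * T_j)) * (E_i - E_j)
    return min(1.0, math.exp(delta))

def attempt_swap(E_i, E_j, T_i, T_j, rng=random):
    """Return True when the exchange is accepted."""
    return rng.random() < swap_probability(E_i, E_j, T_i, T_j)
```

When the colder replica holds the higher energy, the swap is always accepted, which is how high-temperature replicas feed barrier-crossing conformations down to the reference temperature.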
Minimalist ensemble algorithms for genome-wide protein localization prediction.
Lin, Jhih-Rong; Mondal, Ananda Mohan; Liu, Rong; Hu, Jianjun
2012-07-03
Computational prediction of protein subcellular localization can greatly help to elucidate its functions. Despite the existence of dozens of protein localization prediction algorithms, prediction accuracy and coverage are still low. Several ensemble algorithms have been proposed to improve the prediction performance, usually including as many as 10 or more individual localization algorithms. However, their performance is still limited by running complexity and redundancy among the individual prediction algorithms. This paper proposed a novel method for rational design of minimalist ensemble algorithms for practical genome-wide protein subcellular localization prediction. The algorithm is based on combining a feature-selection-based filter and a logistic regression classifier. Using a novel concept of contribution scores, we analyzed issues of algorithm redundancy, consensus mistakes, and algorithm complementarity in designing ensemble algorithms. We applied the proposed minimalist logistic regression (LR) ensemble algorithm to two genome-wide datasets of Yeast and Human and compared its performance with current ensemble algorithms. Experimental results showed that the minimalist ensemble algorithm can achieve high prediction accuracy with only 1/3 to 1/2 of the individual predictors of current ensemble algorithms, which greatly reduces computational complexity and running time. It was found that high performance ensemble algorithms are usually composed of predictors that together cover most of the available features. Compared to the best individual predictor, our ensemble algorithm improved the prediction accuracy from an AUC score of 0.558 to 0.707 for the Yeast dataset and from 0.628 to 0.646 for the Human dataset. Compared with popular weighted-voting-based ensemble algorithms, our classifier-based ensemble algorithms achieved much better performance without suffering from the inclusion of too many individual predictors.
We proposed a method for rational design of minimalist ensemble algorithms using feature selection and classifiers. The proposed minimalist ensemble algorithm based on logistic regression can achieve equal or better prediction performance while using only half or one-third of the individual predictors required by other ensemble algorithms. The results also suggested that meta-predictors that take advantage of a variety of features by combining individual predictors tend to achieve the best performance. The LR ensemble server and related benchmark datasets are available at http://mleg.cse.sc.edu/LRensemble/cgi-bin/predict.cgi. PMID:22759391
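The logistic regression combiner at the core of a minimalist ensemble can be sketched as follows: the selected individual predictors' scores become features, and a logistic model learns a weight per predictor plus a bias. The training loop, learning rate, and toy scores below are illustrative stand-ins, not the paper's implementation.

```python
import math

def train_lr_ensemble(scores, labels, lr=0.5, epochs=1000):
    """Fit a logistic-regression combiner over individual predictors' scores
    (plus a bias term) by plain stochastic gradient descent on the log-loss."""
    w = [0.0] * (len(scores[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(scores, labels):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            g = p - y                        # gradient of log-loss w.r.t. z
            w[0] -= lr * g
            for j, xj in enumerate(x):
                w[j + 1] -= lr * g * xj
    return w

def lr_predict(w, x):
    """Binary decision from the combined score."""
    return 1 if w[0] + sum(wj * xj for wj, xj in zip(w[1:], x)) > 0 else 0
```

With two hypothetical predictors, one informative and one noisy, the learned weights emphasize the informative one, which is the behavior that lets a small, well-chosen predictor set match a larger ensemble.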
NASA Astrophysics Data System (ADS)
May, J. C.; Rowley, C. D.; Meyer, H.
2017-12-01
The Naval Research Laboratory (NRL) Ocean Surface Flux System (NFLUX) is an end-to-end data processing and assimilation system used to provide near-real-time satellite-based surface heat flux fields over the global ocean. The first component of NFLUX produces near-real-time swath-level estimates of surface state parameters and downwelling radiative fluxes. The focus here will be on the satellite swath-level state parameter retrievals, namely surface air temperature, surface specific humidity, and surface scalar wind speed over the ocean. Swath-level state parameter retrievals are produced from satellite sensor data records (SDRs) from four passive microwave sensors onboard 10 platforms: the Special Sensor Microwave Imager/Sounder (SSMIS) sensor onboard the DMSP F16, F17, and F18 platforms; the Advanced Microwave Sounding Unit-A (AMSU-A) sensor onboard the NOAA-15, NOAA-18, NOAA-19, Metop-A, and Metop-B platforms; the Advanced Technology Microwave Sounder (ATMS) sensor onboard the S-NPP platform; and the Advanced Microwave Scanning Radiometer 2 (AMSR2) sensor onboard the GCOM-W1 platform. The satellite SDRs are translated into state parameter estimates using multiple polynomial regression algorithms. The coefficients of the algorithms are obtained using a bootstrapping technique with all available brightness temperature channels for a given sensor, in addition to an SST field. For each retrieved parameter for each sensor-platform combination, unique algorithms are developed for ascending and descending orbits, as well as clear versus cloudy conditions. Each of the sensors produces surface air temperature and surface specific humidity retrievals. The SSMIS and AMSR2 sensors also produce surface scalar wind speed retrievals. Improvement is seen in the SSMIS retrievals when separate algorithms are used for the even and odd scans, with the odd scans performing better than the even scans. Currently, NFLUX treats all SSMIS scans as even scans. 
Additional improvement in all of the surface retrievals comes from using a 3-hourly SST field, as opposed to a daily SST field.
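A retrieval of the kind described — a polynomial regression mapping sensor channels to a surface state parameter — can be illustrated with a toy one-predictor quadratic fit solved by normal equations. The predictor values and coefficients below are invented; the real NFLUX algorithms regress on all available brightness temperature channels plus an SST field.

```python
def polyfit2(x, y):
    """Least-squares fit y ~ a + b*x + c*x**2 via the 3x3 normal equations."""
    # Power sums sum(x**k) for k = 0..4 (s[0] is just len(x)).
    s = [sum(xi ** k for xi in x) for k in range(5)]
    A = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    rhs = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    coef = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coef[r] = (rhs[r] - sum(A[r][c] * coef[c]
                                for c in range(r + 1, 3))) / A[r][r]
    return coef  # [a, b, c]
```

With raw brightness temperatures (hundreds of kelvin) the normal equations become ill-conditioned, which is one reason operational fits center/scale predictors or use orthogonal factorizations instead.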
Power Control and Optimization of Photovoltaic and Wind Energy Conversion Systems
NASA Astrophysics Data System (ADS)
Ghaffari, Azad
The power map and Maximum Power Point (MPP) of Photovoltaic (PV) and Wind Energy Conversion Systems (WECS) highly depend on system dynamics and environmental parameters, e.g., solar irradiance, temperature, and wind speed. Power optimization algorithms for PV systems and WECS are collectively known as Maximum Power Point Tracking (MPPT) algorithms. Gradient-based Extremum Seeking (ES), as a non-model-based MPPT algorithm, drives the system to its peak point along the steepest-ascent direction regardless of changes in the system dynamics and variations of the environmental parameters. Since the power map shape defines the gradient vector, a close estimate of the power map shape is needed to create user-assignable transients in the MPPT algorithm. The Hessian gives a precise estimate of the power map in a neighborhood around the MPP. The estimate of the inverse of the Hessian, in combination with the estimate of the gradient vector, is the key part of implementing the Newton-based ES algorithm. Hence, we generate an estimate of the Hessian using our proposed perturbation matrix. Also, we introduce a dynamic estimator to calculate the inverse of the Hessian, which is an essential part of our algorithm. We present various simulations and experiments on micro-converter PV systems to verify the validity of our proposed algorithm. The ES scheme can also be used in combination with other control algorithms to achieve desired closed-loop performance. The WECS dynamics are slow, which makes the response of the ES-based MPPT even slower. Hence, we present a control scheme, extended from Field-Oriented Control (FOC), in combination with feedback linearization to reduce the convergence time of the closed-loop system. Furthermore, the nonlinear control prevents magnetic saturation of the stator of the Induction Generator (IG). 
The proposed control algorithm, in combination with the ES, guarantees closed-loop robustness with respect to high levels of parameter uncertainty in the IG dynamics. The simulation results verify the effectiveness of the proposed algorithm.
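A gradient-based extremum-seeking loop of the kind described can be sketched in a few lines: a sinusoidal probe perturbs the operating point, a washout filter strips the DC component of the measured power, and demodulation by the same sinusoid yields a slope estimate that climbs to the peak without any model of the map. All gains, the probe frequency, and the quadratic power map are illustrative assumptions, not the thesis's tuned values.

```python
import math

def extremum_seeking(power_map, theta0, steps=4000, dt=0.01,
                     a=0.05, omega=25.0, k=4.0, h=5.0):
    """Gradient-based ES: probe a*sin(omega*t), washout filter with
    cutoff h, demodulation, then integration of the slope estimate."""
    theta = theta0
    j_lp = power_map(theta0)                 # low-pass (washout) state
    for i in range(steps):
        t = i * dt
        j = power_map(theta + a * math.sin(omega * t))
        j_lp += dt * h * (j - j_lp)          # track and remove DC of J
        grad_est = (j - j_lp) * math.sin(omega * t)   # mean ~ (a/2)*J'
        theta += dt * k * grad_est           # climb the estimated slope
    return theta
```

Averaging the demodulated signal over a probe period gives roughly (a/2)·J'(theta), so the loop behaves like gradient ascent with effective gain k·a/2 — which is why ES finds the MPP without knowing the map.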
Iterative simulated quenching for designing irregular-spot-array generators.
Gillet, J N; Sheng, Y
2000-07-10
We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves a much lower cost function value and reconstruction error and a higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule, and it is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than do trapezoidal apertures of fixed heights.
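The repeated quench-then-rescale loop can be caricatured on a toy bit-string energy. Two loud caveats: the restart rule here (each new quench begins from a geometrically reduced start temperature) is a crude stand-in for the paper's ensemble-statistics rescaling, and the Hamming-distance energy replaces the diffractive-element cost function.

```python
import math
import random

def simulated_quenching(energy, n_bits, n_quench=5, steps=2000,
                        t0=2.0, alpha=0.995, remelt=0.6, seed=1):
    """Iterated fast quenches: each quench cools exponentially
    (t *= alpha every step); afterwards the frozen system is re-melted
    by restarting from a rescaled start temperature."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    best, best_e = state[:], energy(state)
    t_start = t0
    for _ in range(n_quench):
        t = t_start
        for _ in range(steps):
            cand = state[:]
            cand[rng.randrange(n_bits)] ^= 1          # flip one bit
            d_e = energy(cand) - energy(state)
            # Metropolis acceptance (exp underflows harmlessly to 0).
            if d_e <= 0 or rng.random() < math.exp(-d_e / t):
                state = cand
                if energy(state) < best_e:
                    best, best_e = state[:], energy(state)
            t *= alpha                                # fast quench
        t_start *= remelt      # crude re-melt before the next quench
    return best, best_e
```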
NASA Astrophysics Data System (ADS)
Yoon, Kyung-Beom; Park, Won-Hee
2015-04-01
The convective heat transfer coefficient and surface emissivity before and after flame occurrence on a wood specimen surface and the flame heat flux were estimated using the repulsive particle swarm optimization algorithm and cone heater test results. The cone heater specified in the ISO 5660 standards was used, and six cone heater heat fluxes were tested. Preservative-treated Douglas fir 21 mm in thickness was used as the wood specimen in the tests. This study confirmed that the surface temperature of the specimen, which was calculated using the convective heat transfer coefficient, surface emissivity and flame heat flux on the wood specimen by a repulsive particle swarm optimization algorithm, was consistent with the measured temperature. Considering the measurement errors in the surface temperature of the specimen, the applicability of the optimization method considered in this study was evaluated.
NASA Astrophysics Data System (ADS)
Ren, Qianyu; Li, Junhong; Hong, Yingping; Jia, Pinggang; Xiong, Jijun
2017-09-01
A new demodulation algorithm for the fiber-optic Fabry-Perot cavity length based on the phase generated carrier (PGC) technique is proposed in this paper, which can be applied in high-temperature pressure sensors. The new algorithm, based on the arc tangent function, operates on two orthogonal signals output by an optical system and is implemented on a field-programmable gate array (FPGA) to overcome the range limit of the original PGC arc tangent demodulation algorithm. Simulations and analysis were also carried out. According to the analysis of demodulation speed and precision, the simulations with different numbers of sampling points, and measurement results from the pressure sensor, the arc tangent demodulation method performs well: it processes a single data point at 1 MHz with less than 1% error, showing practical feasibility for fiber-optic Fabry-Perot cavity length demodulation in the Fabry-Perot high-temperature pressure sensor.
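The range limit mentioned above comes from the single-argument arctangent, which only resolves phase within a half-cycle; a two-argument atan2 on the quadrature pair, followed by unwrapping, recovers an unbounded phase (to which the cavity length is proportional). A generic PGC-style sketch, not the authors' FPGA implementation:

```python
import math

def pgc_arctan_demod(i_sig, q_sig):
    """Recover phase from quadrature signals I = A*cos(phi),
    Q = A*sin(phi) via atan2 plus unwrapping, so phi is not
    limited to a single arctangent branch."""
    phase = [math.atan2(q_sig[0], i_sig[0])]
    for i, q in zip(i_sig[1:], q_sig[1:]):
        p = math.atan2(q, i)                       # wrapped to (-pi, pi]
        # Unwrap: keep successive samples within pi of each other.
        k = round((phase[-1] - p) / (2.0 * math.pi))
        phase.append(p + 2.0 * math.pi * k)
    return phase
```

Unwrapping assumes the phase changes by less than pi between samples, which fixes the minimum sampling rate relative to the fastest cavity-length change.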
Elements de conception d'un systeme geothermique hybride par optimisation financiere
NASA Astrophysics Data System (ADS)
Henault, Benjamin
The choice of design parameters for a hybrid geothermal system is usually based on current practices or questionable assumptions. In fact, the main purpose of a hybrid geothermal system is to maximize the energy savings associated with heating and cooling requirements while minimizing the costs of operation and installation. This thesis presents a strategy to maximize the net present value of a hybrid geothermal system. This objective is expressed by a series of equations that lead to a global objective function. The algorithm converges iteratively to an optimal solution by using an optimization method: the conjugate gradient combined with a combinatorial method. The objective function presented here makes use of a simulation algorithm for predicting the fluid temperature of a hybrid geothermal system on an hourly basis. Thus, the optimization method iteratively selects six variables, of continuous and integer type, affecting project costs and energy savings. These variables are the limit temperature at the entry of the heat pumps (geothermal side), the number of heat pumps, the number of geothermal wells, and the distances in X and Y between the geothermal wells. Generally, these variables have a direct impact on the cost of the installation, the entering water temperature at the heat pumps, the cost of equipment, the thermal interference between boreholes, the total capacity of the geothermal system, the system performance, etc. On the other hand, the arrangement of geothermal wells is variable and is often irregular, depending on the number of boreholes selected by the algorithm. Removal or addition of one or more boreholes is guided by a predefined order dictated by the designer. This feature of irregular arrangement represents an innovation in the field and is necessary for the operation of the algorithm. Indeed, it ensures continuity with respect to the number of boreholes, allowing the use of the conjugate gradient method. 
The proposed method provides as outputs the net present value of the optimal solution, the position of the vertical boreholes, the number of installed heat pumps, the limits of entering water temperature at the heat pumps, and the energy consumption of the hybrid geothermal system. To demonstrate the added value of this design method, two case studies are analyzed: a commercial building and a residential one. The two studies lead to the following conclusions: the net present value of hybrid geothermal systems can be significantly improved by the choice of the right specifications; the economic value of a geothermal project is strongly influenced by the number of heat pumps and the number of geothermal wells or the temperature limit in heating mode; the choice of design parameters should always be driven by an objective function and not by the designer alone; peak demand charges favor hybrid geothermal systems with a higher capacity. Then, in order to validate its operation, this new design method is compared to the standard sizing method in common use. When the hybrid geothermal system is designed according to the standard sizing method to meet 70% of the peak heating load, the net present value over 20 years for the residential project is negative, at -61,500, while it is 43,700 for the commercial hybrid geothermal system. Using the new design method presented in this thesis, the net present values of the projects are 162,000 and 179,000 respectively. The use of this algorithm is beneficial because it significantly increases the net present value of the projects. The research presented in this thesis makes it possible to optimize the financial performance of hybrid geothermal systems. The proposed method will allow industry stakeholders to increase the profitability of their projects associated with low-temperature geothermal energy.
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Signorini, Sergio
2002-01-01
Sensitivity analyses of sea-air CO2 flux to gas transfer algorithms, climatological wind speeds, sea surface temperature (SST) and salinity (SSS) were conducted for the global oceans and selected regional domains. Large uncertainties in the global sea-air flux estimates are identified due to different gas transfer algorithms, global climatological wind speeds, and seasonal SST and SSS data. The global sea-air flux ranges from -0.57 to -2.27 Gt/yr, depending on the combination of gas transfer algorithms and global climatological wind speeds used. Different combinations of SST and SSS global fields resulted in changes as large as 35% in the global sea-air flux. An error as small as plus or minus 0.2 in SSS translates into a plus or minus 43% deviation in the mean global CO2 flux. This result emphasizes the need for highly accurate satellite SSS observations for the development of remote sensing sea-air flux algorithms.
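The wind-speed sensitivity reported above can be made concrete with a bulk-flux sketch: in the widely used quadratic wind-speed parameterization, the transfer velocity scales with U squared, so wind-climatology differences enter the flux quadratically. The 0.39 coefficient and the Schmidt-number scaling below are the commonly quoted Wanninkhof-type long-term form, assumed for illustration and not necessarily the algorithms compared in this study.

```python
def gas_transfer_velocity(u10, schmidt, coeff=0.39):
    """Quadratic wind-speed gas transfer velocity (cm/hr style):
    k = coeff * U10**2 * (Sc/660)**-0.5."""
    return coeff * u10 ** 2 * (schmidt / 660.0) ** -0.5

def co2_flux(u10, schmidt, solubility, dpco2):
    """Bulk flux F = k * s * (pCO2_water - pCO2_air)."""
    return gas_transfer_velocity(u10, schmidt) * solubility * dpco2
```

Doubling the climatological wind quadruples the computed flux for fixed pCO2 difference — the core of the sensitivity the abstract quantifies.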
Solar Irradiance from GOES Albedo performance in a Hydrologic Model Simulation of Snowmelt Runoff
NASA Astrophysics Data System (ADS)
Sumargo, E.; Cayan, D. R.; McGurk, B. J.
2015-12-01
In many hydrologic modeling applications, solar radiation has been parameterized using commonly available measures, such as the daily temperature range, because in situ solar radiation measurement networks are scarce. However, these parameterized estimates often carry significant biases. Here we test hourly solar irradiance derived from the Geostationary Operational Environmental Satellite (GOES) visible albedo product, using several established algorithms. Focusing on the Sierra Nevada and White Mountains in California, we compared the GOES irradiance and that from a traditional temperature-based algorithm with incoming irradiance from pyranometers at 19 stations. The GOES-based estimates yielded a 21-27% reduction in root-mean-squared error (averaged over the 19 sites). The derived irradiance is then prescribed as an input to the Precipitation-Runoff Modeling System (PRMS). We constrain our experiment to the Tuolumne River watershed and focus our attention on the winters and springs of 1996-2014. A root-mean-squared error reduction of 2-6% in daily inflow to Hetch Hetchy at the lower end of the Tuolumne catchment was achieved by incorporating the insolation estimates at only 8 out of 280 Hydrologic Response Units (HRUs) within the basin. Our ongoing work endeavors to apply satellite-derived irradiance at each individual HRU.
Momenbeik, Fariborz; Roosta, Mostafa; Nikoukar, Ali Akbar
2010-06-11
An environmentally benign and simple method has been proposed for separation and determination of fat-soluble vitamins using isocratic microemulsion liquid chromatography. Parameters affecting the separation selectivity and efficiency, including surfactant concentration, percent of cosurfactant (1-butanol), percent of organic oily solvent (diethyl ether), temperature, and pH, were optimized simultaneously using a genetic algorithm method. A new software package, MLR-GA, was developed for this purpose. The results indicated that 73.6 mM sodium dodecyl sulfate, 13.64% (v/v) 1-butanol, 0.48% (v/v) diethyl ether, a column temperature of 32.5 °C and a 0.02 M phosphate buffer of pH 6.99 are the best conditions for separation of fat-soluble vitamins. At the optimized conditions, the calibration plots for the vitamins were obtained and detection limits (1.06-3.69 µg/mL), accuracy (recoveries > 94.3%), precision (RSD < 3.96%) and linearity (0.01-10 mg/mL) were estimated. Finally, the amounts of vitamins in a multivitamin syrup and a sample of fish oil capsules were determined. The results showed good agreement with those reported by the manufacturers. Copyright 2010 Elsevier B.V. All rights reserved.
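A genetic-algorithm search over bounded continuous conditions (concentration, temperature, pH, ...) can be sketched generically. This is not the MLR-GA package; the operators (tournament selection, blend crossover, clipped Gaussian mutation) and every setting are illustrative assumptions.

```python
import random

def genetic_optimize(fitness, bounds, pop=40, gens=60, mut=0.2, seed=7):
    """Minimal real-coded GA: elitism, 3-way tournament selection,
    blend crossover, Gaussian mutation clipped to the bounds."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    P = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=fitness, reverse=True)  # best first
        nxt = scored[:2]                               # elitism
        while len(nxt) < pop:
            a, b = (max(rng.sample(scored, 3), key=fitness)
                    for _ in range(2))                 # tournaments
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]
            if rng.random() < mut:                     # mutate one gene
                j = rng.randrange(len(child))
                child[j] += rng.gauss(0.0, 0.1 * (hi[j] - lo[j]))
                child[j] = min(hi[j], max(lo[j], child[j]))
            nxt.append(child)
        P = nxt
    return max(P, key=fitness)
```

In the chromatographic setting the fitness would be a chromatographic response function built from measured resolution and run time rather than the analytic toy used in the test.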
Multi channel thermal hydraulic analysis of gas cooled fast reactor using genetic algorithm
NASA Astrophysics Data System (ADS)
Drajat, R. Z.; Su'ud, Z.; Soewono, E.; Gunawan, A. Y.
2012-05-01
Three analyses are involved in the design process of a nuclear reactor: neutronic analysis, thermal hydraulic analysis, and thermodynamic analysis. The focus in this article is the thermal hydraulic analysis, which has a very important role in terms of system efficiency and the selection of the optimal design. This analysis is performed for a Gas Cooled Fast Reactor (GFR) using helium (He) as coolant. The heat from nuclear fission reactions in the reactor is distributed through conduction in the fuel elements. The heat is then transferred by convection to the fluid flowing in the cooling channels. Temperature changes that occur in the coolant channels cause a pressure drop at the top of the reactor core. The governing equations in each channel consist of the mass, momentum, and energy balances together with the ideal gas equation. The problem is reduced to finding flow rates in each channel such that the pressure drops at the top of the reactor core are all equal. The problem is solved numerically with the genetic algorithm method. Flow rates and the temperature distribution in each channel are obtained.
NASA Astrophysics Data System (ADS)
Alavi Fazel, S. Ali
2017-09-01
A new optimized model that can predict the heat transfer in nucleate boiling at the isolated bubble regime is proposed for pool boiling on a horizontal rod heater. This model is developed based on the results of direct observations of the physical boiling phenomena. Boiling heat flux, wall temperature, bubble departure diameter, bubble generation frequency and bubble nucleation site density have been experimentally measured. Water and ethanol have been used as two different boiling fluids. Heating surfaces were made of several metals with various degrees of roughness. The model considers various mechanisms such as latent heat transfer due to micro-layer evaporation, transient conduction due to thermal boundary layer reformation, natural convection, heat transfer due to sliding bubbles, and bubble super-heating. The fractional contributions of the individual heat transfer mechanisms have been calculated by a genetic algorithm. The results show that at wall temperature differences of more than about 3 K, the fractional contributions rank, from highest to lowest: bubble sliding transient conduction, non-sliding transient conduction, micro-layer evaporation, natural convection, radial forced convection, and bubble super-heating. The performance of the new optimized model has been verified by comparison with the existing experimental data.
NASA Astrophysics Data System (ADS)
Rinne, J.; Tuittila, E. S.; Peltola, O.; Li, X.; Raivonen, M.; Alekseychik, P.; Haapanala, S.; Pihlatie, M.; Aurela, M.; Mammarella, I.; Vesala, T.
2017-12-01
Models for calculating methane emission from wetland ecosystems typically relate the methane emission to carbon dioxide assimilation. Other parameters that control emission in these models are, e.g., peat temperature and water table position. Many of these relations are derived from spatial variation between chamber measurements by a space-for-time approach. Continuous longer-term ecosystem-scale methane emission measurements by the eddy covariance method provide independent data to assess the validity of the relations derived by the space-for-time approach. We have analyzed an eleven-year methane flux dataset, measured at a boreal fen, together with data on environmental parameters and carbon dioxide exchange, to assess the relations to typical model drivers. The data were obtained by the eddy covariance method at the Siikaneva mire complex, Southern Finland, during 2005-2015. The methane emission showed clear seasonal cycles, with the strongest correlation with peat temperature at 35 cm depth. The temperature relation was exponential throughout the whole peat temperature range of 0-16°C. The methane emission normalized to remove temperature dependence showed a non-monotonic relation to water table position and a positive correlation with gross primary production (GPP). However, inclusion of these as explanatory variables improved the algorithm-measurement correlation only slightly, with r2=0.74 for the exponential temperature-dependent algorithm, r2=0.76 for the temperature - water table algorithm, and r2=0.79 for the temperature - GPP algorithm. The methane emission lagged behind net ecosystem exchange (NEE) and GPP by two to three weeks. Annual methane emission ranged from 8.3 to 14 gC m-2, and was 20 % of NEE and 2.8 % of GPP. The inter-annual variation of methane emission was of similar magnitude as that of GPP and ecosystem respiration (Reco), but much smaller than that of NEE. 
The interannual variability of June-September average methane emission correlated significantly with that of GPP, indicating a close link between these two processes in boreal fen ecosystems.
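The exponential peat-temperature relation reported above is straightforward to fit by log-linear least squares; the helper below also converts the fitted exponent into the ecologists' Q10 factor. The coefficients in the example are invented, not the Siikaneva values.

```python
import math

def fit_exponential(temps, fluxes):
    """Log-linear least squares for F = a * exp(b * T):
    regress ln(F) on T, then back-transform the intercept."""
    n = len(temps)
    ly = [math.log(f) for f in fluxes]
    mx = sum(temps) / n
    my = sum(ly) / n
    b = (sum((t - mx) * (y - my) for t, y in zip(temps, ly))
         / sum((t - mx) ** 2 for t in temps))
    a = math.exp(my - b * mx)
    return a, b

def q10(b):
    """Emission multiplier per 10-degree warming implied by b."""
    return math.exp(10.0 * b)
```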
NASA Astrophysics Data System (ADS)
Li, Shuai; Wang, Yiping; Wang, Tao; Yang, Xue; Deng, Yadong; Su, Chuqi
2017-05-01
Thermoelectric generators (TEGs) have become a topic of interest for vehicle exhaust energy recovery. Electrical power generation is deeply influenced by the temperature differences, temperature uniformity and topological structures of TEGs. When dimpled surfaces are adopted in heat exchangers, the heat transfer rates can be augmented with a minimal pressure drop. However, the temperature distribution shows a large gradient along the flow direction, which has adverse effects on the power generation. In the current study, the heat exchanger performance was studied in a computational fluid dynamics (CFD) model. The dimple depth, dimple print diameter, and channel height were chosen as design variables. The objective function was defined as a combination of average temperature, temperature uniformity and pressure loss. The optimal Latin hypercube method was used to determine the experiment points as a design-of-experiments method in order to analyze the sensitivity of the design variables. A Kriging surrogate model was built and verified against the database resulting from the CFD simulation. A multi-island genetic algorithm was used to optimize the structure of the heat exchanger based on the surrogate model. The results showed that the average temperature of the heat exchanger was most sensitive to the dimple depth. The pressure loss and temperature uniformity were most sensitive to the channel rear height h2. With an optimal design of the channel structure, the temperature uniformity was greatly improved compared with the initial exchanger, although the additional pressure loss also increased.
Improving the Numerical Stability of Fast Matrix Multiplication
Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...
2016-10-04
Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
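For reference, Strassen's original algorithm — the baseline fast method discussed above — trades 8 recursive block products for 7 at the cost of extra additions, which is where the slightly weaker error bounds come from. A plain-list sketch for power-of-two sizes:

```python
def strassen(A, B):
    """Strassen multiply for square matrices whose size is a power of 2:
    7 recursive half-size products instead of the classical 8."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split into 2x2 grid of half-size blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = quad(A)
    e, f, g, hh = quad(B)
    p1 = strassen(a, sub(f, hh))
    p2 = strassen(add(a, b), hh)
    p3 = strassen(add(c, d), e)
    p4 = strassen(d, sub(g, e))
    p5 = strassen(add(a, d), add(e, hh))
    p6 = strassen(sub(b, d), add(g, hh))
    p7 = strassen(sub(a, c), add(e, f))
    # C11 = p5+p4-p2+p6, C12 = p1+p2, C21 = p3+p4, C22 = p5+p1-p3-p7
    top = [l + r for l, r in
           zip(add(sub(add(p5, p4), p2), p6), add(p1, p2))]
    bot = [l + r for l, r in
           zip(add(p3, p4), sub(sub(add(p5, p1), p3), p7))]
    return top + bot
```

The extra additions mix block magnitudes, which is exactly the effect the diagonal-scaling techniques in the paper aim to tame.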
NASA Astrophysics Data System (ADS)
Passarelli, G.; De Filippis, G.; Cataudella, V.; Lucignano, P.
2018-02-01
We investigate the quantum annealing of the ferromagnetic p -spin model in a dissipative environment (p =5 and p =7 ). This model, in the large-p limit, codifies Grover's algorithm for searching in an unsorted database [L. K. Grover, Proceedings of the 28th Annual ACM Symposium on Theory of Computing (ACM, New York, 1996), pp. 212-219]. The dissipative environment is described by a phonon bath in thermal equilibrium at finite temperature. The dynamics is studied in the framework of a Lindblad master equation for the reduced density matrix describing only the spins. Exploiting the symmetries of our model Hamiltonian, we can describe many spins and extrapolate expected trends for large N and p . While at weak system-bath coupling the dissipative environment has detrimental effects on the annealing results, we show that in the intermediate-coupling regime, the phonon bath seems to speed up the annealing at low temperatures. This improvement in the performance is likely not due to thermal fluctuation but rather arises from a correlated spin-bath state and persists even at zero temperature. This result may pave the way to a new scenario in which, by appropriately engineering the system-bath coupling, one may optimize quantum annealing performances below either the purely quantum or the classical limit.
Charlton, Peter H; Bonnici, Timothy; Tarassenko, Lionel; Clifton, David A; Beale, Richard; Watkinson, Peter J
2016-04-01
Over 100 algorithms have been proposed to estimate respiratory rate (RR) from the electrocardiogram (ECG) and photoplethysmogram (PPG). As they have never been compared systematically, it is unclear which algorithm performs best. Our primary aim was to determine how closely algorithms agreed with a gold standard RR measure when operating under ideal conditions. Secondary aims were: (i) to compare algorithm performance with impedance pneumography (IP), the clinical standard for continuous respiratory rate measurement in spontaneously breathing patients; (ii) to compare algorithm performance when using ECG and PPG; and (iii) to provide a toolbox of algorithms and data to allow future researchers to conduct reproducible comparisons of algorithms. Algorithms were divided into three stages: extraction of respiratory signals, estimation of RR, and fusion of estimates. Several interchangeable techniques were implemented for each stage. Algorithms were assembled using all possible combinations of techniques, many of which were novel. After verification on simulated data, algorithms were tested on data from healthy participants. RRs derived from ECG, PPG and IP were compared to reference RRs obtained using a nasal-oral pressure sensor using the limits of agreement (LOA) technique. 314 algorithms were assessed. Of these, 270 could operate on either ECG or PPG, and 44 on only ECG. The best algorithm had 95% LOAs of -4.7 to 4.7 bpm and a bias of 0.0 bpm when using the ECG, and -5.1 to 7.2 bpm and 1.0 bpm when using PPG. IP had 95% LOAs of -5.6 to 5.2 bpm and a bias of -0.2 bpm. Four algorithms operating on ECG performed better than IP. All high-performing algorithms consisted of novel combinations of time domain RR estimation and modulation fusion techniques. Algorithms performed better when using ECG than PPG. The toolbox of algorithms and data used in this study are publicly available.
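The three-stage structure (extract a respiratory signal, estimate RR, fuse estimates) can be illustrated with one time-domain estimator and a median fusion stage. Both the crossing-counting rule and the median fusion are simplified illustrations, not any specific algorithm among the 314 assessed.

```python
def estimate_rr(signal, fs):
    """Time-domain RR estimate: count descending mean-crossings
    (one per breathing cycle) and convert to breaths per minute."""
    mean = sum(signal) / len(signal)
    crossings = sum(1 for a, b in zip(signal, signal[1:])
                    if a >= mean > b)
    duration_min = len(signal) / fs / 60.0
    return crossings / duration_min

def fuse_estimates(estimates):
    """Fusion stage: median of the individual RR estimates."""
    s = sorted(estimates)
    m = len(s) // 2
    return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])
```

In a full pipeline, several such estimators (e.g. from amplitude- and frequency-modulation signals of the ECG or PPG) would each produce an estimate, and the fusion stage would combine them.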
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
The numerical simulation of meso-, convective-, and microscale atmospheric flows requires the solution of the Euler or the Navier-Stokes equations. Nonhydrostatic weather prediction algorithms often solve the equations in terms of derived quantities such as Exner pressure and potential temperature (and are thus not conservative) and/or as perturbations to the hydrostatically balanced equilibrium state. This paper presents a well-balanced, conservative finite difference formulation for the Euler equations with a gravitational source term, where the governing equations are solved as conservation laws for mass, momentum, and energy. Preservation of the hydrostatic balance to machine precision by the discretized equations is essential because atmospheric phenomena are often small perturbations to this balance. The proposed algorithm uses the weighted essentially nonoscillatory and compact-reconstruction weighted essentially nonoscillatory schemes for spatial discretization that yields high-order accurate solutions for smooth flows and is essentially nonoscillatory across strong gradients; however, the well-balanced formulation may be used with other conservative finite difference methods. The performance of the algorithm is demonstrated on test problems as well as benchmark atmospheric flow problems, and the results are verified with those in the literature.
NASA Astrophysics Data System (ADS)
Vignesh, S.; Dinesh Babu, P.; Surya, G.; Dinesh, S.; Marimuthu, P.
2018-02-01
The ultimate goal of all production entities is to select the process parameters that yield maximum strength and minimum wear and friction. Friction and wear are serious problems in many industries; they are influenced by the working set of parameters, the oxidation characteristics, and the mechanism involved in wear formation. The experimental input parameters, such as sliding distance, applied load, and temperature, are used to find the optimized solution for the desired output responses: coefficient of friction, wear rate, and volume loss. The optimization is performed with the help of the Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II), an evolutionary algorithm. The regression equations obtained using Response Surface Methodology (RSM) are used in determining the optimum process parameters. Further, the results achieved through the desirability approach in RSM are compared with the optimized solution obtained through NSGA-II. The results show that the proposed evolutionary technique is more effective and faster than the desirability approach.
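The core of NSGA-II's ranking is non-dominated sorting of candidate solutions into successive Pareto fronts. A minimal peel-off sketch for minimization objectives — not Deb et al.'s efficient book-keeping implementation, and without the crowding-distance step that completes the full ranking:

```python
def dominates(p, q):
    """p dominates q (minimization): no worse in every objective
    and strictly better in at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def non_dominated_sort(points):
    """Peel off successive Pareto fronts; returns lists of indices."""
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

In the wear study, each point would be a vector of responses such as (coefficient of friction, wear rate, volume loss) predicted by the RSM regression equations.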
NASA Technical Reports Server (NTRS)
Meissner, Thomas; Wentz, Frank J.
2008-01-01
We have developed an algorithm that retrieves wind speed under rain using C-band and X-band channels of passive microwave satellite radiometers. The spectral difference of the brightness temperature signals due to wind or rain allows us to find channel combinations that are sufficiently sensitive to wind speed but little or not at all sensitive to rain. We have trained a statistical algorithm that applies under hurricane conditions and is able to measure wind speeds in hurricanes to an estimated accuracy of about 2 m/s. We have also developed a global algorithm that is less accurate but can be applied under all conditions. Its estimated accuracy is between 2 and 5 m/s, depending on wind speed and rain rate. We also extend the wind speed region in our model for the wind-induced sea surface emissivity from currently 20 m/s to 40 m/s. The data indicate that the signal starts to saturate above 30 m/s. Finally, we assess the performance of wind direction retrievals from polarimetric radiometers as a function of wind speed and rain rate.
An inverse method for estimation of the acoustic intensity in the focused ultrasound field
NASA Astrophysics Data System (ADS)
Yu, Ying; Shen, Guofeng; Chen, Yazhu
2017-03-01
Recently, a new method based on infrared (IR) imaging was introduced. Its authors (A. Shaw et al. and M. R. Myers et al.) established the relationship between absorber surface temperature and incident intensity while the absorber is irradiated by the transducer. Theoretically, a shorter irradiation time brings the estimation more in line with the actual results, but due to the influence of noise and the performance constraints of the IR camera, it is hard to identify the difference in temperature with a short heating time. An inverse technique is developed here to reconstruct the incident intensity distribution from the surface temperature measured with a shorter irradiation time. The algorithm is validated using surface temperature data generated numerically from a three-layer model, which was developed to calculate the acoustic field in the absorber, the absorbed acoustic energy during irradiation, and the consequent temperature elevation. To assess the effect of noisy data on the reconstructed intensity profile, different noise levels with zero mean were superposed on the exact data in the simulations. Simulation results demonstrate that the inversion technique can provide fairly reliable intensity estimation with satisfactory accuracy.
NASA Technical Reports Server (NTRS)
Santee, M.; Crisp, D.
1992-01-01
The temperature structure and dust loading of the Martian atmosphere are investigated using thermal emission spectra recorded in 1972 by the Mariner 9 infrared interferometer spectrometer (IRIS). The analysis focuses on a subset of data consisting of approximately 2400 spectra obtained near the end of the southern summer season (L(sub s) equal to 343 deg to 348 deg), after the global dust storm had largely abated and airborne dust amounts were subsiding to background values. Simultaneous retrieval of the vertical distribution of both atmospheric temperature and dust optical depth is accomplished through an iterative procedure which is performed on each individual spectrum. The atmospheric transmittances are calculated using a Voigt quasi-random band model, which includes absorption by CO2 and dust, but neglects the effects of multiple scattering. Vertical profiles of temperature and dust optical depth are obtained using modified algorithms. These profiles are used to construct global maps of temperature and dust optical depth as functions of latitude (+/- 90 deg), altitude (approximately 0-50 km), and local time of day.
An enhanced TIMESAT algorithm for estimating vegetation phenology metrics from MODIS data
Tan, B.; Morisette, J.T.; Wolfe, R.E.; Gao, F.; Ederer, G.A.; Nightingale, J.; Pedelty, J.A.
2011-01-01
An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates. © 2010 IEEE.
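The abstract does not give the exact form of the third derivative test, but one plausible reading is to place a candidate phenology date where the discrete third derivative of the smoothed index series peaks, i.e. where the curve's curvature changes fastest at the onset of greenup. A hedged sketch of that idea (not the TIMESAT implementation) on a synthetic logistic greenup curve:

```python
import math

# Illustrative sketch only: locate a candidate start-of-season date at the
# peak of the discrete third derivative of a smoothed vegetation-index series.

def third_derivative(y):
    """Central-difference third derivative of a uniformly sampled series."""
    return [(y[i + 2] - 2 * y[i + 1] + 2 * y[i - 1] - y[i - 2]) / 2.0
            for i in range(2, len(y) - 2)]

def start_of_season(vi):
    """Index of the maximum third derivative, mapped back to the input grid."""
    d3 = third_derivative(vi)
    return 2 + max(range(len(d3)), key=lambda i: d3[i])

# synthetic greenup curve: logistic rise centred on day index 10
vi = [1.0 / (1.0 + math.exp(-(t - 10) / 2.0)) for t in range(21)]
sos = start_of_season(vi)  # lands in the acceleration phase before the midpoint
```

On this synthetic curve the detected date falls several steps before the logistic midpoint, consistent with "greenup onset" rather than peak growth rate.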
An Enhanced TIMESAT Algorithm for Estimating Vegetation Phenology Metrics from MODIS Data
NASA Technical Reports Server (NTRS)
Tan, Bin; Morisette, Jeffrey T.; Wolfe, Robert E.; Gao, Feng; Ederer, Gregory A.; Nightingale, Joanne; Pedelty, Jeffrey A.
2012-01-01
An enhanced TIMESAT algorithm was developed for retrieving vegetation phenology metrics from 250 m and 500 m spatial resolution Moderate Resolution Imaging Spectroradiometer (MODIS) vegetation indexes (VI) over North America. MODIS VI data were pre-processed using snow-cover and land surface temperature data, and temporally smoothed with the enhanced TIMESAT algorithm. An objective third derivative test was applied to define key phenology dates and retrieve a set of phenology metrics. This algorithm has been applied to two MODIS VIs: Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI). In this paper, we describe the algorithm and use EVI as an example to compare three sets of TIMESAT algorithm/MODIS VI combinations: a) original TIMESAT algorithm with original MODIS VI, b) original TIMESAT algorithm with pre-processed MODIS VI, and c) enhanced TIMESAT and pre-processed MODIS VI. All retrievals were compared with ground phenology observations, some made available through the National Phenology Network. Our results show that for MODIS data in middle to high latitude regions, snow and land surface temperature information is critical in retrieving phenology metrics from satellite observations. The results also show that the enhanced TIMESAT algorithm can better accommodate growing season start and end dates that vary significantly from year to year. The TIMESAT algorithm improvements contribute to more spatial coverage and more accurate retrievals of the phenology metrics. Among three sets of TIMESAT/MODIS VI combinations, the start of the growing season metric predicted by the enhanced TIMESAT algorithm using pre-processed MODIS VIs has the best associations with ground observed vegetation greenup dates.
NASA Astrophysics Data System (ADS)
Rounds, S. A.; Buccola, N. L.
2014-12-01
The two-dimensional (longitudinal, vertical) water-quality model CE-QUAL-W2, version 3.7, was enhanced with new features to help dam operators and managers efficiently explore and optimize potential solutions for temperature management downstream of thermally stratified reservoirs. Such temperature management often is accomplished by blending releases from multiple dam outlets that access water of different temperatures at different depths in the reservoir. The original blending algorithm in this version of the model was limited to mixing releases from two outlets at a time, and few constraints could be imposed. The new enhanced blending algorithm allows the user to (1) specify a time-series of target release temperatures, (2) designate from 2 to 10 floating or fixed-elevation outlets for blending, (3) impose maximum head constraints as well as minimum and maximum flow constraints for any blended outlet, and (4) set a priority designation for each outlet that allows the model to choose which outlets to use and how to balance releases among them. The modified model was tested against a previously calibrated model of Detroit Lake on the North Santiam River in northwestern Oregon, and the results compared well. The enhanced model code is being used to evaluate operational and structural scenarios at multiple dam/reservoir systems in the Willamette River basin in Oregon, where downstream temperature management for endangered fish is a high priority for resource managers and dam operators. These updates to the CE-QUAL-W2 blending algorithm allow scenarios involving complicated dam operations and/or hypothetical outlet structures to be evaluated more efficiently with the model, with decreased need for multiple/iterative model runs or preprocessing of model inputs to fully characterize the operational constraints.
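The two-outlet case that the original CE-QUAL-W2 v3.7 algorithm handled reduces to a simple mixing balance: given the temperatures of the upper and lower outlets, choose the flow split whose blend matches the target release temperature, clamped to the feasible range. A minimal sketch under those assumptions (function and variable names are illustrative, not the model's internals):

```python
# Two-outlet blending sketch: pick the flow split q_upper/q_lower whose
# flow-weighted mixed temperature best matches a target, with the split
# clamped so each outlet's flow stays between 0 and q_total.

def blend_two_outlets(t_upper, t_lower, q_total, t_target):
    """Return (q_upper, q_lower) whose mix best matches t_target."""
    if t_upper == t_lower:
        frac = 0.5  # any split gives the same mixed temperature
    else:
        frac = (t_target - t_lower) / (t_upper - t_lower)
    frac = min(1.0, max(0.0, frac))  # flow constraints: 0 <= q <= q_total
    return frac * q_total, (1.0 - frac) * q_total
```

When the target is unreachable (warmer than the warm outlet or colder than the cold one), the clamp sends all flow through the nearest outlet; the enhanced algorithm generalizes this to up to 10 outlets with head constraints and priorities.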
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Transistor Level Circuit Experiments using Evolvable Hardware
NASA Technical Reports Server (NTRS)
Stoica, A.; Zebulum, R. S.; Keymeulen, D.; Ferguson, M. I.; Daud, Taher; Thakoor, A.
2005-01-01
The Jet Propulsion Laboratory (JPL) performs research in fault-tolerant, long-life, and space-survivable electronics for the National Aeronautics and Space Administration (NASA). With that focus, JPL has been involved in Evolvable Hardware (EHW) technology research for the past several years. We have advanced the technology not only by simulation and evolution experiments, but also by designing, fabricating, and evolving a variety of transistor-based analog and digital circuits at the chip level. EHW refers to self-configuration of electronic hardware by evolutionary/genetic search mechanisms, thereby maintaining existing functionality in the presence of degradations due to aging, temperature, and radiation. In addition, EHW has the capability to reconfigure itself for new functionality when required for mission changes or encountered opportunities. Evolution experiments are performed using a genetic algorithm running on a DSP as the reconfiguration mechanism, controlling the evolvable hardware mounted on a self-contained circuit board. Rapid reconfiguration allows convergence to circuit solutions on the order of seconds. The paper illustrates hardware evolution results for electronic circuits and their ability to perform at temperatures of 230 °C as well as under radiation doses of up to 250 kRad.
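The evolutionary loop described above follows the standard genetic-algorithm pattern: score a population of candidate configurations, keep the best, and breed the rest by crossover and mutation. A toy sketch with bitstring genomes and a stand-in fitness function (on the real system, fitness came from measuring the evolved circuit's response, which this sketch does not model):

```python
import random

# Toy GA sketch: elitist truncation selection, one-point crossover, and a
# single-bit mutation per child. The bitstring stands in for a hardware
# configuration; `fitness` stands in for a circuit measurement.

def evolve(fitness, n_bits=16, pop_size=20, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # keep the fitter half intact
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]             # one-point crossover
            child[rng.randrange(n_bits)] ^= 1     # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve(sum)  # maximize the number of 1-bits as a stand-in fitness
```

Because the fitter half survives unmutated, the best score never regresses, which is one reason such loops converge quickly when each evaluation is cheap.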
Short-term Temperature Prediction Using Adaptive Computing on Dynamic Scales
NASA Astrophysics Data System (ADS)
Hu, W.; Cervone, G.; Jha, S.; Balasubramanian, V.; Turilli, M.
2017-12-01
When predicting temperature, there are specific places and times when high-accuracy predictions are harder to obtain. For example, not all sub-regions in the domain require the same amount of computing resources to generate an accurate prediction. Plateau areas might require fewer computing resources than mountainous areas because of the steeper gradient of temperature change in the latter. However, it is difficult to estimate beforehand the optimal allocation of computational resources because several parameters besides orography play a role in determining the accuracy of the forecasts. The allocation of resources to perform simulations can become a bottleneck because it requires human intervention to stop jobs or start new ones. The goal of this project is to design and develop a dynamic approach to generating short-term temperature predictions that automatically determines the required computing resources and the geographic scales of the predictions based on the spatial and temporal uncertainties. The predictions and the prediction quality metrics are computed using the Analog Ensemble (AnEn) method applied to numerical weather prediction data, and parallelization on high performance computing systems is accomplished using Ensemble Toolkit, one component of the RADICAL-Cybertools family of tools. RADICAL-Cybertools decouple the science needs from the computational capabilities by building an intermediate layer to run general ensemble patterns, regardless of the science. In this research, we show how the Ensemble Toolkit allows generating high resolution temperature forecasts at different spatial and temporal resolutions. The AnEn algorithm is run using NAM analysis and forecast data for the continental United States for a period of 2 years. AnEn results show that the temperature forecasts perform well according to different probabilistic and deterministic statistical tests.
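The core Analog Ensemble idea is simple: for the current deterministic forecast, find the most similar past forecasts and use their verifying observations as ensemble members. A heavily simplified sketch (scalar forecasts and absolute-difference similarity; the operational method uses multivariate, time-windowed metrics):

```python
# Simplified Analog Ensemble (AnEn) sketch: rank historical forecasts by
# similarity to the current forecast and return the observations paired with
# the k closest analogs as the ensemble.

def analog_ensemble(past_forecasts, past_observations, current_forecast, k=3):
    """Return the k observations whose paired forecasts are closest to the current one."""
    ranked = sorted(range(len(past_forecasts)),
                    key=lambda i: abs(past_forecasts[i] - current_forecast))
    return [past_observations[i] for i in ranked[:k]]

fcst = [10.0, 12.0, 15.0, 11.0, 20.0]   # historical forecasts (illustrative)
obs  = [ 9.5, 12.5, 14.0, 11.5, 19.0]   # their verifying observations
members = analog_ensemble(fcst, obs, 11.2, k=3)
```

The spread of the returned members is what supplies the uncertainty estimate that, in the study above, drives the allocation of computing resources.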
NASA Astrophysics Data System (ADS)
Khan, F.; Pilz, J.; Spöck, G.
2017-12-01
Spatio-temporal dependence structures play a pivotal role in understanding the meteorological characteristics of a basin or sub-basin. This further affects the hydrological conditions and consequently will yield misleading results if these structures are not taken into account properly. In this study we modeled the spatial dependence structure between climate variables, including maximum temperature, minimum temperature, and precipitation, in the Monsoon-dominated region of Pakistan. For temperature, six, and for precipitation, four meteorological stations have been considered. For modelling the dependence structure between temperature and precipitation at multiple sites, we utilized C-Vine, D-Vine and Student t-copula models. For temperature, multivariate mixture normal distributions, and for precipitation, gamma distributions have been used as marginals under the copula models. A comparison was made between C-Vine, D-Vine and Student t-copula on observed and simulated spatial dependence structures to choose an appropriate model for the climate data. The results show that all copula models performed well; however, there are subtle differences in their performances. The copula models captured the patterns of spatial dependence structures between climate variables at multiple meteorological sites; however, the t-copula showed poor performance in reproducing the dependence structure with respect to magnitude. It was observed that important statistics of the observed data have been closely approximated, except for the maximum values of maximum temperature and the minimum values of minimum temperature. Probability density functions of simulated data closely follow the probability density functions of observational data for all variables. C- and D-Vines are better tools when it comes to modelling the dependence between variables; however, Student t-copulas compete closely for precipitation.
Keywords: Copula model, C-Vine, D-Vine, Spatial dependence structure, Monsoon dominated region of Pakistan, Mixture models, EM algorithm.
Event reconstruction for the CBM-RICH prototype beamtest data in 2014
NASA Astrophysics Data System (ADS)
Adamczewski-Musch, J.; Akishin, P.; Becker, K.-H.; Belogurov, S.; Bendarouach, J.; Boldyreva, N.; Deveaux, C.; Dobyrn, V.; Dürr, M.; Eschke, J.; Förtsch, J.; Heep, J.; Höhne, C.; Kampert, K.-H.; Kochenda, L.; Kopfer, J.; Kravtsov, P.; Kres, I.; Lebedev, S.; Lebedeva, E.; Leonova, E.; Linev, S.; Mahmoud, T.; Michel, J.; Miftakhov, N.; Niebur, W.; Ovcharenko, E.; Patel, V.; Pauly, C.; Pfeifer, D.; Querchfeld, S.; Rautenberg, J.; Reinecke, S.; Riabov, Y.; Roshchin, E.; Samsonov, V.; Schetinin, V.; Tarasenkova, O.; Traxler, M.; Ugur, C.; Vznuzdaev, E.; Vznuzdaev, M.
2017-12-01
The Compressed Baryonic Matter (CBM) experiment at the future FAIR facility will investigate the QCD phase diagram at high net baryon densities and moderate temperatures in A+A collisions from 2 to 11 AGeV (SIS100). Electron identification in CBM will be performed by a Ring Imaging Cherenkov (RICH) detector and Transition Radiation Detectors (TRD). A real-size prototype of the RICH detector was tested together with other CBM groups at the CERN PS/T9 beam line in 2014. For the first time the data format used the FLESnet protocol from CBM, delivering free-streaming data. The analysis was fully performed within the CBMROOT framework. In this contribution the data analysis and the event reconstruction methods used for the obtained data are discussed. Rings were reconstructed using an algorithm based on the Hough transform method, and their parameters were derived with high accuracy by circle and ellipse fitting procedures. We present results of the application of these algorithms; in particular, we compare results with and without wavelength-shifting (WLS) coating.
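After a Hough-transform search proposes ring candidates, their parameters are typically refined by a least-squares fit. A common algebraic choice for the circle case is the Kåsa fit, sketched below in pure Python (this is an illustrative standard technique, not the CBMROOT fitting code):

```python
# Algebraic (Kåsa) circle fit: rewrite (x-a)^2 + (y-b)^2 = r^2 as the linear
# model x^2 + y^2 = 2ax + 2by + c with c = r^2 - a^2 - b^2, then solve the
# 3x3 normal equations by Gaussian elimination.

def fit_circle(points):
    """Least-squares circle fit; returns (cx, cy, r)."""
    A, rhs = [[0.0] * 3 for _ in range(3)], [0.0] * 3
    for x, y in points:
        row, z = (2 * x, 2 * y, 1.0), x * x + y * y
        for i in range(3):
            rhs[i] += row[i] * z
            for j in range(3):
                A[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            rhs[r] -= f * rhs[col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
    sol = [0.0] * 3
    for r in (2, 1, 0):
        sol[r] = (rhs[r] - sum(A[r][j] * sol[j] for j in range(r + 1, 3))) / A[r][r]
    a, b, c = sol
    return a, b, (c + a * a + b * b) ** 0.5
```

The linearization makes the fit fast and non-iterative, which is why it pairs naturally with a Hough-based candidate search; an ellipse fit extends the same idea with more parameters.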
NASA Astrophysics Data System (ADS)
Ilhan, Z.; Wehner, W. P.; Schuster, E.; Boyer, M. D.; Gates, D. A.; Gerhardt, S.; Menard, J.
2015-11-01
Active control of the toroidal current density profile is crucial to achieve and maintain high-performance, MHD-stable plasma operation in NSTX-U. A first-principles-driven, control-oriented model describing the temporal evolution of the current profile has been proposed earlier by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. A feedforward + feedback control scheme for the regulation of the current profile is constructed by embedding the proposed nonlinear, physics-based model into the control design process. First, nonlinear optimization techniques are used to design feedforward actuator trajectories that steer the plasma to a desired operating state, with the objective of supporting the traditional trial-and-error experimental process of advanced scenario planning. Second, a feedback control algorithm to track a desired current profile evolution is developed with the goal of adding robustness to the overall control scheme. The effectiveness of the combined feedforward + feedback control algorithm for current profile regulation is tested in predictive simulations carried out in TRANSP. Supported by PPPL.
Monthly evaporation forecasting using artificial neural networks and support vector machines
NASA Astrophysics Data System (ADS)
Tezel, Gulay; Buyukyildiz, Meral
2016-04-01
Evaporation is one of the most important components of the hydrological cycle, but it is relatively difficult to estimate because of its complexity, as it can be influenced by numerous factors. Estimation of evaporation is important for the design of reservoirs, especially in arid and semi-arid areas. Artificial neural network methods and support vector machines (SVM) are frequently utilized to estimate evaporation and other hydrological variables. In this study, the usability of artificial neural networks (ANNs) (multilayer perceptron (MLP) and radial basis function network (RBFN)) and ɛ-support vector regression (ɛ-SVR) artificial intelligence methods was investigated for estimating monthly pan evaporation. For this aim, temperature, relative humidity, wind speed, and precipitation data for the period 1972 to 2005 from the Beysehir meteorology station were used as input variables, while pan evaporation values were used as output. The Romanenko and Meyer methods were also considered for comparison. The results were compared with observed class A pan evaporation data. In the MLP method, four different training algorithms were used: gradient descent with momentum and adaptive learning rule backpropagation (GDX), Levenberg-Marquardt (LVM), scaled conjugate gradient (SCG), and resilient backpropagation (RBP). The models were designed via 10-fold cross-validation (CV); algorithm performance was assessed via mean absolute error (MAE), root mean square error (RMSE), and the coefficient of determination (R²). According to the performance criteria, the ANN algorithms and ɛ-SVR had similar results. The ANN and ɛ-SVR methods were found to perform better than the Romanenko and Meyer methods. Consequently, the best performance on the test data was obtained using SCG(4,2,2,1), with R² = 0.905.
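The three performance criteria named above have standard definitions, sketched here for paired lists of observed and predicted monthly pan evaporation values:

```python
# Standard definitions of the three criteria used above: mean absolute error,
# root mean square error, and the coefficient of determination.

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def rmse(obs, pred):
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

def r2(obs, pred):
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

Note that R² = 1 − SS_res/SS_tot can be negative for a model worse than predicting the observed mean, which is why it is usually reported alongside the two error magnitudes.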
Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model
NASA Astrophysics Data System (ADS)
Kim, Sangjo; Kim, Kuisoon; Son, Changmin
2018-04-01
An adaptation method was proposed to improve the modeling accuracy of the overall and local performance of a gas turbine engine. The adaptation method was divided into two steps. First, overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps; second, local performance parameters such as the temperature at component intersections and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of local adaptation factors were generated based on the difference between the first adapted engine model and performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on the performance test data in the sea-level static condition. In flight conditions at 20,000 ft and 0.9 Mach number, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2 to 0.4%.
Revisiting the choice of the driving temperature for eddy covariance CO2 flux partitioning
Wohlfahrt, Georg; Galvagno, Marta
2017-01-01
So-called CO2 flux partitioning algorithms are widely used to partition the net ecosystem CO2 exchange into its two component fluxes, gross primary productivity and ecosystem respiration. Common CO2 flux partitioning algorithms conceptualize ecosystem respiration as originating from a single source, requiring the choice of a corresponding driving temperature. Using a conceptual dual-source respiration model, consisting of an above- and a below-ground respiration source each driven by a corresponding temperature, we demonstrate that the typical phase shift between air and soil temperature gives rise to a hysteresis relationship between ecosystem respiration and temperature. The hysteresis proceeds in a clockwise fashion if soil temperature is used to drive ecosystem respiration, while a counter-clockwise response is observed when ecosystem respiration is related to air temperature. As a consequence, nighttime ecosystem respiration is smaller than daytime ecosystem respiration when referenced to soil temperature, while the reverse is true for air temperature. We confirm these qualitative modelling results using measurements of day and night ecosystem respiration made with opaque chambers in a short-statured mountain grassland. Inferring daytime from nighttime ecosystem respiration or vice versa, as attempted by CO2 flux partitioning algorithms using a single-source respiration model, is thus an oversimplification resulting in biased estimates of ecosystem respiration. We discuss the likely magnitude of the bias and options for minimizing it, and conclude by emphasizing that the systematic uncertainty of gross primary productivity and ecosystem respiration inferred through CO2 flux partitioning needs to be better quantified and reported. PMID:28439145
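The dual-source argument can be made concrete with a toy model: give each source a standard Q10 temperature response, drive one with air temperature and the other with a lagged, damped soil temperature, and the total respiration traces a loop rather than a single curve when plotted against either temperature alone. All constants below are illustrative assumptions, not the paper's parameterization:

```python
import math

# Conceptual dual-source sketch: above-ground respiration follows air
# temperature, below-ground respiration follows a lagged soil temperature,
# so total respiration vs. air temperature shows hysteresis.

def q10_resp(r_ref, q10, temp, t_ref=10.0):
    """Q10 respiration response referenced to t_ref (illustrative constants)."""
    return r_ref * q10 ** ((temp - t_ref) / 10.0)

def daily_cycle(hours=24, lag_h=6.0):
    """(air temperature, total respiration) over a synthetic diurnal cycle."""
    out = []
    for h in range(hours):
        t_air = 15.0 + 8.0 * math.sin(2 * math.pi * h / 24.0)
        t_soil = 15.0 + 4.0 * math.sin(2 * math.pi * (h - lag_h) / 24.0)
        out.append((t_air, q10_resp(1.0, 2.0, t_air) + q10_resp(1.5, 2.0, t_soil)))
    return out
```

Hours 3 and 9 of the synthetic cycle have the same air temperature but different soil temperatures, so the two respiration values differ, which is exactly the hysteresis the abstract describes.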
Xu, Hang; Su, Shi; Tang, Wuji; Wei, Meng; Wang, Tao; Wang, Dongjin; Ge, Weihong
2015-09-01
A large number of warfarin pharmacogenetics algorithms have been published. Our research aimed to evaluate the performance of selected pharmacogenetic algorithms in patients undergoing heart valve replacement and heart valvuloplasty during the initial and stable phases of anticoagulation treatment. Ten pharmacogenetic algorithms were selected by searching PubMed. We compared the performance of the selected algorithms in a cohort of 193 patients during the initial and stable phases of anticoagulation therapy. The predicted dose was compared to the therapeutic dose using the percentage of predicted doses that fall within 20% of the actual dose (percentage within 20%) and the mean absolute error (MAE). The average warfarin dose for patients was 3.05 ± 1.23 mg/day for initial treatment and 3.45 ± 1.18 mg/day for stable treatment. The percentages of the predicted dose within 20% of the therapeutic dose were 44.0 ± 8.8% and 44.6 ± 9.7% for the initial and stable phases, respectively. The MAEs of the selected algorithms were 0.85 ± 0.18 mg/day and 0.93 ± 0.19 mg/day, respectively. All algorithms had better performance in the ideal group than in the low-dose and high-dose groups. The only exception is the Wadelius et al. algorithm, which had better performance in the high-dose group. The algorithms had similar performance except for the Wadelius et al. and Miao et al. algorithms, which had poor accuracy in our study cohort. The Gage et al. algorithm had better performance in both the initial and stable phases of treatment. Algorithms had relatively higher accuracy in the >50 years group of patients in the stable phase. Copyright © 2015 Elsevier Ltd. All rights reserved.
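The two evaluation measures used above are straightforward to state precisely; a short sketch, assuming paired lists of actual therapeutic and algorithm-predicted daily doses:

```python
# The study's two accuracy measures: mean absolute error, and the percentage
# of predictions falling within 20% of the actual therapeutic dose.

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def pct_within_20(actual, predicted):
    hits = sum(1 for a, p in zip(actual, predicted) if abs(p - a) <= 0.2 * a)
    return 100.0 * hits / len(actual)
```

The within-20% criterion is relative to each patient's actual dose, so a 0.6 mg/day error counts as a hit for a 3 mg/day patient but a miss for a 2 mg/day patient.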
Control Algorithms For Liquid-Cooled Garments
NASA Technical Reports Server (NTRS)
Drew, B.; Harner, K.; Hodgson, E.; Homa, J.; Jennings, D.; Yanosy, J.
1988-01-01
Three algorithms developed for control of cooling in protective garments. Metabolic rate inferred from temperatures of cooling liquid outlet and inlet, suitably filtered to account for thermal lag of human body. Temperature at inlet adjusted to value giving maximum comfort at inferred metabolic rate. Applicable to space suits, used for automatic control of cooling in suits worn by workers in radioactive, polluted, or otherwise hazardous environments. More effective than manual control, subject to frequent, overcompensated adjustments as level of activity varies.
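The first algorithm described can be sketched as a small control loop: estimate the heat picked up by the coolant from the inlet/outlet temperature difference, low-pass filter it to mimic the body's thermal lag, and map the filtered load to a comfort inlet temperature. All constants below (flow rate, filter gain, comfort schedule) are illustrative assumptions, not the published values:

```python
# Hedged sketch of metabolic-rate-based inlet control for a liquid-cooled
# garment. Heat pickup Q = m_dot * cp * (T_out - T_in) is filtered with a
# first-order low-pass, then mapped to an inlet-temperature setpoint.

def make_controller(flow_kg_s=0.03, cp=4186.0, alpha=0.1):
    state = {"q": 0.0}  # filtered heat load, W
    def step(t_inlet, t_outlet):
        q_raw = flow_kg_s * cp * (t_outlet - t_inlet)      # instantaneous pickup, W
        state["q"] += alpha * (q_raw - state["q"])         # first-order filter
        # illustrative comfort schedule: cooler inlet at higher metabolic rate
        return max(7.0, 25.0 - 0.02 * state["q"])
    return step
```

Run repeatedly at a fixed 5 °C coolant temperature rise, the setpoint ramps down gradually instead of overcompensating on each sample, which is the behavior the abstract contrasts with manual control.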
NASA Astrophysics Data System (ADS)
Závody, A. M.; Mutlow, C. T.; Llewellyn-Jones, D. T.
1995-01-01
The measurements made by the along-track scanning radiometer are now converted routinely into sea surface temperature (SST). The details of the atmospheric model which had been used for deriving the SST algorithms are given, together with tables of the coefficients in the algorithms for the different SST products. The accuracy of the retrieval under normal conditions and the effect of errors in the model on the retrieved SST are briefly discussed.
NASA Astrophysics Data System (ADS)
Hsiao, Y. R.; Tsai, C.
2017-12-01
As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal symptoms (e.g. heart disease, lung cancer, asthma, etc.), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, with four meteorological influencing factors (temperature, relative humidity, precipitation, and wind speed), based on the Noise-Assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA), and the Time-Dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is performed to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method using a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation in a more accurate way. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
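Stripped of the EMD machinery, the TDIC idea is a Pearson correlation computed in a sliding window, yielding a localized correlation series instead of one global coefficient. A simplified sketch (the real method applies this to matched IMF pairs from NAMEMD and adapts the window size to the local period):

```python
# Simplified time-dependent correlation sketch: Pearson correlation over a
# sliding window, producing one localized coefficient per window position.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def sliding_correlation(x, y, win):
    return [pearson(x[i:i + win], y[i:i + win]) for i in range(len(x) - win + 1)]
```

Sweeping the window size, as TDIC does, then shows at which time scales and at which times two variables (say, PM2.5 and wind speed) are actually coupled.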
Assessment of the Utility of the Advanced Himawari Imager to Detect Active Fire Over Australia
NASA Astrophysics Data System (ADS)
Hally, B.; Wallace, L.; Reinke, K.; Jones, S.
2016-06-01
Wildfire detection and attribution is an issue of importance due to the socio-economic impact of fires in Australia. Early detection of fires allows emergency response agencies to make informed decisions in order to minimise loss of life and protect strategic resources in threatened areas. Until recently, the ability of land management authorities to accurately assess fire through satellite observations of Australia was limited to those made by polar orbiting satellites. The launch of the Japan Meteorological Agency (JMA) Himawari-8 satellite, with the 16-band Advanced Himawari Imager (AHI-8) onboard, in October 2014 presents a significant opportunity to improve the timeliness of satellite fire detection across Australia. The near real-time availability of images, at a ten minute frequency, may also provide contextual information (background temperature) leading to improvements in the assessment of fire characteristics. This paper investigates the application of the high frequency observation data supplied by this sensor for fire detection and attribution. As AHI-8 is a new sensor, we have performed an analysis of the noise characteristics of the two spectral bands used for fire attribution across various land use types which occur in Australia. Using this information we have adapted existing algorithms, based upon least squares error minimisation and Kalman filtering, which utilise high frequency observations of surface temperature to detect and attribute fire. The fire detection and attribution information provided by these algorithms is then compared to existing satellite-based fire products as well as in-situ information provided by land management agencies. These comparisons were made Australia-wide for an entire fire season, including many significant fire events (wildfires and prescribed burns).
Preliminary detection results suggest that these methods for fire detection perform comparably to existing fire products and fire incident reporting from relevant fire authorities but with the advantage of being near-real time. Issues remain for detection due to cloud and smoke obscuration, along with validation of the attribution of fire characteristics using these algorithms.
Nonlinear dynamics of homeothermic temperature control in skunk cabbage, Symplocarpus foetidus
NASA Astrophysics Data System (ADS)
Ito, Takanori; Ito, Kikukatsu
2005-11-01
Certain primitive plants undergo orchestrated temperature control during flowering. Skunk cabbage, Symplocarpus foetidus, has been demonstrated to maintain an internal temperature of around 20 °C even when the ambient temperature drops below freezing. However, it is not clear whether a unique algorithm controls the homeothermic behavior of S. foetidus, or whether such an algorithm might exhibit linear or nonlinear thermoregulatory dynamics. Here we report the underlying dynamics of temperature control in S. foetidus using nonlinear forecasting, attractor and correlation dimension analyses. It was shown that thermoregulation in S. foetidus was governed by low-dimensional chaotic dynamics, the geometry of which showed a strange attractor named the “Zazen attractor.” Our data suggest that the chaotic thermoregulation in S. foetidus is inherent and that it is an adaptive response to the natural environment.
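Correlation-dimension analyses of the kind mentioned start from a correlation sum over a time-delay embedding. A sketch under stated assumptions: the embedding parameters are arbitrary and a logistic-map series stands in for the thermoregulation signal, which is not the paper's data.

```python
# Sketch of a correlation-sum estimate (Grassberger-Procaccia style),
# the building block of correlation-dimension analysis.

def embed(series, dim, lag):
    """Time-delay embedding of a scalar series into dim-vectors."""
    n = len(series) - (dim - 1) * lag
    return [[series[i + j * lag] for j in range(dim)] for i in range(n)]

def correlation_sum(points, r):
    """Fraction of point pairs closer than r (Euclidean distance)."""
    n, count = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 < r * r:
                count += 1
    return 2.0 * count / (n * (n - 1))

# Chaotic logistic-map series as a stand-in for the measured signal
x, series = 0.4, []
for _ in range(300):
    x = 3.9 * x * (1.0 - x)
    series.append(x)

pts = embed(series, dim=3, lag=1)
c_small = correlation_sum(pts, 0.05)
c_large = correlation_sum(pts, 0.5)
```

The slope of log C(r) versus log r over a scaling region estimates the correlation dimension; a low, non-integer value is the signature of low-dimensional chaos reported in the abstract.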
Development of anomaly detection models for deep subsurface monitoring
NASA Astrophysics Data System (ADS)
Sun, A. Y.
2017-12-01
Deep subsurface repositories are used for waste disposal and carbon sequestration. Monitoring deep subsurface repositories for potential anomalies is challenging, not only because the number of sensor networks and the quality of data are often limited, but also because of the lack of labeled data needed to train and validate machine learning (ML) algorithms. Although physical simulation models may be applied to predict anomalies (or, for that matter, the system's nominal state), the accuracy of such predictions may be limited by inherent conceptual and parameter uncertainties. The main objective of this study was to demonstrate the potential of data-driven models for leakage detection in carbon sequestration repositories. Monitoring data collected during an artificial CO2 release test at a carbon sequestration repository were used, which include both scalar time series (pressure) and vector time series (distributed temperature sensing). For each type of data, separate online anomaly detection algorithms were developed using the baseline experiment data (no leak) and then tested on the leak experiment data. Performance of a number of different online algorithms was compared. Results show the importance of including contextual information in the dataset to mitigate the impact of reservoir noise and reduce false positive rate. The developed algorithms were integrated into a generic Web-based platform for real-time anomaly detection.
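A minimal online detector in the spirit described (learn statistics from the baseline, no-leak data, then flag deviating test samples) might look like the following; the running z-score rule and its threshold are assumptions for illustration, not the study's algorithms.

```python
# Minimal online anomaly detector: running mean/variance (Welford's
# algorithm) learned from baseline data; test samples deviating by more
# than `threshold` standard deviations are flagged.

class OnlineZScoreDetector:
    def __init__(self, threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0            # sum of squared deviations (Welford)
        self.threshold = threshold

    def update(self, x):
        """Absorb a baseline (no-leak) sample."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomaly(self, x):
        """Flag x if it deviates more than `threshold` std devs."""
        if self.n < 2:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return abs(x - self.mean) > self.threshold * std

det = OnlineZScoreDetector()
for p in [10.0, 10.1, 9.9, 10.05, 9.95, 10.0]:   # baseline pressures
    det.update(p)
```

The abstract's point about contextual information amounts to conditioning these statistics on known covariates (e.g. injection rate) so that reservoir noise is not mistaken for a leak.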
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, David D.; Clough, Shepard A.; Liljegren, James C.
2007-11-01
Ground-based two-channel microwave radiometers have been used for over 15 years by the Atmospheric Radiation Measurement (ARM) program to provide observations of downwelling emitted radiance from which precipitable water vapor (PWV) and liquid water path (LWP) – two geophysical parameters critical for many areas of atmospheric research – are retrieved. An algorithm that utilizes two advanced retrieval techniques, a computationally expensive physical-iterative approach and an efficient statistical method, has been developed to retrieve these parameters. An important component of this Microwave Retrieval (MWRRET) algorithm is the determination of small (< 1 K) offsets that are subtracted from the observed brightness temperatures before the retrievals are performed. Accounting for these offsets removes systematic biases from the observations and/or the model spectroscopy necessary for the retrieval, significantly reducing the systematic biases in the retrieved LWP. The MWRRET algorithm provides significantly more accurate retrievals than the original ARM statistical retrieval, which uses monthly retrieval coefficients. By combining the two retrieval methods with the application of brightness temperature offsets to reduce the spurious LWP bias in clear skies, the MWRRET algorithm provides significantly better retrievals of PWV and LWP from the ARM two-channel microwave radiometers compared to the original ARM product.
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
Results are presented from the evaluation of the performance seeking control (PSC) optimization algorithm developed by Smith et al. (1990) for the F-15 aircraft, which optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. Comparisons are presented between the load cell measurements, PSC onboard model thrust calculations, and posttest state variable model computations. Actual performance improvements using the PSC algorithm are presented for its various modes. The results of using the PSC algorithm are compared with similar test case results using the HIDEC algorithm.
NASA Astrophysics Data System (ADS)
Moyer, D.; Moeller, C.; De Luccia, F.
2013-09-01
The Visible Infrared Imager Radiometer Suite (VIIRS), a primary sensor on-board the Suomi National Polar-orbiting Partnership (SNPP) spacecraft, was launched October 28, 2011. It has 22 bands: 7 thermal emissive bands (TEBs), 14 reflective solar bands (RSBs) and a Day Night Band (DNB). The TEBs cover the spectral wavelengths between 3.7 and 12 μm and comprise two 371 m and five 742 m spatial resolution bands. A VIIRS Key Performance Parameter (KPP) is the sea surface temperature (SST), which uses the calibrated Science Data Records (SDRs) of bands M12 (3.7 μm), M15 (10.8 μm) and M16 (12.0 μm). The TEB SDRs rely on pre-launch calibration coefficients used in a quadratic algorithm to convert the detector's response to calibrated radiance. This paper evaluates the performance of these pre-launch calibration coefficients using vicarious calibration information from the Cross-track Infrared Sounder (CrIS), also onboard the SNPP spacecraft, and the Infrared Atmospheric Sounding Interferometer (IASI) on-board the Meteorological Operational (MetOp) satellite. Changes to the pre-launch calibration coefficients' offset term c0 to improve the SDR's performance at cold scene temperatures are also discussed.
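The quadratic response-to-radiance conversion can be sketched directly. The coefficient values below are invented for illustration; the example only shows how adjusting the offset term c0, as discussed for cold scenes, shifts every retrieved radiance uniformly.

```python
# Sketch of the quadratic calibration described: detector response
# (digital number) to radiance via L = c0 + c1*dn + c2*dn**2.
# Coefficient values are illustrative, not VIIRS coefficients.

def dn_to_radiance(dn, c0, c1, c2):
    """Quadratic calibration: detector response -> calibrated radiance."""
    return c0 + c1 * dn + c2 * dn * dn

# Changing only the offset term c0 shifts all radiances by the same amount:
l_cold = dn_to_radiance(100.0, c0=0.05, c1=0.01, c2=1e-6)
l_cold_adj = dn_to_radiance(100.0, c0=0.03, c1=0.01, c2=1e-6)
```

A uniform offset matters most, in relative terms, for cold scenes where the radiance itself is small, which is why the c0 adjustment targets cold-scene SDR performance.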
A real-time ECG data compression and transmission algorithm for an e-health device.
Lee, SangJoon; Kim, Jungkuk; Lee, Myoungho
2011-09-01
This paper introduces a real-time data compression and transmission algorithm between e-health terminals for a periodic ECG signal. The proposed algorithm consists of five compression procedures and four reconstruction procedures. In order to evaluate the performance of the proposed algorithm, the algorithm was applied to all 48 recordings of the MIT-BIH arrhythmia database, and the compression ratio (CR), percent root mean square difference (PRD), percent root mean square difference normalized (PRDN), rms, SNR, and quality score (QS) values were obtained. The result showed that the CR was 27.9:1 and the PRD was 2.93 on average for all 48 data instances with a 15% window size. In addition, the performance of the algorithm was compared to those of similar algorithms introduced recently by others. It was found that the proposed algorithm showed clearly superior performance in all 48 data instances at a compression ratio lower than 15:1, whereas it showed similar or slightly inferior PRD performance for a data compression ratio higher than 20:1. In light of the fact that the similarity with the original data becomes meaningless when the PRD is higher than 2, the proposed algorithm shows significantly better performance compared to the performance levels of other algorithms. Moreover, because the algorithm can compress and transmit data in real time, it can serve as an optimal biosignal data transmission method for limited-bandwidth communication between e-health devices.
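The fidelity metrics quoted (CR, PRD) have conventional definitions, which the following sketch assumes; the sample signal and bit counts are illustrative, not MIT-BIH data.

```python
# Sketch of the standard compression fidelity metrics named in the
# abstract; the conventional (non-normalized) PRD definition is assumed.

def compression_ratio(original_bits, compressed_bits):
    """CR: size of original over size of compressed representation."""
    return original_bits / compressed_bits

def prd(original, reconstructed):
    """Percent root-mean-square difference between signal and reconstruction."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * (num / den) ** 0.5

x = [1.0, 2.0, 3.0, 4.0]          # original samples
xr = [1.0, 2.1, 2.9, 4.0]         # reconstructed samples
cr = compression_ratio(4 * 11, 16)  # e.g. four 11-bit samples packed to 16 bits
p = prd(x, xr)
```

The PRDN variant replaces the denominator with the sum of squared deviations from the signal mean, removing the dependence on any DC offset.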
Combined Dust Detection Algorithm by Using MODIS Infrared Channels over East Asia
NASA Technical Reports Server (NTRS)
Park, Sang Seo; Kim, Jhoon; Lee, Jaehwa; Lee, Sukjo; Kim, Jeong Soo; Chang, Lim Seok; Ou, Steve
2014-01-01
A new dust detection algorithm is developed by combining the results of multiple dust detection methods using IR channels onboard the MODerate resolution Imaging Spectroradiometer (MODIS). Brightness Temperature Difference (BTD) between two wavelength channels has been used widely in previous dust detection methods. However, BTD methods have limitations in identifying the offset values of the BTD to discriminate clear-sky areas. The current algorithm overcomes the disadvantages of previous dust detection methods by considering the Brightness Temperature Ratio (BTR) values of the dual wavelength channels with a 30-day composite, the optical properties of the dust particles, the variability of surface properties, and the cloud contamination. Therefore, the current algorithm shows improvements in detecting the dust-loaded region over land during daytime. Finally, the confidence index of the current dust algorithm is shown in 10 × 10 pixels of the MODIS observations. From January to June, 2006, the results of the current algorithm are within 64 to 81% of those found using the fine mode fraction (FMF) and aerosol index (AI) from the MODIS and Ozone Monitoring Instrument (OMI). The agreement between the results of the current algorithm and the OMI AI over the non-polluted land also ranges from 60 to 67% to avoid errors due to the anthropogenic aerosol. In addition, the developed algorithm shows statistically significant results at four AErosol RObotic NETwork (AERONET) sites in East Asia.
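A bare-bones split-window BTD test, of the kind the combined algorithm builds on, can be sketched as follows. The threshold is an illustrative value, and the paper's combined BTR/composite logic is considerably richer than this single test.

```python
# Minimal BTD sketch: the classic split-window dust test flags pixels
# where the 11 um brightness temperature falls below the 12 um one
# (negative BTD), a common signature of mineral dust. The threshold
# value is illustrative only.

def btd_dust_mask(bt11, bt12, threshold=-0.5):
    """Return True where BTD(11 - 12 um) falls below threshold (dusty)."""
    return [(a - b) < threshold for a, b in zip(bt11, bt12)]

bt11 = [285.0, 290.0, 288.0]   # K, 11 um channel
bt12 = [287.0, 289.0, 288.2]   # K, 12 um channel
mask = btd_dust_mask(bt11, bt12)
```

The difficulty the abstract raises is precisely that a single fixed threshold like this one fails over variable surfaces, motivating the 30-day composite and multi-test combination.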
Ozcift, Akin; Gulten, Arif
2011-12-01
Improving the accuracy of machine learning algorithms is vital in designing high-performance computer-aided diagnosis (CADx) systems. Research has shown that a base classifier's performance may be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers of 30 machine learning algorithms to evaluate their classification performance using Parkinson's, diabetes and heart disease datasets from the literature. In the experiments, first the feature dimension of the three datasets is reduced using the correlation-based feature selection (CFS) algorithm. Second, the classification performance of the 30 machine learning algorithms is calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performance of the respective classifiers with the same disease data. All the experiments are carried out with a leave-one-out validation strategy and the performance of the 60 algorithms is evaluated using three metrics: classification accuracy (ACC), kappa error (KE) and area under the receiver operating characteristic (ROC) curve (AUC). Base classifiers achieved 72.15%, 77.52% and 84.43% average accuracies for the diabetes, heart and Parkinson's datasets, respectively. The RF classifier ensembles produced average accuracies of 74.47%, 80.49% and 87.13% for the respective diseases. RF, a newly proposed classifier ensemble algorithm, may be used to improve the accuracy of miscellaneous machine learning algorithms in designing advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Inverse analysis of non-uniform temperature distributions using multispectral pyrometry
NASA Astrophysics Data System (ADS)
Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling
2016-05-01
Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
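For a fixed grid of candidate temperatures, the measured multispectral intensities are linear in the unknown area fractions, so the core of the inverse problem reduces to least squares. A noise-free sketch with an assumed two-temperature grid follows; the paper itself applies a Levenberg-Marquardt iteration to the full ill-posed problem on a finer grid.

```python
import math

# Sketch: blackbody (Planck) radiances at a two-temperature grid form a
# linear system in the unknown area fractions, solved here by 2x2 normal
# equations. Grid, wavelengths and noise-free data are assumptions.

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam_um, t_kelvin):
    """Blackbody spectral radiance at wavelength lam_um (micrometres)."""
    lam = lam_um * 1e-6
    return (2 * H * C**2 / lam**5) / (math.exp(H * C / (lam * KB * t_kelvin)) - 1.0)

def solve_fractions(wavelengths, measured, t_grid):
    """Least-squares area fractions for a two-temperature grid."""
    a = [[planck(w, t) for t in t_grid] for w in wavelengths]
    # normal equations A^T A f = A^T y, solved by Cramer's rule
    ata = [[sum(r[i] * r[j] for r in a) for j in range(2)] for i in range(2)]
    aty = [sum(r[i] * y for r, y in zip(a, measured)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    f0 = (aty[0] * ata[1][1] - aty[1] * ata[0][1]) / det
    f1 = (ata[0][0] * aty[1] - ata[1][0] * aty[0]) / det
    return f0, f1

wl = [8.0, 9.0, 10.0, 11.0, 12.0, 13.0]   # um, pyrometer channels
true_f = (0.3, 0.7)                        # area fractions at 600 K and 800 K
y = [true_f[0] * planck(w, 600.0) + true_f[1] * planck(w, 800.0) for w in wl]
f_est = solve_fractions(wl, y, (600.0, 800.0))
```

With noise and many candidate temperatures the system becomes ill-conditioned, which is why the paper needs a damped (Levenberg-Marquardt) iteration rather than this direct solve.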
Design of PID temperature control system based on STM32
NASA Astrophysics Data System (ADS)
Zhang, Jianxin; Li, Hailin; Ma, Kai; Xue, Liang; Han, Bianhua; Dong, Yuemeng; Tan, Yue; Gu, Chengru
2018-03-01
A rapid, high-accuracy temperature control system was designed using a proportional-integral-derivative (PID) control algorithm with an STM32 as the micro-controller unit (MCU). The temperature control system can be applied in fields with demanding requirements on the response speed and accuracy of temperature control. The temperature acquisition circuit in the system adopted a Pt1000 resistance thermometer as the temperature sensor. Through this acquisition circuit, the measured temperature signal is converted into a voltage signal and transmitted to the MCU. A TLP521-1 photoelectric coupler was matched with a BD237 power transistor to drive the thermoelectric cooler (TEC) in the FTA951 module. The effective electric power of the TEC was controlled by pulse width modulation (PWM) signals generated by the MCU. The PWM signal parameters are adjusted in real time by the PID algorithm according to the difference between the measured temperature and the set temperature. An upper computer was used to input the set temperature and monitor the system running state via a serial port. The application experiment results show that the temperature control system is characterized by a simple structure, rapid response, good stability and high temperature control accuracy, with an error of less than ±0.5 °C.
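The loop described (measure temperature, run the PID law, set the PWM duty cycle) can be sketched against a simple first-order thermal plant. The gains, plant constants and anti-windup rule below are illustrative assumptions, not the paper's tuning.

```python
# Sketch of a discrete PID loop driving a first-order thermal plant.
# The controller output is a PWM duty cycle clamped to [0, 1]; a simple
# conditional-integration anti-windup is assumed.

def simulate_pid(setpoint, t_ambient=25.0, steps=2000, dt=0.05,
                 kp=8.0, ki=2.0, kd=0.5):
    temp = t_ambient
    integral = 0.0
    prev_err = setpoint - temp
    for _ in range(steps):
        err = setpoint - temp
        deriv = (err - prev_err) / dt
        prev_err = err
        raw = (kp * err + ki * integral + kd * deriv) / 100.0
        u = max(0.0, min(1.0, raw))      # PWM duty cycle in [0, 1]
        if raw == u:                     # anti-windup: freeze integral when saturated
            integral += err * dt
        # first-order plant: heating proportional to duty, loss to ambient
        temp += dt * (100.0 * u - 0.1 * (temp - t_ambient))
    return temp

final = simulate_pid(60.0)
```

On the actual MCU the same three terms would be computed in a fixed-rate timer interrupt and written to the PWM compare register; the structure of the loop is unchanged.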
NASA Technical Reports Server (NTRS)
George, William K.; Rae, William J.; Woodward, Scott H.
1991-01-01
The importance of frequency response considerations in the use of thin-film gages for unsteady heat transfer measurements in transient facilities is considered, and methods for evaluating it are proposed. A departure frequency response function is introduced and illustrated by an existing analog circuit. A Fresnel integral temperature which possesses the essential features of the film temperature in transient facilities is introduced and is used to evaluate two numerical algorithms. Finally, criteria are proposed for the use of finite-difference algorithms for the calculation of the unsteady heat flux from a sampled temperature signal.
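The abstract does not name the finite-difference algorithms evaluated; a standard choice for thin-film gauges on a semi-infinite substrate is the Cook-Felderman scheme, sketched here with an illustrative thermal product value.

```python
import math

# Sketch of the Cook-Felderman finite-difference algorithm, a standard
# way to convert a sampled surface-temperature history into unsteady
# heat flux for a semi-infinite substrate. rck = sqrt(rho*c*k) is the
# substrate thermal product (SI units); its value here is illustrative.

def cook_felderman(temps, dt, rck=1500.0):
    """Heat flux q(t_n) from surface temperatures sampled at interval dt."""
    q = [0.0]
    for n in range(1, len(temps)):
        s = 0.0
        for i in range(1, n + 1):
            # sum of temperature increments weighted by elapsed-time roots
            s += (temps[i] - temps[i - 1]) / (
                math.sqrt((n - i) * dt) + math.sqrt((n - i + 1) * dt))
        q.append(2.0 * rck / math.sqrt(math.pi) * s)
    return q

# A step rise in surface temperature implies a flux spike that decays
temps = [300.0, 301.0, 301.0, 301.0, 301.0]   # K, sampled every 1 ms
q = cook_felderman(temps, dt=1e-3)
```

The scheme's sensitivity to sample-to-sample temperature noise at high frequency is exactly the frequency-response issue the paper's criteria address.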
An acoustic backscatter thermometer for remotely mapping seafloor water temperature
NASA Astrophysics Data System (ADS)
Jackson, Darrell R.; Dworski, J. George
1992-01-01
A bottom-mounted, circularly scanning sonar operating at 40 kHz has been used to map changes in water sound speed over a circular region 150 m in diameter. If it is assumed that the salinity remains constant, the change in sound speed can be converted to a change in temperature. For the present system, the spatial resolution is 7.5 m and the temperature resolution is 0.05°C. The technique is based on comparison of successive sonar scans by means of a correlation algorithm. The algorithm is illustrated using data from the Sediment Transport Events on Slopes and Shelves (STRESS) experiment.
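The scan-to-scan comparison rests on finding the arrival-time shift between successive pings, which a cross-correlation peak search captures; a shift in travel time then maps to a sound-speed (and hence temperature) change. The synthetic scans below are illustrative, not sonar data.

```python
# Sketch of the correlation step: find the sample lag of one scan
# relative to the previous one by maximizing the cross-correlation.
# A positive lag means a longer travel time (slower sound speed).

def best_lag(a, b, max_lag):
    """Lag (in samples) of b relative to a that maximizes correlation."""
    best, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        val = sum(a[i] * b[i + lag]
                  for i in range(max(0, -lag), min(len(a), len(b) - lag)))
        if val > best_val:
            best, best_val = lag, val
    return best

# Second scan is the first delayed by 3 samples
scan1 = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
scan2 = [0.0] * 3 + scan1[:-3]
lag = best_lag(scan1, scan2, max_lag=5)
```

Dividing the lag by the sample rate gives the travel-time change; with a fixed path length, the fractional sound-speed change follows, and near-constant salinity lets it be read as a temperature change.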
Infrared remote sensing of the vertical and horizontal distribution of clouds
NASA Technical Reports Server (NTRS)
Chahine, M. T.; Haskins, R. D.
1982-01-01
An algorithm has been developed to derive the horizontal and vertical distribution of clouds from the same set of infrared radiance data used to retrieve atmospheric temperature profiles. The method leads to the determination of the vertical atmospheric temperature structure and the cloud distribution simultaneously, providing information on heat sources and sinks, storage rates and transport phenomena in the atmosphere. Experimental verification of this algorithm was obtained using the 15-micron data measured by the NOAA-VTPR temperature sounder. After correcting for water vapor emission, the results show that the cloud cover derived from 15-micron data is less than that obtained from visible data.
Algorithmic Coordination in Robotic Networks
2010-11-29
appropriate performance, robustness and scalability properties for various task allocation, surveillance, and information gathering applications is... networking, we envision designing and analyzing algorithms with appropriate performance, robustness and scalability properties for various task... distributed algorithms for target assignments; based on the classic auction algorithms in static networks, we intend to design efficient algorithms in worst
A semi-active suspension control algorithm for vehicle comprehensive vertical dynamics performance
NASA Astrophysics Data System (ADS)
Nie, Shida; Zhuang, Ye; Liu, Weiping; Chen, Fan
2017-08-01
Comprehensive performance of the vehicle, including ride qualities and road-holding, is essentially of great value in practice. Many up-to-date semi-active control algorithms improve vehicle dynamics performance effectively. However, it is hard to improve comprehensive performance for the conflict between ride qualities and road-holding around the second-order resonance. Hence, a new control algorithm is proposed to achieve a good trade-off between ride qualities and road-holding. In this paper, the properties of the invariant points are analysed, which gives an insight into the performance conflicting around the second-order resonance. Based on it, a new control algorithm is proposed. The algorithm employs a novel frequency selector to balance suspension ride and handling performance by adopting a medium damping around the second-order resonance. The results of this study show that the proposed control algorithm could improve the performance of ride qualities and suspension working space up to 18.3% and 8.2%, respectively, with little loss of road-holding compared to the passive suspension. Consequently, the comprehensive performance can be improved by 6.6%. Hence, the proposed algorithm is of great potential to be implemented in practice.
NASA Astrophysics Data System (ADS)
Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui
2016-04-01
Simulated tempering (ST) is a widely used enhancing sampling method for Molecular Dynamics simulations. As one expanded ensemble method, ST is a combination of canonical ensembles at different temperatures and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling among temperature space. However, this uniform distribution in temperature space may not be optimal since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method: Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformation transitions when optimizing the weights of different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results from these three systems showed that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm is particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
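The cross-temperature Metropolis step that the weights enter can be sketched as follows. The standard ST acceptance form is assumed, and the weight values are illustrative, not EPSW-optimized ones.

```python
import math

# Sketch of the cross-temperature Metropolis step in simulated tempering:
# a move from inverse temperature beta_m to beta_n at potential energy u
# is accepted with probability min(1, exp(-(beta_n - beta_m)*u + (w_n - w_m))),
# where w are the per-temperature weights that EPSW optimizes.

def st_accept_prob(u, beta_m, beta_n, w_m, w_n):
    """Acceptance probability for a temperature-swap attempt."""
    return min(1.0, math.exp(-(beta_n - beta_m) * u + (w_n - w_m)))

# Moving to a higher temperature (smaller beta) at positive energy:
p_up = st_accept_prob(u=5.0, beta_m=1.0, beta_n=0.8, w_m=0.0, w_n=-0.5)
# The reverse move is penalized accordingly:
p_down = st_accept_prob(u=5.0, beta_m=0.8, beta_n=1.0, w_m=-0.5, w_n=0.0)
```

Choosing the weights as free energies makes the marginal distribution over temperatures uniform; EPSW instead tunes them to minimize round-trip times between metastable states, which is the paper's contribution.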
Automatic reactor control system for transient operation
NASA Astrophysics Data System (ADS)
Lipinski, Walter C.; Bhattacharyya, Samit K.; Hanan, Nelson A.
Various programmatic considerations have delayed the upgrading of the TREAT reactor, and the performance of the control system is not yet experimentally verified. The current schedule calls for the upgrading activities to occur late in calendar year 1987. Detailed simulation results, coupled with earlier validation of individual components of the control strategy in TREAT, verify the performance of the algorithms. The control system operates within the safety envelope provided by a protection system designed to ensure reactor safety under conditions of spurious reactivity additions. The approach should be directly applicable to MMW systems, with appropriate accounting of the temperature rate limitations of key components and of the inertia of the secondary system components.
NASA Technical Reports Server (NTRS)
Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)
2001-01-01
The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best-estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient, in terms of wall clock time, and scalable parallel implementations of the algorithms.
Bouc-Wen hysteresis model identification using Modified Firefly Algorithm
NASA Astrophysics Data System (ADS)
Zaman, Mohammad Asif; Sikder, Urmita
2015-12-01
The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with the performance of the conventional Firefly Algorithm, Genetic Algorithm and Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate with a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data. The obtained model is found to be in good agreement with the measured data.
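A sketch of the forward Bouc-Wen simulation and the squared-error objective that any of the compared optimizers would minimize; the standard single-exponent Bouc-Wen form is assumed, and the parameter values and excitation are illustrative.

```python
import math

# Sketch: Euler integration of the Bouc-Wen hysteresis variable z driven
# by a displacement history x(t), plus the sum-of-squared-errors objective
# an identification algorithm (firefly or otherwise) would minimize.

def bouc_wen_z(x, dt, a=1.0, beta=0.5, gamma=0.3, n=1.0):
    """Integrate dz/dt = A*dx - beta*|dx|*z*|z|^(n-1) - gamma*dx*|z|^n."""
    z, zs = 0.0, [0.0]
    for k in range(1, len(x)):
        dx = (x[k] - x[k - 1]) / dt
        dz = (a * dx - beta * abs(dx) * z * abs(z) ** (n - 1)
              - gamma * dx * abs(z) ** n)
        z += dz * dt
        zs.append(z)
    return zs

def sse(params, x, dt, target):
    """Objective: squared error between simulated and measured hysteresis."""
    a, beta, gamma, n = params
    zs = bouc_wen_z(x, dt, a, beta, gamma, n)
    return sum((u - v) ** 2 for u, v in zip(zs, target))

dt = 0.01
x = [math.sin(2 * math.pi * t * dt) for t in range(500)]  # sinusoidal drive
target = bouc_wen_z(x, dt)                                # "measured" data
```

The identification task is then a bounded search over (A, beta, gamma, n) minimizing `sse`; the paper's contribution is the search strategy, not this forward model.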
Stratiform/convective rain delineation for TRMM microwave imager
NASA Astrophysics Data System (ADS)
Islam, Tanvir; Srivastava, Prashant K.; Dai, Qiang; Gupta, Manika; Wan Jaafar, Wan Zurina
2015-10-01
This article investigates the potential for using machine learning algorithms to delineate stratiform/convective (S/C) rain regimes for a passive microwave imager, taking calibrated brightness temperatures as the only spectral parameters. The algorithms have been implemented for the Tropical Rainfall Measuring Mission (TRMM) microwave imager (TMI), and calibrated as well as validated using the Precipitation Radar (PR) S/C information as the target class variables. Two different algorithms are explored for the delineation. The first is the metaheuristic adaptive boosting algorithm, which includes the real, gentle, and modest versions of AdaBoost. The second is classical linear discriminant analysis, which includes the Fisher's and penalized versions of linear discriminant analysis. Furthermore, prior to the development of the delineation algorithms, a feature selection analysis was conducted for a total of 85 features, which contains the combinations of brightness temperatures from 10 GHz to 85 GHz and some derived indexes, such as the scattering index, polarization corrected temperature, and polarization difference, with the help of the mutual-information-aided minimal redundancy maximal relevance (mRMR) criterion. It has been found that the polarization corrected temperature at 85 GHz and the features derived from the "addition" operator associated with the 85 GHz channels have good statistical dependency on the S/C target class variables. Further, it has been shown how the mRMR feature selection technique helps to reduce the number of features without deteriorating the results when applied through the machine learning algorithms. The proposed scheme is able to delineate the S/C rain regimes with reasonable accuracy. Based on the statistical validation over the validation period, the Matthews correlation coefficients are in the range of 0.60-0.70.
Since the proposed method does not rely on any a priori information, it is very suitable for other microwave sensors having channels similar to the TMI's. The method could possibly benefit the constellation sensors in the Global Precipitation Measurement (GPM) mission era.
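The Matthews correlation coefficient used for validation follows directly from the 2x2 confusion matrix; a minimal sketch (the counts are invented and merely happen to land in the quoted 0.60-0.70 range):

```python
import math

# Sketch of the Matthews correlation coefficient for a binary
# stratiform/convective delineation, from confusion-matrix counts.

def mcc(tp, tn, fp, fn):
    """MCC in [-1, 1]; 0 means no better than chance."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

score = mcc(tp=80, tn=85, fp=15, fn=20)
```

MCC is a sensible choice here because the S/C classes are imbalanced in rain data, and plain accuracy would overstate performance.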
Comparison of Nimbus-7 SMMR and GOES-1 VISSR Atmospheric Liquid Water Content.
NASA Astrophysics Data System (ADS)
Lojou, Jean-Yves; Frouin, Robert; Bernard, René
1991-02-01
Vertically integrated atmospheric liquid water content derived from Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR) brightness temperatures and from GOES-1 Visible and Infrared Spin-Scan Radiometer (VISSR) radiances in the visible are compared over the Indian Ocean during MONEX (monsoon experiment). In the retrieval procedure, Wilheit and Chang's algorithm and Stephens' parameterization schemes are applied to the SMMR and VISSR data, respectively. The results indicate that in the 0-100 mg cm-2 range of liquid water content considered, the correlation coefficient between the two types of estimates is 0.83 (0.81-0.85 at the 99 percent confidence level). The Wilheit and Chang algorithm, however, yields values lower than those obtained with Stephens' schemes by 24.5 mg cm-2 on average, and occasionally the SMMR-based values are negative. Alternative algorithms are proposed for use with SMMR data, which eliminate the bias, augment the correlation coefficient, and reduce the rms difference. These algorithms include the Wilheit and Chang formula with modified coefficients (multilinear regression), the Wilheit and Chang formula with the same coefficients but different equivalent atmospheric temperatures for each channel (temperature bias adjustment), and a second-order polynomial in brightness temperatures at 18, 21, and 37 GHz (polynomial development). When applied to a dataset excluded from the regression dataset, the multilinear regression algorithm provides the best results, namely a 0.91 correlation coefficient, a 5.2 mg cm-2 (residual) difference, and a 2.9 mg cm-2 bias. Simply shifting the liquid water content predicted by the Wilheit and Chang algorithm does not yield as good comparison statistics, indicating that the occasional negative values are not due only to a bias. The more accurate SMMR-derived liquid water content allows one to better evaluate cloud transmittance in the solar spectrum, at least in the area and during the period analyzed.
Combining this cloud transmittance with a clear-sky model would provide ocean surface insolation estimates from SMMR data alone.
Hybrid analysis for indicating patients with breast cancer using temperature time series.
Silva, Lincoln F; Santos, Alair Augusto S M D; Bravo, Renato S; Silva, Aristófanes C; Muchaluat-Saade, Débora C; Conci, Aura
2016-07-01
Breast cancer is the most common cancer among women worldwide. Diagnosis and treatment in early stages increase cure chances. The temperature of cancerous tissue is generally higher than that of healthy surrounding tissues, making thermography an option to be considered in screening strategies for this cancer type. This paper proposes a hybrid methodology for analyzing dynamic infrared thermography in order to indicate patients at risk of breast cancer, using both unsupervised and supervised machine learning techniques, which characterizes the methodology as hybrid. Dynamic infrared thermography monitors or quantitatively measures temperature changes on the examined surface after a thermal stress. During the dynamic infrared thermography examination, a sequence of breast thermograms is generated. In the proposed methodology, this sequence is processed and analyzed by several techniques. First, the region of the breasts is segmented and the thermograms of the sequence are registered. Then, temperature time series are built and the k-means algorithm is applied to these series using various values of k. The clustering formed by the k-means algorithm for each k value is evaluated using clustering validation indices, generating values treated as features in the classification model construction step. A data mining tool was used to solve the combined algorithm selection and hyperparameter optimization (CASH) problem in classification tasks. Besides the classification algorithm recommended by the data mining tool, classifiers based on Bayesian networks, neural networks, decision rules and decision trees were executed on the data set used for evaluation. Test results support that the proposed analysis methodology is able to indicate patients with breast cancer. Among 39 tested classification algorithms, K-Star and Bayes Net presented 100% classification accuracy.
Furthermore, among the Bayes Net, multi-layer perceptron, decision table and random forest classification algorithms, an average accuracy of 95.38% was obtained. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
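A minimal, numpy-only sketch of the unsupervised step described above, with within-cluster sum of squares standing in for the paper's clustering validation indices and toy data in place of registered thermogram time series:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: random init, then alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

def clustering_features(series, k_values=(2, 3, 4, 5)):
    """One clustering-validation value per k (within-cluster sum of squares here,
    standing in for the paper's validation indices); the resulting vector is
    the feature vector passed to the classifier."""
    feats = []
    for k in k_values:
        labels, cents = kmeans(series, k)
        feats.append(sum(((series[labels == j] - cents[j]) ** 2).sum()
                         for j in range(k)))
    return np.array(feats)

# toy data: 60 per-pixel "temperature time series" from two thermal regimes
rng = np.random.default_rng(0)
series = np.vstack([rng.normal(30.0, 0.2, (30, 20)),
                    rng.normal(33.0, 0.2, (30, 20))])
feats = clustering_features(series)
```

In the paper each patient yields such a feature vector, which then feeds the supervised classifiers (K-Star, Bayes Net, etc.).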
Modeling and Compensating Temperature-Dependent Non-Uniformity Noise in IR Microbolometer Cameras
Wolf, Alejandro; Pezoa, Jorge E.; Figueroa, Miguel
2016-01-01
Images rendered by uncooled microbolometer-based infrared (IR) cameras are severely degraded by spatial non-uniformity (NU) noise. The NU noise imposes a fixed pattern on the true images, and the intensity of the pattern changes over time due to the temperature instability of such cameras. In this paper, we present a novel model and a compensation algorithm for the spatial NU noise and its temperature-dependent variations. The model separates the NU noise into two components: a constant term, which corresponds to a set of NU parameters determining the spatial structure of the noise, and a dynamic term, which scales linearly with the fluctuations of the temperature surrounding the array of microbolometers. We use a black-body radiator and samples of the temperature surrounding the IR array to characterize offline both the constant and the temperature-dependent NU noise parameters. Next, the temperature-dependent variations are estimated online using both a spatially uniform Hammerstein-Wiener estimator and a pixelwise least mean squares (LMS) estimator. We compensate for the NU noise in IR images from two long-wave IR cameras. Results show excellent NU correction performance and a root mean square error of less than 0.25 °C when the array's temperature varies by approximately 15 °C. PMID:27447637
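The pixelwise LMS estimation of the dynamic term can be sketched under the paper's linear model (readout = scene + constant offset + sensitivity × temperature fluctuation). The black-body scene value, noise levels, and step size below are invented for illustration, and the constant offsets are assumed already characterized offline:

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, T = 8, 8, 2000
a = rng.normal(0.0, 2.0, (H, W))     # constant FPN offsets (black-body calibrated)
b = rng.normal(1.0, 0.1, (H, W))     # per-pixel temperature sensitivity (unknown)
U = 25.0                             # uniform black-body scene, deg C

b_hat = np.ones((H, W))              # initial guess for the dynamic term
mu = 1e-3                            # LMS step size
for _ in range(T):
    dT = rng.uniform(-7.5, 7.5)      # array-temperature fluctuation sample
    y = U + a + b * dT + rng.normal(0.0, 0.05, (H, W))   # raw readout
    e = (y - a - b_hat * dT) - U     # residual after correction
    b_hat += mu * e * dT             # pixelwise LMS update
```

After convergence, `y - a - b_hat * dT` removes both the fixed pattern and its temperature-dependent scaling.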
Contribution of the AIRS Shortwave Sounding Channels to Retrieval Accuracy
NASA Technical Reports Server (NTRS)
Susskind, Joel; Kouvaris, Louis
2006-01-01
AIRS contains 2376 high spectral resolution channels between 650/cm and 2665/cm, including channels in both the 15 micron (near 667/cm) and 4.2 micron (near 2400/cm) CO2 sounding bands. Use of temperature sounding channels in the 15 micron CO2 band has considerable heritage in infrared remote sensing. Channels in the 4.2 micron CO2 band have potential advantages for temperature sounding purposes because they are essentially insensitive to absorption by water vapor and ozone, and also have considerably sharper lower tropospheric temperature sounding weighting functions than the 15 micron temperature sounding channels. Potential drawbacks to the use of 4.2 micron channels arise from the effects on the observed radiances of solar radiation reflected by the surface and clouds, as well as effects of non-local thermodynamic equilibrium on shortwave observations during the day. These are of no practical consequence, however, when properly accounted for. We show results of experiments performed utilizing different spectral regions of AIRS, conducted with the AIRS Science Team candidate Version 5 algorithm. Experiments were performed using temperature sounding channels within the entire AIRS spectral coverage, within only the spectral region 650/cm to 1614/cm, and within only the spectral region 1000/cm to 2665/cm. These show the relative importance, with regard to sounding accuracy, of utilizing only the 15 micron temperature sounding channels, only the 4.2 micron temperature sounding channels, or both. The spectral region 2380/cm to 2400/cm is shown to contribute significantly to improved sounding accuracy in the lower troposphere, both day and night.
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operators, who can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanism using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization of simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices.
Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
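A generic Monte Carlo parametric study of the kind described can be sketched as follows; the parameter names and the toy "dug mass" metric are placeholders, not Energid's actual models:

```python
import numpy as np

def monte_carlo_study(simulate, param_dists, n_trials=1000, seed=0):
    """Monte Carlo design study: draw design/environment parameters from the
    given distributions, run the (fast analytical) simulation, and collect a
    scalar performance metric per trial."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_trials):
        params = {name: draw(rng) for name, draw in param_dists.items()}
        results.append(simulate(params))
    return np.array(results)

# toy stand-ins: 'dug mass' falls with soil cohesion, rises with bucket width
dists = {"cohesion": lambda r: r.uniform(1.0, 5.0),
         "bucket_w": lambda r: r.uniform(0.3, 0.6)}
metric = lambda p: p["bucket_w"] * 10.0 / p["cohesion"]
m = monte_carlo_study(metric, dists, n_trials=500)
```

The distribution of `m` over trials is what such a study compares across competing excavator designs.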
Improved algorithms for estimating Total Alkalinity in Northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Devkota, M.; Dash, P.
2017-12-01
Ocean Acidification (OA) is one of the serious challenges that have significant impacts on the oceans. About 25% of anthropogenically generated CO2 is absorbed by the oceans, which decreases average ocean pH. This change has critical impacts on marine species, ocean ecology, and associated economics. Thirty-five years of observations indicate that the rate of change in OA parameters varies geographically, with higher variation in the northern Gulf of Mexico (N-GoM). Several studies have suggested that the Mississippi River significantly affects the carbon dynamics of the N-GoM coastal ecosystem. Total Alkalinity (TA) algorithms developed for major ocean basins produce inaccurate estimates in this region. Hence, a local algorithm to estimate TA is needed for this region, one that incorporates the local effects of oceanographic processes and complex spatial influences. In situ data collected in the N-GoM region during the GOMECC-I and II cruises and the GISR cruises (G-1, 3, 5) from 2007 to 2013 were assimilated and used to assess the efficiency of the existing TA algorithm, which uses Sea Surface Temperature (SST) and Sea Surface Salinity (SSS) as explanatory variables. To improve this algorithm, statistical analyses were first performed to improve its coefficients and functional form. Then, chlorophyll a (Chl-a) was included as an additional explanatory variable in the multiple linear regression, alongside SST and SSS. Based on the average Chl-a concentration over the last 15 years, the N-GoM was divided into two regions, and a separate algorithm was developed for each. Finally, to address spatial non-stationarity, a Geographically Weighted Regression (GWR) algorithm was developed. The existing TA algorithm produced considerable bias, larger in the coastal waters. Chl-a as an additional explanatory variable reduced the bias in the residuals and improved the algorithm's efficiency. Chl-a worked as a proxy for the organic pump's pronounced effects in the coastal waters. The GWR algorithm provided a raster surface of coefficients, yielding an even more reliable TA estimate with the least error, and addressed the spatial non-stationarity of OA in the N-GoM, which was not addressed by previously developed algorithms.
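One possible shape of the improved regression, sketched with numpy ordinary least squares; the functional form, the log transform of Chl-a, and the synthetic data are illustrative assumptions, and a GWR variant would refit these coefficients locally with spatial weights:

```python
import numpy as np

def fit_ta_model(sst, sss, chla, ta):
    """Ordinary least squares for TA = b0 + b1*SSS + b2*SST + b3*ln(Chl-a).
    The term set is an illustrative assumption; published algorithms differ
    between regions."""
    X = np.column_stack([np.ones_like(sss), sss, sst, np.log(chla)])
    coef, *_ = np.linalg.lstsq(X, ta, rcond=None)
    return coef

# synthetic sanity check: exact recovery of known coefficients
rng = np.random.default_rng(2)
sst = rng.uniform(15.0, 30.0, 200)
sss = rng.uniform(25.0, 37.0, 200)
chla = rng.uniform(0.05, 5.0, 200)
true = np.array([300.0, 55.0, 1.5, -8.0])
ta = true[0] + true[1] * sss + true[2] * sst + true[3] * np.log(chla)
coef = fit_ta_model(sst, sss, chla, ta)
```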
Kalman-Predictive-Proportional-Integral-Derivative (KPPID) Temperature Control
NASA Astrophysics Data System (ADS)
Fluerasu, Andrei; Sutton, Mark
2003-09-01
With third generation synchrotron X-ray sources, it is possible to acquire detailed structural information about the system under study with time resolution orders of magnitude faster than was possible a few years ago. These advances have generated many new challenges for changing and controlling the state of the system on very short time scales, in a uniform and controlled manner. For our particular X-ray experiments [1] on crystallization or order-disorder phase transitions in metallic alloys, we need to change the sample temperature by hundreds of degrees as fast as possible while avoiding over- or undershooting. To achieve this, we designed and implemented a computer-controlled temperature tracking system which combines standard Proportional-Integral-Derivative (PID) feedback, thermal modeling and finite difference thermal calculations (feedforward), and Kalman filtering of the temperature readings to reduce the noise. The resulting Kalman-Predictive-Proportional-Integral-Derivative (KPPID) algorithm allows us to obtain accurate control, minimize the response time, and avoid over-/undershooting, even in systems with inherently noisy temperature readings and time delays. The KPPID temperature controller was successfully implemented at the Advanced Photon Source at Argonne National Laboratory and was used to perform coherent and time-resolved X-ray diffraction experiments.
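The control idea can be sketched as a PID loop fed by a scalar Kalman filter. This omits the model-based feedforward term, and all gains, noise variances, and the toy thermal plant are invented for illustration:

```python
import numpy as np

class KPPID:
    """PID acting on Kalman-filtered temperature readings (illustrative only)."""
    def __init__(self, kp, ki, kd, q=1e-3, r=0.25, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.q, self.r = q, r              # process / measurement noise variances
        self.x, self.p = 0.0, 1.0          # Kalman state (filtered T) and variance
        self.integ, self.prev = 0.0, 0.0

    def filt(self, z):
        """Scalar Kalman filter with a random-walk temperature model."""
        self.p += self.q                   # predict
        k = self.p / (self.p + self.r)     # Kalman gain
        self.x += k * (z - self.x)         # update with the noisy reading z
        self.p *= 1.0 - k
        return self.x

    def control(self, setpoint, z):
        t = self.filt(z)
        err = setpoint - t
        self.integ += err * self.dt
        deriv = (t - self.prev) / self.dt  # derivative on the filtered measurement
        self.prev = t
        return self.kp * err + self.ki * self.integ - self.kd * deriv

# closed-loop check on a toy first-order thermal plant with noisy readings
rng = np.random.default_rng(0)
ctl = KPPID(kp=2.0, ki=0.5, kd=0.1)
T = 20.0
for _ in range(2000):
    u = ctl.control(100.0, T + rng.normal(0.0, 0.5))
    T += 0.1 * (-0.1 * (T - 20.0) + 0.5 * u)
```

Filtering the reading before differentiating is the point: the derivative term would otherwise amplify the sensor noise.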
Integrated thermal and energy management of plug-in hybrid electric vehicles
NASA Astrophysics Data System (ADS)
Shams-Zahraei, Mojtaba; Kouzani, Abbas Z.; Kutter, Steffen; Bäker, Bernard
2012-10-01
In plug-in hybrid electric vehicles (PHEVs), the engine temperature declines due to reduced engine load and extended engine-off periods. Engine efficiency and emissions are known to depend on engine temperature. Temperature also influences the vehicle air-conditioner and cabin heater loads. In particular, while the engine is cold, the power demand of the cabin heater must be met by the batteries instead of by the waste heat of the engine coolant. Existing energy management strategies (EMS) for PHEVs focus on improving fuel efficiency based on hot-engine characteristics, neglecting the effect of temperature on engine performance and on the vehicle power demand. This paper presents a new EMS incorporating an engine thermal management method that derives globally optimal battery charge depletion trajectories. A dynamic programming-based algorithm is developed to enforce the charge depletion boundaries while optimizing a fuel consumption cost function by controlling the engine power. The optimal control problem formulates the cost function over two state variables: battery charge and engine internal temperature. Simulation results demonstrate that temperature and the cabin heater/air-conditioner power demand can significantly influence the optimal solution for the EMS, and accordingly the fuel efficiency and emissions of PHEVs.
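The optimal control problem can be sketched as a backward dynamic program over the two state variables. Every number below (grids, power levels, warm-up rates, the 20% cold-engine fuel penalty, the battery-fed heater load) is a toy stand-in for the paper's vehicle models:

```python
import numpy as np

# Backward dynamic program over (battery SOC, engine temperature).
socs  = np.linspace(0.2, 1.0, 17)          # SOC grid
temps = np.linspace(20.0, 90.0, 15)        # engine temperature grid, deg C
P_eng = np.array([0.0, 10.0, 20.0])        # engine power choices, kW
P_dem, dt, N = 15.0, 1.0, 60               # demand (kW), step (min), horizon

def fuel_cost(p, T):
    """Fuel per step; a cold engine (< 70 C) pays a 20% penalty."""
    return 0.0 if p == 0.0 else p * dt * (1.2 if T < 70.0 else 1.0)

def step(soc, T, p):
    """Battery and engine-temperature dynamics for one step."""
    heater = 2.0 if T < 40.0 else 0.0      # cabin heat drawn from battery when cold
    soc2 = soc - (P_dem - p + heater) * dt / 600.0
    T2 = T + (2.0 * (p > 0.0) - 0.5 * (T > 20.0)) * dt
    return soc2, min(T2, 90.0)

# terminal cost: feasible only if enough charge remains
J = np.where(socs[:, None] >= 0.3, 0.0, np.inf) * np.ones((1, temps.size))
for _ in range(N):                         # backward value iteration
    Jn = np.full_like(J, np.inf)
    for i, soc in enumerate(socs):
        for j, T in enumerate(temps):
            for p in P_eng:
                soc2, T2 = step(soc, T, p)
                if soc2 < socs[0]:
                    continue               # battery depleted: infeasible action
                ii = int(np.abs(socs - soc2).argmin())   # nearest-neighbour lookup
                jj = int(np.abs(temps - T2).argmin())
                Jn[i, j] = min(Jn[i, j], fuel_cost(p, T) + J[ii, jj])
    J = Jn

cold_full = J[-1, 0]   # optimal fuel from a full battery and a 20 C engine
```

The coupling the paper exploits is visible in the value function: starting cold incurs both the fuel penalty and the battery-fed heater load, so the optimal engine-on schedule shifts relative to a hot-engine-only EMS.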
Efficient implementation of parallel three-dimensional FFT on clusters of PCs
NASA Astrophysics Data System (ADS)
Takahashi, Daisuke
2003-05-01
In this paper, we propose a high-performance parallel three-dimensional fast Fourier transform (FFT) algorithm on clusters of PCs. The three-dimensional FFT algorithm can be altered into a block three-dimensional FFT algorithm to reduce the number of cache misses. We show that the block three-dimensional FFT algorithm improves performance by utilizing the cache memory effectively. We use the block three-dimensional FFT algorithm to implement the parallel three-dimensional FFT algorithm. We succeeded in obtaining performance of over 1.3 GFLOPS on an 8-node dual Pentium III 1 GHz PC SMP cluster.
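The blocking idea can be sketched in Python: the 3-D FFT is separable into three passes of 1-D FFTs, and each pass can transform a cache-sized block of lines at a time. numpy's FFT stands in for an optimised 1-D kernel; the cache benefit only materialises in a compiled implementation:

```python
import numpy as np

def fft3d_blocked(a, block=8):
    """3-D FFT as three passes of 1-D FFTs (one pass per axis); each pass
    processes `block` contiguous lines at a time so the working set can
    stay cache-resident."""
    a = a.astype(complex)
    for axis in range(3):
        m = np.ascontiguousarray(np.moveaxis(a, axis, -1))
        flat = m.reshape(-1, m.shape[-1])
        for i in range(0, flat.shape[0], block):    # one cache-sized block at a time
            flat[i:i + block] = np.fft.fft(flat[i:i + block], axis=1)
        a = np.moveaxis(flat.reshape(m.shape), -1, axis)
    return a

x = np.random.default_rng(0).normal(size=(16, 16, 16))
y = fft3d_blocked(x)
```

Separability guarantees the result matches a direct `np.fft.fftn`; the parallel version additionally distributes one axis across nodes.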
Morgan, R; Gallagher, M
2012-01-01
In this paper we extend a previously proposed randomized landscape generator in combination with a comparative experimental methodology to study the behavior of continuous metaheuristic optimization algorithms. In particular, we generate two-dimensional landscapes with parameterized, linear ridge structure, and perform pairwise comparisons of algorithms to gain insight into what kind of problems are easy and difficult for one algorithm instance relative to another. We apply this methodology to investigate the specific issue of explicit dependency modeling in simple continuous estimation of distribution algorithms. Experimental results reveal specific examples of landscapes (with certain identifiable features) where dependency modeling is useful, harmful, or has little impact on mean algorithm performance. Heat maps are used to compare algorithm performance over a large number of landscape instances and algorithm trials. Finally, we perform a meta-search in the landscape parameter space to find landscapes which maximize the performance between algorithms. The results are related to some previous intuition about the behavior of these algorithms, but at the same time lead to new insights into the relationship between dependency modeling in EDAs and the structure of the problem landscape. The landscape generator and overall methodology are quite general and extendable and can be used to examine specific features of other algorithms.
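A landscape generator in this spirit might look as follows; the mixture-of-Gaussians base and the ridge term are illustrative guesses at such a construction, not the published generator:

```python
import numpy as np

def make_ridge_landscape(angle_deg=30.0, width=0.1, depth=1.0, n_gauss=5, seed=0):
    """Randomised 2-D test landscape to be MINIMISED: a mixture-of-Gaussians
    base plus a parameterized linear ridge (a trench here, since we minimise)
    of the given angle and width through the origin."""
    rng = np.random.default_rng(seed)
    centres = rng.uniform(-1.0, 1.0, (n_gauss, 2))
    heights = rng.uniform(0.1, 1.0, n_gauss)
    u = np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])
    def f(x):
        x = np.asarray(x, float)
        base = -max(h * np.exp(-8.0 * ((x - c) ** 2).sum())
                    for h, c in zip(heights, centres))
        dist = abs(x[0] * u[1] - x[1] * u[0])   # distance to the ridge line
        ridge = -depth * max(0.0, 1.0 - dist / width)
        return base + ridge
    return f

f = make_ridge_landscape()
u30 = np.array([np.cos(np.radians(30.0)), np.sin(np.radians(30.0))])
```

Sweeping `angle_deg`, `width`, and `depth` yields the parameterized instance families on which pairwise algorithm comparisons (and the meta-search) operate.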
NASA Astrophysics Data System (ADS)
Fang, Li
The Geostationary Operational Environmental Satellites (GOES) have been continuously monitoring the earth surface since 1970, providing valuable and intensive data from a very broad range of wavelengths, day and night. The National Oceanic and Atmospheric Administration's (NOAA's) National Environmental Satellite, Data, and Information Service (NESDIS) is currently operating GOES-15 and GOES-13. The design of the GOES series is now heading to the 4th generation. GOES-R, as a representative of the new generation of the GOES series, is scheduled to be launched in 2015 with higher spatial and temporal resolution images and full-time soundings. These frequent observations provided by the GOES Imager make it attractive for deriving information on the diurnal land surface temperature (LST) cycle and the diurnal temperature range (DTR). These parameters are of great value for research on the Earth's diurnal variability and climate change. Accurate derivation of satellite-based LSTs from thermal infrared data has long been an interesting and challenging research area. To better support research on climate change, the generation of consistent GOES LST products for both GOES-East and GOES-West, from the operational dataset as well as the historical archive, is in great demand. The derivation of GOES LST products and the evaluation of proposed retrieval methods are the two major objectives of this study. Literature relevant to satellite-based LST retrieval techniques was reviewed. Specifically, the evolution of two LST algorithm families and LST retrieval methods for geostationary satellites were summarized in this dissertation. Literature relevant to the evaluation of satellite-based LSTs was also reviewed. All the existing methods are a valuable reference for developing the GOES LST product. The primary objective of this dissertation is the development of models for deriving consistent GOES LSTs with high spatial and high temporal coverage.
Proper LST retrieval algorithms were studied according to the characteristics of the imager onboard the GOES series. For the GOES 8-11 and GOES R series with split window (SW) channels, a new temperature and emissivity separation (TES) approach was proposed for deriving LST and LSE simultaneously by using multiple-temporal satellite observations. Two split-window regression formulas were selected for this approach, and two satellite observations over the same geo-location within a certain time interval were utilized. This method is particularly applicable to geostationary satellite missions from which qualified multiple-temporal observations are available. For the GOES M(12)-Q series without SW channels, the dual-window LST algorithm was adopted to derive LST. Instead of using the conventional training method to generate coefficients for the LST regression algorithms, a machine training technique was introduced to automatically select the criteria and the boundary of the sub-ranges for generating algorithm coefficients under different conditions. A software package was developed to produce a brand new GOES LST product from both operational GOES measurements and historical archive. The system layers of the software and related system input and output were illustrated in this work. Comprehensive evaluation of GOES LST products was conducted by validating products against multiple ground-based LST observations, LST products from fine-resolution satellites (e.g. MODIS) and GSIP LST products. The key issues relevant to the cloud diffraction effect were studied as well. GOES measurements as well as ancillary data, including satellite and solar geometry, water vapor, cloud mask, land emissivity etc., were collected to generate GOES LST products. In addition, multiple in situ temperature measurements were collected to test the performance of the proposed GOES LST retrieval algorithms. 
The ground-based dataset included direct surface temperature measurements from the Atmospheric Radiation Measurement program (ARM) and indirect measurements (surface long-wave radiation observations) from the SURFace RADiation Budget (SURFRAD) Network. A simulated dataset was created to analyse the sensitivity of the proposed retrieval algorithms. In addition, the MODIS LST and GSIP LST products were adopted to cross-evaluate the accuracy of the GOES LST products. Evaluation results demonstrate that the proposed GOES LST system is capable of deriving consistent land surface temperatures with good retrieval precision. Consistent GOES LST products with high spatial/temporal coverage and reliable accuracy will better support the detection and observation of meteorological phenomena over land surfaces.
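The regression skeleton shared by split-window LST algorithms, with least-squares "training" against reference LSTs, can be sketched as below. The coefficients, term set, and simulated data are illustrative; the dissertation's approach additionally uses machine-selected coefficient sub-ranges and a multi-temporal TES step:

```python
import numpy as np

def split_window_lst(t11, t12, emis, coef):
    """Generic split-window form: LST = c0 + c1*T11 + c2*(T11 - T12) + c3*(1 - eps).
    Operational algorithms add further terms (water vapour, view angle) and
    train separate coefficient sets per sub-range."""
    c0, c1, c2, c3 = coef
    return c0 + c1 * t11 + c2 * (t11 - t12) + c3 * (1.0 - emis)

def train_coefficients(t11, t12, emis, lst):
    """'Training' here is a least-squares fit against reference/simulated LSTs."""
    X = np.column_stack([np.ones_like(t11), t11, t11 - t12, 1.0 - emis])
    coef, *_ = np.linalg.lstsq(X, lst, rcond=None)
    return coef

# simulated training set with known coefficients, then exact recovery
rng = np.random.default_rng(3)
t11 = rng.uniform(270.0, 320.0, 500)
t12 = t11 - rng.uniform(0.0, 3.0, 500)     # T11 >= T12 (water vapour absorption)
emis = rng.uniform(0.95, 0.99, 500)
true = np.array([-5.0, 1.02, 1.8, 60.0])
lst = split_window_lst(t11, t12, emis, true)
coef = train_coefficients(t11, t12, emis, lst)
```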
SMOS/SMAP Synergy for SMAP Level 2 Soil Moisture Algorithm Evaluation
NASA Technical Reports Server (NTRS)
Bindlish, Rajat; Jackson, Thomas J.; Zhao, Tianjie; Cosh, Michael; Chan, Steven; O'Neill, Peggy; Njoku, Eni; Colliander, Andreas; Kerr, Yann
2011-01-01
The Soil Moisture Active Passive (SMAP) satellite has been proposed to provide global measurements of soil moisture and land freeze/thaw state at 10 km and 3 km resolutions, respectively. SMAP would also provide a radiometer-only soil moisture product at 40-km spatial resolution. This product and the supporting brightness temperature observations are common to both SMAP and the European Space Agency's Soil Moisture and Ocean Salinity (SMOS) mission. As a result, there are opportunities for synergies between the two missions. These include exploiting the data for calibration and validation and establishing longer-term L-band brightness temperature and derived soil moisture products. In this investigation we will be using SMOS brightness temperature, ancillary data, and soil moisture products to develop and evaluate a candidate SMAP L2 passive soil moisture retrieval algorithm. This work will begin with evaluations based on the SMOS product grids and ancillary data sets and transition to those that will be used by SMAP. An important step in this analysis is reprocessing the multiple incidence angle observations provided by SMOS to a global brightness temperature product that simulates the constant 40 degree incidence angle observations that SMAP will provide. The reprocessed brightness temperature data provide a basis for evaluating different SMAP algorithm alternatives. Several algorithms are being considered for the SMAP radiometer-only soil moisture retrieval. In this first phase, we utilized only the Single Channel Algorithm (SCA), which is based on the radiative transfer equation and uses the channel that is most sensitive to soil moisture (H-pol). Brightness temperature is corrected sequentially for the effects of temperature, vegetation, roughness (dynamic ancillary data sets) and soil texture (static ancillary data set).
European Centre for Medium-Range Weather Forecasts (ECMWF) estimates of soil temperature for the top layer (provided as part of the SMOS ancillary data) were used to correct for surface temperature effects and to derive microwave emissivity. ECMWF data were also used for precipitation forecasts, presence of snow, and frozen ground. Vegetation options are described below. One year of soil moisture observations from a set of four watersheds in the U.S. was used to evaluate four different retrieval methodologies: (1) SMOS soil moisture estimates (version 400), (2) SCA soil moisture estimates using the SMOS/SMAP data with the SMOS-estimated vegetation optical depth, which is part of the SMOS level 2 product, (3) SCA soil moisture estimates using the SMOS/SMAP data and the MODIS-based vegetation climatology data, and (4) SCA soil moisture estimates using the SMOS/SMAP data and actual MODIS observations. The use of SMOS real-world global microwave observations and the analyses described here will help in the development and selection of different land surface parameters and ancillary observations needed for the SMAP soil moisture algorithms. These investigations will greatly improve the quality and reliability of this SMAP product at launch.
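The sequential corrections of the SCA can be sketched as follows. The final reflectivity-to-moisture mapping is a made-up linear stand-in for the Fresnel equation plus a soil dielectric model, and all constants are illustrative:

```python
import numpy as np

def sca_soil_moisture(tb_h, t_eff, tau, h_rough=0.1, theta=np.radians(40.0)):
    """Single Channel Algorithm sketch for H-pol brightness temperature:
    1) surface-temperature correction: emissivity e = TB / Teff
    2) vegetation correction (tau-omega model, omega ~ 0):
       1 - e = r_soil * gamma**2, gamma = exp(-tau / cos(theta))
    3) roughness correction: r_smooth = r_rough * exp(h)
    4) reflectivity -> soil moisture via an illustrative linear mapping."""
    e = tb_h / t_eff
    gamma = np.exp(-tau / np.cos(theta))
    r_rough = (1.0 - e) / gamma ** 2
    r_smooth = r_rough * np.exp(h_rough)
    return np.clip(0.7 * r_smooth - 0.02, 0.0, 0.6)

wet = sca_soil_moisture(tb_h=220.0, t_eff=290.0, tau=0.1)   # colder TB: wetter soil
dry = sca_soil_moisture(tb_h=265.0, t_eff=290.0, tau=0.1)
```

Each correction step corresponds to one of the dynamic (temperature, vegetation, roughness) or static (texture) ancillary data sets mentioned above.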
NASA Technical Reports Server (NTRS)
Kitzis, J. L.; Kitzis, S. N.
1979-01-01
The brightness temperature data produced by the SMMR Antenna Pattern Correction algorithm are evaluated. The evaluation consists of: (1) a direct comparison of the outputs of the interim, cross, and nominal APC modes; (2) a refinement of the previously determined cos beta estimates; and (3) a comparison of the world brightness temperature (T sub B) map with actual SMMR measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-12-09
PV_LIB comprises a library of Matlab code for modeling photovoltaic (PV) systems. Included are functions to compute solar position and to estimate irradiance in the PV system's plane of array, cell temperature, PV module electrical output, and conversion from DC to AC power. Also included are functions that aid in determining parameters for module performance models from module characterization testing. PV_LIB is open source code primarily intended for research and academic purposes. All algorithms are documented in openly available literature, with the appropriate references included in comments within the code.
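For context, two of the quantities such a library computes can be illustrated with textbook approximations, sketched here in Python; these are not PV_LIB's actual functions or models:

```python
def cell_temperature(poa_irradiance, temp_air, noct=45.0):
    """Textbook NOCT estimate of cell temperature:
    Tc = Ta + (NOCT - 20) / 800 * G, with G in W/m^2 and temperatures in C.
    (PV_LIB itself offers more detailed empirical cell-temperature models.)"""
    return temp_air + (noct - 20.0) / 800.0 * poa_irradiance

def dc_power(poa_irradiance, temp_cell, p_dc0=300.0, gamma=-0.004):
    """PVWatts-style DC output of one module: rated power scaled linearly with
    irradiance and derated by a temperature coefficient (per C above 25 C)."""
    return p_dc0 * (poa_irradiance / 1000.0) * (1.0 + gamma * (temp_cell - 25.0))

tc = cell_temperature(800.0, 25.0)   # 50.0 C
p = dc_power(800.0, tc)
```

The temperature derating is why plane-of-array irradiance and cell temperature are estimated before module electrical output in the modeling chain.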
Pei, Yan
2015-01-01
We present and discuss the philosophy and methodology of chaotic evolution, which is theoretically supported by chaos theory. We introduce four chaotic systems, namely the logistic map, tent map, Gaussian map, and Hénon map, into a well-designed chaotic evolution algorithm framework to implement several chaotic evolution (CE) algorithms. By comparing our previously proposed CE algorithm with the logistic map against two canonical differential evolution (DE) algorithms, we analyse and discuss the optimization performance of the CE algorithm. An investigation of the relationship between the optimization capability of the CE algorithm and the distribution characteristics of the chaotic system is conducted and analysed. From the evaluation results, we find that the distribution of the chaotic system is an essential factor influencing the optimization performance of the CE algorithm. We also propose a new interactive EC (IEC) algorithm, interactive chaotic evolution (ICE), which replaces the fitness function with a real human in the CE algorithm framework. There is a paired comparison-based mechanism behind the CE search scheme in nature. A simulation experiment with a pseudo-IEC user is conducted to evaluate the proposed ICE algorithm. The evaluation results indicate that the ICE algorithm can obtain significantly better, or at least equal, performance compared with interactive DE. Some open topics on CE, ICE, the fusion of these optimization techniques, algorithmic notation, and others are presented and discussed. PMID:25879067
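A CE-style optimiser driven by the logistic map might be sketched as below; the update rule is an illustrative guess at the scheme (chaos-scaled moves toward the current best, kept via greedy pairwise comparison), not the published one:

```python
import numpy as np

def chaotic_evolution(f, dim=5, pop=10, iters=400, scale=2.0, seed=0):
    """Minimal chaotic-evolution-style minimiser: a logistic map drives each
    individual's perturbation relative to the current best; a move is kept
    only if it wins the pairwise comparison with its parent."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (pop, dim))
    fit = np.array([f(x) for x in X])
    z = rng.uniform(0.01, 0.99, (pop, dim))        # chaotic state per variable
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                    # logistic map, ergodic on (0, 1)
        best = X[fit.argmin()].copy()
        cand = X + (2.0 * z - 1.0) * scale * (best - X) + 0.1 * (2.0 * z - 1.0)
        for i in range(pop):
            fc = f(cand[i])
            if fc < fit[i]:                        # greedy pairwise comparison
                X[i], fit[i] = cand[i], fc
    return X[fit.argmin()], float(fit.min())

best, val = chaotic_evolution(lambda x: float((x ** 2).sum()))
```

Replacing the `fc < fit[i]` comparison with a human's paired choice is exactly the ICE modification: the scheme never needs a numeric fitness, only a winner per pair.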
NASA Astrophysics Data System (ADS)
Huang, Jie; Li, Piao; Yao, Weixing
2018-05-01
A loosely coupled fluid-structural-thermal numerical method is introduced for thermal protection system (TPS) gap thermal control analysis in this paper. The aerodynamic heating and the structural heat transfer are analyzed by computational fluid dynamics (CFD) and numerical heat transfer (NHT) methods, respectively. An interpolation algorithm based on the control surface is adopted for data exchange on the coupled surface. To verify the precision of the loosely coupled method, a circular tube example was analyzed, and the computed wall temperature agrees well with the test result. TPS gap thermal control performance was then studied successfully with the loosely coupled method. The gap heat flux is mainly concentrated in the small region at the top of the gap, which is the high-temperature region. Moreover, the TPS gap temperature and the power of the active cooling system (CCS) calculated by the traditional uncoupled method are noticeably higher than those calculated by the coupled method. The reason is that the uncoupled method neglects the coupling between aerodynamic heating and structural heat transfer, whereas the coupled method accounts for it; TPS gap thermal control performance can therefore be analyzed more accurately by the coupled method.
[Near infrared distance sensing method for Chang'e-3 alpha particle X-ray spectrometer].
Liang, Xiao-Hua; Wu, Ming-Ye; Wang, Huan-Yu; Peng, Wen-Xi; Zhang, Cheng-Mo; Cui, Xing-Zhu; Wang, Jin-Zhou; Zhang, Jia-Yu; Yang, Jia-Wei; Fan, Rui-Rui; Gao, Min; Liu, Ya-Qing; Zhang, Fei; Dong, Yi-Fan; Guo, Dong-Ya
2013-05-01
Alpha particle X-ray spectrometer (APXS) is one of the payloads of the Chang'E-3 lunar rover, whose scientific objective is in-situ observation and off-line analysis of lunar regolith and rock. Distance measurement is one of the important functions APXS needs in order to perform effective detection on the moon. The present paper first gives a brief introduction to APXS, then analyzes the specific requirements and constraints for realizing distance measurement, and finally presents a new near-infrared distance sensing algorithm that uses the inflection point of the response curve. Theoretical analysis and experimental results verify the feasibility of this algorithm. Although the theoretical analysis shows that the method is not sensitive to the operating temperature or to the reflectance of the lunar surface, the solar infrared radiant intensity may saturate the photosensor. The solutions are to reduce the device gain and to avoid direct exposure to sunlight.
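The inflection-point idea can be sketched numerically as the sign change of the discrete second derivative of the response curve. The logistic-shaped curve below is synthetic; the flight algorithm's actual curve and thresholds are not reproduced:

```python
import numpy as np

def inflection_index(distance, response):
    """Locate the inflection point of a sensor response curve as the first
    sign change of the discrete second derivative."""
    d2 = np.gradient(np.gradient(response, distance), distance)
    crossings = np.where(np.diff(np.sign(d2)) != 0)[0]
    return int(crossings[0]) if crossings.size else None

# synthetic logistic-shaped response whose inflection is at distance = 30
d = np.linspace(0.0, 60.0, 121)
resp = 1.0 / (1.0 + np.exp(-(d - 30.0) / 3.0))
idx = inflection_index(d, resp)
```

Keying on the curve's shape rather than its absolute level is what makes this kind of ranging insensitive to surface reflectance, which only scales the response.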
Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods.
Gonzalez-Navarro, Felix F; Stilianova-Stoytcheva, Margarita; Renteria-Gutierrez, Livier; Belanche-Muñoz, Lluís A; Flores-Rios, Brenda L; Ibarra-Esquer, Jorge E
2016-10-26
Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capability; however, a full understanding of their behavior is still an open research question. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB as a function of operating conditions such as temperature, benzoquinone, pH and glucose concentrations, by means of several machine learning algorithms. Since the sensitivity of the GOB response is strongly related to these predictor variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows low generalization error and is consistent with the optimization.
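The simulated-annealing half of the optimisation step can be sketched generically. The toy response surface stands in for the trained GOB regression model, and the cooling schedule, step sizes, and bounds are invented:

```python
import numpy as np

def simulated_annealing(f, x0, bounds, iters=5000, t0=1.0, seed=0):
    """Generic simulated-annealing MAXIMISER with geometric cooling. In the
    paper's setting, f would be the trained GOB model and the variables the
    operating conditions (temperature, pH, ...)."""
    rng = np.random.default_rng(seed)
    x = np.clip(np.asarray(x0, float), bounds[:, 0], bounds[:, 1])
    fx = f(x)
    best, fb = x.copy(), fx
    for k in range(iters):
        t = t0 * 0.999 ** k                                  # geometric cooling
        cand = x + rng.normal(0.0, 0.1, x.size) * (bounds[:, 1] - bounds[:, 0])
        cand = np.clip(cand, bounds[:, 0], bounds[:, 1])
        fc = f(cand)
        # Metropolis rule: always accept uphill, sometimes accept downhill
        if fc > fx or rng.random() < np.exp((fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
        if fx > fb:
            best, fb = x.copy(), fx
    return best, fb

# toy response surface peaking at T = 37 C, pH = 7
bounds = np.array([[20.0, 50.0], [4.0, 9.0]])
f = lambda v: -((v[0] - 37.0) / 10.0) ** 2 - (v[1] - 7.0) ** 2
best, fb = simulated_annealing(f, [25.0, 5.0], bounds)
```

A genetic algorithm run on the same objective serves as the second optimiser; agreement between the two lends confidence that the located operating point is not an artefact of one search scheme.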