Sample records for emissivity separation algorithm

  1. Separation of Atmospheric and Surface Spectral Features in Mars Global Surveyor Thermal Emission Spectrometer (TES) Spectra

    NASA Technical Reports Server (NTRS)

    Smith, Michael D.; Bandfield, Joshua L.; Christensen, Philip R.

    2000-01-01

    We present two algorithms for the separation of spectral features caused by atmospheric and surface components in Thermal Emission Spectrometer (TES) data. One algorithm uses radiative transfer and successive least squares fitting to find spectral shapes first for atmospheric dust, then for water-ice aerosols, and then, finally, for surface emissivity. A second independent algorithm uses a combination of factor analysis, target transformation, and deconvolution to simultaneously find dust, water-ice, and surface emissivity spectral shapes. Both algorithms have been applied to TES spectra, and both find very similar atmospheric and surface spectral shapes. For TES spectra taken during aerobraking and science-phasing periods in nadir geometry, these two algorithms give meaningful and usable surface emissivity spectra that can be used for mineralogical identification.
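    A minimal sketch of the successive least-squares idea summarized above, under the assumption of purely linear mixing: known dust and water-ice spectral shapes are fitted to an observed spectrum and removed, leaving an estimate of the surface component. The channel count, shapes, and noise level are illustrative inventions, not the authors' actual TES processing.

    import numpy as np

    def fit_shapes(spectrum, shapes):
        """Least-squares coefficients for known spectral shapes plus a constant offset."""
        A = np.column_stack([*shapes, np.ones_like(spectrum)])
        coeffs, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
        return coeffs

    rng = np.random.default_rng(0)
    n = 143                                            # TES-like channel count (assumed)
    chan = np.arange(n)
    dust_shape = np.exp(-((chan - 60) / 25.0) ** 2)    # toy aerosol absorption shapes
    ice_shape = np.exp(-((chan - 100) / 15.0) ** 2)
    surface = 0.97 - 0.05 * np.sin(chan / 20.0)        # smooth surface emissivity

    observed = surface + 0.3 * dust_shape + 0.1 * ice_shape \
        + 0.005 * rng.standard_normal(n)
    c = fit_shapes(observed, [dust_shape, ice_shape])
    surface_estimate = observed - c[0] * dust_shape - c[1] * ice_shape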

  2. STARBLADE: STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission

    NASA Astrophysics Data System (ADS)

    Knollmüller, Jakob; Frank, Philipp; Ensslin, Torsten A.

    2018-05-01

    STARBLADE (STar and Artefact Removal with a Bayesian Lightweight Algorithm from Diffuse Emission) separates superimposed point-like sources from a diffuse background by imposing physically motivated models as prior knowledge. The algorithm can also be used on noisy and convolved data, though performing a proper reconstruction including a deconvolution prior to the application of the algorithm is advised; the algorithm could also be used within a denoising imaging method. STARBLADE learns the correlation structure of the diffuse emission and takes it into account to determine the occurrence and strength of a superimposed point source.

  3. Performance limitations of temperature-emissivity separation techniques in long-wave infrared hyperspectral imaging applications

    NASA Astrophysics Data System (ADS)

    Pieper, Michael; Manolakis, Dimitris; Truslow, Eric; Cooley, Thomas; Brueggeman, Michael; Jacobson, John; Weisner, Andrew

    2017-08-01

    Accurate estimation or retrieval of surface emissivity from long-wave infrared or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or spaceborne sensors is necessary for many scientific and defense applications. This process consists of two interwoven steps: atmospheric compensation and temperature-emissivity separation (TES). The most widely used TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the atmospheric transmission function. We develop a model to explain and evaluate the performance of TES algorithms using a smoothing approach. Based on this model, we identify three sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise. For each TES smoothing technique, we analyze the bias and variability of the temperature errors, which translate to emissivity errors. The performance model explains how the errors interact to generate temperature errors. Since we assume exact knowledge of the atmosphere, the presented results provide an upper bound on the performance of TES algorithms based on the smoothness assumption.

  4. Temperature - Emissivity Separation Assessment in a Sub-Urban Scenario

    NASA Astrophysics Data System (ADS)

    Moscadelli, M.; Diani, M.; Corsini, G.

    2017-10-01

    In this paper, a methodology for evaluating the effectiveness of different TES strategies is presented. The methodology takes into account the specific material of interest in the monitored scenario, the sensor characteristics, and errors in the atmospheric compensation step. It is proposed as a way to predict and analyse algorithm performance during the planning of a remote sensing mission aimed at detecting specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To address the TES problem, three state-of-the-art algorithms and one recently proposed algorithm are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear Piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by the proposed methodology are compared and discussed.

  5. Technical Note: A simple calculation algorithm to separate high-resolution CH4 flux measurements into ebullition and diffusion-derived components

    NASA Astrophysics Data System (ADS)

    Hoffmann, M.; Schulz-Hanke, M.; Garcia Alba, J.; Jurisch, N.; Hagemann, U.; Sachs, T.; Sommer, M.; Augustin, J.

    2015-08-01

    Processes driving the production, transformation and transport of methane (CH4) in wetland ecosystems are highly complex, which poses serious challenges for mechanistic process understanding, the identification of potential environmental drivers, and the calculation of reliable CH4 emission estimates. We present a simple calculation algorithm that separates open-water CH4 fluxes measured with automatic chambers into diffusion- and ebullition-derived components, which helps facilitate the identification of underlying dynamics and potential environmental drivers. Flux separation is based on ebullition-related sudden concentration changes during single measurements. A variable ebullition filter is applied, using the lower and upper quartiles and the interquartile range (IQR). Automation of data processing is achieved by using an established R script adjusted for the purpose of CH4 flux calculation. The algorithm was tested using flux measurement data (July to September 2013) from a former fen grassland site that was converted into a shallow lake as a result of rewetting. Ebullition and diffusion contributed 46 and 55%, respectively, to total CH4 emissions, which is comparable to values previously reported in the literature. Moreover, the separation algorithm revealed a concealed shift in the diurnal trend of diffusive fluxes throughout the measurement period.
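    A minimal sketch of the quartile-based ebullition filter described above, assuming a simple rule: first differences of the chamber concentration that fall outside the quartiles by more than k times the IQR are flagged as ebullition, and the remainder is treated as diffusion. The threshold factor and the synthetic series are illustrative; the authors' R script may differ in detail.

    import numpy as np

    def separate_ch4(conc, k=1.5):
        """Split a chamber CH4 series into diffusion- and ebullition-derived parts."""
        dc = np.diff(conc)
        q1, q3 = np.percentile(dc, [25, 75])
        iqr = q3 - q1
        bubble = (dc > q3 + k * iqr) | (dc < q1 - k * iqr)  # sudden jumps
        return dc[~bubble].sum(), dc[bubble].sum()          # diffusion, ebullition

    t = np.arange(301)                   # 5-minute chamber closure, 1-s steps (assumed)
    conc = 1.9 + 0.001 * t               # smooth diffusive rise (ppm)
    conc[150:] += 0.25                   # one ebullition event
    diffusion, ebullition = separate_ch4(conc)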

  6. Moderate Resolution Imaging Spectroradiometer (MODIS) MOD21 Land Surface Temperature and Emissivity Algorithm Theoretical Basis Document

    NASA Technical Reports Server (NTRS)

    Hulley, G.; Malakar, N.; Hughes, T.; Islam, T.; Hook, S.

    2016-01-01

    This document outlines the theory and methodology for generating the Moderate Resolution Imaging Spectroradiometer (MODIS) Level-2 daily daytime and nighttime 1-km land surface temperature (LST) and emissivity product using the Temperature Emissivity Separation (TES) algorithm. The MODIS-TES (MOD21_L2) product will include the LST and emissivity for three MODIS thermal infrared (TIR) bands (29, 31, and 32) and will be generated for data from the NASA EOS AM and PM platforms. This is version 1.0 of the ATBD, and the goal is to maintain a 'living' version of this document, with changes made when necessary. The current standard baseline MODIS LST products (MOD11*) are derived from the generalized split-window (SW) algorithm (Wan and Dozier, 1996), which produces a 1-km LST product and two classification-based emissivities for bands 31 and 32, and from a physics-based day/night algorithm (Wan and Li, 1997), which produces a 5-km (C4) and 6-km (C5) LST and emissivity product for seven MODIS bands: 20, 22, 23, 29, and 31-33.

  7. Surface emissivity and temperature retrieval for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1998-12-01

    With the growing use of hyperspectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. The author believes that this will make it possible to get around the present temperature-emissivity separation algorithms by using methods which take advantage of the many channels available in hyperspectral imagers. A simple fact used in developing a novel algorithm is that a typical surface emissivity spectrum is rather smooth compared to spectral features introduced by the atmosphere. Thus, an iterative solution technique can be devised which retrieves emissivity spectra based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. One such iterative algorithm solves the radiative transfer equation for the radiance at the sensor for the unknown emissivity and uses the blackbody temperature computed in an atmospheric window as a guess for the unknown surface temperature. By varying the surface temperature over a small range, a series of emissivity spectra are calculated, and the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
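    A minimal sketch of the spectral-smoothness retrieval described above, assuming a simplified surface-leaving radiance model L = eps*B(T) + (1 - eps)*L_down: the surface temperature is varied over a small range, the emissivity is inverted at each trial temperature, and the candidate with the smallest spectral curvature is kept. The constants, channel grid, and toy sky radiance (with a few sharp lines standing in for atmospheric features) are illustrative assumptions.

    import numpy as np

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

    def planck(wl_um, T):
        wl = wl_um * 1e-6
        return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1.0)

    def smoothest_fit(ground_rad, L_down, wl_um, T_grid):
        best = (None, np.inf, None)
        for T in T_grid:
            eps = (ground_rad - L_down) / (planck(wl_um, T) - L_down)
            rough = np.sum(np.diff(eps, n=2) ** 2)    # curvature as roughness measure
            if rough < best[1]:
                best = (T, rough, eps)
        return best[0], best[2]

    wl = np.linspace(8.0, 13.0, 400)                  # LWIR channels (assumed)
    lines = sum(np.exp(-((wl - c) / 0.02) ** 2) for c in (9.1, 10.4, 11.7, 12.3))
    L_down = planck(wl, 260.0) * (0.2 + 0.1 * lines)  # toy sky radiance with sharp lines
    true_eps = 0.96 + 0.02 * np.cos(wl)
    ground = true_eps * planck(wl, 300.0) + (1 - true_eps) * L_down
    T_est, eps_est = smoothest_fit(ground, L_down, wl, np.arange(295.0, 305.0, 0.1))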

  8. Extraction of incident irradiance from LWIR hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Lahaie, Pierre

    2014-10-01

    The atmospheric correction of thermal hyperspectral imagery can be separated into two distinct processes: Atmospheric Compensation (AC) and Temperature and Emissivity Separation (TES). TES requires as input, at each pixel, the ground-leaving radiance and the atmospheric downwelling irradiance, which are the outputs of the AC process. Extracting the downwelling irradiance from imagery requires assumptions about the nature of some of the pixels, the sensor, and the atmosphere. A further difficulty is that the sensor's spectral response is often not well characterized. To deal with this unknown, we defined a spectral mean operator that is used to filter the ground-leaving radiance and a downwelling irradiance computed with MODTRAN. A user selects a number of pixels in the image for which the emissivity is assumed to be known. The emissivity of these pixels is assumed to be smooth, so that the only spectrally fast-varying term is the downwelling irradiance. Using these assumptions, we built an algorithm to estimate the downwelling irradiance; it is applied to all the selected pixels, and the estimated irradiance is the average over the spectral channels of the resulting computations. The algorithm performs well in simulation, and results are shown for errors in the assumed emissivity and in the atmospheric profiles. Sensor noise mainly influences the required number of pixels.

  9. An End-to-End simulator for the development of atmospheric corrections and temperature - emissivity separation algorithms in the TIR spectral domain

    NASA Astrophysics Data System (ADS)

    Rock, Gilles; Fischer, Kim; Schlerf, Martin; Gerhards, Max; Udelhoven, Thomas

    2017-04-01

    The development and optimization of image processing algorithms requires datasets depicting every step from the earth's surface to the sensor's detector. The lack of ground truth data makes it necessary to develop algorithms on simulated data. The simulation of hyperspectral remote sensing data is a useful tool for a variety of tasks, such as the design of systems, the understanding of the image formation process, and the development and validation of data processing algorithms. An end-to-end simulator has been set up, consisting of a forward simulator, a backward simulator, and a validation module. The forward simulator derives radiance datasets based on laboratory sample spectra, applies atmospheric contributions using radiative transfer equations, and simulates the instrument response using configurable sensor models. This is followed by the backward simulation branch, consisting of an atmospheric correction (AC), a temperature and emissivity separation (TES), or a hybrid AC and TES algorithm. An independent validation module allows the comparison between input and output datasets and the benchmarking of different processing algorithms. In this study, hyperspectral thermal infrared scenes of a variety of surfaces were simulated to analyze existing AC and TES algorithms. The ARTEMISS algorithm was optimized and benchmarked against the original implementations. The errors in TES were found to be related to incorrect water vapor retrieval. The atmospheric characterization could be optimized, resulting in increased accuracy in temperature and emissivity retrieval. Airborne datasets of different spectral resolutions were simulated from terrestrial HyperCam-LW measurements. The simulated airborne radiance spectra were subjected to atmospheric correction and TES and further used for a plant species classification study analyzing effects related to noise and mixed pixels.
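    A minimal sketch of the forward branch of such a simulator, assuming a single-layer atmosphere and a boxcar sensor response; all spectra and atmospheric values below are toy stand-ins rather than the simulator's actual configuration.

    import numpy as np

    def at_sensor_radiance(eps, B_surf, tau, L_up, L_down):
        """L = tau * (eps*B + (1 - eps)*L_down) + L_up for a single-layer atmosphere."""
        return tau * (eps * B_surf + (1.0 - eps) * L_down) + L_up

    def sensor_model(radiance, width=5):
        """Toy spectral response: boxcar convolution to a coarser resolution."""
        return np.convolve(radiance, np.ones(width) / width, mode="same")

    n = 200
    eps = 0.95 + 0.03 * np.sin(np.linspace(0.0, 6.0, n))       # lab emissivity (toy)
    B_surf = np.linspace(9.0, 11.0, n)                         # blackbody radiance (toy)
    tau = 0.8 - 0.2 * np.exp(-np.linspace(-3.0, 3.0, n) ** 2)  # broad absorption dip
    simulated = sensor_model(at_sensor_radiance(eps, B_surf, tau, L_up=1.0, L_down=1.5))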

  10. Comparison of ASTER Global Emissivity Database (ASTER-GED) With In-Situ Measurements In Italian Volcanic Areas

    NASA Astrophysics Data System (ADS)

    Silvestri, M.; Musacchio, M.; Buongiorno, M. F.; Amici, S.; Piscini, A.

    2015-12-01

    LP DAAC released the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Emissivity Database (GED) datasets on April 2, 2014. The database was developed by the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL), California Institute of Technology. It includes land surface emissivities derived from ASTER data acquired over the contiguous United States, Africa, the Arabian Peninsula, Australia, Europe, and China. In this work we compare ground measurements of emissivity, acquired by means of a Micro-FTIR (Fourier transform infrared) spectrometer, with the ASTER emissivity map extracted from ASTER-GED and with the emissivity obtained from single ASTER scenes. Through this analysis we investigate the differences between the ASTER-GED dataset (a season-independent average from 2000 to 2008) and in-situ emissivity measurements, as well as the role of the different spatial resolutions of ASTER and MODIS (90 m and 1 km, respectively) by comparing them with in situ measurements. Possible differences can also be due to the different algorithms used for emissivity estimation: the Temperature and Emissivity Separation algorithm for the ASTER TIR bands (Gillespie et al., 1998) and the classification-based emissivity method (Snyder et al., 1998) for MODIS. In-situ emissivity measurements were collected during dedicated field campaigns on Mt. Etna volcano and the Solfatara of Pozzuoli. Gillespie, A. R., Matsunaga, T., Rokugawa, S., & Hook, S. J. (1998). Temperature and emissivity separation from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. IEEE Transactions on Geoscience and Remote Sensing, 36, 1113-1125. Snyder, W. C., Wan, Z., Zhang, Y., & Feng, Y.-Z. (1998). Classification-based emissivity for land surface temperature measurement from space. International Journal of Remote Sensing, 19, 2753-2774.

  11. Comparison of in-situ measurements and satellite-derived surface emissivity over Italian volcanic areas

    NASA Astrophysics Data System (ADS)

    Silvestri, Malvina; Musacchio, Massimo; Cammarano, Diego; Fabrizia Buongiorno, Maria; Amici, Stefania; Piscini, Alessandro

    2016-04-01

    In this work we compare ground measurements of emissivity, collected during dedicated field campaigns on Mt. Etna and the Solfatara of Pozzuoli volcanoes and acquired by means of a Micro-FTIR (Fourier transform infrared) spectrometer, with the emissivity obtained from single ASTER scenes (Advanced Spaceborne Thermal Emission and Reflection Radiometer, ASTER 05 product) and with the ASTER emissivity map extracted from the ASTER Global Emissivity Database (GED), released by LP DAAC on April 2, 2014. The database was developed by the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL), California Institute of Technology, and includes land surface emissivity derived from ASTER data acquired over the contiguous United States, Africa, the Arabian Peninsula, Australia, Europe, and China. Through this analysis we investigate the differences between the ASTER-GED dataset (a season-independent average from 2000 to 2008) and in-situ emissivity measurements. Moreover, the role of the different spatial resolutions of ASTER and MODIS (90 m and 1 km, respectively) is analyzed by comparing them with in situ measurements. Possible differences can also be due to the different algorithms used for emissivity estimation: the Temperature and Emissivity Separation algorithm for the ASTER TIR bands (Gillespie et al., 1998) and the classification-based emissivity method (Snyder et al., 1998) for MODIS. Finally, land surface temperature products generated using ASTER-GED and ASTER 05 emissivities are also analyzed. Gillespie, A. R., Matsunaga, T., Rokugawa, S., & Hook, S. J. (1998). Temperature and emissivity separation from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images. IEEE Transactions on Geoscience and Remote Sensing, 36, 1113-1125. Snyder, W. C., Wan, Z., Zhang, Y., & Feng, Y.-Z. (1998). Classification-based emissivity for land surface temperature measurement from space. International Journal of Remote Sensing, 19, 2753-2774.

  12. CO Component Estimation Based on the Independent Component Analysis

    NASA Astrophysics Data System (ADS)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki; Takeuchi, Tsutomu T.; Fukui, Yasuo

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.
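    A minimal sketch of this kind of separation using scikit-learn's FastICA on mock multi-frequency maps: a nearly Gaussian CMB-like component is mixed with a sparse, strongly non-Gaussian CO-like component at three frequencies, and the components are recovered from their statistics. The mixing matrix and component models are illustrative assumptions, not the paper's simulation setup.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    n_pix = 10_000
    cmb = rng.standard_normal(n_pix)                               # nearly Gaussian
    co = rng.exponential(1.0, n_pix) * (rng.random(n_pix) < 0.05)  # sparse, non-Gaussian
    sources = np.vstack([cmb, co])

    A = np.array([[1.0, 0.8],
                  [1.0, 0.3],
                  [1.0, 1.5]])            # toy 100/143/217 GHz mixing matrix
    maps = A @ sources                    # three observed "sky maps"

    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(maps.T).T   # rows are the independent components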

  13. CO component estimation based on the independent component analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ichiki, Kiyotomo; Kaji, Ryohei; Yamamoto, Hiroaki

    2014-01-01

    Fast Independent Component Analysis (FastICA) is a component separation algorithm based on the levels of non-Gaussianity. Here we apply FastICA to the component separation problem of the microwave background, including carbon monoxide (CO) line emissions that are found to contaminate the PLANCK High Frequency Instrument (HFI) data. Specifically, we prepare 100 GHz, 143 GHz, and 217 GHz mock microwave sky maps, which include galactic thermal dust, NANTEN CO line, and the cosmic microwave background (CMB) emissions, and then estimate the independent components based on the kurtosis. We find that FastICA can successfully estimate the CO component as the first independent component in our deflation algorithm because its distribution has the largest degree of non-Gaussianity among the components. Thus, FastICA can be a promising technique to extract CO-like components without prior assumptions about their distributions and frequency dependences.

  14. Iterative retrieval of surface emissivity and temperature for a hyperspectral sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borel, C.C.

    1997-11-01

    The central problem of temperature-emissivity separation is that we obtain N spectral measurements of radiance and need to find N + 1 unknowns (N emissivities and one temperature). To solve this problem in the presence of the atmosphere we need to find even more unknowns: N spectral transmissions τ_atmo(λ), N up-welling path radiances L↑_path(λ), and N down-welling path radiances L↓_path(λ). Fortunately, radiative transfer codes such as MODTRAN 3 and FASCODE are available to estimate τ_atmo(λ), L↑_path(λ), and L↓_path(λ) to within a few percent. With the growing use of hyperspectral imagers, e.g., AVIRIS in the visible and short-wave infrared, there is hope of using such instruments in the mid-wave and thermal IR (TIR) some day. We believe that this will make it possible to get around the present temperature-emissivity separation (TES) algorithms by using methods which take advantage of the many channels available in hyperspectral imagers. The first idea is to take advantage of the simple fact that a typical surface emissivity spectrum is rather smooth compared to spectral features introduced by the atmosphere. Thus, iterative solution techniques can be devised which retrieve emissivity spectra ε(λ) based on spectral smoothness. To make the emissivities realistic, atmospheric parameters are varied using approximations, look-up tables derived from a radiative transfer code, and spectral libraries. By varying the surface temperature over a small range, a series of emissivity spectra are calculated, and the one with the smoothest characteristic is chosen. The algorithm was tested on synthetic data using MODTRAN and the Salisbury emissivity database.
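    The model the abstract refers to can be written compactly (notation assumed from the abstract): for channels i = 1, ..., N,

    \[
      L_{\mathrm{sensor}}(\lambda_i)
        = \tau_{\mathrm{atmo}}(\lambda_i)
          \left[ \varepsilon(\lambda_i)\, B(\lambda_i, T_s)
               + \bigl(1 - \varepsilon(\lambda_i)\bigr)\, L^{\downarrow}_{\mathrm{path}}(\lambda_i) \right]
        + L^{\uparrow}_{\mathrm{path}}(\lambda_i),
    \]

    which gives N equations in the N + 1 surface unknowns ε(λ_i) and T_s once the atmospheric terms τ_atmo, L↑_path, and L↓_path are supplied by a radiative transfer code; the smoothness constraint supplies the missing equation.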

  15. ASTER Thermal Anomalies in Western Colorado

    DOE Data Explorer

    Richard E. Zehner

    2013-01-01

    This layer contains the areas identified as having anomalous surface temperature from ASTER satellite imagery. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. Areas with temperatures greater than 2σ, and areas with temperatures between 1σ and 2σ, were considered ASTER-modeled very warm and warm surface exposures (thermal anomalies), respectively.
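    A minimal sketch of an emissivity-normalization style separation, assuming the classic scheme: a fixed maximum emissivity is assumed, Planck's law is inverted in each thermal band, the hottest band temperature is taken as the surface temperature, and band emissivities follow. Band centers and the assumed emissivity are illustrative, not the product's exact settings.

    import numpy as np

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

    def planck_rad(wl_um, T):
        wl = wl_um * 1e-6
        return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1.0)

    def inv_planck(wl_um, L):
        wl = wl_um * 1e-6
        return H * C / (wl * KB) / np.log(2 * H * C**2 / (wl**5 * L) + 1.0)

    def normalize_emissivity(wl_um, band_rad, eps_assumed=0.96):
        T_surf = inv_planck(wl_um, band_rad / eps_assumed).max()  # hottest band wins
        return T_surf, band_rad / planck_rad(wl_um, T_surf)

    wl = np.array([8.3, 8.65, 9.1, 10.6, 11.3])    # ASTER-like TIR band centers (um)
    true_eps = np.array([0.95, 0.93, 0.90, 0.96, 0.97])
    T_est, eps_est = normalize_emissivity(wl, true_eps * planck_rad(wl, 310.0))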

  16. C-band Joint Active/Passive Dual Polarization Sea Ice Detection

    NASA Astrophysics Data System (ADS)

    Keller, M. R.; Gifford, C. M.; Winstead, N. S.; Walton, W. C.; Dietz, J. E.

    2017-12-01

    A technique for synergistically combining high-resolution SAR returns with like-frequency passive microwave emissions to detect thin (<30 cm) ice under the difficult conditions of late melt and freeze-up is presented. As the Arctic sea ice cover thins and shrinks, the algorithm offers an approach to adapting existing sensors that monitor thicker ice to provide continuing coverage. Lower-resolution (10-26 km) ice detections with spaceborne radiometers and scatterometers are challenged by rapidly changing thin ice. Synthetic Aperture Radar (SAR) offers high resolution (5-100 m), but because of cross-section ambiguities, automated algorithms have had difficulty separating thin ice types from water. The radiometric emissivity of thin ice versus water at microwave frequencies is generally unambiguous in the early stages of ice growth. The method, developed using RADARSAT-2 and AMSR-E data, uses higher-order statistics. For the SAR, the COV (coefficient of variation, the ratio of standard deviation to mean) has fewer ambiguities between ice and water than cross sections do, but breaking waves still produce ice-like signatures for both polarizations. For the radiometer, the PRIC (polarization ratio ice concentration) identifies areas that are unambiguously water. Applying cumulative statistics to co-located COV levels adaptively determines an ice/water threshold. Outcomes from extensive testing with Sentinel and AMSR-2 data are shown in the results. The detection algorithm was applied to the freeze-up in the Beaufort, Chukchi, Barents, and East Siberian Seas in 2015 and 2016, spanning mid-September to early November of both years. At the end of the melt, 6 GHz PRIC values are 5-10% greater than those reported by radiometric algorithms at 19 and 37 GHz. During freeze-up, COV separates grease ice (<5 cm thick) from water. As the ice thickens, the COV is less reliable, but adding a mask based on either the PRIC or the cross-pol/co-pol SAR ratio corrects for COV deficiencies. In general, the dual-sensor detection algorithm reports 10-15% higher total ice concentrations than operational scatterometer or radiometer algorithms, mostly from ice edge and coastal areas. In conclusion, the algorithm presented combines high-resolution SAR returns with passive microwave emissions for automated ice detection at SAR resolutions.
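    A minimal sketch of the COV screen described above, assuming block-wise statistics and an adaptive threshold taken from pixels a radiometer ratio has already flagged as unambiguous water; the speckle models, window size, and percentile are illustrative assumptions.

    import numpy as np

    def local_cov(img, w=8):
        """Coefficient of variation (std/mean) over non-overlapping w x w windows."""
        h, ww = (img.shape[0] // w) * w, (img.shape[1] // w) * w
        blocks = img[:h, :ww].reshape(h // w, w, ww // w, w).swapaxes(1, 2)
        return blocks.std(axis=(2, 3)) / blocks.mean(axis=(2, 3))

    rng = np.random.default_rng(2)
    water = rng.gamma(1.0, 1.0, (128, 128))     # speckle-like, high variability
    ice = rng.gamma(8.0, 0.125, (128, 128))     # smoother returns from thin ice
    cov = local_cov(np.hstack([water, ice]))

    water_flag = np.zeros_like(cov, dtype=bool)
    water_flag[:, : cov.shape[1] // 2] = True        # stand-in for the PRIC water mask
    threshold = np.percentile(cov[water_flag], 5)    # adaptive ice/water cut
    ice_detected = cov < threshold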

  17. Sensitive Dual Color in vivo Bioluminescence Imaging Using a New Red Codon Optimized Firefly Luciferase and a Green Click Beetle Luciferase

    DTIC Science & Technology

    2011-04-01

    Sensitive Dual Color In Vivo Bioluminescence Imaging Using a New Red Codon Optimized Firefly Luciferase and a Green Click Beetle Luciferase Laura...20 nm). Spectral unmixing algorithms were applied to the images, where good separation of signals was observed. Furthermore, HEK293 cells that...spectral emissions using a suitable spectral unmixing algorithm. This new D-luciferin-dependent reporter gene couplet opens up the possibility in the future

  18. Target Discrimination Using Infrared Techniques: Theoretical Considerations.

    DTIC Science & Technology

    1985-02-01

    the construction of algorithms to be used as a background information filter to aid in the separation of targets from background. ...where BE is the blackbody radiant...target temperature TA and (1 - εA), where εA = target emissivity, for background temperature TB = 275 K, background emissivity εB = 0.90, and atmospheric

  19. Validation of the MODIS MOD21 and MOD11 land surface temperature and emissivity products in an arid area of Northwest China

    NASA Astrophysics Data System (ADS)

    Li, H.; Yang, Y.; Yongming, D.; Cao, B.; Qinhuo, L.

    2017-12-01

    Land surface temperature (LST) is a key parameter for hydrological, meteorological, climatological, and environmental studies. During the past decades, many efforts have been devoted to establishing methodologies for retrieving LST from remote sensing data, and significant progress has been achieved. Many operational LST products have been generated using different remote sensing data. The MODIS LST product (MOD11) is one of the most commonly used, and is produced using a generalized split-window algorithm. Many validation studies have shown that the MOD11 LST product agrees well with ground measurements over vegetated and inland water surfaces; however, large negative biases of up to 5 K are present over arid regions. In addition, the land surface emissivities in MOD11 are estimated by assigning fixed emissivities according to a land cover classification dataset, which may introduce large errors into the LST product due to misclassification of the land cover. Therefore, a new MODIS LST&E product (MOD21) has been developed based on the temperature emissivity separation (TES) algorithm, and the water vapor scaling (WVS) method has been incorporated into the MODIS TES algorithm to improve the accuracy of the atmospheric correction. The MOD21 product will be released with MODIS Collection 6 Tier-2 land products in 2017. Because the MOD21 products are not yet available, the MODTES algorithm was implemented, including the TES and WVS methods, as detailed in the MOD21 Algorithm Theoretical Basis Document. The MOD21 and MOD11 C6 LST products are validated using ground measurements and ASTER LST products collected in an arid area of Northwest China during the Heihe Watershed Allied Telemetry Experimental Research (HiWATER) experiment. In addition, lab emissivity spectra of four sand dunes in Northwest China are also used to validate the MOD21 and MOD11 emissivity products.

  20. A new method to analyze UV stellar occultation data

    NASA Astrophysics Data System (ADS)

    Evdokimova, D.; Baggio, L.; Montmessin, F.; Belyaev, D.; Bertaux, J.-L.

    2017-09-01

    In this paper we present a new method of data processing and a classification of different types of stray light in SPICAV UV stellar occultations. The method was developed on the basis of the Richardson-Lucy algorithm and includes (a) deconvolution of the measured star light and (b) separation of extra emissions registered by the spectrometer.
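    A minimal sketch of a Richardson-Lucy step of the kind referenced above: the standard multiplicative update with a known 1-D kernel. The kernel, iteration count, and test spectrum are illustrative assumptions, not SPICAV's actual processing chain.

    import numpy as np

    def richardson_lucy(observed, psf, n_iter=50):
        """Iterative RL deconvolution for a 1-D signal and known PSF."""
        psf_flip = psf[::-1]
        estimate = np.full_like(observed, observed.mean())
        for _ in range(n_iter):
            blurred = np.convolve(estimate, psf, mode="same")
            ratio = observed / np.maximum(blurred, 1e-12)
            estimate *= np.convolve(ratio, psf_flip, mode="same")
        return estimate

    x = np.linspace(-1.0, 1.0, 401)
    star = np.exp(-(x / 0.01) ** 2)                            # narrow stellar feature
    psf = np.exp(-(np.linspace(-0.1, 0.1, 41) / 0.03) ** 2)
    psf /= psf.sum()
    restored = richardson_lucy(np.convolve(star, psf, mode="same"), psf)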

  1. Hyperspectrally-Resolved Surface Emissivity Derived Under Optically Thin Clouds

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2010-01-01

    Surface spectral emissivity derived from current and future satellites can and will reveal critical information about the Earth's ecosystem and land surface type properties, which can be utilized as a means of long-term monitoring of global environment and climate change. Hyperspectrally-resolved surface emissivities are derived with an algorithm that utilizes a fast radiative transfer model (RTM) combining a molecular RTM and a cloud RTM, accounting for both atmospheric absorption and cloud absorption/scattering. Clouds are automatically detected and cloud microphysical parameters are retrieved, and emissivity is retrieved under clear and optically thin cloud conditions. This technique separates surface emissivity from skin temperature by representing the emissivity spectrum with eigenvectors derived from a laboratory-measured emissivity database; in other words, this constraint forces the emissivity to vary smoothly across atmospheric absorption lines. Here we present the emissivity derived under optically thin clouds in comparison with that under clear conditions.
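    A minimal sketch of the eigenvector constraint described above, assuming a small mock "laboratory" emissivity set: principal components of the set form a low-order basis, and projecting a retrieved spectrum onto that basis suppresses sharp line features. The lab set and truncation order are illustrative inventions.

    import numpy as np

    rng = np.random.default_rng(3)
    wl = np.linspace(3.0, 15.0, 300)
    lab = np.stack([0.9 + a * np.sin(wl / p) for a, p in
                    zip(rng.uniform(0.01, 0.06, 12), rng.uniform(1.0, 4.0, 12))])

    mean = lab.mean(axis=0)
    _, _, Vt = np.linalg.svd(lab - mean, full_matrices=False)
    basis = Vt[:4]                                    # leading eigenvectors

    noisy = mean + 0.04 * np.sin(wl / 2.5) + 0.02 * rng.standard_normal(wl.size)
    constrained = mean + (basis @ (noisy - mean)) @ basis   # smooth reconstruction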

  2. Simultaneous reconstruction of emission activity and attenuation coefficient distribution from TOF data, acquired with external transmission source

    NASA Astrophysics Data System (ADS)

    Panin, V. Y.; Aykac, M.; Casey, M. E.

    2013-06-01

    The simultaneous PET data reconstruction of emission activity and attenuation coefficient distribution is presented, where the attenuation image is constrained by exploiting an external transmission source. Data are acquired in time-of-flight (TOF) mode, allowing in principle for separation of emission and transmission data. Nevertheless, here all data are reconstructed at once, eliminating the need to trace the position of the transmission source in sinogram space. Contamination of emission data by the transmission source and vice versa is naturally modeled. Attenuated emission activity data also provide additional information about object attenuation coefficient values. The algorithm alternates between attenuation and emission activity image updates. We also proposed a method of estimation of spatial scatter distribution from the transmission source by incorporating knowledge about the expected range of attenuation map values. The reconstruction of experimental data from the Siemens mCT scanner suggests that simultaneous reconstruction improves attenuation map image quality, as compared to when data are separated. In the presented example, the attenuation map image noise was reduced and non-uniformity artifacts that occurred due to scatter estimation were suppressed. On the other hand, the use of transmission data stabilizes attenuation coefficient distribution reconstruction from TOF emission data alone. The example of improving emission images by refining a CT-based patient attenuation map is presented, revealing potential benefits of simultaneous CT and PET data reconstruction.

  3. A New GPU-Enabled MODTRAN Thermal Model for the PLUME TRACKER Volcanic Emission Analysis Toolkit

    NASA Astrophysics Data System (ADS)

    Acharya, P. K.; Berk, A.; Guiang, C.; Kennett, R.; Perkins, T.; Realmuto, V. J.

    2013-12-01

    Real-time quantification of volcanic gaseous and particulate releases is important for (1) recognizing rapid increases in SO2 gaseous emissions which may signal an impending eruption; (2) characterizing ash clouds to enable safe and efficient commercial aviation; and (3) quantifying the impact of volcanic aerosols on climate forcing. The Jet Propulsion Laboratory (JPL) has developed state-of-the-art algorithms, embedded in their analyst-driven Plume Tracker toolkit, for performing SO2, NH3, and CH4 retrievals from remotely sensed multi-spectral Thermal InfraRed spectral imagery. While Plume Tracker provides accurate results, it typically requires extensive analyst time. A major bottleneck in this processing is the relatively slow but accurate FORTRAN-based MODTRAN atmospheric and plume radiance model, developed by Spectral Sciences, Inc. (SSI). To overcome this bottleneck, SSI in collaboration with JPL, is porting these slow thermal radiance algorithms onto massively parallel, relatively inexpensive and commercially-available GPUs. This paper discusses SSI's efforts to accelerate the MODTRAN thermal emission algorithms used by Plume Tracker. Specifically, we are developing a GPU implementation of the Curtis-Godson averaging and the Voigt in-band transmittances from near line center molecular absorption, which comprise the major computational bottleneck. The transmittance calculations were decomposed into separate functions, individually implemented as GPU kernels, and tested for accuracy and performance relative to the original CPU code. Speedup factors of 14 to 30× were realized for individual processing components on an NVIDIA GeForce GTX 295 graphics card with no loss of accuracy. Due to the separate host (CPU) and device (GPU) memory spaces, a redesign of the MODTRAN architecture was required to ensure efficient data transfer between host and device, and to facilitate high parallel throughput. Currently, we are incorporating the separate GPU kernels into a single function for calculating the Voigt in-band transmittance, and subsequently for integration into the re-architectured MODTRAN6 code. Our overall objective is that by combining the GPU processing with more efficient Plume Tracker retrieval algorithms, a 100-fold increase in the computational speed will be realized. Since the Plume Tracker runs on Windows-based platforms, the GPU-enhanced MODTRAN6 will be packaged as a DLL. We do however anticipate that the accelerated option will be made available to the general MODTRAN community through an application programming interface (API).

  4. Passive microwave remote sensing of rainfall with SSM/I: Algorithm development and implementation

    NASA Technical Reports Server (NTRS)

    Ferriday, James G.; Avery, Susan K.

    1994-01-01

    A physically based algorithm sensitive to emission and scattering is used to estimate rainfall using the Special Sensor Microwave/Imager (SSM/I). The algorithm is derived from radiative transfer calculations through an atmospheric cloud model specifying vertical distributions of ice and liquid hydrometeors as a function of rain rate. The algorithm is structured in two parts: SSM/I brightness temperatures are screened to detect rainfall and are then used in rain-rate calculation. The screening process distinguishes between nonraining background conditions and emission and scattering associated with hydrometeors. Thermometric temperature and polarization thresholds determined from the radiative transfer calculations are used to detect rain, whereas the rain-rate calculation is based on a linear function fit to a linear combination of channels. Separate calculations for ocean and land account for different background conditions. The rain-rate calculation is constructed to respond to both emission and scattering, reduce extraneous atmospheric and surface effects, and correct for beam filling. The resulting SSM/I rain-rate estimates are compared to three precipitation radars as well as to a dynamically simulated rainfall event. Global estimates from the SSM/I algorithm are also compared to continental and shipboard measurements over a 4-month period. The algorithm is found to accurately describe both localized instantaneous rainfall events and global monthly patterns over both land and ocean. Over land, the 4-month mean difference between SSM/I and the Global Precipitation Climatology Center continental rain gauge database is less than 10%. Over the ocean, the mean difference between SSM/I and the Legates and Willmott global shipboard rain gauge climatology is less than 20%.

  5. Soil emissivity and reflectance spectra measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobrino, Jose A.; Mattar, Cristian; Pardo, Pablo

    We present an analysis of the laboratory reflectance and emissivity spectra of 11 soil samples collected on different field campaigns carried out over a diverse suite of test sites in Europe, North Africa, and South America from 2002 to 2008. Hemispherical reflectance spectra were measured from 2.0 to 14 μm with a Fourier transform infrared spectrometer, and x-ray diffraction (XRD) analysis was used to determine the mineralogical phases of the soil samples. Emissivity spectra were obtained from the hemispherical reflectance measurements using Kirchhoff's law and compared with in situ radiance measurements obtained with a CIMEL Electronique CE312-2 thermal radiometer and converted to emissivity using the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) temperature and emissivity separation algorithm. The CIMEL has five narrow bands at approximately the same positions as ASTER's. Results show a root mean square error typically below 0.015 between laboratory emissivity measurements and emissivity measurements derived from the field radiometer.
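    A minimal sketch of the Kirchhoff's-law step mentioned above: for an opaque surface, hemispherical reflectance converts to emissivity as eps = 1 - rho. The reflectance spectrum here is a toy stand-in for an FTIR measurement.

    import numpy as np

    wl = np.linspace(2.0, 14.0, 500)                        # wavelength in micrometres
    rho = 0.05 + 0.10 * np.exp(-((wl - 9.0) / 0.8) ** 2)    # toy reststrahlen feature
    eps = 1.0 - rho                                         # Kirchhoff's law (opaque)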

  6. Estimating representative background PM2.5 concentration in heavily polluted areas using baseline separation technique and chemical mass balance model

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Yang, Wen; Zhang, Hui; Sun, Yanling; Mao, Jian; Ma, Zhenxing; Cong, Zhiyuan; Zhang, Xian; Tian, Shasha; Azzi, Merched; Chen, Li; Bai, Zhipeng

    2018-02-01

    The determination of the background concentration of PM2.5 is important for understanding the contribution of local emission sources to the total PM2.5 concentration. The purpose of this study was to examine the performance of baseline separation techniques for estimating the PM2.5 background concentration. Five separation methods, which included recursive digital filters (Lyne-Hollick, the one-parameter algorithm, and the Boughton two-parameter algorithm), sliding interval, and smoothed minima, were applied to one year of PM2.5 time-series data in two heavily polluted cities, Tianjin and Jinan. To obtain the proper filter parameters and recession constants for the separation techniques, we conducted regression analysis at a background site during the emission-reduction period enforced by the Government for the 2014 Asia-Pacific Economic Cooperation (APEC) meeting in Beijing. Background concentrations in Tianjin and Jinan were then estimated by applying the determined filter parameters and recession constants. The chemical mass balance (CMB) model was also applied to ascertain the effectiveness of the new approach. Our results showed that the contribution of the background PM concentration to ambient pollution was at a level comparable to the contribution obtained in a previous study. The best performance was achieved using the Boughton two-parameter algorithm. The background concentrations were estimated at (27 ± 2) μg/m3 for the whole year, (34 ± 4) μg/m3 for the heating period (winter), (21 ± 2) μg/m3 for the non-heating period (summer), and (25 ± 2) μg/m3 for the sandstorm period in Tianjin. The corresponding values in Jinan were (30 ± 3) μg/m3, (40 ± 4) μg/m3, (24 ± 5) μg/m3, and (26 ± 2) μg/m3, respectively. The study revealed that these baseline separation techniques are valid for estimating levels of PM2.5 air pollution, and that our proposed method has great potential for estimating the background levels of other air pollutants.
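    A minimal sketch of a Lyne-Hollick style recursive filter applied to a PM2.5 series, under the assumption that the high-frequency "quickflow" part represents local emissions and the remainder the background; the filter parameter and the synthetic series are illustrative.

    import numpy as np

    def lyne_hollick_background(y, alpha=0.925):
        """One-parameter recursive digital filter; returns the slow baseline."""
        quick = np.zeros_like(y)
        for t in range(1, y.size):
            quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (y[t] - y[t - 1])
            quick[t] = max(quick[t], 0.0)     # quickflow cannot be negative
        return y - quick

    rng = np.random.default_rng(4)
    hours = np.arange(24 * 30, dtype=float)
    regional = 28.0 + 5.0 * np.sin(hours / 80.0)               # slow background
    local = np.maximum(rng.normal(0.0, 12.0, hours.size), 0.0) # bursty local sources
    background_est = lyne_hollick_background(regional + local)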

  7. Areas of Anomalous Surface Temperature in Archuleta County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains areas of anomalous surface temperature in Archuleta County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with temperatures greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  8. Areas of Anomalous Surface Temperature in Dolores County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains areas of anomalous surface temperature in Dolores County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with temperatures greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  9. Areas of Anomalous Surface Temperature in Chaffee County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains areas of anomalous surface temperature in Chaffee County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with temperatures greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  10. Areas of Anomalous Surface Temperature in Garfield County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains areas of anomalous surface temperature in Garfield County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with temperatures greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  11. Areas of Anomalous Surface Temperature in Routt County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains areas of anomalous surface temperature in Routt County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with temperatures greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  12. Radiated Emissions from a Remote-Controlled Airplane, Measured in a Reverberation Chamber

    NASA Technical Reports Server (NTRS)

    Ely, Jay J.; Koppen, Sandra V.; Nguyen, Truong X.; Dudley, Kenneth L.; Szatkowski, George N.; Quach, Cuong C.; Vazquez, Sixto L.; Mielnik, John J.; Hogge, Edward F.; Hill, Boyd L.; et al.

    2011-01-01

    A full-vehicle, subscale, all-electric model airplane was tested for radiated emissions using a reverberation chamber. The mission of the NASA model airplane is to test in-flight airframe damage diagnosis and battery prognosis algorithms and to provide experimental data for other aviation safety research. Subscale model airplanes are economical experimental tools, but assembling their systems from hobbyist and low-cost components may lead to unforeseen electromagnetic compatibility problems. This report provides a guide for accommodating the on-board radio systems so that all model airplane systems may be operated during radiated emission testing. Radiated emission data are provided for on-board systems operated separately and together, so that potential interferers can be isolated and mitigated. The report concludes with recommendations for EMI/EMC best practices for subscale model airplanes and airships used for research.

  13. WHOLE BODY NONRIGID CT-PET REGISTRATION USING WEIGHTED DEMONS.

    PubMed

    Suh, J W; Kwon, Oh-K; Scheinost, D; Sinusas, A J; Cline, Gary W; Papademetris, X

    2011-03-30

    We present a new registration method for whole-body rat computed tomography (CT) and positron emission tomography (PET) images using a weighted demons algorithm. The CT and PET images are acquired on separate scanners at different times, and the inherent differences in the imaging protocols produce significant nonrigid changes between the two acquisitions, in addition to heterogeneous image characteristics. In this situation, we utilize both the transmission-PET and the emission-PET images in the deformable registration process, emphasizing particular regions of the moving transmission-PET image using the emission-PET image. We validated our results on nine rat image sets using the M-Hausdorff distance similarity measure, demonstrating improved performance compared to standard methods such as Demons and normalized mutual information-based nonrigid FFD registration.

  14. Areas of Anomalous Surface Temperature in Alamosa and Saguache Counties, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This layer contains areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with temperatures greater than 2σ were considered ASTER-modeled very warm surface exposures (thermal anomalies).

  15. Directly data processing algorithm for multi-wavelength pyrometer (MWP).

    PubMed

    Xing, Jian; Peng, Bo; Ma, Zhao; Guo, Xin; Dai, Li; Gu, Weihong; Song, Wenlong

    2017-11-27

    Data processing for a multi-wavelength pyrometer (MWP) is a difficult problem because the emissivity is unknown. The solutions developed so far generally assume particular mathematical relations for emissivity versus wavelength or emissivity versus temperature. Due to deviations between these hypotheses and the actual situation, the inversion results can be seriously affected. A data processing algorithm for MWP that does not need to assume a spectral emissivity model in advance is therefore the main aim of this study. Two new data processing algorithms for MWP, a Gradient Projection (GP) algorithm and an Internal Penalty Function (IPF) algorithm, neither of which requires a fixed emissivity model in advance, are proposed. The core idea is that the data processing problem of MWP is transformed into a constrained optimization problem, which can then be solved by the GP or IPF algorithms. By comparison of simulation results for some typical spectral emissivity models, it is found that the IPF algorithm is superior to the GP algorithm in terms of accuracy and efficiency. Rocket nozzle temperature experiment results show that true temperature inversion results from the IPF algorithm agree well with the theoretical design temperature. The proposed combination of the IPF algorithm with MWP is thus expected to be a direct data processing algorithm that clears the unknown-emissivity obstacle for MWP.
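    A minimal sketch of the constrained-optimization formulation described above: temperature and per-channel emissivities are free variables, emissivities are bounded to (0, 1], and the misfit to the measured radiances is minimized. SLSQP with bounds stands in for the paper's gradient projection and internal penalty schemes; channels and values are illustrative, and with N radiances and N + 1 unknowns the fit is only a feasible solution, not a unique one.

    import numpy as np
    from scipy.optimize import minimize

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

    def planck_rad(wl_um, T):
        wl = wl_um * 1e-6
        return 2 * H * C**2 / wl**5 / (np.exp(H * C / (wl * KB * T)) - 1.0)

    wl = np.array([1.0, 1.3, 1.6, 2.0, 2.5])          # pyrometer channels (um, assumed)
    true_T = 1800.0
    true_eps = np.array([0.85, 0.80, 0.78, 0.74, 0.70])
    measured = true_eps * planck_rad(wl, true_T)

    def cost(x):
        T, eps = x[0], x[1:]
        return np.sum((eps * planck_rad(wl, T) - measured) ** 2 / measured ** 2)

    x0 = np.concatenate([[1500.0], np.full(wl.size, 0.9)])
    bounds = [(300.0, 4000.0)] + [(1e-3, 1.0)] * wl.size
    T_est = minimize(cost, x0, method="SLSQP", bounds=bounds).x[0]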

  16. EM reconstruction of dual isotope PET using staggered injections and prompt gamma positron emitters

    PubMed Central

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2014-01-01

    Purpose: The aim of dual isotope positron emission tomography (DIPET) is to create two separate images of two coinjected PET radiotracers. DIPET shortens the duration of the study, reduces patient discomfort, and produces perfectly coregistered images compared to the case when two radiotracers would be imaged independently (sequential PET studies). Reconstruction of data from such simultaneous acquisition of two PET radiotracers is difficult because positron decay of any isotope creates only 511 keV photons; therefore, the isotopes cannot be differentiated based on the detected energy. Methods: Recently, the authors have proposed a DIPET technique that uses a combination of radiotracer A which is a pure positron emitter (such as 18F or 11C) and radiotracer B in which positron decay is accompanied by the emission of a high-energy (HE) prompt gamma (such as 38K or 60Cu). Events that are detected as triple coincidences of HE gammas with the corresponding two 511 keV photons allow the authors to identify the lines-of-response (LORs) of isotope B. These LORs are used to separate the two intertwined distributions, using a dedicated image reconstruction algorithm. In this work the authors propose a new version of the DIPET EM-based reconstruction algorithm that allows the authors to include an additional, independent estimate of radiotracer A distribution which may be obtained if radioisotopes are administered using a staggered injections method. In this work the method is tested on simple simulations of static PET acquisitions. Results: The authors’ experiments performed using Monte-Carlo simulations with static acquisitions demonstrate that the combined method provides better results (crosstalk errors decrease by up to 50%) than the positron-gamma DIPET method or staggered injections alone. Conclusions: The authors demonstrate that the authors’ new EM algorithm which combines information from triple coincidences with prompt gammas and staggered injections improves the accuracy of DIPET reconstructions for static acquisitions so they reach almost the benchmark level calculated for perfectly separated tracers. PMID:24506645

  17. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2010-03-07

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  18. How to determine the GHG budget of a pasture field with grazing animals

    NASA Astrophysics Data System (ADS)

    Ammann, Christof; Neftel, Albrecht; Felber, Raphael

    2016-04-01

    Up to now, the scientific investigation and description of agriculture-related greenhouse gas (GHG) exchange has largely been separated into (i) direct animal-related and (ii) ecosystem area-related processes and measurement methods. The two usually separate topics overlap on grazed pastures, where both direct animal emissions and pasture area emissions are relevant. In the present study, eddy covariance (EC) flux measurements on the field scale were combined with a source location attribution (footprint) model and with GPS position measurements of the individual animals. The experiment was performed on a pasture in Switzerland under a rotational full-grazing regime with dairy cows. The exchange fluxes of CH4, CO2, and N2O were measured simultaneously over the entire year. The observed CH4 emission fluxes correlated well with the presence of cows in the flux footprint. When converted to average emission per cow, the results agreed with published values from respiration chamber experiments with similar cows. For CO2, a sophisticated partitioning algorithm was applied to separate the pasture and animal contributions, because both were of the same order of magnitude. The N2O exchange, fully attributable to the pasture soil, showed considerable and continuous emissions through the entire seasonal course, modulated mainly by soil moisture and temperature. The resulting GHG budget shows that the largest GHG effect of the pasture system was due to enteric CH4 emissions, followed by soil N2O emissions, but that the carbon storage change was affected by a much larger uncertainty. The results demonstrate that the EC technique, in combination with animal position information, allows the exchange of all three GHGs on the pasture to be quantified consistently and direct animal sources to be adequately distinguished from diffuse area sources (and sinks). Yet questions concerning a standardized attribution of animal-related emissions to the pasture GHG budget still need to be resolved.

  19. Multiway analysis methods applied to the fluorescence excitation-emission dataset for the simultaneous quantification of valsartan and amlodipine in tablets

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Ertekin, Zehra Ceren; Büker, Eda

    2017-09-01

    In this study, excitation-emission matrix datasets with strongly overlapping bands were processed using four different chemometric calibration algorithms, consisting of parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares, for the simultaneous quantitative estimation of valsartan and amlodipine besylate in tablets. No preliminary separation step was used before the application of the parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares approaches for the analysis of the related drug substances in samples. A three-way excitation-emission matrix data array was obtained by concatenating excitation-emission matrices of the calibration set, validation set, and commercial tablet samples. The excitation-emission matrix data array was used to build the parallel factor analysis, Tucker3, three-way partial least squares and unfolded partial least squares calibrations and to predict the amounts of valsartan and amlodipine besylate in samples. For all the methods, calibration and prediction of valsartan and amlodipine besylate were performed in the working concentration range of 0.25-4.50 μg/mL. The validity and the performance of all the proposed methods were checked using the validation parameters. From the analysis results, it was concluded that the described two-way and three-way algorithmic methods are very useful for the simultaneous quantitative resolution and routine analysis of the related drug substances in marketed samples.

  20. Atmospheric Compensation and Surface Temperature and Emissivity Retrieval with LWIR Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Pieper, Michael

    Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or thermal infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance, and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, which have a linear relationship between their ground radiances and at-aperture radiances determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fit of the at-aperture radiance against the estimated ground radiance yields the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra of solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are found by finding the temperature that produces the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions underpinning their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affects the shape of the atmospheric estimates and how the clear channel assumption affects their magnitude. Emissivity spectra of solids usually have some roughness. The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise and wavelength calibration. The way these errors interact determines the overall TES performance. Since the AC and TES processes are interwoven, any errors in AC are transferred to TES and to the final temperature and emissivity estimates. Combining the two models, shape errors caused by the blackbody assumption are transferred to the emissivity estimates, while magnitude errors from the clear channel assumption are compensated by TES temperature-induced emissivity errors. The ability of the temperature-induced error to compensate for such atmospheric errors makes it difficult to determine the correct atmospheric parameters for a scene. With these models we are able to determine the expected quality of estimated emissivity spectra based on the quality of blackbody-like materials on the ground, the emissivity of the materials being searched for, and the properties of the sensor. The quality of the material emissivity spectra is a key factor in determining detection performance for a material in a scene.
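
    The smoothness assumption analyzed in this thesis can be made concrete with a minimal sketch: scan a grid of candidate temperatures, convert the ground radiance to a candidate emissivity with the Planck function, and keep the temperature whose emissivity spectrum is smoothest. The second-difference roughness measure and the brute-force grid are simplifications of what operational TES algorithms do.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wl_um, T):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr um)."""
    wl = wl_um * 1e-6
    return (2 * H * C**2 / wl**5) / np.expm1(H * C / (wl * KB * T)) * 1e-6

def tes_smoothness(wl_um, ground_radiance, t_grid):
    """Return the temperature whose emissivity estimate is smoothest."""
    best = (None, np.inf, None)
    for T in t_grid:
        eps = ground_radiance / planck_radiance(wl_um, T)
        # roughness: squared deviation from a 3-point running mean
        rough = np.sum((eps[1:-1] - 0.5 * (eps[:-2] + eps[2:]))**2)
        if rough < best[1]:
            best = (T, rough, eps)
    return best[0], best[2]

# toy check: a smooth emissivity spectrum at 320 K; T_hat should land near 320 K
wl = np.linspace(8.0, 12.0, 40)
eps_true = 0.95 + 0.02 * np.sin(wl)
T_hat, eps_hat = tes_smoothness(wl, eps_true * planck_radiance(wl, 320.0),
                                np.arange(300.0, 340.0, 0.1))
```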

  1. Key issues of ultraviolet radiation of OH at high altitudes

    NASA Astrophysics Data System (ADS)

    Zhang, Yuhuai; Wan, Tian; Jiang, Jianzheng; Fan, Jing

    2014-12-01

    Ultraviolet (UV) emission radiated by hydroxyl (OH) is one of the fundamental elements in predicting the radiation signature of high-altitude, high-speed vehicles. In this work, the OH A2Σ+→ X2Π ultraviolet emission band behind the bow shock is computed under the experimental conditions of the second bow-shock ultraviolet flight (BSUV-2). Four related key issues are discussed, namely, the source of hydrogen in the high-altitude atmosphere, the formation mechanism of OH species, an efficient computational algorithm for trace species in rarefied flows, and the accurate calculation of OH emission spectra. First, by analyzing a typical atmospheric model, the vertical distributions of the number densities of the different hydrogen-containing species are given. According to the dominating hydrogen-containing species, the atmosphere is divided into three zones, and the formation mechanism of OH is analyzed in each zone. The direct simulation Monte Carlo (DSMC) method and the Navier-Stokes equations are employed to compute the number densities of the different OH electronically and vibrationally excited states. In contrast to previous work, the trace species separation (TSS) algorithm is applied twice in order to accurately calculate the densities of OH and its excited states. Using a non-equilibrium radiation model, the OH ultraviolet emission spectra and intensities at different altitudes are computed, and good agreement is obtained with the flight-measured data.

  2. A support vector machine for spectral classification of emission-line galaxies from the Sloan Digital Sky Survey

    NASA Astrophysics Data System (ADS)

    Shi, Fei; Liu, Yu-Yan; Sun, Guang-Lan; Li, Pei-Yu; Lei, Yu-Ming; Wang, Jian

    2015-10-01

    The emission lines of galaxies originate from massive young stars or supermassive black holes. As a result, the spectral classification of emission-line galaxies into star-forming galaxies, active galactic nucleus (AGN) hosts, or composites of both relates closely to the formation and evolution of galaxies. To find an efficient and automatic spectral classification method, especially for large surveys and huge databases, a support vector machine (SVM) supervised learning algorithm is applied to a sample of emission-line galaxies from the Sloan Digital Sky Survey (SDSS) data release 9 (DR9) provided by the Max Planck Institute and the Johns Hopkins University (MPA/JHU). A two-step approach is adopted. (i) The SVM is first trained with a subset of objects that are known to be AGN hosts, composites or star-forming galaxies, treating the strong emission-line flux measurements as input feature vectors in an n-dimensional space, where n is the number of strong emission-line flux ratios. (ii) After training on a sample of emission-line galaxies, the remaining galaxies are automatically classified. In the classification process, we use a 10-fold cross-validation technique. We show that classification diagrams based on [N II]/Hα versus other emission-line ratios, such as [O III]/Hβ, [Ne III]/[O II], ([O III]λ4959+[O III]λ5007)/[O III]λ4363, [O II]/Hβ, [Ar III]/[O III], [S II]/Hα, and [O I]/Hα, plus colour, allow us to unambiguously separate AGN hosts, composites and star-forming galaxies. Among them, the diagram of [N II]/Hα versus [O III]/Hβ achieves an accuracy of 99 per cent in separating the three classes of objects. The other diagrams give an accuracy of ˜91 per cent.
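
    A minimal sketch of the two-step approach using scikit-learn; the feature matrix, labels, and hyperparameters below are placeholders standing in for the MPA/JHU line-ratio catalogue, not the paper's actual data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per galaxy, columns are emission-line flux ratios such as
# log([N II]/Halpha) and log([O III]/Hbeta); y: 0 = star-forming,
# 1 = composite, 2 = AGN host (labels known for the training subset).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))               # placeholder feature vectors
y = rng.integers(0, 3, size=300)            # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# step (ii): train on the labelled subset, then classify the rest
clf.fit(X, y)
labels_rest = clf.predict(rng.normal(size=(50, 2)))
```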

  3. Separation of irradiance and reflectance from observed color images by logarithmical nonlinear diffusion process

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi

    2006-02-01

    The Retinex theory, first proposed by Land, deals with the separation of irradiance from reflectance in an observed image. The separation problem is ill-posed. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies the previous Retinex algorithms, such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm based on the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, for its extension to color images, we present two approaches to treating the color channels: an independent approach that treats each color channel separately and a collective approach that treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's.

  4. Spectral unmixing of multi-color tissue specific in vivo fluorescence in mice

    NASA Astrophysics Data System (ADS)

    Zacharakis, Giannis; Favicchio, Rosy; Garofalakis, Anikitos; Psycharakis, Stylianos; Mamalaki, Clio; Ripoll, Jorge

    2007-07-01

    Fluorescence Molecular Tomography (FMT) has emerged as a powerful tool for monitoring biological functions in vivo in small animals. It provides the means to determine volumetric images of fluorescent protein concentration by applying the principles of diffuse optical tomography. Using different probes tagged to different proteins or cells, different biological functions and pathways can be imaged simultaneously in the same subject. In this work we present a spectral unmixing algorithm capable of separating signals from different probes when combined with the tomographic imaging modality. We show results of two-color imaging when the algorithm is applied to separate fluorescence activity originating from phantoms containing two different fluorophores, namely CFSE and SNARF, with well-separated emission spectra, as well as from DsRed- and GFP-fused cells in F5/B10 transgenic mice in vivo. The same algorithm can furthermore be applied to tissue-specific spectroscopy data. Spectral analysis of a variety of organs from control, DsRed and GFP F5/B10 transgenic mice showed that fluorophore detection by optical systems is highly tissue-dependent. Spectral data collected from different organs can provide useful insight into experimental parameter optimisation (choice of filters, fluorophores, excitation wavelengths), and spectral unmixing can be applied to measure the tissue dependency, thereby taking into account localized fluorophore efficiency. In summary, tissue spectral unmixing can be used as a criterion for choosing the most appropriate tissue targets as well as fluorescent markers for specific applications.
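
    At its core, this kind of spectral unmixing is a non-negative least-squares fit of known end-member emission spectra to the mixed signal. A minimal sketch, with toy Gaussian spectra standing in for the CFSE and SNARF end-members:

```python
import numpy as np
from scipy.optimize import nnls

# columns of E: reference emission spectra of the two fluorophores,
# sampled on the same wavelength grid as the measured mixed signal
wavelengths = np.linspace(500.0, 700.0, 101)
cfse_like  = np.exp(-0.5 * ((wavelengths - 520.0) / 15.0)**2)  # toy spectrum
snarf_like = np.exp(-0.5 * ((wavelengths - 640.0) / 20.0)**2)  # toy spectrum
E = np.column_stack([cfse_like, snarf_like])

true_abundances = np.array([0.7, 0.3])
mixed = E @ true_abundances \
        + 0.01 * np.random.default_rng(1).normal(size=wavelengths.size)

abundances, residual = nnls(E, mixed)   # non-negative least squares unmixing
print("estimated abundances:", abundances)
```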

  5. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ˜60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
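
    A hedged sketch of such a three-algorithm comparison, scored with the true skill statistic; the random feature matrix stands in for the real AR feature database, and the models' hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def true_skill_statistic(y_true, y_pred):
    """TSS = hit rate - false alarm rate, for binary flare/no-flare labels."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) - fp / (fp + tn)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))                    # ~60 AR features (placeholder)
y = (X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

models = {
    "SVM":  SVC(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "ERT":  ExtraTreesClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    y_pred = cross_val_predict(model, X, y, cv=5)  # shuffled train/test splits
    print(name, "TSS =", round(true_skill_statistic(y, y_pred), 3))
```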

  6. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishizuka, N.; Kubo, Y.; Den, M.

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  7. Seasonal Surface Spectral Emissivity Derived from Terra MODIS Data

    NASA Technical Reports Server (NTRS)

    Sun-Mack, Sunny; Chen, Yan; Minnis, Patrick; Young, DavidF.; Smith, William J., Jr.

    2004-01-01

    The CERES (Clouds and the Earth's Radiant Energy System) Project is measuring broadband shortwave and longwave radiances and deriving cloud properties from various imagers to produce a combined global radiation and cloud property data set. In this paper, simultaneous data from Terra MODIS (Moderate Resolution Imaging Spectroradiometer) taken at 3.7, 8.5, 11.0, and 12.0 µm are used to derive the skin temperature and the surface emissivities at the same wavelengths. The methodology uses separate measurements of clear-sky temperature in each channel determined by scene classification during the daytime and at night. The relationships between the various channels at night are used during the day, when solar reflectance affects the 3.7-µm radiances. A set of simultaneous equations is then solved to derive the emissivities. Global monthly emissivity maps are derived from Terra MODIS data, while numerical weather analyses provide soundings for correcting the observed radiances for atmospheric absorption. These maps are used by CERES and other cloud retrieval algorithms.

  8. Observations of Ultraviolet Emission from Mg+ in the Lower and Middle Thermosphere

    NASA Astrophysics Data System (ADS)

    Minschwaner, K.; Shukla, N.; Fortna, C.; Budzien, S.; Dymond, K.; McCoy, R.

    2004-12-01

    New observations of ionized magnesium dayglow are reported from the Ionospheric Spectroscopy and Atmospheric Chemistry (ISAAC) instrument on the ARGOS satellite. We focused on two periods, October 14-28, 1999 and November 15-30, 1999, when ISAAC obtained high-quality limb spectra between 2600 and 3000 Å and from 85 to 350 km tangent altitude. In addition to the resonant scattering by Mg+ near 2800 Å, these limb spectra also contain signatures of fluorescent scattering by nitric oxide in the gamma bands, emission by molecular nitrogen in the Vegard-Kaplan bands, and atomic emission by oxygen in the 2972 Å line. A retrieval algorithm has been developed to measure the abundance of nitric oxide using the intensity of fluorescent scattering in the γ (1,5) band at 2670 Å. This technique then allows the overlapping emission by nitric oxide in the γ (1,6) band to be separated from the Mg+ doublet at 2800 Å. Retrieved Mg+ column densities have been mapped as a function of altitude and geomagnetic latitude.

  9. Exploring the fine structure at the limb in coronal holes

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita; Blundell, Solon F.; Habbal, Shadia Rifai

    1994-01-01

    The fine structure of the solar limb in coronal holes is explored at temperatures ranging from 10(exp 4) to 10(exp 6) K. An image enhancement algorithm originally developed for solar eclipse observations is applied to a number of simultaneous multiwavelength observations made with the Harvard Extreme Ultraviolet Spectrometer experiment on Skylab. The enhanced images reveal the presence of filamentary structures above the limb with a characteristic separation of approximately 10 to 15 arcsec. Some of the structures extend from the solar limb into the corona to at least 4 arcmin above the solar limb. The brightness of these structures changes as a function of height above the limb. The brightest emission is associated with spicule-like structures in the proximity of the limb. The emission characteristic of high-temperature plasma is not cospatial with the emission at lower temperatures, indicating the presence of plasmas at different temperatures in the field of view.

  10. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data around Pinkerton Hot Springs, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in northern Saguache County identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta on the map. Thermal springs and areas with favorable geochemistry are also shown. Springs or wells with non-favorable geochemistry are shown as blue dots.
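
    A minimal sketch of the emissivity-normalization idea used for these maps: assume one fixed maximum emissivity for every band, compute a brightness temperature per band, take the hottest band as the surface temperature, and back out the per-band emissivities. The assumed maximum of 0.96, the band centers, and the unit conventions are illustrative choices, not the exact processing behind this product.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wl_um, T):
    """Blackbody radiance in W / (m^2 sr um)."""
    wl = wl_um * 1e-6
    return (2*H*C**2 / wl**5) / np.expm1(H*C / (wl*KB*T)) * 1e-6

def inv_planck(wl_um, L):
    """Brightness temperature [K] from radiance L [W / (m^2 sr um)]."""
    wl = wl_um * 1e-6
    return H*C / (wl*KB * np.log1p(2*H*C**2 / (wl**5 * (L * 1e6))))

def emissivity_normalization(wl_um, ground_radiance, eps_max=0.96):
    # 1. assume every band has emissivity eps_max -> one temperature per band
    T_band = inv_planck(wl_um, ground_radiance / eps_max)
    # 2. the hottest band temperature is taken as the surface temperature
    T = T_band.max()
    # 3. back out per-band emissivity with that single temperature
    return T, ground_radiance / planck(wl_um, T)

wl = np.array([8.3, 8.65, 9.1, 10.6, 11.3])   # ASTER-like TIR band centers, um
T_est, eps_est = emissivity_normalization(wl, 0.97 * planck(wl, 300.0))
```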

  11. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data in Northwest Delta, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in northern Saguache County identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta on the map. Thermal springs and areas with favorable geochemistry are also shown. Springs or wells with non-favorable geochemistry are shown as blue dots.

  12. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data in Ouray, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Ouray identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta on the map. Thermal springs and areas with favorable geochemistry are also shown. Springs or wells with non-favorable geochemistry are shown as blue dots.

  13. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data in Southwest Steamboat Springs, Garfield County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature around south Steamboat Springs identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta on the map. Thermal springs and areas with favorable geochemistry are also shown. Springs or wells with non-favorable geochemistry are shown as blue dots.

  14. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data in Northern Saguache County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in northern Saguache County identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta on the map. Thermal springs and areas with favorable geochemistry are also shown. Springs or wells with non-favorable geochemistry are shown as blue dots.

  15. An EEG blind source separation algorithm based on a weak exclusion principle.

    PubMed

    Lan Ma; Blu, Thierry; Wang, William S-Y

    2016-08-01

    The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG blind source separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate the algorithm's performance on simulated signals for which the ground truth is known, in order to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. We then used the proposed algorithm to separate real EEG signals from a memory study using a revised version of the Sternberg task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.

  16. Separation analysis, a tool for analyzing multigrid algorithms

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1995-01-01

    The separation of vectors by multigrid (MG) algorithms is applied to the study of convergence and to the prediction of the performance of MG algorithms. The separation operator for a two-level cycle algorithm is derived. It is used to analyze the efficiency of the cycle when mixing of eigenvectors occurs. In particular cases the separation analysis reduces to Fourier-type analysis. The separation operator of a two-level cycle for a Schrödinger eigenvalue problem is derived and analyzed in a Fourier basis. Separation analysis gives information on how to choose relaxations and inter-level transfers. Separation analysis is thus a tool for analyzing and designing algorithms, and for optimizing their performance.

  17. Application of independent component analysis for speech-music separation using an efficient score function estimation

    NASA Astrophysics Data System (ADS)

    Pishravian, Arash; Aghabozorgi Sahaf, Masoud Reza

    2012-12-01

    In this paper, speech-music separation using blind source separation is discussed. The separating algorithm is based on mutual information minimization, where the natural gradient algorithm is used for the minimization. To do this, the score function must be estimated from samples of the observation signals (mixtures of speech and music). The accuracy and speed of this estimation affect the quality of the separated signals and the processing time of the algorithm. The score function estimation in the presented algorithm is based on a Gaussian-mixture-based kernel density estimation method. Experimental results of the presented algorithm on speech-music separation, compared with a separating algorithm based on the minimum mean square error estimator, indicate that it achieves better performance and shorter processing time.
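
    A compact sketch of natural-gradient ICA. For simplicity it uses a fixed tanh score function and two independent super-Gaussian (Laplace) sources standing in for speech and music, rather than the Gaussian-mixture kernel density score estimator proposed in the paper.

```python
import numpy as np

def natural_gradient_ica(X, lr=0.02, n_iter=1000, seed=0):
    """Minimize mutual information of Y = W X with the natural gradient
    update W <- W + lr * (I - phi(Y) Y^T / n) W, phi = tanh."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.normal(size=(n, n))
    for _ in range(n_iter):
        Y = W @ X
        phi = np.tanh(Y)                 # score function for super-Gaussian sources
        W += lr * (np.eye(n) - phi @ Y.T / m) @ W
    return W @ X, W

rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))          # two independent super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
recovered, W = natural_gradient_ica(A @ S)
# W @ A should now be close to a scaled permutation matrix
```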

  18. An algorithm to estimate aircraft cruise black carbon emissions for use in developing a cruise emissions inventory.

    PubMed

    Peck, Jay; Oluwole, Oluwayemisi O; Wong, Hsi-Wu; Miake-Lye, Richard C

    2013-03-01

    To provide accurate input parameters to large-scale global climate simulation models, an algorithm was developed to estimate the black carbon (BC) mass emission index for engines in the commercial fleet at cruise. Using a high-dimensional model representation (HDMR) global sensitivity analysis, relevant engine specification/operation parameters were ranked, and the most important parameters were selected. Simple algebraic formulas were then constructed based on those important parameters. The algorithm takes the cruise power (alternatively, fuel flow rate), altitude, and Mach number as inputs, and calculates the BC emission index for a given engine/airframe combination using engine property parameters, such as the smoke number, available in the International Civil Aviation Organization (ICAO) engine certification databank. The algorithm can be interfaced with state-of-the-art aircraft emissions inventory development tools, and will greatly improve global climate simulations that currently use a single fleet-average value for all airplanes. In summary, an algorithm to estimate the cruise-condition black carbon emission index for commercial aircraft engines was developed. Using the ICAO certification data, the algorithm can evaluate the black carbon emission at a given cruise altitude and speed.
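
    The paper's HDMR-derived formulas are not given in the abstract; as a simpler stand-in, the sketch below implements a first-order smoke-number correlation of the FOA3 type. The coefficients are the commonly quoted approximations and should be treated as illustrative; the actual algorithm also accounts for altitude and Mach number.

```python
def bc_emission_index_foa3(smoke_number, afr):
    """FOA3-style first-order approximation of the black carbon emission
    index (mg BC per kg fuel); illustrative, not the paper's HDMR formula.
    smoke_number: ICAO certification smoke number (correlation assumes SN <= 30)
    afr: air-to-fuel ratio at the operating mode of interest"""
    ci = 0.0694 * smoke_number ** 1.234   # soot concentration, mg per m^3 exhaust
    q = 0.776 * afr + 0.877               # exhaust volume, m^3 per kg fuel
    return ci * q

# e.g. smoke number 10 at an air-to-fuel ratio of ~60:
print(bc_emission_index_foa3(10.0, 60.0))  # roughly 56 mg/kg (illustrative)
```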

  19. EM reconstruction of dual isotope PET using staggered injections and prompt gamma positron emitters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreyev, Andriy, E-mail: andriy.andreyev-1@philips.com; Sitek, Arkadiusz; Celler, Anna

    2014-02-15

    Purpose: The aim of dual isotope positron emission tomography (DIPET) is to create two separate images of two coinjected PET radiotracers. DIPET shortens the duration of the study, reduces patient discomfort, and produces perfectly coregistered images compared to the case when two radiotracers would be imaged independently (sequential PET studies). Reconstruction of data from such simultaneous acquisition of two PET radiotracers is difficult because positron decay of any isotope creates only 511 keV photons; therefore, the isotopes cannot be differentiated based on the detected energy. Methods: Recently, the authors have proposed a DIPET technique that uses a combination of radiotracer A, which is a pure positron emitter (such as {sup 18}F or {sup 11}C), and radiotracer B, in which positron decay is accompanied by the emission of a high-energy (HE) prompt gamma (such as {sup 38}K or {sup 60}Cu). Events that are detected as triple coincidences of HE gammas with the corresponding two 511 keV photons allow the authors to identify the lines-of-response (LORs) of isotope B. These LORs are used to separate the two intertwined distributions, using a dedicated image reconstruction algorithm. In this work the authors propose a new version of the DIPET EM-based reconstruction algorithm that allows the authors to include an additional, independent estimate of radiotracer A distribution which may be obtained if radioisotopes are administered using a staggered injections method. In this work the method is tested on simple simulations of static PET acquisitions. Results: The authors’ experiments performed using Monte-Carlo simulations with static acquisitions demonstrate that the combined method provides better results (crosstalk errors decrease by up to 50%) than the positron-gamma DIPET method or staggered injections alone. Conclusions: The authors demonstrate that the authors’ new EM algorithm which combines information from triple coincidences with prompt gammas and staggered injections improves the accuracy of DIPET reconstructions for static acquisitions so they reach almost the benchmark level calculated for perfectly separated tracers.

  20. Finding Blackbody Temperature and Emissivity on a Sub-Pixel Scale

    NASA Astrophysics Data System (ADS)

    Bernstein, D. J.; Bausell, J.; Grigsby, S.; Kudela, R. M.

    2015-12-01

    Surface temperature and emissivity provide important insight into the ecosystem being remotely sensed. Dozier (1981) proposed an algorithm to solve for the percent coverage and temperatures of two different surface types (e.g. sea surface, cloud cover, etc.) within a given pixel, with a constant value assumed for emissivity. Here we build on Dozier (1981) by proposing an algorithm that solves for both the temperature and emissivity of a water body within a satellite pixel by assuming known percent coverage of the surface types within the pixel. Our algorithm generates thermal infrared (TIR) and emissivity end-member spectra for the two surface types. It then superposes these end-member spectra on emissivity and TIR spectra emitted from four pixels with varying percent coverage of the different surface types. The algorithm was tested preliminarily (48 iterations) using simulated pixels containing more than one surface type, with temperature and emissivity percent errors ranging from 0 to 1.071% and 2.516 to 15.311%, respectively [1]. We then tested the algorithm using a MASTER image collected as part of the NASA Student Airborne Research Program (NASA SARP). Here the temperature of water was calculated to be within 0.22 K of in situ data. The algorithm calculated the emissivity of water with an accuracy of 0.13 to 1.53% error for Salton Sea pixels in MASTER imagery, also collected as part of NASA SARP. This method could improve retrievals for the HyspIRI sensor. [1] Percent error for emissivity was generated by averaging percent error across all selected band widths.
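
    A hedged sketch of the inversion described above: with known percent coverage and a known land end-member, the water temperature and a (here spectrally flat) water emissivity are fit to the mixed-pixel TIR radiances by nonlinear least squares. Band centers, end-member values, starting point, and bounds are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(wl_um, T):
    """Blackbody radiance in W / (m^2 sr um)."""
    wl = wl_um * 1e-6
    return (2*H*C**2 / wl**5) / np.expm1(H*C / (wl*KB*T)) * 1e-6

wl = np.array([8.6, 9.1, 10.6, 11.3, 12.1])       # TIR band centers, um

f_water = 0.6                                     # known percent coverage
land_radiance = 0.95 * planck(wl, 305.0)          # known land end-member

def residual(p, L_pixel):
    T_w, eps_w = p
    model = f_water * eps_w * planck(wl, T_w) + (1.0 - f_water) * land_radiance
    return model - L_pixel

# simulate one mixed pixel (water at 288 K, emissivity 0.985), then invert
L_pixel = f_water * 0.985 * planck(wl, 288.0) + (1.0 - f_water) * land_radiance
fit = least_squares(residual, x0=[290.0, 0.95], args=(L_pixel,),
                    bounds=([250.0, 0.8], [340.0, 1.0]))
T_water, eps_water = fit.x
```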

  1. Key issues of ultraviolet radiation of OH at high altitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yuhuai; Wan, Tian; Jiang, Jianzheng

    2014-12-09

    Ultraviolet (UV) emission radiated by hydroxyl (OH) is one of the fundamental elements in predicting the radiation signature of high-altitude, high-speed vehicles. In this work, the OH A{sup 2}Σ{sup +}→X{sup 2}Π ultraviolet emission band behind the bow shock is computed under the experimental conditions of the second bow-shock ultraviolet flight (BSUV-2). Four related key issues are discussed, namely, the source of hydrogen in the high-altitude atmosphere, the formation mechanism of OH species, an efficient computational algorithm for trace species in rarefied flows, and the accurate calculation of OH emission spectra. First, by analyzing a typical atmospheric model, the vertical distributions of the number densities of the different hydrogen-containing species are given. According to the dominating hydrogen-containing species, the atmosphere is divided into three zones, and the formation mechanism of OH is analyzed in each zone. The direct simulation Monte Carlo (DSMC) method and the Navier-Stokes equations are employed to compute the number densities of the different OH electronically and vibrationally excited states. In contrast to previous work, the trace species separation (TSS) algorithm is applied twice in order to accurately calculate the densities of OH and its excited states. Using a non-equilibrium radiation model, the OH ultraviolet emission spectra and intensities at different altitudes are computed, and good agreement is obtained with the flight-measured data.

  2. Unsupervised learning of structure in spectroscopic cubes

    NASA Astrophysics Data System (ADS)

    Araya, M.; Mendoza, M.; Solar, M.; Mardones, D.; Bayo, A.

    2018-07-01

    We consider the problem of analyzing the structure of spectroscopic cubes using unsupervised machine learning techniques. We propose representing the target's signal as a homogeneous set of volumes through an iterative algorithm that separates the structured emission from the background while not overestimating the flux. Besides verifying some basic theoretical properties, the algorithm is designed to be tuned by domain experts, because its parameters have meaningful values in the astronomical context. Nevertheless, we propose a heuristic to automatically estimate the signal-to-noise ratio parameter of the algorithm directly from the data. The resulting lightweight set of samples (≤ 1% of the original data) offers several advantages. For instance, it is statistically correct and computationally inexpensive to apply well-established techniques from the pattern recognition and machine learning domains, such as clustering and dimensionality reduction algorithms. We use ALMA science verification data to validate our method, and present examples of the operations that can be performed using the proposed representation. Even though this approach is focused on providing faster and better analysis tools for the end-user astronomer, it also opens the possibility of content-aware data discovery by applying our algorithm to big data.

  3. The difference between laboratory and in-situ pixel-averaged emissivity: The effects on temperature-emissivity separation

    NASA Technical Reports Server (NTRS)

    Matsunaga, Tsuneo

    1993-01-01

    The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a future Japanese imaging sensor with five channels in the thermal infrared (TIR) region. To extract spectral emissivity information from ASTER and/or TIMS data, various temperature-emissivity (T-E) separation methods have been developed to date. Most of them require assumptions about the surface emissivity, for which emissivity measured in a laboratory is often used instead of the in-situ pixel-averaged emissivity. But if these two emissivities differ, the accuracies of the separated emissivity and surface temperature are reduced. In this study, the difference between laboratory and in-situ pixel-averaged emissivity and its effect on T-E separation are discussed. TIMS data of an area containing both rocks and vegetation were also processed to retrieve emissivity spectra using two T-E separation methods.

  4. Comparison of the MODIS Collection 5 Multilayer Cloud Detection Product with CALIPSO

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Gala; King, Michael D.; Holz, Robert E.; Ackerman, Steven A.; Nagle, Fred W.

    2010-01-01

    CALIPSO, launched in June 2006, provides global active remote sensing measurements of clouds and aerosols that can be used for validation of a variety of passive imager retrievals derived from instruments flying on the Aqua spacecraft and other A-Train platforms. The most recent processing effort for the MODIS Atmosphere Team, referred to as the Collection 5 stream, includes a research-level multilayer cloud detection algorithm that uses thermodynamic phase information derived from a combination of solar and thermal emission bands to discriminate layers of different phases, as well as true layer separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at a 1 km resolution. Using pixel-level collocations of MODIS Aqua and CALIOP, we investigate the global performance of the multilayer cloud detection algorithm (and thermodynamic phase).

  5. Estimation of Multiple Parameters over Vegetated Surfaces by Integrating Optical-Thermal Remote Sensing Observations

    NASA Astrophysics Data System (ADS)

    Ma, H.

    2016-12-01

    Land surface parameters from remote sensing observations are critical for monitoring and modeling global climate change and biogeochemical cycles. Current methods for estimating land surface parameters are generally parameter-specific algorithms based on instantaneous physical models, which results in spatial, temporal and physical inconsistencies in current global products. Besides, optical and thermal infrared (TIR) remote sensing observations are usually used separately, based on different models, and middle infrared (MIR) observations have received little attention due to the complexity of a radiometric signal that mixes both reflected and emitted fluxes. In this paper, we propose a unified algorithm for simultaneously retrieving a total of seven land surface parameters, namely Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), land surface albedo, Land Surface Temperature (LST), surface emissivity, and downward and upward longwave radiation, by exploiting remote sensing observations from the visible to the TIR domain based on a common physical radiative transfer (RT) model and a data assimilation framework. The coupled PROSPECT-VISIR and 4SAIL RT models were used for canopy reflectance modeling. First, LAI was estimated using a data assimilation method that combines MODIS daily reflectance observations with a phenology model. The estimated LAI values were then input into the RT model to simulate surface spectral emissivity and surface albedo. In addition, the background albedo, the transmittance of solar radiation, and the canopy albedo were calculated to produce FAPAR. Once the spectral emissivities of seven MODIS MIR to TIR bands were retrieved, LST was estimated from the atmospherically corrected surface radiance using an optimization method. Finally, the upward longwave radiation was estimated using the retrieved LST, the broadband emissivity (converted from spectral emissivity) and the downward longwave radiation (modeled by MODTRAN). These seven parameters were validated over several representative sites with different biome types, and compared with MODIS and GLASS products. Results showed that this unified inversion algorithm can retrieve temporally complete and physically consistent land surface parameters with high accuracy.

  6. Isoprene emission potentials from European oak forests derived from canopy flux measurements: an assessment of uncertainties and inter-algorithm variability

    NASA Astrophysics Data System (ADS)

    Langford, Ben; Cash, James; Acton, W. Joe F.; Valach, Amy C.; Hewitt, C. Nicholas; Fares, Silvano; Goded, Ignacio; Gruening, Carsten; House, Emily; Kalogridis, Athina-Cerise; Gros, Valérie; Schafers, Richard; Thomas, Rick; Broadmeadow, Mark; Nemitz, Eiko

    2017-12-01

    Biogenic emission algorithms predict that oak forests account for ˜ 70 % of the total European isoprene budget. Yet the isoprene emission potentials (IEPs) that underpin these model estimates are calculated from a very limited number of leaf-level observations and hence are highly uncertain. Increasingly, micrometeorological techniques such as eddy covariance are used to measure whole-canopy fluxes directly, from which isoprene emission potentials can be calculated. Here, we review five observational datasets of isoprene fluxes from a range of oak forests in the UK, Italy and France. We outline procedures to correct the measured net fluxes for losses from deposition and chemical flux divergence, which were found to be on the order of 5-8 and 4-5 %, respectively. The corrected observational data were used to derive isoprene emission potentials at each site in a two-step process. Firstly, six commonly used emission algorithms were inverted to back out time series of isoprene emission potential, and then an average isoprene emission potential was calculated for each site with an associated uncertainty. We used these data to assess how the derived emission potentials change depending upon the specific emission algorithm used and, importantly, on the particular approach adopted to derive an average site-specific emission potential. Our results show that isoprene emission potentials can vary by up to a factor of 4 depending on the specific algorithm used and whether or not it is used in a big-leaf or canopy environment (CE) model format. When using the same algorithm, the calculated average isoprene emission potential was found to vary by as much as 34 % depending on how the average was derived. Using a consistent approach with version 2.1 of the Model for Emissions of Gases and Aerosols from Nature (MEGAN), we derive new ecosystem-scale isoprene emission potentials for the five measurement sites: Alice Holt, UK (10 500 ± 2500 µg m-2 h-1); Bosco Fontana, Italy (1610 ± 420 µg m-2 h-1); Castelporziano, Italy (121 ± 15 µg m-2 h-1); Ispra, Italy (7590 ± 1070 µg m-2 h-1); and the Observatoire de Haute Provence, France (7990 ± 1010 µg m-2 h-1). Ecosystem-scale isoprene emission potentials were then extrapolated to the leaf-level and compared to previous leaf-level measurements for Quercus robur and Quercus pubescens, two species thought to account for 50 % of the total European isoprene budget. The literature values agreed closely with emission potentials calculated using the G93 algorithm, which were 85 ± 75 and 78 ± 25 µg g-1 h-1 for Q. robur and Q. pubescens, respectively. By contrast, emission potentials calculated using the G06 algorithm, the same algorithm used in a previous study to derive the European budget, were significantly lower, which we attribute to the influence of past light and temperature conditions. Adopting these new G06 specific emission potentials for Q. robur (55 ± 24 µg g-1 h-1) and Q. pubescens (47 ± 16 µg g-1 h-1) reduced the projected European budget by ˜ 17 %. Our findings demonstrate that calculated isoprene emission potentials vary considerably depending upon the specific approach used in their calculation. Therefore, it is our recommendation that the community now adopt a standardised approach to the way in which micrometeorological flux measurements are corrected and used to derive isoprene, and other biogenic volatile organic compounds, emission potentials.
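
    As an illustration of the inversion step described above (an emission algorithm inverted to back out the emission potential), here is a sketch using the G93 big-leaf activity factors. PPFD is assumed in µmol m⁻² s⁻¹ and temperature in K; the low-activity mask threshold is an arbitrary choice.

```python
import numpy as np

R, TS, TM = 8.314, 303.0, 314.0              # gas constant, standard and optimum T
ALPHA, CL1, CT1, CT2 = 0.0027, 1.066, 95000.0, 230000.0   # G93 constants

def g93_activity(ppfd, T):
    """Light (C_L) and temperature (C_T) activity factors of Guenther et al. (1993)."""
    c_l = ALPHA * CL1 * ppfd / np.sqrt(1.0 + ALPHA**2 * ppfd**2)
    c_t = (np.exp(CT1 * (T - TS) / (R * TS * T))
           / (1.0 + np.exp(CT2 * (T - TM) / (R * TS * T))))
    return c_l * c_t

def emission_potential(flux, ppfd, T):
    """Invert G93: back out a standardized emission potential (same units as
    flux) from a corrected canopy-scale flux time series."""
    gamma = g93_activity(ppfd, T)
    return np.where(gamma > 0.05, flux / gamma, np.nan)  # mask low-activity data
```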

  7. Green analytical determination of emerging pollutants in environmental waters using excitation-emission photoinduced fluorescence data and multivariate calibration.

    PubMed

    Hurtado-Sánchez, María Del Carmen; Lozano, Valeria A; Rodríguez-Cáceres, María Isabel; Durán-Merás, Isabel; Escandar, Graciela M

    2015-03-01

    An eco-friendly strategy for the simultaneous quantification of three emerging pharmaceutical contaminants is presented. The proposed analytical method, which involves photochemically induced fluorescence matrix data combined with second-order chemometric analysis, was used for the determination of carbamazepine, ofloxacin and piroxicam in water samples of different complexity without the need of chromatographic separation. Excitation-emission photoinduced fluorescence matrices were obtained after UV irradiation, and processed with second-order algorithms. Only one of the tested algorithms was able to overcome the strong spectral overlapping among the studied pollutants and allowed their successful quantitation in very interferent media. The method sensitivity in superficial and underground water samples was enhanced by a simple solid-phase extraction with C18 membranes, which was successful for the extraction/preconcentration of the pollutants at trace levels. Detection limits in preconcentrated (1:125) real water samples ranged from 0.04 to 0.3 ng mL(-1). Relative prediction errors around 10% were achieved. The proposed strategy is significantly simpler and greener than liquid chromatography-mass spectrometry methods, without compromising the analytical quality of the results. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Areas with Surface Thermal Anomalies as Detected by ASTER and LANDSAT Data around South Canyon Hot Springs, Garfield County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature around South Canyon Hot Springs identified from ASTER and LANDSAT thermal data and a spatially based insolation model. The temperature for the ASTER data was calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the ASTER temperature minus the temperature due to solar radiation, was used to identify thermally anomalous areas. Areas with anomalous temperature in the ASTER data are shown in blue diagonal hatch, while areas with anomalous temperature in the LANDSAT data are shown in magenta on the map. Thermal springs and areas with favorable geochemistry are also shown. Springs or wells with non-favorable geochemistry are shown as blue dots.

  9. INFLUENCE OF INCREASED ISOPRENE EMISSIONS ON REGIONAL OZONE MODELING

    EPA Science Inventory

    The role of biogenic hydrocarbons on ozone modeling has been a controversial issue since the 1970s. In recent years, changes in biogenic emission algorithms have resulted in large increases in estimated isoprene emissions. This paper describes a recent algorithm, the second gener...

  10. Beyond filtered backprojection: A reconstruction software package for ion beam microtomography data

    NASA Astrophysics Data System (ADS)

    Habchi, C.; Gordillo, N.; Bourret, S.; Barberet, Ph.; Jovet, C.; Moretto, Ph.; Seznec, H.

    2013-01-01

    A new version of the TomoRebuild data reduction software package is presented, for the reconstruction of scanning transmission ion microscopy tomography (STIMT) and particle induced X-ray emission tomography (PIXET) images. First, we present a state of the art of the reconstruction codes available for ion beam microtomography. The algorithm proposed here brings several advantages. It is a portable, multi-platform code, designed in C++ with well-separated classes for easier use and evolution. Data reduction is separated into different steps, and the intermediate results may be checked if necessary. Although no additional graphics library or numerical tool is required to run the program from the command line, a user-friendly interface was designed in Java as an ImageJ plugin. All experimental and reconstruction parameters may be entered either through this plugin or directly in text-format files. A simple standard format is proposed for the input of experimental data. Optional graphic applications using the ROOT interface may be used separately to display and fit energy spectra. Regarding the reconstruction process, the filtered backprojection (FBP) algorithm, already present in the previous version of the code, was optimized so that it is about 10 times faster. In addition, the Maximum Likelihood Expectation Maximization (MLEM) algorithm and its accelerated version, Ordered Subsets Expectation Maximization (OSEM), were implemented. A detailed user guide in English is available. A reconstruction example of experimental data from a biological sample is given. It shows the capability of the code to reduce noise in the sinograms and to deal with incomplete data, which puts a new perspective on tomography using a low number of projections or a limited angle.
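
    The MLEM update mentioned above has a compact closed form; a minimal dense-matrix sketch is shown below (a real code uses sparse projectors, and OSEM simply cycles the same update over ordered subsets of the rays).

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum Likelihood Expectation Maximization for y ~ Poisson(A @ x).
    A: (n_rays, n_pixels) system matrix; y: measured sinogram, flattened."""
    x = np.ones(A.shape[1])                  # flat non-negative starting image
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# tiny demo on a random toy system
rng = np.random.default_rng(0)
A = rng.random((200, 64)) * 0.1
y = rng.poisson(A @ rng.random(64)).astype(float)
x_hat = mlem(A, y)
```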

  11. Analysing the Effects of Different Land Cover Types on Land Surface Temperature Using Satellite Data

    NASA Astrophysics Data System (ADS)

    Şekertekin, A.; Kutoglu, Ş. H.; Kaya, S.; Marangoz, A. M.

    2015-12-01

    Monitoring Land Surface Temperature (LST) via remote sensing images is one of the most important contributions to climatology. LST is an important parameter governing the energy balance of the Earth, and it also helps us to understand the behavior of urban heat islands. There are many algorithms for obtaining LST by remote sensing techniques. The most commonly used are the split-window algorithm, the temperature/emissivity separation method, the mono-window algorithm and the single channel method. In this research, the mono-window algorithm was applied to a Landsat 5 TM image acquired on 28.08.2011. In addition, meteorological data such as humidity and temperature were used in the algorithm. Moreover, high resolution Geoeye-1 and Worldview-2 images acquired on 29.08.2011 and 12.07.2013, respectively, were used to investigate the relationships between LST and land cover type. As a result of the analyses, the area with vegetation cover has approximately 5 ºC lower temperatures than the city center and arid land. LST values change by about 10 ºC within the city center because of different surface properties such as reinforced-concrete construction, green zones and sandbank. The temperature around some places in the thermal power plant region (ÇATES and ZETES) near Çatalağzı is about 5 ºC higher than the city center. The sandbank and agricultural areas have the highest temperatures due to the land cover structure.
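
    A sketch of the mono-window computation in the form given by Qin et al. (2001). The default a and b coefficients are the values commonly quoted for Landsat 5 TM band 6 and should be checked against the original paper; the transmittance and effective mean atmospheric temperature come from the meteorological data mentioned above.

```python
def mono_window_lst(T_sensor, emissivity, tau, T_air_eff,
                    a=-67.355351, b=0.458606):
    """Mono-window land surface temperature [K] for one thermal band.
    T_sensor: at-sensor brightness temperature [K]
    emissivity: surface emissivity in the band
    tau: total atmospheric transmittance in the band
    T_air_eff: effective mean atmospheric temperature [K]
    a, b: band-specific linearization coefficients"""
    C = emissivity * tau
    D = (1.0 - tau) * (1.0 + (1.0 - emissivity) * tau)
    return (a * (1.0 - C - D) + (b * (1.0 - C - D) + C + D) * T_sensor
            - D * T_air_eff) / C

# e.g. brightness temperature 300 K, emissivity 0.97, transmittance 0.80:
print(mono_window_lst(300.0, 0.97, 0.80, 290.0))  # ~304 K surface temperature
```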

  12. A Modified Empirical Wavelet Transform for Acoustic Emission Signal Decomposition in Structural Health Monitoring.

    PubMed

    Dong, Shaopeng; Yuan, Mei; Wang, Qiusheng; Liang, Zhiling

    2018-05-21

    The acoustic emission (AE) method is useful for structural health monitoring (SHM) of composite structures due to its high sensitivity and real-time capability. The main challenge, however, is how to classify the AE data into different failure mechanisms because the detected signals are affected by various factors. Empirical wavelet transform (EWT) is a solution for analyzing the multi-component signals and has been used to process the AE data. In order to solve the spectrum separation problem of the AE signals, this paper proposes a novel modified separation method based on local window maxima (LWM) algorithm. It searches the local maxima of the Fourier spectrum in a proper window, and automatically determines the boundaries of spectrum segmentations, which helps to eliminate the impact of noise interference or frequency dispersion in the detected signal and obtain the meaningful empirical modes that are more related to the damage characteristics. Additionally, both simulation signal and AE signal from the composite structures are used to verify the effectiveness of the proposed method. Finally, the experimental results indicate that the proposed method performs better than the original EWT method in identifying different damage mechanisms of composite structures.
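
    A simplified sketch of the local-window-maxima idea: keep only spectral peaks that dominate a fixed-width window, then place each segmentation boundary at the spectral minimum between adjacent retained peaks. The window width, the peak threshold, and the boundary rule are simplifications of the paper's method.

```python
import numpy as np

def lwm_boundaries(signal, fs, win=32):
    """Spectrum-segmentation boundaries (in Hz) via local window maxima."""
    mag = np.abs(np.fft.rfft(signal))
    half = win // 2
    # a bin is a local-window maximum if it is the largest in its window
    # and is not negligibly small (threshold is an arbitrary choice)
    peaks = [i for i in range(half, mag.size - half)
             if mag[i] == mag[i - half:i + half + 1].max()
             and mag[i] > 0.05 * mag.max()]
    # boundary = spectral minimum between consecutive retained peaks
    bounds = [p1 + int(np.argmin(mag[p1:p2]))
              for p1, p2 in zip(peaks[:-1], peaks[1:])]
    return np.fft.rfftfreq(signal.size, d=1.0 / fs)[bounds]

# toy AE-like signal: two damped tones plus noise
fs = 10_000.0
t = np.arange(0, 0.2, 1.0 / fs)
x = (np.sin(2*np.pi*800*t) * np.exp(-20*t)
     + 0.5*np.sin(2*np.pi*2500*t) * np.exp(-30*t)
     + 0.05*np.random.default_rng(0).normal(size=t.size))
print(lwm_boundaries(x, fs))
```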

  13. A Modified Empirical Wavelet Transform for Acoustic Emission Signal Decomposition in Structural Health Monitoring

    PubMed Central

    Dong, Shaopeng; Yuan, Mei; Wang, Qiusheng; Liang, Zhiling

    2018-01-01

    The acoustic emission (AE) method is useful for structural health monitoring (SHM) of composite structures due to its high sensitivity and real-time capability. The main challenge, however, is how to classify the AE data into different failure mechanisms because the detected signals are affected by various factors. Empirical wavelet transform (EWT) is a solution for analyzing the multi-component signals and has been used to process the AE data. In order to solve the spectrum separation problem of the AE signals, this paper proposes a novel modified separation method based on local window maxima (LWM) algorithm. It searches the local maxima of the Fourier spectrum in a proper window, and automatically determines the boundaries of spectrum segmentations, which helps to eliminate the impact of noise interference or frequency dispersion in the detected signal and obtain the meaningful empirical modes that are more related to the damage characteristics. Additionally, both simulation signal and AE signal from the composite structures are used to verify the effectiveness of the proposed method. Finally, the experimental results indicate that the proposed method performs better than the original EWT method in identifying different damage mechanisms of composite structures. PMID:29883411

  14. Optimal Integration of Departures and Arrivals in Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon Jean

    2013-01-01

    Coordination of operations with spatially and temporally shared resources, such as route segments, fixes, and runways, improves the efficiency of terminal airspace management. Problems in this category are, in general, computationally difficult compared to conventional scheduling problems. This paper presents a fast-time algorithm formulation using a non-dominated sorting genetic algorithm (NSGA). It was first applied to a test problem introduced in the existing literature. The experiment with this test problem showed that the new method can solve the 20-aircraft problem in fast time with a 65% (440 second) delay reduction using shared departure fixes. In order to test its application to a more realistic and complicated problem, the NSGA algorithm was applied to a problem in LAX terminal airspace, where interactions between 28% of LAX arrivals and 10% of LAX departures are resolved by spatial separation in current operations, which may introduce unnecessary delays. In this work, three types of separation - spatial, temporal, and hybrid - were formulated using the new algorithm; hybrid separation combines the temporal and spatial approaches. Results showed that although temporal separation achieved less delay than spatial separation with a small uncertainty buffer, spatial separation outperformed temporal separation when the uncertainty buffer was increased. Hybrid separation introduced much less delay than both the spatial and temporal approaches. For a total of 15 interacting departures and arrivals, compared to spatial separation, the delay reduction of hybrid separation varied between 11% (3.1 minutes) and 64% (10.7 minutes) for uncertainty buffers from 0 to 60 seconds. Furthermore, as a comparison with the NSGA algorithm, a First-Come-First-Serve based heuristic method was implemented for hybrid separation. Experiments showed that the results from the NSGA algorithm have 9% to 42% less delay than the heuristic method across the varied uncertainty buffer sizes.
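
    The backbone of NSGA is the fast non-dominated sort, which ranks candidate schedules into successive Pareto fronts. A self-contained sketch for minimization objectives; the example objective tuples (e.g. total delay, deviation from nominal trajectories) are illustrative.

```python
def fast_non_dominated_sort(objs):
    """objs: list of objective tuples to minimize.
    Returns a list of fronts, each a list of indices into objs."""
    n = len(objs)
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    counts = [0] * n                        # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)             # Pareto-optimal solutions
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# e.g. (total delay [min], trajectory deviation [nmi]) per candidate schedule
print(fast_non_dominated_sort([(10, 3), (8, 5), (9, 4), (11, 4), (12, 6)]))
# -> [[0, 1, 2], [3], [4]]
```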

  15. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. The original problem is thereby turned into a reduced problem involving only the nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third are VP algorithms with the simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth uses only the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm and may render the algorithm hard to converge; 2) the fourth algorithm performs moderately well among the four; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm with the Levenberg-Marquardt method is more effective than its combination with the Gauss-Newton method.
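
    The core idea of the separated (variable projection) approach is that, for fixed nonlinear parameters, the optimal linear coefficients follow from an ordinary linear least-squares solve, so the outer solver sees only the reduced nonlinear problem. A minimal Python sketch for a sum-of-exponentials model follows; the model, the data, and the use of finite-difference Jacobians (rather than the Golub-Pereyra or Kaufman forms discussed above) are illustrative choices.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def vp_residuals(alpha, t, y):
        """Variable-projection residuals: for fixed nonlinear rates alpha,
        the optimal linear coefficients come from a linear least-squares
        solve, so the outer solver sees only the reduced problem."""
        Phi = np.exp(-np.outer(t, alpha))            # basis matrix Phi(alpha)
        c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # inner linear solve
        return Phi @ c - y

    # synthetic data: y = 2 exp(-0.5 t) + exp(-3 t) + noise
    rng = np.random.default_rng(0)
    t = np.linspace(0, 5, 100)
    y = 2 * np.exp(-0.5 * t) + np.exp(-3 * t) + 0.01 * rng.standard_normal(t.size)

    fit = least_squares(vp_residuals, x0=[0.3, 2.0], args=(t, y))
    print(fit.x)   # recovered rates, close to [0.5, 3.0]
    ```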

  16. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern-theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves; between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution, which the jump-diffusion process empirically generates. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.

  17. "Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"

    EPA Science Inventory

    We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 incorporating these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...

  18. Comparison of the MODIS Multilayer Cloud Detection and Thermodynamic Phase Products with CALIPSO and CloudSat

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; King, Michael D.; Wind, Gala; Holz, Robert E.; Ackerman, Steven A.; Nagle, Fred W.

    2008-01-01

    CALIPSO and CloudSat, launched in June 2006, provide global active remote sensing measurements of clouds and aerosols that can be used for validation of a variety of passive imager retrievals derived from instruments flying on the Aqua spacecraft and other A-Train platforms. The most recent processing effort of the MODIS Atmosphere Team, referred to as the "Collection 5" stream, includes a research-level multilayer cloud detection algorithm that uses thermodynamic phase information, derived from a combination of solar and thermal emission bands, to discriminate layers of different phases, as well as true layer-separation discrimination using a moderately absorbing water vapor band. The multilayer detection algorithm is designed to provide a means of assessing the applicability of the 1D cloud models used in the MODIS cloud optical and microphysical product retrievals, which are generated at a 1 km resolution. Using pixel-level collocations of MODIS Aqua, CALIOP, and CloudSat radar measurements, we investigate the global performance of the thermodynamic phase and multilayer cloud detection algorithms.

  19. Rapid, simultaneous and interference-free determination of three rhodamine dyes illegally added into chilli samples using excitation-emission matrix fluorescence coupled with second-order calibration method.

    PubMed

    Chang, Yue-Yue; Wu, Hai-Long; Fang, Huan; Wang, Tong; Liu, Zhi; Ouyang, Yang-Zi; Ding, Yu-Jie; Yu, Ru-Qin

    2018-06-15

    In this study, a smart and green analytical method based on a second-order calibration algorithm coupled with excitation-emission matrix (EEM) fluorescence was developed for the determination of rhodamine dyes illegally added to chilli samples. The proposed method not only has the advantage of higher sensitivity over traditional fluorescence methods but also fully exploits the "second-order advantage". Pure signals of the analytes were successfully extracted from severely interfering EEM profiles using the alternating trilinear decomposition (ATLD) algorithm, even in the presence of common fluorescence problems such as scattering, peak overlap, and unknown interferences. It is worth noting that the unknown interferents can represent different kinds of backgrounds, not only a constant background. In addition, the use of an interpolation method avoided the loss of information on the analytes of interest. The use of "mathematical separation" instead of a complicated "chemical or physical separation" strategy is more effective and environmentally friendly. A series of statistical parameters, including figures of merit and intra-day (≤1.9%) and inter-day (≤6.6%) RSDs, were calculated to validate the accuracy of the proposed method. Furthermore, the authoritative HPLC-FLD method was adopted to verify the qualitative and quantitative results of the proposed method. The comparison of the two methods also showed that the ATLD-EEM method is accurate, rapid, simple, and green, and it is expected to develop into an attractive alternative for the simultaneous and interference-free determination of rhodamine dyes illegally added to complex matrices. Copyright © 2018. Published by Elsevier B.V.

  20. Hydrothermal alteration maps of the central and southern Basin and Range province of the United States compiled from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data

    USGS Publications Warehouse

    Mars, John L.

    2013-01-01

    Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data and Interactive Data Language (IDL) logical operator algorithms were used to map hydrothermally altered rocks in the central and southern parts of the Basin and Range province of the United States. The hydrothermally altered rocks mapped in this study include (1) hydrothermal silica-rich rocks (hydrous quartz, chalcedony, opal, and amorphous silica), (2) propylitic rocks (calcite-dolomite and epidote-chlorite mapped as separate mineral groups), (3) argillic rocks (alunite-pyrophyllite-kaolinite), and (4) phyllic rocks (sericite-muscovite). A series of hydrothermal alteration maps, which identify the potential locations of hydrothermal silica-rich, propylitic, argillic, and phyllic rocks on Landsat Thematic Mapper (TM) band 7 orthorectified images, and geographic information system (GIS) shapefiles of the hydrothermal alteration units are provided in this study.

  1. A quantitative reconstruction software suite for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Namías, Mauro; Jeraj, Robert

    2017-11-01

    Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows for the measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation correction and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid systems or from dedicated SPECT systems with a separate CT scanner. Attenuation, scatter, and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm, and a novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at an organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
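
    For readers unfamiliar with OSEM, the sketch below shows the bare multiplicative update over ordered subsets on a toy dense system matrix. It is a generic illustration, not the software suite described above; in practice the attenuation, scatter, and collimator response corrections enter through the system model.

    ```python
    import numpy as np

    def osem(A, y, n_iter=20, n_subsets=2, eps=1e-12):
        """Bare OSEM: multiplicative EM updates over ordered subsets of
        projection rows. A is a dense (bins x voxels) system matrix; real
        reconstructions fold attenuation, scatter, and collimator response
        into the system model."""
        x = np.ones(A.shape[1])
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        for _ in range(n_iter):
            for rows in subsets:
                As, ys = A[rows], y[rows]
                ratio = ys / np.maximum(As @ x, eps)
                x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(rows)), eps)
        return x

    # toy 2-voxel, 4-bin problem with noiseless data
    A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.2, 0.8]])
    y = A @ np.array([3.0, 5.0])
    print(osem(A, y))   # approaches [3, 5]
    ```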

  2. Enhancements in Deriving Smoke Emission Coefficients from Fire Radiative Power Measurements

    NASA Technical Reports Server (NTRS)

    Ellison, Luke; Ichoku, Charles

    2011-01-01

    Smoke emissions have long been quantified after the fact by simple multiplication of burned area, biomass density, fraction of above-ground biomass, and burn efficiency. A new algorithm has been suggested, as described in Ichoku & Kaufman (2005), for calculating smoke emissions directly from fire radiative power (FRP) measurements, such that the latency and uncertainty associated with the previously listed variables are avoided. Application of this new, simpler, and more direct algorithm is automatic, based only on a fire's FRP measurement and a predetermined coefficient of smoke emission for a given location. Attaining accurate coefficients of smoke emission is therefore critical to the success of this algorithm. In the aforementioned paper, an initial effort was made to derive coefficients of smoke emission for several large regions of interest using calculations of smoke emission rates from MODIS FRP and aerosol optical depth (AOD) measurements. Further work resulted in a first draft of a 1° × 1° resolution map of these coefficients. This poster will present the work done to refine this algorithm toward the first production of global smoke emission coefficients. The main updates to the algorithm include: 1) inclusion of wind vectors to help refine several parameters, 2) new methods for calculating the fire-emitted AOD fractions, and 3) calculation of smoke emission rates on a per-pixel basis, aggregated to grid cells, instead of aggregating later in the process. In addition to a presentation of the methodology used to derive this product, maps displaying preliminary results as well as an outline of future applications of the product in specific research opportunities will be shown.
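
    The FRP-based relation itself is simple: the smoke emission rate is the product of a regional emission coefficient Ce and the measured FRP. A toy Python sketch with hypothetical fire pixels and an illustrative Ce value (not a published coefficient):

    ```python
    # hypothetical fire pixels: (lat, lon, FRP in MW); Ce in kg/MJ is an
    # illustrative value, not a published coefficient
    fires = [(5.2, 20.1, 120.0), (5.3, 20.4, 45.0), (5.7, 20.2, 300.0)]
    Ce = 0.02

    # emission rate per fire: R = Ce * FRP (MW = MJ/s, so R is in kg/s),
    # aggregated here to 1 x 1 degree grid cells
    grid = {}
    for lat, lon, frp in fires:
        cell = (int(lat), int(lon))
        grid[cell] = grid.get(cell, 0.0) + Ce * frp
    print(grid)   # {(5, 20): ~9.3 kg/s}
    ```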

  3. Sensoring fusion data from the optic and acoustic emissions of electric arcs in the GMAW-S process for welding quality assessment.

    PubMed

    Alfaro, Sadek Crisóstomo Absi; Cayo, Eber Huanca

    2012-01-01

    The present study shows the relationship between welding quality and the optical and acoustic emissions from electric arcs during welding runs in the GMAW-S process. Bead-on-plate welding tests were carried out with pre-set parameters chosen from manufacturing standards. During the welding runs, interferences were induced on the welding path using paint, grease, or gas faults. In each welding run, the arc voltage, welding current, infrared emission, and acoustic emission were acquired, and parameters such as arc power, acoustic peak rate, and infrared radiation rate were computed. Data fusion algorithms were developed to assess known welding quality parameters from the arc emissions; these algorithms showed better responses when based on more than one sensor. Finally, it was concluded that there is a close relation between arc emissions and welding quality, and that this quality can be measured through arc emission sensing and data fusion algorithms.

  4. A review on economic emission dispatch problems using quantum computational intelligence

    NASA Astrophysics Data System (ADS)

    Mahdi, Fahad Parvez; Vasant, Pandian; Kallimani, Vish; Abdullah-Al-Wadud, M.

    2016-11-01

    Economic emission dispatch (EED) problems are among the most crucial problems in power systems. Growing energy demand, limited natural resources, and global warming have made this topic a center of discussion and research. This paper reviews the use of Quantum Computational Intelligence (QCI) in solving economic emission dispatch problems. QCI techniques such as the Quantum Genetic Algorithm (QGA) and the Quantum Particle Swarm Optimization (QPSO) algorithm are discussed here. This review is intended to encourage researchers to apply QCI-based algorithms to obtain better optimal results for EED problems.

  5. Improvement in thin cirrus retrievals using an emissivity-adjusted CO2 slicing algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Menzel, W. Paul

    2002-09-01

    CO2 slicing has been generally accepted as a useful algorithm for determining cloud-top pressure (CTP) and effective cloud amount (ECA) for tropospheric clouds above 600 hPa. To date, the technique has assumed that the surface emits as a blackbody in the long-wavelength infrared radiances and that the cloud emissivities in spectrally close bands are approximately equal. The modified CO2 slicing algorithm considers adjustments to both the surface emissivity and the cloud emissivity ratio. Surface emissivity is adjusted according to surface type, and the ratio of cloud emissivities in spectrally close bands is adjusted away from unity according to radiative transfer calculations. The new CO2 slicing algorithm is examined with Moderate Resolution Imaging Spectroradiometer (MODIS) Airborne Simulator (MAS) CO2 band radiance measurements over thin clouds and validated against Cloud Lidar System (CLS) measurements of the same clouds; it is also applied to Geostationary Operational Environmental Satellite (GOES) Sounder data to study the overall impact on cloud property determinations. For high thin clouds an improved product emerges, while for thick and opaque clouds there is little change. For very thin clouds, the CTP increases by about 10-20 hPa with a root mean square (RMS) difference of approximately 50 hPa; for thin clouds, the CTP increase is about 10 hPa with an RMS difference of approximately 30 hPa. The new CO2 slicing algorithm places these clouds lower in the troposphere.
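
    Operationally, CO2 slicing compares the observed ratio of (cloudy minus clear) radiances in two spectrally close CO2 bands with forward-modelled ratios for candidate cloud-top pressures; the modification described above additionally adjusts the theoretical ratio away from unity using the cloud emissivity ratio. A minimal Python sketch of the matching step, with all radiances and ratio profiles as illustrative stand-ins:

    ```python
    import numpy as np

    def co2_slicing_ctp(p_levels, ratio_theory, obs1, clr1, obs2, clr2):
        """Pick the cloud-top pressure whose forward-modelled two-band
        radiance ratio best matches the observed (cloudy - clear) ratio.
        All inputs are illustrative stand-ins."""
        r_meas = (obs1 - clr1) / (obs2 - clr2)
        k = np.argmin(np.abs(np.asarray(ratio_theory) - r_meas))
        return p_levels[k]

    p = [300, 400, 500, 600]            # hPa, candidate cloud tops
    theory = [0.35, 0.55, 0.75, 0.90]   # hypothetical band-pair ratios
    print(co2_slicing_ctp(p, theory, obs1=42.0, clr1=50.0,
                          obs2=60.0, clr2=70.0))   # ratio 0.8 -> 500 hPa
    ```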

  6. Smartphone-Based Indoor Localization with Bluetooth Low Energy Beacons

    PubMed Central

    Zhuang, Yuan; Yang, Jun; Li, You; Qi, Longning; El-Sheimy, Naser

    2016-01-01

    Indoor wireless localization using Bluetooth Low Energy (BLE) beacons has attracted considerable attention since the release of the BLE protocol. In this paper, we propose an algorithm that combines a channel-separate polynomial regression model (PRM), channel-separate fingerprinting (FP), outlier detection, and extended Kalman filtering (EKF) for smartphone-based indoor localization with BLE beacons. The proposed algorithm uses FP to estimate the target's location and PRM to estimate the distances between the target and the BLE beacons. We compare the performance of distance estimation using a separate PRM for each of the three advertisement channels (the separate strategy) with that of an aggregate PRM generated by combining information from all channels (the aggregate strategy); the FP-based location estimates of the two strategies are also compared. The separate strategy was found to provide higher accuracy; thus, it is preferable to adopt PRM and FP for each BLE advertisement channel separately. Furthermore, to enhance the robustness of the algorithm, a two-level outlier detection mechanism is designed. Distance and location estimates obtained from PRM and FP are passed to the first outlier detection stage to generate improved distance estimates for the EKF; after the EKF process, a second outlier detection algorithm based on statistical testing removes the remaining outliers. The proposed algorithm was evaluated in various field experiments. Results show that it achieved an accuracy of <2.56 m 90% of the time with dense deployment of BLE beacons (1 beacon per 9 m), which is 35.82% better than the <3.99 m of the Propagation Model (PM) + EKF algorithm and 15.77% more accurate than the <3.04 m of the FP + EKF algorithm. With sparse deployment (1 beacon per 18 m), the proposed algorithm achieves an accuracy of <3.88 m 90% of the time, which is 49.58% more accurate than the <8.00 m of the PM + EKF algorithm and 21.41% better than the <4.94 m of the FP + EKF algorithm. The proposed algorithm is therefore especially useful for improving localization accuracy in environments with sparse beacon deployment. PMID:27128917
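
    The channel-separate PRM amounts to fitting one RSSI-to-distance regression per BLE advertisement channel (37, 38, 39). A minimal Python sketch with hypothetical calibration data follows; the polynomial degree and values are illustrative, and the paper's FP, outlier detection, and EKF stages are not shown.

    ```python
    import numpy as np

    # hypothetical calibration pairs (RSSI dBm, distance m) per BLE
    # advertisement channel, collected at known positions
    calib = {
        37: [(-55, 1.0), (-65, 2.0), (-72, 4.0), (-80, 8.0)],
        38: [(-57, 1.0), (-66, 2.0), (-74, 4.0), (-82, 8.0)],
        39: [(-54, 1.0), (-63, 2.0), (-71, 4.0), (-79, 8.0)],
    }

    # channel-separate PRM: one polynomial RSSI -> distance per channel
    prm = {}
    for ch, pairs in calib.items():
        rssi, dist = map(np.array, zip(*pairs))
        prm[ch] = np.polyfit(rssi, dist, deg=2)

    def estimate_distance(channel, rssi):
        """Evaluate the per-channel polynomial (the separate strategy)."""
        return float(np.polyval(prm[channel], rssi))

    print(estimate_distance(38, -70.0))   # distance estimate in metres
    ```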

  8. Improvements to an earth observing statistical performance model with applications to LWIR spectral variability

    NASA Astrophysics Data System (ADS)

    Zhao, Runchen; Ientilucci, Emmett J.

    2017-05-01

    Hyperspectral remote sensing systems provide spectral data composed of hundreds of narrow spectral bands and can be used, for example, to identify targets without physical interaction. It is often of interest to characterize the spectral variability of targets or objects. The purpose of this paper is to identify and characterize the LWIR spectral variability of targets based on an improved earth-observing statistical performance model, known as the Forecasting and Analysis of Spectroradiometric System Performance (FASSP) model. FASSP contains three basic modules: a scene model, a sensor model, and a processing model. Instead of using only the mean surface reflectance as input, FASSP propagates user-defined statistical characteristics of a scene through the image chain (i.e., from source to sensor). The MODTRAN radiative transfer model is used to simulate radiative transfer based on user-defined atmospheric parameters. To retrieve class emissivity and temperature statistics, i.e., to perform temperature/emissivity separation (TES), an LWIR atmospheric compensation method is necessary. The FASSP model has a method for transforming statistics in the visible (i.e., ELM) but did not previously have an LWIR TES algorithm in place. This paper addresses the implementation of such a TES algorithm and the associated transformation of statistics.

  9. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  10. Independent component analysis algorithm FPGA design to perform real-time blind source separation

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe; Odom, Crispin; Botella, Guillermo; Meyer-Baese, Anke

    2015-05-01

    The conditions that arise in the cocktail party problem prevail across many fields, creating a need for blind source separation (BSS). BSS has become prevalent in several fields, including array processing, communications, medical and speech signal processing, wireless communication, audio and acoustics, and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms, which prove useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms to perform blind source separation of mixed signals in software, and of their implementation in hardware on a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm was the one requiring the least complexity and fewest resources while effectively separating the mixed sources; this was the EASI algorithm. The EASI ICA was then implemented on an FPGA to perform blind source separation and to analyze its performance in real time.
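
    For reference, the EASI update adapts an unmixing matrix W sample by sample using a relative-gradient rule. The sketch below is a plain Python/NumPy illustration with an assumed cubic nonlinearity and step size, not the FPGA implementation studied above.

    ```python
    import numpy as np

    def easi(X, mu=0.002, g=lambda y: y ** 3):
        """EASI adaptive source separation: update the unmixing matrix W
        sample by sample with a relative-gradient rule (plain illustration;
        the nonlinearity and step size are assumptions)."""
        n = X.shape[0]
        W = np.eye(n)
        for k in range(X.shape[1]):
            y = W @ X[:, k]
            gy = g(y)
            W -= mu * (np.outer(y, y) - np.eye(n)
                       + np.outer(gy, y) - np.outer(y, gy)) @ W
        return W

    # two bounded sources mixed by a random matrix
    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 5000)
    S = np.vstack([np.sign(np.sin(2 * np.pi * 3 * t)),
                   np.sin(2 * np.pi * 5 * t)])
    A_mix = rng.normal(size=(2, 2))
    W = easi(A_mix @ S)
    print(np.round(W @ A_mix, 2))   # near a scaled permutation if separated
    ```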

  11. The Search for Effective Algorithms for Recovery from Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narawicz, Anthony J.

    2012-01-01

    Our previous work presented an approach for developing high confidence algorithms for recovering aircraft from loss of separation situations. The correctness theorems for the algorithms relied on several key assumptions, namely that state data for all local aircraft is perfectly known, that resolution maneuvers can be achieved instantaneously, and that all aircraft compute resolutions using exactly the same data. Experiments showed that these assumptions were adequate in cases where the aircraft are far away from losing separation, but are insufficient when the aircraft have already lost separation. This paper describes the results of this experimentation and proposes a new criteria specification for loss of separation recovery that preserves the formal safety properties of the previous criteria while overcoming some key limitations. Candidate algorithms that satisfy the new criteria are presented.

  12. Mapping advanced argillic alteration zones with ASTER and Hyperion data in the Andes Mountains of Peru

    NASA Astrophysics Data System (ADS)

    Ramos, Yuddy; Goïta, Kalifa; Péloquin, Stéphane

    2016-04-01

    This study evaluates Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) and Hyperion hyperspectral sensor datasets for detecting advanced argillic minerals. The spectral signatures of some alteration clay minerals, such as dickite and alunite, have similar absorption features, so separating them using multispectral satellite images is a complex challenge; Hyperion, however, with its fine spectral bands, has the potential to separate such features. The Spectral Angle Mapper algorithm was used in this study to map three advanced argillic alteration minerals (alunite, kaolinite, and dickite) in a known alteration zone in the Peruvian Andes. The results from ASTER and Hyperion were analyzed, compared, and validated using a Portable Infrared Mineral Analyzer field spectrometer. The alterations corresponding to kaolinite and alunite were detected with both ASTER and Hyperion (80% to 84% accuracy), whereas dickite was identified only with Hyperion (82% accuracy).
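
    The Spectral Angle Mapper itself reduces to one formula: the angle between a pixel spectrum and a reference spectrum, with small angles declared matches. A minimal Python sketch with hypothetical six-band spectra and an illustrative threshold:

    ```python
    import numpy as np

    def spectral_angle(pixel, reference):
        """Spectral Angle Mapper: angle (radians) between a pixel spectrum
        and a library reference; small angles indicate a likely match."""
        cos = np.dot(pixel, reference) / (np.linalg.norm(pixel)
                                          * np.linalg.norm(reference))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    # hypothetical 6-band reflectance spectra and an illustrative threshold
    alunite_ref = np.array([0.30, 0.28, 0.22, 0.35, 0.15, 0.33])
    pixel = np.array([0.29, 0.27, 0.23, 0.33, 0.16, 0.31])
    print(spectral_angle(pixel, alunite_ref) < 0.10)   # classify as alunite?
    ```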

  13. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Archuleta County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Archuleta County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map; areas between 1σ and 2σ were classified as "warm modeled surface temperature" and are shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").
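
    A minimal Python sketch of an emissivity-normalization TES step of the kind named above: assume a fixed maximum emissivity in every band, invert the Planck function for per-band brightness temperatures, take the warmest as the surface temperature, and solve each band's emissivity from it. The assumed emissivity, band set, and radiances are illustrative, not the values used for these maps.

    ```python
    import numpy as np

    C1 = 1.191042e8   # W um^4 m^-2 sr^-1 (2 h c^2)
    C2 = 1.4387752e4  # um K (h c / k)

    def planck_inv(wl_um, radiance):
        """Brightness temperature (K) from spectral radiance at wl_um."""
        return C2 / (wl_um * np.log(1.0 + C1 / (wl_um ** 5 * radiance)))

    def emissivity_normalization(wl_um, radiance, eps_max=0.96):
        """Assume every band has emissivity eps_max, take the warmest of
        the resulting brightness temperatures as the surface temperature,
        then solve each band's emissivity from it."""
        T = planck_inv(wl_um, radiance / eps_max).max()
        B = C1 / (wl_um ** 5 * (np.exp(C2 / (wl_um * T)) - 1.0))
        return T, radiance / B

    # hypothetical 5-band TIR radiances (W m^-2 sr^-1 um^-1)
    wl = np.array([8.3, 8.65, 9.1, 10.6, 11.3])
    L = np.array([7.5, 7.8, 7.9, 9.4, 9.3])
    T, eps = emissivity_normalization(wl, L)
    print(round(T, 1), np.round(eps, 3))
    ```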

  14. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, San Miguel County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in San Miguel County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map; areas between 1σ and 2σ were classified as "warm modeled surface temperature" and are shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  15. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Fremont County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Fremont County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map; areas between 1σ and 2σ were classified as "warm modeled surface temperature" and are shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  16. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Routt County, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Routt County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map; areas between 1σ and 2σ were classified as "warm modeled surface temperature" and are shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  17. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Alamosa and Saguache Counties, Colorado

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Alamosa and Saguache Counties identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map; areas between 1σ and 2σ were classified as "warm modeled surface temperature" and are shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  18. Surface Temperature Anomalies Derived from Night Time ASTER Data Corrected for Solar and Topographic Effects, Dolores County

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    This map shows areas of anomalous surface temperature in Dolores County identified from ASTER thermal data and a spatially based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm, which separates temperature from emissivity. The incoming solar radiation was calculated using the spatially based insolation model developed by Fu and Rich (1999), and the temperature due to solar radiation was then calculated using emissivity derived from the ASTER data. The residual temperature, i.e., the temperature due to solar radiation subtracted from the ASTER temperature, was used to identify thermally anomalous areas. Areas with residual temperatures greater than 2σ were classified as ASTER-modeled "very warm modeled surface temperature" and are shown in red on the map; areas between 1σ and 2σ were classified as "warm modeled surface temperature" and are shown in yellow. The map also includes the locations of shallow temperature survey points, springs or wells with favorable geochemistry, faults, transmission lines, and areas of modeled basement weakness ("fairways").

  19. Areas of Weakly Anomalous to Anomalous Surface Temperature in Chaffee County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1o and 2o above the mean, as opposed to the greater than 2o temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Chaffee County identified from ASTER thermal data and spatial based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm that separate temperature from emissivity. The incoming solar radiation was calculated using spatial based insolation model developed by Fu and Rich (1999). Then the temperature due to solar radiation was calculated using emissivity derived from ASTER data. The residual temperature, i.e. temperature due to solar radiation subtracted from ASTER temperature was used to identify thermally anomalous areas. Areas that had temperature greater than 2o were considered ASTER modeled very warm surface exposures (thermal anomalies). Note: 'o' is used in this description to represent lowercase sigma.

  20. Areas of Weakly Anomalous to Anomalous Surface Temperature in Garfield County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1o and 2o above the mean, as opposed to the greater than 2o temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Garfield County identified from ASTER thermal data and spatial based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm that separate temperature from emissivity. The incoming solar radiation was calculated using spatial based insolation model developed by Fu and Rich (1999). Then the temperature due to solar radiation was calculated using emissivity derived from ASTER data. The residual temperature, i.e. temperature due to solar radiation subtracted from ASTER temperature was used to identify thermally anomalous areas. Areas that had temperature between 1o and 2o were considered ASTER modeled warm surface exposures (thermal anomalies) Note: 'o' is used in this description to represent lowercase sigma.

  1. Areas of Weakly Anomalous to Anomalous Surface Temperature in Routt County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1o and 2o above the mean, as opposed to the greater than 2o temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Routt County identified from ASTER thermal data and spatial based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm that separate temperature from emissivity. The incoming solar radiation was calculated using spatial based insolation model developed by Fu and Rich (1999). Then the temperature due to solar radiation was calculated using emissivity derived from ASTER data. The residual temperature, i.e. temperature due to solar radiation subtracted from ASTER temperature was used to identify thermally anomalous areas. Areas that had temperature between 1o and 2o were considered ASTER modeled warm surface exposures (thermal anomalies). Note: 'o' is used in this description to represent lowercase sigma.

  2. Areas of Weakly Anomalous to Anomalous Surface Temperature in Dolores County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1o and 2o above the mean, as opposed to the greater than 2o temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Dolores County identified from ASTER thermal data and spatial based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm that separate temperature from emissivity. The incoming solar radiation was calculated using spatial based insolation model developed by Fu and Rich (1999). Then the temperature due to solar radiation was calculated using emissivity derived from ASTER data. The residual temperature, i.e. temperature due to solar radiation subtracted from ASTER temperature was used to identify thermally anomalous areas. Areas that had temperature greater than 2o were considered ASTER modeled very warm surface exposures (thermal anomalies) Note: 'o' is used in this description to represent lowercase sigma.

  3. Areas of Weakly Anomalous to Anomalous Surface Temperature in Archuleta County, Colorado, as Identified from ASTER Thermal Data

    DOE Data Explorer

    Khalid Hussein

    2012-02-01

    Note: This "Weakly Anomalous to Anomalous Surface Temperature" dataset differs from the "Anomalous Surface Temperature" dataset for this county (another remotely sensed CIRES product) by showing areas of modeled temperatures between 1o and 2o above the mean, as opposed to the greater than 2o temperatures contained in the "Anomalous Surface Temperature" dataset. This layer contains areas of anomalous surface temperature in Archuleta County identified from ASTER thermal data and spatial based insolation model. The temperature is calculated using the Emissivity Normalization Algorithm that separate temperature from emissivity. The incoming solar radiation was calculated using spatial based insolation model developed by Fu and Rich (1999). Then the temperature due to solar radiation was calculated using emissivity derived from ASTER data. The residual temperature, i.e. temperature due to solar radiation subtracted from ASTER temperature was used to identify thermally anomalous areas. Areas that had temperature between 1o and 2o were considered ASTER modeled warm surface exposures (thermal anomalies). Note: 'o' is used in this description to represent lowercase sigma.

  4. Fast emission estimates in China and South Africa constrained by satellite observations

    NASA Astrophysics Data System (ADS)

    Mijling, Bas; van der A, Ronald

    2013-04-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for emerging economies such as China and South Africa, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. However, constraining emissions from observations of concentrations is computationally challenging. Within the GlobEmission project (part of the Data User Element programme of ESA), a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. By using a Kalman filter in the inverse step, optimal use is made of the a priori knowledge and the newly observed data. We apply the algorithm to NOx emission estimates in East China and South Africa, using the CHIMERE chemical transport model together with tropospheric NO2 column retrievals from the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games and the impact of, and recovery from, the global economic crisis. The algorithm is also able to detect emerging sources (e.g., new power plants) and to improve emission information for areas where proxy data are unknown or poorly known (e.g., shipping emissions). The new emission inventories result in better agreement between observations and simulations of air pollutant concentrations, facilitating improved air quality forecasts.
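
    The inverse step described above is a standard Kalman update of the emission state. The sketch below shows one such update in Python with toy shapes; H stands in for the concentration-to-emission sensitivity derived from the forward run and trajectory analysis, and all numbers are illustrative.

    ```python
    import numpy as np

    def kalman_emission_update(x_prior, P, H, y_obs, R):
        """One Kalman inverse step: update the prior emission vector
        x_prior (covariance P) with observed columns y_obs, given a
        linearized concentration sensitivity H."""
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x_post = x_prior + K @ (y_obs - H @ x_prior)
        P_post = (np.eye(x_prior.size) - K @ H) @ P
        return x_post, P_post

    # toy setup: 3 emission cells seen through 2 column observations
    H = np.array([[0.8, 0.2, 0.0], [0.1, 0.5, 0.4]])
    x0 = np.array([10.0, 20.0, 15.0])         # a priori emissions
    P0 = np.diag([4.0, 4.0, 4.0])
    R = np.diag([1.0, 1.0])
    y = np.array([14.0, 20.0])                # observed columns
    print(kalman_emission_update(x0, P0, H, y, R)[0])
    ```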

  5. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    PubMed

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  7. Automated Conflict Resolution, Arrival Management and Weather Avoidance for ATM

    NASA Technical Reports Server (NTRS)

    Erzberger, H.; Lauderdale, Todd A.; Chu, Yung-Cheng

    2010-01-01

    The paper describes a unified solution to three types of separation assurance problems that occur in en-route airspace: separation conflicts, arrival sequencing, and weather-cell avoidance. Algorithms for solving these problems play a key role in the design of future air traffic management systems such as NextGen. Because these problems can arise simultaneously in any combination, it is necessary to develop integrated algorithms for solving them; a unified and comprehensive solution provides the foundation for a future air traffic management system that requires a high level of automation in separation assurance. The paper describes the three algorithms developed for solving each problem and then shows how they are used sequentially to solve any combination of these problems. The first algorithm resolves loss-of-separation conflicts and is an evolution of an algorithm described in an earlier paper; the new version generates multiple resolutions for each conflict and then selects the one giving the least delay. Two new algorithms, one for sequencing and merging of arrival traffic, referred to as the Arrival Manager, and the other for weather-cell avoidance, are the major focus of the paper. Because these three problems constitute a substantial fraction of the workload of en-route controllers, integrated algorithms to solve them are a basic requirement for automated separation assurance. The paper also reviews the Advanced Airspace Concept, a proposed design for a ground-based system that postulates redundant systems for separation assurance in order to achieve both high levels of safety and airspace capacity. It is proposed that automated separation assurance be introduced operationally in several steps, each step reducing controller workload further while increasing airspace capacity. A fast-time simulation was used to determine performance statistics of the algorithm at up to 3 times current traffic levels.

  8. An algorithm for the estimation of bounds on the emissivity and temperatures from thermal multispectral airborne remotely sensed data

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Quattrochi, D.; Baskin, R.

    1992-01-01

    The effective flux incident upon the detectors of a thermal sensor, after correction for atmospheric effects, is a non-linear combination of the emissivity of the target in that channel and the temperature of the target; the sensor system cannot separate the contributions of emissivity and temperature to the flux value. A method that estimates bounds on these temperatures and emissivities from thermal data is described and then tested with remotely sensed data obtained from NASA's Thermal Infrared Multispectral Scanner (TIMS), a 6-channel thermal sensor. Since this is an under-determined set of equations, i.e., there are 7 unknowns (6 emissivities and 1 temperature) but only 6 equations (corresponding to the 6 channel fluxes), there is in theory an infinite number of combinations of emissivities and temperature that satisfy them. Using realistic bounds on the emissivities, bounds on the temperature are calculated; these temperature bounds are in turn refined to estimate a tighter bound on the emissivity of the source. An error analysis is also carried out to quantify the uncertainty introduced in the estimates of these parameters. The method is useful only when a realistic set of bounds can be obtained for the emissivities; in the case of water, the lower and upper bounds were set at 0.97 and 1.00, respectively. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km. The area selected was the Ross Barnett Reservoir near Jackson, Mississippi, and the mission was flown during the predawn hours of 1 February 1992. Radiosonde data were collected for the duration of the mission to profile the characteristics of the atmosphere, and ground-truth temperatures were obtained over an area of the reservoir using thermometers and radiometers. Two independent runs of the radiometer data averaged 7.03 ± 0.70 and 7.31 ± 0.88, respectively. The algorithm yields temperatures ranging from 7.68 for the low-altitude data to 8.73 for the high-altitude data.
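
    The bounding idea can be sketched compactly: since radiance L = εB(T), a band's radiance together with an emissivity interval [0.97, 1.00] brackets the temperature, and intersecting the per-band intervals tightens the overall bound. The Python sketch below uses illustrative band radiances, not the TIMS data or code.

    ```python
    import numpy as np

    C1, C2 = 1.191042e8, 1.4387752e4   # Planck constants (um-based units)

    def t_of(wl_um, L):
        """Invert the Planck function for temperature (K)."""
        return C2 / (wl_um * np.log(1.0 + C1 / (wl_um ** 5 * L)))

    def temperature_bounds(wl_um, L, eps_lo=0.97, eps_hi=1.00):
        """Each band's radiance plus the emissivity interval brackets the
        temperature; intersecting the per-band intervals tightens the
        overall bound (sketch of the idea, not the TIMS code)."""
        t_min = t_of(wl_um, L / eps_hi)   # blackbody assumption: coolest T
        t_max = t_of(wl_um, L / eps_lo)   # lowest emissivity: warmest T
        return t_min.max(), t_max.min()   # intersect the band intervals

    # hypothetical 6-band radiances (W m^-2 sr^-1 um^-1) over water
    wl = np.array([8.4, 8.8, 9.2, 9.9, 10.7, 11.7])
    L = np.array([6.30, 6.60, 6.81, 7.00, 7.01, 6.78])
    print(temperature_bounds(wl, L))   # bracket around ~281 K
    ```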

  9. Characterization of Methane Emission Sources Using Genetic Algorithms and Atmospheric Transport Modeling

    NASA Astrophysics Data System (ADS)

    Cao, Y.; Cervone, G.; Barkley, Z.; Lauvaux, T.; Deng, A.; Miles, N.; Richardson, S.

    2016-12-01

    Fugitive methane emission rates for the Marcellus shale area are estimated using a genetic algorithm that finds optimal weights minimizing the error between simulated and observed concentrations. The overall goal is to understand the relative contribution of methane from shale gas extraction. Methane sensors installed on four towers in northeastern Pennsylvania have measured atmospheric concentrations since May 2015. Inverse Lagrangian dispersion model runs are performed from each tower location for each hour of 2015. Simulated methane concentrations at each of the four towers are computed by multiplying the resulting footprints from the atmospheric simulations by thousands of emission sources grouped into 11 classes. The emission sources were identified using GIS techniques and include conventional and unconventional wells, different types of compressor stations, pipelines, landfills, farming, and wetlands. Initial estimates for each source class are calculated from emission factors published by the EPA and a few regional studies. A genetic algorithm is then used to identify optimal emission rates for the 11 classes of methane emissions and to explore extreme events and spatial and temporal structures in the emissions associated with natural gas activities.
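
    A minimal Python sketch of a genetic algorithm of this kind: candidate weight vectors for the 11 source classes are scored by the misfit between predicted and observed concentrations, then evolved by selection, crossover, and mutation. The footprint matrix, observations, and GA settings are random stand-ins, not the study's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # F[i, j]: modelled concentration at observation i per unit emission of
    # source class j (stand-in for the transport-model footprints)
    F = rng.uniform(0.0, 1.0, size=(200, 11))
    true_w = rng.uniform(0.5, 2.0, size=11)
    obs = F @ true_w + 0.05 * rng.standard_normal(200)

    def fitness(w):
        return -np.mean((F @ w - obs) ** 2)       # negative misfit

    pop = rng.uniform(0.1, 3.0, size=(60, 11))    # candidate class weights
    for _ in range(300):
        order = np.argsort([fitness(w) for w in pop])[::-1]
        parents = pop[order[:20]]                 # truncation selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(20)], parents[rng.integers(20)]
            child = np.where(rng.random(11) < 0.5, a, b)   # uniform crossover
            child += rng.normal(0.0, 0.05, 11)             # Gaussian mutation
            children.append(np.clip(child, 0.0, None))     # rates stay >= 0
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    print(np.round(best - true_w, 2))   # residual error per source class
    ```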

  10. ELVES Research at the Pierre Auger Observatory: Optical Emission Simulation and Time Evolution, WWLLN-LIS-Auger Correlations, and Double ELVES Observations and Simulation.

    NASA Astrophysics Data System (ADS)

    Merenda, K. D.

    2016-12-01

    Since 2013, the Pierre Auger Cosmic Ray Observatory in Mendoza, Argentina, has extended its trigger algorithm to detect emissions of light consistent with the signature of ELVES (emissions of light and very low frequency perturbations due to electromagnetic pulse sources). Correlations with the World Wide Lightning Location Network (WWLLN), the Lightning Imaging Sensor (LIS), and simulated events were used to assess the quality of the reconstructed data. The fluorescence detector (FD) is a pixel-array telescope sensitive to the deep-UV emissions of ELVES, and it provides the finest time resolution, 100 nanoseconds, ever applied to the study of ELVES. Four eyes, separated by approximately 40 kilometers, each consist of six telescopes and together span a total of 360 degrees in azimuth. The detector operates at night when storms are not in the field of view. An existing 3D EMP model solves Maxwell's equations using a three-dimensional finite-difference time-domain method to describe the propagation of electromagnetic pulses from lightning sources to the ionosphere; the simulation also projects the resulting ELVES onto the pixel array of the FD. A full reconstruction of simulated events is under development. We introduce a comparison of the analog signal time evolution between Auger reconstructed data and simulated events on individual FD pixels. In conjunction, we will present a study of the angular distribution of light emission around the vertical above the causative lightning source. We will also contrast, with Monte Carlo simulations, Auger double-ELVES events separated by at most 5 microseconds; these events are too short to be explained by multiple return strokes, ground reflections, or compact intra-cloud lightning sources. Reconstructed ELVES data are 40% correlated with WWLLN data, and an analysis with the LIS database is underway.

  11. Probabilistic location estimation of acoustic emission sources in isotropic plates with one sensor

    NASA Astrophysics Data System (ADS)

    Ebrahimkhanlou, Arvin; Salamone, Salvatore

    2017-04-01

    This paper presents a probabilistic acoustic emission (AE) source localization algorithm for isotropic plate structures. The proposed algorithm requires only one sensor and uniformly monitors the entire area of such plates without any blind zones. In addition, it takes a probabilistic approach and quantifies localization uncertainties. The algorithm combines a modal acoustic emission (MAE) technique and a reflection-based technique to obtain information on the location of AE sources. To estimate confidence contours for the locations of sources, uncertainties are quantified and propagated through the two techniques. The approach was validated using standard pencil lead break (PLB) tests on an aluminum plate. The results demonstrate that the proposed source localization algorithm successfully estimates confidence contours for the locations of AE sources.

  12. Land surface temperature measurements from EOS MODIS data

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming

    1994-01-01

    A generalized split-window method for retrieving land-surface temperature (LST) from AVHRR and MODIS data has been developed. Accurate radiative transfer simulations show that the coefficients in the split-window algorithm for LST must depend on the viewing angle if we are to achieve an LST accuracy of about 1 K for the whole scan swath range (+/-55.4 deg and +/-55 deg from nadir for AVHRR and MODIS, respectively) and for the ranges of surface temperature and atmospheric conditions over land, which are much wider than those over oceans. We obtain these coefficients from regression analysis of radiative transfer simulations, and we analyze sensitivity and error by using results from systematic radiative transfer simulations over wide ranges of surface temperatures and emissivities, and of atmospheric water vapor abundances and temperatures. Simulations indicate that when atmospheric column water vapor increases and the viewing angle exceeds 45 deg, it is necessary to optimize the split-window method by splitting the ranges of atmospheric column water vapor, lower boundary temperature, and surface temperature into tractable sub-ranges. The atmospheric lower boundary temperature and (vertical) column water vapor values retrieved from HIRS/2 or MODIS atmospheric sounding channels can be used to determine the range for which the optimum coefficients of the split-window method are given. This new LST algorithm not only retrieves LST more accurately but is also less sensitive than viewing-angle-independent LST algorithms to uncertainty in the band emissivities of the land surface in the split window and to instrument noise.
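
    The generalized split-window form referred to above combines the mean and difference of the two band brightness temperatures with emissivity-dependent coefficients. The sketch below shows that structure only; the default coefficients are placeholders, whereas the actual algorithm selects regression coefficients per viewing angle and per sub-range of column water vapor and boundary temperature.

```python
# Hedged sketch of a generalized split-window LST retrieval.
# T31, T32: brightness temperatures (K) in the two split-window bands;
# e31, e32: band emissivities. A*, B*, C are illustrative placeholders,
# not the regression values of the operational algorithm.
def split_window_lst(T31, T32, e31, e32,
                     C=1.0, A1=3.8, A2=40.0, A3=-75.0,
                     B1=4.7, B2=100.0, B3=-120.0):
    eps = 0.5 * (e31 + e32)    # mean band emissivity
    deps = e31 - e32           # band emissivity difference
    mean_T = 0.5 * (T31 + T32)
    diff_T = 0.5 * (T31 - T32)
    return (C
            + (A1 + A2 * (1 - eps) / eps + A3 * deps / eps**2) * mean_T
            + (B1 + B2 * (1 - eps) / eps + B3 * deps / eps**2) * diff_T)
```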

  13. Partitioning sparse matrices with eigenvectors of graphs

    NASA Technical Reports Server (NTRS)

    Pothen, Alex; Simon, Horst D.; Liou, Kang-Pu

    1990-01-01

    The problem of computing a small vertex separator in a graph arises in the context of computing a good ordering for the parallel factorization of sparse, symmetric matrices. An algebraic approach for computing vertex separators is considered in this paper. It is shown that lower bounds on separator sizes can be obtained in terms of the eigenvalues of the Laplacian matrix associated with a graph. The Laplacian eigenvectors of grid graphs can be computed from Kronecker products involving the eigenvectors of path graphs, and these eigenvectors can be used to compute good separators in grid graphs. A heuristic algorithm is designed to compute a vertex separator in a general graph by first computing an edge separator in the graph from an eigenvector of the Laplacian matrix, and then using a maximum matching in a subgraph to compute the vertex separator. Results on the quality of the separators computed by the spectral algorithm are presented and compared with separators obtained from other algorithms. Finally, the time required to compute the Laplacian eigenvector is reported, and the accuracy with which the eigenvector must be computed to obtain good separators is considered. The spectral algorithm has the advantage that it can be implemented on a medium-size multiprocessor in a straightforward manner.
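
    A minimal sketch of the spectral edge-separator step described above: compute the eigenvector of the second-smallest Laplacian eigenvalue (the Fiedler vector) and split the vertices at its median. The maximum-matching refinement from edge separator to vertex separator is omitted.

```python
# Hedged sketch of spectral bisection via the Fiedler vector.
import numpy as np
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh

def spectral_bisection(adjacency):
    """adjacency: symmetric scipy sparse matrix of an undirected graph."""
    L = csgraph.laplacian(adjacency).asfptype()
    vals, vecs = eigsh(L, k=2, which="SM")       # two smallest eigenpairs
    fiedler = vecs[:, np.argsort(vals)[1]]       # second-smallest eigenvector
    return fiedler >= np.median(fiedler)         # boolean part assignment
```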

  14. SSULI/SSUSI UV Tomographic Images of Large-Scale Plasma Structuring

    NASA Astrophysics Data System (ADS)

    Hei, M. A.; Budzien, S. A.; Dymond, K.; Paxton, L. J.; Schaefer, R. K.; Groves, K. M.

    2015-12-01

    We present a new technique that creates tomographic reconstructions of atmospheric ultraviolet emission based on data from the Special Sensor Ultraviolet Limb Imager (SSULI) and the Special Sensor Ultraviolet Spectrographic Imager (SSUSI), both flown on the Defense Meteorological Satellite Program (DMSP) Block 5D3 series satellites. Until now, the data from these two instruments have been used independently of each other. The new algorithm combines SSULI/SSUSI measurements of 135.6 nm emission using the tomographic technique; the resultant data product - whole-orbit reconstructions of atmospheric volume emission within the satellite orbital plane - is substantially improved over the original data sets. Tests using simulated atmospheric emission verify that the algorithm performs well in a variety of situations, including daytime, nighttime, and even the challenging terminator regions. A comparison with ALTAIR radar data validates that the volume emission reconstructions can be inverted to yield maps of electron density. The algorithm incorporates several innovative features, including the use of both SSULI and SSUSI data to create tomographic reconstructions, the use of an inversion algorithm (Richardson-Lucy; RL) that explicitly accounts for the Poisson statistics inherent in optical measurements, and a pseudo-diffusion-based regularization scheme applied between iterations of the RL code. The algorithm also explicitly accounts for extinction due to absorption by molecular oxygen.
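
    The Richardson-Lucy (RL) scheme mentioned above is a multiplicative update suited to Poisson-distributed counts. The sketch below shows a generic RL iteration for a linear system y ≈ Ax; the paper's pseudo-diffusion regularization between iterations and the SSULI/SSUSI geometry are omitted.

```python
# Hedged sketch of a Richardson-Lucy iteration for y ~ Poisson(A x).
import numpy as np

def richardson_lucy(A, y, n_iter=50):
    """A: (n_rays, n_voxels) projection matrix; y: measured photon counts."""
    y = np.asarray(y, dtype=float)
    x = np.ones(A.shape[1])                      # flat initial emission
    sens = A.T @ np.ones_like(y)                 # per-voxel sensitivity
    for _ in range(n_iter):
        pred = A @ x                             # forward projection
        ratio = np.divide(y, pred, out=np.zeros_like(y), where=pred > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
    return x
```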

  15. Multi-scale graph-cut algorithm for efficient water-fat separation.

    PubMed

    Berglund, Johan; Skorpil, Mikael

    2017-09-01

    The aim was to improve the accuracy and robustness to noise of water-fat separation by unifying the multi-scale and graph-cut approaches to B0 correction. A previously proposed water-fat separation algorithm that corrects for B0 field inhomogeneity in 3D by a single quadratic pseudo-Boolean optimization (QPBO) graph cut was incorporated into a multi-scale framework, where field map solutions are propagated from coarse to fine scales for voxels that are not resolved by the graph cut. The accuracy of the single-scale and multi-scale QPBO algorithms was evaluated against benchmark reference datasets. The robustness to noise was evaluated by adding noise to the input data prior to water-fat separation. Both algorithms achieved the highest accuracy when compared with seven previously published methods, while computation times were acceptable for implementation in clinical routine. The multi-scale algorithm was more robust to noise than the single-scale algorithm, while causing only a small increase (+10%) in reconstruction time. The proposed 3D multi-scale QPBO algorithm offers accurate water-fat separation, robustness to noise, and fast reconstruction. The software implementation is freely available to the research community. Magn Reson Med 78:941-949, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  16. High temperature thermometric phosphors

    DOEpatents

    Allison, Stephen W.; Cates, Michael R.; Boatner, Lynn A.; Gillies, George T.

    1999-03-23

    A high temperature phosphor consists essentially of a material having the general formula LuPO4:Dy(x),Eu(y), wherein 0.1 wt % ≤ x ≤ 20 wt % and 0.1 wt % ≤ y ≤ 20 wt %. The high temperature phosphor is in contact with an article whose temperature is to be determined. The article having the phosphor in contact with it is placed in the environment for which the temperature of the article is to be determined. The phosphor is excited by a laser causing the phosphor to fluoresce. The emission from the phosphor is optically focused into a beam-splitting mirror which separates the emission into two separate emissions, the emission caused by the dysprosium dopant and the emission caused by the europium dopant. The separated emissions are optically filtered and the intensities of the emission are detected and measured. The ratio of the intensity of each emission is determined and the temperature of the article is calculated from the ratio of the intensities of the separate emissions.
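
    The ratio method claimed in this patent reduces, computationally, to inverting a monotonic calibration curve of the two emission intensities. The sketch below illustrates that inversion; the calibration table is invented for illustration and is not measured LuPO4:Dy,Eu data.

```python
# Hedged sketch of ratio-based phosphor thermometry: temperature is read
# off a calibration curve of the Dy/Eu emission intensity ratio.
import numpy as np

calib_T = np.array([300.0, 600.0, 900.0, 1200.0, 1500.0])   # kelvin (illustrative)
calib_ratio = np.array([0.2, 0.5, 1.1, 2.0, 3.4])           # I_Dy / I_Eu (illustrative)

def temperature_from_ratio(i_dy, i_eu):
    ratio = i_dy / i_eu
    # invert the monotonic calibration curve by interpolation
    return np.interp(ratio, calib_ratio, calib_T)
```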

  17. High temperature thermometric phosphors

    DOEpatents

    Allison, S.W.; Cates, M.R.; Boatner, L.A.; Gillies, G.T.

    1999-03-23

    A high temperature phosphor consists essentially of a material having the general formula LuPO4:Dy(x),Eu(y), wherein 0.1 wt % ≤ x ≤ 20 wt % and 0.1 wt % ≤ y ≤ 20 wt %. The high temperature phosphor is in contact with an article whose temperature is to be determined. The article having the phosphor in contact with it is placed in the environment for which the temperature of the article is to be determined. The phosphor is excited by a laser causing the phosphor to fluoresce. The emission from the phosphor is optically focused into a beam-splitting mirror which separates the emission into two separate emissions, the emission caused by the dysprosium dopant and the emission caused by the europium dopant. The separated emissions are optically filtered and the intensities of the emission are detected and measured. The ratio of the intensity of each emission is determined and the temperature of the article is calculated from the ratio of the intensities of the separate emissions. 2 figs.

  18. High temperature thermometric phosphors for use in a temperature sensor

    DOEpatents

    Allison, S.W.; Cates, M.R.; Boatner, L.A.; Gillies, G.T.

    1998-03-24

    A high temperature phosphor consists essentially of a material having the general formula LuPO4:Dy(x),Eu(y), wherein 0.1 wt % ≤ x ≤ 20 wt % and 0.1 wt % ≤ y ≤ 20 wt %. The high temperature phosphor is in contact with an article whose temperature is to be determined. The article having the phosphor in contact with it is placed in the environment for which the temperature of the article is to be determined. The phosphor is excited by a laser causing the phosphor to fluoresce. The emission from the phosphor is optically focused into a beam-splitting mirror which separates the emission into two separate emissions, the emission caused by the dysprosium dopant and the emission caused by the europium dopant. The separated emissions are optically filtered and the intensities of the emission are detected and measured. The ratio of the intensity of each emission is determined and the temperature of the article is calculated from the ratio of the intensities of the separate emissions. 2 figs.

  19. Fast Emission Estimates in China Constrained by Satellite Observations (Invited)

    NASA Astrophysics Data System (ADS)

    Mijling, B.; van der A, R.

    2013-12-01

    Emission inventories of air pollutants are crucial information for policy makers and form important input data for air quality models. Unfortunately, bottom-up emission inventories, compiled from large quantities of statistical data, are easily outdated for an emerging economy such as China, where rapid economic growth changes emissions accordingly. Alternatively, top-down emission estimates from satellite observations of air constituents have the important advantages of being spatially consistent, having high temporal resolution, and enabling emission updates shortly after the satellite data become available. Constraining emissions from concentration measurements is, however, computationally challenging. Within the GlobEmission project of the European Space Agency (ESA), a new algorithm has been developed, specifically designed for fast daily emission estimates of short-lived atmospheric species on a mesoscopic scale (0.25 × 0.25 degree) from satellite observations of column concentrations. The algorithm needs only one forward model run from a chemical transport model to calculate the sensitivity of concentration to emission, using trajectory analysis to account for transport away from the source. A Kalman filter in the inverse step makes optimal use of the a priori knowledge and the newly observed data. We apply the algorithm to NOx emission estimates in East China, using the CHIMERE model together with tropospheric NO2 column retrievals of the OMI and GOME-2 satellite instruments. The observations are used to construct a monthly emission time series, which reveals important emission trends such as the emission reduction measures during the Beijing Olympic Games, and the impact of and recovery from the global economic crisis. The algorithm is also able to detect emerging sources (e.g. new power plants) and to improve emission information for areas where proxy data are missing or poorly known (e.g. shipping emissions). The new emission estimates result in a better agreement between observations and simulations of air pollutant concentrations, facilitating improved air quality forecasts. The EU project MarcoPolo will combine these emission estimates from space with statistical information on e.g. land use, population density and traffic to construct a new up-to-date emission inventory for China.
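
    A minimal sketch of the Kalman filter analysis step used in the inverse part of such an algorithm: the emission state is corrected by the innovation between observed and simulated columns, with H the sensitivity of concentration to emission obtained from the single forward model run plus trajectory analysis. All shapes and noise covariances are illustrative assumptions.

```python
# Hedged sketch of one Kalman filter update of an emission-state vector.
import numpy as np

def kalman_update(x_prior, P_prior, y, H, R):
    """x_prior: prior emissions; P_prior: prior covariance;
    y: observed columns; H: sensitivity matrix; R: observation noise."""
    S = H @ P_prior @ H.T + R                  # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x_prior + K @ (y - H @ x_prior)   # correct with observations
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```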

  20. Simultaneous determination of umbelliferone and scopoletin in Tibetan medicine Saussurea laniceps and traditional Chinese medicine Radix angelicae pubescentis using excitation-emission matrix fluorescence coupled with second-order calibration method

    NASA Astrophysics Data System (ADS)

    Wang, Li; Wu, Hai-Long; Yin, Xiao-Li; Hu, Yong; Gu, Hui-Wen; Yu, Ru-Qin

    2017-01-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method is presented for the simultaneous determination of umbelliferone and scopoletin in the Tibetan medicine Saussurea laniceps (SL) and the traditional Chinese medicine Radix angelicae pubescentis (RAP). Using the strategy of combining EEM fluorescence data with a second-order calibration method based on the alternating trilinear decomposition (ATLD) algorithm, the simultaneous quantification of umbelliferone and scopoletin in the two different complex systems was achieved successfully, even in the presence of potential interferents. The pretreatment is simple owing to the "second-order advantage" and the use of "mathematical separation" instead of awkward "physical or chemical separation". Satisfactory results were achieved, with limits of detection (LODs) for umbelliferone and scopoletin of 0.06 and 0.16 ng/mL, respectively. The average spike recoveries of umbelliferone and scopoletin are 98.8 ± 4.3% and 102.5 ± 3.3%, respectively. In addition, an HPLC-DAD method was used to further validate the presented strategy, and a t-test indicates that the prediction results of the two methods have no significant differences. These results imply that our method is fast, low-cost and sensitive compared with the HPLC-DAD method.

  1. Sensoring Fusion Data from the Optic and Acoustic Emissions of Electric Arcs in the GMAW-S Process for Welding Quality Assessment

    PubMed Central

    Alfaro, Sadek Crisóstomo Absi; Cayo, Eber Huanca

    2012-01-01

    The present study shows the relationship between welding quality and optical-acoustic emissions from electric arcs during welding runs in the GMAW-S process. Bead-on-plate welding tests were carried out with pre-set parameters chosen from manufacturing standards. During the welding runs, interferences were induced on the welding path using paint, grease or gas faults. In each welding run, arc voltage, welding current, infrared and acoustic emission values were acquired, and parameters such as arc power, acoustic peak rate and infrared radiation rate were computed. Data fusion algorithms were developed for assessing known welding quality parameters from arc emissions. These algorithms showed better responses when based on more than one sensor. Finally, it was concluded that there is a close relation between arc emissions and welding quality, and that it can be measured by arc emission sensing and data fusion algorithms. PMID:22969330

  2. Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm

    PubMed Central

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    Objective: This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104

  3. Separation of left and right lungs using 3-dimensional information of sequential computed tomography images and a guided dynamic programming algorithm.

    PubMed

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

    This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.

  4. System and method for resolving gamma-ray spectra

    DOEpatents

    Gentile, Charles A.; Perry, Jason; Langish, Stephen W.; Silber, Kenneth; Davis, William M.; Mastrovito, Dana

    2010-05-04

    A system for identifying radionuclide emissions is described. The system includes at least one processor for processing output signals from a radionuclide detecting device, at least one training algorithm run by the at least one processor for analyzing data derived from at least one set of known sample data from the output signals, at least one classification algorithm derived from the training algorithm for classifying unknown sample data, wherein the at least one training algorithm analyzes the at least one sample data set to derive at least one rule used by said classification algorithm for identifying at least one radionuclide emission detected by the detecting device.

  5. Bridging Ground Validation and Algorithms: Using Scattering and Integral Tables to Incorporate Observed DSD Correlations into Satellite Algorithms

    NASA Astrophysics Data System (ADS)

    Williams, C. R.

    2012-12-01

    The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees of freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found that the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) follow a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, where Nw determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves with individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency-dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies below 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V-polarization at the instrument view angles of nadir to 17 degrees (for DPR) and 48 and 53 degrees off nadir (for GMI). The GPM DSD Working Group is generating integral tables with GV-observed DSD correlations and is performing sensitivity and verification tests. One advantage of keeping scattering tables separate from integral tables is that research can progress on the electromagnetic scattering of particles independently of cloud microphysics research. Another advantage is that multiple scattering tables will be needed for frozen precipitation; scattering tables are being developed for individual frozen particles based on habit, density and operating frequency. A third advantage of keeping scattering and integral tables separate is that this framework provides an opportunity to communicate GV findings about DSD correlations into integral tables, and thus into satellite algorithms.
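
    For a normalized gamma DSD the mass-spectrum standard deviation obeys Sm = Dm / sqrt(4 + mu), so an Sm-Dm power law fixes the shape parameter mu and collapses the DSD to two parameters (Nw, Dm), as described above. The sketch below implements that reduction; the power-law coefficients are placeholders, not the Working Group's fit.

```python
# Hedged sketch: two-parameter normalized gamma DSD under an assumed
# Sm = a * Dm**b power law (a, b illustrative).
import numpy as np
from scipy.special import gamma as gamma_fn

def gamma_dsd(D, Nw, Dm, a=0.30, b=1.0):
    Sm = a * Dm**b                           # assumed power-law relation
    mu = (Dm / Sm) ** 2 - 4.0                # shape parameter implied by Sm
    f_mu = (6.0 / 4.0**4) * (4.0 + mu) ** (mu + 4.0) / gamma_fn(mu + 4.0)
    return Nw * f_mu * (D / Dm) ** mu * np.exp(-(4.0 + mu) * D / Dm)
```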

  6. High temperature thermometric phosphors for use in a temperature sensor

    DOEpatents

    Allison, Stephen W.; Cates, Michael R.; Boatner, Lynn A.; Gillies, George T.

    1998-01-01

    A high temperature phosphor consists essentially of a material having the general formula LuPO4:Dy(x),Eu(y), wherein 0.1 wt % ≤ x ≤ 20 wt % and 0.1 wt % ≤ y ≤ 20 wt %. The high temperature phosphor is in contact with an article whose temperature is to be determined. The article having the phosphor in contact with it is placed in the environment for which the temperature of the article is to be determined. The phosphor is excited by a laser causing the phosphor to fluoresce. The emission from the phosphor is optically focused into a beam-splitting mirror which separates the emission into two separate emissions, the emission caused by the dysprosium dopant and the emission caused by the europium dopant. The separated emissions are optically filtered and the intensities of the emission are detected and measured. The ratio of the intensity of each emission is determined and the temperature of the article is calculated from the ratio of the intensities of the separate emissions.

  7. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    PubMed Central

    Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888
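
    A minimal sketch of the separable least-squares idea, under the assumption of a generic exponential basis standing in for the convolved compartment-model basis functions: for fixed nonlinear parameters the linear coefficients have a closed-form solution, so the outer search runs only over the nonlinear parameters.

```python
# Hedged sketch of separable (variable projection) least squares.
import numpy as np
from scipy.optimize import minimize

def residual_norm(theta, t, tac):
    A = np.exp(-np.outer(t, theta))                   # basis exp(-theta_j * t)
    coeffs, *_ = np.linalg.lstsq(A, tac, rcond=None)  # inner linear solve
    return np.sum((A @ coeffs - tac) ** 2)

def fit_separable(t, tac, theta0=(0.1, 1.0)):
    # outer search over nonlinear parameters only (dimension reduced)
    res = minimize(residual_norm, x0=np.array(theta0), args=(t, tac),
                   method="Nelder-Mead")
    return res.x
```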

  8. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired, and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of the 3D objects are assembled from the secondary projections, and the ordered-subset expectation maximization (OSEM) algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results demonstrate the feasibility of the authors' 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high-sensitivity and high-resolution SPECT imaging system. PMID:19544769
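
    The final reconstruction step above uses ordered-subset expectation maximization (OSEM). The sketch below shows the generic subset-cycled multiplicative update for a linear system matrix A; the subset geometry is a simple stride, not the authors' setup.

```python
# Hedged sketch of OSEM: an MLEM update applied cyclically to subsets.
import numpy as np

def osem(A, y, n_subsets=4, n_iter=10):
    """A: (n_bins, n_voxels) system matrix; y: measured sinogram counts."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for s in range(n_subsets):
            As, ys = A[s::n_subsets], y[s::n_subsets]   # one ordered subset
            pred = As @ x
            ratio = np.divide(ys, pred, out=np.zeros_like(ys), where=pred > 0)
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones_like(ys), 1e-12)
    return x
```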

  9. 40 CFR 63.7910 - What emissions limitations and work practice standards must I meet for separators?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation Separators § 63.7910 What emissions limitations and work practice standards must...

  10. 40 CFR 63.7910 - What emissions limitations and work practice standards must I meet for separators?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation Separators § 63.7910 What emissions limitations and work practice standards must...

  11. 40 CFR 63.7910 - What emissions limitations and work practice standards must I meet for separators?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation Separators § 63.7910 What emissions limitations and work practice standards must...

  12. 40 CFR 63.7910 - What emissions limitations and work practice standards must I meet for separators?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation Separators § 63.7910 What emissions limitations and work practice standards must...

  13. 40 CFR 63.7910 - What emissions limitations and work practice standards must I meet for separators?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS... Pollutants: Site Remediation Separators § 63.7910 What emissions limitations and work practice standards must...

  14. Time-resolved tomography using acoustic emissions in the laboratory, and application to sandstone compaction

    NASA Astrophysics Data System (ADS)

    Brantut, Nicolas

    2018-02-01

    Acoustic emission and active ultrasonic wave velocity monitoring are often performed during laboratory rock deformation experiments, but are typically processed separately to yield homogenised wave velocity measurements and approximate source locations. Here I present a numerical method and its implementation in free software to perform a joint inversion for acoustic emission locations together with the three-dimensional, anisotropic P-wave structure of laboratory samples. The data used are the P-wave first arrivals obtained from acoustic emissions and active ultrasonic measurements. The model parameters are the source locations and the P-wave velocity and anisotropy parameter (assuming transverse isotropy) at discrete points in the material. The forward problem is solved using the fast marching method, and the inverse problem is solved by the quasi-Newton method. The algorithms are implemented within an integrated free software package called FaATSO (Fast Marching Acoustic Emission Tomography using Standard Optimisation). The code is employed to study the formation of compaction bands in a porous sandstone. During deformation, a front of acoustic emissions progresses from one end of the sample, associated with the formation of a sequence of horizontal compaction bands. Behind the active front, only sparse acoustic emissions are observed, but the tomography reveals that the P-wave velocity has dropped by up to 15%, with an increase in anisotropy of up to 20%. Compaction bands in sandstones are therefore shown to produce sharp changes in seismic properties. This result highlights the potential of the methodology to image temporal variations of elastic properties in complex geomaterials, including the dramatic, localised changes associated with microcracking and damage generation.

  15. Detecting volcanic SO2 emissions with the Infrared Atmospheric Sounding Interferometer

    NASA Astrophysics Data System (ADS)

    Taylor, Isabelle; Carboni, Elisa; Mather, Tamsin; Grainger, Don

    2017-04-01

    Sulphur dioxide (SO2) emissions are one of the many hazards associated with volcanic activity. Close to the volcano they have negative impacts on human and animal health and affect the environment. Further afield they present a hazard to aviation (as well as being a proxy for volcanic ash) and can cause global changes to climate. These are all good reasons for monitoring gas emissions at volcanoes, and such monitoring can also provide insight into volcanic, magmatic and geothermal processes. Advances in satellite technology mean that it is now possible to monitor these emissions from space. The Infrared Atmospheric Sounding Interferometer (IASI) on board the European Space Agency's MetOp satellites is commonly used, alongside other satellite products, for detecting SO2 emissions across the globe. A fast linear retrieval developed in Oxford separates the signal of the target species (SO2) from the spectral background by representing background variability (determined from pixels containing no SO2) in a background covariance matrix. SO2-contaminated pixels can be distinguished quickly, facilitating the use of this algorithm for near-real-time monitoring and for scanning large datasets for signals to explore further with a full retrieval. In this study, the retrieval has been applied across the globe to identify volcanic emissions. Elevated signals are identified at numerous volcanoes, covering both explosive and passive emissions, and match reports of activity from other sources. Elevated signals are also evident from anthropogenic activity. These results imply that this tool could be successfully used to identify and monitor activity across the globe.
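
    A minimal sketch of such a linear retrieval, assuming a known target signature k and a background covariance S_bg estimated from SO2-free pixels: the fitted amount is the generalized least-squares projection (k^T S_bg^-1 d) / (k^T S_bg^-1 k) of the background-subtracted spectrum d. All names and shapes are illustrative.

```python
# Hedged sketch of a fast linear retrieval against a background covariance.
import numpy as np

def linear_so2_retrieval(spectrum, background_mean, S_bg, target_sig):
    """Generalized least-squares fit of one target signature to one pixel."""
    d = spectrum - background_mean                # departure from background
    Sinv_d = np.linalg.solve(S_bg, d)             # S_bg^{-1} d (no explicit inverse)
    Sinv_t = np.linalg.solve(S_bg, target_sig)    # S_bg^{-1} k
    return (target_sig @ Sinv_d) / (target_sig @ Sinv_t)   # fitted SO2 amount
```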

  16. Estimates of global biomass burning emissions for reactive greenhouse gases (CO, NMHCs, and NOx) and CO2

    NASA Astrophysics Data System (ADS)

    Jain, Atul K.; Tao, Zhining; Yang, Xiaojuan; Gillespie, Conor

    2006-03-01

    Open fire biomass burning and domestic biofuel burning (e.g., cooking, heating, and charcoal making) algorithms have been incorporated into a terrestrial ecosystem model to estimate emissions of CO2 and key reactive greenhouse gases (CO, NOx, and NMHCs) for the year 2000. The emissions are calculated over the globe at a 0.5° × 0.5° spatial resolution using tree density imagery and two separate sets of data each for global area burned and land clearing for croplands, along with biofuel consumption rate data. The estimated global annual total dry matter (DM) burned due to open fire biomass burning ranges between 5221 and 7346 Tg DM/yr, and the resultant emission ranges are 6564-9093 Tg CO2/yr, 438-568 Tg CO/yr, 11-16 Tg NOx/yr (as NO), and 29-40 Tg NMHCs/yr. The results indicate that land-use change for cropland is one of the major sources of biomass burning, accounting for 25-27% (CO2), 25-28% (CO), 20-23% (NO), and 28-30% (NMHCs) of the total open fire biomass burning emissions of these gases. Estimated DM burned associated with domestic biofuel burning is 3114 Tg DM/yr, and the resultant emissions are 4825 Tg CO2/yr, 243 Tg CO/yr, 3 Tg NOx/yr, and 23 Tg NMHCs/yr. Total emissions from biomass burning are highest in tropical regions (Asia, America, and Africa), where we identify important contributions from primary forest cutting for croplands and domestic biofuel burning.

  17. The efficient simulation of separated three-dimensional viscous flows using the boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Van Dalsem, W. R.; Steger, J. L.

    1985-01-01

    A simple and computationally efficient algorithm for solving the unsteady three-dimensional boundary-layer equations in the time-accurate or relaxation mode is presented. Results of the new algorithm are shown to be in quantitative agreement with detailed experimental data for flow over a swept infinite wing. The separated flow over a 6:1 ellipsoid at angle of attack and the transonic flow over a finite wing with shock-induced 'mushroom' separation are also computed and compared with available experimental data. It is concluded that complex, separated, three-dimensional viscous layers can be economically and routinely computed using a time-relaxation boundary-layer algorithm.

  18. The Operational MODIS Cloud Optical and Microphysical Property Product: Overview of the Collection 6 Algorithm and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas

    2012-01-01

    Operational Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals of cloud optical and microphysical properties (part of the archived products MOD06 and MYD06, for MODIS Terra and Aqua, respectively) are currently being reprocessed along with other MODIS Atmosphere Team products. The latest "Collection 6" processing stream, which is expected to begin production by summer 2012, includes updates to the previous cloud retrieval algorithm along with new capabilities. The 1 km retrievals, based on well-known solar reflectance techniques, include cloud optical thickness, effective particle radius, and water path, as well as thermodynamic phase derived from a combination of solar and infrared tests. Being both global and of high spatial resolution requires an algorithm that is computationally efficient and can perform over all surface types. Collection 6 additions and enhancements include: (i) absolute effective particle radius retrievals derived separately from the 1.6 and 3.7 μm bands (instead of differences relative to the standard 2.1 μm retrieval), (ii) comprehensive look-up tables for cloud reflectance and emissivity (no asymptotic theory) with a wind-speed interpolated Cox-Munk BRDF for ocean surfaces, (iii) retrievals for both liquid water and ice phases for each pixel, and a subsequent determination of the phase based, in part, on effective radius retrieval outcomes for the two phases, (iv) new ice cloud radiative models using roughened particles with a specified habit, (v) updated spatially complete global spectral surface albedo maps derived from MODIS Collection 5, (vi) enhanced pixel-level uncertainty calculations incorporating additional radiative error sources, including the MODIS L1B uncertainty index for assessing band- and scene-dependent radiometric uncertainties, and (vii) use of a new 1 km cloud-top pressure/temperature algorithm (also part of MOD06) for atmospheric corrections and low-cloud non-unity emissivity temperature adjustments.

  19. Organic light emitting device having multiple separate emissive layers

    DOEpatents

    Forrest, Stephen R [Ann Arbor, MI

    2012-03-27

    An organic light emitting device having multiple separate emissive layers is provided. Each emissive layer may define an exciton formation region, allowing exciton formation to occur across the entire emissive region. By aligning the energy levels of each emissive layer with the adjacent emissive layers, exciton formation in each layer may be improved. Devices incorporating multiple emissive layers with multiple exciton formation regions may exhibit improved performance, including internal quantum efficiencies of up to 100%.

  20. sparse-msrf: A package for sparse modeling and estimation of fossil-fuel CO2 emission fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-10-06

    The software is used to fit models of emission fields (e.g., fossil-fuel CO2 emissions) to sparse measurements of gaseous concentrations. Its primary aim is to provide an implementation and a demonstration of the algorithms and models developed in J. Ray, V. Yadav, A. M. Michalak, B. van Bloemen Waanders and S. A. McKenna, "A multiresolution spatial parameterization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions", accepted, Geoscientific Model Development, 2014. The software can be used to estimate emissions of non-reactive gases such as fossil-fuel CO2, methane, etc. The software uses a proxy of the emission field being estimated (e.g., for fossil-fuel CO2, a population density map is a good proxy) to construct a wavelet model for the emission field. It then uses a shrinkage regression algorithm called Stagewise Orthogonal Matching Pursuit (StOMP) to fit the wavelet model to concentration measurements, using an atmospheric transport model to relate emission and concentration fields. Algorithmic novelties described in the paper above (1) ensure that the estimated emission fields are non-negative, (2) allow the use of guesses for emission fields to accelerate the estimation process and (3) ensure that under/overestimates in the guesses do not skew the estimation.

  1. Multiple-component Decomposition from Millimeter Single-channel Data

    NASA Astrophysics Data System (ADS)

    Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros

    2018-03-01

    We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made by single-channel instruments. To make such a decomposition possible on single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data - atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources - and we apply the same methodology to the Aztronomical Thermal Emission Camera/ASTE survey of the Great Observatories Origins Deep Survey-South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth is an extended emission that can be interpreted as the confusion background of faint sources.
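
    As an illustration of the blind decomposition step, the sketch below separates a stack of redundant maps with FastICA; FastICA is a stand-in here, since the record does not name the specific source separation algorithm used, and all names are assumptions.

```python
# Hedged sketch: blind source separation of redundant single-channel maps.
import numpy as np
from sklearn.decomposition import FastICA

def blind_decompose(redundant_maps, n_components=4):
    """redundant_maps: array (n_maps, n_pixels); n_components <= n_maps."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(redundant_maps.T)   # (n_pixels, n_components)
    mixing = ica.mixing_                            # (n_maps, n_components)
    return sources.T, mixing                        # component maps + mixing matrix
```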

  2. An Improved Algorithm for Retrieving Surface Downwelling Longwave Radiation from Satellite Measurements

    NASA Technical Reports Server (NTRS)

    Zhou, Yaping; Kratz, David P.; Wilber, Anne C.; Gupta, Shashi K.; Cess, Robert D.

    2006-01-01

    Retrieving surface longwave radiation from space has been a difficult task since the surface downwelling longwave radiation (SDLW) is an integral of radiation emitted by the entire atmosphere, while radiation emitted from the upper atmosphere is absorbed before reaching the surface. It is particularly problematic when thick clouds are present, since thick clouds virtually block all the longwave radiation from above, while satellites observe atmospheric emissions mostly from above the clouds. Zhou and Cess developed an algorithm for retrieving SDLW based upon detailed studies using radiative transfer model calculations and surface radiometric measurements. Their algorithm linked clear-sky SDLW with surface upwelling longwave flux and column precipitable water vapor. For cloudy-sky cases, they used cloud liquid water path as an additional parameter to account for the effects of clouds. Despite the simplicity of their algorithm, it performed very well for most geographical regions, except for regions where the atmospheric conditions near the surface tend to be extremely cold and dry. Systematic errors were also found for areas that were covered with ice clouds. An improved version of the algorithm was developed that prevents large errors in the SDLW at low water vapor amounts. The new algorithm also utilizes cloud fraction and cloud liquid and ice water paths measured from the Clouds and the Earth's Radiant Energy System (CERES) satellites to separately compute the clear and cloudy portions of the fluxes. The new algorithm has been validated against surface measurements at 29 stations around the globe for the Terra and Aqua satellites. The results show significant improvement over the original version. The revised Zhou-Cess algorithm is also slightly better than or comparable to more sophisticated algorithms currently implemented in the CERES processing. It will be incorporated in the CERES project as one of the empirical surface radiation algorithms.

  3. Formally Verified Practical Algorithms for Recovery from Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Caesar A.

    2009-01-01

    In this paper, we develop and formally verify practical algorithms for recovery from loss of separation. The formal verification is performed in the context of a criteria-based framework. This framework provides rigorous definitions of horizontal and vertical maneuver correctness that guarantee divergence and achieve horizontal and vertical separation. The algorithms are shown to be independently correct, that is, separation is achieved when only one aircraft maneuvers, and implicitly coordinated, that is, separation is also achieved when both aircraft maneuver. In this paper we improve the horizontal criteria over our previous work. An important benefit of the criteria approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).

  4. An Aircraft Separation Algorithm with Feedback and Perturbation

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2010-01-01

    A separation algorithm is a set of rules that tell aircraft how to maneuver in order to maintain a minimum distance between them. This paper investigates how to demonstrate by simulation that separation algorithms satisfy the FAA requirement on the occurrence of incidents. Any demonstration that a separation algorithm, or any other aspect of flight, satisfies the FAA requirement is a challenge because of the stringent nature of the requirement and the complexity of airspace operations. The paper begins with a probability and statistical analysis of both the FAA requirement and of demonstrating compliance with it by a Monte Carlo approach. It considers the geometry of maintaining separation when one plane must change its flight path. It then develops a simple feedback control law that guides the planes on their paths. The presence of feedback control permits the introduction of perturbations, and the stochastic nature of the chosen perturbation is examined. The simulation program is described. This paper is an early effort in the realistic demonstration of a stringent requirement; much remains to be done.

  5. Evaluation of thermal infrared hyperspectral imagery for the detection of onshore methane plumes: Significance for hydrocarbon exploration and monitoring

    NASA Astrophysics Data System (ADS)

    Scafutto, Rebecca DeĺPapa Moreira; de Souza Filho, Carlos Roberto; Riley, Dean N.; de Oliveira, Wilson Jose

    2018-02-01

    Methane (CH4) is the main constituent of natural gas. Fugitive CH4 emissions partially stem from geological reservoirs (seepages) and from leaks in pipelines and petroleum production plants. Airborne hyperspectral sensors with sufficient spectral and spatial resolution and high signal-to-noise ratio can potentially detect these emissions. Here, a field experiment with controlled-release CH4 sources was conducted at the Rocky Mountain Oilfield Testing Center (RMOTC), Casper, WY (USA). These sources were configured to deliver diverse emission types (surface and subsurface) and rates (20-1450 scf/hr), simulating natural (seepage) and anthropogenic (pipeline) CH4 leaks. The Aerospace Corporation's SEBASS (Spatially-Enhanced Broadband Array Spectrograph System) sensor acquired hyperspectral thermal infrared data over the experimental site with 128 bands spanning the 7.6-13.5 μm range. The data were acquired with a spatial resolution of 0.5 m at 1500 ft and 0.84 m at 2500 ft above ground level. Radiance images were pre-processed with an adaptation of the In-Scene Atmospheric Compensation algorithm and converted to emissivity through the Emissivity Normalization algorithm. The data were then processed with a matched filter. Results allowed the separation of endmembers related to the spectral signature of CH4 from the background. Pixels containing CH4 signatures (absorption bands at 7.69 μm and 7.88 μm) were highlighted, and the gas plumes were mapped with high definition in the imagery. The dispersion of the mapped plumes is consistent with the wind direction measured independently during the experiment. Variations in the dimensions of the mapped gas plumes were proportional to the emission rate of each CH4 source. Spectral analysis of the signatures within the plumes shows that CH4 spectral absorption features are sharper and deeper in pixels located near the emitting source, revealing regions with higher gas density and helping to locate CH4 sources in the field accurately. These results indicate that thermal infrared hyperspectral imaging can substantially support the oil industry by revealing new petroleum plays through direct detection of gaseous hydrocarbon seepages and by serving as a tool to monitor leaks along pipelines and oil processing plants, while simultaneously refining estimates of CH4 emissions.
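
    The matched-filter processing mentioned above scores each pixel by its covariance-whitened projection onto the CH4 target signature. The sketch below shows the classical form with scene-derived background statistics; the names, shapes, and diagonal loading are illustrative assumptions.

```python
# Hedged sketch of a matched filter for gas plume detection.
import numpy as np

def matched_filter_scores(pixels, target):
    """pixels: (n_pixels, n_bands) scene spectra; target: (n_bands,) CH4 signature."""
    mu = pixels.mean(axis=0)                               # background mean
    cov = np.cov(pixels, rowvar=False)                     # background covariance
    cov += 1e-6 * np.eye(pixels.shape[1])                  # diagonal loading for stability
    w = np.linalg.solve(cov, target - mu)                  # whitened target direction
    norm = (target - mu) @ w
    return (pixels - mu) @ w / norm                        # matched-filter score per pixel
```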

  6. Overview of the CERES Edition-4 Multilayer Cloud Property Datasets

    NASA Astrophysics Data System (ADS)

    Chang, F. L.; Minnis, P.; Sun-Mack, S.; Chen, Y.; Smith, R. A.; Brown, R. R.

    2014-12-01

    Knowledge of the cloud vertical distribution is important for understanding the role of clouds in Earth's radiation budget and climate change. Since high-level cirrus clouds with low emission temperatures and small optical depths can provide a positive feedback to the climate system, and low-level stratus clouds with high emission temperatures and large optical depths can provide a negative feedback, the retrieval of multilayer cloud properties using satellite observations, like Terra and Aqua MODIS, is critically important for a variety of cloud and climate applications. For the objective of the Clouds and the Earth's Radiant Energy System (CERES), new algorithms have been developed using Terra and Aqua MODIS data to allow separate retrievals of cirrus and stratus cloud properties when the two dominant cloud types are simultaneously present in a multilayer system. In this paper, we will present an overview of the new CERES Edition-4 multilayer cloud property datasets derived from Terra as well as Aqua. Assessment of the new CERES multilayer cloud datasets will include high-level cirrus and low-level stratus cloud heights, pressures, and temperatures, as well as their optical depths, emissivities, and microphysical properties.

  7. NOx emission estimates during the 2014 Youth Olympic Games in Nanjing

    NASA Astrophysics Data System (ADS)

    Ding, J.; van der A, R. J.; Mijling, B.; Levelt, P. F.; Hao, N.

    2015-08-01

    The Nanjing Government applied temporary environmental regulations to guarantee good air quality during the Youth Olympic Games (YOG) in 2014. We study the effect of those regulations by applying the emission estimate algorithm DECSO (Daily Emission estimates Constrained by Satellite Observations) to measurements of the Ozone Monitoring Instrument (OMI). We improved DECSO by updating the chemical transport model CHIMERE from v2006 to v2013 and by adding an Observation minus Forecast (OmF) criterion to filter outlying satellite retrievals due to high aerosol concentrations. The comparison of model results with both ground and satellite observations indicates that CHIMERE v2013 performs better than CHIMERE v2006. After filtering out the satellite observations with high aerosol loads that were leading to large OmF values, unrealistic jumps in the emission estimates are removed. Despite the cloudy conditions during the YOG, we could still see a decrease of tropospheric NO2 column concentrations of about 32 % in the OMI observations when compared to the average NO2 columns from 2005 to 2012. The results of the improved DECSO algorithm for NOx emissions show a reduction of at least 25 % during the YOG period and afterwards. This indicates that the air quality regulations taken by the local government had an effect in reducing NOx emissions. The algorithm is also able to detect an emission reduction of 10 % during the Chinese Spring Festival. This study demonstrates the capacity of the DECSO algorithm to capture changes of NOx emissions on a monthly scale. We also show that the observed NO2 columns and the derived emissions show different patterns that provide complementary information. For example, the Nanjing smog episode in December 2013 led to a strong increase in NO2 concentrations without an increase in NOx emissions. Furthermore, DECSO gives us important information on the non-trivial seasonal relation between NOx emissions and NO2 concentrations on a local scale.

  8. Temporal variation of ecosystem scale methane emission from a boreal fen in relation to common model drivers

    NASA Astrophysics Data System (ADS)

    Rinne, J.; Tuittila, E. S.; Peltola, O.; Li, X.; Raivonen, M.; Alekseychik, P.; Haapanala, S.; Pihlatie, M.; Aurela, M.; Mammarella, I.; Vesala, T.

    2017-12-01

    Models for calculating methane emission from wetland ecosystems typically relate the methane emission to carbon dioxide assimilation. Other parameters that control emission in these models are, e.g., peat temperature and water table position. Many of these relations are derived from spatial variation between chamber measurements using a space-for-time approach. Continuous, longer-term, ecosystem-scale methane emission measurements by the eddy covariance method provide independent data for assessing the validity of the relations derived by the space-for-time approach. We analyzed an eleven-year methane flux data set, measured at a boreal fen, together with data on environmental parameters and carbon dioxide exchange, to assess the relations to typical model drivers. The data were obtained by the eddy covariance method at the Siikaneva mire complex, southern Finland, during 2005-2015. The methane flux showed seasonal cycles in methane emission, with the strongest correlation with peat temperature at 35 cm depth. The temperature relation was exponential throughout the whole peat temperature range of 0-16°C. The methane emission, normalized to remove the temperature dependence, showed a non-monotonic relation to water table position and a positive correlation with gross primary production (GPP). However, including these as explanatory variables improved the algorithm-measurement correlation only slightly, with r² = 0.74 for the exponential temperature-dependent algorithm, r² = 0.76 for the temperature - water table algorithm, and r² = 0.79 for the temperature - GPP algorithm. The methane emission lagged behind net ecosystem exchange (NEE) and GPP by two to three weeks. Annual methane emission ranged from 8.3 to 14 gC m-2 and was 20% of NEE and 2.8% of GPP. The inter-annual variation of methane emission was of similar magnitude to that of GPP and ecosystem respiration (Reco), but much smaller than that of NEE. The inter-annual variability of June-September average methane emission correlated significantly with that of GPP, indicating a close link between these two processes in boreal fen ecosystems.
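
    A minimal sketch of the exponential temperature relation used above: fit F = F0 exp(k T) to the flux observations, then divide the observed fluxes by the fitted curve to obtain the temperature-normalized emission examined against water table position and GPP. Data arrays and starting values are assumed inputs.

```python
# Hedged sketch: exponential temperature fit and flux normalization.
import numpy as np
from scipy.optimize import curve_fit

def exp_model(T, F0, k):
    return F0 * np.exp(k * T)

def fit_and_normalize(T_peat, flux):
    (F0, k), _ = curve_fit(exp_model, T_peat, flux, p0=(1.0, 0.1))
    normalized = flux / exp_model(T_peat, F0, k)   # temperature-normalized flux
    return (F0, k), normalized
```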

  9. An assessment of the land surface emissivity in the 8 - 12 micrometer window determined from ASTER and MODIS data

    NASA Astrophysics Data System (ADS)

    Schmugge, T.; Hulley, G.; Hook, S.

    2009-04-01

    The land surface emissivity is often overlooked when considering surface properties that affect the energy balance. However, knowledge of the emissivity in the window region is important for determining the longwave radiation balance and its subsequent effect on surface temperature. The net longwave radiation (NLR) is strongly affected by the difference between the temperature of the emitting surface and the sky brightness temperature; this difference is greatest in the window region. Outside the window region, any changes in the emitted radiation caused by emissivity variability are mostly compensated for by changes in the reflected sky brightness. The emissivity variability is typically greatest in arid regions, where the exposed soil and rock surfaces display the widest range of emissivity. For example, the dune regions of North Africa have emissivities of 0.7 or less in the 8 to 9 micrometer wavelength band due to the quartz sands of the region, which can produce changes in NLR of more than 10 W/m² compared to assuming a constant emissivity. Errors in retrievals of atmospheric temperature and moisture profiles from hyperspectral infrared radiances, such as those from the Atmospheric Infrared Sounder (AIRS) on the NASA Aqua satellite, result from using constant or inaccurate surface emissivities, particularly over arid and semi-arid regions where the variation in emissivity is large, both spatially and spectrally. The multispectral thermal infrared data obtained from the Advanced Spaceborne Thermal Emission and Reflection (ASTER) radiometer and MODerate resolution Imaging Spectrometer (MODIS) sensors on NASA's Terra satellite have been shown to be of good quality and provide a unique new tool for studying the emissivity of the land surface. ASTER has 5 channels in the 8 to 12 micrometer waveband with 90 m spatial resolution; when the data are combined with the Temperature Emissivity Separation (TES) algorithm, the surface emissivity over this wavelength region can be determined. The TES algorithm has been validated with field measurements using a multi-spectral radiometer having bands similar to ASTER's. The ASTER data have now been used to produce a seasonal gridded database of the emissivity for North America, and the results have been compared to laboratory-measured emissivities of in-situ rock and sand samples collected at ten validation sites in the western USA during 2008. The directional hemispherical reflectance of the in-situ samples is measured in the laboratory using a Nicolet Fourier Transform Interferometer (FTIR), converted to emissivity using Kirchhoff's law, and convolved with the appropriate sensor spectral response functions. This ASTER database, termed the North American ASTER Land Surface Emissivity Database (NAALSED), was validated using the laboratory results from these ten sites to within 0.015 (1.5%) in emissivity. MODIS has 3 channels in this waveband with 1 km spatial resolution and almost daily global coverage. The MODIS data are composited to 5 km resolution, and day-night pairs of observations are used to derive the emissivities. These results have been validated using the ASTER emissivities over selected test areas.
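
    At the core of the TES algorithm mentioned above is an empirical relation between the minimum band emissivity and the spectral contrast (min-max difference, MMD) of the ratioed spectrum, commonly quoted for ASTER as eps_min ≈ 0.994 - 0.687·MMD^0.737. The sketch below shows only this scaling step; the NEM temperature iteration and quality control of the full module are omitted.

```python
# Hedged sketch of the TES minimum-emissivity/MMD scaling step.
import numpy as np

def tes_emissivity(beta):
    """beta: (n_bands,) ratioed ('beta') spectrum, mean approximately 1."""
    mmd = beta.max() - beta.min()            # min-max difference (spectral contrast)
    eps_min = 0.994 - 0.687 * mmd**0.737     # empirical calibration (ASTER TES form)
    return beta * eps_min / beta.min()       # rescale beta spectrum to emissivity
```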

  10. Using a genetic algorithm as an optimal band selector in the mid and thermal infrared (2.5-14 μm) to discriminate vegetation species.

    PubMed

    Ullah, Saleem; Groen, Thomas A; Schlerf, Martin; Skidmore, Andrew K; Nieuwenhuis, Willem; Vaiphasa, Chaichoke

    2012-01-01

    Genetic variation between various plant species determines differences in their physio-chemical makeup and ultimately in their hyperspectral emissivity signatures. The hyperspectral emissivity signatures, on the one hand, account for the subtle physio-chemical changes in the vegetation, but on the other hand, highlight the problem of high dimensionality. The aim of this paper is to investigate the performance of genetic algorithms coupled with the spectral angle mapper (SAM) to identify a meaningful subset of wavebands sensitive enough to discriminate thirteen broadleaved vegetation species from laboratory-measured hyperspectral emissivities. The performance was evaluated using overall classification accuracy and the Jeffries-Matusita distance. For the multiple plant species, the targeted bands selected by the genetic algorithm resulted in a high overall classification accuracy (90%). Concentrating on the pairwise comparison results, the wavebands selected by the genetic algorithm yielded higher Jeffries-Matusita (J-M) distances than randomly selected wavebands did. This study concludes that targeted wavebands from leaf emissivity spectra are able to discriminate vegetation species.
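
    The band-selection loop described here — a genetic algorithm whose fitness rewards spectral-angle separability between species — can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the emissivity class means, population size, and GA parameters are all synthetic placeholders.

    ```python
    # Minimal sketch of GA band selection with a spectral-angle fitness.
    import numpy as np

    rng = np.random.default_rng(0)

    def spectral_angle(a, b):
        """Spectral angle (radians) between two spectra on the chosen bands."""
        return np.arccos(np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0))

    def fitness(mask, class_means):
        """Mean pairwise spectral angle between class means on selected bands."""
        if mask.sum() == 0:
            return 0.0
        sel = class_means[:, mask.astype(bool)]
        angles = [spectral_angle(sel[i], sel[j])
                  for i in range(len(sel)) for j in range(i + 1, len(sel))]
        return float(np.mean(angles))

    # Synthetic "emissivity" means for 5 classes over 100 bands (placeholder data).
    class_means = 0.95 + 0.03 * rng.random((5, 100))

    n_pop, n_bands, n_gen, n_select = 40, 100, 50, 10
    pop = np.zeros((n_pop, n_bands), dtype=int)
    for row in pop:                       # each chromosome starts with n_select bands
        row[rng.choice(n_bands, n_select, replace=False)] = 1

    for _ in range(n_gen):
        scores = np.array([fitness(ind, class_means) for ind in pop])
        # Tournament selection: keep the better of random pairs.
        idx = rng.integers(0, n_pop, (n_pop, 2))
        parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Uniform crossover followed by a small mutation rate.
        cross = rng.random((n_pop, n_bands)) < 0.5
        children = np.where(cross, parents, parents[::-1])
        mutate = rng.random((n_pop, n_bands)) < 0.01
        pop = np.where(mutate, 1 - children, children)

    best = pop[np.argmax([fitness(ind, class_means) for ind in pop])]
    print("selected band indices:", np.flatnonzero(best))
    ```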

  11. Time-resolved acoustic emission tomography in the laboratory: tracking localised damage in rocks

    NASA Astrophysics Data System (ADS)

    Brantut, N.

    2017-12-01

    Over the past three decades, there have been tremendous technological developments in laboratory equipment and in studies using acoustic emission and ultrasonic monitoring of rock samples during deformation. Using relatively standard seismological techniques, acoustic emissions can be detected and located in space and time, and source mechanisms can be obtained. In parallel, ultrasonic velocities can be measured routinely using standard pulse-receiver techniques. Despite these major developments, current acoustic emission and ultrasonic monitoring systems are typically used separately, and the poor spatial coverage of acoustic transducers precludes performing active 3D tomography in typical laboratory settings. Here, I present an algorithm and software package that uses both passive acoustic emission data and active ultrasonic measurements to determine acoustic emission locations together with the 3D, anisotropic P-wave structure of rock samples during deformation. The technique is analogous to local earthquake tomography, but tailored to the specificities of small-scale laboratory tests. The fast marching method is employed to compute the forward problem. The acoustic emission locations and the anisotropic P-wave field are jointly inverted using a quasi-Newton method. The method is used to track the propagation of compaction bands in a porous sandstone deformed in the ductile, cataclastic flow regime under triaxial stress conditions. Near the yield point, a compaction front forms at one end of the sample and slowly progresses towards the other end. The front is illuminated by clusters of acoustic emissions, and leaves behind a heavily damaged material where the P-wave speed has dropped by up to 20%. The technique opens new possibilities for tracking in-situ strain localisation and damage around laboratory faults, and preliminary results on quasi-static rupture in granite will be presented.
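
    For the forward problem mentioned above, the fast marching method computes first-arrival travel times through a heterogeneous velocity field. A toy illustration follows, assuming the scikit-fmm package; the grid, source position, and low-velocity "damaged" patch are invented for the example.

    ```python
    # Tiny illustration of the forward problem: first-arrival travel times from
    # a point source through a heterogeneous P-wave speed field, computed with
    # the fast marching method (assumes the scikit-fmm package is installed).
    import numpy as np
    import skfmm

    n = 101
    phi = np.ones((n, n)); phi[50, 10] = -1.0        # "acoustic emission" source
    speed = np.full((n, n), 3000.0)                   # background P-wave speed, m/s
    speed[40:60, 40:60] = 2400.0                      # 20% slower damaged patch (synthetic)
    tt = skfmm.travel_time(phi, speed, dx=1.0)        # first arrivals, seconds
    print("travel time to a receiver at (50, 90):", float(tt[50, 90]))
    ```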

  12. Land Surface Temperature Measurements from EOS MODIS Data

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming

    1996-01-01

    We have developed a physics-based land-surface temperature (LST) algorithm for simultaneously retrieving surface band-averaged emissivities and temperatures from day/night pairs of MODIS (Moderate Resolution Imaging Spectroradiometer) data in seven thermal infrared bands. The set of 14 nonlinear equations in the algorithm is solved with the statistical regression method and the least-squares fit method. This new LST algorithm was tested with simulated MODIS data for 80 sets of band-averaged emissivities calculated from published spectral data of terrestrial materials over wide ranges of atmospheric and surface temperature conditions. A comprehensive sensitivity and error analysis has been made to evaluate the performance of the new LST algorithm and its dependence on variations in surface emissivity and temperature, on atmospheric conditions, and on the noise-equivalent temperature difference (NEΔT) and calibration accuracy specifications of the MODIS instrument. In cases with a systematic calibration error of 0.5%, the standard deviations of errors in retrieved surface daytime and nighttime temperatures fall between 0.4-0.5 K over a wide range of surface temperatures for mid-latitude summer conditions. The standard deviations of errors in retrieved emissivities in bands 31 and 32 (in the 10-12.5 micrometer IR spectral window region) are 0.009, and the maximum error in retrieved LST values falls between 2-3 K. Several issues related to the day/night LST algorithm (uncertainties in the day/night registration, in surface emissivity changes caused by dew occurrence, and in cloud cover) have been investigated. The LST algorithms have been validated with MODIS Airborne Simulator (MAS) data and ground-based measurement data in two field campaigns conducted in Railroad Valley playa, NV in 1995 and 1996. The MODIS LST version 1 software has been delivered.

  13. Seasonal variations of isoprene emissions from deciduous trees

    NASA Astrophysics Data System (ADS)

    Xiaoshan, Zhang; Yujing, Mu; Wenzhi, Song; Yahui, Zhuang

    Isoprene emission fluxes were investigated for 12 tree species in and around Beijing. A bag-enclosure method was used to collect the air samples, and GC-PID was used to analyze isoprene directly. Ginkgo and Magnolia denudata had negligible isoprene emissions, significant emissions were observed for Platanus orientalis, Pendula loud, Populus simonii, and Salix matsudana koidz, and the remaining trees showed no sign of isoprene emission. Variations in isoprene emission with changes in light, temperature, and season were investigated for Platanus orientalis and Pendula loud. Isoprene emission rates strongly depended on light, temperature, and leaf age. The maximum emission rates for the two trees were observed in summer, with values of about 232 and 213 μg g⁻¹ dw h⁻¹, respectively. The measured emission fluxes were used to evaluate the "Guenther" emission algorithm. The emission fluxes predicted by the algorithm were in relatively good agreement with the field measurements. However, there were large differences in the calculated median emission factors between spring, summer, and fall. The 25-75 percentile span of the emission factor data sets ranged from -33 to +15% of the median values.
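
    The "Guenther" algorithm evaluated here scales a standard-condition emission factor by light and temperature correction terms. A minimal sketch of those two terms follows, using the constants commonly published for the Guenther et al. (1993) formulation; the emission factor and driving conditions in the example are placeholders.

    ```python
    # Hedged sketch of the Guenther et al. (1993) light and temperature
    # corrections used in isoprene emission algorithms.
    import math

    R = 8.314                      # J mol-1 K-1
    ALPHA, CL1 = 0.0027, 1.066     # light-response constants
    CT1, CT2 = 95000.0, 230000.0   # J mol-1
    TS, TM = 303.0, 314.0          # standard and optimum temperatures, K

    def gamma_light(par):
        """Light correction C_L for PAR in umol m-2 s-1."""
        return ALPHA * CL1 * par / math.sqrt(1.0 + ALPHA**2 * par**2)

    def gamma_temp(t_leaf):
        """Temperature correction C_T for leaf temperature in K."""
        num = math.exp(CT1 * (t_leaf - TS) / (R * TS * t_leaf))
        den = 1.0 + math.exp(CT2 * (t_leaf - TM) / (R * TS * t_leaf))
        return num / den

    def isoprene_emission(ef, par, t_leaf):
        """Emission = standard-condition emission factor x C_L x C_T."""
        return ef * gamma_light(par) * gamma_temp(t_leaf)

    # Example with a hypothetical emission factor of 100 ug g-1 dw h-1.
    print(isoprene_emission(100.0, par=1500.0, t_leaf=303.15))
    ```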

  14. Impact of freeway weaving segment design on light-duty vehicle exhaust emissions.

    PubMed

    Li, Qing; Qiao, Fengxiang; Yu, Lei; Chen, Shuyan; Li, Tiezhu

    2018-06-01

    In the United States, 26% of greenhouse gas emissions are emitted from the transportation sector; these emissions are meanwhile accompanied by emissions toxic to humans, such as carbon monoxide (CO), nitrogen oxides (NOx), and hydrocarbons (HC), approximately 2.5% and 2.44% of total exhaust emissions for a petrol and a diesel engine, respectively. These exhaust emissions are strongly affected by intermittent vehicle operations, such as hard acceleration and hard braking. In practice, drivers are inclined to operate intermittently while driving through a weaving segment, due to the complex vehicle maneuvering required for weaving. As a result, the exhaust emissions within a weaving segment ought to vary from those on a basic segment. However, existing emission models usually rely on vehicle operation information and compute a generalized emission result, regardless of road configuration. This research explores the impacts of weaving segment configuration on vehicle emissions, identifies important predictors for emission estimation, and develops a nonlinear normalized emission factor (NEF) model for weaving segments. An on-board emission test was conducted on 12 subjects on State Highway 288 in Houston, Texas. Vehicle activity information, road conditions, and real-time exhaust emissions were collected by on-board diagnostics (OBD), a smartphone-based roughness app, and a portable emission measurement system (PEMS), respectively. Five feature selection algorithms were used to identify the important predictors for the NEF response and the modeling algorithm. The predictive power of four algorithm-based emission models was tested by 10-fold cross-validation. Results showed that emissions are also susceptible to the type and length of a weaving segment. A bagged decision tree algorithm was chosen to develop a 50-grown-tree NEF model, which provided a validation error of 0.0051. The estimated NEFs are highly correlated with the observed NEFs in the training data set as well as in the validation data set, with R values of 0.91 and 0.90, respectively. Existing emission models usually rely on vehicle operation information to compute a generalized emission result, regardless of road configuration. In practice, while driving through a weaving segment, drivers are inclined to perform erratic maneuvers, such as hard braking and hard acceleration, due to the complex weaving maneuver required. As a result, the exhaust emissions within a weaving segment vary from those on a basic segment. This research incorporates road configuration, in terms of the type and length of a weaving segment, in constructing a nonlinear emission model, which significantly improves emission estimation at a microscopic level.
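
    The modeling step described above — a 50-tree bagged regression model validated with 10-fold cross-validation — might be reproduced in outline as follows. This is an illustrative sketch, not the authors' code; the predictors and response below are synthetic stand-ins.

    ```python
    # Illustrative 50-tree bagged regression model for a normalized emission
    # factor, evaluated with 10-fold cross-validation on synthetic features.
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(1)
    # Placeholder predictors: speed, acceleration, roughness, segment type
    # (0/1), and segment length; placeholder response: NEF.
    X = rng.random((500, 5))
    y = 0.3 * X[:, 1] + 0.2 * X[:, 3] * X[:, 4] + 0.05 * rng.standard_normal(500)

    # BaggingRegressor bags decision trees by default.
    model = BaggingRegressor(n_estimators=50, random_state=1)
    scores = cross_val_score(model, X, y,
                             cv=KFold(n_splits=10, shuffle=True, random_state=1),
                             scoring="neg_mean_squared_error")
    print("10-fold CV MSE:", -scores.mean())
    ```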

  15. Greenhouse gas and ammonia emissions from production of compost bedding on a dairy farm.

    PubMed

    Fillingham, M A; VanderZaag, A C; Burtt, S; Baldé, H; Ngwabie, N M; Smith, W; Hakami, A; Wagner-Riddle, C; Bittman, S; MacDonald, D

    2017-12-01

    Recent developments in composting technology enable dairy farms to produce their own bedding from composted manure. This management practice alters the fate of carbon and nitrogen; however, there is little data available documenting how gaseous emissions are impacted. This study measured in-situ emissions of methane (CH₄), carbon dioxide (CO₂), nitrous oxide (N₂O), and ammonia (NH₃) from an on-farm solid-liquid separation system followed by continuously-turned plug-flow composting over three seasons. Emissions were measured separately from the continuously-turned compost phase and from the compost-storage phase prior to the compost being used for cattle bedding. Active composting had low emissions of N₂O and CH₄, with most carbon being emitted as CO₂-C and most N emitted as NH₃-N. Compost storage had higher CH₄ and N₂O emissions than the active phase, while NH₃ was emitted at a lower rate and CO₂ was similar. Overall, combining both the active composting and storage phases, the mean total emissions were 3.9×10⁻² g CH₄ kg⁻¹ raw manure (RM), 11.3 g CO₂ kg⁻¹ RM, 2.5×10⁻⁴ g N₂O kg⁻¹ RM, and 0.13 g NH₃ kg⁻¹ RM. Emissions with solid separation and composting were compared to calculated emissions for a traditional (unseparated) liquid manure storage tank. The total greenhouse gas emissions (CH₄+N₂O) from solid separation, composting, compost storage, and separated liquid storage were reduced substantially on a CO₂-equivalent basis compared to traditional liquid storage. Solid-liquid separation and well-managed composting could mitigate overall greenhouse gas emissions; however, an environmental trade-off was that NH₃ was emitted at higher rates from the continuously turned composter than reported values for traditional storage. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  16. Estimating criteria pollutant emissions using the California Regional Multisector Air Quality Emissions (CA-REMARQUE) model v1.0

    NASA Astrophysics Data System (ADS)

    Zapata, Christina B.; Yang, Chris; Yeh, Sonia; Ogden, Joan; Kleeman, Michael J.

    2018-04-01

    The California Regional Multisector Air Quality Emissions (CA-REMARQUE) model is developed to predict changes to criteria pollutant emissions inventories in California in response to sophisticated emissions control programs implemented to achieve deep greenhouse gas (GHG) emissions reductions. Two scenarios for the year 2050 act as the starting point for calculations: a business-as-usual (BAU) scenario and an 80 % GHG reduction (GHG-Step) scenario. Each of these scenarios was developed with an energy economic model to optimize costs across the entire California economy and so they include changes in activity, fuels, and technology across economic sectors. Separate algorithms are developed to estimate emissions of criteria pollutants (or their precursors) that are consistent with the future GHG scenarios for the following economic sectors: (i) on-road, (ii) rail and off-road, (iii) marine and aviation, (iv) residential and commercial, (v) electricity generation, and (vi) biorefineries. Properly accounting for new technologies involving electrification, biofuels, and hydrogen plays a central role in these calculations. Critically, criteria pollutant emissions do not decrease uniformly across all sectors of the economy. Emissions of certain criteria pollutants (or their precursors) increase in some sectors as part of the overall optimization within each of the scenarios. This produces nonuniform changes to criteria pollutant emissions in close proximity to heavily populated regions when viewed at 4 km spatial resolution with implications for exposure to air pollution for those populations. As a further complication, changing fuels and technology also modify the composition of reactive organic gas emissions and the size and composition of particulate matter emissions. This is most notably apparent through a comparison of emissions reductions for different size fractions of primary particulate matter. Primary PM2.5 emissions decrease by 4 % in the GHG-Step scenario vs. the BAU scenario while corresponding primary PM0.1 emissions decrease by 36 %. Ultrafine particles (PM0.1) are an emerging pollutant of concern expected to impact public health in future scenarios. The complexity of this situation illustrates the need for realistic treatment of criteria pollutant emissions inventories linked to GHG emissions policies designed for fully developed countries and states with strict existing environmental regulations.

  17. A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal.

    PubMed

    Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka

    2017-09-01

    Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion for dissociating neural signals from noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that the neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression, and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) on the EEG and ECoG data of anesthetized nonhuman primates. The results (best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
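
    The third step of this pipeline, scoring the information shared between separated EEG components and reference ECoG with canonical correlation analysis, can be sketched as below. Data and mixing are synthetic placeholders, and FastICA stands in for the candidate BSS algorithm.

    ```python
    # Minimal sketch: score how much information a subset of separated EEG
    # components shares with reference ECoG channels via CCA.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(2)
    n_samples = 2000
    sources = rng.laplace(size=(n_samples, 4))          # stand-in neural sources
    eeg = sources @ rng.random((4, 8))                  # 8 mixed "EEG" channels
    ecog = sources[:, :2] + 0.1 * rng.standard_normal((n_samples, 2))  # reference

    components = FastICA(n_components=4, random_state=0).fit_transform(eeg)

    def shared_information(subset, reference, k=2):
        """First k canonical correlations between a component subset and the reference."""
        cca = CCA(n_components=k).fit(subset, reference)
        u, v = cca.transform(subset, reference)
        return [abs(np.corrcoef(u[:, i], v[:, i])[0, 1]) for i in range(k)]

    print("canonical correlations:", shared_information(components, ecog))
    ```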

  18. Greenhouse gases emission from municipal waste management: The role of separate collection.

    PubMed

    Calabrò, Paolo S

    2009-07-01

    Municipal solid waste management significantly contributes to the emission of greenhouse gases (e.g. CO₂, CH₄, N₂O) into the atmosphere, and therefore the management process, from collection to treatment and disposal, has to be optimized in order to reduce these emissions. In this paper, starting from the average composition of undifferentiated municipal solid waste in Italy, the effect of separate collection on greenhouse gas emissions from municipal waste management has been assessed. Different combinations of separate collection scenarios and disposal options (i.e. landfilling and incineration) have been considered. The effect of energy recovery from waste, both in landfills and in incinerators, has also been addressed. The results outline how a separate collection approach can have a significant effect on the emission of greenhouse gases, and how wise municipal solid waste management, implying the adoption of Best Available Technologies (i.e. biogas recovery and exploitation systems in landfills and energy recovery systems in Waste-to-Energy plants), can not only significantly reduce greenhouse gas emissions but, in certain cases, can also make the overall process a carbon sink. Moreover, it has been shown that separate collection of plastic is a major issue when dealing with global-warming-relevant emissions from municipal solid waste management.

  19. Detection and severity classification of extracardiac interference in ⁸²Rb PET myocardial perfusion imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orton, Elizabeth J., E-mail: eorton@physics.carleton.ca; Kemp, Robert A. de; Glenn Wells, R.

    2014-10-15

    Purpose: Myocardial perfusion imaging (MPI) is used for diagnosis and prognosis of coronary artery disease. When MPI studies are performed with positron emission tomography (PET) and the radioactive tracer rubidium-82 chloride (⁸²Rb), a small but non-negligible fraction of studies (∼10%) suffer from extracardiac interference: high levels of tracer uptake in structures adjacent to the heart which mask the true cardiac tracer uptake. At present, there are no clinically available options for automated detection or correction of this problem. This work presents an algorithm that detects and classifies the severity of extracardiac interference in ⁸²Rb PET MPI images and reports the accuracy and failure rate of the method. Methods: A set of 200 ⁸²Rb PET MPI images were reviewed by a trained nuclear cardiologist and interference severity reported on a four-class scale, from absent to severe. An automated algorithm was developed that compares uptake at the external border of the myocardium to three thresholds, separating the four interference severity classes. A minimum area of interference was required, and the search region was limited to that facing the stomach wall and spleen. Maximizing concordance (Cohen's Kappa) and minimizing failure rate for the set of 200 clinician-read images were used to find the optimal population-based constants defining the search limit and minimum area parameters and the thresholds for the algorithm. Tenfold stratified cross-validation was used to find optimal thresholds and report accuracy measures (sensitivity, specificity, and Kappa). Results: The algorithm was capable of detecting interference with a mean [95% confidence interval] sensitivity/specificity/Kappa of 0.97 [0.94, 1.00]/0.82 [0.66, 0.98]/0.79 [0.65, 0.92], and a failure rate of 1.0% ± 0.2%. The four-class overall Kappa was 0.72 [0.64, 0.81]. Separation of mild versus moderate-or-greater interference was performed with good accuracy (sensitivity/specificity/Kappa = 0.92 [0.86, 0.99]/0.86 [0.71, 1.00]/0.78 [0.64, 0.92]), while separation of moderate versus severe interference severity classes showed reduced sensitivity/Kappa but little change in specificity (sensitivity/specificity/Kappa = 0.83 [0.77, 0.88]/0.82 [0.77, 0.88]/0.65 [0.60, 0.70]). Specificity was greater than sensitivity for all interference classes. Algorithm execution time was <1 min. Conclusions: The algorithm produced here has a low failure rate and high accuracy for detection of extracardiac interference in ⁸²Rb PET MPI scans. It provides a fast, reliable, automated method for assessing severity of extracardiac interference.
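
    The threshold-tuning step, choosing three cut points on a continuous uptake score to maximize Cohen's Kappa against the four-class clinician readings, can be illustrated schematically. The scores, labels, and candidate grid below are synthetic assumptions, not the paper's data.

    ```python
    # Schematic threshold search: pick three cut points that maximize
    # Cohen's kappa between four-class predictions and clinician labels.
    import numpy as np
    from itertools import combinations
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(6)
    labels = rng.integers(0, 4, 200)                  # clinician classes 0-3
    scores = labels + 0.6 * rng.standard_normal(200)  # noisy uptake measure

    candidates = np.quantile(scores, np.linspace(0.05, 0.95, 19))
    best = max(combinations(candidates, 3),
               key=lambda t: cohen_kappa_score(labels, np.digitize(scores, t)))
    print("thresholds:", np.round(best, 2),
          "kappa:", round(cohen_kappa_score(labels, np.digitize(scores, best)), 3))
    ```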

  20. Life cycle and economic assessment of source-separated MSW collection with regard to greenhouse gas emissions: a case study in China.

    PubMed

    Dong, Jun; Ni, Mingjiang; Chi, Yong; Zou, Daoan; Fu, Chao

    2013-08-01

    In China, the continuously increasing amount of municipal solid waste (MSW) has resulted in an urgent need to change the current municipal solid waste management (MSWM) system, which is based on mixed collection. A pilot program focusing on source-separated MSW collection was thus launched (2010) in Hangzhou, China, to lessen the related environmental loads, and greenhouse gas (GHG) emissions (as covered by the Kyoto Protocol) are singled out in particular. This paper uses life cycle assessment modeling to evaluate the potential environmental improvement with regard to GHG emissions. The pre-existing MSWM system is assessed as the baseline, while the source separation scenario is compared internally. Results show that GHG emissions can be decreased by 23 % by source-separated collection compared with the base scenario. In addition, the use of composting and anaerobic digestion (AD) is suggested for further optimizing the management of food waste. 260.79, 82.21, and -86.21 thousand tonnes of GHG emissions are emitted from food waste landfill, composting, and AD, respectively, demonstrating the emission reduction potential offered by advanced food waste treatment technologies. Recognizing this, a modified MSWM system is proposed with AD as the food waste substitution option, saving an additional 44 % of GHG emissions compared with the current source separation scenario. Moreover, a preliminary economic assessment is implemented. It is demonstrated that both source separation scenarios have better cost reduction potential than mixed collection, with the proposed new system being the most cost-effective one.

  1. Modeling Self-subtraction in Angular Differential Imaging: Application to the HD 32297 Debris Disk

    NASA Astrophysics Data System (ADS)

    Esposito, Thomas M.; Fitzgerald, Michael P.; Graham, James R.; Kalas, Paul

    2014-01-01

    We present a new technique for forward-modeling self-subtraction of spatially extended emission in observations processed with angular differential imaging (ADI) algorithms. High-contrast direct imaging of circumstellar disks is limited by quasi-static speckle noise, and ADI is commonly used to suppress those speckles. However, the application of ADI can result in self-subtraction of the disk signal due to the disk's finite spatial extent. This signal attenuation varies with radial separation and biases measurements of the disk's surface brightness, thereby compromising inferences regarding the physical processes responsible for the dust distribution. To compensate for this attenuation, we forward model the disk structure and compute the form of the self-subtraction function at each separation. As a proof of concept, we apply our method to 1.6 and 2.2 μm Keck adaptive optics NIRC2 scattered-light observations of the HD 32297 debris disk reduced using a variant of the "locally optimized combination of images" algorithm. We are able to recover disk surface brightness that was otherwise lost to self-subtraction and produce simplified models of the brightness distribution as it appears with and without self-subtraction. From the latter models, we extract radial profiles for the disk's brightness, width, midplane position, and color that are unbiased by self-subtraction. Our analysis of these measurements indicates a break in the brightness profile power law at r ≈ 110 AU and a disk width that increases with separation from the star. We also verify disk curvature that displaces the midplane by up to 30 AU toward the northwest relative to a straight fiducial midplane.

  2. Two-Photon Excitation STED Microscopy with Time-Gated Detection

    PubMed Central

    Coto Hernández, Iván; Castello, Marco; Lanzanò, Luca; d’Amora, Marta; Bianchini, Paolo; Diaspro, Alberto; Vicidomini, Giuseppe

    2016-01-01

    We report on a novel two-photon excitation stimulated emission depletion (2PE-STED) microscope based on time-gated detection. The time-gated detection allows for the effective silencing of the fluorophores using a moderate stimulated emission beam intensity. This opens the possibility of implementing an efficient 2PE-STED microscope with a stimulated emission beam running in continuous-wave mode. The continuous-wave stimulated emission beam reduces the complexity and cost of the laser architecture, but the time-gated detection degrades the signal-to-noise ratio (SNR) and signal-to-background ratio (SBR) of the image. We recover the SNR and the SBR through a multi-image deconvolution algorithm. Indeed, the algorithm simultaneously reassigns early photons (normally discarded by the time-gated detection) to their original positions and removes the background induced by the stimulated emission beam. We exemplify the benefits of this implementation by imaging sub-cellular structures. Finally, we discuss the extension of this algorithm to future all-pulsed 2PE-STED implementations based on time-gated detection and a nanosecond laser source. PMID:26757892

  3. 40 CFR 63.1567 - What are my requirements for inorganic HAP emissions from catalytic reforming units?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... different reactors in the catalytic reforming unit are regenerated in separate regeneration systems, then these emission limitations apply to each separate regeneration system. These emission limitations apply... catalyst rejuvenation operations during coke burn-off and catalyst regeneration. You can choose from the...

  4. 40 CFR 63.1567 - What are my requirements for inorganic HAP emissions from catalytic reforming units?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... different reactors in the catalytic reforming unit are regenerated in separate regeneration systems, then these emission limitations apply to each separate regeneration system. These emission limitations apply... catalyst rejuvenation operations during coke burn-off and catalyst regeneration. You can choose from the...

  5. 40 CFR 63.1567 - What are my requirements for inorganic HAP emissions from catalytic reforming units?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... different reactors in the catalytic reforming unit are regenerated in separate regeneration systems, then these emission limitations apply to each separate regeneration system. These emission limitations apply... catalyst rejuvenation operations during coke burn-off and catalyst regeneration. You can choose from the...

  6. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    NASA Astrophysics Data System (ADS)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance compared with two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.

  7. A Formal Framework for the Analysis of Algorithms That Recover From Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Munoz, Cesar A.

    2008-01-01

    We present a mathematical framework for the specification and verification of state-based conflict resolution algorithms that recover from loss of separation. In particular, we propose rigorous definitions of horizontal and vertical maneuver correctness that yield horizontal and vertical separation, respectively, in a bounded amount of time. We also provide sufficient conditions for independent correctness, i.e., separation under the assumption that only one aircraft maneuvers, and for implicitly coordinated correctness, i.e., separation under the assumption that both aircraft maneuver. An important benefit of this approach is that different aircraft can execute different algorithms and implicit coordination will still be achieved, as long as they all meet the explicit criteria of the framework. Towards this end we have sought to make the criteria as general as possible. The framework presented in this paper has been formalized and mechanically verified in the Prototype Verification System (PVS).

  8. Active flow separation control by a position-based iterative learning control algorithm with experimental validation

    NASA Astrophysics Data System (ADS)

    Cai, Zhonglun; Chen, Peng; Angland, David; Zhang, Xin

    2014-03-01

    A novel iterative learning control (ILC) algorithm was developed and applied to an active flow control problem. The technique uses pulsed air jets to delay flow separation on a two-element high-lift wing. The ILC algorithm uses position-based pressure measurements to update the actuation. The method was experimentally tested on a wing model in a 0.9 m × 0.6 m low-speed wind tunnel at the University of Southampton. Compressed air and fast switching solenoid valves were used as actuators to excite the flow, and the pressure distribution around the chord of the wing was measured as a feedback control signal for the ILC controller. Experimental results showed that the actuation was able to delay the separation and increase the lift by approximately 10%-15%. By using the ILC algorithm, the controller was able to find the optimum control input and maintain the improvement despite sudden changes of the separation position.
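
    The core of an iterative learning controller is the trial-to-trial update u_{k+1} = u_k + L·e_k, which refines the actuation using the error measured on the previous run. A toy sketch on a stand-in first-order plant follows; the dynamics, learning gain, and reference profile are invented for illustration.

    ```python
    # Toy iterative learning control loop: u_{k+1} = u_k + L * e_k on a
    # placeholder first-order plant standing in for the pulsed-jet rig.
    import numpy as np

    N, trials, gain = 100, 50, 1.0
    reference = np.sin(np.linspace(0, 2 * np.pi, N))   # desired "pressure" profile

    def plant(u):
        """First-order lag y[i] = 0.8*y[i-1] + 0.2*u[i-1] (placeholder dynamics)."""
        y = np.zeros_like(u)
        for i in range(1, len(u)):
            y[i] = 0.8 * y[i - 1] + 0.2 * u[i - 1]
        return y

    u = np.zeros(N)
    for k in range(trials):
        e = reference - plant(u)                 # error from the previous trial
        u = u + gain * np.append(e[1:], 0.0)     # one-step shift for plant delay
    print("final RMS tracking error:", np.sqrt(np.mean((reference - plant(u))**2)))
    ```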

  9. Global Multispectral Cloud Retrievals from MODIS

    NASA Technical Reports Server (NTRS)

    King, Michael D.; Platnick, Steven; Ackerman, Steven A.; Menzel, W. Paul; Riedi, Jerome C.; Baum, Bryan A.

    2003-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) was developed by NASA and launched onboard the Terra spacecraft on December 18, 1999 and the Aqua spacecraft on May 4, 2002. It achieved its final orbit and began Earth observations on February 24, 2000 for Terra and June 24, 2002 for Aqua. A comprehensive set of remote sensing algorithms for cloud masking and the retrieval of cloud physical and optical properties has been developed by members of the MODIS atmosphere science team. The archived products from these algorithms have applications in climate change studies, climate modeling, numerical weather prediction, as well as fundamental atmospheric research. In addition to an extensive cloud mask, products include cloud-top properties (temperature, pressure, effective emissivity), cloud thermodynamic phase, cloud optical and microphysical parameters (optical thickness, effective particle radius, water path), as well as derived statistics. We will describe the various cloud properties being analyzed on a global basis from both Terra and Aqua, and will show characteristics of cloud optical and microphysical properties as a function of latitude for land and ocean separately, and contrast the statistical properties of similar cloud types in various parts of the world.

  10. A wildland fire emission inventory for the western United States - uncertainty across spatial and temporal scales

    Treesearch

    Shawn Urbanski; WeiMin Hao

    2010-01-01

    Emissions of trace gases and aerosols by biomass burning (BB) have a significant influence on the chemical composition of the atmosphere, air quality, and climate. BB emissions depend on a range of variables including burned area, fuels, meteorology, combustion completeness, and emission factors (EF). Emission algorithms provide BB emission inventories (EI) which serve...

  11. 40 CFR 63.1567 - What are my requirements for inorganic HAP emissions from catalytic reforming units?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... reactors in the catalytic reforming unit are regenerated in separate regeneration systems, then these emission limitations apply to each separate regeneration system. These emission limitations apply to... rejuvenation operations during coke burn-off and catalyst regeneration. You can choose from the two options in...

  12. The search for extended infrared emission near interacting and active galaxies

    NASA Technical Reports Server (NTRS)

    Appleton, Philip N.

    1991-01-01

    The following subject areas are covered: the search for extended far IR emission; the search for extended emission in galaxy groups; a brief review of the flattening algorithm; the target groups; extended emission from groups and intergalactic HI clouds; and morphological image processing.

  13. Separation of BSA through FAU-type zeolite ceramic composite membrane formed on tubular ceramic support: Optimization of process parameters by hybrid response surface methodology and biobjective genetic algorithm.

    PubMed

    Vinoth Kumar, R; Ganesh Moorthy, I; Pugazhenthi, G

    2017-08-09

    In this study, Faujasite (FAU) zeolite was coated on a low-cost tubular ceramic support as a separating layer through a hydrothermal route. A mixture of silicate and aluminate solutions was used to create the zeolitic separation layer on the support. The prepared zeolite ceramic composite membrane was characterized using X-ray powder diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), particle size distribution (PSD), field emission scanning electron microscopy (FESEM), and zeta potential measurements. The porosity of the ceramic support (53%) was reduced by the deposition of the FAU zeolite layer (43%). The pore size and water permeability of the membrane were evaluated as 0.179 µm and 1.62 × 10⁻⁷ m³/(m² s kPa), respectively, which are lower than those of the support (pore size of 0.309 µm and water permeability of 5.93 × 10⁻⁷ m³/(m² s kPa)). The permeate flux and rejection potential of the prepared membrane were evaluated by microfiltration of bovine serum albumin (BSA). Response surface methodology (RSM) was used to study the influence of three independent variables, operating pressure (68.94-275.79 kPa), BSA concentration (100-500 ppm), and solution pH (2-4), on permeate flux and percentage rejection. The predicted models for permeate flux and rejection were further subjected to a biobjective genetic algorithm (GA). The hybrid RSM-GA approach resulted in a maximum permeate flux of 2.66 × 10⁻⁵ m³/(m² s) and a BSA rejection of 88.02%, with the optimum conditions being a BSA concentration of 100 ppm, a solution pH of 2, and an applied pressure of 275.79 kPa. In addition, the separation efficiency was compared with that of other membranes applied for BSA separation to demonstrate the potential of the fabricated FAU zeolite ceramic composite membrane.
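
    The hybrid RSM-GA idea, fitting a quadratic response surface to the three factors and then searching it with an evolutionary optimizer, might look as follows in outline. The data are synthetic, and SciPy's differential evolution stands in for the paper's biobjective genetic algorithm.

    ```python
    # Sketch: fit a quadratic response surface to (pressure, BSA concentration,
    # pH) -> flux data, then search it with an evolutionary optimizer.
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(3)

    def design_matrix(X):
        p, c, ph = X[:, 0], X[:, 1], X[:, 2]
        return np.column_stack([np.ones(len(X)), p, c, ph, p*c, p*ph, c*ph,
                                p**2, c**2, ph**2])

    # Placeholder "experiments" within the paper's stated factor ranges.
    X = rng.uniform([68.94, 100, 2], [275.79, 500, 4], size=(30, 3))
    flux = 1e-5 * (2 + 0.004 * X[:, 0] - 0.002 * X[:, 1] - 0.3 * X[:, 2]) \
           + 1e-7 * rng.standard_normal(30)

    beta, *_ = np.linalg.lstsq(design_matrix(X), flux, rcond=None)

    def neg_flux(x):
        return -float(design_matrix(x[None, :]) @ beta)

    result = differential_evolution(neg_flux, seed=3,
                                    bounds=[(68.94, 275.79), (100, 500), (2, 4)])
    print("optimum (kPa, ppm, pH):", result.x, "flux:", -result.fun)
    ```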

  14. EMISSION AND SURFACE EXCHANGE PROCESS

    EPA Science Inventory

    This task supports the development, evaluation, and application of emission and dry deposition algorithms in air quality simulation models, such as the Models-3/Community Multiscale Air Quality (CMAQ) modeling system. Emission estimates greatly influence the accuracy of air qual...

  15. Single-channel mixed signal blind source separation algorithm based on multiple ICA processing

    NASA Astrophysics Data System (ADS)

    Cheng, Xiefeng; Li, Ji

    2017-01-01

    Taking the separation of the fetal heart sound signal from the mixed signal obtained with an electronic stethoscope as the research background, this paper puts forward a single-channel mixed-signal blind source separation algorithm based on multiple rounds of ICA processing. First, using empirical mode decomposition (EMD), the single-channel mixed signal is decomposed into multiple orthogonal signal components, which are then processed by ICA. The resulting independent signal components are called the independent sub-components of the mixed signal. Then, by combining the multiple independent sub-components with the single-channel mixed signal, the single-channel signal is expanded into a multichannel signal, which turns the under-determined blind source separation problem into a well-posed blind source separation problem. An estimate of the source signal is then obtained by ICA processing. Finally, if the separation is not satisfactory, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. The simulation results show that the algorithm has a good separation effect for single-channel mixed physiological signals.
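
    The decompose-then-separate idea can be sketched in a few lines: EMD expands the single channel into several IMF "channels", which ICA then unmixes. This assumes the PyEMD (EMD-signal) package, and the mixed signal below is a synthetic stand-in for a stethoscope recording.

    ```python
    # Minimal sketch: EMD turns 1 channel into multiple IMF channels, then
    # FastICA separates them (assumes the PyEMD / EMD-signal package).
    import numpy as np
    from PyEMD import EMD
    from sklearn.decomposition import FastICA

    t = np.linspace(0, 1, 2000)
    fetal = np.sin(2 * np.pi * 140 * t)             # placeholder fetal-like tone
    maternal = np.sign(np.sin(2 * np.pi * 72 * t))  # placeholder maternal-like beat
    mixed = fetal + maternal + 0.05 * np.random.default_rng(4).standard_normal(t.size)

    imfs = EMD().emd(mixed)                  # 1 channel -> multiple IMF channels
    ica = FastICA(n_components=min(4, imfs.shape[0]), random_state=0)
    estimated = ica.fit_transform(imfs.T)    # columns are estimated sources
    print("IMFs:", imfs.shape[0], "estimated sources:", estimated.shape[1])
    ```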

  16. Climatological Characterization of Three-Dimensional Storm Structure from Operational Radar and Rain Gauge Data.

    NASA Astrophysics Data System (ADS)

    Steiner, Matthias; Houze, Robert A., Jr.; Yuter, Sandra E.

    1995-09-01

    Three algorithms extract information on precipitation type, structure, and amount from operational radar and rain gauge data. Tests on one month of data from one site show that the algorithms perform accurately and provide products that characterize the essential features of the precipitation climatology. Inputs to the algorithms are the operationally executed volume scans of a radar and the data from a surrounding rain gauge network. The algorithms separate the radar echoes into convective and stratiform regions, statistically summarize the vertical structure of the radar echoes, and determine precipitation rates and amounts at high spatial resolution. The convective and stratiform regions are separated on the basis of the intensity and sharpness of the peaks of echo intensity. The peaks indicate the centers of the convective region; precipitation not identified as convective is stratiform. This method avoids the problem of underestimating the stratiform precipitation. The separation criteria are applied in exactly the same way throughout the observational domain, and the product generated by the algorithm can be compared directly to model output. An independent test of the algorithm on data for which high-resolution dual-Doppler observations are available shows that the convective-stratiform separation algorithm is consistent with the physical definitions of convective and stratiform precipitation. The vertical structure algorithm presents the frequency distribution of radar reflectivity as a function of height and thus summarizes in a single plot the vertical structure of all the radar echoes observed during a month (or any other time period). Separate plots reveal the essential differences in structure between the convective and stratiform echoes. Tests yield similar results (within less than 10%) for monthly rain statistics regardless of the technique used for estimating the precipitation, as long as the radar reflectivity values are adjusted to agree with monthly rain gauge data. It makes little difference whether the adjustment is by monthly mean rates or percentiles. Further tests show that 1-h sampling is sufficient to obtain an accurate estimate of monthly rain statistics.
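
    A much-simplified version of the intensity-and-peakedness partition described above is sketched below; the absolute threshold, background radius, and peakedness curve are illustrative choices, not the paper's tuned values.

    ```python
    # Simplified convective/stratiform partition on a 2-D reflectivity grid:
    # a pixel is convective if it is intense enough or stands out sharply
    # above its local background.
    import numpy as np

    def separate(dbz, bg_radius=5, intensity_thresh=40.0):
        """Return a boolean mask of convective pixels."""
        ny, nx = dbz.shape
        convective = dbz >= intensity_thresh          # criterion 1: absolute intensity
        for j in range(ny):
            for i in range(nx):
                j0, j1 = max(0, j - bg_radius), min(ny, j + bg_radius + 1)
                i0, i1 = max(0, i - bg_radius), min(nx, i + bg_radius + 1)
                background = dbz[j0:j1, i0:i1].mean()
                # criterion 2: required peakedness shrinks as the background grows
                delta = 10.0 if background < 0 else max(10.0 - background**2 / 180.0, 0.0)
                if dbz[j, i] - background >= delta:
                    convective[j, i] = True
        return convective   # echo that is not convective is classed stratiform

    rng = np.random.default_rng(5)
    field = 25 + 8 * rng.standard_normal((60, 60))    # synthetic stratiform field
    field[28:32, 28:32] += 25                         # embedded convective core
    print("convective fraction:", separate(field).mean())
    ```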

  17. Extreme ultraviolet and Soft X-ray diagnostic upgrade on the HBT-EP tokamak: Progress and Results

    NASA Astrophysics Data System (ADS)

    Desanto, S.; Levesque, J. P.; Battey, A.; Brooks, J. W.; Mauel, M. E.; Navratil, G. A.; Hansen, C. J.

    2017-10-01

    In order to understand internal MHD mode structure in a tokamak plasma, it is helpful to understand temperature and density fluctuations within that plasma. In the HBT-EP tokamak, the plasma emits bremsstrahlung radiation in the extreme ultraviolet (EUV) and soft x-ray (SXR) regimes, and the emitted power is primarily related to electron density and temperature. This radiation is detected by photodiode arrays located at several different angular positions near the plasma's edge, each array making several views through a poloidal slice of plasma. From these measurements a 2-d emissivity profile of that slice can be reconstructed with tomographic algorithms. This profile cannot directly tell us whether the emissivity is due to electron density, temperature, line emission, or charge recombination; however, when combined with information from other diagnostics, it can provide strong evidence of the type of internal mode or modes depending on the temporal-spatial context. We present ongoing progress and results on the installation of a new system that will eventually consist of four arrays of 16 views each and a separate two-color, 16-chord tangential system, which will provide an improved understanding of the internal structure of HBT-EP plasmas. Supported by U.S. DOE Grant DE-FG02-86ER5322.

  18. State-Based Implicit Coordination and Applications

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.

    2011-01-01

    In air traffic management, pairwise coordination is the ability to achieve separation requirements when conflicting aircraft simultaneously maneuver to solve a conflict. Resolution algorithms are implicitly coordinated if they provide coordinated resolution maneuvers to conflicting aircraft when only surveillance data, e.g., position and velocity vectors, is periodically broadcast by the aircraft. This paper proposes an abstract framework for reasoning about state-based implicit coordination. The framework consists of a formalized mathematical development that enables and simplifies the design and verification of implicitly coordinated state-based resolution algorithms. The use of the framework is illustrated with several examples of algorithms and formal proofs of their coordination properties. The work presented here supports the safety case for a distributed self-separation air traffic management concept where different aircraft may use different conflict resolution algorithms and be assured that separation will be maintained.

  19. DEPENDENCE OF NITRIC OXIDE EMISSIONS ON VEHICLE LOAD: RESULTS FROM THE GTRP INSTRUMENTED VEHICLE PROGRAM

    EPA Science Inventory

    The presentation discussed the dependence of nitric oxide (NO) emissions on vehicle load, based on results from an instrumented-vehicle program. The accuracy and feasibility of modal emissions models depend on algorithms to allocate vehicle emissions based on a vehicle operation...

  20. INTEGRATION OF THE BIOGENIC EMISSIONS INVENTORY SYSTEM (BEIS3) INTO THE COMMUNITY MULTISCALE AIR QUALITY MODELING SYSTEM

    EPA Science Inventory

    The importance of biogenic emissions for regional air quality modeling is generally recognized [Guenther et al., 2000]. Since the 1980s, biogenic emission estimates have been derived from algorithms such as the Biogenic Emissions Inventory System (BEIS) [Pierce et al., 1998]....

  1. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for surface reflectance in the visible and near-infrared regimes have been well studied, models for temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance, which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR may be difficult to separate when imaged in the LWIR.

  2. [MODIS Investigation

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1997-01-01

    We are responsible for the delivery of two at-launch products for AM-1: fluorescence line height (FLH) and chlorophyll fluorescence efficiency (CFE). In our last report we had planned to combine the two separate algorithms into a single piece of code. However, after discussions with Bob Evans, it was decided that it was best to leave the two algorithms separate. They have been integrated into the MOCEAN processing system, and given their low computational requirements, it is easier to keep them separate. In addition, there remain questions concerning the specific chlorophyll product that will be used for the CFE calculation. Presently, the CFE algorithm relies on the chlorophyll product produced by Ken Carder. This product is based on a reflectance model and is theoretically different from the chlorophyll product being provided by Dennis Clark (NOAA). These two products will be compared systematically in the coming months. If we decide to switch to the Clark product, it will be simpler to modify the CFE algorithm if it remains separate from the FLH algorithm. Our focus for the next six months is to refine the quality flags that were delivered as part of the algorithm last summer. A description of these flags was provided to Evans for the MOCEAN processing system. A summary was included in the revised ATBD. Some of the flags depend on flags produced by the input products, so coordination will be required.

  3. Economic environmental dispatch using BSA algorithm

    NASA Astrophysics Data System (ADS)

    Jihane, Kartite; Mohamed, Cherkaoui

    2018-05-01

    The economic environmental dispatch (EED) problem is an important issue, especially in fossil-fuel power plant systems. It allows the network manager to choose, among the available units, those that are most optimized in terms of fuel cost and emission level. The objective of this paper is to minimize the fuel cost subject to an emission constraint; the test is conducted for two cases, a six-generator system and a ten-generator system, for the same power demand of 1200 MW. The simulation has been computed in MATLAB, and the results show the robustness of the Backtracking Search optimization Algorithm (BSA) and the impact of the load demand on the emissions.
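
    The underlying optimization, minimizing a quadratic fuel cost subject to a power balance and an emission limit, can be written down in a few lines. The sketch below uses made-up generator coefficients and a generic SQP solver in place of BSA.

    ```python
    # Illustrative emission-constrained dispatch: minimize quadratic fuel cost
    # subject to a power balance and an emission cap (placeholder coefficients).
    import numpy as np
    from scipy.optimize import minimize

    a = np.array([0.004, 0.006, 0.009, 0.005, 0.007, 0.008])   # $/MW^2 h
    b = np.array([5.3, 5.5, 5.8, 5.4, 5.6, 5.9])               # $/MWh
    c = np.array([500., 400., 200., 450., 350., 300.])         # $/h
    e = np.array([0.02, 0.03, 0.05, 0.025, 0.04, 0.045])       # ton/MW^2 h
    demand, emission_cap = 1200.0, 40000.0

    cost = lambda p: float(np.sum(a * p**2 + b * p + c))
    cons = [{"type": "eq",   "fun": lambda p: np.sum(p) - demand},
            {"type": "ineq", "fun": lambda p: emission_cap - np.sum(e * p**2)}]
    res = minimize(cost, x0=np.full(6, demand / 6), bounds=[(50, 400)] * 6,
                   constraints=cons, method="SLSQP")
    print("dispatch (MW):", np.round(res.x, 1), "cost ($/h):", round(res.fun, 1))
    ```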

  4. The Status of the NASA MEaSUREs Combined ASTER and MODIS Emissivity Over Land (CAMEL) Products

    NASA Astrophysics Data System (ADS)

    Borbas, E. E.; Feltz, M.; Hulley, G. C.; Knuteson, R. O.; Hook, S. J.

    2017-12-01

    As part of a NASA MEaSUREs Land Surface Temperature and Emissivity project, the University of Wisconsin Space Science and Engineering Center and NASA's Jet Propulsion Laboratory have developed a global monthly mean emissivity Earth System Data Record (ESDR). The CAMEL ESDR was produced by merging two current state-of-the-art emissivity datasets: the UW-Madison MODIS Infrared emissivity dataset (UWIREMIS) and the JPL ASTER Global Emissivity Dataset v4 (GEDv4). The dataset includes monthly global records of emissivity and uncertainty at 13 hinge points between 3.6 and 14.3 µm, and Principal Components Analysis (PCA) coefficients, at 5 kilometer resolution for the years 2003 to 2015. A high-spectral-resolution algorithm is also provided for HSR applications. The dataset is currently being tested in sounder retrieval algorithms (e.g. CrIS, IASI) and has already been implemented in RTTOV-12 for immediate use in numerical weather modeling and data assimilation. This poster will present the current status of the dataset.

  5. Greenhouse gas and ammonia emissions from digested and separated dairy manure during storage and after land application

    USDA-ARS?s Scientific Manuscript database

    Manure management at dairy production facilities, including anaerobic digestion (AD) and solid-liquid separation (SLS), has a strong potential for the abatement of greenhouse gas (GHG) and ammonia (NH3) emissions. This study evaluated the effects of AD, SLS, and AD+SLS on GHG and NH3 emissions durin...

  6. A robust multilevel simultaneous eigenvalue solver

    NASA Technical Reports Server (NTRS)

    Costiner, Sorin; Taasan, Shlomo

    1993-01-01

    Multilevel (ML) algorithms for eigenvalue problems often face several types of difficulties, such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solutions on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not appropriately treat these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm, which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and robustness tests which monitor the algorithm's efficiency and convergence. The eigenvector separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a backrotation technique. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine-level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger-type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second-order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine-level relaxations per eigenvector.

  7. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Thicke, Kyle

    2017-12-01

    We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.

  8. A Robust Automatic Ionospheric O/X Mode Separation Technique for Vertical Incidence Sounders

    NASA Astrophysics Data System (ADS)

    Harris, T. J.; Pederick, L. H.

    2017-12-01

    The sounding of the ionosphere by a vertical incidence sounder (VIS) is the oldest and most common technique for determining the state of the ionosphere. The automatic extraction of relevant ionospheric parameters from the ionogram image, referred to as scaling, is important for the effective utilization of data from large ionospheric sounder networks. Due to the Earth's magnetic field, the ionosphere is birefringent at radio frequencies, so a VIS will typically see two distinct returns for each frequency. For the automatic scaling of ionograms, it is highly desirable to be able to separate the two modes. Defence Science and Technology Group has developed a new VIS solution which is based on direct digital receiver technology and includes an algorithm to separate the O and X modes. This algorithm can provide high-quality separation even in difficult ionospheric conditions. In this paper we describe the algorithm and demonstrate its consistency and reliability in successfully separating 99.4% of the ionograms during a 27 day experimental campaign under sometimes demanding ionospheric conditions.

  9. Efficient image enhancement using sparse source separation in the Retinex theory

    NASA Astrophysics Data System (ADS)

    Yoon, Jongsu; Choi, Jangwon; Choe, Yoonsik

    2017-11-01

    Color constancy is the feature of the human vision system (HVS) that ensures the relative constancy of the perceived color of objects under varying illumination conditions. The Retinex theory of machine vision systems is based on the HVS. Among Retinex algorithms, the physics-based algorithms are efficient; however, they generally do not satisfy the local characteristics of the original Retinex theory because they eliminate global illumination from their optimization. We apply the sparse source separation technique to the Retinex theory to present a physics-based algorithm that satisfies the locality characteristic of the original Retinex theory. Previous Retinex algorithms have limited use in image enhancement because the total variation Retinex results in an overly enhanced image and the sparse source separation Retinex cannot completely restore the original image. In contrast, our proposed method preserves the image edge and can very nearly replicate the original image without any special operation.

  10. Fingerprint separation: an application of ICA

    NASA Astrophysics Data System (ADS)

    Singh, Meenakshi; Singh, Deepak Kumar; Kalra, Prem Kumar

    2008-04-01

    Among all existing biometric techniques, fingerprint-based identification is the oldest method, which has been successfully used in numerous applications. Fingerprint-based identification is the most recognized tool in biometrics because of its reliability and accuracy. Fingerprint identification is done by matching questioned and known friction skin ridge impressions from fingers, palms, and toes to determine if the impressions are from the same finger (or palm, toe, etc.). There are many fingerprint matching algorithms which automate and facilitate the job of fingerprint matching, but for any of these algorithms matching can be difficult if the fingerprints are overlapped or mixed. In this paper, we have proposed a new algorithm for separating overlapped or mixed fingerprints so that the performance of the matching algorithms will improve when they are fed with these inputs. Independent Component Analysis (ICA) has been used as a tool to separate the overlapped or mixed fingerprints.
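
    The separation idea can be demonstrated on toy data: two observed overlaps are treated as linear mixtures of two independent prints and unmixed with FastICA. The patterns and mixing weights below are invented; real overlapped fingerprint images would replace them.

    ```python
    # Minimal sketch of ICA-based unmixing of two overlapped images.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(7)
    shape = (64, 64)
    print1 = (rng.random(shape) > 0.5).astype(float)   # toy ridge pattern A
    print2 = (rng.random(shape) > 0.5).astype(float)   # toy ridge pattern B

    # Two observed overlaps with different mixing weights.
    mix = np.array([[0.6, 0.4], [0.35, 0.65]])
    observed = mix @ np.vstack([print1.ravel(), print2.ravel()])

    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(observed.T).T        # rows are separated prints
    print("recovered images:", recovered.reshape(2, *shape).shape)
    ```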

  11. Driving mechanism of unsteady separation shock motion in hypersonic interactive flow

    NASA Technical Reports Server (NTRS)

    Dolling, D. S.; Narlo, J. C., II

    1987-01-01

    Wall pressure fluctuations were measured under the unsteady separation shock waves in Mach 5 turbulent interactions induced by unswept circular cylinders on a flat plate. The wall temperature was adiabatic. A conditional sampling algorithm was developed to examine the statistics of the shock wave motion. The same algorithm was used to examine data taken in earlier studies in the Princeton University Mach 3 blowdown tunnel. In these earlier studies, hemicylindrically blunted fins of different leading-edge diameters were tested in boundary layers which developed on the tunnel floor and on a flat plate. A description of the algorithm, the reasons why it was developed, and the sensitivity of the results to the threshold settings are discussed. The results from the algorithm, together with cross correlations and power spectral density estimates, suggest that the shock motion is driven by the low-frequency unsteadiness of the downstream separated, vortical flow.

  12. Reassessment of the temperature-emissivity separation from multispectral thermal infrared data: Introducing the impact of vegetation canopy by simulating the cavity effect with the SAIL-Thermique model

    USDA-ARS?s Scientific Manuscript database

    We investigated the use of multispectral thermal imagery to retrieve land surface emissivity and temperature. In contrast to concurrent methods, the temperature-emissivity separation (TES) method requires only a single overpass and no ancillary information. This is possible since TES makes use o...

  13. NASA's MODIS/VIIRS Land Surface Temperature and Emissivity Products: Assessment of Accuracy, Continuity and Science Uses

    NASA Astrophysics Data System (ADS)

    Hulley, G. C.; Malakar, N.; Islam, T.

    2017-12-01

    Land Surface Temperature and Emissivity (LST&E) are an important Earth System Data Record (ESDR) and Environmental Climate Variable (ECV) as defined by NASA and GCOS, respectively. LST&E data are key variables in land cover/land use change studies, in surface energy balance models and atmospheric water vapor retrievals, and in climate research. LST&E products are currently produced on a routine basis from the MODIS instruments on the NASA EOS platforms and from the VIIRS instrument on the Suomi-NPP platform, which serves as a bridge between NASA EOS and the next-generation JPSS platforms. Two new NASA LST&E products for MODIS (MxD21) and VIIRS (VNP21) are being produced during 2017 using a new approach that addresses discrepancies in accuracy and consistency between the current suite of split-window-based LST products. The new approach uses a Temperature Emissivity Separation (TES) algorithm, originally developed for the ASTER instrument, to physically retrieve both LST and spectral emissivity for both sensors consistently, with high accuracy and well-defined uncertainties. This study provides a rigorous assessment of the accuracy of the MxD21/VNP21 products using temperature- and radiance-based validation strategies and demonstrates continuity between the products using collocated matchups over CONUS. We will further demonstrate potential science uses of the new products in studies of heat waves, snow-melt dynamics, and land cover/land use change.

  14. Correction of WindScat Scatterometric Measurements by Combining with AMSR Radiometric Data

    NASA Technical Reports Server (NTRS)

    Song, S.; Moore, R. K.

    1996-01-01

    The SeaWinds scatterometer on the Advanced Earth Observing Satellite-2 (ADEOS-2) will determine surface wind vectors by measuring the radar cross section. Multiple measurements will be made at different points in a wind-vector cell. When dense clouds and rain are present, the signal will be attenuated, giving erroneous results for the wind. This report describes algorithms that use the Advanced Microwave Scanning Radiometer (AMSR) on ADEOS-2 to correct for the attenuation. One can determine attenuation from a radiometer measurement based on the excess brightness temperature measured, i.e., the difference between the total measured brightness temperature and the contribution from surface emission. A major problem the algorithm must address is determining the surface contribution. Two basic approaches were developed: one uses the scattering coefficient measured along with the brightness temperature, and the other uses the brightness temperature alone. For both methods, the best results are obtained when the wind from the preceding wind-vector cell is available as an input to the algorithm: the scattering-coefficient method needs the wind direction from the preceding cell, and the brightness-temperature method needs the wind speed. If neither is available, the algorithm still works, but the corrections are less accurate. Both correction methods require iterative solutions. Simulations show that the algorithms make significant improvements in the measured scattering coefficient and thus in the retrieved wind vector. For stratiform rains, the errors without correction can be quite large, so the correction makes a major improvement. For systems of separated convective cells, the initial error is smaller and the correction, although about the same percentage, has a smaller effect.

  15. A multimodal logistics service network design with time windows and environmental concerns

    PubMed Central

    Zhang, Dezhi; He, Runzhong; Wang, Zhongwei

    2017-01-01

    The design of a multimodal logistics service network with customer service time windows and environmental costs is an important and challenging problem. Accordingly, this work establishes a model that minimizes the total cost of multimodal logistics service network design with time windows and environmental concerns. The proposed model incorporates CO2 emission costs to determine the optimal transportation mode combinations and investment selections for transfer nodes, subject to transport cost, transport time, carbon emission, and logistics service time window constraints. Genetic and heuristic algorithms are proposed to solve the model. A numerical example is provided to validate the model and the two algorithms, and the performance of the two algorithms is compared. Finally, this work investigates the effects of the logistics service time windows and CO2 emission taxes on the optimal solution. Several important management insights are obtained. PMID:28934272

  16. A multimodal logistics service network design with time windows and environmental concerns.

    PubMed

    Zhang, Dezhi; He, Runzhong; Li, Shuangyan; Wang, Zhongwei

    2017-01-01

    The design of a multimodal logistics service network with customer service time windows and environmental costs is an important and challenging problem. Accordingly, this work establishes a model that minimizes the total cost of multimodal logistics service network design with time windows and environmental concerns. The proposed model incorporates CO2 emission costs to determine the optimal transportation mode combinations and investment selections for transfer nodes, subject to transport cost, transport time, carbon emission, and logistics service time window constraints. Genetic and heuristic algorithms are proposed to solve the model. A numerical example is provided to validate the model and the two algorithms, and the performance of the two algorithms is compared. Finally, this work investigates the effects of the logistics service time windows and CO2 emission taxes on the optimal solution. Several important management insights are obtained.

  17. WEATHER EFFECTS ON ISOPRENE EMISSION CAPACITY AND APPLICATIONS IN EMISSIONS ALGORITHMS

    EPA Science Inventory

    Many plants synthesize isoprene. Because it is volatile and reacts rapidly with hydroxyl radicals, it is emitted to the atmosphere and plays a critical role in atmospheric chemistry. Determining effective remediation efforts for ozone pollution requires accurate isoprene emission...

  18. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation.

    PubMed

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. When a dataset has diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm, the localized ambient solidity separation (LASS) algorithm, built on a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the centroid distance criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that LASS not only inherits the ability of the original dissimilarity increments clustering method to separate naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or immersed in background noise. Finally, we compared LASS with the dissimilarity increments clustering method on a massive computer user dataset of over two million records containing demographic and behavioral information. The results show that LASS works extremely well on this dataset and extracts more knowledge from it.

  19. Localized Ambient Solidity Separation Algorithm Based Computer User Segmentation

    PubMed Central

    Sun, Xiao; Zhang, Tongda; Chai, Yueting; Liu, Yi

    2015-01-01

    Most popular clustering methods make strong assumptions about the dataset. For example, k-means implicitly assumes that all clusters come from spherical Gaussian distributions with different means but the same covariance. When a dataset has diverse distribution shapes or high dimensionality, these assumptions may no longer be valid. To overcome this weakness, we propose a new clustering algorithm, the localized ambient solidity separation (LASS) algorithm, built on a new isolation criterion called centroid distance. Compared with other density-based isolation criteria, the centroid distance criterion addresses the problems caused by high dimensionality and varying density. An experiment on a designed two-dimensional benchmark dataset shows that LASS not only inherits the ability of the original dissimilarity increments clustering method to separate naturally isolated clusters but can also identify clusters that are adjacent, overlapping, or immersed in background noise. Finally, we compared LASS with the dissimilarity increments clustering method on a massive computer user dataset of over two million records containing demographic and behavioral information. The results show that LASS works extremely well on this dataset and extracts more knowledge from it. PMID:26221133

  20. SDSS-IV eBOSS emission-line galaxy pilot survey

    DOE PAGES

    Comparat, J.; Delubac, T.; Jouvel, S.; ...

    2016-08-09

    The Sloan Digital Sky Survey IV extended Baryon Oscillation Spectroscopic Survey (SDSS-IV/eBOSS) will observe 195,000 emission-line galaxies (ELGs) to measure the baryon acoustic oscillation (BAO) standard ruler at redshift 0.9. To test different ELG selection algorithms, 9,000 spectra were observed with the SDSS spectrograph as a pilot survey based on data from several imaging surveys. First, using visual inspection and redshift quality flags, we show that the automated spectroscopic redshifts assigned by the pipeline meet the quality requirements for a reliable BAO measurement. We also show the correlations between sky emission, signal-to-noise ratio in the emission lines, and redshift error. Then we provide a detailed description of each target selection algorithm we tested and compare them with the requirements of the eBOSS experiment. As a result, we provide reliable redshift distributions for the different target selection schemes we tested. Lastly, we determine the target selection algorithm best suited for application to DECam photometry because it fulfills the eBOSS survey efficiency requirements.

  1. Explicit symplectic algorithms based on generating functions for charged particle dynamics.

    PubMed

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits their application to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x,p) = p_i f(x) or H(x,p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superior conservation properties and efficiency.
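
    For contrast with the product-separable case treated in the paper, here is the familiar explicit symplectic update for a sum-separable Hamiltonian H(x,p) = p^2/2 + V(x); it is the standard baseline the authors extend, not their generating-function construction.

        import numpy as np

        def symplectic_euler(x0, p0, grad_V, dt, n_steps):
            """First-order explicit symplectic integrator for
            H(x, p) = p**2 / 2 + V(x)."""
            x, p = float(x0), float(p0)
            traj = [(x, p)]
            for _ in range(n_steps):
                p -= dt * grad_V(x)   # kick:  p <- p - dt * V'(x)
                x += dt * p           # drift: x <- x + dt * p
                traj.append((x, p))
            return np.array(traj)

        # Harmonic oscillator V(x) = x**2 / 2: energy stays bounded over long runs.
        orbit = symplectic_euler(1.0, 0.0, lambda x: x, dt=0.1, n_steps=10_000)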

  2. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits their application to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x,p) = p_i f(x) or H(x,p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superior conservation properties and efficiency.

  3. Evaluation of GMI and PMI diffeomorphic‐based demons algorithms for aligning PET and CT Images

    PubMed Central

    Yang, Juan; Zhang, You; Yin, Yong

    2015-01-01

    Fusion of the anatomic information in computed tomography (CT) and the functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, design of radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole-body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporates the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that respiratory motion and organ peristalsis in PET/CT esophageal images cannot be neglected, even though a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images. PMID:26218993

  4. Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images.

    PubMed

    Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong

    2015-07-08

    Fusion of the anatomic information in computed tomography (CT) and the functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, design of radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole-body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporates the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that respiratory motion and organ peristalsis in PET/CT esophageal images cannot be neglected, even though a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images.

  5. Informed Source Separation: A Bayesian Tutorial

    NASA Technical Reports Server (NTRS)

    Knuth, Kevin H.

    2005-01-01

    Source separation problems are ubiquitous in the physical sciences; any situation where signals are superimposed calls for source separation to estimate the original signals. In this tutorial I discuss the Bayesian approach to the source separation problem. This approach has a specific advantage: it requires the designer to explicitly describe the signal model, along with any other information or assumptions that bear on the problem. This leads naturally to the idea of informed source separation, where the algorithm design incorporates relevant information about the specific problem. This approach promises to enable researchers to design their own high-quality algorithms that are specifically tailored to the problem at hand.
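
    In generic notation (not necessarily the author's exact symbols), the Bayesian formulation for a linear mixing model reads:

        \mathbf{x} = \mathbf{A}\,\mathbf{s} + \mathbf{n}, \qquad
        p(\mathbf{s}, \mathbf{A} \mid \mathbf{x}) \;\propto\; p(\mathbf{x} \mid \mathbf{A}, \mathbf{s})\, p(\mathbf{A})\, p(\mathbf{s}),

    where x denotes the recorded mixtures, A the mixing matrix, s the sources, and n the noise. "Informing" the separation amounts to encoding problem-specific knowledge in the priors p(s) and p(A); ICA, for instance, follows from assuming independent, non-Gaussian source priors.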

  6. PTM Along Track Algorithm to Maintain Spacing During Same Direction Pair-Wise Trajectory Management Operations

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.

    2015-01-01

    Pair-wise Trajectory Management (PTM) is a cockpit-based delegated-responsibility separation standard. When an air traffic service provider issues a PTM clearance to an aircraft and the flight crew accepts it, the flight crew maintains spacing and separation from a designated aircraft. A PTM along-track algorithm receives state information from the designated aircraft and from the own-ship and produces speed guidance that the flight crew uses to maintain spacing and separation.
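
    The record does not specify the guidance law, but a hypothetical proportional along-track scheme illustrates the kind of computation involved; the function name, gain, and speed limits below are invented for illustration only.

        def ptm_speed_guidance(own_pos, tgt_pos, tgt_speed,
                               desired_spacing, gain=5.0,
                               min_speed=420.0, max_speed=500.0):
            """Hypothetical speed guidance: positions are along-path
            distances (nmi), speeds in knots, gain in kt per nmi of
            spacing error."""
            spacing = tgt_pos - own_pos          # current in-trail spacing
            error = spacing - desired_spacing    # > 0 means too far behind
            cmd = tgt_speed + gain * error       # close (or open) the gap
            return max(min_speed, min(max_speed, cmd))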

  7. HEAVY DUTY DIESEL VEHICLE LOAD ESTIMATION: DEVELOPMENT OF VEHICLE ACTIVITY OPTIMIZATION ALGORITHM

    EPA Science Inventory

    The Heavy-Duty Vehicle Modal Emission Model (HDDV-MEM) developed by the Georgia Institute of Technology (Georgia Tech) has the capability to model link-specific second-by-second emissions using speed/acceleration matrices. To estimate emissions, engine power demand calculated usin...

  8. Comparison of algorithms for computing the two-dimensional discrete Hartley transform

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Burton, John C.; Miller, Keith W.

    1989-01-01

    Three methods for computing the two-dimensional discrete Hartley transform are described. Two of them employ a separable transform; the third, the vector-radix algorithm, does not require separability. In-place computation of the vector-radix method is described. Operation counts and execution times indicate that the vector-radix method is fastest.
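
    A sketch of the row-column (separable) approach in Python, using the identity DHT(x) = Re(FFT(x)) - Im(FFT(x)) for real input; note that applying the 1-D transform along rows and then columns yields the separable cas-cas transform, which the row-column algorithms relate to the true 2-D DHT by a simple recombination step not shown here.

        import numpy as np

        def dht1d(a, axis=-1):
            """1-D discrete Hartley transform via the FFT identity
            H = Re(F) - Im(F), valid for real input."""
            F = np.fft.fft(a, axis=axis)
            return F.real - F.imag

        def separable_hartley(img):
            """Row-column application of the 1-D DHT (the cas-cas transform)."""
            return dht1d(dht1d(img, axis=1), axis=0)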

  9. Neuromimetic Sound Representation for Percept Detection and Manipulation

    NASA Astrophysics Data System (ADS)

    Zotkin, Dmitry N.; Chi, Taishih; Shamma, Shihab A.; Duraiswami, Ramani

    2005-12-01

    The acoustic wave received at the ears is processed by the human auditory system to separate different sounds along the intensity, pitch, and timbre dimensions. Conventional Fourier-based signal processing, while endowed with fast algorithms, cannot easily represent a signal along these attributes. In this paper, we discuss the creation of maximally separable sounds in auditory user interfaces and use a recently proposed cortical sound representation, which performs a biomimetic decomposition of an acoustic signal, to represent and manipulate sound for this purpose. We briefly overview algorithms for obtaining, manipulating, and inverting a cortical representation of a sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are also used to create the sound of an instrument between a "guitar" and a "trumpet." Excellent sound quality can be achieved if processing time is not a concern, and intelligible signals can be reconstructed in reasonable processing time (about ten seconds of computation for one second of sampled signal). Work on bringing the algorithms into the real-time processing domain is ongoing.

  10. A Criteria Standard for Conflict Resolution: A Vision for Guaranteeing the Safety of Self-Separation in NextGen

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Butler, Ricky; Narkawicz, Anthony; Maddalon, Jeffrey; Hagen, George

    2010-01-01

    Distributed approaches for conflict resolution rely on analyzing the behavior of each aircraft to ensure that system-wide safety properties are maintained. This paper presents the criteria method, which increases the quality and efficiency of a safety assurance analysis for distributed air traffic concepts. The criteria standard is shown to provide two key safety properties: safe separation when only one aircraft maneuvers and safe separation when both aircraft maneuver at the same time. This approach is complemented with strong guarantees of correct operation through formal verification. To show that an algorithm is correct, i.e., that it always meets its specified safety property, one need only show that the algorithm satisfies the criteria. Once this is done, the algorithm inherits the safety properties of the criteria. An important consequence of this approach is that there is no requirement that both aircraft execute the same conflict resolution algorithm. Therefore, the criteria approach allows different avionics manufacturers or even different airlines to use different algorithms, each optimized according to their own proprietary concerns.

  11. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localization is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and the Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts, since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
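
    A minimal hyperbolic least-squares (HLS) sketch in Python, using scipy's trust-region least-squares solver in place of the hand-coded Newton-Raphson iteration described above; the square sensor layout and noise-free TDoAs are illustrative assumptions.

        import numpy as np
        from scipy.optimize import least_squares

        C = 3.0e8  # propagation speed (m/s); use the medium's value in practice

        def locate_tdoa(sensors, tdoas, x0):
            """sensors: (n, 2) known positions; tdoas[i]: arrival-time
            difference between sensor i+1 and reference sensor 0."""
            def residuals(x):
                d = np.linalg.norm(sensors - x, axis=1)
                return (d[1:] - d[0]) - C * np.asarray(tdoas)
            return least_squares(residuals, x0).x

        sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        true = np.array([3.0, 2.0])
        d = np.linalg.norm(sensors - true, axis=1)
        print(locate_tdoa(sensors, (d[1:] - d[0]) / C, x0=np.array([5.0, 5.0])))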

  12. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localization is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and the Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts, since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565

  13. Advanced coal cleaning meets acid rain emission limits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boron, D.J.; Matoney, J.P.; Albrecht, M.C.

    1987-03-01

    The following processes were selected for study: fine-coal heavy-medium cyclone separation/flotation, advanced flotation, Dow true heavy liquid separation, Advanced Energy Dynamics (AED) electrostatic separation, and National Research Council of Canada oil agglomeration. The advanced coal cleaning study was performed for the State of New York to investigate methods of using high-sulfur coal in view of anticipated lower SO2 emission limits.

  14. 40 CFR 61.347 - Standards: Oil-water separators.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Title 40, Protection of Environment, § 61.347 Standards: Oil-water separators (ENVIRONMENTAL PROTECTION AGENCY, AIR PROGRAMS, NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS, National Emission Standard for Benzene...)

  15. Unweighted least squares phase unwrapping by means of multigrid techniques

    NASA Astrophysics Data System (ADS)

    Pritt, Mark D.

    1995-11-01

    We present a multigrid algorithm for unweighted least squares phase unwrapping. This algorithm applies Gauss-Seidel relaxation schemes to solve the Poisson equation on smaller, coarser grids and transfers the intermediate results to the finer grids. This approach forms the basis of our multigrid algorithm for weighted least squares phase unwrapping, which is described in a separate paper. The key idea of our multigrid approach is to maintain the partial derivatives of the phase data in separate arrays and to correct these derivatives at the boundaries of the coarser grids, which maintains the boundary conditions necessary for rapid convergence to the correct solution. Although the multigrid algorithm is iterative, we demonstrate that it is nearly as fast as the direct Fourier-based method. We also describe how to parallelize the algorithm for execution on a distributed-memory parallel processor or a network cluster of workstations.
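
    The single-grid relaxation at the heart of the method can be sketched as follows in Python; this shows only the Gauss-Seidel sweeps on the finest grid, with the driving function built from wrapped phase differences, and omits the coarse-grid transfers that give the multigrid algorithm its speed.

        import numpy as np

        def wrap(a):
            """Wrap values into (-pi, pi]."""
            return np.angle(np.exp(1j * a))

        def gauss_seidel_unwrap(psi, n_sweeps=500):
            """Unweighted least-squares unwrapping: relax the discrete
            Poisson equation whose source is the divergence of the
            wrapped gradients of the measured phase psi."""
            dx = wrap(np.diff(psi, axis=0))
            dy = wrap(np.diff(psi, axis=1))
            rho = np.zeros_like(psi)
            rho[:-1, :] += dx; rho[1:, :] -= dx
            rho[:, :-1] += dy; rho[:, 1:] -= dy
            phi = psi.copy()
            n, m = phi.shape
            for _ in range(n_sweeps):
                for i in range(n):
                    for j in range(m):
                        nb = []                       # available neighbours
                        if i > 0: nb.append(phi[i - 1, j])
                        if i < n - 1: nb.append(phi[i + 1, j])
                        if j > 0: nb.append(phi[i, j - 1])
                        if j < m - 1: nb.append(phi[i, j + 1])
                        phi[i, j] = (sum(nb) - rho[i, j]) / len(nb)
            return phi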

  16. Biogenic volatile organic compounds (BVOCs) emissions from Abies alba in a French forest.

    PubMed

    Moukhtar, S; Couret, C; Rouil, L; Simon, V

    2006-02-01

    Air quality studies need to be based on accurate and reliable data, particularly in the field of emissions. Biogenic emissions from forests, crops, and grasslands are now considered major contributors to photochemical processes. Unfortunately, depending on the type of vegetation, these emissions are often not reliably characterized. For example, although the silver fir (Abies alba) is a very widespread conifer in France and Europe, its standard emission rate is not available in the literature. This study investigates isoprene and monoterpene emission from A. alba in France, measured during fieldwork in the Fossé Rhénan from May to June 2003 using a dynamic cuvette method. Limonene was the predominant monoterpene emitted, followed by camphene, alpha-pinene and eucalyptol. No isoprene emission was detected. The four monoterpenes measured behaved differently with micrometeorological conditions: emissions of limonene, alpha-pinene and camphene were temperature-dependent, while eucalyptol emissions were temperature- and light-dependent. Biogenic volatile organic compound emissions were then modeled using information gathered during the field study. Emissions of the three temperature-dependent monoterpenes were modeled with the monoterpene algorithm of Tingey et al. (1980) [Tingey D, Manning M, Grothaus L, Burns W. Influence of light and temperature on monoterpene emission rates from slash pine. Plant Physiol 1980;65:797-801.], while the isoprene algorithm of Guenther et al. ([Guenther, A., Monson, R., Fall, R., 1991. Isoprene and monoterpene emission rate variability: observations with eucalyptus and emission rate algorithm development. J Geophys Res 26A: 10799-10808.]; [Guenther, A., Zimmerman, P., Harley, P., Monson, R., Fall, R., 1993. Isoprene and monoterpene emission rate variability: model evaluation and sensitivity analysis. J Geophys Res 98D: 12609-12617.]) was used for eucalyptol. With these methods, simulation results and observations agreed fairly well. The standard emission rate (303 K) and beta coefficient averaged over limonene, camphene and alpha-pinene were 0.63 microg gdw-1 h-1 and 0.06 K-1, respectively. For eucalyptol, the standard emission rate (T = 303 K and PAR = 1000 micromol m-2 s-1) was 0.26 microg gdw-1 h-1. This classifies A. alba as a weak monoterpene emitter.
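
    As a worked example of the temperature-only algorithm, using the averaged values just quoted (E_s = 0.63 microg gdw-1 h-1 at T_s = 303 K, beta = 0.06 K-1):

        import math

        def monoterpene_emission(T, E_s=0.63, beta=0.06, T_s=303.0):
            """Tingey-type temperature algorithm: E = E_s * exp(beta * (T - T_s)).
            Defaults are the averaged limonene/camphene/alpha-pinene values
            reported above."""
            return E_s * math.exp(beta * (T - T_s))

        # A canopy 10 K cooler than the 303 K standard roughly halves the rate:
        print(monoterpene_emission(293.0))   # ~0.35 microg gdw-1 h-1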

  17. Development of PET projection data correction algorithm

    NASA Astrophysics Data System (ADS)

    Bazhanov, P. V.; Kotina, E. D.

    2017-12-01

    Positron emission tomography (PET) is a modern nuclear medicine method used to examine metabolism and the function of internal organs, and it allows conditions to be diagnosed at an early stage. Mathematical algorithms are widely used not only for image reconstruction but also for PET data correction. In this paper, implementations of random-coincidence and scatter correction algorithms are considered, together with an algorithm for modeling PET projection data acquisition that is used to verify the corrections.

  18. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    NASA Astrophysics Data System (ADS)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of greenhouse gas (GHG) emissions in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed from the tax cost, penalty cost and discount cost of GHG emissions in the AS/RS. A two-stage algorithm, the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle reduces the search space by fixing a threshold value; in the second stage, the clonal selection principle generates the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.

  19. A hybrid algorithm for the segmentation of books in libraries

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Lei, Liang

    2016-05-01

    This paper proposes an algorithm for book segmentation in bookshelf images. The algorithm consists of three parts. The first is pre-processing, which aims to eliminate or reduce the effects of image noise and illumination conditions. The second is near-horizontal line detection based on the Canny edge detector, which separates a bookshelf image into multiple sub-images so that each sub-image contains an individual shelf. The last part is book segmentation: in each shelf image, near-vertical lines are detected and used to segment the books. The proposed algorithm was tested on bookshelf images taken in the OPIE library at MTU, and the experimental results demonstrate good performance.
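
    A plausible OpenCV sketch of the shelf-line detection stage; the Canny and Hough thresholds below are illustrative assumptions, not the paper's settings.

        import cv2
        import numpy as np

        def detect_shelf_lines(gray, angle_tol_deg=10.0):
            """Find near-horizontal lines that likely mark shelf edges."""
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                    threshold=100,
                                    minLineLength=gray.shape[1] // 2,
                                    maxLineGap=20)
            shelves = []
            if lines is not None:
                for x1, y1, x2, y2 in lines[:, 0]:
                    angle = abs(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
                    if angle < angle_tol_deg or angle > 180.0 - angle_tol_deg:
                        shelves.append((x1, y1, x2, y2))
            return shelves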

  20. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step in most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold tuning, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization (EM). The algorithm rests on the assumption that the point cloud can be described as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a Gaussian mixture. EM is applied to compute maximum-likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed, and after several iterations each point is labelled with the component of larger likelihood. Intensity information is also used to refine the filtering results obtained with the EM method. The proposed algorithm was tested on two datasets used in practice, and the experimental results show that it filters non-ground points effectively. For quantitative evaluation on the ISPRS benchmark dataset, the proposed algorithm achieves a total error of 4.48%, much lower than most of the eight classical filtering algorithms reported by the ISPRS.
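
    The E and M steps for the simplest version of this idea, a two-component 1-D Gaussian mixture over point elevations, can be sketched as follows; the initialization and the elevation-only feature are assumptions of this sketch (the paper additionally exploits intensity).

        import numpy as np

        def em_two_gaussians(z, n_iter=100):
            """EM for a 1-D two-component Gaussian mixture (ground vs. objects)."""
            mu = np.percentile(z, [25, 75]).astype(float)   # crude init
            var = np.array([z.var(), z.var()])
            w = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: responsibility of each component for each point.
                pdf = (w / np.sqrt(2.0 * np.pi * var) *
                       np.exp(-0.5 * (z[:, None] - mu) ** 2 / var))
                r = pdf / pdf.sum(axis=1, keepdims=True)
                # M-step: maximum-likelihood parameter updates.
                n = r.sum(axis=0)
                w, mu = n / len(z), (r * z[:, None]).sum(axis=0) / n
                var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / n
            return w, mu, var, r.argmax(axis=1)   # label = likelier component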

  1. A hierarchical word-merging algorithm with class separability measure.

    PubMed

    Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan

    2014-03-01

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.

  2. Estimating terpene and terpenoid emissions from conifer oleoresin composition

    NASA Astrophysics Data System (ADS)

    Flores, Rosa M.; Doskey, Paul V.

    2015-07-01

    The following algorithm, based on the thermodynamics of nonelectrolyte partitioning, was developed to predict emission rates of terpenes and terpenoids from specific storage sites in conifers: E_i = x_i^or · γ_i^or · p_i°, where E_i is the emission rate (μg C gdw-1 h-1), p_i° is the vapor pressure (mm Hg) of the pure liquid terpene or terpenoid, and x_i^or and γ_i^or are the mole fraction and activity coefficient (on a Raoult's law convention), respectively, of the terpene or terpenoid in the oleoresin. Activity coefficients are calculated with Hansen solubility parameters that account for dispersive, polar, and H-bonding interactions of the solutes with the oleoresin matrix. Estimates of p_i° at 25 °C and molar enthalpies of vaporization are made with the SIMPOL.1 method and are used to estimate p_i° at environmentally relevant temperatures. Estimated mixing ratios of terpenes and terpenols were comparatively higher above resin-acid-rich and monoterpene-rich oleoresins, respectively. The results indicate a greater affinity of terpenes and terpenols for the non-functionalized and carboxylic-acid-containing matrices, respectively, through dispersive and H-bonding interactions, which are expressed in the emission algorithm by the activity coefficient. The correlation between emission rates of terpenes and terpenoids measured for Pinus strobus and rates predicted with the algorithm was very good (R = 0.95). Standard errors for the range and average of monoterpene emission rates were ±6% to ±86% and ±54%, respectively, similar in magnitude to reported standard deviations of the monoterpene composition of foliar oils (±38% to ±51% and ±67%, respectively).
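
    A small numerical sketch of the algorithm; the Clausius-Clapeyron adjustment below stands in for the SIMPOL.1 vapor-pressure treatment used in the paper, and the compound values are hypothetical.

        import math

        R = 8.314  # J mol^-1 K^-1

        def emission_rate(x_or, gamma_or, p25_mmHg, dHvap_J_mol, T):
            """E_i = x_i^or * gamma_i^or * p_i(T), with p_i(T) extrapolated
            from 25 degC by Clausius-Clapeyron (a stand-in for SIMPOL.1).
            The result is proportional to the emission rate; the leaf-mass
            normalization constant is omitted."""
            p_T = p25_mmHg * math.exp(-(dHvap_J_mol / R) *
                                      (1.0 / T - 1.0 / 298.15))
            return x_or * gamma_or * p_T

        # Hypothetical monoterpene: x = 0.02, gamma = 1.5, p(25 C) = 1.5 mm Hg,
        # dHvap = 45 kJ/mol, canopy at 30 C:
        print(emission_rate(0.02, 1.5, 1.5, 45e3, 303.15))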

  3. Pig slurry acidification and separation techniques affect soil N and C turnover and N2O emissions from solid, liquid and biochar fractions.

    PubMed

    Gómez-Muñoz, B; Case, S D C; Jensen, L S

    2016-03-01

    The combined effects of pig slurry acidification, subsequent separation techniques, and biochar production from the solid fraction on N mineralisation and N2O and CO2 emissions in soil were investigated in an incubation experiment. Acidification of pig slurry increased N availability from the separated solid fractions in soil but did not affect N2O and CO2 emissions; it did, however, reduce soil N and C turnover from the liquid fraction. The use of more advanced separation techniques (flocculation and drainage > decanting centrifuge > screw press) increased N mineralisation from acidified solid fractions but also increased N2O and CO2 emissions in soil amended with the liquid fraction. Finally, biochar production from the solid fraction of pig slurry yielded a very recalcitrant material, which reduced N and C mineralisation in soil compared to the raw solid fractions.

  4. The profile algorithm for microwave delay estimation from water vapor radiometer data

    NASA Technical Reports Server (NTRS)

    Robinson, Steven E.

    1988-01-01

    A new algorithm has been developed for estimating tropospheric microwave path delays from water vapor radiometer (WVR) data; it does not require site- and weather-dependent empirical parameters to achieve accuracy better than 0.3 cm of delay. Instead of taking the conventional linear approach, the new algorithm first uses the observables with an emission model to determine an approximate form of the vertical water vapor distribution, which is then explicitly integrated to estimate the wet path delay in a second step. The intrinsic accuracy of this algorithm, excluding uncertainties caused by the radiometers and the emission model, has been examined for two-channel WVR data using path delays and corresponding simulated observables computed from archived radiosonde data. Annual rms errors for a wide range of sites average 0.18 cm in the absence of clouds, 0.22 cm in cloudy weather, and 0.19 cm overall. In clear weather, the new algorithm's accuracy is comparable to the best that can be obtained from conventional linear algorithms, while in cloudy weather it offers a 35 percent improvement.
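
    The explicit integration in the second step has the following generic form in Python; the refractivity constants are nominal literature values (Davis et al., 1985) rather than numbers from this paper, and the exponential profile is illustrative.

        import numpy as np

        K2P = 22.1    # K hPa^-1, nominal
        K3 = 3.739e5  # K^2 hPa^-1, nominal

        def zenith_wet_delay_cm(z_m, e_hPa, T_K):
            """ZWD = 1e-6 * integral(K2P*e/T + K3*e/T**2) dz (trapezoid rule)."""
            N_wet = K2P * e_hPa / T_K + K3 * e_hPa / T_K ** 2
            integral = np.sum(0.5 * (N_wet[1:] + N_wet[:-1]) * np.diff(z_m))
            return 1e-6 * integral * 100.0   # metres -> cm

        z = np.linspace(0.0, 10e3, 200)        # height grid (m)
        e = 12.0 * np.exp(-z / 2000.0)         # retrieved vapor pressure (hPa)
        T = 288.0 - 0.0065 * z                 # temperature profile (K)
        print(zenith_wet_delay_cm(z, e, T))    # of order 10 cm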

  5. A New Retrieval Algorithm for OMI NO2: Tropospheric Results and Comparisons with Measurements and Models

    NASA Technical Reports Server (NTRS)

    Swartz, W. H.; Bucesla, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P, K,; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.

    2012-01-01

    Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation, and on a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005 to present. We believe this product is valuable for evaluating chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and estimating NOx lifetimes.

  6. Multiple volatile organic compound vapor chamber testing with a frequency-agile CO2 DIAL system: field-test results

    NASA Astrophysics Data System (ADS)

    Carr, Lewis W.; Warren, Russell E.; Carlisle, Clinton B.; Carlisle, Sylvie A.; Cooper, David E.; Fletcher, Leland; Gotoff, Steven W.; Reyes, Felix

    1995-02-01

    Many of the 189 hazardous air pollutants (HAPs) listed in the Environmental Protection Agency regulations can be monitored by frequency-agile CO2 DIAL (FACD) systems. These systems can be used to survey industrial and military installations and toxic waste repositories at ranges of a few kilometers from emission sources, and FACD may become a valuable tool for detection and estimation of a wide array of HAPs. In most cases, however, several of the listed HAPs will be present simultaneously, and discriminating one HAP from another based on differences in spectral characteristics can be challenging for FACD systems. While FACD hardware is mature and capable of addressing these discrimination issues, multiple-contaminant separation algorithms still need to be developed. A one-week field test was conducted at Los Banos, California, to gather multiple-HAP data for future algorithm development. A vapor chamber was used to control the disseminated concentration of each HAP and to reduce the effects of atmospheric turbulence and of wind direction and speed. Data were collected with several chemicals injected into the vapor chamber simultaneously. The data and results from the field test are presented, and calibration issues are discussed.
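
    One standard way to attack the multi-contaminant problem, offered here only as a generic sketch (not the algorithms later developed from this data set), is non-negative least squares against a library of differential absorption cross-sections:

        import numpy as np
        from scipy.optimize import nnls

        # A[i, j]: differential absorption of HAP j at CO2 laser line i
        # (hypothetical values); b: measured two-way absorbances.
        A = np.array([[0.9, 0.2],
                      [0.5, 0.7],
                      [0.1, 0.8]])
        b = A @ np.array([2.0, 1.0])     # synthetic two-HAP mixture

        x, residual = nnls(A, b)         # non-negative concentration estimates
        print(x)                         # recovers ~[2.0, 1.0]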

  7. Sensitivity of blackbody effective emissivity to wavelength and temperature: By genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ejigu, E. K.; Liedberg, H. G.

    A variable-temperature blackbody (VTBB) is used to calibrate an infrared radiation thermometer (pyrometer). The effective emissivity (ε_eff) of a VTBB depends on temperature and wavelength as well as on the geometry of the VTBB. In the calibration process, the effective emissivity is often assumed constant within the wavelength and temperature range, but there are practical situations where its sensitivity needs to be known and a correction applied. We present a method using a genetic algorithm to investigate the sensitivity of the effective emissivity to wavelength and temperature variation. Two MATLAB programs are generated: the first models the radiance temperature calculation, and the second connects the model to the genetic algorithm optimization toolbox. The effective emissivity parameter is taken as a chromosome and optimized at each wavelength and temperature point. The difference between the contact temperature (read from a platinum resistance thermometer or liquid-in-glass thermometer) and the radiance temperature (calculated from the ε_eff values) is used as an objective function, from which merit values are calculated and best-fit ε_eff values selected. The best-fit ε_eff values obtained as a solution show how sensitive they are to temperature and wavelength variation. Uncertainty components that arise from wavelength and temperature variation are determined from the sensitivity analysis. Numerical examples are considered for illustration.
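
    A minimal real-coded GA for the same fitting task can be sketched in a few lines of Python; the radiance-temperature inversion follows from Planck's law, but the GA operators, bounds, and example numbers are illustrative assumptions (the paper itself uses the MATLAB optimization toolbox).

        import math, random

        C2 = 1.4388e-2  # second radiation constant (m K)

        def radiance_temperature(eps, T, lam):
            """Radiance temperature seen by a pyrometer at wavelength lam (m)
            viewing a cavity of effective emissivity eps at temperature T."""
            return C2 / (lam * math.log(1.0 + (math.exp(C2 / (lam * T)) - 1.0) / eps))

        def fit_eps(T_contact, T_radiance, lam, n_gen=60, pop=30):
            """Select the eps_eff whose predicted radiance temperature best
            matches the observed one."""
            merit = lambda e: abs(radiance_temperature(e, T_contact, lam) - T_radiance)
            population = [random.uniform(0.90, 1.0) for _ in range(pop)]
            for _ in range(n_gen):
                population.sort(key=merit)
                parents = population[:pop // 2]
                children = []
                while len(parents) + len(children) < pop:
                    a, b = random.sample(parents, 2)
                    child = 0.5 * (a + b) + random.gauss(0.0, 0.002)  # blend + mutate
                    children.append(min(1.0, max(0.90, child)))
                population = parents + children
            return min(population, key=merit)

        # Example: 10 um pyrometer, 500 K cavity whose true eps_eff is 0.995:
        T_rad = radiance_temperature(0.995, 500.0, 10e-6)
        print(fit_eps(500.0, T_rad, 10e-6))   # ~0.995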

  8. Exact and Heuristic Algorithms for Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Malik, Waqar A.; Jung, Yoon C.

    2016-01-01

    This paper explores the Single Runway Scheduling (SRS) problem with arrivals, departures, and crossing aircraft on the airport surface. Constraints for wake vortex separations, departure area navigation separations, and departure time window restrictions are explicitly considered. The main objective of this research is to develop exact and heuristic algorithms that can be used in real-time decision support tools for Air Traffic Control Tower (ATCT) controllers. The paper provides a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the SRS problem, but may prove unusable in a real-time environment due to large computation times for moderate-sized problems. We next propose a second algorithm that uses heuristics to restrict the search space of the DP-based algorithm. A third algorithm, based on a combination of insertion and local search (ILS) heuristics, is then presented. A simulation conducted for the east side of Dallas/Fort Worth International Airport allows comparison of the three proposed algorithms and indicates that the ILS algorithm performs favorably in its ability to find efficient solutions and in its computation times.

  9. Back-trajectory modeling of high time-resolution air measurement data to separate nearby sources

    EPA Science Inventory

    Strategies to isolate air pollution contributions from individual sources are of interest as voluntary or regulatory measures are undertaken to reduce air pollution. When different sources are located in close proximity to one another and have similar emissions, separating source emissions ...

  10. Planck 2015 results: XXII. A map of the thermal Sunyaev-Zeldovich effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghanim, N.; Arnaud, M.; Ashdown, M.

    In this article, we have constructed all-sky Compton parameter maps, y-maps, of the thermal Sunyaev-Zeldovich (tSZ) effect by applying specifically tailored component separation algorithms to the 30 to 857 GHz frequency channel maps from the Planck satellite. These reconstructed y-maps are delivered as part of the Planck 2015 release. The y-maps are characterized in terms of noise properties and residual foreground contamination, mainly thermal dust emission at large angular scales, and cosmic infrared background and extragalactic point sources at small angular scales. Specific masks are defined to minimize foreground residuals and systematics. Using these masks, we compute the y-map angular power spectrum and higher order statistics. From these we conclude that the y-map is dominated by tSZ signal in the multipole range 20 ...

  11. Planck 2015 results: XXII. A map of the thermal Sunyaev-Zeldovich effect

    DOE PAGES

    Aghanim, N.; Arnaud, M.; Ashdown, M.; ...

    2016-09-20

    In this article, we have constructed all-sky Compton parameter maps, y-maps, of the thermal Sunyaev-Zeldovich (tSZ) effect by applying specifically tailored component separation algorithms to the 30 to 857 GHz frequency channel maps from the Planck satellite. These reconstructed y-maps are delivered as part of the Planck 2015 release. The y-maps are characterized in terms of noise properties and residual foreground contamination, mainly thermal dust emission at large angular scales, and cosmic infrared background and extragalactic point sources at small angular scales. Specific masks are defined to minimize foreground residuals and systematics. Using these masks, we compute the y-map angular power spectrum and higher order statistics. From these we conclude that the y-map is dominated by tSZ signal in the multipole range 20 ...

  12. Particulate emission abatement for Krakow boiler houses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wysk, R.

    1995-12-31

    Among the many strategies for improving air quality in Krakow, one possible method is to adapt new and improved emission control technology. This project focuses on such a strategy. In order to reduce dust emissions from coal-fueled boilers, a new device called a Core Separator has been introduced in several boiler house applications. This advanced technology has been successfully demonstrated in Poland and several commercial units are now in operation. Particulate emissions from the Core Separator are typically 3 to 5 times lower than those from the best cyclone collectors. It can easily meet the new standard for dust emissions which will be in effect in Poland after 1997. The Core Separator is a completely inertial collector and is based on a unique recirculation method. It can effectively remove dust particles below 10 microns in diameter, the so-called PM-10 emissions. Its performance approaches that of fabric filters, but without the attendant cost and maintenance. It is well-suited to the industrial size boilers located in Krakow. Core Separators are now being marketed and sold by EcoInstal, one of the leading environmental firms in Poland, through a cooperative agreement with LSR Technologies.

  13. Sensitive dual color in vivo bioluminescence imaging using a new red codon optimized firefly luciferase and a green click beetle luciferase.

    PubMed

    Mezzanotte, Laura; Que, Ivo; Kaijzel, Eric; Branchini, Bruce; Roda, Aldo; Löwik, Clemens

    2011-04-22

    Despite a plethora of bioluminescent reporter genes being cloned and used for cell assays and molecular imaging purposes, the simultaneous monitoring of multiple events in small animals is still challenging. This is partly attributable to the lack of optimization of cell reporter gene expression, as well as too much spectral overlap between the color-coupled reporter genes. A new red-emitting codon-optimized luciferase reporter gene mutant of Photinus pyralis, Ppy RE8, has been developed and used in combination with the green click beetle luciferase, CBG99. Human embryonic kidney cells (HEK293) were transfected with vectors that expressed the red Ppy RE8 and green CBG99 luciferases. Populations of red- and green-emitting cells were mixed in different ratios. After addition of the shared single substrate, D-luciferin, bioluminescent (BL) signals were imaged with an ultrasensitive cooled CCD camera using a series of band-pass filters (20 nm). Spectral unmixing algorithms were applied to the images, where good separation of signals was observed. Furthermore, HEK293 cells that expressed the two luciferases were injected at different depths in the animals. Spectrally separated images and quantification of the dual BL signals in a mixed population of cells were achieved when cells were injected either subcutaneously or directly into the prostate. We report here the re-engineering of different luciferase genes for in vitro and in vivo dual-color imaging applications to address the technical issues of using dual luciferases for imaging. Compared with previously used dual assays, our study demonstrated enhanced sensitivity combined with spatially separate BL spectral emissions using a suitable spectral unmixing algorithm. This new D-luciferin-dependent reporter gene couplet opens up the possibility of more accurate quantitative gene expression studies in vivo by simultaneously monitoring two events in real time.
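
    Per pixel, the unmixing reduces to a small linear least-squares problem; a sketch with hypothetical filter-integrated signatures for the red and green reporters:

        import numpy as np

        def unmix_dual_luciferase(S, measured):
            """S[i, j]: signal of pure reporter j through band-pass filter i
            (pre-measured; values here are hypothetical). `measured` holds
            one pixel's filtered intensities."""
            coeffs, *_ = np.linalg.lstsq(S, measured, rcond=None)
            return np.clip(coeffs, 0.0, None)   # negative abundances are unphysical

        S = np.array([[0.05, 0.60], [0.10, 0.80], [0.30, 0.40],
                      [0.60, 0.15], [0.80, 0.05], [0.40, 0.02]])
        pixel = S @ np.array([1.5, 0.5])          # synthetic mixed pixel
        print(unmix_dual_luciferase(S, pixel))    # ~[1.5, 0.5]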

  14. Sensitive Dual Color In Vivo Bioluminescence Imaging Using a New Red Codon Optimized Firefly Luciferase and a Green Click Beetle Luciferase

    PubMed Central

    Mezzanotte, Laura; Que, Ivo; Kaijzel, Eric; Branchini, Bruce; Roda, Aldo; Löwik, Clemens

    2011-01-01

    Background Despite a plethora of bioluminescent reporter genes being cloned and used for cell assays and molecular imaging purposes, the simultaneous monitoring of multiple events in small animals is still challenging. This is partly attributable to the lack of optimization of cell reporter gene expression as well as excessive spectral overlap of the color-coupled reporter genes. A new red-emitting codon-optimized luciferase reporter gene mutant of Photinus pyralis, Ppy RE8, has been developed and used in combination with the green click beetle luciferase, CBG99. Principal Findings Human embryonic kidney cells (HEK293) were transfected with vectors that expressed the red Ppy RE8 and green CBG99 luciferases. Populations of red- and green-emitting cells were mixed in different ratios. After addition of the shared single substrate, D-luciferin, bioluminescent (BL) signals were imaged with an ultrasensitive cooled CCD camera using a series of band pass filters (20 nm). Spectral unmixing algorithms were applied to the images, where good separation of signals was observed. Furthermore, HEK293 cells that expressed the two luciferases were injected at different depths in the animals. Spectrally separate images and quantification of the dual BL signals in a mixed population of cells were achieved when cells were either injected subcutaneously or directly into the prostate. Significance We report here the re-engineering of different luciferase genes for in vitro and in vivo dual color imaging applications to address the technical issues of using dual luciferases for imaging. Compared with previously used dual assays, our study demonstrated enhanced sensitivity combined with spatially separate BL spectral emissions using a suitable spectral unmixing algorithm. This new D-luciferin-dependent reporter gene couplet opens up the possibility of more accurate quantitative gene expression studies in vivo in the future by simultaneously monitoring two events in real time. PMID:21544210

  15. Evaluation of a global algorithm for wavefront reconstruction for Shack-Hartmann wave-front sensors and thick fundus reflectors.

    PubMed

    Liu, Tao; Thibos, Larry; Marin, Gildas; Hernandez, Martha

    2014-01-01

    Conventional aberration analysis by a Shack-Hartmann aberrometer is based on the implicit assumption that an injected probe beam reflects from a single fundus layer. In fact, the biological fundus is a thick reflector, and therefore conventional analysis may produce errors of unknown magnitude. We developed a novel computational method to investigate this potential failure of conventional analysis. The Shack-Hartmann wavefront sensor was simulated by computer software and used to recover, by two methods, the known wavefront aberrations expected from a population of normally aberrated human eyes with bi-layer fundus reflection. The conventional method determines the centroid of each spot in the SH data image, from which wavefront slopes are computed for least-squares fitting with derivatives of Zernike polynomials. The novel 'global' method iteratively adjusted the aberration coefficients derived from conventional centroid analysis until the SH image, when treated as a unitary picture, optimally matched the original data image. Both methods recovered higher-order aberrations accurately and precisely, but only the global algorithm correctly recovered the defocus coefficients associated with each layer of fundus reflection. The global algorithm accurately recovered Zernike coefficients for mean defocus and bi-layer separation with maximum error <0.1%. The global algorithm was robust for bi-layer separation up to 2 dioptres for a typical SH wavefront sensor design. For 100 randomly generated test wavefronts with 0.7 D axial separation, the retrieved mean axial separation was 0.70 D with a standard deviation (S.D.) of 0.002 D. Sufficient information is contained in SH data images to measure the dioptric thickness of dual-layer fundus reflection. The global algorithm is superior since it successfully recovered the focus value associated with both fundus layers even when their separation was too small to produce clearly separated spots, whereas the conventional analysis misrepresents the defocus component of the wavefront aberration as the mean defocus for the two reflectors. Our novel global algorithm is a promising method for SH data image analysis in clinical and visual optics research for human and animal eyes. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
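
    As an illustration of the conventional analysis step described above, the least-squares fit of measured slopes to Zernike-polynomial derivatives can be sketched as follows; this is a minimal sketch in which the derivative matrices are assumed precomputed by the caller, and the paper's global image-matching refinement is not reproduced.

      import numpy as np

      def fit_zernike_from_slopes(sx, sy, dZdx, dZdy):
          # sx, sy: measured wavefront slopes at each lenslet, shape (n,)
          # dZdx, dZdy: derivatives of each Zernike mode at the lenslet
          # centres, shape (n, n_modes) -- assumed precomputed
          A = np.vstack([dZdx, dZdy])        # (2n, n_modes) design matrix
          b = np.concatenate([sx, sy])       # (2n,) slope measurements
          coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
          return coeffs                      # estimated Zernike coefficients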

  16. An AZTEC/ASTE 1.1mm Survey Of The Young, Dense, Nearby Star-forming Region, Serpens South

    NASA Astrophysics Data System (ADS)

    Gutermuth, Robert A.; Bourke, T.; Matthews, B.; Dunham, M.; Allen, L.; Myers, P.; Jorgensen, J.; Wilson, G.; Yun, M.; Hughes, D.; Aretxaga, I.; Ryohei, K.; Kotaro, K.; Scott, K.; Austermann, J.

    2010-01-01

    The Serpens South embedded cluster, recently discovered by the Spitzer Gould Belt Legacy Survey, stands out among over 100 clusters and groups surveyed by Spitzer as the densest (>430 pc^-2) and youngest (77% Class I protostars) clustered star-forming region known within the nearest 400 pc. In order to better characterize the primordial structure of the cluster's natal cloud, we have made a 1.1 mm dust continuum map of Serpens South with the AzTEC instrument on the 10 m Atacama Submillimeter Telescope Experiment (ASTE). The projected morphology of the emission is best described by a central dense hub with numerous 0.5 pc-long filaments radiating away from the center. Large-scale flux features that are typically removed by modern sky subtraction techniques are recovered using a novel iterative flux retrieval algorithm. Using standard assumptions (emissivity, dust-to-gas ratio, and T = 10 K), we compute the total mass of the Serpens South cloud core and filaments to be 480 Msun. We construct separate large- and small-scale structure maps via wavelet decomposition, and apply a watershed structure isolation technique separately to each map in order to isolate all empirically observed substructure. This technique confirms our qualitative observation that the filaments north of the hub are notably less clumpy than those to the south, while the total mass is similar between the two regions. Both regions have relatively small numbers of young stellar objects; thus we speculate that we have caught this cloud in the act of fragmenting into pre-stellar cores.
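
    For context, a mass like the one quoted above follows from the standard optically thin dust emission relation M = F_nu d^2 / (kappa_nu B_nu(T)). A minimal sketch with T = 10 K as in the abstract; the opacity and distance defaults below are illustrative values, not taken from the text.

      import numpy as np

      h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16   # Planck, c, Boltzmann (cgs)

      def planck(nu, T):
          # Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1
          return 2.0*h*nu**3/c**2 / np.expm1(h*nu/(k_B*T))

      def dust_mass_msun(F_jy, d_pc, T=10.0, kappa=0.0114):
          # kappa: dust+gas opacity at 1.1 mm in cm^2 g^-1 (example value)
          nu = c / 0.11                              # 1.1 mm -> Hz
          F = F_jy * 1.0e-23                         # Jy -> cgs flux density
          d = d_pc * 3.086e18                        # pc -> cm
          return F * d * d / (kappa * planck(nu, T)) / 1.989e33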

  17. [An improved algorithm for electrohysterogram envelope extraction].

    PubMed

    Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia

    2017-02-01

    Extraction of the uterine contraction signal from the abdominal uterine electromyogram (EMG) is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. Firstly, in our experiment, a zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess the performance of the algorithm, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones in eliminating impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second comparison algorithm. Thus the new method is reliable and effective.
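
    A minimal sketch of the sliding-window RMS step named above; the zero-crossing burst separation and the dual-window refinement are not reproduced, and the window length is an assumed parameter.

      import numpy as np

      def rms_envelope(x, win):
          # mean of x^2 over a sliding window, then square root
          x = np.asarray(x, float)
          x2 = np.convolve(x**2, np.ones(win)/win, mode='same')
          return np.sqrt(x2)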

  18. 40 CFR 61.347 - Standards: Oil-water separators.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    40 CFR Protection of Environment, Vol. 9 (2014-07-01): § 61.347 Standards: Oil-water separators. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS, National Emission Standard for Benzene Waste Operations.

  19. Detection of Methamphetamine and Morphine in Urine and Saliva Using Excitation-Emission Matrix Fluorescence and a Second-Order Calibration Algorithm

    NASA Astrophysics Data System (ADS)

    Xu, B. Y.; Ye, Y.; Liao, L. C.

    2016-07-01

    A new method was developed to determine the methamphetamine and morphine concentrations in urine and saliva based on excitation-emission matrix fluorescence coupled to a second-order calibration algorithm. In the case of single-drug abuse, the results showed that the average recoveries of methamphetamine and morphine were 95.3 and 96.7% in urine samples, respectively, and 98.1 and 106.2% in saliva samples, respectively. The relative errors were all below 5%. The simultaneous determination of methamphetamine and morphine in urine using two second-order algorithms was also investigated. Satisfactory results were obtained with a self-weighted alternating trilinear decomposition algorithm. The root-mean-square errors of the predictions were 0.540 and 0.0382 μg/mL for methamphetamine and morphine, respectively. The limits of detection of the proposed methods were very low and sufficient for studying methamphetamine and morphine in urine.
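
    The self-weighted alternating trilinear decomposition used above is not spelled out in the record, but a plain alternating-least-squares trilinear (PARAFAC) fit conveys the essence of second-order calibration on an excitation x emission x sample data cube. This is an illustrative stand-in, not the authors' algorithm.

      import numpy as np

      def kr(U, V):
          # column-wise Khatri-Rao product; rows indexed by (u, v), v fastest
          return np.einsum('ur,vr->uvr', U, V).reshape(-1, U.shape[1])

      def parafac_als(X, r, n_iter=200):
          # fit X[i, j, k] ~ sum_r A[i, r] * B[j, r] * C[k, r]
          I, J, K = X.shape
          rng = np.random.default_rng(0)
          A, B, C = (rng.standard_normal((n, r)) for n in (I, J, K))
          for _ in range(n_iter):
              A = np.linalg.lstsq(kr(B, C), X.reshape(I, -1).T, rcond=None)[0].T
              B = np.linalg.lstsq(kr(A, C), X.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
              C = np.linalg.lstsq(kr(A, B), X.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
          return A, B, C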

  20. Warm gas towards young stellar objects in Corona Australis. Herschel/PACS observations from the DIGIT key programme

    NASA Astrophysics Data System (ADS)

    Lindberg, Johan E.; Jørgensen, Jes K.; Green, Joel D.; Herczeg, Gregory J.; Dionatos, Odysseas; Evans, Neal J.; Karska, Agata; Wampfler, Susanne F.

    2014-05-01

    Context. The effects of external irradiation on the chemistry and physics in the protostellar envelope around low-mass young stellar objects are poorly understood. The Corona Australis star-forming region contains the R CrA dark cloud, comprising several low-mass protostellar cores irradiated by an intermediate-mass young star. Aims: We study the effects of the irradiation coming from the young luminous Herbig Be star R CrA on the warm gas and dust in a group of low-mass young stellar objects. Methods: Herschel/PACS far-infrared datacubes of two low-mass star-forming regions in the R CrA dark cloud are presented. The distributions of CO, OH, H2O, [C ii], [O i], and continuum emission are investigated. We have developed a deconvolution algorithm which we use to deconvolve the maps, separating the point-source emission from the extended emission. We also construct rotational diagrams of the molecular species. Results: By deconvolution of the Herschel data, we find large-scale (several thousand AU) dust continuum and spectral line emission not associated with the point sources. Similar rotational temperatures are found for the warm CO (282 ± 4 K), hot CO (890 ± 84 K), OH (79 ± 4 K), and H2O (197 ± 7 K) emission in the point sources and the extended emission. The rotational temperatures are also similar to those found in other more isolated cores. The extended dust continuum emission is found in two ridges similar in extent and temperature to molecular millimetre emission, indicative of external heating from the Herbig Be star R CrA. Conclusions: Our results show that nearby luminous stars do not increase the molecular excitation temperatures of the warm gas around young stellar objects (YSOs). However, the emission from photodissociation products of H2O, such as OH and O, is enhanced in the warm gas associated with these protostars and their surroundings compared to similar objects not subjected to external irradiation. Table 9 and appendices are available in electronic form at http://www.aanda.org
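
    A rotational diagram of the kind constructed above reduces, for optically thin LTE emission, to a straight-line fit of ln(N_u/g_u) against upper-level energy, whose slope is -1/T_rot. A minimal sketch with placeholder inputs:

      import numpy as np

      def rotational_temperature(N_u, g_u, E_u):
          # N_u: upper-level column densities; g_u: statistical weights;
          # E_u: upper-level energies in kelvin
          y = np.log(np.asarray(N_u, float) / np.asarray(g_u, float))
          slope, intercept = np.polyfit(np.asarray(E_u, float), y, 1)
          return -1.0/slope, intercept       # T_rot in K, ln(N_tot/Q)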

  1. High-resolution seismic data regularization and wavefield separation

    NASA Astrophysics Data System (ADS)

    Cao, Aimin; Stump, Brian; DeShon, Heather

    2018-04-01

    We present a new algorithm, the non-equispaced fast antileakage Fourier transform (NFALFT), for regularization of irregularly sampled seismic data. Synthetic tests from 1-D to 5-D show that the algorithm may efficiently remove leaked energy in the frequency-wavenumber domain, and that its corresponding regularization process is accurate and fast. Taking advantage of the NFALFT algorithm, we suggest a new method (wavefield separation) for the detection of the Earth's inner core shear wave with irregularly distributed seismic arrays or networks. All interfering seismic phases that propagate along the minor arc are removed from the time window around the PKJKP arrival. The NFALFT algorithm is developed for seismic data, but may also be applied to other irregularly sampled temporal or spatial data.
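
    A conceptual sketch of the antileakage idea behind NFALFT: estimate Fourier coefficients on the irregular samples, keep the strongest component, subtract its contribution from the data, and repeat, so that strong components do not leak into weak ones. This is the plain antileakage loop, not the authors' fast non-equispaced implementation.

      import numpy as np

      def alft(t, d, freqs, n_iter=100):
          # t: irregular sample times; d: data; freqs: candidate frequencies
          d = np.asarray(d, complex).copy()
          coeffs = np.zeros(len(freqs), dtype=complex)
          for _ in range(n_iter):
              c = np.array([(d * np.exp(-2j*np.pi*f*t)).mean() for f in freqs])
              k = np.argmax(np.abs(c))       # strongest remaining component
              coeffs[k] += c[k]
              d -= c[k] * np.exp(2j*np.pi*freqs[k]*t)   # subtract its leakage
          return coeffs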

  2. Time-frequency analysis of time-varying modulated signals based on improved energy separation by iterative generalized demodulation

    NASA Astrophysics Data System (ADS)

    Feng, Zhipeng; Chu, Fulei; Zuo, Ming J.

    2011-03-01

    The energy separation algorithm is good at tracking instantaneous changes in the frequency and amplitude of modulated signals, but it is constrained to mono-component, narrow-band signals. In most cases, time-varying modulated vibration signals of machinery consist of multiple components and have instantaneous frequency trajectories on the time-frequency plane so complicated that they overlap in the frequency domain. For such signals, conventional filters fail to obtain mono-components of narrow band, and their rectangular decomposition of the time-frequency plane may split instantaneous frequency trajectories, resulting in information loss. Given the advantage of the generalized demodulation method in decomposing multi-component signals into mono-components, an iterative generalized demodulation method is used as a preprocessing tool to separate signals into mono-components, so as to satisfy the requirements of the energy separation algorithm. With this improvement, the energy separation algorithm can be generalized to a broad range of signals, as long as the instantaneous frequency trajectories of the signal components do not intersect on the time-frequency plane. Due to the good adaptability of the energy separation algorithm to instantaneous changes in signals and the mono-component decomposition nature of generalized demodulation, the derived time-frequency energy distribution has fine resolution and is free from cross-term interferences. The good performance of the proposed time-frequency analysis is illustrated by analyses of a simulated signal and the on-site recorded nonstationary vibration signal of a hydroturbine rotor during a shut-down transient process, showing that it has potential to analyze multi-component time-varying modulated signals.
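
    The energy separation algorithm referred to above rests on the Teager-Kaiser energy operator. A compact sketch of the DESA-2 variant, which assumes a clean mono-component input of the kind the demodulation preprocessing is meant to provide:

      import numpy as np

      def teager(x):
          # discrete Teager-Kaiser energy operator, defined at samples 1..N-2
          return x[1:-1]**2 - x[:-2]*x[2:]

      def desa2(x, fs):
          y = x[2:] - x[:-2]                 # symmetric difference
          px = teager(x)[1:-1]               # align with teager(y)
          py = teager(y)
          omega = 0.5*np.arccos(np.clip(1.0 - py/(2.0*px), -1.0, 1.0))
          amp = 2.0*px/np.sqrt(np.abs(py) + 1e-30)
          return omega*fs/(2*np.pi), amp     # inst. frequency (Hz), amplitude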

  3. Comparative analysis of peak-detection techniques for comprehensive two-dimensional chromatography.

    PubMed

    Latha, Indu; Reichenbach, Stephen E; Tao, Qingping

    2011-09-23

    Comprehensive two-dimensional gas chromatography (GC×GC) is a powerful technology for separating complex samples. The typical goal of GC×GC peak detection is to aggregate data points of analyte peaks based on their retention times and intensities. Two techniques commonly used for two-dimensional peak detection are the two-step algorithm and the watershed algorithm. A recent study [4] compared the performance of the two-step and watershed algorithms for GC×GC data with retention-time shifts in the second-column separations. In that analysis, the peak retention-time shifts were corrected while applying the two-step algorithm but the watershed algorithm was applied without shift correction. The results indicated that the watershed algorithm has a higher probability of erroneously splitting a single two-dimensional peak than the two-step approach. This paper reconsiders the analysis by comparing peak-detection performance for resolved peaks after correcting retention-time shifts for both the two-step and watershed algorithms. Simulations with wide-ranging conditions indicate that when shift correction is employed with both algorithms, the watershed algorithm detects resolved peaks with greater accuracy than the two-step method. Copyright © 2011 Elsevier B.V. All rights reserved.
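
    A minimal illustration of watershed-based two-dimensional peak detection of the kind compared above, using scikit-image on a GCxGC-like intensity image; the minimum peak distance and baseline threshold are arbitrary choices, not values from the study.

      import numpy as np
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def detect_peaks_watershed(img, min_distance=3, threshold=0.0):
          mask = img > threshold                    # ignore the baseline
          seeds = peak_local_max(img, min_distance=min_distance,
                                 labels=mask.astype(int))
          markers = np.zeros(img.shape, dtype=int)
          markers[tuple(seeds.T)] = np.arange(1, len(seeds) + 1)
          return watershed(-img, markers, mask=mask)  # one label per peak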

  4. A memory structure adapted simulated annealing algorithm for a green vehicle routing problem.

    PubMed

    Küçükoğlu, İlker; Ene, Seval; Aksoy, Aslı; Öztürk, Nursel

    2015-03-01

    Currently, reduction of carbon dioxide (CO2) emissions and fuel consumption has become a critical environmental problem and has attracted the attention of both academia and the industrial sector. Government regulations and customer demands are making environmental responsibility an increasingly important factor in overall supply chain operations. Within these operations, transportation has the most hazardous effects on the environment, i.e., CO2 emissions, fuel consumption, noise and toxic effects on the ecosystem. This study aims to construct vehicle routes with time windows that minimize the total fuel consumption and CO2 emissions. The green vehicle routing problem with time windows (G-VRPTW) is formulated using a mixed integer linear programming model. A memory structure adapted simulated annealing (MSA-SA) meta-heuristic algorithm is constructed due to the high complexity of the proposed problem and long solution times for practical applications. The proposed models are integrated with a fuel consumption and CO2 emissions calculation algorithm that considers the vehicle technical specifications, vehicle load, and transportation distance in a green supply chain environment. The proposed models are validated using well-known instances with different numbers of customers. The computational results indicate that the MSA-SA heuristic is capable of obtaining good G-VRPTW solutions within a reasonable amount of time by providing reductions in fuel consumption and CO2 emissions.
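
    A skeleton of the simulated annealing loop of the general kind used above; the cost function, the neighborhood move, and the paper's memory structure are stand-ins to be supplied by the caller.

      import math, random

      def simulated_annealing(init, cost, neighbor,
                              T=1000.0, alpha=0.995, T_min=1e-3):
          cur, cur_cost = init, cost(init)
          best, best_cost = cur, cur_cost
          while T > T_min:
              cand = neighbor(cur)               # e.g. swap two customers
              cand_cost = cost(cand)
              if (cand_cost < cur_cost or
                      random.random() < math.exp((cur_cost - cand_cost)/T)):
                  cur, cur_cost = cand, cand_cost
                  if cur_cost < best_cost:
                      best, best_cost = cur, cur_cost
              T *= alpha                         # geometric cooling schedule
          return best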

  5. The NLS-Based Nonlinear Grey Multivariate Model for Forecasting Pollutant Emissions in China.

    PubMed

    Pei, Ling-Ling; Li, Qin; Wang, Zheng-Xin

    2018-03-08

    The relationship between pollutant discharge and economic growth has been a major research focus in environmental economics. To accurately estimate the nonlinear change law of China's pollutant discharge with economic growth, this study establishes a transformed nonlinear grey multivariable (TNGM (1, N )) model based on the nonlinear least square (NLS) method. The Gauss-Seidel iterative algorithm was used to solve the parameters of the TNGM (1, N ) model based on the NLS basic principle. This algorithm improves the precision of the model by continuous iteration and constantly approximating the optimal regression coefficient of the nonlinear model. In our empirical analysis, the traditional grey multivariate model GM (1, N ) and the NLS-based TNGM (1, N ) models were respectively adopted to forecast and analyze the relationship among wastewater discharge per capita (WDPC), and per capita emissions of SO₂ and dust, alongside GDP per capita in China during the period 1996-2015. Results indicated that the NLS algorithm is able to effectively help the grey multivariable model identify the nonlinear relationship between pollutant discharge and economic growth. The results show that the NLS-based TNGM (1, N ) model presents greater precision when forecasting WDPC, SO₂ emissions and dust emissions per capita, compared to the traditional GM (1, N ) model; WDPC indicates a growing tendency aligned with the growth of GDP, while the per capita emissions of SO₂ and dust reduce accordingly.

  6. A stethoscope with wavelet separation of cardiac and respiratory sounds for real time telemedicine implemented on field-programmable gate array

    NASA Astrophysics Data System (ADS)

    Castro, Víctor M.; Muñoz, Nestor A.; Salazar, Antonio J.

    2015-01-01

    Auscultation is one of the most utilized physical examination procedures for listening to lung, heart and intestinal sounds during routine consults and emergencies. Heart and lung sounds overlap in the thorax. An algorithm based on the discrete wavelet transform with multi-resolution analysis, which decomposes the signal into approximations and details, was used to separate them. The algorithm was implemented in software and in hardware to achieve real-time signal separation. The heart signal was found in detail eight and the lung signal in approximation six. The hardware was used to separate the signals with a delay of 256 ms. Sending wavelet decomposition data - instead of the separated full signal - allows telemedicine applications to function in real time over low-bandwidth communication channels.
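
    A sketch of the separation described above using PyWavelets, keeping detail level eight for the heart sound and approximation level six for the lung sound; the wavelet family ('db4') is an assumption, as the record does not name the paper's choice.

      import numpy as np
      import pywt

      def separate_heart_lung(x):
          c8 = pywt.wavedec(x, 'db4', level=8)   # [a8, d8, d7, ..., d1]
          heart = [np.zeros_like(c) for c in c8]
          heart[1] = c8[1]                       # keep detail level 8
          c6 = pywt.wavedec(x, 'db4', level=6)   # [a6, d6, ..., d1]
          lung = [np.zeros_like(c) for c in c6]
          lung[0] = c6[0]                        # keep approximation level 6
          return pywt.waverec(heart, 'db4'), pywt.waverec(lung, 'db4')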

  7. The algorithms for rational spline interpolation of surfaces

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.

    1986-01-01

    Two algorithms for interpolating surfaces with spline functions containing tension parameters are discussed. Both algorithms are based on the tensor products of univariate rational spline functions. The simpler algorithm uses a single tension parameter for the entire surface. This algorithm is generalized to use separate tension parameters for each rectangular subregion. The new algorithm allows for local control of tension on the interpolating surface. Both algorithms are illustrated and the results are compared with the results of bicubic spline and bilinear interpolation of terrain elevation data.

  8. Intelligent emissions controller for substance injection in the post-primary combustion zone of fossil-fired boilers

    DOEpatents

    Reifman, Jaques; Feldman, Earl E.; Wei, Thomas Y. C.; Glickert, Roger W.

    2003-01-01

    The control of emissions from fossil-fired boilers wherein an injection of substances above the primary combustion zone employs multi-layer feedforward artificial neural networks for modeling static nonlinear relationships between the distribution of injected substances into the upper region of the furnace and the emissions exiting the furnace. Multivariable nonlinear constrained optimization algorithms use the mathematical expressions from the artificial neural networks to provide the optimal substance distribution that minimizes emission levels for a given total substance injection rate. Based upon the optimal operating conditions from the optimization algorithms, the incremental substance cost per unit of emissions reduction, and the open-market price per unit of emissions reduction, the intelligent emissions controller allows for the determination of whether it is more cost-effective to achieve additional increments in emission reduction through the injection of additional substance or through the purchase of emission credits on the open market. This is of particular interest to fossil-fired electrical power plant operators. The intelligent emission controller is particularly adapted for determining the economical control of such pollutants as oxides of nitrogen (NOx) and carbon monoxide (CO) emitted by fossil-fired boilers by the selective introduction of multiple inputs of substances (such as natural gas, ammonia, oil, water-oil emulsion, coal-water slurry and/or urea, and combinations of these substances) above the primary combustion zone of fossil-fired boilers.
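
    A hedged sketch of the two-stage idea: a trained feedforward network acts as a static surrogate mapping the injection distribution to predicted emissions, and a constrained optimizer searches for the distribution that minimizes the prediction at a fixed total injection rate. The model callable, bounds, and solver below are placeholders, not the patent's implementation.

      import numpy as np
      from scipy.optimize import minimize

      def optimal_distribution(predict_emissions, n_ports, total_rate):
          # predict_emissions: trained network as a callable, x -> emissions
          x0 = np.full(n_ports, total_rate / n_ports)
          cons = ({'type': 'eq', 'fun': lambda x: x.sum() - total_rate},)
          bounds = [(0.0, total_rate)] * n_ports
          res = minimize(predict_emissions, x0, bounds=bounds,
                         constraints=cons)
          return res.x                        # per-port injection rates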

  9. Detection of Partial Discharge Sources Using UHF Sensors and Blind Signal Separation

    PubMed Central

    Boya, Carlos; Parrado-Hernández, Emilio

    2017-01-01

    The measurement of the emitted electromagnetic energy in the UHF region of the spectrum allows the detection of partial discharges and, thus, the on-line monitoring of the condition of the insulation of electrical equipment. Unfortunately, determining the affected asset is difficult when there are several simultaneous insulation defects. This paper proposes the use of an independent component analysis (ICA) algorithm to separate the signals coming from different partial discharge (PD) sources. The performance of the algorithm has been tested using UHF signals generated by test objects. The results are validated by two automatic classification techniques: support vector machines and similarity with class mean. Both methods corroborate the suitability of the algorithm to separate the signals emitted by each PD source even when they are generated by the same type of insulation defect. PMID:29140267
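
    A minimal example of the ICA separation step described above, using scikit-learn's FastICA; X is assumed to be an array of synchronized multi-sensor UHF records with one sensor per column.

      from sklearn.decomposition import FastICA

      def separate_pd_sources(X, n_sources):
          ica = FastICA(n_components=n_sources, random_state=0)
          S = ica.fit_transform(X)     # estimated sources, one per column
          return S, ica.mixing_        # sources and estimated mixing matrix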

  10. ENDOCRINE DISRUPTING CHEMICAL EMISSIONS FROM COMBUSTION SOURCES: DIESEL PARTICULATE EMISSIONS AND DOMESTIC WASTE OPEN BURN EMISSIONS

    EPA Science Inventory

    Emissions of endocrine disrupting chemicals (EDCs) from combustion sources are poorly characterized due to the large number of compounds present in the emissions, the complexity of the analytical separations required, and the uncertainty regarding identification of chemicals with...

  11. An FBG acoustic emission source locating system based on PHAT and GA

    NASA Astrophysics Data System (ADS)

    Shen, Jing-shi; Zeng, Xiao-dong; Li, Wei; Jiang, Ming-shun

    2017-09-01

    Using acoustic emission locating technology to monitor structural health is important for ensuring the continuous and healthy operation of complex engineering structures and large mechanical equipment. In this paper, four fiber Bragg grating (FBG) sensors are used to establish a sensor array to locate the acoustic emission source. Firstly, the nonlinear locating equations are established based on the principle of acoustic emission, and the solution of these equations is transformed into an optimization problem. Secondly, a time-difference extraction algorithm based on phase transform (PHAT)-weighted generalized cross-correlation provides the necessary conditions for accurate localization. Finally, the genetic algorithm (GA) is used to solve the optimization model. In this paper, twenty points are tested on the surface of a marble plate, and the results show that the absolute locating error is within 10 mm, which demonstrates the accuracy of this locating method.
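
    A compact sketch of the PHAT-weighted generalized cross-correlation used for time-difference extraction above: the cross-spectrum is whitened by its magnitude, and the peak of the inverse transform gives the relative delay between two sensor records.

      import numpy as np

      def gcc_phat(x, y, fs):
          n = len(x) + len(y)
          X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
          R = X * np.conj(Y)                        # cross-spectrum
          cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n)
          cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
          return (np.argmax(np.abs(cc)) - n // 2) / fs   # delay in seconds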

  12. A TCAS-II Resolution Advisory Detection Algorithm

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar; Narkawicz, Anthony; Chamberlain, James

    2013-01-01

    The Traffic Alert and Collision Avoidance System (TCAS) is a family of airborne systems designed to reduce the risk of mid-air collisions between aircraft. TCAS II, the current generation of TCAS devices, provides resolution advisories that direct pilots to maintain or increase vertical separation when aircraft distance and time parameters are beyond designed system thresholds. This paper presents a mathematical model of the TCAS II Resolution Advisory (RA) logic that assumes accurate aircraft state information. Based on this model, an algorithm for RA detection is also presented. This algorithm is analogous to a conflict detection algorithm, but instead of predicting loss of separation, it predicts resolution advisories. It has been formally verified that, for a kinematic model of aircraft trajectories, this algorithm completely and correctly characterizes all encounter geometries between two aircraft that lead to a resolution advisory within a given lookahead time interval. The RA detection algorithm proposed in this paper is a fundamental component of a NASA sense-and-avoid concept for the integration of Unmanned Aircraft Systems in civil airspace.

  13. Highly stable individual differences in the emission of separation calls during early development in the domestic cat.

    PubMed

    Hudson, Robyn; Chacha, Jimena; Bánszegi, Oxána; Szenczi, Péter; Rödel, Heiko G

    2017-04-01

    Study of the development of individuality is often hampered by rapidly changing behavioral repertoires and the need for minimally intrusive tests. We individually tested 33 kittens from eight litters of the domestic cat in an arena for 3 min once a week for the first 3 postnatal weeks, recording the number of separation calls and the duration of locomotor activity. Kittens showed consistent and stable individual differences on both measures across and within trials. Stable individual differences in the emission of separation calls across trials emerged already within the first 10 s of testing, and in locomotor activity within the first 30 s. Furthermore, individual kittens' emission of separation calls, but not their locomotor activity, was highly stable within trials. We conclude that separation calls provide an efficient, minimally intrusive and reliable measure of individual differences in behavior during development in the cat, and possibly in other species emitting such calls. © 2017 Wiley Periodicals, Inc.

  14. Scheduling logic for Miles-In-Trail traffic management

    NASA Technical Reports Server (NTRS)

    Synnestvedt, Robert G.; Swenson, Harry; Erzberger, Heinz

    1995-01-01

    This paper presents an algorithm which can be used for scheduling arrival air traffic in an Air Route Traffic Control Center (ARTCC or Center) entering a Terminal Radar Approach Control (TRACON) facility. The algorithm aids a Traffic Management Coordinator (TMC) in deciding how to restrict traffic while the traffic expected to arrive in the TRACON exceeds the TRACON capacity. The restrictions employed fall under the category of Miles-in-Trail, one of two principal traffic separation techniques used in scheduling arrival traffic. The algorithm calculates aircraft separations for each stream of aircraft destined to the TRACON. The calculations depend upon TRACON characteristics, TMC preferences, and other parameters adapted to the specific needs of scheduling traffic in a Center. Some preliminary results of traffic simulations scheduled by this algorithm are presented, and conclusions are drawn as to the effectiveness of using this algorithm in different traffic scenarios.

  15. Report of the first Nimbus-7 SMMR Experiment Team Workshop

    NASA Technical Reports Server (NTRS)

    Campbell, W. J.; Gloersen, P.

    1983-01-01

    Preliminary sea ice results and techniques for calculating sea ice concentration and multiyear fraction from the microwave radiances obtained by the Nimbus-7 SMMR were presented. From these results, it is evident that the groups used different and independent approaches in deriving sea ice emissivities and algorithms, which precluded precise comparisons of their results. A common set of sea ice emissivities was defined for all groups to use for a subsequent, more careful comparison of the results from the various sea ice parameter algorithms. To this end, three different geographical areas in two different time intervals were defined as typifying SMMR beam-filling conditions for first-year sea ice, multiyear sea ice, and open water, to be used for determining the required microwave emissivities.

  16. Tactical Conflict Detection in Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Tang, Huabin; Robinson, John E.; Denery, Dallas G.

    2010-01-01

    Air traffic systems have long relied on automated short-term conflict prediction algorithms to warn controllers of impending conflicts (losses of separation). The complexity of terminal airspace has proven difficult for such systems as it often leads to excessive false alerts. Thus, the legacy system, called Conflict Alert, which provides short-term alerts in both en-route and terminal airspace currently, is often inhibited or degraded in areas where frequent false alerts occur, even though the alerts are provided only when an aircraft is in dangerous proximity of other aircraft. This research investigates how a minimal level of flight intent information may be used to improve short-term conflict detection in terminal airspace such that it can be used by the controller to maintain legal aircraft separation. The flight intent information includes a site-specific nominal arrival route and inferred altitude clearances in addition to the flight plan that includes the RNAV (Area Navigation) departure route. A new tactical conflict detection algorithm is proposed, which uses a single analytic trajectory, determined by the flight intent and the current state information of the aircraft, and includes a complex set of current, dynamic separation standards for terminal airspace to define losses of separation. The new algorithm is compared with an algorithm that imitates a known en-route algorithm and another that imitates Conflict Alert by analysis of false-alert rate and alert lead time with recent real-world data of arrival and departure operations and a large set of operational error cases from Dallas/Fort Worth TRACON (Terminal Radar Approach Control). The new algorithm yielded a false-alert rate of two per hour and an average alert lead time of 38 seconds.

  17. Using nonlocal means to separate cardiac and respiration sounds

    NASA Astrophysics Data System (ADS)

    Rudnitskii, A. G.

    2014-11-01

    The paper presents the results of applying the nonlocal means (NLM) approach to the problem of separating respiration and cardiac sounds in a signal recorded on a human chest wall. The performance of the algorithm was tested on both simulated and real signals. As a quantitative measure of the efficiency of NLM filtration, the angle of divergence between the isolated and reference signals was used. It is shown that for a wide range of signal-to-noise ratios, the algorithm efficiently separates cardiac and respiration sounds in the summed signal recorded on a human chest wall.
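
    A minimal one-dimensional nonlocal-means filter of the general kind applied above: each sample is replaced by a weighted average of samples whose surrounding patches look similar. The patch size, search radius, and smoothing parameter h are assumed values.

      import numpy as np

      def nlm_1d(x, patch=5, search=50, h=0.1):
          x = np.asarray(x, float)
          half = patch // 2
          pad = np.pad(x, half, mode='reflect')
          patches = np.lib.stride_tricks.sliding_window_view(pad, patch)
          out = np.empty_like(x)
          for i in range(len(x)):
              lo, hi = max(0, i - search), min(len(x), i + search + 1)
              d2 = ((patches[lo:hi] - patches[i])**2).mean(axis=1)
              w = np.exp(-d2 / h**2)            # patch-similarity weights
              out[i] = (w * x[lo:hi]).sum() / w.sum()
          return out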

  18. 40 CFR 61.352 - Alternative standards for oil-water separators.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    40 CFR Protection of Environment, Vol. 9 (2014-07-01): § 61.352 Alternative standards for oil-water separators. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS, National Emission Standard for Benzene Waste Operations.

  19. Nanoparticle-enhanced fluorescence emission for non-separation assays of carbohydrates using a boronic acid-alizarin complex.

    PubMed

    Li, Qianjin; Kamra, Tripta; Ye, Lei

    2016-03-04

    Addition of crosslinked polymer nanoparticles into a solution of a 3-nitrophenylboronic acid-alizarin complex leads to significant enhancement of fluorescence emission. Using the nanoparticle-enhanced boronic acid-alizarin system has improved greatly the sensitivity and extended the dynamic range of separation-free fluorescence assays for carbohydrates.

  20. Unsupervised Learning for Monaural Source Separation Using Maximization–Minimization Algorithm with Time–Frequency Deconvolution †

    PubMed Central

    Bouridane, Ahmed; Ling, Bingo Wing-Kuen

    2018-01-01

    This paper presents an unsupervised learning algorithm for sparse nonnegative matrix factor time–frequency deconvolution with optimized fractional β-divergence. The β-divergence is a group of cost functions parametrized by a single parameter β. The Itakura–Saito divergence, Kullback–Leibler divergence and Least Square distance are special cases that correspond to β=0, 1, 2, respectively. This paper presents a generalized algorithm that uses a flexible range of β that includes fractional values. It describes a maximization–minimization (MM) algorithm leading to the development of a fast convergence multiplicative update algorithm with guaranteed convergence. The proposed model operates in the time–frequency domain and decomposes an information-bearing matrix into two-dimensional deconvolution of factor matrices that represent the spectral dictionary and temporal codes. The deconvolution process has been optimized to yield sparse temporal codes through maximizing the likelihood of the observations. The paper also presents a method to estimate the fractional β value. The method is demonstrated on separating audio mixtures recorded from a single channel. The paper shows that the extraction of the spectral dictionary and temporal codes is significantly more efficient by using the proposed algorithm and subsequently leads to better source separation performance. Experimental tests and comparisons with other factorization methods have been conducted to verify its efficacy. PMID:29702629
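
    For orientation, one multiplicative update of plain beta-divergence NMF (V approximately equal to WH) is sketched below; the paper's time-frequency deconvolution and sparsity terms are not reproduced, and beta may be fractional as discussed above.

      import numpy as np

      def mu_step(V, W, H, beta):
          # one multiplicative update for the beta-divergence D_beta(V || WH)
          WH = W @ H + 1e-12
          H = H * (W.T @ (WH**(beta - 2) * V)) / (W.T @ WH**(beta - 1) + 1e-12)
          WH = W @ H + 1e-12
          W = W * ((WH**(beta - 2) * V) @ H.T) / (WH**(beta - 1) @ H.T + 1e-12)
          return W, H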

  1. Advancements to the planogram frequency–distance rebinning algorithm

    PubMed Central

    Champley, Kyle M; Raylman, Raymond R; Kinahan, Paul E

    2010-01-01

    In this paper we consider the task of image reconstruction in positron emission tomography (PET) with the planogram frequency–distance rebinning (PFDR) algorithm. The PFDR algorithm is a rebinning algorithm for PET systems with panel detectors. The algorithm is derived in the planogram coordinate system which is a native data format for PET systems with panel detectors. A rebinning algorithm averages over the redundant four-dimensional set of PET data to produce a three-dimensional set of data. Images can be reconstructed from this rebinned three-dimensional set of data. This process enables one to reconstruct PET images more quickly than reconstructing directly from the four-dimensional PET data. The PFDR algorithm is an approximate rebinning algorithm. We show that implementing the PFDR algorithm followed by the (ramp) filtered backprojection (FBP) algorithm in linogram coordinates from multiple views reconstructs a filtered version of our image. We develop an explicit formula for this filter which can be used to achieve exact reconstruction by means of a modified FBP algorithm applied to the stack of rebinned linograms and can also be used to quantify the errors introduced by the PFDR algorithm. This filter is similar to the filter in the planogram filtered backprojection algorithm derived by Brasse et al. The planogram filtered backprojection and exact reconstruction with the PFDR algorithm require complete projections which can be completed with a reprojection algorithm. The PFDR algorithm is similar to the rebinning algorithm developed by Kao et al. By expressing the PFDR algorithm in detector coordinates, we provide a comparative analysis between the two algorithms. Numerical experiments using both simulated data and measured data from a positron emission mammography/tomography (PEM/PET) system are performed. Images are reconstructed by PFDR+FBP (PFDR followed by 2D FBP reconstruction), PFDRX (PFDR followed by the modified FBP algorithm for exact reconstruction) and planogram filtered backprojection image reconstruction algorithms. We show that the PFDRX algorithm produces images that are nearly as accurate as images reconstructed with the planogram filtered backprojection algorithm and more accurate than images reconstructed with the PFDR+FBP algorithm. Both the PFDR+FBP and PFDRX algorithms provide a dramatic improvement in computation time over the planogram filtered backprojection algorithm. PMID:20436790

  2. Recognizing Age-Separated Face Images: Humans and Machines

    PubMed Central

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario. PMID:25474200

  3. Recognizing age-separated face images: humans and machines.

    PubMed

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components--facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) face as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system in the young image as probe scenario.

  4. Evaluating four N2O emission algorithms in RZWQM2 in response to N rate on an irrigated corn field

    USDA-ARS?s Scientific Manuscript database

    Nitrous oxide (N2O) emissions from agricultural soils are major contributors to greenhouse gases. Correctly assessing the effects of the interactions between agricultural practices and environmental factors on N2O emissions is required for better crop and nitrogen (N) management. We used an enhanced...

  5. Evaluation of Long-term Aerosol Data Records from SeaWiFS over Land and Ocean

    NASA Astrophysics Data System (ADS)

    Bettenhausen, C.; Hsu, C.; Jeong, M.; Huang, J.

    2010-12-01

    Deserts around the globe produce mineral dust aerosols that may then be transported over cities, across continents, or even oceans. These aerosols affect the Earth’s energy balance through direct and indirect interactions with incoming solar radiation. They also have a biogeochemical effect as they deliver scarce nutrients to remote ecosystems. Large dust storms regularly disrupt air traffic and are a general nuisance to those living in transport regions. In the past, measuring dust aerosols has been incomplete at best. Satellite retrieval algorithms were limited to oceans or vegetated surfaces and typically neglected desert regions due to their high surface reflectivity in the mid-visible and near-infrared wavelengths, which have been typically used for aerosol retrievals. The Deep Blue aerosol retrieval algorithm was developed to resolve these shortcomings by utilizing the blue channels from instruments such as the Sea-Viewing Wide-Field-of-View Sensor (SeaWiFS) and the Moderate Resolution Imaging Spectroradiometer (MODIS) to infer aerosol properties over these highly reflective surfaces. The surface reflectivity of desert regions is much lower in the blue channels and thus it is easier to separate the aerosol and surface signals than at the longer wavelengths used in other algorithms. More recently, the Deep Blue algorithm has been expanded to retrieve over vegetated surfaces and oceans as well. A single algorithm can now follow dust from source to sink. In this work, we introduce the SeaWiFS instrument and the Deep Blue aerosol retrieval algorithm. We have produced global aerosol data records over land and ocean from 1997 through 2009 using the Deep Blue algorithm and SeaWiFS data. We describe these data records and validate them with data from the Aerosol Robotic Network (AERONET). We also show the relative performance compared to the current MODIS Deep Blue operational aerosol data in desert regions. The current results are encouraging and this dataset will be useful to future studies in understanding the effects of dust aerosols on global processes, long-term aerosol trends, quantifying dust emissions, transport, and inter-annual variability.

  6. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients.

    PubMed

    Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong

    2013-01-07

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, features fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which makes it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper can improve registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
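
    A single demons iteration of the classical (Thirion) form underlying the GMI demons method above; the mutual-information force term of the paper is not reproduced, and in practice the update is followed by Gaussian smoothing of the displacement field.

      import numpy as np

      def demons_step(F, M, eps=1e-9):
          # F: fixed image; M: moving image resampled with current deformation
          gFy, gFx = np.gradient(F)            # gradients of the fixed image
          diff = M - F
          denom = gFx**2 + gFy**2 + diff**2 + eps
          return -diff * gFx / denom, -diff * gFy / denom   # (ux, uy)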

  7. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients

    PubMed Central

    Jin, Shuo; Li, Dengwang; Yin, Yong

    2013-01-01

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT, and improve the accuracy of radiation therapy. Global registration was first utilized to preprocess position errors between PET and CT images, aligning the two images on the whole. The demons algorithm, based on the optical flow field, features fast processing speed and high accuracy, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which makes it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper can improve registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications. PACS numbers: 87.57.nj, 87.57.Q-, 87.57.uk PMID:23318381

  8. Noninvasive glucose monitoring by optical reflective and thermal emission spectroscopic measurements

    NASA Astrophysics Data System (ADS)

    Saetchnikov, V. A.; Tcherniavskaia, E. A.; Schiffner, G.

    2005-08-01

    A noninvasive method for blood glucose monitoring in cutaneous tissue, based on reflective spectrometry combined with thermal emission spectroscopy, has been developed. Regression analysis, neural network algorithms and cluster analysis are used for data processing.

  9. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    NASA Astrophysics Data System (ADS)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
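
    A minimal sketch of the mean-shift step at the core of the method above, clustering metallographic pixels as (row, column, intensity) features with scikit-learn; the bandwidth is an assumed parameter and the paper's improvements are not reproduced.

      import numpy as np
      from sklearn.cluster import MeanShift

      def mean_shift_regions(img, bandwidth=20.0):
          rows, cols = np.indices(img.shape)
          X = np.column_stack([rows.ravel(), cols.ravel(), img.ravel()])
          labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(X)
          return labels.reshape(img.shape)   # boundaries lie between labels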

  10. Automatic segmentation of thermal images of diabetic-at-risk feet using the snakes algorithm

    NASA Astrophysics Data System (ADS)

    Etehadtavakol, Mahnaz; Ng, E. Y. K.; Kaabouch, Naima

    2017-11-01

    Diabetes is a disease with multi-systemic problems. It is a leading cause of death, medical costs, and loss of productivity. Foot ulcers are a well-known complication of uncontrolled diabetes that can lead to amputation. Signs of foot ulcers are not always obvious; sometimes symptoms do not show up until the ulcer is infected. Hence, identification of pre-ulceration of the plantar surface of the foot in diabetics is beneficial. Thermography has the potential to identify regions of the plantar surface with no visible evidence of ulceration but at risk. Thermography is a technique that is safe, easy, non-invasive, contact-free, and repeatable. In this study, 59 thermographic images of the plantar foot of patients with diabetic neuropathy are processed with the snakes algorithm, which automatically separates the two feet from the background and the right foot from the left in each image. The algorithm then segments each foot into clusters according to temperature; the hottest regions carry the highest risk of ulceration for each foot. The algorithm worked perfectly for all the current images.
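
    A minimal illustration of segmenting one foot from a thermal image with the snakes (active contour) model via scikit-image; the circular initialization and the smoothness/attraction weights are assumptions, not the paper's settings.

      import numpy as np
      from skimage.filters import gaussian
      from skimage.segmentation import active_contour

      def segment_foot(img, center, radius):
          t = np.linspace(0, 2*np.pi, 200)
          init = np.column_stack([center[0] + radius*np.sin(t),
                                  center[1] + radius*np.cos(t)])  # (row, col)
          return active_contour(gaussian(img, 3), init,
                                alpha=0.015, beta=10.0, gamma=0.001)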

  11. Carbon emission trading system of China: a linked market vs. separated markets

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Feng, Shenghao; Cai, Songfeng; Zhang, Yaxiong; Zhou, Xiang; Chen, Yanbin; Chen, Zhanming

    2013-12-01

    The Chinese government intends to upgrade its current provincial carbon emission trading pilots to a nationwide scheme by 2015. This study investigates two scenarios: separated provincial markets and a linked inter-provincial market. The carbon abatement effects of separated and linked markets are compared for the two pilot provinces of Hubei and Guangdong using a computable general equilibrium model termed Sino-TERMCo2. Simulation results show that the linked market can improve social welfare and reduce carbon emission intensity for the nation as well as for the Hubei-Guangdong bloc, compared to the separated markets. However, the combined system also distributes welfare more unevenly and thus increases social inequity. On policy grounds, the current results suggest that a well-constructed nationwide carbon market, complemented with adequate welfare transfer policies, can be employed to replace the current top-down abatement target disaggregation practice.

  12. Determination of urine-derived odorous compounds in a source separation sanitation system.

    PubMed

    Liu, Bianxia; Giannis, Apostolos; Chen, Ailu; Zhang, Jiefeng; Chang, Victor W C; Wang, Jing-Yuan

    2017-02-01

    Source separation sanitation systems have attracted increasing attention recently. However, separate urine collection and treatment could cause odor issues, especially in large-scale applications. In order to avoid such issues, it is necessary to monitor the odor-related compounds that might be generated during urine storage. This study investigated the odorous compounds emitted from source-separated human urine under different hydrolysis conditions. Batch experiments were conducted to investigate the effects of temperature, stale/fresh urine ratio and urine dilution on odor emissions. It was found that ammonia, dimethyl disulfide, allyl methyl sulfide and 4-heptanone were the main odorous compounds generated from human urine, with headspace concentrations hundreds of times higher than their respective odor thresholds. Furthermore, high temperature accelerated urine hydrolysis and liquid-gas mass transfer, resulting in a remarkable increase of odor emissions from the urine solution. The addition of stale urine enhanced urine hydrolysis and expedited odor emissions. On the contrary, diluted urine emitted fewer odorous compounds owing to the reduced concentrations of odorant precursors. In addition, this study quantified the odor emissions and revealed the constraints of urine source separation in real-world applications. To address the odor issue, several control strategies are recommended for odor mitigation or elimination from an engineering perspective. Copyright © 2016. Published by Elsevier B.V.

  13. FPGA implementation of ICA algorithm for blind signal separation and adaptive noise canceling.

    PubMed

    Kim, Chang-Min; Park, Hyung-Min; Kim, Taesu; Choi, Yoon-Kyung; Lee, Soo-Young

    2003-01-01

    A field-programmable gate array (FPGA) implementation of an independent component analysis (ICA) algorithm is reported for blind signal separation (BSS) and adaptive noise canceling (ANC) in real time. In order to provide the enormous computing power required by ICA-based algorithms with multipath reverberation, a special digital processor was designed and implemented in an FPGA. The chip design fully utilizes a modular concept, and several chips may be put together for complex applications with a large number of noise sources. Experimental results with a fabricated test board are reported for ANC only, BSS only, and simultaneous ANC/BSS, demonstrating successful speech enhancement in real environments in real time.

  14. An Integrated Approach to Economic and Environmental Aspects of Air Pollution and Climate Interactions

    NASA Astrophysics Data System (ADS)

    Sarofim, M. C.

    2007-12-01

    Emissions of greenhouse gases and conventional pollutants are closely linked through shared generation processes, so policies directed toward long-lived greenhouse gases affect emissions of conventional pollutants and, similarly, policies directed toward conventional pollutants affect emissions of greenhouse gases. Some conventional pollutants such as aerosols also have direct radiative effects. NOx and VOCs are precursors of ozone, another substance with both radiative and health impacts, and these ozone precursors also interact with the chemistry of the hydroxyl radical, the major methane sink. Realistic scenarios of future emissions and concentrations must therefore account for both air pollution and greenhouse gas policies and how they interact economically as well as atmospherically, including the regional pattern of emissions and regulation. We have modified a 16-region computable general equilibrium economic model (the MIT Emissions Prediction and Policy Analysis model) by including elasticities of substitution for ozone precursors and aerosols in order to examine these interactions between climate policy and air pollution policy on a global scale. Urban emissions are distributed based on population density and aged using a reduced-form urban model before release into an atmospheric chemistry/climate model (the earth systems component of the MIT Integrated Global Systems Model). This integrated approach enables examination of the direct impacts of air pollution on climate, the ancillary and complementary interactions between air pollution and climate policies, and the impact of different population distribution algorithms or urban emission aging schemes on global-scale properties. This modeling exercise shows that while ozone levels are reduced due to NOx and VOC reductions, these reductions lead to an increase in methane concentrations that eliminates the temperature effects of the ozone reductions. However, black carbon reductions do have significant direct effects on global mean temperatures, as do ancillary reductions of greenhouse gases due to the pollution constraints imposed in the economic model. Finally, we show that the economic benefits of coordinating air pollution and climate policies rather than implementing them separately are on the order of 20% of the total policy cost.

  15. Conception of discrete systems decomposition algorithm using p-invariants and hypergraphs

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Ł.

    2016-09-01

    In this article the author presents an idea for a decomposition algorithm for discrete systems described by Petri nets using p-invariants. The decomposition process is significant from the point of view of discrete systems design, because it allows separation of smaller sequential parts. The proposed algorithm uses a modified Martinez-Silva method as well as the author's selection algorithm. The developed method is a good complement to classical decomposition algorithms using graphs and hypergraphs.
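
    The defining condition behind p-invariant-based decomposition can be sketched directly: a P-invariant is an integer vector y with y^T C = 0 for the incidence matrix C. The toy net below is invented, and the Martinez-Silva method itself enumerates invariants incrementally rather than via a symbolic null space.

```python
from sympy import Matrix, lcm

# Incidence matrix C (places x transitions) of a made-up 3-place cycle net:
# t1: p1 -> p2, t2: p2 -> p3, t3: p3 -> p1
C = Matrix([[-1,  0,  1],
            [ 1, -1,  0],
            [ 0,  1, -1]])

invariants = []
for v in C.T.nullspace():              # P-invariants: y with C^T y = 0
    scale = lcm([e.q for e in v])      # clear denominators -> integer vector
    invariants.append(v * scale)
print(invariants)                      # [Matrix([[1],[1],[1]])]: p1+p2+p3 is conserved
```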

  16. Physics-Based Computational Algorithm for the Multi-Fluid Plasma Model

    DTIC Science & Technology

    2014-06-30

    A blended finite element method is applied to the species-separation problem in laser-driven capsule implosions. Number densities and the electric field are shown after the laser drive has compressed the multi-fluid plasma, and a species separation clearly develops. The solution is found using an explicit advance (CFL=1).

  17. Parallel Algorithm Solves Coupled Differential Equations

    NASA Technical Reports Server (NTRS)

    Hayashi, A.

    1987-01-01

    Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
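
    One concrete decomposition strategy of this general kind (not necessarily the report's hypercube algorithm) is Jacobi waveform relaxation: each block of a coupled system is integrated against the previous iterate's trajectory, so the block integrations can run concurrently. A minimal sketch, with an invented weakly coupled linear system:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def integrate_block(A, idx, X_prev, x0, dt):
    # Euler-integrate one block; coupling terms are frozen at the previous iterate
    xb = np.empty((X_prev.shape[0], len(idx)))
    xb[0] = x0[idx]
    for n in range(X_prev.shape[0] - 1):
        x_full = X_prev[n].copy()
        x_full[idx] = xb[n]                          # own block: current sweep values
        xb[n + 1] = xb[n] + dt * (A[idx] @ x_full)   # explicit Euler step
    return idx, xb

A = np.array([[-1.0, 0.2, 0.0, 0.0],
              [0.1, -1.0, 0.1, 0.0],
              [0.0, 0.2, -1.0, 0.1],
              [0.0, 0.0, 0.2, -1.0]])   # dx/dt = A x, two weakly coupled blocks
x0, dt, steps = np.ones(4), 0.01, 200
blocks = [np.array([0, 1]), np.array([2, 3])]
X = np.tile(x0, (steps + 1, 1))         # initial guess: trajectory frozen at x0

for _ in range(5):                      # relaxation sweeps until trajectories settle
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(integrate_block,
                                [A] * len(blocks), blocks,
                                [X] * len(blocks), [x0] * len(blocks),
                                [dt] * len(blocks)))
    X_new = X.copy()
    for idx, xb in results:
        X_new[:, idx] = xb
    X = X_new
print(X[-1])                            # state at t = steps * dt
```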

  18. 78 FR 54502 - Self-Regulatory Organizations; Financial Industry Regulatory Authority, Inc.; Notice of Filing of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... transactions. Transactions that originate from unrelated algorithms or separate and distinct trading strategies... transactions were undertaken for manipulative or other fraudulent purposes. Algorithms or trading strategies... activity and the use of algorithms by firms to make trading decisions, FINRA has observed an increase in...

  19. REACTT: an algorithm for solving spatial equilibrium problems.

    Treesearch

    D.J. Brooks; J. Kincaid

    1987-01-01

    The problem of determining equilibrium prices and quantities in spatially separated markets is reviewed. Algorithms that compute spatial equilibria are discussed. A computer program using the reactive programming algorithm for solving spatial equilibrium problems that involve multiple commodities is presented, along with detailed documentation. A sample data set,...

  20. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    NASA Astrophysics Data System (ADS)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.
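
    A minimal sketch of the underlying idea, maximum-likelihood inversion under Poisson counting statistics via Richardson-Lucy iterations, is given below; the kernel is a crude invented stand-in for the Bremsstrahlung response, and the paper's actual regularization scheme is more sophisticated than the early stopping used here.

```python
import numpy as np

def richardson_lucy(K, d, n_iter=200):
    # Multiplicative updates that increase the Poisson likelihood of d = K f
    f = np.full(K.shape[1], d.sum() / K.shape[1])   # flat initial guess
    norm = K.sum(axis=0)                            # K^T 1
    for _ in range(n_iter):                         # early stopping ~ crude regularization
        pred = np.maximum(K @ f, 1e-12)
        f *= (K.T @ (d / pred)) / np.maximum(norm, 1e-12)
    return f

E = np.linspace(0.2, 8.0, 80)               # energy grid (keV)
K = np.triu(np.ones((80, 80)))              # toy kernel: photon energy <= electron energy
rng = np.random.default_rng(1)
counts = rng.poisson(K @ np.exp(-E)).astype(float)
eedf = richardson_lucy(K, counts)           # recovers roughly the exp(-E) shape
```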

  1. The NLS-Based Nonlinear Grey Multivariate Model for Forecasting Pollutant Emissions in China

    PubMed Central

    Pei, Ling-Ling; Li, Qin

    2018-01-01

    The relationship between pollutant discharge and economic growth has been a major research focus in environmental economics. To accurately estimate the nonlinear change law of China’s pollutant discharge with economic growth, this study establishes a transformed nonlinear grey multivariable (TNGM (1, N)) model based on the nonlinear least square (NLS) method. The Gauss–Seidel iterative algorithm was used to solve the parameters of the TNGM (1, N) model based on the NLS basic principle. This algorithm improves the precision of the model by continuous iteration, constantly approximating the optimal regression coefficients of the nonlinear model. In our empirical analysis, the traditional grey multivariate model GM (1, N) and the NLS-based TNGM (1, N) model were respectively adopted to forecast and analyze the relationship among wastewater discharge per capita (WDPC), per capita emissions of SO2 and dust, and GDP per capita in China during the period 1996–2015. Results indicated that the NLS algorithm effectively helps the grey multivariable model identify the nonlinear relationship between pollutant discharge and economic growth. The results show that the NLS-based TNGM (1, N) model presents greater precision when forecasting WDPC, SO2 emissions and dust emissions per capita, compared to the traditional GM (1, N) model; WDPC indicates a growing tendency aligned with the growth of GDP, while the per capita emissions of SO2 and dust reduce accordingly. PMID:29517985
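
    The Gauss-Seidel iteration itself is easy to sketch on a generic linear system; in the paper the same sweep-and-update idea is used to approach the optimal regression coefficients of the TNGM (1, N) model. The system below is invented.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    # Sweep through the unknowns, always using freshly updated components
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))   # matches np.linalg.solve(A, b)
```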

  2. Global Warming: Predicting OPEC Carbon Dioxide Emissions from Petroleum Consumption Using Neural Network and Hybrid Cuckoo Search Algorithm.

    PubMed

    Chiroma, Haruna; Abdul-kareem, Sameem; Khan, Abdullah; Nawi, Nazri Mohd; Gital, Abdulsalam Ya'u; Shuib, Liyana; Abubakar, Adamu I; Rahman, Muhammad Zubair; Herawan, Tutut

    2015-01-01

    Global warming is attracting attention from policy makers due to its impacts such as floods, extreme weather, increases in temperature by 0.7°C, heat waves, storms, etc. These disasters result in loss of human life and billions of dollars in property. Global warming is believed to be caused by the emissions of greenhouse gases due to human activities including the emissions of carbon dioxide (CO2) from petroleum consumption. Limitations of the previous methods of predicting CO2 emissions and lack of work on the prediction of the Organization of the Petroleum Exporting Countries (OPEC) CO2 emissions from petroleum consumption have motivated this research. The OPEC CO2 emissions data were collected from the Energy Information Administration. Artificial Neural Network (ANN) adaptability and performance motivated its choice for this study. To improve effectiveness of the ANN, the cuckoo search algorithm was hybridised with accelerated particle swarm optimisation for training the ANN to build a model for the prediction of OPEC CO2 emissions. The proposed model predicts OPEC CO2 emissions for 3, 6, 9, 12 and 16 years with an improved accuracy and speed over the state-of-the-art methods. An accurate prediction of OPEC CO2 emissions can serve as a reference point for propagating the reorganisation of economic development in OPEC member countries with the view of reducing CO2 emissions to Kyoto benchmarks--hence, reducing global warming. The policy implications are discussed in the paper.
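
    A minimal cuckoo-search sketch (Levy flights plus abandonment of the worst nests) is shown below on a toy objective; the study's hybrid additionally couples this with accelerated particle swarm optimisation to train the ANN, and all parameters here are illustrative.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, rng, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim=5, n_nests=15, pa=0.25, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([f(n) for n in nests])
    for _ in range(n_iter):
        best = nests[fit.argmin()]
        for i in range(n_nests):                      # new solutions via Levy flights
            trial = nests[i] + 0.01 * levy_step(dim, rng) * (nests[i] - best)
            ft = f(trial)
            if ft < fit[i]:
                nests[i], fit[i] = trial, ft
        worst = fit.argsort()[-int(pa * n_nests):]    # abandon a fraction of nests
        nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
        fit[worst] = np.array([f(n) for n in nests[worst]])
    return nests[fit.argmin()], fit.min()

best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)))  # sphere test function
print(best_f)
```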

  3. Field emission characteristics of a small number of carbon fiber emitters

    NASA Astrophysics Data System (ADS)

    Tang, Wilkin W.; Shiffler, Donald A.; Harris, John R.; Jensen, Kevin L.; Golby, Ken; LaCour, Matthew; Knowles, Tim

    2016-09-01

    This paper reports an experiment that studies the emission characteristics of a small number of field emitters. The experiment consists of nine carbon fibers in a square configuration. Experimental results show that the emission characteristics depend strongly on the separation between each emitter, providing evidence of electric field screening effects. Our results indicate that as the separation between the emitters decreases, the emission current for a given voltage also decreases. The authors compare the experimental results to four carbon fiber emitters in linear and square configurations as well as to two carbon fiber emitters in a paired array. Voltage-current traces show that the turn-on voltage is always larger for the nine carbon fiber emitters as compared to the two and four emitters in linear configurations, and approximately identical to the four emitters in a square configuration. The observations and analysis reported here, based on Fowler-Nordheim field emission theory, suggest the electric field screening effect depends critically on the number of emitters, the separation between them, and their overall geometric configuration.
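
    Fowler-Nordheim behavior can be checked with a standard FN-plot fit: since I ≈ aV^2 exp(-b/V), plotting ln(I/V^2) against 1/V is linear, and changes in the fitted slope reflect changes in field enhancement such as those caused by screening. The data below are synthetic.

```python
import numpy as np

V = np.linspace(2000.0, 6000.0, 25)        # applied voltage (V)
a_true, b_true = 1e-9, 2.0e4
I = a_true * V ** 2 * np.exp(-b_true / V)  # synthetic Fowler-Nordheim current

# FN plot: ln(I/V^2) versus 1/V is a straight line with slope -b
slope, intercept = np.polyfit(1.0 / V, np.log(I / V ** 2), 1)
print(f"fitted b = {-slope:.3g}, true b = {b_true:.3g}")
```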

  4. Joint optimization of green vehicle scheduling and routing problem with time-varying speeds

    PubMed Central

    Zhang, Dezhi; Wang, Xin; Ni, Nan; Zhang, Zhuo

    2018-01-01

    Based on an analysis of the congestion effect and changes in the speed of vehicle flow during morning and evening peaks in a large- or medium-sized city, the piecewise function is used to capture the rules of the time-varying speed of vehicles, which are very important in modelling their fuel consumption and CO2 emission. A joint optimization model of the green vehicle scheduling and routing problem with time-varying speeds is presented in this study. Extra wages during nonworking periods and soft time-window constraints are considered. A heuristic algorithm based on the adaptive large neighborhood search algorithm is also presented. Finally, a numerical simulation example is provided to illustrate the optimization model and its algorithm. Results show that, (1) the shortest route is not necessarily the route that consumes the least energy, (2) the departure time influences the vehicle fuel consumption and CO2 emissions and the optimal departure time saves on fuel consumption and reduces CO2 emissions by up to 5.4%, and (3) extra driver wages have significant effects on routing and departure time slot decisions. PMID:29466370

  5. Joint optimization of green vehicle scheduling and routing problem with time-varying speeds.

    PubMed

    Zhang, Dezhi; Wang, Xin; Li, Shuangyan; Ni, Nan; Zhang, Zhuo

    2018-01-01

    Based on an analysis of the congestion effect and changes in the speed of vehicle flow during morning and evening peaks in a large- or medium-sized city, the piecewise function is used to capture the rules of the time-varying speed of vehicles, which are very important in modelling their fuel consumption and CO2 emission. A joint optimization model of the green vehicle scheduling and routing problem with time-varying speeds is presented in this study. Extra wages during nonworking periods and soft time-window constraints are considered. A heuristic algorithm based on the adaptive large neighborhood search algorithm is also presented. Finally, a numerical simulation example is provided to illustrate the optimization model and its algorithm. Results show that, (1) the shortest route is not necessarily the route that consumes the least energy, (2) the departure time influences the vehicle fuel consumption and CO2 emissions and the optimal departure time saves on fuel consumption and reduces CO2 emissions by up to 5.4%, and (3) extra driver wages have significant effects on routing and departure time slot decisions.
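
    The piecewise time-varying speed idea can be sketched as follows: speed depends on departure time through congestion peaks, and CO2 per kilometre depends on speed, so the same route emits differently at different departure times. Breakpoints, speeds, and the emission curve below are invented.

```python
def speed_kmh(t_hours):
    # Piecewise time-of-day speed profile with congestion peaks
    if 7 <= t_hours < 9:
        return 25.0        # morning peak
    if 17 <= t_hours < 19:
        return 22.0        # evening peak
    return 50.0            # free flow

def co2_g_per_km(v):
    # U-shaped speed-emission curve: congestion and very high speed both cost fuel
    return 2000.0 / v + 0.006 * v ** 2 + 50.0

def trip_co2(dist_km, depart_h):
    t, grams, left = depart_h, 0.0, dist_km
    while left > 0:
        v = speed_kmh(t % 24)
        step = min(left, v / 60.0)         # km covered in one minute
        grams += step * co2_g_per_km(v)
        left -= step
        t += 1.0 / 60.0
    return grams

print(trip_co2(30, 7.5), trip_co2(30, 10.0))   # peak departure emits more
```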

  6. A MODIS direct broadcast algorithm for mapping wildfire burned area in the western United States

    Treesearch

    S. P. Urbanski; J. M. Salmon; B. L. Nordgren; W. M. Hao

    2009-01-01

    Improved wildland fire emission inventory methods are needed to support air quality forecasting and guide the development of air shed management strategies. Air quality forecasting requires dynamic fire emission estimates that are generated in a timely manner to support real-time operations. In the regulatory and planning realm, emission inventories are essential for...

  7. Development and Evaluation of the Biogenic Emissions Inventory System (BEIS) Model v3.6

    EPA Science Inventory

    We have developed new canopy emission algorithms and land use data for BEIS v3.6. Simulations with BEIS v3.4 and BEIS v3.6 in CMAQ v5.0.2 are compared these changes to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated the simulations against observati...

  8. Military, Charter, Unreported Domestic Traffic and General Aviation 1976, 1984, 1992, and 2015 Emission Scenarios

    NASA Technical Reports Server (NTRS)

    Mortlock, Alan; VanAlstyne, Richard

    1998-01-01

    The report describes the development of databases estimating aircraft engine exhaust emissions for the years 1976 and 1984 from global operations of Military, Charter, historic Soviet and Chinese, Unreported Domestic traffic, and General Aviation (GA). These databases were developed under the National Aeronautics and Space Administration's (NASA) Advanced Subsonic Assessment (AST). McDonnell Douglas Corporation (MDC), now part of the Boeing Company, previously estimated engine exhaust emissions databases for the baseline year of 1992 and a 2015 forecast year scenario. Since their original creation (Ward, 1994 and Metwally, 1995), revised technology algorithms have been developed. Additionally, GA databases have been created, and all past MDC emission inventories have been updated to reflect the new technology algorithms. Revised data (Baughcum, 1996 and Baughcum, 1997) for the scheduled inventories have been used in this report to provide a comparison of the total aviation emission forecasts from various components. Global results for two historic years (1976 and 1984), a baseline year (1992), and a forecast year (2015) are presented. Since engine emissions are directly related to fuel usage, an overview of individual aviation annual global fuel use for each inventory component is also given in this report.

  9. A Global Catalogue of Large SO2 Sources and Emissions Derived from the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Fioletov, Vitali E.; McLinden, Chris A.; Krotkov, Nickolay; Li, Can; Joiner, Joanna; Theys, Nicolas; Carn, Simon; Moran, Mike D.

    2016-01-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor processed with the new principal component analysis (PCA) algorithm were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources, releasing from about 30 kt yr^-1 to more than 4000 kt yr^-1 of SO2, have been identified and grouped by country and by primary source origin: volcanoes (76 sources); power plants (297); smelters (53); and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005-2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30% of total SO2 emissions estimated from OMI measurements, but that fraction has increased in recent years given that cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry remained nearly constant. Anthropogenic emissions from the USA declined by 80% over the 2005-2014 period, as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr^-1 and not detected by OMI.

  10. On the degree conjecture for separability of multipartite quantum states

    NASA Astrophysics Data System (ADS)

    Hassan, Ali Saif M.; Joag, Pramod S.

    2008-01-01

    We settle the so-called degree conjecture for the separability of multipartite quantum states, which are normalized graph Laplacians, first given by Braunstein et al. [Phys. Rev. A 73, 012320 (2006)]. The conjecture states that a multipartite quantum state is separable if and only if the degree matrix of the graph associated with the state is equal to the degree matrix of the partial transpose of this graph. We call this statement the strong form of the conjecture. In its weak version, the conjecture requires only the necessity, that is, if the state is separable, the corresponding degree matrices match. We prove the strong form of the conjecture for pure multipartite quantum states using the modified tensor product of graphs defined by Hassan and Joag [J. Phys. A 40, 10251 (2007)], as both a necessary and sufficient condition for separability. Based on this proof, we give a polynomial-time algorithm for completely factorizing any pure multipartite quantum state. By polynomial-time algorithm, we mean that the execution time of this algorithm increases as a polynomial in m, where m is the number of parts of the quantum system. We give a counterexample to show that the conjecture fails, in general, even in its weak form, for multipartite mixed states. Finally, we prove this conjecture, in its weak form, for a class of multipartite mixed states, giving only a necessary condition for separability.

  11. Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains.

    PubMed

    Reich, W; Scheuermann, G

    2012-12-01

    Existing methods for analyzing the separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.

  12. MOLECULAR CLOUDS AND CLUMPS IN THE BOSTON UNIVERSITY-FIVE COLLEGE RADIO ASTRONOMY OBSERVATORY GALACTIC RING SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rathborne, J. M.; Johnson, A. M.; Jackson, J. M.

    2009-05-15

    The Boston University-Five College Radio Astronomy Observatory (BU-FCRAO) Galactic Ring Survey (GRS) of ^13CO J = 1→0 emission covers Galactic longitudes 18° < l < 55.7° and Galactic latitudes |b| ≤ 1°. Using the SEQUOIA array on the FCRAO 14 m telescope, the GRS fully sampled the ^13CO Galactic emission (46'' angular resolution on a 22'' grid) and achieved a spectral resolution of 0.21 km s^-1. Because the GRS uses ^13CO, an optically thin tracer, rather than ^12CO, an optically thick tracer, the GRS allows a much better determination of column density and also a cleaner separation of velocity components along a line of sight. With this homogeneous, fully sampled survey of ^13CO emission, we have identified 829 molecular clouds and 6124 clumps throughout the inner Galaxy using the CLUMPFIND algorithm. Here we present details of the catalog and a preliminary analysis of the properties of the molecular clouds and their clumps. Moreover, we compare clouds inside and outside of the 5 kpc ring and find that clouds within the ring typically have warmer temperatures, higher column densities, larger areas, and more clumps compared with clouds located outside the ring. This is expected if these clouds are actively forming stars. This catalog provides a useful tool for the study of molecular clouds and their embedded young stellar objects.

  13. Single-fiber multi-color pyrometry

    DOEpatents

    Small, IV, Ward; Celliers, Peter

    2004-01-27

    This invention is a fiber-based multi-color pyrometry set-up for real-time non-contact temperature and emissivity measurement. The system includes a single optical fiber to collect radiation emitted by a target, a reflective rotating chopper to split the collected radiation into two or more paths while modulating the radiation for lock-in amplification (i.e., phase-sensitive detection), at least two detectors possibly of different spectral bandwidths with or without filters to limit the wavelength regions detected and optics to direct and focus the radiation onto the sensitive areas of the detectors. A computer algorithm is used to calculate the true temperature and emissivity of a target based on blackbody calibrations. The system components are enclosed in a light-tight housing, with provision for the fiber to extend outside to collect the radiation. Radiation emitted by the target is transmitted through the fiber to the reflective chopper, which either allows the radiation to pass straight through or reflects the radiation into one or more separate paths. Each path includes a detector with or without filters and corresponding optics to direct and focus the radiation onto the active area of the detector. The signals are recovered using lock-in amplification. Calibration formulas for the signals obtained using a blackbody of known temperature are used to compute the true temperature and emissivity of the target. The temperature range of the pyrometer system is determined by the spectral characteristics of the optical components.
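
    A minimal sketch of the two-color (ratio) temperature computation is given below, assuming the Wien approximation and a greybody target (equal emissivity in both bands); the patented system instead folds detector response into blackbody calibration constants, and the wavelengths here are illustrative.

```python
import numpy as np

C2 = 1.4388e-2   # second radiation constant, m*K

def ratio_temperature(S1, S2, lam1, lam2):
    # Wien approximation: S_i ~ eps * lam_i**-5 * exp(-C2/(lam_i*T));
    # with equal emissivities the band ratio depends on T alone.
    lnR = np.log(S1 / S2)
    return C2 * (1.0 / lam2 - 1.0 / lam1) / (lnR - 5.0 * np.log(lam2 / lam1))

lam1, lam2, T_true = 0.9e-6, 1.6e-6, 2200.0          # metres, kelvin
signal = lambda lam: lam ** -5 * np.exp(-C2 / (lam * T_true))
print(ratio_temperature(signal(lam1), signal(lam2), lam1, lam2))  # ~2200 K
```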

  14. Single-fiber multi-color pyrometry

    DOEpatents

    Small, IV, Ward; Celliers, Peter

    2000-01-01

    This invention is a fiber-based multi-color pyrometry set-up for real-time non-contact temperature and emissivity measurement. The system includes a single optical fiber to collect radiation emitted by a target, a reflective rotating chopper to split the collected radiation into two or more paths while modulating the radiation for lock-in amplification (i.e., phase-sensitive detection), at least two detectors possibly of different spectral bandwidths with or without filters to limit the wavelength regions detected and optics to direct and focus the radiation onto the sensitive areas of the detectors. A computer algorithm is used to calculate the true temperature and emissivity of a target based on blackbody calibrations. The system components are enclosed in a light-tight housing, with provision for the fiber to extend outside to collect the radiation. Radiation emitted by the target is transmitted through the fiber to the reflective chopper, which either allows the radiation to pass straight through or reflects the radiation into one or more separate paths. Each path includes a detector with or without filters and corresponding optics to direct and focus the radiation onto the active area of the detector. The signals are recovered using lock-in amplification. Calibration formulas for the signals obtained using a blackbody of known temperature are used to compute the true temperature and emissivity of the target. The temperature range of the pyrometer system is determined by the spectral characteristics of the optical components.

  15. 40 CFR 98.143 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Fuel Combustion Sources). (2) Calculate and report the process and combustion CO2 emissions separately... Fuel Combustion Sources) the combustion CO2 emissions in the glass furnace according to the applicable... calculate and report the annual process CO2 emissions from each continuous glass melting furnace using the...

  16. 40 CFR 98.193 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Stationary Fuel Combustion Sources). (2) Calculate and report process and combustion CO2 emissions separately... Stationary Fuel Combustion Sources) the combustion CO2 emissions from each lime kiln according to the... must calculate and report the annual process CO2 emissions from all lime kilns combined using the...

  17. 40 CFR 98.193 - Calculating GHG emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Stationary Fuel Combustion Sources). (2) Calculate and report process and combustion CO2 emissions separately... Stationary Fuel Combustion Sources) the combustion CO2 emissions from each lime kiln according to the... must calculate and report the annual process CO2 emissions from all lime kilns combined using the...

  18. 40 CFR 98.143 - Calculating GHG emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Fuel Combustion Sources). (2) Calculate and report the process and combustion CO2 emissions separately... Fuel Combustion Sources) the combustion CO2 emissions in the glass furnace according to the applicable... calculate and report the annual process CO2 emissions from each continuous glass melting furnace using the...

  19. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    PubMed Central

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220

  20. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    PubMed

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the proposed discrete bat algorithm for the optimal permutation flow shop scheduling problem.
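
    The NEH heuristic invoked for the subscheduling problems can be sketched briefly: order jobs by decreasing total processing time, then insert each job at the position minimizing the partial makespan. The processing-time matrix below is invented.

```python
import numpy as np

def makespan(seq, P):
    # P[job, machine]: standard completion-time recursion for a permutation flow shop
    C = np.zeros(P.shape[1])
    for j in seq:
        for k in range(P.shape[1]):
            C[k] = max(C[k], C[k - 1] if k else 0.0) + P[j, k]
    return C[-1]

def neh(P):
    order = list(np.argsort(-P.sum(axis=1)))   # jobs by total time, descending
    seq = [order[0]]
    for j in order[1:]:                        # try every insertion position
        trials = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
        seq = min(trials, key=lambda s: makespan(s, P))
    return seq

P = np.array([[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]])  # 4 jobs, 3 machines
best = neh(P)
print(best, makespan(best, P))
```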

  1. Compact CFB: The next generation CFB boiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utt, J.

    1996-12-31

    The next generation of compact circulating fluidized bed (CFB) boilers is described in outline form. The following topics are discussed: compact CFB = pyroflow + compact separator; compact CFB; compact separator is a breakthrough design; advantages of CFB; new design with substantial development history; KUHMO: successful demo unit; KUHMO: good performance over load range with low emissions; KOKKOLA: first commercial unit and emissions; compact CFB installations; next generation CFB boiler; grid nozzle upgrades; cast segmented vortex finders; vortex finder installation; ceramic anchors; pre-cast vertical bullnose; refractory upgrades; and wet gunning.

  2. Stationary-phase optimized selectivity liquid chromatography: development of a linear gradient prediction algorithm.

    PubMed

    De Beer, Maarten; Lynen, Fréderic; Chen, Kai; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat

    2010-03-01

    Stationary-phase optimized selectivity liquid chromatography (SOS-LC) is a tool in reversed-phase LC (RP-LC) to optimize the selectivity for a given separation by combining stationary phases in a multisegment column. The presently (commercially) available SOS-LC optimization procedure and algorithm are only applicable to isocratic analyses. Step gradient SOS-LC has been developed, but this is still not very elegant for the analysis of complex mixtures composed of components covering a broad hydrophobicity range. A linear gradient prediction algorithm has been developed allowing one to apply SOS-LC as a generic RP-LC optimization method. The algorithm allows operation in isocratic, stepwise, and linear gradient run modes. The features of SOS-LC in the linear gradient mode are demonstrated by means of a mixture of 13 steroids, whereby baseline separation is predicted and experimentally demonstrated.
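
    Generic linear-gradient retention prediction under the linear solvent strength (LSS) model, log10 k = log10 kw − Sφ, can be sketched by integrating the solute's migration; this illustrates gradient prediction only, not the SOS-LC algorithm itself, which additionally optimizes the combination of stationary-phase segments. Parameters are illustrative.

```python
def gradient_retention(logkw, S, t0=1.0, tG=15.0, phi0=0.05, phi1=0.95, dt=1e-3):
    # Integrate solute migration: local velocity fraction is 1/(1 + k(phi(t)))
    t, x = 0.0, 0.0
    while x < 1.0:
        phi = phi0 + (phi1 - phi0) * min(t / tG, 1.0)   # linear gradient program
        k = 10.0 ** (logkw - S * phi)                   # LSS retention factor
        x += dt / (t0 * (1.0 + k))
        t += dt
    return t                                            # predicted retention time

for logkw, S in [(2.5, 4.0), (3.5, 5.0)]:               # two hypothetical solutes
    print(round(gradient_retention(logkw, S), 2))
```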

  3. Formal Verification of Air Traffic Conflict Prevention Bands Algorithms

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.; Dowek, Gilles

    2010-01-01

    In air traffic management, a pairwise conflict is a predicted loss of separation between two aircraft, referred to as the ownship and the intruder. A conflict prevention bands system computes ranges of maneuvers for the ownship that characterize regions in the airspace that are either conflict-free or 'don't go' zones that the ownship has to avoid. Conflict prevention bands are surprisingly difficult to define and analyze. Errors in the calculation of prevention bands may result in incorrect separation assurance information being displayed to pilots or air traffic controllers. This paper presents provably correct 3-dimensional prevention bands algorithms for ranges of track angle; ground speed, and vertical speed maneuvers. The algorithms have been mechanically verified in the Prototype Verification System (PVS). The verification presented in this paper extends in a non-trivial way that of previously published 2-dimensional algorithms.

  4. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capabilities. SVM can solve classification and regression problems with linear or nonlinear kernels. However, SVM has a weakness: it is difficult to determine optimal parameter values. SVM calculates the best linear separator in the input feature space according to the training data. To classify data which are not linearly separable, SVM uses the kernel trick to transform the data into linearly separable data in a higher-dimensional feature space. The kernel trick uses various kinds of kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters which affect the accuracy of SVM classification. To solve this problem, genetic algorithms are proposed as the search algorithm for optimal parameter values, thus increasing the best classification accuracy of SVM. Data were taken from the UCI repository of machine learning databases: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy. Genetic algorithms have been shown to be effective in systematically finding optimal kernel parameters for SVM, instead of randomly selected kernel parameters. The best accuracy was improved from the kernel baselines of linear: 85.12%, polynomial: 81.76%, RBF: 77.22%, and sigmoid: 78.70%. However, for bigger data sizes, this method is not practical because it takes a lot of time.
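
    A minimal sketch of the idea, a genetic algorithm evolving (log C, log gamma) with cross-validated accuracy as fitness, is shown below on a synthetic dataset standing in for the credit data; population size, operators, and parameter ranges are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=14, random_state=0)

def fitness(ind):
    # individual = (log10 C, log10 gamma); fitness = 3-fold CV accuracy
    return cross_val_score(SVC(C=10.0 ** ind[0], gamma=10.0 ** ind[1]),
                           X, y, cv=3).mean()

pop = rng.uniform([-2, -4], [3, 1], size=(12, 2))     # log10 C and log10 gamma ranges
for _ in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]            # truncation selection
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6, size=2)]
        child = np.where(rng.random(2) < 0.5, a, b)   # uniform crossover
        children.append(child + rng.normal(0, 0.3, 2))  # Gaussian mutation
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("log10(C), log10(gamma):", best, "accuracy:", fitness(best))
```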

  5. SOSS User Guide

    NASA Technical Reports Server (NTRS)

    Zhu, Zhifan; Gridnev, Sergei; Windhorst, Robert D.

    2015-01-01

    This User Guide describes SOSS (Surface Operations Simulator and Scheduler) software build and graphic user interface. SOSS is a desktop application that simulates airport surface operations in fast time using traffic management algorithms. It moves aircraft on the airport surface based on information provided by scheduling algorithm prototypes, monitors separation violation and scheduling conformance, and produces scheduling algorithm performance data.

  6. Attenuation correction in emission tomography using the emission data—A review

    PubMed Central

    Li, Yusheng

    2016-01-01

    The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) had been considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in the last years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. We then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason–Ludwig data consistency conditions of the Radon transform, or generalizations of John’s partial differential equation; with respect to iterative methods, we discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, we consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. These techniques may both remedy deficiencies of purely MRI-based AC approaches in PET/MRI and improve standalone PET imaging. PMID:26843243

  7. Intelligent emission-sensitive routing for plugin hybrid electric vehicles.

    PubMed

    Sun, Zhonghao; Zhou, Xingshe

    2016-01-01

    The existing transportation sector creates heavy environmental impacts and is a prime cause of current climate change. The need to reduce emissions from this sector has stimulated efforts to speed up the adoption of electric vehicles (EVs). A subset of EVs, called plug-in hybrid electric vehicles (PHEVs), back up their batteries with a combustion engine, which gives PHEVs a driving range comparable to conventional vehicles. However, this hybridization comes at the cost of higher emissions than all-electric vehicles. This paper studies the routing problem for PHEVs to minimize emissions. Existing shortest-path-based algorithms cannot be applied to this problem because of several new challenges: (1) an optimal route may contain cycles caused by detours for recharging; (2) emissions of PHEVs depend not only on the driving distance, but also on the terrain and the state of charge (SOC) of the batteries; and (3) batteries can harvest energy by regenerative braking, which gives some road segments negative energy consumption. To address these challenges, this paper proposes a green navigation algorithm (GNA) which finds the optimal strategies: where to go and where to recharge. GNA discretizes the SOC, which makes the PHEV routing problem satisfy the principle of optimality. Finally, GNA adopts dynamic programming to solve the problem. We evaluate GNA using synthetic maps generated by Delaunay triangulation. The results show that GNA can save more than 10% energy and reduce emissions by 10% compared to the shortest-path algorithm. We also observe that PHEVs with a battery capacity of 10-15 kWh detour the most, and almost no detours occur when the capacity exceeds 30 kWh. This observation gives some insights for developing PHEVs.
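
    The SOC-discretization idea can be sketched as a label-setting search over (node, state-of-charge) states, choosing per road segment between electric mode (consumes SOC, no tailpipe emissions) and engine mode (emits CO2); the graph, SOC levels, and costs below are invented and far simpler than GNA's dynamic program.

```python
import heapq

# node -> list of (next_node, soc_units_if_electric, g_co2_if_engine)
edges = {
    'A': [('B', 2, 900), ('C', 1, 500)],
    'B': [('D', 2, 800)],
    'C': [('B', 1, 400), ('D', 3, 1200)],
    'D': [],
}
chargers = {'C'}          # nodes where one SOC unit can be added (cost-free here)

def min_emissions(start, goal, soc0, soc_max=4):
    pq = [(0, start, soc0)]               # (grams CO2 so far, node, SOC units)
    settled = set()
    while pq:
        e, node, soc = heapq.heappop(pq)
        if (node, soc) in settled:
            continue
        settled.add((node, soc))
        if node == goal:
            return e
        if node in chargers and soc < soc_max:
            heapq.heappush(pq, (e, node, soc + 1))            # recharge one SOC unit
        for nxt, need, engine_g in edges[node]:
            if soc >= need:
                heapq.heappush(pq, (e, nxt, soc - need))      # electric mode
            heapq.heappush(pq, (e + engine_g, nxt, soc))      # engine mode
    return None

print(min_emissions('A', 'D', soc0=2))    # detours via the charger to stay electric
```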

  8. Attenuation correction in emission tomography using the emission data—A review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berker, Yannick, E-mail: berker@mail.med.upenn.edu; Li, Yusheng

    2016-02-15

    The problem of attenuation correction (AC) for quantitative positron emission tomography (PET) had been considered solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT) in 2001; single photon emission computed tomography (SPECT) has seen a similar development. However, stimulated in particular by technical advances toward clinical systems combining PET and magnetic resonance imaging (MRI), research interest in alternative approaches for PET AC has grown substantially in the last years. In this comprehensive literature review, the authors first present theoretical results with relevance to simultaneous reconstruction of attenuation and activity. The authors then look back at the early history of this research area especially in PET; since this history is closely interwoven with that of similar approaches in SPECT, these will also be covered. We then review algorithmic advances in PET, including analytic and iterative algorithms. The analytic approaches are either based on the Helgason–Ludwig data consistency conditions of the Radon transform, or generalizations of John’s partial differential equation; with respect to iterative methods, we discuss maximum likelihood reconstruction of attenuation and activity (MLAA), the maximum likelihood attenuation correction factors (MLACF) algorithm, and their offspring. The description of methods is followed by a structured account of applications for simultaneous reconstruction techniques: this discussion covers organ-specific applications, applications specific to PET/MRI, applications using supplemental transmission information, and motion-aware applications. After briefly summarizing SPECT applications, we consider recent developments using emission data other than unscattered photons. In summary, developments using time-of-flight (TOF) PET emission data for AC have shown promising advances and open a wide range of applications. These techniques may both remedy deficiencies of purely MRI-based AC approaches in PET/MRI and improve standalone PET imaging.

  9. Peak picking and the assessment of separation performance in two-dimensional high performance liquid chromatography.

    PubMed

    Stevenson, Paul G; Mnatsakanyan, Mariam; Guiochon, Georges; Shalliker, R Andrew

    2010-07-01

    An algorithm was developed for 2DHPLC that automated the process of peak recognition, measuring retention times, and subsequently plotting the information in a two-dimensional retention plane. Following peak recognition, the software performed a series of statistical assessments of the separation performance, measuring, for example, the correlation between dimensions, the peak capacity, and the percentage of usage of the separation space. Peak recognition was achieved by interpreting the first and second derivatives of each respective one-dimensional chromatogram to determine the 1D retention times of each solute and then compiling these retention times for each respective fraction 'cut'. Due to the nature of comprehensive 2DHPLC, adjacent cut fractions may contain peaks common to more than one cut fraction. The algorithm determined which components were common in adjacent cuts and subsequently calculated the peak maximum profile by interpolating the space between adjacent peaks. This algorithm was applied to the analysis of a two-dimensional separation of an apple flesh extract with a first dimension comprising a cyano stationary phase and an aqueous/THF mobile phase, and a second dimension comprising C18-Hydro with an aqueous/MeOH mobile phase. A total of 187 peaks were detected.
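
    The derivative-based peak recognition step can be sketched for a single one-dimensional chromatogram: smooth the trace, then flag points where the first derivative crosses zero from positive to negative while the second derivative is negative. The chromatogram below is synthetic and the threshold is arbitrary.

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 10, 2000)
rng = np.random.default_rng(0)
trace = (np.exp(-(t - 3.0) ** 2 / 0.02) + 0.6 * np.exp(-(t - 6.5) ** 2 / 0.05)
         + 0.01 * rng.normal(size=t.size))

smooth = savgol_filter(trace, window_length=31, polyorder=3)
d1 = np.gradient(smooth, t)
d2 = np.gradient(d1, t)
hits = np.where((d1[:-1] > 0) & (d1[1:] <= 0)      # first derivative crosses zero
                & (d2[:-1] < 0)                    # negative curvature: a maximum
                & (smooth[:-1] > 0.05))[0]         # crude noise threshold
print(t[hits])                                     # retention times near 3.0 and 6.5
```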

  10. CHP Energy and Emissions Savings Calculator

    EPA Pesticide Factsheets

    Download the CHP Emissions Calculator, a tool that calculates the difference between the anticipated carbon dioxide, methane, nitrous oxide, sulfur dioxide, and nitrogen oxide emissions of a CHP system and those of a separate heat and power system.

  11. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.

    PubMed

    Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas

    2009-03-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.

  12. The Spitzer-IRAC/MIPS Extragalactic Survey (SIMES). II. Enhanced Nuclear Accretion Rate in Galaxy Groups at z ∼ 0.2

    NASA Astrophysics Data System (ADS)

    Baronchelli, I.; Rodighiero, G.; Teplitz, H. I.; Scarlata, C. M.; Franceschini, A.; Berta, S.; Barrufet, L.; Vaccari, M.; Bonato, M.; Ciesla, L.; Zanella, A.; Carraro, R.; Mancini, C.; Puglisi, A.; Malkan, M.; Mei, S.; Marchetti, L.; Colbert, J.; Sedgwick, C.; Serjeant, S.; Pearson, C.; Radovich, M.; Grado, A.; Limatola, L.; Covone, G.

    2018-04-01

    For a sample of star-forming galaxies in the redshift interval 0.15 < z < 0.3, we study how both the relative strength of the active galactic nucleus (AGN) infrared emission, compared to that due to the star formation (SF), and the numerical fraction of AGNs change as a function of the total stellar mass of the hosting galaxy group (M*_group) between 10^10.25 and 10^11.9 M_⊙. Using a multicomponent spectral energy distribution (SED) fitting analysis, we separate the contribution of stars, AGN torus, and star formation to the total emission at different wavelengths. This technique is applied to a new multiwavelength data set in the SIMES field (23 non-redundant photometric bands), spanning the wavelength range from the UV (GALEX) to the far-IR (Herschel) and including crucial AKARI and WISE mid-IR observations (4.5 μm < λ < 24 μm), where the black hole thermal emission is stronger. This new photometric catalog, which includes our best photo-z estimates, is released through the NASA/IPAC Infrared Science Archive (IRSA). Groups are identified through a friends-of-friends algorithm (~62% purity, ~51% completeness). We identified a total of 45 galaxies requiring an AGN emission component, 35 of which are in groups and 10 in the field. We find the black hole accretion rate BHAR ∝ (M*_group)^(1.21±0.27) and (BHAR/SFR) ∝ (M*_group)^(1.04±0.24), while, in the same range of M*_group, we do not observe any sensible change in the numerical fraction of AGNs. Our results indicate that the nuclear activity (i.e., the BHAR and the BHAR/SFR ratio) is enhanced when galaxies are located in more massive and richer groups.

  13. Multiple Emission Angle Surface-Atmosphere Separations of MGS Thermal Emission Spectrometer Data

    NASA Technical Reports Server (NTRS)

    Bandfield, J. L.; Smith, M. D.

    2001-01-01

    Multiple emission angle observations taken by MGS-TES have been used to derive atmospheric opacities and surface temperatures and emissivities with increased accuracy and wavelength coverage. Martian high albedo region surface spectra have now been isolated. Additional information is contained in the original extended abstract.

  14. [Cluster analysis in biomedical researches].

    PubMed

    Akopov, A S; Moskovtsev, A A; Dolenko, S A; Savina, G D

    2013-01-01

    Cluster analysis is one of the most popular methods for the analysis of multi-parameter data. Cluster analysis reveals the internal structure of the data, grouping separate observations by their degree of similarity. The review provides definitions of the basic concepts of cluster analysis and discusses the most popular clustering algorithms: k-means, hierarchical algorithms, and Kohonen network algorithms. Examples are given of the use of these algorithms in biomedical research.
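
    For concreteness, here is a minimal Lloyd's k-means sketch on invented two-dimensional data, showing the alternating assignment and centroid-update steps the review describes.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),   # two synthetic groups
               rng.normal(3.0, 0.5, (50, 2))])

k = 2
centers = X[[0, 50]].copy()                     # init with one point from each region
for _ in range(20):
    labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)     # assign step
    centers = np.array([X[labels == j].mean(0) for j in range(k)])  # update step
print(centers)   # near (0, 0) and (3, 3)
```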

  15. A fire model with distinct crop, pasture, and non-agricultural burning: use of new data and a model-fitting algorithm for FINAL.1

    NASA Astrophysics Data System (ADS)

    Rabin, Sam S.; Ward, Daniel S.; Malyshev, Sergey L.; Magi, Brian I.; Shevliakova, Elena; Pacala, Stephen W.

    2018-03-01

    This study describes and evaluates the Fire Including Natural & Agricultural Lands model (FINAL) which, for the first time, explicitly simulates cropland and pasture management fires separately from non-agricultural fires. The non-agricultural fire module uses empirical relationships to simulate burned area in a quasi-mechanistic framework, similar to past fire modeling efforts, but with a novel optimization method that improves the fidelity of simulated fire patterns to new observational estimates of non-agricultural burning. The agricultural fire components are forced with estimates of cropland and pasture fire seasonality and frequency derived from observational land cover and satellite fire datasets. FINAL accurately simulates the amount, distribution, and seasonal timing of burned cropland and pasture over 2001-2009 (global totals: 0.434×10^6 and 2.02×10^6 km^2 yr^-1 modeled, 0.454×10^6 and 2.04×10^6 km^2 yr^-1 observed), but carbon emissions for cropland and pasture fire are overestimated (global totals: 0.295 and 0.706 PgC yr^-1 modeled, 0.194 and 0.538 PgC yr^-1 observed). The non-agricultural fire module underestimates global burned area (1.91×10^6 km^2 yr^-1 modeled, 2.44×10^6 km^2 yr^-1 observed) and carbon emissions (1.14 PgC yr^-1 modeled, 1.84 PgC yr^-1 observed). The spatial pattern of total burned area and carbon emissions is generally well reproduced across much of sub-Saharan Africa, Brazil, Central Asia, and Australia, whereas the boreal zone sees underestimates. FINAL represents an important step in the development of global fire models, and offers a strategy for fire models to consider human-driven fire regimes on cultivated lands. At the regional scale, simulations would benefit from refinements in the parameterizations and improved optimization datasets. We include an in-depth discussion of the lessons learned from using the Levenberg-Marquardt algorithm in an interactive optimization for a dynamic global vegetation model.

  16. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE PAGES

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-27

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. Finally, the algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  17. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders-of-magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to out-perform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  18. Blue upconversion in Yb3+/Tm3+ co-doped silica fiber based on glass phase-separation technology

    NASA Astrophysics Data System (ADS)

    Yang, Yu; Chu, Yingbo; Chen, Zhangru; Xing, Yingbin; Hu, Xionwei; Li, Haiqing; Peng, Jinggang; Dai, Nengli; Li, Jinyan; Yang, Luyun

    2018-02-01

    Yb3+/Tm3+ co-doped silica fiber was prepared successfully by glass phase-separation technology. The measured refractive index profile indicated that the active fiber core had excellent uniformity. The highest emission intensity was obtained in a sample with a Yb3+ concentration of 0.3 mol/L and a Tm3+ concentration of 0.1 mol/L. Under excitation at 976 nm, intense blue upconversion emission of Tm3+ at 474 nm was observed due to energy transfer from Yb3+ to Tm3+. A three-photon process was responsible for the blue emission. Due to re-absorption resulting from the Tm3+:3H6→1G4 transition, the blue emission peak was red-shifted. It is suggested that fiber preparation based on glass phase-separation technology is a promising candidate for preparing active fibers with large cores or complex fiber structures.

  19. Very long baseline interferometric observations of the hydroxyl masers in VY Canis Majoris

    NASA Technical Reports Server (NTRS)

    Reid, M. J.; Muhleman, D. O.

    1978-01-01

    Results are presented for spectral-line VLBI observations of the OH emission from VY CMa. The main-line (1665 and 1667 MHz) emission was mapped with an angular resolution of 0.02 arcsec by analyzing interferometer phase data. The main-line emission comes from many maser components of apparent size less than 0.03 arcsec which are separated by up to 0.5 arcsec. New maser features near the center of the OH spectra were detected and found to lie within the region encompassed by the low-velocity OH emission. The 1612-MHz emission was mapped by Fourier inversion of the VLBI data from two baselines. All spatially isolated maser components appeared smaller than 0.15 arcsec; however, the maser emission is very complex at most velocities. Maser components within a velocity range of 1.3 km/s are often separated by more than 1 arcsec, while components more than 10 km/s apart in each emission complex are often coincident to 0.2 arcsec.

  20. Automated detection of extended sources in radio maps: progress from the SCORPIO survey

    NASA Astrophysics Data System (ADS)

    Riggi, S.; Ingallinera, A.; Leto, P.; Cavallaro, F.; Bufano, F.; Schillirò, F.; Trigilio, C.; Umana, G.; Buemi, C. S.; Norris, R. P.

    2016-08-01

    Automated source extraction and parametrization represents a crucial challenge for the next-generation radio interferometer surveys, such as those performed with the Square Kilometre Array (SKA) and its precursors. In this paper, we present a new algorithm, called CAESAR (Compact And Extended Source Automated Recognition), to detect and parametrize extended sources in radio interferometric maps. It is based on a pre-filtering stage, allowing image denoising, compact source suppression and enhancement of diffuse emission, followed by an adaptive superpixel clustering stage for final source segmentation. A parametrization stage provides source flux information and a wide range of morphology estimators for post-processing analysis. We developed CAESAR in a modular software library, also including different methods for local background estimation and image filtering, along with alternative algorithms for both compact and diffuse source extraction. The method was applied to real radio continuum data collected at the Australian Telescope Compact Array (ATCA) within the SCORPIO project, a pathfinder of the Evolutionary Map of the Universe (EMU) survey at the Australian Square Kilometre Array Pathfinder (ASKAP). The source reconstruction capabilities were studied over different test fields in the presence of compact sources, imaging artefacts and diffuse emission from the Galactic plane and compared with existing algorithms. When compared to a human-driven analysis, the designed algorithm was found capable of detecting known target sources and regions of diffuse emission, outperforming alternative approaches over the considered fields.

  1. Monitoring Oilfield Operations and GHG Emissions Sources Using Object-based Image Analysis of High Resolution Spatial Imagery

    NASA Astrophysics Data System (ADS)

    Englander, J. G.; Brodrick, P. G.; Brandt, A. R.

    2015-12-01

    Fugitive emissions from oil and gas extraction have become a greater concern with the recent increases in development of shale hydrocarbon resources. There are significant gaps in the tools and research used to estimate fugitive emissions from oil and gas extraction. Two approaches exist for quantifying these emissions: atmospheric (or 'top down') studies, which measure methane fluxes remotely, and inventory-based ('bottom up') studies, which aggregate leakage rates on an equipment-specific basis. Bottom-up studies require counting or estimating how many devices might be leaking (called an 'activity count'), as well as how much each device might leak on average (an 'emissions factor'). In a real-world inventory, there is uncertainty in both activity counts and emissions factors. Even at the well level there are significant disagreements in data reporting. For example, some prior studies noted a ~5x difference in the number of reported well completions in the United States between EPA and private data sources. The purpose of this work is to address activity count uncertainty by using machine learning algorithms to classify oilfield surface facilities in high-resolution spatial imagery. This method can help estimate venting and fugitive emissions sources in regions where reporting of oilfield equipment is incomplete or non-existent. This work utilizes high-resolution satellite imagery to count well pads in the Bakken oil field of North Dakota. This initial study examines an area of ~2,000 km2 with ~1000 well pads. We compare different machine learning classification techniques, and explore the impact of training set size, input variables, and image segmentation settings to develop efficient and robust techniques for identifying well pads. We discuss the tradeoffs inherent to different classification algorithms, and determine the optimal algorithms for oilfield feature detection. In the future, the results of this work will be leveraged to provide activity counts of oilfield surface equipment, including tanks, pumpjacks, and holding ponds.

  2. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure-delay mixtures of source signals, typically encountered in outdoor environments, are considered. Our proposed approach utilizes subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at the frequencies with the largest sums of squared amplitudes are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short-time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their own special properties: they are nonstationary, naturally broadband and analog. All of these make separation and localization more challenging for audio signals. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions are discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
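
    A minimal narrowband MUSIC sketch, assuming a uniform linear array, is shown below; in the broadband audio setting described above, pseudospectra from the strongest frequency bins would be combined. The array geometry and scan grid are illustrative assumptions, not the paper's configuration.

        import numpy as np

        def music_doa(X, n_sources, mic_pos, freq, c=343.0):
            """X: (n_mics, n_snapshots) STFT data at one frequency bin; mic_pos in meters."""
            R = X @ X.conj().T / X.shape[1]            # spatial covariance matrix
            _, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
            En = eigvecs[:, :-n_sources]               # noise subspace
            k = 2 * np.pi * freq / c
            grid = np.linspace(-90.0, 90.0, 361)
            p = []
            for theta in grid:
                a = np.exp(1j * k * mic_pos * np.sin(np.deg2rad(theta)))  # steering vector
                p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))  # MUSIC pseudospectrum
            return grid, np.asarray(p)                 # peaks indicate the DOAs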

  3. Global Warming: Predicting OPEC Carbon Dioxide Emissions from Petroleum Consumption Using Neural Network and Hybrid Cuckoo Search Algorithm

    PubMed Central

    Chiroma, Haruna; Abdul-kareem, Sameem; Khan, Abdullah; Nawi, Nazri Mohd.; Gital, Abdulsalam Ya’u; Shuib, Liyana; Abubakar, Adamu I.; Rahman, Muhammad Zubair; Herawan, Tutut

    2015-01-01

    Background: Global warming is attracting attention from policy makers due to its impacts such as floods, extreme weather, increases in temperature by 0.7°C, heat waves, storms, etc. These disasters result in loss of human life and billions of dollars in property. Global warming is believed to be caused by the emissions of greenhouse gases due to human activities including the emissions of carbon dioxide (CO2) from petroleum consumption. Limitations of the previous methods of predicting CO2 emissions and lack of work on the prediction of the Organization of the Petroleum Exporting Countries (OPEC) CO2 emissions from petroleum consumption have motivated this research. Methods/Findings: The OPEC CO2 emissions data were collected from the Energy Information Administration. Artificial Neural Network (ANN) adaptability and performance motivated its choice for this study. To improve effectiveness of the ANN, the cuckoo search algorithm was hybridised with accelerated particle swarm optimisation for training the ANN to build a model for the prediction of OPEC CO2 emissions. The proposed model predicts OPEC CO2 emissions for 3, 6, 9, 12 and 16 years with an improved accuracy and speed over the state-of-the-art methods. Conclusion: An accurate prediction of OPEC CO2 emissions can serve as a reference point for propagating the reorganisation of economic development in OPEC member countries with the view of reducing CO2 emissions to Kyoto benchmarks—hence, reducing global warming. The policy implications are discussed in the paper. PMID:26305483

  4. Mixture model based joint-MAP reconstruction of attenuation and activity maps in TOF-PET

    NASA Astrophysics Data System (ADS)

    Hemmati, H.; Kamali-Asl, A.; Ghafarian, P.; Ay, M. R.

    2018-06-01

    A challenge in obtaining quantitative positron emission tomography (PET) images is providing an accurate, patient-specific photon attenuation correction. In PET/MR scanners, the nature of MR signals and hardware limitations make extraction of the attenuation map a real challenge. Except for a constant factor, the activity and attenuation maps can be determined from emission data on a TOF-PET system by the maximum likelihood reconstruction of attenuation and activity (MLAA) approach. The aim of the present study is to constrain the joint estimation of activity and attenuation for PET systems using a mixture-model prior based on the attenuation map histogram. This novel prior enforces non-negativity, and its hyperparameters can be estimated with a mixture decomposition step from the current estimate of the attenuation map. The proposed method can also help solve the scaling problem and can assign predefined regional attenuation coefficients, with some degree of confidence, to the attenuation map, similar to segmentation-based attenuation correction approaches. The performance of the algorithm is studied with numerical and Monte Carlo simulations and a phantom experiment and is compared with the MLAA algorithm with and without a smoothing prior. The results demonstrate that the proposed algorithm produces cross-talk-free activity and attenuation images from emission data. The proposed approach has the potential to be a practical and competitive method for joint reconstruction of activity and attenuation maps from emission data on PET/MR and can be integrated with other methods.

  5. 75 FR 80219 - National Emission Standards for Shipbuilding and Ship Repair (Surface Coating); National Emission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-21

    ...This action proposes how EPA will address the residual risk and technology review conducted for two industrial source categories regulated by separate national emission standards for hazardous air pollutants. It also proposes to address provisions related to emissions during periods of startup, shutdown, and malfunction.

  6. Parallel Implementation of the Wideband DOA Algorithm on the IBM Cell BE Processor

    DTIC Science & Technology

    2010-05-01

    Abstract—The Multiple Signal Classification (MUSIC) algorithm is a powerful technique for determining the Direction of Arrival (DOA) of signals... Broadband Engine Processor (Cell BE). The process of adapting the serial-based MUSIC algorithm to the Cell BE will be analyzed in terms of parallelism and... using the Multiple Signal Classification (MUSIC) algorithm [4] • Computation of focus matrix • Computation of number of sources • Separation of signal

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guiochon, Georges A; Shalliker, R. Andrew

    An algorithm was developed for 2DHPLC that automated peak recognition, measurement of retention times, and plotting of the information in a two-dimensional retention plane. Following peak recognition, the software performed a series of statistical assessments of the separation performance, measuring, for example, the correlation between dimensions, the peak capacity, and the percentage of the separation space used. Peak recognition was achieved by interpreting the first and second derivatives of each one-dimensional chromatogram to determine the 1D retention times of each solute and then compiling these retention times for each fraction 'cut'. Due to the nature of comprehensive 2DHPLC, adjacent cut fractions may contain peaks common to more than one cut fraction. The algorithm determined which components were common to adjacent cuts and subsequently calculated the peak maximum profile by interpolating the space between adjacent peaks. This algorithm was applied to the analysis of a two-dimensional separation of an apple flesh extract, with a first dimension comprising a cyano stationary phase and an aqueous/THF mobile phase and a second dimension comprising C18-Hydro with an aqueous/MeOH mobile phase. A total of 187 peaks were detected.
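
    The derivative-based peak picking described here can be sketched in a few lines of Python; the Savitzky-Golay window, polynomial order and height cutoff below are assumed values, not the paper's settings.

        import numpy as np
        from scipy.signal import savgol_filter

        def detect_peaks(y, window=11, poly=3, min_height=0.01):
            """Peaks: first derivative crosses zero (+ to -) where the second derivative is negative."""
            d1 = savgol_filter(y, window, poly, deriv=1)
            d2 = savgol_filter(y, window, poly, deriv=2)
            zero_cross = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0]
            return [i for i in zero_cross if d2[i] < 0 and y[i] > min_height]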

  8. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm.

    PubMed

    Cui, Lizhi; Poon, Josiah; Poon, Simon K; Chen, Hao; Gao, Junbin; Kwan, Paul; Fan, Kei; Ling, Zhihao

    2014-01-01

    The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detection (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most of the methods used for separating a 3D chromatogram need to know the number of compounds in advance, which can be impossible, especially when the compounds are complex or white noise exists. A new method that extracts compounds directly from the 3D chromatogram is needed. In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) is proposed to transform the separation problem into a multi-parameter optimization issue, where it is not necessary to know the number of compounds. In order to find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) is proposed, where multiple areas of candidate solutions are constructed according to the fitness and the distances among the chromosomes. Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. The simulations show that our method can successfully separate a 3D chromatogram into chromatographic peaks and spectra, even when they severely overlap. The experiments also show that our method is effective on a real HPLC-DAD data set. Our method can separate 3D chromatograms successfully without knowing the number of compounds in advance, and it is fast and effective.

  9. On the degree conjecture for separability of multipartite quantum states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hassan, Ali Saif M.; Joag, Pramod S.

    2008-01-15

    We settle the so-called degree conjecture for the separability of multipartite quantum states, which are normalized graph Laplacians, first given by Braunstein et al. [Phys. Rev. A 73, 012320 (2006)]. The conjecture states that a multipartite quantum state is separable if and only if the degree matrix of the graph associated with the state is equal to the degree matrix of the partial transpose of this graph. We call this statement the strong form of the conjecture. In its weak version, the conjecture requires only the necessity, that is, if the state is separable, the corresponding degree matrices match. We prove the strong form of the conjecture for pure multipartite quantum states using the modified tensor product of graphs defined by Hassan and Joag [J. Phys. A 40, 10251 (2007)] as both a necessary and sufficient condition for separability. Based on this proof, we give a polynomial-time algorithm for completely factorizing any pure multipartite quantum state. By polynomial-time algorithm, we mean that the execution time of this algorithm increases as a polynomial in m, where m is the number of parts of the quantum system. We give a counterexample to show that the conjecture fails, in general, even in its weak form, for multipartite mixed states. Finally, we prove this conjecture, in its weak form, for a class of multipartite mixed states, giving only a necessary condition for separability.
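
    The weak-form degree test is easy to state computationally. The sketch below, a simplified illustration for a bipartite m x n system, builds the partial transpose of the graph's adjacency matrix and compares vertex degrees; matching degrees are necessary (but, per the counterexample above, not sufficient) for separability.

        import numpy as np

        def degree_condition(adj, m, n):
            """adj: (m*n, m*n) symmetric 0/1 adjacency matrix, vertices indexed by (i, j)."""
            A = adj.reshape(m, n, m, n)
            A_pt = A.transpose(0, 3, 2, 1).reshape(m * n, m * n)    # partial transpose on the 2nd factor
            return np.allclose(adj.sum(axis=1), A_pt.sum(axis=1))   # compare degree matrices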

  10. Modification of Gaussian mixture models for data classification in high energy physics

    NASA Astrophysics Data System (ADS)

    Štěpánek, Michal; Franc, Jiří; Kůs, Václav

    2015-01-01

    In high energy physics, we deal with the demanding task of signal separation from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing data processing, and overtraining avoidance will be discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals will be mentioned. Data pre-processing plays a significant role for the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of the top quark separation from the Tevatron collider will be compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II data set (9.7 fb-1).
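
    In outline, the training and testing phases amount to fitting one Gaussian mixture per class with EM and then applying Bayes' rule; a minimal scikit-learn sketch follows (component counts and priors are placeholders, and the weighting and genetic initialization described above are omitted).

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def train(signal_X, background_X, n_components=3):
            """EM fit of one mixture per class (Model Based Clustering, simplified)."""
            gm_s = GaussianMixture(n_components=n_components).fit(signal_X)
            gm_b = GaussianMixture(n_components=n_components).fit(background_X)
            return gm_s, gm_b

        def p_signal(X, gm_s, gm_b, prior_s=0.5):
            """Bayes' rule on the two fitted class likelihoods."""
            ls = np.exp(gm_s.score_samples(X))   # p(x | signal)
            lb = np.exp(gm_b.score_samples(X))   # p(x | background)
            return prior_s * ls / (prior_s * ls + (1.0 - prior_s) * lb)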

  11. Superfast algorithms of multidimensional discrete k-wave transforms and Volterra filtering based on superfast radon transform

    NASA Astrophysics Data System (ADS)

    Labunets, Valeri G.; Labunets-Rundblad, Ekaterina V.; Astola, Jaakko T.

    2001-12-01

    Fast algorithms for a wide class of non-separable n-dimensional (nD) discrete unitary K-transforms (DKT) are introduced. They need fewer 1D DKTs than the classical radix-2 FFT-type approach. The method utilizes a decomposition of the nD K-transform into the product of a new nD discrete Radon transform and a set of parallel/independent 1D K-transforms. If the nD K-transform has a separable kernel (e.g., the case of the discrete Fourier transform), our approach decreases the multiplicative complexity by a factor of n compared to the classical row/column separable approach. It is well known that an n-th order Volterra filter of a one-dimensional signal can be evaluated by an appropriate nD linear convolution. This work describes a new superfast algorithm for Volterra filtering. The new approach is based on the superfast discrete Radon and Nussbaumer polynomial transforms.

  12. Tracking the Martian CO2 Polar Ice Caps in Infrared Images

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; Castano, Rebecca; Chien, Steve

    2006-01-01

    Researchers at NASA's Jet Propulsion Laboratory have developed a method for automatically tracking the polar caps on Mars as they advance and recede each year. The seasonal Mars polar caps are composed mainly of CO2 ice and are therefore cold enough to stand out clearly in infrared data collected by the Thermal Emission Imaging System (THEMIS) onboard the Mars Odyssey spacecraft. The Bimodal Image Temperature (BIT) histogram analysis algorithm analyzes raw, uncalibrated data to identify images that contain both "cold" ("polar cap") and "warm" ("not polar cap") pixels. The algorithm dynamically identifies the temperature that separates these two regions. This flexibility is critical, because in the absence of any calibration, the threshold temperature can vary significantly from image to image. Using the identified threshold, the algorithm classifies each pixel in the image as "polar cap" or "not polar cap," then identifies the image row that contains the spatial transition from "polar cap" to "not polar cap." While this method is useful for analyzing data that has already been returned by THEMIS, it has even more significance with respect to data that has not yet been collected. Instead of seeking the polar cap only in specific, targeted images, the simplicity and efficiency of this method makes it feasible for direct, onboard use. That is, THEMIS could continuously monitor its observations for any detections of the polar-cap edge, producing detections over a wide range of spatial and temporal conditions. This effort can greatly contribute to our understanding of long-term climatic change on Mars.
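
    The dynamic threshold selection can be approximated with a standard bimodal-histogram split such as Otsu's criterion; the sketch below is a stand-in for the BIT algorithm's threshold step, whose exact rule is not given here.

        import numpy as np

        def bimodal_threshold(pixels, nbins=256):
            """Pick the split that maximizes between-class variance (Otsu-style)."""
            hist, edges = np.histogram(pixels, bins=nbins)
            centers = 0.5 * (edges[:-1] + edges[1:])
            w = hist / hist.sum()
            best_t, best_score = centers[0], -np.inf
            for i in range(1, nbins):
                w0, w1 = w[:i].sum(), w[i:].sum()
                if w0 == 0.0 or w1 == 0.0:
                    continue
                mu0 = (w[:i] * centers[:i]).sum() / w0
                mu1 = (w[i:] * centers[i:]).sum() / w1
                score = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
                if score > best_score:
                    best_t, best_score = centers[i], score
            return best_t   # "polar cap" pixels fall below, "not polar cap" above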

  13. Measurement of greenhouse gas emissions from agricultural sites using open-path optical remote sensing method.

    PubMed

    Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick

    2009-08-01

    Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method, recently used by the USEPA, employs a vertical radial plume mapping (VRPM) algorithm using optical remote sensing techniques. We evaluated this method for estimating emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of 3 ground-level PICs and 2 above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated methane emission rates from the PIC and meteorological data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with the actual release rates irrespective of atmospheric stability conditions. The maximum error was 22% when 3-cycle moving average PICs were used; it decreased to 11% when 10-cycle moving average PICs were used. Our validation results suggest that this VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.

  14. Multi-band morpho-Spectral Component Analysis Deblending Tool (MuSCADeT): Deblending colourful objects

    NASA Astrophysics Data System (ADS)

    Joseph, R.; Courbin, F.; Starck, J.-L.

    2016-05-01

    We introduce a new algorithm for colour separation and deblending of multi-band astronomical images called MuSCADeT, based on morpho-spectral component analysis of multi-band images. The MuSCADeT algorithm takes advantage of the sparsity of astronomical objects in morphological dictionaries such as wavelets and of their differences in spectral energy distribution (SED) across multi-band observations. This allows us to devise a model-independent and automated approach to separate objects with different colours. We show with simulations that we are able to separate highly blended objects and that our algorithm is robust against SED variations of objects across the field of view. To confront our algorithm with real data, we use HST images of the strong lensing galaxy cluster MACS J1149+2223 and show that MuSCADeT performs better than traditional profile-fitting techniques in deblending the foreground lensing galaxies from background lensed galaxies. Although the main driver for our work is the deblending of strong gravitational lenses, our method is suited to any application involving the deblending of objects in astronomical images. An example of such an application is the separation of the red and blue stellar populations of a spiral galaxy in the galaxy cluster Abell 2744. We provide a python package along with all simulations and routines used in this paper to contribute to reproducible research efforts. Codes can be found at http://lastro.epfl.ch/page-126973.html

  15. 40 CFR 60.692-3 - Standards: Oil-water separators.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Standards: Oil-water separators. 60.692... Emissions From Petroleum Refinery Wastewater Systems § 60.692-3 Standards: Oil-water separators. (a) Each oil-water separator tank, slop oil tank, storage vessel, or other auxiliary equipment subject to the...

  16. 40 CFR 60.692-3 - Standards: Oil-water separators.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Standards: Oil-water separators. 60.692... Emissions From Petroleum Refinery Wastewater Systems § 60.692-3 Standards: Oil-water separators. (a) Each oil-water separator tank, slop oil tank, storage vessel, or other auxiliary equipment subject to the...

  17. 40 CFR 60.692-3 - Standards: Oil-water separators.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Standards: Oil-water separators. 60.692... Emissions From Petroleum Refinery Wastewater Systems § 60.692-3 Standards: Oil-water separators. (a) Each oil-water separator tank, slop oil tank, storage vessel, or other auxiliary equipment subject to the...

  18. Particle identification algorithms for the PANDA Endcap Disc DIRC

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Ali, A.; Belias, A.; Dzhygadlo, R.; Gerhardt, A.; Götzen, K.; Kalicy, G.; Krebs, M.; Lehmann, D.; Nerling, F.; Patsyuk, M.; Peters, K.; Schepers, G.; Schmitt, L.; Schwarz, C.; Schwiening, J.; Traxler, M.; Böhm, M.; Eyrich, W.; Lehmann, A.; Pfaffinger, M.; Uhlig, F.; Düren, M.; Etzelmüller, E.; Föhl, K.; Hayrapetyan, A.; Kreutzfeld, K.; Merle, O.; Rieke, J.; Wasem, T.; Achenbach, P.; Cardinali, M.; Hoek, M.; Lauth, W.; Schlimme, S.; Sfienti, C.; Thiel, M.

    2017-12-01

    The Endcap Disc DIRC has been developed to provide excellent particle identification for the future PANDA experiment by separating pions and kaons up to a momentum of 4 GeV/c with a separation power of 3 standard deviations in the polar angle region from 5° to 22°. This goal will be achieved using dedicated particle identification algorithms based on likelihood methods, applied both in offline analysis and in online event filtering. This paper evaluates the resulting PID performance using Monte Carlo simulations to study basic single-track PID as well as the analysis of complex physics channels. The online reconstruction algorithm has been tested with a Virtex-4 FPGA card and optimized with respect to the resulting constraints.
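
    For a single Gaussian-distributed PID observable, the quoted separation power and a likelihood-based pion/kaon decision reduce to a few lines; this toy model only illustrates the figure of merit and is not the Disc DIRC likelihood itself.

        import numpy as np
        from scipy.stats import norm

        def separation_power(mu_pi, sig_pi, mu_k, sig_k):
            """Separation in standard deviations between two Gaussian hypotheses."""
            return abs(mu_pi - mu_k) / np.sqrt(0.5 * (sig_pi**2 + sig_k**2))

        def loglik_ratio(x, mu_pi, sig_pi, mu_k, sig_k):
            """Positive values favor the pion hypothesis."""
            return norm.logpdf(x, mu_pi, sig_pi) - norm.logpdf(x, mu_k, sig_k)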

  19. Resolving boosted jets with XCone

    DOE PAGES

    Thaler, Jesse; Wilkason, Thomas F.

    2015-12-01

    We show how the recently proposed XCone jet algorithm smoothly interpolates between resolved and boosted kinematics. When using standard jet algorithms to reconstruct the decays of hadronic resonances like top quarks and Higgs bosons, one typically needs separate analysis strategies to handle the resolved regime of well-separated jets and the boosted regime of fat jets with substructure. XCone, by contrast, is an exclusive cone jet algorithm that always returns a fixed number of jets, so jet regions remain resolved even when (sub)jets are overlapping in the boosted regime. In this paper, we perform three LHC case studies (dijet resonances, Higgs decays to bottom quarks, and all-hadronic top pairs) that demonstrate the physics applications of XCone over a wide kinematic range.

  20. Direct Final Rule for Exhaust Emission Standards for 2012 and Later Model Year Snowmobiles

    EPA Pesticide Factsheets

    In this action, EPA removes the NOx component from the Phase 3 emission standard calculation and defers action on the 2012 CO and HC emission standards portion of the court's remand to a separate rulemaking action.

  1. ANALYSIS OF REAL-TIME VEHICLE HYDROCARBON EMISSIONS DATA

    EPA Science Inventory

    The report gives results of analyses using real-time dynamometer test emissions data from 13 passenger cars to examine variations in emissions during different speeds or modes of travel. The resulting data provided a way to separately identify idle, cruise, acceleration, and dece...

  2. 75 FR 27647 - Approval and Promulgation of Air Quality Implementation Plans; Texas; Revisions to the Emission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-18

    ... taken under parts C and D of the CAA. In a separate rulemaking, EPA is approving the severable Discrete Emission Credit Banking and Trading Program (referred to elsewhere in this notice as the Discrete Emission...

  3. Further theoretical studies of modified cyclone separator as a diesel soot particulate emission arrester.

    PubMed

    Mukhopadhyay, N; Bose, P K

    2009-10-01

    Soot particulate emission reduction from diesel engines is one of the most pressing problems associated with exhaust pollution. Diesel particulate filters (DPF) hold out the prospect of substantially reducing regulated particulate emissions, but reliable regeneration of the filters remains a difficult hurdle. Many of the solutions proposed to date suffer from design complexity, cost, regeneration problems and energy demands. This study presents a computer-aided theoretical analysis for controlling diesel soot particulate emission with a cyclone separator, a non-contact particulate removal system, considering the outer vortex flow, the inner vortex flow and a packed ceramic fiber filter at the end of the vortex finder tube. The cyclone separator, with low initial cost and simple construction, produces low back pressure and reasonably high collection efficiency with reduced regeneration problems. The cyclone separator is modified by placing a continuous packed ceramic fiber filter at the end of the vortex finder tube. In this work, a grade efficiency model for diesel soot particulate emission is proposed considering the outer vortex, the inner vortex and the continuous packed ceramic fiber filter. A pressure drop model is also proposed that accounts for the effect of the ceramic fiber filter. The proposed model gives reasonably good collection efficiency within the permissible pressure drop limit of diesel engine operation. A theoretical approach for calculating the cut-size diameter is presented, considering the effect of the Cunningham molecular slip correction factor. The results show good agreement with existing cyclone and DPF flow characteristics.
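
    For orientation, the classical Lapple cut-size expression with a Cunningham slip correction can be evaluated as below. This is a textbook formula iterated to self-consistency, not the paper's modified grade-efficiency model; the default mean free path assumes air near ambient conditions.

        import numpy as np

        def cunningham(d, mfp=66e-9):
            """Cunningham slip correction factor for particle diameter d [m]."""
            kn = 2.0 * mfp / d
            return 1.0 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))

        def cut_diameter(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g=1.2):
            """Lapple 50% cut size [m], iterated to include the slip correction."""
            d50 = np.sqrt(9.0 * mu * inlet_width
                          / (2.0 * np.pi * n_turns * v_inlet * (rho_p - rho_g)))
            for _ in range(20):   # fixed-point iteration on Cc(d50)
                d50 = np.sqrt(9.0 * mu * inlet_width
                              / (2.0 * np.pi * n_turns * v_inlet * (rho_p - rho_g) * cunningham(d50)))
            return d50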

  4. Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    NASA Astrophysics Data System (ADS)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    To address the shortcomings of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the AGA-RBF algorithm and its solution steps are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network separately with the AGA and the LMS algorithm. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, fast execution and strong generalization ability.

  5. Multi-objective optimization of combustion, performance and emission parameters in a jatropha biodiesel engine using Non-dominated sorting genetic algorithm-II

    NASA Astrophysics Data System (ADS)

    Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar

    2014-03-01

    The present work studies and identifies the different variables that affect the output parameters involved in a single cylinder direct injection compression ignition (CI) engine using jatropha biodiesel. Response surface methodology based on Central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (Brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC, smoke, a multiobjective optimization problem is formulated. Nondominated sorting genetic algorithm-II is used in predicting the Pareto optimal sets of solution. Experiments are performed at suitable optimal solutions for predicting the combustion, performance and emission parameters to check the adequacy of the proposed model. The Pareto optimal sets of solution can be used as guidelines for the end users to select optimal combination of engine output and emission parameters depending upon their own requirements.
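
    The ranking step at the heart of NSGA-II is plain non-dominated sorting; a minimal Python sketch follows, with all objectives cast as minimization (so BTE would enter as -BTE).

        def dominates(a, b):
            """a dominates b: no objective worse, at least one strictly better (minimization)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def non_dominated_sort(points):
            """Return successive Pareto fronts as lists of indices into points."""
            fronts, remaining = [], set(range(len(points)))
            while remaining:
                front = [i for i in remaining
                         if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
                fronts.append(front)
                remaining -= set(front)
            return fronts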

  6. Inverse problem of flame surface properties of wood using a repulsive particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yoon, Kyung-Beom; Park, Won-Hee

    2015-04-01

    The convective heat transfer coefficient and surface emissivity before and after flame occurrence on a wood specimen surface, together with the flame heat flux, were estimated using the repulsive particle swarm optimization algorithm and cone heater test results. The cone heater specified in the ISO 5660 standard was used, and six cone heater heat fluxes were tested. Preservative-treated Douglas fir 21 mm in thickness was used as the wood specimen in the tests. This study confirmed that the specimen surface temperature, calculated from the convective heat transfer coefficient, surface emissivity and flame heat flux estimated by the repulsive particle swarm optimization algorithm, was consistent with the measured temperature. Considering the measurement errors in the specimen surface temperature, the applicability of the optimization method considered in this study was evaluated.

  7. Acoustic emission localization based on FBG sensing network and SVR algorithm

    NASA Astrophysics Data System (ADS)

    Sai, Yaozhang; Zhao, Xiuxia; Hou, Dianli; Jiang, Mingshun

    2017-03-01

    In practical applications, carbon fiber reinforced plastic (CFRP) structures are prone to various forms of invisible damage, so such damage should be located and detected promptly to ensure the safety of CFRP structures. In this paper, an acoustic emission (AE) localization system based on a fiber Bragg grating (FBG) sensing network and support vector regression (SVR) is proposed for damage localization. AE signals caused by damage are acquired by high-speed FBG interrogation. Time differences between AE signals are extracted with the Shannon wavelet transform and fed to the SVR-based localization algorithm. With the SVR model, the coordinates of the AE source can be accurately predicted without knowledge of the wave velocity. The FBG system and localization algorithm were verified on a 500 mm×500 mm×2 mm CFRP plate. The experimental results show that the average localization error is 2.8 mm and the training time is 0.07 s.
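
    A minimal scikit-learn sketch of the regression step is shown below; the file names, kernel and hyperparameters are illustrative assumptions, with time-difference features mapped directly to plate coordinates.

        import numpy as np
        from sklearn.multioutput import MultiOutputRegressor
        from sklearn.svm import SVR

        X_train = np.load("time_differences.npy")   # (n_events, n_sensor_pairs), hypothetical file
        y_train = np.load("source_xy.npy")          # (n_events, 2) known (x, y) on the plate
        model = MultiOutputRegressor(SVR(kernel="rbf", C=100.0, epsilon=0.5))
        model.fit(X_train, y_train)                 # wave velocity never enters the model
        xy_new = model.predict(X_train[:1])         # locate an AE event from its time differences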

  8. From prompt gamma distribution to dose: a novel approach combining an evolutionary algorithm and filtering based on Gaussian-powerlaw convolutions.

    PubMed

    Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F

    2016-10-07

    Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for verification of the proton range; however, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth-dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
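
    The forward direction of this scheme is a one-line convolution; the sketch below uses a generic kernel placeholder (the paper's kernel derives from Gaussian-powerlaw convolutions) together with a least-squares fitness of the kind an evolutionary algorithm would maximize when inverting it.

        import numpy as np

        def dose_to_prompt_gamma(depth_dose, kernel):
            """Forward model: prompt gamma depth profile as a filtered depth-dose curve."""
            return np.convolve(depth_dose, kernel, mode="same")

        def fitness(candidate_dose, measured_profile, kernel):
            """Score a candidate dose profile against the measured prompt gamma profile."""
            residual = dose_to_prompt_gamma(candidate_dose, kernel) - measured_profile
            return -np.sum(residual ** 2)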

  9. The Chorus Conflict and Loss of Separation Resolution Algorithms

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.

    2013-01-01

    The Chorus software is designed to investigate near-term, tactical conflict and loss-of-separation detection and resolution concepts for air traffic management. This software is currently being used in two different problem domains: en-route self-separation and sense-and-avoid for unmanned aircraft systems. This paper describes the core resolution algorithms that are part of Chorus. The combination of several features of the Chorus program distinguishes this software from other approaches to conflict and loss-of-separation resolution. First, the program stores a history of state information over time, which enables it to handle communication dropouts and take advantage of previous input data. Second, the underlying conflict algorithms find resolutions that solve the most urgent conflict, but also seek to prevent secondary conflicts with the other aircraft. Third, if the program is run on multiple aircraft, and two aircraft maneuver at the same time, the result will be implicitly coordinated. This implicit coordination property is established by ensuring that a resolution produced by Chorus will comply with a mathematically defined criterion whose correctness has been formally verified. Fourth, the program produces both instantaneous solutions and kinematic solutions, which are based on simple acceleration models. Finally, the program provides resolutions for recovery from loss of separation. Different versions of this software are implemented as Java and C++ programs, respectively.

  10. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    NASA Astrophysics Data System (ADS)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with the development of small or large holes within the lung. These areas have Hounsfield values (HU) approaching -1000, so it is possible to detect and quantify such areas using a simple density mask technique. However, the edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step applies inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The second step computes the antero-posterior density gradient caused by gravity and corrects for it. Motion artefacts are corrected in a third step by use of normalized averaging, thresholding and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without; our algorithm improved the separation of the two groups considerably. The algorithm needs further refinement, but it may form a basis for further development of methods for computerized diagnosis and quantification of emphysema at HRCT.
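
    A basic density mask is a one-line computation once lung voxels are identified; the -950 HU cutoff below is a commonly used value and stands in for whatever threshold the artefact-corrected images would use.

        import numpy as np

        def emphysema_index(hu_volume, lung_mask, threshold=-950):
            """Fraction of lung voxels below the HU threshold (simple density mask)."""
            return float(np.mean(hu_volume[lung_mask] < threshold))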

  11. The development of a 3D mesoscopic model of metallic foam based on an improved watershed algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Jinhua; Zhang, Yadong; Wang, Guikun; Fang, Qin

    2018-06-01

    The watershed algorithm has been widely used in x-ray computed tomography (XCT) image segmentation. It provides a transformation defined on a grayscale image and finds the lines that separate adjacent regions. However, distortion occurs when developing a mesoscopic model of metallic foam based on XCT image data: cells are oversegmented in some cases when the traditional watershed algorithm is used. The improved watershed algorithm presented in this paper avoids oversegmentation and is composed of three steps. First, it finds all of the connected cells and identifies the junctions of the corresponding cell walls. Second, image segmentation is conducted to separate the adjacent cells, generating the lost cell walls between them; optimization is then performed on the segmented image. Third, the improved algorithm is validated by comparison with the image of the metallic foam, which shows that it avoids the segmentation distortion. A mesoscopic model of metallic foam is thus formed based on the improved algorithm, and mesoscopic characteristics of the metallic foam, such as cell size, volume and shape, are identified and analyzed.
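
    A common way to curb watershed oversegmentation is marker control, seeding one marker per cell before flooding; the scikit-image sketch below illustrates that general idea (the minimum seed distance is an assumed parameter, and the paper's junction-finding step is not reproduced).

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def segment_cells(binary_cells):
            """Marker-controlled watershed on a binary image of foam cells."""
            distance = ndi.distance_transform_edt(binary_cells)
            coords = peak_local_max(distance, min_distance=10,
                                    labels=binary_cells.astype(int))
            markers = np.zeros(distance.shape, dtype=int)
            markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)  # one seed per cell
            return watershed(-distance, markers, mask=binary_cells)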

  12. Efficient iterative image reconstruction algorithm for dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Antropova, Natalia; Sanchez, Adrian; Reiser, Ingrid S.; Sidky, Emil Y.; Boone, John; Pan, Xiaochuan

    2016-03-01

    Dedicated breast computed tomography (bCT) is currently being studied as a potential screening method for breast cancer. The X-ray exposure is set low to achieve an average glandular dose comparable to that of mammography, yielding projection data that contains high levels of noise. Iterative image reconstruction (IIR) algorithms may be well-suited for the system since they potentially reduce the effects of noise in the reconstructed images. However, IIR outcomes can be difficult to control since the algorithm parameters do not directly correspond to the image properties. Also, IIR algorithms are computationally demanding and have optimal parameter settings that depend on the size and shape of the breast and positioning of the patient. In this work, we design an efficient IIR algorithm with meaningful parameter specifications that can be used on a large, diverse sample of bCT cases. The flexibility and efficiency of this method come from having the final image produced as a linear combination of two separately reconstructed images - one containing gray-level information and the other with enhanced high-frequency components. Both images result from a few iterations of separate IIR algorithms. The proposed algorithm depends on two parameters, both of which have a well-defined impact on image quality. The algorithm is applied to numerous bCT cases from a dedicated bCT prototype system developed at the University of California, Davis.

  13. Fission Reaction Event Yield Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagmann, Christian; Verbeke, Jerome; Vogt, Ramona

    FREYA (Fission Reaction Event Yield Algorithm) is a code that simulates the decay of a fissionable nucleus at a specified excitation energy. In its present form, FREYA models spontaneous fission and neutron-induced fission up to 20 MeV. It includes the possibility of neutron emission from the nucleus prior to its fission (nth-chance fission).

  14. Evolutionary design optimization of traffic signals applied to Quito city.

    PubMed

    Armas, Rolando; Aguirre, Hernán; Daolio, Fabio; Tanaka, Kiyoshi

    2017-01-01

    This work applies evolutionary computation and machine learning methods to study the transportation system of Quito from a design optimization perspective. It couples an evolutionary algorithm with a microscopic transport simulator and uses the outcome of the optimization process to deepen our understanding of the problem and gain knowledge about the system. The work focuses on the optimization of a large number of traffic lights deployed on a wide area of the city and studies their impact on travel time, emissions and fuel consumption. An evolutionary algorithm with specialized mutation operators is proposed to search effectively in large decision spaces, evolving small populations for a short number of generations. The effects of the operators combined with a varying mutation schedule are studied, and an analysis of the parameters of the algorithm is also included. In addition, hierarchical clustering is performed on the best solutions found in several runs of the algorithm. An analysis of signal clusters and their geolocation, estimation of fuel consumption, spatial analysis of emissions, and an analysis of signal coordination provide an overall picture of the systemic effects of the optimization process.

  16. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    NASA Astrophysics Data System (ADS)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.

  17. Fast algorithm for spectral processing with application to on-line welding quality assurance

    NASA Astrophysics Data System (ADS)

    Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.

    2006-10-01

    A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate in real-time the plasma electronic temperature. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve peak fitting to Voigt functions using the Levenberg-Marquardt recursive method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms will be analysed and compared, and it will be shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG-welding using a fibre optic to capture the arc light, together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
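
    Once line centers are located, the electron temperature typically follows from a standard multi-line Boltzmann plot; the sketch below shows that step under the usual optically-thin LTE assumptions (the paper's contribution is the sub-pixel location of the line centers, not this fit).

        import numpy as np

        K_B_EV = 8.617e-5   # Boltzmann constant [eV/K]

        def electron_temperature(I, lam, g_up, A_ul, E_up_eV):
            """Boltzmann plot: slope of ln(I*lam/(g*A)) vs upper-level energy is -1/(k*T)."""
            y = np.log(I * lam / (g_up * A_ul))
            slope, _ = np.polyfit(E_up_eV, y, 1)
            return -1.0 / (slope * K_B_EV)   # temperature in kelvin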

  18. 76 FR 72049 - National Emission Standards for Hazardous Air Pollutant Emissions for Shipbuilding and Ship...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-21

    ...This action finalizes the residual risk and technology review conducted for two industrial source categories regulated by separate national emission standards for hazardous air pollutants. The two national emission standards for hazardous air pollutants are: National Emissions Standards for Shipbuilding and Ship Repair (Surface Coating) and National Emissions Standards for Wood Furniture Manufacturing Operations. This action also finalizes revisions to the regulatory provisions related to emissions during periods of startup, shutdown and malfunction.

  19. Passive Microwave Algorithms for Sea Ice Concentration: A Comparison of Two Techniques

    NASA Technical Reports Server (NTRS)

    Comiso, Josefino C.; Cavalieri, Donald J.; Parkinson, Claire L.; Gloersen, Per

    1997-01-01

    The most comprehensive large-scale characterization of the global sea ice cover so far has been provided by satellite passive microwave data. Accurate retrieval of ice concentrations from these data is important because of the sensitivity of surface flux (e.g., heat, salt, and water) calculations to small changes in the amount of open water (leads and polynyas) within the polar ice packs. Two algorithms that have been used for deriving ice concentrations from multichannel data are compared. One is the NASA Team algorithm and the other is the Bootstrap algorithm, both developed at NASA's Goddard Space Flight Center. The two algorithms use different channel combinations, reference brightness temperatures, weather filters, and techniques. Analyses are made to evaluate the sensitivity of algorithm results to variations of emissivity and temperature with space and time. To assess the difference in the performance of the two algorithms, analyses were performed with data from both hemispheres and for all seasons. The results show only small differences in the central Arctic but larger disagreements in the seasonal regions and in summer. In some areas of the Antarctic, the Bootstrap technique shows ice concentrations higher than those of the Team algorithm by as much as 25%, whereas in other areas it shows ice concentrations lower by as much as 30%. The differences in the results are caused by temperature effects, emissivity effects, and tie point differences. The Team and Bootstrap results were compared with available Landsat, advanced very high resolution radiometer (AVHRR) and synthetic aperture radar (SAR) data. AVHRR, Landsat, and SAR data sets all yield higher concentrations than the passive microwave algorithms. Inconsistencies among the results suggest the need for further validation studies.
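
    The NASA Team algorithm is built on two brightness-temperature ratios; the sketch below computes them for the standard 19 and 37 GHz channels. Turning PR and GR into an ice concentration requires hemisphere-specific tie-point coefficients, which are omitted here rather than invented.

        import numpy as np

        def nasa_team_ratios(tb19v, tb19h, tb37v):
            """Polarization ratio (PR) and spectral gradient ratio (GR)."""
            pr = (tb19v - tb19h) / (tb19v + tb19h)
            gr = (tb37v - tb19v) / (tb37v + tb19v)
            return pr, gr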

  20. Biologically driven neural platform invoking parallel electrophoretic separation and urinary metabolite screening.

    PubMed

    Page, Tessa; Nguyen, Huong Thi Huynh; Hilts, Lindsey; Ramos, Lorena; Hanrahan, Grady

    2012-06-01

    This work reveals a computational framework for parallel electrophoretic separation of complex biological macromolecules and model urinary metabolites. More specifically, the implementation of a particle swarm optimization (PSO) algorithm on a neural network platform for multiparameter optimization of multiplexed 24-capillary electrophoresis technology with UV detection is highlighted. Two experimental systems were examined: (1) separation of purified rabbit metallothioneins and (2) separation of model toluene urinary metabolites and selected organic acids. Results proved superior to the use of neural networks employing standard back propagation when examining training error, fitting response, and predictive abilities. Simulation runs were obtained as a result of metaheuristic examination of the global search space with experimental responses in good agreement with predicted values. Full separation of selected analytes was realized after employing optimal model conditions. This framework provides guidance for the application of metaheuristic computational tools to aid in future studies involving parallel chemical separation and screening. Adaptable pseudo-code is provided to enable users of varied software packages and modeling framework to implement the PSO algorithm for their desired use.
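
    A generic particle swarm optimizer is compact enough to sketch in full; here it minimizes an arbitrary objective over box bounds, whereas in the study the swarm tunes the neural network that models the electrophoretic separation. The inertia and acceleration coefficients are conventional defaults, not the paper's settings.

        import numpy as np

        def pso(objective, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal PSO; bounds is a list of (low, high) pairs, one per dimension."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            x = rng.uniform(lo, hi, (n_particles, len(bounds)))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            gbest = pbest[pbest_f.argmin()]
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
                gbest = pbest[pbest_f.argmin()]
            return gbest, pbest_f.min()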

  1. Methods for predicting properties and tailoring salt solutions for industrial processes

    NASA Technical Reports Server (NTRS)

    Ally, Moonis R.

    1993-01-01

    An algorithm developed at Oak Ridge National Laboratory accurately and quickly predicts thermodynamic properties of concentrated aqueous salt solutions. This algorithm is much simpler and much faster than other modeling schemes and is unique because it can predict solution behavior at very high concentrations and under varying conditions. Typical industrial applications of this algorithm would be in manufacture of inorganic chemicals by crystallization, thermal storage, refrigeration and cooling, extraction of metals, emissions controls, etc.

  2. Anomaly detection in hyperspectral imagery: statistics vs. graph-based algorithms

    NASA Astrophysics Data System (ADS)

    Berkson, Emily E.; Messinger, David W.

    2016-05-01

    Anomaly detection (AD) algorithms are frequently applied to hyperspectral imagery, but different algorithms produce different outlier results depending on the image scene content and the assumed background model. This work provides the first comparison of anomaly score distributions between common statistics-based anomaly detection algorithms (RX and subspace-RX) and the graph-based Topological Anomaly Detector (TAD). Anomaly scores in statistical AD algorithms should theoretically approximate a chi-squared distribution; however, this is rarely the case with real hyperspectral imagery. The expected distribution of scores found with graph-based methods remains unclear. We also look for general trends in algorithm performance with varied scene content. Three separate scenes were extracted from the hyperspectral MegaScene image taken over downtown Rochester, NY with the VIS-NIR-SWIR ProSpecTIR instrument. In order of most to least cluttered, we study an urban, suburban, and rural scene. The three AD algorithms were applied to each scene, and the distributions of the most anomalous 5% of pixels were compared. We find that subspace-RX performs better than RX, because the data becomes more normal when the highest variance principal components are removed. We also see that compared to statistical detectors, anomalies detected by TAD are easier to separate from the background. Due to their different underlying assumptions, the statistical and graph-based algorithms highlighted different anomalies within the urban scene. These results will lead to a deeper understanding of these algorithms and their applicability across different types of imagery.
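
    The RX score referenced above is the Mahalanobis distance of each pixel spectrum from the scene background; a global-background sketch follows (subspace-RX would first project out the leading principal components, and the graph-based TAD is not reproduced here).

        import numpy as np

        def rx_scores(cube):
            """cube: (rows, cols, bands) hyperspectral image; returns per-pixel RX scores."""
            h, w, b = cube.shape
            X = cube.reshape(-1, b).astype(float)
            mu = X.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
            d = X - mu
            return np.einsum("ij,jk,ik->i", d, cov_inv, d).reshape(h, w)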

  3. A Comprehensive Two-Dimensional Retention Time Alignment Algorithm To Enhance Chemometric Analysis of Comprehensive Two-Dimensional Separation Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, Karisa M.; Wood, Lianna F.; Wright, Bob W.

    2005-12-01

    A comprehensive two-dimensional (2D) retention time alignment algorithm was developed using a novel indexing scheme. The algorithm is termed comprehensive because it functions to correct the entire chromatogram in both dimensions and it preserves the separation information in both dimensions. Although the algorithm is demonstrated by correcting comprehensive two-dimensional gas chromatography (GC x GC) data, the algorithm is designed to correct shifting in all forms of 2D separations, such as LC x LC, LC x CE, CE x CE, and LC x GC. This 2D alignment algorithm was applied to three different data sets composed of replicate GC x GC separations of (1) three 22-component control mixtures, (2) three gasoline samples, and (3) three diesel samples. The three data sets were collected using slightly different temperature or pressure programs to engender significant retention time shifting in the raw data and then demonstrate subsequent corrections of that shifting upon comprehensive 2D alignment of the data sets. Thirty 12-min GC x GC separations from three 22-component control mixtures were used to evaluate the 2D alignment performance (10 runs/mixture). The average standard deviation of the first column retention time improved 5-fold from 0.020 min (before alignment) to 0.004 min (after alignment). Concurrently, the average standard deviation of second column retention time improved 4-fold from 3.5 ms (before alignment) to 0.8 ms (after alignment). Alignment of the 30 control mixture chromatograms took 20 min. The quantitative integrity of the GC x GC data following 2D alignment was also investigated. The mean integrated signal was determined for all components in the three 22-component mixtures for all 30 replicates. The average percent difference in the integrated signal for each component before and after alignment was 2.6%. Singular value decomposition (SVD) was applied to the 22-component control mixture data before and after alignment to show the restoration of trilinearity to the data, since trilinearity benefits chemometric analysis. By applying comprehensive 2D retention time alignment to all three data sets (control mixtures, gasoline samples, and diesel samples), classification by principal component analysis (PCA) substantially improved, resulting in 100% accurate scores clustering.

  4. Influence of Co-57 and CT Transmission Measurements on the Quantification Accuracy and Partial Volume Effect of a Small Animal PET Scanner.

    PubMed

    Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J

    2017-12-01

    Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and, in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study compares transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA) and determines their influence on the quantification accuracy and partial volume effect (PVE). A special focus was the impact of the performed calibration on the quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on the quantification, and the PVE for different sphere sizes, along the field of view, and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations of up to 24.06% between measured and true activity), whereas no influence of the emission activity on the CT attenuation correction was identified (deviations <3% between measured and true activity). The quantification accuracy was substantially influenced by the applied calibration factor and by the object size. The PVE depended on the sphere size, the position within the field of view, the reconstruction and correction algorithms, and the count statistics. Depending on the reconstruction algorithm, only ~30-40% of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms yielded substantially higher recovery values than the analytical and 2D iterative reconstruction algorithms (up to 70.46% and 80.82% recovery for the smallest and largest spheres, respectively). The transmission measurement (CT or Co-57 source) used for attenuation correction did not severely influence the PVE. The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction algorithm, and the applied corrections. In particular, the influence of the emission activity during a transmission measurement performed with a Co-57 source must be considered. To obtain comparable results, also among different scanner configurations, standardization of the acquisition (imaging parameters as well as applied reconstruction and correction protocols) is necessary.
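
    The recovery values quoted above follow the usual recovery-coefficient bookkeeping; the sketch below (our own, with an assumed Gaussian point-spread function standing in for the scanner resolution model) shows how blurring alone drives the measured fraction of true activity well below 1 for small spheres.

        import numpy as np
        from scipy.ndimage import gaussian_filter  # Gaussian PSF as a stand-in resolution model

        # Uniform hot sphere of "true" activity 1.0 in a cold background.
        true_activity = 1.0
        grid = np.zeros((64, 64, 64))
        z, y, x = np.indices(grid.shape)
        sphere = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 <= 4 ** 2   # 4-voxel radius
        grid[sphere] = true_activity

        # Blur with an assumed PSF width; the recovery coefficient is the ratio
        # of the measured ROI mean to the true activity concentration.
        measured = gaussian_filter(grid, sigma=2.0)
        recovery = measured[sphere].mean() / true_activity
        print(f"recovery coefficient: {recovery:.2f}")   # well below 1 for small spheres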

  5. Autonomous Operations Planner: A Flexible Platform for Research in Flight-Deck Support for Airborne Self-Separation

    NASA Technical Reports Server (NTRS)

    Karr, David A.; Vivona, Robert A.; DePascale, Stephen M.; Wing, David J.

    2012-01-01

    The Autonomous Operations Planner (AOP), developed by NASA, is a flexible and powerful prototype of a flight-deck automation system to support self-separation of aircraft. The AOP incorporates a variety of algorithms to detect and resolve conflicts between the trajectories of its own aircraft and traffic aircraft while meeting route constraints, such as required times of arrival, and avoiding airspace hazards, such as convective weather and restricted airspace. This integrated suite of algorithms provides flight-crew support for strategic and tactical conflict resolutions and conflict-free trajectory planning while en route. The AOP has supported an extensive set of experiments covering various conditions and variations on the self-separation concept, yielding insight into the system's design and resolving various challenges encountered in the exploration of the concept. The design of the AOP will enable it to continue to evolve and support experimentation as the self-separation concept is refined.
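
    AOP's actual conflict-detection algorithms are not spelled out in this record; as a minimal stand-in, the sketch below checks a pairwise conflict by propagating two constant-velocity trajectories to their closest point of approach and comparing the miss distance with a required separation (all names and parameter values are ours).

        import numpy as np

        def conflict(p1, v1, p2, v2, sep=5.0, horizon=1200.0):
            """Pairwise conflict probe: (in conflict?, time of closest approach, miss distance).

            p*: positions (NM), v*: velocities (NM/s), sep: required separation (NM),
            horizon: look-ahead time (s). All values here are illustrative.
            """
            dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
            dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
            denom = dv @ dv
            t_cpa = 0.0 if denom == 0.0 else float(np.clip(-(dp @ dv) / denom, 0.0, horizon))
            miss = float(np.linalg.norm(dp + dv * t_cpa))        # separation at closest approach
            return miss < sep, t_cpa, miss

        # Head-on encounter: 60 NM apart, 0.2 NM/s closure -> conflict at t = 300 s.
        print(conflict([0.0, 0.0], [0.1, 0.0], [60.0, 1.0], [-0.1, 0.0]))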

  6. Optimized emission in nanorod arrays through quasi-aperiodic inverse design.

    PubMed

    Anderson, P Duke; Povinelli, Michelle L

    2015-06-01

    We investigate a new class of quasi-aperiodic nanorod structures for the enhancement of incoherent light emission. We identify one optimized structure using an inverse design algorithm and the finite-difference time-domain method. We carry out emission calculations on both the optimized structure as well as a simple periodic array. The optimized structure achieves nearly perfect light extraction while maintaining a high spontaneous emission rate. Overall, the optimized structure can achieve a 20%-42% increase in external quantum efficiency relative to a simple periodic design, depending on material quality.

  7. 40 CFR 63.686 - Standards: Oil-water and organic-water separators.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 10 2011-07-01 2011-07-01 false Standards: Oil-water and organic-water... Operations § 63.686 Standards: Oil-water and organic-water separators. (a) The provisions of this section apply to the control of air emissions from oil-water separators and organic-water separators for which...

  8. 40 CFR 63.686 - Standards: Oil-water and organic-water separators.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 10 2010-07-01 2010-07-01 false Standards: Oil-water and organic-water... Operations § 63.686 Standards: Oil-water and organic-water separators. (a) The provisions of this section apply to the control of air emissions from oil-water separators and organic-water separators for which...

  9. 40 CFR 63.686 - Standards: Oil-water and organic-water separators.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 11 2014-07-01 2014-07-01 false Standards: Oil-water and organic-water... Operations § 63.686 Standards: Oil-water and organic-water separators. (a) The provisions of this section apply to the control of air emissions from oil-water separators and organic-water separators for which...

  10. 40 CFR 63.686 - Standards: Oil-water and organic-water separators.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 11 2012-07-01 2012-07-01 false Standards: Oil-water and organic-water... Operations § 63.686 Standards: Oil-water and organic-water separators. (a) The provisions of this section apply to the control of air emissions from oil-water separators and organic-water separators for which...

  11. 40 CFR 63.686 - Standards: Oil-water and organic-water separators.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 11 2013-07-01 2013-07-01 false Standards: Oil-water and organic-water... Operations § 63.686 Standards: Oil-water and organic-water separators. (a) The provisions of this section apply to the control of air emissions from oil-water separators and organic-water separators for which...

  12. A hybrid multiscale Monte Carlo algorithm (HyMSMC) to cope with disparity in time scales and species populations in intracellular networks.

    PubMed

    Samant, Asawari; Ogunnaike, Babatunde A; Vlachos, Dionisios G

    2007-05-24

    The fundamental role that intrinsic stochasticity plays in cellular functions has been shown via numerous computational and experimental studies. In the face of such evidence, it is important that intracellular networks be simulated with stochastic algorithms that can capture molecular fluctuations. However, separation of time scales and disparity in species populations, two common features of intracellular networks, make stochastic simulation of such networks computationally prohibitive. While recent work has addressed each of these challenges separately, a generic algorithm that can simultaneously tackle disparities in time scales and population scales in stochastic systems has been lacking. In this paper, we propose the hybrid, multiscale Monte Carlo (HyMSMC) method that fills this void. The proposed HyMSMC method blends stochastic singular perturbation concepts, to deal with potential stiffness, with a hybrid of exact and coarse-grained stochastic algorithms, to cope with the separation in population sizes. In addition, we introduce the computational singular perturbation (CSP) method as a means of systematically partitioning fast and slow networks and computing relaxation times for convergence. We also propose a new criterion for the convergence of fast networks to stochastic low-dimensional manifolds, which further accelerates the algorithm. We use several prototype and biological examples, including a gene expression model displaying bistability, to demonstrate the efficiency, accuracy and applicability of the HyMSMC method. Bistable models serve as stringent tests for the success of multiscale MC methods and illustrate the limitations of some literature methods.
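
    The exact stochastic algorithm that HyMSMC coarse-grains is Gillespie's stochastic simulation algorithm (SSA); a minimal SSA for a birth-death network (our own illustration, not the HyMSMC code) shows the mechanics, and also why disparate propensities make the exact method expensive: the waiting time shrinks as the total propensity grows.

        import numpy as np

        def gillespie_birth_death(k_birth=10.0, k_death=0.1, n0=0, t_end=100.0, seed=0):
            """Exact SSA for the network X -> X+1 (rate k_birth) and X -> X-1 (rate k_death * X)."""
            rng = np.random.default_rng(seed)
            t, n = 0.0, n0
            times, counts = [t], [n]
            while t < t_end:
                a = np.array([k_birth, k_death * n])   # reaction propensities
                a0 = a.sum()
                if a0 == 0.0:
                    break
                t += rng.exponential(1.0 / a0)         # exact waiting time; shrinks as a0 grows
                n += 1 if rng.random() < a[0] / a0 else -1
                times.append(t)
                counts.append(n)
            return np.array(times), np.array(counts)

        t, n = gillespie_birth_death()
        print(n[-1])   # fluctuates around k_birth / k_death = 100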

  13. Mitigation of crosstalk based on CSO-ICA in free space orbital angular momentum multiplexing systems

    NASA Astrophysics Data System (ADS)

    Xing, Dengke; Liu, Jianfei; Zeng, Xiangye; Lu, Jia; Yi, Ziyao

    2018-09-01

    Orbital angular momentum (OAM) multiplexing has attracted considerable attention and research in recent years because of its high spectral efficiency, and many free-space OAM systems have been demonstrated. However, due to atmospheric turbulence, the power of OAM beams diffuses into beams with neighboring topological charges, and inter-mode crosstalk emerges in these systems, rendering them unusable in severe cases. In this paper, we introduce independent component analysis (ICA), a popular method of signal separation, to mitigate inter-mode crosstalk effects; furthermore, to address the fixed iteration step size of the traditional ICA algorithm, we propose a joint algorithm, CSO-ICA, that improves the computation of the separation matrix by exploiting the fast convergence rate and high convergence precision of the chicken swarm optimization (CSO) algorithm. The optimal separation matrix is obtained by adjusting the step size according to the previous iteration in CSO-ICA. Simulation results indicate that the proposed algorithm performs well in inter-mode crosstalk mitigation: the optical signal-to-noise ratio (OSNR) requirement of the received signals (OAM+2, OAM+4, OAM+6, OAM+8) is reduced by about 3.2 dB at a bit error ratio (BER) of 3.8 × 10-3. Meanwhile, convergence is much faster than with the traditional ICA algorithm, reducing the number of iterations by about an order of magnitude.
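
    CSO-ICA itself is not publicly specified in this record, but the ICA separation step it accelerates can be sketched with a standard implementation; the example below (assuming scikit-learn's FastICA, with a random matrix standing in for turbulence-induced crosstalk) recovers two mixed sources up to the permutation and scaling ambiguity inherent in blind separation.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 8.0, 2000)
        S = np.c_[np.sign(np.sin(3.0 * t)), np.sin(5.0 * t)]   # two independent sources
        A = rng.normal(size=(2, 2))                            # unknown "crosstalk" mixing
        X = S @ A.T                                            # observed mixtures

        ica = FastICA(n_components=2, random_state=0)
        S_hat = ica.fit_transform(X)                           # estimated sources
        # Blind separation recovers sources only up to permutation and scaling.
        corr = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
        print(np.round(corr, 2))                               # one entry near 1 per row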

  14. Analysis of superimposed ultrasonic guided waves in long bones by the joint approximate diagonalization of eigen-matrices algorithm.

    PubMed

    Song, Xiaojun; Ta, Dean; Wang, Weiqi

    2011-10-01

    The parameters of ultrasonic guided waves (GWs) are very sensitive to mechanical and structural changes in long cortical bones. However, it is a challenge to obtain the group velocity and other parameters of GWs because of the presence of mixed multiple modes. This paper proposes a blind identification algorithm using the joint approximate diagonalization of eigen-matrices (JADE) and applies it to the separation of superimposed GWs in long bones. For the simulation case, the velocity of the single mode was calculated after separation. A strong agreement was obtained between the estimated velocity and the theoretical expectation. For the experiments in bovine long bones, by using the calculated velocity and a theoretical model, the cortical thickness (CTh) was obtained. For comparison with the JADE approach, an adaptive Gaussian chirplet time-frequency (ACGTF) method was also used to estimate the CTh. The results showed that the mean error of the CTh acquired by the JADE approach was 4.3%, which was smaller than that of the ACGTF method (13.6%). This suggested that the JADE algorithm may be used to separate the superimposed GWs and that the JADE algorithm could potentially be used to evaluate long bones. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  15. Improving the accuracy of SO2 column densities and emission rates obtained from upward-looking UV-spectroscopic measurements of volcanic plumes by taking realistic radiative transfer into account

    USGS Publications Warehouse

    Kern, Christoph; Deutschmann, Tim; Werner, Cynthia; Sutton, A. Jeff; Elias, Tamar; Kelly, Peter J.

    2012-01-01

    Sulfur dioxide (SO2) is monitored using ultraviolet (UV) absorption spectroscopy at numerous volcanoes around the world due to its importance as a measure of volcanic activity and a tracer for other gaseous species. Recent studies have shown that failure to take realistic radiative transfer into account during the spectral retrieval of the collected data often leads to large errors in the calculated emission rates. Here, the framework for a new evaluation method which couples a radiative transfer model to the spectral retrieval is described. In it, absorption spectra are simulated, and atmospheric parameters are iteratively updated in the model until a best match to the measurement data is achieved. The evaluation algorithm is applied to two example Differential Optical Absorption Spectroscopy (DOAS) measurements conducted at Kilauea volcano (Hawaii). The resulting emission rates were 20 and 90% higher, respectively, than those obtained with a conventional DOAS retrieval performed between 305 and 315 nm, depending on the different SO2 and aerosol loads present in the volcanic plume. The internal consistency of the method was validated by measuring and modeling SO2 absorption features in a separate wavelength region around 375 nm and comparing the results. Although information about the measurement geometry and atmospheric conditions is needed in addition to the acquired spectral data, this method for the first time provides a means of taking realistic three-dimensional radiative transfer into account when analyzing UV-spectral absorption measurements of volcanic SO2 plumes.
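
    The coupled radiative-transfer retrieval is beyond a short example, but the conventional DOAS step it improves upon reduces to a linear least-squares fit of optical depth against an absorption cross-section plus a low-order polynomial for broadband extinction; the sketch below uses a synthetic cross-section and made-up numbers purely for illustration.

        import numpy as np

        # Conventional DOAS step as linear least squares, in scaled units to keep
        # the problem well conditioned: sigma in 1e-19 cm^2, SCD in 1e19 cm^-2.
        rng = np.random.default_rng(2)
        wl = np.linspace(305.0, 315.0, 200)                      # wavelength grid (nm)
        sigma = 1.0 + np.sin(2 * np.pi * (wl - 305.0) / 2.0)     # synthetic cross-section shape
        true_scd = 0.2                                           # i.e. 2e18 molecules/cm^2
        tau = true_scd * sigma + 0.01 * (wl - 310.0) + rng.normal(0.0, 1e-3, wl.size)

        # Design matrix: [cross-section, polynomial for broadband extinction].
        G = np.column_stack([sigma, np.ones_like(wl), wl - 310.0, (wl - 310.0) ** 2])
        coef, *_ = np.linalg.lstsq(G, tau, rcond=None)
        print(f"retrieved SCD: {coef[0] * 1e19:.3g} molecules/cm^2")  # ~2e18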

  16. Lattice-matched double dip-shaped BAlGaN/AlN quantum well structures for ultraviolet light emission devices

    NASA Astrophysics Data System (ADS)

    Park, Seoung-Hwan; Ahn, Doyeol

    2018-05-01

    Ultraviolet light emission characteristics of lattice-matched BxAlyGa1-x-yN/AlN quantum well (QW) structures with double AlGaN delta layers were investigated theoretically. In contrast to the conventional single dip-shaped QW structure, where the reduction of the spatial separation between electron and hole wave functions is negligible, the proposed double dip-shaped QW shows a significant enhancement of the ultraviolet light emission intensity from a BAlGaN/AlN QW structure due to the reduced spatial separation between electron and hole wave functions. The emission peak of the double dip-shaped QW structure is expected to be about three times larger than that of the conventional rectangular AlGaN/AlN QW structure.

  17. Final technical report for ITS for voluntary emission reduction: an ITS operational test using real-time vehicle emissions detection

    DOT National Transportation Integrated Search

    1998-05-01

    The Smart Sign project has successfully demonstrated the merging of two separate technological disciplines of highway messaging and on-road vehicle emissions sensing into an advanced ITS public information system. This operational test has demonstrat...

  18. 40 CFR 63.3151 - How do I demonstrate initial compliance with the emission limitations?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants... separately calculate the mass average organic HAP content of the materials used during the initial compliance...

  19. 40 CFR 63.3151 - How do I demonstrate initial compliance with the emission limitations?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants... separately calculate the mass average organic HAP content of the materials used during the initial compliance...

  20. Industrial SO2 emissions monitoring using a portable multi-channel gas analyzer with an optimized retrieval algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Y. W.; Liu, C.; Xie, P. H.; Hartl, A.; Chan, K. L.; Tian, Y.; Wang, W.; Qin, M.; Liu, J. G.; Liu, W. Q.

    2015-12-01

    In this paper, we demonstrate accurate industrial SO2 emissions monitoring using a portable multi-channel gas analyzer with an optimized retrieval algorithm. The analyzer features a large dynamic measurement range and corrects for interference from co-existing infrared absorbers, e.g., NO, CO, CO2, NO2, CH4, HC, N2O and H2O; both issues have been major limitations of industrial SO2 emissions monitoring. The multi-channel gas analyzer measures 11 different wavelength channels simultaneously in order to correct several major problems of an infrared gas analyzer, including system drift, conflicting sensitivity requirements, interference among different infrared absorbers, and limited measurement range. The optimized algorithm uses a third-order polynomial rather than a constant factor to quantify gas-to-gas interference. The measurement results show good performance in both the linear and nonlinear ranges, thereby overcoming the restriction of conventional interference correction to the linear range of both the intended and interfering channels. This implies that the measurement range of the developed multi-channel analyzer can be extended into the nonlinear absorption region. The measurement range and accuracy were evaluated by laboratory calibration: excellent agreement was achieved, with a Pearson correlation coefficient (r2) of 0.99977 over a measurement range from ~5 ppmv to 10 000 ppmv and a measurement error <2 %. The instrument was also deployed for field measurements of emissions from three different factories, whose emissions are characterized by different co-existing infrared absorbers covering a wide range of concentration levels. Our measurements showed good overall agreement with commercial SO2 analyzers.
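
    A third-order polynomial interference correction of the kind described can be sketched in a few lines; the calibration points below are invented for illustration, the idea being to fit the apparent SO2 error observed with SO2-free calibration gas as a cubic in the interferent concentration and subtract it from raw readings.

        import numpy as np

        # Hypothetical calibration: apparent SO2 readings caused by CO2 alone.
        co2_cal = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])       # % vol (assumed points)
        so2_error = np.array([0.0, 1.1, 3.0, 6.8, 11.9, 18.5])      # ppmv apparent SO2

        poly = np.polynomial.Polynomial.fit(co2_cal, so2_error, deg=3)

        def corrected_so2(so2_raw_ppmv, co2_percent):
            """Subtract the fitted CO2-to-SO2 interference from a raw reading."""
            return so2_raw_ppmv - poly(co2_percent)

        print(corrected_so2(50.0, 10.0))   # ~43.2 ppmv after removing the interference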

  1. Optimizing Fukushima Emissions Through Pattern Matching and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Simpson, M. D.; Philip, C. S.; Baskett, R.

    2017-12-01

    Hazardous conditions during the Fukushima Daiichi nuclear power plant (NPP) accident hindered direct observations of the emissions of radioactive materials into the atmosphere. A wide range of emissions are estimated from bottom-up studies using reactor inventories and top-down approaches based on inverse modeling. We present a new inverse modeling estimate of cesium-137 emitted from the Fukushima NPP. Our estimate considers weather uncertainty through a large ensemble of Weather Research and Forecasting model simulations and uses the FLEXPART atmospheric dispersion model to transport and deposit cesium. The simulations are constrained by observations of the spatial distribution of cumulative cesium deposited on the surface of Japan through April 2, 2012. Multiple spatial metrics are used to quantify differences between observed and simulated deposition patterns. In order to match the observed pattern, we use a multi-objective genetic algorithm to optimize the time-varying emissions. We find that large differences with published bottom-up estimates are required to explain the observations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
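
    A deliberately simple version of such an inversion (our own toy, not the study's multi-objective setup) treats the dispersion model as a fixed source-receptor matrix and lets a genetic algorithm search for the time-varying emission vector that best reproduces the observed deposition.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy linear forward model: deposition d = G @ e, with e the unknown
        # time-varying emission vector and G a fixed source-receptor matrix that
        # a dispersion model such as FLEXPART would supply (random here).
        n_times, n_sites = 12, 40
        G = rng.random((n_sites, n_times))
        e_true = rng.uniform(0.0, 10.0, n_times)
        d_obs = G @ e_true

        def fitness(e):
            return -np.linalg.norm(G @ e - d_obs)             # higher is better

        pop = rng.uniform(0.0, 10.0, (200, n_times))          # initial population
        for _ in range(300):
            f = np.array([fitness(e) for e in pop])
            parents = pop[np.argsort(f)[-100:]]               # truncation selection
            i, j = rng.integers(0, 100, (2, 200))
            mask = rng.random((200, n_times)) < 0.5           # uniform crossover
            pop = np.where(mask, parents[i], parents[j])
            pop += rng.normal(0.0, 0.1, pop.shape)            # Gaussian mutation
            pop = np.clip(pop, 0.0, None)                     # emissions stay non-negative

        best = pop[np.argmax([fitness(e) for e in pop])]
        print(np.round(best - e_true, 1))                     # approaches zero as the GA converges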

  2. Supervisory Power Management Control Algorithms for Hybrid Electric Vehicles. A Survey

    DOE PAGES

    Malikopoulos, Andreas

    2014-03-31

    The growing necessity for environmentally benign hybrid propulsion systems has led to the development of advanced power management control algorithms to maximize fuel economy and minimize pollutant emissions. This paper surveys the control algorithms for hybrid electric vehicles (HEVs) and plug-in HEVs (PHEVs) that have been reported in the literature to date. The exposition covers parallel, series, and power-split HEVs and PHEVs and includes a classification of the algorithms in terms of their implementation and the chronological order of their appearance. Remaining challenges and potential future research directions are also discussed.

  3. Exploring methods of cGPS transient detections for the Chilean cGPS network in conjunction with displacement predictions from seismic catalogues: To what extent can we detect seismic and aseismic motion in the cGPS network?

    NASA Astrophysics Data System (ADS)

    Bedford, J. R.; Moreno, M.; Oncken, O.; Li, S.; Schurr, B.; Metzger, S.; Baez, J. C.; Deng, Z.; Melnick, D.

    2016-12-01

    Various algorithms for the detection of transient deformation in cGPS networks are currently being developed to relieve us of by-eye detection, which is an error-prone and time-consuming activity. Such algorithms aim to separate the time series into secular, seasonal, and transient components. Additional white and coloured noise, as well as common-mode (network-correlated) noise, may remain in the separated transient component of the signal, depending on the processing flow before the separation step. A priori knowledge of regional seismicity can assist in the recognition of steps in the data, which are generally corrected for if they are above the noise floor. Sometimes the cumulative displacement caused by small earthquakes can create a seemingly continuous transient signal in the cGPS data, leading to confusion as to whether this transient motion should be attributed to seismic or aseismic processes. Here we demonstrate the efficacy of various transient-detection algorithms for subsets of the Chilean cGPS network and present the optimal processing flow for teasing out the transients. We present a step-detection and removal algorithm and estimate the seismic efficiency of any detected transient signals by forward modelling the surface displacements of the earthquakes and comparing them to the recovered transient signals. A major challenge in separating signals in the Chilean cGPS network is the overlapping of postseismic effects at adjacent segments: for example, a Mw 9 earthquake will produce postseismic viscoelastic relaxation that is sustained over decades and several hundred kilometres. Additionally, it has been observed in Chile and Japan that, following moderately large earthquakes (e.g. Mw > 8), the secular velocities of adjacent segments in the subduction margin suddenly change and remain changed; this effect may be related to a change in the speed of slab subduction rather than viscoelastic relaxation, and therefore signal separation algorithms that assume a time-independent secular velocity at each station may need to be revised to account for it. Accordingly, we categorize the recovered secular and transient signals of a particular station in terms of the seismic cycle in both its own and adjacent segments and discuss the appropriate modelling strategy for the station given its category.
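
    The separation these algorithms perform can be illustrated with the standard trajectory model: fit an offset, a secular rate, and annual plus semiannual terms by least squares, and treat the residual as transient plus noise. The sketch below (our own, with synthetic numbers) also shows the caveat raised above: an unmodelled postseismic transient biases the estimated secular rate.

        import numpy as np

        def fit_secular_seasonal(t_years, pos_mm):
            """Fit offset + secular rate + annual and semiannual terms by least squares;
            the residual carries transients (plus noise) if the secular rate is truly constant."""
            w = 2.0 * np.pi
            G = np.column_stack([
                np.ones_like(t_years), t_years,
                np.sin(w * t_years), np.cos(w * t_years),            # annual
                np.sin(2 * w * t_years), np.cos(2 * w * t_years),    # semiannual
            ])
            m, *_ = np.linalg.lstsq(G, pos_mm, rcond=None)
            return m, pos_mm - G @ m

        rng = np.random.default_rng(4)
        t = np.arange(0.0, 6.0, 1.0 / 365.25)
        pos = 12.0 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0.0, 1.0, t.size)
        pos[t > 4.0] += 8.0 * (1.0 - np.exp(-(t[t > 4.0] - 4.0)))    # postseismic-like transient
        m, resid = fit_secular_seasonal(t, pos)
        print(f"estimated secular rate: {m[1]:.1f} mm/yr (true 12.0; biased by the transient)")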

  4. Density functional theory for field emission from carbon nano-structures.

    PubMed

    Li, Zhibing

    2015-12-01

    Electron field emission is understood as a quantum mechanical many-body problem in which an electronic quasi-particle of the emitter is converted into an electron in vacuum. Fundamental concepts of field emission, such as the field enhancement factor, work function, edge barrier and emission current density, will be investigated, using carbon nanotubes and graphene as examples. A multi-scale algorithm based on density functional theory is introduced. We will argue that such a first-principles approach is necessary and appropriate for field emission of nano-structures, not only for a more accurate quantitative description but, more importantly, for deeper insight into field emission. Copyright © 2015 The Author. Published by Elsevier B.V. All rights reserved.

  5. SEPHYDRO: An Integrated Multi-Filter Web-Based Tool for Baseflow Separation

    NASA Astrophysics Data System (ADS)

    Serban, D.; MacQuarrie, K. T. B.; Popa, A.

    2017-12-01

    Knowledge of baseflow contributions to streamflow is important for understanding watershed scale hydrology, including groundwater-surface water interactions, impact of geology and landforms on baseflow, estimation of groundwater recharge rates, etc. Baseflow (or hydrograph) separation methods can be used as supporting tools in many areas of environmental research, such as the assessment of the impact of agricultural practices, urbanization and climate change on surface water and groundwater. Over the past few decades various digital filtering and graphically-based methods have been developed in an attempt to improve the assessment of the dynamics of the various sources of streamflow (e.g. groundwater, surface runoff, subsurface flow); however, these methods are not available under an integrated platform and, individually, often require significant effort for implementation. Here we introduce SEPHYDRO, an open access, customizable web-based tool, which integrates 11 algorithms allowing for separation of streamflow hydrographs. The streamlined interface incorporates a reference guide as well as additional information that allows users to import their own data, customize the algorithms, and compare, visualise and export results. The tool includes one-, two- and three-parameter digital filters as well as graphical separation methods and has been successfully applied in Atlantic Canada, in studies dealing with nutrient loading to fresh water and coastal water ecosystems. Future developments include integration of additional separation algorithms as well as incorporation of geochemical separation methods. SEPHYDRO has been developed through a collaborative research effort between the Canadian Rivers Institute, University of New Brunswick (Fredericton, New Brunswick, Canada), Agriculture and Agri-Food Canada and Environment and Climate Change Canada and is currently available at http://canadianriversinstitute.com/tool/
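
    SEPHYDRO's own implementations are not reproduced in this record; as an example of the one-parameter digital-filter family it includes, the sketch below applies a single forward pass of the classic one-parameter filter commonly attributed to Lyne and Hollick (the filter constant 0.925 is a conventional choice, and the hydrograph is synthetic).

        import numpy as np

        def lyne_hollick(q, alpha=0.925):
            """One forward pass of the classic one-parameter digital baseflow filter.

            q: streamflow series; alpha: filter constant. Returns (baseflow, quickflow).
            """
            qf = np.zeros_like(q, dtype=float)                 # filtered quickflow
            for t in range(1, len(q)):
                qf[t] = alpha * qf[t - 1] + 0.5 * (1.0 + alpha) * (q[t] - q[t - 1])
                qf[t] = min(max(qf[t], 0.0), q[t])             # keep quickflow physical
            return q - qf, qf

        # Synthetic hydrograph: slow recession plus two storm peaks.
        t = np.arange(200.0)
        q = (1.0 + 5.0 * np.exp(-t / 150.0)
             + 10.0 * np.exp(-((t - 40.0) / 5.0) ** 2)
             + 6.0 * np.exp(-((t - 120.0) / 4.0) ** 2))
        baseflow, quickflow = lyne_hollick(q)
        print(f"baseflow index: {baseflow.sum() / q.sum():.2f}")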

  6. An improved independent component analysis model for 3D chromatogram separation and its solution by multi-areas genetic algorithm

    PubMed Central

    2014-01-01

    Background The 3D chromatogram generated by High Performance Liquid Chromatography-Diode Array Detector (HPLC-DAD) has been researched widely in the fields of herbal medicine, grape wine, agriculture, petroleum and so on. Currently, most methods for separating a 3D chromatogram need to know the number of compounds in advance, which can be impossible, especially when the mixture is complex or white noise is present. A new method that extracts compounds directly from the 3D chromatogram is needed. Methods In this paper, a new separation model named parallel Independent Component Analysis constrained by Reference Curve (pICARC) is proposed, transforming the separation problem into a multi-parameter optimization issue in which the number of compounds need not be known. To find all the solutions, an algorithm named multi-areas Genetic Algorithm (mGA) is proposed, in which multiple areas of candidate solutions are constructed according to the fitness and the distances among the chromosomes. Results Simulations and experiments on a real-life HPLC-DAD data set were used to demonstrate our method and its effectiveness. The simulations show that our method can separate a 3D chromatogram into chromatographic peaks and spectra even when they are severely overlapped. The experiments also show that our method is effective on real HPLC-DAD data sets. Conclusions Our method can separate a 3D chromatogram successfully without knowing the number of compounds in advance, and it is fast and effective. PMID:25474487

  7. Feature Extraction from Subband Brain Signals and Its Classification

    NASA Astrophysics Data System (ADS)

    Mukul, Manoj Kumar; Matsuno, Fumitoshi

    This paper considers both non-stationarity and independence/uncorrelatedness criteria, along with the asymmetry ratio, over electroencephalogram (EEG) signals, and proposes a hybrid signal-preprocessing approach prior to feature extraction. A filter-bank implementation of the discrete wavelet transform (DWT) exploits the non-stationary characteristics of the EEG signals and decomposes the raw EEG into subbands of different center frequencies, called rhythms. Post-processing of the selected subband by the AMUSE algorithm (a second-order-statistics-based ICA/BSS algorithm) provides the separating matrix for each class of movement imagery. In the subband domain, the orthonormality and orthogonality criteria on the whitening and separating matrices, respectively, no longer hold. The human brain has an asymmetrical structure, and it has been observed that the ratio between the norms of the left- and right-class separating matrices should differ for better discrimination between the two classes. The alpha/beta-band asymmetry ratio between the separating matrices of the left and right classes provides the condition for selecting an appropriate multiplier. We therefore modify the estimated separating matrix by an appropriate multiplier to obtain the required asymmetry, extending the AMUSE algorithm to the subband domain. The desired subband is then processed with the updated separating matrix to extract subband sub-components from each class. The extracted subband sub-component sources are subjected to feature extraction (power spectral density) followed by linear discriminant analysis (LDA).
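
    The filter-bank decomposition step can be sketched with the PyWavelets package (our choice of implementation; the paper does not name one): a 4-level DWT of a 128 Hz trace places the 8-16 Hz alpha rhythm in the third detail band, which can then be reconstructed in isolation before any ICA post-processing.

        import numpy as np
        import pywt  # PyWavelets (an assumed implementation)

        # Synthetic "EEG" at 128 Hz: a 10 Hz (alpha) and a 22 Hz (beta) component.
        fs = 128.0
        t = np.arange(0.0, 4.0, 1.0 / fs)
        eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.sin(2 * np.pi * 22.0 * t)

        # 4-level DWT: detail bands roughly cover 32-64, 16-32, 8-16 and 4-8 Hz.
        coeffs = pywt.wavedec(eeg, 'db4', level=4)      # [A4, D4, D3, D2, D1]

        # Keep only D3 (~8-16 Hz) and reconstruct: the alpha rhythm in isolation.
        keep = [np.zeros_like(c) for c in coeffs]
        keep[2] = coeffs[2]
        alpha_band = pywt.waverec(keep, 'db4')
        print(alpha_band.shape, np.round(np.abs(alpha_band).max(), 2))  # amplitude near 1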

  8. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.

  9. Detecting and visualizing weak signatures in hyperspectral data

    NASA Astrophysics Data System (ADS)

    MacPherson, Duncan James

    This thesis evaluates existing techniques for detecting weak spectral signatures from remotely sensed hyperspectral data. Algorithms are presented that successfully detect hard-to-find 'mystery' signatures in unknown cluttered backgrounds. The term 'mystery' is used to describe a scenario where the spectral target and background endmembers are unknown. Sub-Pixel analysis and background suppression are used to find deeply embedded signatures which can be less than 10% of the total signal strength. Existing 'mystery target' detection algorithms are derived and compared. Several techniques are shown to be superior both visually and quantitatively. Detection performance is evaluated using confidence metrics that are developed. A multiple algorithm approach is shown to improve detection confidence significantly. Although the research focuses on remote sensing applications, the algorithms presented can be applied to a wide variety of diverse fields such as medicine, law enforcement, manufacturing, earth science, food production, and astrophysics. The algorithms are shown to be general and can be applied to both the reflective and emissive parts of the electromagnetic spectrum. The application scope is a broad one and the final results open new opportunities for many specific applications including: land mine detection, pollution and hazardous waste detection, crop abundance calculations, volcanic activity monitoring, detecting diseases in food, automobile or airplane target recognition, cancer detection, mining operations, extracting galactic gas emissions, etc.

  10. Exemplar-Based Image Inpainting Using a Modified Priority Definition.

    PubMed

    Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le

    2015-01-01

    Exemplar-based algorithms are a popular technique for image inpainting. They have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms search for suitable patches in source regions to fill in the missing parts, but they face a problem: improper selection of exemplars. To mitigate this problem, we introduce an independent strategy based on an investigation of the patch-propagation process. We first define a new separated priority definition that propagates geometry first and then synthesizes image textures, aiming to recover both image geometry and textures well. In addition, an automatic algorithm is designed to estimate the steps for the new separated priority definition. Compared with some competitive approaches, the new priority definition recovers image geometry and textures well.
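
    For reference, a sketch of the classic multiplicative priority that this paper's separated definition modifies: each fill-front pixel is scored by a confidence term (fraction of trusted, already-known pixels in its patch) times a data term (isophote strength projected onto the front normal). All names are ours and the front detection is deliberately crude.

        import numpy as np

        def priorities(confidence, mask, grad_x, grad_y, n_x, n_y, half=4):
            """Classic multiplicative priority P(p) = C(p) * D(p) on the fill front.

            mask is True where pixels are missing; confidence starts at 1 for known
            pixels and 0 for missing ones; grad_* are image gradients and n_* the
            unit normals of the fill front (all supplied by the caller).
            """
            known = ~mask
            # Crude fill front: missing pixels with a known 4-neighbour (np.roll
            # wraps at the borders, which is acceptable for a sketch).
            front = mask & (np.roll(known, 1, 0) | np.roll(known, -1, 0) |
                            np.roll(known, 1, 1) | np.roll(known, -1, 1))
            P = np.zeros(mask.shape)
            for i, j in zip(*np.nonzero(front)):
                win = (slice(max(i - half, 0), i + half + 1),
                       slice(max(j - half, 0), j + half + 1))
                C = confidence[win].sum() / confidence[win].size                       # confidence term
                D = abs(grad_x[i, j] * n_x[i, j] + grad_y[i, j] * n_y[i, j]) / 255.0   # data term
                P[i, j] = C * D
            return P

        # Tiny synthetic check: the right half of a 32x32 image is missing.
        h = w = 32
        mask = np.zeros((h, w), dtype=bool)
        mask[:, 16:] = True
        conf0 = (~mask).astype(float)
        gx, gy = np.full((h, w), 50.0), np.zeros((h, w))
        nx, ny = np.ones((h, w)), np.zeros((h, w))
        print(priorities(conf0, mask, gx, gy, nx, ny).max())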

  11. Parabolized Navier-Stokes solutions of separation and trailing-edge flows

    NASA Technical Reports Server (NTRS)

    Brown, J. L.

    1983-01-01

    A robust, iterative solution procedure is presented for the parabolized Navier-Stokes or higher order boundary layer equations as applied to subsonic viscous-inviscid interaction flows. The robustness of the present procedure is due, in part, to an improved algorithmic formulation. The present formulation is based on a reinterpretation of stability requirements for this class of algorithms and requires only second order accurate backward or central differences for all streamwise derivatives. Upstream influence is provided for through the algorithmic formulation and iterative sweeps in x. The primary contribution to robustness, however, is the boundary condition treatment, which imposes global constraints to control the convergence path. Discussed are successful calculations of subsonic, strong viscous-inviscid interactions, including separation. These results are consistent with Navier-Stokes solutions and triple deck theory.

  12. Exemplar-Based Image Inpainting Using a Modified Priority Definition

    PubMed Central

    Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le

    2015-01-01

    Exemplar-based algorithms are a popular technique for image inpainting. They have two important phases: deciding the filling-in order and selecting good exemplars. Traditional exemplar-based algorithms search for suitable patches in source regions to fill in the missing parts, but they face a problem: improper selection of exemplars. To mitigate this problem, we introduce an independent strategy based on an investigation of the patch-propagation process. We first define a new separated priority definition that propagates geometry first and then synthesizes image textures, aiming to recover both image geometry and textures well. In addition, an automatic algorithm is designed to estimate the steps for the new separated priority definition. Compared with some competitive approaches, the new priority definition recovers image geometry and textures well. PMID:26492491

  13. Synthesis of asymmetric polyetherimide membrane for CO2/N2 separation

    NASA Astrophysics Data System (ADS)

    Ahmad, A. L.; Salaudeen, Y. O.; Jawad, Z. A.

    2017-06-01

    Large emissions of carbon dioxide (CO2) to the environment require mitigation to avoid severe consequences for global climate change. The CO2 emissions generated by fossil fuel combustion within the power and industrial sectors need to be curbed quickly. These emissions can be abated using membrane technology, one of the most promising approaches for selective CO2/N2 separation. The purpose of this study is to synthesize an asymmetric polyetherimide (PEI) membrane and to establish its morphological characteristics for CO2/N2 separation. The PEI flat-sheet asymmetric membrane was fabricated by phase inversion with N-methyl-2-pyrrolidone (NMP) as the solvent and water-isopropanol as the coagulant. Polymer concentrations of 20, 25, and 30 wt.% were studied, and the structure and morphology of the produced membranes were observed using scanning electron microscopy (SEM). The membrane with the highest PEI concentration of 30 wt.% yielded an optimal CO2/N2 selectivity of 10.7 at 1 bar and 25 °C for pure gas, aided by the membrane surface morphology. The dense skin resulted from the non-solvent (water), while isopropanol generated a porous sponge structure. This appreciable separation performance makes the PEI asymmetric membrane an attractive alternative for CO2/N2 separation.

  14. Simultaneous inversion of multiple land surface parameters from MODIS optical-thermal observations

    NASA Astrophysics Data System (ADS)

    Ma, Han; Liang, Shunlin; Xiao, Zhiqiang; Shi, Hanyu

    2017-06-01

    Land surface parameters from remote sensing observations are critical in monitoring and modeling of global climate change and biogeochemical cycles. Current methods for estimating land surface variables usually focus on individual parameters separately even from the same satellite observations, resulting in inconsistent products. Moreover, no efforts have been made to generate global products from integrated observations from the optical to Thermal InfraRed (TIR) spectrum. Particularly, Middle InfraRed (MIR) observations have received little attention due to the complexity of the radiometric signal, which contains both reflected and emitted radiation. In this paper, we propose a unified algorithm for simultaneously retrieving six land surface parameters - Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), land surface albedo, Land Surface Emissivity (LSE), Land Surface Temperature (LST), and Upwelling Longwave radiation (LWUP) by exploiting MODIS visible-to-TIR observations. We incorporate a unified physical radiative transfer model into a data assimilation framework. The MODIS visible-to-TIR time series datasets include the daily surface reflectance product and MIR-to-TIR surface radiance, which are atmospherically corrected from the MODIS data using the Moderate Resolution Transmittance program (MODTRAN, ver. 5.0). LAI was first estimated using a data assimilation method that combines MODIS daily reflectance data and a LAI phenology model, and then the LAI was input to the unified radiative transfer model to simulate spectral surface reflectance and surface emissivity for calculating surface broadband albedo and emissivity, and FAPAR. LST was estimated from the MIR-TIR surface radiance data and the simulated emissivity, using an iterative optimization procedure. Lastly, LWUP was estimated using the LST and surface emissivity. The retrieved six parameters were extensively validated across six representative sites with different biome types, and compared with MODIS, GLASS, and GlobAlbedo land surface products. The results demonstrate that the unified inversion algorithm can retrieve temporally complete and physically consistent land surface parameters, and provides more accurate estimates of surface albedo, LST, and LWUP than existing products, with R2 values of 0.93 and 0.62, RMSE of 0.029 and 0.037, and BIAS values of 0.016 and 0.012 for the retrieved and MODIS albedo products, respectively, compared with field albedo measurements; R2 values of 0.95 and 0.93, RMSE of 2.7 and 4.2 K, and BIAS values of -0.6 and -2.7 K for the retrieved and MODIS LST products, respectively, compared with field LST measurements; and R2 values of 0.93 and 0.94, RMSE of 18.2 and 22.8 W/m2, and BIAS values of -2.7 and -14.6 W/m2 for the retrieved and MODIS LWUP products, respectively, compared with field LWUP measurements.

  15. Performance of Blind Source Separation Algorithms for FMRI Analysis using a Group ICA Method

    PubMed Central

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D.

    2007-01-01

    Independent component analysis (ICA) is a popular blind source separation (BSS) technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices, and second-order correlation based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study the variability among different ICA algorithms and propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA, and JADE all yield reliable results, each having their strengths in specific areas. EVD, an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for the iterative ICA algorithms, it is important to investigate the variability of the estimates from different runs. We test the consistency of the iterative algorithms, Infomax and FastICA, by running the algorithm a number of times with different initializations and note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis. PMID:17540281

  16. Performance of blind source separation algorithms for fMRI analysis using a group ICA method.

    PubMed

    Correa, Nicolle; Adali, Tülay; Calhoun, Vince D

    2007-06-01

    Independent component analysis (ICA) is a popular blind source separation technique that has proven to be promising for the analysis of functional magnetic resonance imaging (fMRI) data. A number of ICA approaches have been used for fMRI data analysis, and even more ICA algorithms exist; however, the impact of using different algorithms on the results is largely unexplored. In this paper, we study the performance of four major classes of algorithms for spatial ICA, namely, information maximization, maximization of non-Gaussianity, joint diagonalization of cross-cumulant matrices and second-order correlation-based methods, when they are applied to fMRI data from subjects performing a visuo-motor task. We use a group ICA method to study variability among different ICA algorithms, and we propose several analysis techniques to evaluate their performance. We compare how different ICA algorithms estimate activations in expected neuronal areas. The results demonstrate that the ICA algorithms using higher-order statistical information prove to be quite consistent for fMRI data analysis. Infomax, FastICA and joint approximate diagonalization of eigenmatrices (JADE) all yield reliable results, with each having its strengths in specific areas. Eigenvalue decomposition (EVD), an algorithm using second-order statistics, does not perform reliably for fMRI data. Additionally, for iterative ICA algorithms, it is important to investigate the variability of estimates from different runs. We test the consistency of the iterative algorithms Infomax and FastICA by running the algorithm a number of times with different initializations, and we note that they yield consistent results over these multiple runs. Our results greatly improve our confidence in the consistency of ICA for fMRI data analysis.

  17. A climatology of fine absorbing biomass burning, urban and industrial aerosols detected from satellites

    NASA Astrophysics Data System (ADS)

    Kalaitzi, Nikoleta; Hatzianastassiou, Nikos; Gkikas, Antonis; Papadimas, Christos D.; Torres, Omar; Mihalopoulos, Nikos

    2017-04-01

    Natural biomass burning (BB) along with anthropogenic urban and industrial aerosol particles, together labeled here as BU aerosols, contain black and brown carbon, which both strongly absorb solar radiation. BU aerosols therefore warm the atmosphere significantly and also cause adjustments to cloud properties, traditionally known as the cloud indirect and semi-direct effects. Given the role of BU aerosols in contemporary and future climate change, and the associated uncertainty, both underscored by the latest IPCC reports, there is an urgent need to improve our knowledge of the spatial and temporal variability of BU aerosols across the globe. Over the last few decades, thanks to the rapid development of satellite observational techniques and retrieval algorithms, it has become possible to detect BU aerosols from satellite measurements. However, care must be taken to ensure that BU can be distinguished from other aerosol types usually co-existing in the Earth's atmosphere. In the present study, an algorithm based on a synergy of different satellite measurements is presented, aiming to identify and quantify BU aerosols over the entire globe across multiple years. The objective is to build a satellite-based climatology of BU aerosols intended for various purposes. The produced regime, namely the spatial and temporal variability of BU aerosols, emphasizes the BU frequency of occurrence and intensity in terms of aerosol optical depth (AOD). The algorithm uses the following aerosol optical properties, which describe the size and atmospheric loading of BU aerosols: (i) spectral AOD, (ii) Ångström Exponent (AE), (iii) Fine Fraction (FF) and (iv) Aerosol Index (AI). The relevant data are taken from Collection 006 MODIS-Aqua, except for AI, which is taken from OMI-Aura. The identification of BU aerosols is based on a specific thresholding technique, with threshold values AI≥1.5, AE≥1.2 and FF≥0.6. The study spans the 11-year period 2005-2015, which makes it possible to examine the inter-annual variability and possible changes of BU aerosols. Emphasis is given to specific world areas known to be sources of BU emissions. An effort is also made to separate, with the algorithm, BB from the other BU aerosols, aiming to create a satellite database of biomass burning aerosols. The algorithm's results for BB aerosols, and its ability to separate them, are evaluated through comparisons against the global satellite databases of MODIS active fire counts as well as AIRS carbon monoxide (CO), a key indicator of biomass burning activity. The algorithm estimates frequencies of occurrence of BU aerosols reaching up to 10 days/year and AOD values up to 1.5 or even larger. The results indicate seasonal cycles of biomass burning in southern and central Africa as well as in South America (Amazonia), with the highest BU frequencies during June-September, December-February and August-October, respectively, and they successfully reproduce features such as the export of African BB aerosols into the Atlantic Ocean.
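
    The thresholding step reduces to a boolean mask over the three retrieved properties; a sketch with random stand-ins for the gridded MODIS and OMI fields follows (thresholds as given above; everything else is illustrative).

        import numpy as np

        rng = np.random.default_rng(5)
        ai = rng.uniform(0.0, 3.0, (180, 360))   # stand-in for OMI-Aura Aerosol Index
        ae = rng.uniform(0.0, 2.0, (180, 360))   # stand-in for MODIS-Aqua Angstrom Exponent
        ff = rng.uniform(0.0, 1.0, (180, 360))   # stand-in for MODIS-Aqua Fine Fraction

        # A cell is flagged as BU aerosol when all three thresholds are met.
        bu_mask = (ai >= 1.5) & (ae >= 1.2) & (ff >= 0.6)
        # Counting flagged days per cell over a year yields frequency-of-occurrence maps.
        print(f"flagged cells: {bu_mask.mean():.1%}")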

  18. 40 CFR 60.265 - Monitoring of operations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... quantity, by weight. (3) Time and duration of each tapping period and the identification of material tapped... only the volumetric flow rate through the capture system for control of emissions from the tapping... performance test. If emissions due to tapping are captured and ducted separately from emissions of the...

  19. 40 CFR 63.4551 - How do I demonstrate initial compliance with the emission limitations?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutants for Surface Coating... limits or work practice standards in §§ 63.4492 and 63.4493, respectively. You must conduct a separate...

  20. [Object Separation from Medical X-Ray Images Based on ICA].

    PubMed

    Li, Yan; Yu, Chun-yu; Miao, Ya-jian; Fei, Bin; Zhuang, Feng-yun

    2015-03-01

    X-ray medical images can reveal diseased tissue in patients and have important reference value for medical diagnosis. To address the problems of noise, poor tonal gradation, and overlapping, aliased organs in traditional X-ray images, this paper proposes a method that combines multi-spectral X-ray imaging with an independent component analysis (ICA) algorithm to separate the target object. First, image-denoising preprocessing based on ICA and sparse code shrinkage ensures the accuracy of target extraction. Then, according to the dominant organ proportions in the images, the aliased thickness matrix for each pixel is isolated. Finally, ICA obtains the unmixing matrix to reconstruct the target object using blind separation theory. With the ICA algorithm, it was found that when the number of iterations exceeds 40, the target objects separate successfully according to a subjective evaluation standard, and when the scale amplitudes lie in the interval [25, 45], the target images have high contrast and little distortion. A three-dimensional plot of the peak signal-to-noise ratio (PSNR) shows that the number of iterations and the amplitude have a strong influence on image quality. The best contrast and edge information in the experimental images were achieved with 85 iterations and an amplitude of 35 in the ICA algorithm.

  1. Regional uncertainty of GOSAT XCO2 retrievals in China: quantification and attribution

    NASA Astrophysics Data System (ADS)

    Bie, Nian; Lei, Liping; Zeng, ZhaoCheng; Cai, Bofeng; Yang, Shaoyuan; He, Zhonghua; Wu, Changjiang; Nassar, Ray

    2018-03-01

    The regional uncertainty of the column-averaged dry-air mole fraction of CO2 (XCO2) retrieved using different algorithms from the Greenhouse gases Observing SATellite (GOSAT), and its attribution, are still not well understood. This paper investigates the regional performance of XCO2 within a latitude band of 37-42° N, segmented into 8 cells on a 5° grid from west to east (80-120° E) in China, where typical land surface types and geographic conditions exist. The former include desert, grassland and built-up areas mixed with cropland; the latter include anthropogenic emissions that increase from west to east, including those from the megacity of Beijing. For these specific cells, we evaluate the regional uncertainty of GOSAT XCO2 retrievals by quantifying and attributing the consistency of XCO2 retrievals from four algorithms (ACOS, NIES, OCFP and SRFP) through intercomparison. These retrievals are then compared with XCO2 simulated by the high-resolution nested model for East Asia of the Goddard Earth Observing System 3-D chemical transport model (GEOS-Chem). We also introduce anthropogenic CO2 emissions data, generated from the investigation of surface emitting point sources conducted by the Ministry of Environmental Protection of China, into the GEOS-Chem simulations of XCO2 over the Chinese mainland. The results indicate that (1) regionally, the pairwise comparison of XCO2 retrievals shows smaller absolute biases of 0.7-1.1 ppm for the four algorithms in the eastern cells, which are covered by built-up areas mixed with cropland and subject to intensive anthropogenic emissions, than in the western desert cells (1.0-1.6 ppm) with a high-brightness surface. (2) Compared with XCO2 simulated by GEOS-Chem (GEOS-XCO2), the XCO2 values from ACOS and SRFP agree better, while those from OCFP are the least consistent with GEOS-XCO2. (3) In their spatio-temporal patterns of XCO2, ACOS and SRFP are similar, while OCFP differs markedly from the others. In conclusion, the discrepancy among the four algorithms is smallest in the eastern cells of the study area, where the megacity of Beijing is located and anthropogenic CO2 emissions are strong, which implies that XCO2 from satellite observations could be reliably applied in assessing atmospheric CO2 enhancements induced by anthropogenic CO2 emissions. The large inconsistency among the four algorithms in the western deserts, which exhibit high albedo and dust aerosols, demonstrates that further improvement is still necessary in such regions, even though many algorithms have endeavored to minimize the effects of aerosol scattering and surface albedo.

  2. Multispectral autofluorescence diagnosis of non-melanoma cutaneous tumors

    NASA Astrophysics Data System (ADS)

    Borisova, Ekaterina; Dogandjiiska, Daniela; Bliznakova, Irina; Avramov, Latchezar; Pavlova, Elmira; Troyanova, Petranka

    2009-07-01

    Fluorescence analysis of basal cell carcinoma (BCC), squamous cell carcinoma (SCC), keratoacanthoma and benign cutaneous lesions was carried out during the initial phase of a clinical trial at the National Oncological Center - Sofia. Excitation sources with emission maxima at 365, 380, 405, 450 and 630 nm were applied to better differentiate the fluorescence of non-melanoma malignant cutaneous lesions and to spectrally discriminate them from benign pathologies. Major spectral features are addressed, and diagnostic discrimination algorithms based on the lesions' emission properties are proposed. The diagnostic algorithms and evaluation procedures will be applied to the development of a clinical optical-biopsy system for skin cancer detection within the National Oncological Center and other university hospital dermatology departments in our country.

  3. Merging NLO multi-jet calculations with improved unitarization

    NASA Astrophysics Data System (ADS)

    Bellm, Johannes; Gieseke, Stefan; Plätzer, Simon

    2018-03-01

    We present an algorithm to combine multiple matrix elements at LO and NLO with a parton shower, building on the unitarized merging paradigm. The inclusion of higher orders and multiplicities reduces the scale uncertainties for observables sensitive to hard emissions, while preserving the features of inclusive quantities. The combination allows further soft and collinear emissions to be predicted by the all-order parton-shower approximation. We inspect the impact of terms that are formally, but not parametrically, negligible. We present results for a number of collider observables where multiple jets are observed, either on their own or in the presence of additional uncoloured particles. The algorithm is implemented in the event generator Herwig.

  4. Visualization of NO2 emission sources using temporal and spatial pattern analysis in Asia

    NASA Astrophysics Data System (ADS)

    Schütt, A. M. N.; Kuhlmann, G.; Zhu, Y.; Lipkowitsch, I.; Wenig, M.

    2016-12-01

    Nitrogen dioxide (NO2) is an indicator of population density and level of development, but the contributions of the different emission sources to the overall concentrations remain mostly unknown. In order to allocate fractions of OMI NO2 to emission types, we investigate several temporal cycles and regional patterns. Our analysis is based on daily maps of tropospheric NO2 vertical column densities (VCDs) from the Ozone Monitoring Instrument (OMI). The data set is mapped to a high-resolution grid by a histopolation algorithm based on a continuous parabolic spline, which produces more realistic smooth distributions while reproducing the measured OMI values when integrated over ground-pixel areas. In the resulting sequence of zoomed-in maps, we analyze weekly and annual cycles for cities, countryside, and highways in China, Japan, and the Republic of Korea, look for patterns and trends, and compare the derived results to emission sources in Central Europe and North America. Exploiting the increased heating in winter compared with summer and the heavier traffic during the week than on Sundays, we separate the contributions of traffic, heating, and power plants and visualize maps of the different sources. We also examine the influence of emission-control measures during major events such as the 2008 Olympic Games and the 2010 World Expo as a way to confirm our classification of NO2 emission sources.

  5. Optimizing management of condensing heat and the cooling of gas compression in an oxy block using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Brzęczek, Mateusz; Bartela, Łukasz

    2013-12-01

    This paper presents the parameters of a reference oxy-combustion block operating with supercritical steam parameters, equipped with an air separation unit and a carbon dioxide capture and compression installation. The possibility of recovering heat in the analyzed power plant is discussed. The decision variables and the thermodynamic objective functions for the optimization algorithm are identified. The principles of operation of the genetic algorithm and the methodology of the calculations are presented. A sensitivity analysis was performed for the best solutions to determine the effects of the selected variables on the power and efficiency of the unit. Optimization of the heat recovery from the air separation unit, the flue-gas conditioning and the CO2 capture and compression installation using a genetic algorithm was designed to replace the low-pressure section of the regenerative water heaters of the steam cycle in the analyzed unit. The result is an increase in the power and efficiency of the entire power plant.
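
    The abstract does not give the authors' genetic-algorithm settings, so the following is only a generic sketch of how such an optimization might look: a real-coded GA with tournament selection, blend crossover and Gaussian mutation, maximizing a user-supplied efficiency objective over bounded decision variables. Here `objective` and `bounds` are hypothetical stand-ins for the plant model and the decision variables identified in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def genetic_optimize(objective, bounds, pop=40, gens=100, mut=0.1):
        """Maximize `objective` over box-bounded decision variables with a
        minimal real-coded GA (tournament selection, blend crossover,
        Gaussian mutation, one-elite survival)."""
        lo, hi = np.array(bounds, dtype=float).T
        x = rng.uniform(lo, hi, size=(pop, len(bounds)))
        for _ in range(gens):
            f = np.apply_along_axis(objective, 1, x)
            best = x[np.argmax(f)].copy()
            # tournament selection between two random individuals
            i, j = rng.integers(0, pop, size=(2, pop))
            parents = np.where((f[i] > f[j])[:, None], x[i], x[j])
            # blend crossover plus Gaussian mutation, clipped to bounds
            alpha = rng.random((pop, 1))
            kids = alpha * parents + (1.0 - alpha) * parents[::-1]
            kids += rng.normal(0.0, mut * (hi - lo), size=kids.shape)
            x = np.clip(kids, lo, hi)
            x[0] = best                      # elitism
        f = np.apply_along_axis(objective, 1, x)
        return x[np.argmax(f)], f.max()

    # hypothetical usage with two decision variables (e.g. heat-flow splits):
    # eff = lambda v: -(v[0] - 0.3)**2 - (v[1] - 0.7)**2
    # x_best, f_best = genetic_optimize(eff, [(0, 1), (0, 1)])
    ```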

  6. Algorithm for removing scalp signals from functional near-infrared spectroscopy signals in real time using multidistance optodes.

    PubMed

    Kiguchi, Masashi; Funane, Tsukasa

    2014-11-01

    A real-time algorithm for removing scalp-blood signals from functional near-infrared spectroscopy signals is proposed. Scalp and deep signals have different dependences on the source-detector distance, and this characteristic is used to separate them. The algorithm was validated through an experiment using a dynamic phantom in which shallow and deep absorptions were changed independently. The algorithm for measurement of oxygenated and deoxygenated hemoglobin using two wavelengths was obtained explicitly. This algorithm is potentially useful for real-time systems, e.g., brain-computer interfaces and neurofeedback systems.
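
    The paper derives an explicit two-wavelength algorithm; the sketch below illustrates only the underlying principle, namely that the scalp contribution measured at a short source-detector distance can be scaled and subtracted from the long-distance signal. The scaling factor `k` and its least-squares baseline estimation are illustrative assumptions, not the paper's derivation.

    ```python
    import numpy as np

    def remove_scalp(short_od, long_od, k=None):
        """Estimate the deep (cerebral) signal by subtracting the scaled
        short-distance (scalp) signal from the long-distance one. If the
        scaling factor k is unknown, it is estimated by least squares,
        which implicitly assumes the deep signal is uncorrelated with the
        scalp signal over the fitting window."""
        short_od = np.asarray(short_od, dtype=float)
        long_od = np.asarray(long_od, dtype=float)
        if k is None:
            k = short_od @ long_od / (short_od @ short_od)
        return long_od - k * short_od
    ```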

  7. An algorithm for converting a virtual-bond chain into a complete polypeptide backbone chain

    NASA Technical Reports Server (NTRS)

    Luo, N.; Shibata, M.; Rein, R.

    1991-01-01

    A systematic analysis is presented of the algorithm for converting a virtual-bond chain, defined by the coordinates of the alpha-carbons of a given protein, into a complete polypeptide backbone. An alternative algorithm, based upon the same set of geometric parameters used in the Purisima-Scheraga algorithm but with a different "linkage map" of the algorithmic procedures, is proposed. Within the framework of this approach, the global virtual-bond chain geometric constraints are more easily separable from the local peptide geometric and energetic constraints derived from, for example, the Ramachandran criterion.

  8. Modelled and field measurements of biogenic hydrocarbon emissions from a Canadian deciduous forest

    NASA Astrophysics Data System (ADS)

    Fuentes, J. D.; Wang, D.; Den Hartog, G.; Neumann, H. H.; Dann, T. F.; Puckett, K. J.

    The Biogenic Emission Inventory System (BEIS) used by the United States Environmental Protection Agency (Lamb et al., 1993, Atmospheric Environment 21, 1695-1705; Pierce and Waldruff, 1991, J. Air Waste Man. Ass. 41, 937-941) was tested for its ability to provide realistic microclimate descriptions within a deciduous forest in Canada. The microclimate description within plant canopies is required because isoprene emission rates from plants are strongly influenced by foliage temperature and by the photosynthetically active radiation impinging on leaves, while monoterpene emissions depend primarily on leaf temperature. Model microclimate results combined with plant emission rates and the local biomass distribution were used to derive isoprene and α-pinene emissions from the deciduous forest canopy. In addition, modelled isoprene emission estimates were compared to measured emission rates at the leaf level. The current model formulation provides realistic microclimatic conditions for the forest crown, where modelled and measured air and foliage temperatures are within 3°C. However, the model provides inadequate microclimate characterizations in the lower canopy, where estimated and measured foliage temperatures differ by as much as 10°C. This poor agreement may be partly due to improper model characterization of relative humidity and ambient temperature within the canopy. These uncertainties in estimated foliage temperature can lead to two-fold underestimates of hydrocarbon emissions. Moreover, the model overestimates hydrocarbon emissions during the early part of the growing season and underestimates emissions during the middle and latter parts of the growing season. These emission uncertainties arise because of the assumed constant biomass distribution of the forest and constant hydrocarbon emission rates throughout the season. The BEIS model, which is presently used in Canada to estimate inventories of hydrocarbon emissions from vegetation, underestimates emission rates by at least two-fold compared to emissions derived from field measurements. The isoprene emission algorithm proposed by Guenther et al. (1993), applied at the leaf level, agrees relatively well with measurements. Field measurements indicate that isoprene emissions change with leaf ontogeny and differ amongst tree species. Emission rates defined as a function of foliage development stage and plant species need to be introduced into the hydrocarbon emission algorithms. Extensive model evaluation and more hydrocarbon emission measurements from different plant species are required to fully assess the appropriateness of this emission calculation approach for Canadian forests.
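
    The Guenther et al. (1993) leaf-level isoprene algorithm referred to above corrects a basal emission rate for temperature and light. A sketch with the commonly published G93 coefficients follows; the function name `g93_isoprene` is invented here, and the original paper should be consulted for authoritative values.

    ```python
    import numpy as np

    R = 8.314                                 # gas constant, J mol-1 K-1
    ALPHA, CL1 = 0.0027, 1.066                # light-response coefficients
    CT1, CT2, TM = 95000.0, 230000.0, 314.0   # J/mol, J/mol, K
    TS = 303.0                                # standard leaf temperature, K

    def g93_isoprene(E_s, T, L):
        """Leaf-level isoprene emission after Guenther et al. (1993).

        E_s: basal rate at TS and 1000 umol m-2 s-1 PAR
        T  : leaf temperature in K
        L  : photosynthetically active radiation in umol m-2 s-1
        """
        c_l = ALPHA * CL1 * L / np.sqrt(1.0 + ALPHA**2 * L**2)
        c_t = (np.exp(CT1 * (T - TS) / (R * TS * T))
               / (1.0 + np.exp(CT2 * (T - TM) / (R * TS * T))))
        return E_s * c_l * c_t
    ```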

  9. Identification of column edges of DNA fragments by using K-means clustering and mean algorithm on lane histograms of DNA agarose gel electrophoresis images

    NASA Astrophysics Data System (ADS)

    Turan, Muhammed K.; Sehirli, Eftal; Elen, Abdullah; Karas, Ismail R.

    2015-07-01

    Gel electrophoresis (GE) is one of the most widely used methods to separate DNA, RNA and protein molecules according to size, weight and quantity in fields such as genetics, molecular biology, biochemistry and microbiology. The main way to separate the molecules is to find the borders of each molecule fragment. This paper presents a software application that detects the column edges of DNA fragments in three steps. In the first step, the application obtains lane histograms of agarose gel electrophoresis images by projection onto the x-axis. In the second step, it uses the k-means clustering algorithm to classify the point values of a lane histogram into left-side values, right-side values and undesired values. In the third step, the column edges of DNA fragments are found by using the mean algorithm and mathematical operations to separate the DNA fragments from the background in a fully automated way. In addition, the application reports the locations of the DNA fragments and how many DNA fragments exist in images captured by a scientific camera.
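
    A rough sketch of the second and third steps, assuming scikit-learn's KMeans as the clustering backend; the feature construction and the mean-position edge estimate are simplified stand-ins for the paper's procedure, and `column_edge_positions` is an invented name.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def column_edge_positions(lane_histogram, n_clusters=3):
        """Cluster the (position, intensity) points of a lane histogram
        into three groups, intended to mimic the paper's left-side,
        right-side and undesired classes, and return the mean position
        of each cluster as a crude edge estimate."""
        x = np.arange(len(lane_histogram), dtype=float)
        feats = np.column_stack([x, np.asarray(lane_histogram, float)])
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
        return sorted(x[labels == c].mean() for c in range(n_clusters))
    ```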

  10. SST algorithm based on radiative transfer model

    NASA Astrophysics Data System (ADS)

    Mat Jafri, Mohd Z.; Abdullah, Khiruddin; Bahari, Alui

    2001-03-01

    An algorithm for measuring sea surface temperature (SST) without recourse to in-situ data for calibration is proposed. The algorithm, which is based on the infrared signal recorded by the satellite sensor, is composed of three terms: the surface emission, the up-welling radiance emitted by the atmosphere, and the down-welling atmospheric radiance reflected at the sea surface. This algorithm requires the transmittance values of the thermal bands. The angular dependence of the transmittance function was modeled using the MODTRAN code, with radiosonde data as input. The expression of transmittance as a function of zenith view angle was obtained for each channel through regression of the MODTRAN output. The Ocean Color and Temperature Scanner (OCTS) data from the Advanced Earth Observing Satellite (ADEOS) were used in this study. The study area covers the seas off the northwest of the Peninsular Malaysia region. In-situ data (ship-collected SST values) were used for verification of the results. Cloud-contaminated pixels were masked out using the standard procedures applied to Advanced Very High Resolution Radiometer (AVHRR) data. The cloud-free pixels at the in-situ sites were extracted for analysis. The OCTS data were then substituted into the proposed algorithm, with the appropriate transmittance value assigned for each channel. Accuracy was assessed from the correlation and the rms deviations between the computed and ship-collected values. The results were also compared with results from the OCTS multichannel sea surface temperature algorithm; the comparison produced high correlation values, and the performance of this algorithm is comparable with the established OCTS algorithm. The effect of emissivity on the retrieved SST values was also investigated. An SST map was generated and contoured manually.
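
    The three-term structure of the algorithm can be made concrete by inverting the single-channel radiative transfer equation for the surface term. The sketch below assumes radiances in SI units and a known band-effective wavelength; it illustrates the inversion principle rather than the authors' exact formulation.

    ```python
    import numpy as np

    C1 = 1.191042e-16    # first radiation constant for radiance, W m2 sr-1
    C2 = 1.4387752e-2    # second radiation constant, m K

    def planck_inv(L, lam):
        """Brightness temperature (K) from spectral radiance L
        (W m-2 sr-1 m-1) at wavelength lam (m)."""
        return C2 / (lam * np.log(1.0 + C1 / (lam**5 * L)))

    def sst_single_channel(L_obs, tau, L_up, L_down, emis, lam):
        """Invert the three-term radiative transfer equation
           L_obs = emis*B(Ts)*tau + L_up + (1 - emis)*L_down*tau
        for the sea surface temperature Ts."""
        B_surface = (L_obs - L_up - (1.0 - emis) * L_down * tau) / (emis * tau)
        return planck_inv(B_surface, lam)
    ```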

  11. Effects of soil water content and elevated CO2 concentration on the monoterpene emission rate of Cryptomeria japonica.

    PubMed

    Mochizuki, Tomoki; Amagai, Takashi; Tani, Akira

    2018-09-01

    Monoterpenes emitted from plants contribute to the formation of secondary pollution and affect the climate system. Monoterpene emission rates may be affected by environmental changes such as the increasing CO2 concentration caused by fossil fuel burning and drought stress induced by climate change. We measured monoterpene emissions from Cryptomeria japonica clone saplings grown under different CO2 concentrations (control: ambient CO2 level; elevated CO2: 1000 μmol mol⁻¹). The saplings were planted in the ground and we did not artificially control the soil water content (SWC). The relationship between the monoterpene emissions and the naturally varying SWC was investigated. The dominant monoterpene was α-pinene, followed by sabinene. The monoterpene emission rates were exponentially correlated with temperature for all measurements and were normalized (to 35°C) for each measurement day. The daily normalized monoterpene emission rates (Es0.10) were positively and linearly correlated with SWC under both control and elevated CO2 conditions (control: r² = 0.55; elevated CO2: r² = 0.89). The slope of the regression line of Es0.10 against SWC was significantly higher under elevated CO2 than under control conditions (ANCOVA: P < 0.01), indicating that the effect of CO2 concentration on monoterpene emission rates differed by soil water status. The monoterpene emission rates estimated by considering temperature and SWC (improved G93 algorithm) agreed better with the measured monoterpene emission rates than the rates estimated by considering temperature alone (G93 algorithm). Our results demonstrate that the combined effects of SWC and CO2 concentration are important in controlling the monoterpene emissions from C. japonica clone saplings. If these relationships can be applied to other coniferous tree species, our results may help to improve the accuracy of monoterpene emission estimates from coniferous forests as affected by climate change now and in the foreseeable future.

  12. NASA MEaSUREs Combined ASTER and MODIS Emissivity over Land (CAMEL) Uncertainty Estimation

    NASA Astrophysics Data System (ADS)

    Feltz, M.; Borbas, E. E.; Knuteson, R. O.; Hulley, G. C.; Hook, S. J.

    2016-12-01

    Under the NASA MEaSUREs project, a new global land surface emissivity database is being made available as part of the Unified and Coherent Land Surface Temperature and Emissivity Earth System Data Record. This new CAMEL emissivity database is created by merging the MODIS baseline-fit emissivity database (UWIREMIS) developed at the University of Wisconsin-Madison and the ASTER Global Emissivity Dataset v4 produced at the Jet Propulsion Laboratory. The combined CAMEL product leverages the ability of ASTER's 5 bands to more accurately resolve the TIR (8-12 micron) region and the ability of UWIREMIS to provide information throughout the 3.6-12 micron IR region. It will be made available for 2000 through 2017 as monthly means at 5 km resolution for 13 bands within the 3.6-14.3 micron region, and will also be extended to 417 infrared spectral channels using a principal component regression approach. Uncertainty estimates for CAMEL will be provided that combine temporal, spatial, and algorithm variability into a total uncertainty estimate for the emissivity product. The spatial and temporal uncertainties are calculated as the standard deviation of the surrounding 5x5 pixels and of the 3 neighboring months, respectively, while the algorithm uncertainty is calculated from the difference between the two CAMEL emissivity inputs, the ASTER GED and MODIS baseline-fit products. This work describes these uncertainty estimation methods in detail and shows first results. Global monthly results for different seasons are shown, as well as case-study examples at locations with different land surface types. Comparisons of the case studies to both laboratory values and an independent emissivity climatology derived from IASI measurements (Dan Zhou et al., IEEE Trans., 2011) are included.
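
    A sketch of the stated uncertainty components follows: the spatial term is the standard deviation over the surrounding 5x5 pixels, the temporal term over the three neighbouring months, and the algorithm term a difference of the two input products. Combining the three in quadrature is our own assumption, since the abstract only says they enter a total estimate; the function name and array layout are likewise invented.

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter

    def camel_uncertainty(emis_months, aster, modis_bf, month=1):
        """Total emissivity uncertainty for the middle month.

        emis_months: (3, ny, nx) emissivity for the target month and
                     its two neighbours
        aster, modis_bf: the two (ny, nx) input products
        """
        spatial = generic_filter(emis_months[month], np.std, size=5)
        temporal = emis_months.std(axis=0)
        algorithm = np.abs(aster - modis_bf)
        # quadrature combination (assumption, not stated in the abstract)
        return np.sqrt(spatial**2 + temporal**2 + algorithm**2)
    ```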

  13. A three-dimensional model-based partial volume correction strategy for gated cardiac mouse PET imaging

    NASA Astrophysics Data System (ADS)

    Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.

    2012-07-01

    Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left-ventricle (LV) myocardial activity into adjacent organs results in partial-volume (PV) losses, leading to underestimation of myocardial activity. A PV correction method was developed to restore the accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile comprising myocardial, background and blood activities, separated into three compartments by the endocardial radius and the myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data, and improved image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly affect PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species, provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
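
    To make the model concrete, the sketch below builds the regional five-parameter profile (blood, myocardial and background activities, endocardial radius, wall thickness) in 1-D along a radial direction and blurs it with a Gaussian point spread function; fitting it to a measured radial profile then yields a PV-free myocardial activity estimate. The paper's implementation is 3-D and its PSF and fit details differ; all names and values here are placeholders.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.optimize import curve_fit

    DR = 0.1        # radial sampling, mm (placeholder)
    FWHM = 1.8      # assumed Gaussian PSF FWHM, mm (placeholder)

    def lv_profile(r, a_blood, a_myo, a_bkg, r_endo, wall):
        """Five-parameter radial LV model (blood pool | myocardium |
        background) convolved with a Gaussian PSF of width FWHM."""
        model = np.where(r < r_endo, a_blood,
                         np.where(r < r_endo + wall, a_myo, a_bkg))
        return gaussian_filter1d(model.astype(float), FWHM / 2.355 / DR)

    # fitting a measured radial profile recovers the PV-free a_myo:
    # r = np.arange(0.0, 8.0, DR)
    # popt, _ = curve_fit(lv_profile, r, measured_profile,
    #                     p0=[0.2, 1.0, 0.1, 1.5, 1.0])
    ```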

  14. Development of microwave rainfall retrieval algorithm for climate applications

    NASA Astrophysics Data System (ADS)

    KIM, J. H.; Shin, D. B.

    2014-12-01

    With satellite datasets accumulated over decades, it is possible for satellite-based data to contribute to sustained climate applications. Level-3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level-3 rainfall directly, whereas the Goddard profiling (GPROF) algorithm first generates instantaneous rainfall and then a temporal and spatial averaging process leads to level-3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfall. However, the algorithm is designed to produce instantaneous rainfall at an optimal resolution showing reduced non-linearity in the brightness temperature (TB)-rain rate (R) relation. This resolution is found to effectively utilize the emission channels, whose footprints are relatively larger than those of the scattering channels. The algorithm is mainly composed of a-priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates obtained from WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data-mining technique is also applied: the entire DB is classified into eight types based on Köppen climate classification criteria using reanalysis data, and the one sub-DB presenting the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP.
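
    A minimal sketch of the Bayesian inversion step, assuming a Gaussian likelihood with uncorrelated channel noise over the selected sub-DB; the posterior-mean estimator shown is a common choice for this class of retrieval, though the abstract does not spell out the exact estimator used.

    ```python
    import numpy as np

    def bayesian_rainrate(tb_obs, tb_db, rr_db, sigma=2.0):
        """Posterior-mean rain rate from an a-priori database.

        tb_obs: observed brightness temperatures, shape (n_channels,)
        tb_db : simulated TBs for each DB entry, shape (n, n_channels)
        rr_db : rain rate of each DB entry, shape (n,)
        sigma : assumed channel noise in K (placeholder value)
        """
        d2 = np.sum((tb_db - tb_obs)**2, axis=1)
        w = np.exp(-0.5 * d2 / sigma**2)    # Gaussian likelihood weights
        return np.sum(w * rr_db) / np.sum(w)
    ```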

  15. Planck 2015 results. XXII. A map of the thermal Sunyaev-Zeldovich effect

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chiang, H. C.; Christensen, P. R.; Churazov, E.; Clements, D. L.; Colombo, L. P. L.; Combet, C.; Comis, B.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Giard, M.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Holmes, W. A.; Hornstrup, A.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lacasa, F.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Macías-Pérez, J. F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tramonte, D.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We have constructed all-sky Compton parameter maps (y-maps) of the thermal Sunyaev-Zeldovich (tSZ) effect by applying specifically tailored component separation algorithms to the 30 to 857 GHz frequency channel maps from the Planck satellite. These reconstructed y-maps are delivered as part of the Planck 2015 release. The y-maps are characterized in terms of noise properties and residual foreground contamination, mainly thermal dust emission at large angular scales, and cosmic infrared background and extragalactic point sources at small angular scales. Specific masks are defined to minimize foreground residuals and systematics. Using these masks, we compute the y-map angular power spectrum and higher-order statistics. From these we conclude that the y-map is dominated by tSZ signal in the multipole range 20 < ℓ < 600. We compare the measured tSZ power spectrum and higher-order statistics to various physically motivated models and discuss the implications of our results in terms of cluster physics and cosmology.

  16. Complex Wall Boundary Conditions for Modeling Combustion in Catalytic Channels

    NASA Astrophysics Data System (ADS)

    Zhu, Huayang; Jackson, Gregory

    2000-11-01

    Monolith catalytic reactors for exothermic oxidation are used in automobile exhaust clean-up and in ultra-low-emission combustion systems. The reactors present a unique coupling between mass, heat, and momentum transport in a channel-flow configuration. The use of porous catalytic coatings along the channel wall presents a complex boundary condition when modeled with the two-dimensional channel flow. This work presents a 2-D transient model for predicting the performance of catalytic combustion systems for methane oxidation on Pd catalysts. The model solves the 2-D compressible transport equations for momentum, species, and energy, coupled to a porous washcoat model for the wall boundary conditions. A time-splitting algorithm is used to separate the stiff chemical reactions from the convective/diffusive equations for the channel flow. A detailed surface chemistry mechanism is incorporated in the catalytic wall model and is used to predict transient ignition and steady-state conversion of CH4-air flows in the catalytic reactor.
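
    A first-order operator-splitting step of the kind described might look as follows, with the stiff chemistry integrated implicitly over the time step and the channel-flow transport advanced separately. `chem_rhs` and `transport_step` are hypothetical placeholders for the model-specific operators, not functions from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def advance(u, dt, chem_rhs, transport_step):
        """One step of first-order time splitting: integrate the stiff
        chemistry ODEs du/dt = chem_rhs(t, u) with an implicit (BDF)
        solver over dt, then apply the explicit convective/diffusive
        update transport_step(u, dt)."""
        sol = solve_ivp(chem_rhs, (0.0, dt), np.asarray(u, float),
                        method="BDF")
        return transport_step(sol.y[:, -1], dt)
    ```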

  17. Photon-photon scattering at the high-intensity frontier

    NASA Astrophysics Data System (ADS)

    Gies, Holger; Karbstein, Felix; Kohlfürst, Christian; Seegert, Nico

    2018-04-01

    The tremendous progress in high-intensity laser technology and the establishment of dedicated high-field laboratories in recent years have paved the way towards a first observation of quantum vacuum nonlinearities at the high-intensity frontier. We advocate a particularly prospective scenario, where three synchronized high-intensity laser pulses are brought into collision, giving rise to signal photons, whose frequency and propagation direction differ from the driving laser pulses, thus providing various means to achieve an excellent signal to background separation. Based on the theoretical concept of vacuum emission, we employ an efficient numerical algorithm which allows us to model the collision of focused high-intensity laser pulses in unprecedented detail. We provide accurate predictions for the numbers of signal photons accessible in experiment. Our study is the first to predict the precise angular spread of the signal photons, and paves the way for a first verification of quantum vacuum nonlinearity in a well-controlled laboratory experiment at one of the many high-intensity laser facilities currently coming online.

  18. Absorption cooling sources atmospheric emissions decrease by implementation of simple algorithm for limiting temperature of cooling water

    NASA Astrophysics Data System (ADS)

    Wojdyga, Krzysztof; Malicki, Marcin

    2017-11-01

    The constant drive to improve energy efficiency forces activities aimed at reducing energy consumption and hence the amount of pollutant emissions to the atmosphere. Cooling demand, both for air conditioning and for process cooling, plays an increasingly important role in the summer balance of the Polish electricity generation and distribution system. In recent years, demand for electricity during the summer months has been steadily and significantly increasing, leading to deficits of energy availability during particularly hot periods. This causes growing importance of, and interest in, trigeneration sources and heat-recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, usually an absorption chiller based on a lithium bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise unused energy. The publication presents a simple algorithm designed to reduce the amount of heat supplied to absorption chillers producing chilled water for air-conditioning purposes by reducing the temperature of the cooling water, and its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental benefit has been rated for specific sources, which enabled an evaluation of implementing the simple algorithm at sources operating nationally.
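
    The abstract does not reproduce the algorithm itself, so the following is only a guess at its general shape: a limiting rule that lets the cooling-water temperature track the cooling-tower capability (lower cooling-water temperature generally improves absorption-chiller efficiency and so reduces driving-heat demand) while respecting the chiller's minimum allowed value, e.g. the LiBr crystallization margin. All names and numbers are illustrative placeholders.

    ```python
    def cooling_water_setpoint(t_wet_bulb, approach=3.0,
                               t_min=24.0, t_max=32.0):
        """Simple limiting rule for the cooling-water temperature of an
        absorption chiller: follow the cooling tower's capability
        (wet-bulb temperature plus an approach) but never go below the
        chiller's minimum permitted cooling-water temperature. All
        values in degC; the limits are illustrative, not from the paper."""
        return min(max(t_wet_bulb + approach, t_min), t_max)
    ```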

  19. Physically corrected forward operators for induced emission tomography: a simulation study

    NASA Astrophysics Data System (ADS)

    Viganò, Nicola Roberto; Solé, Vicente Armando

    2018-03-01

    X-ray emission tomography techniques applied to non-radioactive materials allow one to investigate different and important aspects of matter that are usually not addressable with standard x-ray transmission tomography, such as density, chemical composition and crystallographic information. However, the quantitative reconstruction of the investigated properties is hindered by additional problems, including the self-attenuation of the emitted radiation. Work has been done in the past, especially concerning x-ray fluorescence tomography, but it has always focused on solving very specific problems. The novelty of this work resides in addressing the problem of induced emission tomography from a much wider perspective, introducing a unified discrete representation that can be used to modify existing algorithms to reconstruct the data of the different types of experiments. The direct outcome is a clear and easy mathematical description of the implementation details of such algorithms, despite small differences in geometry and other practical aspects, but also the possibility of expressing the reconstruction as a minimization problem, allowing the use of variational methods and a more flexible modeling of the noise involved in the detection process. In addition, we look at the results of a few selected simulated-data reconstructions that describe the effect of physical corrections such as self-attenuation, and the response of the adapted reconstruction algorithms to noise.

  20. Subspace algorithms for identifying separable-in-denominator 2D systems with deterministic-stochastic inputs

    NASA Astrophysics Data System (ADS)

    Ramos, José A.; Mercère, Guillaume

    2016-12-01

    In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, but here we do so for the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.

  1. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    NASA Astrophysics Data System (ADS)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.
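
    For illustration, a generic compressed-sensing reconstruction from sparsely sampled Fourier components can be posed as an l1-regularized least-squares problem and solved by ISTA, as sketched below. This is a textbook stand-in, not the actual VIS_CS algorithm, whose details are in the paper; all names and the (u, v) spatial-frequency convention are assumptions.

    ```python
    import numpy as np

    def ista_visibilities(vis, u, v, shape, lam=0.01, n_iter=200):
        """ISTA for min ||vis - F x||^2 + lam*||x||_1 with x >= 0,
        where F maps an image to complex Fourier components at the
        spatial frequencies (u, v), given in cycles per pixel."""
        ny, nx = shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        F = np.exp(-2j * np.pi * (np.outer(u, xx.ravel()) +
                                  np.outer(v, yy.ravel())))
        step = 1.0 / np.linalg.norm(F, 2)**2   # 1/Lipschitz constant
        x = np.zeros(ny * nx)
        for _ in range(n_iter):
            grad = (F.conj().T @ (F @ x - vis)).real
            # gradient step followed by non-negative soft thresholding
            x = np.maximum(x - step * (grad + lam), 0.0)
        return x.reshape(shape)
    ```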

  2. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  3. A Charrelation Matrix-Based Blind Adaptive Detector for DS-CDMA Systems

    PubMed Central

    Luo, Zhongqiang; Zhu, Lidong

    2015-01-01

    In this paper, a blind adaptive detector is proposed for blind separation of user signals and blind estimation of spreading sequences in DS-CDMA systems. The blind separation scheme exploits a charrelation matrix for simple computation and effective extraction of information from observation signal samples. The system model of DS-CDMA signals is formulated as a blind separation framework. The unknown user information and spreading sequences of DS-CDMA systems can be estimated from the sampled observation signals alone. Theoretical analysis and simulation results show the improved performance of the proposed algorithm in comparison with existing conventional algorithms used in DS-CDMA systems. In particular, the proposed scheme is suitable when the number of observation samples is small and the signal-to-noise ratio (SNR) is low. PMID:26287209

  4. A Charrelation Matrix-Based Blind Adaptive Detector for DS-CDMA Systems.

    PubMed

    Luo, Zhongqiang; Zhu, Lidong

    2015-08-14

    In this paper, a blind adaptive detector is proposed for blind separation of user signals and blind estimation of spreading sequences in DS-CDMA systems. The blind separation scheme exploits a charrelation matrix for simple computation and effective extraction of information from observation signal samples. The system model of DS-CDMA signals is formulated as a blind separation framework. The unknown user information and spreading sequences of DS-CDMA systems can be estimated from the sampled observation signals alone. Theoretical analysis and simulation results show the improved performance of the proposed algorithm in comparison with existing conventional algorithms used in DS-CDMA systems. In particular, the proposed scheme is suitable when the number of observation samples is small and the signal-to-noise ratio (SNR) is low.

  5. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    NASA Astrophysics Data System (ADS)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a 25-times higher global warming potential than carbon dioxide on a 100-year basis. Accurate monitoring and mitigation of methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and the one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. In particular, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames and temporal information from plume motion between frames. We built large leak datasets for training and evaluation purposes by collecting about 20 videos (i.e., 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size on eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, results in an accuracy of 58.3%. The integration of the two streams gives a combined accuracy of 77.6%. In future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm to different leak sources. Several analytic metrics, including the confusion matrix and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help to find and fix super-emitters and improve the cost-effectiveness of leak detection and repair programs.

  6. Land surface temperature measurements from EOS MODIS data

    NASA Technical Reports Server (NTRS)

    Wan, Zhengming

    1995-01-01

    Significant progress has been made in the TIR instrumentation required to establish the spectral BRDF/emissivity knowledge base of land-surface materials and to validate land-surface temperature (LST) algorithms. The SIBRE (Spectral Infrared Bidirectional Reflectance and Emissivity) system and a TIR system for measuring spectral directional-hemispherical emissivity have been completed and tested successfully. Optical properties and performance features of key components (including the spectrometer and the TIR source) of these systems have been characterized by integrated use of local standards (blackbody and reference plates). The stability of the spectrometer performance was improved by a custom-designed and custom-built liquid cooling system. Methods and procedures for measuring spectral TIR BRDF and directional-hemispherical emissivity with these two systems have been verified in sample measurements. These TIR instruments have been used in the laboratory and in the field, giving very promising results. The measured spectral emissivities of a water surface are very close to values calculated from the well-established water refractive indices in published papers. Preliminary results show that the TIR instruments can be used for validation of the MODIS LST algorithm at homogeneous test sites. The beta-3 version of the MODIS LST software is being prepared for its delivery, scheduled in the early second half of this year.

  7. Emissions of Volatile Organic Compounds (VOCs) from Animal Husbandry: Chemical Compositions, Separation of Sources and Animal Types

    NASA Astrophysics Data System (ADS)

    Yuan, B.; Coggon, M.; Koss, A.; Warneke, C.; Eilerman, S. J.; Neuman, J. A.; Peischl, J.; Aikin, K. C.; Ryerson, T. B.; De Gouw, J. A.

    2016-12-01

    Concentrated animal feeding operations (CAFOs) are important sources of volatile organic compounds (VOCs) in the atmosphere. We used a hydronium-ion time-of-flight chemical ionization mass spectrometer (H3O+ ToF-CIMS) to measure VOC emissions from CAFOs in the Northern Front Range of Colorado, during an aircraft campaign (SONGNEX) for regional contributions and from mobile laboratory sampling for chemical characterization of individual animal feedlots. The main VOCs emitted from CAFOs include carboxylic acids, alcohols, carbonyls, phenolic species, and sulfur- and nitrogen-containing species. Alcohols and carboxylic acids dominate VOC concentrations. Sulfur-containing and phenolic species become more important in terms of odor activity values and NO3 reactivity, respectively. The high-time-resolution mobile measurements allow the separation of the sources of VOCs from the different parts of the operations occurring within the facilities. We show that increases in ethanol concentrations were primarily associated with feed storage and handling. We apply a multivariate regression analysis using NH3 and ethanol as tracers to attribute the relative importance of animal-related emissions (animal exhalation and waste) and feed-related emissions (feed storage and handling) for different VOC species. Feed storage and handling contribute significantly to emissions of alcohols, carbonyls and carboxylic acids. Phenolic species and nitrogen-containing species are predominantly associated with the animals and their waste. VOC ratios can potentially be used as indicators for the separation of emissions from dairy and beef cattle in the regional aircraft measurements.
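
    A minimal sketch of the tracer-regression idea: regress each VOC time series onto NH3 (animal-related tracer) and ethanol (feed-related tracer) and read the source split off the fitted coefficients. The exact regression design used in the study may differ; the function name is invented.

    ```python
    import numpy as np

    def attribute_voc(voc, nh3, ethanol):
        """Ordinary least squares of a VOC time series on the two
        tracer series plus an intercept; the two slope coefficients
        indicate the relative importance of animal-related and
        feed-related emissions for that VOC."""
        X = np.column_stack([nh3, ethanol, np.ones_like(nh3)])
        coef, *_ = np.linalg.lstsq(X, voc, rcond=None)
        animal, feed, background = coef
        return animal, feed
    ```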

  8. Planck 2015 results. XXV. Diffuse low-frequency Galactic foregrounds

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Orlando, E.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vidal, M.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We discuss the Galactic foreground emission between 20 and 100 GHz based on observations by Planck and WMAP. The total intensity in this part of the spectrum is dominated by free-free and spinning dust emission, whereas the polarized intensity is dominated by synchrotron emission. The Commander component-separation tool has been used to separate the various astrophysical processes in total intensity. Comparison with radio recombination line templates verifies the recovery of the free-free emission along the Galactic plane. Comparison of the high-latitude Hα emission with our free-free map shows residuals that correlate with dust optical depth, consistent with a fraction (≈30%) of Hα having been scattered by high-latitude dust. We highlight a number of diffuse spinning dust morphological features at high latitude. There is substantial spatial variation in the spinning dust spectrum, with the emission peak (in Iν) ranging from below 20 GHz to more than 50 GHz. There is a strong tendency for the spinning dust component near many prominent H II regions to have a higher peak frequency, suggesting that this increase in peak frequency is associated with dust in the photo-dissociation regions around the nebulae. The emissivity of spinning dust in these diffuse regions is of the same order as previous detections in the literature. Over the entire sky, the Commander solution finds more anomalous microwave emission (AME) than the WMAP component maps, at the expense of synchrotron and free-free emission. This can be explained by the difficulty of separating multiple broadband components with a limited number of frequency maps. Future surveys, particularly at 5-20 GHz, will greatly improve the separation by constraining the synchrotron spectrum. We combine Planck and WMAP data to make the highest signal-to-noise ratio maps yet of the intensity of the all-sky polarized synchrotron emission at frequencies above a few GHz. Most of the high-latitude polarized emission is associated with distinct large-scale loops and spurs, and we re-discuss their structure. We argue that nearly all the emission at 40° > l > -90° is part of the Loop I structure, and show that the emission extends much further into the southern Galactic hemisphere than previously recognised, giving Loop I an ovoid rather than circular outline. However, it does not continue as far as the "Fermi bubble/microwave haze", making it less probable that these are part of the same structure. We identify a number of new faint features in the polarized sky, including a dearth of polarized synchrotron emission directly correlated with a narrow, roughly 20° long filament seen in Hα at high Galactic latitude. Finally, we look for evidence of polarized AME; however, many AME regions are significantly contaminated by polarized synchrotron emission, and we find a 2σ upper limit of 1.6% in the Perseus region.

  9. Planck 2015 results: XXV. Diffuse low-frequency Galactic foregrounds

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.; ...

    2016-09-20

    In this paper, we discuss the Galactic foreground emission between 20 and 100 GHz based on observations by Planck and WMAP. The total intensity in this part of the spectrum is dominated by free-free and spinning dust emission, whereas the polarized intensity is dominated by synchrotron emission. The Commander component-separation tool has been used to separate the various astrophysical processes in total intensity. Comparison with radio recombination line templates verifies the recovery of the free-free emission along the Galactic plane. Comparison of the high-latitude Hα emission with our free-free map shows residuals that correlate with dust optical depth, consistent with a fraction (≈30%) of Hα having been scattered by high-latitude dust. We highlight a number of diffuse spinning dust morphological features at high latitude. There is substantial spatial variation in the spinning dust spectrum, with the emission peak (in Iν) ranging from below 20 GHz to more than 50 GHz. There is a strong tendency for the spinning dust component near many prominent H II regions to have a higher peak frequency, suggesting that this increase in peak frequency is associated with dust in the photo-dissociation regions around the nebulae. The emissivity of spinning dust in these diffuse regions is of the same order as previous detections in the literature. Over the entire sky, the Commander solution finds more anomalous microwave emission (AME) than the WMAP component maps, at the expense of synchrotron and free-free emission. This can be explained by the difficulty of separating multiple broadband components with a limited number of frequency maps. Future surveys, particularly at 5-20 GHz, will greatly improve the separation by constraining the synchrotron spectrum. We combine Planck and WMAP data to make the highest signal-to-noise ratio maps yet of the intensity of the all-sky polarized synchrotron emission at frequencies above a few GHz. Most of the high-latitude polarized emission is associated with distinct large-scale loops and spurs, and we re-discuss their structure. We argue that nearly all the emission at 40° > l > -90° is part of the Loop I structure, and show that the emission extends much further into the southern Galactic hemisphere than previously recognised, giving Loop I an ovoid rather than circular outline. However, it does not continue as far as the “Fermi bubble/microwave haze”, making it less probable that these are part of the same structure. We identify a number of new faint features in the polarized sky, including a dearth of polarized synchrotron emission directly correlated with a narrow, roughly 20° long filament seen in Hα at high Galactic latitude. In conclusion, we look for evidence of polarized AME; however, many AME regions are significantly contaminated by polarized synchrotron emission, and we find a 2σ upper limit of 1.6% in the Perseus region.

  10. Planck 2015 results: XXV. Diffuse low-frequency Galactic foregrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.

    In this paper, we discuss the Galactic foreground emission between 20 and 100 GHz based on observations by Planck and WMAP. The total intensity in this part of the spectrum is dominated by free-free and spinning dust emission, whereas the polarized intensity is dominated by synchrotron emission. The Commander component-separation tool has been used to separate the various astrophysical processes in total intensity. Comparison with radio recombination line templates verifies the recovery of the free-free emission along the Galactic plane. Comparison of the high-latitude Hα emission with our free-free map shows residuals that correlate with dust optical depth, consistent with a fraction (≈30%) of Hα having been scattered by high-latitude dust. We highlight a number of diffuse spinning dust morphological features at high latitude. There is substantial spatial variation in the spinning dust spectrum, with the emission peak (in Iν) ranging from below 20 GHz to more than 50 GHz. There is a strong tendency for the spinning dust component near many prominent H II regions to have a higher peak frequency, suggesting that this increase in peak frequency is associated with dust in the photo-dissociation regions around the nebulae. The emissivity of spinning dust in these diffuse regions is of the same order as previous detections in the literature. Over the entire sky, the Commander solution finds more anomalous microwave emission (AME) than the WMAP component maps, at the expense of synchrotron and free-free emission. This can be explained by the difficulty of separating multiple broadband components with a limited number of frequency maps. Future surveys, particularly at 5-20 GHz, will greatly improve the separation by constraining the synchrotron spectrum. We combine Planck and WMAP data to make the highest signal-to-noise ratio maps yet of the intensity of the all-sky polarized synchrotron emission at frequencies above a few GHz. Most of the high-latitude polarized emission is associated with distinct large-scale loops and spurs, and we re-discuss their structure. We argue that nearly all the emission at 40° > l > -90° is part of the Loop I structure, and show that the emission extends much further into the southern Galactic hemisphere than previously recognised, giving Loop I an ovoid rather than circular outline. However, it does not continue as far as the “Fermi bubble/microwave haze”, making it less probable that these are part of the same structure. We identify a number of new faint features in the polarized sky, including a dearth of polarized synchrotron emission directly correlated with a narrow, roughly 20° long filament seen in Hα at high Galactic latitude. In conclusion, we look for evidence of polarized AME; however, many AME regions are significantly contaminated by polarized synchrotron emission, and we find a 2σ upper limit of 1.6% in the Perseus region.

  11. Acoustic emission frequency discrimination

    NASA Technical Reports Server (NTRS)

    Sugg, Frank E. (Inventor); Graham, Lloyd J. (Inventor)

    1988-01-01

    In acoustic emission nondestructive testing, broadband frequency noise is distinguished from narrow-band acoustic emission signals, since the latter are valid events indicative of structural flaws in the material being examined. This is accomplished by separating out those signals which contain frequency components both within and beyond (either above or below) the range of valid acoustic emission events. Application to acoustic emission monitoring during nondestructive bond verification and proof loading of undensified tiles on the Space Shuttle Orbiter is considered.
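
    A simple frequency-discrimination rule of the kind described can be sketched as follows: a burst is accepted as a valid acoustic-emission event only if most of its spectral energy lies inside the expected narrow band. The band edges and the energy-ratio threshold below are illustrative placeholders, not values from this work.

    ```python
    import numpy as np

    def is_valid_ae_event(signal, fs, band=(100e3, 300e3), ratio=0.8):
        """Return True if the fraction of spectral energy inside the
        assumed AE band exceeds `ratio`; broadband noise spreads its
        energy beyond the band and is rejected.

        signal: sampled burst; fs: sampling rate in Hz.
        """
        spec = np.abs(np.fft.rfft(signal))**2
        freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
        in_band = spec[(freqs >= band[0]) & (freqs <= band[1])].sum()
        return in_band / spec.sum() >= ratio
    ```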

  12. Finite-difference simulation of transonic separated flow using a full potential boundary layer interaction approach

    NASA Technical Reports Server (NTRS)

    Van Dalsem, W. R.; Steger, J. L.

    1983-01-01

    A new, fast, direct-inverse, finite-difference boundary-layer code has been developed and coupled with a full-potential transonic airfoil analysis code via new inviscid-viscous interaction algorithms. The resulting code has been used to calculate transonic separated flows. The results are in good agreement with Navier-Stokes calculations and experimental data. Solutions are obtained in considerably less computer time than Navier-Stokes solutions of equal resolution. Because efficient inviscid and viscous algorithms are used, it is expected this code will also compare favorably with other codes of its type as they become available.

  13. Comparison of methodologies estimating emissions of aircraft pollutants, environmental impact assessment around airports

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurniawan, Jermanto S., E-mail: Jermanto.kurniawan@inrets.fr; Khardi, S., E-mail: Salah.khardi@inrets.f

    2011-04-15

    Air transportation growth has increased continuously over the years. The rise in air transport activity has been accompanied by an increase in the amount of energy used to provide air transportation services, and is also assumed to increase environmental impacts, in particular pollutant emissions. Traditionally, the environmental impacts of atmospheric emissions from aircraft have been addressed in two separate ways: aircraft pollutant emissions occurring during the landing and take-off (LTO) phase (local pollutant emissions), which are the focus of this study, and the non-LTO phase (global/regional pollutant emissions). Aircraft pollutant emissions are an important source of pollution and directly or indirectly harm human health, ecosystems and cultural heritage. There are many methods to assess pollutant emissions used by various countries. However, using different and separate methodologies causes variation in results and some lack of information, and the use of certain methods requires justification and reliability that must be demonstrated and proven. In relation to this issue, this paper presents an identification, comparison and review of some of the methodologies for aircraft pollutant assessment from past and present studies and projects, focusing on emission factors, fuel consumption, and uncertainty. This paper also provides reliable information on the impacts of aircraft pollutant emissions in short-term and long-term predictions.

  14. Refinements to HIRS CO2 Slicing Algorithm with Results Compared to CALIOP and MODIS

    NASA Astrophysics Data System (ADS)

    Frey, R.; Menzel, P.

    2012-12-01

    This poster reports on the refinement of a cloud-top property algorithm using High-resolution Infrared Radiation Sounder (HIRS) measurements. The HIRS sensor has been flown on fifteen satellites, from TIROS-N through NOAA-19 and MetOp-A, forming a continuous 30-year cloud data record. Cloud-top pressure and effective emissivity (cloud fraction multiplied by cloud emissivity) are derived using the 15 μm spectral bands in the CO2 absorption band, implementing the CO2 slicing technique, which is strong for high semi-transparent clouds but weak for low clouds with little thermal contrast from clear skies. We report on algorithm adjustments suggested by MODIS cloud record validations and on the inclusion of collocated AVHRR cloud fraction data from the PATMOS-x algorithm. Reprocessing results for 2008 are shown using NOAA-18 HIRS with collocated CALIOP data for validation, as well as comparisons to MODIS monthly mean values. Adjustments to the cloud algorithm include (a) using CO2 slicing for all ice and mixed-phase clouds and infrared-window determinations for all water clouds, (b) determining the cloud-top pressure from the most opaque CO2 spectral band pair seeing the cloud, (c) reducing the cloud detection threshold for the CO2 slicing algorithm to include conditions of smaller radiance differences that are often due to thin ice clouds, and (d) identifying stratospheric clouds when an opaque band is warmer than a less opaque band.
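
    The classic CO2-slicing retrieval behind this work compares the observed ratio of cloud signals in a band pair with the same ratio computed from a forward model for trial cloud pressures. A schematic version follows; `calc_cloud_signal` is a hypothetical placeholder for the radiative-transfer calculation, and the sketch omits the operational details listed above.

    ```python
    import numpy as np

    def co2_slice(obs, clr, calc_cloud_signal, pressures):
        """Single band-pair CO2 slicing.

        obs, clr : observed and clear-sky radiances for the pair,
                   index 0 = more opaque CO2 band, 1 = less opaque band
        calc_cloud_signal(p): modelled (clear - overcast) radiance pair
                   for an opaque cloud at pressure p
        pressures: trial cloud-top pressures (hPa)

        Returns the pressure whose modelled ratio of cloud signals
        best matches the observed ratio.
        """
        obs_ratio = (clr[0] - obs[0]) / (clr[1] - obs[1])
        model = np.array([calc_cloud_signal(p) for p in pressures])
        model_ratio = model[:, 0] / model[:, 1]
        return pressures[np.argmin(np.abs(model_ratio - obs_ratio))]
    ```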

  15. Photon and radiowave emission from peeling pressure sensitive adhesives in air

    NASA Technical Reports Server (NTRS)

    Donaldson, E. E.; Shen, X. A.; Dickinson, J. T.

    1985-01-01

    During separation of an adhesive from a polymer substrate in air, intense bursts of photons ('phE', for photon emission) and long wavelength electromagnetic radiation ('RE', for radiowave emission), similar to those reported earlier by Deryagin, et al. (1978), have been observed. In this paper, careful measurements of the phE time distributions, as well as time correlations between bursts of phE and RE, are reported. These results support the view that patches of electrical charge produced by charge separation between dissimilar materials lead to microdischarges in and near the crack tip. The role of these discharges in producing sustained phE after the discharge has been extinguished is also discussed.

  16. Methods to Calculate the Heat Index as an Exposure Metric in Environmental Health Research

    PubMed Central

    Bell, Michelle L.; Peng, Roger D.

    2013-01-01

    Background: Environmental health research employs a variety of metrics to measure heat exposure, both to directly study the health effects of outdoor temperature and to control for temperature in studies of other environmental exposures, including air pollution. To measure heat exposure, environmental health studies often use heat index, which incorporates both air temperature and moisture. However, the method of calculating heat index varies across environmental studies, which could mean that studies using different algorithms to calculate heat index may not be comparable. Objective and Methods: We investigated 21 separate heat index algorithms found in the literature to determine a) whether different algorithms generate heat index values that are consistent with the theoretical concepts of apparent temperature and b) whether different algorithms generate similar heat index values. Results: Although environmental studies differ in how they calculate heat index values, most studies’ heat index algorithms generate values consistent with apparent temperature. Additionally, most different algorithms generate closely correlated heat index values. However, a few algorithms are potentially problematic, especially in certain weather conditions (e.g., very low relative humidity, cold weather). To aid environmental health researchers, we have created open-source software in R to calculate the heat index using the U.S. National Weather Service’s algorithm. Conclusion: We identified 21 separate heat index algorithms used in environmental research. Our analysis demonstrated that methods to calculate heat index are inconsistent across studies. Careful choice of a heat index algorithm can help ensure reproducible and consistent environmental health research. Citation: Anderson GB, Bell ML, Peng RD. 2013. Methods to calculate the heat index as an exposure metric in environmental health research. Environ Health Perspect 121:1111–1119; http://dx.doi.org/10.1289/ehp.1206273 PMID:23934704
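
    The open-source software the authors describe is written in R and implements the U.S. National Weather Service algorithm. Purely for illustration, here is the widely published Rothfusz regression that underlies the NWS heat index, in Python; the operational algorithm adds low- and high-humidity adjustments that this sketch omits.

    ```python
    def heat_index_f(t_f, rh):
        """Rothfusz regression behind the U.S. NWS heat index chart
        (t_f in deg F, rh in percent). The operational NWS algorithm
        applies extra adjustments that this sketch omits."""
        if t_f < 80.0:
            # NWS falls back to a simpler formula for mild conditions
            return 0.5 * (t_f + 61.0 + (t_f - 68.0) * 1.2 + rh * 0.094)
        return (-42.379 + 2.04901523 * t_f + 10.14333127 * rh
                - 0.22475541 * t_f * rh - 6.83783e-3 * t_f**2
                - 5.481717e-2 * rh**2 + 1.22874e-3 * t_f**2 * rh
                + 8.5282e-4 * t_f * rh**2 - 1.99e-6 * t_f**2 * rh**2)

    print(round(heat_index_f(96, 65), 1))  # roughly 121 deg F
    ```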

  17. Using spectral distribution of radiance temperature and relative emissivity for determination of the true temperature of opaque materials

    NASA Astrophysics Data System (ADS)

    Rusin, S. P.

    2018-01-01

    A method for determining the thermodynamic (true) temperature of opaque materials from the recorded thermal radiation spectrum is presented, for conditions in which the emissivity of the freely radiating body is unknown. A special function, the product of the relative emissivity of tungsten and the radiation wavelength, was used as the input data. The accuracy of the results is analyzed. It is shown that, when relative emissivity is used, the proposed algorithm is applicable both within the range of validity of the Wien approximation and with the Planck formula.
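
    For orientation, the standard Wien-approximation relation between spectral radiance temperature and thermodynamic temperature, which methods of this kind build on, can be written out directly; the paper's specific relative-emissivity formulation differs, and the tungsten emissivity value below is only illustrative.

    ```python
    import numpy as np

    C2 = 1.4388e-2  # second radiation constant c2, m*K

    def true_temperature(lam, t_rad, eps):
        """Wien approximation: 1/T = 1/T_rad + (lam/c2) * ln(eps), where
        t_rad is the spectral radiance temperature at wavelength lam (m)
        and eps is the spectral emissivity at that wavelength."""
        return 1.0 / (1.0 / t_rad + (lam / C2) * np.log(eps))

    # illustrative: tungsten-like eps ~ 0.43 at 650 nm, measured T_rad = 2000 K
    print(true_temperature(650e-9, 2000.0, 0.43))  # ~2165 K
    ```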

  18. 40 CFR 98.183 - Calculating GHG emissions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Stationary Fuel Combustion Sources). (2) Calculate and report process and combustion CO2 emissions separately... calculate and report the annual process CO2 emissions from each smelting furnace using the procedure in... § 98.33(b)(4)(ii) or (b)(4)(iii), you must calculate and report combined process and combustion CO2...

  19. 40 CFR 98.183 - Calculating GHG emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Stationary Fuel Combustion Sources). (2) Calculate and report process and combustion CO2 emissions separately... calculate and report the annual process CO2 emissions from each smelting furnace using the procedure in... § 98.33(b)(4)(ii) or (b)(4)(iii), you must calculate and report combined process and combustion CO2...

  20. 40 CFR 98.183 - Calculating GHG emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Stationary Fuel Combustion Sources). (2) Calculate and report process and combustion CO2 emissions separately... calculate and report the annual process CO2 emissions from each smelting furnace using the procedure in... § 98.33(b)(4)(ii) or (b)(4)(iii), you must calculate and report combined process and combustion CO2...

  1. 40 CFR 98.183 - Calculating GHG emissions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Stationary Fuel Combustion Sources). (2) Calculate and report process and combustion CO2 emissions separately... calculate and report the annual process CO2 emissions from each smelting furnace using the procedure in... § 98.33(b)(4)(ii) or (b)(4)(iii), you must calculate and report combined process and combustion CO2...

  2. Evaluation of Multilayer Cloud Detection Using a MODIS CO2-Slicing Algorithm With CALIPSO-CloudSat Measurements

    NASA Technical Reports Server (NTRS)

    Viudez-Mora, Antonio; Kato, Seiji

    2015-01-01

    This work evaluates the multilayer cloud (MCF) algorithm based on CO2-slicing techniques against CALIPSO-CloudSat (CLCS) measurements. The evaluation showed that the MCF algorithm underestimates the presence of multilayered clouds compared with CLCS, and that its retrievals are restricted to cloud emissivities below 0.8 and cloud optical depths no larger than 0.3.

  3. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to the existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.

  4. High Dynamic Velocity Range Particle Image Velocimetry Using Multiple Pulse Separation Imaging

    PubMed Central

    Persoons, Tim; O’Donovan, Tadhg S.

    2011-01-01

    The dynamic velocity range of particle image velocimetry (PIV) is determined by the maximum and minimum resolvable particle displacement. Various techniques have extended the dynamic range; however, flows with a wide velocity range (e.g., impinging jets) still challenge PIV algorithms. A new technique is presented to increase the dynamic velocity range by over an order of magnitude. The multiple pulse separation (MPS) technique (i) records series of double-frame exposures with different pulse separations, (ii) processes the fields using conventional multi-grid algorithms, and (iii) yields a composite velocity field with a locally optimized pulse separation. A robust criterion determines the local optimum pulse separation, accounting for correlation strength and measurement uncertainty. Validation experiments are performed in an impinging jet flow, using laser-Doppler velocimetry as the reference measurement. The precision of mean flow and turbulence quantities is significantly improved compared to conventional PIV, due to the increase in dynamic range. In a wide range of applications, MPS PIV is a robust approach to increase the dynamic velocity range without restricting the vector evaluation methods. PMID:22346564

  5. Sequential fitting-and-separating reflectance components for analytical bidirectional reflectance distribution function estimation.

    PubMed

    Lee, Yu; Yu, Chanki; Lee, Sang Wook

    2018-01-10

    We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection with multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.

  6. TREFEX: Trend Estimation and Change Detection in the Response of MOX Gas Sensors

    PubMed Central

    Pashami, Sepideh; Lilienthal, Achim J.; Schaffernicht, Erik; Trincavelli, Marco

    2013-01-01

    Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm, especially designed for metal oxide gas sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows a clearly superior performance of the TREFEX algorithm both in detection performance and in estimating the change time. PMID:23736853
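
    TREFEX's piecewise-exponential model has a convenient convex surrogate: a piecewise exponential in the linear domain is piecewise linear in the log domain, so l1 trend filtering on the log response yields sparse kinks that serve as candidate change points. The sketch below (using cvxpy) is our illustration of that idea, not the authors' parameter-free formulation; `lam` is an assumed tuning knob.

    ```python
    import numpy as np
    import cvxpy as cp

    def exponential_trend_filter(y, lam=50.0):
        """l1 trend filtering on log(y): a piecewise-exponential signal is
        piecewise-linear in the log domain, so sparse second differences of
        the fitted trend mark candidate change points. Assumes a strictly
        positive response y; lam is an assumed smoothing knob."""
        z = np.log(np.asarray(y, dtype=float))
        x = cp.Variable(len(z))
        obj = cp.Minimize(cp.sum_squares(z - x) + lam * cp.norm1(cp.diff(x, 2)))
        cp.Problem(obj).solve()
        change_points = np.where(np.abs(np.diff(x.value, 2)) > 1e-4)[0] + 1
        return np.exp(x.value), change_points
    ```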

  7. Correlated Noise: How it Breaks NMF, and What to Do About It.

    PubMed

    Plis, Sergey M; Potluru, Vamsi K; Lane, Terran; Calhoun, Vince D

    2011-01-12

    Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data.

  8. Correlated Noise: How it Breaks NMF, and What to Do About It

    PubMed Central

    Plis, Sergey M.; Potluru, Vamsi K.; Lane, Terran; Calhoun, Vince D.

    2010-01-01

    Non-negative matrix factorization (NMF) is a problem of decomposing multivariate data into a set of features and their corresponding activations. When applied to experimental data, NMF has to cope with noise, which is often highly correlated. We show that correlated noise can break the Donoho and Stodden separability conditions of a dataset and a regular NMF algorithm will fail to decompose it, even when given freedom to be able to represent the noise as a separate feature. To cope with this issue, we present an algorithm for NMF with a generalized least squares objective function (glsNMF) and derive multiplicative updates for the method together with proving their convergence. The new algorithm successfully recovers the true representation from the noisy data. Robust performance can make glsNMF a valuable tool for analyzing empirical data. PMID:23750288
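
    As a point of reference for the glsNMF idea, here are the classical Lee-Seung multiplicative updates for the Frobenius NMF objective, which correspond to the special case of an identity noise covariance; the paper's generalized least squares updates replace this with an estimated covariance, and the sketch below does not reproduce them.

    ```python
    import numpy as np

    def nmf_multiplicative(V, r, n_iter=500, eps=1e-9):
        """Baseline Lee-Seung multiplicative updates for ||V - W H||_F^2.
        glsNMF generalizes the objective with a noise covariance C; the
        updates below are the C = I special case, shown for orientation."""
        m, n = V.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, r)) + eps
        H = rng.random((r, n)) + eps
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # update activations
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # update features
        return W, H
    ```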

  9. Fault Detection of Bearing Systems through EEMD and Optimization Algorithm

    PubMed Central

    Lee, Dong-Han; Ahn, Jong-Hyo; Koh, Bong-Hwan

    2017-01-01

    This study proposes a fault detection and diagnosis method for bearing systems using ensemble empirical mode decomposition (EEMD) based feature extraction, in conjunction with particle swarm optimization (PSO), principal component analysis (PCA), and Isomap. First, a mathematical model is assumed to generate vibration signals from damaged bearing components, such as the inner race, outer race, and rolling elements. The process of decomposing vibration signals into intrinsic mode functions (IMFs) and extracting statistical features is introduced to develop a damage-sensitive parameter vector. Finally, the PCA and Isomap algorithms are used to classify and visualize this parameter vector, separating damage characteristics from healthy bearing components. Moreover, the PSO-based optimization algorithm improves classification performance by selecting proper weightings for the parameter vector, maximizing the visual separation and grouping of parameter vectors in three-dimensional space. PMID:29143772

  10. 40 CFR 63.7912 - What are my inspection and monitoring requirements for separators?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 14 2014-07-01 2014-07-01 false What are my inspection and monitoring requirements for separators? 63.7912 Section 63.7912 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants: Site Remediation Separators...

  11. 40 CFR 63.7912 - What are my inspection and monitoring requirements for separators?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 14 2013-07-01 2013-07-01 false What are my inspection and monitoring requirements for separators? 63.7912 Section 63.7912 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants: Site Remediation Separators...

  12. 40 CFR 63.7912 - What are my inspection and monitoring requirements for separators?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 14 2012-07-01 2011-07-01 true What are my inspection and monitoring requirements for separators? 63.7912 Section 63.7912 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants: Site Remediation Separators...

  13. 40 CFR 63.7912 - What are my inspection and monitoring requirements for separators?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 13 2010-07-01 2010-07-01 false What are my inspection and monitoring requirements for separators? 63.7912 Section 63.7912 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants: Site Remediation Separators...

  14. 40 CFR 63.7912 - What are my inspection and monitoring requirements for separators?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 13 2011-07-01 2011-07-01 false What are my inspection and monitoring requirements for separators? 63.7912 Section 63.7912 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... CATEGORIES (CONTINUED) National Emission Standards for Hazardous Air Pollutants: Site Remediation Separators...

  15. Series Hybrid Electric Vehicle Power System Optimization Based on Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Zhu, Tianjun; Li, Bin; Zong, Changfu; Wu, Yang

    2017-09-01

    Hybrid electric vehicles (HEVs), compared with conventional vehicles, have complex structures and more component parameters. Optimizing the design over all of these parameters increases the difficulty of the problem and hampers convergence of the algorithm, so this paper selects only the parameters that have a major influence on vehicle fuel consumption, so that all components work at maximum efficiency. First, models of the HEV powertrain components are built. Second, taking a series hybrid structure as an example, a genetic algorithm is used to optimize fuel consumption and emissions. Simulation results in ADVISOR verify the feasibility of the proposed genetic optimization algorithm.

  16. Electric Field Induced Blue Shift and Intensity Enhancement in 2D Exciplex Organic Light Emitting Diodes; Controlling Electron-Hole Separation.

    PubMed

    Al Attar, Hameed A; Monkman, Andy P

    2016-09-01

    A simple but novel method is designed to study the characteristics of the exciplex state pinned at a donor-acceptor abrupt interface and the effect an external electric field has on these excited states. The reverse Onsager process, where the field induces blue-shifted emission and increases the efficiency of the exciplex emission as the e-h separation reduces, is discussed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Maritime NOx Emissions Over Chinese Seas Derived From Satellite Observations

    NASA Astrophysics Data System (ADS)

    Ding, J.; van der A, R. J.; Mijling, B.; Jalkanen, J.-P.; Johansson, L.; Levelt, P. F.

    2018-02-01

    By applying an inversion algorithm to NOx satellite observations from the Ozone Monitoring Instrument, monthly NOx emissions for a 10 year period (2007 to 2016) over Chinese seas are presented for the first time. No effective regulations on NOx emissions have been implemented for ships in China, which is reflected in the trend analysis of maritime emissions. The maritime emissions display a continuous increase of about 20% per year until 2012, slowing to about 3% per year after that. The seasonal cycle of shipping emissions has regional variations, but all regions show lower emissions during winter. Simulations by an atmospheric chemistry transport model show a notable influence of maritime emissions on air pollution over coastal areas, especially in summer. The satellite-derived spatial distribution and magnitude of maritime emissions over Chinese seas are in good agreement with bottom-up studies based on the Automatic Identification System of ships.

  18. Robust MST-Based Clustering Algorithm.

    PubMed

    Liu, Qidong; Zhang, Ruisheng; Zhao, Zhili; Wang, Zhenghai; Jiao, Mengyao; Wang, Guangjing

    2018-06-01

    Minimax similarity stresses the connectedness of points via mediating elements rather than favoring high mutual similarity. The grouping principle yields superior clustering results when mining arbitrarily-shaped clusters in data. However, it is not robust against noise and outliers in the data. There are two main problems with the grouping principle: first, a single object that is far away from all other objects defines a separate cluster, and second, two clusters connected by mediating points would be regarded as two parts of one cluster. In order to solve such problems, we propose a robust minimum spanning tree (MST)-based clustering algorithm in this letter. First, we separate the connected objects by applying a density-based coarsening phase, resulting in a low-rank matrix in which each element denotes a supernode formed by combining a set of nodes. Then a greedy method is presented to partition those supernodes by working on the low-rank matrix. Instead of removing the longest edges from the MST, our algorithm groups the data set based on minimax similarity. Finally, the assignment of all data points can be achieved through their corresponding supernodes. Experimental results on many synthetic and real-world data sets show that our algorithm consistently outperforms compared clustering algorithms.
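
    For contrast with the proposed method, the classical MST clustering baseline mentioned above (cut the k-1 longest edges of the minimum spanning tree) is easy to state in code; the robust algorithm replaces this cut with density-based coarsening plus minimax-similarity grouping, which is not shown here.

    ```python
    import numpy as np
    from scipy.sparse import coo_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
    from scipy.spatial.distance import pdist, squareform

    def mst_cut_clusters(X, k):
        """Classical MST clustering baseline: build the MST of pairwise
        distances and delete the k-1 longest edges, leaving k components.
        This is the approach the letter improves on, not its method."""
        T = minimum_spanning_tree(squareform(pdist(X))).tocoo()
        keep = np.ones(len(T.data), dtype=bool)
        keep[np.argsort(T.data)[::-1][:k - 1]] = False   # drop k-1 longest edges
        pruned = coo_matrix((T.data[keep], (T.row[keep], T.col[keep])),
                            shape=T.shape)
        return connected_components(pruned, directed=False)[1]  # labels
    ```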

  19. Daytime sea fog retrieval based on GOCI data: a case study over the Yellow Sea.

    PubMed

    Yuan, Yibo; Qiu, Zhongfeng; Sun, Deyong; Wang, Shengqiang; Yue, Xiaoyuan

    2016-01-25

    In this paper, a new daytime sea fog detection algorithm has been developed using Geostationary Ocean Color Imager (GOCI) data. Based on spectral analysis, differences in spectral characteristics were found over different underlying surfaces, including land, sea, middle/high-level clouds, stratus clouds and sea fog. Statistical analysis showed that the Rayleigh-corrected reflectance Rrc(412 nm) of sea fog pixels is approximately 0.1-0.6. Similarly, various band combinations could be used to separate different surfaces. Therefore, three indices (SLDI, MCDI and BSI) were defined to discern land/sea, middle/high-level clouds and fog/stratus clouds, respectively, from which it was generally easy to extract fog pixels. The remote sensing algorithm was verified using coastal sounding data, which demonstrated that the algorithm is able to detect sea fog. The algorithm was then used to monitor an 8-hour sea fog event, and the results were consistent with observational data from buoys deployed near the Sheyang coast (121°E, 34°N). The goal of this study was to establish a daytime sea fog detection algorithm based on GOCI data, which shows promise for detecting fog separately from stratus.

  20. Artifact Removal from Biosignal using Fixed Point ICA Algorithm for Pre-processing in Biometric Recognition

    NASA Astrophysics Data System (ADS)

    Mishra, Puneet; Singla, Sunil Kumar

    2013-01-01

    In the modern world of automation, biological signals, especially the Electroencephalogram (EEG) and Electrocardiogram (ECG), are gaining wide attention as a source of biometric information. Earlier studies have shown that EEG and ECG vary between individuals, and every individual has a distinct EEG and ECG spectrum. EEG (which can be recorded from the scalp due to the effect of millions of neurons) may contain noise signals such as eye blinks, eye movement, muscular movement, line noise, etc. Similarly, ECG may contain artifacts like line noise, tremor artifacts, baseline wandering, etc. These noise signals must be separated from the EEG and ECG signals to obtain accurate results. This paper proposes a technique for the removal of eye blink artifacts from EEG and ECG signals using the fixed-point or FastICA algorithm of Independent Component Analysis (ICA). For validation, the FastICA algorithm has been applied to a synthetic signal prepared by adding random noise to the Electrocardiogram (ECG) signal. The FastICA algorithm separates the signal into two independent components, i.e. the pure ECG and the artifact signal. Similarly, the same algorithm has been applied to remove artifacts (Electrooculogram or eye blink) from the EEG signal.
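
    A minimal sketch of the ICA workflow described here, using scikit-learn's FastICA: unmix the multichannel recording, suppress components with artifact-like statistics, and remix. The kurtosis threshold used to flag blink components is our assumption; the paper identifies the artifact component by other means.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def remove_blink_artifact(signals, kurt_thresh=10.0):
        """Unmix a multichannel recording (n_samples, n_channels) with
        FastICA, zero out components with spiky, blink-like statistics
        (crude kurtosis rule -- an assumption, not the paper's criterion),
        and remix to obtain the cleaned signals."""
        ica = FastICA(random_state=0)
        S = ica.fit_transform(signals)               # independent components
        Sc = S - S.mean(axis=0)
        kurt = (Sc**4).mean(axis=0) / (Sc**2).mean(axis=0)**2 - 3.0
        S[:, kurt > kurt_thresh] = 0.0               # suppress artifact components
        return ica.inverse_transform(S)
    ```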

  1. Simple Common Plane contact detection algorithm for FE/FD methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vorobiev, O

    2006-07-19

    The common-plane (CP) algorithm is widely used in the Discrete Element Method (DEM) to model contact forces between interacting particles or blocks. A new, simple contact detection algorithm, similar to the CP algorithm, is proposed to model contacts in FE/FD methods. Here the CP is defined as a plane separating interacting faces of the FE/FD mesh, instead of blocks or particles as in the original CP method. The method does not require iterations, is very robust, and is easy to implement in both the 2D and 3D cases.

  2. Multifractal analysis of multiparticle emission data in the framework of visibility graph and sandbox algorithm

    NASA Astrophysics Data System (ADS)

    Mali, P.; Manna, S. K.; Mukhopadhyay, A.; Haldar, P. K.; Singh, G.

    2018-03-01

    Multiparticle emission data in nucleus-nucleus collisions are studied in a graph theoretical approach. The sandbox algorithm used to analyze complex networks is employed to characterize the multifractal properties of the visibility graphs associated with the pseudorapidity distribution of charged particles produced in high-energy heavy-ion collisions. Experimental data on the 28Si+Ag/Br interaction at laboratory energy Elab = 14.5A GeV, and the 16O+Ag/Br and 32S+Ag/Br interactions, both at Elab = 200A GeV, are used in this analysis. We observe a scale-free nature of the degree distributions of the visibility and horizontal visibility graphs associated with the event-wise pseudorapidity distributions. Equivalent event samples simulated by ultra-relativistic quantum molecular dynamics produce degree distributions that are almost identical to the respective experimental ones. However, the multifractal variables obtained by using the sandbox algorithm for the experiment differ to some extent from the respective simulated results.

  3. Infrared atmospheric sounding interferometer correlation interferometry for the retrieval of atmospheric gases: the case of H2O and CO2.

    PubMed

    Grieco, Giuseppe; Masiello, Guido; Serio, Carmine; Jones, Roderic L; Mead, Mohammed I

    2011-08-01

    Correlation interferometry is a particular application of Fourier transform spectroscopy with partially scanned interferograms. Basically, it is a technique to obtain the difference between the spectra of atmospheric radiance at two diverse spectral resolutions. Although the technique could be exploited to design an appropriate correlation interferometer, in this paper we are concerned with the analytical aspects of the method and its application to high-spectral-resolution infrared observations in order to separate the emission of a given atmospheric gas from a spectral signal dominated by surface emission, such as in the case of satellite spectrometers operated in the nadir looking mode. The tool will be used to address some basic questions concerning the vertical spatial resolution of H2O and to develop an algorithm to retrieve the columnar amount of CO2. An application to complete interferograms from the Infrared Atmospheric Sounding Interferometer will be presented and discussed. For H2O, we have concluded that the vertical spatial resolution in the lower troposphere mostly depends on broad features associated with the spectrum, whereas for CO2, we have derived a technique capable of retrieving a CO2 columnar amount with accuracy of ≈±7 parts per million by volume at the level of each single field of view.

  4. Form Subdivisions: Their Identification and Use in LCSH.

    ERIC Educational Resources Information Center

    O'Neill, Edward T.; Chan, Lois Mai; Childress, Eric; Dean, Rebecca; El-Hoshy, Lynn M.; Vizine-Goetz, Diane

    2001-01-01

    Discusses form subdivisions as part of Library of Congress Subject Headings (LCSH) and the MARC format, which did not have a separate subfield code to identify form subdivisions. Describes the development of an algorithm to identify form subdivisions and reports results of an evaluation of the algorithm. (LRW)

  5. Nutrient Losses during Winter and Summer Storage of Separated and Unseparated Digested Cattle Slurry.

    PubMed

    Perazzolo, Francesca; Mattachini, Gabriele; Riva, Elisabetta; Provolo, Giorgio

    2017-07-01

    Management factors affect nutrient loss from animal manure slurry during storage in different ways. We conducted a pilot-scale study to evaluate carbon (C) and nitrogen (N) losses from unseparated and digested dairy slurry during winter and summer storage. In addition to season, treatments included mechanical separation of digestate into liquid and solid fractions and bimonthly mixing. Chemical analyses were performed every 2 wk for the mixed materials and at the start and end of storage for unmixed materials. The parameters examined allowed us to estimate C and N losses and examine the factors that determine these losses as well as emission patterns. Gas measurements were made every 2 wk to determine the main forms in which gaseous losses occurred. To evaluate the effect of separation, measured losses and emissions of the separated liquid and solid fractions were mathematically combined using the mass separation efficiency of the mechanical separator. Nutrient losses were mainly affected by climatic conditions. Losses of C (up to 23%) from unseparated, unmixed digestate and of N (38% from the combined separated fractions and from unseparated digestate) were much greater in summer than in winter, when C and N losses were <7%. Mixing tended to significantly increase N losses (P < 0.1) only in winter. Mechanical separation resulted in lower GHG emissions from the combined separated fractions than from unseparated digestate. Results indicate that to maximize the fertilizer value of digested slurry, dairy farmers must carefully choose management practices, especially in summer. For separated digestates, practices should focus on storage of the liquid fraction, the major contributor to C and N losses (up to 64 and 90% of total losses, respectively) in summer. Moreover, management practices should limit NH3, the main form of N loss (up to 99.5%). Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  6. 40 CFR 63.1044 - Standards-Separator vented to control device.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... outdoor exposure to wind, moisture, and sunlight; and the operating practices used for the separator on... operator shall inspect and monitor the air emission control equipment in accordance with the procedures...

  7. 40 CFR 63.1044 - Standards-Separator vented to control device.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... outdoor exposure to wind, moisture, and sunlight; and the operating practices used for the separator on... operator shall inspect and monitor the air emission control equipment in accordance with the procedures...

  8. 40 CFR 63.1044 - Standards-Separator vented to control device.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... outdoor exposure to wind, moisture, and sunlight; and the operating practices used for the separator on... operator shall inspect and monitor the air emission control equipment in accordance with the procedures...

  9. 40 CFR 63.1044 - Standards-Separator vented to control device.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... outdoor exposure to wind, moisture, and sunlight; and the operating practices used for the separator on... operator shall inspect and monitor the air emission control equipment in accordance with the procedures...

  10. 40 CFR 63.1044 - Standards-Separator vented to control device.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... outdoor exposure to wind, moisture, and sunlight; and the operating practices used for the separator on... operator shall inspect and monitor the air emission control equipment in accordance with the procedures...

  11. Estimation of snow emissivity via assimilation of multi-frequency passive microwave data into an ensemble-based data assimilation system

    NASA Astrophysics Data System (ADS)

    Farhadi, L.; Bateni, S. M.; Auligne, T.; Navari, M.

    2017-12-01

    Snow emissivity is a key parameter for the estimation of snow surface temperature, which is needed as an initial value in climate models and for determination of the outgoing long-wave radiation. Moreover, snow emissivity is required for retrieval of atmospheric parameters (e.g., temperature and humidity profiles) from satellite measurements and for satellite data assimilation in numerical weather prediction systems. Microwave emission models and remote sensing data cannot accurately estimate snow emissivity due to limitations attributed to each of them. Existing microwave emission models introduce significant uncertainties in their snow emissivity estimates, mainly due to shortcomings of the dense media theory for the snow medium at high frequencies and erroneous forcing variables. The well-known limitations of passive microwave data, such as coarse spatial resolution, saturation in deep snowpack, and signal loss in wet snow, are the major drawbacks of passive microwave retrieval algorithms for estimation of snow emissivity. A full exploitation of the information contained in the remote sensing data can be achieved by merging them with snow emission models within a data assimilation framework; such an optimal merging can overcome the specific limitations of models and remote sensing data. An Ensemble Batch Smoother (EnBS) data assimilation framework was developed in this study to combine synthetically generated passive microwave brightness temperatures at 1.4, 18.7, 36.5, and 89 GHz with the MEMLS microwave emission model to reduce the uncertainty of the snow emissivity estimates. We used the EnBS algorithm in the context of an observing system simulation experiment (or synthetic experiment) at the local scale observation site (LSOS) of the NASA CLPX field campaign. Our findings showed that the developed methodology significantly improves the estimates of snow emissivity. The simultaneous assimilation of passive microwave brightness temperatures at all frequencies (i.e., 1.4, 18.7, 36.5, and 89 GHz) reduces the root-mean-square error (RMSE) of snow emissivity at 1.4, 18.7, 36.5, and 89 GHz (H-pol.) by 80%, 42%, 52%, and 40%, respectively, compared to the corresponding snow emissivity estimates from the open-loop model.
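
    The analysis step of an ensemble batch smoother can be sketched generically: states and predicted brightness temperatures are stacked as ensembles, and a sample-covariance Kalman gain maps observation-space innovations back onto the state. This is a textbook perturbed-observation update under assumed array shapes, not the authors' exact MEMLS configuration.

    ```python
    import numpy as np

    def enbs_update(X, Y, y_obs, r_var, seed=0):
        """Perturbed-observation ensemble smoother analysis step (generic
        sketch). X: state ensemble (n_state, n_ens), e.g. snow emissivities;
        Y: predicted brightness temperatures (n_obs, n_ens); y_obs: observed
        TBs; r_var: observation error variances."""
        n_ens = X.shape[1]
        Xp = X - X.mean(axis=1, keepdims=True)       # state anomalies
        Yp = Y - Y.mean(axis=1, keepdims=True)       # predicted-obs anomalies
        Cxy = Xp @ Yp.T / (n_ens - 1)
        Cyy = Yp @ Yp.T / (n_ens - 1) + np.diag(r_var)
        K = Cxy @ np.linalg.inv(Cyy)                 # Kalman gain
        rng = np.random.default_rng(seed)
        pert = rng.normal(0.0, np.sqrt(r_var)[:, None], Y.shape)
        return X + K @ (y_obs[:, None] + pert - Y)   # analysis ensemble
    ```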

  12. A study on fast digital discrimination of neutron and gamma-ray for improvement of neutron emission profile measurement

    PubMed Central

    Uchida, Y.; Takada, E.; Fujisaki, A.; Isobe, M.; Shinohara, K.; Tomita, H.; Kawarabayashi, J.; Iguchi, T.

    2014-01-01

    Neutron and γ-ray (n-γ) discrimination with a digital signal processing system has been used to measure the neutron emission profile in magnetic confinement fusion devices. However, the sampling rate must be set low to extend the measurement time because memory storage is limited, and the resulting time jitter degrades discrimination quality. As described in this paper, a new charge comparison method was developed, and an automatic n-γ discrimination method was examined using a probabilistic approach. Analysis results were evaluated using the figure of merit and show that the discrimination quality was improved. Automatic discrimination was applied using the EM algorithm and the k-means algorithm. PMID:25430297

  13. Optimization of Location-Routing Problem for Cold Chain Logistics Considering Carbon Footprint.

    PubMed

    Wang, Songyi; Tao, Fengming; Shi, Yuhe

    2018-01-06

    In order to solve the optimization problem of a logistics distribution system for fresh food, this paper takes a low-carbon, environmental-protection point of view. Based on the characteristics of perishable products, and combined with the overall optimization of the cold chain logistics distribution network, a green and low-carbon location-routing problem (LRP) model for cold chain logistics is developed, with minimum total cost, including carbon emission costs, as the objective function. A hybrid genetic algorithm with heuristic rules is designed to solve the model, and an example is used to verify the effectiveness of the algorithm. Furthermore, simulation results from a practical numerical example show the applicability of the model while providing green and environmentally friendly location-distribution schemes for the cold chain logistics enterprise. Finally, carbon tax policies are introduced to analyze the impact of a carbon tax on total costs and carbon emissions, which proves that carbon tax policy can effectively reduce carbon dioxide emissions in a cold chain logistics network.
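
    To make the objective concrete, a single route leg's cost in a model of this type might combine fuel cost with a carbon tax on load-dependent fuel burn, roughly as below; every coefficient here (fuel price, tax rate, empty/full fuel rates, emission factor) is illustrative rather than taken from the paper.

    ```python
    def leg_cost(dist_km, load_t, c_fuel=7.0, c_tax=0.05,
                 e_empty=0.77, e_full=1.10, q_max=10.0, ef_co2=2.63):
        """Illustrative cost of one route leg: fuel cost plus a carbon tax
        on CO2 from load-dependent fuel burn (fuel rate interpolated
        linearly between the empty and fully loaded vehicle). All
        coefficients are placeholders, not values from the paper."""
        fuel = (e_empty + (e_full - e_empty) * load_t / q_max) * dist_km  # litres
        co2_kg = ef_co2 * fuel            # approximate diesel emission factor
        return c_fuel * fuel + c_tax * co2_kg

    # a GA would sum such legs (plus depot and refrigeration costs) per candidate
    print(leg_cost(120.0, 6.0))
    ```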

  14. Operational Level 2 Data Processing System for the JEM/SMILES

    NASA Astrophysics Data System (ADS)

    Takahashi, C.; Mitsuda, C.; Suzuki, M.; Iwata, Y.; Horikawa, M.; Matsumoto, T.; Hayashi, H.; Imai, K.; Sano, T.; Takayanagi, M.

    2009-12-01

    To measure the thermal emission from stratospheric minor species with high sensitivity, the Superconducting Submillimeter-wave Limb-Emission Sounder (SMILES) aboard the Japanese Experiment Module (JEM) of the International Space Station (ISS) carries 4 K cooled Superconductor-Insulator-Superconductor (SIS) mixers. The major feature of SMILES is its highly sensitive measurement capability, with a system noise temperature of less than 700 K. SMILES measures the atmospheric limb emission from stratospheric minor constituents in the submillimeter-wave range, in band A (624.3-625.5 GHz), band B (625.1-626.3 GHz), and band C (649.1-650.3 GHz). The target species of SMILES are O3, ClO, HCl, HNO3, HOCl, CH3CN, HO2, BrO, and O3 isotopes (18OOO, 17OOO, and O17OO). Since a complete vertical scan takes 53 s and the orbital period of the ISS is approximately 93 min, approximately 105 scans per orbit give 1630 scans per day. There are 68 individual limb rays in a single scan, and the nominal altitude coverage is from 10 to 60 km. The spatial coverage is near-global (38°S-65°N), and it is expected that measurements can be made within the elongated polar vortex in the Northern Hemisphere. As part of the ground system for SMILES, a level 2 data processing system (DPS-L2), designed as highly portable software running on a general Linux operating system, has been developed. It retrieves the density distributions of the target species (level 2 data) from calibrated spectra (level 1B data) in near-real time. The level 2 data are converted into the HDF-EOS format and distributed to users, accompanied by ancillary data on the SMILES status, through a web server. To support analysis of the level 2 data, the system is implemented on Gfdnavi (geophysical fluid data navigator), a suite of software that facilitates databasing, analysis, data publication, and visualization of geophysical fluid data. This paper presents the development of the DPS-L2 along with details of its algorithm and performance. The retrieval process consists of two parts: the forward model, which computes radiative transfer, and the inverse model, which deduces atmospheric states. Since the forward model must provide the most accurate basis for the results and be implemented under limited computing resources, the forward model algorithm for an operational system has to be both accurate and fast. Hence, the algorithm is improved (1) by designing accurate instrument functions, such as the instrumental field of view (FOV), the sideband rejection ratio of the sideband separator, and the spectral responses of the acousto-optic spectrometer (AOS), and (2) by optimizing the radiative transfer calculation. The accuracy of this algorithm is better than 1%, and the processing time for single-scan spectra is less than 1 min with 8-way parallel processing on a 3.16-GHz Quad-Core Intel Xeon processor.

  15. [Evoked Potential Blind Extraction Based on Fractional Lower Order Spatial Time-Frequency Matrix].

    PubMed

    Long, Junbo; Wang, Haibin; Zha, Daifeng

    2015-04-01

    The impulsive electroencephalograph (EEG) noise in evoked potential (EP) signals is very strong, usually with heavy-tailed, infinite-variance characteristics, as seen under acceleration-noise impact, hypoxia, and other special tests. Such noise can be described by an alpha-stable distribution model. In this paper, the Wigner-Ville distribution (WVD) and pseudo Wigner-Ville distribution (PWVD) are improved using fractional lower order moments, yielding the fractional lower order WVD (FLO-WVD) and fractional lower order PWVD (FLO-PWVD) time-frequency distributions, which are suitable for alpha-stable distribution processes. We also propose the fractional lower order spatial time-frequency distribution matrix (FLO-STFM) concept. Combining this with time-frequency underdetermined blind source separation (TF-UBSS), we propose a new fractional lower order spatial time-frequency underdetermined blind source separation (FLO-TF-UBSS) method that can work in an alpha-stable distribution environment. We used the FLO-TF-UBSS algorithm to extract EPs. Simulations showed that the proposed method could effectively extract EPs in EEG noise: the EPs and EEG signals separated by FLO-TF-UBSS were almost the same as the original signals, whereas blind separation based on TF-UBSS showed a certain deviation. The correlation coefficient of the FLO-TF-UBSS algorithm was higher than that of the TF-UBSS algorithm, and approximately equal to 1, when the generalized signal-to-noise ratio (GSNR) changed from 10 dB to 30 dB and α varied from 1.06 to 1.94. Hence, the proposed FLO-TF-UBSS method may be better than the second-order TF-UBSS algorithm for extracting EP signals in an EEG noise environment.

  16. Development of a Smart Release Algorithm for Mid-Air Separation of Parachute Test Articles

    NASA Technical Reports Server (NTRS)

    Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project is currently developing an autonomous method to separate a capsule-shaped parachute test vehicle from an air-drop platform for use in the test program to develop and validate the parachute system for the Orion spacecraft. The CPAS project seeks to perform air-drop tests of an Orion-like boilerplate capsule. Delivery of the boilerplate capsule to the test condition has proven to be a critical and complicated task. In the current concept, the boilerplate vehicle is extracted from an aircraft on top of a Type V pallet and then separated from the pallet in mid-air. The attitude of the vehicles at separation is critical to avoiding re-contact and successfully deploying the boilerplate into a heatshield-down orientation. Neither the pallet nor the boilerplate has an active control system. However, the attitude of the mated vehicle as a function of time is somewhat predictable. CPAS engineers have designed an avionics system to monitor the attitude of the mated vehicle as it is extracted from the aircraft and command a release when the desired conditions are met. The algorithm includes contingency capabilities designed to release the test vehicle before undesirable orientations occur. The algorithm was verified with simulation and ground testing. The pre-flight development and testing is discussed and limitations of ground testing are noted. The CPAS project performed a series of three drop tests as a proof-of-concept of the release technique. These tests helped to refine the attitude instrumentation and software algorithm to be used on future tests. The drop tests are described in detail and the evolution of the release system with each test is described.

  17. scarlet: Source separation in multi-band images by Constrained Matrix Factorization

    NASA Astrophysics Data System (ADS)

    Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert

    2018-03-01

    SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.

  18. Spectroscopic CCD surveys for quasars at large redshift. 3: The Palomar Transit Grism Survey catalog

    NASA Technical Reports Server (NTRS)

    Schneider, Donald P.; Schmidt, Maarten; Gunn, James E.

    1994-01-01

    This paper reports the initial results of the Palomar Transit Grism Survey (PTGS). The PTGS was designed to produce a sample of z greater than 2.7 quasars identified by well-defined selection criteria. The survey consists of six narrow (approximately 8.5 min wide) strips of sky; the total effective area is 61.47 sq deg. Low-resolution slitless spectra, covering the wavelength range from 4400 to 7500 A, were obtained for approximately 600,000 objects. The wavelength- and flux-calibrated spectra were searched for emission lines with an automatic software algorithm. A total of 1655 emission features in the grism data satisfied our signal-to-noise ratio and equivalent width selection criteria; subsequent slit spectroscopy of the candidates confirmed the existence of 1052 lines (928 different objects). Six groups of emission lines were detected in the survey: Lyman alpha + N V, C IV, C III], Mg II, H beta + [O III], and H alpha + [S II]. More than two-thirds of the candidates are low-redshift (z less than 0.45) emission-line galaxies; ninety objects are high-redshift quasars (z greater than 2.7) detected via their Lyman alpha + N V emission lines. The survey contains three previously unknown quasars brighter than 17th magnitude; all three have redshifts of approximately 1.3. In this paper we present the observational properties of the survey, the algorithms used to select the emission-line candidates, and the catalog of emission-line objects.

  19. Clustering the Orion B giant molecular cloud based on its molecular emission.

    PubMed

    Bron, Emeric; Daudon, Chloé; Pety, Jérôme; Levrier, François; Gerin, Maryvonne; Gratier, Pierre; Orkisz, Jan H; Guzman, Viviana; Bardeau, Sébastien; Goicoechea, Javier R; Liszt, Harvey; Öberg, Karin; Peretto, Nicolas; Sievers, Albrecht; Tremblin, Pascal

    2018-02-01

    Previous attempts at segmenting molecular line maps of molecular clouds have focused on using position-position-velocity data cubes of a single molecular line to separate the spatial components of the cloud. In contrast, wide-field spectral imaging over a large spectral bandwidth in the (sub)mm domain now allows one to combine multiple molecular tracers to understand the different physical and chemical phases that constitute giant molecular clouds (GMCs). We aim at using multiple tracers (sensitive to different physical processes and conditions) to segment a molecular cloud into physically/chemically similar regions (rather than spatially connected components), thus disentangling the different physical/chemical phases present in the cloud. We use a machine learning clustering method, namely the Meanshift algorithm, to cluster pixels with similar molecular emission, ignoring spatial information. Clusters are defined around each maximum of the multidimensional probability density function (PDF) of the line integrated intensities. Simple radiative transfer models were used to interpret the astrophysical information uncovered by the clustering analysis. A clustering analysis based only on the J = 1-0 lines of three isotopologues of CO proves sufficient to reveal distinct density/column density regimes (nH ~ 100 cm⁻³, ~500 cm⁻³, and >1000 cm⁻³), closely related to the usual definitions of diffuse, translucent and high-column-density regions. Adding two UV-sensitive tracers, the J = 1-0 line of HCO⁺ and the N = 1-0 line of CN, allows us to distinguish two clearly distinct chemical regimes, characteristic of UV-illuminated and UV-shielded gas. The UV-illuminated regime shows overbright HCO⁺ and CN emission, which we relate to a photochemical enrichment effect. We also find a tail of high CN/HCO⁺ intensity ratio in UV-illuminated regions. Finer distinctions in density classes (nH ~ 7 × 10³ cm⁻³ and ~4 × 10⁴ cm⁻³) for the densest regions are also identified, likely related to the higher critical density of the CN and HCO⁺ (1-0) lines. These distinctions are only possible because the high-density regions are spatially resolved. Molecules are versatile tracers of GMCs because their line intensities bear the signature of the physics and chemistry at play in the gas. The association of simultaneous multi-line, wide-field mapping and powerful machine learning methods such as the Meanshift clustering algorithm reveals how to decode the complex information available in these molecular tracers.

  20. An algorithm of the wildfire classification by its acoustic emission spectrum using Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Khamukhin, A. A.; Demin, A. Y.; Sonkin, D. M.; Bertoldo, S.; Perona, G.; Kretova, V.

    2017-01-01

    Crown fires are extremely dangerous because they spread dozens of times faster than surface fires; it is therefore important to classify the fire type as early as possible. The method for forest fire classification compares the computed acoustic emission spectrum with a set of samples of typical fire acoustic emission spectra stored in a database. This method implies acquiring acoustic data using Wireless Sensor Networks (WSNs) and analyzing them in a central processing and control center. The paper deals with an algorithm that can be implemented directly on a sensor network node, which considerably reduces network traffic and increases efficiency. We suggest using the ratio of the sums of squared amplitudes of the low and high frequencies of the wildfire acoustic emission spectrum as the indicator of forest fire type. It is shown that the value of this indicator for crown fires is several times higher than for surface fires. This allows the fire type (crown or surface) to be classified in a short time interval and a fire-type indicator code to be transmitted alongside an alarm signal through the network.
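
    The indicator described above reduces to a band-energy ratio on the acoustic spectrum, which is cheap enough to compute on a sensor node. A sketch under assumed parameters (the split frequency and any decision threshold are placeholders, not values from the paper):

    ```python
    import numpy as np

    def fire_type_indicator(signal, fs, f_split=500.0):
        """Ratio of summed squared spectral amplitudes below vs. above an
        assumed split frequency f_split (placeholder value). Per the entry,
        the ratio is several times larger for crown fires than surface fires."""
        spec = np.abs(np.fft.rfft(signal))**2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return spec[freqs < f_split].sum() / spec[freqs >= f_split].sum()
    ```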

  1. GLOBAL INVENTORY OF VOLATILE COMPOUND EMISSIONS FROM ANTHROPOGENIC SOURCES

    EPA Science Inventory

    The report describes a global inventory anthropogenic volatile organic compound (VOC) emissions that includes a separate inventory for each of seven pollutant groups--paraffins, olefins, aromatics, formaldehyde, other aldehydes, other aromatics, and marginally reactive compounds....

  2. Measurement of precipitation induced FUV emission and Geocoronal Lyman Alpha from the IMI mission

    NASA Technical Reports Server (NTRS)

    Mende, Stephen B.; Fuselier, S. A.; Rairden, R. L.

    1995-01-01

    This final report describes the activities of the Lockheed Martin Palo Alto Research Laboratory in studying the measurement of ion and electron precipitation induced Far Ultra-Violet (FUV) emissions and geocoronal Lyman alpha for the NASA Inner Magnetospheric Imager (IMI) mission. This study examined promising techniques that may allow combining several FUV instruments that would separately measure proton aurora, electron aurora, and geocoronal Lyman alpha into a single instrument operated on a spinning spacecraft. The study consisted of two parts. First, the geocoronal Lyman alpha, proton aurora, and electron aurora emissions were modeled to determine instrument requirements. Second, several promising techniques were investigated to determine whether they were suitable for use in an IMI-type mission. Among the techniques investigated were the hydrogen gas cell for eliminating cold geocoronal Lyman alpha emissions and a coded aperture spectrometer with sufficient resolution to separate Doppler-shifted Lyman alpha components.

  3. Engine Tune-up Service. Unit 6: Emission Control Systems. Posttests. Automotive Mechanics Curriculum.

    ERIC Educational Resources Information Center

    Morse, David T.; May, Theodore R.

    This book of posttests is designed to accompany the Engine Tune-Up Service Student Guide for Unit 6, Emission Control Systems, available separately as CE 031 220. Focus of the posttests is inspecting, testing, and servicing emission control systems. One multiple choice posttest is provided that covers the seven performance objectives contained in…

  4. A Fast, Automatic Segmentation Algorithm for Locating and Delineating Touching Cell Boundaries in Imaged Histopathology

    PubMed Central

    Qi, Xin; Xing, Fuyong; Foran, David J.; Yang, Lin

    2013-01-01

    Background: Automated analysis of imaged histopathology specimens could potentially provide support for improved reliability in detection and classification in a range of investigative and clinical cancer applications. Automated segmentation of cells in digitized tissue microarrays (TMAs) is often the prerequisite for quantitative analysis; however, overlapping cells usually present significant challenges for traditional segmentation algorithms. Objectives: In this paper, we propose a novel, automatic algorithm to separate overlapping cells in stained histology specimens acquired using bright-field RGB imaging. Methods: It starts by systematically identifying salient regions of interest throughout the image based upon their underlying visual content. The segmentation algorithm subsequently performs a quick, voting-based seed detection. Finally, the contour of each cell is obtained using a repulsive level-set deformable model initialized with the seeds generated in the previous step. We compared the experimental results with the most current literature, and computed the pixel-wise accuracy between human experts' annotations and those generated using the automatic segmentation algorithm. Results: The method was tested with 100 image patches containing more than 1000 overlapping cells. The overall precision and recall of the developed algorithm are 90% and 78%, respectively. We also implemented the algorithm on a GPU; the parallel implementation is 22 times faster than its C/C++ sequential implementation. Conclusion: The proposed overlapping cell segmentation algorithm can accurately detect the center of each overlapping cell and effectively separate each of the overlapping cells. The GPU is shown to be an efficient parallel platform for overlapping cell segmentation. PMID:22526139

  5. Technical Report Series on Global Modeling and Data Assimilation. Volume 12; Comparison of Satellite Global Rainfall Algorithms

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.

    1997-01-01

    Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows that regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with separation at a PC value of about 0.85. Applying this threshold as a criterion for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
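
    The pattern correlation statistic used for the similarity test is just a centered spatial correlation; a minimal sketch with the 0.85 threshold from the text (area weighting and missing-data handling omitted):

    ```python
    import numpy as np

    def pattern_correlation(a, b):
        """Centered pattern correlation between two rainfall maps
        (missing data and area weighting omitted in this sketch)."""
        a = a.ravel() - a.mean()
        b = b.ravel() - b.mean()
        return (a @ b) / np.sqrt((a @ a) * (b @ b))

    # two algorithms' maps are judged similar when PC exceeds ~0.85
    def similar(map_i, map_j, threshold=0.85):
        return pattern_correlation(map_i, map_j) > threshold
    ```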

  6. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    NASA Astrophysics Data System (ADS)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

    Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separating hydrostratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. The applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
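
    The l1-regularized least-squares step can be sketched compactly. The paper solves the equivalent quadratic program with a primal-dual interior-point method; the proximal-gradient (ISTA) loop below is a simpler stand-in, with the sensitivity matrix, data vector, and regularization weight all illustrative assumptions.

    ```python
    # Hedged sketch: min ||J m - d||^2 + lam*||c||_1 with m = IDCT(c), i.e.
    # the model update is sparse in a discrete cosine basis. Constant factors
    # are absorbed into lam; ISTA stands in for the interior-point solver.
    import numpy as np
    from scipy.fft import dct, idct

    def ista_l1(J, d, lam=0.1, n_iter=500):
        c = np.zeros(J.shape[1])              # DCT coefficients of the model
        L = np.linalg.norm(J, 2) ** 2         # step-size control (Lipschitz)
        for _ in range(n_iter):
            m = idct(c, norm='ortho')         # back to physical resistivity
            grad = dct(J.T @ (J @ m - d), norm='ortho')
            c = c - grad / L
            c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # shrink
        return idct(c, norm='ortho')
    ```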

  7. GPM Pre-Launch Algorithm Development for Physically-Based Falling Snow Retrievals

    NASA Technical Reports Server (NTRS)

    Jackson, Gail Skofronick; Tokay, Ali; Kramer, Anne W.; Hudak, David

    2008-01-01

    In this work we compare and correlate the long time series (Nov.-March) measurements of precipitation rate from the Parsivels and 2DVD to the passive (89, 150, 183+/-1, +/-3, +/-7 GHz) observations of NOAA's AMSU-B radiometer. There are approximately 5-8 AMSU-B overpass views of the CARE site per day. We separate the comparisons into categories of no precipitation, liquid rain, and falling snow. Scatterplots between the Parsivel snowfall rates and AMSU-B brightness temperatures (TBs) did not show an exploitable relationship for retrievals. We further compared and contrasted brightness temperatures with other surface measurements such as temperature and relative humidity, with equally unsatisfying results. We found that there are similar TBs (especially at 89 and 150 GHz) for cases with falling snow and for non-precipitating cases. The comparisons indicate that surface emissivity contributions to the satellite-observed TB over land can add uncertainty in detecting and estimating falling snow. The newest results show that the cloud ice scattering signal in the AMSU-B data can be detected by computing clear-air TBs based on CARE radiosonde data and a rough estimate of surface emissivity. That is, the differences between computed TB and AMSU-B TB for precipitating and non-precipitating cases are distinct enough that the precipitating versus non-precipitating cases can be identified. These results require that the radiosonde releases are within an hour of the AMSU-B data and allow for three surface types: no snow on the ground, less than 5 cm of snow on the ground, and greater than 5 cm on the ground (as given by ground station data). Forest fraction and measured emissivities were combined to calculate the surface emissivities. The above work and future work to incorporate knowledge about falling snow retrievals into the framework of the expected GPM Bayesian retrievals will be described during this presentation.
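
    The detection idea in the closing sentences reduces to a simple test: flag falling snow when the observed TB is depressed relative to the computed clear-air TB by more than the scene-to-scene noise. The 5 K threshold and names below are assumptions for illustration, not values from the study.

    ```python
    # Hedged sketch of scattering-based snow detection at 150 GHz: cloud ice
    # scattering lowers the observed TB below the computed clear-air TB.
    def snow_flag(tb_obs_150, tb_clear_150, threshold_k=5.0):
        """True if the TB depression suggests falling snow."""
        return (tb_clear_150 - tb_obs_150) > threshold_k

    print(snow_flag(tb_obs_150=243.0, tb_clear_150=251.5))  # True
    ```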

  8. 40 CFR 60.692-3 - Standards: Oil-water separators.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Emissions From Petroleum Refinery Wastewater Systems § 60.692-3 Standards: Oil-water separators. (a) Each... wastewater shall, in addition to the requirements in paragraph (a) of this section, be equipped and operated... wastewater which was equipped and operated with a fixed roof covering the entire separator tank or a portion...

  9. 40 CFR 60.692-3 - Standards: Oil-water separators.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Emissions From Petroleum Refinery Wastewater Systems § 60.692-3 Standards: Oil-water separators. (a) Each... wastewater shall, in addition to the requirements in paragraph (a) of this section, be equipped and operated... wastewater which was equipped and operated with a fixed roof covering the entire separator tank or a portion...

  10. Physical Retrieval of Surface Emissivity Spectrum from Hyperspectral Infrared Radiances

    NASA Technical Reports Server (NTRS)

    Li, Jun; Weisz, Elisabeth; Zhou, Daniel K.

    2007-01-01

    Retrieval of temperature and moisture profiles and surface skin temperature from hyperspectral infrared (IR) radiances requires spectral information about the surface emissivity. Using constant or inaccurate surface emissivities typically results in large retrieval errors, particularly over semi-arid or arid areas where the variation in the emissivity spectrum is large both spectrally and spatially. In this study, a physically based algorithm has been developed to retrieve a hyperspectral IR emissivity spectrum simultaneously with the temperature and moisture profiles, as well as the surface skin temperature. To make the solution stable and efficient, the hyperspectral emissivity spectrum is represented in the retrieval process by eigenvectors derived from a laboratory-measured hyperspectral emissivity database. Experience with AIRS (Atmospheric InfraRed Sounder) radiances shows that a simultaneous retrieval of the emissivity spectrum and the sounding improves the surface skin temperature as well as the temperature and moisture profiles, particularly in the near-surface layer.
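
    A minimal sketch of the eigenvector representation follows, assuming a laboratory database array of emissivity spectra; the database here is synthetic, and the retained rank of six is an arbitrary illustrative choice.

    ```python
    # Hedged sketch: represent a hyperspectral emissivity spectrum by a few
    # eigenvectors of a (here synthetic) laboratory emissivity database, so
    # the retrieval solves for ~6 coefficients instead of ~1000 channels.
    import numpy as np

    lab = np.random.default_rng(1).uniform(0.85, 1.0, (200, 1000))  # stand-in
    mean = lab.mean(axis=0)
    _, _, vt = np.linalg.svd(lab - mean, full_matrices=False)
    basis = vt[:6]                            # leading eigenvectors

    def compress(emis):                       # spectrum -> few coefficients
        return basis @ (emis - mean)

    def expand(coeffs):                       # coefficients -> smooth spectrum
        return mean + basis.T @ coeffs
    ```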

  11. Building the Fire Energetics and Emissions Research (FEER) Smoke Emissions Inventory Version 1.0

    NASA Technical Reports Server (NTRS)

    Ellison, Luke; Ichoku, Charles; Zhang, Feng; Wang, Jun

    2014-01-01

    The Fire Energetics and Emissions Research (FEER) group's new global gridded coefficient-of-emission product at 1° x 1° resolution, FEERv1.0 Ce, which directly relates fire radiative energy (FRE) to smoke aerosol release, made its public debut in August 2013. Since then, steps have been taken to generate corresponding maps and totals of total particulate matter (PM) emissions using different sources of FRE, and subsequently to simulate the resulting PM(sub 2.5) in the WRF-Chem 3.5 model using emission rates from FEERv1.0 as well as other standard biomass burning emission inventories. A flowchart of the FEER algorithm used to calculate Ce is outlined here, along with a display of the resulting total PM emissions both globally and regionally. The modeling results from the WRF-Chem 3.5 simulations are also shown.
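
    The core FEER relationship is a per-grid-cell multiplication: the particulate matter emission rate equals the emission coefficient Ce times the observed fire radiative power. The numbers below are placeholders, not FEERv1.0 values.

    ```python
    # Hedged sketch of the Ce-based emission estimate for one grid cell.
    ce_kg_per_mj = 0.02     # illustrative emission coefficient (kg/MJ)
    frp_mw = 150.0          # fire radiative power in the cell (MW = MJ/s)
    tpm_rate = ce_kg_per_mj * frp_mw
    print(f"TPM emission rate: {tpm_rate:.1f} kg/s")   # -> 3.0 kg/s
    ```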

  12. Improved optical flow velocity analysis in SO2 camera images of volcanic plumes - implications for emission-rate retrievals investigated at Mt Etna, Italy and Guallatiri, Chile

    NASA Astrophysics Data System (ADS)

    Gliß, Jonas; Stebel, Kerstin; Kylling, Arve; Sudbø, Aasmund

    2018-02-01

    Accurate gas velocity measurements in emission plumes are highly desirable for various atmospheric remote sensing applications. The imaging technique of UV SO2 cameras is commonly used to monitor SO2 emissions from volcanoes and anthropogenic sources (e.g. power plants, ships). The camera systems capture the emission plumes at high spatial and temporal resolution, which allows the gas velocities in the plume to be retrieved directly from the images at the pixel level using optical flow (OF) algorithms. This is particularly advantageous under turbulent plume conditions. However, OF algorithms intrinsically rely on contrast in the images and often fail to detect motion in low-contrast image areas. We present a new method to identify ill-constrained OF motion vectors and replace them with a local average velocity vector derived from histograms of the retrieved OF motion fields. The new method is applied to two example data sets recorded at Mt Etna (Italy) and Guallatiri (Chile). We show that in many cases the uncorrected OF yields significantly underestimated SO2 emission rates. We further show that our proposed correction can account for this and that it significantly improves the reliability of optical-flow-based gas velocity retrievals. In the case of Mt Etna, the SO2 emissions of the north-eastern crater are investigated. The corrected SO2 emission rates range between 4.8 and 10.7 kg s-1 (average of 7.1 ± 1.3 kg s-1) and are in good agreement with previously reported values. For the Guallatiri data, the emissions of the central crater and a fumarolic field are investigated. The retrieved SO2 emission rates are between 0.5 and 2.9 kg s-1 (average of 1.3 ± 0.5 kg s-1) and provide the first report of SO2 emissions from this remotely located and inaccessible volcano.
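
    The correction can be sketched as follows: compute a dense optical-flow field, mark vectors in low-contrast areas as unreliable, and overwrite them with the average of the reliable vectors (standing in for the histogram-peak velocity described above). The Farneback parameters and reliability threshold are illustrative assumptions.

    ```python
    # Hedged sketch of replacing ill-constrained optical-flow vectors with a
    # local average velocity. Inputs are consecutive 8-bit grayscale frames.
    import cv2
    import numpy as np

    def corrected_flow(img_prev, img_next, mag_thresh=0.5):
        flow = cv2.calcOpticalFlowFarneback(img_prev, img_next, None,
                                            0.5, 4, 16, 3, 5, 1.1, 0)
        mag = np.linalg.norm(flow, axis=2)
        reliable = mag > mag_thresh            # well-constrained vectors only
        if reliable.any():
            vx = flow[..., 0][reliable].mean() # stand-in for histogram peak
            vy = flow[..., 1][reliable].mean()
            flow[~reliable] = (vx, vy)         # overwrite low-contrast areas
        return flow
    ```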

  13. Analysis of multispectral and hyperspectral longwave infrared (LWIR) data for geologic mapping

    NASA Astrophysics Data System (ADS)

    Kruse, Fred A.; McDowell, Meryl

    2015-05-01

    Multispectral MODIS/ASTER Airborne Simulator (MASTER) data and Hyperspectral Thermal Emission Spectrometer (HyTES) data covering the 8 - 12 μm spectral range (longwave infrared or LWIR) were analyzed for an area near Mountain Pass, California. Decorrelation-stretched images were initially used to highlight spectral differences between geologic materials. Both datasets were atmospherically corrected using the ISAC method, and the Normalized Emissivity approach was used to separate temperature and emissivity. The MASTER data had 10 LWIR spectral bands and approximately 35-meter spatial resolution and covered a larger area than the HyTES data, which were collected with 256 narrow (approximately 17 nm wide) spectral bands at approximately 2.3-meter spatial resolution. Spectra for key spatially coherent, spectrally determined geologic units in the overlap areas were overlain and visually compared to determine similarities and differences. Endmember spectra were extracted from both datasets using n-dimensional scatterplotting and compared to emissivity spectral libraries for identification. Endmember distributions and abundances were then mapped using Mixture-Tuned Matched Filtering (MTMF), a partial unmixing approach. Multispectral results demonstrate separation of silica-rich vs. non-silicate materials, with distinct mapping of carbonate areas and general correspondence to the regional geology. Hyperspectral results illustrate refined mapping of silicates, with distinction between similar units based on the position, character, and shape of high-resolution emission minima near 9 μm. Calcite and dolomite were separated, identified, and mapped using HyTES based on a shift of the main carbonate emissivity minimum from approximately 11.3 to 11.2 μm, respectively. Both datasets demonstrate the utility of LWIR spectral remote sensing for geologic mapping.
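
    Since the Normalized Emissivity approach is central to the temperature-emissivity separation used here, a minimal sketch may help: assume a fixed maximum emissivity, invert Planck's law in each band, take the hottest band temperature as the surface temperature, then solve for emissivity in every band. The radiances are assumed already atmospherically corrected, and emax = 0.96 is a typical but assumed value.

    ```python
    # Hedged sketch of Normalized Emissivity temperature-emissivity separation.
    # wl_m: band-center wavelengths (m); radiance: W m^-2 sr^-1 m^-1.
    import numpy as np

    C1 = 1.191042e-16   # 2*h*c^2 (W m^2 sr^-1)
    C2 = 1.438777e-2    # h*c/k (m K)

    def planck(wl_m, t_k):
        return C1 / (wl_m**5 * (np.exp(C2 / (wl_m * t_k)) - 1.0))

    def normalized_emissivity(wl_m, radiance, emax=0.96):
        # Temperature each band would need if its emissivity were emax.
        t_band = C2 / (wl_m * np.log(1.0 + emax * C1 / (wl_m**5 * radiance)))
        t_surf = t_band.max()                    # hottest band wins
        return t_surf, radiance / planck(wl_m, t_surf)
    ```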

  14. Characteristics of soft x-ray and extreme ultraviolet (XUV) emission from laser-produced highly charged rhodium ions

    NASA Astrophysics Data System (ADS)

    Barte, Ellie Floyd; Hara, Hiroyuki; Tamura, Toshiki; Gisuji, Takuya; Chen, When-Bo; Lokasani, Ragava; Hatano, Tadashi; Ejima, Takeo; Jiang, Weihua; Suzuki, Chihiro; Li, Bowen; Dunne, Padraig; O'Sullivan, Gerry; Sasaki, Akira; Higashiguchi, Takeshi; Limpouch, Jiří

    2018-05-01

    We have characterized the soft x-ray and extreme ultraviolet (XUV) emission of rhodium (Rh) plasmas produced using dual pulse irradiation by 150-ps or 6-ns pre-pulses, followed by a 150-ps main pulse. We have studied the emission enhancement dependence on the inter-pulse time separation and found it to be very significant for time separations less than 10 ns between the two laser pulses when using 6-ns pre-pulses. The behavior using a 150-ps pre-pulse was consistent with such plasmas displaying only weak self-absorption effects in the expanding plasma. The results demonstrate the advantage of using dual pulse irradiation to produce the brighter plasmas required for XUV applications.

  15. Integrated field emission array for ion desorption

    DOEpatents

    Resnick, Paul J; Hertz, Kristin L.; Holland, Christopher; Chichester, David

    2016-08-23

    An integrated field emission array for ion desorption includes an electrically conductive substrate; a dielectric layer lying over the electrically conductive substrate comprising a plurality of laterally separated cavities extending through the dielectric layer; a like plurality of conically-shaped emitter tips on posts, each emitter tip/post disposed concentrically within a laterally separated cavity and electrically contacting the substrate; and a gate electrode structure lying over the dielectric layer, including a like plurality of circular gate apertures, each gate aperture disposed concentrically above an emitter tip/post to provide a like plurality of annular gate electrodes and wherein the lower edge of each annular gate electrode proximate the like emitter tip/post is rounded. Also disclosed herein are methods for fabricating an integrated field emission array.

  16. Integrated field emission array for ion desorption

    DOEpatents

    Resnick, Paul J; Hertz, Kristin L; Holland, Christopher; Chichester, David; Schwoebel, Paul

    2013-09-17

    An integrated field emission array for ion desorption includes an electrically conductive substrate; a dielectric layer lying over the electrically conductive substrate comprising a plurality of laterally separated cavities extending through the dielectric layer; a like plurality of conically-shaped emitter tips on posts, each emitter tip/post disposed concentrically within a laterally separated cavity and electrically contacting the substrate; and a gate electrode structure lying over the dielectric layer, including a like plurality of circular gate apertures, each gate aperture disposed concentrically above an emitter tip/post to provide a like plurality of annular gate electrodes and wherein the lower edge of each annular gate electrode proximate the like emitter tip/post is rounded. Also disclosed herein are methods for fabricating an integrated field emission array.

  17. Time-resolved tomography using acoustic emissions in the laboratory, and application to sandstone compaction

    NASA Astrophysics Data System (ADS)

    Brantut, Nicolas

    2018-06-01

    Acoustic emission (AE) and active ultrasonic wave velocity monitoring are often performed during laboratory rock deformation experiments, but are typically processed separately to yield homogenized wave velocity measurements and approximate source locations. Here, I present a numerical method and its implementation in free software to perform a joint inversion of AE locations together with the 3-D, anisotropic P-wave structure of laboratory samples. The data used are the P-wave first arrivals obtained from AEs and active ultrasonic measurements. The model parameters are the source locations and the P-wave velocity and anisotropy parameter (assuming transverse isotropy) at discrete points in the material. The forward problem is solved using the fast marching method, and the inverse problem is solved by the quasi-Newton method. The algorithms are implemented within an integrated free software package called FaATSO (Fast Marching Acoustic Emission Tomography using Standard Optimisation). The code is employed to study the formation of compaction bands in a porous sandstone. During deformation, a front of AEs progresses from one end of the sample, associated with the formation of a sequence of horizontal compaction bands. Behind the active front, only sparse AEs are observed, but the tomography reveals that the P-wave velocity has dropped by up to 15 per cent, with an increase in anisotropy of up to 20 per cent. Compaction bands in sandstones are therefore shown to produce sharp changes in seismic properties. This result highlights the potential of the methodology to image temporal variations of elastic properties in complex geomaterials, including the dramatic, localized changes associated with microcracking and damage generation.
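
    The forward problem in this workflow (first-arrival times through a heterogeneous velocity field) can be sketched with an off-the-shelf fast marching solver. The grid size, spacing, velocities, and source position below are illustrative assumptions, not values from the paper, and the sketch is isotropic whereas FaATSO handles anisotropy.

    ```python
    # Hedged sketch: P-wave first-arrival times from an AE source through a
    # sample with a slow (compacted) half, via the fast marching method.
    import numpy as np
    import skfmm   # scikit-fmm

    nx, dx = 64, 1e-3                       # 64^3 grid, 1 mm spacing
    speed = np.full((nx, nx, nx), 3000.0)   # background Vp (m/s)
    speed[:, :, 32:] = 2550.0               # ~15% slower compacted region

    phi = np.ones((nx, nx, nx))             # zero contour marks the source
    phi[8, 32, 32] = -1.0                   # AE hypocenter (grid indices)

    tt = skfmm.travel_time(phi, speed, dx=dx)   # seconds, whole volume
    print(tt[56, 32, 32])                   # predicted arrival at a sensor
    ```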

  18. Noninvasive in vivo plasma volume and hematocrit in humans: observing long-term baseline behavior to establish homeostasis for intravascular volume and composition

    NASA Astrophysics Data System (ADS)

    Dent, Paul; Deng, Bin; Goodisman, Jerry; Peterson, Charles M.; Narsipur, Sriram; Chaiken, J.

    2016-04-01

    A new device incorporating a new algorithm and measurement process allows simultaneous noninvasive in vivo monitoring of intravascular plasma volume and red blood cell volume. The purely optical technique involves probing fingertip skin with near-infrared laser light and collecting both the wavelength-shifted light, that is, the inelastic emission (IE), which includes the unresolved Raman and fluorescence, and the un-shifted light, that is, the elastic emission (EE), which includes both the Rayleigh and Mie scattered light. Our excitation and detection geometry is designed so that from these two simultaneous measurements we can calculate two parameters within the single-scattering regime using radiation transfer theory: the intravascular plasma volume fraction and the red blood cell volume fraction. With the device previously calibrated against a gold-standard FDA-approved device, 2-hour monitoring sessions on three separate occasions over a three-week span for a specific, motionless, and mostly sleeping individual produced 3 records containing a total of 5706 paired measurements of hematocrit and plasma volume. The average plasma volume over the three runs, relative to the initial plasma volume taken as 100%, was 97.56 +/- 0.55 (1σ), i.e., stable to +/- 0.56%. For the same three runs, the average relative hematocrit (Hct), referenced to an assumed initial value of 28.35, was 29.37 +/- 0.12, or stable to +/- 0.4%. We observe local deterministic circulation effects apparently associated with the pressure applied by the finger probe, as well as longer-timescale behavior due to the normal ebb and flow of internal fluids caused by posture changes and tilt-table-induced gravity gradients.

  19. Integrated Active Fire Retrievals and Biomass Burning Emissions Using Complementary Near-Coincident Ground, Airborne and Spaceborne Sensor Data

    NASA Technical Reports Server (NTRS)

    Schroeder, Wilfrid; Ellicott, Evan; Ichoku, Charles; Ellison, Luke; Dickinson, Matthew B.; Ottmar, Roger D.; Clements, Craig; Hall, Dianne; Ambrosia, Vincent; Kremens, Robert

    2013-01-01

    Ground, airborne, and spaceborne data were collected for a 450 ha prescribed fire implemented on 18 October 2011 at the Henry W. Coe State Park in California. The integration of the various data elements allowed near-coincident active fire retrievals to be estimated. The Autonomous Modular Sensor-Wildfire (AMS) airborne multispectral imaging system was used as a bridge between ground and spaceborne data sets, providing high-quality reference information to support satellite fire retrieval error analyses and fire emissions estimates. We found excellent agreement between peak fire radiant heat flux data (less than 1% error) derived from near-coincident ground radiometers and AMS. Both the MODIS and GOES imager active fire products were negatively influenced by the presence of thick smoke, which was misclassified as cloud by their algorithms, leading to the omission of fire pixels beneath the smoke and resulting in the underestimation of their retrieved fire radiative power (FRP) values for the burn plot compared to the reference airborne data. Agreement between airborne and spaceborne FRP data improved significantly after correction for omission errors and atmospheric attenuation, resulting in as little as 5% difference between Aqua MODIS and AMS. Use of in situ fuel and fire energy estimates in combination with a collection of AMS, MODIS, and GOES FRP retrievals provided a fuel consumption factor of 0.261 kg per MJ, a total energy release of 14.5 x 10(exp 6) MJ, and a total fuel consumption of 3.8 x 10(exp 6) kg. Fire emissions were calculated using two separate techniques, with results differing by as little as 15% for various species.
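
    The quoted totals are mutually consistent, as a one-line check shows (values taken from the abstract above):

    ```python
    # Total fuel = consumption factor x total fire energy release.
    factor_kg_per_mj = 0.261
    total_energy_mj = 14.5e6
    print(f"{factor_kg_per_mj * total_energy_mj:.2e} kg")  # 3.78e+06 ~ 3.8e6 kg
    ```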

  20. Errors in Sounding of the Atmosphere Using Broadband Emission Radiometry (SABER) Kinetic Temperature Caused by Non-Local Thermodynamic Equilibrium Model Parameters

    NASA Technical Reports Server (NTRS)

    Garcia-Comas, Maya; Lopez-Puertas, M.; Funke, B.; Bermejo-Pantaleon, D.; Marshall, Benjamin T.; Mertens, Christopher J.; Remsberg, Ellis E.; Mlynczak, Martin G.; Gordley, L. L.; Russell, James M.

    2008-01-01

    The vast set of near-global and continuous atmospheric measurements made by the SABER instrument since 2002, including daytime and nighttime kinetic temperature (T(sub k)) from 20 to 105 km, is available to the scientific community. The temperature is retrieved from SABER measurements of the atmospheric 15 micron CO2 limb emission. This emission departs from local thermodynamic equilibrium (LTE) conditions in the rarefied mesosphere and thermosphere, making it necessary to consider the non-LTE populations of the CO2 vibrational states in the retrieval algorithm above 70 km. Those populations depend on kinetic parameters describing the rates at which energy exchange between atmospheric molecules takes place, but some of these collisional rates are not well known. We consider current uncertainties in the rates of quenching of CO2(v2) by N2, O2, and O, and in the CO2(v2) vibrational-vibrational exchange, to estimate their impact on SABER T(sub k) for different atmospheric conditions. T(sub k) is more sensitive to the uncertainty in the latter two, and their effects depend on altitude. The combined systematic error in T(sub k) due to non-LTE kinetic parameters does not exceed +/- 1.5 K below 95 km and +/- 4-5 K at 100 km for most latitudes and seasons (except for polar summer), provided the T(sub k) profile does not have pronounced vertical structure. The error is +/- 3 K at 80 km, +/- 6 K at 84 km, and +/- 18 K at 100 km under the less favourable polar summer conditions. For strong temperature inversion layers, the errors reach +/- 3 K at 82 km and +/- 8 K at 90 km. This particularly affects tide amplitude estimates, with errors of up to +/- 3 K.
